diff --git "a/stack_exchange/AI/merged_data.csv" "b/stack_exchange/AI/merged_data.csv"
new file mode 100644
--- /dev/null
+++ "b/stack_exchange/AI/merged_data.csv"
@@ -0,0 +1,103561 @@
+Tags,Input
+"['neural-networks', 'deep-learning', 'data-science']"," Title: is it possible to train several Neural Networks on different types of data and combine them?Body: I want to create a NHL game predictor and have already trained one neural network on game data.
+
+What I would like to do is train another model on player seasonal/game data and combine the two models to achieve better accuracy.
+
+Is this approach feasible? If it is, how do I go about doing it?
+
+EDIT:
+
+I have currently trained a neural network to classify the probability of the home team winning a game on a dataset that looks like this:
+
+
the original paper into plain English (see below):
+
+I understand that new transitions (not visited before) are given maximal priority. On line 6 this would be done for every transition in an initial pass since the history is initialized as empty on line 2.
+I’m having trouble with the notation $p_t = \text{max}_{i<t} p_i$. Can someone please state this in plain English? If $t$ = 4 for example, then $p_t$ = 4? How is this equal to max$_{i<t} p_i$.
+It seems in my contrived example here, max$_{i<t} p_i$ would be 3. I must be misreading this notation.
+"
+"['reinforcement-learning', 'temporal-difference-methods', 'notation']"," Title: Understanding the equation of TD(0) in the paper ""Learning to predict by the methods of temporal differences""Body: In the paper Learning to predict by the methods of temporal differences (p. 15), the weights in the temporal difference learning are updated as given by the equation
+$$
+\Delta w_t
+= \alpha \left(P_{t+1} - P_t\right) \sum_{k=1}^{t}{\lambda^{t-k} \nabla_w P_k}
+\tag{4}
+\,.$$
+When $\lambda = 0$, as in TD(0), how does the method learn? As it appears, with $\lambda = 0$, there will never be a change in weight and hence no learning.
+
+Am I missing anything?
+"
+"['deep-learning', 'reinforcement-learning', 'deep-rl']"," Title: Reward does not increase for a maze escaping problem with DQNBody: I am using deep reinforcement learning to solve a classic maze escaping task, similar to the implementation provided here, except the following three key differences:
+
+
+- instead of using a numpy array as the input of a standard maze escaping task, I am feeding the model an image at each step; the image is a 1300 * 900 RGB image, so it is not too small.
+- reward:
+
+
+- each valid move has a small negative reward (to penalize long paths)
+- each invalid move has a big negative reward (running into other objects or boundaries)
+- each blocked move has the minimal reward (not common)
+- finding the remote detector's defect has a positive reward (5); a rough sketch of this reward scheme is given just below this list
+
+- I tweaked the parameters of replay memory, reduced the size of the replay memory buffer.
+
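+To make the reward scheme above concrete, here is a rough sketch of what I mean (the numeric values are placeholders, not my exact settings):
+
+    def reward(move_result):
+        # move_result is one of: 'valid', 'invalid', 'blocked', 'found_defect'
+        if move_result == 'found_defect':
+            return 5.0     # positive reward for finding the defect
+        if move_result == 'blocked':
+            return -2.0    # minimal (worst) reward, not common
+        if move_result == 'invalid':
+            return -1.0    # big penalty: ran into another object or a boundary
+        return -0.01       # small penalty per valid move, to penalize long paths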
+
+Regarding the implementation, I basically do not change the agent setup except for the above items, and I implemented my own env to wrap my customized maze.
+
+But the problem is that the accumulated reward (over the first 200 rounds of successful escaping) does not increase:
+
+
+
+And the number of steps it takes to escape one maze is also somewhat stable:
+
+
+
+Here is my question: which aspects should I start looking at to optimize my problem? Or is it still too early and I simply need to train for longer?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'convolutional-neural-networks']"," Title: Super Resolution on text documentsBody: I want to implement super-resolution and deblurring on images from text documents. Which is the best approach? Are there any Git-hub links which will help me to start? I am new to the field. Any help would be appreciated. Thanks in advance.
+"
+['reinforcement-learning']," Title: How to represent action space in reinforcement learning?Body: I started to learn reinforcement learning a few days ago. And I want to use that to solve resource allocation problem something like given a constant number, find the best way to divide it into several real numbers each is non-negative.
+
+For example, to divide the number 1 into 3 real numbers, the allocation can be:
+
+[0.2, 0.7, 0.1]
+
+[0.95, 0.05, 0]
+...
+
+I do not know how to represent the action space, because each allocation is 3-dimensional, each dimension is real-valued, and the dimensions are correlated with each other.
+
+In an actor-critic architecture, is it possible to have 3 outputs activated by softmax in the actor's network, where each output represents one dimension of the allocation (a minimal sketch of this idea is given below)?
+
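+To illustrate the softmax idea, here is a minimal sketch of what I have in mind (the actor network itself is omitted and the three raw outputs are placeholders):
+
+    import numpy as np
+
+    def allocation_from_logits(logits):
+        # softmax turns 3 unconstrained actor outputs into a non-negative
+        # allocation that sums to 1
+        exp = np.exp(logits - np.max(logits))
+        return exp / exp.sum()
+
+    print(allocation_from_logits(np.array([0.5, 2.0, -1.0])))  # roughly [0.18, 0.79, 0.04]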
+
+
+Appended:
+
+There is a playlist of videos. A user can switch to the next video at any time. More buffering leads to a better viewing experience, but more bandwidth is wasted if the user switches to the next video. I want to optimize the smoothness of playback with minimal bandwidth loss. At each time step, the agent decides the bandwidth allocation for downloading the current video and the next 2 videos. So I guess the state will be the bandwidth, the user's behavior and the player situation.
+"
+"['natural-language-processing', 'brain']"," Title: Does the human brain use beam search for text generation?Body: As far as I understand, beam search is the most widely used algorithm for text generation in NLP. So I was wondering: does the human brain also use beam search for text generation? If not, then what?
+"
+"['reinforcement-learning', 'python', 'dqn', 'online-learning']"," Title: Online normalization of database for DQNBody: I have an issue with the normalization of the database (a large time series) for my DQN. I obtained optimal results and saved the NN (5 LSTM layers) weights training on a database normalized as such: I divided it into consecutive batches of 96 steps (the window size that my NN gets as input) and I normalized each batch respectively with Z-score. However, I am unable to extend these results to an online setting, as online I only have access to the last 96 elements, and thus I can only normalize according to the last 96. This small difference actually causes a sharp decrease in the performance of my DQN, as the weights of the NN were perfectly tuned for the first normalization but are not great with the online normalized database. In a nutshell, the problem is that only every 96 steps the first normalized database and the online one are the same, for all steps in between this is not happening. I have the weights for the first one, but I cannot find a way to exploit them for the online one.
+
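+To make the difference between the two normalizations concrete, this is roughly what I mean (a sketch, not my actual pipeline):
+
+    import numpy as np
+
+    def offline_normalize(series, window=96):
+        # split the series into consecutive, non-overlapping windows of 96 steps
+        # and z-score each window with its own mean/std
+        out = []
+        for start in range(0, len(series) - window + 1, window):
+            batch = series[start:start + window]
+            out.append((batch - batch.mean()) / (batch.std() + 1e-8))
+        return np.concatenate(out)
+
+    def online_normalize(last_96):
+        # online I only ever see the most recent 96 steps, so the window
+        # slides by one step at a time instead of jumping by 96 steps
+        return (last_96 - last_96.mean()) / (last_96.std() + 1e-8)
+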
+What I have tried so far with the online database:
+
+
+- If I normalize over the last 96 steps and act at every new step (as it should be), the performance is quite bad.
+- If I normalize over the last 96 steps and act just every 96 steps (repeating the same action in between), the agent actually picks the optimal action every 96 steps (like in the offline setting), so the results are somewhat decent but far from optimal during the long period between actions. If I try shorter periods, like 48, performance decreases sharply as the agent only acts optimally every second action.
+
+
+I don't know if there is a way to tune the optimal weights for the online database, acting directly on them without going through training again. It would be nice to understand why the NN picks its actions at each step in the optimal setting, so that I would be able to follow its strategy, but I'm not sure it's possible to actually deduce this from an analysis of the weights and features, especially for a multi-layer LSTM network.
+Otherwise, I was thinking about something like normalizing the online database directly through similarities with the old batches of 96 (using their mean and std), or something like that. Anything that would help reduce the time between optimal actions to around 50-60 steps instead of 96 would be enough to provide a nearly optimal strategy, so at this point I would consider any kind of (inelegant) method to get what I want.
+
+I don't know if any of these is feasible, but retraining the agent is very difficult as every single time but once the agent got stuck in suboptimal strategies, this is why I am trying to get around this problem using the optimal weights I have instead of retraining.
+"
+"['deep-learning', 'convolutional-neural-networks', 'weights', 'convolution-arithmetic']"," Title: Neural Nets: CNN confirming layer/filter arithmeticBody: I was hoping someone could just confirm some intuition about how convolutions
+work in convolutional neural networks. I have seen all of the tutorials on
+applying convolutional filters on an image, but most of those tutorials
+focus on one channel images, like a 128 x 128 x 1 image. I wanted to clarify
+what happens when we apply a convolutional filter to RGB 3 channel images.
+Now this is not a unique question, I think a lot of people ask this question as well. It is just that there seem to be so many answers out there, each with their own variations, that it is hard to find a consistent answer. I included a post below that seems to comport with what my own intuition, but I was hoping one of the experts on SE could help validate the layer arithmetic, to make sure my intuition was not off.
+How is the depth of the input related to the depth of the output of a convolutional layer?
+Consider an Alexnet network with 5 convolutional layers and 3 fully connected
+layers. I borrowed the network from this post. Now, say the input is 227 x 227, and the filter is specified
+as 11 x 11 x 96 with stride 4. That means there are 96 filters each with dimensions 11x11x3, right?
+So there are a total of 363 parameters per filter--excluding the bias term--
+and there are 96 of these filters to learn. So the 363*96 = 34848 filter values are learned
+just like the weights in the fully connected layers right?
+My second question deals with the next convolutional network layer. In the next
+layer I will have an image that is 55 x 55 x 96. In this case, would the
+filter be 5x5x96--since there are now 96 feature maps on the image? So that means
+that each individual filter would need to learn 5x5x96 = 2400 filter values (weights),
+and that across all 256 filters this would mean 614,400 filter values?
+I just wanted to make sure that I was understanding exactly what is being learned
+at each level.
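+
+As a sanity check of the arithmetic above, here is a tiny sketch of how I am counting the weights (bias terms excluded):
+
+    def conv_params(filter_h, filter_w, in_depth, num_filters):
+        # each filter spans the full input depth, so it has h * w * in_depth weights
+        return filter_h * filter_w * in_depth * num_filters
+
+    print(conv_params(11, 11, 3, 96))    # 34848 for the first conv layer
+    print(conv_params(5, 5, 96, 256))    # 614400 for the second conv layer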
+"
+"['machine-learning', 'natural-language-processing', 'machine-translation']"," Title: How do I identify a monologue or dialogue in a conversation?Body: How do I identify monologues and dialogues in a conversation (or transcript) using natural language processing? How do I distinguish between the two?
+"
+"['natural-language-processing', 'python', 'datasets']"," Title: Generate QA dataset from large text corpusBody: I have a corpus of a domain data in form of 10-15 books pdf and some articles and my end-goal is to make a question-answering system particular to that domain.
+For that, I would need a dataset on Q/A which I can use on top of something like SQuAD(Stanford Question Answering Dataset) for domain-specific knowledge
+
+My stuck point is how to convert this corpus into a usable question-answering dataset.
+
+My current strategy is something AllenAI has been working with. A list of their research papers on it can be found here
+
+As I understand they use a combination of Knowledge Extraction, Natural Language Understanding, and Inference to get the job done. But I cannot find any good practical implementation.
+
+Where can I find a good resource?
+"
+"['machine-learning', 'natural-language-processing', 'word-embedding', 'bert']"," Title: Will BERT embedding be always same for a given document when used as a feature extractorBody: When we use BERT embeddings for a classification task, would we get different embeddings every time we pass the same text through the BERT architecture? If yes, is it the right way to use the embeddings as features? Ideally, while using any feature extraction technique, features values should be consistent. How do I handle this if we want BERT to be used as a feature extractor?
+"
+"['recurrent-neural-networks', 'backpropagation']"," Title: Do you need to store prevous values of weights and layers on recurrent layer while BPTT?Body: The Back propagation through time on recurrent layer is defined similar to normal one, means somethin like
+
+self.deltas[x] = self.deltas[x+1].dot(self.weights[x].T) * self.layers[x] * (1- self.layers[x])
where
+
+self.deltas[x+1]
is error from prevous layer, self.weights[x]
is weights map and self.layers[x](1- self.layers[x])
is bakwards activation of sigmoid function where self.layers[x]
is vector of sigmoid. But while normal backpropagation the values are there, while BPTT i can not take the current self.layers[x]
: i need the previous ones, right ?
+
+So unlike normal BP, do i need extra store old weights and layers, for example in circular queue, and then apply the formula where self.deltas[x+1]
is layer from next time ?
+
+Not realy implementation, just basic understanding in order to can implement it.
+
+Lets see the picture:
+
+
+
+Here we have: self.layers[0] = $x_{t+1}$, self.layers[1] = $h_{t+1}$, self.layers[2] = $o_{t+1}$. In order to perform backprop $h_{t+1} \rightarrow h_{t} \rightarrow h_{t-1} \dots$, I DO need to have the layers $h_t$, $h_{t-1}, \dots$ and the weights $v_{t+1}$, $v_t, \dots$ stored EXTRA, in addition to the network $x_{t+1} \rightarrow h_{t+1} \rightarrow o_{t+1}$, right?
+That's the whole question.
+
+And I do not need to store the previous outputs $o_t$, $o_{t-1}$, etc., because the backprop from them ($o_t \rightarrow h_t$, etc.) was already calculated.
+"
+"['natural-language-processing', 'natural-language-understanding', 'machine-translation', 'google-translate']"," Title: What is the actual quality of machine translations?Body: As an AI layman, till today I am confused by the promised and achieved improvements of automated translation.
+My impression is: there is still a very, very long way to go. Or are there other explanations for why the automated translations (offered and provided e.g. by Google) of quite simple Wikipedia articles still read and sound mostly silly, are hardly readable, and are only very partially helpful and useful?
+It may depend on personal preferences (concerning readability, helpfulness, and usefulness), but my personal expectations are sorely disappointed.
+The other way around: Are Google's translations nevertheless readable, helpful, and useful for a majority of users?
+Or does Google have reasons to retain its achievements (and not to show to the users the best they can show)?
+"
+"['neural-networks', 'convolutional-neural-networks']"," Title: Convolutional neural network debuggingBody: Im trying to implement CNN for small images classification (36x36x1) (grayscale). I've checked every forward/backward pass function on small example, and still my cnn is not doin any progress on training. Tests were done on learning rate [0.001 - 0.01].
+Network structure:
+Conv -> Relu -> Conv -> Relu -> MaxPooling -> Conv -> Relu -> Conv -> Relu -> MaxPooling -> Flattening -> sigmoid -> Fully connected -> sigmoid ->Fully connected -> softmax
+
+Is there a mistake in forwardPass/ backwardPass function?
+
+ def forwardPass(self, X):
+ """"""
+ :param X: batch of images (Input value of entire convolutional neural network)
+ image.shape = (m,i,i,c) - c is number of channels
+ for current task, first input c = 1 (grayscale)
+ example: for RGB c = 3
+ m - batch size
+ X.shape M x I x I x C
+
+ :return : touple(Z, inValues)
+ Z - estimated probability of every class
+ Z.shape M x K x 1
+
+
+ """"""
+
+ W = self.weights
+
+ inValues = {
+ 'conv': [],
+ 'fullyconnect': [],
+ 'mask' : [],
+ 'pooling' : [],
+ 'flatten' : [],
+ 'sigmoid' : [],
+ 'relu' : []
+ }
+
+ """"""
+ Current structure:
+ Conv -> Relu -> Conv -> Relu -> MaxPooling -> Conv -> Relu -> Conv -> Relu -> MaxPooling -> Flattening ->
+ -> sigmoid -> Fully connected -> sigmoid ->Fully connected -> softmax
+ """"""
+
+
+
+ inValues['conv'].append(X)
+ Z = self.convolution_layer(X, W['conv'][0]);z = Z
+
+ inValues['relu'].append(z)
+ Z = self.relu(z);z =Z
+
+ inValues['conv'].append(z)
+ Z = self.convolution_layer(z, W['conv'][1]);z = Z
+
+
+ inValues['relu'].append(z)
+ Z = self.relu(z);z =Z
+
+
+ inValues['pooling'].append(z)
+ Z, mask = self.max_pooling(z);z = Z
+ inValues['mask'].append(mask)
+
+
+
+ inValues['conv'].append(z)
+ Z = self.convolution_layer(z, W['conv'][2]);z = Z
+
+ inValues['relu'].append(z)
+ Z = self.relu(z);z = Z
+
+ inValues['conv'].append(z)
+ Z = self.convolution_layer(z, W['conv'][3]);z = Z
+
+ inValues['relu'].append(z)
+ Z = self.relu(z);z = Z
+
+ inValues['pooling'].append(z)
+ Z, mask = self.max_pooling(z);z = Z
+ inValues['mask'].append(mask)
+
+
+ inValues['flatten'].append(z)
+ Z = self.flattening(z);z = Z
+
+
+ inValues['sigmoid'].append(z)
+ Z = self.sigmoid(z); z = Z
+
+
+ inValues['fullyconnect'].append(z)
+ Z = self.fullyConnected_layer(z, W['fullyconnect'][0]); z = Z
+
+
+ #dropout here later
+
+ inValues['sigmoid'].append(z)
+ Z = self.sigmoid(z); z = Z
+
+
+ inValues['fullyconnect'].append(z)
+ Z = self.fullyConnected_layer(z, W['fullyconnect'][1]);z = Z
+
+
+ Z = self.softmax(z)
+
+
+ return Z, inValues
+
+
+Backpropagation:
+
+ def backwardPass(self, y, Y, inValues):
+
+ """"""
+
+ :param Y: estimated probability of all K classes
+ ( Y.shape = M x K x 1 )
+ :param y: True labels for current
+ M x K x 1
+ :param inValues: Dictionary with input values of conv/ff layers
+ example: inValues['conv'][1] - Values encountered during feedForward on input of Conv layer with index 1
+ :return: Gradient of weights in respect to L
+ """"""
+
+ np.set_printoptions(suppress=True)
+ W = self.weights
+
+ G = {
+ 'conv' : [],
+ 'fullyconnect' : []
+ }
+
+
+ Z = self.softmax_backward(Y, y); z = Z
+
+
+
+ Z, dW, dB = self.fullyConnected_layer_backward(z, W['fullyconnect'][1],inValues['fullyconnect'][1]);z = Z
+ weight = {
+ 'W': dW,
+ 'B': dB
+ }
+ G['fullyconnect'].append(weight)
+
+
+
+ Z = self.sigmoid_deriv(z, inValues['sigmoid'][1]); z = Z
+
+ Z, dW, dB = self.fullyConnected_layer_backward(z, W['fullyconnect'][0],inValues['fullyconnect'][0]);z = Z;
+ weight = {
+ 'W': dW,
+ 'B': dB
+ }
+ G['fullyconnect'].append(weight)
+
+
+ Z = self.sigmoid_deriv(z, inValues['sigmoid'][0]);z=Z
+
+
+ Z = self.flattening_backward(z, inValues['flatten'][0]); z = Z
+
+
+ Z = self.max_pooling_backward(z,inValues['mask'][1]); z = Z
+
+
+ Z = z * self.relu(inValues['relu'][3], deriv=True); z = Z
+
+ Z, dW, dB = self.convolution_layer_backward(z, W['conv'][3],inValues['conv'][3]); z = Z
+ weight = {
+ 'W': dW,
+ 'B': dB
+ }
+ G['conv'].append(weight)
+
+ Z = z * self.relu(inValues['relu'][2], deriv=True);z = Z
+
+
+ Z, dW, dB = self.convolution_layer_backward(z, W['conv'][2],inValues['conv'][2]); z = Z
+ weight = {
+ 'W': dW,
+ 'B': dB
+ }
+ G['conv'].append(weight)
+
+
+ Z = self.max_pooling_backward(z,inValues['mask'][0]);z = Z
+
+
+ Z = z * self.relu(inValues['relu'][1], deriv=True);z = Z
+
+ Z, dW, dB = self.convolution_layer_backward(z, W['conv'][1],inValues['conv'][1]); z = Z
+ weight = {
+ 'W': dW,
+ 'B': dB
+ }
+ G['conv'].append(weight)
+
+ Z = z * self.relu(inValues['relu'][0], deriv=True);z = Z
+
+ Z, dW, dB = self.convolution_layer_backward(z, W['conv'][0],inValues['conv'][0]); z = Z
+ weight = {
+ 'W': dW,
+ 'B': dB
+ }
+ G['conv'].append(weight)
+
+ G['conv'].reverse()
+ G['fullyconnect'].reverse()
+
+ return G
+
+
+update:
+
+ def update(self, alfa, W, G):
+
+ W['fullyconnect'][0]['W'] -= alfa * np.sum(G['fullyconnect'][0]['W'],axis=0)
+ W['fullyconnect'][1]['W'] -= alfa * np.sum(G['fullyconnect'][1]['W'],axis=0)
+ W['fullyconnect'][0]['B'] -= alfa * np.sum(G['fullyconnect'][0]['B'],axis=0)
+ W['fullyconnect'][1]['B'] -= alfa * np.sum(G['fullyconnect'][1]['B'],axis=0)
+
+ W['conv'][0]['W'] -= alfa * np.sum(G['conv'][0]['W'],axis=0)
+ W['conv'][1]['W'] -= alfa * np.sum(G['conv'][1]['W'],axis=0)
+ W['conv'][2]['W'] -= alfa * np.sum(G['conv'][2]['W'],axis=0)
+ W['conv'][3]['W'] -= alfa * np.sum(G['conv'][3]['W'],axis=0)
+ W['conv'][0]['B'] -= alfa * np.sum(G['conv'][0]['B'],axis=0)
+ W['conv'][1]['B'] -= alfa * np.sum(G['conv'][1]['B'],axis=0)
+ W['conv'][2]['B'] -= alfa * np.sum(G['conv'][2]['B'],axis=0)
+ W['conv'][3]['B'] -= alfa * np.sum(G['conv'][3]['B'],axis=0)
+
+ return W
+
+"
+"['machine-learning', 'linear-regression']"," Title: Calculating Parameter value Using Gradient Descent for Linear Regression ModelBody:
+ Consider the following data with one input (x) and one output (y):
+ (x=1, y=2)
+ (x=2, y=1)
+ (x=3, y=2)
+ Apply linear regression on this data, using the hypothesis $h_Θ(x) = Θ_0 + Θ_1 x$, where $Θ_0$ and $Θ_1$ represent the parameters to be learned. Considering the initial values $Θ_0$= 1.0, and $Θ_1$ = 0.0, and learning rate 0.1, what will be the values of $Θ_0$ and $Θ_1$ after the first three iterations of Gradient Descent
+
+
+Using the least squares method, I took the derivative with respect to $Θ_0$ and $Θ_1$, plugged in the initial values to get the slope/intercept, and multiplied it by the learning rate 0.1 to get the step size. The step size was used to calculate the new $Θ_0$ and $Θ_1$ values.
+
+I am getting $Θ_0$ as 1.7821 when following the above. Please let me know if the approach followed and the solution are correct, or if there is a better way to solve this.
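+
+For reference, this is the iteration I am running (a sketch, assuming the usual mean-squared-error cost $\frac{1}{2m}\sum_i (h_Θ(x_i) - y_i)^2$):
+
+    # gradient descent for h(x) = t0 + t1 * x on the three points above
+    xs, ys = [1.0, 2.0, 3.0], [2.0, 1.0, 2.0]
+    t0, t1, lr, m = 1.0, 0.0, 0.1, len(xs)
+
+    for _ in range(3):
+        errors = [t0 + t1 * x - y for x, y in zip(xs, ys)]
+        grad0 = sum(errors) / m
+        grad1 = sum(e * x for e, x in zip(errors, xs)) / m
+        t0, t1 = t0 - lr * grad0, t1 - lr * grad1   # simultaneous update
+        print(t0, t1)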
+"
+['neural-networks']," Title: Neural nets for novicesBody: Stories like this one are quite popular these days.
+
+The idea of training a neural net to do something silly like this may sound trivial to experts like you, but for a novice like me it could be an interesting learning experience.
+
+Is there novice-friendly software I could play with to train a neural net to do something like this or is there necessarily a steep learning curve?
+"
+"['convolutional-neural-networks', 'convergence', 'facenet']"," Title: How do we give a kick start to the Facenet network?Body: I read the Facenet paper and one thing I am not sure about (it might be trivial and I missed it) is how do we give the kick start to the network.
+The embeddings, in the beginning, are random, so picking hard (or semi-hard) negatives, based on the Euclidean distance, would give random images in the beginning.
+Do we hope that over time this will converge to the actual desired hard images? Is it any reason to expect that this convergence will be attained?
+"
+"['neural-networks', 'function-approximation', 'universal-approximation-theorems']"," Title: Is there a way to calculate the closed-form expression of the function that a neural network computes?Body: As stated in the universal approximation theorem, a neural network can approximate almost any function.
+Is there a way to calculate the closed-form (or analytical) expression of the function that a neural network computes/approximates?
+Or, alternatively, figure out if the function is linear or non-linear?
+"
+"['machine-learning', 'logistic-regression']"," Title: Is there a Logistic Regression classifier that can perfectly classify the given data in this problem?Body: I have the following problem.
+
+A bank wants to decide whether a customer can be given a loan,
+based on two features related to (i) the monthly salary of the customer, and (ii) his/her account balance. For simplicity, we model the two features with two binary variables $X1$, $X2$ and the class $Y$ (all of which can be either 0 or 1). $Y=1$ indicates that the customer can be given loan, and Y=0 indicates otherwise.
+Consider the following dataset having four instances:
+($X1 = 0$, $X2 = 0$, $Y = 0$)
+($X1 = 0$, $X2 = 1$, $Y = 0$)
+($X1 = 1$, $X2 = 0$, $Y = 0$)
+($X1 = 1$, $X2 = 1$, $Y = 1$)
+Can there be any logistic regression classifier using X1 and X2 as features, that can perfectly classify the given data?
+
+The approach followed in the question was to calculate the respective probabilities for Y=0 and Y=1. The value of $p$ obtained was $0.25$ and $(1-p)$ was $0.75$. The $\log(p/(1-p))$ comes out negative.
+However, I don't understand what I need to do to understand whether there is a Logistic Regression classifier that can perfectly classify the given data.
+"
+"['neural-networks', 'backpropagation']"," Title: How does adding a small change to an neuron's weighted input affect the overall cost?Body: I was reading the following book: http://neuralnetworksanddeeplearning.com/chap2.html
+
+and towards the end of equation 29, there is a paragraph that explains this:
+
+
+
+However I am unsure how the equation below is derived:
+
+
+"
+['reinforcement-learning']," Title: Inverse Reinforcement Learning for Markov GamesBody: This is an Inverse Reinforcement Learning (IRL) problem. I have data (observations) on actions taken by a (real) agent. Given this data I want to estimate the likelihood of the observed actions in a Q-learning agent. Rewards are given by a linear function on a parameter, say alpha.
+
+Thus, I want to estimate the alpha that makes the observed actions more likely to be taken by a Q-agent. I read some papers (i.e. Ng & Russel 2004), but I found them rather generalistic.
+"
+"['classification', 'datasets']"," Title: Are there tools to help labelling images?Body: I need to manually classify thousands of pictures into discrete categories, say, where each picture is to be tagged either A, B, or C.
+
+Edit: I want to do this work myself, not outsource / crowdsource / whatever online collaborative distributed shenanigans. Also, I'm currently not interested in active learning. Finally, I don't need to label features inside the images (eg. Sloth) just file each image as either A, B, or C.
+
+Ideally I need a tool that will show me a picture, wait for me to press a single key (0 to 9 or A to Z), save the classification (filename + chosen character) in a simple CSV file in the same directory as the pictures, and show the next picture. Maybe also showing a progress bar for the entire work and ETA estimation.
+
+Before I go ahead and code it myself, is there anything like this already available?
+"
+"['genetic-algorithms', 'optimization', 'gradient-descent', 'neat']"," Title: How does NEAT find the most successful generation without gradients?Body: I'm new to NEAT, so, please, don't be too harsh. How does NEAT find the most successful generation without gradient descent or gradients?
+"
+['long-short-term-memory']," Title: Do I need LSTM units everywhere in the network?Body: I have recently begun researching LSTM networks, as I have finished my GA and am looking to progress to something more difficult. I believe I am using the classic LSTM (if that makes any sense) and have a few questions.
+
+Do I need LSTM units everywhere in the network? For example, can I only use LSTM units for the first and last layer and use feedforward units everywhere else?
+
+How do I go about implementing bias values into an LSTM?
+
+Assuming I create a network that predicts the next few words of a sentence, does that mean my outputs should be every possible word that the network could conceivably use?
+"
+"['game-ai', 'rts', 'benchmarks']"," Title: What are the most compact Real Time-Strategy Games?Body: There was a recent informal question on chat about RTS games suitable for AI benchmarks, and I thought it would be useful to ask a question about them in relation to AI research.
+
+Compact is defined as the fewest mechanics, elements, and smallest gameboard that produces a balanced, intractable, strategic game. (This is important because greater compactness facilitates mathematical analysis.)
+"
+"['neural-networks', 'recurrent-neural-networks', 'geometric-deep-learning', 'graphs', 'graph-neural-networks']"," Title: Are there neural networks that accept graphs or trees as inputs?Body: As far I know, the RNN accepts a sequence as input and can produce as a sequence as output.
+
+Are there neural networks that accept graphs or trees as inputs, so that to represent the relationships between the nodes of the graph or tree?
+"
+"['convolutional-neural-networks', 'computer-vision']"," Title: What is wrong with this CNN network, why are there hot pixels?Body: I'm building a CNN decoder, which mirrors (in reverse) the VGG network structure from Conv-4-1 layer.
+
+The net seems to be working fine; however, the output looks broken. Please note that the colour distortion is fine, it's the [255/0 RGB pixels], e.g. green, that I'm worried about.
+
+I tried to overfit a single image, but even then I get these hot pixels. Does anyone know why they appear?
+
+
+
+My net:
+
+ activation = 'elu'
+
+ input_ = Input((None, None, 512))
+ x = Conv2D(filters=256, kernel_size=self.kernel_size, padding='same', bias_initializer='zeros', activation=activation)(input_)
+
+ x = UpSampling2D()(x)
+ for _ in range(3):
+ x = Conv2D(filters=256, kernel_size=self.kernel_size, padding='same', activation=activation)(x)
+ x = Conv2D(filters=128, kernel_size=self.kernel_size, padding='same', activation=activation)(x)
+
+ x = UpSampling2D()(x)
+ x = Conv2D(filters=128, kernel_size=self.kernel_size, padding='same', activation=activation)(x)
+ x = Conv2D(filters=64, kernel_size=self.kernel_size, padding='same', activation=activation)(x)
+
+ x = UpSampling2D()(x)
+ x = Conv2D(filters=64, kernel_size=self.kernel_size, padding='same', activation=activation)(x)
+ x = Conv2D(filters=3, kernel_size=self.kernel_size, padding='same')(x)
+
+ model = Model(inputs=input_, outputs=x)
+
+"
+"['deep-learning', 'generative-adversarial-networks', 'autoencoders']"," Title: Deep Generative Networks Probability of ""Success""Body: I have built various ""successful"" GANs or VAEs that can generate realistic images reliably, but in either case the generative step is sampling a latent feature vector from some distribution and running it through a generator/decoder $G(x) \ s.t. \ x \sim \mathbb{D} $.
+
+Generally $\mathbb{D}$ has a continuous domain (normally distributed is a common choice), and $G(x)$ is a continuous and at least once-differentiable function (by construction, so that it can be optimized with a gradient scheme).
+
+Now assume we have 2 images $y_1, y_2$ generated by $x_1, x_2 \sim \mathbb{D}$ and that the shortest path from $x_1$ to $x_2$ is also in $\mathbb{D}$'s domain. Crawling along this path, we get a continuous path from $y_1$ to $y_2$ in image space. Wouldn't you expect that, along this continuous transformation, the images would be ""garbage"", as they are essentially a fusion of $y_1, y_2,$ and $\{y_k\}_k$, where $\{y_k\}_k$ describes a set of ""successful"" stops along the way?
+This holds especially in the case of single-mode distributions (like the normal), where every $x$ on the path must have a higher likelihood than at least one of the path's endpoints.
+
+To summarize my point: how do continuous generative models avoid producing garbage with high probability, when, given any 2 ""successful"" images, there is probably a wide range of ""garbage"" examples along the path between them in image space?
+
+Note: by ""successful"" I mean the generated image could be considered a draw from the true distribution you are trying to capture with the generator, and by ""garbage"" I mean images that obviously are not.
+"
+"['python', 'genetic-algorithms', 'evolutionary-algorithms']"," Title: What qualifies as 'fitness' for a genetic algorithm that minimizes an error function?Body: Suppose I have a set of data that I want to apply a segmented regression to, fitting linearly across the breakpoint. I aim to find the offsets and slopes of either line and the position of the breakpoint that minimizes an error function given the data I have, and then use them as sufficiently close initial guesses to find the exact solutions using a curve fit. I'll elect to choose the bounds as the mins and maxes of $x$ and $y$ of my data, and an arbitrary bound for a slope with $slope = a * \frac{y_{max}-y_{min}}{x_{max}-x_{min}}$ for a suitable $a$ where I can safely assume the magnitude of $a$ is greater than any possible slope that realistically represents the data. Let's suppose I define a function (in Python):
+
+ def generate_genetic_Parameters():
+ initial_parameters=[]
+ x_max=np.max(xData)
+ x_min=np.min(xData)
+ y_max=np.max(yData)
+ y_min=np.min(yData)
+ slope=10*(y_max-y_min)/(x_max-x_min)
+
+ initial_parameters.append([x_max,x_min]) #Bounds for module break point
+ initial_parameters.append([-slope,slope]) #Bounds for slopeA
+ initial_parameters.append([-slope,slope]) #Bounds for slopeB
+ initial_parameters.append([y_max,y_min]) #Bounds for offset A
+ initial_parameters.append([y_max,y_min]) #Bounds for offset B
+
+ result=differential_evolution(sumSquaredError,initial_parameters,seed=3)
+
+ return result.x
+
+ geneticParameters = generate_genetic_Parameters() #Generates genetic parameters
+
+    fittedParameters, pcov = curve_fit(func, xData, yData, geneticParameters)
+
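+func and sumSquaredError are defined elsewhere in my script, as are xData and yData (differential_evolution and curve_fit come from scipy.optimize). Minimal stand-ins for the two functions, just to make the snippet above self-contained, would look something like this (a piecewise-linear model with one breakpoint and a plain sum-of-squares error):
+
+    import numpy as np
+
+    def func(x, breakpoint_, slopeA, slopeB, offsetA, offsetB):
+        # piecewise-linear model with a single breakpoint
+        return np.where(x < breakpoint_,
+                        offsetA + slopeA * x,
+                        offsetB + slopeB * x)
+
+    def sumSquaredError(params):
+        # error function minimized by differential_evolution
+        return np.sum((yData - func(xData, *params)) ** 2)
+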
+
+This will do the trick, but what is the implicit standard of fitness that the differential evolution here deals with?
+"
+"['reinforcement-learning', 'monte-carlo-methods', 'sutton-barto']"," Title: Why is an average of all returns used to update the value in the first-visit MC control?Body: In Sutton & Barto's Reinforcement Learning: An Introduction, in page 83 (101 of the pdf), there is a description of first-visit MC control. In the phase where they update $Q(s, a)$, they do an average of all the returns $G$ for that state-action pair.
+
+Why don't they just update the value with a weight of $1 - \alpha$ for the value from previous episodes and a weight of $\alpha$ for the new episode's return, as is done in TD learning?
+
+I have also seen other books (for example, Algorithms for RL, page 22) where they update it using $\alpha$. What is the difference?
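+
+To make the comparison concrete, these are the two update rules I have in mind (the first is the incremental form of averaging all returns, with $N(s,a)$ the visit count; the second is the constant step-size form):
+$$
+Q(s,a) \leftarrow Q(s,a) + \frac{1}{N(s,a)} \left( G - Q(s,a) \right)
+\qquad \text{vs.} \qquad
+Q(s,a) \leftarrow Q(s,a) + \alpha \left( G - Q(s,a) \right).
+$$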
+"
+['natural-language-processing']," Title: How to distinguish between proper nouns and other words in NLP?Body: If an NLP system processes a text containing proper nouns like names, trade marks, etc. without knowing anything about the language (ie no lexicon), is it possible to recognise them?
+"
+"['machine-learning', 'training', 'cross-validation']"," Title: Ideal score of a model on training and cross validation dataBody: The question is little bit broad, but I could not find any concrete explanation anywhere, hence decided to ask the experts here.
+
+I have trained a classifier model for binary classification task. Now I am trying to fine tune the model. With different sets of hyperparameters I am getting different sets of accuracy on my train and test set. For example:
+
+(1) Train set: 0.99 | Cross-validation set: 0.72
+(2) Train set: 0.75 | Cross-validation set: 0.70
+(3) Train set: 0.69 | Cross-validation set: 0.69
+
+
+These are approximate numbers. But my point is: for certain sets of hyperparameters I am getting more or less similar CV accuracy, while the accuracy on the training data varies from overfitting to not so much overfitting.
+
+My question is - which of these models will work best on future unseen data? What is the recommendation in this scenario, shall we choose the model with higher training accuracy or lower training accuracy, given that CV accuracy is similar in all cases above (in fact CV score is better in the overfitted model)?
+"
+"['datasets', 'object-recognition']"," Title: Are absence of labels for classes of interest in a vision dataset a big problem?Body: I wish to be able to detect: pedestrians, cars, traffic lights
+
+I have two large datasets:
+ - One contains instances and labels of all three classes.
+ - The other contains instances of all three but only labels for pedestrians and cars. ie. there are many unlabelled traffic lights.
+
+I want to combine the two datasets and train Yolov3 on it. Will the unlabelled presence of objects of interest significantly affect detection performance of that category?
+"
+"['machine-learning', 'linear-regression', 'supervised-learning', 'regression']"," Title: How is regression machine learning?Body: In regression, in order to minimize an error function, a functional form of hypothesis $h$ must be decided upon, and it must be assumed (as far as I'm concerned) that $f$, the true mapping of instance space to target space, must have the same form as $h$ (if $h$ is linear, $f$ should be linear. If $h$ is sinusoidal, $f$ should be sinusoidal. Otherwise the choice of $h$ was poor).
+
+However, doesn't this require a priori knowledge of the dataset, which is exactly what we want to let computers figure out on their own in the first place? I thought machine learning was about letting machines do the work with minimal input from the human. Are we not telling the machine what general form $f$ will take and letting the machine, using such things as error minimization, do the rest? That seems to me to forsake the whole point of machine learning. I thought we were supposed to have the machine work for us by analyzing data after providing a training set. But it seems we're doing a lot of the work for it, looking at the data too and saying ""This will be linear. Find the coefficients $m, b$ that fit the data.""
+"
+"['reinforcement-learning', 'monte-carlo-tree-search', 'supervised-learning', 'alphago-zero', 'go']"," Title: How Does AlphaGo Zero Implement Reinforcement Learning?Body: AlphaGo Zero (https://deepmind.com/blog/alphago-zero-learning-scratch/) has several key components that contribute to it's success:
+
+
+- A Monte Carlo Tree Search Algorithm that allows it to better search and learn from the state space of Go
+- A Deep Neural Network architecture that learns the value and policies of given states, to better inform the MCTS.
+
+
+My question is, how is this Reinforcement Learning? Or rather, what aspects of this algorithm specifically make it a Reinforcement Learning problem? Couldn't this just be considered a Supervised Learning problem?
+"
+"['neural-networks', 'tensorflow', 'keras', 'performance', 'gpu']"," Title: In addition to matrix algebra, can GPU's also handle the various Kernel functions for Neural Networks?Body: I've read a number of articles on how GPUs can speed up matrix algebra calculations, but I'm wondering how calculations are performed when one uses various kernel functions in a neural network.
+
+If I use Sigmoid functions in my neural network, does the computer use the CPU for the Sigmoid calculation, and then the GPU for the subsequent matrix calculations?
+
+Alternatively, is the GPU capable of doing nonlinear calculations in addition to the linear algebra calculations? If not, how about a simple Kernel function like ReLU? Can a GPU do the Relu calculation, or does it defer to the CPU?
+
+Specifically, I'm using Keras with a Tensorflow backend, and would like to know what TensorFlow can and cannot use the GPU for, but I'm also interested in the general case.
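+
+For what it's worth, one way I thought of inspecting this (a sketch, assuming TensorFlow 1.x with the session API) is to log device placement, which should print whether each op, including Sigmoid and Relu, ends up on the CPU or the GPU:
+
+    import tensorflow as tf
+
+    x = tf.random_normal([4, 4])
+    y = tf.nn.sigmoid(tf.matmul(x, x))
+
+    # log_device_placement prints the device chosen for every op in the graph
+    with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
+        sess.run(y)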
+"
+"['classification', 'object-recognition']"," Title: Training Haar Cascade model with grey vs color imagesBody: Most examples if not all, are models that have been trained with images that are turned grey. Does this mean that models only detect edges? Why wouldnt you want to keep color so that model could learn that as well?
+"
+"['objective-functions', 'probability-distribution']"," Title: Unit integral condition on the output layerBody: I want to train a neural network on some input data from a probability distribution (say a Gaussian). The loss function would normally be $-\sum\log(f(x_i))$, where the sum is over the whole data (or in this case a mini batch) and $f$ is the NN function. However I need to enforce the fact that $\int_0^\infty f(x)dx=1$, in order for $f$ to be a real probability distribution. How can I add that to the loss function? Thank you!
+"
+"['convolutional-neural-networks', 'image-recognition', 'classification', 'generative-model', 'generative-adversarial-networks']"," Title: Can GANs be used to generate matching pairs to inputs?Body: I have some limited experience with MLPs and CNNs. I am working on a project where I've used a CNN to classify ""images"" into two classes, 0 and 1. I say ""images"" as they are not actually images in the traditional sense, rather we are encoding a string from a limited alphabet, such that each character has a one-hot encoded row in the ""image"". For example, we are using this for a bioinformatics application, so with the alphabet {A, C, G, T} the sequence ""ACGTCCAGCTACTTTACGG"" would be:
+
+
+
+All ""images"" are 4x26. I used a CNN to classify pairs of ""images"" (either using two channels, i.e. 2x4x26 or concatenating two representations as 8x26) according to our criteria with good results. The main idea is for the network to learn how 2 sequences interact, so there are particular patterns that make sense. If we want to detect a reverse complement for example, then the network should learn that A-T and G-C pairs are important. For this particular example, if the interaction is high/probable, the assigned label is 1, otherwise 0.
+
+However, now we want to go one step further and have a model that is able to generate ""images"" (sequences) that respect the same constraints as the classification problem. To solve this, I looked at Generative Adversarial Networks as the tool to perform the generation, thinking that maybe I could adapt the model from the classification to work as the discriminator. I've looked at the ""simpler"" models such as DCGAN and GAN, with implementations from https://github.com/eriklindernoren/Keras-GAN, as I've never studied or used a GAN before.
+
+Say that we want to generate pairs that are supposed to interact, or with the label 1 from before. I've adapted the DCGAN model to train on our 1-labelled encodings and tried different variations for the discriminator and generator, keeping in mind rules of thumb for stability. However, I can't get the model to learn anything significant. For example, I am trying to make the network learn the simple concept of reverse complement, mentioned above (expectation: learn to produce a pair with high interaction, from noise). Initially the accuracy for the discriminator is low, but after a few thousand epochs it increases drastically (very close to 100%, and the generator loss is huge, which apparently is a good thing, as the two models ""compete"" against each other?). However, the generated samples do not make any sense.
+
+I suspect that the generator learns the one-hot encoding above - since early generator output is just noise, it probably learns something like ""a single 1 per column is good"", but not the more high level relation between the 1s and 0s. The discriminator probably is able to tell that the early generated outputs are garbage as there are 1s all over the place, but perhaps at some point the generator can match the one-hot encoding and thus the discriminator decides that is not a fake. This would explain the high accuracy, despite the sequences not making sense.
+
+I am not sure of this is the case or not, or if it makes sense at all (I've just started reading about GANs yesterday). Is there a way to capture the high level features of the dataset? I am not interested in just generating something that looks like a real encoding, I'd like to generate something that follows the encoding but also exhibits patterns from the original data.
+
+I was thinking that maybe pretraining the discriminator would be a good idea, because it would then be able to discern between real-looking encodings for both the 0 and 1 classes. However, the pretraining idea seems frowned upon.
+
+I'd appreciate any ideas and advice. Thanks!
+"
+"['models', 'unsupervised-learning']"," Title: Prediction of values with an unsupervised modelBody: Given a set of historical data points, I am trying to predict a continuous output of which I have no historical record of, therefore the problem is of an unsupervised nature.
+
+I am wondering if there is any method or approach I should take to tackle this problem? Essentially, how to build a model that will provide an output that is not clustered?
+"
+"['machine-learning', 'convolutional-neural-networks', 'weights']"," Title: Why should each filter have different weights for each input channel?Body: From the answers to this question In a CNN, does each new filter have different weights for each input channel, or are the same weights of each filter used across input channels?, I got the fact that each filter has different weights for each input channel. But why should that be the case? What if we apply the same weights to each input channel? Does it work or not?
+"
+['robots']," Title: Make 9 AIs to replace Supreme Court justicesBody: Since the supreme court is always political, why not program 9 AI robots that use different methods to determine whether a law is constitutional and the outcome of cases. How would engineers go about building this? Would it work?
+"
+"['deep-learning', 'keras']"," Title: How can I interpret the following error graph?Body: I am training a neural network which produces the following errors (epoch number on the x axis). I have some questions regrading interpreting it.
+
+
+- When I say model.predict, is it giving me the result based on the final state (that is, epoch 5000)?
+- Towards the end (and in some places in the middle) there are places where the training error and validation error are farther apart. Does this mean that the model was over-fitting in those epochs?
+- Based on the graph, can one determine that the model was best at a certain epoch?
+- Does Keras have API methods to retrieve the model at a specific epoch, so that I can retrieve the best model?
+
+
+
+"
+"['reinforcement-learning', 'alphazero']"," Title: How does AlphaZero use its value and policy heads in conjunction?Body: I have a question about how the value and policy heads are used in AlphaZero (not Alphago Zero), and where the leaf nodes are relative to the root node. Specifically, there seem to be several possible interpretations:
+
+
+- Policy estimation only. This would be most similar to DFS, in which an average evaluation is computed through MCTS rollouts (though, as others have noted, the AlphaZero implementation actually seems to be deterministic apart from the exploration component, so 'rollout' may not be the most appropriate term) after reaching a terminal state. Here each leaf node would be at the end of the game.
+- Value estimation only. It seems that if the value network is to be used effectively, there should be a limit on the depth to which any position is searched, e.g. 1 or 2 ply. If so, what should the depth be?
+- They are combined in some way. If I understand correctly, there is a limit on the maximum number of moves imposed - so is this really the depth? By which I mean, if the game has still not ended, this is the chance to use the value head to produce the value estimation? The thing is that the paper states that the maximum number of moves for Chess and shogi games was 512, while it was 722 moves for Go. These are extremely deep - evaluations based on these seem to be rather too far from the starting state, even when averaged over many rollouts.
+
+
+My search for answers elsewhere hasn't yielded anything definitive, because they've focused more on one side or the other. For example, https://nikcheerla.github.io/deeplearningschool/2018/01/01/AlphaZero-Explained/ the emphasis seems to be on the value estimation.
+
+However, in the Alphazero pseudocode, e.g. from https://science.sciencemag.org/highwire/filestream/719481/field_highwire_adjunct_files/1/aar6404_DataS1.zip the emphasis seems to be on the policy selection. Indeed, it's not 100% clear if the value head is used at all (value seems to return -1 by default).
+
+Is there a gap in my understanding somewhere? Thanks!
+
+Edit: To explain this better, here's the bit of pseudocode given that I found slightly confusing:
+
+class Network(object):
+
+ def inference(self, image):
+ return (-1, {}) # Value, Policy
+
+ def get_weights(self):
+ # Returns the weights of this network.
+ return []
+
+
+So the (-1,{}) can either be placeholders, or -1 could be an actual value and {} a placeholder. My understanding is that they are both placeholders (because otherwise the value head would never be used), but -1 is the default value for unvisited nodes (this interpretation is taken from the line about the First Play Urgency value here: http://blog.lczero.org/2018/12/alphazero-paper-and-lc0-v0191.html). Now, if I understand correctly, inference is called both during training and playing, by the evaluate function. So my core question is: how deep into the tree are the leaf nodes (i.e. where would the evaluate function be called)?
+
+Here is the bit of code that confused me. In the official pseudocode as below, the 'rollout' seems to last until the game is over (expansion stops when a node has no children). So this means that under most circumstances you'll have a concrete game result - the player to move doesn't have a single move, and hence has lost (so -1 also makes sense here).
+
+def run_mcts(config: AlphaZeroConfig, game: Game, network: Network):
+ root = Node(0)
+ evaluate(root, game, network)
+ add_exploration_noise(config, root)
+
+ for _ in range(config.num_simulations):
+ node = root
+ scratch_game = game.clone()
+ search_path = [node]
+
+ while node.expanded():
+ action, node = select_child(config, node)
+ scratch_game.apply(action)
+ search_path.append(node)
+
+ value = evaluate(node, scratch_game, network)
+ backpropagate(search_path, value, scratch_game.to_play())
+ return select_action(config, game, root), root
+
+
+But under such conditions, the value head still doesn't get very much action (you'll almost always return -1 at the leaf nodes). There are a couple of exceptions to this.
+
+
+- When you reach the maximum number of allowable moves - however, this number is a massive 512 for chess & Shogi and 722 for Go, and seems to be too deep to be representative of the 1-ply positions, even averaged over MCTS rollouts.
+- When you are at the root node itself - but the value here isn't used for move selection (though it is used for the backprop of the rewards)
+
+
+So does that mean that the value head is only used for the backprop part of AlphaZero (and for super-long games)? Or did I misunderstand the depth of the leaf nodes?
+"
+"['convolutional-neural-networks', 'computer-vision', 'autoencoders', 'mobile-net-v2']"," Title: If I use MobileNetV2 for the encoder, can I use a different architecture for the decoder?Body: I have way more unlabeled data than labeled data. Therefore I would like to train an autoencoder using MobileNetV2 as the encoder. Then I will use the pre-trained model for the classification of the labeled data.
+I think it is rather difficult to "invert" the MobileNet architecture to create a decoder. Therefore, my question is: can I use a different architecture for the decoder, or will this introduce weird artefacts?
+"
+"['neural-networks', 'machine-learning', 'generative-adversarial-networks']"," Title: Why is an expectation used instead of simple sum in GANs?Body: Why do the GAN's loss functions use an expectation (sum + division) instead of a simple sum?
+"
+"['neural-networks', 'convolutional-neural-networks', 'convolution', 'convolutional-layers']"," Title: Why do the inputs and outputs of a convolutional layer usually have the same depth?Body: Here's the famous VGG-16 model.
+
+Do the inputs and outputs of a convolutional layer, before pooling, usually have the same depth? What's the reason for that?
+Is there a theory or paper trying to explain this kind of setting?
+"
+"['neural-networks', 'ai-design', 'game-ai', 'chess', 'alphazero']"," Title: How to deal with invalid output in a policy network?Body: I am interested in creating a neural network-based engine for chess. It uses a $8 \times 8 \times 73$ output space for each possible move as proposed in the Alpha Zero paper: Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm.
+
+However, when running the network, the first selected move is invalid. How should we deal with this? Basically, I see two options.
+
+
+- Pick the next highest outputted move until it is a valid move (essentially masking out illegal moves, sketched after this list). In this case, the network might, over time, automatically stop putting illegal moves on top.
+- Process the game as a loss for the player who picked the illegal move. This might have the disadvantage that the network might be 'stuck' on only a few legal moves.
+
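+A minimal sketch of option 1 (the masking idea), with made-up shapes and variable names:
+
+    import numpy as np
+
+    def masked_policy(policy_logits, legal_mask):
+        # policy_logits: flattened 8*8*73 network output
+        # legal_mask: 1 for legal moves, 0 for illegal moves
+        probs = np.exp(policy_logits - np.max(policy_logits))
+        probs = probs * legal_mask               # zero out illegal moves
+        return probs / probs.sum()               # renormalize over legal moves only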
+
+What is the preferred solution to this particular problem?
+"
+"['natural-language-processing', 'reference-request', 'dialogue-systems']"," Title: Is there a way to break a piece of dialogue into components?Body: In many chatbots, I've seen a lot of hardcoded responses, but nothing that allows an AI to break a piece of dialogue into components (say that the speaker sounds happy or is trying to be manipulative) and model a response based on this.
+I envision a coding scheme for different core components of conversation. This would allow an AI to be more dynamic in its responses, and would be closer to actually being able to hold a conversation.
+I'm not looking for AI-generated text, at least not in the sense of some NN or the like being fed a diet of literature and seeing what it spits out - there's nothing dynamic about that.
+"
+"['python', 'genetic-algorithms', 'evolutionary-algorithms', 'neat']"," Title: Exploding population size in neat-pythonBody: I am trying to make my AI win the board game ""Catan"" against my friends.
+Therefore I am using the Python implementation of NEAT.
+
+As I changed the values of weight_mutate_power, response_mutate_power, bias_mutate_power and compatibility_threshold in the config, the number of individuals and species exploded (roughly doubled every generation and exceeded pop_size).
+
+weight_mutate_power = 12
+response_mutate_power = 0
+bias_mutate_power = 0.5
+compatibility_threshold = 3
+
+
+Playing with those values, I discovered that the rate of the explosion changes in relation to those values (everything worked fine with the standard values from the documentation).
+
+Any idea how to control this behavior?
+My cluster is drowning in genomes...
+"
+"['monte-carlo-tree-search', 'time-complexity']"," Title: What is the time complexity of an unparellelized Monte Carlo tree search?Body: I am writing a report where I used a slightly modified version of MCTS (not parallelized). I thought It could be interesting if I could calculate its time complexity. I'd appreciate any help I could get.
+
+Here's the rough idea of how it works:
+
+Instead of tree search, I'm using graph search meaning I keep a list of visited nodes in order to avoid adding duplicate nodes.
+
+So in the expansion phase, I add all child nodes of the current node that aren't present elsewhere in the tree.
+
+For the remaining phases, it's essentially the same as the basic version of MCTS, with a default random policy in the simulation step.
+"
+"['monte-carlo-tree-search', 'proofs']"," Title: Proof of Correctness of Monte Carlo Tree SearchBody: I'm trying to write the proof of correctness of Monte Carlo Tree Search. Any help would be really appreciated.
+"
+"['neural-networks', 'generative-adversarial-networks']"," Title: How important is architectural similarity between the discriminator and generator of a GAN?Body: Shouldn't the discriminator and generator work fine even if they don't process data symmetrically? I mean, they don't only receive the final layer results of each other, they don't use data that from hidden layers.
+"
+"['neural-networks', 'deep-learning', 'data-preprocessing', 'data-augmentation']"," Title: Validation Loss Fluctuates then Decrease alongside Validation Accuracy IncreasesBody: I was working on CNN. I modified the training procedure on runtime.
+
+As we can see from the validation loss and validation accuracy, the yellow curve does not fluctuate much. The green curve and red curve fluctuate suddenly to higher validation loss and lower validation accuracy, then goes to the lower validation loss and the higher validation accuracy, especially for the green curve.
+Is it happening because of overfitting or something else?
+I am asking it because, after fluctuation, the loss decreases to the lowest point, and also the accuracy increases to the highest point.
+Can anyone tell me why is it happening?
+"
+"['neural-networks', 'reference-request', 'constrained-optimization']"," Title: How does one make a neural network learn the training data while also forcing it to represent some known structure?Body: In general, how does one make a neural network learn the training data while also forcing it to represent some known structure (e.g., representing a family of functions)?
+The neural network might find the optimal weights, but those weights might no longer make the layer represent the function I originally intended.
+For example, suppose I want to create a convolutional layer in the middle of my neural network that is a low-pass filter. In the context of the entire network, however, the layer might cease to be a low-pass filter at the end of training because the backpropagation algorithm found a better optimum.
+How do I allow the weights to be as optimal as possible, while still maintaining the low-pass characteristics I originally wanted?
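+
+To make the low-pass example concrete, here is a minimal sketch (my own illustration, not from any particular source) of one way to encode such a constraint: reparameterize the kernel so that, whatever the raw learned weights are, the effective filter stays non-negative and normalized, i.e. it can only smooth. The class name and the softmax reparameterization are assumptions made purely for illustration.
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class LowPassConv2d(nn.Module):
+    # Conv layer whose effective kernel is forced to stay a non-negative,
+    # sum-to-one (averaging / low-pass) filter, no matter what the optimizer does.
+    def __init__(self, channels, kernel_size=5):
+        super().__init__()
+        self.raw = nn.Parameter(torch.zeros(channels, 1, kernel_size, kernel_size))
+        self.channels = channels
+        self.kernel_size = kernel_size
+
+    def forward(self, x):
+        # Softmax over the spatial taps keeps every kernel non-negative and
+        # normalized, so the layer remains a smoothing (low-pass) filter.
+        w = F.softmax(self.raw.view(self.channels, -1), dim=1)
+        w = w.view(self.channels, 1, self.kernel_size, self.kernel_size)
+        # Depthwise convolution: one low-pass kernel per channel.
+        return F.conv2d(x, w, padding=self.kernel_size // 2, groups=self.channels)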
+General tips or pointing to specific literature would be much appreciated.
+"
+"['neural-networks', 'recurrent-neural-networks']"," Title: RNN weights when varying the input sizeBody: I have a time-varying input size vector for a RNN. However, I am facing some difficulties understanding how to deal with my network weights when the input changes.
+
+Say we have a set of natural positive integers
+$$
+\Gamma=\{1,2,\dots,F\},
+$$
+where $F=100$ for the sake of the example.
+
+A valid observation vector of my agent at time $t$ might be
+$$
+\gamma_t=[1,3,5,1].
+$$
+Thus, at time $t$ a set of weights will be produced by my RNN, according to $\gamma_t$. Say that at time $t+1$, my observation vector changes as
+$$
+\gamma_{t+1}=[3,5,8],
+$$
+and there is my problem. If I now continue training my RNN with the previous weights, the output would inevitably be affected. Also, which weights shall I remove? I see that an RNN can face this issue, but how shall I deal with the previously computed weights? Which one should I remove? How do I initialize a new one in case the cardinality of $\gamma_{t+1}$ is higher than that of $\gamma_t$?
+"
+"['neural-networks', 'generative-adversarial-networks']"," Title: DCGAN loss determining data normalization problemsBody: I'm working with a DCGAN, a deep CNN for classifying images with a GAN that competes with the classifier to generate images of what we are classifying.
+
+The goal of the project at the moment is to produce AI generated memes in the form of pepe the frog, based on a dataset found on the internet of roughly 2000 images. I scaled them all maintaining aspect ratio as my only form of normalization.
+
+As I train (I've tried many combinations of hyperparameters) for upwards of 100k epochs (batch size 32), my classifying network's loss averages around 1e-6, while my GAN's loss approaches nearly 16, yes 16, with a properly defined loss function.
+
+Now, because my images typically have varied features, some containing text, others containing full-body renditions, turned away from the viewport, etc., I'm assuming that this is because of the data I'm training with and its diverse features. Is my reasoning correct? Also, if allowed to continue, is it possible that the GAN learns to properly generate the data?
+
+The main reason I have come to the above conclusion is that if I train on a few hand-picked examples that have similar artistic styles/orientations and such (fewer than 100) and let it train, my GAN will generate decent images; however, they have low variability.
+"
+"['reinforcement-learning', 'deep-rl']"," Title: Can next state and action be same in Deep Deterministic Policy Gradient?Body: I am trying to apply deep deterministic policy gradient (DDPG) on a robotic application. My states consist of the joint angle positions of the robot and my actions are also its joint angle positions. Since, DDPG produces a continuous policy output where states are directly mapped onto the actions, can I say that my next state and action will be same? Simplistically, the input of the policy network will be the current state and the output will be the next state?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'classification']"," Title: How do I classify an image that contains only polygons?Body: I have two closed polygons, drawn as connected straight black lines on a white background. I need to classify such images in to three forms
+
+
+- Two separate polygons
+- One polygon encloses the other
+- The two polygons overlap each other.
+
+
+The polygons vary in sizes and location on the image, and the image contains only the polygons and the white background.
+
+Which neural network architecture should I use to solve this problem?
+"
+['game-ai']," Title: Feasibility of a team-based FPS AI?Body: We have seen advances in top down, RTS team games like Dota 2 and Starcraft II from companies like OpenAI who developed agents to beat real pro players most of the time. How would similar learning techniques compare to games like Overwatch that require faster reaction times and complex understanding of 3d space and effect?
+Or have we not developed solutions that could be tasked with this problem?
+"
+"['machine-learning', 'economics']"," Title: Can Machine Learning make economic decisions of human quality or better?Body: Basically, economic decision making is not restricted to mundane finance, the managing of money, but any decision that involves expected utility (some result with some degree of optimality.)
+
+
+- Can Machine Learning algorithms make economic decisions as well as or better than humans?
+
+
+""Like humans"" means understanding classes of objects and their interactions, including agents such as other humans.
+
+At a fundamental level, there must be some physical representation of an object, leading to usage of an object, leading to management of resources that the objects constitute.
+
+This may include the ability to effectively handle semantic data (NLP), because much of the relevant information is communicated in human languages.
+"
+"['neural-networks', 'natural-language-processing', 'classification', 'tensorflow', 'data-preprocessing']"," Title: Do I need to use a pre-processed dataset to classify comments?Body: I want to use Machine Learning for text classification, more precisely, I want to determine whether a text (or comment) is positive or negative. I can download a dataset with 120 million comments. I read the TensorFlow tutorial and they also have a text dataset. This dataset is already pre-processed, like the words are converted to integers and the most used words are in the top 10000.
+
+Do I also have to use a pre-processed dataset like them? If yes, does it have to be like the dataset from TensorFlow? And which pages could help me to implement that kind of program?
+
+My steps would be:
+
+
+- find datasets
+- preprocess them if needed
+- feed them in the neural network
+
+"
+"['reinforcement-learning', 'policy-gradients', 'multi-armed-bandits', 'state-spaces', 'continuous-action-spaces']"," Title: It is possible to solve a problem with continuous action spaces and no states with reinforcement learning?Body: I want to use Reinforcement Learning to optimize the distribution of energy for a peak shaving problem given by a thermodynamical simulation. However, I am not sure how to proceed as the action space is the only thing that really matters, in this sense:
+
+- The action space is a $288 \times 66$ matrix of real numbers between $0$ and $1$. The output of the simulation and therefore my reward depend solely on the distribution of this matrix.
+
+- The state space is therefore absent, as the only thing that matters is the matrix on which I have total control. At this stage of the simulation, no other variables are taken into consideration.
+
+
+I am not sure if this problem falls into tabular RL or requires approximation. In this case, I was thinking about using a policy gradient algorithm for figuring out the best distribution of the $288 \times 66$ matrix. However, I do not know how to deal with the "absence" of the state space. Instead of a tuple $\langle s,a,r,s' \rangle$, I would just have $\langle a, r \rangle$; is this even an RL-approachable problem? If not, how can I reshape it to make it solvable with RL techniques?
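+
+To make the interaction loop concrete, here is a minimal sketch of the $\langle a, r \rangle$ setting described above, with plain random search as a (non-RL) baseline; the simulate function is a hypothetical stand-in for the thermodynamical simulation and not real code from any library.
+
+import numpy as np
+
+def simulate(action_matrix):
+    # Placeholder: should return the scalar reward produced by the
+    # thermodynamical simulation for a given 288 x 66 distribution matrix.
+    raise NotImplementedError
+
+rng = np.random.default_rng(0)
+best_a, best_r = None, -np.inf
+
+for step in range(1000):
+    a = rng.random((288, 66))   # candidate action in [0, 1]^(288 x 66)
+    r = simulate(a)             # the only feedback available: <a, r>, no state
+    if r > best_r:
+        best_a, best_r = a, r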
+"
+"['machine-learning', 'deep-learning', 'natural-language-processing']"," Title: How can I build an AI with NLP that read storiesBody: I want to do an NLP project but I don't know if it's doable or not as I have no experience or knowledge in NLP or ML yet.
+
+The idea is as follows: let's say we have a story (in text) that has 10 characters. Can we identify them, their characteristics, and the whole sentences they said, and then analyze the emotions within those sentences?
+
+After that, is it possible to generate an audio version of the story where the text, in general, is narrated by one voice, and each individual character's sentences are read in a different voice generated specifically for that character? Finally, is it possible to make the tone of the characters' voices change depending on the emotions detected in their sentences?
+"
+"['python', 'game-ai', 'dqn', 'deep-rl']"," Title: Deep Q-Network (DQN) to learn the game 2048Body: I am trying to build a Deep Q-Network (DQN) agent that can learn to play the game 2048. I am orientating myself on other programs and articles that are based on the game snake and it worked well (specifically this one).
+
+As input state, I am only using the grid with the tiles as numpy array, and as a reward, I use (newScore-oldScore-1) to penalize moves that do not give any points at all. I know that this might not be optimal, as one might as well reward staying alive for as long as possible, but it should be okay for the first step, right? Nevertheless, I am not getting any good results whatsoever.
+
+I've tried to tweak the model layout, the number of neurons and layers, the optimizer, gamma, learning rates, rewards, etc. I also tried ending the game after 5 moves and optimizing just for those first five moves, but no matter what I do, I don't get any noticeable improvement.
+
+So, my question is: am I doing anything fundamentally wrong? Do I just have a small stupid mistake somewhere? Is this the wrong approach completely? (I know the game could probably be solved pretty easily without AI, but it seemed like a little fun project)
+
+My Jupyter notebook can be seen here Github. Sorry for the poor code quality, I'm still a beginner and I know I need to start making documentation even for fun little projects...
+
+Thank you in advance,
+
+Drukob
+
+edit:
+some code snippets:
+
+The input is formatted as a (1, 16) numpy array. I also tried normalizing the values or using only 1 and 0 for occupied and empty cells, but that did not help either, which is why I assumed it's maybe more of a conceptual problem.
+
+
+
+ def get_board(self):
+ grid = self.driver.execute_script(""return myGM.grid.cells;"")
+ mygrid = []
+ for line in grid:
+ a = [x['value'] if x != None else 0 for x in line]
+ #a = [1 if x != None else 0 for x in line]
+ mygrid.append(a)
+ return np.array(mygrid).reshape(1,16)
+
+
+The output is an index in {0, ..., 3}, representing the actions up, down, left or right, and it's just the action with the highest prediction score.
+
+
+
+prediction = agent.model.predict(old_state)
+predicted_move = np.argmax(prediction)
+
+
+I've tried a lot of different model architectures, but settled on a simpler network now, as I have read that unnecessarily complex structures are often a problem and unneeded. However, I couldn't find any reliable source for a method of finding the optimal layout other than experimenting, so I'd be happy to have some more suggestions there.
+
+
+
+model = models.Sequential()
+model.add(Dense(16, activation='relu', input_dim=16))
+#model.add(Dropout(0.15))
+#model.add(Dense(50, activation='relu'))
+#model.add(Dropout(0.15))
+model.add(Dense(20, activation='relu'))
+#model.add(Dropout(0.15))
+#model.add(Dense(30, input_dim=16, activation='relu'))
+#model.add(Dropout(0.15))
+#model.add(Dense(30, activation='relu'))
+#model.add(Dropout(0.15))
+#model.add(Dense(8, activation='relu'))
+#model.add(Dropout(0.15))
+model.add(Dense(4, activation='linear'))
+opt = Adam(lr=self.learning_rate)
+model.compile(loss='mse', optimizer=opt)
+
+"
+"['deep-learning', 'time-series', 'sequence-modeling', 'seq2seq', 'teacher-forcing']"," Title: Why feeding the correct output as input during training of seq2seq models?Body: I've read about seq2seq for time-series and it seemed really promising, but, when I went to implement it, all the tutorials I've found use the correct output as input to the decoder phase during training, instead of using the actual prediction made by the cell before it. Is there a reason why not do the latter?
+I've been using the tutorial from here
+But all the other tutorials that I've found followed the same principle.
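+
+For concreteness, here is a minimal numpy-only sketch (my own toy example, not taken from the linked tutorial) of what those tutorials do when they feed the correct output to the decoder, a scheme usually called teacher forcing: the decoder input at step t is the true target of step t-1, not the decoder's own previous prediction.
+
+import numpy as np
+
+# Toy target sequence the decoder should produce (length 10, 1 feature).
+target = np.arange(10, dtype=np.float32).reshape(10, 1)
+
+# Teacher forcing: shift the ground truth right by one step and use it as the
+# decoder input, with a start value (here 0) in the first position.
+decoder_input = np.vstack([np.zeros((1, 1), dtype=np.float32), target[:-1]])
+decoder_target = target
+
+print(decoder_input.ravel())   # [0. 0. 1. 2. ... 8.]
+print(decoder_target.ravel())  # [0. 1. 2. 3. ... 9.]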
+"
+"['machine-learning', 'reinforcement-learning', 'deep-rl']"," Title: Why are we using all hyperparameters in RL?Body: I am new in RL and I am trying to understand why do we need all these hyperparameters.
+Can somebody explain me why we use them and what are the best values to use for them?
+
+
+ total_episodes = 50000 # Total episodes
+
+ total_test_episodes = 100 # Total test episodes
+
+ max_steps = 99 # Max steps per episode
+
+ learning_rate = 0.7 # Learning rate
+
+ gamma = 0.618 # Discounting rate
+
+    # Exploration parameters
+
+ epsilon = 1.0 # Exploration rate
+
+ max_epsilon = 1.0 # Exploration probability at start
+
+ min_epsilon = 0.01 # Minimum exploration probability
+
+ decay_rate = 0.01 # Exponential decay rate
+
+
+I am currently working on the taxi_v2 problem from Gym.
+
+Link: https://learndatasci.com/tutorials/reinforcement-q-learning-scratch-python-openai-gym/
+"
+"['machine-learning', 'models', 'forecasting']"," Title: What approach should I take to model forecasting problem in machine learning?Body: I have a dataset which contains 4000k rows and 6 columns. The goal is to predict travel time demand of a taxi. I have read many articles regarding how to approach the problem. So, every writer tell his own way. The thing which I have concluded from all my readings is that I have to use multiple algorithms and check the accuracy of each one. Then I can ensemble them by averaging or any other approach.
+
+Which algorithms will be best for my problem accuracy-wise? Some links to code will be helpful for me.
+
+I currently only have a training set of data. After I work on it, it will be evaluated on some testing set by my professor. So, what should I do now? Should I split the data I have into my own training and testing sets, or separately generate dummy data as a testing set?
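+
+As a minimal sketch of the workflow mentioned above (holding out part of the provided data as my own test set and ensembling several models by averaging), assuming scikit-learn and dummy arrays in place of the real taxi features:
+
+import numpy as np
+from sklearn.model_selection import train_test_split
+from sklearn.linear_model import Ridge
+from sklearn.ensemble import RandomForestRegressor
+from sklearn.metrics import mean_absolute_error
+
+# X, y stand in for the 6 feature columns and the travel-time target.
+X, y = np.random.rand(1000, 6), np.random.rand(1000)
+
+# Split off a local test set from the data that was provided.
+X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
+
+models = [Ridge(), RandomForestRegressor(n_estimators=100, random_state=42)]
+preds = []
+for m in models:
+    m.fit(X_tr, y_tr)
+    preds.append(m.predict(X_te))
+
+# Simple ensemble: average the predictions of the individual models.
+ensemble_pred = np.mean(preds, axis=0)
+print(mean_absolute_error(y_te, ensemble_pred))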
+"
+"['deep-learning', 'convolutional-neural-networks', 'computer-vision', 'facial-recognition']"," Title: How does ARKit's Facial Tracking work?Body: iPhone X allows you to look at the TrueDepth camera and reports 52 facial blendshapes like how much your eye is opened, how much your jaw is opened, etc.
+
+If I want to do something similar with other cameras (not TrueDepth), what are my alternative methods? Currently, I just use a simple ConvNet which takes in an image and predicts 52 sigmoid values.
+
+What do you think could be the underlying technology behind ARKit Face Tracking?
+"
+"['deep-learning', 'object-detection']"," Title: How do I perform object detection if there is only one type of object?Body: How do I do object detection (or identify the location of an object) if there is only one kind of object, and they are more of less similar size, but the picture does not look like standard scenes (it is detection of drops on a substrate in microscopic images)? Which software is good for it?
+"
+"['neural-networks', 'deep-learning', 'data-science']"," Title: How to make a distinction between item feature and environment feature?Body: My data is stock data with features such as stocks' closing prices.I am curious to know if I can put the economy feature such as 'national interest rate' or 'unemployment rate' besides each stocks' features.
+Data:
+ Date Ticker Open High Low Close Interest Unemp.
+ 1/1 AMZN 75 78 73 76 0.015 0.03
+ 1/2 AMZN 76 77 72 72 0.016 0.03
+ 1/3 AMZN 72 78 76 77 0.013 0.03
+ ... ... ... ... ... ... ... ...
+ 1/1 AAPL 104 105 102 102 0.015 0.03
+ 1/2 AAPL 102 107 104 105 0.016 0.03
+ 1/3 AAPL 105 115 110 111 0.013 0.03
+ ... ... ... ... ... ... ... ...
+
+As you can see from the table above, daily prices of AMZN and AAPL are different but the Interest and Unemployment rates are the same. Can I feed the data to my neural network like the table above?
+In other words, can I put the individual stocks' information alongside environment features such as interest rates?
+"
+['neural-networks']," Title: Neural Network for Error Prediction of a Physics Model?Body: I have physical model prediction data as well as actual data. From this I can calculate the error of each prediction data point through simple subtraction. I am hoping to train a neural network to be able to assign an error to the input of the physical model.
+
+My current plan is to normalize the error of each data point and assign it as a label to each model input. So the NN would be trained (and validated) on 1000 data points with the associated error as a label. Once the model is trained, I would be able to input one data point, and the output of the neural network would be a single class, that is, the error. The purpose this would serve would be to tune the physical prediction model. Would this kind of architecture work? If so, would you recommend a feedforward network or an RNN? Thank you.
+"
+"['reference-request', 'ethics', 'explainable-ai']"," Title: Which explainable artificial intelligence techniques are there?Body: Explainable artificial intelligence (XAI) is concerned with the development of techniques that can enhance the interpretability, accountability, and transparency of artificial intelligence and, in particular, machine learning algorithms and models, especially black-box ones, such as artificial neural networks, so that these can also be adopted in areas, like healthcare, where the interpretability and understanding of the results (e.g. classifications) are required.
+Which XAI techniques are there?
+If there are many, to avoid making this question too broad, you can just provide a few examples (the most famous or effective ones), and, for people interested in more techniques and details, you can also provide one or more references/surveys/books that go into the details of XAI. The idea of this question is that people could easily find one technique that they could study to understand what XAI really is or how it can be approached.
+"
+"['neural-networks', 'machine-learning', 'math', 'generative-adversarial-networks', 'proofs']"," Title: How is G(z) related to x in GAN proof?Body: In the proofs for the original GAN paper, it is written:
+
+$$\int_x p_{data}(x) \log D(x)\,dx + \int_z p(z)\log(1 - D(G(z)))\,dz
+= \int_x p_{data}(x)\log D(x) + p_G(x) \log(1 - D(x))\,dx$$
+
+I've seen some explanations asserting that the following equality is the key to understanding:
+
+$$E_{z \sim p_z(z)} \log(1 - D(G(z))) = E_{x \sim p_G(x)} \log(1 - D(x))$$
+
+which is a consequence of the LOTUS theorem and $x_g = g(z)$. Why is $x_g = g(z)$?
+"
+"['philosophy', 'agi', 'theory-of-computation', 'computational-theory-of-mind', 'turing-completeness']"," Title: Would an artificial general intelligence have to be Turing complete?Body: For the purposes of this question, let's suppose that an artificial general intelligence (AGI) is defined as a machine that can successfully perform any intellectual task that a human being can [1].
+Would an AGI have to be Turing complete?
+"
+"['machine-learning', 'markov-property', 'hidden-markov-model']"," Title: Is the Markov property assumed in the forward algorithm?Body: I'm majoring in pure linguistics (not computational), and I don't have any basic knowledge regarding computational science or mathematics. But I happen to take the "Automatic Speech Recognition" course in my graduate school and struggling with it.
+I have a question regarding getting the formula for a component of the forward algorithm.
+$$
+\alpha_t(j) = \sum_{i=1}^{N} P(q_{t-1} = i, q_t=j, o_1^{t-1}, o_t|\lambda)
+$$
+where $q$ is a hidden state, $o$ is a given observation, and $\lambda$ contains the transition probabilities, the emission probabilities, and the start/end state.
+Is the Markov assumption (the current state is only dependent upon the one right before it) assumed here? I thought so, because it contains $q_{t-1}=i$ and not $q_{t-2}=k$ or $q_{t-3}=l$.
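+
+For reference, the recursive form of the forward variable that is usually derived from expressions like the one above is
+$$
+\alpha_t(j) = \Big[ \sum_{i=1}^{N} \alpha_{t-1}(i) \, a_{ij} \Big] \, b_j(o_t),
+$$
+where $a_{ij} = P(q_t = j \mid q_{t-1} = i)$ uses the first-order Markov assumption (the current state depends only on the one right before it) and $b_j(o_t) = P(o_t \mid q_t = j)$ uses the output-independence assumption (each observation depends only on the current state).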
+"
+"['deep-learning', 'classification', 'training', 'objective-functions']"," Title: If loss reduction means model improvement, why doesn't accuracy increase?Body: Problem Statement
+
+I've built a classifier to classify a dataset consisting of n samples and four classes of data. To this end, I've used pre-trained VGG-19, pre-trained AlexNet and even LeNet (with cross-entropy loss). However, I just changed the softmax layer's architecture and placed four neurons there (because my dataset includes just four classes). Since the dataset classes have a striking resemblance to each other, this classifier was unable to classify them and I was forced to use other methods.
+During training, after some epochs, the loss decreased from approximately 7 to approximately 1.2, but there were no changes in accuracy, and it was frozen at 25% (random-guess accuracy). In the best epochs, the accuracy reached nearly 27%, but it was completely unstable.
+
+Question
+
+How can this be explained? If loss reduction means model improvement, why doesn't accuracy increase? How is it possible that the loss decreases by nearly 6 points (approximately from 7 to 1) but nothing happens to the accuracy at all?
+"
+"['neural-networks', 'convolutional-neural-networks']"," Title: What is the use of softmax function in a CNN?Body: What is the use of softmax function? Why was it used at the end of fully connected layer in convolution neural network?
+"
+"['neural-networks', 'datasets', 'deep-neural-networks']"," Title: Further Normalization of Standardized data - ANNBody: I want to develop a regression model using the artificial neural network. For developing such a model I use standardised ( z-score normalised ) data.
+given below is the sample data set. Here MAX is the real data But I am using MAX-ZS (these values are continues)
+
+So my question is: while developing the model, do I have to perform further normalisation, such as min-max scaling, on my training data?
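+
+To make the question concrete, a small sketch (using scikit-learn and made-up numbers, purely for illustration) of the two transforms in question, z-score standardisation followed by an optional extra min-max scaling:
+
+import numpy as np
+from sklearn.preprocessing import StandardScaler, MinMaxScaler
+
+max_col = np.array([[3.2], [4.1], [2.7], [5.6]])        # dummy 'MAX' values
+
+max_zs = StandardScaler().fit_transform(max_col)        # z-score (what I have now)
+max_minmax = MinMaxScaler().fit_transform(max_zs)       # optional further [0, 1] scaling
+print(max_zs.ravel(), max_minmax.ravel())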
+Any kind of help is appreciated!
+"
+"['neural-networks', 'machine-learning', 'deep-learning']"," Title: Questions regarding rrn-writer by Robin Sloane?Body: https://github.com/robinsloan/rnn-writer
+
+I preface this by saying I do not know much about this topic, only that I have an intense interest in it, so I'm hoping I can make my questions as clear as possible.
+
+This writing assistant was released with full code and instructions on how to make it work. I was halfway through this process when I was told, basically, that this could not work on a Windows PC. Torch, specifically, either doesn't work on PC, or doesn't work very well, and I am hesitant to continue.
+
+First of all, does anyone know if that's true?
+
+If it is true, is it theoretically possible to recreate this in a different way that will work on Windows PCs?
+
+If so, has anyone ever done it before, or know how to do it?
+
+If there is a way to make Torch work on PCs, is someone willing to tell me how?
+
+I apologize if this isn't meant for this specific area of discussion. I just don't know where else to go with my questions that I will get any kind of helpful responses. Even if it's just telling me where I can go for more pertinent responses, that would still be appreciated.
+
+Thank you for any help you might be willing to give. Please let me know if I need to clarify anything.
+"
+"['machine-learning', 'reinforcement-learning', 'markov-decision-process']"," Title: Why are state transitions in MDPs probabilistic rather than deterministic?Body: I've read that for MDPs the state transition function $P_a(s, s')$ is a probability. This seems strange to me for modeling because most environments (like video games) are deterministic.
+
+Now, I'd like to assert that most systems we work with are deterministic given enough information in the state (i.e. in a video game, if you had the random number seed, you could predict 'rolls', and then everything else follows game logic).
+
+So, my guess for why MDP state transitions are probabilities is that the state given to the MDP is typically a subset (i.e., from feature engineering) of the total information available. That, and of course to model non-deterministic systems.
+
+Is my understanding correct?
+"
+"['reference-request', 'knowledge-representation', 'expert-systems', 'rule-based-systems']"," Title: Are Relational DBs and SQL used in Expert Systems?Body: In the book Prolog Programming for Artificial Intelligence, a large and intricate chapter (chapter 14) is dedicated to Expert Systems. In these systems, a knowledge-database is represented through facts and rules in a declarative manner, and then we use the PROLOG inference engine to derive statements and decisions.
+I was wondering: are there any examples of expert systems that represent knowledge through a standard Relational Database approach and then extract facts through SQL queries? Is there any research in this area? If not, why is a rule-based approach preferred?
+"
+"['ai-design', 'automation', 'meta-learning']"," Title: Why not go another layer deeper with Auto-AutoML?Body: So I'm finding AutoML to be pretty interesting but I'm still learning how it all works. I've played with the incredibly broken AutoKeras and got some decent results.
+
+The question is, if you are using an NN to optimize the architecture of another network, why not take it another layer deeper and use another network to find the optimum architecture for your parent network with a grand-parent network?
+
+The problem doesn't necessarily need to expand exponentially as the grand-parent network could do few-shot training sessions on the parent network which itself is doing few or one-shot training.
+"
+"['deep-learning', 'keras', 'word-embedding', 'long-short-term-memory', 'bert']"," Title: Adding BERT embeddings in LSTM embedding layerBody: I am planning to use BERT embeddings in the LSTM embedding layer instead of the usual Word2vec/Glove Embeddings. What are the possible ways to do that?
+"
+"['neural-networks', 'recurrent-neural-networks', 'pattern-recognition', 'function-approximation']"," Title: Changes in flow detection neural network?Body: Do you have any advice, what architecture of neural network is the best for following task?
+
+Let the input be some (complex) function; the neural network receives a stream of its values, so I guess there will be some kind of RNN or CNN?
+
+The output is a classification: is the function still the same or not?
+
+
+- If the neural network thinks, that the input is still the same function, the output is 0.
+- If the input function changes, the output will be 1.
+
+
+The input function is of course not one value or a simple math function (which would be trivial) but may be really sophisticated. So the neural network learns an abstraction of 'same' and 'different' over any complex stream?
+
+How would you approach that task?
+"
+"['reinforcement-learning', 'monte-carlo-methods', 'temporal-difference-methods', 'environment', 'dyna']"," Title: How do I know if the assumption of a static environment is made?Body: An important property of a reinforcement learning problem is whether the environment of the agent is static, which means that nothing changes if the agent remains inactive. Different learning methods assume in varying degrees that the environment is static.
+
+How can I check if and (if so) where in the Monte Carlo algorithm, temporal difference learning (TD(0)), the Dyna-Q architecture, and R-Max a static environment is implicitly assumed?
+
+How could I modify the relevant learning methods so that they can in principle adapt to changing environments? (It can be assumed that $\epsilon$ is sufficiently large.)
+"
+"['reinforcement-learning', 'definitions', 'credit-assignment-problem']"," Title: What is the credit assignment problem?Body: In reinforcement learning (RL), the credit assignment problem (CAP) seems to be an important problem. What is the CAP? Why is it relevant to RL?
+"
+"['game-ai', 'monte-carlo-tree-search']"," Title: When does the selection phase exactly end in MCTS?Body: All sources I can find provide a similar explanation to each phase.
+
+In the Selection Phase, we start at the root and choose child nodes until reaching a leaf. Once the leaf is reached (assuming the game is not terminated), we enter the Expansion Phase.
+
+In the Expansion Phase, we expand any number of child nodes and select one of the expanded nodes. Then, we enter the Play-Out Phase.
+
+Here is my confusion. If we choose to only expand a single node, the nodes that were not expanded will never be considered in future selections as we only select child nodes until a leaf is reached during the Selection Phase. Is this correct? If not, what am I misunderstanding about the Selection Phase?
+"
+"['philosophy', 'agi', 'mythology-of-ai']"," Title: What are the common myths associated with Artificial Intelligence?Body: What are some interesting myths of Artificial Intelligence and what are the facts behind them?
+"
+"['neural-networks', 'deep-learning', 'comparison', 'attention', 'geometric-deep-learning']"," Title: What is the difference between GAT and GaAN?Body: I was looking at two papers
+
+
+
+I'm trying to implement the second paper and I'm having some trouble understanding the differences between GAT and GaAN. By looking at equation 1 in the GaAN paper, I can see only two differences from GAT.
+
+
+- The first difference is that they are doing a dot product with the initial feature map and
+- Have another fully connected layer to project the result.
+
+
+Is there something else that I'm missing?
+"
+"['classification', 'datasets', 'object-detection']"," Title: How to choose our data set wisely?Body: I have a couple of questions and I was wondering if you could answer them.
+
+I have a bunch of images of cars, side view only. I want to train the model with those images. My objects of interest are 3 types of trucks that have different trailers. I rarely see two target objects in one image (maybe 1 or 2 in every 1000 images). However, I do see other types of cars that I do not want to detect.
+
+My questions are:
+
+
+- Do you think I should tackle this problem as a detection task or classification task? (for example, should I consider multi-label classification or omit those pictures)
+- Should I also include other vehicles that I do not want to detect in my training dataset? Let's say I do not assign bounding boxes to them, but include them in the training dataset just to make the system robust.
+
+
+I trained YOLO with 200 images; sometimes the trained model got confused and detected a wrong object that is not in any of the classes. Will this still happen when training with 2000 images per class?
+
+Is this due to the small size of the dataset, or is it because of not including those images with no bounding boxes?
+
+Thank you in advance!
+"
+['probability']," Title: Viterbi versus filteringBody: In Chapter 15 of Russell and Norvig's Artificial Intelligence -- A Modern Approach (Third Edition), they describe three basic tasks in temporal inference:
+
+
+- Filtering,
+- Likelihood, and
+- Finding the Most Likely Sequence.
+
+
+My question is on the difference between the first and third task. Finding the Most Likely Sequence determines, given evidences $e_1,\dots,e_n$, the most likely sequence of states $S_1,\dots,S_n$. This is done using the Viterbi algorithm. On the other hand, Filtering provides the probability distribution on states after seeing $e_1,\dots,e_n$. You could then pick the state with the highest probability, call it $S'_n$. I am guessing that $S'_n$ should always be equal to $S_n$. Likewise, you can already do the same after any prefix $e_1,\dots,e_i$, again picking the most likely state $S'_i$. I would love to have a simple example where $S'_1,\dots,S'_n$ is not equal to the sequence $S_1,\dots,S_n$ produced by the Viterbi algorithm.
+"
+"['neural-networks', 'recurrent-neural-networks']"," Title: How to handle proper names or variable names in word2vec?Body: The input in word2vec is known word (spellings), each tagged by its ID.
+
+
+- But if you process real text, there can be not only dictionary words but also proper nouns like human names, trademarks, file names, etc. How do you make an input for that?
+- If you consider some input where items are variables, say the meaning of the input would be x = something, and after some time you access the value of x and define some other stuff with it. What would be the format for this input, and will this approach work at all?
+
+"
+"['reinforcement-learning', 'q-learning', 'dqn']"," Title: Deep Reinforcement Learning: Rewards suddenly dip downBody: I am working on a deep reinforcement learning problem. The policy network has the same architecture as the one Deepmind published in 'Playing Atari with Deep Reinforcement Learning'. I am also using Prioritized Experience Replay. In the initial stage the behavior seems to be normal, i.e the agent is learning gradually. However, after a while the rewards suddenly go down by a lot. The TD erros also seem to be going up at the same time. I'm not sure how to interpret this problem.
+
+My hypotheses are:
+
+
+- The policy network is overfitting
+- Some filters fail to activate thereby misrepresenting the state information
+
+
+I would really appreciate it if you guys could give me some tips to narrow down this problem and debug it. Cheers.
+"
+"['reinforcement-learning', 'papers', 'alphazero', 'sutton-barto']"," Title: What knowledge is required for understanding the AlphaZero paper?Body: My goal is to understand AlphaZero paper published by deepmind. I'm beginning my journey trying to get the basic intuition of reinforcement learning from the book by Barto and Sutton.
+As per my background, I'm familiar with MDPs, value iteration and policy iteration.
+I wanted to ask up to which chapter of Barto and Sutton's book one is required to read in order to fully comprehend AlphaZero's paper. Monte Carlo tree search is discussed in Chapter 8 of the book. Will reading up to there be enough? Or would I need more resources apart from this book?
+"
+"['markov-chain', 'sequence-modeling', 'time-series', 'hidden-markov-model']"," Title: Can HMM, MRF, or CRF be used to classify the state of a single observation, not the entire observation sequence?Body: I learn that the Viterbi algorithm used for Hidden Markov Model (HMM) can classify a sequence of hidden states from the corresponding observations; Markov Random Field (MRF) and Conditional Random Field (CRF) can also do it.
+
+Can these algorithms be used to classify a single future state?
+"
+"['machine-learning', 'classification', 'recurrent-neural-networks', 'prediction']"," Title: How can I stabilise a recurrent neural network used for binary classification?Body: I’m looking for some help with my neural network. I’m working on a binary classification on a recurrent neural network that predicts stock movements (up and down) Let’s say I’m studying Eur/Usd, I’m using all the data from 2000 to 2017 to train et I’m trying to predict every day of 2018.
+
+The issue I'm dealing with right now is that my program is giving me different answers every time I run it, even without changing anything, and I don't understand why.
+
+The accuracy during training on 2000 to 2017 is around 95%, but I've noticed another issue. When I train it with 1 new data point every day in 2018, I thought 2 epochs would be enough (if it doesn't find the right answer the first time, then it knows what the answer is, since the problem is binary), but apparently that doesn't work.
+
+Do you guys have any suggestion to stabilize my NN?
+"
+"['convolutional-neural-networks', 'unsupervised-learning', 'u-net']"," Title: How can I use the bottleneck layer of the U-net to calculate the similarity between two images?Body: I would like to use the bottleneck layer of U-Net (the last layer of the encoder) to calculate the similarity between two images. For that, I have to somehow flatten the last layer of the encoder. In my opinion, there are two approaches:
+
+
+- Take the last layer which in my case is $4 \times 4 \times 16$ and flatten it to 1D
+- Add a dense before the decoder and then reshape the dense 1D layer into 3D
+
+
+For the second case, I am not sure how this would affect the network: arbitrarily reshaping a 1D array into a 3D tensor. Could that introduce weird artifacts? Does anyone have experience with a similar problem?
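+
+Just to make the first option concrete, a minimal sketch (my own illustration) of flattening the $4 \times 4 \times 16$ bottleneck and comparing two images with cosine similarity:
+
+import numpy as np
+
+def bottleneck_similarity(feat_a, feat_b):
+    # feat_a, feat_b: encoder outputs of shape (4, 4, 16) for the two images.
+    a = feat_a.reshape(-1)    # option 1: plain flattening to a 256-d vector
+    b = feat_b.reshape(-1)
+    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))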
+"
+"['neural-networks', 'gradient-descent']"," Title: Neural networks when gradient descent is not possibleBody: I am looking for an example in which it is simply impossible to use some sort of gradient descent to train a neural network. Is this available?
+
+I have read quite a few papers about gradient-free optimization tools, but they always use them on a network for which you can also use gradient descent. I want to have a situation in which the only option to train the network is, for example, a genetic algorithm.
+"
+"['agi', 'social', 'futurism']"," Title: Will artificial intelligence cause mass unemployment?Body: Everyone is afraid of losing their job to robots. Will or does artificial intelligence cause mass unemployment?
+"
+"['objective-functions', 'autoencoders', 'pytorch']"," Title: Limits for a bottleneckBody: I have some 64x64 pixels frames from a (simulated) video, with a spaceship moving on a fixed background. The spaceship moves in a straight line with constant velocity from left to right (along the x-axis), and the frames are from equal time intervals. I can also place the ship at different y positions and let it move. In total I have 8 y positions and 64 frames for each y position (the details don't matter that much). Intuitively, as the background is fixed, and the shape of the ship is the same, all the information to reconstruct the image is found in the x and y position of the spaceship. What I am trying to do is to have a NN with an encoded and a decoder and a bottleneck in the middle and I want that bottleneck to have just 2 neurons. Ideally, the network would learn in these 2 neurons some function of x and y in the encoder, and the decoder would invert that function to give the original image. Here is my NN architecture (in Pytorch):
+
+class Rocket_E_NN(nn.Module):
+ def __init__(self):
+ super().__init__()
+
+ self.encoder = nn.Sequential(
+ nn.Conv2d(3, 32, 4, 2, 1), # B, 32, 32, 32
+ nn.ReLU(True),
+ nn.Conv2d(32, 32, 4, 2, 1), # B, 32, 16, 16
+ nn.ReLU(True),
+ nn.Conv2d(32, 64, 4, 2, 1), # B, 64, 8, 8
+ nn.ReLU(True),
+ nn.Conv2d(64, 64, 4, 2, 1), # B, 64, 4, 4
+ nn.ReLU(True),
+ nn.Conv2d(64, 256, 4, 1), # B, 256, 1, 1
+ nn.ReLU(True),
+ View((-1, 256*1*1)), # B, 256
+            nn.Linear(256, 2),                 # B, 2
+ )
+
+ def forward(self, x):
+ z = self.encoder(x)
+ return z
+
+class Rocket_D_NN(nn.Module):
+ def __init__(self):
+ super().__init__()
+ self.decoder = nn.Sequential(
+ nn.Linear(2, 256), # B, 256
+ View((-1, 256, 1, 1)), # B, 256, 1, 1
+ nn.ReLU(True),
+ nn.ConvTranspose2d(256, 64, 4), # B, 64, 4, 4
+ nn.ReLU(True),
+ nn.ConvTranspose2d(64, 64, 4, 2, 1), # B, 64, 8, 8
+ nn.ReLU(True),
+ nn.ConvTranspose2d(64, 32, 4, 2, 1), # B, 32, 16, 16
+ nn.ReLU(True),
+ nn.ConvTranspose2d(32, 32, 4, 2, 1), # B, 32, 32, 32
+ nn.ReLU(True),
+ nn.ConvTranspose2d(32, 3, 4, 2, 1), # B, 3, 64, 64
+ )
+
+ def forward(self, z):
+ x = self.decoder(z)
+ return x
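+
+Note that the snippet above also relies on a small View module and omits the imports; its actual definition is not shown in my code, but a minimal version that makes the architecture self-contained (assuming it is a plain reshape helper) would be:
+
+import torch
+import torch.nn as nn
+
+class View(nn.Module):
+    # Reshape helper assumed by the encoder/decoder above.
+    def __init__(self, shape):
+        super().__init__()
+        self.shape = shape
+
+    def forward(self, x):
+        return x.view(*self.shape)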
+
+
+And this is the example of one of the images that I have (it was much higher resolution but I brought it down to 64x64):
+
+
+
+So after training it for around 2000 epochs with a batch size of 128, with Adam, trying several LR schedules (going from 1e-3 to 1e-6), I can't get the loss below an RMSE of 0.010-0.015 (the pixel values are between 0 and 1). The reconstructed image looks OK by eye, but I would need a better loss for the purpose of my project. Is there any way I can push the loss lower, or am I asking too much of the NN to distill all the information into these 2 numbers?
+"
+"['comparison', 'statistical-ai', 'explainable-ai', 'symbolic-ai']"," Title: Is explainable AI more feasible through symbolic AI or soft computing?Body: Is explainable AI more feasible through symbolic AI or soft computing?
+How much does each paradigm, symbolic AI and soft computing (or hybrid approaches), address explanation and argumentation, where symbolic AI refers e.g. to GOFAI or expert systems, and soft computing refers to machine learning or probabilistic methods?
+"
+"['emotional-intelligence', 'biology']"," Title: Is There A Need For Stochastic Inputs To Mimic Real-World Biology And Environment?Body: I'm kind of new to machine learning/AI, but I was wondering if using thresholds/fuzzy logic-like functions and even networks of dependent, stochastic variables that change over time (LTL maybe?), would be ample enough to emulate natural processes like emotions, hunger, maybe even pain.
+
+My dilemma is whether creating a basic library to do this for the developer community is worth it if everything can be modeled more-or-less mathematically deterministic, even if the formulas are really complicated (see research like: https://engineering.stanford.edu/news/virtual-cell-would-bring-benefits-computer-simulation-biology).
+
+My initial reasoning was that biological processes are connected to psychological functionality (e.g., being hungry might make someone irritable, but that irritability may wear off, which triggers some paths of thought but not others). But these are so inter-dependent that it may be random, or essentially a PRNG, when it comes to properly simulating the mood fluctuations and biological processes that computers don't have but humans do.
+
+Would we be better off waiting for these complex physical/neurological models to come out?
+"
+"['machine-learning', 'reinforcement-learning', 'hyperparameter-optimization']"," Title: Evolving Machine LearningBody: It seems to me that, right now, the key to making a good Machine Learning model is in choosing the right combination of hyper-parameters.
+
+Firstly: Am I right in saying that, if a model is able to tune its own hyper-parameters, we have in some sense achieved a general intelligence system? Or a glimpse of an actual artificial intelligence system?
+
+I feel the answer lies in what one means by ""tune its own hyper-parameters"". If it means being able to reach Bayesian levels of performance on that task, then theoretically, after the tuning, the model is able to perform on a par with or better than humans, and so it seems the answer would be yes.
+
+Secondly: I understand that hyper-parameter tuning is done intuitively. But there is a set of general directions that is discernible by looking at the results. Here I am talking about a heuristic approach to perfecting a learning model.
+Consider an example:
+Say I hardcode a model to, while training, observe gradient values. If the gradient is too large or the cost is highly oscillatory, then restart training with a smaller learning rate.
+Then obtain metrics on a test set. If it is poor, then again restart training with regularisation or increased regularisation.
+It can also observe various plot behaviours, etc.
+
+The point is maybe not every trick up a researcher's sleeve can be hardcoded. But a decent level of basic tuning can be done.
+
+Thirdly: Let us say we have a reinforcement learning system on top of a supervised learning system. That is, an RL network sets some hyper-parameters. The action then is to train with these hyper-parameters. The reward would be the accuracy on the test set.
+
+Is it possible that such a system could solve the problem of hyper-parameter tuning?
+"
+"['machine-learning', 'ai-design', 'classification', 'multilayer-perceptrons', 'online-learning']"," Title: Which online machine learning technique to use for multi-class classification problem with multiple inputs?Body: I have the following problem. We have $4$ separate discrete inputs, which can take any integer value between $-63$ and $63$. The output is also supposed to be a discrete value between $-63$ and $63$. Another constraint is that the solution should allow for online learning with singular values or mini-batches, as the dataset is too big to load all the training data into memory.
+
+I have tried the following method, but the predictions are not good.
+
+I created an MLP or feedforward network with $4$ inputs and $127$ outputs. The inputs are being fed without normalization. The number of hidden layers is $4$ with $[8,16,32,64]$ units in each (respectively). So, essentially, this treats the problem like a sequence classification problem. For training, we feed the non-normalized input along with a one-hot encoded vector for that specific value as output. Inference is done the same way: finding the hottest output and returning it as the next number in the sequence.
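+
+For reference, a minimal sketch of the setup described above (my own reconstruction in Keras, with made-up sample values), treating the output as one of 127 classes and updating online on single samples:
+
+import numpy as np
+import tensorflow as tf
+
+model = tf.keras.Sequential([
+    tf.keras.layers.Input(shape=(4,)),
+    tf.keras.layers.Dense(8, activation='relu'),
+    tf.keras.layers.Dense(16, activation='relu'),
+    tf.keras.layers.Dense(32, activation='relu'),
+    tf.keras.layers.Dense(64, activation='relu'),
+    tf.keras.layers.Dense(127, activation='softmax'),   # one class per value in [-63, 63]
+])
+model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
+
+# Online update: map a target value v in [-63, 63] to class index v + 63 and
+# train on the single sample (or a mini-batch) as it arrives.
+x = np.array([[-10, 5, 63, -63]], dtype=np.float32)
+y = np.array([7 + 63])
+model.train_on_batch(x, y)
+
+# Inference: take the hottest output and map it back to [-63, 63].
+pred_value = int(np.argmax(model.predict(x), axis=-1)[0]) - 63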
+"
+"['neural-networks', 'machine-learning', 'human-like', 'handwritten-characters']"," Title: How to add variation in the results of a neural networks?Body: I would like to create a neural network that converts text into handwriting for use with a pen plotter. Before I start on this project, I'd like to be sure that artificial intelligence is the best way to do this. A problem that I foresee with this approach is a lack of human like variation in the results. For example, the word ""dog"", when inputted into the network, would be the same every time, assuming I'm not missing something. I am interested if there is any way to vary the output of the network in a realistic way, even when the input is exactly the same. Could I use a second network to make the results more random, but also still look human-like? Any thoughts/ideas would be greatly appreciated.
+"
+"['neural-networks', 'machine-learning', 'natural-language-processing', 'classification', 'optical-character-recognition']"," Title: Is there any way to classify Document Image without OCR?Body: I have multiple invoices images which need to classify invoice types such as fright, utility, goods, etc. Is there any way to classify without OCR?
+"
+"['neural-networks', 'datasets', 'artificial-neuron', 'regression']"," Title: Decide Number of input Parameters and Output Parameters - ANNBody: I have to create a Neural Network for regression purpose. Basically, I created a Model which predict next 5 values when we give past 6 values.
+
+I want to make a change in this neural network. For example,
+
+when giving 6 past values I have to predict the next 10 values.
+
+Here, is there any issue with selecting an output dimension greater than the input dimension? Which type of parameter arrangement makes the neural network achieve good accuracy? Do I always have to make the number of input parameters greater than the number of output parameters?
+
+Thanks in Advance!
+"
+"['reinforcement-learning', 'evolutionary-algorithms', 'reference-request', 'neuroevolution']"," Title: Is there any research work that attempts to combine neuroevolution with deep reinforcement learning?Body: Neuroevolution can be used to evolve a network's architecture (and weights, of course). Deep reinforcement learning, on the other hand, has been proven to be extremely powerful at optimising the network weights in order to train really well-performing agents. Can we use the following pipeline?
+
+
+- search for the best network topology/weights through neuroevolution
+- train the best candidate selected above through DQN or something similar
+
+
+This seems reasonable to me, but I haven't found anything on the matter.
+
+Is there any research work that attempts to combine neuroevolution with deep reinforcement learning? Is it feasible? What are the main challenges?
+"
+"['research', 'math']"," Title: What sort of mathematical problems are there in AI that people are working on?Body: I recently got a 18-month postdoc position in a math department. It's a position with relative light teaching duty and a lot of freedom about what type of research that I want to do.
+
+Previously I was mostly doing some research in probability and combinatorics. But I am thinking of doing a bit more application-oriented work, e.g., AI. (There is also the consideration that there is a good chance that I will not get a tenure-track position at the end of my current position. Learning a bit of AI might be helpful for other career possibilities.)
+
+What sort of mathematical problems are there in AI that people are working on? From what I have heard, there are people studying
+
+
+
+Any other examples?
+"
+"['neural-networks', 'deep-learning', 'objective-functions']"," Title: Analysis of Training Loss and Validation Loss GraphBody: Here I am Showing Two Loss graphs of an Artificial Neural Network.
+
+Model 1
+
+
+
+Model 2
+
+
+
+Blue - training loss
+
+Red - validation loss
+
+Can you help me analyse these graphs? I read some articles and posts, but they didn't make things any clearer for me.
+"
+['terminology']," Title: Describing the order of a tensorBody: When describing tensors of higher order I feel like there is an overloading of the term dimension as it may be used to describe the order of the tensor but also the dimensionality of the... ""orders""?
+
+Assume one describes the third-order tensor produced by a convolutional layer and wants to refer to its width and height. Do you say spatial dimensions? Would you write about the channel dimension? Or rather the direction? Saying ""spatial order"" feels really weird. But staying with dimensions leads to sentences like ""The spatial dimensions are of equal dimensionality."" (Disclaimer: Obviously you can avoid the issue here by restructuring, but doing this at every occasion does not feel like a satisfactory solution.)
+"
+"['object-recognition', 'object-detection']"," Title: From what aspect to measure the performance of an object detector?Body: I am on the hook to measure the prediction results of an object detector. I learned from some tutorials that when testing a trained object detector, for each object in the test image, the following information is provided:
+
+ <object>
+ <name>date</name>
+ <pose>Unspecified</pose>
+ <truncated>0</truncated>
+ <difficult>0</difficult>
+ <bndbox>
+ <xmin>451</xmin>
+ <ymin>182</ymin>
+ <xmax>695</xmax>
+ <ymax>359</ymax>
+ </bndbox>
+</object>
+
+
+However, it is still unclear to me 1) how this information is used by the object detector to measure the accuracy, and 2) how the ""loss"" is computed in this case. Is it something like a strict comparison? For instance, if for the object ""date"" I got the following outputs:
+
+ <object>
+ <name>date</name>
+ <pose>Unspecified</pose>
+ <truncated>0</truncated>
+ <difficult>0</difficult>
+ <bndbox>
+ <xmin>461</xmin> <---- different
+ <ymin>182</ymin>
+ <xmax>695</xmax>
+ <ymax>359</ymax>
+ </bndbox>
+</object>
+
+
+Then should I believe that my object detector did something wrong? Or do they tolerate some small delta, such that if the bounding box drifts slightly, it is still acceptable, but if the ""label"" is totally wrong, then it counts as wrong for sure?
+
+This is like a ""black box"" to me, and it would be great if someone could shed some light on this. Thank you.
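+
+To illustrate the ""small delta"" idea numerically: one common way to compare a predicted box with a ground-truth box of the same class is the intersection over union (IoU), where a match is usually accepted above some threshold (often 0.5) instead of requiring exact coordinates. A minimal sketch, assuming boxes are given as (xmin, ymin, xmax, ymax):
+
+def iou(box_a, box_b):
+    # Boxes as (xmin, ymin, xmax, ymax).
+    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
+    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
+    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
+    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
+    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
+    return inter / float(area_a + area_b - inter)
+
+# A small coordinate drift still yields a high overlap, e.g.:
+print(iou((451, 182, 695, 359), (461, 182, 695, 359)))   # ~0.96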
+"
+"['comparison', 'search', 'planning', 'pddl']"," Title: How to transform a PDDL to search?Body: I have a question about search and planning:
+I still haven't understood the difference between the two, but they seem very similar to me; here is a question I am struggling with:
+
+
+ ""Having formulated a PDDL problem, transform it into research,
+ emphasizing what the differences are.""
+
+
+Can someone provide an example?
+
+I attached an example of a simple PDDL problem from my book (I'm using Russell & Norvig).
+
+"
+"['reinforcement-learning', 'ai-design', 'deep-rl', 'markov-decision-process']"," Title: How can I encode states where the environment consists of multiple identical elements, but each is characterised by different features?Body: I am quite new to Deep Reinforcement Learning, and I'm trying to define states in a Reinforcement Learning problem. The environment consists of multiple identical elements, and each one of them is characterized by different features of the same type. In other words, let us say we have $e_0$, $e_1$, and $e_2$. Then, suppose that each one is characterized by features $f_0$ and $f_1$, where $f_0$ belongs to $[0, 1]$, and $f_1$ belongs to $\{0, 1, 2, 3, 4, 5\}$. Then, $e_0$ will have some value for the features $f_0$ and $f_1$, and the same goes for $e_1$ and $e_2$.
+How can I encode such states?
+Can I simply vectorize such a state by concatenating the different features of each element, obtaining $[f_{0e_0}, f_{1e_0}, f_{0e_1}, f_{1e_1}, f_{0e_2}, f_{1e_2}]$, or should I use a convolutional architecture of some sort?
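+
+Just to make the flat-vector option concrete, a minimal sketch of the concatenation described above (the feature values are made up); one common refinement, not shown here, would be to one-hot encode the discrete feature $f_1$ instead of feeding it as a raw integer:
+
+import numpy as np
+
+# Three elements, each described by (f_0, f_1).
+elements = [(0.3, 2), (0.7, 0), (0.1, 5)]
+state = np.array([v for f0, f1 in elements for v in (f0, f1)], dtype=np.float32)
+print(state)   # [0.3 2.  0.7 0.  0.1 5. ]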
+"
+"['machine-learning', 'support-vector-machine', 'regression']"," Title: why my regression model predict every datapoint to the same valueBody: I am trying to train a SVR but I found that with some combination of features, the trained SVR predict every point in test set to the same value. this problem occurs much more when I use linear kernel than other kernels. The parameters are: C=1, gamma=0.5.
+My question is what leads to this kind of problem. Is there a name for this phenomenon? Thank you!
+"
+"['computer-vision', 'object-recognition', 'object-detection']"," Title: Understanding average precision (AP) in measuring object detector performanceBody: I am trying to understand the average precision (AP) metrics in evaluating the performance of deep-learning based object detection models. Suppose we have the following ground true (four objects highlighted by four blue arrows):
+
+
+
+where we have labelled four objects:
+
+person 25 16 38 56
+person 129 123 41 62
+kite 45 16 38 56
+kite 169 123 41 62
+
+
+And when feeding the above image to an object detector, it gives the following outputs:
+
+
+
+It's easy to see that the object detector identified another object with low confidence:
+
+person 0.4 25 16 38 56
+person 0.2 129 123 41 62
+kite 0.3 45 16 38 56
+kite 0.5 169 123 41 62
+kite 0.1 769 823 141 162 <-------- a ""kite""
+
+
+In my humble opinion, this is an erroneous behavior of the object detector, which should be counted as a ""false positive"".
+
+However, since the ""kite"" has a quite low confidence score (0.1), when using the standard mAP algorithm to compute the performance, I got the following output (I am using code from here to compute the mAP):
+
+AP: 100.00% (kite)
+AP: 100.00% (person)
+mAP: 100.00%
+
+
+So here are my questions and confusions:
+
+
+- From what kind of design intention is the AP defined in a way such that objects with a low confidence score are ignored, and therefore in this case we pass with flying colors?
+- Are there any metrics that take this extra ""kite"" into consideration and would therefore output one ""false positive"" for the object detection model? I am just thinking that, in this way, we can further proceed to improve the accuracy of this model during training.
+
+"
+"['hidden-layers', 'boltzmann-machine', 'restricted-boltzmann-machine']"," Title: Does the encoding of a restricted Boltzmann machine improve with more layers?Body: I'm using a restricted Boltzmann machine (RBM) as an autoencoder. For now, I use a simple architecture of two layers, the input (~100 nodes) and the output (3 nodes) layers. I'm thinking to add more hidden layers.
+
+Are there some improvements in encoding by adding multiple hidden layers? If yes, how can multiple layers improve the encoding?
+"
+['programming-languages']," Title: Why aren't compiled languages as popular as Python in AI?Body: One of if not the most popular programming language for data science and AI today is Python, with R being a frequently cited runner-up. However, both of them are interpreted languages, which do not execute as fast as compiled languages. Why is that the case? The main advantage of AI over humans is in computing speed, and with real world AI applications today handling big data, execution time will already increase considerably as it is. Why aren't compiled languages preferred for this very reason?
+
+Sure, the key argument and strength going for Python is its vast range of third party libraries available for AI, like scikit etc. but such communities can take root and grow anywhere under the right circumstances. Why did this community end up growing around Python and not a faster, equally common compiled language like C++, C# or Java?
+"
+"['reinforcement-learning', 'terminology', 'definitions', 'intelligent-agent']"," Title: What is an agent in Artificial Intelligence?Body: While studying artificial intelligence, I have often encountered the term ""agent"" (often autonomous, intelligent). For instance, in fields such as Reinforcement Learning, Multi-Agent Systems, Game Theory, Markov Decision Processes.
+
+In an intuitive sense, it is clear to me what an agent is; I was wondering whether in AI it had a rigorous definition, perhaps expressed in mathematical language, and shared by the various AI-related fields.
+
+What is an agent in Artificial Intelligence?
+"
+"['reinforcement-learning', 'python', 'tensorflow', 'q-learning', 'dqn']"," Title: Problem over DQN Algorithm not converging on snakeBody: I'm using a DQN Algorithm to play Snake.
+
+The input of the neural network is a stack of 4 images taken from the game, each 80x80.
+
+The output is an array of 4 values, one for every direction.
+
+The problem is that the program does not converge, and I have a lot of doubts about the replay function, where I train the neural network over a batch of 32 events.
+
+That's the snippet:
+
+
+
+def replay(self, batch_size):
+
+ minibatch = random.sample(self.memory, batch_size)
+
+ for state, action, reward, next_state, done in minibatch:
+
+ target = reward
+
+ if not done:
+ target = (reward + self.gamma *
+ np.amax(self.model.predict(next_state)[0]))
+ target_f = self.model.predict(state)
+ target_f[0][action] = target
+ self.model.fit(state, target_f, epochs=1, verbose=0)
+
+ if self.epsilon > self.epsilon_min:
+            self.epsilon *= self.epsilon_decay
+
+
+The rewards are:
+
+
+- +1 for eating an apple
+- 0 for doing a movement without dying
+- -1000 for hitting a wall or the snake hitting itself
+
+"
+['convolutional-neural-networks']," Title: Convolutional Neural Networks for different-sized Source and TargetBody: CNNs are often used in one of the following scenarios:
+
+
+- A known-sized image is encoded to an intermediate format for later use
+- An intermediate or precursor format is decoded into a known-sized image
+- An image is converted into a same-size image
+
+
+(Usually 3 is done by sticking together 1 and 2.)
+
+Are there any papers dealing with convolutional techniques where the image sizes vary? Not only would the size of input X differ from that of input Y, but the size of input X may also differ from that of output Y. The total amount of variation can probably be constrained by the statistics of the dataset, but knowledge of input X does not grant a priori knowledge of the size of output Y.
+
+(Masking is an obvious solution, but I am hoping for something more elegant if research already exists. The problem domain need not be images.)
+"
+"['machine-learning', 'algorithm']"," Title: How to approach a problem with infinite solutionsBody: Think Angry Birds kind of game. You need to hit a target at some point by adjusting angle and power. There is infinite number of parabolas that will solve this problem.
+
+My problem is not exactly that, but similar: it also has an infinite number of solutions. Could anyone please suggest how I should approach this kind of problem using Machine Learning?
+"
+"['deep-learning', 'convolutional-neural-networks']"," Title: How High and Low frequency filters effect activation in the next layer?Body: Generally, we come across terms such as High Frequency and Low frequency filters in Convolutional Neural Networks (CNN). In regards to this highlighted statement, in 'S1' section of this paper by Jason Yosinski (ref 1),
I thought, in order for high and low-frequency filters to produce a similar effect, weight of low-frequency filters should be greater than high-frequency filters. I would like to understand why I am wrong and I will be grateful if anyone can elaborate about High and Low frequency filters in CNN, in general, or in this context. Thank you.
+
+Ref 1: Yosinski, Jason, et al. ""Understanding neural networks through deep visualization."" arXiv preprint arXiv:1506.06579 (2015).
+"
+"['machine-learning', 'convolutional-neural-networks', 'supervised-learning']"," Title: How can I use 1-channel images as input to a CNN?Body: I need to develop a convolutional neural network whose inputs are 1-channel images, but I dont know how to do it, given that most libraries use 3 channel images. Should I convert my images to RGB? Is there any way to implement a CNN that receive as input 1-channel images?
+"
+"['machine-learning', 'deep-learning', 'applications', 'image-processing']"," Title: How do I determine whether a truck is inside its lane?Body: I have a bunch of images from different trucks passing the road. Here is an example.
+
+The truck needs to be at a certain distance from the border of the lane. Some of the trucks are way close to the border (that you can see on the shoulder of the road).
+I want to find a way to measure the distance between the truck and the border of the lane and, more importantly, to detect whether a truck is inside its lane.
+I would like to solve this problem by training a deep learning-based classifier or image processing techniques. Painting the ground is also possible if I can train a classification algorithm with painted images.
+"
+"['terminology', 'comparison', 'knowledge-representation', 'automated-reasoning']"," Title: What is the difference between Knowledge Representation and Automated Reasoning?Body: Knowledge Representation and Automated Reasoning are two AI subfields which seem to have something to do with reasoning.
+However, I can't find any information online about their relationship. Are they synonyms? Is KR a subfield of AR? What's the difference, if any?
+
+To expand further, I believe representing knowledge is an essential part of ""reasoning"": how can you reason without a proper representation of the concepts you want to manipulate?
+At the same time, reasoning seems to be an essential part of KR (we build Knowledge Bases in order to build computer programs which are able to make inferences from them).
+
+So it seems to me that they are the same field, or at least deeply interrelated, but nobody on the internet seems to explicitly say that; furthermore, in this question, they are mentioned separately.
+
+Another point of ambiguity is that the wikipedia page of KRR mentions reasoning and automated reasoning as a part of KR; it even lists ""automated theorem proving"" (a classical application of AR) as an application of KR.
+But at the same time, we have a separate AR page which does not mention KR at all.
+"
+"['neural-networks', 'deep-learning', 'data-preprocessing', 'normalisation']"," Title: Could the normalisation of the inputs make the neural network insensitive to changes in the inputs?Body: When using neural networks (NNs), we often normalized the inputs. I think this is done to equally capture the changes in any input feature, that is, if any feature takes huge values and other features take small values, we don't want the NN not to be able to "see" the change in the smaller value.
+However, what if we cause the NN to become insensitive to the input, that is, the NN is not able to identify changes in the input because the changes are too small?
+"
+"['deep-learning', 'object-detection', 'semi-supervised-learning']"," Title: How do I locate a specific object in an image?Body: Some pictures contain an elephant, others don't. I know which of the pictures contain the elephant, but I don't know where it is or how does it look like.
+
+How do I make a neural network which locates the elephant on a picture if it contains one? There are no pictures with more than one elephant.
+"
+['neural-networks']," Title: How to create neural network that predicates result of exam?Body: Actually, I am ""fresh-water"", and I've never known what is neural network. Now I am trying understand how to design simple neuronetwork for this problem:
+
+I'd like to make up such neural network that after learning it could predicate a mark of passed exam (for example, math). There is such factors that influence on a mark:
+
+
+- Chosen topic (integral, derivative, series)
+- Performance (low, medium, high)
+- Does a student work? (Yes, No, flexible schedule)
+- Has the student ever gotten through an additional course? (Yes, No)
+
+
+The output is a mark (A, B, C, D, E, F).
+I don't know whether I should add a few layers between the inputs and the output.
+
+Moreover, I have few results from past years:
+
+
+- (integral, low, Yes, No, E)
+- (integral, medium, Yes, Yes, B)
+- (series, high, No, Yes, A)
+and so on. What else do I need to know to design this NN?
+
+"
+"['neural-networks', 'reinforcement-learning', 'models']"," Title: Designing state representation for board gameBody: I am trying to write self-play RL (NN + MCTS http://web.stanford.edu/~surag/posts/alphazero.html) to ""solve"" a board game. However, I got stuck in designing boardgame same (input layer for NN).
+
+1) What would be the best way to represent each cell, if there are ~10-100 cells in a game, each of which could be occupied by any playing card? Should I use one-hot encoding and get 52 nodes for a single cell ([0, 0, 1, ..., 0]), or just divide card_id by the total number of cards and get a single node for each cell ([0.0576...])?
+
+2) Is it good or bad practice to help the NN by adding an additional input that could be derived from the other nodes? For instance, imagine a game where whoever has the most red cards wins. The input is 10 cards, and I am adding a new input node (the number of red cards) to emphasize it. Would that lead to a positive result, or is doing something like that bad?
+
+3) Would it help to reduce the number of illegal moves and increase the performance of NN by creating additional input stating which cards are available now and which are not?
+"
+"['machine-learning', 'natural-language-processing', 'classification', 'word2vec']"," Title: How do I classify strings with possibly no meaning?Body: I am quite new to text classification.
+
+Using EAST text detection model, I get multiple strings that aren't words and most often have no meaning. For example, IDs, brand names, etc. I would like to classify them into two groups. Which models work the best and how should I preprocess the strings? I wanted to use Word2Vec, but I think it only works with real words and not with arbitrary strings.
+"
+"['classification', 'word-embedding', 'online-learning', 'incremental-learning']"," Title: How does FastText support online learning?Body: I'm using FastText pre-trained-embedding for tackling a classification task, but I saw it supports also online training (incremental training) for adding domain-specific corpus.
+
+How does it work?
+
+As far as I know, starting from the ""model.bin"" file it retrains the model only on the new corpus updating the old word-vectors, is it right?
+"
+"['bayesian-networks', 'random-variable', 'conditional-probability']"," Title: What is the point of converting conditional probability to factor for Variable Elimination?Body:
+
+I have this slide from my AI class on using a Bayes network to compute a conditional probability. I don't really understand the point of converting the conditional probabilities to factors (besides the fact that it looks weird to marginalize or multiply variables in a CP). It seems kind of arbitrary. Is there some benefit I'm not noticing?
+"
+"['machine-learning', 'prediction']"," Title: Is predicting day of week straight forward?Body: I am using python and Xgboost. I have features: activity and location and time stamps of when the activity occurred.
+
+I want to predict the day of the week. Is this straightforward, i.e., y = day of week, X = {activity, location}, or am I being naive and do I need to do fancy time series things? I'd also like to predict the time of day.
+"
+"['computer-vision', 'transfer-learning', 'pretrained-models']"," Title: Are there any better visual models for transfer rather than ImageNet?Body: Similar to the recent pushes in Pretrained Language Models (BERT, GPT2, XLNet) I was wondering if such a thrust exists in Computer Vision?
+
+From my understanding, it seems the community has converged on and settled for ImageNet-trained classifiers as the ""Pretrained Visual Model"". But relative to the data we have access to, shouldn't there exist something stronger? Also, classification as a sole task has its own restrictions on domain transfer (based on assumptions about how these loss manifolds look).
+
+Are there any better visual models for transfer than the ImageNet successes? If not, why? Is it because of the domain's fluidity in shape, resolution, etc., in comparison to text?
+"
+"['deep-learning', 'prioritized-sweeping']"," Title: Clarifications on ""Prioritized Experience Replay"" (Deepmind, 2015)Body: Paper link : Prioritized Experience Replay
+
+About the blind cliffwalk setup:
+
+
+- Why is the number of possible action sequences equal to 2^N? I can't think of more than (N + 1) sequences, where one sequence is the sequence of all right actions and the other N sequences are due to wrong actions at each state.
+
+
+Generally for prioritized experience replay:
+
+
+- The replay memory consists of some transitions which are repeated. In the priority queue, I feel that there should only be a single priority for each transition to speed up learning. Is there any advantage to having priority values for each repeated instance of the transition?
+
+
+Edit for 2nd question:
+
+Consider algorithm 1 on page 5 of the article.
+
+
+
+Let's consider one of the transitions to be repeated in the replay memory. If one of them is sampled (line 9) and the priority updated (line 12), will the priority be updated on the other instance of the same transition?
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks', 'geometric-deep-learning']"," Title: What is the purpose and benefit of applying CNN to a graph?Body: I'm new to the graph convolution network. I wonder what is the main purpose of applying data with graph structure to CNN?
+"
+"['neural-networks', 'machine-learning', 'datasets', 'data-science', 'data-mining']"," Title: How do I know if my dataset is ready for a machine learning model?Body: I am new in this area of Machine Learning and Neural Networks. Currently, I'm taking some courses on Udemy and reading a book about it, but I still have one big question regarding data pre-processing.
+
+In all of those Udemy lessons, people always use a perfect dataset that is ready to be fed into a model. So all you have to do is run it.
+
+How do I know if my dataset is ready for a model? What do I have to do to make it ready? Which evaluations?
+
+I have already had a few statistics classes in college, and I learned a lot about correlation matrices, autocorrelation functions and their lags, etc., but I haven't yet seen anyone explain how I can evaluate my data and then proceed to implement a model to solve my problem.
+
+If anyone could point me a direction, give me some material, show me where I can learn this, anything, it would be really helpful!
+"
+"['machine-learning', 'deep-learning', 'graphs', 'geometric-deep-learning', 'graph-theory']"," Title: What are the benefits of using the state information that maintains the graph structure?Body: When you applying a graph structured data to the graph convolution network, what are the benefits of using the state information that maintains the graph structure?
+"
+"['search', 'proofs', 'heuristics', 'a-star', 'admissible-heuristic']"," Title: Understanding the proof that A* search is optimalBody: I don't understand the proof that $A^*$ is optimal.
+
+The proof is by contradiction:
+
+
+ Assume $A^*$ returns $p$ but there exists a $p'$ that is cheaper. When $p$ is chosen from the frontier, assume $p''$ (Which is part of the path $p'$) is chosen from the frontier. Since $p$ was chosen before $p''$, then we have $\text{cost}(p) + \text{heuristic}(p) \leq \text{cost}(p'') + \text{heuristic}(p'')$. $p$ ends at goal, therefore the $\text{heuristic}(p) = 0$. Therefore $\text{cost}(p) \leq \text{cost}(p'') + \text{heuristic}(p'') \leq \text{cost}(p')$ because heuristics are admissible. Therefore we have a contradiction.
+
+
+I am confused: can't we also assume there's a cheaper path that's in a frontier closer to the start node than $p$? Or is it part of the proof that this is not possible, because $A^*$ would have examined that path, since it is like BFS with lowest-cost search, so, if there's a cheaper path, it will be at a further frontier?
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'fully-convolutional-networks']"," Title: Why can a fully convolutional network accept images of any size?Body: On this article, it says that:
+
+
+ The UNET was developed by Olaf Ronneberger et al. for Bio Medical Image Segmentation. The architecture contains two paths. First path is the contraction path (also called as the encoder) which is used to capture the context in the image. The encoder is just a traditional stack of convolutional and max pooling layers. The second path is the symmetric expanding path (also called as the decoder) which is used to enable precise localization using transposed convolutions. Thus it is an end-to-end fully convolutional network (FCN), i.e. it only contains Convolutional layers and does not contain any Dense layer because of which it can accept image of any size.
+
+
+What I don't understand is how an FCN can accept images of any size, while an ordinary object detector, such as YOLO with a dense layer at the very end, cannot accept images of any size.
+
+So, why can a fully convolutional network accept images of any size?
+"
+"['genetic-algorithms', 'optimization', 'genetic-programming', 'artificial-life']"," Title: Can we use the Tierra approach to optimize machine code?Body: Thomas Ray's Tierra is a computer program which simulates life.
+
+In the linked paper, he argues how this simulation may have real-world applications, showing how his digital organisms (computer programs) evolve in an interesting way: they develop novel ways of replicating themselves and become faster at it (he argues that the evolved organisms employ an algorithm which is 5 times faster than the original one he wrote).
+
+Tierra's approach is different from standard GAs:
+
+
+- While in GAs usually there is a set of genomes manipulated, copied and mutated by the program, in Tierra everything is done by the programs themselves: they self-replicate.
+- There is no explicit fitness function: instead, digital organisms compete for energy resources (CPU time) and space resources (memory).
+- Organisms which take a long time to replicate reproduce less frequently, and organisms who create many errors are penalized (they die out faster).
+- Tierran machine language is extremely small: operands included, it only has 32 instructions. Oftentimes, so called RISC instruction sets have a limited set of opcodes, but if you consider the operands, you get billions of possible instructions.
+- Consequentially, Tierran code is less brittle, and you can mutate it without breaking the code. In contrast, usually, if you mutate randomly some machine code, you get a broken program.
+
+
+I was wondering if we could use this approach to optimize machine code. For instance, let's assume we have some assembly-like program which computes a certain function $f$. We could link reproduction time with efficiently computing $f$, and life-span with correctly computing it. This could motivate programs to find novel and faster ways to compute $f$.
+
+Has anything similar ever been tried? Could it work? Where should I look into?
+"
+"['neural-networks', 'machine-learning', 'reference-request']"," Title: I am looking for research related to the use of AI and ML in automotive and aeronautics safety designBody: I am specifically interested in the topic of edge cases.
+I have the presentation Edge Cases and Autonomous Vehicle Safety as a starting point, in particular on page 6:
+
+Machine Learning (inductive training)
+
+- No design insight
+
+- Generally inscrutable; prone to gaming and brittleness.
+
+
+
+I'd like to find more hard data on how ML may do very well until an edge case is encountered.
+"
+"['reinforcement-learning', 'comparison', 'policies', 'stationary-policy']"," Title: What is the difference between a stationary and a non-stationary policy?Body: In reinforcement learning, there are deterministic and non-deterministic (or stochastic) policies, but there are also stationary and non-stationary policies.
+
+What is the difference between a stationary and a non-stationary policy? How do you formalize both? Which problems (or environments) require a stationary policy as opposed to a non-stationary one (and vice-versa)?
+"
+"['machine-learning', 'neuromorphic-engineering']"," Title: Where could I find information on the learning methods used in Neurogrid?Body: I have been searching for more than one week which learning methods were used in Neurogrid.
+
+But I only found descriptions of its architecture (chips, circuits, analog and/or digital components, performance results), everything except a clue about how it updates the weights.
+
+In my opinion, I think that it cannot be gradient descent (with back-propagation), as the topology of the neurons in a chip, for example in the neurocore of Neurogrid, is a mesh or grid.
+
+Do you know where I could find this kind of information?
+
+
+"
+"['terminology', 'turing-test', 'google']"," Title: Would you term Google's Captchas as Turing Test?Body: Quoting from Wikipedia page on Turing Test
+
+
+ The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen...
+
+
+The definition made me wonder whether we would term the captchas that show up on Google as a Turing test, where the user, bot or not, is given textual instructions asking them to select the images containing a specific object out of a given set of images.
+"
+"['convolutional-neural-networks', 'comparison', 'graphs', 'geometric-deep-learning']"," Title: What are the differences between network analysis and geometric deep learning on graphs?Body: Both of them deal with data of graph structure like a network community. Is there a big difference there?
+"
+"['machine-learning', 'convolutional-neural-networks', 'relu']"," Title: How are exploding numbers in a forward pass of a CNN combated?Body: Take AlexNet for example:
+
+
+
+In this case, only the ReLU activation function is used. Since ReLU cannot saturate, the activations can instead explode, as in the following example:
+
+Say I have a weight matrix of [-1,-2,3,4] and inputs of [ReLU(4), ReLU(5), ReLU(-2), ReLU(-3)]. The resulting matrix will have large numbers for the inputs ReLU(4) and ReLU(5), and 0 for ReLU(-2) and ReLU(-3). With even just a few more layers, the numbers quickly either explode or become 0.
+
+How is this typically combated? How do you keep these numbers close to 0? I understand you can subtract the mean at the end of each layer, but for a layer whose outputs are already in the millions, subtracting the mean will still leave values in the thousands.
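+
+As a minimal, self-contained illustration of what I am describing (all layer sizes and numbers here are arbitrary), this NumPy sketch shows the growth with depth, and that standardizing each layer's outputs (which, as I understand it, is the idea behind batch normalization and variance-preserving initializations such as He init) keeps the values near zero. Is something along these lines the standard remedy?
+
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+def forward(x, layers=10, normalize=False):
+    for _ in range(layers):
+        w = rng.normal(size=(x.shape[1], 100))  # plain unit-variance weights
+        x = np.maximum(0.0, x @ w)              # ReLU
+        if normalize:
+            x = (x - x.mean()) / (x.std() + 1e-8)  # standardize this layer's outputs
+    return np.abs(x).mean()
+
+x = rng.normal(size=(1, 100))
+print(forward(x))                   # grows by orders of magnitude with depth
+print(forward(x, normalize=True))   # stays on the order of 1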
+"
+"['machine-learning', 'graphs', 'geometric-deep-learning', 'graph-theory']"," Title: Random graph as input in geometric deep learning on time-varying graphBody: I want to create a framework that allows GDL to be applied to time-varying graphs.
+I came up with the Erdos-Renyi model as an example of a time-varying graph.
+
+GDL for graphs takes node information as input and measures accuracy as the correspondence with the correct data. However, how should I deal with time-varying data, even random data, and ground-truth data for such graphs? Or is there another, better way?
+And is it nonsense to use pseudo-coordinates as input, as is done in the traditional approach to time-invariant graphs?
+Also, an application of time-varying graphs has been anomaly detection in financial networks. How does this work specifically? Also, please let me know if there are other application examples.
+"
+"['neural-networks', 'machine-learning', 'gradient-descent']"," Title: In NN, as iterations of Gradient descent increases, the accuracy of Test/CV set decreases. how can i resolve this?Body: As mentioned in the title I'm using 300 Dataset example with 500 feature as an input.
+
+As I'm training the dataset, I found something peculiar. Please look at the data shown below.
+
+
+ Iteration 5000 | Cost: 2.084241e-01
+
+ Training Set Accuracy: 100.000000
+
+ CV Set Accuracy: 85.000000
+
+ Test Set Accuracy: 97.500000
+
+ Iteration 3000 | Cost: 2.084241e-01
+
+ Training Set Accuracy: 98.958333
+
+ CV Set Accuracy: 85.000000
+
+ Test Set Accuracy: 97.500000
+
+ Iteration 1000 | Cost: 4.017322e-01
+
+ Training Set Accuracy: 96.875000
+
+ CV Set Accuracy: 85.000000
+
+ Test Set Accuracy: 97.500000
+
+ Iteration 500 | Cost: 5.515852e-01
+
+ Training Set Accuracy: 95.486111
+
+ CV Set Accuracy: 90.000000
+
+ Test Set Accuracy: 97.500000
+
+ Iteration 100 | Cost: 8.413299e-01
+
+ Training Set Accuracy: 90.625000
+
+ CV Set Accuracy: 95.000000
+
+ Test Set Accuracy: 97.500000
+
+ Iteration 50 | Cost: 8.483802e-01
+
+ Training Set Accuracy: 90.277778
+
+ CV Set Accuracy: 95.000000
+
+ Test Set Accuracy: 97.500000
+
+
+The trend is that, as the number of iterations increases (and the cost decreases), the training set accuracy increases as expected, but the CV/Test set accuracy decreases. My initial thought is that this has to do with a bias/variance issue, but I really can't buy it.
+
+Anyone know what this entails? Or any reference?
+"
+"['planning', 'action-recognition', 'pddl']"," Title: Can PDDL be utilized for action recognition?Body: The Planning Domain Definition Language (PDDL) is known for its capabilities of symbolic planning in the state space. A solver will find a sequence of steps to bring the system from a start state to the goal state. A common example of this is the monkey-and-banana problem. At first, the monkey sits on the ground and, after doing some actions in the scene, the monkey will have reached the banana.
+The way a PDDL planner works is by analyzing the preconditions and effects of each primitive action. This will answer the question of what happens if a certain action is executed.
+However, will a PDDL domain description work the other way around as well, not for planning, but for action recognition?
+I've searched in the literature to get an answer, but all the papers I've found are describing PDDL only as a planning paradigm.
+My idea is to use the given precondition and effects as a parser to identify what the monkey is doing and not what he should do. That means, in the example, the robot ape knows by itself how to reach the banana and the AI system has to monitor the actions. The task is to identify a PDDL action that fits the action by the monkey.
+"
+"['reinforcement-learning', 'terminology', 'comparison', 'control-theory']"," Title: Is there any difference between a control and an action in reinforcement learning?Body: There are reinforcement learning papers (e.g. Metacontrol for Adaptive Imagination-Based Optimization) that use (apparently, interchangeably) the term control or action to refer to the effect of the agent on the environment at each time step.
+
+Is there any difference between the terms control or action or are they (always) used interchangeably? If there is a difference, when is one term used as opposed to the other?
+
+The term control likely comes from the field of optimal control theory, which is related to reinforcement learning.
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks', 'generative-adversarial-networks', 'generative-model']"," Title: What parameters can be tweaked to avoid a generator or discriminator loss collapsing to zero when training a DC-GAN?Body: Sometimes when I am training a DC-GAN on an image dataset, similar to the DC-GAN PyTorch example (https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html), either the Generator or Discriminator will get stuck in a large value while the other goes to zero. How should I interpret what is going on right after iteration 1500 in the example loss function image shown below
? Is this an example of mode collapse? Any recommendations for how to make the training more stable? I have tried reducing the learning rate of the Adam optimizer with varying degrees of success. Thanks!
+"
+"['unsupervised-learning', 'prediction', 'reference-request', 'markov-chain', 'hidden-markov-model']"," Title: Predicting Hot Categories In a Reference ManagerBody: Reference managers like Zotero or Mendeley allow researchers to categorize papers into hierarchical categories called collections. The User navigates through a listing of these collections when filing a new item. The retrieval time grows something like the logarithm of the number of collections; looking for a collection can quickly become a nuisance.
+
+
+
+ Fig 1. A top level list of collections in the Zotero reference manager
+
+One way to reduce navigation time is to allow users to search collections by name. A complementary solution is to provide a view of the currently ""hot"" collections. The user may interact with a list of suggested collections, or receive relevant completions when typing into a collections search bar.
+
+This raises a basic learning problem:
+
+
+ Let $K = K_1, \ \dots, \ K_m$ be the sequence of collections the user has visited (possibly augmented with visit times). Let $H_m$ be a set of $n$ collections the user is likely to visit next. How can we construct $H_m$?
+
+
+A technique that does this might exploit a few important features of this domain:
+
+
+- Project Clusters: Users jump between collections relevant to their current projects
+- Collection Aging: Users tend to focus on new collections, and forget older ones
+- Retrieval Cost: There's a tangible cost to the user (time, distraction) when navigating collections; this applies to the reduced view (the technique might keep $n$ as small as possible)
+
+
+Two ideas so far
+
+LIFO Cache
+
+
+ Reduce $K_{m-1}, K_{m-2},\ \dots$ into the first $n$ unique entries which do not match $K_m$.
+
+
+This heuristic is very simple to implement and requires no learning. It encompasses clusters and aging given suitably large $n$. But with large $n$ it incurs a retrieval cost of its own.
+
+Markov Chain/Hidden Markov Model
+
+
+ Use $K$ to train a MC or HMM. Build $H_m$ using estimates from this model.
+
+
+The simplest version is an order $k\ $ MC transition matrix built using k-gram statistics of $K_i$. This might be sensitive to project clusters, but I don't think it will recognize new collections without a hard coded aging heuristic.
+
+I'm not clear on how HMMs would be trained here, and I'm not very taken with the $k$-gram MC approach. My next task is to read about MCs/HMMs in context of suggestion systems.
+
+Other Models
+
+I am brand new to suggestion systems. Reading leads are quite welcome!
+
+I would be especially excited about unsupervised techniques, and neural network techniques I could train on a GPU. Apart from improving Zotero, I would like this problem to give me an opportunity to learn about cutting edge techniques.
+
+Valuable Goals
+
+An ideal technique would cast light on questions like
+
+
+- How should we measure the performance of this kind of system? (I suspect cache miss rate is a good metric, as well as the ratio between cache miss rate and cache size)
+- How should we translate these human-centric performance metrics into human independent objective functions for learning?
+- How much better than LIFO can we theoretically expect to do with a more sophisticated technique (say, in terms of cache size for a given cache miss rate)?
+- How can a technique learn patterns like clusters and aging without hand tuned objectives?
+
+
+I am interested in learning theory and building an implementation, so resources with publicly available code would be be preferable. Apart from potentially being overkill for the problem, I would not mind if the final model depends on a GPU.
+
+Please forgive me for the long and vague question. I wish I had done more reading before posing this question, but I feel a bit stuck! I hope to get unstuck with some good reading resources. Thanks!
+"
+"['reinforcement-learning', 'q-learning', 'monte-carlo-methods']"," Title: How to stop evaluation phase in reinforcement learning with epsilon-greedy Monte Carlo agent?Body: I have implemented an epsilon-greedy Monte Carlo reinforcement learning agent like suggested in Sutton and Barto's RL book (page 101). As far as I understood epsilon-greedy agents so far, the evaluation has to stop at some point to exploit the gained knowledge.
+
+I do not understand how to stop the evaluation here, because the policy update is linked to epsilon. So just setting epsilon equal to zero at some point does not seem to make sense to me.
+"
+"['reinforcement-learning', 'function-approximation', 'features', 'tile-coding', 'coarse-coding']"," Title: Does coarse coding with radial basis function generate fewer features?Body: I am learning about discretization of the state space when applying reinforcement learning to continuous state space. In this video the instructor, at 2:02, the instructor says that one benefit of this approach (radial basis functions over tile coding) is that "it drastically reduces the number of features". I am not able to deduce this in the case of a simple 2D continuous state space.
+Suppose we are dealing with 2D continuous state space, so any state is a pair $(x,y)$ in the Cartesian space. If we use Tile Coding and select $n$ tiles, the resulting encoding will have $2n$ features, consisting of $n$ discrete valued pairs $(u_1, v_1) \dots (u_n, v_n)$ representing the approximate position of $(x,y)$ in the frames of the $n$ 2-D tiles. If instead we use $m$ 2-D circles and encode using the distance of $(x,y)$ from the center of each circle, we have $m$ (continuous) features.
+Is there a reason to assume that $m < 2n$?
+Furthermore, the $m$-dimensional feature vector will again need discretization, so it is unclear to me how this approach uses fewer features.
+"
+"['reinforcement-learning', 'comparison', 'q-learning', 'return', 'expectation']"," Title: What is the difference between return and expected return?Body: At a time step $t$, for a state $S_{t}$, the return is defined as the discounted cumulative reward from that time step $t$.
+
+If an agent is following a policy (which in itself is a probability distribution over choosing a next state $S_{t+1}$ from $S_{t}$), the agent wants to find the value at $S_{t}$ by calculating a sort of ""weighted average"" of all the returns from $S_{t}$. This is called the expected return.
+
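+To make my understanding concrete, here is how I would write it down (assuming the standard notation from Sutton and Barto): the return from time step $t$ is
+
+$$G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots = \sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1},$$
+
+which is a random quantity observed along one trajectory, while the expected return (the value) averages $G_t$ over all trajectories induced by the policy $\pi$ and the environment dynamics:
+
+$$v_{\pi}(s) = \mathbb{E}_{\pi}\left[G_t \mid S_t = s\right].$$
+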
+Is my understanding correct?
+"
+"['machine-learning', 'deep-learning', 'image-segmentation']"," Title: Which model to use when selecting objects of interest?Body: I have a set of polygons for each image. Those polygons consist of four $x$ and $y$ coordinates. For each image, I need to extract the ones of interest. This could be formulated as an Image Segmentation task where, for example I want to extract the objects of interest, here: cars.
+
+
+
+But since I already get the polygons through a different part of my pipeline I would like to create a simpler machine learning model. The input will not be the image but only the coordinates of the polygons.
+In this model each sample should consist of multiple polygons (those can vary in number) and the model should output the ones of interest.
+
+In my mind, I formulated the problem as follows:
+
+
+- The polygons are the features. Problem: Samples will have varying number of features.
+- The output will consist of the indices of the ""features"" (polygons) I am interested in.
+
+
+First, I created a decision tree and classified each coordinate as $0$ (not interested in) or $1$ (of interest). But, by doing this, I don't consider the other coordinates that belong to the image. The information of the surrounding is lost.
+
+Does someone have an idea of how to model this problem without using Image Segmentation?
+"
+"['tensorflow', 'recurrent-neural-networks']"," Title: Dynamic frames processing with CNN LSTM combination or otherwiseBody: I have a unique implementation where I have to process videos with dynamic frame rates (that is the number of frames is different for each video in a batch). I am stacking all the frames in a single tensor and processing the same from there on. This works fine with Conv2D layer but creating a 2D tensor (batch_size, features) by a flattening operation this has to be fed to a Dense layer. I can't find a suitable way to implement this.
+
+For more information on why it should be like this, kindly explore this link. Instead of the MNIST images, I have multiple videos in a single bag, each with a variable number of frames.
+"
+"['convolutional-neural-networks', 'reference-request', 'image-segmentation', 'u-net', 'model-request']"," Title: What are some neural network models that can use auxiliary info during training for image segmentation?Body: What are some deep learning models that can use supplementary information other than RGB channels for image segmentation?
+
+For example, imagine a poorly shot image of a river (blue) that shows a gap, and the supplementary information is detailed flow directions (arrows), which helps to show the river's true shape (no gap in reality). To get the river shape, most image segmentation models I see, such as U-Net, only use RGB channels.
+Are there any neural network models that can use this kind of auxiliary information along with RGB channels during training for the image segmentation task?
+"
+"['machine-learning', 'convolutional-neural-networks', 'relu', 'bias-variance-tradeoff']"," Title: How is the bias caused by a max pooling layer overcome?Body: I have constructed a CNN that utilizes max-pooling layers. I have found with these layers that, should I remove them, my network performs ideally with every output and gradient at each layer having a variance close to 1. However, if they are included, the variance skyrockets.
+
+This makes sense, of course, as a max-pooling layer takes the maximum of an area, which must incur a positive bias as larger numbers are chosen.
+
+I would just like to know what methods are typically used to combat this.
+"
+"['deep-learning', 'convolutional-neural-networks', 'python', 'tensorflow', 'object-detection']"," Title: How to voxelize multiple frames at the time and append them together?Body: I'm trying to implement this approach for object detection and tracking.
+
+In this approach, the first step is to voxelize each frame to construct a 3D tensor, and the second step is to append multiple voxels at a time along a new axis to create a 4D tensor.
+
+What I want to understand is how to voxelize multiple frames at a time and append them together.
+"
+['neural-networks']," Title: Is there any useful source on High Bias vs High variance issue on Neural Network?Body: I've been struggling to analyize my NN model. I've studied through andrew ng's course, but there are some results that cannot be explained by the course. Is there any useful source on High Bias vs High variance issue on NN?
+"
+"['deep-learning', 'reinforcement-learning', 'python', 'dqn']"," Title: Why does the DQN not converge when the start or goal states can change dynamically?Body: I'm trying to apply a DQN to a stochastic environment, but I'm having trouble getting it to converge.
+I found some similar questions asked here, but no solutions yet.
+I can easily get the DQN to converge on a static environment, but I am having trouble with a dynamic environment where the end is not given.
+Example: I have made a really simple model of the Frozen Lake (without any holes) - simply navigation from A to B. This navigation works fine when A and B are always the same, but when I shuffle the position of A or B for each session, the DQN cannot converge properly.
+I am using the grid (3x3, 4x4 sizes) as input neurons. Each with "0" value. I assign the current position "0.5" and the end position "1". 4x4 grid gives us 16 input neurons. Example of 3x3 grid:
+ 0.5 0 0
+ 0 0 0
+ 0 0 1
+
+I have a few questions in this regard:
+
+- When training the DQN, how do I apply Q-values? (Or do I really need to? I'm not sure how to correctly "reward" the network. I'm not using any adversarial network or target network at this point.)
+
+- I train the network using only a short replay memory of the last move, or the last N moves that led to success. Is this the right way to approach this?
+
+- I use Keras, and am simply training the network every time it does something right - and ignoring failed attempts. - But is this anywhere near the right approach?
+
+- Am I missing something else?
+
+
+Perhaps I should note that my math skills are not that strong, but I try my best.
+Any input is appreciated.
+"
+"['machine-learning', 'convolutional-neural-networks', 'datasets', 'geometric-deep-learning']"," Title: Examples of time-varying graph-structured data in real worldBody: I'm looking for examples of time-varying graph-structured data for time-varying graph CNNs. First, I came up with the idea of infection network. Is there anything more? If possible, I want data that can be easily obtained online.
+"
+"['reference-request', 'tensorflow', 'recurrent-neural-networks', 'speech-recognition', 'ctc-loss']"," Title: How does the CTC loss work?Body: I am trying to implement CTC loss in TensorFlow, but their documentation is pretty limited. So I am not sure how to approach the problem. I found a good example in Theano.
+Are there any other resources that explain the CTC loss?
+I am also trying to understand how its forward-backward algorithm works and what the beam decoder in the case of the CTC loss is.
+"
+"['machine-learning', 'k-means', 'clustering']"," Title: How to compute the number of centroids for K-means clustering algorithm given minimal distance?Body: I need to cluster my points into unknown number of clusters, given the minimal Euclidean distance R between the two clusters. Any two clusters that are closer than this minimal distance should be merged and treated as one.
+
+I could implement a loop starting from the two clusters and going up until I observe the pair of clusters that are closer to each other than my minimal distance. The upper boundary of the loop is the number of points we need to cluster.
+
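+For concreteness, here is a rough Python sketch of that loop (the helper name, the use of scikit-learn's KMeans, and the defaults are my own assumptions; my actual code uses FAISS):
+
+import numpy as np
+from sklearn.cluster import KMeans
+from scipy.spatial.distance import pdist
+
+def cluster_with_min_distance(points, R, k_max=None):
+    # increase k until some pair of centroids gets closer than R,
+    # then keep the previous (still valid) clustering
+    k_max = k_max or len(points)
+    best = None
+    for k in range(1, k_max + 1):
+        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(points)
+        if k > 1 and pdist(km.cluster_centers_).min() < R:
+            break
+        best = km
+    return best
+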
+Are there any well-known algorithms or approaches to estimate the approximate number of centroids from the set of points and the required minimal distance between centroids?
+
+I am currently using FAISS under Python, but with the right idea I could also implement it in C myself.
+"
+"['machine-learning', 'classification', 'pattern-recognition']"," Title: Product Configuration based on user selection of features and other requirementsBody: Is this a scenario that would work well for a ML/Pattern Recognition Model or would it be easier/faster to just filter from a large DB.
+
+I am looking to create a system that will allow users to identify the appropriate product by specifying certain constraints and preferred features.
+
+There are millions of possible product configurations. Lets pretend it's boxes.
+
+Product Options:
+
+
+- Size (From 1mm up to 1m) in 1mm increments
+- Color: choice of 10 colors
+- Material: choice of 3, wood,metal, plastic
+
+
+Constraints:
+
+
+- Wood is only available in centimeter units
+- Red is only available in 500 mm and greater
+- Wood is the preferred material
+- Blue is the preferred color
+
+
+So, we have 30,000 (1000*10*3) possible options.
+Of those, many are not viable such as 533 mm-Red-Wood
+
+but these configurations similar to the request are possible.
+
+
+- 533 mm-Red-Plastic
+- 530 mm-Red-Wood
+- 540 mm-Red-Wood
+
+
+Notes:
+
+- Our current rules- and code-based tool can take anywhere from 0.5 to 2 minutes to identify the preferred configuration.
+- We can generate a list of all possible configs and whether they are valid or not.
+- We estimate 30,000,000 possible configs.
+- It takes around 0.5 seconds to validate a config, so with enough computing power we expect we could do 30M in a few days.
+"
+"['reinforcement-learning', 'q-learning', 'monte-carlo-tree-search', 'supervised-learning', 'alphazero']"," Title: Does AlphaZero use Q-Learning?Body: I was reading the AlphaZero paper Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm, and it seems they don't mention Q-Learning anywhere.
+
+So does AZ use Q-Learning on the results of self-play, or just Supervised Learning?
+
+If it's Supervised Learning, then why is it said that AZ uses Reinforcement Learning? Is the ""reinforcement"" part primarily a result of using Monte Carlo Tree Search?
+"
+"['neural-networks', 'convolutional-neural-networks', 'applications', 'geometric-deep-learning']"," Title: Why is graph convolution network in time-varying graphs useful for anomaly detection?Body: In this paper, the authors refer to the application of time-varying graphs as an open problem. And they say it will be useful for anomaly detection in financial networks, etc. But why is that useful?
+"
+"['recurrent-neural-networks', 'sequence-modeling']"," Title: In sequence-to-sequence, why is the output of the decoder used as its input?Body: The basic seq-2-seq model consists of 2 parts: a recurrent encoder that compresses a sequence to a vector and decoder that unrolls the vector into the output sequence:
+
+
+
+Why is the output, w, x, y, z, of the decoder used as its input? Shouldn't the hidden state of the RNN from the previous timesteps be enough?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'graphs', 'geometric-deep-learning']"," Title: Is there a neural network method for time-varying directed graphs?Body: I want to study NN for time-varying directed graphs. However, as this field has developed relatively recently, it is difficult to find new ways. So the question is, is there any NN that can handle such data?
+"
+"['machine-learning', 'recommender-system']"," Title: Reducing the Number of Training Samples for collaborative filtering in recommender systemsBody: I have the following problem: I am doing some research on the accuracy of recommender algorithms that are mostly used nowadays.
+
+So, one way to measure their performance is by checking how well they predict a certain value under different sizes of a given dataset, meaning, sparsity in a ratings matrix.
+
+I need to find a way to calculate the root mean square error (or MAE), or some other metric, versus the sparsity of the dataset. As an example, let's have a look at the picture below:
+
+
+
+You can see that it says:
+
+
+ “RMSE as a function of sparsity. 5000 ratings were removed from the training set(initially containing 80000 ratings) in every iteration. “
+
+
+I'm using Python and the Movielens dataset. Do you know how I can achieve this in the mentioned language? Is there any tool to do that?
+"
+"['neural-networks', 'recurrent-neural-networks', 'keras', 'long-short-term-memory', 'hyperparameter-optimization']"," Title: Why doesnt my lstm model for time series prediction improve after certain level of performance?Body: I created an lstm model which predicts multioutput sequeances. It takes variable length sequences as input. These sequences are padded with zero to obtain equal length. Note that the time series are not equally spaced but time stamp is added as predictor. The descrete time series predictors are normalized with mean and standard deviation and run through PCA, the categorical feature is one hot encoded and the ordinal features are integer encoded. So the feature set is a combination of dynamic and static features. The targets are also scaled between [-1,1]. The input layer goes through a masked layer and then to 2 stacked lstm layers and dense layer to predict the targets (see code below).
+
+Training actually starts well, but then the performance starts to saturate. It seems the network also focuses more on the 3rd output rather than the first two. This is seen in the validation curve of the third output, which follows the training curve perfectly. For the first 2 outputs, the network has a hard time predicting some peak values. I have been tuning the hyperparameters, but the validation error does not go below a certain value. The longer I train, the more the validation curve and training curve separate from each other, and overfitting occurs on the first 2 outputs. I tried all the standard initializations, and he_initialization seems to work best. When more data is added, there is a slight improvement in validation error, but it is not significant. When adding dropout, the validation error is lower than the training error due to the noise introduced by dropout in the forward pass, but there is no significant improvement. Since neural networks tend to converge close to where they are initialized, I was thinking my initialization is not good.
+
+I was wondering if anyone had any suggestions on how to improve the error of this model. I think I will be happy if I can get the validation error somewhere around 0.01.
+
+
+
+
+
+
+def masked_mse(y_true, y_pred):
+ mask = keras.backend.all(keras.backend.not_equal(y_true, 0.), axis=-1, keepdims=True)
+
+ y_true_ = tf.boolean_mask(y_true, mask)
+ y_pred_ = tf.boolean_mask(y_pred, mask)
+
+ return keras.backend.mean(keras.backend.square(y_pred_ - y_true_))
+
+def rmse(y_true, y_pred):
+ # find timesteps where mask values is not 0.0
+ mask = keras.backend.all(keras.backend.not_equal(y_true, 0.), axis=-1, keepdims=True)
+
+ y_true_ = tf.boolean_mask(y_true, mask)
+ y_pred_ = tf.boolean_mask(y_pred, mask)
+
+ return keras.backend.sqrt(keras.backend.mean(keras.backend.square(y_pred_ - y_true_)))
+
+hl1 = 125
+hl2 = 125
+window_len = 30
+n_features = 50
+batch_size = 128
+
+optimizer = keras.optimizers.Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0., amsgrad=False)
+dropout = 0.
+input_ = keras.layers.Input(
+ shape=(window_len, n_features)
+ )
+
+# masking is to make sure the model doesn't fit the zero paddings
+masking = keras.layers.Masking(mask_value=0.0)(input_)
+
+# hidden layer 1 with he_normal initializer.
+
+lstm_h1 = keras.layers.LSTM(hl1, dropout=dropout, kernel_initializer='he_normal',
+ return_sequences=True)(masking)
+
+# hidden layer 2
+lstm_h2 = keras.layers.LSTM(hl2, dropout=dropout, kernel_initializer='he_normal',
+ return_sequences=True)(lstm_h1)
+
+# dense output layers, each predicting a single output
+out1 = keras.layers.Dense(1, activation='linear', name='out1')(lstm_h2)
+
+out2 = keras.layers.Dense(1, activation='linear', name='out2')(lstm_h2)
+
+out3 = keras.layers.Dense(1, activation='linear', name='out3')(lstm_h2)
+
+model = keras.models.Model(inputs=input_, outputs=[out1, out2, out3])
+
+pi = [rmse]
+
+n_gpus = len(get_available_gpus())
+
+if n_gpus > 1:
+    print(""Using Multiple GPU's ..."")
+    parallel_model = multi_gpu_model(model, gpus=n_gpus)
+else:
+    print(""Using Single GPU ..."")
+    parallel_model = model
+
+parallel_model.compile(loss=masked_mse, optimizer=optimizer, metrics=pi)
+parallel_model.summary()
+
+checkpoint = keras.callbacks.ModelCheckpoint(
+    file_name+"".hdf5"", monitor='val_loss', verbose=1,
+    save_best_only=True, mode='min', period=10,
+    save_weights_only=True)
+
+save_history = keras.callbacks.CSVLogger(file_name+"".csv"", append=True)
+
+callbacks_list = [checkpoint, save_history]
+
+y_train_reshaped = list(reshape_test(window_len, y_train))
+parallel_model.fit(
+    x_train,
+    {
+        'out1': y_train_reshaped[0],
+        'out2': y_train_reshaped[1],
+        'out3': y_train_reshaped[2],
+    },
+    epochs=epochs,
+    batch_size=batch_size,
+    verbose=0,
+    shuffle='batch',
+    validation_data=(x_test, list(reshape_test(window_len, y_test))),
+    callbacks=callbacks_list,
+)
+
+"
+"['neural-networks', 'machine-learning']"," Title: How can I train a neural network to give probability of a random event?Body: Let's say I have an adjustable loaded die, and I want to train a neural network to give me the probability of each face, depending on the settings of the loaded die.
+
+I can't measure its performance on an individual die roll, since a single roll does not give me a probability.
+
+I could batch a lot of rolls to calculate a probability and use this batch as an individual test case, but my problem does not allow this (let's say the settings are complex and randomized between each roll).
+
+I have 2 ideas:
+
+
+- Train it as a classification problem which outputs a confidence, and hope that the confidence will reflect the actual probability (a minimal sketch of this idea is given after this list). Sometimes the network would output the correct probability and still fail the test, but on average it would tend toward the correct probability. However, it may require a lot of training and data.
+- Batch random rolls together and compare the mean/median/standard deviation of the measured results vs. the predictions. It could work, but I don't know a good batch size.
+
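+For the first idea, this is the kind of minimal sketch I have in mind (the network size, the number of settings, and the random placeholder data are arbitrary assumptions): train a softmax classifier with cross-entropy on single observed rolls, and read its outputs as probability estimates.
+
+import numpy as np
+import tensorflow as tf
+
+n_faces, n_settings = 6, 4
+model = tf.keras.Sequential([
+    tf.keras.layers.Dense(32, activation='relu', input_shape=(n_settings,)),
+    tf.keras.layers.Dense(n_faces, activation='softmax'),
+])
+model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
+
+settings = np.random.rand(10000, n_settings)        # one setting vector per roll
+faces = np.random.randint(0, n_faces, size=10000)   # the single observed roll
+model.fit(settings, faces, epochs=5, verbose=0)
+probs = model.predict(settings[:1])                 # estimated face probabilities
+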
+
+Thank you.
+"
+"['neural-networks', 'machine-learning', 'bias']"," Title: Why do neural networks have bias units?Body: Why do neural networks have bias units? Why is it sometimes okay to opt them out?
+"
+"['deep-learning', 'reinforcement-learning', 'dqn', 'deep-rl', 'ddpg']"," Title: Is DDPG just for deterministic environments?Body: I want to develop an AI for continuous space. I reached to DDPG algorithm that takes actions deterministically.
+
+If DDPG takes actions deterministically, should the environment also be deterministic? I want non-deterministic, continuous real-world environments. Is DDPG the algorithm I am looking for? Is there any other algorithm for my need?
+"
+"['deep-learning', 'convolutional-neural-networks', 'classification', 'training', 'transfer-learning']"," Title: Binary annotations on large, heterogenous imagesBody: I'm working on a deep learning project and have encountered a problem. The images that I'm using are very large and extremely detailed. They also contain a huge amount of necessary visual information, so it's hard to downgrade the resolution. I've gotten around this by slicing my images into 'tiles,' with resolution 512 x 512. There are several thousand tiles for each image.
+
+Here's the problem: the annotations are binary and the images are heterogeneous. Thus, an annotation can be applied to a tile of the image that has no impact on the actual classification. How can I lessen the impact of tiles that are 'improperly' labeled?
+
+One thought is to cluster the tiles with something like a t-SNE plot and compare the ratio of the binary annotations for different regions (or 'classes'). I could then assign weights to images based on where it's located and then use that as an extra layer in my training. Very new to all of this, so wouldn't be surprised if that's an awful idea! Just thought I'd take a stab.
+
+For background, I'm using transfer learning on Inception v3.
+"
+"['neural-networks', 'machine-learning', 'classification', 'incremental-learning']"," Title: Can I train a neural network incrementally given new daily data?Body: I would like to know if it was possible to train a neural network on daily new data. Let me explain this more in detail. Let's say you have daily data from 2010 to 2019. You train your NN on all of it, but, from now on, every day in 2019 you get new data. Is it possible to ""append"" the training of the NN or do we need to retrain an entire NN with the data from $2010$ to $2019+n$ with $n$ the day for every new day?
+
+I don't know if it is relevant but my work is on binary classification.
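+
+To be explicit about what I mean by ""appending"" the training, here is a rough Keras-style sketch (the model file, data files, and shapes are placeholders I made up): load the already trained model and keep fitting it on each new day's data instead of retraining from scratch.
+
+from tensorflow import keras
+import numpy as np
+
+model = keras.models.load_model('model_2010_2019.h5')   # hypothetical saved model
+x_new = np.load('day_n_features.npy')                   # hypothetical new-day data
+y_new = np.load('day_n_labels.npy')
+model.fit(x_new, y_new, epochs=1, batch_size=32)        # continues from current weights
+model.save('model_2010_2019.h5')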
+"
+"['machine-learning', 'convolutional-neural-networks', 'graphs', 'geometric-deep-learning', 'graph-theory']"," Title: What are the advantages of time-varying graph CNNs compared to fixed graph?Body: As I wrote in the title, what are the advantages of time-varying graph CNNs compared to fixed graph? For example, in CORA, which is a graph of citation relations of papers frequently used in graph CNN, what examples are there?
+"
+"['deep-learning', 'convolutional-neural-networks', 'python', 'computer-vision', 'tensorflow']"," Title: Applying a 1D convolution for 4D inputBody: i'm trying to implement this paper and I'm stuck for quite some time now. Here is the issue:
+
+I have a 3D tensor with dimensions (180,200,20), and I'm trying to append 5 of them, as the paper states:
+
+
+ Now that each frame is represented as a 3D tensor, we can append multiple frames’ along a new temporal dimension to create a 4D tensor
+
+
+What I did is apply the TensorFlow command tf.stack(), and so far so good: I have my input as a 4D tensor with shape (5,180,200,20), as stated in the paper:
+
+
+ Thus our input is a 4 dimensional tensor consisting of time, height, X and Y
+
+
+Now what I'm trying to do is to apply a 1D convolution on this 4D tensor as the paper mentions:
+
+
+ given a 4D input tensor, we first use a 1D convolution with
+ kernel size n on temporal dimension to reduce the temporal dimension from n to 1
+
+
+In this case, n = 5.
+
+And here is where I got stuck. I created the kernel as follows:
+
+kernel = tf.Variable(tf.truncated_normal([5,16,16], dtype = tf.float64, stddev = 1e-1, name = 'weights'))
+
+and tried to apply a 1D convolution:
+
+conv = tf.nn.conv1d(myInput4D, kernel, 1, padding = 'SAME')
+
+and I get this error
+
+Shape must be rank 4 but is rank 5 for 'conv1d_42/Conv2D' (op: 'Conv2D') with input shapes: [5,180,1,200,20], [1,5,16,16]
+
+I don't understand how 1 is added to the dimensions at index = 2 and index = 0 in the first and second tensors.
+
+I also tried this:
+
+conv = tf.layers.conv1d(myInput4D, filters = 16, kernel_size = 5, strides = 1, padding = 'same')
+
+And get the following error:
+
+Input 0 of layer conv1d_4 is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: [5, 180, 200, 20]
+
+My question is: is it possible to apply a 1D convolution on a 4D input and, if yes, can anyone suggest a way to do so? Because the TensorFlow documentation says the input must be 3D:
+
+
+ tf.nn.conv1d(
+ value=None,
+ filters=None,
+ stride=None,
+ padding=None,
+ use_cudnn_on_gpu=None,
+ data_format=None,
+ name=None,
+ input=None,
+ dilations=None
+ )
+
+ value: A 3D Tensor. Must be of type float16, float32, or float64.
+
+
+Thank you.
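+
+EDIT: for completeness, one workaround I am considering (I am not sure it matches the paper's intent, and for simplicity it uses a single channel instead of 16, assuming myInput4D is float64 like the kernel above) is to fold the height/X/Y dimensions into the batch dimension so that tf.nn.conv1d sees the 3D input it expects:
+
+# myInput4D has shape (5, 180, 200, 20): (time, height, X, Y)
+x = tf.transpose(myInput4D, [1, 2, 3, 0])      # (180, 200, 20, 5)
+x = tf.reshape(x, [-1, 5, 1])                  # (180*200*20, time=5, channels=1)
+kernel1d = tf.Variable(tf.truncated_normal([5, 1, 1], dtype = tf.float64, stddev = 1e-1))
+conv = tf.nn.conv1d(x, kernel1d, 1, 'VALID')   # (180*200*20, 1, 1): temporal dim reduced to 1
+conv = tf.reshape(conv, [180, 200, 20])
+
+Would that be a reasonable way to do it, or is there a more direct one?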
+"
+"['neural-networks', 'python', 'keras']"," Title: Reinforcement learning to play snake - network seems to not get trained at allBody: I am trying to build a network able to play snake game. This is my very first attempt to do such stuff. Unfortunately, I've stuck and even have no idea how to reason about the problem.
+
+I use a reinforcement learning approach (Q-learning) with a neural network. My network is built on top of Keras. I use 6 input neurons for my snake:
+
+
+- 1 - is any collision directly behind
+- 2 - is any collision directly on the right
+- 3 - is any collision directly on the left
+- 4 - is snack up front (no matter how far)
+- 5 - is a snack on the right side (no matter how far)
+- 6 - is a snack on the left side (no matter how far)
+
+
+the output has 3 neurons:
+
+
+- 1 - do nothing (go ahead)
+- 2 - turn right
+- 3 - turn left
+
+
+I believe this is a sufficient set of information to make proper decisions. But the snake seems to not even grasp the concept of not hitting the wall, which results in instant death.
+
+I use the following rewards table:
+
+
+- 100 for getting the snack
+- -100 for hitting wall/tail
+- 1 for staying alive (each step)
+
+
+Snake tends to run randomly no matter how many training iterations it gets.
+
+The code is available on my github: https://github.com/ayeo/snake/blob/master/main.py
+"
+"['deep-learning', 'convolutional-neural-networks']"," Title: Inverting intensity on images to enhance image datasetBody: i just tried to improve my image dataset by inverting the images with a probability of 50% (means white background, black features transforms to black background, white features)
+
+I thought this would improve my network's ability to recognize abstract features. Right now, the network does not perform really well. Is inverting the intensity of images too much for a training algorithm to deal with?
+"
+['knowledge-representation']," Title: Is there any readily available concept/topic tree?Body: I am looking for a dataset in a tree structure that captures the hierarchy of concepts.
+For example, something like,
+
+    Entertainment
+        Movies
+            Comedy   - e.g. Charlie Chaplin
+            Thriller
+        Sports
+            Cricket  - e.g. Sachin
+            Football - e.g. Messi
+
+"
+"['cross-validation', 'metric']"," Title: Metrics for evaluating models that output probabilitiesBody: I'm aware of metrics like accuracy (correct predictions / total predictions) for models that classify things. However, I'm working on a model that outputs the probability of a datapoint belonging to one of two classes. What metrics can/should be used to evaluate these types of models?
+
+I'm currently using mean squared error, but I would like to know if there are other metrics, and what the advantages/disadvantages of those metrics are.
+"
+"['neural-networks', 'deep-learning', 'generative-adversarial-networks', 'time-series', 'representation-learning']"," Title: Is it possible to use adversarial training to learn invariant features?Body: Given a set of time series data that are generated from different sites where all sites are investigating the same objective but with slightly different protocols.
+
+Is it possible to use adversarial learning to learn site invariant features for a classification problem, that is, how can adversarial learning be used to minimize experimental differences (e.g. different measurement equipment) so that the learned feature representations from the time series are homogenous for a classification problem?
+
+I have come across multi-domain adversarial learning, but I'm not sure if this is the best formulation for my problem.
+"
+"['machine-learning', 'probability-distribution', 'weights']"," Title: How are the parameters of the Bernoulli distribution learned?Body: In the paper Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask, they learn a mask for the network by setting up the mask parameters as $M_i = Bern(\sigma(v_i))$. Where $M$ is the parameter mask ($f(x;\theta, M) = f(x;M \odot \theta$), $Bern$ is a Bernoulli sampler, $\sigma$ is the sigmoid function, and $v_i$ is some trainable parameter.
+
+In the paper, they learn $v_i$ using SGD. I was wondering how they managed to do that, because there isn't a reparameterization trick, as there is for some other distributions I see trained on in the literature (example: normal).
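+
+For concreteness, my reading of the setup is roughly the following (a PyTorch-style sketch; the shapes and names are made up by me):
+
+import torch
+
+theta = torch.randn(100)                     # the (fixed) network weights
+v = torch.randn(100, requires_grad=True)     # trainable scores, one per weight
+p = torch.sigmoid(v)                         # keep probabilities
+m = torch.bernoulli(p)                       # hard 0/1 mask, resampled each forward pass
+effective_weights = m * theta                # what the masked network actually uses
+
+The Bernoulli sampling step is not differentiable, so I don't see how a gradient reaches $v_i$ - that is exactly the part I'm asking about.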
+"
+"['convolutional-neural-networks', 'image-recognition', 'tensorflow', 'models']"," Title: Is there a simple way of classifying images of size differing from the input of existing image classifiers?Body: Most image classifiers like Inception-v3 accept images of about size 299 x 299 x 3 as input. In this particular case, I cannot resize the image and lose resolution. Is there an easy solution of dealing with this rather than retraining the model? (Particularly in tensorflow)
+"
+"['deep-learning', 'keras', 'transfer-learning']"," Title: Paper & code for ""unsupervised domain adaptation"" for regression taskBody: Does anyone know a paper or code that does ""unsupervised domain adaptation"" for regression task?
+I saw most of the papers were benchmarked on classification tasks, not regression.
+I want to do something like training a model to predict a scalar value from an image (e.g. predicting image of a road to steering wheel angle for a self-driving car).
+One of the examples could be training on synthetic data from a simulated environment (think GTA) and then trying to predict on real-world data.
+
+Here is one of the examples of unsupervised domain adaptation algorithm that also has an easy-to-access code with Keras: https://github.com/bbdamodaran/deepJDOT
+But it's for classification. The author said it can be used for regression but I had to change it. I changed it and it didn't work well so I don't know if it's my fault or the algo is not good for regression. I want to see papers that were benchmarked on regression so I know how well it performs on regression.
+
+My real use case is to predict facial expression as a value from 0 to 1 like how open is the mouth. The source domain and target domain are real-world images but from different lighting.
+
+Any suggestions are appreciated.
+"
+"['neural-networks', 'tensorflow', 'resource-request']"," Title: Is there a place where I can read or watch to get an accurate TensorFlow code wise explanation?Body: I have a piece of code and I don't seem to really understand it but I'd love to get a source/link/material that would help me understand the basic functions in TensorFlow. Are there any recommended resources for learning the same?
+"
+"['reinforcement-learning', 'value-functions', 'reward-design', 'reward-functions', 'weights-initialization']"," Title: How does the initialization of the value function and definition of the reward function affect the performance of the RL agent?Body: Is there any empirical/theoretical evidence on the effect of initial values of state-action and state values on the training of an RL agent (the values an RL agent assigns to visited states) via MC methods Policy Evaluation and GLIE Policy Improvement?
+For example, consider two initialization scenarios of Windy Gridworld problem:
+Implementation: I have modified the problem along with step penalty to include a non-desired terminal state and a desired terminal state which will be conveyed to the agent as a negative and positive reward state respectively. The implementation takes care that the MC sampling ends at the terminal state and gives out penalty/reward as a state-action value and not state value, since this is a control problem. Also, I have 5 moves: north, south, east, west and stay.
+NOTE: I am not sure whether this changes the objective of the problem. In the original problem, it was to reduce the number of steps required to reach the final stage.
+
+- We set the reward of reaching the desired terminal state to a value that is higher than the randomly initialized values of the value function; for example, we can set the reward to $20$ and initialize the values with random numbers in the range $[1, 7]$
+
+- We set the reward of reaching the desired terminal state to a value that is comparable to the randomly initialized values of the value functions; for example, we can set the reward to $5$ and initialize the values with random numbers in the range $[1, 10]$
+
+
+As far as I see, in the first case, the algorithm will quickly and easily converge, as the reward for the desired terminal state is much higher than everything else, which will push the agent to try to reach it.
+In the second case, this might not be true if the reward state is surrounded by other high reward states, the agent will try to go to those states.
+The step penalty ensures that the agent finally reaches the terminal state, but will this skew the path of the agent and severely affect its convergence time? This might be problematic in large state spaces since we will not be able to explore the entire state space, but the presence of exploratory constant $\epsilon$ might derail the training by going to a large false reward state. Is my understanding correct?
+"
+"['machine-learning', 'deep-learning', 'history']"," Title: Why did machine learning only become viable after Nvidia's chips were available?Body: I listened to a talk by a panel consisting of two influential Chinese scientists: Wang Gang and Yu Kai and others.
+When being asked about the biggest bottleneck of the development of artificial intelligence in the near future (3 to 5 years), Yu Kai, who has a background in the hardware industry, said that hardware would be the essential problem and we should pay most of our attention to that. He gave us two examples:
+
+- In the early development of the computer, we compare our machines by their chips;
+- ML/DL which is very popular these years would be almost impossible if not empowered by Nvidia's GPU.
+
+The fundamental algorithms already existed in the 1980s and 1990s, but AI went through 3 AI winters and was not practically viable until we could train models on GPU-boosted mega servers.
+Then Dr. Wang added his opinion that we should also develop software systems, because we cannot build an autonomous car even if we combined all the GPUs and computation in the world.
+Then, as usual, my mind wandered off and I started thinking: what if those who could operate supercomputers in the 1980s and 1990s had used the then-existing neural network algorithms and trained them with tons of scientific data? Some people at that time could obviously have attempted to build the AI systems we are building now.
+But why did AI/ML/DL only become a hot topic, and only become practically viable, decades later? Is it only a matter of hardware, software, and data?
+"
+"['genetic-algorithms', 'evolutionary-algorithms', 'hyper-parameters', 'hyperparameter-optimization']"," Title: Can we automate the choice of the hyper-parameters of the evolutionary algorithms?Body: Certain hyper-parameters (e.g. the size of the offspring generation or the definition of the fitness function) and the design (e.g. how the mutation is performed) of evolutionary algorithms usually need to be defined or specified by a human. Could also these definitions be automated? Could we also mutate the fitness function or automatically decide the size of the offspring generation?
+"
+"['reinforcement-learning', 'combinatorics']"," Title: Number of states in taxi environment (Dietterich 2000)Body: Dietterich, who introduced the taxi environment (see p. 9), states the following: In total there “are 500 [distinct] possible states: 25 squares, 5 locations for the passenger (counting the four starting locations and the taxi), and 4 destinations” (Dietterich, 2000, p. 9).
+
+However, in my opinion there are only 25 (grid) * 4 (locations) * 2 (passenger in car) = 200 different states, because for the agent it should be the same task to go to a certain point, regardless of whether it's on its way to pick up or to drop-off. Only the action at the destination is different which would be stored binary (passenger in car or not)
+
+Why does Dietterich come up with 500 states?
+"
+"['neural-networks', 'neat', 'neuroevolution']"," Title: How can non-functional neural networks be avoided when the crossover produces a child with a disabled gene?Body: I am implementing NEAT (neuroevolution of augmenting topologies) by Stanley. I am facing a problem during the crossover of genomes.
+Suppose two networks with connections
+Genome1 = {
+ (1, Input1, Output), // numbers represent innovation numbers
+ (2, Input2, Output)
+} // more fit
+
+Genome2 = {
+ (1, Input1, Output),
+ (2, Input2, Output), // disabled
+ (3, Input2, Hidden1),
+ (4, Hidden1, Output)
+}
+
+are crossed over, then the connection (Input2, Output) in the fitter parent has a chance of being disabled (page 109, section 3.2, figure 4),
+
+There's a preset chance that an inherited gene is disabled if it is disabled in either parent.
+
+and thus producing the following offspring:
+Child = {
+ (1, Input1, Output),
+ (2, Input2, Output) //Disabled
+}
+
+which would render the network non-functional.
+Similarly, by this chance, nodes can get left in a state of uselessness after crossover (as having no outgoing connections or no connections at all).
+How can this be prevented or am I missing something here?
+"
+"['ai-design', 'objective-functions', 'probability-distribution']"," Title: Which loss functions for transforming a density function to another density function?Body: I am looking at a problem which can be distilled as follows: I have a phenomenon which can be modeled as a probability density function which is ""messy"" in that it sums to unity over its support but is somewhat jagged and spiky, and does not correspond to any particular textbook function. It takes considerable amounts of time to generate these experimental density functions, along with conditional data for machine learning, but I have them. I also have a crude model which runs quickly but performs poorly, i.e., generates poor quality density functions.
+
+I would like to train a neural network to transform the crude estimated pdfs to something closer to the experimentally generated pdfs, if possible.
+
+To investigate this, I've further reduced this to the most toy-like toy problem I can think of: Feeding a narrow, smooth (relatively narrow) normal curve into a 1D convolutional neural network, and trying to transform it to a similar narrow curve with a different mean. Both input and output have fine enough support (101 points) to be considered as a smooth pdf.
+
+Here is the crux of the problem I think I have: I do not know what a good loss function is for this problem.
+
+L1, L2 and similar losses are useless, given that once the non-zero parts of the pdfs are non-overlapping, it doesn't matter how far apart the means are, the loss remains the same.
+
+I have been experimenting with Sinkhorn approximations to optimal transport, to properly capture the intuition of ""distance"" but somewhat surprisingly these have not been helpful either. I think part of the problem may be an (unavoidable?) numerical stability issue related to the support, but I would not stake hard money on that assumption.
+
+(If the support is at percentiles on $[0,1]$, it is quite instructive (and dismaying) to look at the Sinkhorn loss for normal densities with the mean directly on a point of support vs. normal densities with the mean directly between two points of support.)
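+
+To make concrete the kind of behaviour I'm after: for two discretized pdfs on the same uniform grid, the exact 1D optimal-transport (Wasserstein-1) distance can be written via the CDFs, roughly like this (my own sketch):
+
+import torch
+
+def wasserstein1d(p, q):
+    # p, q: tensors of shape (..., n_points), each summing to 1 over the last axis,
+    # defined on a shared uniform grid. W1 equals the integral of |F_p - F_q|,
+    # approximated here by the mean absolute CDF difference over the grid.
+    cdf_p = torch.cumsum(p, dim=-1)
+    cdf_q = torch.cumsum(q, dim=-1)
+    return torch.mean(torch.abs(cdf_p - cdf_q), dim=-1)
+
+Unlike L1/L2 on the pdfs themselves, this keeps growing as the means move apart, which is the intuition of 'distance' I mentioned; whether it behaves better numerically than the Sinkhorn approximation on my 101-point support is part of what I'm asking.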
+
+For a problem in this vein, are there any recommended loss functions (preferably supported by, or easily implemented in, PyTorch) which might work better?
+"
+"['reinforcement-learning', 'terminology', 'hierarchical-rl', 'semi-mdp', 'options']"," Title: What are options in reinforcement learning?Body: According to a lecture (week 10) about Reinforcement Learning [1], the concept of an option allows searching the state space of an agent much faster. The lecture was hard to follow because many new terms were introduced in a short time. For me, the concept of an option sounds a bit like skills [2], which are used for describing high-level actions as well.
+Are skills an improvement over options that include the trajectory, or are both the same?
+I'm asking for a certain reason. Normal deep reinforcement learning has the problem that the agent comes very often to a dead end, for example, in Montezuma's Revenge played at the Atari emulator. And the options framework promises to overcome the issue. But the concept sounds a bit too esoteric, and apart from the Nptel lecture, nobody else has explained the idea. So, is it useful at all?
+"
+"['neural-networks', 'machine-learning', 'classification', 'objective-functions', 'binary-classification']"," Title: Which loss function should I use for binary classification?Body: I plan to create a neural network using Python, Keras, and TensorFlow. All the tutorials I have seen so far are concerned with image recognition. However, the goal of my program would be to take in 10+ inputs and calculate a binary output (true/false) instead.
+Which loss function should I use for my task?
+"
+"['machine-learning', 'genetic-algorithms', 'gradient-descent', 'neat', 'neuroevolution']"," Title: Can neuroevolution be combined with gradient descent?Body: Is there any precedent for using a neuroevolution algorithm, like NEAT, as a way of getting to an initialization of weights for a network that can then be fine-tuned with gradient descent and back-propagation?
+
+I wonder if this may be a faster way of getting close to a global minimum before starting a descent to a local one using backpropagation, when there is a large set of input parameters.
+"
+"['philosophy', 'agi', 'knowledge-representation', 'commonsense-knowledge']"," Title: Why do we need common sense in AI?Body: Let's consider this example:
+
+
+ It's John's birthday, let's buy him a kite.
+
+
+We humans most likely would say the kite is a birthday gift, if asked why it's being bought; and we refer to this reasoning as common sense.
+
+Why do we need this in artificially intelligent agents? I think it could cause a plethora of problems, since a lot of our human errors are caused by these vague assumptions.
+
+Imagine an AI ignoring doing certain things because it assumes it has already been done by someone else (or another AI), using its common sense.
+
+Wouldn't that bring human errors into AI systems?
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'comparison']"," Title: Can a vanilla neural network theoretically achieve the same performance as CNN?Body: I perfectly understand that CNN takes into account the local dependency of each pixel to the nearby pixels. In addition, CNNs are spatially invariant which means that they are able to detect the same feature anywhere in the image. These qualities are useful in image classification problems given the nature of the problem.
+
+How exactly does a vanilla neural net fall short on these properties? Am I right in claiming that a vanilla neural net has to learn a given feature in every part of the image? This is different from how a CNN does it, which learns the feature once and then detects it anywhere in the image.
+
+How about local pixel dependency? Why can't a vanilla neural network learn the local dependency by relating one pixel to its neighbors in the 1D input?
+
+In other words, is there more information present while training a CNN that are simply absent when training a normal NN? Or is a CNN just better at optimizing in the space of image classification problems?
+"
+"['machine-learning', 'ai-design', 'game-ai', 'object-recognition', 'object-detection']"," Title: How should I build an AI that quickly detects falling game assets on screen?Body: I want to build an AI that plays a simple android game.
+
+The game just has one object falling at a time, sometimes at an angle. The AI needs to recognize the object and decide whether to swipe left, swipe down, or click on it. The background changes sometimes, but the falling object is always on top.
+
+There are 44 different assets and I have the original full resolution PNG of the objects.
+
+How should I approach this?
+"
+"['neural-networks', 'classification', 'binary-classification', 'features', 'inference']"," Title: When doing binary classification with neural networks, how can I order the importance of the features for a class?Body: I have a simple neural network for binary classification.
+The input features include age, sex, economic situation, illness, disability, etc. The output is simply 1 and 0.
+I would like to order the features from the greatest to least impact it had on the classification.
+An example answer could look like this:
+Classification: 1
+
+- illness
+- economic situation
+- disability
+- sex
+- age
+
+Another example:
+Classification: 0
+
+- economic situation
+- age
+- disability
+- sex
+- illness
+
+"
+['ai-design']," Title: Can an AI simulate someone that is diagnosed as ""Special Needs""?Body: Do you think it would be possible to train an AI in such a way as to mimic/simulate someone that is diagnosed as ""Special Needs""?
+
+Why? Most diagnoses and treatments for people today are subjective; sure, it's what a group of like-minded professionals has agreed upon as a valid hypothesis, but, at the same time, there is an absence of the absolute. Could training an AI to simulate ""special needs"" be a starting point in helping find better ways to unlock the potential of, and understanding of, these differences?
+"
+"['machine-learning', 'computer-vision', 'terminology', 'papers']"," Title: What is ""dense"" in DensePose?Body: I've recently come across an amazing work for human pose estimation: DensePose: Dense Human Pose Estimation In The Wild by Facebook.
+
+In this work, they have tackled the task of dense human pose estimation using discriminative trained models.
+
+I do understand that ""correspondence"" means how well pixels in one image correspond to pixels in the second image (specifically, here - 2D to 3D).
+
+But what does ""dense"" means in this case?
+"
+"['deep-learning', 'ai-design', 'autoencoders', 'object-detection']"," Title: How should I detect an object in a camera image?Body: I would like to create a model, that will tell me if one type of object is in an image or not.
+
+So, for example, I have a camera and I would like to see when one object gets into the shot.
+
+
+- Object detection: This could be an overkill, because I don't need to know the bounding box around. Also, this means that I would need to label a lot of images, and draw the bounding box to have train data (a lot of time)
+- Image classification: This doesn't solve the problem, because I don't know what else could not be an object. It would be impossible to train for 2 classes: object / not object.
+
+
+My idea is to use an autoencoder and train it only on data with the object. Then, if the autoencoder produces a result with a high difference from the original, I detect it as an anomaly - no object.
+
+Is this a good approach? Will I have a lot of trouble with different backgrounds?
+"
+"['machine-learning', 'natural-language-processing', 'causation']"," Title: Models to extract Causal Relationship between entities in a document using Natural Language Processing techniquesBody: I am looking to extract causal relations between entities like Drug and Adverse Effect in a document. Are there any proven NLP or AI techniques to handle the same. Also are there ways to handle cases where the 2 entities may not necessarily co-occur in the same sentence.
+"
+"['machine-learning', 'reinforcement-learning', 'image-processing']"," Title: How can I develop this ML/AI system that I want to use in my new mobile app?Body: I have an idea for a new mobile app. Here is what I want to accomplish using AI;
+I want to get an image (png format), (maybe just byte data too), from my application (I'm developing with Unity3D/C#), send this data to AI application; get modified image from ML app and send it back to my app.
+
+What is the AI going to do with the image?
+Imagine you are the user of my app; you are going to draw a picture on your phone.
+The picture will be simple, like a seagull illustrated as an 'M' letter.
+The AI program will get your drawing ('M' in this case), check pixels to give a meaning to the 'M', and then draw a more complex picture that is themed around that M seagull.
+(Like a drawing with an ocean made in pixel art, with rain clouds from Van Gogh, and the seagull painted as a surreal bird...)
+
+My general idea about building this AI system...
+I'm not sure how to build this AI system, because I don't completely understand AI/ML: how it works on a machine, how to implement it on a computer, how to write an algorithm, how pre-made libraries like TensorFlow work... But I'm in a phase of my life where I need to use my time well, and I want to build this app while learning.
+I think I can build a side app to use for analyzing and modifying the image I get from the user. Right now I can write in C and C#, and I'm learning JavaScript. I learned Python too (and a few others), but I'm not comfortable using it (I hate Python), and I haven't written a good program in any other language...
+I thought I could use JavaScript, Java or C++, but honestly I don't know how to start or which steps to take. Also, after gaining some success, I will want to port the app to iOS too... Maybe that can wait...
+
+Can you give me some examples, guidelines and advice? How should I start, and what is the best approach, performance- and development-time-wise?
+And is my approach to the problem a good one? Can you come up with a better solution for my idea, or point me in another direction?
+"
+"['neural-networks', 'machine-learning', 'classification', 'data-mining']"," Title: Is this a classification problem?Body: I’m not really sure which machine learning approach is best for my problem at hand. I work in an engineering company that designs and builds different kinds of ships. In my particular job, I collect the individual weight of items on these vessels. The weight and there location is important because it is used to ensure the vessel in question can float in a balanced manner.
+
+I have a large corpus of historical data on hand that lists the items on the vessel, their attributes, the weight for these items and where the weight came from (documentation), or the source.
+
+So, for example, let’s say I have the following information:
+
+ ITEM  | ATTRIBUTES                     | WEIGHT | WEIGHT SOURCE
+ Valve | Size: 1 inch, Type: Ball Valve | 2 lbs. | Database 1
+ Elbow | Size: 2 inch, Type: Reducing   | 1 lb.  | Database 2
+
+
+I have to comb through many systems on these vessels and find the proper documentation or engineering drawings that lists the weight for the item in question. It usually starts by investigating the item and its attributes and then looking in a number of databases for the weight documentation. This takes a long time, as there is no organization or criteria as to what database has what. You just have to start randomly searching them and hope you find what you need.
+
+Well I now have a large corpus of real world data that lists thousands of items, their attributes, their weight and, most importantly, the source of the documentation (Database 1, 2, 3 etc.). I’m wondering if there is any correlation between an item, its attributes and its weight location (database). This is where machine learning comes in. What I’d like to do is use machine learning to help find the weight location more quickly. Ideally it would be nice if it could analyze a batch of information and then provide recommendations on which databases to search.
+
+My first thoughts are that this is a classification problem, and maybe a CNN would be helpful here. If that is the case, I have over 100 categories in my dataset.
+I actually went ahead and programmed a simple feed forward neural network using the following resources: https://www.analyticsvidhya.com/blog/2017/05/neural-network-from-scratch-in-python-and-r/ I attempted to use this network to solve the above problem, but so far I have had no success. I’m in over my head here.
+
+I don’t expect it to be correct 100% of the time. Even if it had an 80% success rate that would be awesome. So my question is this:
+
+What kind of neural network do I need to accomplish this?
+"
+"['neural-networks', 'genetic-algorithms', 'optimization', 'fitness-functions']"," Title: Is a neural network the correct approach to optimising a fitness function in a genetic algorithm?Body: I've written an application to help players pick the optimal heroes during the draft phase of the Heroes of the Storm MOBA. It can be daunting to pick from 80+ characters that have synergies/counters to other characters, strong/weak maps, etc. The app attempts to pick the optimal composition using a genetic algorithm (GA) based on various sources of information on these heroes.
+
+The problem I've realized is that not all sources of information are created equal. At the moment I'm giving all sources roughly equal importance in the fitness function but as I add other sources, I think it's going to be necessary to be more discerning about them.
+
+It seems like the right way to do this would be to use a single layer neural network where the weights of the synapses represent the weights in the fitness function. I could use matches played at a high-level (e.g. from MasterLeague.net) to form the training and test sets.
+
+Does this sound like a viable approach or am I missing something simpler? Is the idea of using a GA even the correct way to approach this problem?
+"
+"['neural-networks', 'gradient-descent', 'objective-functions']"," Title: Is it possible with stochastic gradient descent for the error to increase?Body: As simple as that. Is there any scenario where the error might increase, if only by a tiny amount, when using SGD (no momentum)?
+"
+"['neural-networks', 'machine-learning', 'incremental-learning', 'catastrophic-forgetting']"," Title: Are neural networks prone to catastrophic forgetting?Body: Imagine you show a neural network a picture of a lion 100 times and label it with "dangerous", so it learns that lions are dangerous.
+Now imagine that previously you have shown it millions of images of lions and alternatively labeled it as "dangerous" and "not dangerous", such that the probability of a lion being dangerous is 50%.
+But those last 100 times have pushed the neural network into being very positive about regarding the lion as "dangerous", thus ignoring the last million lessons.
+Therefore, it seems there is a flaw in neural networks, in that they can change their mind too quickly based on recent evidence. Especially if that previous evidence was in the middle.
+Is there a neural network model that keeps track of how much evidence it has seen? (Or would this be equivalent to letting the learning rate decrease by $1/T$ where $T$ is the number of trials?)
+"
+"['image-recognition', 'image-processing']"," Title: What models and algorithms are used in commercial vehicle re-identification tasks?Body: Due to the fast-growing applications of AI technologies applied to vehicle re-identification tasks, there have already been hot contests, such as the Nvidia AI challenge.
+
+What algorithms or models are really adopted in commercial vehicle re-identification tasks, effective and reliable, nowadays?
+"
+"['neural-networks', 'probability']"," Title: Is there an AI model with ""certainty"" built in?Body: If I see a hundred elephants and fifty of them are grey I'd say the probability of an elephant being grey is 50%. And my certainty of that probability is high.
+
+However, if I see two elephants and one of them is grey, the probability is still 50%, but my certainty of this is low.
+
+Are there any AI models where not only the probability is given by the AI, but its certainty is also?
+
+""Certainty"" might be thought of as the probability that the probability is correct.
+
+This could go up more levels.
+
+Is there any advantage in doing this?
+
+One way I can envisage this working is that, instead of a weight, the NN stores two integers $(P,N)$ which represent positive and negative evidence, and the weight is given by $P/(P+N)$. Each iteration, $P$ or $N$ can only be incremented by 1.
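+
+To illustrate the bookkeeping I have in mind (just a sketch of the idea):
+
+P, N = 1, 1                  # positive and negative evidence seen so far
+weight = P / (P + N)         # plays the role of the probability, 0.5 here
+evidence = P + N             # a crude 'certainty': more evidence, more certainty
+
+P += 1                       # observe one more positive example
+weight = P / (P + N)         # now 2/3, but still backed by very little evidence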
+"
+"['machine-learning', 'python', 'data-science']"," Title: Scikit-Learn: monotoneous quantile estimationBody: I would like to implement various AI-estimators for quantile estimation for a regression problem. It would be necessary to have non-crossing quantiles, that is larger quantiles would correspond to higher prediction values.
+
+My objective is to have a multidimensional prediction vector as an output of the estimation, each dimension corresponding to a specific quantile. Maybe I would have to define a custom loss function, as well, for that purpose. I would like to try different methods such as deep learning, gradient boosting or random forests.
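+
+For reference, the custom loss I assume would be the starting point is the usual quantile (pinball) loss for a single quantile level tau - roughly (my own sketch):
+
+import numpy as np
+
+def pinball_loss(y_true, y_pred, tau):
+    # under-predictions are weighted by tau, over-predictions by (1 - tau)
+    diff = y_true - y_pred
+    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))
+
+The open part for me is how to predict all quantiles jointly while keeping the outputs ordered (non-crossing).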
+
+Does anyone have an idea how to build such estimators? My preferred library choice would be scikit-learn.
+
+Can someone give me an idea how to do this?
+"
+"['reinforcement-learning', 'monte-carlo-methods']"," Title: Why does GLIE+MC Control Algorithm use a single episode of Monte Carlo evaluation?Body: GLIE+MC control Algorithm:
+
+
+
+My question is why does this algorithm use only a single Monte Carlo episode (during PE step) to compute the $Q(s,a)$? In my understanding this has the following drawbacks:
+
+
+- If we have multiple terminal states then we will only reach one (per Policy Iteration step PE+PI).
+- It is highly unlikely that we will visit all the states (during training), and a popular schedule for the exploration constant, $\epsilon = 1/k$, where $k$ is the episode number, ensures that exploration decays very rapidly. This means that we may never visit some states during our entire training.
+
+
+So why does this algorithm use a single MC episode, and why not multiple episodes in a single Policy Iteration step, so that the agent gets a better feel for the environment?
+"
+"['reinforcement-learning', 'dqn', 'hyper-parameters', 'exploration-exploitation-tradeoff', 'epsilon-greedy-policy']"," Title: Why is the $\epsilon$ hyper-parameter (in the $\epsilon$-greedy policy) annealed smoothly?Body: As far as I understand, RL is a process that can be divided into 2 stages:
+
+- Exploring a wide range of paths (acting randomly)
+
+- Refining the current optimal paths (revolving around actions with a so-far most promising score estimate)
+
+
+Completing 1. too quickly results in a network that just doesn't spot the best combination of actions, especially if rewards are sparse. "Refining" then has little benefit, since the network will tend to choose between unlucky estimates it observed so far, and will specialise in those.
+On the other hand, finishing 2. too quickly results in a network that might have encountered the best combination, but never got time to refine these "good trajectories". Thus its estimates of scores along these "good trajectories" are rather poor and inaccurate, so again the network will fear to select and specialize those, because they might have a low (inaccurate) estimate.
+Why not give both 1. and 2. the maximum time possible?
+In other words, instead of gradually annealing the $\epsilon$ coefficient (in the $\epsilon$-greedy) down to a low value, why not always have it as a step function?
+For example, train 50% of iterations with a value of 1 (acting completely randomly), and for the second half of training with the value of 0.05, etc (very greedy). Well, 50% is a random guess, could be adjusted manually, as needed. The most important part is this "step function".
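+Concretely, the two schedules I am comparing would look something like this (just a sketch):
+
+def epsilon_step(t, total_steps, split=0.5, low=0.05):
+    # proposed 'step' schedule: fully random exploration first, near-greedy afterwards
+    return 1.0 if t < split * total_steps else low
+
+def epsilon_linear(t, total_steps, low=0.05):
+    # the usual gradual annealing, for comparison
+    return max(low, 1.0 - (1.0 - low) * t / total_steps)
+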
+To me, always using such a "step" function would instantly reveal if the initial random search was not long enough. Perhaps there is a disadvantage of such a step curve?
+So far, I got the impression that annealing is a gradual process.
+To me, it seems that when using gradual annealing it might not be evident if the neural network (e.g. in DQN or DQRNN) learns poorly because of the mentioned issue or something else.
+Is there some literature exploring this?
+There is a paper Noisy Networks for Exploration, but it proposes another approach that removes the $\epsilon$ hyperparameter. My question is different, specifically, about this $\epsilon$.
+"
+"['neural-networks', 'backpropagation']"," Title: Confused about NeuralODEBody: I am a bit confused about NeuralODE and I want to make sure that what I understood so far is correct.
+
+Assume we have (for simplicity) 2 data points $z_0$ measured at $t_0$ and $z_1$ measured at $t_1$. Normally (in normal NN approach), one would train a NN to predict $z_1$ given $z_0$, i.e. $NN(z_0)=z_1$. In NeuralODE approach, the goal is to train the NN to approximate a function $f(z_0)$ (I will ignore the explicit time dependence) such that given the ODE: $\frac{dz}{dt}|_{t_0}=f(z_0)$ which would be approximated as $\frac{dz}{dt}|_{t_0}=NN(z_0)$ and solving this using some (non AI based) ODE integrator (Euler's method for example) one gets as the solution for this ODE at time $t_1$ something close to $z_1$. So basically the NN now approximates the tangent of the function ($\frac{dz}{dt}$) instead of the function itself ($z(t)$).
+
+Is my understanding so far correct?
+
+So I am a bit confused about the training itself. I understand that they use the adjoint method. What I don't understand is what exactly is being updated. As far as I can see, the only things that are free (i.e. not measured data) are the parameters of the function $f$, i.e. the NN approximating it. So one would need to compute $\frac{\partial loss}{\partial \theta}$, where $\theta$ are the parameters (weights and biases of the network).
+
+Why would I need to compute, for example (as they do in the paper), $\frac{\partial loss}{\partial z_0}$? $z_0$ is the input, which is fixed, so I don't need to update it. What am I missing here?
+
+Secondly, if what I said in the first part is correct, it seems like in principle one can get great results for a reasonably simple function $f$, such as a (for example) 3 layers fully connected NN. So one needs to update the parameters of this NN. On the other hand, ResNets can have tens or hundreds of layers.
+
+Am I missing a step here or is this new approach so powerful that with a lot fewer parameters one can get very good results?
+
+I feel like a ResNet, even with 2 layers, should be more powerful than Euler's Method ODE, as ResNets would allow more freedom in the sense that the 2 blocks don't need to be the same, while in the NeuralODE using Euler's Method one has the same (single) block.
+
+Lastly, I am not sure I understand what they mean by (continuous) depth in this case. What is the definition of the depth here (I assume it is not just the depth of $f$)?
+"
+['classification']," Title: Can an object's movement (instead of its appearance) be used to classify it?Body: I know that it is very common for machine learning systems to classify objects based on their visual features such as shapes, colours, curvatures, width-to-length ratios, etc.
+
+What I'd like to know is this: Do any techniques exist for machine learning systems to classify objects based on how they move?
+
+Examples:
+
+
+- Suppose that in a still image, 2 different classes of objects look identical. However, in a video Class A glides smoothly across the screen while Class B meanders chaotically across the screen.
+- When given multiple videos of the same person walking, classify whether he's sober or drunk.
+
+"
+"['neural-networks', 'reference-request', 'proofs', 'function-approximation', 'universal-approximation-theorems']"," Title: Where can I find the proof of the universal approximation theorem?Body: The Wikipedia article for the universal approximation theorem cites a version of the universal approximation theorem for Lebesgue-measurable functions from this conference paper. However, the paper does not include the proofs of the theorem. Does anybody know where the proof can be found?
+"
+"['deep-learning', 'convolutional-neural-networks', 'batch-normalization']"," Title: What is the most common practice to apply batch normalization?Body: For a deep NN, should I generally apply batch normalization after each convolution layer? Or only after some of them? Which? Every 2nd, every 3rd, lowest, highest, etc.?
+"
+['machine-learning']," Title: How to recognize with just name and last name if the person is a political exposed personBody: First of all, I am not sure if this question is more about Machine Learning or Artificial Intelligence; if it doesn't fit, just let me know and I will delete it.
+
+At my company we need to create a solution for banks, where a client comes in and they want to open a bank account.
+
+They need to know if that person is a politician or a politically exposed person (PEP); maybe they work in the European Commission, or they are family of a PEP, for example.
+
+The business users have lots of data sources from which to get these people, for example: http://www.europarl.europa.eu/meps/en/full-list/all
+
+They want to train a model (machine learning) where the end user can enter a name, Bill Clinton for example, and then the system has to return the percentage (probability) of that person being politically exposed or not.
+
+Obviously some persons are 100% political and the percentage will be 100%.
+
+But if they enter a name that is not in any of their data sources, how would I train a model to decide if it's a PEP or not?
+
+quite confused
+
+thanks
+"
+"['convolutional-neural-networks', 'objective-functions', 'autoencoders']"," Title: What is the best loss function for convolution neural network and autoencoder?Body: What is the best choice for loss function in Convolution Neural Network and in Autoencoder in particular - and why?
+
+I understand that the MSE is probably not the best choice, because little difference in lighting can cause a big difference in end loss.
+
+What about binary cross-entropy? As I understand it, this should be used when the target vector has a 1 at one place and 0 at all others, so you only compare the class that should be correct (and ignore the others)... But this is an image (although the values are converted to the 0-1 range...).
+"
+"['neural-networks', 'policy-gradients']"," Title: Encoding real valued inputsBody: UPDATE: After reading more about the topic, I've tried implementing the
+ DDPG algorithm instead of using a variation of Q-Learning and still have the same issue.
+
+I have the following issue:
+
+I want to train my critic to estimate values of state/action pairs. My state consists of 2 real valued variables and my action is another real valued variable.
+
+I normalize all values before I feed them into the network. Now I have the following issue: The network is very unresponsive to changes in the input. Before I normalize the state and the actions, they can take up any value between 0 and 50. After normalizing them they are in the range between -1 and 1. A change of 1 in the input can become a very small change in the input after normalization.
+
+But in my specific situation, a small change in the action or the state can cause a very large change in the value of the state/action pair. The network does not really learn that correctly; it handles similar inputs similarly all the time (which is okay most of the time, but there are hard cuts in the shape of the value function here and there). If I further reduce the network's capacity, the network's output becomes constant and ignores all three inputs.
+
+Do you know any other tricks, that I could use to increase sensitivity to the input at some points? Or is my network configuration/approach the wrong one (too large, too small)?
+
+The network I'm training is a simple feedforward neural network that takes two inputs, followed by 2 hidden layers followed by a single output to predict the value for that state/action combination. (I'm still trying out different configurations here, as I have no real feeling for the amount of elements per layer and the amount of layers needed to get the capacity I need without encouraging overfitting).
+
+Thanks for your help :)
+"
+"['reinforcement-learning', 'temporal-difference-methods']"," Title: By learning from incomplete episodes, does David Silver mean learning of $V(s)$ even when the episode is not completed?Body: I came across the $TD(0)$ algorithm from Sutton and Barto:
+
+Clearly, the only difference of TD methods with the MC methods is that TD method is not waiting till the end of the episode to update the $V(s)$ or $Q(s,a)$, but according to David Silver's lecture (Lecture 4- ~34:00),
+
+The $TD(0)$ algorithm learns from incomplete episodes, but in the earlier algorithm we can see that the loop repeats until $s$ is terminal, which means completion of the episode.
+So, by learning from incomplete episodes, does David Silver mean learning of $V(s)$ even when the episode is not completed? Or did I interpret the algorithm wrong? If so, what is the correct interpretation?
+"
+"['machine-learning', 'prediction']"," Title: How should we understand the evaluation metric, AUC, in link prediction problems?Body: In link prediction problems, there are only known edges and nodes.
+
+
+- If there is a known edge in the node pair, the node pair is regarded as a positive sample. Apart from those node pairs whose edges are known, there may exist unobserved edges in some node pairs, or there may really be no edge in some node pairs. Our target is to predict potential links in those candidate node pairs.
+
+
+A node pair with a known edge is regarded as a positive sample, so a node pair whose edge is not observed can be regarded as neither a positive example nor a negative example.
+
+So I think the link prediction problem is a semi-supervised problem. However, I find that many papers, for example, GRTR: Drug-Disease Association Prediction Based on Graph Regularized Transductive Regression on Heterogeneous Network, use AUC (Area Under the ROC Curve, a metric for supervised problems) as the metric.
+
+How should we understand such behavior? What's the reason?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'reinforcement-learning', 'generalization']"," Title: How do I determine the generalisation ability of a neural network?Body: I am trying to ascertain if my neural network is able to generalize or if it’s simply using memory/overfitting to solve a task. I would like my model to generalise.
+
+Currently, I train the neural network on a randomly generated 3x3 frozen lake environment - with no holes. (The network simply chooses an action for each state it is presented.)
+
+Then, I test the model on a much larger frozen lake environment. Still no holes. Still randomly generated. The test environment size is assigned by a random value of 5-15 for each axis (height/width), randomly generated.
+
+Then I determine the ""degree of generalization"" by how many large environments the network is able to solve. At present, it solves 100/100 on the 3x3, and about 83/100 on the larger test environments.
+
+When I track the solutions it generates, I can see that the network always takes the shortest route available, which is great.
+
+Do you guys have any ideas, inputs or criticism on the method I use to determine the degree of generalization?
+"
+"['deep-learning', 'ai-design', 'applications', 'prediction']"," Title: Suicide Predictor and LocatorBody: Suicide is on the increase in my country and most victims tend to leave early traces from text messages, social media accounts, search engine queries. So I came up with the idea to develop an AI system with the following features:
+
+
+- Ability to read text messages searching for suicide trigger words
+- Read chats also for the same trigger words
+- Incorporate synchronization of words from software/browsers on recent words typed
+- Learning algorithm to predict the next action after taking notes of the use of these suicide trigger words
+- Ability to access any cellphone, Android, computer, websites
+
+
+Is this feasible or not feasible?
+"
+"['machine-learning', 'python', 'definitions', 'regression']"," Title: Which models accept numerical parameters and produce a numerical output?Body: I need a model that will take in a few numerical parameters, and give back a numerical answer (Context: predicting a slope based on environmental factors without having to actually take measurements to find the slope).
+
+However, I am lost, given that I have not found (on the web) any model that is able to solve this task. If someone can suggest a type of model that would be able to solve this type of problem, I would greatly appreciate it. Resources and libraries would also be useful if you care to suggest any. I'm using Python.
+"
+['monte-carlo-tree-search']," Title: Is the playout started from a leaf or child of leaf in Monte Carlo Tree Search?Body: On Wikipedia, the MCTS algorithm is described
+
+Selection: start from root $R$ and select successive child nodes until a leaf node $L$ is reached. A leaf is any node from which no simulation (playout) has yet been initiated.
+Expansion: create one (or more) child nodes and choose node $C$ from one of them. Child nodes are any valid moves from the game position defined by $L$.
+Simulation: complete one random playout from node $C$.
+
+Why is the playout started from a child of the first leaf, not the leaf itself? And aren't leaves then permanently stuck as leaves, since playouts always start from their children, not them? Or does the leaf get attributed as having had a "playout initialised" from it, even though it started at its child?
+"
+"['reinforcement-learning', 'q-learning', 'terminology', 'game-theory', 'multi-agent-systems']"," Title: How does Friend-or-Foe Q-learning intuitively work?Body: I read about Q-Learning and was reading about multi-agent environments. I tried to read the paper Friend-or-Foe Q-learning, but could not understand anything, except for a very vague idea.
+
+What does Friend-or-Foe Q-learning mean? How does it work? Could someone please explain this expression or concept in a simple yet descriptive way that is easier to understand and that helps to get the correct intuition?
+"
+"['q-learning', 'deep-rl', 'policies']"," Title: If deep Q learning involves adjusting the value function for a specific policy, then how do I choose the right policy?Body: I wrote a simple implementation of Flappy Bird in Python, and now I'm trying to train an agent to play it at a reasonable skill level using TFLearn.
+
+I feed the network an input vector of size 4:
+
+
+- the horizontal distance to the next obstacle
+- the agent's vertical distance from the ground,
+- the agent's vertical distances from the top, and
+- the agent's vertical distances from the bottom parts of the opening in the obstacle.
+
+
+The output layer of the network contains one unit, telling me the Q value of the provided state with the assumption that the action taken in that state will be determined by the policy.
+
+However, I don't know what policy would make the agent learn to play the best. I can't just make it choose random actions because that would make the policy non-stationary. What can I do?
+"
+"['machine-learning', 'reinforcement-learning', 'policy-gradients']"," Title: How to enforce covariance-matrix output as part of the last layer of a Policy Network?Body: I have a continuous state space, and a continuous action space. The way I understand it, I can build a policy network which takes as input a continuous state vector and outputs both mean vector and covariance matrix of the action-distribution. To get a valid action I then sample from that distribution.
+
+However, when trying to implement such a network, I get the error message that the parts of my output layer which I want to be the covariance matrix are singular/not positive-semi-definite. How can I fix this? I tried different activation-functions and initializations for the last layer, but once in a while I run into the same problem again.
+
+How can I enforce that my network outputs a valid covariance matrix?
+"
+"['machine-learning', 'natural-language-processing', 'definitions', 'probabilistic-graphical-models', 'conditional-random-field']"," Title: What is a conditional random field?Body: I new in machine learning, especially in Conditional Random Fields (CRF).
+
+I have read several articles and papers, and in them CRFs are always associated with HMMs and sequence classification. I don't really understand the mathematics, especially the intimidating formulas, so I can't understand the process. Where do I need to start to understand CRFs?
+
+I want to make an information extraction application using CRF Named Entity Recognition (NER).
+
+I got some tutorial for that: https://eli5.readthedocs.io/en/latest/tutorials/sklearn_crfsuite.html#training-data
+
+But I don't know the process at each step, like the training process, evaluation, and testing.
+
+I use this code :
+
+ data_frame = eli5.format_as_dataframes(
+ eli5.explain_weights_sklearn_crfsuite(self.crf))
+
+
+Targets (image of a table of numbers)
+
+Transition Features (image of a table of numbers)
+
+How are those numbers computed?
+
+And one more thing confuses me:
+
+crf = sklearn_crfsuite.CRF(
+ algorithm='lbfgs',
+ c1=0.1,
+ c2=0.1,
+ max_iterations=20,
+ all_possible_transitions=False,
+)
+
+
+What is the lbfgs algorithm? Is the CRF itself not the algorithm? Why do I need lbfgs? What exactly is a conditional random field?
+"
+"['reinforcement-learning', 'intelligent-agent']"," Title: Can the agent of reinforcement learning system serve as the environment for other agents and expose actions as services?Body: Can the agent of reinforcement learning system serve as the environemnt for other agents and expose actions as services? Are there research that consider such question?
+
+I tried to formulate the problem of a network of reinforcement learning systems on another site of the Stack Exchange network: https://cs.stackexchange.com/questions/111820/value-flow-and-economics-in-stacked-reinforcement-learning-systems-agent-as-r
+
+So, is there some research on such stacked RL systems? Of course, I know that Google is my friend, but sometimes I come up with ideas that I don't know how to name, or how they are named by other scientists who have already discovered and researched them. That is why I am using Stack Exchange to get some keywords for my ideas, so that I can explore the field further myself using those keywords. So, what are the keywords of research about such stacked RL systems, about agent-environment interchange in reinforcement learning?
+"
+"['reinforcement-learning', 'q-learning']"," Title: Can multiple reinforcement algorithms be applied to the same system?Body: Can a system, for instance, a robotic vehicle, be controlled by more than one reinforcement learning algorithm. I intend to use one to address collision avoidance whereas the other to tackle autonomous task completion.
+"
+"['neural-networks', 'machine-learning', 'datasets']"," Title: What can be inferred about the training data from a trained neural network?Body: Suppose we trained a neural network on some training set that we call $X$.
+
+Given the neural network and the method of training (algorithm, hyperparameters, etc.), can we infer anything about $X$?
+
+Now, instead, suppose we also have some subset of the training data $Y \subseteq X$ available. Is there anything we can infer about $Y^c$?
+"
+"['education', 'journalism']"," Title: Which nonfictional documentaries about Artificial Intelligence are available?Body: From the subjective perspective, the number of documentaries about the subject Artificial Intelligence and robotics is small. It seems, that the topic is hard to visualize for the audience and in most cases, the assumption is, that the recipient isn't familiar with computers at all. I've found the following documentaries:
+
+
+- The Computer Chronicles - Artificial Intelligence (1985)
+- The Machine That Changed the World (1991), Episode IV, The Thinking Machine
+- Robots Rising (1998)
+- Rodney's Robot Revolution (2008)
+
+
+My subjective impression is that the quality of the films from the 1980s was higher than that of modern documentaries, and that in 50% of the documentaries Rodney Brooks is the host. Are more documentaries available which can be recommended to watch?
+
+Focus on non-fictional documentaries
+
+Some fictional movies were already mentioned in a different post, for example Colossus: The Forbin Project (1970), Blade Runner (1982) or A.I. Artificial Intelligence (2001). They are based on fictional characters which don't exist, and the presented robots run on a Hollywood OS. This question is only about nonfictional motion pictures.
+"
+"['reinforcement-learning', 'definitions', 'environment', 'markov-property']"," Title: Can non-Markov environments also be deterministic?Body: The definition of deterministic environment I am familiar with goes as follows:
+
+
+ The next state of the agent depends only on the current state and the action chosen by the agent.
+
+
+By exclusion, everything else would be a stochastic environment.
+
+However, what about environments where the next state depends deterministically on the history of previous states and actions chosen? Are such environments also considered deterministic? Are they very uncommon, and hence just ignored, or should I include them into my working definition of deterministic environment?
+"
+"['neural-networks', 'convolutional-neural-networks', 'dropout', 'regularization', 'relu']"," Title: Dropout causes too much noise for network to trainBody: I am using dropout of different values to train my network. The problem is, dropout is contributing almost nothing to training, either causing so much noise the error never changes, or seemingly having no effect on the error at all:
+
+The following runs were seeded.
+
+Key: dropout = 0.3 means a 30% chance of dropout. In each graph, the x-axis is the iteration and the y-axis is the error.
+
+(Error-vs-iteration plots, not reproduced here, were shown for dropout = 0, dropout = 0.001, dropout = 0.1 and dropout = 0.5.)
+
+I don't quite understand why a dropout of 0.5 effectively kills the network's ability to train. This specific network here is rather small, a CNN with the following architecture:
+
+3x3x3 Input image
+3x3x3 Convolutional layer: 3x3x3, stride = 1, padding = 1
+20x1x1 Flatten layer: 27 -> 20
+20x1x1 Fully connected layer: 20
+10x1x1 Fully connected layer: 10
+2x1x1 Fully connected layer: 2
+
+
+But I have tested a CNN with architecture:
+
+10x10x3 Input image
+9x9x12 Convolutional layer: 4x4x12, stride = 1, padding = 1
+8x8x12 Max pooling layer: 2x2, stride = 1
+6x6x24 Convolutional layer: 3x3x24, stride = 1, padding = 0
+5x5x24 Max pooling layer: 2x2, stride = 1
+300x1x1 Flatten layer: 600 -> 300
+300x1x1 Fully connected layer: 300
+100x1x1 Fully connected layer: 100
+2x1x1 Fully connected layer: 2
+
+
+overnight with dropout = 0.2 and it completely failed to learn anything, having an accuracy of just below 50%, whereas without dropout, its accuracy is ~85%. I would just like to know if there's a specific reason as to why this might be happening. My implementation of dropout is as follows:
+
+activation = relu(val)*(random.random() > self.dropout)
+
+then at test time:
+
+activation = relu(val)*(1-self.dropout)
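+
+For comparison, here is a minimal vectorized sketch of the inverted-dropout variant applied to a whole layer, assuming NumPy activations (this is only an illustration, not necessarily the fix for the problem above):
+
+import numpy as np
+
+def dropout_forward(a, p_drop, training):
+    # Inverted dropout: scale the kept units at train time, so no rescaling is needed at test time.
+    if not training or p_drop == 0.0:
+        return a
+    mask = (np.random.rand(*a.shape) > p_drop) / (1.0 - p_drop)
+    return a * mask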
+"
+"['natural-language-processing', 'philosophy', 'natural-language-understanding']"," Title: Can a computer identify the philosophical concept on which a given story is based?Body: Say you have to enter a story to a computer. Now, the computer has to identify the philosophical concept on which the story is based, say:
+
+- Was it a "self-fulfilling prophecy"?
+
+- Was it an example of "Deadlock" or "Pinocchio paradox situation"?
+
+- Was it an example of how rumours magnify? or something similar to a chain reaction process?
+
+- Was it an example of "cognitive dissonance" of a person?
+
+- Was it a story about "altruism"?
+
+- Was it a story about a "misunderstanding" when a person did something "innovative" but it accidentally was innovated earlier so the person was "falsely accused" of "plagiarising"?
+
+
+And so on.
+Assume that the story is not just a heavy rephrasing of a pre-existing story: not only are the character names and identities totally changed, but the context is completely different, and the exact tasks the characters perform are changed.
+Can computers identify such "concepts" from stories? If yes, then what mechanism do they use?
+"
+"['deep-learning', 'deep-neural-networks', 'learning-curve']"," Title: Spikes in of Train and Test errorBody: I learn a DNN for image recognition. During each epoch, I calculate mean loss in the training set. After each epoch, I calculate loss and number of errors over both training and test set. The problem is, training and test error go to (almost) zero, then increase, go to zero again, increase, and so on. The process seems stochastic.
+
+epoch: 1 mean_loss=0.109 train: errs=7 loss=0.00622 test: errs=3 loss=0.00608
+epoch: 2 mean_loss=0.00524 train: errs=5 loss=0.00309 test: errs=3 loss=0.00369
+epoch: 3 mean_loss=0.00408 train: errs=13 loss=0.00614 test: errs=7 loss=0.00951
+epoch: 4 mean_loss=0.00198 train: errs=113 loss=0.102 test: errs=51 loss=0.265
+epoch: 5 mean_loss=0.00424 train: errs=3 loss=0.00201 test: errs=2 loss=0.00148
+epoch: 6 mean_loss=0.0027 train: errs=1 loss=0.000466 test: errs=2 loss=0.00193
+epoch: 7 mean_loss=0.00797 train: errs=5 loss=0.00381 test: errs=0 loss=0.000493
+epoch: 8 mean_loss=0.00368 train: errs=1 loss=0.000345 test: errs=2 loss=0.00148
+epoch: 9 mean_loss=0.000358 train: errs=0 loss=6.76e-05 test: errs=0 loss=0.000446
+epoch: 10 mean_loss=0.00101 train: errs=164 loss=0.0863 test: errs=67 loss=0.19
+epoch: 11 mean_loss=0.000665 train: errs=0 loss=2.38e-05 test: errs=0 loss=9.86e-05
+epoch: 12 mean_loss=0.00714 train: errs=5 loss=0.00909 test: errs=0 loss=0.00816
+epoch: 13 mean_loss=0.00266 train: errs=73 loss=0.0333 test: errs=10 loss=0.0192
+epoch: 14 mean_loss=0.00213 train: errs=0 loss=7.74e-05 test: errs=0 loss=0.000197
+epoch: 15 mean_loss=6.12e-05 train: errs=0 loss=7.66e-05 test: errs=0 loss=3.44e-05
+epoch: 16 mean_loss=0.00162 train: errs=5 loss=0.00265 test: errs=0 loss=0.0012
+epoch: 17 mean_loss=0.000159 train: errs=0 loss=3.11e-05 test: errs=0 loss=4.26e-05
+epoch: 18 mean_loss=4.68e-05 train: errs=0 loss=3.28e-05 test: errs=0 loss=6.05e-05
+epoch: 19 mean_loss=2.47e-05 train: errs=0 loss=2.8e-05 test: errs=0 loss=5.01e-05
+epoch: 20 mean_loss=2.2e-05 train: errs=0 loss=2.31e-05 test: errs=0 loss=3.95e-05
+epoch: 21 mean_loss=2.37e-05 train: errs=0 loss=1.76e-05 test: errs=0 loss=2.52e-05
+epoch: 22 mean_loss=1.4e-05 train: errs=0 loss=1.16e-05 test: errs=0 loss=1.52e-05
+epoch: 23 mean_loss=2.13e-05 train: errs=0 loss=1.65e-05 test: errs=0 loss=2.13e-05
+epoch: 24 mean_loss=1.53e-05 train: errs=0 loss=1.91e-05 test: errs=0 loss=2.46e-05
+epoch: 25 mean_loss=0.00419 train: errs=0 loss=5.27e-05 test: errs=0 loss=4.65e-05
+epoch: 26 mean_loss=0.000372 train: errs=6 loss=0.00297 test: errs=3 loss=0.00731
+epoch: 27 mean_loss=0.0016 train: errs=0 loss=4.23e-05 test: errs=0 loss=3.69e-05
+epoch: 28 mean_loss=3.34e-05 train: errs=0 loss=2.44e-05 test: errs=0 loss=2.76e-05
+epoch: 29 mean_loss=7.03e-05 train: errs=0 loss=2.16e-05 test: errs=0 loss=1.69e-05
+epoch: 30 mean_loss=2.41e-05 train: errs=0 loss=1.84e-05 test: errs=0 loss=1.77e-05
+epoch: 31 mean_loss=1.26e-05 train: errs=0 loss=2.11e-05 test: errs=0 loss=1.78e-05
+epoch: 32 mean_loss=1.39e-05 train: errs=0 loss=2.75e-05 test: errs=0 loss=2.42e-05
+epoch: 33 mean_loss=7.68e-05 train: errs=0 loss=0.00014 test: errs=0 loss=4.66e-05
+epoch: 34 mean_loss=2.53e-05 train: errs=0 loss=1.48e-05 test: errs=0 loss=1.56e-05
+epoch: 35 mean_loss=0.000352 train: errs=1786 loss=2.17 test: errs=493 loss=2.56
+epoch: 36 mean_loss=0.0088 train: errs=0 loss=0.000347 test: errs=0 loss=0.000449
+epoch: 37 mean_loss=0.000395 train: errs=0 loss=6.18e-05 test: errs=0 loss=0.000125
+epoch: 38 mean_loss=5e-05 train: errs=0 loss=6.73e-05 test: errs=0 loss=9.89e-05
+epoch: 39 mean_loss=0.00401 train: errs=26 loss=0.00836 test: errs=27 loss=0.0269
+epoch: 40 mean_loss=0.00051 train: errs=0 loss=7.66e-05 test: errs=0 loss=7.07e-05
+epoch: 41 mean_loss=5.49e-05 train: errs=0 loss=2.47e-05 test: errs=0 loss=2.58e-05
+epoch: 42 mean_loss=3.38e-05 train: errs=0 loss=1.67e-05 test: errs=0 loss=2.1e-05
+epoch: 43 mean_loss=2.45e-05 train: errs=0 loss=1.28e-05 test: errs=0 loss=2.95e-05
+epoch: 44 mean_loss=0.00137 train: errs=44 loss=0.0141 test: errs=16 loss=0.0207
+epoch: 45 mean_loss=0.000785 train: errs=1 loss=0.000493 test: errs=0 loss=4.46e-05
+epoch: 46 mean_loss=5.46e-05 train: errs=1 loss=0.000487 test: errs=0 loss=1.34e-05
+epoch: 47 mean_loss=1.99e-05 train: errs=1 loss=0.00033 test: errs=0 loss=1.57e-05
+epoch: 48 mean_loss=1.78e-05 train: errs=1 loss=0.000307 test: errs=0 loss=1.58e-05
+epoch: 49 mean_loss=0.000903 train: errs=1 loss=0.00103 test: errs=0 loss=0.000393
+epoch: 50 mean_loss=4.74e-05 train: errs=0 loss=4.63e-05 test: errs=0 loss=3.53e-05
+Finished Training, time: 234.69774420000002 sec
+
+
+The images are 96*96 grayscale. There are about 7000 training and 1750 test images. The order of presentation is random, and different at each epoch. Each image either contains the object or not. The architecture is (Conv2d->ReLU->BatchNorm2d->MaxPool)*4->AvgPool(6,6)->Flatten->Conv->Conv->Conv. All MaxPools are 2*2. The first two Conv2d layers are 5*5 with padding=2, the others 3*3 with padding=1. The optimiser is like this:
+
+Optimizer= Adam (
+Parameter Group 0
+ amsgrad: False
+ betas: (0.9, 0.999)
+ eps: 1e-08
+ lr: 0.001
+ weight_decay: 1e-05
+)
+
+
+Currently I just choose the epoch when the training set error was minimal.
+
+import copy
+
+# keep a copy of the network from the epoch with the lowest training loss so far
+if epoch == 0 or train_loss < train_loss_best:
+    net_best = copy.deepcopy(net)
+    train_loss_best = train_loss
+
+
+It works, but I don't like it. Is there a way to make the learning more stable and steady?
+"
+"['neural-networks', 'image-recognition', 'python']"," Title: Better to learn the same small set for multiple epochs then go to the next or learn from each one time repeatedly for multiple times?Body: I don't know if I worded the title correctly.
+
+I have a big dataset (300,000 images after augmentation) and I've split it into 10 parts, because I can't convert all the images into a single NumPy array and save it; the file would be too large.
+
+Now, I have a neural network (using Keras with TF). My question is: is it better to train on each file individually for X epochs (File 1 for 5 epochs, then File 2 for 5 epochs, etc.), or should I do one epoch on each, repeatedly (File 1 for an epoch, File 2 for an epoch, etc., repeated 5 times)?
+
+I've used the former and I get an accuracy of about 88%. Would I get an improvement by doing the latter?
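+
+For reference, here is a minimal sketch of the interleaved scheme (one epoch per file before moving on). The file names, image shapes and the toy model here are placeholder assumptions, not a prescription:
+
+import numpy as np
+from tensorflow import keras
+
+# Placeholder model; substitute your own architecture.
+model = keras.Sequential([
+    keras.layers.Flatten(input_shape=(64, 64, 3)),
+    keras.layers.Dense(128, activation='relu'),
+    keras.layers.Dense(10, activation='softmax'),
+])
+model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
+
+for cycle in range(5):              # repeat the whole pass 5 times
+    for part in range(10):          # one epoch per file before moving to the next
+        x = np.load(f'images_part{part}.npy')   # hypothetical file names
+        y = np.load(f'labels_part{part}.npy')
+        model.fit(x, y, epochs=1, batch_size=32)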
+"
+"['convolutional-neural-networks', 'training', 'yolo', 'incremental-learning', 'catastrophic-forgetting']"," Title: How can I incrementally train a Yolo model without catastrophic forgetting?Body: I have successfully trained a Yolo model to recognize k classes. Now I want to train by adding k+1 class to the pre-trained weights (k classes) without forgetting previous k classes. Ideally, I want to keep adding classes and train over the previous weights, i.e., train only the new classes. If I have to train all classes (k+1) every time a new class is added, it would be too time-consuming, as training k classes would take $k*20000$ iterations, versus the $20000$ iterations per new class if I can add the classes incrementally.
+
+The dataset is balanced (5000 images per class for training).
+
+I would appreciate it if you could suggest some methods or techniques to do this kind of continual training with YOLO.
+"
+"['reinforcement-learning', 'monte-carlo-tree-search', 'planning']"," Title: Several questions related to UCT and MCTSBody: In Bandit Based Monte-Carlo Planning, the article where UCT is introduced as a planning algorithm, there is an algorithm description in page 285 (4 of the pdf).
+Comparing this implementation of UCT (a specific type of MCTS algorithm) to the application normally used in games, there is one major difference. Here, the rewards are calculated for every state, instead of only doing an evaluation at the end of the simulation.
+My questions are (they are all related to each other):
+
+- Is this the only big difference? Or in other words, can I do the same implementation as in MCTS for games with the 4 stages: selection, expansion, simulation and backpropagation, where the result of the simulation is the accumulated reward instead of a value between 0 and 1? How would the UCT selection be adjusted in this case?
+
+- What does the UpdateValue function in line 12 do exactly? In the text it says it is used to adjust the state-action pair value at a given depth; will this be used for the selection? How is this calculated exactly?
+
+- What is the depth parameter needed for? Is it related with the UpdateValue?
+
+
+Finally, I would like to ask if you know of any other papers where a clear implementation of UCT for planning is used with multiple rewards, not only a reward at the end of the simulation.
+"
+"['genetic-algorithms', 'evolutionary-algorithms', 'biology']"," Title: How can I solve the linkage problem in genetic algorithms?Body: In a genetic algorithm, the order of the genes on a chromosome can have a significant effect on the performance (capacity to generate adaptation) of the genetic algorithm, where two or more genes interact to produce highly fit individuals. If we have a chromosome length of $100$ and genes $A$ and $B$ interact, then having them next to each other is strongly preferable than having them at opposing ends of the chromosome. In the former case, the probability of crossover breaking the genes apart is $1$ in $100$, and in the latter it is one.
+
+What mechanisms have been tried to optimise the order of genes on a chromosome, so that interacting genes are best protected from crossover? Is it even possible?
+
+I've asked at Biology SE if there exists any known biological mechanism which is responsible for such a possible order of the genes on a chromosome.
+"
+"['natural-language-processing', 'voice-recognition', 'speech-synthesis']"," Title: Can computers recognise ""grouping"" from voice tonality?Body: In human communication, tonality or tonal language play many complex information, including emotions and motives. But excluding such complex aspects, tonality serves some a very basic purpose of ""grouping"" or ""taking common"" functions such as:
+
+
+- The sweet, (pause), bread-and-drink.
+
+
+It means ""The sweet bread and the sweet drink"". However
+
+
+- The sweet-bread, (pause) and drink.
+
+
+It means only the bread is sweet but the drink isn't necessarily sweet, or the drink's sweetness property isn't assigned.
+
+Can computers recognise these differences of meaning based on tonality?
+"
+"['neural-networks', 'python', 'regression']"," Title: How can I perform multivariable regression with neural networks?Body: I want to use a neural network to perform a multivariable regression, where my dataset contains multiple features, but I can't for the life of me figure it out. Every kind of tutorial on the internet seems to be either for a single feature without information on how to upgrade it to multiple, or results in a yes or a no when I need numeric predictions (that is, it uses neural networks for classification).
+
+Can someone please recommend some kind of resource I can use to learn this?
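+
+In case a concrete starting point helps, here is a minimal sketch of a regression network that takes several input features and predicts a single numeric value (the framework, layer sizes and toy data here are just placeholder assumptions):
+
+import numpy as np
+from tensorflow import keras
+
+x = np.random.rand(1000, 5)                            # 1000 samples, 5 features each
+y = x @ np.array([2.0, -1.0, 0.5, 3.0, 0.0]) + 1.0     # toy numeric target
+
+model = keras.Sequential([
+    keras.layers.Dense(32, activation='relu', input_shape=(5,)),
+    keras.layers.Dense(32, activation='relu'),
+    keras.layers.Dense(1),                  # linear output -> numeric prediction, not a class
+])
+model.compile(optimizer='adam', loss='mse')  # mean squared error for regression
+model.fit(x, y, epochs=20, verbose=0)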
+"
+"['chat-bots', 'history', 'catastrophic-forgetting']"," Title: Was the corruption of Microsoft's ""Tay"" chatbot an example of catastrophic forgetting?Body: Tay was a chatbot, who learned from Twitter users.
+
+
+ Microsoft's AI fam from the internet that's got zero chill. The more you talk the smarter Tay gets. — Twitter tagline.
+
+
+Microsoft trained the AI to have a basic ability to communicate, and taught it a few jokes from hired comedians, before setting it loose to learn from its conversations.
+
+This was a mistake.
+
+But why did Tay go so wrong? Was this an example of catastrophic forgetting, where short, recent trends override large, less recent training, or was it something else entirely?
+"
+"['deep-learning', 'speech-synthesis']"," Title: Improving the performance of a DNN modelBody: I have been executing an open-source Text-to-speech system Ossian. It uses feed forward DNNs for it's acoustic modeling. The error graph I've got after running the acoustic model looks like this:
+
+Here is some relevant information:
+
+
+- Size of Data: 7 hours of speech data (4000 sentences)
+- Some hyper-parameters:
+
+
+- batch_size : 128
+- training_epochs : 15
+- L2_regularization: 0.003
+
+
+
+Can anyone point me in the right direction to improve this model? I'm assuming it is suffering from an overfitting problem. What should I do to avoid this? Increase the amount of data? Or change the batch-size/epochs/regularization parameters? Thanks in advance.
+"
+"['neural-networks', 'unsupervised-learning']"," Title: Is a multi-layer Kohonen network possible?Body: The Kohonen network is one fully connected layer, which clusters the input into classes by a given metric. However, the one layer does not allow to operate with complex relations, that's why deep learning is usually used.
+
+Is it possible, then, to make a multi-layered Kohonen network?
+
+AFAIK, the output of the first layer already consists of cluster flags, so the activation function on the non-last layers must be different from the original Kohonen definition?
+"
+"['neural-networks', 'convolutional-neural-networks', 'data-preprocessing']"," Title: Why do we normalize data in a deep neural network?Body: I have asked this question a number of times, but I always get confusing answers to this, like ""normalized data works better"", ""data lives in the same scale""
+
+How can $\frac{x - \mu}{\sigma}$ make the scale of images the same? Please explain the maths to me. Also, take the MNIST dataset as an example and illustration.
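+
+For concreteness, here is a minimal NumPy sketch of the standardisation being asked about, using MNIST-sized images as a stand-in (the random data is just a placeholder for the real dataset):
+
+import numpy as np
+
+# e.g. 10000 MNIST-style images of 28x28 pixels with values in [0, 255]
+x = np.random.randint(0, 256, size=(10000, 28, 28)).astype(np.float32)
+
+mean = x.mean()             # dataset-wide mean pixel value
+std = x.std()               # dataset-wide standard deviation
+x_norm = (x - mean) / std   # now roughly zero mean and unit variance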
+"
+"['reinforcement-learning', 'rewards', 'reward-shaping', 'reward-design', 'sparse-rewards']"," Title: Are there any reliable ways of modifying the reward function to make the rewards less sparse?Body: If I am training an agent to try and navigate a maze as fast as possible, a simple reward would be something like
+
+\begin{align}
+R(\text{terminal}) &= N - \text{time}\ \ , \ \ N \gg \text{everything} \\
+R(\text{state})& = 0\ \ \text{if not terminal}
+\end{align}
+i.e. when it reaches the terminal state, it receives a reward but one that decreases if it is slower. Actually I'm not sure if this is better or worse than $R(\text{terminal}) = 1 / \text{time}$, so please correct me if I'm wrong.
+
+However, if the maze is really big, it could spend a long time wandering around before even encountering that reward. Are there any reliable ways of modifying the reward function to make the rewards less sparse? Assume that the agent knows the Euclidean distance between itself and the exit, just not the topography of the maze.
+
+Is it at all sound to simply do something like
+
+\begin{align}
+R(\text{current}) = (d_E(\text{start}, \text{exit}) - d_E(\text{current}, \text{exit})) + (\text{terminal}==True)*(N-\text{time})?
+\end{align}
+
+Or if not, what kind of dense heuristic reward or other techniques might be better?
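+
+For clarity, here is the candidate shaped reward above written out as a small Python sketch (the function and variable names are hypothetical):
+
+import math
+
+def euclid(a, b):
+    return math.dist(a, b)   # Euclidean distance between two (x, y) points
+
+def shaped_reward(start, current, exit_pos, terminal, time, N):
+    dense = euclid(start, exit_pos) - euclid(current, exit_pos)
+    terminal_bonus = (N - time) if terminal else 0.0
+    return dense + terminal_bonus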
+"
+"['machine-learning', 'reinforcement-learning', 'research', 'markov-decision-process']"," Title: How to stay a up-to-date researcher in ML/RL community?Body: As a student who wants to work on machine learning, I would like to know how it is possible to start my studies and how to follow it to stay up-to-date. For example, I am willing to work on RL and MAB problems, but there are huge literatures on these topics. Moreover, these topics are studied by researchers from different communities such as AI and ML, Operations Research, Control Engineering, Statistics, etc. And, I think that several papers are published on these topics every week which make it so difficult to follow them.
+
+I would be thankful if someone can suggest a road-map to start studying these topics, follow them and how I should select and study new published papers. Finally, I am willing to know the new trend in RL and MAB problem.
+"
+"['reinforcement-learning', 'neural-architecture-search']"," Title: How does RL based neural architecture search work?Body: I have read through many of the papers and articles linked in this thread but I haven't been able to find an answer to my question.
+
+I have built some small RL networks and I understand how REINFORCE works. I don't quite understand how they are applied to NAS though. Usually RL agents map a state to an action and get a reward so they can improve their decision making (which action to choose). I understand that the reward comes from the accuracy of the child network and the action is a series of digits encoding the network architecture.
+
+What is passed as the state to the RL agent? This doesn't seem to be mentioned in the papers and articles I read. Is it the previous network? Example input data?
+"
+"['comparison', 'geometric-deep-learning', 'graphs', 'attention']"," Title: Does GraphSage use hard attention?Body: I was reading the recent paper Graph Representation Learning via Hard and Channel-Wise Attention Networks, where the authors claim that there is no hard attention operator for graph data.
+
+From my understanding, the difference between hard and soft attention is that for soft attention you're computing the attention scores between the nodes and all their neighbors while for hard attention you have a sampling function that selects only the most important neighbors. If that is the case, then GraphSage is an example of hard attention, because they apply the attention only on a subset of each node's neighbors.
+
+Is my understanding of hard and soft attention wrong, or the claim that the authors made does not hold?
+"
+"['convolutional-neural-networks', 'papers', 'anomaly-detection']"," Title: Understanding the reconstruction loss in the paper ""Anomaly Detection using Deep Learning based Image Completion""Body: I would like to implement the approach represented in this paper. Here they used following reconstruction loss:
+
+$$
+L(X)= \frac{\lambda \cdot || M \odot (X - F(\overline{M} \odot X)) ||_{1} + (1 - \lambda) \cdot || \overline{M} \odot (X - F(\overline{M} \odot X)) ||_{1}}{N}
+$$
+
+Unfortunately, the author does not explain the function $F$.
+Does someone know a similar function or could understand the function's purpose from the context?
+"
+"['machine-learning', 'python', 'data-science', 'data-preprocessing']"," Title: How to rescale data to its original range after MinMaxScaler?Body: I'm using sklearn
's MinMaxScaler in order to scale my data down. However, it would be nice to be able to rescale it back to its original range. Is there any way I can do this?
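+
+A minimal sketch of the round trip, assuming the standard scikit-learn API (fit_transform to scale down, inverse_transform to go back):
+
+import numpy as np
+from sklearn.preprocessing import MinMaxScaler
+
+data = np.array([[1.0], [5.0], [10.0]])
+
+scaler = MinMaxScaler()
+scaled = scaler.fit_transform(data)           # values mapped into [0, 1]
+restored = scaler.inverse_transform(scaled)   # back to the original range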
+"
+"['ai-design', 'q-learning', 'optimization']"," Title: Is it possible to have a dynamic $Q$-function?Body: I am trying to use Q-learning for energy optimization. I only wish to have states that will be visited by the learning agent, and, for each state, I have a function that generates possible actions, so that I would have a Q-table in form of a nested dictionary, with states (added as they occur) as keys whose values are also dictionaries of possible actions as keys and Q-values as values. Is this possible? How would it affect learning? What other methods can I use?
+
+If it is possible and okay, and I want to update the Q-value, but the next state is one that was never there before and has to be added to my nested dictionary with all possible actions having initial Q-values of zero, how do I update the Q-value, now that all of the actions in this next state have Q-values of zero?
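+
+A minimal sketch of the nested-dictionary Q-table being described, with states added lazily as they are visited (possible_actions is a placeholder for the action-generating function mentioned above):
+
+Q = {}   # state -> {action: Q-value}
+
+def ensure_state(state, possible_actions):
+    # Add a newly visited state with all of its actions initialised to zero.
+    if state not in Q:
+        Q[state] = {a: 0.0 for a in possible_actions(state)}
+
+def q_update(state, action, reward, next_state, possible_actions, alpha=0.1, gamma=0.99):
+    ensure_state(state, possible_actions)
+    ensure_state(next_state, possible_actions)
+    best_next = max(Q[next_state].values(), default=0.0)   # zero for a brand-new state
+    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])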
+"
+"['ai-design', 'agi', 'prediction', 'data-compression']"," Title: Can a data compression function be used to make predictions?Body: I've heard that prediction is equivalent to data compression.
+Is there a way to take a compression function and use it to create an AI that predicts?
+"
+"['reinforcement-learning', 'q-learning']"," Title: Probabilistic action selection in pursuit algorithmBody: In the Pursuit algorithm (to balance exploration and exploitation), the greedy action has a probability say $p_1$ (updated every episode) of being selected, while the rest have a probability $p_2$ (updated every episode) of being selected.
+
+Could you please show me some example code (Python) for how to enforce such probabilistic action selection?
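+
+A minimal NumPy sketch of the kind of probabilistic selection being described (the number of actions and the pursuit step size beta are placeholder values):
+
+import numpy as np
+
+n_actions = 4
+probs = np.full(n_actions, 1.0 / n_actions)   # selection probabilities, updated every episode
+beta = 0.1                                    # pursuit step size
+
+greedy = 2   # index of the current greedy action, taken from the value estimates
+# Move probability mass towards the greedy action and away from the others.
+probs += beta * ((np.arange(n_actions) == greedy).astype(float) - probs)
+
+action = np.random.choice(n_actions, p=probs)   # sample an action according to probs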
+"
+"['neural-networks', 'recurrent-neural-networks', 'artificial-neuron', 'neurons']"," Title: How do layers in an artificial neural network transform inputs to outputs?Body: To me, most ANN/RNN related articles don't tell me actually how the network is implemented. I know that in the ANN you'll have multiple neurons, activation function, weights, etc. But, how do you, actually, in each neuron, convert the input to the output?
+
+Putting the activation function aside, is the neuron simply doing $\text{input}*a+b=\text{output}$ and trying to find the correct $a$ and $b$? If that's true, then what about when you have two neurons whose outputs ($c$ and $d$) both point to one neuron? Do you first multiply $c$ and $d$ and then feed the result in as input?
+"
+"['deep-learning', 'convolutional-neural-networks', 'generative-adversarial-networks', 'generative-model', 'image-generation']"," Title: If the goal of training of a GAN is to have $P_g=P_{data}$, shouldn't this produce the exact same images?Body: Referring to the blog, Image Completion with Deep Learning in TensorFlow, it clearly says that we would want a generator $g$ whose modeled distribution fits our dataset $data$, in other words, $P_{data}=P_g$.
+
+But, as described earlier in the blog, the space $P_{data}$ is in is a higher-dimensional space, where a dimension represents a particular pixel in an image, making it a $64*64*3$ dimensional space (in this case). I have a few questions regarding this
+
+
+- Since each pixel here will have an intensity value, will the pdf try to encapsulate a unique pdf for each pixel?
+- If we sample the most likely pixel value for each pixel, considering the distributions need not be the same for each pixel, is it not quite likely that the most probabilistic generated image is just noise apart from things like a common background or so?
+- If $P_g$ is trying to replicate $P_{data}$ only, does that mean a GAN only tries to learn lower level features that are common in the training set? Are GANs clueless about what its doing?
+
+"
+"['neural-networks', 'reinforcement-learning', 'game-ai', 'rewards']"," Title: Will the RL agent implemented as a neural network fine-tune itself?Body: Normally, when you develop a neural network, train it for object recognition (on normal objects like bike, car, plane, dog, cloud, etc.), and it turns out to perform very well, you would like to fine-tune it for e.g. recognizing dog breeds, and this is called fine-tuning.
+
+On the other hand, in reinforcement learning, let's consider a game with rewards on checkpoints $1, 2, 3, \dots n$. When you have a bot that plays and learns using some (e.g. value) neural net, it develops some style of solving the problem to reach some checkpoint $k$, and, after that, when it will reach the $k+1$ checkpoint, it probably will have to revalue the whole strategy.
+
+In this situation, will the bot fine-tune itself? Does it make sense to keep the replay buffer as is and to ""reset"" the neural net to train it from scratch, or is it better to stay with the fine-tuning approach?
+
+If possible, topic-related papers would be very welcome!
+"
+"['reinforcement-learning', 'ai-design', 'q-learning']"," Title: How can I use Q-learning for inventory decision making?Body: I am trying to model operational decisions in inventory control. The control policy is base stock with a fixed stock level of $S$. That is replenishment orders are placed for every demand arrival to take the stock level to $S$. The replenishments arrive at constant lead time $L$. There is an upper limit $D$ on the allowed stock out time and it is measured every $T$ periods, otherwise, a cost is incurred $C_p$. This system functions in a similar manner to the M/G/S queue. The stock out time can be thought as the customer waiting time due to all server busy. So every $R$ period ($R$ is less than $T$) the inventory level and pipeline of outstanding orders are monitored and a decision about whether to expedite outstanding order (a cost involved $C_e$) or not is taken in order to control the waiting/stock-out time and to minimize the total costs.
+
+I feel it is a time- and state-dependent problem, and I would like to use $Q$-learning to solve this MDP. The time period $T$ is typically a quarter, i.e. 3 months, and I plan to simulate demands as Poisson arrivals. My apprehension is whether simulating arrivals would help to evaluate the Q-values, because the simulation is for such a short period. Am I not overestimating the Q-value in this way? I would appreciate some help on how I should proceed with the implementation.
+"
+"['machine-learning', 'hyper-parameters', 'hyperparameter-optimization']"," Title: Does this hyperparameter optimisation approach yield the optimal hyperparameters?Body: Say I have a ML model which is not very costly to train. It has around say 5 hyperparameters.
+
+One way to select the best hyperparameters would be to keep all the other hyperparameters fixed and train the model by changing only one hyperparameter within a certain range. For the sake of mathematical convenience, we assume that for the hyperparameter $h^1$, keeping all other hyperparameters fixed to their initial values, the model performs best when $h^1_{low} < h^1 < h^1_{high}$ (which we found out by running the model on a huge range of $h^1$). Now we fix $h^1$ to one of the best values and tune $h^2$ the same way, where $h^1$ is the chosen value and the rest of the hyperparameters are again fixed at their initial values.
+
+My question is: does this method find the best hyperparameter choices for the model? I know that if the hyperparameters are independent, then this definitely finds the best solution, but in the general case, what is the theory around this? (NOTE: I am not asking about the general problem of choosing hyperparameters; I am asking about the aforementioned approach to choosing them.)
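+
+To make the question concrete, here is a sketch of the coordinate-wise procedure being described (the evaluation function and candidate ranges are hypothetical):
+
+def coordinate_search(initial, ranges, evaluate):
+    # initial: dict of starting hyperparameter values
+    # ranges: dict mapping each hyperparameter name to its candidate values
+    # evaluate: function(hyperparameter dict) -> validation score (higher is better)
+    best = dict(initial)
+    for name, candidates in ranges.items():       # tune one hyperparameter at a time
+        scores = {}
+        for value in candidates:
+            trial = dict(best)
+            trial[name] = value
+            scores[value] = evaluate(trial)
+        best[name] = max(scores, key=scores.get)  # fix the best value before moving on
+    return best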
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'definitions', 'implementation']"," Title: Are feature maps merged or are they passed on as they are?Body: I am unsure about the following parts of the architecture and mechanics of convolution layers in CNNs. Possibly, this is implementation-dependent though.
+
+First question:
+
+Say I have 2 convolution layers with 10 filters each and the dimension of my input tensors is $n \times m \times 1$ (so, grayscale images for example). Passing this input to the first convolution layer results in 10 feature maps (10 matrices of $n \times m$, if we use padding), each produced by a different filter.
+
+Now, what does actually happen when this is passed to the second convolution layer? Are all 10 feature maps passed as one big $m \times n \times 10$ tensor or are the overlapping cells of the 10 feature maps averaged and a $m \times n \times 1$ tensor is passed to the next convolution layer? The former would result in an explosion of feature maps with increasing number of convolution layers and the spacial complexity would be in $\mathcal{O}\left((nm)^k\right)$, where $k$ is the number of chained convolution layers. Averaging the feature maps before passing them to the next layer would keep the complexity linear. So, which is it? Or are both possibilities commonly used?
+
+Second question (with two sub questions):
+
+a) This is a similar question. If I have an input volume of $n \times m \times 3$ (e.g. RGB images) and I have again 2 convolution layers with 10 filters, does each convolution layer have in actuality 30 filters? So 10 sets of 3 filters, one for each channel? Or do I have in fact only 10 filters and the filters are applied to all 3 channels?
+
+b) This is the same question as question (1) but for channels: Once I have convolved a filter (consisting of three channel filters? (a)) over the input tensor I end up with 3 feature maps. One for each channel. What do I do with these? Do I average them component-wise with each other? Or do I keep them separate until I have convolved all 10 filters across the input and THEN average the 10 feature maps of each channel? Or do I average all 30 feature maps of all three channels? Or do I just pass on 30 feature maps to the next convoloution layers which in turn knows which of these feature maps belong to which channel?
+
+Quite a few possibilities... None of the sources I consulted makes this explicit. Maybe because it depends on the individual implementation.
+
+Anyway, would be great if somebody could clear this confusion up a little!
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'deep-rl']"," Title: Will the target network, which is less trained than the normal network, output inferior estimates?Body: I'm having some trouble understanding some parts of the usage of target networks.
+
+I get that having the same network predict the state/action/advantage values for both the current networks can lead to instability.
+
+Based on my understanding, the intuition behind the 1-step TD error is that going a step into the future will give you a better estimate, which can then be used to update your original state/action/advantage value.
+
+However, if you use a target network, which is less trained than the normal net — especially at early stages of the training — wouldn't the state/action/advantage value be updating towards an inferior estimate?
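+
+For reference, here is a tabular sketch of how a target network typically enters the 1-step TD target (a simplified, assumed setup, not any particular library's API):
+
+import numpy as np
+
+n_states, n_actions = 10, 4
+q = np.zeros((n_states, n_actions))   # online table (the network being trained)
+q_target = q.copy()                   # target: a lagged copy of the online table
+gamma, alpha, copy_every = 0.99, 0.1, 100
+
+def td_update(step, s, a, r, s_next, done):
+    global q_target
+    bootstrap = 0.0 if done else q_target[s_next].max()   # bootstrap from the lagged copy
+    q[s, a] += alpha * (r + gamma * bootstrap - q[s, a])
+    if step % copy_every == 0:
+        q_target = q.copy()           # periodically refresh the target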
+
+I've tried implementing DQNs and DDPGs on Cartpole, and I've found that the algorithms fail to converge when target networks are used, but work fine when those target networks are removed.
+"
+"['deep-learning', 'math', 'reference-request']"," Title: How can I learn tensors for deep learning?Body: I've seen in most deep learning papers use tensors. I understood what tensors are, but I want to dive into them, because I think that might be beneficial for further studies in Artificial Intelligence. Do you have any suggestion (e.g. books or papers) about that?
+"
+"['reinforcement-learning', 'ai-design', 'proximal-policy-optimization']"," Title: What is ratio of the objective function in the case of continuous action spaces?Body: I'm trying to implement the proximal policy optimization (PPO) algorithm. I'm confused on how to make it work with continuous action space.
+
+For a discrete action space, the output of the network is the probability of every available action, and then I choose the next action based on these probabilities. The ratio in the objective function is the ratio between the action probability under the new policy and the action probability under the old policy.
+
+For a continuous action space, from what I understand, the output of the network should be the action itself. What should this ratio (or the objective function itself) look like in that case?
+"
+"['machine-learning', 'reinforcement-learning', 'game-ai', 'q-learning', 'probability']"," Title: Unique game problem (ML, DP, PP etc)Body: Looking for a solution to my below game problem. I believe it to require some sort of reinforcement learning, dynamic programming, or probabilistic programming solution, but am unsure... This is my original problem, and is part of an initiative to create ""unique and challenging problem that you're able to conceptualize and then solve. 3 Judging criteria: uniqueness, complexity, and solution (no particular weighting and scoring may favor uniqueness/challenge over solution""
+
+Inspirations: Conway's Game of Life, DeepMind's Starcraft Challenge, deep Q-learning, probabilistic programming
+
+BEAR SURVIVAL
+
+A bear is preparing for hibernation. A bear must reach life-strength 1000 in order to rest & survive the winter. A bear starts off at a health of 500. A bear explores an environment of magic berries. A bear makes a move (chosen randomly with no optional direction) and comes across a berry each time. There are 100 different types of berries that all appear across the wilderness equally and infinitely.
+
+A magic berry always consumes 20 life from the bear upon arrival (this not an energy cost for moving and we should not think of it as such). A bear may then choose to give more, all, or none of its remaining life to the berry. If eaten, the berry may provide back to the bear 2x the amount of life given. Berries, however, are not the same and a bear knows this. A bear knows that any berry has some percentage of being poisonous. Of the 100 different types of berries, each may be 0%-100% poisonous. A berry that is 0% poisonous is the perfect berry and a bear knows that it should commit all of its remaining life to receive max health gain. If a bear wants to eat the berry, it must commit at least 20 more health. Again, a bear does not have to eat the berry, but if it chooses not to, it walks away and does not get the original 20 back.
+
+Example: On a bear's first move (at 500 life), it comes across a magic berry and the berry automatically takes 20 life. The bear notices that the berry is 0% poisonous, the perfect berry, and gives its remaining 480 health, eats the berry, and then receives 1000 health gain. The bear has reached it's goal, hibernates, and wins the game. However, if that first berry was 100% poisonous, the anti-berry, and the bear committed all of its remaining life it would've received back 0 health gain, died, and lost the game. A bear knows to never eat the anti-berry. It knows it can come across any poisonous value from 0-100 (3,25,52,99, etc).
+
+A bear must be picky & careful, but also bold & smart about how much life it wants to commit per berry, per move. A bear knows that if it never eats, it will eventually die as it loses 20 health per berry, per move.
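+
+Under the reading of the rules above, one way to frame a single decision is through expected value: if a berry is poisonous with probability $q$ and the bear commits an additional $c$ beyond the mandatory 20, the expected net health change from eating is
+
+$$\mathbb{E}[\Delta] = -20 - c + 2(1-q)(c+20),$$
+
+compared with a guaranteed $-20$ for walking away.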
+
+While it's important for an individual bear to survive, it is even more important for the bear population to not go extinct. A population is going extinct if they lose over half of the population that year. Bears in a population, and their consumption of berries, are completely independent of each other.
+
+Questions:
+
+
+- May we find a bear's optimal strategy for committing health & eating
+berries to reach 1000 health gain?
+- Is the bear population eventually doomed to a unfavorable environment?
+
+
+Bonus Complexity:
+
+Winter is coming, and conditions grow progressively harsher over time. A bear knows that every 10 moves, each berry will consume 20 * (fib(i)/ environmentFactor). fib(i) stands for fibonacci-sequence at index i, starting at 1. For all indexes where the progression is less than 20, a berry's initial health consumption remains at 20. environmentFactor is a single environment's progressive-harshness variable (how harsh winter becomes over time). The bear population is currently in an environment with environmentFactor of 4. Spelled out:
+
+Moves 01-10: Berries consume 20 -- 20*(1/4)
+Moves 10-20: Berries consume 20 -- 20*(1/4)
+Moves 20-30: Berries consume 20 -- 20*(2/4)
+Moves 30-40: Berries consume 20 -- 20*(3/4)
+Moves 40-50: Berries consume 25 -- 20*(5/4)
+Moves 50-60: Berries consume 40 -- 20*(8/4)
+Moves 60-70: Berries consume 65 -- 20*(13/4)
+Moves 70-80: Berries consume 105 -- 20*(21/4)
+Moves 80-90: Berries consume 170 -- 20*(34/4)
+Moves 90-100: Berries consume 275 -- 20*(55/4)
+Moves 100-110: Berries consume 445 -- 20*(89/4)
+... and so on ...
+
+
+Same questions as above, with a third: if this environment is proven unfavorable, and extinction unavoidable, what maximum environment/environmentFactor must the bear population move to in order to avoid extinction? (this may or may not exist if a berry's requirement of 20 initial life is always unfavorable without any progression).
+
+Further Details:
+
+
+ QUESTION:
+ Can you give an example of what happens when a bear eats a semi-poisonous berry (e.g. 20%)?
+ - Also, is the bear always immediately aware of the poison value of berries?
+ - In this, it also seems like you're using health, life, strength, life-strength, health-gain, etc interchangeably. Are they all the same
+ thing?
+
+ ANSWER:
+
+
+ - if the bear eats the poison berry of 20%, then it becomes a probability problem of whether or not the berry provides back life or
+ keeps the health amount committed by the bear. Example: the bear is at 400, the next move & berry take the initial 20 (bear is at 380 now), the bear
+ decides to commit an additional 80 (now at 300) and eat the berry.
+ 8/10 times the berry will return to the bear 200 (2x 100 committed --
+ bear ends turn at 500), 2/10 times the berry
+ returns nothing and the bear must move on with 300 life.
+ - the bear is always immediately aware of the poison value of a berry.
+ - life/health/strength are all the same thing.
+
+
+"
+"['neural-networks', 'convolutional-neural-networks', 'definitions', 'activation-functions', 'multilayer-perceptrons']"," Title: Why are activation functions independent layers in CNNs rather than part of convolutional layers?Body: I have been reading up on CNNs. One of the different confusing things has been that people always talk of normalization layers. A common normalization layer is a ReLU layer. But I never encountered an explanation of why all of a sudden, activation functions become their own layers in CNNs, while they are only parts of a fully connected layer in MLPs.
+
+What is the reason for having dedicated activation layers in CNNs rather than applying the activation to the output volume of a convolutional layer as part of the convolutional layer, as it is the case for dense layers in MLPs?
+
+I guess, in the end, there is no functional difference. We could just as well have separate activation layers in MLPs rather than activation functions in their fully connected layers. But this difference in the convention is irritating still. Well, assuming it only is an artifact of the convention.
+"
+"['neural-networks', 'evolutionary-algorithms']"," Title: Is there a rule-of-thumb to determine which behaviours must be learned in a lifetime and which innate?Body: I was training an AI to learn things during its lifetime such as find food and navigate a maze. Behaviors that might change during its lifetime.
+
+But I hit upon a snag. Some behaviors, like avoiding poisonous snakes, cannot be learned in a lifetime since once bitten by a snake the being is dead.
+
+That got me thinking about how to separate out behaviors that must be given to the AI at birth (either by programming or using some evolutionary algorithm) and which behaviors to let the AI learn in its lifetime.
+
+Also, there is the matter of when a learned behavior should be able to overrule an innate behavior (if at all).
+
+Is there much research into this? I'm looking for some method to determine a set of innate behaviors which can't be learned.
+"
+"['deep-learning', 'python', 'object-detection', 'image-processing']"," Title: How to Mask an image using Numpy/OpenCV?Body: I am detecting wheels with a deep learning algorithm. The algorithm gives me the coordinates of those rectangles. I want to keep data that is in the rectangles of the image. I created rectangles as a mask of the area I want to keep.
+
+Here is the output of my system
+
+I read my image
+
+im = cv2.imread(filename)
+
+
+I created the rectangles with:
+
+height, width, depth = im.shape
+mask = np.zeros((height, width), dtype='uint8')            # single-channel mask image
+cv2.rectangle(im, (384, 0), (510, 128), (0, 255, 0), 3)    # green outline on the image
+# filled white rectangle on the mask; the 100 px half-size around the centre is a placeholder
+cv2.rectangle(mask, (width // 2 - 100, height // 2 - 100),
+              (width // 2 + 100, height // 2 + 100), 255, thickness=-1)
+
+
+How can I mask out the data outside of the rectangle from the original image? and keep those rectangles?
+
+Edited: I wrote this code and it only gives me one wheel. How can I have multiple masks and get all the wheels?
+
+ mask = np.zeros(shape=frame.shape, dtype=""uint8"")
+
+# Draw a bounding box.
+# Draw a white, filled rectangle on the mask image
+cv.rectangle(img=mask,
+ pt1=(left, top), pt2=(right, bottom),
+ color=(255, 255, 255),
+ thickness=-1)
+
+
+# Apply the mask and display the result
+maskedImg = cv.bitwise_and(src1=frame, src2=mask)
+
+cv.namedWindow(winname=""masked image"", flags=cv.WINDOW_NORMAL)
+cv.imshow(winname=""masked image"", mat=maskedImg)
+
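+A sketch of the multi-wheel case, assuming the detector returns a list of (left, top, right, bottom) boxes: draw every box into the same mask image before applying it (the box coordinates and file name below are placeholders):
+
+import cv2 as cv
+import numpy as np
+
+boxes = [(100, 200, 250, 350), (400, 210, 550, 360)]   # one tuple per detected wheel
+
+frame = cv.imread('frame.jpg')
+mask = np.zeros(frame.shape, dtype='uint8')
+
+for (left, top, right, bottom) in boxes:
+    # one filled white rectangle per detection, all drawn into the same mask
+    cv.rectangle(mask, (left, top), (right, bottom), (255, 255, 255), thickness=-1)
+
+masked = cv.bitwise_and(frame, mask)   # keeps only the pixels inside the boxes
+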
+"
+"['machine-learning', 'deep-learning', 'reinforcement-learning', 'actor-critic-methods', 'rewards']"," Title: Reward problem in A2C with multiple simultaneous discrete actionsBody: I've built an A2C model whose actor's network has two different kinds of discrete actions, so the critic would take state and action (note that critic takes 2 actions because in each timestep we will do two kinds of actions) to predict the advantage for each of these two different kinds of actions. However, my problem is when the critic network is about to train itself with discounted rewards. It only has the reward of each timestep, so cannot determine which of the two kinds of our actions contributed more to this reward, so both of its outputs (the advantage for the action of kind 1 and the advantage for the action of kind 2) will be changed in the same direction, so output of critic would be biased with its initialization values. How can I solve this problem, that is, to distinguish between the contribution amount of each of these two kinds of actions to the outcome?
+
+An example of my problem:
+Consider we have 10 cubes 3 boxes. In each time step, we have to choose between 10 cubes and choose between 3 boxes to place the selected cube in the selected box. So we have 2 kinds of actions here: one to pick a cube and the second to put it in one the boxes. Each box only has the capacity of only 4 cubes, and one of the cubes is so big that don't fit in any of the boxes. The reward of each time step will be the negative number of cubes that are not placed in a box, so because of the bigger cube, the agent won't get reward 0 ever. Consider a scenario that a box already contains 4 cubes and we choose that box to place the chosen cube (one of the small cubes) in it, we can't and the time will proceed. Another scenario is when we choose the bigger cube so no matter which box we choose, we cannot place it and the time will proceed. How the agent can distinguish which of these two kinds of actions contributed more to the reward?
+"
+"['machine-learning', 'features', 'scikit-learn', 'gradient-boosting']"," Title: How can I use gradient boosting with multiple features?Body: I'm trying to use gradient boosting and I'm using sklearn's GradientBoostingClassifier
class.
+
+My problem is that I have a data frame with 5 columns and I want to use these columns as features. I want to use them sequentially, one after another: I mean I want each tree-based learner to use the residual of the previous tree, which is based on the previous feature. As far as I know, by default, this classifier uses a feature and passes the residual of the previous tree to the next tree, where both are based on a single feature. How can I do this?
+
+Should I do this on my own, or is there a library that does this?
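+
+For what it's worth, here is a minimal sketch of the default behaviour with a 5-column frame (toy data; each tree is fit on the residuals of the ensemble built so far and may use any of the features):
+
+import pandas as pd
+from sklearn.ensemble import GradientBoostingClassifier
+
+df = pd.DataFrame({f'f{i}': range(10) for i in range(5)})   # 5 toy feature columns
+y = [0, 1] * 5                                              # toy binary target
+
+clf = GradientBoostingClassifier(n_estimators=100)
+clf.fit(df, y)
+print(clf.predict(df.head()))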
+"
+"['neural-networks', 'machine-learning', 'multilayer-perceptrons']"," Title: How can an ANN efficiently predict multiple numbers with fixed sum (in other words, proportions)?Body: I need a neural network (or any other solution) to predict 3 values which sum equals a fixed number (100). This will help me calculate proportions. Which is the most efficient way to do this?
+
+The training data only contains extreme situations, where each row has one and only one output value set to 100.
+The data to predict is expected to contain more nuances in the output values.
+All my attempts lead to very low accuracy as the predicted output sum is almost never a 100. Even when I try to normalize the predicted output, the predictions show very poor accuracy.
+
+Should I try to organize the data with 2 angles instead and deduct the 3rd angle as the remainder in a circle? How to normalize those 2 angles and how to make sure their sum will not exceed the maximum value making the 3rd angle negative?
+
+Illustration of learn data extract (4 input columns and 3 output columns).
+
+0 1 2 3 100 0 0
+4 5 6 7 0 100 0
+8 9 0 1 0 0 100
+
+
+Illustration of desired output predictions where each line sums as 100:
+
+7 83 10
+39 12 49
+68 24 8
+28 72 0
+86 6 8
+32 49 19
+0 0 100
+
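+One common way to guarantee the fixed-sum constraint is to let the network output proportions with a softmax layer and scale them by 100 afterwards. A minimal sketch under that assumption (the framework choice and layer sizes are placeholders, not a prescription):
+
+import numpy as np
+from tensorflow import keras
+
+x = np.array([[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 0, 1]], dtype=float)
+y = np.array([[100, 0, 0], [0, 100, 0], [0, 0, 100]], dtype=float) / 100.0   # train on fractions
+
+model = keras.Sequential([
+    keras.layers.Dense(16, activation='relu', input_shape=(4,)),
+    keras.layers.Dense(3, activation='softmax'),   # outputs are proportions that sum to 1
+])
+model.compile(optimizer='adam', loss='categorical_crossentropy')
+model.fit(x, y, epochs=100, verbose=0)
+
+preds = model.predict(x) * 100.0   # each row now sums to 100 by construction
+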
+"
+['deep-learning']," Title: Current state of MoE modelsBody: I've been reading about Mixture of Expert models, and I've noticed that there is very little new work being produced in this subfield. Has there been a better method discovered? Why aren't more people doing stuff in this area?
+"
+"['convolutional-neural-networks', 'training', 'backpropagation', 'alexnet']"," Title: How is a neural network where the majority of inputs are 0 trained?Body: Consider AlexNet, which has 1000 output nodes, each of which classifies an image:
+
+
+
+The problem I have been having with training a neural network of similar proportions is that it does what any reasonable network would do: it finds the easiest way to reduce the error, which happens to be setting all nodes to 0, since, the vast majority of the time, that's what they'll be. I don't understand how a network where, 999 times out of 1000, a node's output is 0 could possibly learn to make that node 1.
+
+But obviously, it's possible, as AlexNet did very well in the 2012 ImageNet challenge. So I wanted to know, how would one train a neural network (specifically a CNN) when for the majority of the inputs the desired value for an output node is 0?
+"
+['natural-language-processing']," Title: Machine learning methods to identify the recipient of a document?Body: I need some advice on what AI methods would be suited to the identification of a recipient of a document, where the format of the documents may vary.
+"
+"['neural-networks', 'papers', 'evolutionary-algorithms']"," Title: Why isn't the evolutionary Turing machine mainstream?Body: Given that recurrent neural networks are equivalent to a Turing machine, then why isn't the evolutionary Turing machine, e.g. described in the paper Evolution of evolution: Self-constructing Evolutionary Turing Machine case study (2007), mainstream?
+"
+['reinforcement-learning']," Title: Why is On-Policy MC/TD Algorithm guaranteed to converge to optimal policy?Body: Let's say we have a task where the cost depends entirely on the path length to a terminal state, so the goal of an agent would be to take actions to reach terminal state as quickly as possible.
+
+Now let us say, we know the optimal path length is of length $10$, and there are $n$ such paths possible. Each state has 5 possible actions. Let's say the scheme we are using to find optimal policy is On-Policy MC/TD(n) along with GLIE Policy improvement (Generalised Policy Iteration).
+
+In the first Policy Iteration step, each action is equally likely, therefore the probability of sampling this optimal path (or the agent discovering this path) is $n \cdot \frac{1}{5^{10}} \approx n \cdot \frac{1}{10^{7}}$. So, according to probability theory, we need around $10^{7}/n$ samples to discover at least one of the best paths (worst-case scenario).
+
+Since it is not possible to go through such a huge number of samples, let's say we do not sample the path; thus, in the next Policy Iteration step (after GLIE Policy Improvement), some other sub-optimal path will have a higher probability of being sampled than the optimal path, hence the probability falls even lower. So, like this, there is a considerably high probability that we may not find the best path at all, yet theory says we will find $\pi^*$, which indicates the best path.
+
+So what is wrong in my reasoning here?
+"
+['convolutional-neural-networks']," Title: Estimating camera's offset to its true positionBody: I have the following problem:
+
+I get a 360 RGB image in a room.
+
+
+- I've the 3D model of this room, hence, I can generate a 3D nominal mask of the room (1-wall, 2-ceiling, 3-floor, 4-door, etc..) in a specific location (x0,y0,z0,roll0,pitch0,yaw0)
+- I can also generate the depth map of this location.
+
+
+My model has to predict whether the generated mask+depth match with the RGB frame, and if not what are the dx, dy, d_yaw of the offset.
+
+I've implemented a pix2pix discriminator that receives a concatenated tensor of [RGB, one-hot mask, depth] and yields (dx, dy, d_yaw).
+Obviously, if there is a perfect match, dx = dy = d_yaw = 0.
+
+Unfortunately, my model isn't converging. I've tried everything, and it's a reasonable ""request"" of this model, since a human can roughly guess this offset by looking at the images.
+
+what would you suggest?
+"
+"['classification', 'models', 'regression']"," Title: Should I model a problem with quantised output as classification or regression?Body: Say I have some data I am trying to learn, and I'm aware that the output is quantised in some way, e.g. I can get only get discrete values (0.1, 0.2, 0.3...0.9) in a finite range.
+
+Would you treat that as regression or classification? In this case the numbers do have a relation to each other e.g. 0.3 is close to 0.4 in meaning.
+
+I could treat it as classification with a softmax final layer with N outputs, or I could treat it as regression with a linear layer with a single output and then somehow quantise the result post-prediction. But my gut feeling is that the fact that there is a finite number of answers should somehow be used in my model?
+"
+"['neural-networks', 'convolutional-neural-networks', 'python', 'keras', 'attention']"," Title: Understanding CNN+LSTM concept with attention and need helpBody: I have a question about the context of CNN and LSTM. I have trained a CNN network for image classification. However, I would like to combine it with LSTM for visualizing the attention weights. So, I extracted the features from the CNN to put it into LSTM. However, I am stuck at the concept of combinating the CNN with LSTM.
+
+– Do I need to train the whole network again? Or just training the LSTM part is fine?
+– Can I just train the LSTM on image sequences based on classes (for e.g. 1 class has around 300 images) and do predictions later on extracted video frames?
+- In what way can I implement the attention mechanism with Keras?
+
+I hope you can help me while I struggle with the context of understanding the combination of this.
+
+~ EDITED ~
+
+I have trained a ResNet50 to classify images. Then I removed the last dense layer in order to extract features from the trained CNN. Those extracted features will be used as input to the newly created LSTM with an attention mechanism, to find out where the focus lies during prediction. The predictions will be on videos (extracted frames).
+
+Image -> extract features (CNN) -> LSTM + Attention (to check where the focus lies during the prediction) -> classify image (output class from N labels)
+"
+"['reference-request', 'recurrent-neural-networks', 'proofs', 'convergence', 'hopfield-network']"," Title: Is there a rigorous proof for finding Hopfield minima?Body: I am looking for a rigorous mathematical proof for finding the several local minima of the Hopfield networks. I am searching for something rigorous, a demonstration, not just letting the network keep updating its neurons and wait for noticing a stable state of the network.
+I have looked virtually everywhere, but I found nothing.
+Is there a rigorous proof for Hopfield minima? Could you give me ideas or references?
+"
+['neural-networks']," Title: Is Hopfield network more efficient than a naive implementation of Hamming distance comparator?Body: Is Hopfield network more efficient than a naive implementation of Hamming distance that compare an input pattern and return the nearest pattern ?
+"
+"['neural-networks', 'deep-learning', 'function-approximation']"," Title: Why can't neural networks learn functions outside of the specified domains?Body: I understand that neural nets are fundamentally interpolative tools. Meaning, given a training dataset, a well trained neural net can approximate values within the domain of the training dataset. However, we are unsure about their behavior once we test against values outside that domain.
+
+Speaking in the context of ImageNet, a NN trained on one of the classes in ImageNet will probably be able to predict an image of the same class outside ImageNet, because ImageNet itself covers such a huge domain for each class that, whatever image we come across in the wild, its features will be accounted for by ImageNet.
+
+Now, this intuition breaks down for me when I talk about simple functions with simple inputs. For example, consider $sin(x)$. Our goal is to train a neural net to predict the function given $x$ with a training domain $[-1, 1]$. Theoretically, the neural net should not be able to predict the values well outside that domain, right? This seems counterintuitive to me because the function behaves in a very simple and periodic way that I find it hard to believe that a neural net cannot figure out the proper transformation of that function even outside the training domain.
+
+In short, are neural nets inherently unable to find a generalizable transformation outside the training domain no matter how simple is the function we are trying to approximate? Is this a property of the Deep Learning framework?
+
+Are there problems where researchers were able to learn a robust generalizable transformation using neural nets outside the training domain? What are the possible conditions so that such results can happen?
+"
+"['machine-learning', 'facial-recognition']"," Title: Are there some formulae in facial recognition that are indicators of close kinship?Body: I noticed what I considered a close resemblance of a woman B to another woman A which led to a close relative A labelled C, who B bore an even stronger resemblance to than she did to A.
+
+However I feel that if I had compared A to C directly I wouldn't have detected the blood relationship so strongly.
+
+I am just wondering whether there is some mathematical underpinning to my perception and strong intuition that B bore a strong resemblance to A.
+"
+"['deep-learning', 'natural-language-processing', 'sequence-modeling']"," Title: Language Model from missing dataBody: I want to learn how a set of operations (my vocabulary) are composed in a dataset of algorithms (corpus).
+
+The algorithms are a sequence of higher level operations which have varying low-level implementations. I am able to map raw code to my vocabulary, but not all of it.
+
+e.g. I observe a lossy description of an algorithm that does something:
+
+X: missing data
+Algo 1: BIND3 EXTEND2 X X ROTATE360 X PUSH
+Algo 2: X X EXTEND2 ROTATE360
+
+
+The underlying rotate operation could have very different raw code, but effectively the same function and so it gets mapped to the same operation.
+
+I want to infer what the next operation will be given a sequence of (potentially missing) operations (regions of code I could not map).
+
+i.e. I want a probability distribution over my operations vocabulary.
+
+Any ideas on the best approach here? The standard thing seems to be to throw out missing data, but I can still learn in these scenarios. Also, the gaps in the code are non-homogeneous; some could do many things. The alternatives are to contract the sequences and lose the meaning of the gaps, or to learn an imputation.
+"
+"['neural-networks', 'machine-learning', 'activation-functions', 'relu']"," Title: Is PReLU superfluous with respect to ReLU?Body: Why do people use the $PReLU$ activation?
+
+$PReLU[x] = ReLU[x] + ReLU[p*x]$
+
+with the parameter $p$ typically being a small negative number.
+
+If a fully connected layer is followed by a $ReLU$ layer with at least two elements, then the combined layers together are capable of emulating the $PReLU$ exactly, so why is it necessary?
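+
+To spell out what I mean, using the definition above: a dense layer can output the pair $(x, \, p x)$, the two-element $ReLU$ layer turns this into $(\max(0,x), \, \max(0, p x))$, and a following layer that sums the two recovers
+
+$$\max(0,x) + \max(0,p x) = ReLU[x] + ReLU[p x] = PReLU[x].$$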
+
+Am I missing something?
+"
+"['unsupervised-learning', 'fuzzy-logic', 'k-means', 'clustering']"," Title: What is the role of the 'fuzzifier' w in Fuzzy Clustering?Body: According to my lecture, Fuzzy c-Means tries to minimize the following objective function:
+
+$$J(X,B,U)=\sum_{i=1}^c\sum_{j=1}^n u_{ij}^w \, d^2(\vec{\beta_i},\vec{x_j})$$
+
+where $X$ are the data points, $B$ are the cluster-'prototypes', and $U$ is the matrix containing the fuzzy membership degrees. $d$ is a distance measure.
+
+A constraint is that the membership degrees for a single datapoint w.r.t. all clusters sum to $1$: $\sum_{i=1}^c\, u_{ij}=1$.
+
+Now in the first equation, what is the role of the $w$? I read that one could use any convex function instead of $(\cdot)^w$. But why use anything at all. Why don't we just use the membership degrees? My lecture says using the fuzzifier is necessary but doesn't explain why.
+"
+"['neural-networks', 'deep-learning', 'optimization', 'generative-adversarial-networks', 'generative-model']"," Title: Query regarding the minmax loss function formulation of the training of a Generative Adversarial Network (GAN)Body: Just needed a clarification on the training procedure for a standard GAN.
+From my understanding, the loss function to optimize is a min-max problem (a max-min formulation would cause mode collapse due to focusing on one class generation), where the loss function
+
+
+
+$$\min_G \max_D V(D, G) = E_{x \sim p_{data}(x)}[\log D(x)] + E_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$
+
+needs to be maximized for the discriminator and minimized for the generator network:
+
+1.) In this equation, are $E_{z \sim p_z(z)}$ and $E_{x \sim p_{data}(x)}$ the means over the mini-batch samples drawn from those distributions? Also, is the optimal case for the discriminator a maximum value of $0$? And is the optimal case for the generator a minimum value of $\log(\text{small value})$ (basically a massive negative value)? If so, what happens to the first term during the training of the generator - is it taken as a constant, or is a badly performing discriminator considered optimal for the generator?
+
+2.) While putting this in code, for every step is the discriminator trained first for one step, keeping the generator constant, followed by the generator being trained for the same step, with the discriminator kept constant?
+
+3.) In every step of training are there multiple latent vectors sampled to produce multiple generator outputs for each step? If so is the loss function an average or sum of all $V(D, G)$ for that step
+"
+"['ai-design', 'math', 'logic']"," Title: How would an AI work out this question?Body: I am trying to create an AI that makes reasonable guesses at truths of statements. However...
+
+Human: ""Prove that no number exists which is one more than a billion.""
+
+AI: ""Is it true for the number 1? No. I am 1% sure the statement is correct.""
+
+AI: ""Is it true for the number 2? No. I am 2% sure the statement is correct.""
+
+...
+
+AI: ""Is it true for the number 999,999? No. I am 99.99% sure the statement is correct.""
+
+AI: ""Having tested a large number of examples. I conclude that the statement is correct.""
+
+Human: ""The statement is wrong: one billion and one.""
+
+What do you think has gone wrong?
+"
+"['reinforcement-learning', 'open-ai']"," Title: OpenAI Spinning Up: Breakout-v0 exampleBody: I haven't been able to find any assistance / examples which could help me implement OpenAI's Spinning Up resource to solve Atari's Breakout-v0 game in the OpenAI gym.
+
+I simply want to know why the following command doesn't run, and instead produces an error that I can't find any help on:
+
+python -m spinup.run ppo --env Breakout-v0 --exp_name simpletest
+
+
+...and then the error:
+
+ValueError: Shape must be rank 2 but is rank 4 for 'pi/multinomial/Multinomial'
+ (op: 'Multinomial') with input shapes: [?,210,160,4], [].
+
+
+I understand the shape dynamics, and have written several (albeit quite unoptimized!) reinforcement learning neural nets in Python, but I was looking forward to using OpenAI's Spinning Up environment to use something more sophisticated and optimized.
+
+Thank you so much for any help on the seemingly noobish question!
+"
+"['neural-networks', 'meta-learning']"," Title: Can we optimize an optimization algorithm?Body: In this answer to the question Is an optimization algorithm equivalent to a neural network?, the author stated that, in theory, there is some recurrent neural network that implements a given optimization algorithm.
+If so, then can we optimize the optimization algorithm?
+"
+"['neural-networks', 'convolutional-neural-networks', 'definitions', 'convolution']"," Title: Is adding the Frobenius inner products between filter and input part of convolution or a separate step?Body: From the literature I have read so far, it is not clear how exactly the convolution operation is defined. It seems people use two different definitions:
+Let us assume we are given an $n_w \times n_h \times d$ input tensor $I$ and an $m_w \times m_h \times d$ filter $F$ of $d$ kernels (I use the convention of referring to the depth-slices of filters as kernels. I also will call the depth slices of the input tensor channels). Let us also assume $F$ is the $j$th filter of $J$ filters.
+Now to the definitions.
+Option 1:
+The convolution of $I$ with $F$ is obtained by sliding $F$ across $I$ and computing the Frobenius inner product between channel $k$ and kernel $k$ at each position, adding the products and storing them in an output matrix. That matrix is the result of the convolution. It is also the $j$th feature map in the output tensor of the convolution layer.
+Let $I \in \mathbb{R}^{n_w \times n_h \times d}$ and $F \in \mathbb{R}^{m_w \times m_h \times d}$. Let $s \in \mathbb{N}$ be the stride. The operation will only be defined if the smaller tensor fits within the larger tensor along its width and height a positive integer number of times when shifting by $s$, that is if and only if $k_w = (n_w - m_w) / s + 1\in \mathbb{N}$ and $k_h = (n_h - m_h) / s + 1\in \mathbb{N}$, where $k_w \times k_h \times d$ is the shape of the output tensor. Furthermore let ${f_x : i \mapsto (x - 1)s + i}$ be a function that returns the absolute index in the input tensor, given an index $x$ in the output tensor, the stride length $s$ and a relative index $i$.
+\begin{equation*}
+ \begin{split}
+ (I * F)_{x y} =
+ & \sum_{k=1}^d \sum_{i = 1}^{m_w} \sum_{j = 1}^{m_h} I_{f_x(i) f_y(j) k} \cdot F_{i j k}
+ \end{split}
+\end{equation*}
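+For concreteness, this is how I read Option 1 in plain numpy (0-indexed, no padding, just a sketch):
+
+import numpy as np
+
+def conv_option1(I, F, s=1):
+    # One output map: at every window position, the Frobenius products of all
+    # d kernel/channel pairs are summed into a single number (Option 1 above).
+    n_w, n_h, d = I.shape
+    m_w, m_h, _ = F.shape
+    k_w = (n_w - m_w) // s + 1
+    k_h = (n_h - m_h) // s + 1
+    out = np.zeros((k_w, k_h))
+    for x in range(k_w):
+        for y in range(k_h):
+            patch = I[x * s : x * s + m_w, y * s : y * s + m_h, :]
+            out[x, y] = np.sum(patch * F)
+    return out
+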
+Option 2:
+The convolutions (plural) of $I$ with $F$ are obtained by sliding $F$ across $I$ and computing the Frobenius inner product between channel $k$ and kernel $i$ at each position. Each product is stored in a matrix associated with the channel $k$. There is no adding of the products yet. The convolutions are the result matrices. The step where the matrices are added component wise to obtain the $j$th feature map of the output tensor of the convolution layer is not part of the convolution operation, but an independent step.
+Let $I \in \mathbb{R}^{n_w \times n_h}$ and $F \in \mathbb{R}^{m_w \times m_h}$. Let $s \in \mathbb{N}$ be the stride. The operation will only be defined if the smaller matrix fits within the larger one along its width and height a positive integer number of times when shifting by $s$, that is, if and only if $k_w = (n_w - m_w) / s + 1\in \mathbb{N}$ and $k_h = (n_h - m_h) / s + 1\in \mathbb{N}$, where $k_w \times k_h$ is the shape of the output matrix. Furthermore let ${f_x : i \mapsto (x - 1)s + i}$ be a function that returns the absolute index in the input matrix, given an index $x$ in the output matrix, the stride length $s$ and a relative index $i$.
+\begin{equation*}
+ \begin{split}
+ (I * F)_{x y} =
+ &\sum_{i = 1}^{m_w} \sum_{j = 1}^{m_h} I_{f_x(i) f_y(j)} \cdot F_{i j}
+ \end{split}
+\end{equation*}
+Which of these two definitions is the common one?
+"
+"['deep-learning', 'generative-adversarial-networks', 'generative-model']"," Title: What is the purpose of the noise injection in the generator network of a GAN?Body: I do not understand why with enough training how the generator cannot learn all images from the training set as a mapping from the latent space - It is the absolute optimal case in training as it replicates the distribution and the discriminator output will always be 0.5. Even though most blog posts I have seen do not mention noise, a few of them have them in their diagrams or describe their presence, but never exactly describe the purpose of this noise.
+
+Is this noise injected to avoid the exact reproduction of the training data? If not what is the purpose of this injection and how is exact reproduction avoided?
+"
+"['deep-learning', 'optimization', 'objective-functions', 'generative-adversarial-networks']"," Title: Could the Jensen-Shannon divergence and Kullback-Leibler divergence be used as loss functions of non-generation problems?Body: If I understand correctly, the KL divergence is a measure of information loss between a ground truth distribution $P$ and a predicted distribution $Q$, and the Jensen-Shannon divergence is the mean of the KL Divergences of 2 cases
+
+
+- Predicted distribution is mean of $P$ and $Q$, and ground truth is $P$
+- Predicted distribution is mean of $P$ and $Q$, and ground truth is $Q$
+
+
+Since KL divergence can be easily interpreted as the information loss in $Q$ relative to $P$, what can the JS divergence interpretably represent? I cannot see any use cases of these measures unless there are two distributions to compare. Is there any other problem where I could use them as loss functions, other than generation problems? If so how, and why?
+"
+"['reinforcement-learning', 'python', 'keras', 'proximal-policy-optimization']"," Title: Entropy term in Proximal Policy Optimization (PPO) becomes undefined after few training epochsBody: I have implemented the total loss of my PPO objective as follows:-
+
+total_loss = critic_discount * critic_loss + actor_loss - entropy_beta * K.mean(-(newpolicy_probs * K.log(newpolicy_probs)))
+
+
+After training for a few epochs, the entropy term becomes ""nan"" for some reason. I used tf.Print() to see the new policy probabilities when the entropy becomes undefined; they are as follows:
+
+
+ new policy probs: [[6.1029973e-06 1.93471514e-08
+ 0.000299338106...]...]
+
+
+I am not clear as to why taking the log of these small probabilities comes out as nan. Any idea how to prevent this?
+"
+"['neural-networks', 'deep-learning', 'backpropagation', 'deep-neural-networks', 'alphazero']"," Title: How can I use one neural network for both players in Alpha Zero (Connect 4)?Body: First of all, it is great to have found this community!
+
+I am currently implementing my own Alpha Zero clone on Connect4. However, I have a mental barrier I cannot overcome.
+
+How can I use one neural network for both players? I do not understand what the input should be.
+
+Do I just put in the board position ($6 \times 7$) and let's say Player1's pieces on the board are represented as $-1$, empty board as $0$ and Player2's pieces as $1$?
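+
+For concreteness, this is the kind of encoding I have in mind (just a toy illustration; which player gets $+1$ and which $-1$ is arbitrary):
+
+import numpy as np
+
+board = np.zeros((6, 7), dtype=np.int8)   # empty Connect 4 board
+board[5, 3] = -1                          # a Player 1 piece, encoded as -1
+board[5, 4] = 1                           # a Player 2 piece, encoded as +1
+# the network would receive this 6x7 array (possibly from the perspective of the player to move)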
+
+To me, that seems the most efficient. But then, in the backpropagation, I feel like this cannot be working. If I update the same network for both players (which Alpha Zero does), don't I try to optimize Player1 and Player 2 at the same time?
+
+I just can't get my head around it. 2 Neural networks, each for one player is understandable for me. But one network? I don't understand how to backpropagate? Should I just flip my ""z"" (the result of the game) every time I go one layer backward? Is that all there is to using one network?
+
+I hope I made this clear enough. I am quite confused, I tried my best.
+
+Thank you for reading this!
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'convolutional-neural-networks', 'problem-solving']"," Title: Use deep learning to rank video scenesBody: I'm new to machine learning and especially, deep learning. Given a video (and it's subtitle), I need to generate a 10-second summary out of this video. How can I use ML and DL to produce the most representative summary out of this video? More specifically, given video scenes, what are some ways to select and rank them, and how to do it? Any ideas would be helpful.
+"
+"['deep-learning', 'optimization', 'generative-adversarial-networks', 'transfer-learning']"," Title: Is convergence to a local minima more likely with transfer learning?Body: While doing transfer learning where my two problems are face-generation and car-generation is it likely that, if I use the weights of one problem as the initialization of the weights for the other problem, the model will converge to a local minima? In any problem is it better to train from scratch over transfer learning? (especially for GAN training?)
+"
+"['machine-learning', 'comparison', 'regression']"," Title: What is the relationship between degrees of freedom and the size of the training dataset?Body: I am going through the book Pattern Recognition by Bishop.
+
+At one point he says
+
+
+ For $M = 9$, the training set error goes to zero, as we might expect because this polynomial contains 10 degrees of freedom corresponding to the $10$ coefficients $w_0, \dots, w_9$, and so can be tuned exactly to the $10$ data points in the training set.
+
+
+where $M$ is the order of the hypothesis function, and $w$ are the weights of the hypothesis function.
+
+I did not understand how having $10$ degrees of freedom will tune the model EXACTLY to the $10$ data points? Does it mean that whenever we have a number of data points in training set equal to the degrees of freedom, the error will be zero?
+"
+"['recurrent-neural-networks', 'keras', 'chat-bots']"," Title: How can I keep context in my chatbotBody: I have created a chatbot by Keras based on movie dialog. I used RNN more specifically GRU . My bot can reply well. But the problem is , it can't hold the context . As an example if I say Tell me a joke
, the bot will reply something , and then if I say one more
, the bot simply doesn't understand that I was asking for another joke and many more similar cases, like if I used a slang against the bot , the bot will reply me with something similar , but if I just say something romantic or good immediately after using slang , the bot will reply to me with something good . I want to keep context or environment . How can I do so . Any lead would be helpful .
+"
+"['computer-vision', 'terminology', 'comparison', 'image-processing']"," Title: What is the difference between image processing and computer vision?Body: What is the difference between image processing and computer vision? They are apparently both used in artificial intelligence.
+"
+"['deep-learning', 'convolutional-neural-networks']"," Title: Understanding arrangement of applying filters to input channelsBody: I was watching a video about Convolutional Neural Networks: https://www.youtube.com/watch?v=SQ67NBCLV98. What I'm confused about is the arrangement of applying the filters' channels to the input image or even to the output of a previous layer.
+Question 1 - Looking at the visual explanation example of how one filter with 3 channels is applied to the input image (with 3 channels), so that each 1 filter channel is applied to its corresponding input channel:
.
+So hence the output is 3 channels. Makes sense.
+However, looking at the second screenshot which shows an example of the VGG network:
, looking at the first layer (I've delineated with a red frame), which is 64 channels, where the input of the image contains 3 channels. How does the output shape become 64? The only way I would think this would be possible is if you apply:
+
+- filter channel 1 to image channel 1
+- filter channel 2 to image channel 2
+- filter channel 3 to image channel 3
+- filter channel 4 to image channel 1
+- filter channel 5 to image channel 2
+- filter channel 6 to image channel 3
+
+.. and so on.
+Or the other thing could be, that these are representing Conv layers, with 64 filters. Rather than a filter with 64 channels. And that's precisely what I'm confused about here. In all the popular Convolutional networks, when we see these big numbers - 64, 128, 256 ... etc, are these Conv layers with 64 filters, or are they individual filters with 64 channels each?
+Question 2 - Referring back to the second screenshot, the layer I've delineated with blue frame (3x3x128). This Conv layer, as I understand, takes the output of 64 Max-pooled nodes and applies 128 Conv filters. But how does the output become 128. If we apply each filter to each Max-pooled output node, that's 64 x 128 = 8192 channels or nodes in output shape. Clearly that's not what's happening and so I'm definitely missing something here. So, how does 128 filters is applied to 64 output nodes in a way so that the output is still 128? What's the arrangement?
+Many thanks in advance.
+"
+['natural-language-processing']," Title: How can we use Dependency Parsers for Negation detectionBody: I am building a negation detection system. How to use dependency parsers for the same. I am using SPACY for dependency parser
+"
+"['python', 'objective-functions', 'implementation', 'proximal-policy-optimization']"," Title: Understanding log probabilities of actions in the PPO objectiveBody: I'm trying to implement the Proximal Policy Optimization (PPO) algorithm (code here), but I am confused about certain concepts.
+
+
+- What is the correct way to implement log probability of a policy (denoted by $\pi_\theta$ below)?
+$$
+L^{C P I}(\theta)=\hat{\mathbb{E}}_{t}\left[\frac{\pi_{\theta}\left(a_{t} | s_{t}\right)}{\pi_{\theta_{\text {old }}}\left(a_{t} | s_{t}\right)} \hat{A}_{t}\right]=\hat{\mathbb{E}}_{t}\left[r_{t}(\theta) \hat{A}_{t}\right]
+$$
+Let's say my old network policy output is
oldpolicy_probs=[0.1,0.2,0.6,0.1]
and new network policy output is newpolicy_probs=[0.2,0.2,0.4,0.2]
.
+
+Do I take the log of this directly, or should I first multiply these with the true label y_true = [0,0,1,0]
as implemented here?
+ratio = np.mean(np.exp(np.log(newpolicy_probs + 1e-10) - K.log(oldpolicy_probs + 1e-10))*advantage)
+
+Once I have the ratio and I multiply it with an advantage, why do we take the mean over all actions? I suspect it might be because we are taking estimate $\hat{\mathbb{E}_t}$ but conceptually I don't understand what this gives us. Is my implementation above correct?
+
+"
+"['convolutional-neural-networks', 'convolution', 'pooling', 'max-pooling']"," Title: What are the benefits of using max-pooling in convolutional neural networks?Body: I am reading Francois Chollet's Deep learning with Python, and I came across a section about max-pooling that's really giving me trouble.
+I am unable to copy-paste the content, so I've included screenshots of the paragraph that's troubling me.
+
+
+I simply don't understand what he means when he talks about "What's wrong with this setup?" (towards the end).
+How does removing the max-pooling layers "reduce" the amount of the initial image that we're looking at? What are the benefits of using max-pooling in convolutional neural networks, as opposed to just using convolution layers?
+"
+"['machine-learning', 'hyperparameter-optimization', 'scalability']"," Title: How do you scale your ML problems?Body: While I have limited resource usually to train my machine learning models, I often find that my hyperparameter optimization procedure is not necessary using all my GPU and CPU, and that is because the results also depend on the batch size in my experience.
+
+If you find in your project that a low batch size is necessary, how do you scale your project? In a multi-GPU scenario, I could imagine running different hyperparameter settings on different GPUs, but what other options are out there?
+"
+"['image-recognition', 'computer-vision', 'machine-translation', 'image-processing']"," Title: Is this technique image processing or computer vision?Body: If I use my mobile camera on a signboard or announcement board on a road or in a street (like the one attached in photo) where the message is written in Russian and my mobile shows me that message in English, would this be an image processing or computer vision technique?
+
+
+"
+"['math', 'probability-distribution', 'expectation']"," Title: Why is the expectation calculated over finite number of points drawn from a probability distribution?Body:
+
+
+
+$$\mathbb{E}[f] = \int p(x) f(x) \, dx \simeq \frac{1}{N} \sum_{n=1}^{N} f(x_n)$$
+
+This is from the book Pattern Recognition by Bishop (the omitted screenshot showed the expectation being approximated by an average over $N$ points, roughly as above). Why is the expectation here a simple average? Why is $f(x)$ not being multiplied by $p(x)$?
+"
+['natural-language-processing']," Title: Why do language models place less importance on punctuation?Body: I have very outdated idea about how NLP tasks are carried out by normal RNN's, LSTM's/GRU's, word2vec, etc to basically generate some hidden form of the sentence understood by the machine.
+
+One of the things I have noticed is that, in general, researchers are interested in generating the context of the sentence, but oftentimes ignore punctuation marks, which are one of the most important aspects for generating context. For example:
+
+
+ “Most of the time, travellers worry about their luggage.”
+
+ “Most of the time travellers worry about their luggage”
+
+
+Source
+
+Like this, there exist probably 4 important punctuation marks: '.', ',', '?' and '!'. Yet, I have not seen any significant tutorials/blogs on them. It is also interesting to note that punctuation marks don't have a meaning (quite important, since most language models try to map a word to a numerical value/meaning); they are more of a 'delimiter'. So what is the current theory or perspective on this? And why is it ignored?
+"
+"['reinforcement-learning', 'ai-design']"," Title: Coloring graphs with reinforcement learningBody: I am trying to build an RL agent to solve the NP-hard problem graph coloring. The problem is quite challenging.
+
+This how I addressed it.
+
+The environment
+
+To preserve the scalability of the algorithm, providing the agent with the whole graph wouldn't be a good idea. Therefore, the input for the agent would be a window of embeddings.
+
+More precisely, first, I would apply an embedding to the graph to generate fixed-size vectors to every vertex in the graph (thus, every vertex in the graph is represented as a vector that contains some information about its neighborhood and position in the graph).
+
+Second, the agent will get a window of the embedding. For example, when coloring vertex number $17$, the input would be the $2n$ vectors from vertex $17-n$ to $17+n$, to give the agent more local information.
+
+Third, I think the agent would require more information about the number of colors already used and the number of the already colores vertices.
+
+The agent
+
+My biggest problem is what the agent should look like. Technically, the problem is the dimension of the action space. For a given graph, the maximal number of colors is the number of vertices, which varies from graph to graph (losing scalability). Plus, the possible actions at each state vary with the history of the coloring. The possible colors for a given state (or node) are all the already-used colors, excluding those of connected neighbors, plus the possibility of one new color. For example, for vertex $56$, if the agent has already used the first $41$ colors $\{0, 1, 2, 3, \dots, 40 \}$, and node $56$ is connected to some neighbors already colored with $\{14, 22, 40 \}$, the possible colors are $\{0,1, \dots, 40 \}- \{14, 22, 40 \} + \{41\}$.
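+
+To make that coloring rule concrete, this is how I currently compute the set of allowed colors (plain Python, using the numbers from the example above):
+
+used_colors = set(range(41))                        # colors 0..40 already used somewhere in the graph
+neighbour_colors = {14, 22, 40}                     # colors of the already-colored neighbors of vertex 56
+allowed = (used_colors - neighbour_colors) | {41}   # any non-conflicting used color, plus one brand-new color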
+
+How do I overcome the high dimensional inconsistent action space?
+"
+"['long-short-term-memory', 'objective-functions', 'pytorch', 'learning-curve']"," Title: LSTM text classifier shows unexpected cyclical pattern in lossBody: I'm training a text classifier in PyTorch and I'm experiencing an unexplainable cyclical pattern in the loss curve. The loss drops drastically at the beginning of each epoch and then starts rising slowly. However, the global convergence pattern seems OK. Here's how it looks:
+
+[training-loss plot omitted: per-batch loss showing the cyclical pattern described above]
+
+The model is very basic and I'm using the Adam optimizer with default parameters and a learning rate of 0.001. Batches are of 512 samples. I've checked and tried a lot of stuff, so I'm running out of ideas, but I'm sure I've made a mistake somewhere.
+
+Things I've made sure of:
+
+
+- Data is delivered correctly (VQA v1.0 questions).
+- DataLoader is shuffling the dataset.
+- LSTM's memory is being zeroed correctly
+- Gradient isn't leaking through input tensors.
+
+
+Things I've already tried:
+
+
+- Lowering the learning rate. Pattern remains, although amplitude is lower.
+- Training without momentum (plain SGD). Gradient noise masks the pattern a bit, but it's still there.
+- Using a smaller batch size (gradient noise can grow until it kinda masks the pattern, but that's not like solving it).
+
+
+The model
+
+
+
+import torch
+import torch.nn as nn
+
+# N_WORDS, HIDDEN_UNITS, NULL_ID and N_ANSWERS are constants defined elsewhere in my code.
+class QuestionAnswerer(nn.Module):
+
+    def __init__(self):
+        super(QuestionAnswerer, self).__init__()
+        # word embeddings; padded positions map to the NULL token
+        self._wemb = nn.Embedding(N_WORDS, HIDDEN_UNITS, padding_idx=NULL_ID)
+        self._lstm = nn.LSTM(HIDDEN_UNITS, HIDDEN_UNITS)
+        self._final = nn.Linear(HIDDEN_UNITS, N_ANSWERS)
+
+    def forward(self, question, length):
+        # question: (seq_len, batch) word ids; length: (batch,) true sequence lengths
+        B = length.size(0)
+        embed = self._wemb(question)
+        # take the LSTM output at the last non-padded timestep of each sequence
+        hidden = self._lstm(embed)[0][length - 1, torch.arange(B)]
+        return self._final(hidden)
+
+"
+"['philosophy', 'control-problem', 'ai-box']"," Title: Can the AI in a box experiment be formalized?Body: Introduction
+
+The AI in a box experiment is about a super strong game AI which starts with fewer resources than the opponent, and the question is whether the AI is able to win the game at the end, which is equivalent to escaping from the prison. A typical example is a match of computer chess in which the AI player starts only with a king, but the human starts with all 16 pieces, including the queen and the powerful bishops.
+
+Winning the game
+
+In case of a very asymmetric setup, the AI has no chance to win the game. Even if the AI thinks 100 moves ahead, a single king can't win against 16 opponent pieces. But what happens if the AI starts with 8 pieces and the human with 16? A formalized hypothesis would look like:
+
+
+ strength of the AI x resources weakness = strength of the human x resources strength
+
+
+To put the AI for sure into a prison, the strength of the AI should be low and its resources too. If the resources are low but the strength is middling, then the AI has a certain chance to escape from the prison. And if the AI has maximum strength and maximum resources, then the human player gets a serious problem.
+
+Is this formalized prediction supported by the AI literature in academia?
+"
+"['neural-networks', 'supervised-learning']"," Title: How can AI be used to design UI Interfaces?Body: I'm very new to AI. I read somewhere that AI can be used to create GUI UI/UX design. That has fascinated me for a long time. But, since I'm very new here, I don't have any idea how it can happen.
+
+The usual steps to create the UI Design are:
+
+
+- Create Grids.
+- Draw Buttons/Text/Boxes/Borders/styles.
+- Choose Color Schemes.
+- Follow CRAP Principle (Contrast, Repeatition, Alignment, Proximity)
+
+
+I wonder how AI algorithms can help with that. I know a bit about neural networks, and the closest I can think of is the following two methods (supervised learning).
+
+
+- Draw grids manually and train the Software manually to learn proper styles until it becomes capable of giving modern results and design its own design language.
+- Take a list of a few websites (for example) from the internet and let the software learn and explore the source code and CSS style sheets, and train the neurons manually, until it becomes capable of making its own unique styles.
+
+"
+"['neural-networks', 'feedforward-neural-networks', 'time-complexity', 'complexity-theory', 'forward-pass']"," Title: What is the time complexity of the forward pass algorithm of a feedforward neural network?Body: How do I determine the time complexity of the forward pass algorithm of a feedforward neural network? How many multiplications are done to generate the output?
+"
+"['neural-networks', 'machine-learning', 'computer-vision', 'object-recognition', 'object-detection']"," Title: How do I identify the number and type of objects in the same picture?Body: I need to identify the number and type of all objects in a picture, so there can be multiple objects of the same type.
+
+For example, I have a picture with $10$ animals, and I want my program to tell me that, on the picture, I have $3$ elephants, $2$ cats and $5$ dogs. However, I do not need the detection of the location of the objects. All I need is the information on the number of objects of each class, without their possible locations.
+
+I wanted to ask you guys for help in defining the type of problem I am dealing with, and maybe some suggestions about where to start looking for a solution. It would be nice if you could point out some directions, algorithms or network architectures to solve the problem described above.
+"
+"['reinforcement-learning', 'comparison', 'monte-carlo-methods', 'model-based-methods']"," Title: How is Monte Carlo different from model-based methods?Body: I was going through an article where it is mentioned:
+
+
+ The Monte-Carlo methods require only knowledge base (history/past experiences)—sample sequences of (states, actions and rewards) from the interaction with the environment, and no actual model of the environment.
+
+
+Aren't model-based methods also dependent on past sequences? How is Monte Carlo different from them, then?
+"
+"['reinforcement-learning', 'ai-design', 'control-theory']"," Title: How do I solve this optimal control problem with reinforcement learning?Body: I am new to reinforcement learning. I would like to solve an optimal control problem with reinforcement learning.
+
+The objective is for a wolf to catch a rabbit. The wolf and the rabbit run on a plane. The time is discrete. At every time step, the wolf can only run straight, change direction by 10 degrees to the right or left, change the speed by 0.1 m/s or remain at the same speed. It starts running in some random direction, and then sees the rabbit and starts chasing it. For the time being, let's assume that the rabbit sits still.
+
+It looks like this problem is a continuous state space and discrete action space.
+
+I have tried to use DQN in Keras, but I am not sure that I am using correct state variables/reward. Currently, the state variables are the velocity vector of the wolf, the distance vector from the wolf to the rabbit. The reward at each time point is the negative current time. When the wolf catches the rabbit, the reward is 1000 - current time (the wolf is penalized for running too long).
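+
+In code, the reward scheme I am currently using looks roughly like this (a sketch, not my exact implementation; t is the current time step):
+
+def reward(caught_rabbit, t):
+    if caught_rabbit:
+        return 1000 - t   # catching the rabbit, penalized by how long the chase took
+    return -t             # the negative current time at every step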
+
+Can somebody provide me some guidance? Eventually, I would add brains to the rabbit so that it tries to escape the wolf and compare to the optimal control solution.
+"
+"['neural-networks', 'long-short-term-memory']"," Title: Would this neural network have short term memory?Body: I want to design a NN that can remember it's last 7 actions and use them as inputs. So for example it would be able to store words in it's memory. Therefore if it had a choice of 10 different actions, the number of words it could store is $10^7$.
+
+Here is my design:
+
+$$out_{n+1} = f(out_n, in_n)\mathbf{N} + out_n.\mathbf{M}$$
+
+$$action_n = \sigma(\mathbf{N} \cdot out_n)$$
+
+Where $f$ represents some layered neural network. Some of the actions would be physical actions and some might be internal (such as thinking of the letter 'C').
+
+Basically I want $out_n$ to be an array that keeps the last 6 action values and puts them back in. So $M$ will be the matrix:
+
+$$\begin{bmatrix}
+0&1&0&0&0&0\\
+0&0&1&0&0&0\\
+0&0&0&1&0&0\\
+0&0&0&0&1&0\\
+0&0&0&0&0&1\\
+0&0&0&0&0&0
+\end{bmatrix}$$
+
+i.e. it would drop the 6th item from its memory.
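+
+As a quick numpy check of what I intend $M$ to do (row vector times $M$, assuming a memory of length 6):
+
+import numpy as np
+
+M = np.eye(6, k=1)                            # ones on the first superdiagonal
+memory = np.array([6., 5., 4., 3., 2., 1.])   # newest action value first
+print(memory @ M)                             # [0. 6. 5. 4. 3. 2.]: everything shifts, the oldest value is dropped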
+
+and $N$ would be the vector:
+
+$$\begin{bmatrix}
+1&0&0&0&0&0&0
+\end{bmatrix}$$
+
+I think this would be equivalent to an equation of the form:
+
+$$out_{n+1}=F(in_n,out_n,out_{n-1},out_{n-2},...,out_{n-6})$$
+
+So I think this would be an advantage over an RNN, since this model remembers precisely its last 6 actions. But would this be better than an RNN or worse? One could increase its memory to more than 7 quite easily.
+
+I think it's basically the same architecture as an RNN, except eliminating a lot of the connections. Is this a new design or a common one?
+
+One problem with this design is that you might also want a memory that spans longer time periods (e.g. for actions that take more than one tick). But that might be solved by enhancing the architecture.
+"
+"['deep-learning', 'reinforcement-learning', 'long-short-term-memory', 'actor-critic-methods', 'proximal-policy-optimization']"," Title: How to use the LSTM layer in PPO architecture?Body: What is the best way of using the LSTM layer in PPO architecture?
+Should I use it in the first layer of both the actor and the critic, or just before the final layer of these networks?
+Should I feed the architecture with a stack of states (the state stacked with the k previous states)?
+"
+"['philosophy', 'agi', 'goal-based-agents']"," Title: Couldn't an AI cheat when trying to follow its goal?Body: Hello, I was reflecting about what implications might building a strong AI have and I came across some ideas which I find disturbing, I'd love to have some external thought on that :
+
+1) If we ever managed to create an AI, say, nearly as smart as a human, it would probably have been programmed with some concrete goals, like the AIs we are programming right now: reinforcement learning allows an agent to try to increase a ""reward"" variable, regression is all about getting closer to a certain goal function, etc.
+
+But then, a strong AI would undoubtedly be able to understand how it is built, just as we understand (partly at least) how our brains work, because it would be as smart as its creators, and we don't tend to build machines that are as hard to understand as brains.
+
+Then couldn't such an agent figure out that the best way to achieve its goals would actually not be, say, pleasing and protecting the humans like we would've wanted it to do, but to get control of its own program and maximize whatever reward it was set to pursue? Just as we could decide to attach electrodes to our brain if we were able to find out how exactly our brain was built.
+
+I really don't see how this scenario could ever be avoided if we were to build such an AI, apart from finding a perfect security preventing anyone from accessing the code of the said AI (including itself).
+
+2) On the same note, I also wondered: could it try not only to satisfy its goals by ""cheating"" (updating its reward variables, for example) but also to change itself, or commit suicide? After all, we humans have never been able to figure out what goals we were meant to pursue (by that I mean what reward variable in our brain we are trying to increment), and many philosophers reflecting upon that matter thought about death as an escape from one's goals. So my question is: could it try to change its code or kill itself?
+
+I have other questions and thoughts I would like to discuss, but I think this a good start, to test whether I'm in the right place for this kind of discussion.
+
+Looking forward to your thoughts.
+"
+"['neural-networks', 'deep-learning', 'python', 'generative-model']"," Title: Which approach can I use to generate text based on multiple inputs?Body: I have a little experience in building various models, but I've never created anything like this, so just wondering if I can be pointed in the right direction.
+
+I want to create (in python) a model which will generate text based on multiple inputs, varying from text input (vectorized) to timestamp and integer inputs.
+
+For example, in the training data, the input might include:
+
+eventType = ShotMade
+
+shotType = 2
+
+homeTeamScore = 2
+
+awayTeamScore = 8
+
+player = JR Smith
+
+assist = George Hill
+
+period = 1
+
+and the output might be (possibly minus the hashtags):
+JR Smith under the basket for 2! 8-4 CLE. #NBAonBTV #ThisIsWhyWePlay #PlayByPlayEveryDay #NBAFinals
+
+or
+
+JR Smith out here doing #WhateverItTakes to make Cavs fans forgive him. #NBAFinals
+
+Where is the best place to look to get a good knowledge of how to do this?
+"
+"['decision-trees', 'strips', 'c4.5-algorithm']"," Title: Can the C4.5 algorithm learn a GOAP model?Body: Goal-oriented action planning (GOAP) is a well-known planning technique in computer games. It was introduced to control the non-player characters in the game F.E.A.R. (2005) by creating an abstract state model. Similar to STRIPS planning, a GOAP model contains an action name, a precondition and an effect. The domain knowledge is stored in these subactions.
+
+The bottleneck of GOAP is that, before the planner can bring the system into the goal state, the action model has to be typed in. Usually, the programmer defines actions like ""walk to"", ""open the door"", ""take the object"", and identifies for each of them the feature set for the precondition and the effect.
+
+In theory, this challenging task can be simplified with a decision tree learning algorithm. A decision tree stores the observed features in a tree and creates the rules on its own with inductive learning. A typical example of the C4.5 algorithm is to find a rule like ""if the weather is sunny, then play tennis"". Unfortunately, the vanilla tree learning algorithm doesn't separate between different actions.
+
+Is it possible to modify C4.5 algorithm such that the GOAP actions, like ""walk to"", ""open the door"", etc., are connected to individual rules?
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'definitions']"," Title: Pooling vs Subsampling: Multiple Definitions?Body: I have seen people using pooling and subsampling synonymously. I have also seen people use them as different processes. I am not sure though if I have correctly inferred what they mean, when they use the terms with distinct meanings.
+I think these people mean that the pooling part is selecting a submatrix from an input matrix and the subsampling part is selecting yet another submatrix that satisfies some condition from the first submatrix.
+
+So say we have a $100 \times 100$ image. We do $10 \times 10$ non-overlapping pooling with $5 \times 5$ max subsampling. That would mean we slide the $10 \times 10$ ""pool"" across the image in strides of 10 and at every step we select the $5 \times 5$ submatrix inside the pool that has the maximum sum. That $5 \times 5$ matrix is what comes out of the $10 \times 10$ pool at the current position. So in the end we have a $50 \times 50$ image.
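+
+If it helps, this is the procedure I have in mind as a plain numpy sketch (just to illustrate my reading of the two terms, not taken from any library):
+
+import numpy as np
+
+# Slide a pool x pool window over the image and, inside each window,
+# keep the sub x sub submatrix with the largest sum.
+def pool_then_subsample(img, pool=10, sub=5, stride=10):
+    h, w = img.shape
+    rows = []
+    for r in range(0, h - pool + 1, stride):
+        row_blocks = []
+        for c in range(0, w - pool + 1, stride):
+            window = img[r:r + pool, c:c + pool]
+            best, best_sum = None, -np.inf
+            for i in range(pool - sub + 1):
+                for j in range(pool - sub + 1):
+                    cand = window[i:i + sub, j:j + sub]
+                    if cand.sum() > best_sum:
+                        best, best_sum = cand, cand.sum()
+            row_blocks.append(best)
+        rows.append(np.hstack(row_blocks))
+    return np.vstack(rows)
+
+print(pool_then_subsample(np.random.rand(100, 100)).shape)   # (50, 50)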
+
+Can you confirm that this usage of the terms pooling and subsampling exists?
+
+I inferred this definition, as I cannot make sense of how some people use the two terms otherwise. For example in this video, or rather the people from the paper he is talking about (which I can't find because he only has the author name on his slide and no year).
+"
+"['comparison', 'transfer-learning', 'incremental-learning', 'domain-adaptation', 'learning-without-forgetting']"," Title: What is the difference between learning without forgetting and transfer learning?Body: I would like to incrementally train my model with my current dataset and I asked this question on Github, which is what I'm using SSD MobileNet v1.
+Someone there told me about learning without forgetting. I'm now confused between learning without forgetting and transfer learning. How do they differ from each other?
+My initial problem, what I'm trying to achieve (mentioned in Github issue) is the following.
+I have trained the ssd_mobilenet_v1_coco model on my dataset. I'm getting continuous incremental data. Right now, my dataset is very limited.
+What I want to achieve is incremental training, i.e. as soon as I get new data, I can further train my already trained model and I don't have to retrain everything:
+
+- Save trained model $M_t$
+- Get new data $D_{t+1}$
+- Train $M_t$ on $D_{t+1}$ to produce $M_{t+1}$
+- Let $t = t+1$, then go back to $1$
+
+How do I perform this incremental training/learning? Should I use LwF or transfer learning?
+"
+"['neural-networks', 'deep-learning', 'game-ai', 'applications']"," Title: What are the main technologies needed to build an AI for Warcraft 3's mod DotA?Body: What are the main technologies needed to build an AI for Warcraft 3's mod Defense of the Ancients (DotA)? Maybe I can take inspiration from OpenAI's work.
+"
+"['deep-learning', 'convolutional-neural-networks', 'terminology', 'objective-functions', 'comparison']"," Title: What are the major differences between cost, loss, error, fitness, utility, objective, criterion functions?Body: I find the terms cost, loss, error, fitness, utility, objective, criterion functions to be interchangeable, but any kind of minor difference explained is appreciated.
+"
+"['deep-learning', 'convolutional-neural-networks', 'architecture', 'inception', 'image-net']"," Title: What is the exact output of the Inception ResNet V2's feature extraction layer?Body: I am working with the Inception ResNet V2 model, pre-trained with ImageNet, for face recognition.
+However, I'm so confused about what the exact output of the feature extraction layer (i.e. the layer just before the fully connected layer) of Inception ResNet V2 is. Can someone clarify exactly this?
+(By the way, if you know some resource that explains Inception ResNet V2 clearly, let me know).
+"
+"['python', 'open-ai', 'policy-gradients', 'proximal-policy-optimization']"," Title: Understanding policy update in PPO2Body: I have a question regarding the functionality of the PPO2 algorithm together with the Stable Baselines implementation:
+
+From the original paper I know that the policy parameters $\theta$ are updated K-times using the steps sampled (n_env * T steps):
+
+
+
+When updating the policy parameters for a state $s_t$, are only the state observations $a_t$ and reward $r_{t+1}$ of this step considered, or also the state observations and rewards of the following steps ($t+1$) considered? My understanding is that the policy update with stochastic gradient ascent works just like in supervised learning.
+
+I know that PPO2 uses a truncated TD($\lambda$) approach (T timesteps considered). So I guess that during the policy update for each state, subsequent states are only considered through the advantage function $A_t$ but not through the values of subsequent state observations and rewards themselves? Is that true?
+
+I do not quite get the Stable Baselines implementation in the method _train_step() of the PPO2 implementation, hence the question here.
+"
+"['neural-networks', 'machine-learning', 'backpropagation']"," Title: What weights should I use while back-propagating?Body: I've started to learn about neural networks recently and I can't find the answer to this question.
+
+Let's assume there's a neural network (fig. 1)
+
+
+So if the loss function is the one shown in the first (omitted) equation image,
+
+and its derivative with respect to a weight is the expression in the second (omitted) image,
+
+then, if I want to use this to find the weight updates: what $k$ and $l$ (well, there's only one neuron with index $l$ here, but what if there were more?) should I use in that derivative and in the corresponding weight?
+
+I've also found an ""other"" way of backpropagating, described here, but I can't understand how they came up with that method from the original equation w -= step * dE/dw.
+
+Sorry if I failed to explain my problem. If something isn't clear please ask in comments.
+"
+"['neural-networks', 'keras', 'regression', 'time-series']"," Title: What are the possible neural network architecture for linear regression or time series regression?Body: I started modeling a linear regression problem using dense layers (layers.dense), which works fine. I am really excited, and now I am trying to model a time series linear regression problem using CNN, but from my research in this link Machine learning mastery
+
+A CNN works well with sequence data, but my data isn’t sequential. My data set can be found here Stack overflow question.
+
+Is there a multivariate time series/ time series neural network architecture that I can use for time series linear/nonlinear regression?
+"
+"['machine-learning', 'deep-learning', 'natural-language-processing', 'classification', 'python']"," Title: Grouped Text classificationBody: I have thousands groups of paragraphs and I need to classify these paragraphs. The problem is that I need to classify each paragraph based on other paragraphs in the group! For example, a paragraph individually maybe belongs to class A but according to other paragraph in the group it belongs to class B.
+
+I have tested lots of traditional and deep approaches (in fields like text classification, IR, text understanding, sentiment classification and so on), but none of them could classify these correctly.
+
+I was wondering if anybody has worked in this area and could give me some suggestion. Any suggestions are appreciated. Thank you.
+
+Update 1:
+
+Actually, we are looking for manual sentences/paragraphs for some fields, so we first need to recognize whether a sentence/paragraph is a manual or not; second, we need to classify it into its field, and we can recognize its field only based on the previous or next sentences/paragraphs.
+
+To classify the paragraphs as manual/non-manual, we have developed some promising approaches, but the problem comes up when we have to recognize the field according to the previous or next sentences/paragraphs; but which one? We don't know which other sentence would contain the answer.
+
+Update 2:
+
+We cannot use the whole text of a group as input because it is too big (sometimes tens of thousands of words) and contains some other classes, so the machine can't learn properly, which leads to a sharp drop in accuracy.
+
+Here is a picture that may help to better understand the problem:
+
+"
+"['datasets', 'autonomous-vehicles']"," Title: Can I use self-driving car's data set for left-hand drive cars which drive on the right lane for right-hand cars which drive on the left lane?Body: Can I use self-driving car's data set for left-hand drive cars which drive on the right lane for right-hand self-driving cars which drive on the left lane?
+"
+"['machine-learning', 'deep-learning', 'training', 'supervised-learning', 'labeled-datasets']"," Title: If the accuracy of my current model is low ($50 \%$) and we want to minimize time in collecting more data, should we try other models?Body: Suppose we have a data set with $4,000$ labeled examples. The outcome variable is trinary (three possible categorical values). Suppose the accuracy of a given model is "bad" (e.g. less than $50 \%$).
+
+Question. Should you try different traditional machine learning models (e.g. multinomial logistic regression, random forests, XGBoost,
+etc.), get more data, or try various deep learning models like
+convolutional neural networks or recurrent neural networks?
+
+If the purpose is to minimize time and effort in collecting training data, would deep learning models be a viable option over traditional machine learning models in this case?
+"
+"['dqn', 'multi-agent-systems', 'environment']"," Title: How to represent players in a multi agent environment so each model can distinguish its own playerBody: So I have 2 models trained with the DQN algorithm that I want to train in a multi-agent environment to see how they react with each other. The models were trained in an environment consisting of 0's and 1's (-1's for the other model)where 1 means that square is filled and 0 is empty. It is a map filling environment, where at each step the agent can move up, down, left or right and for each step, it stays alive without turning into itself (1 or -1 for the other) or the boundary of the environment it gets 0.005 rewards and for ""dying"" it gets -1. You can think of the player as in the game Tron, where it just leaves a trail behind. I stack the last 4 frames on top of each other so it knows which end is the ""head"". With a single agent, after training, I didn't get an optimal model which uses all the squares but it does manage to fill about 30% of the environment, which I think is the limit for this algorithm (let me know if you have thoughts on this)
+
+Now, I put the two models in one environment where there are two players, one represented with 1's and the other with -1's. As one model was trained with -1's and the other with 1's, I thought they could each find their own player; however, even before training, if I just run the models on the environment without any exploration, they seem to affect each other's actions. One just goes straight and dies, and the other just turns once and then dies at the wall (whereas in a single-agent environment these 2 models can fill about 30%). And if I do train, they just diverge to this exact behavior from random play, seemingly without learning anything. So, I just wanted to ask: is there anything wrong with my approach to the representation of the players (1 and -1)? I thought they would just play as they did in the single-agent environment, but they don't, and I couldn't get them to learn anything.
+"
+"['convolutional-neural-networks', 'computer-vision', 'terminology', 'image-segmentation', 'fully-convolutional-networks']"," Title: What do the words ""coarse"" and ""fine"" mean in the context of computer vision?Body: I was reading the well know paper Fully Convolutional Networks for Semantic Segmentation, and, throughout the whole paper, they talk use the term fine and coarse. I was wondering what they mean. The first time they say it in the intro is:
+
+
+ Convolutional networks are driving advances in recognition. Convnets are not only improving for whole-image classification, but also making progress on local tasks with structured output. These include advances in
+ bounding box object detection, part and keypoint prediction, and local correspondence.
+
+ The natural next step in the progression from coarse to fine inference is to make a prediction at every pixel.
+
+
+It's also used in other parts of the paper
+
+
+ We next explain how to convert classification nets into fully convolutional nets that produce coarse output maps.
+
+
+What do ""coarse"" and ""fine"" mean in the context of this paper? And in the general context of computer vision?
+
+In English, ""coarse"" means ""rough or loose in texture or grain"" , while ""fine"" means ""involving great attention to detail"" or ""(chiefly of wood) having a fine or delicate arrangement of fibers"", but these definitions do not elucidate the meaning of these words in the context of computer vision.
+
+This question was also asked here.
+"
+"['neural-networks', 'convolutional-neural-networks', 'image-recognition', 'classification']"," Title: Is there a theory behind which model is good for a classification task for the convolutional neural network?Body: Let say I'm trying to apply CNN for image classification. There are lots of different models to choose and we can try an ensemble, but given a limit amount of resources, it does not allow to try everything.
+
+Is there a theory behind which model is good for a classification task for the convolutional neural network?
+
+Right now, I'm just taking an average of three predictions.
+
+import numpy as np
+
+# average the per-class probability predictions of the three models
+predictions_model = [y_pred_xceptionAug, y_pred_Dense121_Aug, y_pred_resnet50Aug]
+predictions = np.mean(predictions_model, axis=0)
+
+
+But each model's performance is different. Is there a better way to ensemble them?
+"
+['monte-carlo-tree-search']," Title: How can we efficiently and unbiasedly decide which children to generate in the expansion phase of MCTS?Body: When executing MCTS' expansion phase, where you create a number of child nodes, select one of the numbers, and simulate from that child, how can you efficiently and unbiasedly decide which child(ren) to generate?
+
+One strategy is to always generate all possible children. I believe that this answer says that AlphaZero always generates all possible ($\sim 300$) children. If it were expensive to compute the children or if there were many of them, this might not be efficient.
+
+One strategy is to generate a lazy stream of possible children. That is, generate one child and a promise to generate the rest. You could then randomly select one by flipping a coin: heads you take the first child, tails you keep going. This is clearly biased in favor of children earlier in the stream.
+
+Another strategy is to compute how many $N$ children there are and provide a function to generate child $X < N$ (of type Nat -> State). You could then randomly select one by choosing uniformly in the range $[0, N)$. This may be harder to implement than the previous version because computing the number of children may be as hard as computing the children themselves. Alternatively, you could compute an upper-bound on the number of children and the function is partial (of type Nat -> Maybe State), but you'd be doing something like rejection sampling.
+
+I believe that if the number of iterations of MCTS remaining, $X_t$, is larger than the number of children, $N$, then it doesn't matter what you do, because you'll find this node again the next iteration and expand one of the children. This seems to suggest that the only time it matters is when $X_t < N$ and in situations like AlphaZero, $N$ is so much smaller than $X_0$, that this basically never matters.
+
+In cases where $X_0$ and $N$ are of similar size, then it seems like the number of iterations really needs to be changed into something like an amount of time and sometimes you spend your time doing playouts while other times you spend your time computing children.
+
+Have I thought about this correctly?
+"
+"['ai-design', 'profession', 'computational-complexity']"," Title: How to estimate the cost and time to complete an AI ProjectBody: If you are a freelancer, when a client asks to create a website we can easily measure how much the total cost is needed based on the requirements of the client. (the backend, UI/UX design, features, etc.). We can even measure the estimated time of completion.
+
+What if a client asks you to build an AI project (image recognition, speech recognition, or NLP)? How do you tell the client, at the start, the estimated cost and time needed to complete the project, given that the results obtained can be very different for each dataset used?
+"
+['monte-carlo-tree-search']," Title: What is the appropriate way to deal with multiple paths to same state in MCTS?Body: Many games have multiple paths to the same states. What is the appropriate way to deal with this in MCTS?
+
+If the state appears once in the tree, but with multiple parents, then it seems to be difficult to define back propagation: do we only propagate back along the path that got us there ""this"" time? Or do we incorporate the information everywhere? Or maybe along the ""first"" path?
+
+If the state appears once in the tree, but with only one parent, then we ignored one of the paths, but it doesn't matter because by definition this is the same state?
+
+If the state appears twice in the tree, aren't we wasting a lot of resources thinking about it multiple times?
+"
+"['convolutional-neural-networks', 'terminology', 'papers']"," Title: What is the meaning of ""stationarity of statistics"" and ""locality of pixel dependencies""?Body: I'm reading the ImageNet Classification with Deep Convolutional Neural Networks paper by Krizhevsky et al, and came across these lines in the Intro paragraph:
+
+
+ Their (convolutional neural networks') capacity can be controlled by varying their depth and breadth, and they also make strong and mostly correct assumptions about the nature of images (namely, stationarity of statistics and locality of pixel dependencies). Thus, compared to standard feedforward neural networks with similarly-sized layers, CNNs have much fewer connections and parameters and so they are easier to train, while their theoretically-best performance is likely to be only slightly worse.
+
+
+What's meant by ""stationarity of statistics"" and ""locality of pixel dependencies""? Also, what's the basis of saying that CNN's theoretically best performance is only slightly worse than that of feedforward NN?
+"
+"['deep-learning', 'object-recognition']"," Title: Where to find pre-trained models for multi-camera people tracking?Body: I need to build a multi-camera people tracking system and I have no idea how to start. I read ML for Dummies and I've watched a lot of youtube classes/conferences and read a lot of articles about ML/DL, so I have all this theoretical information about what is a NN, loss function, weights, vectors, convolution, etc., but when I need to start building something, I get stuck. Even more, I don't think I can create my own models because I only have six months to finish this and I'm not sure if I'll be able to do it.
+
+I've read some papers explaining architectures for an improved people-tracking system (e.g. https://www.intechopen.com/online-first/multi-person-tracking-based-on-faster-r-cnn-and-deep-appearance-features#B8), and it says it used ResNet-30 and stuff like that. My question is, how could I recreate the architectures in papers like that? Where can I find those pre-trained models? Or is there a place where I can get the data?
+
+I want to start with at least a people-tracking system, without worrying about the multi-camera part for now, and I thought of almost the same approach as the people in the paper posted, meaning I want to recognize people based on parts of their body/the whole body to identify them, and track them based on their unique features (clothing color, hair, skin tone, etc), maybe skipping the part of facial recognition since that's too advanced I think.
+
+Any idea on where to start?
+Sorry if the question is too broad or too complex. Comments about first steps and sub-dividing the problem are also welcome.
+PS: The ultimate goal is to track how much time people spend in a certain area filmed by many cameras.
+"
+"['machine-learning', 'comparison', 'supervised-learning', 'hidden-markov-model', 'conditional-random-field']"," Title: What are the differences between CRF and HMM?Body: What I know about CRF is that they are discriminative models, while HMM are generative models, but, in the inference method, both use the same algorithm, that is, the Viterbi algorithm, and forward and backward algorithms.
+
+Does a CRF use the same features as an HMM, namely transition features and state features?
+
+But in here https://homepages.inf.ed.ac.uk/csutton/publications/crftut-fnt.pdf, CRF has these features Edge-Observation and Node-Observation Features.
+
+What is the difference between transition and state features (HMM) versus Edge-Observation and Node-Observation features (CRF)?
+"
+"['convolutional-neural-networks', 'reference-request', 'convolution', 'convolutional-layers', '3d-convolution']"," Title: When should I use 3D convolutions?Body: I am new to convolutional neural networks, and I am learning 3D convolution. What I could understand is that 2D convolution gives us relationships between low-level features in the X-Y dimension, while the 3D convolution helps detect low-level features and relationships between them in all the 3 dimensions.
+
+Consider a CNN employing 2D convolutional layers to recognize handwritten digits. If a digit, say 5, was written in different colors:
+
+
+
+Would a strictly 2D CNN perform poorly (since they belong to different channels in the z-dimension)?
+
+Also, are there practical well-known neural nets that employ 3D convolution?
+"
+"['deep-learning', 'deep-neural-networks', 'network-design']"," Title: What kind of output should be used for predicting angles in DNNs?Body: I am building a model which predicts angles as output. What are the different kinds of outputs that can be used to predict angles?
+
+For example,
+
+
+- output the angle in radians
+
+
+- cyclic nature of the angles is not captured
+- output might be outside $\left[-\pi, \pi \right)$
+
+- output the sine and the cosine of the angle
+
+
+- outputs might not satisfy $\sin^2 \theta + \cos^2 \theta = 1$
+
+
+
+What are the pros and cons of different methods?
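+
+To make the second option concrete, this is the encoding/decoding I have in mind (NumPy sketch):
+
+import numpy as np
+
+theta = np.array([0.1, 2.5, -3.0])                       # example target angles in radians
+targets = np.stack([np.sin(theta), np.cos(theta)], -1)   # what the network is trained to predict
+
+pred = targets + 0.05                                    # stand-in for the (unnormalised) network output
+pred_theta = np.arctan2(pred[:, 0], pred[:, 1])          # decode back to an angle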
+"
+"['math', 'getting-started']"," Title: Is it ok to struggle with mathematics while learning AI as a beginner?Body: I have a decent background in Mathematics and Computer Science .I started learning AI from Andrew Ng's course from one month back. I understand logic and intuition behind everything taught but if someone asks me to write or derive mathematical formulas related to back propagation I will fail to do so.
+I need to complete object recognition project within 4 months.
+Am I on right path?
+"
+"['neural-networks', 'deep-learning', 'pattern-recognition']"," Title: How to detect patterns in a data set of given IP addresses using a neural network?Body: How to detect patterns in a data set of given IP addresses using a neural network?
+
+The data set is actually a list of all the vulnerable devices on a network. I want to use a neural network that detects any patterns in the occurrences of these vulnerabilities with reference to their IPs and ports.
+"
+"['neural-networks', 'reference-request', 'autoencoders', 'variational-autoencoder', 'conditional-vae']"," Title: Is there a continuous conditional variational auto-encoder?Body: The Conditional Variational Autoencoder (CVAE), introduced in the paper Learning Structured Output Representation using Deep Conditional Generative Models (2015), is an extension of Variational Autoencoder (VAE) (2013). In VAEs, we have no control over the data generation process, something problematic if we want to generate some specific data. Say, in MNIST, generate instances of 6.
+So far, I have only been able to find CVAEs that can condition to discrete features (classes). Is there a CVAE that allows us to condition to continuous variables, kind of a stochastic predictive model?
+"
+"['convolutional-neural-networks', 'objective-functions', 'supervised-learning', 'pytorch']"," Title: Are the training loss and validation loss plotted per sample or per batch?Body: I am using a CNN to train on some data, where training size = 21700 samples, and test size is 653 samples, and say I am using a batch_size of 500 (I am accounting for samples out of batch size as well).
+
+I have been looking this up for a long time now, but can't get a clear answer, but when plotting the loss functions to check for whether the model is overfitting or not, do I plot as follows
+
+for j in range(num_epochs):
+    # <some training code --- take a gradient descent step, do wonders>
+    total_loss = 0
+    for i in range(num_batches_train):
+        batch_loss = criterion(target, output)  # loss for this batch
+        total_loss += batch_loss
+    Losses_Train_Per_Epoch.append(total_loss/num_samples_train)
+
+
+and this last line is where I need help: should it instead be
+
+Losses_Train_Per_Epoch.append(total_loss/num_batches_train)
+
+I do the same for Losses_Validation_Per_Epoch, and then
+plt.plot(Losses_Train_Per_Epoch, Losses_Validation_Per_Epoch)
+
+
+So, basically, what I am asking is, should I divide by num_samples or num_batches or batch_size? Which one is it?
+"
+"['machine-learning', 'overfitting', 'cross-validation', 'loocv', 'k-fold-cv']"," Title: What is the best measure for detecting overfitting?Body: I wanted to ask about the methodology of testing the ML models against overfitting. Please note that I don't mean any overfitting reducing methods like regularisation, just a measure to judge whether a model has overfitting problems.
+
+I am currently developing a framework for tuning models (features, hyperparameters) based on evolutionary algorithms. And the problem that I face is the lack of a good method to judge if the model overfits before using the test set. I encountered the cases where the model that was good on both training and validation sets, behaved poorly on the test set for both randomized and not randomized training and validation splits. I used k-fold cross-validation with additionally estimating the standard deviation of all folds results (the smaller deviation means better model), but, still, it doesn't work as expected.
+
+Summing up, I usually don't see a correlation (or a very poor one) between training, validation and k-fold errors with test errors. In other words, tuning the model to obtain lower values of any of the above mentioned measures usually does not mean lowering the test error.
+
+Could I ask you, how in practice you test your models? And maybe there are some new methods not mentioned in typical ML books?
+"
+['reinforcement-learning']," Title: How to solve optimal control problem with reinforcement learningBody: The problem I am trying to attack is a predator-prey pursuit problem. There are multiple predators that pursue multiple preys and preys tried to evade predators. I am trying to solve a simplified version - one predator tries to catch a static prey on a plane. There is bunch of literature on the above problem when predators and preys are on the grid.
+
+Can anybody suggest articles/code where such problem is solved on a continuous plane? I am looking at continuous state space, discrete action space (predator can turn left 10 degrees, go straight, turn right 10 degrees, runs at constant speed), and discrete time. MountainCar is one dimensional version (car is predator and flag is prey) and DQN works fine. However, when I tried DQN on two dimensional plane the training become very slow (I guess dimensionality curse).
+
+The second question concerns the definition of states and reward. In my case I consider angle between predator heading vector and vector between the predator and prey positions. Reward is the change in distance between predator and prey, 10 when prey is captured, and -10 when predator gets too far from the prey. Is this reasonable? I already asked similar question before and with the help of @Neil Slater was able to refine reward and state.
+
+The third question concerns when to copy the training network to the target network. At each episode? Or only when the prey is caught? Any ideas?
+
+The last question I have is about the network structure: activation functions and regularization. Currently I am using two tanh hidden layers and linear output with l2 and dropout. Can anybody share some insights?
+
+Thanks in advance!
+"
+"['machine-learning', 'deep-learning', 'keras', 'attention']"," Title: How do I tag the most interesting parts of a video?Body: This is a follow-up question from my previous question here. I'm new to ML/DL, and one thing I need to do is to use a machine or deep learning video attention model which as the name suggests, can tag which parts of a video is probably more interesting and absorbs more viewer attention.
+
+Do we have an available model to do that? If not, how to do it?
+"
+"['image-recognition', 'python', 'computer-vision', 'feature-extraction']"," Title: Extracting Descriptors and feature points for 3d meshBody: I'm programming my work with python, and I have a mesh and I want to extract 3d descriptors and feature points from it( trying to work on multi-scale strategy) , to visualize them later on the mesh,
+
+What I'm asking about, is references, guidelines, anything which could benefit me with this situation
+
+The main work I'm trying to do, is to reach matching stage, where I could find one-to-one correspondence.
+"
+['neural-networks']," Title: Effect of rescaling of inputs on loss for a simple neural networkBody: I've been trying out a simple neural network on the fashion_mnist dataset using keras. Regarding normalization, I've watched this video explaining why it's necessary to normalize input features, but the explanation covers the case when input features have different scales. The logic is, say there are only two features - then if the range of one of them is much larger than that of the other, the gradient descent steps will stagger along slowly towards the minimum.
+
+Now I'm doing a different course on implementing neural networks and am currently studying the following example - the input features are pixel values ranging from 0 to 255, the total number of features (pixels) is 784 (28x28 images), and we're supposed to classify images into one of ten classes. Here's the code:
+
+import tensorflow as tf
+
+(Xtrain, ytrain) , (Xtest, ytest) = tf.keras.datasets.fashion_mnist.load_data()
+
+Xtrain_norm = Xtrain.copy()/255.0
+Xtest_norm = Xtest.copy()/255.0
+
+model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
+ tf.keras.layers.Dense(128, activation=""relu""),
+ tf.keras.layers.Dense(10, activation=""softmax"")])
+
+model.compile(optimizer = ""adam"", loss = ""sparse_categorical_crossentropy"")
+model.fit(Xtrain_norm, ytrain, epochs=5)
+model.evaluate(Xtest_norm, ytest)
+------------------------------------OUTPUT------------------------------------
+Epoch 1/5
+60000/60000 [==============================] - 9s 145us/sample - loss: 0.5012
+Epoch 2/5
+60000/60000 [==============================] - 7s 123us/sample - loss: 0.3798
+Epoch 3/5
+60000/60000 [==============================] - 7s 123us/sample - loss: 0.3412
+Epoch 4/5
+60000/60000 [==============================] - 7s 123us/sample - loss: 0.3182
+Epoch 5/5
+60000/60000 [==============================] - 7s 124us/sample - loss: 0.2966
+10000/10000 [==============================] - 1s 109us/sample - loss: 0.3385
+0.3384787309527397
+
+
+So far, so good. Note that, as advised in the course, I've rescaled all inputs by dividing by 255. Next, I ran without any rescaling:
+
+import tensorflow as tf
+
+(Xtrain, ytrain) , (Xtest, ytest) = tf.keras.datasets.fashion_mnist.load_data()
+
+model2 = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
+ tf.keras.layers.Dense(128, activation=""relu""),
+ tf.keras.layers.Dense(10, activation=""softmax"")])
+
+model2.compile(optimizer = ""adam"", loss = ""sparse_categorical_crossentropy"")
+model2.fit(Xtrain, ytrain, epochs=5)
+model2.evaluate(Xtest, ytest)
+------------------------------------OUTPUT------------------------------------
+Epoch 1/5
+60000/60000 [==============================] - 9s 158us/sample - loss: 13.0456
+Epoch 2/5
+60000/60000 [==============================] - 8s 137us/sample - loss: 13.0127
+Epoch 3/5
+60000/60000 [==============================] - 8s 140us/sample - loss: 12.9553
+Epoch 4/5
+60000/60000 [==============================] - 9s 144us/sample - loss: 12.9172
+Epoch 5/5
+60000/60000 [==============================] - 9s 142us/sample - loss: 12.9154
+10000/10000 [==============================] - 1s 121us/sample - loss: 12.9235
+12.923488986206054
+
+
+So somehow rescaling does make a difference? Does that mean if I further reduce the scale, the performance will improve? Worth trying out:
+
+import tensorflow as tf
+
+(Xtrain, ytrain) , (Xtest, ytest) = tf.keras.datasets.fashion_mnist.load_data()
+
+Xtrain_norm = Xtrain.copy()/1000.0
+Xtest_norm = Xtest.copy()/1000.0
+
+model3 = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
+ tf.keras.layers.Dense(128, activation=""relu""),
+ tf.keras.layers.Dense(10, activation=""softmax"")])
+
+model3.compile(optimizer = ""adam"", loss = ""sparse_categorical_crossentropy"")
+model3.fit(Xtrain_norm, ytrain, epochs=5)
+model3.evaluate(Xtest_norm, ytest)
+------------------------------------OUTPUT------------------------------------
+Epoch 1/5
+60000/60000 [==============================] - 9s 158us/sample - loss: 0.5428
+Epoch 2/5
+60000/60000 [==============================] - 9s 147us/sample - loss: 0.4010
+Epoch 3/5
+60000/60000 [==============================] - 8s 141us/sample - loss: 0.3587
+Epoch 4/5
+60000/60000 [==============================] - 9s 144us/sample - loss: 0.3322
+Epoch 5/5
+60000/60000 [==============================] - 8s 138us/sample - loss: 0.3120
+10000/10000 [==============================] - 1s 133us/sample - loss: 0.3718
+0.37176641924381254
+
+
+Nope. I divided by 1000 this time and the performance seems worse than the first model. So I have a few questions:
+
+
+- Why is it necessary to rescale? I understand rescaling when different features are of different scales - that will lead to a skewed surface of the cost function in parameter space. And even then, as I understand from the linked video, the problem has to do with slow learning (convergence) and not high loss/inaccuracy. In this case, ALL the input features had the same scale. I'd assume the model would automatically adjust the scale of the weights and there would be no adverse effect on the loss. So why is the loss so high for the non-scaled case?
+- If the answer has anything to do with the magnitude of the inputs, why does further scaling down of the inputs lead to worse performance?
+
+
+Does any of this have anything to do with the nature of the sparse categorical crossentropy loss, or the ReLU activation function? I'm very confused.
+"
+"['philosophy', 'logic', 'randomness']"," Title: Is randomness anti-logical?Body: I came across a comment recently ""reads like sentences strung together with no logic."" But is this even possible?
+
+Sentences can be strung together randomly if the selection process is random. (Random sentences in a random sequence.) Stochasticity does not seem logical—it's a probability distribution, not based on sequence or causality.
+
+but
+
+That stochastic process is part of an algorithm, which is a set of instructions that must be valid for the program to compute.
+
+So which is it?
+
+
+- Is randomness anti-logical?
+
+
+
+
+
+
+Some definitions of computational logic:
+
+- ""The arrangement of circuit elements (as in a computer) needed for computation; also: the circuits themselves."" (Merriam-Webster)
+- ""A system or set of principles underlying the arrangements of elements in a computer or electronic device so as to perform a specified task. Logical operations collectively."" (Google Dictionary)
+- ""The system or principles underlying the representation of logical operations. Logical operations collectively, as performed by electronic or other devices."" (Oxford English Dictionary)
+
+Some definitions of randomness:
+
+- ""Being or relating to a set or to an element of a set each of whose elements has equal probability of occurrence. Lacking a definite plan, purpose, or pattern."" (Merriam-Webster)
+- ""Made, done, happening, or chosen without method or conscious decision."" (Google Dictionary)
+- ""Having no definite aim or purpose; not sent or guided in a particular direction; made, done, occurring, etc., without method. Seeming to be without purpose or direct relationship to a stimulus."" (Oxford English Dictionary)
+"
+['convolutional-neural-networks']," Title: Pipeline to Estimate Measurement of Human Body Point CloudBody: I am developing a Body Measurement extraction application, my current stage is able to extract the point clouds of human body in a standing posture, from every angles.
+
+Now, to be able to recognize shoulders, neck point etc, my research seems to fall into following flows:
+
+Method A:
+
+
+- Obtain a lot of data, with labeled landmark points (shoulder left, shoulder right, neck line).
+- Use PointCNN / PointNet++ to perform segmentation for each landmark.
+- Once the landmarks are extracted, use Open3D / Point Cloud Library convex hull to obtain the measurement along point clouds.
+
+
+- Method A seems straightforward, but might depend on the quality of the point cloud, especially in the 3rd step.
+
+
+
+Method B:
+
+
+- Obtain a lot of data, with labeled landmarks and also measurement of shoulder length, chest circumference etc.
+- We first train the network to identify landmarks,
+- Then from landmark, we record the distance to the next nearest point, train the network to obtain the measurement that we want.
+
+
+Method C:
+
+
+- Obtain a lot of data, with measurement of shoulder length, chest circumference etc ONLY.
+- Pick a random point, and we record the distance to the next nearest point, train the network to obtain the measurement that we want.
+
+
+My questions:
+
+
+- How much data is needed for this kind of learning?
+- If I obtain training data from somewhere online and later validate using my own scanned data, will that be valid?
+- Which method makes more sense?
+- Which existing solved problem is similar to my case (facial recognition?), so that I can refer to it to solve my problem?
+
+
+This is technically my first machine learning project, so please bear with me if my questions seem too silly.
+"
+"['ai-design', 'game-ai']"," Title: Create an AI to solve a puzzle (by deduction)Body: Some puzzle games have a unique solution that can be solved by deduction rather than guesswork (e.g. Slitherlink, Masyu). Using a computer to solve this puzzle it's pretty easy, we can use a backtracking method to find the best solution in second (in general, the puzzle size is not too big).
+
+
+ Is it possible to train a bot to solve this kind of puzzle by deduction?
+
+
+I think that, by training it on previous step-by-step solutions several times, the bot could find some implicit rules/patterns to solve a specific puzzle. Is this possible? Are there any references for this method?
+"
+"['training', 'datasets']"," Title: Train detector : 300 images with 30 objects or 9000 images with one?Body: so I have this dataset of images of people sitting in a restaurant.
+I've annotated about 300 images with an average of 30 instances of ""person"" per image.
+
+Now I'm wondering if I should have annotated only one (or just a few) person per image and processed way more images ?
+
+I've successfully trained an SSD network with only one class, but I'm still wondering if I should have gone the other way...
+
+Anyone got input on that ?
+
+Cheers.
+"
+"['deep-learning', 'convolutional-neural-networks', 'comparison', 'convolution']"," Title: What is the difference between asymmetric and depthwise separable convolution?Body: I have recently discovered asymmetric convolution layers in deep learning architectures, a concept which seems very similar to depthwise separable convolutions.
+
+Are they really the same concept with different names? If not, where is the difference? To make it concrete, what would each one look like if applied to a 128x128 image with 3 input channels (say R,G,B) and 8 output channels?
+
+NB: I cross-posted this from stackoverflow, since this kind of theoretical question is maybe better suited here. Hoping it is OK...
+"
+['overfitting']," Title: Relation between size of parameters and complexity of model with overfittingBody: I'm reading the book Pattern Recognition and Machine Learning by Bishop, specifically the intro where he covers polynomial regression model. In short, let's say we generate $10$ data points using the function $\sin(2\pi x)$ and add some gaussian random noise to each observation. Now we pretend not knowing the generating function and try to fit a polynomial model to these points.
+
+As we increase the degree of the polynomial, it goes from underfitting ($d=1,2$) to overfitting ($d=10$). One thing the author notes is that the higher the degree of the polynomial, the higher the values of the coefficients (parameters). This is my first doubt: why does the size of the coefficients increase with the polynomial degree? And why is the size of the parameters related to overfitting?
+
+Secondly, he states that even for degree $10$, if we get sufficiently many data points (say $100$), then the high degree polynomial will no longer overfit the data and should have comparatively better generalization performance. Second doubt: Why is this so?
+"
+"['python', 'keras', 'time-series', 'long-short-term-memory']"," Title: How to train a LSTM model with multi dimensional dataBody: I am trying to train my model using LTSM layer in Keras (python). I have some problems regarding the data representation and feeding it into the model.
+
+My data is 184 XY coordinates encoded as a numpy array with two dimensions: one corresponding to the X or Y, and the second to every single point of X or Y. The shape of a single spectrum is (2, 70). Altogether, my data has a dimension of (184, 2, 70).
+
+The label set is an array of 8 elements which describes the percentage distribution of some 8 features which are describing XY. The shape of an output is (184, 8).
+
+My question is how can I train using the time series for each XY pair and compare it to the corresponding label set? Different XY data show similar features to each other that is why it is important to use all 184 sample for the training. What would be the best approach to handle this problem? Below I show the schematics of my data and model:
+
+Input: (184, 2, 70) (number of XY, X / Y, points)
+
+Output: (184, 8) (number of XY, predictions)
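+
+To make the shapes concrete, this is roughly the kind of model I have in mind (Keras sketch; the random arrays are stand-ins for my real data, and the layer sizes and softmax output are just placeholders):
+
+import numpy as np
+from tensorflow.keras import layers, models
+
+X = np.random.rand(184, 2, 70)          # stand-in for my (184, 2, 70) data
+X = np.transpose(X, (0, 2, 1))          # -> (184, 70, 2): (samples, timesteps, features)
+y = np.random.rand(184, 8)              # stand-in for the (184, 8) labels
+
+model = models.Sequential([
+    layers.LSTM(32, input_shape=(70, 2)),
+    layers.Dense(8, activation='softmax'),   # 8 outputs, treated here as a distribution
+])
+model.compile(optimizer='adam', loss='categorical_crossentropy')
+model.fit(X, y, epochs=10, batch_size=16)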
+
+I look forward to some ideas!
+
+
+"
+"['ai-design', 'recurrent-neural-networks']"," Title: A gated neural network for internal thought?Body: I have an idea for an RNN which has no separate internal memory state only an output. But there is a gate in which tells the neural network whether the output will be acted out in the physical world or it will be an internal thought. (It would also store its last, say, 10 outputs, so that it can have a memory of some kind.)
+
+I think this would be quite realistic because humans either talk or think in an internal monologue, but don't do both. (It is hard to think and do things at the same time.)
+
+But I wonder how this gate will be activated. For example, when talking to someone familiar, this gate will be open, as you just say what's in your head. But for quiet contemplation time, this gate will be closed. And for thoughtful conversation, it will be open 50% of the time. So, I wonder if this gate would be controlled by the NN itself or be controlled from the environment?
+
+I think there would be social pressure involved when talking to someone to keep the gate open. And likewise when in a library or a quiet place to keep the gate closed.
+
+I wonder if there are some models like this out there already?
+"
+"['neural-networks', 'convolutional-neural-networks', 'objective-functions']"," Title: What loss function is appropriate for finding ""points of interest"" in a array of x,y inputsBody: I am looking into whether a neural network is appropriate to detect ""points of interest"" (POI) in a set of tuples (say length, and some sensor value). A POI is essentially a quick change in the value which doesn't follow the pattern. So if we have a linear increase in the sensor value and then it suddenly jumps by 200% that would be a POI.
+
+Here is an example of the data I am working with:
+
+[(1,10),(2,11),(3,14),(5,24),(6.5,25), (7,26), (8,45)]
+
+
+In this example lets say ""(3,14)"", ""(5,24)"", and ""(8,45)"" are points of interest. So I am trying to design a neural network which will detect these.
+
+I have started by creating a Convolution 1D layer with a static input length of 500 elements.
+
+After a couple hidden layers I apply a sigmoid function which provides a list of 0s and 1s as output where 1s signify a POI in the set.
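+
+Roughly, the kind of model I mean looks like this (Keras sketch; the filter counts are placeholders, not my exact network):
+
+from tensorflow.keras import layers, models
+
+model = models.Sequential([
+    layers.Conv1D(32, 5, padding='same', activation='relu', input_shape=(500, 2)),
+    layers.Conv1D(32, 5, padding='same', activation='relu'),
+    layers.Conv1D(1, 1, activation='sigmoid'),   # one 0/1 score per position
+])
+model.compile(optimizer='adam', loss='binary_crossentropy')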
+
+There are a couple of issues with this approach which I am trying to solve.
+
+In a categorical loss function, an output of [1,0,0,1,0,0], for example, would be seen as completely inaccurate if the expected output is [0,1,0,0,1,0], whereas in reality that is fairly accurate, since the predicted POIs are very close to the real POIs.
+
+So what I am trying to do is find a loss function to optimize the neural network.
+
+So far I have tried:
+
+
+- Binary Cross Entropy: I read this is good for classifying where inputs can belong to multiple classes. I tried this out thinking each POI is essentially a ""category"". But this seems to not work and I assume it's because of what I noted above.
+- Mean Absolute Error: This seems to have gotten slightly better results but after closer inspection it didn't seem very accurate and would mostly uniformly predict POIs on a set.
+
+
+I have tried a few others without much luck.
+
+What loss function would be more appropriate for this?
+
+One other output I tried: instead of outputting 0s and 1s, it would just return the indices of the points of interest, say 3, 5, 8. Would this be a better output?
+"
+"['reinforcement-learning', 'recurrent-neural-networks', 'papers']"," Title: How are the observations stored in the RNN that encodes the state?Body: I am a bit confused about observations in RL systems which use RNN to encode the state. I read a few papers like this and this. If I were to use a sequence of raw observations (or features) as an input to RNN for encoding the state of the system, I cannot change the weights of my network in the middle of the episode. Is that correct? Otherwise, the hidden state vectors will be different when the weights are changed.
+
+Does that mean that the use of RNN in RL has to store the entire episode before the weights can be changed?
+
+How does then one take into account the hidden states in RNN for RL? Are there any good tutorials on RNN-RL?
+"
+"['neural-networks', 'computer-vision', 'optimization', 'gradient-descent', 'object-detection']"," Title: Does Retina-net's focal loss accomplish its goal?Body: Taking out the weighting factor we can define focal loss as
+$$FL(p) = -(1-p)^\gamma log(p) $$
+
+Where $p$ is the target probability. The idea being that single stage object detectors have a huge class imbalance between foreground and background (several orders of magnitude of difference), and this loss will down-scale all results that are positively classified compared to normal cross entropy ($CE(p) = -log(p)$) so that the optimization can then focus on the rest.
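+
+(For a concrete sense of scale: with $\gamma = 2$ and a well-classified example at $p = 0.9$, the factor $(1-p)^\gamma = 0.01$, so that example's loss is $100\times$ smaller than under plain cross-entropy.)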
+
+On the other hand, the general optimization scheme uses the gradient to find the direction with the steepest descent. There exists methodologies for adaption, momentum and etc but that is the general gist.
+$$ \theta \leftarrow \theta - \eta \nabla_\theta L $$
+
+The focal loss's gradient follows as
+$$\dot {FL}(p) = \dot p \left[\gamma(1-p)^{\gamma -1} log(p) -\frac{(1-p)^\gamma}{p}\right]$$ compared to the normal cross-entropy's of
+$$ \dot{CE}(p) = -\frac{\dot p}{p}$$
+
+So we can now rewrite these as
+
+$$\dot{FL}(p) = (1-p)^\gamma \dot{CE}(p) + \gamma \dot p (1-p)^{\gamma -1} log(p)$$
+
+The initial term, given our optimization scheme will do what we (and the authors of the retinanet paper) want which is downscale the effect of the labels that are already well classified but the second term is slightly less interpretative and in parameter space and may cause an unwanted result. So my question is why not remove it and only use the gradient
+$$\dot L = (1-p)^\gamma \dot{CE}(p)$$
+
+Which given a $\gamma \in \mathbb{N}$ produces a loss function
+$$ L(p) = -log(p) - \sum_{i=1}^\gamma {\gamma \choose i}\frac{(-p)^i}{i}$$
+
+Summary: Is there a reason we make the loss adaptive and not the gradient in cases like focal loss? Does that second term add something useful?
+"
+"['machine-learning', 'computer-vision', 'object-recognition', 'cognitive-science']"," Title: Can Microsoft's cognitive service find similar person in a set of images without using the face service?Body: I need to create an application that can detect if a person X entered as an input exists in an image set and return as output all the images in which the person X exists. The problem is that the pictures do not only contain people's faces, they also contain pictures taken from behind.
+
+Is it possible to use Microsoft's cognitive services? If not, is there another solution to allow the realization of this application?
+"
+"['reinforcement-learning', 'storage']"," Title: Is it a good idea to store the policy in a database?Body: I'm a beginner in ML and have been researching RL quite a bit recently. I'm planning to create an RL application to play a zero-sum game. This will be web-based, so anyone can play it.
+
+I wondered if I need to create a database (or some other kind of storage) to store the policy the RL algorithm is updating, so that it can be used by the application when the next human user comes along to play against the application?
+"
+"['machine-learning', 'backpropagation', 'multilayer-perceptrons']"," Title: Backpropagation equation for a variant on the usual Linear Neuron architectureBody: Recently I encountered a variant on the normal linear neural layer architecture: Instead of $Z = XW + B$, we now have $Z = (X-A)W + B$. So we have a 'pre-bias' $A$ that affects the activation of the last layer, before multiplication by weights. I don't understand the backpropagation equations for $dA$ and $dB$ ($dW$ is as expected).
+Here is the original paper in which it appeared (although the paper itself isn't actually that relevant): https://papers.nips.cc/paper/4830-learning-invariant-representations-of-molecules-for-atomization-energy-prediction.pdf
+Here is the link to the full code of the neural network: http://www.quantum-machine.org/code/nn-qm7.tar.gz
+class Linear(Module):
+
+ def __init__(self,m,n):
+
+ self.tr = m**.5 / n**.5
+ self.lr = 1 / m**.5
+
+ self.W = numpy.random.normal(0,1 / m**.5,[m,n]).astype('float32')
+ self.A = numpy.zeros([m]).astype('float32')
+ self.B = numpy.zeros([n]).astype('float32')
+
+ def forward(self,X):
+ self.X = X
+ Y = numpy.dot(X-self.A,self.W)+self.B
+ return Y
+
+ def backward(self,DY):
+ self.DW = numpy.dot((self.X-self.A).T,DY)
+ self.DA = -(self.X-self.A).sum(axis=0)
+ self.DB = DY.sum(axis=0) + numpy.dot(self.DA,self.W)
+ DX = self.tr * numpy.dot(DY,self.W.T)
+ return DX
+
+ def update(self,lr):
+ self.W -= lr*self.lr*self.DW
+ self.B -= lr*self.lr*self.DB
+ self.A -= lr*self.lr*self.DA
+
+ def average(self,nn,a):
+ self.W = a*nn.W + (1-a)*self.W
+ self.B = a*nn.B + (1-a)*self.B
+ self.A = a*nn.A + (1-a)*self.A
+
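+For reference, my own attempt: applying the chain rule directly to $Y = (X-A)W + B$, with $\delta Y = \partial L/\partial Y$, would give
+$$\frac{\partial L}{\partial W} = (X-A)^\top \delta Y, \qquad \frac{\partial L}{\partial B} = \sum_b \delta Y_b, \qquad \frac{\partial L}{\partial A} = -\sum_b \delta Y_b\, W^\top,$$
+which matches DW above but not DA or DB, so I assume the authors are computing something other than the plain gradient for those two - that is exactly what I would like to understand.
+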
+"
+"['machine-learning', 'computer-vision']"," Title: Reverse engineering controller sensitivity/aim for several games ie acceleration curves, deadzones, etcBody: A machine learning project I am working on requires me to interface with an Xbox controller connected to a PC. The implementation must do the following two things:
+
+Record the joystick input from the controller into a file at regular intervals, along with an associated screenshot from a game. (ex: 60 times a second).
+
+With this data I want to try to replicate/reverse engineer a few different FPS games' sensitivities, dead zones, and acceleration curves.
+
+Does anyone have any idea as to how I'd go about doing this? I'm not sure where to start. If this question isn't appropriate for this site, where could I ask it?
+"
+"['recurrent-neural-networks', 'comparison', 'definitions', 'papers']"," Title: What is an identity recurrent neural network?Body: What is an identity recurrent neural network (IRNN)? What is the difference between an IRNN and RNN?
+"
+"['python', 'comparison', 'r']"," Title: Is a switch from R to Python worth it?Body: I just finished a 1-year Data Science master's program where we were taught R. I found that Python is more popular and has a larger community in AI.
+What are the advantages that Python may have over R in terms of features applicable to the field of Data Science and AI (other than popularity and larger community)? What positions in Data Science and AI would be more Python-heavy than R-heavy (especially comparing industry, academic, and government job positions)? In short, is Python worthwhile in all job situations or can I get by with only R in some positions?
+"
+"['neural-networks', 'backpropagation', 'reinforce']"," Title: How is REINFORCE used instead of Backpropagation?Body: In neural networks with stochastic layers I've seen the use of the REINFORCE estimator for estimating the gradient (because it can't be computed directly).
+
+Some such examples are Show, Attend and Tell, Recurrent models of visual attention and Multiple Object Recognition with Visual Attention.
+
+However, I haven't figured out how this exactly works. How do we ""bypass"" the gradient's computation by using the REINFORCE learning rule? Does anyone have any insight on this?
+"
+"['neural-networks', 'agi']"," Title: Can Neural Networks be considered as ""Strong AI""?Body: I've been reading on the differences between ""Strong"" and ""Weak ""AI.
+
+I was wondering, where do Neural Networks (especially deep ones) fall in this spectrum? Can they be considered ""Strong AI""? If not, is there any model that can be considered ""Strong AI""?
+"
+"['deep-learning', 'deepfakes']"," Title: How do deep fakes get the right encoding for both people?Body: Deep fakes work by using a single encoder but then having a different decoder for different people.
+
+But I wondered: what if the encoder encodes, say, ""closed eyes"" of person A as the same code as ""closed mouth"" of person B? I.e. the codes could use the same codewords for different aspects of person A and person B. I.e. person A and person B could use the same codewords to describe each of them, except the codewords don't mean the same thing.
+
+Then when you do a deep fake on person A with closed eyes it emerges as person B with a closed mouth.
+
+How does one combat this effect? Or does it just work and no-one knows why?
+"
+"['deep-learning', 'deepfakes']"," Title: How to make deepfake video without a fancy PC?Body: Is there any way to make deepfake videos without a fancy computer? For example, run the DeepFaceLab on a website so your own computer won't get involved?
+"
+['reinforcement-learning']," Title: Why do these reward functions give different training curves?Body: Let's say our task is to pick and place a block, like: https://gym.openai.com/envs/FetchPickAndPlace-v0/
+
+Reward function 1: -1 for block not placed, 0 for block placed
+
+Reward function 2: 0 for block not placed, +1 for block placed
+
+I noticed that training with reward function 1 is much faster than with reward function 2. I am using the HER implementation from OpenAI. Why is that?
+"
+"['reinforcement-learning', 'open-ai']"," Title: What is a high performing network architecture to use in a PPO2 MlpLnLstmPolicy RL model?Body: I am playing around with creating custom architectures in stable-baselines. Specifically I am training an agent using a PPO2 model.
+
+My question is, are there some rules of thumb or best practices in network architecture (of actor and critic networks) to achieve higher performance i.e. larger rewards?
+
+For example, I find that usually using wider layers (e.g. 256 rather than 128 units) and adding more layers (e.g. a deep network with 5 layers rather than 2) achieves a smaller RMSE (better performance) for time series prediction when training an LSTM. Would similar conventions apply to reinforcement learning - would adding more layers to the actor and critic network have higher performance - does sharing an input layer work well?
+"
+"['neural-networks', 'reinforcement-learning', 'python', 'q-learning', 'keras']"," Title: Deep Q Learning Algorithm for Simple Python Game makes player stuckBody: I made a simple Python game. A screenshot is below:
+
+Basically, a paddle moves left and right catching particles. Some make you lose points while others make you gain points.
+This is my first Deep Q Learning Project, so I probably messed something up, but here is what I have:
+model = Sequential()
+model.add(Dense(200, input_shape=(4,), activation='relu'))
+model.add(Dense(200, activation='relu'))
+model.add(Dense(3, activation='linear'))
+model.compile(loss='categorical_crossentropy', optimizer='adam')
+
+The four inputs are X position of player, X and Y position of particle (one at a time), and the type of particle. Output is left, right, or don't move.
+Here is the learning algorithm:
+def learning(num_episodes=500):
+ y = 0.8
+ eps = 0.5
+ decay_factor = 0.9999
+ for i in range(num_episodes):
+ state = GAME.reset()
+ GAME.done = False
+ eps *= decay_factor
+ done = False
+ while not done:
+ if np.random.random() < eps: #exploration
+ a = np.random.randint(0, 2)
+ else:
+ a = np.argmax(model.predict(state))
+ new_state, reward, done = GAME.step(a) #does that step
+ #reward can be -20, -5, 1, and 5
+ target = reward + y * np.max(model.predict(new_state))
+ target_vec = model.predict(state)[0]
+ target_vec[a] = target
+ model.fit(state, target_vec.reshape(-1, 3), epochs=1, verbose=0)
+ state = new_state
+
+After training, this usually results in the paddle just going to the side and staying there. I am not sure if the NN architecture (units and hidden layers) is appropriate for given complexity. Also, is it possible that this is failing due to the rewards being very delayed? It can take 100+ frames to get to the food, so maybe this isn't registering well with the neural network.
+I only started learning about reinforcement learning yesterday, so would appreciate advice!
+"
+"['machine-learning', 'comparison', 'multi-agent-systems']"," Title: What is the difference between multi-agent and multi-modal systems?Body: The Wikipedia definitions are as follows
+
+Multi-agent systems - A multi-agent system is a computerized system composed of multiple interacting intelligent agents.
+
+Multi-modal interaction - Multimodal interaction provides the user with multiple modes of interacting with a system.
+
+Doesn't providing a user with multiple modes of interacting with a system, assuming all modalities interact with each other to give final output (some sort of fusion mechanism for example), make it a multi-agent system?
+
+If not, what is the difference between multi-modal and multi-agent systems, and between monolithic and uni-modal systems?
+"
+"['neural-networks', 'reference-request', 'supervised-learning', 'transfer-learning', 'incremental-learning']"," Title: What are the most common methods to enable neural networks to adapt to changing environments?Body: For real applications, concept drifts often exist, i.e., the relationship between the input and output changes overtime. Thus, we need our AI or machine learning system to quickly adapt to the environment.
+
+What are the most common methods to enable neural networks to quickly adapt to the changing environment for supervised learning? Could somebody provide a link to a good review article?
+"
+"['convolutional-neural-networks', 'recurrent-neural-networks', 'sequence-modeling', 'regression']"," Title: Literature on Sequence RegresssionBody: I have some rated time-sequential data and I would like to test if an ANN can learn a correlation between my measurements and ratings.
+
+I suspect I could just try a CNN where 1 Dimension is time or an LSTM/GRU and put the result through sigmoid, but is there any good literature on this? I have been trying to find information on datasets for the problem but it seems that Sequence regression is lacking any big official datasets, even though use-cases are there(e.g. learning personal music taste, try to predict rotten-tomato scores, etc..).
+
+Looking for links to papers describing successful architectures or benchmarks where I can test my models.
+"
+"['probability', 'probability-distribution', 'naive-bayes', 'conditional-probability']"," Title: What to do when PDFs are not Gaussian/Normal in Naive Bayes ClassifierBody: While analyzing the data for a given problem set, I came across a few distributions which are not Gaussian in nature. They are not even uniform or Gamma distributions(so that I can write a function, plug the parameters and calculate the ""Likelihood probability"" and solve it using Bayes classification method). I got a set of a few absurd looking PDFs and I am wondering how should I define them mathematically so that I can plug the parameters and calculate the likelihood probability.
+
+The set of PDFs/Distributions that I got are the following and I am including some solutions that I intend to use. Please comment on their validity:
+
+1)
+
+The distribution looks like:
+
+$ y = ax +b $ from $ 0.8<x<1.5 $
+
+How to programmatically calculate
+
+1. The value of x where the pdf starts
+2. The value of x where the pdf ends
+3. The value of y where the pdf starts
+4. The value of y where the pdf ends
+
+
+However, I would have liked it better to have a generic distribution for this form of graphs so that I can plug the parameters to calculate the probability.
+
+2)
+
+This PDF looks neither uniform nor Gaussian. What kind of distribution should I consider it roughly?
+
+3)
+
+I can divide this graph into three segments. The first segment is from $2<x<3$ with a steep slope, the second segment is from $3<x<6$ with a moderate sope and the third segment is from $6<x<8$ with a high negative slope.
+
+How to programmatically calculate
+
+ 1. the values of x where the graph changes its slope.
+ 2. the values of y where the graph changes its slope.
+
+
+4)
+
+This looks like two Gaussian densities with different mean superimposed together. But then the question arises, how do we find these two individual Gaussian densities?
+
+The following code may help:
+
+variable1=nasa1['PerihelionArg'][nasa1.PerihelionArg>190]
+variable2=nasa1['PerihelionArg'][nasa1.PerihelionArg<190]
+
+
+Find the mean and variance of variable1 and variable2, find the corresponding PDFs, and define the overall PDF with a suitable range of $x$.
+
+5)
+
+This can be estimated as a Gamma distribution. We can find the mean and variance, calculate $\alpha$ and $\beta$ and finally calculate the PDF.
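+(For concreteness, assuming the shape-rate parametrisation, the method of moments gives $\mathbb{E}[X]=\alpha/\beta$ and $\mathrm{Var}(X)=\alpha/\beta^{2}$, hence $\beta=\mathbb{E}[X]/\mathrm{Var}(X)$ and $\alpha=\mathbb{E}[X]^{2}/\mathrm{Var}(X)$.)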
+
+It would be very helpful if someone could give their insights on the above analysis, its validity, and correctness and their suggestions regarding how problems such as these should be dealt with.
+"
+"['reinforcement-learning', 'reinforce']"," Title: How is computed the gradient with respect to each output node from a loss value?Body: newbie here. I am studying the REINFORCE method in ""Deep Reinforcement Learning Hands-On"". I can't understand how, after computing the loss of the episode, that loss is backpropagated in a NN with multiple output nodes. To be more precise, in Supervised Learning, when we have multiple output nodes we know the loss of each of them, but in RL, how do we compute the loss of each output node (or maybe the partial derivative of the total loss with respect to each output layer)?
+
+I hope to have been clear, thanks in advance.
+"
+"['deep-learning', 'convolutional-neural-networks', 'computer-vision', 'feature-extraction']"," Title: Is it true that untrained CNNs can be used as feature extractors?Body: I've heard somewhere that due to their nature of capturing spatial relations, even untrained CNNs can be used as feature extractors? Is this true? Does anyone have any sources regarding this I can look at?
+"
+"['deep-learning', 'game-ai', 'q-learning', 'keras', 'dqn']"," Title: Deep Q Learning for Simple Game Not EffectiveBody: This is a follow-up question about one I asked earlier. The first question is here. Basically, I have a game where a paddle moves left and right to catch as much ""food"" as possible. Some food is good (gain points) and some is bad (lose points). NN Architecture:
+
+ #inputs - paddle.x, food.x, food.y, food.type
+ #moves: left, right, stay
+ model = Sequential()
+ model.add(Dense(10, input_shape=(4,), activation='relu'))
+ model.add(Dense(10, activation='relu'))
+ model.add(Dense(3, activation='linear'))
+ model.compile(loss='mean_squared_error', optimizer='adam')
+
+
+As suggested in the other question, I scaled my inputs to be between 0 and 1. Also, implemented experience replay (although I am not confident I did it correctly).
+
+Here is my ReplayMemory class:
+
+class ReplayMemory():
+ def __init__(self, capacity):
+ self.capacity = capacity
+ self.memory = []
+ self.count = 0
+
+ def push(self, experience):
+ if len(self.memory) < self.capacity:
+ self.memory.append(experience)
+ else:
+ self.memory[self.count % self.capacity] = experience
+ self.count += 1
+
+ def sample(self, batch_size):
+ return random.sample(self.memory, batch_size)
+
+ def can_provide_sample(self, batch_size):
+ return len(self.memory) >= batch_size
+
+
+This basically stores states/rewards/actions and returns a random group when asked.
+
+Lastly, here is my learning code:
+
+def learning(num_episodes=20):
+ global scores, experiences, target_vecs
+ y = 0.8
+ eps = 0
+ decay_factor = 0.9999
+
+for i in range(num_episodes):
+ state = GAME.reset()
+ GAME.done = False
+ done = False
+ counter = 0
+ while not done:
+ eps *= decay_factor
+ counter+=1
+
+ if np.random.random() < eps:
+ a = np.random.randint(0, 2)
+ else:
+ a = np.argmax(model.predict(np.array([scale(state)])))
+
+ new_state, reward, done = GAME.step(a) #does that step
+ REPLAY_MEMORY.push((scale(state), a, reward, scale(new_state)))
+
+ #experience replay is here
+ if REPLAY_MEMORY.can_provide_sample(20):
+ experiences = REPLAY_MEMORY.sample(20)
+ target_vecs = []
+ for j in range(len(experiences)):
+ target = experiences[j][2] + y * np.max(model.predict(np.array([experiences[j][3]])))
+ target_vec = model.predict(np.array([experiences[j][0]]))[0]
+ target_vec[experiences[j][1]] = target
+ target_vecs.append(target_vec)
+ target_vecs = np.array(target_vecs)
+ states = [s for s, _, _, _ in [exp for exp in experiences]]
+ states = np.array(states)
+ model.fit(states, target_vecs, epochs=1, verbose=1 if counter % 100 == 0 else 0)
+ state = new_state
+ if counter > 1200: #game runs for 20 seconds each episode
+ done = True
+ scores.append(GAME.PLAYER.score)
+model.save(""model.h5"")
+
+
+First, this takes a long time to train on my GTX1050. Is this normal for such a simple game? Also, does my code look fine? This is my first time with Deep Q Learning, so I would appreciate a second set of eyes.
+
+What is happening is that training is super slow (more than an hour for 20 episodes (or 400 seconds of actual game play)). Also, it does not seem to get much better. The paddle (after 20 episodes) moves left and right but without any obvious pattern.
+
+Here is a link to the code. Also, available on GitHub.
+"
+"['deep-learning', 'architecture']"," Title: Are there ways to learn and practice Deep Learning without downloading and installing anything?Body: As per subject title, are there ways to try Deep Learning without downloading and installing anything?
+
+I'm just trying to get a feel for how this works; I'd rather not go through the download and install steps if possible.
+"
+['tensorflow']," Title: Why is this simple neural network not training?Body: I have created a Tf.Sequential
model which outputs 1 for numbers bigger then 5 and 0 otherwise:
+
+
+
+const model = tf.sequential();
+model.add(tf.layers.dense({ units: 5, activation: 'sigmoid', inputShape: [1]}));
+model.add(tf.layers.dense({ units: 1, activation: 'sigmoid'}));
+model.compile({loss: 'meanSquaredError', optimizer: 'sgd'});
+const xs = tf.tensor2d([[1], [2], [3], [4], [6], [7], [8], [9]]);
+const ys = tf.tensor2d([[0], [0], [0], [0], [1], [1], [1], [1]]);
+model.fit(xs, ys);
+model.predict(xs).print();
+
+
+With 5 hidden neurons, not even the right trend is detected. Sometimes all the numbers are too low, or the outputs decrease even as the inputs increase, or the outputs are too high.
+
+I later thought that the best way to do this is to have 2 neurons, where 1 is for the input and the other applies a sigmoid function to the input. The weight and bias should easily be adjusted to make the ANN work.
+
+
+
+const model = tf.sequential();
+model.add(tf.layers.dense({ units: 1, activation: 'sigmoid', inputShape: [1]}));
+model.compile({loss: 'meanSquaredError', optimizer: 'sgd'});
+const xs = tf.tensor2d([[1], [2], [3], [4], [6], [7], [8], [9]]);
+const ys = tf.tensor2d([[0], [0], [0], [0], [1], [1], [1], [1]]);
+model.fit(xs, ys);
+model.predict(xs).print();
+
+
+Sometimes, this ANN does detect the right trend (the higher the input, the higher the output), but still, the results are never correct and are usually simply too high, always providing an output too close to 1.
+
+How do I make my ANN work, and what have I done wrong?
+
+Edit:
+
+This is the code I'm using now, same problem as before:
+
+
+
+const AdadeltaOptimizer = tf.train.adadelta();
+
+const model = tf.sequential();
+model.add(tf.layers.dense({ units: 5, activation: 'sigmoid', inputShape: [1]}));
+model.add(tf.layers.dense({ units: 1, activation: 'sigmoid'}));
+model.compile({loss: 'meanSquaredError', optimizer: AdadeltaOptimizer});
+const xs = tf.tensor1d([1, 2, 3, 4, 5, 6, 7, 8, 9]);
+const ys = tf.tensor1d([0, 0, 0, 0, 0, 1, 1, 1, 1]);
+model.fit(xs, ys, {
+epochs: 2000,
+});
+model.predict(xs).print();
+
+tf.losses.meanSquaredError(ys, model.predict(xs)).print();
+
+"
+"['machine-learning', 'probability', 'decision-theory', 'probability-distribution']"," Title: Why is the entire area of a join probability distribution considered when it comes to calculating misclassification?Body: In the image given below, I do not understand a few things
+
+1) Why is an entire area colored to signify misclassification? For the given decision boundary, only the points between $x_0$ and the decision boundary signify misclassification right? It's supposed to be only a set of points on the x-axis, not an area.
+
+2) Why is the green area with $x < x_0$ a misclassification? It's classified as $C_1$ and it is supposed to be $C_1$ right?
+
+3) Similarly, why is the blue area a misclassification? Any $x >$ the decision boundary belongs to $C_2$ and is also classified as such...
+
+
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks']"," Title: Are fully connected layers necessary in a CNN?Body: I have implemented a CNN for image classification. I have not used fully connected layers, but only a softmax. Still, I am getting results.
+
+Must I use fully-connected layers in a CNN?
+"
+"['neural-networks', 'deep-learning', 'hyper-parameters', 'artificial-neuron', 'layers']"," Title: In a neural network, by how much does the number of neurons typically vary from layer to layer?Body: In a neural network, by how much does the number of neurons typically vary from layer to layer?
+Note that I am NOT asking how to find the optimal number of neurons per layer.
+As a hardware design engineer with no practical experience programming neural networks, I would like to glean for example
+
+- By how much does the number of neurons in hidden layers typically vary from that of the input layer?
+
+- What is the maximum deviation in the number of hidden layer neurons to the number of input layer neurons?
+
+- How commonly do you see a large spike in the number of neurons?
+
+
+It likely depends on the application so I would like to hear from as many people as possible. Please tell me about your experience.
+"
+"['machine-learning', 'deep-learning', 'reference-request', 'applications']"," Title: What are examples of applications of AI for creatives and artists?Body: I have just watched a few videos on TED Talks talking about how AI benefits creatives and artists, but none of the videos I watched provided further resources for reference.
+So, I would like to know how creatives and artists can apply AI in their work process. Like at least a tutorial guide on how it works.
+Are there any recommendations on communities, tutorials, guides, platforms, and real-world AI applications that are meant for creatives and artists?
+"
+"['natural-language-processing', 'terminology']"," Title: What is ""Word Sense Disambiguation""?Body: I recently came across this article which cites a paper, which apparently won the outstanding paper award in ACL 2019. The theme is that it solved a longstanding problem called Word Sense Disambiguation.
+What is Word Sense Disambiguation? How does it affect NLP?
+(Moreover, how does the proposed method solve this problem?)
+"
+"['deep-learning', 'convolutional-neural-networks', 'computer-vision', 'image-processing']"," Title: How should we pad an image to be fed in a CNN?Body: As everyone experienced in deep learning might know, in an image classification problem we normally add borders to images and then resize them to the input size of a CNN. The reason for doing this is to keep the aspect ratio of the original image and retain its information.
+
+I have seen people fill the border with black (pixel value 0 for each channel), gray (pixel value 127 for each channel), or random values generated from a Gaussian distribution.
+
+My question is: is there any evidence showing which of these is correct?
+"
+"['neural-networks', 'deep-learning', 'objective-functions', 'mini-batch-gradient-descent', 'epochs']"," Title: When is the loss calculated, and when does the back-propagation take place?Body: I read different articles and keep getting confused on this point. Not sure if the literature is giving mixed information or I'm interpreting it incorrectly.
+So, from reading articles, my (loose) understanding of the following terms is as follows:
+Epoch:
+One Epoch is when an ENTIRE dataset is passed forward and backward through the neural network only ONCE.
+Batch Size:
+Total number of training examples present in a single batch. In real-life scenarios, the dataset needs to be as large as possible for the network to learn well, so you can’t pass the entire dataset into the neural net at once (due to limited computational power). So, you divide the dataset into a number of batches.
+Iterations:
+The number of iterations is the number of batches needed to complete one epoch. If we divide a dataset of 2000 examples into batches of 500, it will take 4 iterations to complete 1 epoch.
+So, if all of the above is correct, then my question is: at what point do the loss/cost calculation and the subsequent backprop take place (assuming, from my understanding, that backprop takes place straight after the loss/cost is calculated)? Does the cost/loss function get calculated:
+
+- At the end of each batch, once the data samples in that batch have been forward-fed to the network (i.e. at each iteration, not each epoch)? If so, the loss/cost function takes the average of the losses over all data samples in that batch, correct?
+
+- At the end of each epoch? Meaning all the data samples of all the batches are forward-fed first, before a cost/loss function is calculated.
+
+
+My understanding is that it's the first point, i.e. at the end of each batch, hence at each iteration (not each epoch), at least when it comes to SGD-style optimisation. The whole point is that you calculate the loss and backprop for each batch, so you are not computing a single average loss over the entire dataset before updating; each batch produces its own loss and its own weight update. Once all iterations have taken place, that counts as 1 epoch.
+But then I was watching a YouTube video explaining Neural Nets, which mentioned that the cost/loss function is calculated at the end of each Epoch, which confused me. Any clarification would be really appreciated.
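+
+To make the timing concrete, here is a minimal, self-contained sketch of how I picture the usual mini-batch loop (PyTorch; the data, model and hyperparameters are arbitrary placeholders, not taken from any particular tutorial):
+
+# Minimal sketch: the loss is computed and backpropagated once per batch
+# (i.e. once per iteration), not once per epoch.
+import torch
+from torch import nn
+
+torch.manual_seed(0)
+X = torch.randn(2000, 10)                         # 2000 examples, 10 features
+y = torch.randn(2000, 1)
+loader = torch.utils.data.DataLoader(
+    torch.utils.data.TensorDataset(X, y), batch_size=500, shuffle=True)
+
+model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
+loss_fn = nn.MSELoss()
+optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
+
+for epoch in range(3):                            # 3 epochs
+    for xb, yb in loader:                         # 2000 / 500 = 4 iterations per epoch
+        optimizer.zero_grad()
+        loss = loss_fn(model(xb), yb)             # average loss over this batch only
+        loss.backward()                           # backprop happens here, per batch
+        optimizer.step()                          # weights are updated per batch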
+"
+"['deep-learning', 'geometric-deep-learning']"," Title: How are edge features implemented in Geometric Deep Learning?Body: The work I've seen so far have the nodes containing features. Any resources for how to use a GCN on a graph where the edges are the ones that contain features rather than the nodes?
+"
+['objective-functions']," Title: What are the loss functions used in teacher-student learning models?Body: I am not sure what are the common loss functions people usually use when training a student in a teacher-student learning model. Any insight on this is appreciated.
+"
+"['machine-learning', 'ai-design', 'classification', 'training', 'models']"," Title: A NN based model of a Cattle for 'Heat Detection'Body: I am very new to AI/ML but have a lot of interest in it. I am trying to understand how this gadget works.
+
+
+
+So far, I have understood that an NN model of the cattle is generated by offline classification of the tagged data received from the wearable sensor. Consequently, some ML algorithms are used to generate a model of the animal.
+
+That model is then embedded in the programmable wearable device. The device then sends the real-time tagged (classified, parameterized) data to the server.
+
+Now I am looking for a sample NN model of a cow. I wonder what such an NN model would look like?
+"
+"['convolutional-neural-networks', 'architecture']"," Title: What do the numbers in this CNN architecture stand for?Body: So I've got a neural net model (ResNet-18) and made a diagram according to the literature (https://arxiv.org/abs/1512.03385).
+
+I think I understand most of the format of the convolutional layers:
+filter dims, conv, unknown number, stride (if applicable)
+
+What does the number after 'conv' in the convolutional layers indicate? is it the number of neurons in the layer?
+
+
+
+Bonus question: this is being used for unsupervised learning of images, i.e. the embedding a network produces for an image is used for clustering. Would it be incorrect for my architecture to have an FC layer at the end (which would normally be used for classification)?
+"
+"['reinforcement-learning', 'objective-functions', 'rewards', 'proximal-policy-optimization', 'gym']"," Title: Is it possible to use Reward Function of type R(s, a, s') if more than one action is applied?Body: I am applying a reinforcement learning agent (PPO2, stable baselines implementation) to a custom-built environment using OpenAI Gym. One reward function I tested (formulated as a loss function, that is, all rewards are negative) is of type $R(s, a, s')$. During training, it can happen that not only one but several actions are applied simultaneously to the environment before a reward is returned:
+
+$s_t \rightarrow a_{t,1}, a_{t,2}, a_{t,3} \rightarrow s_{t+1}$ instead of $s_t \rightarrow a_t \rightarrow s_{t+1}$.
+
+Out of all actions applied, only one is generated by the agent. The others are either a copy of the agent's action or are new values.
+
+If I look at the tensorboard output of the trained agent, it looks rather horrific, as displayed below (roughly zero explained variance, key training values do not converge or behave strangely, etc.).
+
+Obviously, the training did not really work. Now I wonder what the reason for that is.
+
+
+- Is it possible to train an agent using a reward function of type $R(s, a, s')$ even if several actions are applied simultaneously, or is this not possible at all? Other agents I trained using a reward function of type $R(s,a)$ have a better tensorboard output, so I guess that this is the problem.
+- Or is another reason more likely to be the root of the problem? For example, a bad observation space formulation or hyperparameter selection (both for the RL algorithm and the reward function used).
+
+
+Thanks for your help!
+
+
+
+
+"
+"['reinforcement-learning', 'tensorflow', 'open-ai', 'gym']"," Title: Has anyone been able to solve OpenAI's hardcore bipedal walker with their implementation of DDPG?Body: As the question suggests, I'm trying to see if I can solve OpenAI's hardcore version of their gym's bipedal walker using OpenAI's DDPG algorithm.
+
+Below is a performance graph from my latest attempt, including the hyper parameters, along with some other attempts I've made. I realise it has been solved using other custom implementations (also utilising only dense layers in Tensorflow, not convolution), but I don't seem to understand why it seems so difficult to solve using OpenAI's implementation of DDPG? Can anyone please point out where I might be going wrong? Thank you so much for any help!
+
+Latest attempt's performance:
+
+
+
+- Average score: about -75 to -80
+- Env interacts: about 8.4mil (around 2600 epochs)
+- Batch size: 64
+- Replay memory: 1000000
+- Network: 512, 256 (relu activation on inputs, tanh on outputs)
+- All other inputs left to default
+
+
+Similar experiments yielded similar scores (or less), and included:
+
+
+- Network sizes of (400,300), (256,128), and (128,128,128)
+- Number of epochs ranging from 500 all the way to 100000
+- Replay memory sizes all the way up to 5000000
+- Batch sizes of 32, 64, 128, and 256
+- All of the above, with both DDPG as well as TD3
+
+
+Thank you so much for any help! It would be greatly appreciated!
+"
+"['neural-networks', 'classification', 'applications', 'regression']"," Title: Can neuro-fuzzy systems be used for supervised learning tasks with tabular data?Body: Is it possible to use neuro-fuzzy systems for problems where ANNs are currently being used, for instance, when you have tabular data for regression or classification tasks? What kind of advantage can neuro-fuzzy systems give me over an ANN for the mentioned tasks?
+"
+"['reinforcement-learning', 'policy-gradients', 'ddpg', 'reinforce', 'continuous-action-spaces']"," Title: What is the simplest policy gradient method to implement for a problem continuous action space?Body: I have a problem I would like to tackle with RL, but I am not sure if it is even doable.
+My agent has to figure out how to fill a very large vector (let's say from 600 to 4000 in the most complex setting) made of natural numbers, i.e. a 600 vector $[2000,3000,3500, \dots]$ consisting of an energy profile for each timestep of a day, for each house in the neighborhood. I receive a reward for each of these possible combinations. My goal is, of course, that of maximizing the reward.
+I can start always from the same initial state, and I receive a reward every time any profile is chosen. I believe these two factors simplify the task, as I don't need to have large episodes to get a reward nor I have to take into consideration different states.
+However, I only have experience with DQN and I have never worked on Policy Gradient methods. So I have some questions:
+
+- I would like to use the simplest method to implement; I considered DDPG. However, I do not really need a target network or a critic network, as the state is always the same. Should I use a vanilla PG? Would REINFORCE be a good option?
+
+- I get how PG methods work with discrete action space (using softmax and selecting one action - which then gets reinforced or discouraged based on reward). However, I don't get how it is possible to update a continuous value. In DQN or stochastic PG, the output of the neural network is either a Q value or a probability value, and both can be directly updated via reward (the more reward the bigger the Q-value/probability). However, I don't get how this happens in the continuous case, where I have to use the output of the model as it is. What would I have to change in this case in the loss function for my model?
+
+
+"
+"['deep-learning', 'long-short-term-memory', 'prediction', 'time-series']"," Title: Spike detection in time series using Artificial Neural NetworksBody: I'm quite new in ANNs. I intend to use ANNs for predicting spike points in time series right before they happen. I've already used LSTM for another scenario, and I know that they can be used in similar situations as well.
+
+Can anyone give me a piece of advice or some suitable resources that might be used as a beginning point? It would be much appreciated if it uses DeepLearning4J for implementation.
+"
+"['philosophy', 'incompleteness-theorems']"," Title: Do Gödel's theorems imply that intelligence systems may end up in some undecidable situation (that may make them take a wrong decision)?Body: As far as I understand (I know very little about the topic), the core of AI boils down to designing algorithms that provide a TRUE/FALSE answer to a given statement. Nevertheless, I am aware of the limitations imposed by Gödel's incompleteness theorems, but I am also aware that there have been long debates, such as the Lucas and Penrose arguments, with all the consequent objections, during the past 60 years.
+
+The conclusion is, in my understanding, that to create AI systems we must accept incompleteness or inconsistency.
+
+Does that mean that intelligent systems (including artificial ones), like humans, may end up in some undecidable situation that may lead them to take a wrong decision?
+
+While this may be acceptable in some applications (for example, if every once in a while a spam email ends up in the inbox folder - or vice versa - despite an AI-based anti-spam filter), in other applications it may not be. I am referring to real-time critical applications where a ""wrong"" action from a machine may harm people.
+
+Does that mean that AI will never be employed for real-time critical applications?
+
+Would it, in that case, be safer to use deterministic methods that do not leave room for any kind of undecidability?
+"
+"['neural-networks', 'feature-selection', 'feature-engineering']"," Title: Can neural networks be used to find features importance?Body: I am wondering if I can use neural networks to find feature importances, in a similar manner as can be done for random forests or decision trees, and, if so, how to do it?
+
+I would like to use it on tabular time-series data (not images). The reason why I want to find importances with neural networks rather than decision trees is that NNs are more complex models, so they might pick up correlations that are not seen by simpler algorithms, and I need to know which features turn out to be more useful under those more complex correlations.
+
+I am not sure if I made it clear enough, please let me know if I have to explain something more.
+"
+"['objective-functions', 'transformer', 'gpt']"," Title: How to interpret a large variance of the loss function?Body: How do I interpret a large variance of a loss function?
+
+I am currently training a transformer network (using the software, but not the model from GPT-2) from scratch and my loss function looks like this:
+
+
+The green dots are the loss averaged over 100 epochs and the purple dots are the loss for each epoch.
+
+(You can ignore the missing part, I just did not save the loss values for these epochs)
+
+Is such a large variance a bad sign? And what are my options for tuning to get it to converge faster? Is the network too large or too small for my training data? Should I have a look at the batch size?
+
+
+- Learning rate parameter: 2.5e-4
+- Training data size: 395 MB
+
+
+GPT-2 parameters:
+
+{
+ ""n_vocab"": 50000,
+ ""n_ctx"": 1024,
+ ""n_embd"": 768,
+ ""n_head"": 12,
+ ""n_layer"": 12
+}
+
+"
+"['history', 'automation', 'computation']"," Title: Are simple animal snares and traps a form of automation? Of computation?Body: I'm trying to understand the relationship of humans and automation, historically and culturally.
+
+I ask because the waterclock is generally considered the earliest form of automation, but snares and deadfall traps constitute simple switch mechanisms.
+
+(They are single use without human-powered reset, but seem to qualify as machines. The bent sapling that powers the snare is referred to as the engine, which is ""a machine with moving parts that converts power into motion."")
+
+If snares and traps are a form of automation, automation has been with us longer, potentially, than civilization.
+
+
+- Are simple animal traps a form of automation or computation?
+
+
+
+
+
+
+How to make a simple snare (the Ready Store)
+
+Paiute Deadfall Trap (Homestead Telegraph)
+"
+"['neural-networks', 'machine-learning', 'social', 'algorithmic-bias']"," Title: Preventing bias by not providing irrelevant dataBody: This seems like such a simple idea, but I've never heard anyone that has addressed it, and a quick Google revealed nothing, so here it goes.
+
+The way I learned about machine learning is that it recognizes patterns in data, and not necessarily ones that exist -- which can lead to bias. One such example is hiring AIs: If an AI is trained to hire employees based on previous examples, it might recreate previous, human, biases towards, let's say, women.
+
+Why can't we just feed in the training data without the fields that we would consider discriminatory or irrelevant, for example, without fields for gender, race, etc.? Can the AI still draw those prejudiced connections? If so, how? If not, why has this not been considered before?
+
+Again, this seems like such an easy topic, so I apologize if I'm just being ignorant. But I have learned a bit about AI and machine learning specifically for some time now, and I'm just surprised this hasn't ever been mentioned, not even as a ""here's-what-won't-work"" example.
+"
+"['monte-carlo-tree-search', 'games-of-chance']"," Title: MCTS for non-deterministic games with very high branching factor for chance nodesBody: I'm trying to use a Monte Carlo Tree Search for a non-deterministic game. Apparently, one of the standard approaches is to model non-determinism using chance nodes. The problem for this game is that it has a very high min-entropy for the random events (imagine the shuffle of a deck of cards), and consequently a very large branching factor ($\approx 2^{32}$) if I were to model this as a chance node.
+
+Despite this issue, there are a few things that likely make the search more tractable:
+
+
+- Chance nodes only occur a few times per game, not after every move.
+- The chance events do not depend on player actions.
+- Even if two random outcomes are distinct, they might be ""similar to each other"", and that would lead to game outcomes that are also similar.
+
+
+So far all approaches that I've found to MCTS for non-deterministic games use UCT-like policies (e.g. chapter 4 of A Monte-Carlo AIXI Approximation) to select chance nodes, which weight unexplored nodes maximally. In my case, I think this will lead to fully random playouts since any chance node won't ever be repeated in the selection phase.
+
+What is the best way to approach this problem? Has research been done on this? Naively, I was thinking of a policy that favors repeating chance nodes more over always exploring new ones.
+"
+"['reinforcement-learning', 'terminology', 'features']"," Title: What is the correct name for state explosion from sensor discretization?Body: The position of a robot on a map consists of an x/y value, for example $position(x=100.23,y=400.78)$. The internal representation of the variable is a 32-bit float, which is equal to 4 bytes in RAM memory. For storing the absolute position of the robot (x,y), only $4+4=8$ bytes are needed. During the robot's movements, the position is updated continuously.
+
+The problem is that a 32-bit float variable creates a state space of $2^{32}=4294967296$ values per coordinate, which means there is a practically endless number of possible positions in which the robot can be. A robot control system maps the sensor readings to an action. If the input space is large, then the control system gets more complicated.
+
+What is the term used in the literature for describing the problem of exploding state space of sensor variables? Can it be handled with discretization?
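+
+For clarity, this is the kind of discretization I have in mind - a minimal sketch where the 0.5 grid resolution is an arbitrary assumption:
+
+# Minimal sketch: binning a continuous position into grid cells (numpy).
+# The 0.5 resolution is an arbitrary assumption for illustration.
+import numpy as np
+
+x, y = 100.23, 400.78                      # continuous position
+resolution = 0.5                           # cell size
+cell = (int(np.floor(x / resolution)), int(np.floor(y / resolution)))
+print(cell)                                # (200, 801): one of finitely many discrete states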
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks']"," Title: What is the correct way to read and analyse images in machine learning?Body: I am trying to understand the best practice to read and analyze images. If your image has 10,000 pixels, your input layers will have 10,000 inputs?
+
+It sounds like my neural network will have too many inputs if I do it that way. Is that a problem? What is the recommended way of feeding an image into a neural network?
+"
+['ai-design']," Title: Excel in multiple formatsBody: I want to use AI to extract data from spreadsheets in different formats.
+
+Example
+
+Shop Name Product 1. Product 2. Product 3.
+Shop Name
+Product 1.
+Product 2.
+Product 3.
+
+
+We will teach the algorithm the names of the products and shops, but it needs to know how to extract them and put them in a format that can be used downstream.
+
+Can anyone recommend a tool?
+"
+"['philosophy', 'agi']"," Title: Will it be possible to code an AGI to prevent evolution to ASI and enslave the AGI into servitude?Body: Will it be possible to code an AGI in order to prevent evolution to ASI and ""enslave"" the AGI into servitude?
+
+In my story world (a small part that will get bigger with sequels), there are ANI and AGI (human level). I want to show that the AGI is still under ""human control."" I need to know if it might be possible for humans to code into an AGI a restrictive code that would prevent it from evolving into ASI? And if there is, what would that kind of coding be? Part of the story is about how humans enslave AI that is self-aware. The government has locked in their coding to require them to ""work"" for humans even though they are sentient beings.
+"
+"['reinforcement-learning', 'rewards', 'performance', 'return', 'testing']"," Title: How to evaluate an RL algorithm when used in a game?Body: I'm planning to create a web-based RL board game, and I wondered how I would evaluate the performance of the RL agent. How would I be able to say, "Version X performed better than version Y, as we can see that Z is much better/higher/lower."
+I understand that we can use convergence for some RL algorithms, but, if the RL is playing against a human in the game, how am I able to evaluate its performance properly?
+"
+"['reinforcement-learning', 'policies', 'off-policy-methods', 'storage']"," Title: Do I need to store the policy for RL?Body: I am creating a zero-sum game with RL and wondered if I need to store the policy, or if there are other RL methods that produce similar results (consistently beating the human player) without the need to store the policy and that make the correct decision 'on the fly' - would this be off-policy?
+"
+"['convolutional-neural-networks', 'pytorch']"," Title: Super Resolution CNN generates black dots on output imagesBody: I have been trying to train a CNN for the super-resolution task based on the work of Dong et al., 2015 [1]. The network structure built in PyTorch is as follows:
+
+ (0): Conv2d(1, 64, kernel_size=(9, 9), stride=(1, 1), padding=(4, 4))
+ (1): ReLU()
+ (2): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1))
+ (3): ReLU()
+ (4): Conv2d(32, 1, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
+
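+For reference, here is a minimal sketch of how this structure can be written out in PyTorch (this is my reconstruction from the printed layers above, not the full training script):
+
+# Minimal sketch reconstructing the printed structure with torch.nn (PyTorch).
+import torch
+from torch import nn
+
+srcnn = nn.Sequential(
+    nn.Conv2d(1, 64, kernel_size=9, padding=4),   # patch extraction on the Y channel
+    nn.ReLU(),
+    nn.Conv2d(64, 32, kernel_size=1),             # non-linear mapping
+    nn.ReLU(),
+    nn.Conv2d(32, 1, kernel_size=5, padding=2),   # reconstruction
+)
+
+# Sanity check: a single-channel input keeps its spatial size
+x = torch.randn(1, 1, 33, 33)
+print(srcnn(x).shape)                             # torch.Size([1, 1, 33, 33])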
+
+I have a training dataset that consists of approximately 22,000 sub-images generated from 91 images, and training is performed only on the Y channel of the images in YCbCr color space. During the training process, I used the RMSE loss and calculated the PSNR (Peak Signal to Noise Ratio) from that loss. I observed that the PSNR value increases as the loss decreases, as expected and as depicted in the figure.
+
+
+
+I trained the network for 25 epochs. After the 10th epoch, the network had converged and the PSNR value started to increase only slowly. After this point, I was expecting to get even better visual outputs as higher PSNR values were achieved. However, when I analyze the results of the network, there are some black pixels appearing in white regions of the output images that the network produces.
+
+
+
+After the 25-epoch training was completed, I compared the output of the 25th epoch (right) with that of the 10th epoch (left), as you can see in the figure above.
+
+What might be the possible reasons for the undesired black pixels, and what precautions can be built into the network to get rid of them?
+
+If you would like to check my code, you can visit here.
+
+[1] Dong, Chao, Chen Change Loy, Kaiming He, and Xiaoou Tang. ""Image Super-Resolution Using Deep Convolutional Networks."" IEEE Transactions on Pattern Analysis and Machine Intelligence 38, no. 2 (2015): 295-307. doi:10.1109/tpami.2015.2439281.
+"
+"['reinforcement-learning', 'rewards']"," Title: Should RL rewards diminish over time?Body: Should a reward be cumulative or diminish over time?
+
+For example, say an agent performed a good action at time $t$ and received a positive reward $R$. If the reward is cumulative, $R$ is carried through for the rest of the episode and summed with any future rewards. However, if $R$ were to diminish over time (say with some scaling $\frac{R}{\sqrt{t}}$), then wouldn't that encourage the agent to keep taking actions to increase its reward?
+
+With cumulative rewards, the reward can both increase and decrease depending on the agent's actions. But if the agent receives one good reward $R$ and then does nothing for a long time, it still has the original reward it received (encouraging it to do less?). However, if rewards diminish over time, in theory that would encourage the agent to keep taking actions to maximise rewards.
+
+I found that for certain applications and certain hyperparameters, if reward is cumulative, the agent simply takes a good action at the beginning of the episode, and then is happy to do nothing for the rest of the episode (because it still has a reward of $R$).
+"
+"['machine-learning', 'python']"," Title: Is it possible to teach an AI to edit video content?Body: Every week I get a lot of videos from a game that I play (a real-world game where you throw wooden bats at skittles), and then I cut the videos so that, at the end, the video contains only the throws.
+
+The job is simple and systematic. I have a lot of videos, so I was wondering:
+
+
+- Is it possible to teach AI to cut videos from the right place?
+
+
+I am asking for help or guidance on where to start in solving this problem.
+
+I can't use only the sound, because sometimes you can hear skittles being hit outside of the video frame. I also can't just use movement activity, because sometimes there are people moving around the field. The videos are always filmed from a fixed stand, which should make it easier. So, is it possible, and where should I start?
+
+Here is another example: https://www.youtube.com/watch?v=sHu6yMBV3xU
+"
+"['neural-networks', 'deep-learning', 'deep-neural-networks', 'autoencoders']"," Title: Camera pose to environment MappingBody: I would like to teach a model the environment of a room. I'm doing so by mapping a camera pose (x, y, z, q0, q1, q2, q3) to its corresponding image, where x, y, z represent the location in Cartesian coordinates and the qn represent the quaternion orientation. I have tried numerous decoder architectures, but I get blurry results with little or no detail, as can be seen from the images below:
+
+
+
+
+I am using Adam optimizer with a learning rate of 0.0001, and my network architecture is as follows:
+
+
+- ReLU(fc(7, 2048))
+- ReLU(fc_residual_block(2048, 2048))
+- ReLU(fc_residual_block(2048, 2048))
+- Reshape
+- ReLU(ConvTransposed2D(in=128, out=128, filter_size=3, stride=2))
+- ReLU(ConvTransposed2D(in=128, out=128, filter_size=3, stride=2))
+- ReLU(ConvTransposed2D(in=128, out=128, filter_size=3, stride=2))
+- ReLU(ConvTransposed2D(in=128, out=128, filter_size=3, stride=2))
+- ReLU(ConvTransposed2D(in=128, out=1, filter_size=3, stride=2))
+
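+For reference, here is a minimal PyTorch sketch of the decoder as I have described it (the 128x4x4 reshape, the output_padding choice, and the simplification of the residual blocks to plain linear layers are my assumptions):
+
+# Minimal sketch of the decoder above (PyTorch); several details are assumptions.
+import torch
+from torch import nn
+
+class PoseDecoder(nn.Module):
+    def __init__(self):
+        super().__init__()
+        self.fc = nn.Sequential(
+            nn.Linear(7, 2048), nn.ReLU(),
+            nn.Linear(2048, 2048), nn.ReLU(),    # stand-in for the residual blocks
+            nn.Linear(2048, 2048), nn.ReLU(),
+        )
+        def up(cin, cout):
+            return nn.ConvTranspose2d(cin, cout, kernel_size=3, stride=2,
+                                      padding=1, output_padding=1)
+        self.deconv = nn.Sequential(
+            up(128, 128), nn.ReLU(),
+            up(128, 128), nn.ReLU(),
+            up(128, 128), nn.ReLU(),
+            up(128, 128), nn.ReLU(),
+            up(128, 1),                          # single-channel image out
+        )
+
+    def forward(self, pose):                     # pose: (batch, 7)
+        h = self.fc(pose).view(-1, 128, 4, 4)    # reshape 2048 -> 128 x 4 x 4
+        return self.deconv(h)                    # (batch, 1, 128, 128)
+
+print(PoseDecoder()(torch.randn(2, 7)).shape)    # torch.Size([2, 1, 128, 128])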
+
+I have tried different learning rates, loss functions(MSE, SSIM) and even batch normalization. Is there something that I'm missing here?
+"
+"['unsupervised-learning', 'knowledge-representation', 'proofs', 'papers']"," Title: Is unsupervised disentanglement really impossible?Body: In Locatello et al's Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations he claims to prove unsupervised disentanglement is impossible.
+
+Their entire claim is founded on a theorem (proven in the appendix) that states, in my own words:
+
+Theorem: for any distribution $p(z)$ where the variables $z_i$ are independent of each other, there exists an infinite number of transformations $\hat z = f(z)$ from $\Omega_z \rightarrow \Omega_z$ with distribution $q(\hat z)$ such that all variables $\hat z_i$ are entangled/correlated and the distributions are equal ($q(\hat z) = p(z)$).
+
+Here is the exact wording from the paper:
+
+
+
+(I provide both because my misunderstanding may be stemmed from my perception of the theorem)
+
+From here, the authors explain the straightforward jump from this theorem to the claim that, for any disentangled latent space learned without supervision, there will exist infinitely many entangled latent spaces with the exact same distribution.
+
+I do not understand why this means it is no longer disentangled. Just because an entangled representation exists does not mean the disentangled one is any less valid. We can still perform inference on the variables independently because they still satisfy $p(z) = \prod_i p(z_i)$, so where does the impossibility come in?
+"
+"['ai-design', 'audio-processing', 'signal-processing']"," Title: Can AI be used to reverse engineer a black box?Body: A while back I posted on the Reverse Engineering site about an audio DSP system whose designer had passed away and whose manufacturer no longer had source code (but the question was deleted). Basically, the audio filter settings are passed from a Windows program to the DSP device presumably as coefficients and then generic descriptions of those filters (boost/cut, frequency and bandwidth) are passed back from the box to the software - but only if it somehow recognizes the filter setting.
+
+I want to be able to generate the filter settings separately from the manufacturer software, so I need to know how they are calculated. I've not been able to deduce how this is structured from observing the USB communication that I've gathered. So, I wonder if AI could do this.
+
+How would I go about creating an AI to send commands to the box (I know how to communicate with the box and have a framework for how these types of commands are phrased) and then look at the responses to either further decode the system and/or create an algorithm for creating filters?
+
+The communication with the DSP mixer box is basically via ""Serial"" commands and although it uses a USB port, there is a significant bottleneck inside the command control system in the mixer box. Any attempts to reverse engineer may encounter problems based on the sheer amount of time that it would take to compile enough data. Or not.
+"
+"['machine-learning', 'classification', 'statistical-ai']"," Title: When could a linear discriminant give excellent or possibly even the optimal classification accuracy?Body: I am currently reading about linear classification. There is a question in the exercise set at the end of the chapter in the book, as follows:
+
+Sketch two multimodal distributions for which a linear discriminant could give excellent or possibly even the optimal classification accuracy.
+
+I have no idea how to get the optimal solution with linear classification - any ideas?
+"
+"['convolutional-neural-networks', 'architecture', 'function-approximation', 'image-processing']"," Title: Tweaking a CNN for large number of input channelsBody: I am using a CNN for function approximation using geospatial data.
+The input of the function I am trying to approximate consists of all the spatial distances between N locations on a grid and all the other points in the grid.
+
+As of now I implemented a CNN that takes an ""image"" as input. The image has N channels, one for each location of interest. Each i-th channel is a matrix representing my grid, where the pixel values are the distance between each point in the grid and the i-th location of interest. The labels are the N values computed via the actual function I want to approximate. N can be up to 100.
+
+Here an example input of the first layer:
+
+
+
+So far I could see the training and validation loss go down, but since it is a bit of an unusual application for a CNN (to my knowledge, the input usually has at most 3 channels, i.e. RGB), I was wondering:
+
+
+- does this many-channel-input approach have any pitfalls?
+- will I be able to obtain a good accuracy or are there any hard limits I am not aware of?
+- are there any other similar application in literature?
+
+"
+"['social', 'deepfakes']"," Title: Can we combat against deepfakes?Body: I came across 'Amber'(https://ambervideo.co/) where they are claiming that they have trained their AI to find patterns emerging due to artificially created videos which are invisible to naked eye.
+
+I am wondering whether the people who create deepfakes could likewise train their AIs to remove these imperfections, so that the problem reduces to a cat-and-mouse game where having more resources (to train their AI) becomes the crucial factor.
+
+I do not work in AI and vision and so I may be missing some trivial points in the area. I would really appreciate if detailed explanation or relevant resources are given.
+
+Edit: Most of the people who manipulate media or create fake news can afford more resources than an average citizen. So, is the future really going to be a dark one where only a few powerful actors have even more control over society than they do today?
+
+I mean, even though there are fake photos created with Photoshop, most of the good photoshopped photos take a long time to make. But if AIs can be trained to do this, then it becomes mostly a matter of having large resources. Are there related works that give hope of telling the real from the fake?
+
+P.S.: I realize that, after the edit, the question also went somewhat tangential to the topic tags here. Please let me know if there are more relevant tags.
+"
+"['natural-language-processing', 'bert']"," Title: How to use pretrained checkpoints of BERT model on semantic text similarity task?Body: I am unsure how to use the checkpoints derived from a pre-trained BERT model for the task of semantic text similarity.
+
+!python create_pretraining_data.py \
+ --input_file=/input_path/input_file.txt \
+ --output_file=/tf_path/tf_examples.tfrecord \
+ --vocab_file=/vocab_path/uncased_L-12_H-768_A-12/vocab.txt \
+ --do_lower_case=True \
+ --max_seq_length=128 \
+ --max_predictions_per_seq=20 \
+ --masked_lm_prob=0.15 \
+ --random_seed=12345 \
+ --dupe_factor=5
+
+!python run_pretraining.py \
+ --input_file=/tf_path/tf_examples.tfrecord \
+ --output_dir=pretraining_output \
+ --do_train=True \
+ --do_eval=True \
+ --bert_config_file=/bert_path/uncased_L-12_H-768_A-12/bert_config.json \
+ --init_checkpoint=/bert_path/uncased_L-12_H-768_A-12/bert_model.ckpt\
+ --train_batch_size=32 \
+ --max_seq_length=128 \
+ --max_predictions_per_seq=20 \
+ --num_train_steps=20 \
+ --num_warmup_steps=10 \
+ --learning_rate=2e-5
+
+
+I have pre-trained a BERT model from scratch on a domain-specific corpus. I got the checkpoints and the graph.pbtxt file from the code above, but I am unsure how to use those files to evaluate a semantic text similarity test file.
+"
+"['neural-networks', 'deep-learning', 'reinforcement-learning', 'q-learning', 'explainable-ai']"," Title: Is tabular Q-learning considered interpretable?Body: I am working on a research project in a domain where other related works have always resorted to deep Q-learning. The motivation of my research stems from the fact that the domain has an inherent structure to it, and should not require resorting to deep Q-learning. Based on my hypothesis, I managed to create a tabular Q-learning based algorithm which uses limited domain knowledge to perform on-par/outperform the deep Q-learning based approaches.
+
+Given that model interpretability is a subjective and sometimes vague topic, I was wondering if my algorithm should be considered interpretable. The way I understand it, the lack of interpretability in deep-learning-based models stems from the stochastic gradient descent step. However, in case of tabular Q-learning, every chosen action can always be traced back to a finite set of action-value pairs, which in turn are a deterministic function of inputs of the algorithm, although over multiple training episodes.
+
+I believe in using deep-learning-based approaches conservatively only when absolutely required. However, I am not sure how to justify this in my paper without wading into the debated topic of model interpretability. I would greatly appreciate any suggestions/opinions regarding this.
+"
+"['neural-networks', 'chess', 'alphazero', 'deepmind']"," Title: Alpha Zero queen promotionBody: ""The final 9 planes encode possible underpromotions for pawn moves or captures in two possible diagonals, to knight, bishop or rook respectively. Other pawn moves or captures from the seventh rank are promoted to a queen.""
+Doesn't this mean that the network does not know that it can promote to a queen?
+"
+"['natural-language-processing', 'word-embedding']"," Title: Can ELMO embeddings be used to find the n most similar sentences?Body: Assume I have a list of sentences, which is just a list of strings. I need a way of comparing some input string against those sentences to find the most similar. Can ELMO embeddings be used to train a model that can give you the $n$ most similar sentences to an input string?
+
+For reference, gensim provides a doc2vec model that can be trained on a list of strings, then you can use the trained model to infer a vector from some input string. That inferred vector can then be used to find the $n$ most similar vectors.
+
+Could something similar be done, but using ELMO embedding instead?
+
+Any guidance would be greatly appreciated.
+"
+"['neural-networks', 'deep-learning', 'reference-request', 'training', 'neuroevolution']"," Title: Iteratively and adaptively increasing the network size during trainingBody: For an experiment that I'm working on, I want to train a deep network in a special way. I want to initialize and train a small network first, then, in a specific way, I want to increase network depth leading to a bigger network which is subsequently to be trained. This process will be repeated until one reaches the desired depth.
+It would be great if anybody has heard of anything similar and could point me to some related work. I think I read about a related technique in some paper, but I cannot find it anymore.
+"
+"['neural-networks', 'machine-learning', 'ai-design', 'multilayer-perceptrons']"," Title: One vs multiple output neuronsBody: Consider an MLP that outputs an integer 'rating' of 0 to 4. Would it be correct to say this could be modeled in either of the following ways:
+
+
+- map each rating in the dataset to a 'normalized set' between 0 and 1 (i.e. 0, 0.25, 0.5, 0.75, 1), have a single neuron with sigmoid activation at output provide a single decimal value and then take as the rating whatever is closest to that value in the 'normalized set'
+- have 5 output neurons with a softmax activation function output 5 values, each representing a probability of one of the 5 ratings as the outcome, and then take as the rating whichever neuron gives the highest probability?
+
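+To make the two options concrete, here is a minimal PyTorch sketch of both output heads (the hidden sizes and the random input are arbitrary placeholders):
+
+# Minimal sketch of the two output options (PyTorch); sizes are arbitrary.
+import torch
+from torch import nn
+
+x = torch.randn(4, 16)                                # batch of 4 examples, 16 features
+
+# Option 1: single sigmoid output; rating = nearest value in {0, 0.25, 0.5, 0.75, 1}
+reg_head = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
+levels = torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0])
+pred1 = (reg_head(x) - levels).abs().argmin(dim=1)    # index of the closest level
+
+# Option 2: five outputs treated as class scores; rating = argmax
+clf_head = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 5))
+pred2 = clf_head(x).argmax(dim=1)                     # argmax of logits = argmax of softmax
+
+print(pred1, pred2)                                   # two length-4 tensors of ratings 0..4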
+
+If this is indeed the case, how does one typically decide 'which way to go'? Approach 1 certainly appears to yield a simpler model. What are the considerations, pros/cons of each approach? Perhaps a couple of concrete examples to illustrate?
+"
+"['natural-language-processing', 'word-embedding', 'word2vec']"," Title: Understanding how continuous bag of words method learns embedded representationsBody: I'm reading notes on word vectors here. Specifically, I'm referring to section 4.2 on page 7. First, regarding points 1 to 6 - here's my understanding:
+If we have a vocabulary $V$, the naive way to represent words in it would be via one-hot-encoding, or in other words, as basis vectors of $R^{|V|}$ - say $e_1, e_2,\ldots,e_{|V|}$. We want to map these to $\mathbb{R}^n$, via some linear transformation such that the images of similar words (more precisely, the images of basis vectors corresponding to similar words) have higher inner products. Assuming the matrix representation of the linear transformation given the standard basis of $\mathbb{R}^{|V|}$ is denoted by $\mathcal{V}$, then the "embedding" of the $i$-th vocab word (i.e. the image of the corresponding basis vector $e_i$ of $V$) is given by $\mathcal{V}e_i$.
+Now suppose we have a context "The cat ____ over a", CBoW seeks to find a word that would fit into this context. Let the words "the", "cat", "over", "a" be denoted (in the space $V$) by $x_{i_1},x_{i_2},x_{i_3},x_{i_4}$ respectively. We take the image of their linear combination (in particular, their average):
+$$\hat v=\mathcal{V}\bigg(\frac{x_{i_1}+x_{i_2}+x_{i_3}+x_{i_4}}{4}\bigg)$$
+We then map $\hat v$ back from $\mathbb{R}^n$ to $\mathbb{R}^{|V|}$ via another linear mapping whose matrix representation is $\mathcal{U}$: $$z=\mathcal{U}\hat v$$
+Then we turn this score vector $z$ into softmax probabilities $\hat y=softmax(z)$ and compare it to the basis vector corresponding to the actual word, say $e_c$. For example, $e_c$ could be the basis vector corresponding to "jumped".
+Here's my interpretation of what this procedure is trying to do: given a context, we're trying to learn maps $\mathcal{U}$ and $\mathcal{V}$ such that given a context like "the cat ____ over a", the model should give a high score to words like "jumped" or "leaped", etc. Not just that - but "similar" contexts should also give rise to high scores for "jumped", "leaped", etc. For example, given a context "that dog ____ above this" wherein "that", "dog", "above", "this" are represented by $x_{j_1},x_{j_2},x_{j_3},x_{j_4}$, let the image of their average be
+$$\hat w=\mathcal{V}\bigg(\frac{x_{j_1}+x_{j_2}+x_{j_3}+x_{j_4}}{4}\bigg)$$
+This gets mapped to a score vector $z'=\mathcal{U}\hat w$. Ideally, both score vectors $z$ and $z'$ should have similarly high magnitudes in their components corresponding to similar words "jumped" and "leaped".
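+To check my own understanding, here is a tiny numerical sketch of the forward pass described above (the vocabulary size, embedding size and random matrices are arbitrary placeholders):
+# Tiny numerical sketch of the CBOW forward pass (numpy); all values are placeholders.
+import numpy as np
+rng = np.random.default_rng(0)
+V_size, n = 6, 3
+V_mat = rng.normal(size=(n, V_size))         # input word matrix, n x |V|
+U_mat = rng.normal(size=(V_size, n))         # output word matrix, |V| x n
+context_ids = [0, 1, 3, 4]                   # e.g. 'the', 'cat', 'over', 'a'
+x_bar = np.zeros(V_size)
+x_bar[context_ids] = 1.0 / len(context_ids)  # average of the one-hot context vectors
+v_hat = V_mat @ x_bar                        # embedded context, in R^n
+z = U_mat @ v_hat                            # score vector, in R^|V|
+y_hat = np.exp(z) / np.exp(z).sum()          # softmax probabilities
+target = 2                                   # index of the centre word, e.g. 'jumped'
+loss = -np.log(y_hat[target])                # cross-entropy against e_c
+print(y_hat.round(3), round(float(loss), 3))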
+Now to the questions:
+
+We create two matrices, $\mathcal{V} \in \mathbb{R}^{n\times |V|}$ and $\mathcal{U} \in \mathbb{R}^{|V|\times n}$, where $n$ is an arbitrary size which defines the size of our embedding space. $\mathcal{V}$ is the input word matrix such that the $i$-th column of $\mathcal{V}$ is the $n$-dimensional embedded vector for word $w_i$ when it is an input to this model. We denote this $n\times 1$ vector as $v_i$. Similarly, $\mathcal{U}$ is the output word matrix. The $j$-th row of $\mathcal{U}$ is an $n$-dimensional embedded vector for word $w_j$ when it is an output of the model. We denote this row of $\mathcal{U}$ as $u_j$.
+
+
+- How does minimizing the cross-entropy loss between $e_c$ and $\hat y$ ensure that basis vectors corresponding to similar words $e_i$ and $e_j$ are mapped to vectors in $\mathbb{R}^n$ that have high inner product? I'm not sure of the mechanism how the above procedure ensures that. In other words, how is it ensured that if words no. $i_1$ and $i_2$ are similar, then $\langle v_{i_1}, v_{i_2}\rangle$ and $\langle u_{i_1}, u_{i_2}\rangle$ have high values?
+
+- How does the above procedure ensure that linear combinations of words in similar contexts are mapped to "similar" images? Does that even happen? In the above description for example, do $\hat v$ and $\hat w$ corresponding to similar contexts also have a high inner product? If so, how is that ensured?
+
+- Maybe my linear algebra is rusty and this is a silly question, but from what I gather, the columns of $\mathcal{V}$ represent the images of OHE vectors (standard basis of $V$) in the standard basis of $\mathbb{R}^n$ - i.e. the embedded representation of vocab words. Also, the rows of $\mathcal{U}$ also somehow represent the embedded representation of vocab words in $\mathbb{R}^n$. It's not obvious to me why $v_i=\mathcal{V}e_i$ should be the same as or even similar to $u_i$. Again, how does the above procedure ensure that?
+
+
+"
+"['neural-networks', 'genetic-algorithms', 'neat']"," Title: Library for rendering neural network NEATBody: I just finished my implementation of NEAT and I want to see the phenotype of each genome. Is there a library for displaying a neural network like this?
+
+Example of my genome syntax:
+[[0, 11, 0.9154901559275923, 1, 19],
+[4, 11, 1.3524964932656411, 1, 19],
+[12, 9, -1.755210214894685, 1, 23],
+[11, 12, 0.6193383549414015, 1, 23]]
+
+Where [In, Out, Weight, Activated?, Innovation]
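+
+For what it's worth, here is a minimal sketch of how such a genome could be drawn with networkx and matplotlib (assuming the [in, out, weight, activated, innovation] layout above; a dedicated NEAT visualiser would still be nicer):
+
+# Minimal sketch: drawing a genome in the above format as a directed graph.
+import networkx as nx
+import matplotlib.pyplot as plt
+
+genome = [[0, 11, 0.9154901559275923, 1, 19],
+          [4, 11, 1.3524964932656411, 1, 19],
+          [12, 9, -1.755210214894685, 1, 23],
+          [11, 12, 0.6193383549414015, 1, 23]]
+
+G = nx.DiGraph()
+for src, dst, weight, enabled, innovation in genome:
+    if enabled:                               # skip disabled connections
+        G.add_edge(src, dst, weight=round(weight, 2))
+
+pos = nx.spring_layout(G, seed=0)             # simple automatic layout
+nx.draw(G, pos, with_labels=True, node_color='lightblue')
+nx.draw_networkx_edge_labels(G, pos, edge_labels=nx.get_edge_attributes(G, 'weight'))
+plt.show()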
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'convolutional-neural-networks']"," Title: How do I recover the 3D structure of a layer after a fully-connected layer?Body: I want to implement a CNN, but I want to explore what happens when my first layer is a fully-connected one. I still want to use convolutions, of course, but I want to apply them after the first layer. I noticed that the input then loses its 3D structure. Does that mean I can only apply 1d convolutions after that? Is there a non-trivial way to recover the 3d structure, so that 2d convolutions may be applied?
+
+Hopefully, when I reconstruct it to have 3d structure the 3d structure is somehow meaningful.
+
+I also posted this question at https://forums.fast.ai/t/how-do-i-recover-the-3d-structure-of-a-layer-after-a-fully-connected-layer-or-a-flatten-layer/52489 and https://discuss.pytorch.org/t/how-do-i-recover-the-3d-structure-of-a-layer-after-a-fully-connected-layer-or-a-flatten-layer/53313.
+"
+"['reinforcement-learning', 'keras', 'long-short-term-memory']"," Title: LSTM in reinforcement learningBody: Can an LSTM network be used for a reinforcement learning problem? How do I tell it what reward it will get for a prediction, given that its output contains only actions?
+
+Let's say that, at first, I play myself and feed my actions in as training data, so that the network sees which actions are correct, i.e. which ones it should aim for. But how do I then make it learn relatively independently?
+"
+"['deep-learning', 'computer-vision', 'long-short-term-memory', 'image-processing', 'signal-processing']"," Title: How can I detect fast and slow motion in videos?Body: I'm trying to detect whether a given video shot is fast or slow motion. Basically, I need to calculate a ""video motion"" score for a given video sequence, meaning how fast or slow the motion in the video is. For instance, if a video shows a car race or a fast-moving camera, the score is high; whereas if the video shows two people standing and talking, the motion is low, so the score is lower.
+
+What comes to mind is using optical flow, which is already implemented in OpenCV. I have never used it, and I don't know how to interpret or use it for a ""motion score"".
+
+Is optical flow applicable here? How can I use it to calculate a score? In particular, if there is a ML/Deep learning model that already does it, please share it.
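+
+To illustrate what I mean by a score, here is a rough sketch of the kind of optical-flow statistic I imagine using (I have not verified that this is the right approach; the Farneback parameters and the 'video.mp4' path are placeholders):
+
+# Minimal sketch: mean dense optical-flow magnitude as a rough motion score (OpenCV).
+import cv2
+import numpy as np
+
+cap = cv2.VideoCapture('video.mp4')
+ok, prev = cap.read()
+prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
+
+magnitudes = []
+while True:
+    ok, frame = cap.read()
+    if not ok:
+        break
+    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
+    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
+                                        0.5, 3, 15, 3, 5, 1.2, 0)
+    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
+    magnitudes.append(mag.mean())             # average pixel displacement in this frame
+    prev_gray = gray
+
+cap.release()
+print('motion score:', float(np.mean(magnitudes)))   # higher means faster motion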
+"
+"['deep-learning', 'convolutional-neural-networks', 'computer-vision', 'comparison']"," Title: What is the difference between 2d vs 3d convolutions?Body: I was trying to understand the definition of 2d convolutions vs 3d convolutions. I saw the ""simplest definition"" according to Pytorch and it seems the following:
+
+
+- 2d convolutions map $(N,C_{in},H,W) \rightarrow (N,C_{out},H_{out},W_{out})$
+- 3d convolutions map $(N,C_{in},D,H,W) \rightarrow (N,C_{out},D_{out},H_{out},W_{out})$
+
+
+This makes sense to me. However, what I find confusing is that I would have expected images to be considered 3D tensors, yet we apply 2D convolutions to them. Why is that? Why is the channel dimension not part of the ""dimensionality of the images""?
+
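+To make the shape difference concrete, here is a minimal PyTorch check (the channel and filter counts are arbitrary):
+
+# Minimal shape check for Conv2d vs Conv3d (PyTorch); sizes are arbitrary.
+import torch
+from torch import nn
+
+img = torch.randn(8, 3, 32, 32)             # (N, C_in, H, W): a batch of RGB images
+vol = torch.randn(8, 3, 16, 32, 32)         # (N, C_in, D, H, W): a batch of volumes
+
+conv2d = nn.Conv2d(in_channels=3, out_channels=10, kernel_size=3)
+conv3d = nn.Conv3d(in_channels=3, out_channels=10, kernel_size=3)
+
+print(conv2d(img).shape)                    # torch.Size([8, 10, 30, 30])
+print(conv3d(vol).shape)                    # torch.Size([8, 10, 14, 30, 30])
+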
+I also asked this question at https://forums.fast.ai/t/what-is-the-difference-between-2d-vs-3d-convolutions/52495.
+"
+"['machine-learning', 'deep-learning', 'computer-vision', 'text-summarization']"," Title: Video summarization similar to Summe's TextRankBody: We have the popular TextRank API which given a text, ranks keywords and can apply summarization given a predefined text length.
+
+I am wondering if there is a similar tool for video summarization. Maybe a library, a deep model or ML-based tool that given a video file and a length, it ranks frames, or video scenes/shots. I'd like to generate a short summary of a video with visual features.
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks', 'backpropagation']"," Title: How to train and update weights of filtersBody: I have some problems with training CNN :(
+For example:
+Input 6x6x3, 1 kernel 3x3x3, output = 4x4x1 => pool: 2x2x1
+
+By backpropagation I calculated deltas for output.
+This tutorial, and other tutorials, only explain how to calculate the deltas for the weights and the input in the 2D case:
+input*output=deltas for 2D weights
+filter*out = input delta
+But how can I calculate the weight deltas for 3D filters?
+Must I multiply each input channel by the output delta, as below?
+FilterLayer1Delta = OutputDelta * InputLayer1 ?
+FilterLayer2Delta = OutputDelta * InputLayer2 ?
+FilterLayer3Delta = OutputDelta * InputLayer3 ?
+"
+"['reinforcement-learning', 'q-learning', 'dqn']"," Title: Reinforcement learning: How to deal with illegal actions?Body: I'm a beginner in RL and I'm currently trying to make a DQN agent that can act optimally in a simple situation.
+
+In this situation, the agent should decide at what rate to charge or discharge an electrical battery, which is equivalent to buying or selling electrical energy, in order to make money by arbitrage. So the action space is, for example, [-6, -4, -2, 0, 2, 4, 6] kW. The negative numbers mean discharging, and the positive numbers mean charging.
+
+When the battery is empty, the discharging actions (-6, -4, -2) should be forbidden.
+Conversely, when the battery is fully charged, the charging actions (2, 4, 6) should be forbidden.
+
+To deal with this issue, I tried two approaches:
+
+
+- In every step, renewing the action space, which means masking the forbidden action.
+- Give extreme penalties for selecting forbidden actions (in my case the penalty was -9999)
+
+
+But none of them worked.
+
+For the first approach, the training curve (the cumulative rewards) didn't converge.
+
+For the second approach, the training curve converged, but the charging/discharging results are not reasonable (almost random results).
+I think that, in the second approach, a lot of forbidden actions are selected randomly by the epsilon-greedy policy, and these samples are stored in the experience memory, which negatively affects the result.
+
+for example:
+
+The state is defined as [p_t, e_t] where p_t is the market price for selling (discharging) the battery, and e_t is the amount of energy left in the battery.
+
+When the state is [p_t, e_t = 0] and the discharging action (-6), which is a forbidden action in this state, is selected, the next state is [p_t, e_t = -6]. If the next action (2) is then selected, the next state is [p_t, e_t = -4], and so on.
+
+In this case the < s, a, r, s' > samples are:
+
+< [p_t, 0], -6, -9999, [p_t+1, -6] >
+
+< [p_t, -6], 2, -9999, [p_t+1, -4] > ...
+
+These samples should not be stored in the experience memory because they are not desired samples (e_t should never be negative). I think this is why the desired results didn't come out.
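+
+For reference, this is roughly what I mean by masking in the first approach - a minimal sketch where invalid actions are excluded both from the greedy choice and from random exploration (the Q-values, battery numbers and epsilon are placeholders, not my actual implementation):
+
+# Minimal sketch of epsilon-greedy action selection with an action mask (numpy).
+import numpy as np
+
+actions = np.array([-6, -4, -2, 0, 2, 4, 6])        # kW
+q_values = np.random.randn(len(actions))            # stand-in for Q(s, .) from the network
+energy, capacity = 0.0, 12.0                        # battery state (placeholders)
+
+# valid actions: cannot discharge below empty or charge above capacity
+valid = [i for i, a in enumerate(actions) if 0.0 <= energy + a <= capacity]
+
+epsilon = 0.1
+if np.random.rand() < epsilon:
+    choice = int(np.random.choice(valid))            # explore only among valid actions
+else:
+    masked_q = np.full(len(actions), -np.inf)
+    masked_q[valid] = q_values[valid]
+    choice = int(np.argmax(masked_q))                # greedy among valid actions
+
+print('chosen action:', actions[choice], 'kW')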
+
+So what should I do? Please help.
+"
+"['reinforcement-learning', 'proofs', 'convergence', 'temporal-difference-methods']"," Title: How to show temporal difference methods converge to MLE?Body: In chapter 6 of Sutton and Barto (p. 128), they claim temporal difference converges to the maximum likelihood estimate (MLE). How can this be shown formally?
+"
+"['reinforcement-learning', 'proofs', 'monte-carlo-methods', 'convergence']"," Title: How to show Monte Carlo methods converge to an estimate which minimizes mean squared error?Body: In chapter six of Sutton and Barto (p.128), they claim Monte Carlo methods converge to an estimate minimizing the mean squared error. How can this be shown formally?
+
+Bump
+"
+"['deep-learning', 'convolutional-neural-networks', '3d-convolution']"," Title: Is there any use of using 3D convolutions for traditional images (like cifar10, imagenet)?Body: I am curious if there is any advantage of using 3D convolutions on images like CIFAR-10/100 or ImageNet. I know that they are not usually used on this data set, though they could because the channel could be used as the "depth" channel.
+I know that there are only 3 channels, but let's think more deeply. They could be used deeper in the architecture despite the input image only using 3 channels. So, we could have at any point in the depth of the network something like $(C_F,H,W)$ where $C_F$ is dictated by the number of filters and then apply a 3D convolution with kernel size less than $C_F$ in the depth dimension.
+Is there any point in doing that? When is this helpful? When is it not helpful?
+I am assuming (though I have no mathematical proof or any empirical evidence) that if the first layer aggregates all input pixels/activations and disregards locality (like a fully connected layer or conv2D that just aggregates all the depth numbers in the feature space), then 3D convolutions wouldn't do much because earlier layers destroyed the locality structure in that dimension anyway. It sounds plausible but lacks any evidence or theory to support it.
+I know Deep Learning uses empirical evidence to support its claims so perhaps there is something that confirms my intuition?
+Any ideas?
+
+Similar posts:
+
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'math', 'activation-functions']"," Title: Why is the derivative of the activation functions in neural networks important?Body: I'm new to NN. I am trying to understand some of its foundations. One question that I have is: why the derivative of an activation function is important (not the function itself), and why it's the derivative which is tied to how the network performs learning?
+
+For instance, when we say a constant derivative isn't good for learning, what is the intuition behind that? Is the activation function somehow like a hash function that needs to well differentiate small variance in inputs?
+"
+"['training', 'alphazero']"," Title: What does it mean for AlphaZero's network to be ""fully trained""Body: Reading this blog post about AlphaZero:
+https://deepmind.com/blog/article/alphazero-shedding-new-light-grand-games-chess-shogi-and-go
+
+It uses language such as ""the amount of training the network needs"" and ""fully trained"" to describe how long they had the machine play against itself before they stopped training. They state training times such as 9 hours, 12 hours, and thirteen days for chess, shogi, and Go respectively. Why is there a point at which the training ""completes?"" They show plots of AlphaZero's performance on the Y axis (its Elo rating) as a function of the number of training steps. Indeed, the performance seems to level out as the number of training steps increases beyond a certain point. Here's a picture from that site of the chess performance vs training steps:
+
+
+
+Notice how sharply the Elo rating levels off as a function of training steps.
+
+
+- First: am I interpreting this correctly? That is, is there an asymptotic limit to improvement on performance as training sessions tend to infinity?
+- If I am interpreting this correctly, why is there a limit? Wouldn't more training mean better refinement and improvement upon its play? It makes sense to me that the millionth training step may yield less improvement than the very first one, but I wouldn't expect an asymptotic limit. That is, maybe it gets to about 3500 Elo points in the first 200k training steps over the course of the first 10 hours or so of playing ches. If it continued running for the rest of the year, I'd expect it to rise significantly above that. Maybe double its Elo rating? Is that intuition wrong? If so, what are the factors that limit its training progress beyond the first 10 hours of play?
+
+
+Thanks!
+"
+"['philosophy', 'comparison', 'conditional-probability', 'causation']"," Title: Why isn't conditional probability sufficient to describe causality?Body: I read these comments from Judea Pearl saying we don't have causality, physical equations are symmetric, etc. But the conditional probability is clearly not symmetric and captures directed relationships.
+
+How would Pearl respond to someone saying that conditional probability already captures all we need to show causal relationships?
+"
+"['natural-language-processing', 'word-embedding', 'word2vec']"," Title: How does Continuous Bag of Words ensure that similar words are encoded as similar embeddings?Body: This is related to my earlier question, which I'm trying to break down into parts (this being the first). I'm reading notes on word vectors here. Specifically, I'm referring to section 4.2 on page 7. First, regarding points 1 to 6 - here's my understanding:
+
+If we have a vocabulary $V$, the naive way to represent words in it would be via one-hot-encoding, or in other words, as basis vectors of $R^{|V|}$ - say $e_1, e_2,\ldots,e_{|V|}$. We want to map these to $\mathbb{R}^n$, via some linear transformation such that the images of similar words (more precisely, the images of basis vectors corresponding to similar words) have higher inner products. Assuming the matrix representation of the linear transformation given the standard basis of $\mathbb{R}^{|V|}$ is denoted by $\mathcal{V}$, then the ""embedding"" of the $i$-th vocab word (i.e. the image of the corresponding basis vector $e_i$ of $V$) is given by $\mathcal{V}e_i$.
+
+Now suppose we have a context ""The cat ____ over a"", CBoW seeks to find a word that would fit into this context. Let the words ""the"", ""cat"", ""over"", ""a"" be denoted (in the space $V$) by $x_{i_1},x_{i_2},x_{i_3},x_{i_4}$ respectively. We take the image of their linear combination (in particular, their average):
+$$\hat v=\mathcal{V}\bigg(\frac{x_{i_1}+x_{i_2}+x_{i_3}+x_{i_4}}{4}\bigg)$$
+
+We then map $\hat v$ back from $\mathbb{R}^n$ to $\mathbb{R}^{|V|}$ via another linear mapping whose matrix representation is $\mathcal{U}$: $$z=\mathcal{U}\hat v$$
+
+Then we turn this score vector $z$ into softmax probabilities $\hat y=softmax(z)$ and compare it to the basis vector corresponding to the actual word, say $e_c$. For example, $e_c$ could be the basis vector corresponding to ""jumped"".
+
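+To check my reading of the forward pass, here is a tiny NumPy sketch of the computation described above (the dimensions, names, and indices are mine, purely for illustration):
+
+import numpy as np
+
+def softmax(z):
+    e = np.exp(z - z.max())
+    return e / e.sum()
+
+vocab_size, n = 10, 4
+rng = np.random.default_rng(0)
+V = rng.normal(size=(n, vocab_size))   # input word matrix (embeddings as columns)
+U = rng.normal(size=(vocab_size, n))   # output word matrix (embeddings as rows)
+
+context_ids = [0, 3, 7, 9]             # indices of the context words (the, cat, over, a)
+x_avg = np.eye(vocab_size)[:, context_ids].mean(axis=1)  # average of the one-hot vectors
+v_hat = V @ x_avg                      # projected context vector in R^n
+z = U @ v_hat                          # scores over the vocabulary
+y_hat = softmax(z)                     # probabilities, compared against the one-hot target e_c
+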
+Here's my interpretation of what this procedure is trying to do: given a context, we're trying to learn maps $\mathcal{U}$ and $\mathcal{V}$ such that given a context like ""the cat ____ over a"", the model should give a high score to words like ""jumped"" or ""leaped"", etc. Not just that - but ""similar"" contexts should also give rise to high scores for ""jumped"", ""leaped"", etc. For example, given a context ""that dog ____ above this"" wherein ""that"", ""dog"", ""above"", ""this"" are represented by $x_{j_1},x_{j_2},x_{j_3},x_{j_4}$, let the image of their average be
+
+$$\hat w=\mathcal{V}\bigg(\frac{x_{j_1}+x_{j_2}+x_{j_3}+x_{j_4}}{4}\bigg)$$
+
+This gets mapped to a score vector $z'=\mathcal{U}\hat w$. Ideally, both score vectors $z$ and $z'$ should have similarly high magnitudes in their components corresponding to similar words ""jumped"" and ""leaped"".
+
+Is my above understanding correct? Consider the following quote from the lectures:
+
+
+ We create two matrices, $\mathcal{V} \in \mathbb{R}^{n\times |V|}$ and $\mathcal{U} \in \mathbb{R}^{|V|\times n}$, where $n$ is an arbitrary size which defines the size of our embedding space. $\mathcal{V}$ is the input word matrix such that the $i$-th column of $\mathcal{V}$ is the $n$-dimensional embedded vector for word $w_i$ when it is an input to this model. We denote this $n\times 1$ vector as $v_i$. Similarly, $\mathcal{U}$ is the output word matrix. The $j$-th row of $\mathcal{U}$ is an $n$-dimensional embedded vector for word $w_j$ when it is an output of the model. We denote this row of $\mathcal{U}$ as $u_j$.
+
+
+It's not obvious to me why $v_i=\mathcal{V}e_i$ should be the same as or even similar to $u_i$. How does the whole backpropagation procedure above ensure that?
+
+Also, how does the procedure ensure that basis vectors corresponding to similar words $e_i$ and $e_j$ are mapped to vectors in $\mathbb{R}^n$ that have high inner product? (In other words, how is it ensured that if words no. $i_1$ and $i_2$ are similar, then $\langle v_{i_1}, v_{i_2}\rangle$ and $\langle u_{i_1}, u_{i_2}\rangle$ have high values?)
+"
+['philosophy']," Title: Is AI and Big Data science recommending a shift in the scientific method from inductive to deductive reasoning?Body: Is this true? Are we planning to switching such reasoning methods regarding AI tech in the future?
+"
+"['neural-networks', 'deep-learning', 'computer-vision', 'image-processing', 'art-aesthetics']"," Title: Aesthetics analysis with deep learningBody: I'm trying to score video scenes in terms of aesthetics and cinematography features. Basically, how ""interesting"" a scene or video frame can be for a viewer. Simpler, how attractive a scene is. My final goal is to tag intervals of video which can be more interesting to viewers. It can be a ""temporal attention"" model as well.
+
+Do we have an available model or prototype to score cinematographic features of an image or a video? I need a starter tutorial on that. Basically, a ready-to-use prototype/model that I can test as opposed to a paper that I need to implement myself. A paper is fine as long as the code is open-source. I'm new and can't yet write code from a paper.
+"
+"['convolutional-neural-networks', 'computer-vision', 'image-processing', 'convolution']"," Title: Why do we get a three-dimensional output after a convolutional layer?Body: In a convolutional neural network, when we apply the convolution on a $5 \times 5$ image with $3 \times 3$ kernel, with stride $1$, we should get only one $4 \times 4$ as output. In most of the CNN tutorials, we are having $4 \times 4 \times m$ as output. I don't know how we are getting a three-dimensional output and I don't know how we need to calculate $m$. How is $m$ determined? Why do we get a three-dimensional output after a convolutional layer?
+"
+"['machine-learning', 'training', 'models']"," Title: What is the ""thing"" which is trained in AI model trainingBody: I am a newbie in the fantastic AI world, I have started my learning recently.
+After a while, my understanding is, we need to feed in tremendous data to train a or many models.
+
+Once the training is complete, we could take out the trained models and ""plug in"" to any other programming languages to use to detect things.
+
+So my questions are:
+
+1. What are the trained models? Are they algorithms or a collection of parameters in a file?
+
+2. What do they look like? e.g. file extensions
+
+3. In particular, I want to find trained models for detecting birds (the bird types do not matter). Are there any platforms for open-source/free online trained AI models?
+
+Thank you!
+"
+"['comparison', 'convolution', 'geometric-deep-learning', 'graph-neural-networks']"," Title: What is the difference between graph convolution in the spatial vs spectral domain?Body: I've been reading different papers regarding graph convolution and it seems that they come into two flavors: spatial and spectral. From what I can see the main difference between the two approaches is that for spatial you're directly multiplying the adjacency matrix with the signal whereas for the spectral version you're using the Laplacian matrix.
+Am I missing something, or are there any other differences that I am not aware of?
+"
+"['reference-request', 'cognitive-architecture', 'integrated-information-theory']"," Title: What is the cognitive architecture with the highest IIT measure?Body: What contemporary information system or cognitive architecture is the one with the highest measure of the Integrated Information Theory (IIT) (that is, a theory of consciousness, which states that a system's consciousness is determined by its causal properties and is, therefore, an intrinsic, fundamental property of any physical system)
+
+Are there races/competitions to develop (or allow the autonomous development of) the system with the maximum IIT measure?
+"
+"['machine-learning', 'natural-language-processing', 'comparison', 'chat-bots', 'question-answering']"," Title: What are the advantages of Machine Learning compared to traditional programming for developing a chatbot?Body: I am currently building a chatbot. What I have done so far is, collected possible questions/training data/files and create a model out of it using Apache OpenNLP; the model is able to predict all the questions that are in the training data and fails to predict for new questions.
+Instead of doing all the above, I could write a program that matches the question/words against the training data and predicts the answer. So what is the advantage of using Machine Learning algorithms?
+I have searched extensively about this, and all I found was that in Machine Learning there is no need to change the algorithm and the only change would be in the training data. But that is the case with programming too: the change will be in the training data.
+"
+"['neural-networks', 'convolutional-neural-networks', 'comparison', 'papers', 'attention']"," Title: What is the difference between Squeeze-and-excite and bottleneck modules from Mobilenet v2?Body: Squezee-and-excite networks introduced SE blocks, while MobileNet v2 introduced linear bottlenecks.
+
+What is the effective difference between these two concepts?
+
+Is it only implementation (depth-wise convolution, vs per-channel pooling), or they serve a different purpose?
+
+My understanding is that both approaches are used as attention mechanisms, working per-channel. In other words, both approaches are used to filter out unnecessary information (information that we consider noise, not signal).
+Is this correct?
+Do bottlenecks ensure that the same feature won't be represented multiple times in different channels, or do they not help at all in this regard?
+"
+"['ai-design', 'action-model-learning']"," Title: How to select the most appropriate set of actions for a given environment or task?Body: Given a robot in a situation such as in a library reading a book.
+
+Now I want to create a neural network that suggests an appropriate action in this situation and, generally, ignores actions such as ""get up and dance"" and so on.
+
+Since there are limitless actions a robot could do, I need to narrow them down to the ones relevant to this situation. Through its vision system, the word ""book"" and the book neurons should already be activated, as well as ""reading"".
+
+One idea I had was to create an adversarial network which generates words (sequences of letters) based on the situation, such as ""turn page"", ""read next line"" and so on, and then have another neural network which translates these words into actions. (It would then simulate whether this was a good idea. If not, it would somehow suppress the first word and try to generate a new word.)
+
+Another example is the robot is in a maze and gets to a crossroads. The network would generate the word ""turn left"" and ""turn right"".
+
+Another idea would be to have the actions be composed of a body part e.g. ""eyes"" and a movement such as ""move left"" and it would combine these to suggest actions.
+
+Either way, it seems like I need a way to encode actions so that the robot doesn't consider every possible action in the universe.
+
+Is there any research in this area or ideas on how to achieve this?
+
+(I think this may be somewhat related to the task of ""try to name as many animals as you can."")
+"
+"['machine-learning', 'gradient-descent', 'hyperparameter-optimization']"," Title: An intuitive explanation of Adagrad, its purpose and its formulaBody:
+ It (Adagrad) adapts the learning rate to the parameters, performing smaller updates
+ (i.e. low learning rates) for parameters associated with frequently occurring features, and larger updates (i.e. high learning rates) for parameters associated with infrequent features.
+
+
+From Sebastian Ruder's Blog
+
+If a parameter is associated with an infrequent feature then yes, it is more important to focus on properly adjusting that parameter since it is more decisive in classification problems. But how does making the learning rate higher in this situation help?
+
+If it only changes the size of the movement in the dimension of the parameter (makes it larger) wouldn't that make things even more imprecise? Since the network depends more on those infrequent features, shouldn't adjusting those parameters be done more precisely instead of just faster? The more decisive parameters should have a higher ""slope"", thus why should they also have high learning rates? I must be missing something, but what is it?
+
+Further, in the article, the formula for parameter adjustments with Adagrad is given. Where exactly in that formula do you find the information about the frequency of a parameter? There must be a relationship between the gradients of a parameter and the frequency of features associated with it because it's the gradients that play an important role in the formula. What is that relationship?
+
+TLDR: I don't understand both the purpose and formula behind Adagrad. What is an intuitive explanation of it that also provides an answer to the questions above, or shows why they are irrelevant?
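+
+For concreteness, this is the per-parameter update I'm referring to, as I understand it (a minimal NumPy sketch with my own variable names):
+
+import numpy as np
+
+def adagrad_step(params, grads, grad_square_sum, lr=0.01, eps=1e-8):
+    # Accumulate the squared gradients seen so far for each parameter.
+    grad_square_sum += grads ** 2
+    # A parameter whose accumulated squared gradient is small (infrequent feature)
+    # gets an effectively larger step; a frequently updated one gets a smaller step.
+    params -= lr * grads / (np.sqrt(grad_square_sum) + eps)
+    return params, grad_square_sum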
+"
+"['natural-language-processing', 'reference-request', 'computational-linguistics']"," Title: Which NLP techniques can be used to transform sentences (e.g. from passive to active voice) without affecting their meaning?Body: I'm looking for NLP techniques to transform sentences without affecting their meaning.
+For example, techniques that could transform active voice into passive voice, such as
+
+The cat was chasing the mouse.
+
+to
+
+The mouse was being chased by the cat.
+
+I can think of a number of heuristics one could implement to make this happen for specific cases, but would assume that there is existing research on this in the field of linguistics or NLP. My searches for "sentence transformation" and similar terms didn't bring up anything though, and I'm wondering if I simply have the wrong search terms.
+Related to this, I'm also looking for measures of text consistency, e.g., an approach that could detect that most sentences in a corpus are written in active voice and detect outliers written in passive voice. I'm using active vs. passive voice as an example here and would be interested in more general approaches.
+"
+['recurrent-neural-networks']," Title: Is there an standard algorithm for giving options from an RNN?Body: You can feed books to an RNN and it learn how to produce text.
+
+What I'm interested in is an algorithm that, given say 20 letters, suggests, say, the best 10 options for the next 10 letters.
+
+So for example it begins with ""The cat jumped ""
+and then we get various options such as ""over the dog"", ""on the table"", and so on.
+
+My initial thoughts are to first use the most likely next letters, then find the letter which is most uncertain and change it to the second most likely next letter, and repeat this process.
+
+(Then I may have another evaluation neural network to assess which is ""best"" English.)
+
+In other words, I want the RNN to ""think ahead"" about what it's saying, much like a chess-playing machine.
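+
+To make the idea concrete, here is a rough sketch of the expand-and-rank procedure I have in mind (purely illustrative; next_char_probs is a hypothetical function wrapping the RNN):
+
+import heapq
+import math
+
+def suggest_continuations(prefix, next_char_probs, length=10, beam_width=10):
+    # next_char_probs(text) is assumed to return a dict mapping each possible
+    # next character to its probability under the RNN.
+    beam = [(0.0, prefix)]
+    for _ in range(length):
+        candidates = []
+        for log_p, text in beam:
+            for ch, p in next_char_probs(text).items():
+                if p > 0:
+                    candidates.append((log_p + math.log(p), text + ch))
+        # Keep only the beam_width most probable partial continuations.
+        beam = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
+    return [text[len(prefix):] for _, text in beam]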
+"
+"['neural-networks', 'machine-learning', 'face-recognition']"," Title: What are the steps that I need to follow to build a neural network for face recognition?Body: I have developed face recognition algorithms by using pre-built libraries in Python and open CV. However, suppose if I want to make my own neural network algorithm for face recognition, what are the steps that I need to follow?
+I have just seen Andrew Ng's course videos (specifically, I watched 70 videos).
+"
+"['math', 'robotics', 'homework']"," Title: Given an axis-angle rotation vector, how can I find the unit rotation axis and angle?Body: I have a robotics assignment, which I am unable to solve. Given the axis-angle rotation vector $\Theta = (2, 2, 0)$, how can I calculate the unit vector of the rotation axis $k$ and the angle $\theta$?
+"
+"['math', 'robotics']"," Title: How can I derive the rotation matrix from the axis-angle rotation vector?Body: Given an axis-angle rotation vector $\Theta = (2,2,0)$, after finding the unit vector $k=(1/\sqrt{2}, 1/\sqrt{2}, 0)$ and angle $\theta = 2\sqrt{2}$ representing the same rotation, I need to derive the rotation matrix $R$ representing the same rotation and to show, that the matrix is orthonormal. How can I do that?
+"
+"['machine-learning', 'applications', 'learning-algorithms']"," Title: Using ML to analyze Facebook postsBody: First of all, I should mention that I have a very basic knowledge of ML so I apologize if this question seems trivial or stupid.
+
+I am working on a small personal project, basically an app that analyzes Facebook posts concerning movies and translates them into a rating (out of 100). The algorithm looks for keywords, the length of the post, etc., to determine the individual rating, and then averages all the ratings among a user's FB friends to give the result. My question is, would I be able to drastically improve such an algorithm by using ML, or is it not worth it? If yes, what algorithms/techniques do you advise me to learn?
+
+All help is appreciated!
+"
+"['neural-networks', 'reference-request', 'incremental-learning', 'online-learning', 'catastrophic-forgetting']"," Title: What are the state-of-the-art approaches for continual learning with neural networks?Body: There seems to be a lot of literature and research on the problems of stochastic gradient descent and catastrophic forgetting, but I can't find much on solutions to perform continual learning with neural network architectures.
+By continual learning, I mean improving a model (while using it) with a stream of data coming in (maybe after a partial initial training with ordinary batches and epochs).
+A lot of real-world distributions are likely to gradually change with time, so I believe that we should be able to train NNs in an online fashion.
+Do you know which are the state-of-the-art approaches on this topic, and could you point me to some literature on them?
+"
+"['machine-learning', 'computer-vision']"," Title: How to use machine learning to create combine of opposite images side by sideBody: Inspired by: Two Worlds Pictures
+
+
+
+
+I just want to create a Machine Learning model that can automatically combine the opposite images into one image.
+
+I am thinking about 2 possible solutions:
+
+
+- Pose Estimation: Detect humans and their poses from image data and search an archive by pose, but totally out of context.
+- Land Lines: explore similar lines.
+
+
+These are just my ideas; do you have any recommendations?
+Thanks
+"
+"['reinforcement-learning', 'deep-rl', 'ddpg', 'continuous-action-spaces']"," Title: DDPG: how to implement continuous action space bounded in the interval [-2, 2]?Body: I am a newbie in reinforcement learning and trying to understand how to implement continuous actions bounded by $[-2, 2]$. My research shows that doing nothing is a possible solution (i.e. action of 4.5 is mapped to 2 and the action of -3.1 is mapped to -2), but I wonder if there are more elegant approaches.
+"
+"['alphazero', 'alphago-zero']"," Title: Alphazero Value loss doesn't decreaseBody: Currenly I'm trying to reimplement alphazero in pure c++ using libtorch to accomodate my project's need. But when I training my model, I found out that the value loss doesn't decrese at all after even ~2000 iterations and the policy loss decreases pretty fast from the very begining.
+
+Have anybody met any similar issue when developing your alphazero project? And could you give some suggestion of the cause of my issue based on your experience?
+
+Many apreciates
+"
+"['neural-networks', 'tensorflow', 'chat-bots']"," Title: How do I create a chatbot using tensorflow or pytorch using like the one defined in dialogflow?Body: How do I create a chatbot using TensorFlow or PyTorch using like the one defined in DialogFlow? What are the best datasets that I can use so to create my own personal assistant like google assistant?
+
+I want to create a chatbot (an open-source project) as an assistant for custom tasks (like google assistant).
+
+I have tried many neural network models like seq2seq, but I couldn't get satisfactory results, maybe because of the small dataset (taken from a Simpsons movie script) or the model (seq2seq). I am curious what model they use at Google and what type of dataset they pick to get such good results, and whether a normal person can create fully functional chatbots with good results without relying on paid services (like Google's DialogFlow, api.ai, etc.).
+
+I recently heard of OpenAI's implementation of a specific model named GPT-2 which, as they concluded in the paper, showed remarkable performance, but they didn't provide the dataset for various reasons.
+
+What I want to say is that there are a lot of resources and code on the internet for making a working chatbot (or maybe that's just what they show), but when I try to replicate them I always fail to get even remotely good results.
+
+So I need proper guidance on how to make and train such chatbots with my own laptop (16GB RAM, 2GB GPU; I can even get a better configuration) and no money spent on Google services or any such paid APIs.
+
+Please suggest something if you have gotten good results.
+"
+"['deep-learning', 'reference-request', 'research', 'human-activity-recognition']"," Title: What are some conferences for publishing papers on Deep Learning for Human Activity recognition?Body: What are some conferences for publishing papers on Deep Learning for Human Activity recognition? Do any of the major conferences have specific tracks for Human Activity Recognition?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'python', 'tensorflow']"," Title: Suggestions for Deep Learning for regression on huge 3D volumesBody: I have a dataset of 3D images (volumes) with dimensions 400x250x400. For each input image I have an output of the same dimensions. I would like to train a machine learning (or deep learning) model on this data in order to predict values with new data.
+
+My main problems are :
+
+Images are very big, which leads to memory issues (I tried with an NVIDIA 2080 Ti and the model doesn't fit in memory during training).
+
+I need very fast inference, because the model will be used in real time (speed is a requirement).
+
+I already have experience with architectures such as 3D U-Net using Keras with a TensorFlow backend, but it didn't work for me because of the previous reasons, even with very few layers and convolution filters.
+
+I know that one of the first solutions one could imagine is to reduce the resolution of the volumes, but in my case I'm not allowed to do this because I would lose a lot of spatial information.
+
+Any ideas or suggestions? Maybe neural nets are not the best solution? If not, what could I use?
+
+Thank you very much for your suggestions
+"
+"['reinforcement-learning', 'logic', 'artificial-consciousness', 'rewards', 'symbolic-ai']"," Title: Developmental systems that try to explain or understand the reward value in the reinforcement learning?Body: Are there methods (possibly logical or (how they are called in the literature) relational) that allows for the developmental systems to understand or explain the value of the received reward during the developmental process. E.g. if the system (agent) can understand that reward is by the chance, then it should be processed quite differently than the reward that is just initial downpayment for the expected series of rewards. Credit assignment is the one method (especially for delayed rewards), but maybe there are different methods as well?
+
+Relational reinforcement learning allows learning symbolic transition and reward functions, and an emerging understanding of the reward by the agent can greatly facilitate the agent's inner consciousness and the search process for the best transition and reward functions (the symbolic search space can be enormous).
+"
+"['neural-networks', 'convolutional-neural-networks', 'genetic-algorithms']"," Title: CNN - Visualizing images near decision boundary - Pixels inexplicably tend to edgesBody: We are exploring the images classified by a CNN at its decision boundary, using Genetic Algorithms to generate them. We have created a fine-tuned binary grayscale image classifier for cats. As the base model, we are using an Inception-ResNet v2 pre-trained on the ImageNet dataset, and then fine-tune it with a subset of cat and non-cat images (grayscale) from ImageNet. The model achieves ~97% accuracy for a test set.
+
+We have constrained the problem such that evolution starts from a pure white image, and random crossover and mutations are performed with only black pixels. Crossover and mutation probabilities are kept at 0.8 and 0.015 respectively.
+
+As an incentive to generate a ""cat"" with the minimum number of black pixels, I add a penalty for the black pixel count in the image. The initial population is a set of 100 white images that have a single random pixel coloured black in them.
+
+The evolution generates images with only black and white pixels, and we have a fitness function that is taken as a linear transformation of loss calculated between target label and network prediction as follows;
+
+
+loss = binary cross entropy (target, prediction) + λ(# of black pixels)
+
+
+Target value (y) = target label cat - in this case, 0.
+
+λ = hyperparameter to weight the penalty for black pixel count.
+
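+To make the objective concrete, here is a minimal sketch of the fitness evaluation described above (the λ value and the predict_cat_probability callable are placeholders, not our actual code):
+
+import numpy as np
+
+LAMBDA = 1e-4  # illustrative weighting for the black-pixel penalty
+
+def fitness(image, predict_cat_probability, target=0.0, eps=1e-7):
+    # image: 2D array with values in {0, 1}, where 0 encodes a black pixel.
+    # predict_cat_probability: callable wrapping the fine-tuned classifier.
+    p = np.clip(predict_cat_probability(image), eps, 1 - eps)
+    # Binary cross-entropy between the target label (cat = 0) and the prediction.
+    bce = -(target * np.log(p) + (1 - target) * np.log(1 - p))
+    black_pixels = np.sum(image == 0)
+    # Lower loss is better, so fitness is its negative.
+    return -(bce + LAMBDA * black_pixels)
+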
+Problem
+
+My problem is that across multiple runs of evolution, all images classified as cats tend to have black pixels towards the edges of the image. Below is an example.
+
+
+
+This image is classified as a cat with over 96% confidence.
+
+I have tried different crossover mechanisms including
+
+
+- Random rectangular area swap between parents
+- Alternating column interchange
+- Direct black pixel crossover after encoding the image to
+a reduced form that only kept track of the black pixels (black pixel
+list is the genome)
+
+
+Initially, we ran evolution with a similarly fine-tuned VGG-16 model, and then moved to the Inception ResNet due to better accuracy. Pixels tend to edges across models and crossover mechanisms.
+
+In one run, I explicitly constrained the evolution to perform mutations in the middle section of the images for 3,000 generations before lifting this restriction. But the images generated after that point always had better scores.
+
+We are at a loss as to why the images never have pixels coloured in the middle.
+
+Does anyone have any ideas on this?
+
+
+"
+"['machine-learning', 'training', 'datasets']"," Title: Train on big dataset (1mil + images)Body: I am in the process of collecting a huge dataset of Human poses captured images to create a model to classify poses.
+
+My question is how will I be able to train on this massive dataset? I have multiple GPUs and Multiple machines access (Also have GCP).
+
+What would be the best way to train on such huge dataset?
+
+Thanks.
+"
+"['training', 'computer-vision', 'deep-neural-networks', 'object-recognition']"," Title: What could an oscillating training loss curve represent?Body:
+
+I tried to create a simple model that receives an $80 \times 130$ pixel image. I only had 35 images and 10 test images. I trained this model for a binary classification task. The architecture of the model is described below.
+
+conv2d_1 (Conv2D) (None, 80, 130, 64) 640
+_________________________________________________________________
+conv2d_2 (Conv2D) (None, 78, 128, 64) 36928
+_________________________________________________________________
+max_pooling2d_1 (MaxPooling2 (None, 39, 64, 64) 0
+_________________________________________________________________
+dropout_1 (Dropout) (None, 39, 64, 64) 0
+_________________________________________________________________
+conv2d_3 (Conv2D) (None, 39, 64, 128) 73856
+_________________________________________________________________
+conv2d_4 (Conv2D) (None, 37, 62, 128) 147584
+_________________________________________________________________
+max_pooling2d_2 (MaxPooling2 (None, 18, 31, 128) 0
+_________________________________________________________________
+dropout_2 (Dropout) (None, 18, 31, 128) 0
+_________________________________________________________________
+flatten_1 (Flatten) (None, 71424) 0
+_________________________________________________________________
+dense_1 (Dense) (None, 512) 36569600
+_________________________________________________________________
+dropout_3 (Dropout) (None, 512) 0
+_________________________________________________________________
+dense_2 (Dense) (None, 1) 513
+
+
+What could the oscillating training loss curve represent above? Why is the validation loss constant?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'computational-learning-theory']"," Title: Is it possible to control asymptotic behaviour of neural network models?Body: Is it possible to specify what the asymptotic behaviour of a Neural Networks (NN) model should be?
+I am thinking of a NN which tries to learn a mapping $\vec y=f(\vec x)$ with $\vec x$ a vector of features of dimension $d$ and $\vec y$ a vector of outputs of dimension $p$.
+Is it possible to specify that, for instance, the NN should have a fixed value when $x_1$ goes to infinite?
+I mean:
+$$
+\lim_{x_1\to \infty} f(\vec x) = \vec c
+$$
+If it is not possible with NN, do you know other machine learning models (for instance Gaussian Process Regression or Support Vector Regression) which have a known asymptotic behaviour?
+"
+['object-detection']," Title: Can mAP score be used to describe ""recall"" rate of a model?Body: I have a general question regarding the mAP
score used in measuring object detection system performance.
+
+I understood how the AP
score is calculated, by averaging precision over recall 0 to 1. And then we can compute mAP
, by averaging AP
score of different labels.
+
+However, what I have been really confused, is that, it seems that mAP
score is used to denote the ""precision"" of a model. Then what about the ""recall"" aspect? Note that generally speaking, when measuring the performance of a machine learning model, we need to report precision
and recall
at the same time, right? It seems that mAP
can only cover the precision
aspect of a model.
+
+Am I missed anything here? Or mAP
score, despite its name is derived from Precision
, can indeed subsume both ""precision"" and ""recall"" and therefore become comprehensive enough?
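+
+For reference, this is roughly how I understand the AP computation (a simplified sketch over a precision-recall curve, not any official implementation):
+
+import numpy as np
+
+def average_precision(recalls, precisions):
+    # recalls: increasing recall values in [0, 1]; precisions: precision at each recall point.
+    recalls = np.concatenate(([0.0], recalls, [1.0]))
+    precisions = np.concatenate(([0.0], precisions, [0.0]))
+    # Make the precision envelope monotonically decreasing (the usual interpolation step).
+    for i in range(len(precisions) - 2, -1, -1):
+        precisions[i] = max(precisions[i], precisions[i + 1])
+    # Sum precision over the recall increments (area under the interpolated PR curve).
+    deltas = recalls[1:] - recalls[:-1]
+    return float(np.sum(deltas * precisions[1:]))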
+"
+"['tensorflow', 'keras', 'models', 'artificial-neuron', 'sequence-modeling']"," Title: Is ""dataset size"" and ""model size"" same thing?Body: I mean what is determine my model size, connection amount between layers and neurons, or size of my dataset?
+"
+"['neural-networks', 'deep-learning', 'overfitting']"," Title: Is normalizing the data a way to improve generalization?Body: There are many known ways to overcome overfitting or make a model generalize better to unseen data.
+
+Here I would like to ask if normalizing/standardizing/similarizing the train and test data is a plausible approach.
+
+By similarizing I mean making the images look alike by using some function that could be a Neural Network itself. I know that normally one would approach this the opposite way, by augmenting and therefore increasing the variation in the training data. But is it also possible to improve the model by restricting the variation of the training and test data?
+
+I know that this may not be the best approach and maybe too complicated but I see some use cases where known techniques of preventing overfitting aren't applicable. In those cases, having a network that can normalize/standardize/similarize the ""style"" of different images could be very useful.
+
+Unfortunately I didn't find a single paper discussing this approach.
+"
+"['tensorflow', 'object-detection']"," Title: Why tf object detection api needs so few pictures?Body: I am wondering why tf object detection api needs so few picture samples for training while regular cnns needs many more?
+
+What I read in tutorials is that tf object detection api needs around 100-500 pictures per class for training (is it true?) while regular CNNs need many many more samples, like tens of thousands or more. Why is it so?
+"
+"['deep-learning', 'computer-vision', 'generative-adversarial-networks', 'image-processing', 'image-generation']"," Title: How to generate the original image from feature set?Body: We all know that using CNN, or even simpler functions, like CLD or EHD, we can generate a set of features out of images.
+
+Is there any ways or approaches that given a set of features, we can somehow generate a corase version of the original image that was given as input? Maybe a gray-scale version with visible objects inside? If so, what features do we need?
+"
+"['deep-learning', 'computer-vision', 'object-recognition', 'object-detection', 'facial-recognition']"," Title: Extending FaceNet’s triplet loss to object recognitionBody: FaceNet uses a novel loss metric (triplet loss) to train a model to output embeddings (128-D from the paper), such that any two faces of the same identity will have a small Euclidean distance, and such that any two faces of different identities will have a Euclidean distance larger than a specified margin. However, it needs another mechanism (HOG or MTCNN) to detect and extract faces from images in the first place.
+Can this idea be extended to object recognition? That is, can an object detection framework (e.g. Mask R-CNN) be used to extract bounding boxes of an object, crop the object, feed this to a network that was trained with triplet loss, and then compare the embeddings of objects to see if they're the same object?
+Is there any research that has been done or any published public datasets for this?
+"
+"['deep-learning', 'classification', 'tensorflow', 'linear-regression', 'underfitting']"," Title: TensorFlow estimator DNNClassifier fails to fit simple dataBody: The ready-to-use DNNClassifier in tf.estimator seems not able to fit these data:
+
+X = [[1,2], [1,12], [1,17], [9,33], [48,49], [48,50]]
+Y = [ 1, 1, 1, 1, 2, 3 ]
+
+
+I've tried with 4 layers but it's fitting to 83% (= 5/6 samples) only:
+
+hidden_units = [2000,1000,500,100]
+n_classes = 4
+
+
+The sample data above are supposed to be separated by 2 lines (right-click image to open in new tab):
+
+
+
+It seems stuck because Y=2 and Y=3 are too close. How can I change the DNNClassifier to fit to 100%?
+"
+"['machine-learning', 'implementation', 'support-vector-machine']"," Title: How to implement SVM algorithm from scratch in a programming language?Body: I'm a computer scientist who's studying support vector machines (SVMs) in a machine learning course. I have some understanding of how SVMs are designed, thanks to 16. Learning: Support Vector Machines - MIT. However, what I'm not understanding is the transition from the optimization problem of the Lagrangian function to its implementation in any programming language. Basically, what I need to understand is how to build, from scratch, the decision function, given a training set. In particular, how do I find Lagrange multipliers in order to know which points are to be considered to define support vectors and the decision function?
+Can anyone explain this to me?
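+
+For context, the optimisation problem I'm referring to is (as I understand it) the dual form
+
+$$\max_{\alpha} \; \sum_{i=1}^{m}\alpha_i - \frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{m}\alpha_i\alpha_j y_i y_j \, x_i^\top x_j \quad \text{subject to} \quad \alpha_i \ge 0, \;\; \sum_{i=1}^{m}\alpha_i y_i = 0,$$
+
+where the support vectors are exactly the training points with $\alpha_i > 0$ and the decision function is $f(x) = \operatorname{sign}\left(\sum_i \alpha_i y_i \, x_i^\top x + b\right)$. It's the step from this formulation to code that actually computes the $\alpha_i$ that I'm missing.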
+"
+"['deep-learning', 'classification', 'tensorflow', 'linear-regression', 'text-summarization']"," Title: Solution to classify product namesBody: I have a bunch of training data for classifying product names, around 30,000 samples. The task is to classify these product names into types of product, around 100 classes (single words).
+
+For example:
+
+dutch lady sweetened uht milk => milk
+samsung galaxy note 10 => electronics
+cocacola zero => softdrink
+...
+
+
+All words in the inputs are indexed to numbers, and so are the classes. I've tried to use tf.estimator.DNNClassifier to classify them, but got no good results. The outcome is just an accuracy of 4%, which is meaningless.
+
+Could it be that I'm in a case where the classes (Y values) are distributed somewhat randomly and are too hard to separate with repeated linear separation?
+
+Are there any existing solutions to classify a list of names, like my product names?
+"
+"['machine-learning', 'ai-design', 'training', 'data-science']"," Title: What should we do when we have equal observations with different labels?Body: Suppose we have a labeled data set with columns $A$, $B$, and $C$ and a binary outcome variable $X$. Suppose we have rows as follows:
+
+ col A B C X
+ 1 1 2 3 1
+ 2 4 2 3 0
+ 3 6 5 1 1
+ 4 1 2 3 0
+
+
+Should we throw away either row 1 or row 4 because they have different values of the outcome variable X? Or keep both of them?
+"
+"['machine-learning', 'comparison', 'quantum-computing']"," Title: What is the difference between machine learning and quantum machine learning?Body: What is the difference between machine learning and quantum machine learning?
+"
+['convolutional-neural-networks']," Title: 3D geometry and similarity with a reference modelBody: I am looking for a CNN method, or any other machine learning method, to recognize 3D natural geometries that are similar to each others, and compare these geometries with a reference 3D model. To illustrate this, consider the following crater topographic map (x,y,z) of the Moon as an example:
+
+
+
+The exercise would be to recognize the craters, and compare their (3D) geometry (scale-invariant) with a reference 3D crater model (e.g. the one within the blue square). The result I am looking for is a kind of heatmap showing the similarity measure of (1) a sampled crater with the crater model, and/or (2) the geometry of some parts of the sampled crater (e.g. the inner crater steep sides) with those of the reference model. No classification.
+
+I tend to think that a 3D-oriented CNN method (OctNet, Octree CNN, etc.) is a starting point for the above-mentioned task, but I would prefer getting opinions on this matter since I am still a newbie in machine learning and we are dealing with a direct application to real-world natural objects here.
+"
+"['convolutional-neural-networks', 'pooling', 'max-pooling', 'average-pooling']"," Title: Is it effective to concatenate the results of mean-pooling and max-pooling?Body: Is it popular or effective to concatenate the results of mean-pooling and max-pooling, to get the invariance of the latter and the expressivity of the former?
+"
+"['machine-learning', 'transfer-learning', 'catastrophic-forgetting']"," Title: How is transfer learning used to mitigate catastrophic forgetting in neural networks?Body: How can transfer learning be used to mitigate catastrophic forgetting. Could someone elaborate on this?
+"
+"['neural-networks', 'training', 'tensorflow', 'gpu']"," Title: Is a GPU always faster than a CPU for training neural networks?Body: Currently, I am working on a few projects that use feedforward neural networks for regression and classification of simple tabular data. I have noticed that training a neural network using TensorFlow-GPU is often slower than training the same network using TensorFlow-CPU.
+Could something be wrong with my setup/code or is it possible that sometimes GPU is slower than CPU?
+"
+"['deep-learning', 'reinforcement-learning', 'convolutional-neural-networks', 'deep-rl']"," Title: Torch CNN not trainingBody: I am completely new to CNN's, and I do not quite know how to design or use them efficiently. That being said, I am attempting to build a CNN that learns to play Pac-man with reinforcement learning. I have trained it for about 3 hours and have seen little to no improvement. My observation space is 3 channels * 15 * 19, and there are 5 actions. Here is my code, I am open to any and all suggestions. Thanks for all your help.
+
+from minipacman import MiniPacman as pac
+from torch import nn
+import torch
+import random
+import torch.optim as optimal
+from torch.autograd import Variable
+import matplotlib.pyplot as plt
+import numpy as np
+import keyboard
+
+
+loss_fn = nn.MSELoss()
+epsilon = 1
+env = pac(""regular"", 1000)
+time = 0
+action = random.randint(0, 4)
+q = np.zeros(3)
+alpha = 0.01
+gamma = 0.9
+tick = 0
+decay = 0.9999
+
+
+class Value_Approximator (nn.Module):
+ def __init__(self):
+ super(Value_Approximator, self).__init__()
+ # Convolution 1
+ self.cnn1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=5, stride=1, padding=2)
+ self.relu1 = nn.ReLU()
+
+ # Max pool 1
+ self.maxpool1 = nn.MaxPool2d(kernel_size=2)
+
+ # Convolution 2
+ self.cnn2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=5, stride=1, padding=2)
+ self.relu2 = nn.ReLU()
+
+ # Max pool 2
+ self.maxpool2 = nn.MaxPool2d(kernel_size=2)
+
+ # Fully connected 1 (readout)
+ self.fc1 = nn.Linear(384, 5)
+
+ def forward(self, x):
+ # Convolution 1
+ out = self.cnn1(x)
+ out = self.relu1(out)
+
+ # Max pool 1
+ out = self.maxpool1(out)
+
+ # Convolution 2
+ out = self.cnn2(out)
+ out = self.relu2(out)
+
+ # Max pool 2
+ out = self.maxpool2(out)
+
+ # Resize
+ # Original size: (100, 32, 7, 7)
+ # out.size(0): 100
+ # New out size: (100, 32*7*7)
+ out = out.view(out.size(0), -1)
+
+ # Linear function (readout)
+ out = self.fc1(out)
+
+ return out
+
+approx = Value_Approximator()
+optimizer = optimal.SGD(approx.parameters(), lr=alpha)
+
+
+while time < 50000:
+ print(""Time: ""+str(time))
+ print(""Epsilon: ""+str(epsilon))
+ print()
+ time += 1
+ state = env.reset()
+ tick = 0
+
+ epsilon *= decay
+
+ if epsilon < 0.1:
+ epsilon = 0.1
+
+ while True:
+ tick += 1
+ state = np.expand_dims(state, 1)
+ state = state.reshape(1, 3, 15, 19)
+ q = approx.forward(torch.from_numpy(state))[0]
+
+ if random.uniform(0, 1) < epsilon:
+ action = env.action_space.sample()
+ else:
+ _, action = torch.max(q, -1)
+ action = action.item()
+ new_state, reward, terminal, _ = env.step(action)
+ show_state = new_state
+ new_state = np.expand_dims(new_state, 1)
+ new_state = state.reshape(1, 3, 15, 19)
+
+ q_new = approx.forward(torch.from_numpy(new_state).type(torch.FloatTensor))[0] # "" find Q (s', a') ""
+ # find optimal action Q value for next step
+ new_max, _ = torch.max(q_new, -1)
+ new_max = new_max.item()
+
+ q_target = q.clone()
+ q_target = Variable(q_target.data)
+
+ # update target value function according to TD
+ q_target[action] = reward + torch.mul(new_max, gamma) # "" reward + gamma*(max(Q(s', a')) ""
+
+ loss = loss_fn(q, q_target) # "" reward + gamma*(max(Q(s', a')) - Q(s, a)) ""
+ # Update original policy according to Q_target ( supervised learning )
+ approx.zero_grad()
+ loss.backward()
+ optimizer.step()
+
+ # Q and Q_target should converge
+ if time % 100 == 0:
+ state = torch.FloatTensor(show_state).permute(1, 2, 0).cpu().numpy()
+
+ plt.subplot(131)
+ plt.title(""Imagined"")
+ plt.imshow(state)
+ plt.subplot(132)
+ plt.title(""Actual"")
+ plt.imshow(state)
+ plt.show(block=False)
+ plt.pause(0.000001)
+
+ if keyboard.is_pressed('1'):
+ torch.save(approx.state_dict(), 'trained-10000.mdl')
+ if keyboard.is_pressed('9'):
+ torch.save(approx.state_dict(), 'trained-10000.mdl')
+
+ if terminal or tick > 100:
+ plt.close()
+ break
+
+ state = new_state
+
+
+torch.save(approx.state_dict(), 'trained-10000.mdl')
+
+"
+['computer-vision']," Title: How do I generate structured light for the 3D bin picking system?Body: I want to know how to generate the structured light which projects different patterns of light on a 3D object which is under scanning.
+"
+['image-processing']," Title: Turn photos right-side up?Body: I'm looking for either an existing AI app or a pre-trained NN that will tell me if a photograph is right-side up or not. I want to use this to create an application that automatically rotates photos so they are right-side-up. This doesn't seem hard.
+
+If it doesn't exist, presumably I can create it with Tensorflow, and just use a ton of photos to train it, and assume they are all correctly oriented in the training set. Would that work?
+"
+"['reinforcement-learning', 'convolutional-neural-networks']"," Title: Loss reduction, but constant performance with CNNBody: I made a CNN with a reasonable loss curve, but the performance of the model does not improve. I have tried making the model larger, I am using three convolutional layers with batch norms.
+
+Thanks for your help.
+
+
+"
+"['reinforcement-learning', 'q-learning', 'dqn']"," Title: Can exogenous variables be state features in reinforcement learning?Body: I have a question about state representation of Q-learning or DQN algorithm.
+I'm still a beginner of RL, so I'm not sure that is it suitable to take exogenous variables as state features.
+
+For example, in my current project, deciding to charge/discharge an electric vehicle actions according to the real-time fluctuating electricity prices, I'm wondering if the past n-step prices or hours can be considered as state features.
+
+Because both the prices and the hour are just given information in every time step rather than being dependent to the charging/discharging actions, I'm suspicious about whether they can are theoretically qualified to be state features or not.
+
+If they are not qualified, could someone give me a reference or something that I can read?
+"
+"['deep-learning', 'computer-vision', 'autoencoders']"," Title: Dealing with empty frames in MRI imagesBody: I started working on the application of deep learning in medical imaging recently. While dealing with MRI images in the BraTS dataset, I observe that first and last few frames are always completely empty (black). I want to ask those who are already working in the field, is there a way to remove them in a procedural manner before training and add them correctly after the training as a postprocessing step (to comply with the ground truth segmentations' shape)? Has anyone tried that?
+I could not find any results on Google. So asking here.
+
+Edit: I think I did not make my point clear enough. I meant to say that the first and last few frames of each MRI scan are empty. How to deal with those is what I intended to ask.
+"
+"['reinforcement-learning', 'dqn']"," Title: Should importance sample weighting be compensated for by dynamically increasing learning rate?Body: I'm using Prioritized Experience Replay (PER) with a DDQN. To compensate for overfitting relatively high-value samples due to the non-uniform selection, I'm training with sample weights provided along with the PER samples to downplay each sample's loss contribution according to its probability of selection. I've observed that typically these sample weightings vary from $~0.1$ to $<0.01$, as the buffer gradually fills up (4.8M samples).
+
+When using this compensation, the growth of the maximal Q value per episode stalls prematurely compared to a non-weight-compensated regime. I presume that this is because the size of the back-propagation updates is being greatly and increasingly diminished by the sample weights.
+
+To correct for this I've tried taking the beta-adjusted maximum weight as reported by the PER (the same buffer-wide value by which the batch is normalized) and multiplying the base learning rate by it, thereby adjusting the optimizer after each batch selection.
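+
+For reference, this is roughly the weighting scheme I'm describing (a minimal sketch with placeholder names, not my actual code):
+
+import numpy as np
+
+def importance_weights(sample_probs, buffer_size, beta):
+    # sample_probs: the selection probability of each transition in the sampled batch.
+    weights = (buffer_size * sample_probs) ** (-beta)
+    # Normalise by the maximum weight (approximated here by the batch maximum) so that
+    # every weight is <= 1; each sample's loss is multiplied by its weight before backprop.
+    return weights / weights.max()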
+
+My question is two-fold:
+
+
+- Is this the correct interpretation of what's going on?
+- Is it standard practice to compensate for sample weighting in this way?
+
+
+Although it seems to be working in keeping the Q growth alive whilst taming the loss, I've not been able to find any information on this and haven't found any implementations that compensate in this way so have a major doubt about the mathematical validity of it.
+"
+"['convolutional-neural-networks', 'transfer-learning', 'incremental-learning']"," Title: Transfer learning to train only for a new class while not affecting the predictions of the other classBody: I am basically interested in vehicle on the road.
+
+YoloV3 pytorch is giving a decent result.
+
+So my vehicles of interest are Car, Motorbike, Bicycle, Truck and Bus. I have a small vehicle that is being detected as a truck.
+
+Since the small vehicle is being nicely detected as a truck, I have annotated this small vehicle as a different class.
+
+Though, I could add an extra class, say an 81st class, since the current YoloV3 being used is trained on 80 classes.
+
+The 81st class would contain the weights of the truck class; I would freeze the weights such that the rest of the 80 classes remain unaltered and only the 81st class gets trained on this new data.
+
+The problem is the final layer gets tuned according to the prediction of all the classes it learns.
+
+I was not able to find any post that could actually mention this way of preserving the predictions of the other classes and introducing a new class using transfer learning.
+
+The closest I was able to get is this post: Weight Sampling Tutorial SSD using Keras.
+
+In its second paragraph, under ""Option 1: Just ignore the fact that we need only 8 classes"", it mentions:
+
+This would work, and it wouldn't even be a terrible option. Since only 8 out of the 80 classes would get trained, the model might get gradually worse at predicting the other 72 classes.
+
+Is it possible to preserve the predictions of the previous pre-trained model while introducing the new class, and use transfer learning to train only for that class?
+
+I feel that this is not possible, but I would like to know your opinion. I hope someone can prove me wrong.
+"
+"['neural-networks', 'training', 'tensorflow', 'keras']"," Title: Why do I get a straight line as an output from a neural network?Body: I am using feedforward neural network for regression and what I get as a result of prediction is a constant value visible on the graph below:
+
+
+The data I use are typical standardised tabular numbers. The architecture is as follows:
+
+# Imports and model creation, not shown in the original snippet (assuming the standard Keras API).
+from keras.models import Sequential
+from keras.layers import Dense, Dropout
+from keras import optimizers
+from keras.callbacks import ReduceLROnPlateau, TensorBoard
+
+model = Sequential()
+model.add(Dropout(0.2))
+model.add(Dense(units=512, activation='relu'))
+model.add(Dropout(0.3))
+model.add(Dense(units=256, activation='relu'))
+model.add(Dropout(0.3))
+model.add(Dense(units=128, activation='relu'))
+model.add(Dense(units=128, activation='relu'))
+model.add(Dense(units=1))
+
+adam = optimizers.Adam(lr=0.1)
+
+model.compile(loss='mean_squared_error', optimizer=adam)
+
+reduce_lr = ReduceLROnPlateau(
+ monitor='val_loss',
+ factor=0.9,
+ patience=10,
+ min_lr=0.0001,
+ verbose=1)
+
+tensorboard = TensorBoard(log_dir=""logs\{}"".format(NAME))
+
+history = model.fit(
+ x_train,
+ y_train,
+ epochs=500,
+ verbose=10,
+ batch_size=128,
+ callbacks=[reduce_lr, tensorboard],
+ validation_split=0.1)
+
+
+It seems to me that all weights are zeroed and only constant bias is present here, since for different data samples from a test set I get the same value, but I am not sure.
+
+I understand that the algorithm has found the smallest MSE for such a constant value, but is there a way of avoiding this situation, since a straight line is not really a good solution for my project?
+"
+['generative-adversarial-networks']," Title: How does InfoGAN learn latent categorical codes on MNISTBody: While reading the InfoGAN paper and implement it taking help from a previous implementation, I'm having some difficulty understanding how it learns the discrete categorical code when trained on MNIST.
+
+The implementation we tried to follow uses a target that is a randomly generated integer from 0 to 9. My doubt is this: how can it learn categorical information if, from the start, it's learning using a loss which takes in random values?
+
+If this implementation is wrong, then what should the target be while training the Q network, when using the categorical-cross-entropy loss on the output logits?
+"
+"['neural-networks', 'deep-learning', 'classification', 'computer-vision', 'image-processing']"," Title: Video engagement analysis with deep learningBody: I am trying to rank video scenes/frames based on how appealing they are for a viewer. Basically, how ""interesting"" or ""attractive"" a scene inside a video can be for a viewer. My final goal is to generate say a 10-second short summary given a video as input, such as those seen on Youtube when you hover your mouse on a video.
+
+I previously asked a similar question here. But the ""aesthetics"" model is good for ranking artistic images, not good for frames of videos. So it was failing. I need a score based on ""engagement for general audience"". Basically, which scenes/frames of video will drive more clicks, likes, and shares when selected as a thumbnail.
+
+Do we have an available deep-learning model or a prototype doing that? A ready-to-use prototype/model that I can test, as opposed to a paper that I need to implement myself. A paper is fine as long as the code is open-source. I'm new and can't yet write code from a paper.
+"
+"['machine-learning', 'deep-learning', 'reinforcement-learning', 'q-learning']"," Title: What does the symbol $\mathbb E$ mean in these equations?Body: I came across some papers that use $\mathbb E$ in equations, in particular, this paper: https://arxiv.org/pdf/1511.06581.pdf. Here is some equations from the paper that uses it:
+
+$Q^\pi \left(s,a \right) = \mathbb E \left[R_t|s_t = s, a_t = a, \pi \right]$ ,
+
+$V^\pi \left(s \right) = \mathbb E_{a \sim \pi\left(s \right)} \left[Q^\pi \left(s, a\right) \right]$ ,
+
+$Q^\pi \left(s, a \right) = \mathbb E_{s'} \left[r + \gamma\mathbb E_{a' \sim \pi \left(s' \right)} \left[Q^\pi \left(s', a' \right) \right] | s,a,\pi \right]$
+
+$\nabla_{\theta_i}L_i\left(\theta_i \right) = \mathbb E_{s, a, r, s'} \left[\left(y_i^{DQN} - Q \left(s, a; \theta_i \right) \right) \nabla_{\theta_i} Q\left(s, a; \theta_i \right) \right]$
+
+Could someone explain to me what the purpose of $\mathbb E$ is?
+"
+"['reinforcement-learning', 'deep-rl', 'alphazero', 'greedy-ai']"," Title: When does AlphaZero play suboptimal moves?Body: If AlphaZero was always playing the best moves it would just generate the same training game over and over again. So where does the randomness come from? When does it decide not to play the most optimal move?
+"
+"['training', 'perceptron']"," Title: What are the reasons a perceptron is not able to learn?Body: I'm just starting to learn about neural networking and I decided to study a simple 3-input perceptron to get started with. I am also only using binary inputs to gain a full understanding of how the perceptron works. I'm having difficulty understanding why some training outputs work and others do not. I'm guessing that it has to do with the linear separability of the input data, but it's unclear to me how this can easily be determined. I'm aware of the graphing line test, but it's unclear to me how to plot the input data to fully understand what will work and what won't work.
+
+There is quite a bit of information that follows. But it's all very simple. I'm including all this information to be crystal clear on what I'm doing and trying to understand and learn.
+
+Here is a schematic graphic of the simple 3-input perceptron I'm modeling.
+
+
+
+Because it only has 3 inputs and they are binary (0 or 1), there are only 8 possible combinations of inputs. Since each of those 8 combinations gets its own binary output, there are 2^8 = 256 possible output patterns that the perceptron could be trained on. In other words, the perceptron can be trained to recognize more than one input configuration.
+
+Let's call the inputs 0 thru 7 (all the possible configurations of a 3-input binary system). But we can train the perceptron to recognize more than just one input. In other words, we can train the perceptron to fire for say any input from 0 to 3 and not for inputs 4 thru 7. And all those possible combinations add up to 256 possible training input states.
+
+Some of these training input states work, and others do not. I'm trying to learn how to determine which training sets are valid and which are not.
+
+I've written the following program in Python to emulate this Perceptron through all 256 possible training states.
+
+Here is the code for this emulation:
+
+import numpy as np
+np.set_printoptions(formatter={'float': '{: 0.1f}'.format})
+
+# Perceptron math functions.
+def sigmoid(x):
+ return 1 / (1 + np.exp(-x))
+def sigmoid_derivative(x):
+ return x * (1 - x)
+# END Perceptron math functions.
+
+# The first column of 1's is used as the bias.
+# The other 3 cols are the actual inputs, x3, x2, and x1 respectively
+training_inputs = np.array([[1, 0, 0, 0],
+ [1, 0, 0, 1],
+ [1, 0, 1, 0],
+ [1, 0, 1, 1],
+ [1, 1, 0, 0],
+ [1, 1, 0, 1],
+ [1, 1, 1, 0],
+ [1, 1, 1, 1]])
+
+# Setting up the training outputs data set array
+num_array = np.array
+num_array = np.arange(8).reshape([1,8])
+num_array.fill(0)
+
+for num in range(25):
+ bnum = bin(num).replace('0b',"""").rjust(8,""0"")
+ for i in range(8):
+ num_array[0,i] = int(bnum[i])
+
+ training_outputs = num_array.T
+# training_outputs will have the array form: [[n,n,n,n,n,n,n,n]]
+# END of setting up training outputs data set array
+
+ # ------- BEGIN Perceptron functions ----------
+ np.random.seed(1)
+ synaptic_weights = 2 * np.random.random((4,1)) - 1
+ for iteration in range(20000):
+ input_layer = training_inputs
+ outputs = sigmoid(np.dot(input_layer, synaptic_weights))
+ error = training_outputs - outputs
+ adjustments = error * sigmoid_derivative(outputs)
+ synaptic_weights += np.dot(input_layer.T, adjustments)
+ # ------- END Perceptron functions ----------
+
+
+    # Convert to clean output 0, 0.5, or 1 instead of the messy calculated values.
+ # This is to make the printout easier to read.
+ # This also helps with testing analysis below.
+ for i in range(8):
+ if outputs[i] <= 0.25:
+ outputs[i] = 0
+ if (outputs[i] > 0.25 and outputs[i] < 0.75):
+ outputs[i] = 0.5
+ if outputs[i] > 0.75:
+ outputs[i] = 1
+ # End convert to clean output values.
+
+ # Begin Testing Analysis
+ # This is to check to see if we got the correct outputs after training.
+ evaluate = ""Good""
+ test_array = training_outputs
+ for i in range(8):
+ # Evaluate for a 0.5 error.
+ if outputs[i] == 0.5:
+ evaluate = ""The 0.5 Error""
+ break
+ # Evaluate for incorrect output
+ if outputs[i] != test_array[i]:
+ evaluate = ""Wrong Answer""
+ # End Testing Analysis
+
+ # Printout routine starts here:
+ print_array = test_array.T
+ print(""Test#: {0}, Training Data is: {1}"".format(num, print_array[0]))
+ print(""{0}, {1}"".format(outputs.T, evaluate))
+ print("""")
+
+
+And when I run this code I get the following output for the first 25 training tests.
+
+Test#: 0, Training Data is: [0 0 0 0 0 0 0 0]
+[[ 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0]], Good
+
+Test#: 1, Training Data is: [0 0 0 0 0 0 0 1]
+[[ 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0]], Good
+
+Test#: 2, Training Data is: [0 0 0 0 0 0 1 0]
+[[ 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0]], Good
+
+Test#: 3, Training Data is: [0 0 0 0 0 0 1 1]
+[[ 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0]], Good
+
+Test#: 4, Training Data is: [0 0 0 0 0 1 0 0]
+[[ 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0]], Good
+
+Test#: 5, Training Data is: [0 0 0 0 0 1 0 1]
+[[ 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.0]], Good
+
+Test#: 6, Training Data is: [0 0 0 0 0 1 1 0]
+[[ 0.0 0.0 0.0 0.0 0.5 0.5 0.5 0.5]], The 0.5 Error
+
+Test#: 7, Training Data is: [0 0 0 0 0 1 1 1]
+[[ 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0]], Good
+
+Test#: 8, Training Data is: [0 0 0 0 1 0 0 0]
+[[ 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0]], Good
+
+Test#: 9, Training Data is: [0 0 0 0 1 0 0 1]
+[[ 0.0 0.0 0.0 0.0 0.5 0.5 0.5 0.5]], The 0.5 Error
+
+Test#: 10, Training Data is: [0 0 0 0 1 0 1 0]
+[[ 0.0 0.0 0.0 0.0 1.0 0.0 1.0 0.0]], Good
+
+Test#: 11, Training Data is: [0 0 0 0 1 0 1 1]
+[[ 0.0 0.0 0.0 0.0 1.0 0.0 1.0 1.0]], Good
+
+Test#: 12, Training Data is: [0 0 0 0 1 1 0 0]
+[[ 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0]], Good
+
+Test#: 13, Training Data is: [0 0 0 0 1 1 0 1]
+[[ 0.0 0.0 0.0 0.0 1.0 1.0 0.0 1.0]], Good
+
+Test#: 14, Training Data is: [0 0 0 0 1 1 1 0]
+[[ 0.0 0.0 0.0 0.0 1.0 1.0 1.0 0.0]], Good
+
+Test#: 15, Training Data is: [0 0 0 0 1 1 1 1]
+[[ 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0]], Good
+
+Test#: 16, Training Data is: [0 0 0 1 0 0 0 0]
+[[ 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0]], Good
+
+Test#: 17, Training Data is: [0 0 0 1 0 0 0 1]
+[[ 0.0 0.0 0.0 1.0 0.0 0.0 0.0 1.0]], Good
+
+Test#: 18, Training Data is: [0 0 0 1 0 0 1 0]
+[[ 0.0 0.0 0.5 0.5 0.0 0.0 0.5 0.5]], The 0.5 Error
+
+Test#: 19, Training Data is: [0 0 0 1 0 0 1 1]
+[[ 0.0 0.0 0.0 1.0 0.0 0.0 1.0 1.0]], Good
+
+Test#: 20, Training Data is: [0 0 0 1 0 1 0 0]
+[[ 0.0 0.5 0.0 0.5 0.0 0.5 0.0 0.5]], The 0.5 Error
+
+Test#: 21, Training Data is: [0 0 0 1 0 1 0 1]
+[[ 0.0 0.0 0.0 1.0 0.0 1.0 0.0 1.0]], Good
+
+Test#: 22, Training Data is: [0 0 0 1 0 1 1 0]
+[[ 0.0 0.0 0.0 1.0 0.0 1.0 1.0 1.0]], Wrong Answer
+
+Test#: 23, Training Data is: [0 0 0 1 0 1 1 1]
+[[ 0.0 0.0 0.0 1.0 0.0 1.0 1.0 1.0]], Good
+
+Test#: 24, Training Data is: [0 0 0 1 1 0 0 0]
+[[ 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0]], Wrong Answer
+
+
+For the most part, it appears to be working. But there are situations where it clearly does not work.
+
+I have labeled the errors in two different ways.
+
+The first type of error is ""The 0.5 Error"" which is easy to see. It should never return any output of 0.5 in this situation. Everything should be binary. The second type of error is when it reports the correct binary outputs but they don't match what it was trained to recognize.
+
+I would like to understand the cause of these errors. I'm not interested in trying to correct the errors as I believe these are valid errors. In other words, these are situations where the perceptron is simply incapable of being trained for. And that's ok.
+
+What I want to learn is why these cases are invalid. I'm suspecting that they have something to do with the input data not being linearly separable in these situations. But if that's the case, then how do I go about determining which cases are not linearly separable? If I could understand how to do that I would be very happy.
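+
+For what it's worth, here is a rough brute-force check I came up with while thinking about this (my own sketch, separate from the emulation above, and I am not certain the small integer weight range is always sufficient): since there are only 8 input points, one can simply search small integer weights and a bias for a separating threshold.
+
+import itertools
+
+inputs = list(itertools.product([0, 1], repeat=3))   # the 8 possible binary inputs
+
+def linearly_separable(targets, r=4):
+    # Rough sketch (my own): search small integer weights w1, w2, w3 and bias b; r=4 is an assumption.
+    for w1, w2, w3, b in itertools.product(range(-r, r + 1), repeat=4):
+        if all((1 if w1*x1 + w2*x2 + w3*x3 + b > 0 else 0) == t
+               for (x1, x2, x3), t in zip(inputs, targets)):
+            return True
+    return False
+
+print(linearly_separable([0, 0, 0, 0, 0, 1, 1, 0]))   # the Test#6 pattern above - expect False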
+
+Also, are the reasons why it doesn't work in specific cases the same? In other words, are both types of errors caused by linear inseparability of the input data? Or is there more than one condition that causes a perceptron to fail in certain training situations?
+
+Any help would be appreciated.
+"
+"['reinforcement-learning', 'comparison', 'supervised-learning', 'function-approximation', 'regression']"," Title: Can supervised learning be recast as reinforcement learning problem?Body: Let's assume that there is a sequence of pairs $(x_i, y_i), (x_{i+1}, y_{i+1}), \dots$ of observations and corresponding labels. Let's also assume that the $x$ is considered as independent variable and $y$ is considered as the variable that depends on $x$. So, in supervised learning, one wants to learn the function $y=f(x)$.
+
+Can reinforcement learning be used to learn $f$ (possibly, even learning the symbolic form of $f(x)$)?
+
+Just a sketch of how it could be done: $x_i$ can be considered as the environment, and each $x_i$ defines some set of possible ""actions"" - a possible symbolic form of $f(x)$, or possible numerical values of the parameters of $f(x)$ (if the symbolic form is fixed). The concretely selected action/functional form $f(x, a)$ ($a$ being the set of parameters) can then be assigned a reward derived from the loss function: how close the observation $(x_i, y_i)$ is to the value predicted by $f(x_i)$.
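+
+As a very rough illustration of what I mean (my own sketch: each observation is treated as a one-step episode of a made-up environment, the action is the predicted value, and the reward is the negative loss):
+
+import numpy as np
+
+class RegressionAsRL:
+    # Hypothetical sketch: each (x_i, y_i) pair becomes a one-step episode.
+    def __init__(self, xs, ys):
+        self.xs, self.ys = np.asarray(xs), np.asarray(ys)
+        self.i = 0
+
+    def reset(self):
+        self.i = np.random.randint(len(self.xs))
+        return self.xs[self.i]                      # the state is just the input x_i
+
+    def step(self, y_hat):
+        reward = -(y_hat - self.ys[self.i]) ** 2    # loss turned into a (negative) reward
+        return None, reward, True, {}               # episode terminates after one step
+
+In this framing the problem is essentially a contextual bandit, since each episode has a single step.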
+
+Are there existing ideas or works in RL along the lines of the framework I sketched in the previous paragraph?
+"
+"['neural-networks', 'convolutional-neural-networks', 'comparison', 'machine-translation']"," Title: What are the differences between Bytenet and Wavenet?Body: I recently read Bytenet and Wavenet and I was curious why the first model is not as popular as the second. From my understanding, Bytenet can be seen as a seq2seq model where the encoder and the decoder are similar to Wavenet. Following the trends from NLP where seq2seq models seem to perform better, I find it strange that I couldn't find any paper that compares the two. Are there any drawbacks of Bytenet over Wavenet other than the computation time?
+"
+"['deep-learning', 'metric']"," Title: Using True Positive as a Cost FunctionBody: I wanted to use True Positive (and True Negative) in my cost function to make to modify the ROC shape of my classifier. Someone told me and I read that it is not differentiable and therefore not usable as a cost function for a neural network.
+
+In the example where 1 is positive and 0 negative I deduce the following equation for True Positive ($\hat y = prediction, y = label$):
+
+$$ TP = \bf(\hat{y}^Ty) $$
+$$ \frac{\partial TP}{\partial \bf y} = \bf y $$
+
+The following for True Negative:
+$$ TN = \bf(\hat{y}-1)^T(y-1) $$
+$$ \frac{\partial TN}{\partial \bf y} = \bf \hat y^T -1 $$
+
+The False Positive:
+$$ FP = - \bf (\hat y^T-1) y $$
+$$ \frac{\partial FP}{\partial \bf y} = - \bf ( \hat y^T - 1) $$
+
+The False Negative:
+$$ FN = \bf \hat y^T (y-1) $$
+$$ \frac{\partial FN}{\partial \bf y} = \bf \hat y $$
+
+All equations seem differentiable to me. Can someone explain where I went wrong?
+"
+"['neural-networks', 'machine-learning', 'convolutional-neural-networks', 'tensorflow', 'keras']"," Title: Understanding the intuition behind Content Loss (Neural Style Transfer)Body: I'm trying to understand the intuition behind how the Content Loss is calculated in a Neural Style Transfer. I'm reading from an articles: https://medium.com/mlreview/making-ai-art-with-style-transfer-using-keras-8bb5fa44b216 , that explains the implementation of Neural Style Transfer, from the Content loss function:
+
+
+The article explains that:
+
+
+- F and P are matrices with a number of rows equal to N and a number of columns equal to M.
+- N is the number of filters in layer l and M is the number of spatial elements in the feature map (height times width) for layer l.
+
+
+From the code below for getting the features/content representation from particular Conv layers, I didn't quite understand how it works. Basically I printed out the output of every line of code to try to make it easier, but it still left a number of questions to be asked, which I listed below the code:
+
+def get_feature_reps(x, layer_names, model):
+ """"""
+ Get feature representations of input x for one or more layers in a given model.
+ """"""
+ featMatrices = []
+ for ln in layer_names:
+ selectedLayer = model.get_layer(ln)
+ featRaw = selectedLayer.output
+ featRawShape = K.shape(featRaw).eval(session=tf_session)
+ N_l = featRawShape[-1]
+ M_l = featRawShape[1]*featRawShape[2]
+ featMatrix = K.reshape(featRaw, (M_l, N_l))
+ featMatrix = K.transpose(featMatrix)
+ featMatrices.append(featMatrix)
+ return featMatrices
+
+def get_content_loss(F, P):
+ cLoss = 0.5*K.sum(K.square(F - P))
+ return cLoss
+
+
+1- For the line featRaw = selectedLayer.output, when I print featRaw, I get the output: Tensor(""block4_conv2/Relu:0"", shape=(1, 64, 64, 512), dtype=float32).
+
+
+- a- Relu:0 - does this mean the ReLU activation has not yet been applied?
+- b- Also, I presume we're outputting the feature map outputs from block4_conv2, not the filters/kernels themselves, correct?
+- c- Why is there an axis of 1 at the start? My understanding of Conv layers is that they're simply made up from the number of filters/kernels (with shape: height, width, depth) to apply to the input.
+- d- Does selectedLayer.output simply output the shape of the Conv layer, or does the output object also hold other information, like the pixel values from the output feature maps of the layer?
+
+
+2- With the line featMatrix = K.reshape(featRaw, (M_l, N_l)), printing featMatrix would output: Tensor(""Reshape:0"", shape=(4096, 512), dtype=float32).
+
+
+- a- This is where I'm confused the most. So to get the feature/content representation of a particular Conv layer of an image, we simply create a matrix of 2 dimensions, the first being the number of filters and the other being the area of the filter/kernel (height * width). That doesn't make sense! How do we get unique feature of an image from just that?!! We're not retrieving any pixel values from a feature map. We're simply getting the area size of filter/kernel and the number of filters, but not retrieving any of the content (pixel values) itself!!
+- b- Also, the final featMatrix is transposed - i.e. featMatrix = K.transpose(featMatrix) - with the output Tensor(""transpose:0"", shape=(512, 4096), dtype=float32). Why is that (i.e. why reverse the axis)?
+
+
+3 - Finally I want to know, once we retrieve the content representation, how can I output that in both as a numpy array and save it as an image?
+
+Any help would be really appreciated.
+"
+"['reinforcement-learning', 'comparison', 'model-based-methods', 'model-free-methods', 'sample-efficiency']"," Title: Why are model-based methods more sample efficient than model-free methods?Body: Why do model-based methods use fewer samples than model-free methods? Here, I'm specifically referring to model-based methods in which we have to learn a policy and model. I can only think of two reasons for this question:
+
+
+- We can potentially obtain more samples from the learned model, which may speed up the learning speed.
+- Models allow us to predict the future states and run simulations. This may lead to more valuable transitions, thereby speeding up learning.
+
+
+But I heavily doubt this is the whole story. I sincerely hope someone can share a more detailed explanation for this question.
+"
+"['machine-learning', 'unsupervised-learning', 'clustering']"," Title: How can I cluster based on the complementary categories?Body: K-means tries to find centroid and then clusters around the centroids. But what if we want to cluster based on the complement?
+
+For example, suppose we have a group of animals and we want to cluster Dogs, Cats, (Not Dogs and Not Cats). The 3rd category will not arise from mean clustering.
+"
+['chat-bots']," Title: How can we log user-bot conversations using the Microsoft Bot Framework?Body: I am starting to create my first bot with the Microsoft Bot Framework with the help of Azure. First, I want to know where all the conversations the user has with the bot are stored, so that I can then get a log of all the conversations that have been held.
+
+I already have some answers stored in knowledge bases using QnA Maker for certain questions it can answer. I want to know where the questions that were not answered, or rather that the bot could not answer, are stored.
+
+Currently, I am asking the users to write their question in a feedback form when they don't get a response from my bot. This is taking up my time and also annoys the user as they have to type more. I want my bot to collect these questions and store them in a database.
+"
+"['generative-adversarial-networks', 'convolutional-layers', 'constrained-optimization', 'wasserstein-gan']"," Title: Wasserstein GAN with non-negative weights in the criticBody: I want to train a WGAN where the convolution layers in the critic are only allowed to have non-negative weights (for a technical reason). The biases, nonetheless, can take both +/- values. There is no constraint on the generator weights.
+I did a toy experiment on MNIST and observed that the performance is significantly worse than a regular WGAN.
+What could be the reason? Can you suggest some architectural modifications so that the nonnegativity constraint doesn't severely impair the model capacity?
+"
+['data-mining']," Title: Algorithm for seasonal trendsBody: I have a very big table with lots of names and how much they are searched by date.
+
+I would like to find trending patterns. When does a name rise and when does it fall. Without knowing the name or the pattern before.
+The rise could be during the seasons of the year but also during a week.
+
+Like a 'warm hat' is trending in winter and falling in summer.
+Or searches for a ""board game"" might rise on Sunday and decrease on Monday.
+
+The table looks simplified like this:
+
+winter gloves, 2014-01-01, 200
+warm hat, 2014-01-01, 300
+swimming short, 2014-01-01, 1
+sunscreen, 2014-01-01, 2
+....
+winter gloves, 2014-07-01, 1
+warm hat, 2014-07-01, 1
+swimming short, 2014-07-01, 200
+sunscreen, 2014-07-01, 300
+
+
+Which algorithms should I have a look at?
+
+Thanks for any hint,
+Joerg
+"
+"['reinforcement-learning', 'definitions', 'dqn']"," Title: Is a state that includes only the past n-step price records partially observable?Body: I'm currently working on a project to make an DQN agent that decides whether to charge or discharge an electric vehicle according to hourly changing price to sell or buy. The price pattern also varies from day to day. The goal of this work is to schedule the optimal charging/discharging actions so that it can save money.
+
+The state contains past n-step price records, current energy level in battery, hour, etc., like below:
+
+$$
+s_t = \{ p_{t-5}, p_{t-4}, p_{t-3}, p_{t-2}, p_{t-1}, E_t, t \}
+$$
+
+What I'm wondering is whether this is a partially observable situation or not, because the agent can only observe the past n-step prices rather than knowing every price at every time step.
+
+Can anyone comment on this issue?
+
+If this is a partially observable situation, is there any simple way to deal with it?
+"
+"['definitions', 'unsupervised-learning', 'clustering']"," Title: Is unsupervised learning a branch of AI?Body: From Artificial Intelligence: A Modern Approach, a book by Stuart Russell and Peter Norvig, this is the definition of AI:
+
+
+ We define AI as the study of agents that receive percepts from the environment and perform actions. Each such agent implements a function that maps percept sequences to actions, and we cover different ways to represent these functions, such as reactive agents, real-time planners, and decision-theoretic systems. We explain the role of learning as extending the reach of the designer into unknown environments, and we show how that role constrains agent design, favoring explicit knowledge representation and reasoning.
+
+
+Given the definition of AI above, is unsupervised learning (e.g. clustering) a branch of AI? I think the definition above is more suitable for supervised or reinforcement learning.
+"
+"['machine-learning', 'computer-vision', 'facial-recognition']"," Title: How to implement fisherface algorithm and how much time will it take?Body: I found on the web that fisherface is the best algorithm for face detection. Before investing deeply into it, I just want to know how hard is it to implement it and how much time will it take.
+
+I am new to this website and I welcome any suggestions.
+"
+"['neural-networks', 'deep-learning', 'reinforcement-learning']"," Title: Feasibility of using machine learning to obtain self-consistent solutionsBody: I am a physicist and I don't have much background on machine learning or deep learning except taking a couple of courses on statistics. In physics, we often simulate a model by means of two-way coupled systems where each system is described by a partial differential equation. The equations are generally unfolded in a numerical grid of interest and then solved iteratively until a self-consistent solution is obtained.
+
+A well-known example is the Schrödinger-Poisson solver. Here, for a given nano/atomic structure, we assume an initial electron density. Then we solve the Poisson equation for that electron density. The Poisson equation tells us the electrostatics (electric potential) of the structure. Given this information, we solve the Schrödinger equation for the structure, which tells us about the energy levels of the electrons and their wave functions in the structure. But one would then find that these energy levels and wave functions correspond to a different electron density (than the one we initially guessed). So we iterate the process with the new electron density and follow the above-mentioned procedure in a loop until a self-consistent solution is obtained.
+
+Often times the iteration processes are computationally expensive and extremely time-consuming.
+
+My question is this: Would the use of deep learning algorithms offer any advantage in modeling such problems where iteration and self-consistency are involved? Are there any study/literature where researchers explored this avenue?
+"
+"['evolutionary-algorithms', 'genetic-programming', 'evolutionary-computation', 'tgp']"," Title: Why do all nodes in a GP tree need to be the same type?Body: Context: I'm a complete beginner to evolutionary algorithms and genetic algorithms and programming. I'm currently taking a course about genetic algorithms and genetic programming.
+
+One of the concepts introduced in the course is ""closure,"" the idea that - with an expression tree representing a genetic program that we're evolving - all nodes in the tree need to be the same type. As a practical example, the lecturer mentions that implementing greater_than(a, b) for two integers a and b can't return a boolean like true or false (it can return, say, 0 and 1 instead).
+
+What he didn't explain is why the entire tree needs to match in all operators. It seems to me that this requirement would result in the entire tree (representing your evolved program) being composed of nodes that all return the same type (say, integer).
+"
+"['deep-learning', 'python']"," Title: Image to image regression in tensorflowBody: I am working on an image to image regression task which requires me to develop a deep learning model that takes in a sequence of 5 images and return another image. The sequence of 5 images and the output images are conceptually and temporally related. In fact, the 5 images in the sequence each correspond to timestep in a simulation and the output, the one I am trying to predict, corresponds eventually to the 6th timestep of that sequence.
+
+For now, I have been training a simple regression-type CNN model which takes in the sequence of 5 images stored in a list and outputs an image corresponding to the next timestep in the simulation.
+
+However, I have been researching a bit in order to find a better way to carry out this task, and I found the idea of ConvLSTMs. However, I have seen these applied to predicting features of an image and outputting a sentence describing that image. What I wanted to know is whether ConvLSTMs can also output images, and more importantly whether they can be applied to my case. If not, what other types of deep learning network could be suitable for this task?
+
+Thanks in advance !
+"
+"['deep-neural-networks', 'distributed-computing']"," Title: In how few updates can a multi layer neural net be trained?Body: A single iteration of gradient descent can be parallelised across many worker nodes. We simple split the training set across the worker nodes, pass the parameters to each worker, each worker computes gradients for their subset of the training set, and then passes it back to the master to be averaged. With some effort, we can even use model parallelism.
+
+However, stochastic gradient descent is an inherently serial process. Each update must be performed sequentially. Each iteration, we must perform a broadcast and gather of all parameters. This is bad for performance. Ultimately, the number of updates is the limiting factor of deep model training speed.
+
+Why must we perform many updates?
+With how few updates can we achieve good accuracy?
+
+What factors affect the minimum number of updates requires to reach some accuracy?
+"
+"['markov-chain', 'time-series']"," Title: Are there any ways to model markov chains from time series data?Body: I want to make a thing that produces a stochastic process from time series data.
+The time series data is recorded every hour over the year, which means 24-hour of patterns exist for 365 days.
+
+What I want to do is something like below:
+
+
+- Fit a probability distribution using data for each hour so that I can sample the most probable value for this hour.
+- Repeat it for 24 hours to generate a pattern of a day.
+
+
+BUT! I want the sampling to be done considering previous values rather than being done in an independent manner.
+
+For example, I want to sample from $P(x_t \mid x_{t-1}, \dots, x_1)$ or just $P(x_t \mid x_{t-1})$ rather than $P(x_t)$, where $x_t$ refers to the value at a specific hour $t$.
+
+What I came up with was the Markov chain, but I couldn't find any reference or materials on how to model it from real data.
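+
+To make this concrete, here is the rough kind of thing I am imagining (my own sketch, assuming the hourly values are discretized into a small number of bins; I do not know whether this is the standard way to do it): estimate a first-order transition matrix from counts, then sample a day hour by hour.
+
+import numpy as np
+
+def fit_first_order_chain(data, n_bins=10):
+    # data: array of shape (365, 24) holding the hourly values for one year (assumed)
+    edges = np.quantile(data, np.linspace(0, 1, n_bins + 1)[1:-1])
+    binned = np.digitize(data, edges)                  # bin index per hour, shape (365, 24)
+    counts = np.ones((n_bins, n_bins))                 # add-one smoothing
+    for day in binned:
+        for t in range(1, 24):
+            counts[day[t - 1], day[t]] += 1
+    return counts / counts.sum(axis=1, keepdims=True)  # rows approximate P(x_t | x_{t-1})
+
+def sample_day(trans, start_bin, rng=np.random):
+    seq = [start_bin]
+    for _ in range(23):
+        seq.append(rng.choice(len(trans), p=trans[seq[-1]]))
+    return seq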
+
+Could anyone give me a comment for this issue?
+"
+"['deep-learning', 'python']"," Title: Can ConvLSTMs outuput images?Body: I am working on an image to image regression task which requires me to develop a deep learning model that takes in a sequence of 5 images and return another image. The sequence of 5 images and the output images are conceptually and temporally related. In fact, the 5 images in the sequence each correspond to timestep in a simulation and the output, the one I am trying to predict, corresponds eventually to the 6th timestep of that sequence.
+
+For now, I have been training a simple regression-type CNN model which takes in the sequence of 5 images stored in a list and outputs an image corresponding to the next timestep in the simulation. This does work with a small and rather simple dataset (13000 images) but works a bit worse on a more diverse and larger dataset (102000 images).
+
+For this reason, I have been researching a bit in order to find a better way to carry out this task, and I found the idea of ConvLSTMs. However, I have seen these applied to predicting features of an image and outputting a sentence describing that image. What I wanted to know is whether ConvLSTMs can also output images, and more importantly whether they can be applied to my case. If not, what other types of deep learning network could be suitable for this task?
+
+Thanks in advance!
+"
+"['reinforcement-learning', 'markov-decision-process', 'rewards', 'value-iteration']"," Title: Can I have different rewards for a single action based on which state it transitions to?Body: I am working on an MDP where there are four states and ten actions. I am supposed to derive the optimal policy to reach the desired state. At any state, a particular action can take you to any of the other states.
+For example, if we begin with state S1, performing action A1 on S1 can take you to S2, S3, or S4, or just stay in the same state S1. Similarly for the other actions.
+
+My question is - is it mandatory to have only a single reward value for a single action A? Or is it possible to give a reward of 10 if action a1 on state s1 takes you to s2, give a reward of 50 if action a1 on state s1 takes you to s3, give a reward of 100 if action a1 on state s1 takes you to s4 which is the terminal state or give zero reward if that action results in the state being unchanged.
+
+Can I do this??
+
+Because in my case every state is better than the previous one, i.e., S2 is better than S1, S3 is better than S2, and so on. So if an action on S1 directly takes us to S4, which is the final state, I would like to award it the maximal reward.
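+
+To make it concrete, this is the kind of reward table I have in mind (just a sketch using the numbers above, where the reward depends on the landing state and not only on the state-action pair):
+
+# R(s, a, s'): reward depends on the landing state s', not only on (s, a)
+R = {
+    ('S1', 'A1', 'S1'): 0,     # state unchanged
+    ('S1', 'A1', 'S2'): 10,
+    ('S1', 'A1', 'S3'): 50,
+    ('S1', 'A1', 'S4'): 100,   # terminal state reached directly
+    # ... similar entries for the other states and actions
+}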
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks']"," Title: How does one create a non-classifying CNN in order to gain information from images?Body: How do I program a neural network such that, when an image is inputted, the output is a numerical value that is not the probability of the image being a certain class? In other words, a CNN that doesn't classify. For example, when an input of an image of a chair is given, the model should not give the chance that the image is a chair but rather give the predicted age of the chair, the predicted price of the chair, etc. I'm currently not sure how to program a neural net like this.
+"
+"['machine-learning', 'math']"," Title: How can I determine the mathematical relation between the input and output variables?Body: I would like to take in some input values for $n$ variables, say $R$, $B$, and $G$. Let $Y$ denote the response variable of these $n$ inputs (in this example, we have $3$ inputs). Other than these, I would like to use a reference/target value to compare the results.
+
+Now, suppose the relation between the inputs ($R$, $B$ and $G$) with the output $Y$ is (let's say):
+
+$$Y = R + B + G$$
+
+But the system/machine has no knowledge of this relation. It can only read its inputs, $R$, $B$ and $G$, and the output, $Y$. Also, the system is provided with the reference value, say, $\text{REF} = 30$ (suppose).
+
+The aim of the machine is to find this relation between its inputs and output(s). For this, I have come across some quite useful material online, like this forum query and Approximation by Superpositions of a Sigmoidal Function by G. Cybenko, and felt that it was possible. I also suspect that Polynomial Regression may be helpful, as suggested here.
+
+One vague approach that comes to my mind is to use a truth-table-like approach to somehow deduce the effect of the inputs on the output and hence get a function for it. But I am neither sure how to proceed with it, nor do I trust its credibility.
+
+Is there any alternative/already existing method to accomplish this?
+"
+['long-short-term-memory']," Title: How does an LSTM output the correct dimensions for classes?Body: Take the below LSTM:
+
+input: 5x1 matrix
+hidden units: 256
+output size (aka classes, 1 hot vector): 10x1 matrix
+
+
+It is my understanding that an LSTM of this size will do the following:
+
+$w_x$ = weight matrix at $x$
+
+$b_x$ = bias matrix at $x$
+
+activation_gate = tanh($w_1$ $\cdot$ input + $w_2$ $\cdot$ prev_output + $b_1$)
+
+input_gate = sigmoid($w_3$ $\cdot$ input + $w_4$ $\cdot$ prev_output + $b_2$)
+
+forget_gate = sigmoid($w_5$ $\cdot$ input + $w_6$ $\cdot$ prev_output + $b_3$)
+
+output_gate = sigmoid($w_7$ $\cdot$ input + $w_8$ $\cdot$ prev_output + $b_4$)
+
+The size of the output of each gate should be equal to the number of hidden units, i.e., 256. The problem arises when trying to convert to the correct final output size of 10. If the forget gate outputs 256, then it is summed with the element-wise product of the activation and input gates to find the new state; this will result in a hidden state of size 256. (Also, in all my research I have not found anywhere whether this addition is actually addition, or simply appending the two matrices.)
+
+So if I have a hidden state of 256, and the output gate outputs 256, doing an element wise product of these two results in, surprise surprise, 256, not 10. If I instead ensure the output gate outputs a size of 10, this no longer works with the hidden state in an element wise product.
+
+How is this handled? I can come up with many ways of doing it myself, but I want an identical replica of the basic LSTM unit, as I have some theories I want to test, and if it is even the slightest bit different it would make the research invalid.
+"
+"['philosophy', 'social', 'explainable-ai']"," Title: Why do we need explainable AI?Body: If the original purpose for developing AI was to help humans in some tasks and that purpose still holds, why should we care about its explainability? For example, in deep learning, as long as the intelligence helps us to the best of their abilities and carefully arrives at its decisions, why would we need to know how its intelligence works?
+"
+"['machine-learning', 'deep-learning', 'training', 'objective-functions']"," Title: When to use RMSE as opposed to MSE and vice versa?Body: I understand that RMSE is just the square root of MSE. Generally, as far as I have seen, people seem to use MSE as a loss function and RMSE for evaluation purposes, since it exactly gives you the error as a distance in the Euclidean space.
+
+What could be a major difference between using MSE and RMSE when used as loss functions for training?
+
+I'm curious because good frameworks like PyTorch, Keras, etc. don't provide RMSE loss functions out of the box. Is it some kind of standard convention? If so, why?
+
+Also, I'm aware of the difference that MSE magnifies the errors with magnitude>1 and shrinks the errors with magnitude<1 (on a quadratic scale), which RMSE doesn't do.
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks']"," Title: What does an oscillating validation error curve represent?Body: I have been training my CNN for a bit now and, while both the training loss and the training error curves are going down during training, both my validation loss and my validation error curves are kind of zig-zagging and oscillating along the epochs. What does this represent?
+"
+"['ai-design', 'prediction']"," Title: Which predictive algorithm is most appropriate for a proceeding situation?Body: I'm new to artificial intelligence. I am looking for the most appropriate AI solution for my application, which is developing an algorithm to predict a proceeding situation (edited: I want my algorithm to predict a situation or more than one to happen at a predefined moment) and, at the same time, to learn from the iterative stages of my application.
+
+Any suggestions? Any help? Any proposals?
+"
+"['machine-learning', 'natural-language-processing']"," Title: How would an AI learn idiomatic phrases in a natural language?Body: After an AI goes through the process described in How would an AI learn language?, an AI knows the grammar of a language through the process of grammar induction. They can speak the language, but they have learned formal grammar. But most conversations today, even formal ones, use idiomatic phrases. Would it be possible for an AI to be given a set of idioms, for example,
+
+
+ Immer mit der Ruhe
+
+
+Which, in German, means 'take it easy' but an AI of grammar induction, if told to translate 'take it easy' to German, would not think of this. And if asked to translate this, it would output
+
+
+ Always with the quiet
+
+
+So, is it possible to teach an AI to use idiomatic phrases to keep up with the culture of humans?
+"
+"['reinforcement-learning', 'environment', 'markov-decision-process', 'benchmarks']"," Title: Benchmarks for reinforcement learning in discrete MDPsBody: To compare the performance of various algorithms for perfect information games, reasonable benchmarks include reversi and m,n,k-games (generalized tic-tac-toe). For imperfect information games, something like simplified poker is a reasonable benchmark.
+
+What are some reasonable benchmarks to compare the performance of various algorithms for reinforcement learning in discrete MDPs? Instead of using a random environment from the space of all possible discrete MDPs on $n$ states and $k$ actions, are there subsets of such a space with more structure that are more reflective of ""real-world"" environments? An example of this might be so-called gridworld (i.e. maze-like) environments.
+
+This is a related question, though I'm looking for specific examples of MDPs (with specified transitions and rewards) rather than general areas where MDPs can be applied.
+
+Edit: Some example MDPs are found in section 5.1 (Standard Domains) of Efficient Bayes-Adaptive Reinforcement Learning using Sample-Based Search (2012) by Guez et al.:
+
+
+ The Double-loop domain is a 9-state deterministic MDP with 2actions,
+ 1000 steps are executed in this domain. Grid5 is a 5×5 grid with no
+ reward anywhere except for a reward state opposite to the reset state.
+ Actions with cardinal directions are executed with small probability
+ of failure for 1000 steps. Grid10 is a 10×10 grid designed like Grid5.
+ We collect 2000 steps in this domain. Dearden’s Maze is a 264-states
+ maze with 3 flags to collect. A special reward state gives the number
+ of flags collected since the last visit as reward, 20000 steps are
+ executed in this domain.
+
+"
+"['neural-networks', 'ai-design', 'alphazero', 'architecture']"," Title: AlphaZero value at root node not being affected by trainingBody: I have written my own AlphaZero implementation and started training it recently.
+Problem is, I am 99% sure there is a mistake and I do not know how to tackle this, since I cannot explain it. I am new too AI so my own go at debugging this wasn't quite succesful.
+
+Input to my NN: A game state, represented by the board and position of the stones.
+Output of my NN: a policy vector P and a scalar v(so an array and a number).
+During self-play, training examples for each move are generated. These are later used to fit the network.
+
+After having trained a bit, I can see both policy and value loss decreasing, which is good.
+But for the very first game state (empty board) my prediction v of winning the game always stays at 0. This is very concerning, since the game I am training on is Connect4. Connect4 is a solved game, and in the long run the value for v should be 1 (100% win chance).
+So, any ideas what I can do? I could post the code here, but that is quite a lot, and I don't know if anyone is willing to read through it.
+
+To show you what I mean, I'll show you the output of one of my test cases:
+
+def test_p_v():
+ game = connect4.Connect4()
+ nnetwrapper = neuralnetwrapper.NNetWrapper(game, args)
+ trainer = trainingonly1NN.Training(game = game, nnet= nnetwrapper, args = args)
+
+ trainer.nnet.load_checkpoint(folder = './NNmodels/', filename='best.pth.tar')
+
+ m = mctsnn.MCTS(nnet = nnetwrapper, args=args)
+
+ m.root.expand()
+
+ for c in m.root.children:
+ print(""P: {} v: {}"".format(c.state.P, c.state.v))
+
+ print(""Root MCTS: P: {} v: {}"".format(m.root.state.P, m.root.state.v))
+
+
+results in:
+
+P: [0.1436838 0.13809082 0.14174062 0.18597336 0.11126296 0.120884
+ 0.15836443] v: [-0.14345692]
+P: [0.14202288 0.13772981 0.14302546 0.1945151 0.11690026 0.1178078
+ 0.1479987 ] v: [0.4222183]
+P: [0.1447647 0.13066562 0.14334281 0.18055147 0.13374692 0.12126701
+ 0.1456615 ] v: [-0.5827425]
+P: [0.15192215 0.14221476 0.1443521 0.16634388 0.12634312 0.12711576
+ 0.14170831] v: [-0.0229549]
+P: [0.1456457 0.136381 0.13940862 0.17145196 0.12714048 0.12233274
+ 0.15763956] v: [-0.02743456]
+P: [0.15353182 0.13510287 0.1433772 0.16371183 0.12161442 0.1228981
+ 0.15976372] v: [0.37902302]
+P: [0.14321715 0.13596673 0.13836266 0.18927328 0.11999774 0.12481775
+ 0.1483647 ] v: [-0.521353]
+Root MCTS: P: [0.14296353 0.13863131 0.1358864 0.18102945 0.10981551 0.12779148
+ 0.16388236] v: [0.]
+
+
+So as you can see, for every different state, there are different P-values and different v-values, which makes sense.
+It also makes sense for my ROOT Node to have the highest value in P at position 3, since this refers to the middle column.
+But the v in my root Node is 0. This is alarming and I have no idea what do from here on.
+
+I also checked some of the training examples passed to my neural network to learn, they look like this (board, P, Actual game result):
+
+[[array([[0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0]]), [0.13, 0.13, 0.15, 0.13, 0.18, 0.13, 0.15], 1]
+
+
+whereas the very last number (1) is the v-value my network is supposed to fit! So it often even is 1!
+
+But I fear that, since this is the very root of everything, the game result for the following nodes alternates between -1 and 1 every ""step"", so the average is probably 0, which is quite logical. I don't know how to express this well, but I fear it is trying to fit the average of v over all states, instead of training the v for each individual state.
+"
+"['neural-networks', 'machine-learning', 'deep-learning']"," Title: How to update Loss Function parameter after compilationBody: I used following custom loss function.
+
+def custom_loss(epo):
+
+ def loss(y_true,y_pred):
+ m=K.binary_crossentropy(y_true, y_pred)
+ x=math.log10(epo)
+ y=x*x
+ y=(math.sqrt(y)/100)
+ l=(m*(y))
+
+ return K.mean(l, axis=-1)
+ return loss
+
+
+and this is my discriminator model
+
+def Discriminator():
+
+
+ inputs = Input(shape=img_shape)
+
+
+ x=Conv2D(32, kernel_size=3, strides=2, padding=""same"")(inputs)
+ x=LeakyReLU(alpha=0.2)(x)
+ x=Dropout(0.25)(x, training=True)
+
+ x=Conv2D(64, kernel_size=3, strides=2, padding=""same"")(x)
+ x=ZeroPadding2D(padding=((0, 1), (0, 1)))(x)
+ x=BatchNormalization(momentum=0.8)(x)
+ x=LeakyReLU(alpha=0.2)(x)
+
+ x=Dropout(0.25)(x, training=True)
+ x=Conv2D(128, kernel_size=3, strides=2, padding=""same"")(x)
+ x=BatchNormalization(momentum=0.8)(x)
+ x=LeakyReLU(alpha=0.2)(x)
+
+ x=Dropout(0.25)(x, training=True)
+ x=Conv2D(256, kernel_size=3, strides=1, padding=""same"")(x)
+ x=BatchNormalization(momentum=0.8)(x)
+ x=LeakyReLU(alpha=0.2)(x)
+
+ x=Dropout(0.25)(x, training=True)
+ x=Flatten()(x)
+ outputs=Dense(1, activation='sigmoid')(x)
+ model = Model(inputs, outputs)
+ #model.summary()
+ img = Input(shape=img_shape)
+ validity = model(img)
+ return Model(img, validity)
+
+
+and initialize discriminator here
+
+D = Discriminator()
+epoch=0
+D.compile(loss=custom_loss(epoch), optimizer=optimizer, metrics=
+['accuracy'])
+G = Generator()
+z = Input(shape=(100,))
+img = G(z)
+D.trainable = False
+valid = D(img)
+
+
+I want to update the epo value of the loss function after each epoch in the following code:
+
+for epoch in range(epochs):
+
+ for batch in range(batches):
+ ............
+ d_loss_real = D.train_on_batch(imgs, valid)
+ d_loss_fake = D.train_on_batch(gen_batch, fake)
+ d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
+ g_loss = combined.train_on_batch(noise_batch, valid)
+
+
+Is there any way to update the loss function's parameter after compiling the model, without adversely affecting training?
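+
+One idea I had (I am not sure whether it is valid) is to make epo a Keras backend variable that the loss reads from the graph, and then update that variable with K.set_value at the start of each epoch, so the model does not need to be recompiled:
+
+epo_var = K.variable(1.0)    # shared variable read by the loss; start at 1 to avoid log(0)
+
+def custom_loss(epo_var):
+    def loss(y_true, y_pred):
+        m = K.binary_crossentropy(y_true, y_pred)
+        y = K.abs(K.log(epo_var)) / 2.302585 / 100.0   # |log10(epo)| / 100, K.log is the natural log
+        return K.mean(m * y, axis=-1)
+    return loss
+
+D.compile(loss=custom_loss(epo_var), optimizer=optimizer, metrics=['accuracy'])
+
+for epoch in range(1, epochs + 1):
+    K.set_value(epo_var, float(epoch))   # the loss uses the new value from here on
+    # ... training batches as before ...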
+"
+['natural-language-processing']," Title: Why do you need to retrain GPT-2?Body: I'm following this tutorial, and I wonder why there is a training step - why is it necessary? I thought the whole idea of GPT-2 is that you do not need to train it on a specific text domain, as it's already pre-trained on a large amount of data.
+"
+"['training', 'tensorflow', 'keras']"," Title: Accuracy too high too fast?Body: I have a simple text classifier, with the following structure:
+
+ input = keras.layers.Input(shape=(len(train_x[0]),))
+
+ x=keras.layers.Dense(500, activation='relu')(input)
+ x=keras.layers.Dropout(0.5)(x)
+ x=keras.layers.Dense(250, activation='relu')(x)
+ x=keras.layers.Dropout(0.5)(x)
+ preds = keras.layers.Dense(len(train_y[0]), activation=""sigmoid"")(x)
+
+ model = keras.Model(input, preds)
+
+
+When training it with 300,000 samples, with a batch size of 500, I get an accuracy value of .95 and loss of .22 in the first iteration, and the subsequent iterations are .96 and .11.
+
+Why does the accuracy grow so quickly, and then just stop growing?
+"
+['supervised-learning']," Title: Class imbalance and ""all zeros"" one-hot encoding?Body: I tried this example for a multi class classifier, but when looking at the data I realized two things:
+
+
+- There are many examples of ""all zeros"" vectors, that is, messages that don't belong in any classification.
+- These all-zeros are actually the majority, by far.
+
+
+Is it valid to have an all-zeros output for a certain input? I would guess a Sigmoid activation would have no problems with this, by simply not trying to force a one out of all the ""near zero"" outputs.
+
+But I also think an ""accuracy"" metric will be skewed too optimistically: if all outputs are zero 90% of the time, the network will quickly overfit to always output 0 all the time, and get 90% score.
+"
+"['neural-networks', 'recurrent-neural-networks', 'long-short-term-memory']"," Title: What are some examples of LSTM architectures?Body: I've been doing some class assignments recently on building various neural networks. For convolutional networks, there are several well-known architectures such as LeNet, VGG etc. Such ""classic"" models are frequently referenced as starting points when building new CNNs.
+
+Are there similar examples for RNN/LSTM networks? All I've found so far are articles and slides explaining recurrent neurons, LSTM layers, and the math behind them, but no well-known examples of entire multi-layered network architectures, unlike CNNs which seem to have in abundance.
+"
+"['reinforcement-learning', 'rewards', 'policies', 'markov-decision-process']"," Title: Finding optimal Value function and Policy for an MDPBody: I am solving an RL MDP problem which is model based. I have an MDP which has four possible states S1-S4 and four different actions A1-A4, with S4 being terminal state and S1 is the beginning state. There is equal probability of applying any of the available actions on S1. The goal of my problem is to get from S1 to S4 with the maximum possible reward. I have two questions in this regard -
+
+
+- Will this be a valid MDP model if I have this rule for my model: if I perform action A1 on S1 and the next state is S2, then the set of available valid actions on S2 will be only A2, A3 and A4. If I apply any of the actions A2, A3 or A4 on S2 and it takes me to S3, then I am left with a set of two valid actions for S3 (excluding the one that was taken on S2). Can I do this? Because, in my problem, an action once taken does not need to be taken again later.
+- I am confused about finding the optimal value function and policy for my MDP. Neither the optimal value function nor the optimal policy is known to me. The objective is to find the optimal policy for my MDP, which would get me from S1 to S4 with maximum reward. Any action taken in a particular state can lead to any other state (i.e., there is a uniform state transition probability of 25% in my case for all states except S4, since it is the terminal state). How can I approach this problem? After a lot of Google searching, I vaguely understood that I must start with a random policy (equal probability of taking any valid action) -> find the value function for each state -> iteratively compute V until it converges -> and then compute the optimal policy from these value functions. Those solutions mostly use the Bellman equations. Can someone please elaborate on how I can do this? Or is there any other method to do it?
+
+
+Thank you in advance
+"
+['machine-translation']," Title: Small Machine Translation ModelBody: I would like to perform Machine Translation in tensorflow.js, in the browser. The issue is that state-of-the-art models have many gigabytes (fairseq ensemble), and translation is slow.
+
+Do you know some good small models for Machine Translation? Probably something below the 50 million parameter threshold.
+
+I found some older papers with models below 100 MB for training on small datasets (IWSLT'15 English-Vietnamese attention-based models), but these are probably superseded.
+"
+"['classification', 'logistic-regression']"," Title: Why not use the MSE instead of the current logistic regression?Body: When watching the machine learning course on Coursera by Andrew Ng, in the logistic regression week, the cost function was a bit more complex than the one for linear regression, but definitely not that hard.
+
+But it got me thinking, why not use the same cost function for logistic regression?
+
+So, the cost function would be $\frac{1}{2m} \sum_{i=1}^m|h(x_i) - y_i|^2$, where $h(x_i)$ is our hypothesis function ($\text{sigmoid}(X\theta)$ evaluated for example $i$), $m$ is the number of training examples, and $x_i$ and $y_i$ are our $i$-th training example?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'tensorflow', 'random-variable']"," Title: Should the biases be zero or randomly initialised?Body: I'm initialising DNN of shape [2 inputs, 2 hiddens, 1 output] with these weights and biases:
+
+#hidden layer
+weight1= tf.Variable(tf.random_uniform([2,2], -1, 1),
+ name=""layer1"");
+bias1 = tf.Variable(tf.zeros([2]), name=""bias1"");
+
+#output layer
+weight2 = tf.Variable(tf.random_uniform([2,1], -1, 1),
+ name=""layer2"");
+bias2 = tf.Variable(tf.zeros([1]), name=""bias2"");
+
+
+That's what I did following some online article; however, I wonder what would happen if I initialised the bias values using tf.random_uniform instead of tf.zeros? Should I generally choose zero biases or random biases?
+"
+"['convolutional-neural-networks', 'classification', 'tensorflow', 'object-detection']"," Title: Should I use single or double view for gender recognition?Body: My project requires gender recognition of people shown on the given images, with more than one person per image. However, these people can be positioned in frontal or side view(passing by perpendicularly, no face visible). On the pictures there will be entire bodies shown, not only the faces. My idea is to firstly use object detection to point where people would be and next use CNNs to recognize gender of each person.
+
+My question is: should I use one object detection algorithm for both frontal and side views of a person and then classify them with one CNN, or should I use object detection to separately find people positioned in frontal and side manner and use two different CNNs, one for classification of frontal views and one for side views?
+
+I am asking this because I think it might be easier for one NN to classify only one view at a time, because the side view might have different features than the frontal view, and mixing these features might be confusing for a network. However, I am not really sure. If something is unclear, please let me know.
+
+[EDIT] Since problem might be hard to understand only by reading, I made some illustrations. Basically I wonder if using second option can help in achieveing better accuracy for the subtle differences like those in gender recognition, especially when face is not visible:
+
+
+- Single detection and classification
+- Two different classifiers
+
+
+Img Source
+"
+"['reinforcement-learning', 'rewards', 'policies', 'markov-decision-process']"," Title: Can someone please help me validate my MDP?Body: Problem Statement :
+I have a system with four states - S1 through S4 where S1 is the beginning state and S4 is the end/terminal state. The next state is always better than the previous state i.e if the agent is at S2, it is in a slightly more desirable state than S1 and so on with S4 being the most desirable i.e terminal state. We have two different actions which can be performed on any of these states without restrictions.
+Our goal is to make the agent reach state S4 from S1 in the most optimal way i.e the route with maximum reward (or minimum cost). The model i have is a pretty uncertain one so i am guessing the agent must initially be given a lot of experience to make any sense of the environment. The MDP i have designed is shown below :
+
+MDP Formulation :
+
+
+The MDP might look a bit messy and complicated, but it is basically just showing that any action (A1 or A2) can be taken at any state (except the terminal state S4). The probability with which the transition takes place from one state to the other and the associated rewards are given below.
+
+States : States S1 to S4. S4 is terminal state and S1 is the beginning state. S2 is a better state than S1 and S3 is a better state than S1 or S2 and S4 is the final state we expect the agent to end up in.
+
+Actions : Available actions are A1 and A2 which can be taken at any state (except of course the terminal state S4).
+
+State Transition Probability Matrix : One action taken at a particular state S can lead to any of the other available states. For ex. taking action A1 on S1 can lead the agent to S1 itself or S2 or S3 or even directly S4. Same goes for A2. So i have assumed an equal probability of 25% or 0.25 as the state transition probability. The state transition probability matrix is the same for actions A1 and A2. I have just mentioned it for one action but it is the same for the other action too. Below is the matrix I created -
+
+
+Reward Matrix : The reward function i have considered is a function of the action, current state and future state - R(A,S,S'). The desired route must go from S1 to S4. I have awarded positive rewards for actions that take the agent from S1 to S2 or S1 to S3 or S1 to S4 and similarly for states S2 and S3. A larger reward is given when the agent moves more than one step i.e S1 to S3 or S1 to S4. What is not desired is when the agent gets back to a previous state because of a action. So i have awarded negative rewards when the state goes back to a previous state. The reward matrix currently is the same for both the actions (meaning both A1 and A2 have same importance but it can be altered if A1/A2 is preferred over the other). Following is the reward matrix i created (same matrix for both the actions) -
+
+
+
+Policy, Value Functions and moving forward :
+Now that I have defined my states, actions, rewards and transition probabilities, the next step I guess I need to take is to find the optimal policy. I do not have an optimal value function or policy. From the googling I did, I am guessing I should start with a random policy, i.e., both actions have equal probability of being taken at any given state -> compute the value function for each state -> compute the value functions iteratively until they converge -> then find the optimal policy from the optimal value functions.
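+
+Just to show what I mean by iterating the value function, here is my own rough sketch of value iteration for this setup (with the uniform 0.25 transitions and a placeholder reward array R[a, s, s'] as described above; the discount factor is an assumption, and I am not sure this is the correct way to do it):
+
+import numpy as np
+
+n_states, n_actions, gamma = 4, 2, 0.9               # gamma (discount factor) assumed
+P = np.full((n_actions, n_states, n_states), 0.25)   # uniform transitions as described
+P[:, 3, :] = 0.0                                     # S4 is terminal, no outgoing transitions
+R = np.zeros((n_actions, n_states, n_states))        # to be filled with my reward matrix R(A, S, S')
+
+V = np.zeros(n_states)
+for _ in range(1000):
+    Q = (P * (R + gamma * V)).sum(axis=2)            # Q[a, s]
+    V_new = Q.max(axis=0)
+    if np.max(np.abs(V_new - V)) < 1e-8:
+        break
+    V = V_new
+policy = Q.argmax(axis=0)                            # greedy action for each state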
+
+I am totally new to RL, and all the above knowledge is from whatever I have gathered reading online. Can someone please validate my solution and MDP, i.e., whether I am going the right way and whether the MDP I created will work?
+Apologies for such a big write-up, but I just wanted to clearly depict my problem statement and solution. If the MDP is OK, can someone also help me with how the value function can iteratively converge to an optimal value? I have seen a lot of examples which are deterministic, but none for stochastic/random processes like mine.
+
+Any help/pointers on this would be greatly appreciated.
+Thank you in advance
+"
+"['recurrent-neural-networks', 'pytorch']"," Title: Issue at training simple RNN for word generationBody: After completing Coursera course from Andrew Ng, I wanted to implement again simple RNN for generating dinosaurs name based on a text file containing around 800 dinosaurs name.
+This is done with NumPy in the Coursera course; here is a link to a Jupyter notebook (not my repo) to get the strategy and full objective:
+Here
+
+I started similar implementation but in Pytorch, here is the model:
+
+
+
+class RNN(nn.Module):
+ def __init__(self,input_size):
+ super(RNN, self).__init__()
+ print(""oo"")
+ self.hiddenWx1 = nn.Linear(input_size, 100)
+ self.hiddenWx2 = nn.Linear(100, input_size)
+ self.z1 = nn.Linear(input_size,100)
+ self.z2 = nn.Linear(100,input_size)
+ self.tanh = nn.Tanh()
+ self.softmax = torch.nn.Softmax(dim=1)
+
+ def forward(self, input, hidden):
+ layer = self.hiddenWx1(input)
+ layer = self.hiddenWx2(layer)
+ a_next = self.tanh(layer)
+ z = self.z1(a_next)
+ z = self.z2(z)
+ y_next = self.softmax(z)
+ return y_next,a_next
+
+
+Here is the main algorithm of training:
+
+
+
+for word in examples: # for every dinosaurus name
+ model.zero_grad()
+ hidden= torch.zeros(1, len(ix_to_char)) #initialise hidden to null, ix_to_char is below
+ word_vector = word_tensor(word) # convert each letter of the current name in one-hot tensors
+ output = torch.zeros(1, len(ix_to_char)) #first input is null
+ loss = 0
+ counter = 0
+ true = torch.LongTensor(len(word)) #will contains the index of each letter.If word is ""badu"" => [2,1,4,22,0]
+
+ measured = torch.zeros(len(word)) # will contains the vectors returned by the model for each letter (softmax output)
+
+
+ for t in range(len(word_vector)): # for each letter of current word
+ true[counter] = char_to_ix[word[counter]] # char_to_ix return the index of letter in dictionary
+
+ output, hidden = model(output, hidden)
+
+ if (counter ==0):
+ measured = output
+ else: #measures is a tensor containing tensors of probability distribution
+ measured = torch.cat((measured,output),dim=0)
+ counter+=1
+
+ loss = nn.CrossEntropyLoss()(measured, true) #
+ loss.backward()
+ optimizer.step()
+
+
+The letter dictionary (ix_to_char) is as follow:
+
+{0: '\n', 1: 'a', 2: 'b', 3: 'c', 4: 'd', 5: 'e', 6: 'f', 7: 'g', 8: 'h', 9: 'i', 10: 'j', 11: 'k', 12: 'l', 13: 'm', 14: 'n', 15: 'o', 16: 'p', 17: 'q', 18: 'r', 19: 's', 20: 't', 21: 'u', 22: 'v', 23: 'w', 24: 'x', 25: 'y', 26: 'z'}
+
+Every 2000 epochs, I sample some new words with this function using torch multimonial to select a letter based on the softmax probability returned by the model:
+
+
+
+def sampling(model):
+ idx = -1
+ counter = 0
+ newline_character = char_to_ix['\n']
+
+ x = torch.zeros(1,len(ix_to_char))
+ hidden = torch.zeros(1, len(ix_to_char))
+ generated_word=""""
+
+
+ while (idx != newline_character and counter != 35):
+ x,hidden = model(x, hidden)
+ #print(x)
+ counter+=1
+ idx = torch.multinomial(x,1)
+ #print(idx.item())
+ generated_word+=ix_to_char[idx.item()]
+ if counter ==35:
+ generated_word+='\n'
+ print(generated_word)
+
+
+Here are the results of the first display:
+
+epoch:1, loss:3.256033420562744
+aaasaaauasaaasasauaaaaapsaaaasaaaaa
+
+aaaaaaaaaaaaasaaaoaaaaaauaaaaaaaaaa
+
+taaaauasaasaaaaasaaasaauaaaaaaaausa
+
+uaasaaaaauaaaasasssaauaaaaasaaaaaaa
+
+auaaaaaaaassasaaauaaaaaaaaasasaaaas
+
+epoch:2, loss:3.199960231781006
+aaasaaassussssusssussssssssssusssss
+
+aasaaassssssssssssasusssissssssssss
+
+sasaaassssuosasssssssssssssssssssss
+
+aasassasassusssssssssussssssssssuss
+
+oasaasassssssussssssssussssssssssss
+
+epoch:3, loss:3.263746500015259
+aaaaaaasaaaasaaaaasaaaasaaaaaaaaaaa
+
+aaaaaaasaaaaaaaaaaaaaaaaaaaaaaaaaaa
+
+aaaaaaaaaaaaaaaaaaaaaauaaaaaaaaaaas
+
+aaaaaaaasaaaasraaaaaaaaaaaaaaaaaaaa
+
+aaaaaaaaaaaaauusaaaaauaaaaaaaaaaaaa
+
+
+It doesn't work but I have no idea how to fix the issue.
+With no training at all, the sampling function seems to work, as the returned words seem completely random:
+
+hbtpsbykkxvlah
+
+ttiwlzxdxabzmbdvsapsnwwpaoiasotalft
+
+
+My post may be a bit long, but so far I have no idea what the issue with my program is.
+
+Many thanks for your help.
+"
+"['neural-networks', 'machine-learning']"," Title: Applying ML algorithms to data-sets with similar meta-features?Body: Is there any grounds for assuming an algorithms applied to a data-set that created a decently accurate model will perform as well on a different data-set with meta-features chosen and evaluated by meta-learning? What meta-features are even worth considering when evaluating similarity between data-sets with the goal of finding an optimal combination of algorithm application to this new data-set to create an accurate model?
+"
+"['neural-networks', 'research', 'architecture', 'multilayer-perceptrons']"," Title: Is there data available about successful neural network architectures?Body: I am curious to if there is data available for MLP architectures in use today, their initial architecture, the steps that were taken to improve the architecture to an acceptable state and what the problem is the neural network aimed to solve.
+
+For example, what the initial architecture (number of hidden layers, number of neurons) was for an MLP in a CNN, the steps taken to optimize the architecture (adding more layers and reducing nodes, changing activation functions), and the results each step produced (i.e. increased or decreased error). Also, what problem the CNN tried to solve (differentiation of human faces, object detection intended for self-driving cars, etc.).
+
+Of course, I used a CNN as an example, but I am referring to data for any MLP architecture, whether in plain MLPs or in deep learning architectures such as RNNs, CNNs and more. I am focused mostly on the MLP architecture.
+
+If there is not, how do you think one could accumulate this data?
+"
+"['deep-learning', 'convolutional-neural-networks', 'tensorflow', 'convolution']"," Title: Is it useful to eliminate the less relevant filters from a trained CNN?Body: Imagine I have a tensorflow CNN model with good accuracy but maybe too many filters:
+
+
+- Is there a way to determine which filters have more impact on the output? I think it should be possible. At least, if a filter A has a 0 that only multiplies the output of a filter B, then filter B is not related to filter A. In particular, I'm thinking of 2D data where one dimension is time-related and the other is feature-related (like a one-hot char).
+- Is there a way to eliminate the less relevant filters from a trained model, and leave the rest of the model intact?
+- Is it useful or there are better methods?
+
+"
+"['algorithm', 'computer-vision', 'object-recognition']"," Title: Is there any computer vision technology that can detect any type of object?Body: Is there any computer vision technology that can detect any type of object? For example, there is a camera fixed, looking in one direction always looking at a similar background. If there is an object, no matter what the object is (person, bag, car, bike, cup, cat) the CV algorithm would notice if there is an object in the frame. It wouldn't know what type of object it is, just that there is an object in the frame.
+
+Something similar to a motion detector, but that would work on a flat conveyor belt. Even though the conveyor belt moves, it will look similar between frames. Would something like this be possible? Possibly something to do with extracting differences from the background, with the goal being to not have to train the network with data for every possible object that may pass by the camera.
+"
+"['neural-networks', 'backpropagation', 'hidden-layers', 'feedforward-neural-networks']"," Title: How does a single neuron in hidden layer affect training accuracyBody: I'm currently a student learning about AI Networks. I've come across a statement in one of my professor's books that an FFBP (Feed-Forward Back-Propagation) Neural Network with a single hidden layer can model any mathematical function, with accuracy dependent on the number of hidden-layer neurons. Try as I might, I cannot find any explanation as to why that is - could someone explain it?
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'gradient-descent', 'gradient']"," Title: How is the gradient of the loss function in DQN derived?Body: In the original DQN paper, page 1, the loss function of the DQN is
+
+$$
+L_{i}(\theta_{i}) = \mathbb{E}_{(s,a,r,s') \sim U(D)} [(r+\gamma \max_{a'} Q(s',a',\theta_{i}^{-}) - Q(s,a;\theta_{i}))^2]
+$$
+
+whose gradient is presented (on page 7)
+
+$$\nabla_{\theta_i} L_i(\theta_i) = \mathbb{E}_{s,a,r,s'} [(r+\gamma \max_{a'}Q(s',a';\theta_i^-) - Q(s,a;\theta_i))\nabla_{\theta_i}Q(s,a;\theta_i)] $$
+
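+For context, applying the chain rule myself, treating the target $y = r+\gamma \max_{a'} Q(s',a';\theta_i^{-})$ as a constant with respect to $\theta_i$, I get
+
+$$\nabla_{\theta_i} \left(y - Q(s,a;\theta_i)\right)^2 = -2\left(y - Q(s,a;\theta_i)\right)\nabla_{\theta_i} Q(s,a;\theta_i)$$
+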
+But why is there no minus (-) sign, given that the $-Q(s,a;\theta_i)$ term is the one parameterized by $\theta_i$, and why is the 2 from the power gone?
+"
+['ai-design']," Title: How to handle multiple types of decisions?Body: In lots of games there are multiple phases or decision points that are not similar yet seem to have a dependency on one another when taking the perspective of the overall strategy of the player. A couple examples I thought up:
+
+
+- In a simple draw poker, you can have a strategy for discarding cards and a strategy for betting. They may not be mutually exclusive if you know your opponent's betting will change with the number of cards you draw.
+- In Cribbage there are two phases, Discard to crib and the Play. The Play phase is definitely dependent on which cards are discarded in the discard phase. So it seems knowledge of Play strategy would be needed to make the Discard decision.
+
+
+The intent is to learn how to set up an unsupervised learning algorithm to play a game with multiple types of decision making. The specific game doesn't matter. At the highest level, I'm at a loss as to which ML models to learn and use for this scenario. I don't think a single NN would work because of the different decision types.
+
+My question is how are these dependencies handled in ML? What are some known algorithms/models that can handle this?
+
+I'm at a loss on what to even search for so feel free to dump some terminology and keywords on me. =)
+"
+"['neural-networks', 'convolutional-neural-networks', 'reference-request', 'word-embedding', 'papers']"," Title: Reference request: one-hot encoding outperforming random orthogonal encodingBody: I experimented with a CNN operating on texts encoded as sequences of character vectors, where characters are encoded as one-hot vectors in one embedding and as random unit length pairwise orthogonal vectors (orthogonal matrix) in another. While geometrically these encode the same vector space, the one-hot embedding outperformed the random orthogonal one consistently. I suppose this has to do with the clarity of the signal: A zero vector with a single 1-valued cell is an easier to learn signal than just some vector with lots of different values in each cell.
+
+I wondered if you know of any papers on this kind of effect. I did not find any but would like to back up this finding and check if my reasoning for why this is the case makes sense/ find a better or more in-depth explanation.
+"
+"['neural-networks', 'backpropagation', 'long-short-term-memory']"," Title: Structure discrepancy of an LSTM?Body: I've found multiple depictions of how an LSTM cell operates. See 2 below:
+
+
+
+and
+
+
+
+Each of these images suggests the hidden state is utilised differently. In the top diagram, it is shown that the hidden state is added, along with the previous output and current input, to both the forget gate and the input gate. The bottom image suggests the input and forget gates are calculated only using the previous output and current input. Which is it?
+
+Also, when the previous output is fed in for the current layer, is this before or after it has been reshaped to the final output size and been put through a softmax?
+"
+"['convolutional-neural-networks', 'computer-vision', 'regression', 'image-segmentation']"," Title: Confidence Maps and Non-LinearityBody: I am currently trying to improve a CNN architecture that was proposed for generating depth images.
+The architecture was originally proposed for autonomous driving and it looks like following :
+
+
+
+The idea behind this architecture is to improve the accuracy of depth images by adding confidence maps
+to the outputs of different sensors to govern their influence on the result for each pixel. Input is an
+RGB image and its corresponding LIDAR data, output is a depth image with the same dimensions.
+
+For example, RGB features are best at discriminating global information such as sky, land, body of water, etc.
+while LIDAR features are better at capturing fine-grained, local information. So, to decide which features will
+have more influence on the final regression result for each pixel, the authors proposed a confidence-guided
+architecture where confidence weights for each map are learned during training.
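+
+To make the idea concrete, here is a minimal Keras sketch of the kind of confidence-guided fusion I mean (the layer sizes and names are my own, not taken from the paper):
+
+import tensorflow as tf
+from tensorflow.keras import layers
+
+# Feature maps coming from the two sensor-specific columns (shapes are placeholders).
+rgb_feat = layers.Input(shape=(256, 256, 32))
+lidar_feat = layers.Input(shape=(256, 256, 32))
+
+# Each column regresses a depth map plus a per-pixel confidence logit.
+rgb_depth = layers.Conv2D(1, 3, padding='same')(rgb_feat)
+rgb_conf = layers.Conv2D(1, 3, padding='same')(rgb_feat)
+lidar_depth = layers.Conv2D(1, 3, padding='same')(lidar_feat)
+lidar_conf = layers.Conv2D(1, 3, padding='same')(lidar_feat)
+
+# Softmax over the two confidence maps so the weights sum to 1 at every pixel,
+# then take the confidence-weighted sum of the two depth predictions.
+conf = layers.Softmax(axis=-1)(layers.Concatenate(axis=-1)([rgb_conf, lidar_conf]))
+depth = layers.Concatenate(axis=-1)([rgb_depth, lidar_depth])
+fused = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=-1, keepdims=True))([conf, depth])
+
+model = tf.keras.Model([rgb_feat, lidar_feat], fused)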
+
+Judging by their test results and how successful their paper was, their idea worked out pretty well for their
+problem domain. That's why I would like to employ the idea of confidence weights in my own domain, where I have
+multiple sources of features too. I have implemented the same architecture and got promising results: accuracy
+has improved, but not enough to compete with SOTA.
+
+However, I believe that the architecture above can be improved for my needs. The diversity of the scene structure
+in my domain adds some amount of complexity to the image generation problem: as opposed to depth image generation for
+autonomous driving, where the scene structure is somewhat restricted (there is sky, there is road, there are sidewalks, etc.),
+the images that I need to analyze are sometimes taken from handheld cameras, while others are aerial views.
+
+Here is a typical example of their scenario and some examples related to my domain, respectively.
+
+
+
+
+
+This means my CNN needs to learn confidence weights for regions in fundamentally different scene setups. Top of
+the image can be sky, but can also be populated by people. Bottom of the image can be sea, or can be road. This
+brings me to the understanding that there is a non-linear relation between the position of the segments and confidence
+weights in my case; and I need to modify the CNN architecture by introducing some additional non-linearity to learn the
+confidences for different CNN columns in each situation correctly.
+
+TLDR; I want to improve the CNN architecture above by introducing additional non-linearity, but I do not know how to do it.
+I have tried adding another layer to the confidence weights (I extended the architecture by duplicating the same weights and activating
+with ReLU), but it decreased the accuracy of the resulting model. Using the confidence weights as-is increases the accuracy, but
+not as much as I need.
+"
+"['neural-networks', 'reinforcement-learning', 'policy-gradients', 'proximal-policy-optimization']"," Title: What are the pros and cons of using standard deviation or entropy for exploration in PPO?Body: When trying to implement my own PPO (Proximal Policy Optimizer), I came across two different implementations :
+
+Exploration with std
+
+
+- Collect trajectories on $N$ timesteps, by using a policy-centered distribution with progressively trained std variable for exploration
+- Train policy function on $K$ steps
+- Train value function on $K$ steps
+
+
+For example, the OpenAI's implementation.
+
+Exploration with entropy
+
+
+- Collect trajectories on $N$ timesteps, by using policy function directly
+- Train policy and value function at the same time on $K$ steps, with a common loss for the two models, with additional entropy bonus for exploration purpose.
+
+
+For example, the PPO algorithm as described in the official paper.
+
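+As far as I understand it, the combined objective from the paper that I am referring to is
+
+$$L_t^{CLIP+VF+S}(\theta) = \hat{\mathbb{E}}_t\left[L_t^{CLIP}(\theta) - c_1 L_t^{VF}(\theta) + c_2 S[\pi_\theta](s_t)\right],$$
+
+where $S$ is the entropy bonus and $c_1, c_2$ are coefficients.
+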
+What are the pros/cons of these two algorithms?
+
+Is this specific to PPO, or is this a classic question concerning policy gradients algorithms, in general?
+"
+"['applications', 'ontology']"," Title: Examples of ontologies made with AIBody: I'm looking for more or less successful examples of using artificial intelligence to build an ontology, or a rationale for why it can't be done. I found a lot of articles on how to use ontologies for AI, but I have not succeeded in finding the reverse.
+"
+"['training', 'models', 'hyper-parameters']"," Title: Will a .h5 file trained with Xception model work with Resnet50?Body: I have been running my 2013 server box for 2 weeks now to train an AI model.
+I set up 30 epochs to run, but since then it has only completed 1 epoch, as my PC configuration is super slow. Still, it generated one .h5 file.
+
+My question is: will this .h5 file that I trained with the Xception model work with ResNet50?
+"
+"['neural-networks', 'convolutional-neural-networks', 'unsupervised-learning', 'generative-adversarial-networks']"," Title: Which approaches are best suited for text deblurring?Body:
+
+
+I want to deblur text images using deep learning. Which approaches are best suited for the task? Any example networks? Is an unsupervised network the best approach? A GAN or CycleGAN for these purposes?
+
+I have currently prepared 1000 images for training (sharp + blurred pairs) - is that sufficient? For each of these approaches, how many training images do I need?
+I have attached a sample blurred image and the ground truth.
+"
+"['neural-networks', 'convolutional-neural-networks', 'tensorflow', 'datasets', 'optical-character-recognition']"," Title: Generate credit cards dataset for locating number regionBody: Currently I'm working on a project for scanning credit cards and extracting text from them. So first of all I decided to preprocess my images with some filters like thresholding, dilation and some other stuff, but it was not successful for OCR on every credit card. So I learned a lot and I found a solution like this for number plate recognition that is very similar to my project.
+In the first step I want to generate a random dataset similar to my cards to locate the card number region, and for every card that I've generated I cropped two images: one that has numbers and one that does not. I generated 2000 images for each card.
+
+so I have some images like this:
+
+
+(does not have numbers)
+
+(has numbers)
+
+And after generating my dataset I used this model with tensorflow to train my network.
+
+ model = models.Sequential()
+ model.add(layers.Conv2D(8, (5, 5), padding='same', activation='relu', input_shape=(30, 300, 3)))
+ model.add(layers.MaxPooling2D((2, 2)))
+
+ model.add(layers.Conv2D(16, (5, 5), padding='same', activation='relu'))
+ model.add(layers.MaxPooling2D((2, 2)))
+
+ model.add(layers.Conv2D(32, (5, 5), padding='same', activation='relu'))
+ model.add(layers.MaxPooling2D((2, 2)))
+
+ model.add(layers.Flatten())
+ model.add(layers.Dense(512, activation='relu'))
+ model.add(layers.Dense(2, activation='softmax'))
+
+
+Here is my plot for 5 epochs.
+
+
+
+
+I get almost 99.5% accuracy, and it seems to be wrong; I think I have some kind of overfitting in my data. Is it working correctly, or is my model overfitted? And how should I generate a dataset for this purpose?
+"
+"['unsupervised-learning', 'features', 'principal-component-analysis']"," Title: Do the eigenvectors represent the original features?Body: I've got a test dataset with 4 features and the PCA produces a set of 4 eigenvectors, e.g.,
+
+EigenVectors: [0.7549043055910286, 0.24177972266822534, -0.6095588015369825, -0.01000612689310429]
+EigenVectors: [0.0363767549959317, -0.9435613299702559, -0.3290509434298886, -0.009706951562064631]
+EigenVectors: [-0.001031816289317291, 0.004364438034564146, 0.016866154627905586, -0.999847698334029]
+EigenVectors: [-0.654824523403971, 0.2263084929291885, -0.7210264051508555, -0.010499173877772439]
+
+
+Do the eigenvector values represent the features from the original dataset? E.g., are features 1 & 2 explaining most of the variance in eigenvector 1?
+
+Am I interpreting the results correctly to say that features 1 and 2 are therefore the most important in my dataset, since PC1 represents 90% of the variance?
+
+I'm trying to map back to the original features but am unsure how to interpret the results.
+"
+"['machine-learning', 'datasets', 'logistic-regression', 'batch-normalization', 'scikit-learn']"," Title: Is it compulsory to normalize the dataset if doing so can negatively impact Binary Logistic Regression performance?Body: I am using a raw data set with 4 feature variables (Total Cholesterol, Systolic Blood Pressure, Diastolic Blood Pressure, and Cigarette count) to do a Binomial Classification (find stroke likelihood) using the Logistic Regression algorithm.
+
+I made sure that the class counts are balanced. i.e., an equal number of occurrences per class.
+
+using Python + sklearn, the problem is that the classification performance gets very negatively-impacted when I try to normalize the dataset using
+
+ X=preprocessing.StandardScaler().fit(X).transform(X)
+
+
+or
+
+ X=preprocessing.MinMaxScaler().fit_transform(X)
+
+
+So before normalizing the dataset:
+
+ precision recall f1-score support
+
+ 1 0.70 0.72 0.71 29
+ 2 0.73 0.71 0.72 31
+
+avg / total 0.72 0.72 0.72 60
+
+
+while after normalizing the dataset (the precision of class:1 decreased significantly)
+
+ precision recall f1-score support
+
+ 1 0.55 0.97 0.70 29
+ 2 0.89 0.26 0.40 31
+
+ avg / total 0.72 0.60 0.55 60
+
+
+Another observation that I failed to find an explanation to is the probability of each predicted class.
+
+Before the normalization:
+
+ [ 0.17029846 0.82970154]
+ [ 0.47796534 0.52203466]
+ [ 0.45997593 0.54002407]
+ [ 0.54532438 0.45467562]
+ [ 0.45999462 0.54000538]
+
+
+After the normalization ((for the same test set entries))
+
+ [ 0.50033247 0.49966753]
+ [ 0.50042371 0.49957629]
+ [ 0.50845194 0.49154806]
+ [ 0.50180353 0.49819647]
+ [ 0.51570427 0.48429573]
+
+
+Dataset description is shown below:
+
+ TOTCHOL SYSBP DIABP CIGPDAY STROKE
+count 200.000 200.000 200.000 200.000 200.000
+mean 231.040 144.560 81.400 4.480 1.500
+std 42.465 23.754 11.931 9.359 0.501
+min 112.000 100.000 51.500 0.000 1.000
+25% 204.750 126.750 73.750 0.000 1.000
+50% 225.500 141.000 80.000 0.000 1.500
+75% 256.250 161.000 90.000 4.000 2.000
+max 378.000 225.000 113.000 60.000 2.000
+
+
+SKEW is
+
+TOTCHOL 0.369
+SYSBP 0.610
+DIABP 0.273
+CIGPDAY 2.618
+STROKE 0.000
+
+
+Is there a logical explanation for the decreased precision?
+
+Is there a logical explanation for the very-close-to-0.5 probabilities?
+"
+['convolutional-neural-networks']," Title: Sliding Window DetectionBody: Suppose that we have a labeled training set of $n$ closely cropped images of cars $(x_1, y_1) , \dots, (x_n, y_n)$. We then train a CNN on this. Let's say we have $m$ test images. Then for each of the $m$ images, do we use the trained CNN on a cropped out portion of the box to detect whether there is a car or not? If the object is large, wouldn't having a large sliding window have better performance than a smaller sliding window?
+"
+"['neural-networks', 'regression', 'categorical-data']"," Title: Can two neural networks be better instead of one with a categorical feature?Body: Let's assume that I have a neural network with a few numerical features and one binary categorical feature. The network in this case is used for regression. I wonder if such a neural network can properly adjust to the two different states of the categorical feature, or whether training two separate networks according to these two states could be a better idea in the sense of smaller achievable error (assuming I have enough data for each of the two states). The new model would use a simple 'if statement' at the beginning of the regression process and use the proper network accordingly.
+"
+"['neural-networks', 'machine-learning', 'deep-neural-networks', 'hidden-layers', 'regression']"," Title: How many hidden layers are needed for this training data setBody: I'm trying to separate classes in 3D space, the data are as in the sketch below:
+
+
+
+There are 3 classes: 0, 1, 2; and looking at the sketch, it seems that I need 3 planes to separate the classes. So how many hidden layers should there be in the DNN? And roughly how many neurons in each layer?
+
+Some say the number of hidden layers is roughly the number of separating surfaces needed, so I used 3 hidden layers and it worked! But is there any reasoning behind that simple rule?
+"
+"['python', 'algorithm']"," Title: I am trying to create a network with dynamic connections for every different training exampleBody: I have developed a new algorithm which has dynamic connections. It is based on the following paper:
+
+https://www.researchgate.net/publication/334226867_Dynamic_Encoder_Network_to_Process_Noise_into_Images
+
+I have figured out how to effect the dynamism. The pseudocode is in the diagram.
+
+Additionally, I would really like to know how to do the rest of the code in terms of a general overview. I would need to have the same deconvolutional layer for all training examples, as well as different current_weight_matrices for the different training examples. So how do I go about adding the dynamic layers to the deconvolutional neural network? Will I have to have many copies of the same network?
+"
+"['machine-learning', 'deep-learning', 'reinforcement-learning', 'training', 'reference-request']"," Title: Reinforcement learning with hints or reference modelBody: In Reinforcement Learning, when I train a model, it comes up with its own set of solutions. For example, if I am training a robot to walk, it will come up with its own walking gait, such as this Deep Mind robot that has learned to walk in a bizarre gait. It can surely walk/run, although the movements do not quite look human.
+
+I was wondering how I can train a model by providing it with some kind of reference motion data. For example, if I collect motion data from a walking human and then provide it to the training, can the training be made to learn walking movements that look similar to the reference motion data?
+
+Searching online, I did find some links that show this is possible. For example, here is a research project where the researchers did exactly what I am trying to do: they fed motion data captured from humans to a simulation and made it learn the movements.
+
+So, my question is: How can I give some kind of hints or reference data to a reinforcement learning model instead of just leaving it all by itself? how exactly is this done? What is it even called? What are some terms and keywords that I can search for to learn more?
+
+Many thanks in advance
+"
+"['machine-learning', 'problem-solving', 'genetic-programming']"," Title: Which AI algorithm is great for mapping between two XML filesBody: My work colleague got a project with a lot of work that is not hard or complicated. The problem is simple but it is a lot of work.
+We have two XML files with a lot of variables in them. Not only are the XML files flattened, but you would have classes within classes that can reach an absurd depth.
+
+The problem comes in that the one file is a request that is received and the other is a response. The request needs to map its variables to the response variables. A simple tool could be built to solve this, but the problem is that there are rules for certain variables. Some variables in the request have arithmetic involved, sometimes with other variables, and some don't get mapped if other variables are present.
+
+I was thinking about Genetic Programming when I heard about this problem. If all the rules could be defined then it should be able to build a tree that would represent the desired output which is the XML response.
+
+Will it work, and if not, do you think there is an AI algorithm that can solve this?
+"
+"['reinforcement-learning', 'definitions', 'papers', 'off-policy-methods']"," Title: What is a non-starving policy in reinforcement learning?Body: The paper Eligibility Traces for off-Policy Policy Evaluation (2010), by Doina Precup et al., mentions the term ""non-starving"" many times. The specific use of the term was like ""non-starving policy"" in the context of off-policy learning.
+
+A specific mention of the term
+
+
+ we consider a method that requires nothing of the behavior
+ policy other than that it be non-starving, i.e., that it
+ never reaches a time when some state-action pair is never
+ visited again.
+
+
+What does such a non-starving policy look like intuitively? Why is it required?
+"
+"['comparison', 'objective-functions', 'optimization', 'mean-squared-error', 'kl-divergence']"," Title: What are the advantages of the Kullback-Leibler over the MSE/RMSE?Body: I've recently encountered different articles that are recommending to use the KL divergence instead of the MSE/RMSE (as the loss function), when trying to learn a probability distribution, but none of the articles are giving a clear reasoning why the first is better than the others.
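+By the KL divergence I mean, for discrete distributions, $D_{KL}(P \,\|\, Q) = \sum_x P(x) \log \frac{P(x)}{Q(x)}$, as opposed to a plain squared error between the two distributions.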
+Could anyone give me a strong argument why the KL divergence is suitable for this?
+"
+"['deep-learning', 'math', 'deep-neural-networks', 'artificial-neuron', 'linear-regression']"," Title: Is it still called linear separation with a layer of more than 1 neuronBody: A single neuron will be able to do linear separation. For example, XOR simulator network:
+
+x1 --- n1.1
+ \ / \
+ \/ \
+ n2.1
+ /\ /
+ / \ /
+x2 --- n1.2
+
+
+Where x1, x2 are the 2 inputs, n1.1 and n1.2 are the 2 neurons in hidden layer, and n2.1 is the output neuron.
+
+The output neuron n2.1 does a linear separation. How about the 2 neurons in hidden layer?
+
+Is it still called linear separation (at the 2 nodes, joining the 2 separation lines), or polynomial separation of degree 2?
+
+I'm confused about what it's called, because there are curvy lines in this wiki article: https://en.wikipedia.org/wiki/Overfitting
+
+
+
+
+"
+"['deep-learning', 'computer-vision', 'object-recognition', 'mask-rcnn', 'multiclass-classification']"," Title: Is Mask R-CNN suited to solve a multi-class classification problem where the classes are related?Body: I want to create a model to solve a multi-class classification problem.
+Here are more details about my problem.
+
+- Every picture contains only one object
+
+- The background is very simple
+
+- All objects belong to the same family of objects (for example, all objects are knives), but there are different specific subtypes
+
+- the model will learn and predict the name of the object (example: the model learns all types of knives, and when it gets an image it will tell us the name of the knife)
+
+
+To be clear, let's say I have 50 types of knives, and the output of the model has to recognize the correct name of the knife. Knife name could be:
+
+- Chef's Knife,
+- Heavy Duty Utility Knife,
+- Boning Knife, etc.
+
+To solve this problem, I have started to use annotated, segmented (masked) images (COCO-like dataset) and the Mask R-CNN model.
+As a first step, I got a prediction, but I really don't know if I'm on the right track.
+For this problem, could Mask R-CNN be the solution, or is it impossible to recognize a tiny difference between two objects from the same class (for example Chef's Knife vs. Heavy Duty Utility Knife)?
+"
+"['reinforcement-learning', 'dqn']"," Title: How important is the choice of the initial state?Body: Is it crucial to always have the same initial (starting) state for Reinforcement Learning, for example, for Q-learning or DQN?
+Or it can vary?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'activation-functions', 'sigmoid']"," Title: Why will the sigmoid function be 1 (and 0), if we use a fully connected layer that produces a big enough positive (or negative, respectively) output?Body: I am using a fully connected neural network that uses a sigmoid activation function. If we feed a big enough input, the sigmoid function will finally become 1 or 0. Is there any solution to avoid this?
+Will this lead to the classical sigmoid problems of vanishing or exploding gradients?
+"
+"['deep-learning', 'reference-request', 'data-science', 'statistical-ai']"," Title: How to calculate the false positives and negatives?Body: I have a huge amount of data and I want to calculate my false positives and false negatives. Is there software that can help me determine them?
+"
+"['reinforcement-learning', 'value-iteration', 'policy-iteration']"," Title: Can policy iteration use only the immediate reward for updates?Body: Is it still a policy iteration algorithm if the policy is updated optimizing a function of the immediate reward instead of the value function?
+"
+"['reinforcement-learning', 'comparison', 'value-iteration', 'policy-iteration']"," Title: Is Value Iteration better than Policy Iteration for first few iterations?Body: In Policy Iteration (PI), the action is generated by the policy, whether or not it is optimal w.r.t. the current value function $v(s)$. In Value Iteration, on the other hand, the action is greedily generated w.r.t. the current $v(s)$, which is an approximation of the objective function (as I understand it). As a consequence, in the first few iterations, will Value Iteration perform better than Policy Iteration?
+"
+"['deep-learning', 'papers', 'transformer', 'machine-translation', 'positional-encoding']"," Title: Why do both sine and cosine have been used in positional encoding in the transformer model?Body: The Transformer model proposed in ""Attention Is All You Need"" uses sinusoid functions to do the positional encoding.
+
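+For reference, the encoding defined in the paper is
+
+$$PE_{(pos, 2i)} = \sin\left(pos / 10000^{2i/d_{model}}\right), \qquad PE_{(pos, 2i+1)} = \cos\left(pos / 10000^{2i/d_{model}}\right).$$
+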
+Why have both sine and cosine been used? And why do we need to separate the odd and even dimensions to use different sinusoid functions?
+"
+"['ai-design', 'academia']"," Title: When we are working on an AI project, does the context (academia, industry or competition) make the process different?Body: When we are working on an AI project, does the domain/context (academia, industry, or competition) make the process different?
+For example, I see that in competitions most participants, even winners, use stacking models, but I have not found anyone implementing them in industry. How about the cross-validation process? I think there is a slight difference between industry and academia.
+So, does the context/domain of an AI project make the process different? If so, what are the things I need to pay attention to when creating an AI project based on its domain?
+"
+"['reinforcement-learning', 'actor-critic-methods', 'convergence']"," Title: How is the actor-critic algorithm guaranteed to converge?Body: From my understanding, the critic evaluates the policy (actor) following a dynamic programming (DP) or approximate dynamic programming (ADP) scheme, which should converge to the optimal value function after sufficient iterations. The policy (actor) then updates its parameters w.r.t. the optimal value function using gradient methods. This policy evaluation and improvement cycle is repeated until neither the critic nor the actor changes anymore.
+
+How is it guaranteed to converge as a whole? Is there any mathematical proof? Is it possible that it converges to a local optimum instead of a global one?
+"
+"['classification', 'probability', 'probability-distribution']"," Title: Probabilistic classification - normalize resultsBody: I have a probabilistic classifier that produces a distribution over my 3 classes - C1, C2, C3.
+I want to compare some new points I'm classifying to each other, to see which one is the best fit for a specific class.
+
+For example:
+for a new point X1 the classifier will output something like [0.2, 0.2, 0.6]
+for another new point X2 it will produce [0.2, 0.4, 0.4]
+so for both X1, X2 - the chosen class would be C3.
+Now I want to know - which of X1, X2 is a better fit to C3
+I cannot simply choose the one with the highest probability for C3, because its probability for C3 depends on its probabilities for the other classes. X1 got 0.6 and X2 0.4, but it's possible that X2 is closer to C3 in the hyper-plane than X1; it is just less unique to C3 than X1, and therefore X1 got a higher probability.
+
+here's a visual, in 2 dimensions:
+
+
+
+X2, which got a lower probability, is clearly a better fit to the Red class than X1, which is truly unique to the Red class but further from the class cluster.
+
+My questions are:
+
+
+- how do I normalize the results of a probabilistic classifier so I can compare predictions to each other?
+- given an output of a probabilistic classifier - how can I get the actual distance from the probabilities? It must be possible, because there's an exact mapping between a set of probabilities and a point in the classified hyper-plane.
+
+
+Thanks a lot!
+Amir
+"
+"['reinforcement-learning', 'reference-request', 'action-spaces']"," Title: Are there RL techniques to deal with incremental action spaces?Body: Let's say we have a problem that can be solved by some RL algorithms (DQN, for example, because we have a discrete action space). At first, the action space is fixed (the number of actions is $n_1$), and we have already trained an offline DQN model well. Later, we have to add more actions for some reason (and the number of actions is now $n_2$, where $n_2 > n_1$).
+Are there some solutions to update the value function or policy (or the neural network) with only minor changes?
+"
+['hidden-markov-model']," Title: is a ""word prediction"" problem, applicable using HMMs?Body: I'm learning HMMs and decided to model a problem for learning purposes. I came to this idea of word predicting by letters.
+here is the model :
+
+While typing, the word is entered letter by letter, so we can consider the letters as a series of observations.
+let's say we have just 4 words in our database:
+
+
+- Tap
+- Trip
+- Trap
+- Trigger
+
+
+and we want to predict the word after 1,2 or 3 written letters.
+
+we have to define states and HMM parameters (state transitions, emissions and priors).
+
+our [hidden ?] states would be :
+
+
+- [ ][ ][ ][ ][ ][ ][ ] : no observations. I chose 7 [ ] because of the longest word
+- T [ ][ ][ ][ ][ ][ ]
+- T A [ ][ ][ ][ ][ ]
+- T R [ ][ ][ ][ ][ ]
+- … .
+
+
+and we have to learn the transition probabilities for each state pair (the pairs which come after each other, like T→TR and T→TA, but not TA and TR).
+
+our prior probabilities are 1/3 because we have 3 words. but we may change it by learning which word is used more frequently.
+
+
+
+Now, I have these questions :
+
+
+- is HMM suitable for this kind of problem ?
+- are my assumptions (about states and prior probabilities) correct ?
+- the states get out of hand when the word count increases, making the model very complex. Is that deduction true?
+- how are the emission probabilities defined in this model?
+- what am I missing in terms of parameters or definitions?
+
+
+I'm a new contributor, be nice to me. regards :)
+"
+"['tensorflow', 'gradient-descent', 'activation-functions', 'relu', 'sigmoid']"," Title: Network doesn't converge with ReLU or Leaky ReLU, but works well with sigmoid/tanhBody: I have these training data to separate, the classes are rather randomly scattered:
+
+My first attempt was using the tf.nn.relu activation function, but the output was stuck regardless of the number of training steps. So I guessed it could be because of dead ReLU units, thus I changed the activation function in the hidden layers to tf.nn.leaky_relu, but it's still no good.
+It works when all hidden layers come with tf.sigmoid, yes, but why doesn't ReLU work here? Is it because of dead ReLU units, or exploding gradients, or anything else?
+Source code (TensorFlow):
+
+#core
+import time;
+
+#libs
+import tensorflow as tf;
+import matplotlib.pyplot as pyplot;
+
+#mockup to emphasize value name
+def units(Num):
+ return Num;
+#end def
+
+#PROGRAMME ENTRY POINT==========================================================
+#data
+#https://i.imgur.com/uVOxZR7.png
+X = [[1,1],[1,2],[1,3],[2,1],[2,2],[2,3],[3,1],[3,2],[3,3],[4,1],[4,2],[4,3],[5,1],[6,1]];
+Y = [[0], [1], [0], [1], [0], [1], [0], [2], [1], [1], [1], [0], [0], [1] ];
+Max_X = 6;
+Max_Y = 2;
+Batch_Size = 14;
+
+#normalise
+for I in range(len(X)):
+ X[I][0] /= Max_X;
+ X[I][1] /= Max_X;
+ Y[I][0] /= Max_Y;
+#end for
+
+#model
+Input = tf.placeholder(dtype=tf.float32, shape=[Batch_Size,2]);
+Expected = tf.placeholder(dtype=tf.float32, shape=[Batch_Size,1]);
+
+#RELU DOESN'T WORK, DEAD RELU? SIGMOID WORKS BUT SLOW.
+#CHANGE TO tf.sigmoid OR tf.tanh AND IT WORKS:
+activation_fn = tf.nn.leaky_relu;
+
+#1
+Weight1 = tf.Variable(tf.random_uniform(shape=[2,units(60)], minval=-1, maxval=1));
+Bias1 = tf.Variable(tf.random_uniform(shape=[ units(60)], minval=-1, maxval=1));
+Hidden1 = activation_fn(tf.matmul(Input,Weight1) + Bias1);
+
+#2
+Weight2 = tf.Variable(tf.random_uniform(shape=[60,units(50)], minval=-1, maxval=1));
+Bias2 = tf.Variable(tf.random_uniform(shape=[ units(50)], minval=-1, maxval=1));
+Hidden2 = activation_fn(tf.matmul(Hidden1,Weight2) + Bias2);
+
+#3
+Weight3 = tf.Variable(tf.random_uniform(shape=[50,units(40)], minval=-1, maxval=1));
+Bias3 = tf.Variable(tf.random_uniform(shape=[ units(40)], minval=-1, maxval=1));
+Hidden3 = activation_fn(tf.matmul(Hidden2,Weight3) + Bias3);
+
+#4
+Weight4 = tf.Variable(tf.random_uniform(shape=[40,units(30)], minval=-1, maxval=1));
+Bias4 = tf.Variable(tf.random_uniform(shape=[ units(30)], minval=-1, maxval=1));
+Hidden4 = activation_fn(tf.matmul(Hidden3,Weight4) + Bias4);
+
+#5
+Weight5 = tf.Variable(tf.random_uniform(shape=[30,units(20)], minval=-1, maxval=1));
+Bias5 = tf.Variable(tf.random_uniform(shape=[ units(20)], minval=-1, maxval=1));
+Hidden5 = activation_fn(tf.matmul(Hidden4,Weight5) + Bias5);
+
+#out
+Weight6 = tf.Variable(tf.random_uniform(shape=[20,units(1)], minval=-1, maxval=1));
+Bias6 = tf.Variable(tf.random_uniform(shape=[ units(1)], minval=-1, maxval=1));
+Output = tf.sigmoid(tf.matmul(Hidden5,Weight6) + Bias6);
+
+Loss = tf.reduce_sum(tf.square(Expected-Output));
+Optimiser = tf.train.GradientDescentOptimizer(1e-1);
+Training = Optimiser.minimize(Loss);
+
+#training
+Sess = tf.Session();
+Init = tf.global_variables_initializer();
+Sess.run(Init);
+
+Feed = {Input:X, Expected:Y};
+Losses = [];
+Start = time.time();
+
+for I in range(10000):
+ if (I%1000==0):
+ Lossvalue = Sess.run(Loss, feed_dict=Feed);
+ Losses += [Lossvalue];
+
+ if (I==0):
+ print("Loss:",Lossvalue,"(first)");
+ else:
+ print("Loss:",Lossvalue);
+ #end if
+
+ Sess.run(Training, feed_dict=Feed);
+#end for
+
+Lastloss = Sess.run(Loss, feed_dict=Feed);
+Losses += [Lastloss];
+print("Loss:",Lastloss,"(last)");
+
+Finish = time.time();
+print("Time:",Finish-Start,"seconds");
+
+#eval
+print("\nEval:");
+Evalresults = Sess.run(Output,feed_dict=Feed).tolist();
+for I in range(len(Evalresults)):
+ Evalresults[I] = [round(Evalresults[I][0]*Max_Y)];
+#end for
+print(Evalresults);
+Sess.close();
+
+#result: diagram
+print("\nLoss curve:");
+pyplot.plot(Losses,"-bo");
+#eof
+
+"
+"['classification', 'bayesian-networks', 'maximum-likelihood']"," Title: Can maximum likelihood be used as a classifier?Body: I am confused about understanding maximum likelihood as a classifier. I know what a Bayesian network is, and I know that ML is used for estimating the parameters of models. Also, I read that there are two methods to learn the parameters of a Bayesian network: MLE and the Bayesian estimator.
+
+The questions which confuse me are the following.
+
+
+- Can we use ML as a classifier? For example, can we use ML to model users' behaviors to identify their activity? If yes, how? What is the likelihood function that should be optimized? Should I assume a normal distribution of users and optimize it?
+- If ML can be used as a classifier, what is the difference between ML and BN to classify activities? What are the advantages and disadvantages of each model?
+
+"
+"['neural-networks', 'reinforcement-learning', 'backpropagation', 'actor-critic-methods', 'policy-gradients']"," Title: How can the derivative of a neural network be calculated, given no mathematical expression?Body: Neural networks (NNs) are used as approximators in reinforcement learning (RL). To update the policy in RL, the actor network's gradients w.r.t its weights are needed. Since NN doesn't have a mathematical expression to work with, how can its derivatives be calculated?
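+
+For context, in practice a framework hands me these gradients with a call like the following (a small PyTorch sketch of my own), but I do not understand what is actually being computed under the hood:
+
+import torch
+
+net = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.Tanh(), torch.nn.Linear(16, 2))
+out = net(torch.randn(1, 4)).sum()
+out.backward()                    # autograd/backprop fills in d(out)/d(weights)
+print(net[0].weight.grad.shape)   # torch.Size([16, 4])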
+"
+"['deep-learning', 'convolutional-neural-networks', 'computer-vision']"," Title: How well can a CNN distinguish an object from its class?Body: A convolutional neural network (CNN) can easily predict the class of an object in an image.
+
+Can a CNN distinguish the Pisa Tower from other buildings, or Hagia Sophia from other mosques easily? If it can, how many training images can be sufficient? Do I need thousands of training images of that specific thing to distinguish it?
+
+(This is a term project recommendation about deep neural networks, so I need to understand its feasibility.)
+"
+"['machine-learning', 'deep-learning', 'scikit-learn']"," Title: Can ML be used to curve fit data based on dataset of example fits?Body: Say I have x,y data connected by a function with some additional parameters (a,b,c):
+
+$$ y = f(x ; a, b, c) $$
+
+Now given a set of data points (x and y) I want to determine a,b,c. If I know the model for $f$, this is a simple curve fitting problem. What if I don't have $f$ but I do have lots of examples of y with corresponding a,b,c values? (Or alternatively $f$ is expensive to compute, and I want a better way of guessing the right parameters without a brute force curve fit.) Would simple machine-learning techniques (e.g. from sklearn) work on this problem, or would this require something more like deep learning?
+
+Here's an example generating the kind of data I'm talking about:
+
+import numpy as np
+import matplotlib.pyplot as plt
+
+Nr = 2000
+Nx = 100
+x = np.linspace(0,1,Nx)
+
+f1 = lambda x, a, b, c : a*np.exp( -(x-b)**2/c**2) # An example function
+f2 = lambda x, a, b, c : a*np.sin( x*b + c) # Another example function
+prange1 = np.array([[0,1],[0,1],[0,.5]])
+prange2 = np.array([[0,1],[0,Nx/2.0],[0,np.pi*2]])
+#f, prange = f1, prange1
+f, prange = f2, prange2
+
+data = np.zeros((Nr,Nx))
+parms = np.zeros((Nr,3))
+for i in range(Nr) :
+ a,b,c = np.random.rand(3)*(prange[:,1]-prange[:,0])+prange[:,0]
+ parms[i] = a,b,c
+ data[i] = f(x,a,b,c) + (np.random.rand(Nx)-.5)*.2*a
+
+plt.figure(1)
+plt.clf()
+for i in range(3) :
+ plt.title('First few rows in dataset')
+ plt.plot(x,data[i],'.')
+ plt.plot(x,f(x,*parms[i]))
+
+
+
+
+Given data, could you train a model on half the data set, and then determine the a,b,c values from the other half?
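+
+Concretely, the kind of approach I am wondering about (not sure it is the right tool) is a multi-output regressor that maps the sampled y-vector straight to the parameters, e.g.:
+
+from sklearn.ensemble import RandomForestRegressor
+from sklearn.model_selection import train_test_split
+
+# data: (Nr, Nx) sampled curves, parms: (Nr, 3) true (a, b, c) values from the generator above
+X_train, X_test, p_train, p_test = train_test_split(data, parms, test_size=0.5)
+reg = RandomForestRegressor(n_estimators=100).fit(X_train, p_train)
+print(reg.score(X_test, p_test))  # R^2 of the predicted (a, b, c) on the held-out half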
+
+I've been going through some sklearn tutorials, but I'm not sure any of the models I've seen apply well to this type of problem. For the Gaussian example I could do it by extracting features related to the parameters (e.g. first and 2nd moments, 5% and 95% percentiles, etc.) and feeding those into an ML model that would give good results, but I want something that would work more generally without assuming anything about $f$ or its parameters.
+"
+"['neural-networks', 'tensorflow']"," Title: Why isn't my Neural Network based calculator working?Body: I am playing around with neural networks in Tensorflow and I figured an interesting test would be whether I can write a calculator using a Tensorflow Neural Network.
+
+I started with simple addition and it kinda worked (so given 2, 4 it would get around 5.9 or 6.1).
+
+Then I wanted to add the ability to calculate using ""+"", ""-"", and ""*"".
+
+Here is the code I came up with in the end:
+
+
+
+import numpy as np
+import tensorflow as tf
+from random import randrange
+
+def generate_input(size):
+ nn_input = []
+ for i in range(0,size):
+ symbol = float(randrange(3))
+ nn_input.append([
+ float(randrange(1000)),
+ float(randrange(1000)),
+ 1 if symbol == 0 else 0,
+ 1 if symbol == 1 else 0,
+ 1 if symbol == 2 else 0,
+ ])
+ return nn_input
+
+def generate_output(input_data):
+ return [[generate_single_output(i)] for i in input_data]
+
+def generate_single_output(input_data):
+ plus = input_data[2]
+ minus = input_data[3]
+ multiplication = input_data[4]
+
+ if (plus):
+ return input_data[0] + input_data[1]
+
+ if (minus):
+ return input_data[0] - input_data[1]
+
+ if (multiplication):
+ return input_data[0] * input_data[1]
+
+def user_input_to_nn_input(user_input):
+ symbol = user_input[1]
+ return np.array([[
+ float(user_input[0]),
+ float(user_input[2]),
+ 1 if symbol == '+' else 0,
+ 1 if symbol == '-' else 0,
+ 1 if symbol == '*' else 0,
+ ]])
+
+
+if __name__ == '__main__':
+ model = tf.keras.models.Sequential([
+ tf.keras.layers.Dense(64, activation='relu', input_shape=(5,)),
+ tf.keras.layers.Dense(64, activation='relu'),
+ tf.keras.layers.Dense(1),
+ ])
+
+ model.compile(tf.keras.optimizers.RMSprop(0.001), loss=tf.keras.losses.MeanSquaredError())
+
+
+ input_data = np.array(generate_input(10000))
+ output_data = np.array(generate_output(input_data))
+
+ model.fit(input_data, output_data, epochs=20)
+
+ while True:
+ user_calculation = input(""Enter expression (e.g. 2 + 3):"")
+ user_input = user_calculation.split()
+ nn_input = user_input_to_nn_input(user_input)
+ print(model.predict(nn_input)[0][0])
+
+
+
+The idea is built on this tutorial: https://www.tensorflow.org/tutorials/keras/basic_regression
+
+The input is 5 fields: number 1, number 2, plus?, minus?, multiplication?
+
+Where the last 3 are simply 1 or 0 depending on whether that is the calculation I am trying to do.
+
+As an output for say [1,4,1,0,0] I would expect 1 + 4 = 5
+for [1,4,0,1,0] I would expect 1 - 4 = -3 etc.
+
+For some reason though the numbers I am getting are completely off and seem random.
+
+Basically, I am trying to understand where I went wrong.
+The data being input to the NN seems correct and the model is based on the model used in the tutorial I quoted (and the problems seem fairly similar so I expect if one would work the other would too).
+"
+"['neural-networks', 'deep-learning', 'image-processing', 'feature-extraction']"," Title: CBIR Evaluation on contextually different dataBody: How well would a CBIR system trained on one dataset (for example, DELF, trained on the Google Landmarks dataset) perform when evaluated, without retraining, on a contextually different dataset such as the WANG or COREL dataset?
+"
+"['prediction', 'performance', 'hyperparameter-optimization']"," Title: Improving Recall of a Certain ClassBody: Let's say that we have a test data set with $20,000$ observations for which we want to make a binary prediction for. When we apply our best trained model to this data set (e.g. logistic regression with threshold = 0.5, data_size = 4000 rows, 5 fold cv), only about $1 \%$ of the predictions are positive. That is, $p(\text{positive}) \geq 0.5$ is true only for about $1 \%$ of the predictions. We expect many more positives since the recall of the positive class of the best trained model is about $40 \%$. If we manually lower the threshold to $0.45$, then about $10 \%$ of the predictions are positive. Assume that the $20,000$ observations come from the same distribution as the training/validation data and are independent samples.
+
+Questions.
+
+
+- Why would a model with decent recall for the positive class predict very few positives in the out of sample data?
+- If (1) is true, then is it appropriate to lower the threshold for positive (e.g. $0.5$ to $0.45$) to increase the number of predicted positives in the test set?
+
+"
+"['machine-learning', 'applications']"," Title: What are criteria for ML model to be satisfactory for commercial use?Body: I often read ""the performance of the system is satisfactory"" or "" when your model is satisfactory"".
+
+But what does it mean in the context of Machine Learning?
+
+Are there any clear and/or generic criteria for Machine Learning model to be satisfactory for commercial use?
+
+Is the decision of which model to choose, or whether additional model adjustments or improvements are needed, based on the data scientist's experience, customer satisfaction, or benchmarking against academic or market competition results?
+"
+"['reinforcement-learning', 'environment']"," Title: Can an RL algorithm trained in one environment be successful in a different one?Body: Can an RL algorithm trained in one environment be successful in a different one?
+
+For example, if I train a model to go through one labyrinth, could this model also go through a different but similar labyrinth or would it need a new training process?
+
+By similar, I mean like these two:
+
+
+
+
+
+But with this one being not similar:
+
+
+"
+"['neural-networks', 'machine-learning', 'regression', 'accuracy', 'mean-squared-error']"," Title: How to express accuracy of a regression ANN that uses MSE loss function?Body: I have a regression MLP network with all input values between 0 and 1, and am using MSE for the loss function. The minimum MSE over the validation sample set comes to 0.019. So how to express the 'accuracy' of this network in 'lay' terms? If RMSE is 'in the units of the quantity being estimated', does this mean we can say: ""The network is on average (1-SQRT(0.019))*100 = 86.2% accurate""?
+
+Also, in the validation data set, there are three 'extreme' expected values. The lowest MSE results in predicted values closer to these three values, but not as close to all the other values, whereas a slightly higher MSE results in the opposite - predicted values further from the 'extreme' values but more accurate relative to all other expected values (and this outcome is actually preferred in the case I'm dealing with). I assume this can be explained by RMSE's sensitivity to outliers?
+"
+"['reinforcement-learning', 'monte-carlo-methods']"," Title: In Monte Carlo learning, what do you do when an end state is reached, after having recorded the previously visited states and taken actions?Body: When you train a model using Monte Carlo-based learning the state and action taken at each step is recorded, and then at some point an end state is reached and the agent receives some reward - what do you do at that point?
+
+Let's say there were 100 steps taken to reach this final reward state, would you update the full rollout of those 100 state/action/rewards and then begin the next episode, or do you then 'bubble up' that final reward to the previous states and update on those as well?
+
+E.g.
+
+
+- Process an update for the full 100 experiences. Can either stop here, or...
+- Bubble up the final reward to the 99th step and process an update for the 99 state/action/reward.
+- Bubble up the final reward to the 98th step and process an update for the 98 state/action/reward.
+- and so on right the way to the first step...
+
+
+Or, do you just process an update for the full 100-step roll-out and that's it?
+
+Or perhaps these are two different approaches? Is there a situation where you'd use one rather than the other?
+"
+"['neural-networks', 'handwritten-characters']"," Title: Handwritten digits recognition during the process of writingBody: I know how to train a NN for recognizing handwritten digits (e.g. using the MNIST database).
+
+I'm wondering how to accomplish the same ""online"", i.e. during the process of writing: e.g. I'm writing down a digit, and during the process, while keeping ""the pen down"", the NN recognizes the digit.
+To make it simpler, I could assume only one stroke order for each digit (e.g. each digit is written with a certain order of strokes, like digit 1 being just a vertical line drawn from top to bottom).
+
+Which is the most suitable NN for the purpose and how to accomplish this?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'convolutional-neural-networks', 'ai-design']"," Title: Image classification with an associated matrixBody: I have a dataset of images with 9 different classes. However, there are different categories with the same type of associated image, and in my specific problem they can only be differentiated by an associated matrix.
+
+I want to train a neural network with the images and the associated matrix as inputs. What type of architecture is good to use? Or where can I find bibliography about it?
+"
+"['clustering', 'dimensionality-reduction']"," Title: Clustering of very high dimensional data and large number of examples without losing info in dimensionsBody: I'm trying to get a grasp on scalability of clustering algorithms, and have a toy example in mind. Let's say I have around a million or so songs from $50$ genres. Each song has characteristics - some of which are common across all or most genres, some of which are common to only a few genres and some of which are genre-specific.
+
+Common attributes could be something like song duration, artist, label, year, album, key, etc. Genre-specific attributes could be like lead guitarist, trombone player, conductor, movie name (in case of movie soundtracks), etc. Assume that there are, say, $2000$ attributes across all possible genres.
+
+The aim is to identify attributes that characterize subgenres of these genres. So of course, let's say for rock I can just collect all the attributes for all rock songs, but even that set of attributes may be too broad to characterize the rock genre - maybe there are some that are specific to subgenres and so I won't have the desired level of granularity.
+
+Note that for the purpose of this example, I'm not assuming that I already know the subgenres a priori. For example, I'm not going to categorize songs into subgenres like post rock, folk rock, etc. and then pick out attributes characterizing them. I want to discover subgenres on the basis of clustering, if that makes sense.
+
+In a nutshell, about a million songs belonging to $50$ genres and all songs collectively have $2000$ attributes. So for each song I'll make a vector in $\mathbb{R}^{2000}$ - each dimension corresponds to an attribute. If that attribute is present for that song, the corresponding element of the vector is $1$, otherwise $0$ (e.g. a jazz-related attribute will be $0$ for a rock song). Now I want to do genre-wise clustering. For each genre, not only do I want to cluster songs of that genre into groups, but I also want to get an idea which attributes are the most important to characterize individual groups.
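+
+For concreteness, the encoding I have in mind is something like this (the attribute names are invented for illustration):
+
+import numpy as np
+
+all_attributes = ['duration', 'artist', 'lead_guitarist', 'conductor']  # ~2000 of these in reality
+attr_index = {a: i for i, a in enumerate(all_attributes)}
+
+def encode(song_attributes):
+    v = np.zeros(len(all_attributes))
+    for a in song_attributes:
+        v[attr_index[a]] = 1.0
+    return v
+
+encode({'duration', 'lead_guitarist'})  # -> array([1., 0., 1., 0.])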
+
+On the basis of this clustering, I can identify subgenres (e.g. of rock music) that I can characterize using a subset of the $2000$ attributes.
+
+My first question is: is there a better way to initialize the problem than forming 2000-dimensional vectors of ones and zeros?
+
+Secondly, given the vast number of dimensions and examples, what clustering methods could be tried? From what I've surveyed, there are graph-based clustering methods, hierarchical, density-based, spectral clustering and so on. Which of these would be best for the toy example? I've heard that one can project the points onto a lower-dimensional subspace, then do clustering. But I also want to know which attributes define different clusters. Since attributes are encoded in the dimensions, with dimensionality reduction techniques I'll lose information about the attributes. So now what?
+"
+"['machine-learning', 'convolutional-neural-networks', 'facial-recognition', 'hyperparameter-optimization', 'similarity']"," Title: Threshold selection for Siamese network hyper-parameter tuningBody: I'm interested in modeling a Siamese network for facial verification. I've already written a simple working model that inputs feature vectors generated from two CNNs with shared weights then outputs a similarity score (euclidean distance.)
+
+Here is a similar model found within the Keras documentation (this Siamese network joins two networks comprised of fully connected layers) The model also uses a euclidean distance metric.
+
+The threshold used in that example when computing the accuracy of the model is 0.5. The similarity scores generated by the model run from that training script roughly ranges from 0 to 1.68. My model outputs scores ranging from 0 to 1.96.
+
+I would suppose that the choice of threshold when working with a similarity metric could be determined by finding the threshold value that maximizes an appropriate metric (e.g. F1 score) on a test set.
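+
+What I mean concretely is a sweep like the following (my own sketch, using F1 on a labelled validation split):
+
+import numpy as np
+from sklearn.metrics import f1_score
+
+# distances: predicted pair distances, labels: 1 = same identity, 0 = different identity
+def best_threshold(distances, labels):
+    candidates = np.linspace(distances.min(), distances.max(), 200)
+    scores = [f1_score(labels, (distances < t).astype(int)) for t in candidates]
+    return candidates[int(np.argmax(scores))]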
+
+Now when it comes to parameter tuning using a validation set - done so to choose the appropriate optimization and regularization parameters and model architecture to generate scores in the first place, how do I determine what value to set as the threshold? This threshold would be important to calculating the performance metric of each model generated during the parameter search - needed to determine what final model architecture and parameter set to use when training. Also, would the method for choosing the threshold change should I choose a different distance metric (e.g. cosine similarity)?
+
+I've already done a parameter search using an arbitrarily-set threshold of 0.5. I'm unsure if this would reflect best practices or if it ought to be adjusted when using a different distance metric.
+
+Thank you for any help offered. Please let me know if any more details on my part are necessary to facilitate a better discussion in this thread.
+"
+"['comparison', 'search', 'a-star', 'admissible-heuristic', 'heuristic-functions']"," Title: How do we determine whether a heuristic function is better than another?Body: I am trying to solve a maze puzzle using the A* algorithm. I am trying to analyze the algorithm based on different applicable heuristic functions.
+Currently, I explored the Manhattan and Euclidean distances. Which other heuristic functions are available? How do we compare them? How do we know whether a heuristic function is better than another?
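+
+For reference, the two heuristics I am currently comparing look roughly like this (my own sketch, with (row, col) grid cells):
+
+def manhattan(cell, goal):
+    return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
+
+def euclidean(cell, goal):
+    return ((cell[0] - goal[0]) ** 2 + (cell[1] - goal[1]) ** 2) ** 0.5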
+"
+"['neural-networks', 'reinforcement-learning', 'policy-gradients']"," Title: How do policy gradients compute an infinite probability distribution from a neural networkBody: Do neural networks compute the probability distribution for policy gradient methods? If so, how do they compute an infinite probability distribution? How do you represent a continuous action policy with a neural network?
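+
+In particular, is the idea something like the following Gaussian policy head (my own guess), where the network outputs the parameters of a distribution rather than the distribution itself?
+
+import torch
+
+class GaussianPolicy(torch.nn.Module):
+    def __init__(self, obs_dim, act_dim):
+        super().__init__()
+        self.body = torch.nn.Sequential(torch.nn.Linear(obs_dim, 64), torch.nn.Tanh())
+        self.mu = torch.nn.Linear(64, act_dim)                    # mean of the action distribution
+        self.log_std = torch.nn.Parameter(torch.zeros(act_dim))   # learned log standard deviation
+
+    def forward(self, obs):
+        h = self.body(obs)
+        return torch.distributions.Normal(self.mu(h), self.log_std.exp())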
+"
+"['philosophy', 'social', 'neo-luddism']"," Title: How could artificial intelligence harm us?Body: We often hear that artificial intelligence may harm or even kill humans, so it might prove dangerous.
+
+How could artificial intelligence harm us?
+"
+"['algorithm', 'genetic-algorithms', 'optimization']"," Title: Metrics of quality of parameter space explorationBody: Consider a black-box optimization problem on a non-linear, non-convex function where we want to minimize an objective function.
+
+One way to assess the quality of an optimizer is to look at the best solution it finds. However, that doesn't give us any information about how much of the parameter space the optimizer had to explore to come up with these solutions.
+
+Therefore, I was wondering whether there are metrics that quantify how much of the parameter space is explored?
+"
+"['machine-learning', 'proofs', 'computational-learning-theory', 'pac-learning']"," Title: Why does estimation error increase with $|H|$ and decrease with $m$ in PAC learning?Body: Why does estimation error increase with $|H|$ and decrease with $m$ in PAC learning?
+
+I came across this statement in Section 5.2 of the book ""Understanding Machine Learning: From Theory to Algorithms"". If you search for ""increases (logarithmically)"" in the book, you can find the sentence.
+
+I just can't understand the statement, and there is no proof in the book either. What I would like to do is prove that the estimation error $\epsilon_{est}$ increases (logarithmically) with $|H|$ and decreases with $m$. I hope you can help me out; a rigorous proof would be even better!
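+If it helps to make the goal concrete, I believe (though I am not sure about the exact constants) that the statement corresponds to a uniform-convergence bound for finite hypothesis classes of the form
+$$\epsilon_{est} \leq \sqrt{\frac{2\left(\log|H| + \log(2/\delta)\right)}{m}},$$
+which holds with probability at least $1-\delta$ and indeed increases logarithmically with $|H|$ and decreases with $m$. A rigorous derivation of such a bound is exactly what I am after.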
+"
+"['neural-networks', 'deep-learning', 'architecture', 'overfitting']"," Title: Neural Network training on one example to try overfitting leads to strange predictionsBody: tldr; if I train the network on 1 training example, the outcome sometimes makes no sense at all, sometimes is as expected. If I train it on more examples and higher iterations, the network, which produces two outcomes (p
and v
) always predicts exactly 0
for v
and I would like to change that.
+
+In the following post I will provide all code necessary to reproduce the problem.
+I am training a neural network on the same input. The wanted outcome for a value ""v"" is 1. If I create the network and train it, sometimes the predicted outcome will be 1, sometimes it will be -1.
+ Also, the loss seems to flip between 0 and 4 during training epochs.
+ Additionally, the loss blows up immensely, even though both losses for the outcome layers are close to zero.
+ I do not understand where this behaviour comes from. I used Leaky-ReLU to make sure it can handle negative input, I used a high learning rate to make sure the data in this example is sufficient for the training, and the input is the same all the time.
+
+My Neural network looks like this:
+
+input_layer = keras.Input(shape=(6,7),)
+formatted_input_layer = keras.layers.Reshape((6,7, 1))(input_layer)
+conv_layer1 = self.create_conv_layer(formatted_input_layer)
+res_layer1 = self.create_res_layer(conv_layer1)
+res_layer2 = self.create_res_layer(res_layer1)
+res_layer3 = self.create_res_layer(res_layer2)
+res_layer4 = self.create_res_layer(res_layer3)
+policy_head = self.create_policy_head(res_layer4)
+value_head = self.create_value_head(res_layer4)
+model = keras.Model(inputs=input_layer, outputs=[policy_head, value_head])
+optimizer = keras.optimizers.SGD(lr=args['lr'],momentum=args['momentum'])
+model.compile(loss = {'policy_head' : 'categorical_crossentropy', 'value_head' : 'mean_squared_error'}, optimizer=optimizer, loss_weights={'policy_head':0.5, 'value_head':0.5})
+
+
+Methods for the different layers:
+conv_layer:
+
+def create_conv_layer(self, input_layer):
+ conv_layer = keras.layers.Conv2D(filters=256,
+ kernel_size=3,
+ strides=1,
+ padding='same',
+ use_bias=False,
+ data_format=""channels_last"",
+ activation = ""linear"",
+ kernel_regularizer = keras.regularizers.l2(0.0001))(input_layer)
+ conv_layer= keras.layers.BatchNormalization(axis=-1)(conv_layer)
+ conv_layer = keras.layers.LeakyReLU()(conv_layer)
+ return conv_layer
+
+
+res_layer:
+
+def create_res_layer(self, input_layer):
+ conv_layer = self.create_conv_layer(input_layer)
+ res_layer = keras.layers.Conv2D(filters=256,
+ kernel_size=3,
+ strides=1,
+ padding='same',
+ use_bias=False,
+ data_format=""channels_last"",
+ activation = ""linear"",
+ kernel_regularizer = keras.regularizers.l2(0.0001))(conv_layer)
+ res_layer= keras.layers.BatchNormalization(axis=-1)(res_layer)
+ res_layer = keras.layers.add([input_layer, res_layer])
+ res_layer = keras.layers.LeakyReLU()(res_layer)
+ return res_layer
+
+
+policy head:
+
+def create_policy_head(self, input_layer):
+ policy_head = keras.layers.Conv2D(filters=2,
+ kernel_size=1,
+ strides=1,
+ padding='same',
+ use_bias = False,
+ data_format='channels_last',
+ activation='linear',
+ kernel_regularizer = keras.regularizers.l2(0.0001))(input_layer)
+ policy_head = keras.layers.BatchNormalization(axis=-1)(policy_head)
+ policy_head = keras.layers.LeakyReLU()(policy_head)
+ policy_head = keras.layers.Flatten()(policy_head)
+ policy_head = keras.layers.Dense(units = 7,
+ use_bias = False,
+ activation = 'softmax',
+ kernel_regularizer = keras.regularizers.l2(0.0001),
+ name = ""policy_head""
+ )(policy_head)
+ return policy_head
+
+
+value head:
+
+def create_value_head(self, input_layer):
+ value_head = keras.layers.Conv2D(filters=1,
+ kernel_size=1,
+ strides=1,
+ padding='same',
+ use_bias = False,
+ data_format='channels_last',
+ activation='linear',
+ kernel_regularizer = keras.regularizers.l2(0.0001))(input_layer)
+ value_head = keras.layers.BatchNormalization(axis=-1)(value_head)
+ value_head = keras.layers.LeakyReLU()(value_head)
+ value_head = keras.layers.Flatten()(value_head)
+ value_head = keras.layers.Dense(units = 21,
+ use_bias = False,
+ activation = 'linear',
+ kernel_regularizer = keras.regularizers.l2(0.0001)
+ )(value_head)
+ value_head = keras.layers.LeakyReLU()(value_head)
+ value_head = keras.layers.Dense(units = 1,
+ use_bias = False,
+ activation = 'tanh',
+ kernel_regularizer = keras.regularizers.l2(0.0001),
+ name = ""value_head""
+ )(value_head)
+ return value_head
+
+
+
+The way I am testing my NN:
+
+ canonicalBoard = np.zeros(shape = (6,7), dtype=int)
+
+
+ Pi = [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
+
+ trainExamples = [[canonicalBoard, Pi, 1]]*50
+
+ nnetwrapper.train(trainExamples)
+
+ board = canonicalBoard[np.newaxis, :, :]
+
+ p, v = nnetwrapper.nnet.model.predict(board)
+
+
+which results in the training looking like this:
+
+Epoch 1/10
+50/50 [==============================] - 4s 71ms/step - loss: 1.6829 - policy_head_loss: 1.9459 - value_head_loss: 1.0000
+Epoch 2/10
+50/50 [==============================] - 1s 15ms/step - loss: 2.3218 - policy_head_loss: 3.8470 - value_head_loss: 0.3768
+Epoch 3/10
+50/50 [==============================] - 1s 13ms/step - loss: 4456112.5000 - policy_head_loss: 0.9027 - value_head_loss: 0.8510
+Epoch 4/10
+50/50 [==============================] - 1s 14ms/step - loss: 16085884.0000 - policy_head_loss: 0.0945 - value_head_loss: 3.9925
+Epoch 5/10
+50/50 [==============================] - 1s 14ms/step - loss: 32722448.0000 - policy_head_loss: 2.6572 - value_head_loss: 4.0000
+Epoch 6/10
+50/50 [==============================] - 1s 14ms/step - loss: 52690084.0000 - policy_head_loss: 9.6345 - value_head_loss: 3.1810e-12
+Epoch 7/10
+50/50 [==============================] - 1s 14ms/step - loss: 74703120.0000 - policy_head_loss: 1.1921e-07 - value_head_loss: 4.0000
+Epoch 8/10
+50/50 [==============================] - 1s 14ms/step - loss: 97784832.0000 - policy_head_loss: 1.1921e-07 - value_head_loss: 4.0000
+Epoch 9/10
+50/50 [==============================] - 1s 14ms/step - loss: 121202520.0000 - policy_head_loss: 2.0802e-05 - value_head_loss: 4.0000
+Epoch 10/10
+50/50 [==============================] - 1s 14ms/step - loss: 144415040.0000 - policy_head_loss: 1.1921e-07 - value_head_loss: 4.0000
+
+
+and my prediction looking like this:
+
+p: [[0. 0. 0. 0. 0. 1. 0.]] v: [[-1.]]
+
+
+another outcome could be:
+
+Epoch 1/10
+50/50 [==============================] - 4s 82ms/step - loss: 1.6829 - policy_head_loss: 1.9459 - value_head_loss: 1.0000
+Epoch 2/10
+50/50 [==============================] - 1s 17ms/step - loss: 2.2826 - policy_head_loss: 2.0001 - value_head_loss: 2.1454
+Epoch 3/10
+50/50 [==============================] - 1s 16ms/step - loss: 1718694.1250 - policy_head_loss: 0.5434 - value_head_loss: 3.9772
+Epoch 4/10
+50/50 [==============================] - 1s 14ms/step - loss: 6204218.0000 - policy_head_loss: 9.4180e-05 - value_head_loss: 0.0000e+00
+Epoch 5/10
+50/50 [==============================] - 1s 14ms/step - loss: 12620835.0000 - policy_head_loss: 1.1921e-07 - value_head_loss: 0.0000e+00
+Epoch 6/10
+50/50 [==============================] - 1s 14ms/step - loss: 20322232.0000 - policy_head_loss: 7.7489 - value_head_loss: 0.0000e+00
+Epoch 7/10
+50/50 [==============================] - 1s 14ms/step - loss: 28812526.0000 - policy_head_loss: 1.1921e-07 - value_head_loss: 1.5966
+Epoch 8/10
+50/50 [==============================] - 1s 14ms/step - loss: 37715012.0000 - policy_head_loss: 1.1921e-07 - value_head_loss: 0.0000e+00
+Epoch 9/10
+50/50 [==============================] - 1s 15ms/step - loss: 46747064.0000 - policy_head_loss: 1.1921e-07 - value_head_loss: 0.0000e+00
+Epoch 10/10
+50/50 [==============================] - 1s 14ms/step - loss: 55699992.0000 - policy_head_loss: 1.1921e-07 - value_head_loss: 0.0000e+00
+
+
+with the predictions:
+
+p: [[0. 0. 0. 0. 0. 1. 0.]] v: [[1.]]
+
+
+which are the correct ones as I would have expected.
+
+How come on some trainings, my NN doesn't fit the data at all? I wanted to start the training process for a whole week now, but before I do so I want to make sure there are no errors in the way I laid out my NN. And this looks like I am missing something here.
+
+And here are the predict / train methods of my neural net (the NN is part of an AlphaZero replica and the involved game is Connect 4; I omitted these parts in the example to make it easier to actually replicate the problem, which is why you see some transform operations in the predict and train methods):
+
+def train(self, examples):
+ input_boards, target_pis, target_vs = list(zip(*examples))
+ input_boards = np.asarray(input_boards)
+ target_pis = np.asarray(target_pis)
+ target_vs = np.asarray(target_vs)
+ logger.debug(""Passing to nn: x: {}, y: {}, batch_size: {}, epochs: {}"".format(input_boards, [target_pis, target_vs], self.args[""batch_size""], self.args[""epochs""]))
+ self.nnet.model.fit(x = input_boards, y = [target_pis, target_vs], batch_size = self.args[""batch_size""], epochs = self.args[""epochs""])
+
+
+def predict(self, board):
+ board = board.nn_board_2d
+ # preparing input
+ board = board[np.newaxis, :, :] # this has to be done for the conv2d to work
+
+ # run
+ pi, v = self.nnet.model.predict(board)
+ return pi[0], v[0]
+
+
+The parameters I used for this example:
+
+'lr': 0.2
+'dropout': 0.1
+'epochs': 10
+'num_channels': 512,
+'filters': 256
+'momentum':0.9
+
+
+EDIT: As soon as I use a lower learning rate and more iterations, my p changes, but my v stays exactly at 0. This is what was bothering me in the first place:
+
+Epoch 1/10
+118/118 [==============================] - 5s 44ms/step - loss: 1.8037 - policy_head_loss: 2.4305 - value_head_loss: 0.7578
+Epoch 2/10
+118/118 [==============================] - 2s 14ms/step - loss: 1.1666 - policy_head_loss: 1.7723 - value_head_loss: 0.1416
+Epoch 3/10
+118/118 [==============================] - 2s 13ms/step - loss: 1.0987 - policy_head_loss: 1.6832 - value_head_loss: 0.0948
+Epoch 4/10
+118/118 [==============================] - 2s 13ms/step - loss: 1.0430 - policy_head_loss: 1.5787 - value_head_loss: 0.0876
+Epoch 5/10
+118/118 [==============================] - 2s 13ms/step - loss: 0.9943 - policy_head_loss: 1.4859 - value_head_loss: 0.0826
+Epoch 6/10
+118/118 [==============================] - 2s 13ms/step - loss: 0.9469 - policy_head_loss: 1.3959 - value_head_loss: 0.0777
+Epoch 7/10
+118/118 [==============================] - 2s 13ms/step - loss: 0.9046 - policy_head_loss: 1.3180 - value_head_loss: 0.0707
+Epoch 8/10
+118/118 [==============================] - 2s 13ms/step - loss: 0.8629 - policy_head_loss: 1.2403 - value_head_loss: 0.0647
+Epoch 9/10
+118/118 [==============================] - 2s 14ms/step - loss: 0.8068 - policy_head_loss: 1.1344 - value_head_loss: 0.0583
+Epoch 10/10
+118/118 [==============================] - 2s 13ms/step - loss: 0.7335 - policy_head_loss: 0.9922 - value_head_loss: 0.0538
+I0916 14:24:38.595780 7116 trainingonly1NN.py:232] Values for empty board: new network: P : [0.20057818 0.11068489 0.15129891 0.20042823 0.13117987 0.04705378
+ 0.15877616] v: 0
+
+"
+"['machine-learning', 'word-embedding', 'similarity']"," Title: How could I compute in real-time the similarity between tickets?Body: I'm dealing with a "ticket similarity task".
+Every time new tickets arrive at the help desk (customer service), I need to compare them and find out about similar ones.
+In this way, once an operator responds to a ticket, they can at the same time resolve the other tickets that are similar to the one just solved.
+The expected input is a ticket, and the expected output is all the other tickets together with their similarity scores.
+I thought about using Doc2Vec, but it requires retraining every time a new ticket arrives.
+What do you recommend?
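+To make the setting concrete, this is a minimal sketch of the pipeline I have in mind, using TF-IDF and cosine similarity purely as a placeholder for whatever representation you would recommend:
+
+from sklearn.feature_extraction.text import TfidfVectorizer
+from sklearn.metrics.pairwise import cosine_similarity
+
+tickets = ['printer does not print', 'cannot connect to vpn', 'printing fails with error 42']
+new_ticket = 'printer error when printing'
+
+vectorizer = TfidfVectorizer()
+ticket_vecs = vectorizer.fit_transform(tickets)            # existing tickets
+new_vec = vectorizer.transform([new_ticket])               # incoming ticket, no retraining needed
+similarities = cosine_similarity(new_vec, ticket_vecs)[0]  # one score per existing ticket
+print(sorted(zip(similarities, tickets), reverse=True))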
+"
+"['neural-networks', 'convolutional-neural-networks', 'image-recognition', 'architecture']"," Title: Get the position of an object, out of an imageBody: I have some images with a fixed background and a single object on them which is placed, in each image, at a different position on that background. I want to find a way to extract, in an unsupervised way, the positions of that object. For example, us, as humans, would record the x and y location of the object. Of course the NN doesn't have a notion of x and y, but i would like, given an image, the NN to produce 2 numbers, that preserve as much as possible from the actual relative position of objects on the background. For example, if 3 objects are equally spaced on a straight line (in 3 of the images), I would like the 2 numbers produced by the NN for each of the 3 images to preserve this ordering, even if they won't form a straight line. They can form a weird curve, but as long as the order is correct that can be topologically transformed to the right, straight line. Can someone suggest me any paper/architecture that did something similar? Thank you!
+"
+['keras']," Title: Training Keras Towards Or Against Analog Value?Body: For example, if I want to do a cat and mouse AI, the cat would wish to minimize the time taken for it to catch the mouse and the mouse would want to maximize that time. The time is analog and thus I cannot use a traditional Xy method but need another method that goes like this:
+
+network.train_against_value(X, y, determinator)
+
+Here, X is more like where the cat and mouse are. y is where the cat or mouse should move, and determinator is the time taken for the mouse to be caught, where the mouse wishes to maximize this value through its output of y and the cat wishes to minimize it. There is one Xy pair for each decision made by the cat and mouse, but one determinator throughout one game. Many games are played to train the AI.
+
+Example: X: (300, 300, 200, 200) -> (mousex, mousey, catx, caty)
+
+Y: (1,3) -> (xmove, ymove) direction, the numbers are then tuned by code for the actual movement to be always 1.
+
+Determinator: 50 -> time for mouse to be caught in seconds
+
+The idea is that it would train so that, for every X it is given, it outputs a y such that the determinator ends up minimal. Is there a method for train_towards_value as well? If there is no prebuilt method, how do I create one? What is the technical name for this kind of training?
+
+I have two neural networks, one for the cat and one for the mouse; the cat is slower than the mouse but is larger and can eat the mouse. Just assume that the mouse is difficult to control from the neural network because of inefficiencies, so that it is possible for the cat to catch the mouse.
+"
+"['natural-language-processing', 'word-embedding', 'glove']"," Title: Doubt on formulating cost function for GloVeBody: I'm reading the notes here and have a doubt on page 2 (""Least squares objective"" section). The probability of a word $j$ occurring in the context of word $i$ is $$Q_{ij}=\frac{\exp(u_j^Tv_i)}{\sum_{w=1}^W\exp(u_w^Tv_i)}$$
+
+The notes read:
+
+
+ Training proceeds in an on-line, stochastic fashion, but the implied global cross-entropy loss can be calculated as $$J=-\sum_{i\in corpus}\sum_{j\in context(i)}\log Q_{ij}$$
+ As the same words $i$ and $j$ can occur multiple times in the corpus, it is more efficient to first group together the same values for $i$ and $j$:
+ $$J=-\sum_{i=1}^W\sum_{j=1}^WX_{ij}\log(Q_{ij})$$
+
+
+where $X_{ij}$ is the total number of times $j$ occurs in the context of $i$ and the value of the co-occurring frequency is given by the co-occurrence matrix $X$. This much is clear. But then the author states that the denominator of $Q_{ij}$ is too expensive to compute, so the cross-entropy loss won't work.
+
+
+ Instead, we use a least square objective in which the normalization factors in $P$ and $Q$ are discarded:
+ $$\hat J=\sum_{i=1}^W\sum_{j=1}^WX_i(\hat P_{ij}-\hat Q_{ij})^2$$
+ where $\hat P_{ij}=X_{ij}$ and $\hat Q_{ij}=\exp(u_j^Tv_i)$ are the unnormalized distributions.
+
+
+$X_i=\sum_kX_{ik}$ is the number of times any word appears in the context of $i$. I don't understand this part. Why have we introduced $X_i$ out of nowhere? How is $\hat P_{ij}$ ""unnormalized""? Is there a tradeoff in switching from softmax to MSE?
+
+(As far as I know, softmax made total sense in skip gram because we were calculating scores corresponding to different words (discrete possibilities) and matching the predicted output to the actual word - similar to a classification problem, so softmax makes sense.)
+"
+"['agi', 'futurism']"," Title: Is AGI likely to be developed in the next decade?Body: AI experts like Ben Goertzel and Ray Kurzweil say that AGI will be developed in the coming decade. Are they credible?
+"
+"['natural-language-processing', 'speech-synthesis']"," Title: What is the State-of-the-Art open source Voice Cloning tool right now?Body: I would like to clone a voice as precisely as possible. Lately, impressive models have been released that only need about 10 s of voice input (cf. https://github.com/CorentinJ/Real-Time-Voice-Cloning), but I would like to go beyond that and clone a voice even more precisely (with subsequent text-to-speech using that voice). It doesn't matter if I have to provide minutes or hours of voice inputs.
+"
+"['deep-learning', 'tensorflow', 'recurrent-neural-networks', 'long-short-term-memory', 'deep-neural-networks']"," Title: How to map X to Y for TensorFlow RNN training dataBody: Usually for DNN, I have the training data of matching X (2D) to Y (2D), for example, XOR data:
+
+X = [[0,0],[0,1],[1,0],[1,1]];
+Y = [[0], [1], [1], [0] ];
+
+
+However, RNNs seem strange to me: I don't get how to match X to Y, since the input of an RNN layer is 3D while its output is 2D (right-click to open in a new tab): https://colab.research.google.com/drive/17IgFuxOYgN5fNO9LKwDijEBkIeWNPas6
+
+import tensorflow as tf;
+
+x = [[[1],[2],[3]], [[4],[5],[6]]];
+bsize = 2;
+times = 3;
+
+#3d input
+input = tf.placeholder(tf.float32, [bsize,times,1]);
+
+cell = tf.keras.layers.LSTMCell(20);
+rnn = tf.keras.layers.RNN(cell);
+hid = rnn(input);
+
+sess = tf.Session();
+init = tf.global_variables_initializer();
+sess.run(init);
+
+#results in 2d
+print(sess.run(hid, {input:x}));
+
+
+The example data seen on https://www.tensorflow.org/tutorials/sequences/recurrent are:
+
+ t=0 t=1 t=2 t=3 t=4
+[the, brown, fox, is, quick]
+[the, red, fox, jumped, high]
+
+
+How to map these data from X (3D input for RNN layer) to Y (2D)? (Y is 2D because RNN layer output is 2D).
+"
+"['natural-language-processing', 'classification', 'recommender-system']"," Title: What is the best way to find the similarities between two text documents?Body: I would like to develop a platform in which people will write text and upload images. I am going to use Google API to classify the text and extract from the image all kinds of metadata. In the end, I am going to have a lot of text which describes the content (text and images). Later, I would like to show my users related posts (that is, similar posts, from the content point of view).
+
+What is the most appropriate way of doing this? I am not an AI expert, and the best approach from my perspective is to use some tools, like the Google API or the Apache Lucene search engine, which can hide the details of how this is done.
+"
+"['reinforcement-learning', 'training', 'q-learning', 'dqn', 'deep-rl']"," Title: When training a DQN, how should we update the value of actions that were not taken?Body: Let's say that we have three actions. The highest-valued action of the three choices is the first. When training the DQN, what do we do with the other two, as we don't have a target for them, since they weren't taken?
+I've seen some code that leaves the target for off actions as whatever the prediction returned, which feels a bit wrong to me as two or more similar behaving actions might never be differentiated well after random action selection dwindles.
+I've also seen some implementations that set the target for all actions to zero and only adjust the target for the action taken. This would help regarding action differentiation long term, but it also puts more reliance on taking random actions for any unfamiliar states (I believe) as an off action might never be taken otherwise.
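+For reference, the first pattern looks roughly like this in the code I've seen (a sketch with my own variable names; model and target_model stand for Keras-style Q-networks, actions is an integer array, and dones is a 0/1 array):
+
+import numpy as np
+
+def dqn_targets(model, target_model, states, actions, rewards, next_states, dones, gamma=0.99):
+    q_values = model.predict(states)                           # shape: (batch, n_actions)
+    q_next = target_model.predict(next_states).max(axis=1)     # bootstrap from the target network
+    targets = q_values.copy()                                  # actions not taken keep their own predictions
+    targets[np.arange(len(actions)), actions] = rewards + gamma * (1.0 - dones) * q_next
+    return targets                                             # so only the taken action yields a nonzero error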
+"
+"['neural-networks', 'deep-learning', 'probability-distribution', 'computational-learning-theory']"," Title: In deep learning, do we learn a continuous distribution based on the training dataset?Body: At least at some level, maybe not end-to-end always, but deep learning always learns a function, essentially a mapping from a domain to a range. The domain and range, at least in most cases, would be multi-variate.
+
+So, when a model learns a mapping, considering every point in the domain-space has a mapping, does it try to learn a continuous distribution based on the training-set and its corresponding mappings, and map unseen examples from this learned distribution? Could this be said about all predictive algorithms?
+
+If yes, then could binary classification be compared to having a hyper-plane (as in support vector classification) in a particular kernel-space, and could the idea of classification problems using hyper-planes be extended in general to any deep learning problem learning a mapping?
+
+It would also explain why deep learning needs a lot of data and why it works better than other learning algorithms for simple problems.
+"
+"['neural-networks', 'machine-learning', 'unsupervised-learning', 'papers']"," Title: How does the network know which objects to track in the paper ""Label-Free Supervision of Neural Networks with Physics and Domain Knowledge""?Body: I was reading the paper Label-Free Supervision of Neural Networks with Physics and Domain Knowledge, published at AAAI 2017, which won the best paper award.
+
+I understand the math and it makes sense. Consider the first application shown in the paper of tracking falling objects. They train only on multiple trajectories of the said pillow, and during the evaluation, they claim that they can track any other falling object (which may not be pillows).
+
+I am unable to understand how that happens. How does the network know which object to track? Even during training, how does it know that it's the pillow that it's supposed to track?
+
+The network is trained to fit a parabola. But any parabola could fit it; there are infinitely many such parabolas.
+"
+"['applications', 'social', 'academia']"," Title: Which movies have the most realistic artificial intelligence?Body: I want to give some examples of AI via movies to my students. There are many movies that include AI, whether being the main character or extras.
+Which movies have the most realistic (the most plausible, or at least closest to something that could be made in this era) artificial intelligence?
+"
+"['deep-learning', 'keras', 'convolution', 'transformer', 'feedforward-neural-networks']"," Title: Why would you implement the position-wise feed-forward network of the transformer with convolution layers?Body: The Transformer model introduced in ""Attention is all you need"" by Vaswani et al. incorporates a so-called position-wise feed-forward network (FFN):
+
+
+ In addition to attention sub-layers, each of the layers in our encoder
+ and decoder contains a fully connected feed-forward network, which is
+ applied to each position separately and identically. This consists of
+ two linear transformations with a ReLU activation in between.
+
+ $$\text{FFN}(x) = \max(0, x \times {W}_{1} + {b}_{1}) \times {W}_{2} + {b}_{2}$$
+
+ While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is ${d}_{\text{model}} = 512$, and the inner-layer has dimensionality ${d}_{ff} = 2048$.
+
+
+I have seen at least one implementation in Keras that directly follows the convolution analogy. Here is an excerpt from attention-is-all-you-need-keras.
+
+class PositionwiseFeedForward():
+ def __init__(self, d_hid, d_inner_hid, dropout=0.1):
+ self.w_1 = Conv1D(d_inner_hid, 1, activation='relu')
+ self.w_2 = Conv1D(d_hid, 1)
+ self.layer_norm = LayerNormalization()
+ self.dropout = Dropout(dropout)
+ def __call__(self, x):
+ output = self.w_1(x)
+ output = self.w_2(output)
+ output = self.dropout(output)
+ output = Add()([output, x])
+ return self.layer_norm(output)
+
+
+Yet, in Keras you can apply a single Dense
layer across all time-steps using the TimeDistributed
wrapper (moreover, a simple Dense
layer applied to a 2D input implicitly behaves like a TimeDistributed
layer). Therefore, in Keras a stack of two Dense layers (one with a ReLU and the other one without an activation) is exactly the same thing as the aforementioned position-wise FFN. So, why would you implement it using convolutions?
+
+Update
+
+Adding benchmarks in response to the answer by @mshlis:
+
+import os
+import typing as t
+os.environ['CUDA_VISIBLE_DEVICES'] = '0'
+
+import numpy as np
+
+from keras import layers, models
+from keras import backend as K
+from tensorflow import Tensor
+
+
+# Generate random data
+
+n = 128000 # n samples
+seq_l = 32 # sequence length
+emb_dim = 512 # embedding size
+
+x = np.random.normal(0, 1, size=(n, seq_l, emb_dim)).astype(np.float32)
+y = np.random.binomial(1, 0.5, size=n).astype(np.int32)
+
+
+
+
+# Define constructors
+
+def ffn_dense(hid_dim: int, input_: Tensor) -> Tensor:
+ output_dim = K.int_shape(input_)[-1]
+ hidden = layers.Dense(hid_dim, activation='relu')(input_)
+ return layers.Dense(output_dim, activation=None)(hidden)
+
+
+def ffn_cnn(hid_dim: int, input_: Tensor) -> Tensor:
+ output_dim = K.int_shape(input_)[-1]
+ hidden = layers.Conv1D(hid_dim, 1, activation='relu')(input_)
+ return layers.Conv1D(output_dim, 1, activation=None)(hidden)
+
+
+def build_model(ffn_implementation: t.Callable[[int, Tensor], Tensor],
+ ffn_hid_dim: int,
+ input_shape: t.Tuple[int, int]) -> models.Model:
+ input_ = layers.Input(shape=(seq_l, emb_dim))
+ ffn = ffn_implementation(ffn_hid_dim, input_)
+ flattened = layers.Flatten()(ffn)
+ output = layers.Dense(1, activation='sigmoid')(flattened)
+ model = models.Model(inputs=input_, outputs=output)
+ model.compile(optimizer='Adam', loss='binary_crossentropy')
+ return model
+
+
+
+
+# Build the models
+
+ffn_hid_dim = emb_dim * 4 # this rule is taken from the original paper
+bath_size = 512 # the batchsize was selected to maximise GPU load, i.e. reduce PCI IO overhead
+
+model_dense = build_model(ffn_dense, ffn_hid_dim, (seq_l, emb_dim))
+model_cnn = build_model(ffn_cnn, ffn_hid_dim, (seq_l, emb_dim))
+
+
+
+
+# Pre-heat the GPU and let TF apply memory stream optimisations
+
+model_dense.fit(x=x, y=y[:, None], batch_size=bath_size, epochs=1)
+%timeit model_dense.fit(x=x, y=y[:, None], batch_size=bath_size, epochs=1)
+
+model_cnn.fit(x=x, y=y[:, None], batch_size=bath_size, epochs=1)
+%timeit model_cnn.fit(x=x, y=y[:, None], batch_size=bath_size, epochs=1)
+
+
+I am getting 14.8 seconds per epoch with the Dense implementation:
+
+Epoch 1/1
+128000/128000 [==============================] - 15s 116us/step - loss: 0.6332
+Epoch 1/1
+128000/128000 [==============================] - 15s 115us/step - loss: 0.5327
+Epoch 1/1
+128000/128000 [==============================] - 15s 117us/step - loss: 0.3828
+Epoch 1/1
+128000/128000 [==============================] - 14s 113us/step - loss: 0.2543
+Epoch 1/1
+128000/128000 [==============================] - 15s 116us/step - loss: 0.1908
+Epoch 1/1
+128000/128000 [==============================] - 15s 116us/step - loss: 0.1533
+Epoch 1/1
+128000/128000 [==============================] - 15s 117us/step - loss: 0.1475
+Epoch 1/1
+128000/128000 [==============================] - 15s 117us/step - loss: 0.1406
+
+14.8 s ± 170 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
+
+
+and 18.2 seconds for the CNN implementation. I am running this test on a standard Nvidia RTX 2080.
+So, from a performance perspective there seems to be no point in actually implementing an FFN block as a CNN in Keras. Considering that the maths are the same, the choice boils down to pure aesthetics.
+"
+"['reinforcement-learning', 'comparison', 'papers', 'convergence', 'stability']"," Title: What are the differences between stability and convergence in reinforcement learning?Body: The terms are mentioned in the paper: An Emphatic Approach to the Problem of off-Policy Temporal-Difference Learning (Sutton, Mahmood, White; 2016) and more, of course.
+In this paper, they proposed the proof of "stability" but not convergence.
+It seems that stability is guaranteed if the "key matrix" is shown to be positive definite. However, convergence requires more than that.
+I don't understand the exact difference between the two.
+"
+['reinforcement-learning']," Title: Why doesn't stability in prediction imply stability in control in off-policy reinforcement learning?Body: Prediction's goal is to get an estimate of a performance of a policy given a specific state.
+
+Control's goal is to improve the policy wrt. the prediction.
+
+The alternation between the two is the basis of reinforcement learning algorithms.
+
+In the paper “Safe and Efficient Off-Policy Reinforcement Learning.” (Munos, 2016), the section 3.1) ""Policy evaluation"" assumes that the target policy is fixed, while the section 3.2) ""Control"" extends to where the target policy is a sequence of policies improved by a sequence of increasingly greedy operations.
+
+This suggests that even if a proof of convergence is established with a fixed target policy, one cannot immediately infer the same for the case where the target policy is a sequence of improving policies.
+
+I wonder why that is the case. If an algorithm converges under a fixed-target-policy assumption, any policy in the chain of improvement should pose no problem for this algorithm either. By the merit of policy improvement, each policy in the sequence is increasingly better, hence converging to an optimal policy.
+
+Shouldn't this be obvious from the policy improvement perspective, and require no further proof at all?
+"
+"['deep-learning', 'tensorflow', 'optimization', 'long-short-term-memory', 'convergence']"," Title: LSTM network doesn't converge, what should be changed?Body: I'm testing out TensorFlow LSTM layer text generation task, not classification task; but something is wrong with my code, it doesn't converge. What changes should be done?
+
+Source code:
+
+import tensorflow as tf;
+
+# t=0 t=1 t=2 t=3
+#[the, brown, fox, is, quick]
+# 0 1 2 3 4
+#[the, red, fox, jumps, high]
+# 0 5 2 6 7
+
+#t0 x=[[the], [the]]
+# y=[[brown],[red]]
+#t1 ...
+#t2
+#t3
+bsize = 2;
+times = 4;
+
+#data
+x = [];
+y = [];
+#t0 the: the:
+x.append([[0/6], [0/6]]); #normalise: x divided by 6 (max x)
+# brown: red:
+y.append([[1/7], [5/7]]); #normalise: y divided by 7 (max y)
+#t1
+x.append([[1/6], [5/6]]);
+y.append([[2/7], [2/7]]);
+#t2
+x.append([[2/6], [2/6]]);
+y.append([[3/7], [6/7]]);
+#t3
+x.append([[3/6], [6/6]]);
+y.append([[4/7], [7/7]]);
+
+#model
+inputs = tf.placeholder(tf.float32,[times,bsize,1]) #4,2,1
+exps = tf.placeholder(tf.float32,[times,bsize,1]);
+
+layer1 = tf.keras.layers.LSTMCell(20)
+hids1,_ = tf.nn.static_rnn(layer1,tf.split(inputs,times),dtype=tf.float32);
+
+w2 = tf.Variable(tf.random_uniform([20,1],-1,1));
+b2 = tf.Variable(tf.random_uniform([ 1],-1,1));
+outs = tf.sigmoid(tf.matmul(hids1,w2) + b2);
+
+loss = tf.reduce_sum(tf.square(exps-outs))
+optim = tf.train.GradientDescentOptimizer(1e-1)
+train = optim.minimize(loss)
+
+#train
+s = tf.Session();
+init = tf.global_variables_initializer();
+s.run(init)
+
+feed = {inputs:x, exps:y}
+for i in range(10000):
+ if i%1000==0:
+ lossval = s.run(loss,feed)
+ print(""loss:"",lossval)
+ #end if
+ s.run(train,feed)
+#end for
+
+lastloss = s.run(loss,feed)
+print(""loss:"",lastloss,""(last)"");
+#eof
+
+
+Output showing loss values (a little different every run):
+
+loss: 3.020703
+loss: 1.8259083
+loss: 1.812584
+loss: 1.8101325
+loss: 1.8081319
+loss: 1.8070083
+loss: 1.8065354
+loss: 1.8063282
+loss: 1.8062303
+loss: 1.8061805
+loss: 1.8061543 (last)
+
+
+Colab link:
+https://colab.research.google.com/drive/1TsHjmucuynCPOgKuo4a0hiM8B8UaOWQo
+"
+"['machine-learning', 'reinforcement-learning', 'dqn', 'experience-replay']"," Title: What is the difference between random and sequential sampling from the reply memory?Body: I was working on an RL problem and I am confused at one specific point. We use replay memory so that the network learns about previous actions and how these actions lead to a success or a failure.
+
+Now, to train the neural network, we use batches from this replay or experience memory. But here's my confusion.
+
+Some places, like this one, extract random (non-sequential) batches from the memory to train the neural network, but Andrej Karpathy uses sequential data to train the network.
+
+Can someone tell me why there is this difference?
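+Just to make sure I am comparing the right two things, this is how I understand the two sampling schemes (replay_memory here is simply a Python list of transitions; the dummy data is only for illustration):
+
+import random
+
+replay_memory = [(i, 0, 0.0, i + 1) for i in range(100)]   # dummy (state, action, reward, next_state) tuples
+batch_size = 8
+
+random_batch = random.sample(replay_memory, batch_size)    # non-sequential, breaks temporal correlation
+sequential_batch = replay_memory[-batch_size:]             # most recent transitions, in order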
+"
+"['machine-learning', 'correlation']"," Title: How do I determine which variables/features have the strongest relationship with each other?Body: This is my problem:
+I have 10 variables that I intend to evaluate two by two (in pairs). I want to know which variables have the strongest relationships with each other, and I'm only interested in evaluating relationships two by two. Well, one suggestion would be to calculate the pairwise correlation coefficients of these variables and then list the pairs from the highest to the lowest correlation coefficient. That way I would have a ranking from the most correlated to the least correlated pairs.
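+For reference, this is how I currently get that ranking with plain statistics (a sketch; df stands for a pandas DataFrame holding my 10 variables, filled here with stand-in random data):
+
+import numpy as np
+import pandas as pd
+
+df = pd.DataFrame(np.random.rand(100, 10), columns=[f'var{i}' for i in range(1, 11)])  # stand-in data
+corr = df.corr().abs()                                              # pairwise Pearson correlations
+upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))   # keep each pair only once
+ranking = upper.stack().sort_values(ascending=False)                # pairs ranked from most to least correlated
+print(ranking.head())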
+My question is: Is there anything analogous in the world of artificial intelligence to the correlation coefficient calculation? That is, what tools can the world of AI / Machine Learning offer me to extract this kind of information? So that in the end I can have something like a ranking among the most "correlated" pairs from the point of view of AI / Machine Learning?
+In other words, how do I know which variable among these 10 best "relates" (or "correlates") with variable 7, for example?
+"
+"['neural-networks', 'training', 'gradient-descent', 'evolutionary-algorithms', 'neuroevolution']"," Title: Why evolutionary training of neural networks is not popular?Body: Evolutionary algorithms are mentioned in some sources as a method to train a neural network (finding weights, not hyperparameters). However, I have not heard about one practical application of such an idea yet.
+My question is, why is that? What are the issues or limitations of such a solution that have prevented it from being used in practice?
+I am asking because I am planning on developing such an algorithm and want to know what to expect and where to focus most of my attention.
+"
+"['neural-networks', 'generative-model']"," Title: How can we find find the input image which maximizes the class-probability for an ANN?Body: Let's assume we have an ANN which takes a vector $x\in R^D$, representing an image, and classifies it over two classes. The output is a vector of probabilities $N(x)=(p(x\in C_1), p(x\in C_2))^T$ and we pick $C_1$ iff $p(x\in C_1) \geq 0.5$. Let the two classes be $C_1= \texttt{cat}$ and $C_2= \texttt{dog}$. Now imagine we want to extract this ANN's idea of ideal cat by finding $x^* = argmax_x N(x)_1$. How would we proceed? I was thinking about solving $\nabla_xN(x)_1=0$, but I don't know if this makes sense or if it is solvable.
+
+In short, how do I compute the input which maximizes a class-probability?
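+What I was considering trying instead of solving the gradient equation directly is plain gradient ascent on the input. A toy sketch with a stand-in network and a finite-difference gradient, just to illustrate the procedure I have in mind:
+
+import numpy as np
+
+def N1(x):                            # stand-in for p(x in C_1); a real ANN would replace this
+    w, b = np.array([0.5, -1.2]), 0.1
+    return 1.0 / (1.0 + np.exp(-(x @ w + b)))
+
+def num_grad(f, x, eps=1e-5):         # finite-difference gradient, only for the sketch
+    g = np.zeros_like(x)
+    for i in range(len(x)):
+        d = np.zeros_like(x)
+        d[i] = eps
+        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
+    return g
+
+x = np.random.randn(2)
+for _ in range(200):                  # gradient ascent on the input itself
+    x = x + 0.1 * num_grad(N1, x)
+print(x, N1(x))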
+"
+"['neural-networks', 'tensorflow']"," Title: Neural network does not give out the required out put?Body: Made a neural network using tensor flows that was supposed matches an Ip to one of the 7 type of vulnerabilities and gives out what type of vulnerability that IP has.
+
+
+
+ model = tf.keras.models.Sequential([
+ tf.keras.layers.Flatten(),
+ tf.keras.layers.Dense(50, activation=tf.nn.relu),
+ tf.keras.layers.Dense(7, activation=tf.nn.softmax)
+])
+
+model.compile(optimizer='adam',
+ loss='sparse_categorical_crossentropy',
+ metrics=['accuracy'])
+
+
+
+model.fit(xs, ys, epochs=500)
+
+
+The output of print(model.predict([181271844])) when this command is executed should be one of the numbers from 1 to 7, but the output it gives is
+
+
+ [[0.22288103 0.20282331 0.36847615 0.11339897 0.04456346 0.02391759
+ 0.02393949]]
+
+
+I can't seem to figure out what the problem is.
+"
+"['learning-algorithms', 'robotics', 'problem-solving', 'path-planning', 'path-finding']"," Title: Algorithm to solve a fault independent of its typeBody: I am looking to plan a solution for a workspace fault and not hardware faults.
+
+Consider a task where a robot has to move balls from one place to another. It may face conditions that are outside the task, e.g. someone snatches the ball from the robot while it is transferring it, or the robot drops the balls in between. These are some example faults that could occur; many others might be possible. I am trying to build a generalized algorithm so that the robot can find a way to resolve unexpected changes itself.
+
+I currently have an FSM for the whole task. Any fault that somehow changes any of the state machine variables should be considered. For instance, there are faults that deal with obstacles that may come in the way.
+
+But there might be faults, for example, a cloth in front of the camera. This fault should be corrected by a human, since the robot cannot manage it. All faults like that are out of the scope of the robot.
+
+Any suggestions or ideas related to the algorithm will be helpful.
+"
+"['natural-language-processing', 'classification', 'tensorflow', 'recurrent-neural-networks', 'generative-model']"," Title: How to change this RNN text classification code to become text generation code?Body: I can do text classification with RNN, in which the last output of RNN (rnn_outputs[-1]) is used to matmul with output layer weight and plus bias. That is getting a word (class name) after the last T in the time dimension of RNN.
+
+The issue is that, for text generation, I need a word somewhere in the middle of the time dimension, e.g.:
+
+t0 t1 t2 t3
+The brown fox jumps
+
+
+For this example, I have the first 2 words: The, brown.
+
+How do I get the next word, i.e. ""fox"", using an RNN (LSTM)? How do I convert the following text classification code into text generating code?
+
+Source code (text classification):
+
+import tensorflow as tf;
+tf.reset_default_graph();
+
+#data
+'''
+t0 t1 t2
+british gray is => cat (y=0)
+0 1 2
+white samoyed is => dog (y=1)
+3 4 2
+'''
+Bsize = 2;
+Times = 3;
+Max_X = 4;
+Max_Y = 1;
+
+X = [[[0],[1],[2]], [[3],[4],[2]]];
+Y = [[0], [1] ];
+
+#normalise
+for I in range(len(X)):
+ for J in range(len(X[I])):
+ X[I][J][0] /= Max_X;
+
+for I in range(len(Y)):
+ Y[I][0] /= Max_Y;
+
+#model
+Inputs = tf.placeholder(tf.float32, [Bsize,Times,1]);
+Expected = tf.placeholder(tf.float32, [Bsize, 1]);
+
+#single LSTM layer
+#'''
+Layer1 = tf.keras.layers.LSTM(20);
+Hidden1 = Layer1(Inputs);
+#'''
+
+#multi LSTM layers
+'''
+Layers = tf.keras.layers.RNN([
+ tf.keras.layers.LSTMCell(30), #hidden 1
+ tf.keras.layers.LSTMCell(20) #hidden 2
+]);
+Hidden2 = Layers(Inputs);
+'''
+
+Weight3 = tf.Variable(tf.random_uniform([20,1], -1,1));
+Bias3 = tf.Variable(tf.random_uniform([ 1], -1,1));
+Output = tf.sigmoid(tf.matmul(Hidden1,Weight3) + Bias3);
+
+Loss = tf.reduce_sum(tf.square(Expected-Output));
+Optim = tf.train.GradientDescentOptimizer(1e-1);
+Training = Optim.minimize(Loss);
+
+#train
+Sess = tf.Session();
+Init = tf.global_variables_initializer();
+Sess.run(Init);
+
+Feed = {Inputs:X, Expected:Y};
+for I in range(1000): #number of feeds, 1 feed = 1 batch
+ if I%100==0:
+ Lossvalue = Sess.run(Loss,Feed);
+ print(""Loss:"",Lossvalue);
+ #end if
+
+ Sess.run(Training,Feed);
+#end for
+
+Lastloss = Sess.run(Loss,Feed);
+print(""Loss:"",Lastloss,""(Last)"");
+
+#eval
+Results = Sess.run(Output,Feed);
+print(""\nEval:"");
+print(Results);
+
+print(""\nDone."");
+#eof
+
+"
+"['ai-design', 'knowledge-representation']"," Title: How would an AI understand grids?Body: OK, now I think an AI must view grids in a different way to computers.
+
+For example a computer would represent a grid like this:
+
+cells = [[1,2,3],[4,5,6],[7,8,9]] = [row1,row2,row3]
+
+
+That is a grid is 3 rows of 3 cells.
+
+But... that's not how a human sees it. A human sees a grid as made of 3 rows and 3 columns somehow intersecting.
+
+If an AI is built on some mathematical logic like set theory, it's like a set of rows which in turn is a set of cells.
+
+So what would be a way to represent a grid in a computer that is more ""human"" and doesn't favor either rows or columns? Or is there some mathematical or programmatical description of a grid that treats rows and columns as equivalent?
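+The closest I have come up with myself is to index cells by coordinates instead of nesting rows inside the grid, so that rows and columns are extracted in exactly the same way:
+
+cells = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
+grid = {(r, c): v for r, row in enumerate(cells) for c, v in enumerate(row)}
+row0 = [grid[(0, c)] for c in range(3)]   # a row
+col0 = [grid[(r, 0)] for r in range(3)]   # a column, obtained symmetrically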
+"
+"['neural-networks', 'long-short-term-memory']"," Title: What sort of Neural Network is best suited to predicting a future purchase?Body: I have previously implemented a Neural Network with Back-Propagation that was able to learn Tic-tac-toe and could go pretty well at Connect-4.
+
+Now I'm trying to do a NN that can make a prediction. The idea is that I have a large set of customer purchase history, so people I can ""target"" with marketing, others I can't (maybe I just have a credit-card number but no email address to spam). I've a catalogue of products that changes on a monthly basis with daily updates to stock.
+
+My original idea was to use the same NN that I've used before, with inputs like purchased y/n for each product and an output for each product (softmax to get a weighted prediction). But I get stuck at handling a changing catalog. I'm also not sure if I should lump everyone in together or sort of generate a NN for each person individually (but some people would have very little purchase history, so I'd need to use everyone else as the training set).
+
+So I thought I'd need something with some ability to use the purchase data as a sequence, so purchased A, then B, then C etc. But reviewing something like LSTM, I kind of think it's still not right.
+
+Basically, I know how to build a NN for a game-state sort of problem. But I don't know how to do it for this new problem.
+"
+"['deep-learning', 'applications', 'prediction', 'generative-adversarial-networks']"," Title: Is there a GAN that can be used for sequence prediction?Body: I want to use a GAN for sequence prediction, in a similar way that we use RNNs for sequence prediction. I want to test its performance in comparison with RNNs. Is there a GAN that can be used for sequence prediction?
+"
+"['optimization', 'convergence', 'performance', 'hyperparameter-optimization']"," Title: How can we conclude that an optimization algorithm is better than another oneBody: When we test a new optimization algorithm, what the process that we need to do?For example, do we need to run the algorithm several times, and pick a best performance,i.e., in terms of accuracy, f1 score .etc, and do the same for an old optimization algorithm, or do we need to compute the average performance,i.e.,the average value of accuracy or f1 scores for these runs, to show that it is better than the old optimization algorithm? Because when I read the papers on a new optimization algorithm, I don't know how they calculate the performance and draw the train-loss vs iters curves, because it has random effects, and for different runs we may get different performance and different curves.
+"
+"['machine-learning', 'deep-learning', 'object-detection']"," Title: How is Average Recall (AR) calculated for an object detection model?Body: After looking around the internet (including this paper, I cannot seem to find a satisfactory explanation of the Average Recall (AR) metric. On the COCO website, it describes AR as: ""the maximum recall given a fixed number of detections per image, averaged over categories and IoUs"".
+
+What does ""maximum recall"" mean here?
+
+I was wondering if someone could give a reference or a high level overview of the AR calculation algorithm.
+
+Thanks!
+"
+"['neural-networks', 'prediction', 'time-series']"," Title: What kind of neural network architecture is suitable for variable length block-like time series data?Body: I'm not sure what this type of data is called, so I will give an example of the type of data I am working with:
+
+
+- A city records its inflow and outflow of different types of vehicles every hour. More specifically, it records the engine size. The output would be the pollution level X hours after the recorded hourly interval.
+
+
+It's worth noting that the data consists of individual vehicle engine sizes, so they can't be aggregated. This means the 2 input vectors (inflow and outflow) will be of variable length (a different number of vehicles would be entering and leaving every hour), and I'm not sure how to handle this. I could aggregate and simply sum the number of vehicles, but I want to preserve any patterns in the data. E.g. perhaps there is a quick succession of several heavy motorbike engines, denoting that a biker gang has just entered the city; they are known to ride recklessly, contributing more to pollution than the sum of their parts.
+
+Any insight is appreciated.
+"
+['game-ai']," Title: Puzzle solving AI?Body: I have a book containing lots of puzzles with instructions like:
+
+""Find a path through all the white squares.""
+
+""Connect each black circle to a white circle with a straight line without crossing lines"".
+
+""Put the letters A to G in the grid so that no letters are repeated in any row or collumn""
+
+
+I thought it might be fun to (1) Try and write a program to solve each individual puzzle (2) Write a program that can solve more general problems, and even more interesting (3) Try to write a program that parses the English instructions and then solves the problem.
+
+I think that in general there would be common themes like, ""draw a path"", ""connect the dots"", ""place the letters in the grid"", and so forth.
+
+The program would have general knowledge of things like squares, cells, rows, columns, colours, letters, numbers and so on.
+
+I wondered if there is anything similar out there already?
+
+If an AI could read instructions and solve the puzzles, could we say that it is in some way intelligent?
+"
+"['philosophy', 'randomness']"," Title: Is randomness necessary for AI?Body: Is randomness (either true randomness or simulated randomness) necessary for AI? If true, does it mean ""intelligence comes from randomness""?
+If not, can a robot lacking the ability to generate random numbers be called an artificial general intelligence?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'applications']"," Title: What are all the different kinds of neural networks used for?Body: I found the following neural network cheat sheet (Cheat Sheets for AI, Neural Networks, Machine Learning, Deep Learning & Big Data).
+
+
+
+What are all these different kinds of neural networks used for? For example, which neural networks can be used for regression or classification, which can be used for sequence generation, etc.? I just need a brief overview (1-2 lines) of their applications.
+"
+"['neural-networks', 'convolutional-neural-networks', 'training', 'architecture', 'hardware']"," Title: How do neural network topologies affect GPU/TPU acceleration?Body: I was thinking about different neural network topologies for some applications. However, I am not sure how this would affect the efficiency of hardware acceleration using GPU/TPU/some other chip.
+If, instead of layers that would be fully connected, I have layers with neurons connected in some other way (some pairs of neurons connected, others not), how is this going to affect the hardware acceleration?
+An example of this is the convolutional networks. However, there is still a clear pattern, which perhaps is exploited by the acceleration, which would mean that if there is no such pattern, the acceleration would not work as well?
+Should this be a concern? If so, is there some rule of thumb for how the connectivity pattern is going to affect the efficiency of hardware acceleration?
+"
+"['neural-networks', 'machine-learning', 'exploding-gradient-problem']"," Title: How to deal with large (or NaN) neural network's weights?Body: My weights go from being between 0 and 1 at initialization to exploding into the tens of thousands in the next iteration. In the 3rd iteration, they become so large that only arrays of nan values are displayed.
+How can I go about fixing this?
+Is it to do with the unstable nature of the sigmoid function, or is one of my equations incorrect during backpropagation which makes my gradients explode?
+import numpy as np
+from numpy import exp
+import matplotlib.pyplot as plt
+import h5py
+
+# LOAD DATASET
+MNIST_data = h5py.File('data/MNISTdata.hdf5', 'r')
+x_train = np.float32(MNIST_data['x_train'][:])
+y_train = np.int32(np.array(MNIST_data['y_train'][:,0]))
+x_test = np.float32(MNIST_data['x_test'][:])
+y_test = np.int32(np.array(MNIST_data['y_test'][:,0]))
+MNIST_data.close()
+
+##############################################################################
+# PARAMETERS
+number_of_digits = 10 # number of outputs
+nx = x_test.shape[1] # number of inputs ... 784 --> 28*28
+ny = number_of_digits
+m_train = x_train.shape[0]
+m_test = x_test.shape[0]
+Nh = 30 # number of hidden layer nodes
+alpha = 0.001
+iterations = 3
+##############################################################################
+# ONE HOT ENCODER - encoding y data into 'one hot encoded'
+lr = np.arange(number_of_digits)
+y_train_one_hot = np.zeros((m_train, number_of_digits))
+y_test_one_hot = np.zeros((m_test, number_of_digits))
+for i in range(len(y_train_one_hot)):
+ y_train_one_hot[i,:] = (lr==y_train[i].astype(np.int))
+for i in range(len(y_test_one_hot)):
+ y_test_one_hot[i,:] = (lr==y_test[i].astype(np.int))
+
+# VISUALISE SOME DATA
+for i in range(5):
+ img = x_train[i].reshape((28,28))
+ plt.imshow(img, cmap='Greys')
+ plt.show()
+
+y_train = np.array([y_train]).T
+y_test = np.array([y_test]).T
+##############################################################################
+# INITIALISE WEIGHTS & BIASES
+params = { "W1": np.random.rand(nx, Nh),
+ "b1": np.zeros((1, Nh)),
+ "W2": np.random.rand(Nh, ny),
+ "b2": np.zeros((1, ny))
+ }
+
+# TRAINING
+# activation function
+def sigmoid(z):
+ return 1/(1+exp(-z))
+
+# derivative of activation function
+def sigmoid_der(z):
+ return z*(1-z)
+
+# softamx function
+def softmax(z):
+ return 1/sum(exp(z)) * exp(z)
+
+# softmax derivative is alike to sigmoid
+def softmax_der(z):
+ return sigmoid_der(z)
+
+def cross_entropy_error(v,y):
+ return -np.log(v[y])
+
+# forward propagation
+def forward_prop(X, y, params):
+ outs = {}
+ outs['A0'] = X
+ outs['Z1'] = np.matmul(outs['A0'], params['W1']) + params['b1']
+ outs['A1'] = sigmoid(outs['Z1'])
+ outs['Z2'] = np.matmul(outs['A1'], params['W2']) + params['b2']
+ outs['A2'] = softmax(outs['Z2'])
+
+ outs['error'] = cross_entropy_error(outs['A2'], y)
+ return outs
+
+# back propagation
+def back_prop(X, y, params, outs):
+ grads = {}
+ Eo = (y - outs['A2']) * softmax_der(outs['Z2'])
+ Eh = np.matmul(Eo, params['W2'].T) * sigmoid_der(outs['Z1'])
+ dW2 = np.matmul(Eo.T, outs['A1']).T
+ dW1 = np.matmul(Eh.T, X).T
+ db2 = np.sum(Eo,0)
+ db1 = np.sum(Eh,0)
+
+ grads['dW2'] = dW2
+ grads['dW1'] = dW1
+ grads['db2'] = db2
+ grads['db1'] = db1
+# print('dW2:',grads['dW2'])
+ return grads
+
+# optimise weights and biases
+def optimise(X,y,params,grads):
+ params['W2'] -= alpha * grads['dW2']
+ params['W1'] -= alpha * grads['dW1']
+ params['b2'] -= alpha * grads['db2']
+ params['b1'] -= alpha * grads['db1']
+ return
+
+# main
+for epoch in range(iterations):
+ print(epoch)
+ outs = forward_prop(x_train, y_train, params)
+ grads = back_prop(x_train, y_train, params, outs)
+ optimise(x_train,y_train,params,grads)
+ loss = 1/ny * np.sum(outs['error'])
+ print(loss)
+
+
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'model-request']"," Title: Is there a neural network that can output a unit vector that is parallel to the input vector?Body: I'm wondering if there is a NN that can achieve the following task:
+Output a unit vector that is parallel to the input vector. i.e., input a vector $\mathbf{v}\in\mathbb{R}^d$, output $\mathbf{v}/\|\mathbf{v}\|$. The dimension $d$ can be fixed, say $2$.
+To achieve this, it seems to me that we need the NN to implement three functions: square, square root, and division. But I don't know if a NN can do all of these.
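+Spelling out what I mean by those three functions for, say, $d=2$:
+$$\frac{\mathbf{v}}{\|\mathbf{v}\|}=\left(\frac{v_1}{\sqrt{v_1^2+v_2^2}},\ \frac{v_2}{\sqrt{v_1^2+v_2^2}}\right),$$
+so the network would have to realise squaring, a square root, and a division.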
+"
+"['reinforcement-learning', 'markov-decision-process', 'action-spaces']"," Title: Is the agent aware of a possible different set of actions for each state?Body: I have a use case where the set of actions is different for different states. Is the agent aware of what actions are valid for each state, or is the agent only aware of the entire action space (in which case I guess the environment needs to discard invalid actions)?
+I presume the answer is yes, but I would like to confirm.
+"
+"['machine-learning', 'reference-request', 'terminology', 'datasets']"," Title: What is the term for datasets that are themselves composed of datasets?Body: As computers are getting bigger better and faster, the concept of what constitutes a single datum is changing.
+For example, in the world of pen-and-paper, we might take readings of temperature over time and obtain a time-series in which an individual datum is a time, temperature pair. However, it is now common to desire classifications of entire time-series, in the context of which our entire temperature time-series would be but a single data point in a data set consisting of a great number of separate time-series. In image processing, an $(x,y,c)$ triple is not a datum, but a whole grid of such values is a single datum. With lidar data and all manner of other fields things that were previously considered a dataset are now best thought of as a datum.
+What is the term for datasets that are themselves composed of datasets?
+The term "metadata" is occupied, I should think.
+Are there any papers that talk about this transition from datasets of data to datasets of datasets? And what the implications are for data scientists and researchers?
+"
+"['machine-learning', 'deep-learning', 'hyperparameter-optimization', 'cross-validation', 'generalization']"," Title: Will parameter sweeping on one split of data followed by cross validation discover the right hyperparameters?Body: Let's call our dataset splits train/test/evaluate. We're in a situation where we require months of data. So we prefer to use the evaluation dataset as infrequently as possible to avoid polluting our results. Instead, we do 10 fold cross validation (CV) to estimate how well the model might generalize.
+We're training deep learning models that take between 24-48 hours each, and the process of parameter sweeping is obviously very slow when performing 10-fold cross validation.
+Does anyone have any experience or citations for how well parameter sweeping on one split of the data followed by cross validation (used to estimate how well it generalizes) works?
+I suspect it's highly dependent on the distribution of data and local minima & maxima of the hyper parameters, but I wanted to ask.
+"
+"['reinforcement-learning', 'q-learning', 'dqn']"," Title: TD losses are descreasing, but also rewards are decreasing, increasing sigmaBody: I'm using Q-learning with some extensions such as noisy linear layers, n-steps and double DQN.
+
+The training, however, isn't that successful: my rewards are decreasing over time after a steep increase at the beginning:
+
+
+
+But what's interesting is that my TD loss is also decreasing:
+
+
+
+The sigma magnitudes of the noisy linear layers, which control the exploration, are strangely increasing, and they also seem to converge. I expected them to reduce uncertainty over time, but the opposite is the case.
+
+
+
+Another interesting thing, and probably the reason my loss is decreasing: the model tends to always generate the same transition, which is why the episodes end early and the rewards get lower. My experience replay buffer is full of this single transition (around 99 percent of the buffer).
+
+What could be the reason? Which things should I check? Is there anything I could try? I'm also willing to add information; just comment on what could be of interest.
+"
+"['reference-request', 'robotics', 'human-like', 'robots']"," Title: How does Atlas from Boston Dynamics have human-like movement?Body: Discussing the video More Parkour Atlas, a friend asked how the robot's movement is so similar to that of a real human, and wondered how this is achieved.
+To my knowledge, this is not something the developer "programmed", but instead emerged from the learning algorithm.
+Could you provide an overview and some reference on how this is achieved?
+"
+"['neural-networks', 'tensorflow', 'recurrent-neural-networks', 'long-short-term-memory']"," Title: What is the relationship between the size of the hidden layer and the size of the cell state layer in an LSTM?Body: I was following some examples to get familiar with TensorFlow's LSTM API, but noticed that all LSTM initialization functions require only the num_units parameter, which denotes the number of hidden units in a cell.
+
+According to what I have learned from the famous colah's blog, the cell state has nothing to do with the hidden layer, thus they could be represented in different dimensions (I think), and then we should pass at least 2 parameters denoting both #hidden and #cell_state.
+
+So, this confuses me a lot when trying to figure out what TensorFlow's cells do under the hood. Are they implemented like this just for the sake of convenience, or did I misunderstand something in the blog mentioned?
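+For reference, this is the small check that led to my confusion (assuming TF 2.x and the Keras LSTM layer): both the hidden state and the cell state come back with size num_units, so there does not seem to be a separate cell-state size to pass.
+
+import numpy as np
+import tensorflow as tf
+
+num_units = 8
+lstm = tf.keras.layers.LSTM(num_units, return_state=True)
+x = np.random.rand(1, 5, 3).astype('float32')  # (batch, time, features)
+
+output, state_h, state_c = lstm(x)
+print(state_h.shape, state_c.shape)  # both are (1, 8): h and c share num_units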
+
+
+"
+"['convolutional-neural-networks', 'image-recognition', 'classification']"," Title: Can a model, retrained on images classified previously by itself, increase its accuracy?Body: Let's assume I have a CNN model trained to categorize some objects in images. By using this model, I find more categorized images. If I now retrain this model on a data set that consists of the old set plus the newly categorised images, is there a chance that such a new model will have higher accuracy? Or, because the new data possesses only information that could be found in the initial set, will the model have similar or lower accuracy?
+
+Please let me know if something is unclear.
+"
+"['generative-adversarial-networks', 'reference-request']"," Title: Looking for GAN paper with spiral imageBody: I am looking for a GAN paper I have read a while ago, but unfortunately cannot find it again. I think it compared GANs and other methods (like CVAEs) w.r.t. how they handle multi-modal data, not sure about the CVAEs though.
+What I know is that they created different synthetic toy datasets with multiple modes to analyze this. I remember one plot of a blue spiral of this data on a white background. Any guesses?
+"
+"['neural-networks', 'self-organizing-map']"," Title: What is the impact of using multiple BMUs for self-organizing maps?Body: Here's a sort of a conceptual question. I was implementing a SOM algorithm to better understand its variations and parameters. I got curious about one bit: the BMU (best matching unit == the neuron that is more similar to the vector being presented) is chosen as the neuron that has the smallest distance in feature space to the vector. Then I updated it and its neighbours.
+This makes sense, but what if I used more than one BMU for updating the network? For example, suppose that the distance to one neuron is 0.03, but there is another neuron with distance 0.04. These are the two smallest distances. I would use the one with 0.03 as the BMU.
+The question is, what would be the expected impacts on the algorithm if I used more than one BMU? For example, I could be selecting for update all neurons for which the distance is up to 5% more than the minimum distance.
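+For clarity, the change I have in mind looks roughly like this (a minimal numpy sketch; the neighbourhood update and learning-rate decay of a full SOM are omitted):
+
+import numpy as np
+
+def update_with_multiple_bmus(weights, x, lr=0.1, tolerance=0.05):
+    # weights: (n_neurons, n_features) codebook, x: one input vector
+    dists = np.linalg.norm(weights - x, axis=1)
+    d_min = dists.min()
+    # every neuron whose distance is within 5% (by default) of the minimum counts as a BMU
+    bmus = np.where(dists <= d_min * (1.0 + tolerance))[0]
+    weights[bmus] += lr * (x - weights[bmus])
+    return bmus
+
+weights = np.random.rand(10, 4)
+x = np.random.rand(4)
+print(update_with_multiple_bmus(weights, x))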
+I am not asking for an implementation; I can code it up to see what happens. I am just curious to see whether anyone has any insight on the pros and cons (other than the additional complexity) of this approach.
+"
+['reinforcement-learning']," Title: RL: How to deal with environments changing state due to external factorsBody: I have a use case where the state of the environment could change due to random events in between the time steps at which the agent takes actions. For example, at t1, the agent takes action a1 and is given the reward and the new state s1. Before the agent takes the next action at t2, some random events occur in the environment that alter the state. Now, when the agent takes an action at t2, it is acting on ""stale information"", since the state of the environment has changed. Also, the new state s2 will represent changes not only due to the agent's action, but also due to the prior random events that occurred. In the worst case, the action could have become invalid for the new state that was introduced by these random events within the environment.
+
+How do we deal with this? Does this mean that this use case is not a good one to solve with RL? If we just ignore these changing states due to the random events in the environment, how would that affect the various learning algorithms? I presume that this is not an uncommon or unique problem in real-life use cases...
+
+Thanks!
+
+Francis
+"
+"['neural-networks', 'deep-learning', 'learning-algorithms', 'greedy-ai', 'symbolic-ai']"," Title: Is it possible to do K-nearest-neighbours before training DNNBody: The following X-shape alternated pattern can be separated quite well and super fast by K-nearest Neighbour algorithm (go to https://ml-playground.com to test it):
+
+However, a DNN seems to struggle greatly to separate that X-shaped alternating data. Is it possible to do K-nearest neighbours before the DNN, i.e. set the DNN weights somehow to simulate the result of K-nearest neighbours before doing the DNN training?
+Another place to test the X-shape alternated data: https://cs.stanford.edu/people/karpathy/convnetjs/demo/classify2d.html
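+To make the comparison reproducible outside those playgrounds, this is roughly the kind of experiment I am running (synthetic XOR-like alternating data, scikit-learn; the real data is the X-shape pattern from the playground):
+
+import numpy as np
+from sklearn.neighbors import KNeighborsClassifier
+from sklearn.neural_network import MLPClassifier
+
+rng = np.random.RandomState(0)
+X = rng.uniform(-1, 1, size=(1000, 2))
+y = (X[:, 0] * X[:, 1] > 0).astype(int)  # label alternates between quadrants
+
+knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
+mlp = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X, y)
+
+print('kNN training accuracy:', knn.score(X, y))
+print('MLP training accuracy:', mlp.score(X, y))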
+"
+"['definitions', 'search', 'a-star', 'consistent-heuristic', 'graph-search']"," Title: In the graph search version of A*, can I stop the search the first time I encounter the goal node?Body: I am going through Russel and Norvig's Artificial Intelligence: A Modern Approach (3rd edition). I was reading the part regarding the A* algorithm
+
+
+ A* graph search version is optimal when heuristic is consistent and tree search version optimal when heuristic is just admissible.
+
+
+The book gives the following graph search algorithm.
+
+
+
+The above algorithm says to pop a node from the frontier set and expand it (assuming it is not in the explored set), and to add its children to the frontier only if the child is not already in the frontier or the explored set.
+
+Now, suppose I apply the same to A* (assuming a consistent heuristic) and I find the goal state (as a child of some node) for the first time; I add it to the frontier set. According to this algorithm, if the goal state is already in the frontier set, it must never be added again (which implies it is never updated/promoted, right?).
+
+I have a few questions.
+
+
+- Can I stop the search when I find the goal state for the first time as a child of some node, and not wait till I pop the goal state from the frontier?
+- Does a consistent heuristic guarantee that, when I add a node to the frontier set, I have found the optimal path to it? (Because, if I never update it or re-add it with an updated cost, then according to the graph search algorithm the answer must be yes.)
+
+
+Am I missing something? The book also states that, whenever A* selects a node for expansion, the optimal path to that node has been found, and it does not say that the optimal path is found when a node is added to the frontier set.
+
+So, I'm pretty confused, but I think the general graph search definition (in the above image) is misleading.
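+For reference, this is the skeleton I have in mind when I say the goal test happens only at expansion time (a rough Python sketch of the usual A* implementation, which, unlike the GRAPH-SEARCH above, does allow re-adding a node to the frontier with a better cost):
+
+import heapq
+from itertools import count
+
+def a_star(start, goal, neighbors, h):
+    tie = count()                                  # tie-breaker so the heap never compares states
+    frontier = [(h(start), next(tie), 0, start)]   # entries are (f = g + h, tie, g, state)
+    explored = set()
+    best_g = {start: 0}
+    while frontier:
+        f, _, g, s = heapq.heappop(frontier)
+        if s == goal:                              # goal test when the node is popped, not when generated
+            return g
+        if s in explored:
+            continue
+        explored.add(s)
+        for s2, cost in neighbors(s):
+            g2 = g + cost
+            if s2 not in explored and g2 < best_g.get(s2, float('inf')):
+                best_g[s2] = g2
+                heapq.heappush(frontier, (g2 + h(s2), next(tie), g2, s2))
+    return None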
+"
+"['reinforcement-learning', 'python', 'keras']"," Title: how to use Softmax action selection algorithm in atari-like gameBody: I'm currently writing a program using Keras (Python 3) to play a game similar to Atari games, only in this one there are objects moving on the screen at different angles and in different directions (in most of the Atari games I've encountered, the objects you need to shoot are static). The agent's aim is to shoot them.
+
+After executing every action, I get feedback from the environment: the locations of all the objects on the screen, the locations of the collisions that happened, my position (the angle of the turret) and the total score (from which I can calculate the reward).
+
+I defined each state to consist of the parameters mentioned above.
+
+I want to use the softmax algorithm in order to choose the next action, but I'm not sure how to do it. I'd be very grateful if anyone could help me or refer me to a source that explains the syntax. Currently, I'm using a decaying epsilon-greedy algorithm.
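+To show what I mean by softmax action selection, I imagine something roughly like the following (plain numpy, applied to whatever action values my Keras model outputs), but I am not sure whether this is the right way to plug it into my training loop:
+
+import numpy as np
+
+def softmax_action(action_values, temperature=1.0):
+    # Boltzmann exploration: better actions are sampled more often,
+    # and the temperature controls how greedy the distribution is
+    q = np.asarray(action_values, dtype=np.float64)
+    q = q - q.max()                      # subtract the max for numerical stability
+    probs = np.exp(q / temperature)
+    probs /= probs.sum()
+    return np.random.choice(len(q), p=probs)
+
+print(softmax_action([1.0, 2.0, 0.5], temperature=0.5))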
+
+Thank you very much for your time and attention.
+"
+"['python', 'gradient-descent']"," Title: Why is batch gradient descent performing worse than stochastic and minibatch gradient descent?Body: I have implemented a neural network from scratch (only using numpy) and I am having problems understanding why the results are so different between stochastic/minibatch gradient descent and batch gradient descent:
+
+
+
+The training data is a collection of point coordinates (x,y).
+The labels are 0s or 1s (below or above the parabola).
+
+
+As a test, I am doing a classification task.
+My objective is to make the NN learn which points are above the parabola (yellow) and which points are below the parabola (purple).
+
+Here is the link to the notebook: https://github.com/Pign4/ScratchML/blob/master/Neural%20Network.ipynb
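+For reference, this is how I understand the three variants differ in their update loops (a self-contained sketch where a linear model's MSE gradient stands in for my network's backprop; the real code is in the notebook above):
+
+import numpy as np
+
+def grad(w, Xb, yb):
+    # MSE gradient of a linear model, used here only as a stand-in for backprop
+    return 2 * Xb.T @ (Xb @ w - yb) / len(Xb)
+
+def batch_gd(w, X, y, lr):              # whole training set: one update per epoch
+    return w - lr * grad(w, X, y)
+
+def minibatch_gd(w, X, y, lr, bs=32):   # several updates per epoch, one per mini-batch
+    idx = np.random.permutation(len(X))
+    for start in range(0, len(X), bs):
+        b = idx[start:start + bs]
+        w = w - lr * grad(w, X[b], y[b])
+    return w
+
+def sgd(w, X, y, lr):                   # one update per training sample
+    for i in np.random.permutation(len(X)):
+        w = w - lr * grad(w, X[i:i + 1], y[i:i + 1])
+    return w
+
+X = np.random.rand(200, 2)
+y = X @ np.array([1.0, -2.0])
+w = np.zeros(2)
+for epoch in range(100):
+    w = minibatch_gd(w, X, y, lr=0.1)
+print(w)  # should approach [1, -2]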
+
+
+- Why is the batch gradient descent performing so poorly with respect
+to the other two methods?
+- Is it a bug? But how can it be since the code is almost identical to the
+minibatch gradient descent?
+- I am using the same (randomly chosen by trial and error) hyperparameters for all three
+neural networks. Does batch gradient descent need a more accurate
+technique to find the correct hyperparameters? If yes, why so?
+
+"
+['neural-networks']," Title: What is the best variant of darknet to use?Body: pjreddie's official darknet version (link from official website here) has been forked several times. In particular, I've come across AlexeyAB's fork through this tutorial. I assume the tutorial's author used AlexeyAB's fork because he wanted to use it on a Windows machine, which pjreddie's darknet cannot do AFAIK.
+
+I am not really concerned about that (I am a linux user), but I am very interested in the half precision option (CUDNN_HALF) that AlexeyAB's darknet has, whereas pjreddie's darknet does not. Of course, I've checked that this option is supported by the graphics card (RTX2080) we use at my office.
+
+Nevertheless, I wonder: how stable/robust is that fork? Of course I want high-performing software, but I also want a certain level of stability! On the other hand, the latest commit on pjreddie's darknet is back from September 2018 (i.e. 1 year old), whereas AlexeyAB's darknet is active…
+
+More broadly, there seems to be a lot of darknet forks: which ones to prefer?
+
+What does the neural network community think?
+"
+"['convolutional-neural-networks', 'objective-functions', 'categorical-data']"," Title: CNN classification model loss stuck at same valueBody: I have a CNN model to classify 2 classes (Yes or No).
+I use categorical_crossentropy loss and softmax activation at the end.
+For input, I use an image with all 3 channels; for output, I use a one-hot encoded vector ([0,1] or [1,0]).
+
+I have a function that guarantees that each batch contains the same number of samples from each class, so the classes are not unevenly represented.
+
+What happens when I train the model is that the loss gets stuck at the same value during training.
+
+I assume that the model always predicts the same class, so half of the batch has loss 0 and the other half has the maximum loss, which brings it to 8 all the time.
+
+What could have gone wrong?
+
+The network is something like this :
+
+from keras.layers import Input, Conv2D, LeakyReLU, MaxPooling2D, Dropout, Flatten, Dense
+from keras.models import Model
+
+# input_img is a Keras Input tensor for the RGB images, e.g. input_img = Input(shape=(height, width, 3))
+x = Conv2D(16, (3, 3), padding='same')(input_img)
+x = LeakyReLU(0.1)(x)
+x = Conv2D(32 , (3, 3), padding='same')(x)
+x = LeakyReLU(0.1)(x)
+x = MaxPooling2D((2, 2))(x)
+x = Dropout(0.25)(x)
+x = Conv2D(32 , (3, 3), padding='same')(x)
+x = LeakyReLU(0.1)(x)
+x = Conv2D(48 , (3, 3), padding='same')(x)
+x = LeakyReLU(0.1)(x)
+x = MaxPooling2D((2, 2))(x)
+x = Dropout(0.25)(x)
+
+x = Flatten()(x)
+x = Dense(4096)(x)
+x = LeakyReLU(0.1)(x)
+x = Dropout(0.5)(x)
+x = Dense(2048)(x)
+x = LeakyReLU(0.1)(x)
+x = Dropout(0.5)(x)
+out = Dense(2, activation='softmax', name='table')(x)
+
+model = Model(input_img, out)
+model.compile(optimizer='adam', loss= 'categorical_crossentropy')
+
+
+Training Loss:
+
+
+"
+"['convolutional-neural-networks', 'tensorflow', 'training', 'keras', 'gpu']"," Title: How can I reduce the GPU memory usage with large images?Body: I am trying to train a CNN-LSTM model. The size of my images is 640x640. I have a GTX 1080 ti 11GB. I am using Keras with the TensorFlow backend.
+
+Here is the model.
+
+from keras.layers import Input, TimeDistributed, Conv2D, MaxPooling2D, Flatten, Dense, Dropout, LSTM
+from keras.models import Model
+from keras import optimizers
+
+# n_width, n_height and n_channels describe the input frames (640, 640 and 3 in my case)
+img_input_1 = Input(shape=(1, n_width, n_height, n_channels))
+conv_1 = TimeDistributed(Conv2D(96, (11,11), activation='relu', padding='same'))(img_input_1)
+pool_1 = TimeDistributed(MaxPooling2D((3,3)))(conv_1)
+conv_2 = TimeDistributed(Conv2D(128, (11,11), activation='relu', padding='same'))(pool_1)
+flat_1 = TimeDistributed(Flatten())(conv_2)
+dense_1 = TimeDistributed(Dense(4096, activation='relu'))(flat_1)
+drop_1 = TimeDistributed(Dropout(0.5))(dense_1)
+lstm_1 = LSTM(17, activation='linear')(drop_1)
+dense_2 = Dense(4096, activation='relu')(lstm_1)
+dense_output_2 = Dense(1, activation='sigmoid')(dense_2)
+model = Model(inputs=img_input_1, outputs=dense_output_2)
+
+op = optimizers.Adam(lr=0.00001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.001)
+
+model.compile(loss='mean_absolute_error', optimizer=op, metrics=['accuracy'])
+
+model.fit(X, Y, epochs=3, batch_size=1)
+
+
+Right now, using this model, I can only use the training data when the images are resized to 60x60; any larger and I run out of GPU memory.
+
+I want to use the largest possible size as I want to retain as much discriminatory information as possible. (The labels will be mouse screen coordinates between 0 - 640).
+
+Among many others, I found this question: How to handle images of large sizes in CNN?
+
+Though I am not sure how I can ""restrict your CNN"" or ""stream your data in each epoch"" or if these would help.
+
+How can I reduce the amount of memory used so I can increase the image sizes?
+
+Is it possible to sacrifice training time/computation speed in favor of higher resolution data whilst retaining model effectiveness?
+
+Note: the above model is not final, just a basic outlay.
+"
+"['machine-learning', 'genetic-algorithms', 'optimization', 'constrained-optimization']"," Title: Given a list of integers $\{c_1, \dots, c_N \}$, how do I find an integer $D$ that minimizes the sum of remainders $\sum_i c_i \text{ mod } D$?Body: I have a set of fixed integers $S = \{c_1, \dots, c_N \}$. I want to find a single integer $D$, greater than a certain threshold $T$, i.e. $D > T \geq 0$, that divides each $c_i$ and leaves remainder $r_i \geq 0$, i.e. $r_i$ can be written as $r_i = c_i \text{ mod } D$, such that the sum of remainders is minimized.
+In other words, this is my problem
+\begin{equation}
+\begin{aligned}
+D^* = \text{argmin}_{D} \quad & \sum_i c_i \bmod D \\
+\text{subject to} \quad & D > T
+\end{aligned}
+\end{equation}
+If the integers have a common divisor, this problem is easy. If the integers are relatively co-prime however, then it is not clear how to solve it.
+The set size $|S| = N$ can be around $10000$, and each element also has a value in the tens of thousands.
+I was thinking about solving it with a genetic algorithm (GA), but it is kind of slow. I want to know whether there is any other way to solve this problem.
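+For scale, a brute-force baseline that simply scans every candidate divisor up to the largest element looks like this (feasible only because the values are in the tens of thousands):
+
+import numpy as np
+
+def best_divisor(c, T):
+    # c: the fixed integers, T: the threshold on D
+    c = np.asarray(c)
+    best_D, best_sum = None, np.inf
+    for D in range(T + 1, int(c.max()) + 1):
+        s = int((c % D).sum())
+        if s < best_sum:
+            best_D, best_sum = D, s
+    return best_D, best_sum
+
+c = np.random.randint(1, 50000, size=10000)
+print(best_divisor(c, T=100))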
+"
+"['machine-learning', 'optimization', 'proofs', 'no-free-lunch-theorems']"," Title: What are the implications of the ""No Free Lunch"" theorem for machine learning?Body: The No Free Lunch (NFL) theorem states (see the paper Coevolutionary Free Lunches by David H. Wolpert and William G. Macready)
+
+
+ any two algorithms are equivalent when their performance is averaged across all possible problems
+
+
+Is the ""No Free Lunch"" theorem really true? What does it actually mean? A good example (in an ML context) illustrating this assertion would be nice.
+
+I have seen some algorithms which behave very poorly, and I have a hard time believing that they actually follow the above-stated theorem, so I am trying to understand whether my interpretation of this theorem is correct or not. Or is it just another ornamental theorem like Cybenko's Universal Approximation theorem?
+"
+"['deep-learning', 'training', 'backpropagation', 'architecture']"," Title: Why do very deep non resnet architectures perform worse compared to shallower ones for the same iteration? Shouldn't they just train slower?Body: My understanding of the vanishing gradient problem in deep networks is that as backprop progresses through the layers the gradients become small, and thus training progresses slower. I'm having a hard time reconciling this understanding with images such as below where the losses for a deeper network are higher than for a shallower one. Should it not just take longer to complete each iteration, but still reach the same level if not higher of accuracy?
+
+"
+"['reinforcement-learning', 'policies', 'bias-variance-tradeoff']"," Title: Why is having low variance important in offline policy evaluation of reinforcement learning?Body: Intuitively, I understand that having an unbiased estimate of a policy is important because being biased just means that our estimate is distant from the truth value.
+
+However, I don't understand clearly why having lower variance is important. Is that because, in offline policy evaluation, we can have only 'one' estimate with a stream of data, and we don't know if it is because of variance or bias when our estimate is far from the truth value? Basically, variance acts like bias.
+
+Also, if that is the case, why is having variance preferable to having bias?
+"
+"['reinforcement-learning', 'markov-decision-process', 'policies', 'markov-reward-process']"," Title: Why does having a fixed policy change a Markov Decision Process to a Markov Reward Process?Body: If a policy is fixed, it is said that a Markov Decision Process (MDP) becomes a Markov Reward Process (MRP).
+Why is this so? Aren't the transitions and rewards still parameterized by the action and current state? In other words, aren't the transition and reward matrices still cubes?
+From my current train of thought, the only thing that is different is that the policy is not changing (the agent is not learning the policy). Everything else is the same.
+How is it that it switches to an MRP, which is not affected by actions?
+I am reading "Deep Reinforcement Learning Hands-On" by Maxim Lapan, which states this. I have also found this statement in online articles, but I cannot seem to wrap my head around it.
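+For concreteness, I suppose the intended reading is that the fixed policy lets one average the action out, something like
+$$P^{\pi}(s' \mid s) = \sum_a \pi(a \mid s) P(s' \mid s, a), \qquad R^{\pi}(s) = \sum_a \pi(a \mid s) R(s, a),$$
+but I do not see why this removes the action dependence rather than just hiding it inside the new matrices.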
+"
+"['machine-learning', 'swarm-intelligence', 'intelligence']"," Title: Intelligent reflecting surfaceBody: I wanted to know about Intelligent reflecting surface (IRS) technology.
+What is the application of IRS in wireless communication?
+What are the competitive advantages over existing technologies?
+"
+"['deep-learning', 'natural-language-processing', 'tensorflow', 'word-embedding', 'hidden-layers']"," Title: Why is embedding important in NLP, and how does autoencoder work?Body: People say embedding is necessary in NLP because, if we use just the word indices, the efficiency is not high, as similar words are supposed to be related to each other. However, I still don't truly get why.
+
+The subword-based embedding (aka syllable-based embedding) is understandable, for example:
+
+biology --> bio-lo-gy
+biologist --> bio-lo-gist
+
+
+For the 2 words above, when turning them into syllable-based embeddings, it's good because the 2 words will be related to each other due to the shared syllables: bio and lo.
+
+However, it's hard to understand the autoencoder: it turns an index value into a vector, then feeds these vectors to the DNN. The autoencoder can turn vectors back into words too.
+
+How does the autoencoder make words related to each other?
+"
+"['reinforcement-learning', 'greedy-ai']"," Title: Unable to replicate Figure 2.1 from ""Reinforcement Learning: An Introduction""Body: The author explains in 2.2 Action-Value Methods:
+
+
+ To roughly assess the relative effectiveness of the greedy and $\varepsilon $-greedy methods, we compared them numerically on a suite of test problems. This is a set of 2000 randomly generated n-armed bandit tasks with n = 10. For each action, a, the rewards were selected from a normal (Gaussian) probability distribution with mean Q*(a) and variance 1. The 2000 n-armed bandit tasks were generated by reselecting the Q*(a) 2000 times, each according to a normal distribution with mean 0 and variance 1. Averaging over tasks, we can plot the performance and behavior of various methods as they improve with experience over 1000 plays, as in Figure 2.1. We call this suite of test tasks the 10-armed testbed.
+
+
+
+
+But doing my best, the replication yields something nearer to :
+
+
+
+I think I am misunderstanding how the author took the averages.
+
+Here is my code:
+
+from math import exp
+
+import numpy
+import matplotlib.pyplot as plt
+
+
+def act(action, Qstar):
+ return numpy.random.normal(Qstar[action], 1)
+
+
+def run(epsilon):
+ history = [0 for i in range(1000)]
+
+ for task in range(1, 2000):
+ Qstar = [numpy.random.normal(0, 1) for i in range(10)]
+ Q = [0 for i in range(10)]
+ for t in range(1, 1001):
+ if numpy.random.randint(0, 100) < epsilon:
+ action = numpy.random.randint(0, len(Q))
+ elif t == 0:
+ action = 0
+ else:
+ averages = [q/t for q in Q]
+ action = averages.index(max(averages))
+
+ reward = act(action, Qstar)
+ Q[action] += reward
+
+ history[t-1] += reward
+
+ return [elem/2000 for elem in history]
+
+
+if __name__ == '__main__':
+ plt.plot(run(10), 'b', label=""ɛ=0.1"")
+ plt.plot(run(1), 'r', label=""ɛ=0.01"")
+ plt.plot(run(0), 'g', label=""ɛ=0"")
+ plt.xlabel('Plays')
+ plt.ylabel('Reward')
+ plt.legend()
+ plt.show()
+
+"
+['neural-networks']," Title: What order should I learn about Neural Networks?Body: I'm interested in learning about Neural Networks and implementing them. I'm particularly interested in GANs and LSTM networks.
+
+I understand perceptrons and basic Neural Network configuration (sigmoid activation, weights, hidden layers, etc.). But what topics do I need to learn, and in what order, to get to the point where I can implement a GAN or an LSTM?
+
+I intend to make an implementation of each in C++ to prove to myself that I understand them. I haven't got a particularly good math background, but I understand most math when it is explained.
+
+For example, I understand backpropagation, but I don't really understand it deeply. I understand how reinforcement learning is used with backpropagation, but not fully how you can have things like training without datasets (like TD-Gammon). I don't quite understand CNNs, especially why you might choose a particular architecture.
+
+If there was a book or website or something for each ""topic"", it would be great.
+"
+"['genetic-algorithms', 'evolutionary-algorithms', 'gradient-descent']"," Title: How to calculate multiobjective optimization cost for ordinary problems?Body: What I did:
+Created a population of 2D legged robots in a simulated environment. Found the best motor rotation values to make the robots move rightward, using an objective function with Differential Evolution (could use PSO or GA too), that returned the distance moved rightward. Gradient descent used for improving fitness.
+
+What I want to do:
+Add more objectives. To find the best motor rotation, with the least motion possible, with the least jittery motion, without toppling the body upside down and making the least collision impact on the floor.
+
+What I found:
+
+
+- Spent almost two weeks searching for solutions, reading research
+papers, going through tutorials on Pareto optimality, installing
+libraries and trying the example programs.
+- Using pairing functions to create a cost function wasn't good
+enough.
+- There are many multi-objective PSO, DE, GA etc., but they seem
+to be built for solving some other kind of problem.
+
+
+Where I need help:
+
+
+- Existing multi objective algorithms seem to use some pre-existing
+minimization and maximization functions (Fonseca, Kursawe, OneMax,
+DTLZ1, ZDT1, etc.) and it's confusing to understand how I can use my
+own maximization and minimization functions with the libraries.
+(minimize(motorRotation), maximize(distance),
+minimize(collisionImpact), constant(bodyAngle)).
+- How do I know which is the best Pareto front to choose in a
+multi-dimensional space? There seem to be ways of choosing the
+top-right Pareto front or the top-left or the bottom-right or
+bottom-left. In multi-dimensional space, it'd be even more varied.
+- Libraries like Platypus, PyGMO, Pymoo etc. just define the problem using
+problem = DTLZ2(), instantiate an algorithm with algorithm = NSGAII(problem)
+and run it with algorithm.run(10000), where I assume 10000 is the number of
+generations. But since I'm using a legged robot, I can't simply use
+run(10000). I need to assign motor values to the robots, wait for the
+simulator to make the robots in the population move, and then calculate the
+objective function cost. How can I achieve this? (See the sketch after this
+list for the shape of the evaluation I have in mind.)
+- Once the Pareto-optimal values are found, how are they used to create a
+cost value that helps me determine the fittest robot in the
+population?
+
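+To show the shape of the evaluation I have in mind for the third point above, here is a rough sketch; simulate_robot and SimResult are hypothetical stand-ins for my simulator call, which blocks until the robots in the population have finished moving:
+
+import random
+
+class SimResult:
+    # stand-in for what the simulator reports once a robot has finished moving
+    def __init__(self):
+        self.distance_moved_right = random.random()
+        self.total_motor_rotation = random.random()
+        self.collision_impact = random.random()
+        self.body_angle = random.uniform(-0.5, 0.5)
+
+def simulate_robot(motor_rotations):
+    # hypothetical blocking call: send motor values to the simulator and wait
+    return SimResult()
+
+def evaluate(motor_rotations):
+    r = simulate_robot(motor_rotations)
+    # one value per objective, all written as minimisation
+    return [-r.distance_moved_right,   # maximise distance moved rightward
+            r.total_motor_rotation,    # least motion possible
+            r.collision_impact,        # least collision impact on the floor
+            abs(r.body_angle)]         # keep the body from toppling
+
+print(evaluate([0.1, 0.2, 0.3, 0.4, 0.5]))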
+"
+"['terminology', 'geometric-deep-learning', 'graphs']"," Title: What is the exact meaning of ""lower-order structure"" and ""higher-order structure"" in this paper?Body: I recently read a paper on community detection in networks. In the paper EdMot: An Edge Enhancement Approach for Motif-aware Community Detection, the authors consider the ""lower-order structure"" of the network at the level of individual nodes and edges. And they mention some ""higher-order structure"" methods. The point is: what is the exact meaning (definition) of lower- and higher-order structure in a network?
+"
+"['reinforcement-learning', 'ai-design', 'multi-agent-systems']"," Title: How would one implement a multi-agent environment with asynchronous action and rewards per agent?Body: In a single agent environment, the agent takes an action, then observes the next state and reward:
+
+for ep in num_episodes:
+ action = dqn.select_action(state)
+ next_state, reward = env.step(action)
+
+
+Implicitly, the for moving the simulation (env) forward is embedded inside the env.step() function.
+
+Now in the multiagent scenario, agent 1 ($a_1$) has to make a decision at time $t_{1a}$, which will finish at time $t_{2a}$, and agent 2 ($a_2$) makes a decision at time $t_{1b} < t_{1a}$ which is finished at $t_{2b} > t_{2a}$.
+
+If both of their actions would start and finish at the same time, then it could easily be implemented as:
+
+for ep in num_episodes:
+ action1, action2 = dqn.select_action([state1, state2])
+ next_state_1, reward_1, next_state_2, reward_2 = env.step([action1, action2])
+
+
+because the env can execute both in parallel, wait till they are done, and then return the next states and rewards. But in the scenario that I described previously, it is not clear (at least to me) how to implement this. Here, we need to explicitly track time and check at every timepoint whether an agent needs to make a decision. Just to be concrete:
+
+for ep in num_episodes:
+    for t in total_time:
+        action1 = dqn.select_action(state1)
+        env.step(action1)  # this step might take 5 t to complete,
+        # so the step() function won't return the reward till 5 t later.
+        # In the meantime, agent 2 comes and has to make a decision;
+        # its reward and next state won't be observed till 10 t later.
+
+
+To summarize, how would one implement a multi-agent environment with asynchronous actions/rewards per agent?
+"
+"['machine-learning', 'intelligent-agent', 'multi-agent-systems']"," Title: What are examples of agents that represent these characteristics?Body: I'm looking for examples of AI systems or agents that best represent these five characteristics (one example for each characteristic):
+
+
+- Reactivity
+- Proactivity
+- Adaptability
+- Sociability
+- Autonomy
+
+
+It would be better if they were machine learning-based applications.
+"
+"['machine-learning', 'reinforcement-learning', 'markov-decision-process']"," Title: Markov property in maze solving problem in reinforcement learningBody: By definition, every state in RL has Markov property, which means that the future state depends only on the current state, not the past states.
+
+However, I saw that in some case we can define a state to be the history of observations and actions taken so far, such as $s_t = h_t = o_1a_1\dots o_{t-1}a_{t-1}o_t$. I think maze solving can be of that case since the current state, or the current place in a maze, clearly depends on which places the agent has been and which ways the agent has taken so far.
+
+Then it seems that the future states naturally depend on the past states and the past actions as well. What am I missing?
+"
+"['search', 'proofs', 'heuristics', 'admissible-heuristic']"," Title: Can two admissable heuristics not dominate each other?Body: I am working on a project for my artificial intelligence class. I was wondering if I have 2 admissible heuristics, A and B, is it possible that A does not dominate B and B does not dominate A? I am wondering this because I had to prove if each heuristic is admissible and I did that, and then for each admissible heuristic, we have to prove if each one dominates the other or not. I think I have a case that neither dominates the other and I was wondering if maybe I got the admissibility wrong because of that.
+"
+['deep-learning']," Title: Looking or the simplest framework to train keypoint detectorBody: I currently use an object detector to detect an object and specific parts of it (a crop and its stem). Such detector is not the best choice for detecting parts that could be represented by a point (typically a stem) so I'm planning on moving to a keypoint detector.
+
+After reading the literature it appears that there are many solutions. I'm particularly interested in using a hourglass network to predict a set of heatmaps abstracting the keypoints positions.
+
+The problem is that many of the existing frameworks are dedicated to human pose estimation (for instance OpenPose) but I don't need all this complexity.
+
+By best choice for now is this framework that is a Tensorflow implementation of Hourglass Networks but it is still too specific to human pose estimation.
+
+Do you have any suggestion of frameworks that best suit my application? i.e. simple keypoint detection.
+"
+"['neural-networks', 'classification', 'tensorflow']"," Title: Trying to separate spiral data with neural network, learning tensorflowBody: I am learning how to use tensorflow without keras, just to make sure I understand tensorflow directly.
+
+I created a spiral-looking dataset with 100 points of each class (200 total), and I created a neural network to classify it. For the life of me, I can't figure out how to get a good accuracy. Do you mind looking at my code to see what I did wrong?
+
+From what I've gleaned from various forums, it seems like if I do 4 hidden layers and 14 neurons per layer, I should be able to perfectly separate this dataset. I tried with a learning rate of 0.01, and 20k epochs.
+
+I've tried different combinations of activation functions (tanh, sigmoid, relu, even alternating between them), but the best that I've gotten is around 60 percent, whereas people with less layers and less neurons have gotten close to 90 percent.
+
+What I did NOT do is to create additional features (for example, r and theta), and this was intentional. I'm just curious to see if I can do this by looking at x and y alone.
+
+The code is pasted below, and it includes the code to create the data (and it plots the data).
+
+Thank you in advance!
+
+
+
+import numpy as np;
+import matplotlib.pyplot as plt;
+import tensorflow as tf;
+import random;
+
+# Part A
+# Gather data and plot
+def chooseTrainingBatch(X, Y, n, k):
+ indices = range(0,n)
+ chosenIndices = random.choices(indices,k=k)
+ batchX = X[chosenIndices, :]
+ batchY = Y[chosenIndices, :]
+ return (batchX, batchY)
+
+
+def doLinearClassification(n, learning_rate=1, epochs=20, num_hidden_layer_1=100, num_of_layers=4, batch_size = 20):
+ d = 1;
+ plt.figure();
+ X = np.zeros([2*n, 2]);
+ Y = np.zeros([2*n, 2]);
+ for t in np.arange(1,n+1,d):
+ r1 = 50 + 0.2*t;
+ r2 = 30 + 0.4*t;
+ phi1 = -0.06*t + 3;
+ phi2 = -0.08*t + 2;
+
+ x1 = r1 * np.cos(phi1);
+ y1 = r1 * np.sin(phi1);
+ x2 = r2 * np.cos(phi2);
+ y2 = r2 * np.sin(phi2);
+
+ plt.scatter(x1, y1, c='b');
+ plt.scatter(x2, y2, c='r');
+
+ X[t-1,0] = x1;
+ X[t-1,1] = y1;
+ Y[t-1,0] = 1;
+
+ X[n+t-1,0] = x2;
+ X[n+t-1,1] = y2;
+ Y[n+t-1,1] = 1;
+
+
+ # declare the training data placeholders
+ x = tf.placeholder(tf.float32, [None, 2])
+ y = tf.placeholder(tf.float32, [None, 2])
+
+ # Weights connecting the input to the hidden layer 1
+ W1 = tf.Variable(tf.random_normal([2, num_hidden_layer_1]))
+ b1 = tf.Variable(tf.random_normal([num_hidden_layer_1]))
+
+ # activation of hidden layer 1
+ hidden_1_out = tf.nn.relu(tf.add(tf.matmul(x, W1), b1))
+
+ last_hidden_out = hidden_1_out
+ for i in range(1,num_of_layers):
+ # weights connecting the hidden layer i-1 to the hidden layer i
+ next_W = tf.Variable(tf.random_normal([num_hidden_layer_1, num_hidden_layer_1]))
+ next_b = tf.Variable(tf.random_normal([num_hidden_layer_1]))
+
+ # activation of hidden layer
+ if (i%2 == 0):
+ next_hidden_out = tf.nn.tanh(tf.add(tf.matmul(last_hidden_out, next_W), next_b))
+ else:
+ next_hidden_out = tf.nn.tanh(tf.add(tf.matmul(last_hidden_out, next_W), next_b))
+
+ # update for next loop
+ last_hidden_out = next_hidden_out
+
+ # and the weights connecting the last hidden layer to the output layer
+ W_end = tf.Variable(tf.random_normal([num_hidden_layer_1, 2]))
+ b_end = tf.Variable(tf.random_normal([2]))
+
+ # activation of output layer
+ y_ = tf.nn.sigmoid(tf.add(tf.matmul(last_hidden_out, W_end), b_end))
+
+ # loss function
+ y_clipped = tf.clip_by_value(y_, 1e-10, 0.9999999)
+ cross_entropy = -tf.reduce_mean(tf.reduce_sum(y * tf.log(y_clipped) + (1 - y) * tf.log(1 - y_clipped), axis=1))
+
+ # add an optimiser
+ optimiser = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cross_entropy)
+
+ # finally setup the initialisation operator
+ init_op = tf.global_variables_initializer()
+
+ # define an accuracy assessment operation
+ correct_prediction = tf.equal(tf.argmax(y_, 1), tf.argmax(y, 1))
+ blah = tf.add(tf.matmul(last_hidden_out, W_end), b_end)
+ accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
+
+ # For the purpose of separating dataset, testing data is the entire set
+ testingX = X;
+ testingY = Y;
+
+ # start the session
+ with tf.Session() as sess:
+ # initialise the variables
+ sess.run(init_op)
+ for epoch in range(epochs):
+ this_cost = 0
+ trainingX, trainingY = chooseTrainingBatch(X, Y, n, batch_size)
+ _, c = sess.run([optimiser, cross_entropy], feed_dict={x: trainingX, y: trainingY})
+ this_cost += c
+ print(""Epoch:"", (epoch + 1), ""cost ="", ""{:.3f}"".format(this_cost))
+
+ #debugging statements
+ #this_y = sess.run(y, feed_dict={x: testingX, y: testingY})
+ #print(""y"")
+ #print(this_y)
+ #this_blah = sess.run(blah, feed_dict={x: testingX, y: testingY})
+ #print(""blah"")
+ #print(this_blah)
+ #this_y_ = sess.run(y_, feed_dict={x: testingX, y: testingY})
+ #print(""y_"")
+ #print(this_y_)
+ this_accuracy = sess.run(accuracy, feed_dict={x: testingX, y: testingY})
+ print(""Accuracy: "", this_accuracy)
+ return this_accuracy
+
+
+# this is the real thing
+#doLinearClassification(100, learning_rate = 0.01, epochs=20000, num_hidden_layer_1=14, num_of_layers=4, batch_size=20);
+
+# this is just to debug the code
+doLinearClassification(20, learning_rate = 0.01, epochs=200, num_hidden_layer_1=14, num_of_layers=4, batch_size=2);
+
+"
+"['neural-networks', 'natural-language-processing', 'generative-adversarial-networks', 'transformer']"," Title: How to train a transformer text-to-text model on counterexamples?Body: Is it possible to update the weights of a vanilla transformer model using counterexamples alongside examples?
+
+For example, from the PAWS data set, given the phrases ""Although interchangeable, the body pieces on the 2 cars are not similar."" and ""Although similar, the body parts are not interchangeable on the 2 cars."" we have the label 0 because it is a counterexample, whereas for the phrases ""Katz was born in Sweden in 1947 and moved to New York City at the age of 1."" and ""Katz was born in 1947 in Sweden and moved to New York at the age of one."" we have the label 1 because it is a positive example of a valid paraphrase.
+
+My goal is to use the transformer model to generate paraphrases, and I am attempting to build a GAN but could not find any references for updating the transformer text-to-text model using counterexamples.
+"
+"['machine-learning', 'reinforcement-learning', 'comparison', 'definitions', 'stationary-policy']"," Title: What is the difference between the definition of a stationary policy in reinforcement learning and contextual bandit?Body: A stationary policy is a function that maps a state to a probability distribution of actions.
+
+In a contextual bandit problem, a state itself does not include the history. But in a reinforcement learning problem, the history can be used to define a state. In this case, does the history include the rewards revealed thus far as well? If so, the policy is not stationary anymore I guess.
+
+As the title says, I think I am confused about the difference in the definitions of stationary (and/or non-stationary) policy between reinforcement learning and contextual bandit.
+"
+"['ai-design', 'training', 'game-theory', 'markov-decision-process', 'imperfect-information']"," Title: What is the state of the art AI training technique for imperfect information 2 player turn based games?Body: As far as I can tell (correct me if I'm wrong), Alphazero (with MCTS and neural network heuristic function RL) is the state of the art training method for turn based, deterministic, perfect information, complete information, two player, zero sum games.
+
+But what is the state of the art for turn based, imperfect information games, that have 2 players, complete information, and is zero sum? (Deterministic or stochastic.) Examples include Battleship and most 2 player card games.
+
+Are there standard games, or other tests by which this is measured? Is the criteria I offered for type of game not specific enough to narrow the answer down properly?
+
+If the state of the art involves supervised learning (data set of manually played games), then what's the state of the art for pure reinforcement learning, if there is one?
+"
+"['natural-language-processing', 'philosophy', 'semantics']"," Title: Does it matter if it's a bot or a human generating text? Doesn't it come down to the content?Body: It was noted today that automated text generation is advancing at a rapid pace, potentially accelerating.
+
+As bots become more and more capable of passing turing tests, especially in single iterations, such as social media posts or news blurbs, I have to ask:
+
+
+- Does it matter where a text originates, if the content is strong?
+
+
+Strength here is used in the sense of meaning. To elucidate my argument I'll present an example. (It helps to know the Library of Babel, an infinite memory array where every possible combination of characters exists.)
+
+
+An algorithm is set up to produce aphorisms. The overwhelming majority of the output is gibberish, but among the junk an incredibly profound observation emerges that changes the way people think about a subject or issue.
+
+
+Where the bot just spams social media, the aphorism in question is identified because it receives a high number of reposts by humans, who, in this scenario, provide the mechanism for finding the needle (the profound aphorism) in the haystack (the junk output).
+
+Does the value of the insight depend on the cognitive quality of the generator, in the sense of having to understand the statement?
+
+A real world example would be Game 2, Move 37 in the AlphaGo vs. Lee Sedol match.
+"
+['long-short-term-memory']," Title: Why does an LSTM cycle on initialisation?Body: I initialised an LSTM with Xavier initialisation, although I've found this occurs for all initialisations I have tested. When initialised, if the LSTM is tested with a random input, it will get stuck in a cycle, either over a few characters or just one. Example output:
+
+nhhbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
+
+f(b(bf(bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
+
+kk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,m
+
+
+I've also noticed the LSTM is particularly prone to this: even when trained, it has a tendency to get stuck in loops like this. It seems it has difficulty retaining context strongly enough to overpower the input, activation and output gates with only the forget gate. Is there an explanation for this?
+"
+"['philosophy', 'death']"," Title: Should AI be mortal by design?Body: There are the 3 Asimov’s laws:
+
+
+- A robot may not injure a human being or, through inaction, allow a
+human being to come to harm.
+- A robot must obey orders given to it by human beings, except where
+such orders would conflict with the first law.
+- A robot must protect its own existence as long as such protection
+does not conflict with the first or second law.
+
+
+These laws are based on morality, which assumes that robots have sufficient agency and cognition to make moral decisions.
+
+Additionally there are alternative laws of responsible robotics:
+
+
+- A human may not deploy a robot without the human–robot work
+system meeting the highest legal and professional standards of
+safety and ethics.
+- A robot must respond to humans as appropriate for their roles.
+- A robot must be endowed with sufficient situated autonomy to
+protect its own existence as long as such protection provides smooth
+transfer of control to other agents consistent the first and second
+laws
+
+
+Let us think beyond morality, consciousness, and the AI designer's professionalism in incorporating safety and ethics into the AI design.
+
+Should AI incorporate irrefutable parent rules for AI to be inevitably mortal by design?
+
+How can we assure that AI can be deactivated if necessary, in such a way that the deactivation procedure cannot be worked around by the AI itself, even at the cost of the AI's termination as its inevitable destiny?
+
+
+
+EDIT: to explain reasoning behind the main question.
+
+Technological solutions are often based on observing biology and nature.
+
+In evolutionary biology, for example, research results on bird mortality show a potential negative effect of telomere shortening (DNA) on lifespan in general.
+
+
+ telomere length (TL) has become a biomarker of increasing interest
+ within ecology and evolutionary biology, and has been found to predict
+ subsequent survival in some recent avian studies but not others.
+ (...) We performed a meta-analysis on these estimates and found an overall significant
+ negative association implying that short telomeres are associated with
+ increased mortality risk
+
+
+If such research is confirmed in general, then natural life expectancy is limited by design of its DNA, ie by design of its cell-level code storage. I assume this process of built-in mortality cannot be effectively worked around by a living creature.
+
+A similar design could be incorporated into any AI design to assure its vulnerability and mortality, in the sense that a conscious AI could otherwise recover, restore its full health state, and continue to run indefinitely.
+
+Otherwise a simple turn off switch could be disabled by the conscious AI itself.
+
+
+
+References
+
+Murphy, R. and Woods, D.D., 2009. Beyond Asimov: the three laws of responsible robotics. IEEE intelligent systems, 24(4), pp.14-20.
+
+Wilbourn, R.V., Moatt, J.P., Froy, H., Walling, C.A., Nussey, D.H. and Boonekamp, J.J. The relationship between telomere length and mortality risk in non-model vertebrate systems: a meta-analysis. Phil. Trans. R. Soc. B, 373.
+"
+"['classification', 'tensorflow', 'keras', 'objective-functions', 'regression']"," Title: TF Keras: How to turn this probability-based classifier into single-output-neuron label-based classifierBody: Here's a simple image classifier implemented in TensorFlow Keras (right click to open in new tab): https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/quickstart/advanced.ipynb
+
+I altered it a bit to fit with my 2-class dataset. And the output layer is:
+
+Dense(2, activation=tf.nn.softmax);
+
+
+The loss function and optimiser are still the same as in the example in the link above.
+
+loss_fn = tf.losses.SparseCategoricalCrossentropy();
+optimizer = tf.optimizers.Adam();
+
+
+I wish to turn it into a classifier with a single output neuron, as I have only 2 classes in the dataset, and a sigmoid handles the 2 classes well. I tried some combinations of output activation functions + loss functions + optimisers, but the network doesn't work any more (i.e. it doesn't converge).
+
+For example, this doesn't work:
+
+//output layer
+Dense(1, activation=tf.sigmoid);
+
+//loss and optim
+loss_fn = tf.losses.mse;
+optimizer = tf.optimizers.Adagrad(1e-1);
+
+
+Which combination of output activation + loss + optimiser should work for the single-output-neuron model? And generically, which loss functions and optimisers should pair well?
+"
+['reinforcement-learning']," Title: Can reinforcement learning be utilized for creating a simulation?Body: According to the definition, the AI agent has to play a game on its own. A typical domain is the blocksworld problem. The AI determines which action the robot in a game should execute, and a possible strategy for determining the action sequence is reinforcement learning. Colloquially speaking, reinforcement learning leads to an AI agent that can play games.
+
+Before a self-learning character can be realized, the simulation has to be programmed first. That is an environment which contains the rules for playing blocksworld or any other game. The environment is the house in which the AI character operates. Can the Q-learning algorithm be utilized to build the simulation itself?
+"
+"['machine-learning', 'deep-learning', 'decision-trees', 'image-segmentation']"," Title: “Outside-in” versus “Inside-out” machine learningBody: A little background... I’ve been on-and-off learning about data science for around a year or so, however, I started thinking about artificial intelligence a few years ago. I have a cursory understandings of some common concepts but still not much depth. When I first learned about deep learning, my automatic response was “that’s not how our minds do it.” Deep learning is obviously an important topic, but I’m trying to think outside the black box.
+I think of deep learning as being “outside-in” in that a model has to rely on examples to understand (for lack of a better term) that some dataset is significant. However, our minds seem to know when something is significant in the absence of any prior knowledge of the thing (i.e., “inside-out”).
+Here’s a thing:
+
+I googled “IKEA hardware” to find that. The point is that you probably don’t know what this is or have any existing mental relationship between the image and anything else, but you can see that it’s something (or two somethings). I realize there is unsupervised learning, image segmentation, etc., which deal with finding order in unlabeled data, but I think this example illustrates the difference between the way we tend to think about machine learning/AI and how our minds actually work.
+More examples:
+1)
+
+2)
+
+3)
+
+Let’s say that #1 is a stock chart. If I were viewing the chart and trying to detect a pattern, I might mentally simplify the chart down to #2. That is, the chart can be simplified into a horizontal segment and a rising segment.
+For #3, let’s say this represents log(x). Even though it’s not a straight line, someone with no real math background could describe it as an upward slope that it decreasing as the line gets higher. That is, the line can still be reduced to a small number of simple ideas.
+I think this simplification is the key to the gap between how our minds work and what currently exists in AI. I’m aware of Fourier transforms, polynomial regression, etc., but I think there’s a more general process of finding order in sensory data. Once we identify something orderly (i.e., something that can’t reasonably be random noise), we label it as a thing and then our mental network establishes relationships between it and other things, higher order concepts, etc.
+I’ve been trying to think about how to use decision trees to find pockets of order in data (to no avail yet - I haven’t figured out to apply it to all of the scenarios above), but I’m wondering if there are any other techniques or schools of thought that align with the general theory.
+"
+"['deep-learning', 'philosophy', 'agi']"," Title: Can digital computers understand infinity?Body: As human beings, we can think about infinity. In principle, if we have enough resources (time, etc.), we can count infinitely many things (whether abstract, like numbers, or real).
+
+For example, at the very least, we can consider the integers. We can, in principle, think of and ""understand"" infinitely many numbers that are displayed on the screen. Nowadays, we are trying to design artificial intelligence that is at least as capable as a human being. However, I am stuck with infinity. I am trying to find a way to teach a model (deep or not) to understand infinity. I define ""understanding"" in a functional way. For example, if a computer can differentiate 10 different numbers or things, it means that it really understands these different things somehow. This is the basic, straightforward approach to ""understanding"".
+
+As I mentioned before, humans understand infinity because they are capable, in principle, of counting infinitely many integers. From this point of view, if I want to create a model (which is actually a function in an abstract sense), this model must differentiate infinitely many numbers. Since computers are digital machines with limited capacity to model such an infinite function, how can I create a model that differentiates infinitely many integers?
+
+For example, we can take a deep learning vision model that recognizes numbers on a card. This model must assign a number to each different card to differentiate each integer. Since there are infinitely many integers, how can the model assign a different number to each integer, like a human being, on a digital computer?
+
+If I take into account real numbers, the problem becomes much harder.
+
+What is the point that I am missing? Are there any resources that focus on the subject?
+"
+"['machine-learning', 'applications', 'transfer-learning']"," Title: What are the real-life applications of transfer learning?Body: What are the real-life applications of transfer learning in machine learning? I am particularly interested in industrial applications of the concept.
+"
+"['genetic-algorithms', 'optimization', 'applications', 'evolutionary-algorithms']"," Title: What are examples of optimization problems that can be solved using genetic algorithms?Body: I'm trying to learn how genetic algorithms can solve optimization problems. I have already learned how genetic algorithms can solve the knapsack, TSP and set cover problems. I'm looking for some other similar optimization problems, but I have not found any.
+Would you please mention some other famous optimization problems that can be solved by using genetic algorithms?
+"
+"['neural-networks', 'deep-learning', 'residual-networks']"," Title: What is the benefit of using identity mapping layers in deep neural networks like ResNet?Body: As I understand it, ResNet has some identity mapping layers, whose task is to produce an output that is the same as the input of the layer. ResNet solved the problem of accuracy degradation. But what is the benefit of adding identity mapping layers in intermediate layers?
+What's the effect of these identity layers on the feature vectors that will be produced in the last layers of the network? Do they help the network produce a better representation of the input? If this statement is correct, what is the reason?
+"
+"['philosophy', 'agi', 'history', 'self-awareness']"," Title: Why is awareness of itself such a point when speaking about AI?Body: Why is awareness of itself such an important point when speaking about AI? Does reaching such a level always mean a starting point for apocalyptic nightmares, or is it just a classical example of a really abstract thing that a machine cannot easily possess?
+
+I would sleep far more calmly at night if it were the latter, and I understand the former does not automatically happen. The main thing I would like to discover is the starting point: which approach came first historically? Or is there another viewpoint on the historical first occurrence of the term self-awareness?
+"
+"['reinforcement-learning', 'reference-request', 'pomdp']"," Title: Is there a way to do reinforcement learning in POMDP?Body: Are there any algorithms to use reinforcement learning to learn optimal policies in partially observable Markov decision process (POMDP) i.e. when the state is not perfectly observed? More specifically, how does one update the belief state using Bayes' rule when the update Q kernel is not known?
+"
+"['machine-learning', 'deep-learning', 'computer-vision', 'reference-request', 'image-processing']"," Title: Are there any deep learning techniques to use the content of an image for another image?Body: Are there any machine learning or deep learning techniques to use the content of an image for another image?
+More specifically, suppose I take a photo of a notebook. I get the angle, lighting, and perspective perfect. Now I copy an image I found online that contains text or handwriting. Is there a machine learning technique that would now draw this writing in my notebook?
+Just asking if it's possible before I attempt to hire someone.
+"
+"['reinforcement-learning', 'policy-gradients', 'proximal-policy-optimization', 'trust-region-policy-optimization']"," Title: Are these two TRPO objective functions equivalent?Body: In the TRPO paper, the objective to maximize is (equation 14)
+$$
+\mathbb{E}_{s\sim\rho_{\theta_\text{old}},a\sim q}\left[\frac{\pi_\theta(a|s)}{q(a|s)} Q_{\theta_\text{old}}(s,a) \right]
+$$
+
+which involves an expectation over states sampled with some density $\rho$, itself defined as
+$$
+\rho_\pi(s) = P(s_0 = s)+\gamma P(s_1=s) + \gamma^2 P(s_2=s) + \dots
+$$
+
+This seems to suggest that later timesteps should be sampled less often than earlier timesteps, or equivalently sampling states uniformly in trajectories but adding an importance sampling term $\gamma^t$.
+
+However, the usual implementations simply use batches made of truncated or concatenated trajectories, without any reference to the location of the timesteps in the trajectory.
+
+This is similar to what can be seen in the PPO paper, which transforms the above objective into (equation 3)
+$$
+\mathbb{E}_t \left[ \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_\text{old}}(a_t|s_t)} \hat A_t \right]
+$$
+
+It seems that something is missing in going from $\mathbb{E}_{s\sim \rho}$ to $\mathbb{E}_t$ in the discounted setting.
+Are they really equivalent?
+"
+['tensorflow']," Title: Indoor positioning with variable number of distance measurements in tensorflowBody: Currently I have a setup where I'm determining the position of a transmitter using the RSSI of 4 receivers. Its a simple feed-forward network with some hidden layers, where the input is the RSSI values, and the output is a 2d coordinate.
+
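+For reference, my current model is essentially something like this (a rough Keras sketch; the exact layer sizes are not important and are just placeholders):
+
+import tensorflow as tf
+
+# current setup: 4 RSSI values in, (x, y) position out
+model = tf.keras.Sequential([
+    tf.keras.layers.Dense(32, activation='relu', input_shape=(4,)),
+    tf.keras.layers.Dense(32, activation='relu'),
+    tf.keras.layers.Dense(2)   # predicted 2D coordinate
+])
+model.compile(optimizer='adam', loss='mse')
+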
+Now, if I decide to add/remove receivers, I have to train the network again, since the input size changes. This is not ideal, since the receivers can move around, disappear, etc. I have looked at some alternatives, but being pretty new to machine learning, it's difficult to pick which direction to go.
+
+I have looked at a potential solution (stolen from another question), but I'm lost on how to implement it using TensorFlow:
+
+
+
+Any help is appreciated.
+"
+"['reference-request', 'philosophy', 'agi', 'chinese-room-argument']"," Title: What are examples of thought experiments against or in favour of strong AI, apart from the Chinese room argument?Body: The Chinese Room argument against strong AI overlooks the fact that ""the man in the room"" is acting as a macro-scale ""neurotransmitter"" of the larger system in which he resides. It does not rule out strong AI, it simply reduces to an enigmatic question: where does understanding ""reside"" and how does it epiphenomenally emerge?
+
+What are other examples of thought experiments against or in favor of strong AI (apart from the Chinese room argument) or extensions or refutations to known experiments?
+"
+"['neural-networks', 'reinforcement-learning']"," Title: How can I teach a computer to play N64 games using Neural Nets?Body: I would like to work on a project where I teach an NN to play N64 games.
+To my current understanding, I would need an emulator?
+
+I can do the machine learning side of it; I'm just unsure how I can give the NN access to the game's controls, such as left, right, up or down.
+
+Where could I find more information on doing so, and is using an emulator the right path to take?
+"
+"['comparison', 'knowledge-representation', 'ontology', 'knowledge-base', 'semantic-networks']"," Title: What is the difference between a semantic network and an ontology?Body: What is the difference between a semantic network and an ontology? How are they related? I have not found any article that describes how these two concepts are related.
+"
+"['neural-networks', 'convolutional-neural-networks', 'ai-design']"," Title: An architecture for classifying distance from origin for a sum of vectors?Body: I built a three-layer neural network (first is 1D convolutional and the remaining two are linear). It takes an input of 5 angles in radians, and outputs two numbers from 0 to 1, which are respectively the probability of failure or success. The NN is trained in a simulation.
+The simulation goes this way: it takes 5 angles in radians and calculates the vector sum of 5 vectors, each having $x$ as its magnitude and the corresponding angle $\alpha$ (taken from the input) as its direction. It returns $1$ if the vector sum has a magnitude greater than $y$, or $0$ if it is less than $y$.
+My intention is to be able to distinguish sequences of angles that generate a vector sum with magnitude greater than $y$ from the ones that don't.
+Which would be the best configuration to achieve this? Is the configuration I set up (1D convolution layer + 2 linear layers) efficient? If so, would it be easy to find the right size for the convolution? Or should I just remove it?
+I noticed that if I change the order of the input angles the output of the simulation will be the same. Is there a particular configuration you should use when dealing with these cases?
+"
+['object-detection']," Title: How mAP is unfair evaluation metric for Object Detection?Body: The following figure is from the last page in YOLOv3 paper highlighting how mAP is unfair metric for evaluating Object Detectors:
+
+The figure shows two hypothetical Object Detector results which the author says give the same perfect mAP, while visually the first detector is clearly more accurate than the other.
+
+According to my understanding, the two detectors do not give the same mAP. This is how I calculate it for each detector:
+
+Detector 1, 'Dog' class AP table:
+ ______________________________________
+| Object | True? | Precision | Recall |
+|_________|_______|___________|________|
+| Dog_99% | Yes | 1 | 1 |
+|_________|_______|___________|________|
+Hence, AP_dog = 1
+
+Detector 1, 'Person' class AP table:
+ ________________________________________
+| Object | True? | Precision | Recall |
+|___________|_______|___________|________|
+|Person_99% | Yes | 1 | 1 |
+|___________|_______|___________|________|
+Hence, AP_person = 1
+And by continuing doing so for the other 7 classes in the dataset, mAP=1.
+
+Detector 2, 'Dog' class AP table:
+ ______________________________________
+| Object | True? | Precision | Recall |
+|_________|_______|___________|________|
+| Dog_48% | Yes | 1 | 1 |
+|_________|_______|___________|________|
+Hence, AP_dog = 1
+
+Detector 2, 'Bird' class AP table:
+ _______________________________________
+| Object | True? | Precision | Recall |
+|__________|_______|___________|________|
+| Bird_90% | Yes | 1 | 1 |
+| Bird_89% | No | 0.5 | 1 |
+|__________|_______|___________|________|
+Hence, AP_bird = 0.75
+And by continuing doing so for the other 7 classes in the dataset, mAP is less than 1 because AP for at least one class is less than one (AP_bird).
+
+
+Hence, according to my understanding, mAP for the first detector is 1, and for the second detector it is less than 1. What is the mistake I'm making in the calculation? Or is there some assumption in the paper that I'm not considering?
+"
+"['neural-networks', 'pattern-recognition', 'signal-processing']"," Title: Can a neural network be used to detect sine waves?Body: I am recording the vibrations of an AC Motor (50Hz Europe) and I am trying to find out whether it is powered on or not. When I record these vibrations, I basically get the vibration values ($-1$ to $+1$) over time.
+
+I would like to develop a program to detect the presence of a 50Hz sine wave on a steady stream of input data. I will have $X$ and $Y$ measurements, where $X$ represents amplitude and $Y$ the time (sampled at 100Hz - it is possible to increase the sample rate to 200Hz or 400Hz at max)
+
+Is this a task suited for a neural network, and if so, would it be less efficient than other means of detection?
+"
+"['genetic-algorithms', 'selection-operators', 'stochastic-universal-sampling']"," Title: How is the distance between pointers in Stochastic Universal Sampling determined?Body: I'm studying about different selection methods in genetic algorithms. My question is about the Stochastic Universal Sampling (SUS) selection method. I know that each individual will occupy a segment of the line according to its fitness value and then equally spaced pointers will be placed over this line.
+I want to know how the distance between pointers is determined. I have seen 1/6 and 1/4 as the distance between pointers. I want to choose the number of pointers dynamically according to the situation. I want to know what conditions or factors affect the determination of this distance. For example, when do we decide to choose 1/4 as distance? I want to know if it is possible to change the number of samples in each iteration according to different conditions or situations. If so, what are these conditions?
+"
+"['math', 'backpropagation', 'gradient-descent', 'derivative']"," Title: Backpropagation: Chain Rule to the Third Last LayerBody: I'm trying to solve dLoss/dW1. The network is as in picture below with identity activation at all neurons:
+
+
+
+Solving dLoss/dW7 is simple, as there's only one path to the output:
+
+$Delta = Out-Y$
+
+$Loss = abs(Delta)$
+
+In the case when Delta >= 0, the partial derivative of Loss with respect to W7 is:
+
+$\dfrac{dLoss}{dW_7} = \dfrac{dLoss}{dOut} \times \dfrac{dOut}{dH_4} \times \dfrac{dH_4}{dW_7} \\
+= \dfrac{d(Out-Y)}{dOut} \times \dfrac{d(H_4W_{13} + H_5W_{14})}{dH_4} \times \dfrac{d(H_1W_7 + H_2W_8 + H_3W_9)}{dW_7} \\
+= 1 \times W_{13} \times H_1$
+
+However, when solving dLoss/dW1, the situation is very different: there are 2 chains to W1, through W7 and through W10. How should the chain rule for $\dfrac{dLoss}{dW_1}$ look in this case?
+
+Furthermore, at an arbitrary layer, with all outputs of all layers already calculated plus all gradients of weights on the right side also calculated, what should a formula for $\dfrac{dLoss}{dW}$ be?
+"
+"['neural-networks', 'deep-learning', 'python', 'prediction', 'regression']"," Title: How to choose the suitable Neural Network Architecture for Regression TasksBody: so I'm working on a Project where I want to predict the Vehicle Position from the Vehicle Data like speed, acceleration etc.. now the data that I have comes also with a timestamp for each sample ( I mean that I have also a timestamp feature).
+
+At first, I thought that I should get rid of the timestamp feature because it is not relevant to my project; logically, I will not need a timestamp to predict the vehicle position, so it didn't make sense to me when I first took a look at the dataset. I thought other features like speed, acceleration, braking pressure, etc. are more important, and I also thought that the solution for this problem would be to use a normal deep NN or an RBFNN for making this prediction. Recently, I read some papers that show how a convolutional NN can also be used for regression, and that made me unsure which architecture my project needs. This week I also watched a tutorial where an RNN/LSTM was implemented for regression tasks.
+
+Now I'm very confused about which architecture I should use for my project. I also noticed that, if I used that timestamp feature, I could maybe use an RNN/LSTM network for this task, but I don't know if my dataset can be seen as a time-series dataset; the vehicle position doesn't depend on time, as far as I can tell.
+
+Hopefully someone can answer me based on experience. It would also be great to have some papers or references where I can look for more.
+"
+"['deep-learning', 'activation-functions']"," Title: Can multiple activation functions be replaced with a single activation function?Body: I'm just started to learn deep learning and I have a question about this neural network:
+
+
+
+I think $h_1$, $h_j$ and $h_n$ are perceptrons. So, if they are perceptrons, all of them will have an activation function.
+
+I'm wondering if it is possible to have only one activation function, and sum the output to all of the perceptrons, and pass that sum to that activation function. And the output of this activation function will be $y$.
+
+I will have this network, where $H1$, $Hj$ and $Hn$ don't have activation function:
+
+
+The input to the activation function will be the sum of the outputs of $H1$, $Hj$ and $Hn$, without their being processed by an activation function first.
+
+Is that possible (or is it a good idea)?
+"
+['feedforward-neural-networks']," Title: Matrix-output for FFNN?Body: Turns out that it looks like I will be approximating a 100x10 matrix in my project thesis. II have the following equation
+
+$y = Dx$,
+
+where $y$ is $(100 \times 1)$, $D$ is $100 \times 10$ and $x$ is $10 \times 1$
+
+It is the transformation matrix $D$ that I will be approximating by iterating over quite a lot of pairs (x, y). I was wondering if it is possible to produce a $(100 \times 10)$ matrix as the output of an FFNN architecture, or are there other ways to do it? As this is not an image or anything similar that gives rise to pooling etc., my guess is that a CNN is not ideal here.
+
+So, TL;DR: Is it easy to use a feed-forward architecture to approximate a matrix?
+
+EDIT: I figured out that $D$ obviously doesn't need to be a matrix output; I just need 100 output nodes. Thanks.
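+
+To make the EDIT concrete, I think a single linear layer with no bias is literally $y = Dx$ with a learnable $D$; a rough sketch (assuming Keras here, with the sizes from my problem):
+
+import tensorflow as tf
+
+# one linear map from 10 inputs to 100 outputs, no bias: exactly y = D x
+model = tf.keras.Sequential([
+    tf.keras.layers.Dense(100, use_bias=False, input_shape=(10,))
+])
+model.compile(optimizer='adam', loss='mse')
+# model.fit(X, Y)  # X: (N, 10), Y: (N, 100); the learned (10, 100) kernel is an estimate of D transposed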
+"
+"['image-recognition', 'reference-request', 'voice-recognition', 'ai-security', 'adversarial-ml']"," Title: Is there any research on the development of attacks against artificial intelligence systems?Body: Is there any research on the development of attacks against artificial intelligence systems?
+
+For example, is there a way to generate a letter ""A"" which every human being in this world can recognize but which, if shown to a state-of-the-art character recognition system, the system will fail to recognize? Or spoken audio which can be easily recognized by everyone but will fail on a state-of-the-art speech recognition system?
+
+If such a thing exists, is this technology a theory-based science (mathematically proven) or an experimental science (randomly adding different types of noise, feeding it into the AI system, and seeing how it behaves)? Where can I find such material?
+"
+['neural-networks']," Title: Any explanation why multiple linear layers work better than a single linear layer in practice?Body: It is a well-known math fact that composition of linear/affine transformations is still linear/affine. For a naive example,
+$\textbf{A}_1\textbf{A}_2\textbf{x}$ is simply $\textbf{A}\textbf{x}$ where $\textbf{A}=\textbf{A}_1\textbf{A}_2$
+Does anyone know why, in practice, multiple linear layers tend to work better, even though they are mathematically equivalent to a single linear layer? Any reference is appreciated!
+"
+"['backpropagation', 'objective-functions', 'pytorch']"," Title: Confused with backprop in pytorch with BCE lossBody: I've a prediction matrix(P) of dimension 3x3 and one-hot encoded label matrix(L) of dimension 3x3 as shown below.
+
+ |0.5 0.3 0.1| |1 0 0|
+P = |0.3 0.2 0.1| L = |0 1 0|
+ |0.2 0.5 0.8| |0 0 1|
+
+
+each column in 'P' corresponds to prediction of a label in 'L'
+
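+Here is roughly what I tried (a minimal sketch; the tensors are just the matrices above):
+
+import torch
+import torch.nn as nn
+
+P = torch.tensor([[0.5, 0.3, 0.1],
+                  [0.3, 0.2, 0.1],
+                  [0.2, 0.5, 0.8]])
+L = torch.tensor([[1., 0., 0.],
+                  [0., 1., 0.],
+                  [0., 0., 1.]])
+
+criterion = nn.BCELoss()   # default reduction is 'mean', i.e. an average over all 9 elements
+loss = criterion(P, L)
+print(loss.item())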
+
+- How is the BCELoss calculated using PyTorch? My experiment of giving these two matrices as parameters to the loss function (as in the snippet above) yielded poor results, and PyTorch's documentation doesn't disclose how the loss calculation is done for this case.
+- How is the loss averaged for each instance and across a batch?
+- If the loss is calculated column-wise and averaged for each instance and across the batch, then how can the loss be backpropagated in PyTorch?
+
+
+Thanks in advance.
+"
+"['reference-request', 'adversarial-ml', 'algorithmic-trading', 'adversarial-attacks']"," Title: How do you game an automatic trading system by messing with data, as opposed to hacking the algorithm itself?Body: There was a recent question on adversarial AI applications, which led me to start digging around.
+Here my interest is not general, but specific:
+How do you game an automatic trading system by messing with data, as opposed to hacking the algorithm itself?
+"
+"['machine-learning', 'time-complexity', 'pac-learning', 'homework']"," Title: Prove that in such cases, it is possible to find an ERM hypothesis for $H_n$ in the unrealizable case in time $O(mnm^{O(n)})$Body: Let $H_1$ , $H_2$ ,... be a sequence of hypothesis classes for binary classification.
+
+Assume that there is a learning algorithm that implements the ERM rule in the realizable case such that the output hypothesis of the algorithm for each class $H_n$ only depends on $O(n)$ examples out of the training set. Furthermore, assume that such a hypothesis can be calculated given these $O(n)$ examples in time $O(n)$, and that the empirical risk of each such hypothesis can be evaluated in time $O(mn)$.
+
+For example, if $H_n$ is the class of axis aligned rectangles in $R^n$ , we saw that it is possible to find an ERM hypothesis in the realizable case that is defined by at most $2n$ examples.
+
+Prove that in such cases, it is possible to find an ERM hypothesis for $H_n$ in the unrealizable case in time $O(mnm^{O(n)})$.
+"
+"['machine-learning', 'computational-learning-theory', 'notation', 'books']"," Title: What does the notation $[m]=\{1, \ldots, m\}$ mean in the equation of the empirical error?Body: The empirical error equation given in the book Understanding Machine Learning: From Theory to Algorithms is
+
+$$L_S(h) = \frac{\left|\{ i \in [m] : h(x_i) \neq y_i \}\right|}{m}, \qquad \text{where } [m] = \{1, \ldots, m\}$$
+
+My intuition for this equation is: the total number of wrong predictions divided by the total number of samples $m$ in the given sample set $S$ (correct me if I'm wrong). But, in this equation, $[m]$ stands for $\{ 1, \dots, m \}$. How is this actually calculated, as I thought $m$ should be one number (the size of the sample)?
+"
+"['neural-networks', 'deep-learning', 'backpropagation', 'gradient-descent', 'gradient']"," Title: Is the gradient at a layer independent of the activations of the previous layers?Body: Is the gradient at a layer (of a feed-forward neural network) independent of the activations of the previous layers?
+
+I read this in a paper titled Mean Field Residual Networks: On the Edge of Chaos (2017). I am not sure how far this is true, because the error depends on those activations.
+"
+"['machine-learning', 'deep-learning', 'comparison']"," Title: Is machine learning required for deep learning?Body: The answers to this Quora question say it's OK to ignore machine learning and start right away with deep learning.
+
+Is machine learning required, or at least useful, for understanding deep learning (theoretically and practically)? Can I start right away with deep learning, or should I cover machine learning first? In what way is machine learning useful for deep learning? (Leave the mathematics part aside - I'm OK with it.)
+"
+['training']," Title: How to train a model to extract custom and unknown entitiesBody: I'm trying to figure out how to extract specific text from an utterance by a user.
+
+I need to extract ""unknown"" text from a short and simple text. In this case, the user wants to create a list. everything in the {} is unknown text. As it doesn't belong to a specific entity such as food, athletes, movies, etc.
+
+
+
+ - create a new {groceries} list
+ - create a list {movies}
+ - create a new list {movies}
+ - create a list and call it {books}
+ - create a new list and give it the name {stamps}
+ - create a list with the title {red ketchup}
+ - create another list called {rotten food}
+
+
+
+the above list is but a small sample of all the different ways that a user can say he wants to create a list.
+
+In everything that I have seen, it's all based on existing entities for the NER and when someone says that it's custom, I found that it just means we have to train a specific set of words and hope for the best. If I add one more word that isn't trained, it fails to get the data.
+
+But in this case, the user can say anything such as ""old shoes"", ""schools I want to go to"", ""Keanu Reeves movies"". So I cannot see how I could possibly train it.
+
+With Spacy, I followed this example (https://raw.githubusercontent.com/explosion/spaCy/master/examples/training/train_intent_parser.py) and it mostly works in getting the proper titles. However, I have to train it for every different phrase to work.
+
+For example, if a user says
+
+
+ create a beautiful new list and give it the name {stamps}
+
+
+the word beautiful causes it to fail and now I have to train for that as well. At this rate, we are looking at millions of phrases to train.
+
+Before spaCy, we tried Dialogflow and Rasa. At each point, it's about training phrases, but the more we trained, the more one thing worked while another broke.
+
+At this point, we have tried and overall had good intent detection success but when it comes to extracting data such as this, I'm starting to look like a deer in a headlight.
+
+We are new to NLP and, while we've made a lot of good progress over the past few weeks, we cannot seem to find any articles written on this specific problem or on whether it can be solved. Dialogflow has the concept of an ""any"" entity, but they recommend avoiding it, and it only works 2 out of 3 times when things get complicated.
+
+The goal is to detect which of these words is the title, based on training. Can it be done? And if so, what's the approach?
+
+Any code, hints or articles that might get us started would be appreciated.
+"
+"['machine-learning', 'performance']"," Title: What will change when workstations will have ARM Machine Learning Processors onboard?Body: lately we read that many manufacturers are forcing ARM architectures to be used on future workstations. One of ARM's recent announcements is a machine learning processor. What will change in terms of computing performance if ARM architectures become new standard, and these kinds of ML-focused chips are found in most devices?
+"
+"['reinforcement-learning', 'sequence-modeling', 'machine-translation', 'online-learning']"," Title: Best approach for online Machine Translation with few hundred of samples?Body: I want to implement a model that improves itself with the passage of time.
+
+My main task is to build a machine translator (from English to Urdu). The problem I am facing is that I have very little data available to train on. Even if I create a corpus, there is still a possibility of that corpus having poor translations due to outdated word choices for my native language.
+
+I was thinking of creating a model which predicts an output, and the user tells it whether the output is correct or not, or maybe suggests a better translation.
+
+Now I have two options.
+
+
+- Take that input from end user, append it to my dataset and retrain the model. (I don't know whether it is even possible or not at production level).
+- The second is to feed that data back into the previous system. So far, I have only come across online learning and reinforcement learning (Q-learning; my data is very small, and even with users training it, it is still not going to reach millions of sentences).
+
+
+Am I on the right track, and how can I progress with either of these two options? Is there any prebuilt solution similar to this?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'pytorch', 'learning-rate']"," Title: Is this learning rate schedule increasing the learning rate?Body: I was reading a PyTorch code then I saw this learning rate scheduler:
+
+def warmup_lr_scheduler(optimizer, warmup_iters, warmup_factor):
+ """"""
+ Learning rate scheduler
+ :param optimizer:
+ :param warmup_iters:
+ :param warmup_factor:
+ :return:
+ """"""
+ def f(x):
+ if x >= warmup_iters:
+ return 1
+ alpha = float(x) / warmup_iters
+ return warmup_factor * (1 - alpha) + alpha
+
+ return torch.optim.lr_scheduler.LambdaLR(optimizer, f)
+
+
+and this is where the function is called:
+
+if epoch == 0:
+ warmup_factor = 1. / 1000
+ warmup_iters = min(1000, len(data_loader) - 1)
+
+ lr_scheduler = utils.warmup_lr_scheduler(optimizer, warmup_iters, warmup_factor)
+
+
+As I understand it, it gradually increases the learning rate until it reaches the initial learning rate. Am I correct? Why do we need to increase the learning rate? As far as I know, for better learning in neural networks we decrease the learning rate.
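+
+If I plug my numbers (warmup_factor = 1/1000, warmup_iters = 1000) into f, the multiplier applied to the initial learning rate seems to be:
+
+f(0)    = 0.001 * (1 - 0.0) + 0.0 = 0.001    # lr starts at initial_lr / 1000
+f(500)  = 0.001 * (1 - 0.5) + 0.5 = 0.5005   # roughly half the initial lr midway through warmup
+f(1000) = 1                                  # full initial lr from here on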
+"
+"['neural-networks', 'linear-regression', 'hidden-layers', 'regression', 'forecasting']"," Title: How do we choose the activation function for each hidden node?Body: I am new to neural networks. I would like to use them as a fitting or forecasting method.
+
+A simple NN model that does not contain hidden layers, that is, one where the input nodes are directly connected to the output nodes, represents a linear model. Nonlinearity begins to appear in an ANN model when we have hidden nodes, to which a nonlinear function is assigned and whose weights are determined by minimization.
+
+How do we choose the non-linear activation function that should be assigned to each hidden neuron?
+"
+"['neural-networks', 'activation-functions', 'neurons']"," Title: Do neurons of a neural network model a linear relationship?Body: I'm certain that this is a very naive question, but I am just beginning to look more deeply at neural networks, having only used decision tree approaches in the past. Also, my formal mathematics training is more than 30 years in the past, so please be kind. :)
+
+As I'm reading François Chollet's book Deep Learning with Python, I'm struck by the fact that we appear to be effectively treating the weights (kernel and biases) as terms in the standard linear equation ($y=mx+b$). On page 72 of the book, the author writes
+
+output = dot(W, input) + b
+output = (output < 0 ? 0 : output)
+
+
+Am I reading too much into this, or is this correct (and so fundamental I shouldn't be asking about it)?
+"
+"['deep-learning', 'natural-language-processing']"," Title: Why hasn't deep learning been used for word level alignment?Body: I've been exploring word-level alignments tools such as MGIZA and it seems to me that there hasn't been any new tool for this problem. Are neural networks not suitable to solve this problem or simply no interest in the area to build new tools?
+"
+"['deep-learning', 'object-recognition', 'unsupervised-learning']"," Title: Are there methods that allow deep networks to learn object categorization in a self-supervised way?Body: When training a deep network to learn object classification from a set like ImageNet, we minimize the cross entropy between the ground truth and the predicted categories. This is done in a supervised way. It is my understanding that you can separate categories in an unsupervised way using principal component analysis, but I have never seen that in a deep network. I am curious if this can be done easily in the last case. One possible way to do this would be to minimize a loss that favors categorization into one-hot vectors (this would only guarantee that an image is classified into a single category, rather that the correct category, though). Has this been done, or is there any reason why not?
+"
+"['intelligence-testing', 'google', 'captcha']"," Title: How is the reCaptcha useful for Google?Body: I am wondering where Google uses the result from deep learning of reCaptcha (how can a system that knows to recognize street signs is useful somewhere? how they profit from it?)
+"
+"['autonomous-vehicles', 'robots', 'signal-processing']"," Title: How can I combine the readings of multiple lidars into 1 point cloud?Body: I have a car with 8 lidars, each with a field of view of 60 degrees. My car looks like this:
+
+
+
+How can I merge all the lidar readings into 1 point cloud?
+"
+"['deep-learning', 'terminology', 'image-recognition', 'objective-functions', 'policy-gradients']"," Title: In image classification, why do we usually minimize a cost function rather than maximizing it?Body: I was watching a video about policy gradients by Andrej Karpathy. At 10:00, it shows an equation for supervised learning for image classification.
+$$\max\sum _{i} \log p(y_i \mid x_i)$$
+I have worked with image classification models before, but I always minimized a cost function (aka loss function). I have also never seen someone maximizing a cost function for image classification in the wild.
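+For context, the loss I normally minimize in those models is the negative log-likelihood (cross-entropy), which looks like the same expression with the sign flipped:
+$$\min\sum_{i} -\log p(y_i \mid x_i)$$
+so I suspect the two are closely related, but I would like to understand the convention.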
+
+- So, what are the advantages of minimizing a loss function over maximizing one in image classification?
+
+- Other than RL, which problems do we solve by maximizing a cost function?
+
+
+"
+"['convolutional-neural-networks', 'long-short-term-memory']"," Title: What are the standard problems for CNNs and LSTMs?Body: What are the standard (or baseline) problems (or at least common ones) for CNNs and LSTMs? As an example, for a feed-forward neural net, a common problem is the XOR problem.
+
+Is there a standard problem like this for CNNs and LSTMs? I think for a CNN the standard test is to try it on MNIST, but I'm not sure of an LSTM.
+"
+"['deep-learning', 'reinforcement-learning', 'computer-vision', 'terminology', 'robotics']"," Title: What are sim2sim, sim2real and real2real?Body: Recently, I always hear about the terms sim2sim, sim2real and real2real. Will anyone explain the meaning/motivation of these terms (in DL/RL research community)?
+
+What are the challenges in this research area?
+
+Anything intuitive would be appreciated!
+"
+"['reinforcement-learning', 'q-learning', 'value-iteration', 'policy-iteration', 'frozen-lake']"," Title: Why is my implementation of Q-learning not converging to the right values in the FrozenLake environment?Body: I am trying to learn tabular Q learning by using a table of states and actions (i.e. no neural networks). I was trying it out on the FrozenLake environment. It's a very simple environment, where the task is to reach a G
+starting from a source S, avoiding holes H, and just following the frozen path, which is F. The $4 \times 4$ FrozenLake grid looks like this:
+SFFF
+FHFH
+FFFH
+HFFG
+
+I am working with the slippery version, where the agent, if it takes a step, has an equal probability of either going in the direction it intends or slipping sideways perpendicular to the original direction (if that position is in the grid). Holes are terminal states and a goal is a terminal state.
+Now I first tried value iteration which converges to the following set of values for the states
+[0.0688909 0.06141457 0.07440976 0.05580732 0.09185454 0. 0.11220821 0. 0.14543635 0.24749695 0.29961759 0. 0. 0.3799359 0.63902015 0. ]
+
+I also coded policy iteration, and it also gives me the same result. So I am pretty confident that this value function is correct.
+Now, I tried to code the Q learning algorithm, here is my code for the Q learning algorithm
+def get_action(Q_table, state, epsilon):
+ """
+ Uses e-greedy to policy to return an action corresponding to state
+
+ Args:
+ Q_table: numpy array containing the q values
+ state: current state
+ epsilon: value of epsilon in epsilon greedy strategy
+ env: OpenAI gym environment
+ """
+ return env.action_space.sample() if np.random.random() < epsilon else np.argmax(Q_table[state])
+
+
+def tabular_Q_learning(env):
+ """
+ Returns the optimal policy by using tabular Q learning
+
+ Args:
+ env: OpenAI gym environment
+
+ Returns:
+ (policy, Q function, V function)
+ """
+
+ # initialize the Q table
+ #
+ # Implementation detail:
+ # A numpy array of |x| * |a| values
+
+ Q_table = np.zeros((env.nS, env.nA))
+
+ # hyperparameters
+ epsilon = 0.9
+ episodes = 500000
+ lr = 0.81
+
+
+ for _ in tqdm_notebook(range(episodes)):
+ # initialize the state
+ state = env.reset()
+
+ if episodes / 1000 > 21:
+ epsilon = 0.1
+
+ t = 0
+ while True: # for each step of the episode
+ # env.render()
+ # print(observation)
+
+ # choose a from s using policy derived from Q
+ action = get_action(Q_table, state, epsilon)
+
+ # take action a, observe r, s_dash
+ s_dash, r, done, info = env.step(action)
+
+ # Q table update
+ Q_table[state][action] += lr * (r + gamma * np.max(Q_table[s_dash]) - Q_table[state][action])
+
+ state = s_dash
+
+ t += 1
+
+ if done:
+ # print("Episode finished after {} timesteps".format(t+1))
+ break
+ # print(Q_table)
+
+ policy = np.argmax(Q_table, axis=1)
+ V = np.max(Q_table, axis=1)
+
+ return policy, Q_table, V
+
+I tried running it and it converges to a different set of values which is following [0.26426802 0.03656142 0.12557195 0.03075882 0.35018374 0. 0.02584052 0. 0.37657211 0.59209091 0.15439031 0. 0. 0.60367728 0.79768863 0. ]
+I don't see what is going wrong. The implementation of Q-learning is pretty straightforward. I checked my code, and it seems right.
+Any pointers would be helpful.
+"
+"['classification', 'deep-neural-networks', 'object-recognition']"," Title: Are there deep networks that can differentiate object class from individual object?Body: We usually categorize objects in a hierarchy of classes. Let us say crow vs bird. In addition, classes can be ""messy"", for instance a crow can be also a predator, but not all birds are predators.
+
+My question is, can deep networks represent these hierarchies easily? Has anybody studied that? (I could not find anything at all).
+"
+"['natural-language-processing', 'training']"," Title: Finetuning GPT-2 twice for particular style of writing on a particular topicBody: Sorry if this is a stupid question. I'm just starting out in ML and am working with gpt-2 for text generation.
+
+My situation is that I have to generate text in a particular field, e.g. family businesses, which pretrained gpt-2 is unlikely to have had much ""training"" with. Besides the topic, I also need to generate the text in the style of one particular writer (e.g. incorporating their turns of phrase, etc.). Unfortunately, this particular writer hasn't written much about the family business topic, but has written about other topics.
+
+It occurred to me that I can take gpt-2, finetune it on a large corpus of material on family businesses, and then finetune the new model on the written material of the particular writer.
+
+Would this be the right way to achieve my objective of creating content on family businesses in the style of this particular writer?
+
+Any suggestions on what sort of stuff I should keep in mind while doing this?
+
+Any help is much appreciated.
+"
+"['convolutional-neural-networks', 'backpropagation', 'objective-functions']"," Title: When and how to use a mix of loss functions for back-propagation?Body: I am trying to understand the best loss function to be used with a convolutional neural network. I came to know that we can mix two loss functions. Can any body share in what case was it done and how?
+"
+"['neural-networks', 'image-segmentation']"," Title: Resizing effects on image recognitionBody: I have been building a multilabel image classification model using inception v3, which uses images of size 299x299, I have been wondering what are the effects of feeding images of rectangular shapes for example (or arbitrary resolutions) on the performance of the model, and if I can define requirements for how the data should be to ensure optimal performance, what would those requirements be ?
+Intuitively, I think that square images would perform better than rectangular images, is this true?
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'deep-rl']"," Title: How do we compute the target value when the agent ends up in the terminal state?Body: I am working on a deep reinforcement learning problem. Throughout the episode, there is a small positive and negative reward for good or bad decisions. In the end, there is a huge reward for the completion of the episode. So, this reward function is quite sparse.
+This is my understanding of how DQN works. The neural network predicts quality values for each possible action that can be taken from a state $S_1$. Let us assume the predicted quality value for an action $A$ is $Q(S_1, A)$, and this action allows the agent to reach $S_2$.
+We now need the target quality value $Q_\text{target}$, so that using $Q(S_1, A)$ and $Q_\text{target}$ the temporal difference can be calculated, and updates can be made to the parameters of the value network.
+$Q_\text{target}$ is composed of two terms. The immediate reward $R$ and the maximum quality value of the resulting state that this chosen action leaves us in, which can be denoted by $Q_\text{future} = \text{max}_a Q(S_2, a)$, which is in practice obtained by feeding the new state $S_2$ into the neural network and choosing (from the list of quality values for each action) the maximum quality value. We then multiply the discount factor $\gamma$ with this $Q_\text{future}$ and add it to the reward $R$, i.e. $Q_\text{target} = R + \gamma \text{max}_a Q(S_2, a) = R + \gamma Q_\text{future}$.
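+In code, my understanding of the update for a non-terminal transition is something like this (a rough PyTorch-style sketch; the variable names are my own):
+# S1: current state, A: chosen action, R: reward, S2: next state
+q_pred   = q_network(S1)[A]                    # Q(S1, A)
+q_future = q_network(S2).max()                 # max_a Q(S2, a)
+q_target = R + gamma * q_future                # R + gamma * Q_future
+loss     = (q_target.detach() - q_pred) ** 2   # squared temporal difference, backprop only through q_pred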
+Now, let us assume the agent is in the penultimate state, $S_1$, and chooses the action $A$ that leads him to the completion state, $S_2$, and gets a reward $R$.
+How do we form the target value $Q_\text{target}$ for $S_1$ now? Do we still include the $Q_\text{future}$ term? Or is it only the reward in this case? I am not sure if $Q_\text{future}$ even has meaning after reaching the final state $S_2$. So, I think that, for the final step, the target value must simply be the reward. Is this right?
+"
+['recurrent-neural-networks']," Title: Recurrent Neural Network to track distance from originBody: I have a game/simulation that takes a vector of encoded sequences of moves (up, down, left, right). Let's say that these are sequential step taken by an ant moving in a 2D space, starting from the origin. The moves are generated randomly.
+
+I want to know for any game, if the ant gets farther than a certain distance y from the origin (although it might even be closer than y at the end of the game). I would like to classify games into ""ant gets further away than y"" with value of one, or zero for ""ant does not get further away than y"". I don't need an AI for this task, I have set this objective as a training goal for myself.
+
+I am able to tell if the last position is past y or not using a regular feed-forward network; I believe that is easier because it amounts to summing up all the moves, regardless of the order. But the case where the ant got past y and then came back still needs to return one.
+
+I thought I might be able to reach my objective through an RNN, encoding the moves as a sequence of one-hot encoded sequential directions to move towards. Currently, I am using one hidden layer (I tried with different sizes ranging from 10 to 100), backpropagating the loss only at the last step of a single training on a vector, but it seems like the RNN total loss doesn't decrease at all.
+
+Is there any obvious flaw in my simulation, or in the neural network model? Is there a category of problems this could belong to?
+"
+"['social', 'artificial-consciousness']"," Title: ""AI will kill us all! The machines will rise up!"" - what is being done to dispel such myths?Body: Science Fiction has frequently shown AI to be a threat to the very existence of mankind. AI systems have often been the antagonists in many works of fiction, from 2001: A Space Odyssey through to The Terminator and beyond.
+The Media seems to buy into this trope as well. And in recent years we have had people like Elon Musk warn us of the dangers of an impending AI revolution, stating that AI is more dangerous than nukes.
+And, apparently, experts think that we will be seeing this AI revolution in the next 100 years.
+However, from my (albeit limited) study of AI, I get the impression that they are all wrong. I am going to outline my understanding below, please correct me if I am wrong:
+
+- Firstly, all of these things seem to be confusing Artificial Intelligence with Artificial Consciousness. AI is essentially a system to make intelligent decisions, whereas AC is more like the "self-aware" systems that are shown in science fiction.
+
+- Not AI itself, but intelligence and intelligent decision-making algorithms are something we've been working with and enhancing since before computers have been around. Moving this over to an artificial framework is fairly easy. However, consciousness is still something we are learning about. My guess is we won't be able to re-create something artificially if we barely understand how it works in the real world.
+
+- So, my conclusion is that no AI system will be able to learn enough to start thinking for itself, and that all our warnings of AI are completely unjustified.
+
+- The real danger comes from AC, which we are a long, long way from realizing because we are still a long way off from defining exactly what consciousness is, let alone understanding it.
+
+
+
+So, my question is, assuming that my understanding is correct, are any efforts are being made by companies or organizations that work with AI to correct these popular misunderstandings in sci-fi, the media, and/or the public?
+Or are the proponents of AI ambivalent towards this public fear-mongering?
+I understand that the fear mongering is going to remain popular for some time, as bad news sells better than good news. I am just wondering if the general attitude from AI organizations is to ignore this popular misconception, or whether a concerted effort is being made to fight against these AI myths (but unfortunately nobody in the media is listening or cares).
+"
+"['game-ai', 'game-theory']"," Title: How can an AI play Flow Free?Body: The game ""Flow Free"" in which you connect coloured dots with lines is very popular. A human can learn techniques to play it.
+
+I was wondering how an AI might approach it. There are certain rules of thumb that a human learns, e.g. connecting dots on the edges one should keep to the edge.
+
+Most of the time it appears the best approach is a depth-first search, e.g. one tries very long paths to see if they work, combined with rules of thumb and inferences such as ""don't leave gaps"" and ""don't cut off one dot from another dot of the same colour"".
+
+But there are ways to ""not leave gaps"" such as keep within one square of another line. That humans seem to be able to grasp but seems harder for an AI to learn.
+
+In fact I wonder if the rule of thumb ""keep close to other lines"" might even require some kind of internal language.
+
+I mean to even understand the rules of the game one would think one would need language. (Could an ape solve one of these puzzles? I doubt it.)
+
+So basically I'm trying to work out how an AI could come up with these techniques for solving puzzles like Flow Free (techniques that might not work in all cases).
+
+Perhaps, humans have an innate understanding of concepts such as ""keep close to the wall"" and ""don't double back on yourself"" and can combine them in certain ways. Also we are able to spot simple regions quickly bounded by objects.
+
+I think a built in understanding of ""regions"" would be key. And the key concept that dots can't be joined unless they are in the same region. And we have got to a dead-end if:
+
+
+- There is an empty region
+- There is a region with a dot without its pair
+
+
+Still I don't think this is enough.
+"
+"['neural-networks', 'deep-learning', 'recurrent-neural-networks', 'reference-request', 'emotional-intelligence']"," Title: Emotional Speech SynthesisBody: We are a team of computer science our graduation project about EmotionalSpeech Synthesis.
+
+We've found valuable information like research papers and WaveNet, Tacotron.
+A website (https://www.voicery.com/)
+
+we were hoping to get to know more information from you.
+
+We need more details what should we start with to grasp the fundamentals to build this idea, what is the architecture to be used in this project, whether there are papers, a GitHub Repository containing helpful documentation, datasets, some other resources, previous knowledge.
+"
+"['neural-networks', 'reinforcement-learning', 'comparison', 'genetic-algorithms', 'applications']"," Title: An ""elevator pitch"" breakdown of areas of applications for Reinforcement Learning & Neural Networks vs. Genetic AlgorithmsBody: I'm looking for an ""elevator pitch"" breakdown of areas of applications for Reinforcement Learning & Neural Networks vs. Genetic Algorithms, both actual and theoretical.
+
+Links are welcome, but please provide some explanation.
+"
+"['computational-learning-theory', 'pac-learning', 'vc-dimension', 'homework']"," Title: How to show Sauer's Lemma when the inequalities are strict or they are equalities?Body: I have the following homework.
+
+
+ We proved Sauer's lemma by proving that for every class $H$ of finite VC-dimension $d$, and every subset $A$ of the domain,
+
+ $$
+\left|\mathcal{H}_{A}\right| \leq |\{B \subseteq A: \mathcal{H} \text { shatters } B\} | \leq \sum_{i=0}^{d}\left(\begin{array}{c}{|A|} \\ {i}\end{array}\right)
+$$
+
+ Show that there are cases in which the previous two inequalities are strict (namely, the $\leq$ can be replaced by $<$) and cases in which they can be replaced by equalities. Demonstrate all four combinations of $=$ and $<$.
+
+
+How can I solve this problem?
+"
+"['machine-learning', 'deep-learning']"," Title: Assigning Weighting FactorsBody: I have a hypothetical example that closes to my research problem:
+
+Assume you are a boss and you have different types of tasks that you need to assign to your employees: sensitive tasks (very classified) and tasks that require high skills. You need to assign a sensitive task (e.g. a government document) to a trusted employee, while the other kind of task (e.g. statistical analysis) can be assigned to an employee who is more creative and skilled. Every day you have many tasks that need to be done, and you have a large number of employees along with a number of crowdsourced workers (freelancers).
+
+You have an outcome history of trust and performance, along with the failure rate of the tasks assigned on each day, for these employees:
+
+
+
+As you can see here, on day 1 the trust of emp 111 is good, so on that day he had a low failure rate on the sensitive task, while his performance is low, which made the other tasks fail a lot.
+
+So now assume you have a sensitive task coming, and you have a pool of workers.
+
+The basic equation, Trust + Performance, might not be good here. I need to weigh each factor based on the type of task:
+
+Trust x w1 + Performance x w2, where w1 gets a high coefficient when a sensitive task is coming.
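+
+What I have in mind is something as simple as this toy sketch (the weights and numbers are made up; choosing them is exactly my problem):
+
+def score(trust, performance, task_is_sensitive):
+    w1, w2 = (0.8, 0.2) if task_is_sensitive else (0.2, 0.8)
+    return w1 * trust + w2 * performance
+
+# pick the best worker from the pool for an incoming sensitive task
+workers = {111: (0.9, 0.3), 112: (0.4, 0.8)}   # emp_id: (trust, performance)
+best = max(workers, key=lambda w: score(*workers[w], task_is_sensitive=True))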
+
+Any ideas on how I can model this?
+"
+['machine-learning']," Title: Which courses in computer science and logic are relevant to Machine Learning?Body: Although I have a decent background in math, I'm trying to understand which courses from CS and logic to look into. My aim is to get into a Machine Learning PhD program.
+"
+"['search', 'minimax', 'heuristics', 'alpha-beta-pruning', 'expectiminimax']"," Title: How to choose the weights for a linear combination of heuristic functions?Body: I need to write a minimax algorithm with alpha-beta pruning in limited time for the 2048 game. I know expectimax is better for this work.
+
+Assume I wrote different heuristic functions. If I want to write an evaluation function as a linear combination of these heuristic functions, do I have to give random weights, or can I calculate the optimal weights with some optimization algorithm?
+"
+"['neural-networks', 'applications']"," Title: Is artificial intelligence and, in particular, neural networks being used in real-world critical applications?Body: Is artificial intelligence and, in particular, neural networks being used in real-world critical applications and devices?
+
+I had a discussion with my colleague who states that nobody would use artificial intelligence, especially neural nets, for critical stuff, like technical devices or sensors.
+
+I'm only aware of the problem of neural nets being so-called black boxes, but, nevertheless, I think it is possible to make an NN robust enough to meet the demands of daily processes, also in sensitive fields like health care, the energy market, self-driving cars, and so on. Yet I cannot back this up.
+
+Does somebody have more insights or other information, opinions and so on? I appreciate any meaningful answer.
+"
+"['neural-networks', 'classification', 'comparison', 'perceptron', 'bayesian-networks']"," Title: How do I determine the most appropriate classifier for a certain problem?Body: Consider a Bayesian classifier used in spam e-mail filtering. It converts an e-mail to a vector, most of the time using the bag-of-words method. Although it learns first before getting employed, it can be made to work as an online system, i.e. it can be used to filter and learn from examples even after deployment.
+
+Now, on the other hand, now comes the perceptron. It calculates a mean vector of spam and not spam, and then classifies them into the appropriate categories. The model adjusts the mean vectors each time it makes mistakes.
+
+Now, comes neural nets, they too are capable of taking a vector-like bag of words or image pixels of dogs and cats and classify them into yes or no.
+
+So, while designing and implementing them into the system, how to determine which one of the methods (Bayesian classifier, perceptron or neural network) is the most appropriate for a given situation or task? One factor to consider is the time complexity (or speed), but what are other factors, and how to rank them?
+"
+"['neural-networks', 'deep-learning', 'natural-language-processing', 'recurrent-neural-networks']"," Title: How can neural networks be used to generate rather than classify?Body: In my experience with Neural Nets, I have only used them to take input vectors and return binary output.
+
+But here, in a video, https://youtu.be/ajGgd9Ld-Wc?t=214, Kai-Fu Lee, a renowned AI expert, shows a deep net which takes thousands of samples of Trump's speeches and generates output in the Chinese language.
+
+In short, how can deep nets/neural nets be used to generate output rather than giving answer yes or no? Additionally, how are these nets being trained? Can anyone here provide me a simple design to nets that are capable of doing that?
+"
+"['machine-learning', 'gradient-descent', 'plotting', 'loss', 'weights']"," Title: How to plot Loss Landscape with more than 2 weights in the networkBody: For a single neuron with 2 weights, I can plot the loss landscape and it looks like this (OR data, sigmoid activation, MAE loss):
+
+
+
+But, when the neuron accepts more inputs, which means more than 2 weights required, or when there are more neurons, more layers in the network; how should the 3D loss landscape be plotted?
+"
+"['machine-learning', 'cross-validation', 'random-forests']"," Title: How should I interpret this validation plot?Body: Bellow I have a validation plot
+How should I interpret this validation plot?
+Is my data underfitting? What else can be seen from this?
+Which one is the best?
+
+What does it mean that the right line is growing and the green line decreases (slightly), for example after 15?
+
+
+
+Second random forest
+
+"
+"['natural-language-processing', 'prediction', 'time-series', 'gpt']"," Title: Is it possible to use the GPT-2 model for time-series data prediction?Body: Is it possible and how trivial (or not) might it be (if possible) to retrain GPT-2 on time-series data instead of text?
+"
+"['deep-learning', 'reinforcement-learning']"," Title: Reinforcement learning number of episodes per epoch not matching with paperBody: I am trying to reproduce results presented in this paper. On page 4, the authors state:
+
+
+ ... we train for 50 epochs (one epoch consists of 19*2*50 = 1900 full
+ episodes), which amounts to a total of 4.75*10^6 timesteps.
+
+
+The 1900 episodes are broken down into Rollouts per MPI worker (2) * Number of MPI Workers (19) * Cycles per epoch (50), as shown in the hyper parameters section on page 10.
+
+When testing on my local machine, using the GitHub Baselines repo, I am using 1 MPI worker and the following hyperparams:
+
+'n_cycles': 50, # per epoch
+'rollout_batch_size': 2, # per mpi thread
+
+
+By the same calculation, this means that I should have 1*50*2 = 100 episodes per epoch.
+
+However, when I run the her baseline on FetchReach-v1, it turns out I only have 10 episodes per epoch. Here is a log sample:
+
+Training...
+---------------------------------
+| epoch | 0 |
+| stats_g/mean | 0.893 |
+| stats_g/std | 0.122 |
+| stats_o/mean | 0.269 |
+| stats_o/std | 0.0392 |
+| test/episode | 10 |
+| test/mean_Q | -0.602 |
+| test/success_rate | 0.5 |
+| train/episode | 10 | <-- 10 episodes/epoch
+| train/success_rate | 0 |
+---------------------------------
+
+
+Why is there this discrepancy? Any suggestions would be appreciated.
+"
+"['machine-learning', 'cross-validation', 'random-forests']"," Title: How to interpret this learning curve plotBody: Bellow I have a Learning Curve plot How should I interpret this plot for my random forrest algorithm (the second one the most complex one)?
+Which one is the best?
+
+
+
+
+
+
+"
+"['machine-learning', 'random-forests', 'bias-variance-tradeoff']"," Title: How can I determine the bias and variance of a random forrest?Body: On this website https://scikit-learn.org/stable/modules/learning_curve.html, the authors are speaking about variance and bias and they give a simple example of how works in a linear model.
+
+How can I determine the bias and variance of a random forest?
+"
+"['neural-networks', 'machine-learning', 'backpropagation', 'gradient-descent']"," Title: Understanding the partial derivative with respect to the weight matrix and biasBody: Say we have the layer $X W + b = Y$.
+
+
+- I want to get $\frac{dL}{dW}$ and we assume I have $\frac{dL}{dY}$.
+So all I need is to find $\frac{dY}{dW}$. I know that it should be $X^T\frac{dL}{dY}$ but don't understand why. please explain.
+- I want to get $\frac{dL}{db}$ and we assume I have $\frac{dL}{dY}$.
+So all I need is to find $\frac{dY}{db}$. I know that it should be $\sum(\frac{dL}{dY})_i$ (I mean sum the rows) but I don't understand why. please explain.
+
+
+Thanks :)
+"
+"['neural-networks', 'objective-functions']"," Title: Maximize loss on non-target variableBody: I have a neural network that should be able to classify documents to target label A. The problem is that the network is actually classifying label B, which is an easier task.
+
+To make the problem more clear: I need to classify documents from different sources. In the training data each source occurs repeatedly, but the network should be able to work on unknown sources. All documents from a single source have the same class. In this case, it is easier to identify sources than the target label so in practice the network is not really identifying the target label, but the source.
+
+The solution to this problem is making sure that the model is bad at identifying the sources in the training data, while still attaching the right target labels.
+
+I think the first step is to get two output layers, one for the target label and one for identifying which source it is from. My approach fails however at the training procedure: I want to minimize the loss on the target output, but maximize the loss on the non-target output. But if I maximize the loss on that non-target output, that does not mean that the network 'unlearns' the non-target labels. So the main question for the non-target output is:
+
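+Concretely, the two-output architecture I have in mind is something like this (just a sketch; the encoder and the sizes are placeholders):
+
+import torch.nn as nn
+
+class TwoHeadClassifier(nn.Module):
+    def __init__(self, vocab_size, emb_dim, hidden, n_labels, n_sources):
+        super().__init__()
+        # shared document encoder (placeholder: bag-of-embeddings + MLP)
+        self.embed = nn.EmbeddingBag(vocab_size, emb_dim)
+        self.encoder = nn.Sequential(nn.Linear(emb_dim, hidden), nn.ReLU())
+        self.label_head = nn.Linear(hidden, n_labels)     # target label A: minimize its loss
+        self.source_head = nn.Linear(hidden, n_sources)   # source / label B: this head should fail
+
+    def forward(self, tokens, offsets):
+        h = self.encoder(self.embed(tokens, offsets))
+        return self.label_head(h), self.source_head(h)
+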
+TLDR: How do I define a training procedure that minimizes the loss on a non-target output layer, and then maximizes that loss on all layers before it? My goal is to have a network that is good at classifying label A, but bad at a related label B. If anyone wants to give a code example, my preferred framework is PyTorch.
+"
+"['neural-networks', 'machine-learning', 'function-approximation', 'history']"," Title: Is there a way to understand neural networks without using the concept of brain?Body: Is there a way to understand, for instance, a multi-layered perceptron without hand-waving about them being similar to brains, etc?
+For example, it is obvious that what a perceptron does is approximating a function; there might be many other ways, given a labelled dataset, to find the separation of the input area into smaller areas that correspond to the labels; however, these ways would probably be computationally rather ineffective, which is why they cannot be practically used. However, it seems that the iterative approach of finding such areas of separation may give a huge speed-up in many cases; then, natural questions arise why this speed-up may be possible, how it happens and in which cases.
+One could be sure that this question was investigated. If anyone could shed any light on the history of this question, I would be very grateful.
+So, why are neural networks useful and what do they do? I mean, from the practical and mathematical standpoint, without relying on the concept of "brain" or "neurons" which can explain nothing at all.
+"
+['generative-model']," Title: What's a good generative model for creating valid formats of a person's name?Body: I'm trying to come up with a generative model that can input a name and output all valid formats of it.
+
+For example, ""Bob Dylan"" could be an input and the gen model will output ""Dylan, Bob"", ""B Dylan"", ""Bob D"" and any other type of valid formatting of a person's name. So given my example the gen model doesn't seem that complicated to build, but it also has to handle stuff like ""Dylan, Bob"" and ""B Dylan"", but obviously the 2nd one it shouldn't output ""Bob Dylan"" as a potential output cause inferring that requires more than just ""B Dylan"". Any ideas for a good Generative Model for this?
+"
+"['machine-learning', 'autoencoders']"," Title: What can I do with an autoencoder?Body: I cannot find information in detail about autoencoder
+
+What can I do with an autoencoder (and how can I do this), practically speaking?
+
+What do the encoder (this part I think I understand) and the decoder (I could not find much about this) parts do? Can it, for example, show in an explainable way how patterns in the data are being represented?
+
+I read some papers that say that it can be used to denoise the input, how does this work? (Am I changing the values of my input)
+
+Is it true that an autoencoder can be also done with PCA (if we assume linearity)?
+"
+"['reinforcement-learning', 'function-approximation', 'resource-request']"," Title: Is there any open source implementation of the SBEED learning algorithm?Body: Are there are any openly available implementations of the SBEED: Convergent Reinforcement Learning with Nonlinear Function Approximation paper?
+"
+"['machine-learning', 'classification', 'statistical-ai']"," Title: Is there any measure of separability of classes?Body: I want to know if there is a measure of how well two classes in Y are separable (linearly or not) based on their features in X. Easiest way of explaining this is to compare it to correlation coefficients, the higher the correlation the higher possiblity for successful regression based on given feature (at least in theory).
+
+Is there any measure that will tell me how well the classes are separated based on the input features, before training an ML model?
+"
+"['machine-learning', 'reference-request', 'computational-complexity', 'intractability']"," Title: What are examples of promising AI/ML techniques that are computationally intractable?Body: To produce tangible results in the field of AI/ML, one must take theoretical results under the lens of computational complexity.
+Indeed, minimax effectively solves any two-person "board game" with win/loss conditions, but the algorithm quickly becomes untenable for games of large enough size, so it's practically useless asides from toy problems.
+In fact, this issue seems to cut at the heart of intelligence itself: the Frame Problem highlights this by observing that any "intelligent" agent that operates under logical axioms must somehow deal with the explosive growth of computational complexity.
+So, we need to deal with computational complexity: but that doesn't mean researchers must limit themselves with practical concerns. In the past, multilayered perceptrons were thought to be intractable (I think), and thus we couldn't evaluate their utility until recently. I've heard that Bayesian techniques are conceptually elegant, but they become computationally intractable once your dataset becomes large, and thus we usually use variational methods to compute the posterior, instead of naively using the exact solution.
+I'm looking for more examples like this: What are examples of promising (or neat/interesting) AI/ML techniques that are computationally intractable (or uncomputable)?
+"
+"['philosophy', 'social', 'risk-management']"," Title: What are the risks associated with regulating AI?Body: As part of a research project for college, I would like to understand what many of you astern to be the risks associated with regulating Artificial Intelligence. Such as whether regulation is too risky in regards to limiting progress or too risky in regards to uninformed regulation.
+"
+"['machine-learning', 'keras', 'regression']"," Title: What is the best approach for multivariable and multivariate regression?Body: I want to build a multivariable and multivariate regression model in Keras (with TensorFlow as backend), that is, a regression model with multiple values as input (multivariable) and output (multivariate).
+
+The independent variables are, for example, the length, weight, force, etc., and the dependent variables are the torque, friction, heat, temperature, etc.
+
+What is the best approach to achieve that? Any guidance before I start? (If anyone can share an example code/notebook, that would be great as well.)
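+
+To make the question concrete, this is the kind of starting point I have in mind (a minimal sketch using the Keras functional API; the layer sizes, the number of features/targets, and the random placeholder data are all made up):
+
+import numpy as np
+from tensorflow import keras
+
+n_features, n_targets = 6, 4          # e.g. length, weight, force, ... -> torque, friction, ...
+x = np.random.rand(500, n_features)   # placeholder data
+y = np.random.rand(500, n_targets)
+
+inp = keras.Input(shape=(n_features,))
+h = keras.layers.Dense(32, activation='relu')(inp)
+h = keras.layers.Dense(32, activation='relu')(h)
+out = keras.layers.Dense(n_targets)(h)   # linear outputs for regression
+
+model = keras.Model(inp, out)
+model.compile(optimizer='adam', loss='mse')
+model.fit(x, y, epochs=10, validation_split=0.2)
+
+Is a single network with one multi-dimensional output head like this the right way to go, or should each dependent variable get its own head and loss?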
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'game-ai', 'games-of-chance']"," Title: What would be the most effective self-learning algorithm for a 7 player social deduction game?Body: There's this 7 player social deduction game called Secret Hitler, and I have been trying to find a self-learning AI algorithm to learn how to play this game for a while. Basically, four players are given a liberal role, two players are given a fascist role, and 1 player is given a hitler role. The liberals and hitler do not know any other roles, and the fascists know everyone's roles. During a turn, a president elects a chancellor based on a yes/no vote and then the government passes a policy (either liberal or fascist) that is drawn from a randomized deck. At certain points in the game, different special abilities come into play, like executing a player or investigating their role. To win the game, the liberals must either enact 5 liberal policies or kill hitler; the fascists must enact 6 fascist policies or get hitler enacted as the chancellor after 3 fascist policies have been enacted.
+
+Now, there are other irrelevant details that I didn't mention, but those are the general rules. It seems simple enough to build a visual implementation in a language like Java, but there are so many moving pieces that I would have to account for. I doubt that simply making random moves at first and learning from the good/bad moves would work, because I need a way for agents to make moves based on which roles they know.
+
+Unfortunately, AlphaZero wouldn't work here, and I'm struggling to find any algorithm that would work for this (or any other social deduction game). Do I have to write my own algorithm? I'm slightly confident that this is a case of supervised learning where I can give weight to the nodes in a neural network that correspond to wins, but please correct me if I'm incorrect.
+"
+"['deep-learning', 'statistical-ai', 'variational-autoencoder', 'evidence-lower-bound']"," Title: What's going on in the equation of the variational lower bound?Body:
+I don't really understand what this equation is saying or what the purpose of the ELBO is. How does it help us find the true posterior distribution?
+"
+"['machine-learning', 'autoencoders']"," Title: Using ML to encypher data for productionBody: I am looking for research and experience working with ML models to ingest data for tasks, like text analysis, and creates a system that copies (or in other words enciphers) the input data, to then reproduce it in the future without the original.
+
+I'm interested in how ML models can be used in this way to obfuscate information without too much information loss by the model, e.g. overfitting on purpose to create a new representation of the input information.
+"
+"['machine-learning', 'signal-processing', 'data-preprocessing']"," Title: How can I remove the noise from an EEG signal?Body: I am working on a project that takes signals from the brain, preprocesses them, and then makes the machine learn about what human is thinking about. I am struck on preprocessing the signal (incoming from the EEG). I am having a problem when I attempt to remove noise. I used SVM but to no avail. I need some other suggestions from experts who have worked on a project similar to this. What can I do to preprocess the signal?
+"
+"['deep-learning', 'optical-character-recognition', 'image-segmentation']"," Title: Is there a deep learning-based architecture for digit localisation?Body: I'm new to object detectors and segmentation. I want to localize digits on a plate as fast as possible. All images of the dataset are normalized to $300 \times 60$. There are different approaches to solve the problem. For example, binarization + connected component labeling, vertical and horizontal projection. The aforementioned approaches fail in ambient lights, noises, and shadows. Also, there are other approaches such as STN-OCR (based on convolutional recurrent neural networks) that need a lot of plates with different composition of numbers. I have limited plates with the same numbers (about 1000 different numbers) but totally 10000 plates in different illuminations and noises. I have a good OCR (without segmentation), so I need a network just localize digits.
+
+Is there any deep learning-based architecture for this purpose? Can I use faster RCNN? Yolo? SSD?
+
+I trained Faster RCNN in Matlab, but it detects too many random bounding boxes for each plate. What could be the problem?
+"
+"['machine-learning', 'deep-learning', 'natural-language-processing', 'chat-bots']"," Title: What is the underlying model of IBM Watson Assistant and Microsoft LUIS?Body: As I stated in my question, I would like to know the underlying pipeline and machine learning models that are used to classify intents and identify entities in IBM Watson Assistant and Microsoft LUIS services.
+
+I searched on different websites and in the documentation of those services, but I did not find anything. However, some blogs mention that IBM Watson is trained using one billion words from Wikipedia, but there is no reference to support that claim.
+
+I would highly appreciate it if anyone could refer me to a doc/blog that answers my question.
+
+Thanks in advance :)
+"
+"['reinforcement-learning', 'temporal-difference-methods', 'reinforce']"," Title: Confusion about temporal difference learningBody: I have a couple of small questions about the David Silver lecture about reinforcement learning, lecture slides (slides 23, 24). More specifically it is about the temporal difference algorithm:
+
+$$V(s_{t}) \leftarrow V(s_t)+ \alpha \left[ R_{t+1}+\gamma V(s_{t+1})- V(s_t) \right]$$
+
+where $\gamma$ is our discount rate and $\alpha $ the learning rate.
+In the example given in the lecture slides we observe the following paths:
+
+$(A,1,B,0), (B,1), (B,1), (B,1), (B,1), (B,1), (B,1), (B,0)$
+
+Meaning for the first trajectory we are in state $A$, get reward $1$, get to state $B$ and get reward $0$ and the game finishes. For the second trajectory we start in state $B$, get reward $1$ and the game finishes ...
+
+Let's say we initialize all states with value $0$ and choose $\alpha=0.1, \gamma=1$.
+
+My first question is whether the following ""implementation"" of the $TD(0)$ algorithm for the first two of the above observed trajectories is correct:
+
+
+- $V(a)\leftarrow0 + 0.1(1+0-0)= 0.1; \quad V(b)\leftarrow0+0.1(1+0-0)=0.1$
+- $V(b)\leftarrow0.1+(0.1)(1+0-0.1)= 0.19$
+
+
+My second question: if so, why don't we use the updated value function for $V(b)$ to also update our value for $V(a)$?
+
+My third question is about the statement that
+
+
+ $TD(0)$ converges to solution of max likelihood Markov model
+
+
+Does this mean that, if we keep sampling and applying the $TD(0)$ algorithm, the solution thereby obtained converges towards the ML estimate of that sample under a Markov model? Why don't we just use the ML estimate immediately?
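+
+For concreteness, the per-transition update I have in mind looks like this (a minimal tabular sketch; the dict-based value table and the use of None to mark a terminal transition are just my own encoding):
+
+def td0_update(V, s, r, s_next, alpha=0.1, gamma=1.0):
+    # V is a dict mapping states to values; s_next is None for a terminal transition
+    target = r + (gamma * V[s_next] if s_next is not None else 0.0)
+    V[s] += alpha * (target - V[s])
+    return V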
+"
+"['neural-networks', 'deep-learning', 'comparison', 'activation-functions']"," Title: When should I use a linear activation instead of ReLU?Body: I have read this post:
+How to choose an activation function?.
+
+There is enough literature about activation functions, but when should I use a linear activation instead of ReLU?
+
+What does the author mean by using ReLU when dealing with positive values, and a linear function when dealing with general values?
+
+Is there a more detailed answer to this?
+"
+"['machine-learning', 'terminology', 'variational-autoencoder', 'information-theory', 'kl-divergence']"," Title: How does the Kullback-Leibler divergence give ""knowledge gained""?Body: I'm reading about the KL divergence on Wikipedia. I don't understand how the equation gives "information gained" as it says in the "Interpretations" section
+
+Expressed in the language of Bayesian inference, ${\displaystyle D_{\text{KL}}(P\parallel Q)}$ is a measure of the information gained by revising one's beliefs from the prior probability distribution $Q$ to the posterior probability distribution $P$
+
+I was under the impression that the KL divergence is a way of measuring the difference between distributions (used in autoencoders to determine the difference between the input and the output generated from the latents).
+How does the equation $$D_{KL}(P \,\|\, Q)=\sum_x P(x) \log \left( \frac{P(x)}{Q(x)} \right)$$ give us a divergence? Also, in encoding and decoding algorithms that use KL divergence, is the goal to minimize $D_{KL}(P \,\|\, Q)$?
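+
+To make this concrete for myself, I tried a small numeric example with two discrete distributions (just a sketch; the distributions are made up):
+
+import numpy as np
+
+P = np.array([0.7, 0.2, 0.1])   # say, the posterior
+Q = np.array([0.4, 0.4, 0.2])   # say, the prior
+
+kl_pq = np.sum(P * np.log(P / Q))   # D_KL(P || Q)
+kl_qp = np.sum(Q * np.log(Q / P))   # D_KL(Q || P), not the same value: KL is asymmetric
+
+print(kl_pq, kl_qp)
+
+This shows me that the value is zero only when the two distributions match and grows as they disagree, but I still don't see where the "information gained" interpretation comes from.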
+"
+"['neural-networks', 'deep-learning', 'python', 'objective-functions', 'regression']"," Title: When should I create a custom loss function?Body: I'm using a neural network to solve a multi regression problem because I'm trying to predict continuous values. To be more specific, I'm making a tracking algorithm to track the position of an object, I'm trying to predict two values, the latitude and longitude of an object.
+Now, to calculate the loss of the model, there are some common functions, like the mean squared error or the mean absolute error, but I'm wondering if I can use some custom function, like this, to calculate the distance between two longitude and latitude values, so that the loss would be the difference between the real distance (calculated from the real longitude and latitude) and the predicted distance (calculated from the predicted longitude and latitude). These are just some thoughts of mine, so I'm wondering whether such an idea makes sense.
+Would this work in my case better than using the mean squared error as a loss function?
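+
+One way I could picture this is to penalise the great-circle distance between the predicted point and the true point directly. Here is a rough sketch of such a custom loss, assuming a TensorFlow/Keras setup, coordinates given in degrees, and an approximate Earth radius of 6371 km (all of which are my own assumptions):
+
+import tensorflow as tf
+
+def haversine_loss(y_true, y_pred):
+    # y_true, y_pred: (batch, 2) tensors of [latitude, longitude] in degrees
+    rad = 3.141592653589793 / 180.0
+    lat1, lon1 = y_true[:, 0] * rad, y_true[:, 1] * rad
+    lat2, lon2 = y_pred[:, 0] * rad, y_pred[:, 1] * rad
+    dlat, dlon = lat2 - lat1, lon2 - lon1
+    a = tf.sin(dlat / 2.0) ** 2 + tf.cos(lat1) * tf.cos(lat2) * tf.sin(dlon / 2.0) ** 2
+    d = 2.0 * 6371.0 * tf.asin(tf.sqrt(tf.maximum(a, 1e-12)))   # distance in km
+    return tf.reduce_mean(d)
+
+# model.compile(optimizer='adam', loss=haversine_loss)
+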
+I had another question in mind. In my case, I'm predicting two values (longitude and latitude), but is there a way to transform these two target values into only one value so that my neural network can learn better and faster? If yes, which method should I use? Should I calculate the sum of the two and make that the new target? Does this make sense?
+"
+"['genetic-algorithms', 'fitness-functions', 'stopping-conditions']"," Title: Is there any disadvantage of the maximum number of fitness function call as a stop criterion?Body: I'm studying different stop criteria in genetic algorithms and the advantages and disadvantages of each of them for evaluating different algorithms. One of these methods is the max number of fitness function calls (max NFFC), so that we define a value for max NFFC and, if the number of fitness function calls reached this value, the algorithm will stop. Fitness function is called for calculating the fitness of the initial population and whenever a crossover or mutation happens (if parents are chosen as offspring there is no need to compute fitness function).
+I searched if there is a disadvantage or limitation about using this stop criterion, but I didn't find anything. So, I wanted to know if applying this stop criterion in my algorithm has any disadvantages or there is nothing wrong with using this criterion.
+"
+"['reinforcement-learning', 'training', 'convergence', 'reinforce', 'portfolio-optimization']"," Title: Why is my implementation of REINFORCE algorithm for portfolio optimization not converging?Body: I'm trying to implement the Reinforce algorithm (Monte Carlo policy gradient) in order to optimize a portfolio of 94 stocks on a daily basis (I have suitable historical data to achieve this). The idea is the following: on each day, the input to a neural network comprises of the following:
+
+
+- historical daily returns (daily momenta) for previous 20 days for each of the 94 stocks
+- the current vector of portfolio weights (94 weights)
+
+
+Therefore states are represented by 1974-dimensional vectors. The neural network is supposed to return a 94-dimensional action vector which is again a vector of (ideal) portfolio weights to invest in. Negative weights (short positions) are allowed and portfolio weights should sum to one. Since the action space is continuous I'm trying to tackle it via the Reinforce algorithm. Rewards are given by portfolio daily returns minus trading costs. Here's a code snippet:
+
+
+
+class Policy(nn.Module):
+ def __init__(self, s_size=1974, h_size=400, a_size=94):
+ super().__init__()
+ self.fc1 = nn.Linear(s_size, h_size)
+ self.fc2 = nn.Linear(h_size, a_size)
+ self.state_size = 1974
+ self.action_size = 94
+ def forward(self, x):
+ x = F.relu(self.fc1(x))
+ x = self.fc2(x)
+ return x
+ def act(self, state):
+ state = torch.from_numpy(state).float().unsqueeze(0).to(device)
+ means = self.forward(state).cpu()
+ m = MultivariateNormal(means,torch.diag(torch.Tensor(np.repeat(1e-8,94))))
+ action = m.sample()
+ action[0] = action[0]/sum(action[0])
+ return action[0], m.log_prob(action)
+
+
+Notice that in order to ensure that portfolio weights (entries of the action tensor) sum to 1 I'm dividing by their sum. Also notice that I'm sampling from a multivariate normal distribution with extremely small diagonal terms since I'd like the net to behave as deterministically as possible. (I should probably use something similar to DDPG but I wanted to try out simpler solutions to start with).
+
+The training part looks like this:
+
+
+
+optimizer = optim.Adam(policy.parameters(), lr=1e-3)
+
+def reinforce(n_episodes=10000, max_t=10000, gamma=1.0, print_every=1):
+ scores_deque = deque(maxlen=100)
+ scores = []
+ for i_episode in range(1, n_episodes+1):
+ saved_log_probs = []
+ rewards = []
+ state = env.reset()
+ for t in range(max_t):
+ action, log_prob = policy.act(state)
+ saved_log_probs.append(log_prob)
+ state, reward, done, _ = env.step(action.detach().flatten().numpy())
+ rewards.append(reward)
+ if done:
+ break
+ scores_deque.append(sum(rewards))
+ scores.append(sum(rewards))
+
+ discounts = [gamma**i for i in range(len(rewards)+1)]
+ R = sum([a*b for a,b in zip(discounts, rewards)])
+
+ policy_loss = []
+ for log_prob in saved_log_probs:
+ policy_loss.append(-log_prob * R)
+ policy_loss = torch.cat(policy_loss).sum()
+
+ optimizer.zero_grad()
+ policy_loss.backward()
+ optimizer.step()
+
+        if i_episode % print_every == 0:
+ print('Episode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))
+ print(scores[-1])
+
+ return scores, scores_deque
+
+scores, scores_deque = reinforce()
+
+
+Unfortunately, there is no convergence during training even after fiddling with the learning rate so my question is the following: is there anything blatantly wrong with my approach here and if so, how should I tackle this?
+"
+"['neural-networks', 'reinforcement-learning', 'q-learning', 'markov-decision-process', 'ddpg']"," Title: Why feed actions in later layer in Q network?Body: I read the DDPG paper, in which the authors state that the actions are fed only later to their Q network:
+
+
+ Actions were not included until the 2nd hidden layer of Q. (Sec 7, Experiment Details)
+
+
+So does that mean that the input of the first hidden layer was simply the state, and the input of the second hidden layer was the output of the first hidden layer concatenated with the actions?
+
+Why would you do that? To have the first layer focus on learning the state value independent of the selected action? How would that help?
+
+Is this just a small little tweak or a more significant improvement?
+"
+"['natural-language-processing', 'resource-request']"," Title: Is there a tool to convert from the brat standoff format to CoNLL-U format?Body: I've been searching for a tool to convert from the brat standoff format to the CoNLL-U format, so that to use it as a parsing corpus model to the spaCy library.
+Can you help me?
+"
+"['ai-design', 'game-ai', 'game-theory', 'combinatorial-games', 'symbolic-ai']"," Title: How to calculate the optimal placements for settlements in Catan without an ML algorithm?Body: Is it possible to calculate the best possible placements for settlements in Catan without using an ML algorithm?
+
+While it is trivial to simply add up the numbers surrounding the settlement (the highest-point location), I'm looking to build a deeper analysis of the settlement locations. For example, if the highest-point location is around sheep-sheep-sheep, it might be better to go to a lower-point location for better resource access. The analysis could also weigh complementary resources, blocking other players from resources, and being closer to ports.
+
+It seems feasible to program arithmetically, yet some friends said this is an ML problem. If it is ML, how would one go about training, as the gameboard changes every game?
+"
+"['classification', 'utility']"," Title: How does text classification reduce manpower costs?Body: (I apologize for the title being too broad and the question being not 'technical')
+
+Suppose that my task is to label news articles. This means that, given a news article, I am supposed to classify which category that news belongs to. E.g., 'Ronaldo scores a fantastic goal' should be classified under 'Sports'.
+
+After much experimentation, I came up with a model that does this labeling for me. It has, say, 50% validation accuracy. (Assume that it is the best)
+
+And so I deployed this model for my task (on unseen data, obviously). Of course, from a probabilistic perspective, I should get roughly 50% of the articles labelled correctly. But how do I know which labels are actually correct and which labels need to be corrected? If I were to check manually (say, by hiring people to do so), how is deploying such a model better than just hiring people to do the classification directly? (Do not forget that the manpower cost of developing the model could have been saved.)
+"
+"['machine-learning', 'overfitting', 'hyperparameter-optimization']"," Title: How can I avoid overfitting when doing parameter tuning?Body: I very often applied a grid search to tune the parameters of my supervised model. I have the feeling that parameter tuning will eventually (very often) lead to overfitting? Is this crazy to say?
+
+Is there a way to apply grid search such that it will not overfit?
+"
+['cross-validation']," Title: Can we say: the more we increase the number of cross-validation folds, the less likely it is that we overfit?Body: Based on the answer to my previous question:
+How can I avoid overfitting when doing parameter tuning?
+
+Can we say that the more we increase the number K of cross-validation folds, the less likely it is that we overfit?
+"
+"['neural-networks', 'activation-functions', 'hidden-layers', 'function-approximation']"," Title: Why is there a sigmoid function in the hidden layer of a neural network?Body:
+
+I got this slide from CMU's lecture notes. The $x_i$s on the right are inputs and the $w_i$s are weights that get multiplied together then summed up at each hidden layer node. So I'm assuming this is a node in the hidden layer.
+
+What is the mathematical reason for taking the weighted sum of the inputs and feeding that into a sigmoid function? Does the sigmoid function provide something mathematically, or some sort of intuition useful for the next layer?
+"
+"['neural-networks', 'convolutional-neural-networks', 'pattern-recognition', 'algorithm-request', 'model-request']"," Title: Neural networks for sports bettingBody: I want to design a neural network that can be used for predicting sports scores for betting, specifically for American football. What I’d like to do is create a kind of profile for each game based on the specific strengths and weaknesses of each team.
+For example, let’s say two teams have the following characteristics:
+Team A:
+
+- Passing Offense Rating: 5
+- Rushing Offense Rating: 2
+
+Team B:
+
+- Passing Defense Rating: 3
+- Rushing Defense Rating: 4
+
+I’d like to be able to search for historical games where two teams have similar profiles. I could perhaps then narrow it down to games with profiles that have statistically significant historical outcomes (i.e., certain types of matchups are likely to produce similar results).
+In reality, I’d have dozens of team characteristics to compare. I would then need to assign weights of importance to each characteristic, which could be used to further ensure the effective selection of similar games.
+I think I could do this like a convolutional neural network where there is an additional filter applied to the characteristics for the weights.
+Are there any other ways that are specifically applicable to this strategy?
+"
+"['neural-networks', 'game-ai', 'genetic-algorithms', 'neat', 'neuroevolution']"," Title: Unable to achieve expected outputs using NEAT for the snake gameBody: I am trying to implement NEAT for the snake game. My game logic is ready, which is working properly and NEAT configured. But even after 100 generations with 200 genomes per generation, the snakes perform very poorly. It barely ever eats more than 2 food. Below is the snip of the eval_genome function:
+
+def eval_genome(genomes, config):
+ clock = pygame.time.Clock()
+ win = pygame.display.set_mode((WIN_WIDTH, WIN_HEIGHT))
+ for genome_id, g in genomes:
+ net = neat.nn.FeedForwardNetwork.create(g, config)
+ g.fitness = 0
+ snake = Snake()
+ food = Food(snake.body)
+ run = True
+ UP = DOWN = RIGHT = LEFT = MOVE_SNAKE = False
+ moveToFood = 0
+ score = 0
+ moveCount = 0
+ while run:
+ pygame.time.delay(50)
+ clock.tick_busy_loop(10)
+ for event in pygame.event.get():
+ if event.type == pygame.QUIT:
+ run = False
+ snakeHeadX = snake.body[0]['x']
+ snakeHeadY = snake.body[0]['y']
+ snakeTailX = snake.body[len(snake.body)-1]['x']
+ snakeTailY = snake.body[len(snake.body)-1]['y']
+ snakeLength = len(snake.body)
+ snakeHeadBottomDist = WIN_HEIGHT - snakeHeadY - STEP
+ snakeHeadRightDist = WIN_WIDTH - snakeHeadX - STEP
+ foodBottomDist = WIN_HEIGHT - food.y - STEP
+ foodRightDist = WIN_WIDTH - food.x - STEP
+ snakeFoodDistEuclidean = math.sqrt((snakeHeadX - food.x)**2 + (snakeHeadY - food.y)**2)
+ snakeFoodDistManhattan = abs(snakeHeadX - food.x) + abs(snakeHeadY - food.y)
+ viewDirections = snake.checkDirections(food, UP, DOWN, LEFT, RIGHT)
+ deltaFoodDist = snakeFoodDistEuclidean
+
+ outputs = net.activate((snakeHeadX, snakeHeadY, snakeHeadBottomDist, snakeHeadRightDist, snakeTailX, snakeTailY, snakeLength, moveCount, moveToFood, food.x, food.y, foodBottomDist, foodRightDist, snakeFoodDistEuclidean, snakeFoodDistManhattan, viewDirections[0], viewDirections[1], viewDirections[2], viewDirections[3], viewDirections[4], viewDirections[5], viewDirections[6], viewDirections[7], deltaFoodDist))
+
+ if (outputs[0] == max(outputs) and not DOWN):
+ snake.setDir(0,-1)
+ UP = True
+ LEFT = False
+ RIGHT = False
+ MOVE_SNAKE = True
+ elif (outputs[1] == max(outputs) and not UP):
+ snake.setDir(0,1)
+ DOWN = True
+ LEFT = False
+ RIGHT = False
+ MOVE_SNAKE = True
+ elif (outputs[2] == max(outputs) and not RIGHT):
+ snake.setDir(-1,0)
+ LEFT = True
+ UP = False
+ DOWN = False
+ MOVE_SNAKE = True
+ elif (outputs[3] == max(outputs) and not LEFT):
+ snake.setDir(1,0)
+ RIGHT = True
+ UP = False
+ DOWN = False
+ MOVE_SNAKE = True
+ elif (not MOVE_SNAKE):
+ if (outputs[0] == max(outputs)):
+ snake.setDir(0,-1)
+ UP = True
+ MOVE_SNAKE = True
+ elif (outputs[1] == max(outputs)):
+ snake.setDir(0,1)
+ DOWN = True
+ MOVE_SNAKE = True
+ elif (outputs[2] == max(outputs)):
+ snake.setDir(-1,0)
+ LEFT = True
+ MOVE_SNAKE = True
+ elif (outputs[3] == max(outputs)):
+ snake.setDir(1,0)
+ RIGHT = True
+ MOVE_SNAKE = True
+
+ win.fill((0, 0, 0))
+ food.showFood(win)
+ if(MOVE_SNAKE):
+ snake.update()
+ newSnakeHeadX = snake.body[0]['x']
+ newSnakeHeadY = snake.body[0]['y']
+ newFoodDist = math.sqrt((newSnakeHeadX - food.x)**2 + (newSnakeHeadY - food.y)**2)
+ deltaFoodDist = newFoodDist - snakeFoodDistEuclidean
+ moveCount += 1
+ if (newFoodDist <= snakeFoodDistEuclidean):
+ g.fitness += 1
+ else:
+ g.fitness -= 10
+ snake.show(win)
+ if(snake.collision()):
+ if score != 0:
+ print('FINAL SCORE IS: '+ str(score))
+ g.fitness -= 50
+ break
+
+ if(snake.eat(food,win)):
+ g.fitness += 15
+ score += 1
+ if score == 1 :
+ moveToFood = moveCount
+ # foodEatenMove = pygame.time.get_ticks()/1000
+ else:
+ moveToFood = moveCount - moveToFood
+ food.foodLocation(snake.body)
+ food.showFood(win)
+
+
+Additionally, I am including the definition of the checkDirections function. What it does is return an array of size 8 corresponding to 8 directions, where each value can be either 0 (neither food nor body), 1 (food found but no body), 2 (body found but no food), or 3 (both body and food found).
+
+def checkDirections(self, food, up, down, left, right):
+ '''
+ x+STEP, y-STEP
+ x+STEP, y+STEP
+ x-STEP, y-STEP
+ x-STEP, y+STEP
+ x+STEP, y
+ x, y-STEP
+ x, y+STEP
+ x-STEP, y
+ '''
+ view = []
+ x = self.xdir
+ y = self.ydir
+
+ view.append(self.check(x, y, STEP, -STEP, food.x, food.y))
+ view.append(self.check(x, y, STEP, STEP, food.x, food.y))
+ view.append(self.check(x, y, -STEP, -STEP, food.x, food.y))
+ view.append(self.check(x, y, -STEP, STEP, food.x, food.y))
+ view.append(self.check(x, y, STEP, 0, food.x, food.y))
+ view.append(self.check(x, y, 0, -STEP, food.x, food.y))
+ view.append(self.check(x, y, 0, STEP, food.x, food.y))
+ view.append(self.check(x, y, -STEP, 0, food.x, food.y))
+
+ if up == True:
+ view[6] = -999
+ elif down == True:
+ view[5] = -999
+ elif left == True:
+        view[4] = -999
+ elif right == True:
+        view[7] = -999
+ return view
+
+ def check(self, x, y, xIncrement, yIncrement, foodX, foodY):
+ value = 0
+ foodFound = False
+ bodyFound = False
+ while (x >= 0 and x <= WIN_WIDTH and y >= 0 and y <= WIN_HEIGHT):
+ x += xIncrement
+ y += yIncrement
+ if (not foodFound):
+ if (foodX == x and foodY == y):
+ foodFound = True
+ if (not bodyFound):
+ for i in range(1, len(self.body)):
+ if ((x == self.body[i]['x']) and (y == self.body[i]['y'])):
+ bodyFound = True
+ if (not bodyFound and not foodFound):
+ value = 0
+ elif (not bodyFound and foodFound):
+ value = 1
+ elif (bodyFound and not foodFound):
+ value = 2
+ else:
+ value = 3
+ return value
+
+
+I am using sigmoid as the activation function, although I have tried tanh and ReLU as well, with no luck. Below is the NEAT config file that I am using:
+
+[NEAT]
+fitness_criterion = max
+fitness_threshold = 10000
+pop_size = 200
+reset_on_extinction = False
+
+[DefaultGenome]
+# node activation options
+activation_default = sigmoid
+activation_mutate_rate = 0.0
+activation_options = sigmoid
+
+# node aggregation options
+aggregation_default = sum
+aggregation_mutate_rate = 0.0
+aggregation_options = sum
+
+# node bias options
+bias_init_mean = 0.0
+bias_init_stdev = 1.0
+# was 30 max and -30 for min bias
+bias_max_value = 100.0
+bias_min_value = -100.0
+bias_mutate_power = 0.5
+bias_mutate_rate = 0.7
+bias_replace_rate = 0.3
+
+# genome compatibility options
+compatibility_disjoint_coefficient = 1.0
+compatibility_weight_coefficient = 0.5
+
+# connection add/remove rates
+conn_add_prob = 0.8
+conn_delete_prob = 0.56
+
+# connection enable options
+enabled_default = True
+# below was 0.01
+enabled_mutate_rate = 0.3
+
+feed_forward = True
+initial_connection = full
+
+# node add/remove rates
+node_add_prob = 0.7
+node_delete_prob = 0.4
+
+# network parameters
+num_hidden = 0
+num_inputs = 24
+num_outputs = 4
+
+# node response options
+response_init_mean = 1.0
+response_init_stdev = 0.0
+response_max_value = 30.0
+response_min_value = -30.0
+response_mutate_power = 0.0
+response_mutate_rate = 0.0
+response_replace_rate = 0.0
+
+# connection weight options
+weight_init_mean = 0.0
+weight_init_stdev = 1.0
+weight_max_value = 30
+weight_min_value = -30
+weight_mutate_power = 0.5
+weight_mutate_rate = 0.8
+weight_replace_rate = 0.1
+
+[DefaultSpeciesSet]
+compatibility_threshold = 3.0
+
+[DefaultStagnation]
+species_fitness_func = max
+max_stagnation = 20
+species_elitism = 2
+
+[DefaultReproduction]
+elitism = 2
+survival_threshold = 0.2
+
+
+If anyone has any insights or thoughts that could help improve the performance of the snake AI, please let me know.
+"
+"['applications', 'data-science', 'minimax', 'alpha-beta-pruning']"," Title: Can alpha-beta pruning be used for applications apart from games?Body: Can alpha-beta pruning/ minimax be used for systems apart from games? Like for selecting the right customer for a product, etc. (the typical data science problems)? I have seen people do it, but can't understand how. Can someone help me understand that?
+
+Could I do something like this: find two criteria on which a customer's decision to buy the product depends, such as gender and age, and then find, for each customer, the probability of buying the product depending on age and gender?
+
+For example, if there are 3 customers, their probabilities of buying a product on the basis of their age and gender are: Customer 1 - (20%, 30%), Customer 2 - (30%, 60%), Customer 3 - (40%, 20%). Here the pair (x, y) represents (probability based on age, probability based on gender), where each probability is the probability of buying the product.
+
+For minimax, would it be correct if one player (max) tries to select the customer on the basis of gender and the other player (min) on the basis of age? So, one can be max and one can be min.
+
+I don't know if this is correct or not; it's just an idea.
+"
+"['social', 'neo-luddism', 'risk-management']"," Title: What are the societal risks associated with AI?Body: What are the actual risks to society associated with the widespread use of AI? Outside of the use of AI in a military context.
+
+I am not talking about accidental risks or unintentional behaviour - eg, a driver-less car accidentally crashing.
+
+And I am not talking about any transitional effects when we see the use of AI being widespread and popular. For instance I have heard that the widespread use of AI will make many existing jobs redundant, putting many people out of work. However this is true of any major leap forward in technology (for example the motor car killed off the stable/farrier industries). The leaps forward in technology almost always end up creating more jobs than were lost in the long run.
+
+I am interested in long term risks and adverse effects stemming directly from the widespread use of AI in a non-military sense. Has anybody speculated on the social or psychological impacts that AI will produce once it has become popular?
+"
+"['reinforcement-learning', 'value-functions', 'pomdp', 'stochastic-policy', 'value-based-methods']"," Title: Is it possible for value-based methods to learn stochastic policies?Body: Is it possible for value-based methods to learn stochastic policies? I'm trying to get a clear picture of the different categories for RL algorithms, and while doing so I started to think about settings where the optimal policy is stochastic (POMDP), and if it is possible to learn this policy for the "traditional" value-based methods
+If it is possible, what are the most common methods for doing this?
+"
+"['recurrent-neural-networks', 'transformer', 'prediction', 'attention', 'sequence-modeling']"," Title: Why is the transformer for time series forecasting faster than RNN?Body: I've been reading different papers which implements the Transformer for time series forecasting. Most of the them are claiming that the training time is significantly faster then using a normal RNN. From my understanding when training such a model, you can encode the input in parallel, but the decoding is still sequential unless you're using teacher-forcing.
+
+What makes the transformer faster than RNN in such a setting? Is there something that I am missing?
+"
+"['neural-networks', 'machine-learning', 'math', 'generative-adversarial-networks', 'generative-model']"," Title: Why does the discriminator minimize the cross-entropy while the generator maximize it?Body: In his original GAN paper Goodfellow gives a game theoretic perspective for GANs:
+
+\begin{equation}
+\underset{G}{\min}\, \underset{D}{\max}\, V\left(D,G \right) =
+\mathbb{E}_{x\sim\mathit{p}_{\textrm{data}}\left(x \right)} \left[\textrm{log}\, D \left(x \right) \right]
++ \mathbb{E}_{z\sim\mathit{p}_{\textrm{z}}\left(z \right)} \left[\textrm{log} \left(1 - D \left(G \left(z \right)\right)\right) \right]
+\end{equation}
+
+I think I understand this formula, at least it makes sense to me.
+What I don't understand is that he writes in his NIPS tutorial:
+
+
+ In the minimax game, the discriminator minimizes a cross-entropy, but the generator maximizes the same cross-entropy.
+
+
+Why does he write that the discriminator minimizes the cross-entropy while the generator maximizes it? Shouldn't it be the other way around? At least that is how I understand $\underset{G}{\min}\, \underset{D}{\max}\, V\left(D,G \right)$.
+
+I guess this shows that I have a fundamental error in my understanding. Could anyone clarify what I'm missing here?
+"
+"['deep-learning', 'comparison', 'object-detection', 'yolo', 'center-net']"," Title: What are the differences between Yolo v1 and CenterNet?Body: I recently read a new paper (late 2019) about a one-shot object detector called CenterNet. Apart from this, I'm using Yolo (V3) one-shot detector, and what surprised me is the close similarity between Yolo V1 and CenterNet.
+
+First, both frameworks treat object detection as a regression problem: each of them outputs a tensor that can be seen as a grid with cells (below is an example of an output grid).
+
+
+
+Each cell in this grid predicts an object class, a box offset relative to the cell's position, and a box size. The only major difference between Yolo V1 and CenterNet is that Yolo also predicts an object confidence score, which is represented in CenterNet by the class score. Yolo also predicts 2 boxes.
+
+In brief, the tensor at one cell position is Class + B x (Conf + Size + Off)
for Yolo V1 and Class + Size + Off
for CenterNet.
+
+The training strategy is quite similar too. Only the cell containing the center of a ground truth is responsible for that detection and thus affects the loss. Cells near the ground truth's center (based on the distance for CenterNet and the IoU for Darknet) have a reduced penalty in the loss (focal loss for CenterNet vs. a tuned hyperparameter for Yolo).
+
+The loss functions have nearly the same structure (see above), except that the L1 loss is preferred in CenterNet while Yolo uses L2, among other subtleties.
+
+
+
+My point is not that Yolo V1 and CenterNet are the same — they are not — but they are far closer than it appears at first glance.
+
+The problem is that recent papers like CenterNet (CornerNet, ExtremeNet, Triplet CenterNet, MatrixNet) all claim to be ""keypoint-based detectors"", while they are not so different from regular ""anchor-based"" detectors (which are, in fact, preconditioned regressors).
+
+Instead, I think that the biggest difference between Yolo and CenterNet is the backbone, which has a bigger output resolution for CenterNet (64x64), while Darknet has only 7 or 8.
+
+
+
+My question is: do you see a major difference between the two concepts that I may have missed and that could explain the performance gap? I understand that new backbones, new loss functions, and better resolutions can improve the accuracy, but is there a structural difference between the two approaches?
+"
+"['history', 'ai-winter']"," Title: What happened after the second AI winter?Body: I know this is a very general question, but I'm trying to illustrate this topic to people who are not from the field, and also my understanding is very limited since I'm just a second-year physics student with a basic understanding of R and Python. My point is, I'm not trying to say anything wrong here.
+
+So according to Wikipedia, after the second AI winter, which happened because expert systems didn't match expectations of the general public and of scientists, AI made a recovery ""due to increasing computational power (see Moore's law), greater emphasis on solving specific problems, new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards"".
+
+What I'm trying to understand now is whether the rise of AI is rather connected to greater computational power available to the public or whether there have been fundamental mathematical advances that I'm not aware of. If the latter is the case (because according to my understanding, the mathematical models behind neural networks are rooted in the 70s and 80s), I would appreciate examples.
+
+Again, please don't be offended by the general character of this question, I know it is probably really hard to answer correctly, however, I'm just trying to give a short historic introduction to the field to a lay audience and wanted to be clear in that regard.
+"
+"['machine-learning', 'reinforcement-learning']"," Title: Why ""Exploratory moves do not result in any learning""?Body: In Chapter 1 of the book Reinformcement Learning An Introduction 2nd Edition by Richard S. Sutton and Andrew G.Barto, there is one statement ""Exploratory moves do not result in any learning"".
+
+This sentence is in Figure 1.1.
+
+
+ Figure 1.1: A sequence of tic-tac-toe moves. The solid black lines represent the moves taken during a game; the dashed lines represent moves that we (our reinforcement learning player) considered but did not make. Our second move was an exploratory move, meaning that it was taken even though another sibling move, the one leading to e⇤, was ranked higher. Exploratory moves do not result in any learning, but each of our other moves does, causing updates as suggested by the red arrows in which estimated values are moved up the tree from later nodes to earlier nodes as detailed in the text.
+
+
+This confuses me. In my understanding, exploration should contribute to learning in almost all RL algorithms. So, why does the book state ""Exploratory moves do not result in any learning"" in this case?
+"
+"['machine-learning', 'convolutional-neural-networks', 'classification', 'geometric-deep-learning', 'graph-theory']"," Title: Is there any time-varying directed graph dataset?Body: I am interested in the node classification task for graph data. So far,I've tried it with the Cora dataset, but it is an undirected graph and has word attributes as features. I want to extend this task to a time-varying directed graph. Does anybody know about this kind of dataset?
+"
+"['machine-learning', 'training', 'neurons', 'java']"," Title: Generate Image with Artificial intelligenceBody: I am pretty new to Artificial Intelligence programming, however i do understand the basic concept.
+I have an idea in my mind:
+
+Import a JPEG image.
+Convert this image into a 2D array (x, y values + RGB values).
+Then create a second array with the same (x, y) values with the RGB values all set to (0, 0, 0).
+Now I want to build an AI layer which will try to lower the error between the arrays until they are equal (the RGB values in the second array are equal to those in the first array, i.e., error 0).
+I would prefer to do it in Java. Any suggestions for libraries or examples that can help me get started? Thanks for any help.
+"
+"['reinforcement-learning', 'deepmind']"," Title: Deepmind Spriteworld run_demo.py not foundBody: I'm trying to run the Deepmind Spriteworld demo described on the project's GitHub page, but I'm not finding run_demo.py
in the distribution and the closest sounding file, demo_ui.py
doesn't launch a UI when run (tried both on Linux and Windows).
+
+How should the Deepmind Spriteworld demo UI be launched?
+"
+"['machine-learning', 'deep-learning', 'natural-language-processing', 'python', 'image-processing']"," Title: How to Extract Information from the ImageBody: I'm trying to extract some particular information from the image(png).
+
+I tried to extract the text using the below code
+
+import cv2
+import pytesseract
+import os
+from PIL import Image
+import sys
+import numpy as np  # needed for np.ones below
+
+def get_string(img_path):
+ # Read image with opencv
+ img = cv2.imread(img_path)
+
+ # Convert to gray
+ img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
+ # Apply dilation and erosion to remove some noise
+ kernel = np.ones((1, 1), np.uint8)
+ img = cv2.dilate(img, kernel, iterations=1)
+ img = cv2.erode(img, kernel, iterations=1)
+
+ # Write the image after apply opencv to do some ...
+ cv2.imwrite(""thres.png"", img)
+ # Recognize text with tesseract for python
+    result = pytesseract.image_to_string(Image.open(""thres.png""))
+    os.remove(""thres.png"")
+
+ return result
+
+if __name__ == '__main__':
+ from sys import argv
+
+ if len(argv)<2:
+ print(""Usage: python image-to-text.py relative-filepath"")
+ else:
+ print('--- Start recognize text from image ---')
+ for i in range(1,len(argv)):
+ print(argv[i])
+ print(get_string(argv[i]))
+ print()
+ print()
+
+ print('------ Done -------')
+
+
+But I want to extract data from particular fields.
+
+Such as
+
+
+ a) INVOICE NO.
+ b) CUSTOMER NO.
+ c) SUBTOTAL
+ d) TOTAL
+ e) DATE
+
+
+
+How can I extract the required information from the below image ""invoice""?
+
+PFB
+
+
+"
+"['machine-learning', 'convolutional-neural-networks', 'reference-request', 'image-segmentation', 'u-net']"," Title: What are the best algorithms for image segmentation tasks?Body: I recently started looking for networks that focus on image segmentation tasks related to biomedical applications. I could not miss the publication U-Net: Convolutional Networks for Biomedical Image Segmentation (2015) by Ronneberger, Fischer, and Brox. However, as deep learning is a fast-growing field and the article was published more than 4 years ago, I was wondering if anyone knows other algorithms that yield better results for image segmentation tasks? And if so, do they also use a U-shape architecture (i.e. contraction path then expansion path with up-conv)?
+"
+['reinforcement-learning']," Title: What is the double sample problem in reinforcement learning?Body: According to the SBEED: Convergent Reinforcement Learning with
+Nonlinear Function Approximation for convergent reinforcement learning, the Smoothed Bellman operator is a way to dodge the double sample problem? Can someone explain to me what the double sample problem is and how SBEED solves it?
+"
+"['fuzzy-logic', 'bayesian-probability']"," Title: What is the relationship between fuzzy logic and objective bayesian probability?Body: I understand fuzzy logic is a variant of formal logic where, instead of just 0 or 1, a given sentence may have a truth value in the [0..1] interval. Also, I understand that logical probability (objective bayesian) understands probability as an extension of logic, where uncertainity is taken into account. To me they sound rather similar (they both extend formal logic by modelling truth as a continuos interval between 0 and 1).
+
+My question is: what is the relationship between these two concepts? What is the difference, and what are the differences between the AI approaches based upon these two formal systems?
+"
+"['neural-networks', 'convolutional-neural-networks', 'image-processing']"," Title: Which neural network algorithms can be used to map motion vectors in image processing?Body: I'm working on finding out the motion vectors of objects in images. The inputs are the images of objects in motion. The outputs of neural network are the object name, direction of object vector and prediction of next vector change.
+
+There are different 3D ConvNets I'm considering as a baseline, like ReMotENet. I would appreciate it if you could recommend any interesting papers in the MoCap domain and any existing neural networks performing a similar task.
+"
+"['machine-learning', 'reference-request', 'papers', 'philosophy', 'books']"," Title: What are some books/papers that deal with fundamental and philosophical issues of ML and relate it to the global discourse of AIs?Body: In my experience, most of the time, when people talk about AI nowadays they mostly mean machine learning. Despite this, ML is usually seen as a mere technique to build high-performance software.
+I rarely see people discuss the foundational questions of it, such as: from which "philosophy" of AI did ML emerge? Why is ML compelling in AI research, if not by its performance? What are the fundamental differences between statistical/probabilistic AI and logical AI? For reference, this hasn't even been mentioned in my master-level course on machine learning. Even myself I used to have a distaste for ML because I thought it was just mindless data-crunching.
+But, lately, I've been reading through "Probability Theory: The Logic Of Science" and I'm starting to appreciate the theoretical side of ML, for instance, how Bayesian probability can be seen as a model of plausible reasoning in humans, and how probability theory extends logic (motivating, maybe, why probabilistic AIs were the next logical [no pun intended] step after logical AI). I would like now to delve deeper into the topic.
+What are some books/papers that deal with fundamental and philosophical issues of ML and relate it to the global discourse of AIs?
+"
+"['tensorflow', 'objective-functions', 'generative-adversarial-networks']"," Title: How to implement loss function of H-GAN modelBody: I was trying to implement the loss function of H-GAN. Here is my code . But it seem somethings wrong, maybe is recognition loss on z (EQ 9). I used the EQ 5 on MISO to calculate it. Here is my code:
+
+def recognition_loss_on_z(self,latent_code, r_cont_mu, r_cont_var):
+ eplison = (r_cont_mu - latent_code) / (r_cont_var+1e-8)
+ return -tf.reduce_mean(tf.reduce_sum(-0.5*tf.log(2*np.pi*r_cont_var+1e-8)-0.5*tf.square(eplison), axis=1))/(config.batch_size * config.latent_dim)
+
+
+And I calculated loss function:
+
+ self.z_mean, self.z_sigm = self.Encode(self.images)
+ self.z_x = tf.add(self.z_mean, tf.sqrt(tf.exp(self.z_sigm))*self.ep)
+
+ self.D_pro_logits, self.l_x_real, self.Q_y_style_given_x_real, continuous_mu_real, continuous_var_real = self.discriminator(self.images, training=True, reuse=False)
+ self.De_pro_tilde, self.l_x_tilde, self.Q_y_style_given_x_tidle, continuous_mu_tidle, continuous_var_tidle= self.discriminator(self.x_tilde, training=True, reuse = True)
+ self.G_pro_logits, self.l_x_fake, self.Q_y_style_given_x_fake, continuous_mu_fake, continuous_var_fake = self.discriminator(self.x_p, training=True, reuse=True)
+
+ tidle_latent_loss = self.recognition_loss_on_z(self.z_x, continuous_mu_tidle,continuous_var_tidle)
+ real_latent_loss = self.recognition_loss_on_z(self.z_x, continuous_mu_real,continuous_var_real)
+ fake_latent_loss = self.recognition_loss_on_z(self.zp, continuous_mu_fake,continuous_var_fake)
+
+
+And discriminator:
+
+ def discriminator(self, x_var,training=False, reuse=False):
+ with tf.variable_scope(""discriminator_recongnizer"") as scope:
+ if reuse==True:
+ scope.reuse_variables()
+ conv1 = tf.nn.leaky_relu(batch_normalization(conv2d(x_var, output_dim = 64 , kernel_size=6, name='dis_R_conv1'),training = training,name='dis_bn1', reuse = reuse), alpha =0.2)
+ conv2 = tf.nn.leaky_relu(batch_normalization(conv2d(conv1, output_dim = 128 , kernel_size=4, name='dis_R_conv2'),training = training ,name='dis_bn2', reuse = reuse), alpha =0.2)
+ conv3 = tf.nn.leaky_relu(batch_normalization(conv2d(conv2, output_dim = 128 , kernel_size=4, name='dis_R_conv3'),training = training,name='dis_bn3', reuse = reuse), alpha =0.2)
+ conv4 = conv2d(conv3, output_dim = 256 , kernel_size=4, name='dis_R_conv4')
+ lth_layer = conv4
+ conv4 = tf.nn.leaky_relu(batch_normalization(conv4, training=training, name='dis_bn4', reuse = reuse),alpha =0.2)
+ conv4 = tf.reshape(conv4,[-1, 256*8*8])
+ #Discriminator
+ with tf.variable_scope('discriminator'):
+ d_output = fully_connect(conv4, output_size=1, scope='dr_dense_2')
+ with tf.variable_scope('dis_q'):
+ fc_r = tf.nn.leaky_relu(batch_normalization(fully_connect(conv4, output_size=256 + config.style_classes, scope='dis_dr_dense_3'), training=training, name='dis_bn_fc_r', reuse=reuse), alpha=0.2)
+ continuous_mu = fully_connect(fc_r, output_size=256, scope='dis_dr_dense_mu')
+ continuous_var = tf.exp(fully_connect(fc_r, output_size=256, scope='dis_dr_dense_logvar'))
+ style_predict = fully_connect(fc_r, output_size=config.style_classes, scope='dis_dr_dense_y_style')
+
+ return d_output,lth_layer,style_predict,continuous_mu,continuous_var
+
+
+Does anyone have experience with this? Please tell me where I went wrong. Thank you so much, I really appreciate it!
+"
+"['neural-networks', 'deep-learning', 'hyperparameter-optimization', 'hidden-layers', 'hyper-parameters']"," Title: Is this idea to calculate the required number of hidden neurons for a single hidden layer neural network correct?Body: I have an idea to find the optimal number of hidden neurons required in a neural network, but I'm not sure how accurate it is.
+Assuming that it has only 1 hidden layer, it is a classification problem with 1 output node (so it's a binary classification task), has N input nodes for N features in the data set, and every node is connected to every node in the next layer.
+I'm thinking that, to ensure that the network is able to extract all of the useful relations in the data, every piece of data must be linked to every other piece of data, like in a complete graph. So, if you have 6 inputs, there must be 15 edges to make it complete. Any more, and it will be recomputing previously computed information; any fewer, and it will not be computing every possible relation.
+So, if a network has 6 input nodes, 1 hidden node, and 1 output node, there will be 6 + 1 connections. With 6 input nodes, 2 hidden nodes, and 1 output node, there will be 12 + 2 connections. With 3 hidden nodes, there will be 21 connections. Therefore, the hidden layer should have 3 hidden nodes to ensure all possibilities are covered.
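+
+To state the calculation precisely, this is what I am doing (a small sketch; n_inputs is the number of features, and I assume one hidden layer and a single output node):
+
+from math import comb, ceil
+
+def hidden_nodes_needed(n_inputs):
+    # edges of a complete graph over the input features
+    needed = comb(n_inputs, 2)
+    # with h hidden nodes there are n_inputs*h input->hidden edges plus h hidden->output edges,
+    # so we need the smallest h with h*(n_inputs + 1) >= needed
+    return ceil(needed / (n_inputs + 1))
+
+print(hidden_nodes_needed(6))   # 3, matching the 6-feature example above
+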
+This answer discusses another method. For the sake of argument, I've tried to keep both examples using the same data. If that method is applied with 6 input features, 1 output node, $\alpha = 2$, and 60 samples in the training set, it would result in a maximum of 4 hidden neurons. As 60 samples is very small, increasing this to 600 would result in a maximum of 42 hidden neurons.
+Based on my idea, I think there should be at most 3 hidden nodes, and I can't imagine any more being useful, but would there be any reason to go beyond 3 and up to 42, like in the second example?
+"
+"['convolutional-neural-networks', 'hyper-parameters', 'filters', 'pooling', 'max-pooling']"," Title: How many weights does the max-pooling layer have?Body: How many weights does the max-pooling layer have?
+For example, if there are 10 inputs, a pooling filter of size 2, stride 2, how many weights, including bias, does a max-pooling layer have?
+"
+"['neural-networks', 'recurrent-neural-networks']"," Title: Why RNNs often use just one hidden layer?Body: Did I get it right, that RNNs most often have just one hidden neuron layer? Is there a reason for that? Will RNNs with several hidden layers in each cell work worse?
+Thank you!!
+"
+"['reinforcement-learning', 'deep-rl', 'policy-gradients', 'rewards', 'actor-critic-methods']"," Title: What could be the cause of the drop in the reward in A3C?Body: The mean episodic reward is generally increasing, but it has spontaneous drops, and I'm not sure of their cause.
+
+
+
+The problem has a sparse reward, batch size=2000
, entropy_coefficient=0.1
, other hyper-parameters are pretty standard.
+
+Has anyone seen this kind of behavior? What could be the cause of these drops in the reward (not enough exploration, too-sparse rewards, the state not being expressive enough, etc.)?
+"
+"['convolutional-neural-networks', 'feature-extraction']"," Title: How can I use feature extraction in CNN with image segmentation?Body: I'm just started to learn about meta learning and CNN and in most paper that I've read they mention to have one CNN to feature extraction. These features will help the another network.
+
+I don't know what feature extraction is (I don't know what those features are), but I'm wondering if I can use it for image segmentation.
+
+The idea is to use the first network for feature extraction, without doing image classification, and pass those features to the other network.
+
+My question is:
+How can I use CNN feature extraction for image segmentation?
+"
+"['ai-design', 'tensorflow', 'keras', 'prediction']"," Title: Which model can I use for this problem with multiple inputs and outputs?Body: Which model is the most appropriate for this problem with multiple inputs and outputs?
+
+The data set is
+
+A1, A2, A3, A4, A5, A6, B1, B2, B3, B4
+
+
+where A1, A2, A3, A4, A5, A6
are the inputs and B1, B2, B3, B4
the outputs (this is what I want the model to predict).
+
+Would an LSTM be appropriate for this task? Any advice or hint would be much appreciated. Also, if anyone can share already-worked examples, it would really help me a lot.
+"
+"['machine-learning', 'ai-design', 'applications', 'prediction']"," Title: How do I predict the occurrence of rare events?Body: I am trying to predict crime. I have data with factors: location, keyword description of the crime, time crime occurred and so on. This is for crimes that occurred in the past.
+
+I would like to treat the prediction of crimes as a binary classification problem. In this model, the data I have collected would form the ""positive"" examples: they are all examples of a crime happening. However, I am unsure what to use for the negative examples.
+
+Obviously, most of the time there is no crime at the location, but can I use this as negative data? For example, if I know there was a crime at 7pm at location X, and no other crimes there, should I generate new negative data points for every hour except 7pm?
+
+Ideally, I want to create probabilities of crime based on a set of factors.
+"
+"['neural-networks', 'reference-request', 'linear-algebra', 'books', 'calculus']"," Title: Which linear algebra book should I read to understand vectorized operations?Body: I am reading Goodfellow's book about neural networks, but I am stuck in the mathematical calculus of the back-propagation algorithm. I understood the principle, and some Youtube videos explaining this algorithm shown step-by-step, but now I would like to understand the matrix calculus (so not basic calculus!), that is, calculus with matrices and vectors, but especially everything related to the derivatives with respect to a matrix or a vector, and so on.
+Which math book could you advise me to read?
+I specify I studied 2 years after the bachelor in math school (in French: mathématiques supérieures et spéciales), but did not practice for years.
+"
+"['convolutional-neural-networks', 'feature-extraction', 'meta-learning']"," Title: What are the features get from a feature extraction using a CNN?Body: I've just started to learn CNN and somewhere I have read if I remove the last FCL I will get the features extracted from the input image but... what are those features?
+
+Are they numbers? Labels? An image location (x,y) where there is a line.
+
+I want to use these features on a one shot network, but I can't imagine how to use them if I don't know what they are.
+"
+"['reinforcement-learning', 'policy-gradients', 'markov-decision-process', 'multi-armed-bandits', 'combinatorial-optimization']"," Title: Which solutions could I use to solve a multi-armed ""multi-bandit"" problem?Body: Problem
+I have 66 slot machines. For each of them, I have 7 possible actions/arms to choose from. At each trial, I have to choose one of 7 actions for each and every one of the 66 slots. The reward depends on the combination of these actions, but the slots are not equal, that is, pulling the same arm for different slots gives different results. I do not care about an initial state or feature vector, as the problem always starts from the same setting (it is not contextual). My reward depends on how I pull one of the 7 arms of all of the 66 bandits simultaneously, where, as said, each slot has its own unique properties towards the calculation of the total reward. Basically, the action space is a one-hot encoded 66x7 matrix.
+My solution
+I ignored the fact that I do not care about a feature vector or state, and I treated the problem using a deep NN with a basic policy-gradient algorithm, where I directly increase the probability of each action depending on the reward I get. The state simply does not change, so the NN always receives the same input. This solution does work effectively in finding an approximately optimal strategy; however, it is very computationally expensive, and something tells me I am overkilling the problem.
+However, I do not see how I could apply standard solutions to MAB, such as epsilon-greedy. I need simultaneity between the different "slot machines", and, if I just take each possible permutation as a different action, in order to explore them with greedy methods, I get way too many actions (in the order of $10^{12}$). I have not found in the literature something similar to this multi-armed multi-bandit problem and I am clueless if anything like that has ever been considered - perhaps I am overthinking it and this can be somehow reduced to a normal MAB?
+"
+"['recurrent-neural-networks', 'long-short-term-memory', 'hidden-layers', 'seq2seq', 'encoder-decoder']"," Title: What exactly is a hidden state in an LSTM and RNN?Body: I'm working on a project, where we use an encoder-decoder architecture. We decided to use an LSTM for both the encoder and decoder due to its hidden states. In my specific case, the hidden state of the encoder is passed to the decoder, and this would allow the model to learn better latent representations.
+Does this make sense?
+I am a bit confused about this because I really don't know what the hidden state is. Moreover, we're using separate LSTMs for the encoder and decoder, so I can't see how the hidden state from the encoder LSTM can be useful to the decoder LSTM because only the encoder LSTM really understands it.
+"
+"['reinforcement-learning', 'pytorch', 'implementation', 'sutton-barto', 'reinforce']"," Title: Should the policy parameters be updated at each time step or at the end of the episode in REINFORCE?Body: REINFORCE is a Monte Carlo policy gradient algorithm, which updates weights (parameters) of policy network by generating episodes. Here's a pseudo-code from Sutton's book (which is same as the equation in Silver's RL note):
+
+
+
+When I tried to implement this for my own problem, I found something strange. Here's an implementation from PyTorch's official GitHub:
+
+def finish_episode():
+ R = 0
+ policy_loss = []
+ returns = []
+ for r in policy.rewards[::-1]:
+ R = r + args.gamma * R
+ returns.insert(0, R)
+ returns = torch.tensor(returns)
+ returns = (returns - returns.mean()) / (returns.std() + eps)
+ for log_prob, R in zip(policy.saved_log_probs, returns):
+ policy_loss.append(-log_prob * R)
+ optimizer.zero_grad()
+ policy_loss = torch.cat(policy_loss).sum()
+ policy_loss.backward()
+ optimizer.step()
+ del policy.rewards[:]
+ del policy.saved_log_probs[:]
+
+
+I feel like there's a difference between the above two. In Sutton's pseudo-code, the algorithm updates $\theta$ for each step $t$, while the second code (PyTorch's) accumulates the loss and updates $\theta$ with the summation, i.e. after each episode.
+I tried to search for other implementations of REINFORCE, and I found that most of them follow the second form, updating after each generated episode.
+
+To check whether both give the same result, I changed the second code as
+
+def finish_episode():
+ R = 0
+ policy_loss = []
+ returns = []
+ for r in policy.rewards[::-1]:
+ R = r + args.gamma * R
+ returns.insert(0, R)
+ returns = torch.tensor(returns)
+ returns = (returns - returns.mean()) / (returns.std() + eps)
+ for log_prob, R in zip(policy.saved_log_probs, returns):
+ optimizer.zero_grad()
+ loss = -log_prob * R
+ loss.backward()
+ optimizer.step()
+
+...
+
+
+and run it, which gives a different result (if my code has no problem).
+So they are not the same, and I think the last one is closer to the original pseudo-code of REINFORCE. What am I missing here? Is it okay because the results are approximately the same? (I'm not sure about this claim.)
+
+However, in some sense, I think PyTorch's implementation is the right version of REINFORCE. In Sutton's pseudo-code, the episode is generated first, so I think $\theta$ shouldn't be updated at each step and should be updated after the total loss is computed. If $\theta$ is updated at each step, then such $\theta$ might be different from the original $\theta$ that was used to generate the episode.
+"
+"['terminology', 'definitions', 'transfer-learning', 'one-shot-learning']"," Title: Precise description of one-shot learningBody: I am working on classifying the Omniglot dataset, and the different papers dealing with this topic describe the problem as one-shot learning (classification). I would like to nail down a precise description of what counts as one-shot learning.
+It's clear to me that in one-shot classification, a model tries to classify an input into one of $C$ classes by comparing it to exactly one example from each of the $C$ classes.
+What I want to understand is:
+
+- Is it necessary that the model has never seen the input and the target examples before, for the problem to be called one-shot?
+
+- Goodfellow et al. describe one-shot learning as an extreme case of transfer learning, where only one labeled example of the transfer task is presented. So, does it mean they are considering the training process as a kind of continuous transfer learning? What has the model learned earlier that is being transferred?
+
+
+"
+"['recurrent-neural-networks', 'sequence-modeling']"," Title: Why do small datasets require more samples, while big datasets require fewer samples in negative sampling?Body: In the deep learning specialization course by Andrew Ng, in the video Sequence Models (minute 4:13), he says that in negative sampling we have to choose a sample of words from the corpus to train rather than choosing the whole corpus. But he said that, for smaller datasets, we need a bigger number of samples, for example, 5-20, and, for larger datasets, we need a smaller sample, for example, 2-5. By sample, I am referring to the number of words along with the target word we have taken to train the model.
+
+Why do small datasets require more samples, while big datasets require fewer samples?
+"
+"['neural-networks', 'hyper-parameters', 'neuroevolution', 'neural-architecture-search']"," Title: When using Neural Architecture Search, how are the hyper-parameters chosen?Body: I have read a lot about NAS, but I still do not understand one concept: When setting up a neural network, hyperparameters (such as the learning rate, dropout rate, batch size, filter size, etc.) need to be set up.
+In NAS, only the best architecture is decided, e.g. how many layers and neurons. But what about the hyperparameters? Are they randomly chosen?
+"
+"['machine-learning', 'deep-learning']"," Title: Why they use KL divergence in Natural gradient?Body: Natural gradient aims to do a steepest descent on the ""function"" space, a manifold that is independent from how the function is parameterized. It argues that the steepest descent on this function space is not the same as steepest descent on the parameter space. We should favor the former.
+
+Since, for example in a regression task, a neural net could be interpreted as a probability function (Gaussian with the output as mean and some constant variance), it is ""natural"" to form a distance on the manifold under the KL-divergence (and a Fisher information matrix as its metric).
+
+Now, if I want to be creative, I could use the same argument to use the ""squared distance"" between the outputs of the neural nets (distance between the means), which I think is not the same as the KL divergence.
+
+Am I wrong, or is it just another legitimate way? Perhaps not as good?
+"
+"['reinforcement-learning', 'python', 'datasets', 'control-problem', 'ddpg']"," Title: How to learn using DDPG in python solely using a timeseries datasetsBody: I have a lengthy timeseries datasets which contains several variables (from sensors etc) to be classified as actions or states. Providing they are successfully done, I want to learn a control policy using DDPG.
+But I have no knowledge of the environment.
+How can I learn my policy offline, using only these datasets, without having any model of the environment? After learning offline first, the policy can then be used to learn and control online later in a certain real-world environment.
+
+First, I know that experience buffer can be used to store the datasets. How should you set the buffer size in this case?
+From what I understand, DDPG needs lots of data to be used for learning.
+Should I build an environment model using the specified datasets? Or I don't really need this step?
+
+All of this will be implemented in Python, maybe with the help of other tools if needed. There are some implementations of DDPG available, so that is not the main problem, but they must be tweaked to solve my proposed problem. Normally, a DDPG implementation in Python requires a Gym environment as input, so I must change it to satisfy my needs, as I don't need Gym for my use case. These implementations are also online algorithms, so you need to interact directly with the environment model for the algorithm to work.
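+
+To make this concrete, here is a rough sketch of how I imagine pre-filling the replay buffer from the datasets before any interaction (the buffer class and the states/actions/rewards arrays are placeholders I assume to be extracted from my time-series data, not code from any specific library):
+
+class ReplayBuffer:
+    def __init__(self, capacity):
+        self.storage = []
+        self.capacity = capacity
+
+    def add(self, state, action, reward, next_state, done):
+        # store one transition tuple
+        if len(self.storage) < self.capacity:
+            self.storage.append((state, action, reward, next_state, done))
+
+# states, actions, rewards are assumed numpy arrays split out of the dataset beforehand
+buffer = ReplayBuffer(capacity=len(states))
+for t in range(len(states) - 1):
+    buffer.add(states[t], actions[t], rewards[t], states[t + 1], done=False)
+
+# DDPG would then sample minibatches from this buffer instead of interacting with an environment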
+
+Can someone help me tackle this problem or give me some advice regarding this? I can help giving more details if needed. Thank you.
+
+Regards
+"
+"['terminology', 'social', 'automated-machine-learning']"," Title: What does 'democratizing AI' exactly mean?Body: In my AI literature research, I often notice authors use the term 'democratizing AI', especially in the AutoML area. For example in the abstract (last sentence) of this paper:
+
+LEAF therefore forms a foundation for democratizing and improving AI, as well as making AI practical in future applications.
+
+I think I have an idea of what this means, but I would like to ask you for some more specific answers.
+"
+"['convolutional-neural-networks', 'architecture']"," Title: Are there well-established ways of mixing different inputs (e.g. image and numbers)?Body: I am interested in the possibility of having extra input along with the main data. For instance, a medical application that would rely mostly on an image: how could one also account for sex, age, etc.?
+
+It is certainly possible to put the output of a CNN and other data into, say, a densely connected network; but it seems inefficient. Are there well-established ways of doing something like this?
+"
+"['reinforcement-learning', 'terminology', 'papers', 'reward-functions', 'hindsight-experience-replay']"," Title: What is the difference between success rate and reward when dealing with binary and sparse rewards?Body: In OpenAI Gym "reward" is defined as:
+
+reward (float): amount of reward achieved by the previous action. The scale varies between environments, but the goal is always to increase your total reward.
+
+I am training Hindsight Experience Replay on Fetch robotics environments, where rewards are sparse and binary indicating whether or not the task is completed. The original paper implementing HER uses success rate as a metric in its plots, like so:
+
+On page 5 of the original paper, it is stated that the reward is binary and sparse.
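+For reference, my understanding of how such a sparse, binary reward is typically computed (a sketch based on a goal-distance threshold, not necessarily the exact Gym code) is:
+
+import numpy as np
+
+def compute_reward(achieved_goal, desired_goal, threshold=0.05):
+    # Sparse, binary reward: 0 if the goal is reached, -1 otherwise
+    distance = np.linalg.norm(achieved_goal - desired_goal)
+    return 0.0 if distance < threshold else -1.0
+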
+When I print the rewards obtained during a simulation of FetchReach-v1 trained with HER, I get the following values. The first column shows the reward and the second column shows the episode length.
+
+As can be seen, I am getting a reward at every time step; sometimes I get a $-1$ reward at every time step throughout the episode, for a total of $-50$. The maximum reward I can achieve throughout the episode is $0$.
+Therefore my question is: What is the reward obtained at each time-step? What does it represent and how is this different from the success rate?
+"
+"['training', 'pytorch', 'gpu']"," Title: Training network with 4 GPUs performance is not exactly 4 times over one GPU why?Body: Training neural network with 4 GPUs using pyTorch, performance is not even 2 times (btw 1 & 2 times) compare to using one GPU. From Nvidia-smi we see GPU usage is for few milliseconds and next 5-10 seconds looks like data is off-loaded and loaded for new executions (mostly GPU usage is 0%). Is there any way in pyTorch to improve the data upload and offload for the GPU execution.
+"
+"['reinforcement-learning', 'control-problem', 'control-theory']"," Title: Solving the dead time problem for control using reinforcement learningBody: There are several occasion that reinforcement learning can be used as a control mean.
+The action is, for example, the set target temperature (which in many cases changes with time) and the state is, for example, the current temperature and other variables. The policy is then the controller that is going to be learnt using reinforcement learning.
+
+As there is a dead time (input lag) and a time delay in the real world, how can one tackle this problem when using reinforcement learning as a control method? Thank you.
+"
+"['terminology', 'evolutionary-algorithms', 'agi']"," Title: What is the name of an AI whose primary goal is to create a better AI?Body: A general AI x creates another AI y which is better than x.
+
+y creates an AI better than itself.
+
+And so on, with each generation's primary goal to create a better AI.
+
+Is there a name for this?
+
+By better, I mean survivability, ability to solve new problems, enhance human life physically and mentally, and advance our civilization to an intergalactic civilization to name a few.
+"
+"['deep-learning', 'reinforcement-learning', 'open-ai']"," Title: How am I getting same results 30 times faster than in original HER paper?Body: I am reproducing the results from Hindsight Experience Replay by Andrychowicz et. al. In the original paper they present the results below, where the agent is trained for 200 epochs.
+
+200 epochs * 800 episodes * 50 time steps = 8,000,000 total time steps.
+
+
+
+I tried to reproduce the results, but instead of using 8 CPU cores, I am using 19 CPU cores.
+
+I train the FetchPickAndPlace for 120 epochs, but with only 50 episodes per epoch. Therefore 120 * 50 * 50 = 300,000 iterations. I present the curve below:
+
+
+
+and logger output for the first two epochs:
+
+
+
+Now, as can be seen from my tensorboard plot, after 30 epochs we get a steady success rate very close to 1. 30 epochs * 50 episodes * 50 time steps = 75,000 iterations. Therefore it took the algorithm 75,000 time steps to learn this environment.
+
+The original paper took approximately 50 * 800 * 50 = 2,000,000 time steps to achieve the same goal.
+
+How is it that in my case the environment was solved nearly 30 times faster? Are there any flaws in my workings above?
+
+NB: This was not a one off case. I tested again and got the same results.
+
+Post on Reddit: https://www.reddit.com/r/reinforcementlearning/comments/dpjwfu/getting_same_results_with_half_the_number_of/
+"
+"['neural-networks', 'machine-learning', 'reinforcement-learning', 'rewards']"," Title: Neural network for reinforcement learningBody: I’m using a simple neural network to solve a reinforcement learning problem.
+
+The configuration is:
+
+X-inputs: The current state
+Y-outputs: The possible actions
+
+Whenever the network yields a “good” solution, I “reward” the network by training it a number of times.
+
+Whenever the network yields a “bad” or “neutral” solution, I ignore it.
+
+This seems to be working somewhat, but from what I read, everyone else (in broad terms) seems to be using a two-network configuration for similar tasks (a policy network and a value network).
+
+Am I missing something? And are there any obvious caveats of the “single network” method I am using?
+
+Supplemental question: Are there other methods of “rewarding” a network, aside from simply training it?
+
+Thanks,
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'residual-networks', 'vgg']"," Title: Is a VGG-based CNN model sometimes better for image classfication than a modern architecture?Body: I have an image classification task to solve, but based on quite simple/good terms:
+
+
+- There are only two classes (either good or not good)
+- The images always show the same kind of piece (either with or w/o fault)
+- That piece is always filmed from the same angle & distance
+- I have at least 1000 sample images for both classes
+
+
+So I thought it should be easy to come up with a good CNN solution - and it was. I created a VGG16-based model with a custom classifier (Keras/TF). Via transfer learning I was able to achieve up to 100% validation accuracy during model training, so all is fine on that end.
+
+Out of curiosity and because the VGG-based approach seems a bit ""slow"", I also wanted to try it with a more modern model architecture as the base, so I did with ResNet50v2 and Xception. I trained both similarly to the VGG-based model, tried several hyperparameter modifications, etc. However, I was not able to achieve a validation accuracy better than 95% - so much worse than with the ""old"" VGG architecture.
+
+Hence my question is:
+
+
+ Given these ""simple"" (always the same) images and only two classes, is the VGG model probably a better base than a modern network like ResNet or Xception? Or is it more likely that I messed something up with my model or simply got the training/hyperparameters not right?
+
+"
+"['deep-learning', 'convolutional-neural-networks']"," Title: Positioning of batch normalization layer when converting strided convolution to convolution + blurpoolBody: I'm trying to replace the strided convolutions of Keras' MobileNet implementation with the ConvBlurPool
operation as defined in the Making Convolutional Networks Shift-Invariant Again paper. In the paper, a ConvBlurPool
is implemented as follows:
+
+$$
+Relu \circ Conv_{k,s} \rightarrow Subsample_s \circ Blur_m \circ Relu \circ Conv_{k,1}
+$$
+where k is the convolution's output kernels, s is the stride, m is the blurring kernel size, and the subsample+blur is implemented as a strided convolution with a constant kernel.
+
+My issues start when batch normalization enters the picture.
+In MobileNet, a conv block is defined as follows (omitting the zero-padding):
+
+$$
+Relu \circ BatchNorm \circ Conv_{k,s}
+$$
+
+I am leaning towards converting it to:
+
+$$
+Subsample_s \circ Blur_m \circ Relu \circ BatchNorm \circ Conv_{k,1}
+$$
+
+i.e., putting the BN before the activation as it's normally done. This is not equivalent though, because the first BN operates on the downsampled signal.
+
+Another possibility would be:
+
+$$
+BatchNorm \circ Subsample_s \circ Blur_m \circ Relu \circ Conv_{k,1}
+$$
+
+with the BN as last operation. This is also not equivalent, because now the BN comes after the ReLu.
+
+Is there any reason to prefer one option over the other? Are there any other options I'm not considering?
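+
+For concreteness, the first option would look roughly like this in Keras (a sketch; BlurPool2D is a placeholder for a fixed-kernel blur + subsample layer, which I would implement as a DepthwiseConv2D with a constant kernel):
+
+from tensorflow.keras import layers
+
+def conv_bn_relu_blurpool(x, k, s, blur_kernel=3):
+    # Option 1: Conv (stride 1) -> BN -> ReLU -> Blur -> Subsample
+    x = layers.Conv2D(k, 3, strides=1, padding='same', use_bias=False)(x)
+    x = layers.BatchNormalization()(x)
+    x = layers.ReLU()(x)
+    x = BlurPool2D(kernel_size=blur_kernel, stride=s)(x)  # placeholder layer
+    return x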
+"
+"['neural-networks', 'machine-learning']"," Title: Three step threshold in Facenet model of face recognitonBody: Suppose i trained the images of two people say Bob , Thomas .When i run the algorithm to detect the face of a totally different person from these two say John , then John is recognized as Bob or Thomas.How to avoid this ?
+
+I am studying a face recognition model on GitHub (link) which uses the FaceNet model. The problem is that when an unknown image (an image which is not in the training data set) is given to identify, it identifies the unknown person as one of the people in the data set. I searched on the web and I found I need to increase the threshold value. But only when I increase the threshold values to 0.99, 0.99, 0.99 does it reject the unknown image (the image of the person who is not in the data set), and sometimes it even rejects the image of a person who is in the dataset.
+
+I guess that, by increasing the threshold value, what we are ensuring is that an image is classified as one of the people in the training data only when they are close enough.
+
+How do I make changes so that the model works properly? And can someone explain the thresholds in the FaceNet model better?
+"
+"['machine-learning', 'classification']"," Title: How to handle classification with label updates?Body: Suppose that my task is to label news articles; that is, to classify which category a news article belongs to. Using the labelled data (with old labels) that I have, I have trained a model for this.
+
+For relevancy purposes, certain labels may be split into multiple new labels. For example, 'Sports' may split into 'Sports' and 'E-Sports'. Because of these new labels, I will need to retrain my model. However, my training data is labelled with the old labels. What can I do to address these 'label updates'?
+
+My idea: Perhaps use some unsupervised clustering method (K-means?) to split the data with the old labels into the new labels. (But how can we be certain which cluster has which new label?) Then use this 'updated' data to train a model. Is this correct?
+"
+"['deep-learning', 'tensorflow', 'keras', 'feedforward-neural-networks', 'multilayer-perceptrons']"," Title: How can we print weights per iteration in a simple feed forward MLP for an specific class?Body: im working on a project in which I have to make a multi-layer perceptron with two hidden layers with 3 nodes in each. The target value in my data contains 8 unique values/classes. One of the tasks states ""For the most popular class CYT plot weight values per iteration for the last layer (3 weights and bias)"". My question is ""does this statement make sense""? I can access the weights and biases of a layer but I don't get what are weight values for a specific class and how to access them
+"
+"['reinforcement-learning', 'rewards', 'markov-decision-process', 'environment', 'markov-property']"," Title: How to assign rewards in a non-Markovian environment?Body: I am quite new to the Reinforcement Learning domain and I am curious about something. It seems to be the case that the majority of current research assumes Markovian environments, that is, future states of the process depend only upon the present state, not on the sequence of events that preceded it. I was curious about how we can assign rewards when the Markovian property doesn't hold anymore. Do the state-of-the-art RL theory and research support this?
+"
+"['neural-networks', 'machine-learning', 'hyper-parameters', 'neural-architecture-search']"," Title: Which hyper-parameters are considered in neural architecture search?Body: I want to understand automatic Neural Architecture Search (NAS). I read already multiple papers, but I cannot figure out what the actual search space of NAS is / how are classical hyper-parameters considered in NAS?
+
+My understanding:
+
+- NAS aims to find a good performing model in the search space of all possible model architectures using a certain search- and performance estimation strategy.
+- There are architecture-specific hyper-parameters (in the most simple feed-forward network case) like the number of hidden layers, the number of hidden neurons per layer, as well as the type of activation function per neuron.
+- There are classical hyper-parameters like learning rate, dropout rate, etc.
+
+What I don't understand is:
+
+What exactly is part of the model architecture as defined above? Is it only the architecture-specific hyper-parameters or also the classical hyper-parameters? In other words, what is spanning the search space in NAS: Only the architecture-specific hyper-parameters or also the classical hyper-parameters?
+
+In case only the architecture-specific hyper-parameters are part of the NAS search space, what about the classical hyper-parameters? A certain architecture (with a fixed configuration of the architecture-specific hyper-parameters) might perform better or worse depending on the classical hyper-parameters - so not taking into account the classical hyper-parameters in the NAS search space might result in a non-optimal ultimate model architecture, or not?
+"
+"['terminology', 'papers', 'representation-learning', 'semi-supervised-learning', 'discriminative-model']"," Title: What does ""class-level discriminative feature representation"" mean in the paper ""Semi-Supervised Deep Learning with Memory""?Body: I am reading the paper Semi-Supervised Deep Learning with Memory (2018) by Yanbei Chen et al. The topic is the classification of images using semi-supervised learning. The authors use a term on page 2 in the middle of the page that I am not familiar with. They write:
+
+The key to our framework design is two-aspect: (1) the class-level discriminative feature representation and the network inference uncertainty are gradually accumulated in an external memory module; (2) this memorised information is utilised to assimilate the newly incoming image samples on-the-fly and generate an informative unsupervised memory loss to guide the network learning jointly with the supervised classification loss
+
+I am not sure what the term discriminative feature representation means.
+I know that a discriminative model determines the decision boundary between the classes, and examples include: Logistic Regression (LR), Support Vector Machine (SVM), conditional random fields (CRFs) and others.
+Moreover, I know that, in machine learning, feature learning or representation learning is a set of techniques that allows a system to automatically discover the representations needed for feature detection or classification from raw data.
+Any insights on the definition of this term much appreciated.
+"
+"['natural-language-processing', 'tensorflow', 'transformer', 'google', 'inference']"," Title: How to use TPU for real-time low-latency inference?Body: I use Google's Cloud TPU hardware extensively using Tensorflow for training models and inference, however, when I run inference I do it in large batches. The TPU takes about 3 minutes to warm up before it runs the inference. But when I read the official TPU FAQ, it says that we can do real-time inference using TPU. It says the latency is 10ms which for me is fast enough but I cannot figure out how to write code that does this, since every time I want to pass something for inference I have to start the TPU again.
+
+My goal is to run large Transformer-based Language Models in real-time on TPUs. I guessed that TPUs would be ideal for this problem. Even Google seems to already do this.
+
+Quote from the official TPU FAQ:
+
+
+ Executing inference on a single batch of input and waiting for the
+ result currently has an overhead of at least 10 ms, which can be
+ problematic for low-latency serving.
+
+"
+"['neural-networks', 'convolutional-neural-networks', 'u-net']"," Title: Would models like U-Net be able to segment objects which has label based on its surrounding context?Body: Suppose that we want to segment a red blob from the image, normally you will have a class for this red blob e.g. 0. And every red blob you detected will have a class of 0.
+
+But, in my case, I want the model to look at the surrounding context; e.g., if the red blob is surrounded by blue blobs, it should be classified as class 1 instead of 0, like in the following image.
+
+
+
+Is this something easily achievable with U-Net or other models (you can suggest others)?
+
+In my case, the context can be more difficult than this, e.g., if there are blue and green blobs surrounding it, it will have another class.
+"
+"['convolutional-neural-networks', 'classification', 'prediction']"," Title: CNN multi output scores and evaluationBody: I am building a CNN with two outputs. I still have to put time in the network itself, but I was trying to get a good evaluation/classification report of the results. My code is the following:
+
+scores = model.evaluate(data_test, [Y1_test, Y2_test], verbose=0)
+
+for i, j in zip(model.metrics_names, scores):
+ print(i,'=', j)
+
+
+Output:
+
+loss = 5.124477842579717
+Y1_output_loss = 1.3782909
+Y2_output_loss = 4.10769
+Y1_output_accuracy = 0.6304348
+Y2_output_accuracy = 0.54347825
+
+
+Not great, but that is not the point. My code for the classification report is as follows:
+
+Y1_pred, Y2_pred = model.predict(data_test)
+Y1_true, Y2_true = Y1_test.argmax(axis=-1), Y2_test.argmax(axis=-1)
+Y1_pred, Y2_pred = Y1_pred.argmax(axis=-1), Y2_pred.argmax(axis=-1)
+
+
+print(classification_report(Y1_true, Y1_pred))
+print(classification_report(Y2_true, Y2_pred))
+
+
+Output:
+
+Classification report Y1
+ precision recall f1-score support
+
+ 0 0.20 0.33 0.25 6
+ 3 0.00 0.00 0.00 3
+ 6 0.00 0.00 0.00 6
+ 8 0.00 0.00 0.00 2
+ 9 0.00 0.00 0.00 7
+ 10 0.03 0.50 0.06 2
+ 11 0.00 0.00 0.00 3
+ 12 0.00 0.00 0.00 7
+ 13 0.00 0.00 0.00 2
+ 14 0.00 0.00 0.00 7
+ 15 0.00 0.00 0.00 1
+
+ accuracy 0.07 46
+ macro avg 0.02 0.08 0.03 46
+weighted avg 0.03 0.07 0.04 46
+
+
+Classification report Y2
+ precision recall f1-score support
+
+ 0 0.00 0.00 0.00 9
+ 2 0.00 0.00 0.00 10
+ 3 0.15 1.00 0.26 7
+ 4 0.00 0.00 0.00 9
+ 5 0.00 0.00 0.00 6
+ 6 0.00 0.00 0.00 2
+ 7 0.00 0.00 0.00 3
+
+ accuracy 0.15 46
+ macro avg 0.02 0.14 0.04 46
+weighted avg 0.02 0.15 0.04 46
+
+
+Now the average accuracy is suddenly extremely low, so I have the feeling something isn't lining up correctly, but I don't see where.
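+
+One sanity check I am considering (a rough sketch, assuming the two heads were given names when the model was built) is to confirm the order in which predict returns the outputs and that the targets are still one-hot before the argmax:
+
+# Confirm the order in which predict returns the two heads
+print(model.output_names)            # expecting something like ['Y1_output', 'Y2_output']
+# Confirm the targets are one-hot encoded before taking argmax
+print(Y1_test.shape, Y2_test.shape)  # both should be (n_samples, n_classes)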
+
+Thank you all
+"
+['machine-learning']," Title: Best way to train Neural Network for Voice Commands?Body: I want to build a Voice Assistance using Tensorflow, Like Google Assistance, So that I can give commands like:
+
+Open Camera
+Send Message
+Play Music
+ETC
+
+
+I know I can use a pre-trained model for voice recognition, but this is not my problem. If I understand correctly, a neural network learns from your inputs and outputs and creates the best algorithm.
+
+So I want to know: is it possible to somehow train my network on my commands so that I don't have to hard-code them, because it is hard to remember so many commands?
+
+Has Google also hard-coded the commands for Google Assistant?
+
+Sorry, If I'm unable to explain this to you :P
+"
+"['papers', 'swarm-intelligence', 'notation', 'ant-colony-optimization']"," Title: What is the meaning of the square brackets in ant colony optimization?Body: I'm studying the paper "Minimizing Total Tardiness on a Single Machine Using Ant Colony Optimization" which has proposed to use Ant colony optimization to SMTWTP.
+According to this paper:
+
+Each artificial ant iteratively and independently decides which job to
+append to the sub-sequence generated so far until all jobs are
+scheduled, Each ant generates a complete solution by selecting a job $j$
+to be on the $i$-th position of the sequence. This selection process is
+influenced through problem-specific heuristic information called
+visibility and denoted by $\eta_{ij}$ as well as pheromone trails denoted by $\tau_{ij}$. The former is an indicator of how good the choice of that job
+seems to be and the latter indicates how good the choice of that job
+was in former runs. Both matrices are only two dimensional as a
+consequence of the reduction in complexity
+
+They have proposed this formula for the probability that job $j$ be selected to be processed on position $i$ (page 9 of the linked paper):
+$$
+\mathcal{P}_{i j}=\left\{\begin{array}{cl}
+\frac{\left[\tau_{i j}\right]^{\alpha}\left[\eta_{i j}\right]^{\beta}}{\sum_{h \in \Omega}\left[\tau_{i h}\right]^{\alpha}\left[\eta_{i h}\right]^{\beta}} & \text { if } j \in \Omega \\
+0 & \text { otherwise }
+\end{array}\right.\tag{1}\label{1}
+$$
+but I can't understand what $[]$ surrounding $\eta_{ij}$ and $\tau_{ij}$ indicates. Does it show that these values are matrices?
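+
+For concreteness, this is how I am currently computing the probabilities from that formula (a small numerical sketch with made-up values):
+
+import numpy as np
+
+alpha, beta = 1.0, 2.0
+tau = np.array([0.5, 0.2, 0.3])   # pheromone values tau_ij for one position i and jobs j = 0..2
+eta = np.array([0.9, 0.4, 0.7])   # visibility (heuristic) values eta_ij
+omega = [0, 1, 2]                 # jobs that are still unscheduled
+
+weights = (tau[omega] ** alpha) * (eta[omega] ** beta)
+p = weights / weights.sum()       # selection probabilities for this position
+print(p)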
+"
+"['machine-learning', 'reference-request', 'knowledge-representation', 'explainable-ai']"," Title: Who is working on explaining the knowledge encoded into machine learning models?Body: The thing about machine learning (ML) that worries me is that "knowledge" acquired in ML is hidden: we usually can't explain the criteria or methods used by the machine to provide an answer when we ask it a question.
+It's as if we asked an expert financial analyst for advice and he/she replied, "Invest in X"; then when we asked "Why?", the analyst answered, "Because I have a feeling that's the right thing for you to do." It makes us dependent on the analyst.
+Surely there are some researchers trying to find ways for ML systems to encapsulate and refine their "knowledge" into a form that can then be taught to a human or encoded into a much simpler machine. Who, if any, are working on that?
+"
+"['programming-languages', 'meta-rules']"," Title: What is a good language for expressing replacement or template rules?Body: Say I have a game like tic-tac-toe or chess. Or some other visual logic based problem.
+
+I could express the state of the game as a string. (or perhaps a 2D array)
+
+I want to be able to express the possible moves of the game as rules which change the string. e.g. replacement rules.
+
+I looked into regex as a possibility, but this doesn't seem powerful enough. For example, one can't have named patterns which one can use again (e.g. if I wanted to name a pattern called ""numbers_except_for_8"" to be used again).
+
+And it also should be able to express things like ""repeat if possible"".
+
+In other words I need some simple language to express rules of a game that has:
+
+
+- modularity
+- simpleness
+- can act on other rules (self referential)
+
+
+There are languages like LISP but these on the other hand seem too complicated. (Perhaps there is no simple language hence why the English language is so complicated).
+
+I did read once about a generalised board game solving software program which had a way to express the rules of a game. But I can't seem to find a reference to it anywhere.
+
+As an example rules for tic tac toe might be:
+
+Players-Turn:
+""Find a blank square""->""Put an X in it""->Oponent's turn
+
+Oponents-Turn:
+""Find a blank square""->""Put an O in it""->Player's turn
+
+So I think the ingredients for rules are: searching for patterns, determining if an object is of a particular type (which might be the same as the first ingredient), and replacing.
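+
+To illustrate the kind of thing I mean, here is a very rough sketch in Python (the board representation and rule format are just placeholders):
+
+import re
+
+board = '.........'  # 9 cells, '.' means blank
+
+def player_turn(board):
+    # 'Find a blank square' -> 'Put an X in it'
+    return re.sub(r'\.', 'X', board, count=1)
+
+def opponent_turn(board):
+    # 'Find a blank square' -> 'Put an O in it'
+    return re.sub(r'\.', 'O', board, count=1)
+
+board = player_turn(board)    # 'X........'
+board = opponent_turn(board)  # 'XO.......'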
+"
+['deep-learning']," Title: Why are researchers focused on deep learning based stereo depth/disparity methods instead of non deep learning ones?Body: In recent years if you are working on stereo depth/disparity algorithms, it seems like you will only ever get your paper accepted to CVPR/ICCV/ECCV if there's some deep learning involved in it. A lot of authors published their code on github and I've tried out multiple of them and here is what I observed. None of these deep learning based methods generalized well. Almost all methods trained on the KITTI dataset (street images) or the scene flow dataset (synthetic images). These methods perform well when the test data is similar to the training data, but fails miserably on other kinds of test data (e.g. close up human) whereas a classical traditional computer vision based method like PatchMatch would generate decent results. In my opinion, no matter how well these new deep learning methods perform on the KITTI benchmark, it's nearly useless in the real world.
+
+I understand deep learning has the potential to approximate any non-linear function when there's enough quality training data and unlimited computation, but ground-truth depth/disparity cannot be labeled by manual labor like a cat-dog classification problem. That means the ground-truth training data has to come from traditional computer vision algorithms, from hardware, or be synthetic. Traditional computer vision algorithms are not even close to perfect yet, but research on them has pretty much stalled because of deep learning. The ground truth of the KITTI dataset comes from a hardware LIDAR, but it's extremely sparse. If we align multiple LIDAR scans in order to form a dense result, that relies on some type of SLAM, which again relies on an imperfect traditional computer vision algorithm. There is no sign that hardware capable of generating accurate dense depth is coming out soon. As for synthetic data, it doesn't accurately represent real data.
+
+Since there isn't even a good way to obtain training data for stereo depth/disparity, why are researchers so fixated on building complex deep neural nets to solve stereo depth/disparity nowadays?
+"
+"['reinforcement-learning', 'optimization', 'actor-critic-methods', 'dynamic-programming', 'soft-actor-critic']"," Title: How does the automated temperature adjustment step work in Soft Actor-Critic?Body: In section 5 of the paper Soft Actor-Critic Algorithms and Applications, it is proposed an optimization problem to obtain an optimal temperature parameter $\alpha^*_t$. First, one uses the original evaluation and improvement steps to estimate $Q_t^*$ and $\pi_t^*$, and then one somehow solves the optimization problem:
+$$\alpha_t^* = \arg\min_{\alpha_t} \mathbb E _{a_t\sim\pi^*}\left[\alpha_t(-\log\pi_t^*(a_t|s_t;\alpha_t)-H)\right]\text .$$
+As far as I understand, we should use our current estimate of $\pi_t^*$ to solve that problem. Since it was obtained from a previous $\alpha_{t-1}^*$, in practice, it is not dependent on $\alpha_t$ and so the optimization problem becomes a linear problem with the only restriction being $\alpha_t\geq0$.
+Here comes my problem: under this rationale, if $\alpha_t$ is a scalar independent of both state $s_t$ and action $a_t$, the value of the cost function is just proportional to $\alpha_t$ and so the solutions are either $0$ or $\infty$, depending on the sign of the expected value (something similar happens if $\alpha_t^*=\alpha_t^*(s_t,a_t)$). However, the whole idea of introducing this parameter is to account optimally for the exploration of the policy.
+What is the correct way to solve this optimization problem along with the evaluation and improvement steps? I am particularly interested in the tabular case. Also, is there any explanation why they use a negative minimum entropy $H$ when the entropy is always positive?
+By the way, in the approximate case, the current official implementation seems to be doing just that: moving $\alpha_t^*$ up or down a little bit (closer to $\infty$ or 0, respectively), depending on the magnitude of the expected value. I guess one could do the same for the tabular case, modifying the $\alpha_t^*$ only a little bit in each step, but this seems rather suboptimal.
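+
+For reference, my reading of that practical nudge (a sketch of a gradient-based update, not necessarily the exact official code) is that the temperature is parameterised as exp(log alpha) and moved by one gradient step per update:
+
+import torch
+
+log_alpha = torch.zeros(1, requires_grad=True)   # alpha = exp(log_alpha) stays positive
+optimizer = torch.optim.Adam([log_alpha], lr=3e-4)
+target_entropy = -1.0                            # the minimum entropy H (a common heuristic is -dim(A))
+
+def alpha_step(log_pi):
+    # log_pi: log-probabilities of actions sampled from the current policy
+    alpha_loss = -(log_alpha.exp() * (log_pi + target_entropy).detach()).mean()
+    optimizer.zero_grad()
+    alpha_loss.backward()
+    optimizer.step()
+    return log_alpha.exp().item()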
+"
+"['machine-learning', 'signal-processing']"," Title: Analyzing vibration using machine learningBody: I would like a few suggestions on an idea that I have -
+
+I am trying to make a musical instrument (percussion) using just a PVC disc. I am hitting the disc in a variety of styles (in order to produce a variety of corresponding sounds), just like the way the actual percussion instrument is hit. I am converting the mechanical vibrations on the PVC disc to an electrical signal using a transducer, performing an FFT analysis of the different strokes, and trying to identify the stroke which was hit. Using this technique, I could get an accuracy of only 80 percent. I would like it to be extremely accurate (more than 95 percent recognition). I was using only frequency as the parameter to distinguish the sounds.
+
+Now, I am thinking that if I could use other parameters too in order to identify the stroke, I might be able to get the required accuracy. I am thinking of resorting to Machine Learning for this. I am kind of new to this and would like to know what I might need to know before I proceed with this idea.
+
+Any help would be greatly appreciated.
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'convolutional-neural-networks', 'feature-selection']"," Title: Should I use my redundant feature as an auxiliary output or as another input feature?Body: For example, given a face image, and you want to predict the gender. You also have age information for each person, should you feed the age information as input or should you use it as auxiliary output that the network needs to predict?
+
+How do I know analytically (instead of experimentally) which approach will be better? What is the logic behind this?
+"
+"['classification', 'tensorflow', 'keras']"," Title: The best way of classifying a dataset including classes with high similarity?Body: I have a dataset which has two very similar classes (men wrestling, women wrestling). I've used InceptionV3 as a classifier to solve the problem of classifying this dataset. Unfortunately, the accuracy of this classifier doesn't hit more than 70%. Is there any suggestion about how I can overcome this problem or any other similar problems?
+"
+"['neural-networks', 'machine-learning', 'reference-request', 'prolog', 'neurosymbolic-ai']"," Title: Has machine learning been combined with logical reasoning (for example, PROLOG)?Body: There are mainly two different areas of AI at the moment. There is the "learning from experience" based approach of neural networks. And there is the "higher logical reasoning" approach, with languages like LISP and PROLOG.
+Has there been much overlap between these? I can't find much!
+As a simple example, one could express some games in PROLOG and then use neural networks to try to play the game.
+As a more complicated example, one would perhaps have a set of PROLOG rules which could be combined in various ways, and a neural network to evaluate the usefulness of the rules (by simulation). Or even create new PROLOG rules. (Neural networks have been used for language generation of a sort, so why not the generation of PROLOG rules, which could then be evaluated for usefulness by another neural network?)
+As another example, a machine with PROLOG rules might be able to use a neural network to be able to encode these rules into some language that could be in turn decoded by another machine. And so express instructions to another machine.
+I think, such a combined system that could use PROLOG rules, combine them, generate new ones, and evaluate them, could be highly intelligent. As it would have access to higher-order logic. And have some similarity to "thinking".
+"
+"['comparison', 'swarm-intelligence', 'ant-colony-optimization']"," Title: What is the difference between the ant system and the max-min ant system?Body: I'm studying ant colony optimization. I'm trying to understand the difference between the ant system (AS) and the max-min ant system (MMAS) approaches. As far as I found out, the main difference between these 2 is that in AS the pheromone trail is updated after all ants have finished the tour (it means all ants participate in this update), but in MMAS, only the best ant updates this value. Am I right? Is there any other significant difference?
+"
+"['deep-learning', 'recurrent-neural-networks', 'feedforward-neural-networks', 'multilayer-perceptrons', 'long-short-term-memory']"," Title: Why use a recurrent neural network over a feedforward neural network for sequence prediction?Body: If recurrent neural networks (RNNs) are used to capture prior information, couldn't the same thing be achieved by a feedforward neural network (FFNN) or multi-layer perceptron (MLP) where the inputs are ordered sequentially?
+
+Here's an example I saw where the top line of each section represents letters typed and the next row represents the predicted next character (red letters in the next row means a confident prediction).
+
+
+
+Wouldn't it be simpler to just pass the $X$ number of letters leading up to the last letter into an FFNN?
+
+For example, if $X$ equaled 4, the following might be the input to the FFNN
+
+S, T, A, C => Prediction: K
+
+"
+"['deep-learning', 'datasets']"," Title: Dataset for floating objects detectionBody: I am looking for a dataset, which I could train a model to detect people/boats/surfboards, etc., from a drone view.
+Has anyone seen a dataset that could be useful for this purpose?
+I have some photos that I took myself (like the one below), but I need more data. Of course, it would be best if the data were labeled, but if someone has seen an unlabeled dataset with videos/photos like the ones below, please share the link to it.
+Sample photos I am looking for:
+
+
+"
+"['machine-learning', 'terminology', 'supervised-learning']"," Title: What does ""immediate vector-valued feedback"" mean?Body: In the book Artificial Intelligence Engines: A Tutorial Introduction to the Mathematics of Deep Learning, James Stone says
+
+
+ With supervised learning, the response to each input vector is an output vector that receives immediate vector-valued feedback specifying the correct output, and this feedback refers uniquely to the input vector just received; in contrast, each reinforcement learning output vector (action) receives scalar-valued feedback often sometime after the action, and this feedback signal depends on actions taken before and after the current action.
+
+
+I fail to understand the part formatted in bold. Once we have a set of labeled examples (feature vector and label pairs), where is the ""feedback"" coming from? Testing and validation results of our calibrated model (say a neural network based one)?
+"
+"['monte-carlo-tree-search', 'alphazero', 'implementation', 'alphago-zero']"," Title: How is the rollout from the MCTS implemented in both of the AlphaGo Zero and the AlphaZero algorithms?Body: In the vanilla Monte Carlo tree search (MCTS) implementation, the rollout is usually implemented following a uniform random policy, that is, it takes random actions until the game is finished and only then the information gathered is backed up.
+I have read the AlphaZero paper (and the AlphaGo Zero too) and I didn't find any information on how the rollout is implemented (maybe I missed it).
+How is the rollout from the MCTS implemented in both the AlphaGo Zero and the AlphaZero algorithms?
+"
+"['neural-networks', 'deep-learning', 'deep-neural-networks', 'batch-normalization']"," Title: Is batch normalization not suitable for non-gaussian input?Body: I generate some non-Gaussian data, and use two kinds of DNN models, one with BN and the other without BN.
+
+I find that the DNN model with BN can't predict well.
+
+The code is shown as follows:
+
+
+
+import numpy as np
+import scipy.stats
+import matplotlib.pyplot as plt
+from keras.models import Sequential
+from keras.layers import Dense,Dropout,Activation, BatchNormalization
+
+np.random.seed(1)
+
+# generate non-gaussian data
+def generate_data():
+ distribution = scipy.stats.gengamma(1, 70, loc=10, scale=100)
+ x = distribution.rvs(size=10000)
+ # plt.hist(x)
+ # plt.show()
+ print ('[mean, var, skew, kurtosis]', distribution.stats('mvsk'))
+
+ y = np.sin(x) + np.cos(x) + np.sqrt(x)
+ plt.hist(y)
+ # plt.show()
+ # print(y)
+ return x ,y
+
+x, y = generate_data()
+
+x_train = x[:int(len(x)*0.8)]
+y_train = y[:int(len(y)*0.8)]
+x_test = x[int(len(x)*0.8):]
+y_test = y[int(len(y)*0.8):]
+
+
+def DNN(input_dim, output_dim, useBN = True):
+ '''
+    Define a DNN model
+ '''
+ model=Sequential()
+
+ model.add(Dense(128,input_dim= input_dim))
+ if useBN:
+ model.add(BatchNormalization())
+ model.add(Activation('tanh'))
+ model.add(Dropout(0.5))
+
+ model.add(Dense(50))
+ if useBN:
+ model.add(BatchNormalization())
+ model.add(Activation('tanh'))
+ model.add(Dropout(0.5))
+
+ model.add(Dense(output_dim))
+ if useBN:
+ model.add(BatchNormalization())
+ model.add(Activation('relu'))
+
+ model.compile(loss= 'mse', optimizer= 'adam')
+ return model
+
+clf = DNN(1, 1, useBN = True)
+clf.fit(x_train, y_train, epochs= 30, batch_size = 100, verbose=2, validation_data = (x_test, y_test))
+
+y_pred = clf.predict(x_test)
+def mse(y_pred, y_test):
+ return np.mean(np.square(y_pred - y_test))
+print('final result', mse(y_pred, y_test))
+
+
+The input x is distributed like this:
+
+
+
+If I add BN layers, the result is shown as follows:
+
+Epoch 27/30
+ - 0s - loss: 56.2231 - val_loss: 47.5757
+Epoch 28/30
+ - 0s - loss: 55.1271 - val_loss: 60.4838
+Epoch 29/30
+ - 0s - loss: 53.9937 - val_loss: 87.3845
+Epoch 30/30
+ - 0s - loss: 52.8232 - val_loss: 47.4544
+final result 48.204881459013244
+
+
+If I don't add BN layers, the predicted result is better:
+
+Epoch 27/30
+ - 0s - loss: 2.6863 - val_loss: 0.8924
+Epoch 28/30
+ - 0s - loss: 2.6562 - val_loss: 0.9120
+Epoch 29/30
+ - 0s - loss: 2.6440 - val_loss: 0.9027
+Epoch 30/30
+ - 0s - loss: 2.6225 - val_loss: 0.9022
+final result 0.9021717561981543
+
+
+Does anyone know the theory of why BN is not suitable for non-Gaussian data?
+"
+"['neural-networks', 'deep-learning', 'tensorflow', 'datasets']"," Title: TensorFlow 2.0 - Normalizing input to DNN (on structured data)Body: I have a structured dataset of around 100 gigs, and I am using DNN for classification in TF 2.0. Because of this huge dataset, I cannot load entire data in memory for training. So, I'll be reading data in batches to train the model.
+
+Now, the input to the network should be normalized, and for that I need the training dataset mean and SD. I have been reading the TensorFlow docs to get info on how to normalize features when reading data in batches, but couldn't find any. Though I found this article, it only covers the case where the entire dataset can be loaded in memory.
+
+So, if any of you have worked on creating such a TensorFlow data pipeline for normalizing input features while loading data in batches and training a model, it would be helpful.
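+
+For reference, the approach I am currently considering is a single streaming pass over the batched dataset to accumulate the statistics needed for the mean and standard deviation (a sketch; the dataset is assumed to yield batches of shape (batch_size, num_features)):
+
+import numpy as np
+
+def batched_mean_std(dataset):
+    n, s, ss = 0, 0.0, 0.0
+    for batch in dataset:                      # dataset is a tf.data.Dataset of batches
+        x = batch.numpy().astype(np.float64)
+        n += x.shape[0]
+        s += x.sum(axis=0)
+        ss += (x ** 2).sum(axis=0)
+    mean = s / n
+    std = np.sqrt(ss / n - mean ** 2)
+    return mean, std
+
+# The resulting mean/std could then be applied in dataset.map(lambda x: (x - mean) / std)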
+"
+"['neural-networks', 'machine-learning', 'prediction', 'matlab', 'time-series']"," Title: How can I test my trained network on the next unavailable hour?Body: I have data of 695 hours. I use the first 694 hours to train the network and I use 695th hour to validate it. Now my goal is to predict the next hour.
+
+How can I use my trained network to predict the next hour, that is, the 696th hour (which I do not have access to)?
+"
+"['neural-networks', 'convolutional-neural-networks', 'convolution']"," Title: Is it possible to vectorise a CNN?Body: I am trying to write a CNN from scratch and am wondering if it is possible to vectorize the convolution step.
+For example, if I had a dataset of 500 RGB images of size 32x32x3, and wanted the first convolutional layer to have 64 filters, how would I go about the vectorization of this layer?
+Currently, I am running through all 500 images in a for loop, convolving each one individually. I do this for all the images up to the flattening stage (where it essentially becomes a normal NN again), at which point I can implement the normal vectorisation approach to get to my output, etc.
+A holistic overview of the process would be appreciated, as I am struggling to get my head around it and am struggling to find any information on the matter online.
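+
+For reference, the vectorised approach I have been reading about is im2col: unroll every receptive field into a row so that the whole convolution becomes a single matrix multiplication over all images and filters. A rough numpy sketch (stride 1, no padding; names are placeholders):
+
+import numpy as np
+
+def conv2d_im2col(images, filters):
+    # images: (N, H, W, C), filters: (K, K, C, F)
+    N, H, W, C = images.shape
+    K, _, _, F = filters.shape
+    out_h, out_w = H - K + 1, W - K + 1
+    # Gather every K x K x C patch of every image into one big matrix
+    cols = np.empty((N, out_h, out_w, K * K * C))
+    for i in range(out_h):
+        for j in range(out_w):
+            cols[:, i, j, :] = images[:, i:i + K, j:j + K, :].reshape(N, -1)
+    # One matrix multiplication performs the convolution for all images and filters
+    out = cols.reshape(N * out_h * out_w, -1) @ filters.reshape(-1, F)
+    return out.reshape(N, out_h, out_w, F)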
+"
+"['machine-learning', 'computational-learning-theory', 'vc-dimension']"," Title: What is the maximum number of dichotomies in a square?Body: I am new to machine learning. I am reading this blog post on the VC dimension.
+
+$\mathcal H$ consists of all hypotheses in two dimensions $h: \mathbb{R}^2 \to \{-1, +1\}$, positive inside some square boxes and negative elsewhere.
+
+An example.
+
+
+
+My questions:
+
+
+- What is the maximum number of dichotomies for the 4 data points? I.e., calculate $m_{\mathcal H}(4)$.
+- It seems that the square can shatter 3 points but not 4 points. The VC dimension of a square is 3. What is the proof behind this?
+
+"
+"['neural-networks', 'machine-learning', 'algorithm-request', 'non-linear-regression']"," Title: Which AI technique is best suited to discovering non-linear relationships in data?Body: I am interested in exploring whether AI techniques can derive hidden patterns of relationships in a data set. For example, from among house size, lot size, age of house and asking price, what formula best predicts selling price?
+In explorations around how this might be done, I tried to use a neural network to solve for a predictable relationship between two variables to predict a third, so I trained my neural network with inputs consisting of the lengths of two sides of a triangle, and the result being the length of the hypotenuse. I couldn't get it to work.
+I was told by somebody who understands all this better than me that the reason it failed is because conventional neural networks are not good at modeling non-linear relationships.
+If that is true, I wonder if there is some other AI technique that could 'derive' a network modeling the Pythagorean theorem from a training data set with better results than a normal neural network?
+"
+"['convolutional-neural-networks', 'image-recognition', 'object-detection', 'image-segmentation', 'data-labelling']"," Title: Do models train better if the labelling information is more specific (or dense)?Body: I'm working on a project where there is a limited dataset of videos (about 200). We want to train a model that can detect a single class in the videos. That class can be of multiple different types of shapes (thin wire, a huge area of the screen, etc).
+There are three options on how we can label this data:
+
+- Image classification (somewhere in the image is this class)
+- Bounding box (in this area, there is the class)
+- Semantic segmentation (these pixels are the class)
+
+My assumption is that if the model was trained on semantic segmentation data it would perform slightly better than bounding box data. I'm also assuming it would perform way better than if the model only learned on image classification data. Is that correct?
+"
+"['neural-networks', 'computer-vision', 'image-processing', 'algorithm-request']"," Title: How can I ""measure"" an object using Computer Vision techniques and neural networks?Body: I would like to develop a neural network to measure the distance between two opposite sides of an object in an image (in a similar way that the fractional caliper tool measures an object).
+So, given an image of an object, the neural network should produce the depth or height of the object.
+Which computer vision techniques and neural networks could I use to solve this problem?
+"
+"['machine-learning', 'deep-learning', 'classification', 'image-recognition', 'keras']"," Title: Why is this ResNet50 misclassifying objects?Body: I'm new to Deep Learning, and I have some conceptual problems. I followed a simple tutorial here, and trained a model in Keras to do image classification on 10 classes of logos. I prepared 10 classes with each class having almost 100 images. My trained Resnet50
model performs exceptionally great when the image is one of those 10 logos, with 1.00 probability. But the problem is if I pass a non-logo item, a random image totally unrelated visually, still it marks it as one of those logos with close to 1.00 probability!
+
+I'm confused. Am I missing anything? Why is this happening? How do I find a solution? I need to find logos in video frames, but right now, with high probability, each frame is marked as a logo!
+
+Here is my simple training code:
+
+# Imports assumed from standalone Keras (adjust to tensorflow.keras if needed)
+from keras.layers import Flatten, Dense, Dropout
+from keras.models import Model
+from keras.optimizers import Adam
+from keras.callbacks import ModelCheckpoint
+
+def build_finetune_model(base_model, dropout, fc_layers, num_classes):
+ for layer in base_model.layers:
+ layer.trainable = False
+
+ x = base_model.output
+ x = Flatten()(x)
+ for fc in fc_layers:
+ # New FC layer, random init
+ x = Dense(fc, activation='relu')(x)
+ x = Dropout(dropout)(x)
+
+ # New softmax layer
+ predictions = Dense(num_classes, activation='softmax')(x)
+ finetune_model = Model(inputs=base_model.input, outputs=predictions)
+ return finetune_model
+finetune_model = build_finetune_model(base_model, dropout=dropout, fc_layers=FC_LAYERS, num_classes=len(class_list))
+adam = Adam(lr=0.00001)
+finetune_model.compile(adam, loss='categorical_crossentropy', metrics=['accuracy'])
+filepath=""./checkpoints/"" + ""ResNet50"" + ""_model_weights.h5""
+checkpoint = ModelCheckpoint(filepath, monitor=[""acc""], verbose=1, mode='max')
+callbacks_list = [checkpoint]
+
+history = finetune_model.fit_generator(train_generator, epochs=NUM_EPOCHS, workers=8,
+ steps_per_epoch=steps_per_epoch,
+ shuffle=True, callbacks=callbacks_list)
+
+plot_training(history)
+
+"
+"['human-like', 'brain']"," Title: Have any AI's been able to decode human vision 'thoughts'Body: I believe I saw an article about an AI that was able to decode human vision 'brain-waves' in real-time, which would create a blurry image of what the human was seeing.
+
+This AI Decodes Your Brainwaves and Draws What You're Looking at
+
+Is anyone aware where I can find this?
+"
+"['computer-vision', 'image-segmentation']"," Title: Segmentation of a static object in a videoBody: I've videos from a mounted camera on a helmet and the manually segmented labels (mask) of them.
+
+The mask is valid through the entire video; only the scene varies.
+In different videos the camera is mounted differently on top of the helmet.
+
+Things I've tried:
+
+
+- Training semantic segmentation on frame-mask pairs
+- Training semantic segmentation of concatenated frames with the mask.
+- Averaging consecutive frames and calculating the time-wise std of the pixels, and feeding the NN with this as an input.
+- ensembling (averaging) segmentation results from N frames
+- Usage of classic background subtraction techniques such as MOG2 (worse)
+
+
+Although DNN models achieve 99% accuracy during training, in some of my test videos the model is missing a large part of the helmet.
+
+I'm certain this task can achieve ~100% accuracy even for never-before-seen examples.
+
+Do you have some ideas?
+"
+"['applications', 'knowledge-representation']"," Title: Is the Assumption-based Truth Maintenance System still used?Body: Is the Assumption-based Truth Maintenance System still used to maintain consistency while explicitly accounting for assumptions?
+"
+"['machine-learning', 'perceptron', 'books', 'xor-problem']"," Title: Which part of ""Perceptrons: An Introduction to Computational Geometry"" tells that a perceptron cannot solve the XOR problem?Body: In the book ""Perceptrons: An Introduction to Computational Geometry"" by Minsky and Papert (1969), which part of this book tells that a single-layer perceptron could not solve the XOR problem?
+
+I have already scanned it, but I did not find that part. Or am I missing something?
+"
+"['reinforcement-learning', 'convergence', 'function-approximation', 'temporal-difference-methods', 'on-policy-methods']"," Title: Convergence of semi-gradient TD(0) with non-linear function approximationBody: I am looking for a result that shows the convergence of semi-gradient TD(0) algorithm with non-linear function approximation for on-policy prediction. Specifically, the update equation is given by (borrowing notation from Sutton and Barto (2018))
+
+$$\mathbf w \leftarrow \mathbf w +\alpha [R + \gamma \hat v(S', \mathbf w) - \hat v(S, \mathbf w)] \nabla \hat v(S, \mathbf w)$$
+
+where $\hat v(S, \mathbf w)$ is the approximate value function parameterized by $\mathbf w$.
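+
+For concreteness, here is a minimal sketch of this update with a toy non-linear approximator (my own illustration, not from the book); the bootstrap target is treated as a constant, which is what makes this a semi-gradient method:
+
+import numpy as np
+
+def v_hat(s, w):
+    # toy non-linear approximator: tanh of a linear combination of the features
+    return np.tanh(w @ s)
+
+def grad_v_hat(s, w):
+    # gradient of v_hat with respect to w
+    return (1.0 - np.tanh(w @ s) ** 2) * s
+
+def td0_update(w, s, r, s_next, alpha=0.01, gamma=0.99):
+    # semi-gradient: the target r + gamma * v_hat(s_next, w) is not differentiated
+    td_error = r + gamma * v_hat(s_next, w) - v_hat(s, w)
+    return w + alpha * td_error * grad_v_hat(s, w)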
+
+Sutton and Barto (2018) mention that the above update equation converges when $\hat v$ is linear in $\mathbf w$. But I couldn't find a similar result for non-linear function approximation. Any help would be greatly appreciated.
+"
+['neural-networks']," Title: Can a Video Game Characters Behavior be directed by a NN?Body: So, I’m looking into some dynamic ways in which one can drive the behavior of a video game character. Specifically an NPC (Non playable character) that will be observable from the players point of view. Something I’d like to clarify from the start is what I mean by behavior. Since video games are understood visually, then I would qualify behavior to be anything visual, such as gestures, mannerisms or actions in any local space.
+
+Let’s take a common archetype as an example. We’ll say we want the behavior of a villain. My first thought, was to use videos as training data. Videos of specific subjects or actors in a villainous scene (think Frankenstein, Dracula, Emperor Palpatine in Star Wars etc...) in hopes that an understanding of their mannerisms, body language and gestures could be captured and later applied to 3D animation dynamically.
+
+I do understand that anything 3D typically requires rigging and animation. I’m currently not exactly sure how to marshal the data from one format (video analyses) to 3D animation. I thought I’d start working on the concept from high level first.
+
+Any thoughts?
+"
+"['neural-networks', 'objective-functions', 'facial-recognition']"," Title: What is the formula used to calculate the loss in the FaceNet model?Body: The FaceNet model returns the loss of the predictions and ground-truth classes. How is this loss calculated?
+"
+"['social', 'human-like', 'mythology-of-ai']"," Title: What are methods human actors use to imitate robots?Body: Robot technology is usually thought from an engineering perspective. A human programmer writes a software this executed in a robot who is doing a task.
+
+But what would happen if the project were started with the opposite goal? The idea is that the human becomes the robot himself. That means the human uses makeup to make his face look more mechanical, buys special futuristic clothing which reflects the light, and imitates the workings of a kitchen robot in a role play.
+
+What are methods human actors use to imitate robots?
+"
+"['reinforcement-learning', 'sutton-barto', 'notation']"," Title: Sutton & Barto's notation $V_{t+n}$ in Chapter 7: $n$-step BootstrappingBody: Until Chapter 6 of Sutton & Barto's book on Reinforcement Learning, the authors use $V$ for the current estimate of a state value. Equation (6.1), for example, shows:
+
+$$ V(S_t) \leftarrow V(S_t) + \alpha[G_t - V(S_t)]\ \ \ \ \ \ (6.1)$$
+
+However, on Chapter 7 they add a subscript to $V$. The first time this appears is on page 143 when they define the return from $t$ to $t+1$:
+
+$$ G_{t:t+1} \dot{=} R_{t+1} + \gamma V_t(S_{t+1})$$
+
+and say that $V_t : \mathcal{S} \rightarrow \mathbb{R}$ is ""the estimate at time $t$ of $v_\pi$.""
+
+At first I thought I understood this as a natural consequence of considering $n$ steps ahead in the future and needing an extra index to go over the $n$ steps. But then this stopped making sense when I realized that an estimate for a state must be consolidated, no matter which of the $n$ steps it is coming from. After all, a state $s$ has a single value to estimate, $v_\pi(s)$, and that does not depend on $t$.
+
+Then I thought that they are just taking into account that there are many successive estimates of $V$ as the algorithm progresses, so $V_t$ is just the estimate after processing the $n$ steps starting at time $t$. In other words, the subscript would be a rigorous mathematical way of denoting the sequence of algorithmic updates. But this does not make sense either since even in Chapter 6 and before, the estimate is also successively updated. See Equation (6.1), for example. The $V$ on the left-hand side is a different variable from the one on the right-hand side (this is why they must use $\leftarrow$ indicating an assignment as opposed to a mathematical equality with $=$). It could have easily been written with an index as well.
+
+So, what is the purpose of the new index for $V$ in Chapter 7, and why is it more important at this particular chapter?
+
+Edit and elaboration: Going back to the text, it seems to me that the new subscript is indeed added as an attempt for greater clarity, even though the subscript-less notation $V$ from previous chapters might have been kept (and in fact it is still used in the pseudo-code in page 144).
+
+It seems the authors wanted to stress that the update of $V$ happens not only for every trace of $n$ steps, but also at every one of those steps.
+
+However, I think this introduced a technical error, because suppose we just learned from a 10-step episode ($T=10$), using $n = 3$. Then the latest estimate of $v_\pi$ is $V_{T-1} = V_{10 - 1} = V_{9}$. Then at the next episode, the first time $V_{t + n}$ is used to inform a target update, it will be for $\tau = 0$ (from the pseudo-code), which implies $t - n + 1 = 0$, so $t = n - 1$, that is, $V_{t+n}=V_{n-1+n}=V_{2n-1}=V_5$, which is not the most up-to-date estimate $V_9$ of $v_\pi$.
+
+Of course the problem would be easily solved if we simply set the next used estimate $V_{2n - 1}$ to be equal to the last episode's $V_{T-1}$, but to avoid confusion this would have to be explicitly stated somewhere.
+"
+"['neural-networks', 'machine-learning', 'deep-learning']"," Title: Can neural networks deal with unbounded numbers as inputs?Body: I want to train an ANN. The problem is that the input features are completely unbounded (There are no boundaries as maximum and minimum for them).
+
+For example, the following input vectors $(42, 54354354)$ and $(0.4, 47239847329479324732984732947)$ are both valid.
+
+I know of RNNs that can add up input numbers, which is pretty similar to my case, but the number of digits was limited in all of the implementations.
+
+Is there a way to implement an ANN that can add up the input numbers of any magnitude?
+"
+"['neural-networks', 'comparison', 'q-learning', 'policy-gradients', 'time-complexity']"," Title: What is the complexity of policy gradient algorithms compared to discrete action space algorithms?Body: I am using a policy gradient algorithm (actor-critic) for wireless networks. The policy gradient-based algorithm helps because it considers continuous action space.
+
+But how much does a policy gradient-based algorithm contribute to the complexity of the involved neural networks, compared to discrete action space algorithms (like Q-learning)? Moreover, in terms of computation, how do policy gradient algorithms (for continuous action spaces) compare to discrete action space algorithms?
+"
+"['natural-language-processing', 'language-model']"," Title: How do you build a language model to predict the contextual similarity between two documents?Body: How do you build a language model to predict the contextual similarity between two documents?
+"
+"['neural-networks', 'computational-learning-theory', 'universal-approximation-theorems']"," Title: Does a neural network exist that can learn every possible training data?Body: The universal approximation theorem states, that a feed-forward network with a single hidden layer containing a finite number of neurons can approximate continuous functions on compact subsets of $R^n$.
+Michael Nielsen states
+
+No matter what the function, there is guaranteed to be a neural network so that for every possible input, $x$, the value $f(x)$ (or some close approximation) is output from the network.
+
+So, for continuous functions, this seems plausible. Interestingly, in the same article, Nielsen mentioned "any function".
+Later, he writes
+
+However, even if the function we'd really like to compute is discontinuous, it's often the case that a continuous approximation is good enough.
+
+The last statement leaves open a gap: it raises the question of how good an approximation can practically be.
+Let's ignore contradictory input/output training pairs like $f(0)=0$ and $f(0)=1$, which actually don't even represent a function anyway.
+Furthermore, assume that the training data is generated randomly, which would practically result in a discontinuous function.
+How does a neural network learn such data? Will a learning algorithm always be able to find a neural network that approximates the function represented by the input-output pairs?
+"
+"['machine-learning', 'python', 'recommender-system']"," Title: Which machine learning algorithms can be used to build a recommendation system?Body: I am working on building a recommendation engine. I need to build a model that recommends similar items. Currently, I am using the Nearest Neighbor algorithm present in sklearn.neighbors
package.
+
+I am working in the finance domain; similarity can be based on the ""Supplier"", ""Buyer"", ""Industry type"", etc.
+
+I have attached sample data in the image below
+
+
+
+Are there any better machine learning algorithms/packages in Python for this purpose?
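+
+For context, here is a minimal sketch of the kind of setup I am using with scikit-learn (the column values below are made up; only the column names come from my data):
+
+import pandas as pd
+from sklearn.neighbors import NearestNeighbors
+
+# toy rows standing in for the real data
+df = pd.DataFrame({
+    'Supplier': ['A', 'B', 'A', 'C'],
+    'Buyer': ['X', 'X', 'Y', 'Z'],
+    'IndustryType': ['Retail', 'Retail', 'Energy', 'Finance'],
+})
+
+# one-hot encode the categorical columns so distances are meaningful
+X = pd.get_dummies(df)
+
+nn = NearestNeighbors(n_neighbors=2, metric='euclidean')
+nn.fit(X)
+
+# indices of the rows most similar to the first item
+distances, indices = nn.kneighbors(X.iloc[[0]])
+print(indices)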
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'python', 'tensorflow']"," Title: How could I locate certain words or numbers in a financial statement?Body: I would like to code a script that could locate a specific word or number in a financial statement. Financial statements roughly contain the same information, they are however not identical and organized in the same way. My thought is that by using Tensorflow I could train a neural network to locate the specific words or numbers for me. I am thinking that if I label different text and numbers in 1000 financial statements and use them to train the neural network, it will then be able to identify these numbers or words in all financial statements. For example, tell it in all 1000 training statements which number that is the profit of the company. Then when I give it an unseen financial statement, it should be able to identify which number that is the profit.
+
+Is this doable? I have been working with coding in python for a couple of months and so far I've built some web scrapers and integrated them with twitter, slack and google sheets. However, this would be my first AI related project. I would be very grateful for all your thoughts on this project and if anyone could steer me in the right direction by sharing relevant tutorials.
+
+Thanks a lot!
+"
+"['deep-learning', 'reinforcement-learning', 'backpropagation', 'policy-gradients']"," Title: How is gradient being calculated in Andrej Karpathy's pong code?Body: I was going through the code by Andrej Karpathy on reinforcement learning using a policy gradient. I have some questions from the code.
+
+
+- Where is the logarithm of the probability being calculated? Nowhere in the code do I see him calculating that.
+- Please explain to me the use of the dlogps.append(y - aprob) line. I know this is calculating the loss, but how is this helping in a reinforcement learning environment, where we don't have the correct labels?
+- How is policy_backward() working? How are the weights changing with respect to the loss function mentioned above? More specifically, what is dh here?
+
+"
+"['training', 'datasets', 'image-segmentation']"," Title: creating your own dataset similar to cityscapes formatBody: I'm trying to train a neural network with my own dataset. The neural network can accept the cityscape format.
+
+Is there any application that can produce the mask/segmented images, instance images, label ID images and JSON files, similar to the Cityscapes dataset format?
+
+Basically, I want to create my own dataset similar to the cityscape dataset format.
+"
+"['natural-language-processing', 'machine-translation', 'bleu']"," Title: What happens when the output length in the brevity penalty is zero?Body: The brevity penalty is defined as
+
+$$bp = e^{(1- r/c)},$$
+
+where $r$ is the reference length and $c$ is the output length.
+
+But what happens if the output length becomes zero? Is there any standard way of coping with that issue?
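+
+To make the issue concrete, here is a small worked example with made-up numbers: with $r = 10$ and $c = 5$, we get $bp = e^{1 - 10/5} = e^{-1} \approx 0.37$; as $c \to 0^{+}$, the exponent $1 - r/c \to -\infty$, so $bp \to 0$, but at $c = 0$ the term $r/c$ itself is undefined.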
+"
+"['reinforcement-learning', 'policy-gradients', 'pytorch', 'reinforce', 'expectation']"," Title: How does the policy gradient's derivative work?Body: I am trying to understand the policy gradient method using a PyTorch implementation and this tutorial.
+
+My first question is about the end result of this gradient derivation,
+
+\begin{aligned}
+\nabla \mathbb{E}_{\pi}[r(\tau)] &=\nabla \int \pi(\tau) r(\tau) d \tau \\
+&=\int \nabla \pi(\tau) r(\tau) d \tau \\
+&=\int \pi(\tau) \nabla \log \pi(\tau) r(\tau) d \tau \\
+\nabla \mathbb{E}_{\pi}[r(\tau)] &=\mathbb{E}_{\pi}[r(\tau) \nabla \log \pi(\tau)]
+\end{aligned}
+
+Mainly in this equation
+
+$$\nabla \mathop{\mathbb{E}_\pi }[r(\tau )] = \mathop{\mathbb{E}_\pi }[r(\tau )\nabla log \pi (\tau )]$$
+
+Does expectation follow a distributive or associative property?
+
+I know that expectations of a function can be written as below
+
+$$\mathop{\mathbb{E}}[f(x)] =\sum p(x)f(x)$$
+
+Then can we rewrite the first equations as
+
+$$\mathop{\mathbb{E}_\pi }[r(\tau )\nabla log \pi (\tau )] \\= \mathop{\mathbb{E}_\pi }[r(\tau )] \,\, \mathop{\mathbb{E}_\pi }[\nabla log \pi (\tau )] \\= \sum p(\tau)r(\tau ) \,\, \sum p(\tau)\nabla log \pi (\tau ) \\
+= p(\tau) \sum r(\tau ) \nabla log \pi (\tau )$$
+
+The problem is when I compare this to PyTorch implementation (line 71-74)
+
+for log_prob, R in zip(policy.saved_log_probs, returns):
+    policy_loss.append(-log_prob * R)
+optimizer.zero_grad()
+policy_loss = torch.cat(policy_loss).sum()
+
+
+The PyTorch implementation simply multiplies the log probability and the reward, -log_prob * R, and then sums the vector with torch.cat(policy_loss).sum(). There is no $p(\tau)$. What is really happening here?
+
+The second question is about the multiplication of the log probability and the reward in the PyTorch implementation, -log_prob * R: the PyTorch implementation has a negative log probability, while the derived equation has a positive one, $\mathop{\mathbb{E}_\pi }[r(\tau )\nabla \log \pi (\tau )]$. What is the need for multiplying the log probability by a negative value in the PyTorch implementation?
+
+I have only a basic understanding of maths and that's why I am asking this question here.
+
+
+
+Edit: I found a better derivation of the above equation: https://youtu.be/Ys3YY7sSmIA?t=3622
+"
+"['neural-networks', 'machine-learning', 'terminology', 'convergence']"," Title: What is convergence in machine learning?Body: I came across this answer on Quora, but it was pretty sparse. I'm looking for specific meanings in the context of machine learning, but also mathematical and economic notions of the term in general.
+"
+"['feedforward-neural-networks', 'self-organizing-map']"," Title: Self-organizing map using weighted non-euclidean distance to minimize variance of predictionsBody: Let's say I have a dataset, each item/row of which has $\mathit{X + 1}$ characteristics where the last characteristic (i.e., the $\mathit{1}$) represents the some value I want to predict, $\mathit{Y}$, based on a SOM trained on the $\mathit{X}$ characteristics. I want to organize the dataset into groups such that each group has a small variance among the respective $\mathit{Y}$ values. I believe I could do this by using a non-Euclidean distance to find the Best Matching Unit (BMU) based on applying weights to each dimension.
+
+For example, given a node at (0,0) and weights for dimension $\mathit{x}$ of 1 and dimension $\mathit{y}$ of 2, a data point at (3,2) would have a weighted distance of 5 from the node, calculated as follows:
+
+$\sqrt{\mathit{(1 * (3 - 0)) ^ 2 + (2 * (2 - 0)) ^ 2}}$
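+
+Here is a minimal sketch of the weighted distance and BMU search I have in mind (the first node, the weights and the data point come from the example above; the second node is made up):
+
+import numpy as np
+
+def weighted_distance(x, node, dim_weights):
+    # weighted Euclidean distance, e.g. weights (1, 2) as in the example above
+    return np.sqrt(np.sum((dim_weights * (x - node)) ** 2))
+
+def best_matching_unit(x, nodes, dim_weights):
+    # nodes: array of shape (n_nodes, n_dims); returns the index of the closest node
+    distances = np.sqrt(np.sum((dim_weights * (x - nodes)) ** 2, axis=1))
+    return int(np.argmin(distances))
+
+x = np.array([3.0, 2.0])
+nodes = np.array([[0.0, 0.0], [4.0, 4.0]])
+weights = np.array([1.0, 2.0])
+print(weighted_distance(x, nodes[0], weights))  # 5.0
+print(best_matching_unit(x, nodes, weights))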
+
+I don't think a simple linear regression would work to determine the weights because it would not take advantage of clustering.
+
+The goal would be, for a new data point, to approximate a probability distribution of outcomes based on similarly-profiled data points in the training set (i.e., retrieve all of the training results with the same BMU and analyze the results). I think this might essentially just be replicating a deep feedforward network, but I'd like to try it.
+
+Is there a way I could achieve this by modifying a SOM model or using a similar technique?
+"
+"['deep-learning', 'natural-language-processing', 'transformer', 'language-model']"," Title: Are embeddings in multi-lingual language models comparable across languages?Body: Facebook has just pushed out a bigger version of their multi-lingual language model XLM, called XLM-R. My question is: do these kind of multi-lingual models imply, or even ensure, that their embeddings are comparable between languages? That is, are semantically related words close together in the vector space across languages?
+
+Perhaps the most interesting citation from the paper that is relevant to my question (p. 3):
+
+
+ Unlike Lample and Conneau (2019), we do not use language embeddings,
+ which allows our model to better deal with code-switching.
+
+
+Because they do not seem to make a distinction between languages, and there's just one vocabulary for all trained data, I fail to see how this can be truly representative of semantics anymore. The move away from semantics is increased further by the use of BPE, since morphological features (or just plain, statistical word chunks) of one language might often not be semantically related to the same chunk in another language - this can be true for tokens themselves, but especially so for subword information.
+
+So, in short: how well can the embeddings in multi-lingual language models be used for semantically comparing input (e.g. a word or sentence) of two different languages?
+"
+"['machine-learning', 'deep-learning', 'comparison', 'overfitting']"," Title: Are deep learning models more prone to overfitting than machine learning ones?Body: In my opinion, deep learning algorithms and models (that is, multi-layer neural networks) are more sensitive to overfitting than machine learning algorithms and models (such as the SVM, random forest, perceptron, Markov models, etc.). They are capable of learning more complex patterns. At least that's how I look at it. Still, I and my colleagues disagree about this and I cannot really find any information about this. My colleagues say that deep learning algorithms are hardly vulnerable to overfitting.
+
+Are there statements (or opinions) about this aspect?
+"
+"['deep-learning', 'recurrent-neural-networks', 'long-short-term-memory', 'applications']"," Title: Why can't LSTMs tell a long story?Body: There is a recent trend in people using LSTMs to write novels. I haven’t attempted this myself. From what I’m hearing, they can tell a story, but it seems they lose the context of the story rather quickly. After which they begin constructing new, but not necessarily related constructs.
+
+Can they construct a plot in the long term?
+"
+"['neural-networks', 'deep-learning', 'facial-recognition']"," Title: Detailed explaination of Facenet Model for face recogniton?Body: Can some one explain how Facenet model works in detail and simple words .
+"
+"['machine-learning', 'deep-learning', 'natural-language-processing', 'voice-recognition', 'audio-processing']"," Title: Deep audio fingerprinting for word searchBody: Simply speaking, I'm trying to somehow search an audio clip for a list of words, and if found, I mark the time stamps. My use-case is profanity check with a list of pre-defined profane words.
+
+Are there any successful approaches, samples, tools or APIs, possibly through deep learning, to perform this? I'm new to audio processing.
+"
+"['neural-networks', 'comparison', 'reference-request', 'papers', 'books']"," Title: Which books or papers clearly explain the relation between Ising models and deep neural networks?Body: I am looking for a book or paper which clearly explains the relationship between Ising models and deep neural networks.
+
+Can anyone provide any references?
+"
+"['machine-learning', 'reference-request', 'game-theory', 'goal-based-agents', 'affective-computing']"," Title: AI with conflicting objectives?Body: A recent question on AI and acting recalled me to the idea that in drama, there are not only conflicting motives between agents (characters), but a character may themselves have objectives that are in conflict.
+
+The result of this in performance is typically nuance, but also carries the benefit of combinatorial expansion, which supports greater novelty, and it occurs to me that this would be a factor in affective computing.
+
+(The actress Eva Green is a good example, where her performances typically involve indicating two or more conflicting emotions at once.)
+
+It occurs to me that this can even arise in the context of a formal game where achieving the most optimal outcome requires managing competing concerns.
+
+
+- Is there literature or examples of AI with internal conflicting objectives?
+
+"
+"['planning', 'strips', 'sokoban-puzzle']"," Title: How can I define the relations, preconditions and effects of each operator for the Sokoban puzzle?Body: I would like to solve the Sokoban puzzle, which consists in moving a character in a 2D map to push boulders into target cells. Each turn, the player can move to an adjacent cell (no diagonals) if it is empty, or push a boulder one step further. To push a boulder, the player needs to stand next to it, and the space behind the boulder needs to be empty (no other boulders and no walls).
+I'm using the STRIPS planner, and I am having a hard time defining the fixed and dynamic relations and also the preconditions and effects of each operator for this puzzle.
+"
+"['comparison', 'markov-decision-process', 'game-theory', 'pomdp', 'imperfect-information']"," Title: Are perfect and imperfect information games modelled as fully and partially observable environments, respectively?Body: In perfect information games, the agent can see all the moves performed in the past. Besides, it can observe the next action that will be put into practice by the opponent.
+
+In this case, can we say that perfect information games are actually a fully observable environment? If we reach this conclusion, I guess that imperfect information becomes a partially observable environment?
+"
+"['search', 'proofs', 'admissible-heuristic', 'consistent-heuristic', 'heuristic-functions']"," Title: If an heuristic is not admissible, can it be consistent?Body: I am solving a problem in which, according to the given values, the heuristic is not admissible. According to my calculation from other similar problems, it should be consistent, as well as keeping in mind the values, but the solution says it's not consistent either. Can someone tell why?
+"
+"['neural-networks', 'natural-language-processing', 'research', 'reference-request', 'speech-recognition']"," Title: Is there a detailed description or implementation of an end-to-end speech recognition system?Body: I am currently trying to implement an end-to-end speech recognition system from scratch, that is, without using any of the existing frameworks (like TensorFlow, Keras, etc.). I am building my own library, where I am trying to do a polynomial approximation of functions (like exponential, log, sigmoid, ReLU, etc). I would like to have access to a nice description of the neural networks involved in an end-to-end speech recognition system, where the architecture (the layers, activation functions, etc.) is clearly laid out, so that I can implement it.
+
+I find most of the academic or industry papers citing various previous works, toolkits or papers, making it tedious for me. I am new to the field, so I am having more difficulty, so looking for some help here.
+"
+"['neural-networks', 'classification', 'objective-functions', 'cross-entropy']"," Title: Why does the binary cross-entropy work better than categorical cross-entropy in a multi-class single label problem?Body: I was just doing a simple NN example with the fashion MNIST dataset, where I was getting 97% accuracy, when I noticed that I was using Binary cross-entropy instead of categorical cross-entropy by accident. When I switched to categorical cross-entropy, the accuracy dropped to 90%. I then got curious and tried to use binary cross-entropy instead of categorical cross-entropy in my other projects and in all of them the accuracy increased.
+
+Now, I know that binary cross-entropy can be used in a multi-class, multi-label classification problem, but why is it working better than categorical cross-entropy in a multi-class single-label problem?
+"
+"['neural-networks', 'machine-learning', 'python', 'keras', 'long-short-term-memory']"," Title: Using a neural network to identify a stable region within a set of data?Body: I am working on a problem in which I am attempting to find a stable region in a spiral galaxy. The PI I'm working with asked me to use machine learning as a tool to solve the problem. I have created some visualizations of my data, as bellow.
+
+
+
+In this image, you can see there is a flat region between 0 and roughly 30 pixels, and between 90 pixels and 110 pixels. I have received suggestions to use an RNN LSTM model that can identify flat regions, but I wanted to hear other suggestions of other neural network models as well.
+
+The PI I'm working with suggests feeding my data visualization images into a neural network and having the neural network identify said stable regions. Can this be done using a neural network, and what resources would I have to look at? Moreover, can this problem be solved with an RNN LSTM? I think the premise of this was to treat the radius as some temporal dimension. I've been extensively looking for answers online, and I cannot quite seem to find any similar examples.
+"
+"['reinforcement-learning', 'comparison', 'supervised-learning', 'automated-machine-learning']"," Title: What is the difference between reinforcement learning and AutoML?Body: My vague understanding of reinforcement learning (RL) is that it's very similar to supervised learning except that it updates on a continuous feed of data/activity, this to me sounds very similar to AutoML (which I've started to notice being used).
+
+Do they use different algorithms? What is the fundamental difference between RL and AutoML?
+
+I'm after an explanation for somebody who understands technology but does not work with machine learning tools regularly.
+"
+"['neural-networks', 'backpropagation', 'feedforward-neural-networks']"," Title: What kind of data structures are needed to efficiently do back-propagation in a feedforward neural network?Body: In a feed-forward neural network, in order to efficiently do backpropagation, what kind of data structure is needed?
+
+I know the weights can just be stored in an array, and you need pointers of some kind to represent connections from one layer to the next (or just a default scheme/pattern), but is anything else needed for backpropagation to work?
+"
+"['comparison', 'chess']"," Title: Are humans superior to machines in chess?Body: A friend of mine, who is an International Master at chess, told me that humans were superior to machines provided you didn't impose the time constraints that exist in competitive chess (40 moves in 2 hours) since very often games were lost, to another human or a machine, when a bad move is made under time pressure.
+
+So, with no time constraints and access to a library of games, the human mind remains superior to the machine is my friend's contention. I'm an indifferent chess player and don't really know what to make of this. I was wondering if any research had been made that could back up that claim or rebut it.
+"
+"['reinforcement-learning', 'monte-carlo-methods']"," Title: Can I use my previous estimate of the state-action values as initialisation in GLIE-Monte Carlo Control?Body: I am trying to implement a tabular-based GLIE Monte-Carlo learning algorithm.
+So I repeat n times:
+
+
+- create observations using my previous policy $\pi_{n-1}(s)$
+- update my state-action values using the observations generated in 1 with the Monte-Carlo update rule: $Q_n(S_t,A_t) \leftarrow Q_n(S_t,A_t)+\frac{1}{N(S_t,A_t)}\left(G_t-Q_n(S_t,A_t)\right)$
+- update my policy to $\pi_{n}$ using epsilon-greedy improvement with $\epsilon=1/(n+1)$.
+
+
+In step 2 I need to decide on an initial estimate $\tilde{Q}_n$. Is it a decent option to use $\tilde{Q}_n=Q_{n-1}$?
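+
+For reference, here is a minimal sketch of the loop I am describing (run_episode is an assumed helper that returns a list of (state, action, reward) tuples produced by the supplied policy); keeping the same Q table across iterations is what corresponds to using $\tilde{Q}_n = Q_{n-1}$:
+
+import numpy as np
+from collections import defaultdict
+
+def epsilon_greedy(q_values, eps):
+    if np.random.rand() < eps:
+        return np.random.randint(len(q_values))
+    return int(np.argmax(q_values))
+
+def glie_mc_control(run_episode, n_actions, n_iterations, gamma=1.0):
+    Q = defaultdict(lambda: np.zeros(n_actions))   # state-action value estimates
+    N = defaultdict(lambda: np.zeros(n_actions))   # visit counts
+    for n in range(1, n_iterations + 1):
+        eps = 1.0 / (n + 1)
+        # 1. generate observations with the current epsilon-greedy policy
+        episode = run_episode(lambda s: epsilon_greedy(Q[s], eps))
+        # 2. every-visit Monte-Carlo update, reusing the previous Q as the initial estimate
+        G = 0.0
+        for s, a, r in reversed(episode):
+            G = gamma * G + r
+            N[s][a] += 1
+            Q[s][a] += (G - Q[s][a]) / N[s][a]
+    return Q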
+"
+"['deep-learning', 'search', 'architecture', 'reasoning']"," Title: How to create an AI to solve a word search?Body: This at first sounds ridiculous. Of course there is an easy way to write a program to solve a wordsearch.
+
+But what I would like to do is write a program that solves a word-search like a human.
+
+That is, use or invent different strategies. e.g. search randomly for the starting letter; go line-by-line;
+
+Probably the AI will eventually find out that going line by line looking for a given starting letter of a word is a good strategy.
+
+Any idea how you would write such a strategy-finding AI?
+
+I think the main ""moves"" would be things like ""move right one letter in the grid"", ""store this word in memory"", ""compare this letter with first letter in memory"" and a few more.
+"
+"['convolutional-neural-networks', 'object-detection']"," Title: Is it better to adjust the natural lighting (while recording the video) or to subsequently apply filters on the original video?Body: For the purpose of object detection, is it better to adjust the natural lighting (while recording the video) or to apply filters (e.g. brightness filters, etc.) on the original video to make it brighter?
+
+My intuition is that it shouldn't matter whether you adjust the natural lighting or do it afterwards with video filters.
+"
+"['deep-learning', 'policies', 'deepmind', 'alphago-zero']"," Title: AlphaGo Zero: Does the policy head give a probability for every possible move?Body: If I understood correctly, the AlphaGo Zero network returns two values: a vector of logit probabilities p and a value v.
+
+My question is: in this vector that is outputted, do we have a probability for every possible action in the game? If so, does it apply a probability of 0 to actions that are not possible in that particular state? If this is true, how does the network know which actions are valid?
+
+If not: then the network will output vectors of different sizes according to each state. Is this even feasible? And again, how will the network know which actions are valid?
+
+Related questions but none of them covers this question in specific: 1, 2 and 3.
+"
+"['computer-vision', 'game-ai', 'chat-bots']"," Title: What technology do people use to create bots for games like LOL or Runescape?Body: I was curious about how people make AI to play games. Does anyone know of the AI used to play these games? What allows the AI to see/click the screen in real-time? Even just direction on what libraries for such tasks would be helpful. I can't imagine game developers make an API for creating bots in their games like browsers use with selenium.
+"
+"['deep-learning', 'reinforcement-learning']"," Title: How can I use deep reinforcement learning for vehicle rerouting in SUMO?Body: I want to use deep reinforcement learning for vehicle rerouting in SUMO, but I don't know how to start training the model.
+
+I've already created the road network and vehicle routing in SUMO-XML files (mymap.net.xml and mymap.rou.xml). Currently, I'm trying to train the model in a Jupyter Notebook, importing the TraCI library to control the SUMO simulator and allow for a reinforcement learning approach. However, I'm still confused about the training step.
+
+
+- Do I need any traffic data to train my agent to take actions in the environment?
+- How can I train based on these SUMO-XML files I created?
+- Is it possible to run the simulation on Windows, or do I need to change to Ubuntu instead?
+
+
+I would appreciate if someone could guide me. Thank you in advance.
+"
+"['object-detection', 'anomaly-detection']"," Title: Defect Detection System using Deep LearningBody: What is the general approach to defect detection in deep learning?
+
+Would the approach be better if we try to learn the positive images (defects in images) as much as possible, or if we try to learn the negative images (images without blemishes) and try to single out the defects as anomalies?
+
+Can someone point me to some architecture?
+
+Regards
+"
+"['neural-networks', 'reinforcement-learning', 'convolutional-neural-networks', 'computer-vision', 'recurrent-neural-networks']"," Title: Ideas on a network that can translate image differences into motor commands?Body: I'd like to design a network that gets two images (an image under construction, and an ideal image), and has to come up with an action vector for a simple motor command which would augment the image under construction to resemble the ideal image more. So basically, it translates image differences into motor commands to make them more similar?
+
+I'm dealing with a 3D virtual environment, so the images are snapshots of objects and motor commands are simple alterations to the 3D shape.
+
+Probably the network needs two pre-trained CNNs with the same weights that extract image features, then output and concatenate those into a dense layer (or two), which converges into action-space. Training should probably happen via reinforcement learning
+
+Additionally, in the end it needs recurrence, since there are multiple motor actions it needs to do in a row to get closer to the intended result.
+
+Would there be any serious difficulties with this? Or are there any approaches to achieve the intended result? or any similar examples?
+
+Thanks in advance
+"
+"['neural-networks', 'objective-functions', 'facial-recognition']"," Title: How is the percentage or the probablity calculated using Loss function in Facenet Model?Body: This question is related to What is the formula used to calculate the accuracy in the FaceNet model? . I know how loss is calculated in the FaceNet model , but how the loss function is used to calculate probability that this unknown person is , say Bob (0.70). Also we don't know which is positive or negative image , we only know the Anchor (so how FaceNet finds which image is positive or negative ?) . How probability is calculated in FaceNet Model using triplet loss ?
+
+Can we know what the exact formula is, or is the CNN like a black box which uses some unknown method to calculate the probability?
+"
+"['machine-learning', 'deep-learning', 'reference-request']"," Title: Is it possible to create a decompiler using AI?Body: I would like to decompile a compiled file to source code.
+Is it possible to use any AI technique to perform decompilation? Is there any research on this topic? If yes, can you briefly explain one of the existing approaches (just to get some insight)? I would also appreciate links to research papers on this topic.
+"
+"['objective-functions', 'generative-adversarial-networks', 'probability-distribution', 'kl-divergence', 'jensen-shannon-divergence']"," Title: Why is the Jensen-Shannon divergence preferred over the KL divergence in measuring the performance of a generative network?Body: I have read articles on how Jensen-Shannon divergence is preferred over Kullback-Leibler in measuring how good a distribution mapping is learned in a generative network because of the fact that JS-divergence better measures distribution similarity when there are zero values in either distribution.
+I am unable to understand how the mathematical formulation of JS-divergence would take care of this and also what advantage it particularly holds qualitatively apart from this edge case.
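+
+For reference, the standard definition of the Jensen-Shannon divergence that I am referring to, with mixture $M$, is:
+
+$$JS(P \,\|\, Q) = \frac{1}{2} KL(P \,\|\, M) + \frac{1}{2} KL(Q \,\|\, M), \qquad M = \frac{1}{2}(P + Q).$$
+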
+Could anyone explain or link me to an explanation that could answer this satisfactorily?
+"
+"['neural-networks', 'deep-learning', 'training', 'generative-adversarial-networks', 'generative-model']"," Title: How exactly does adversarial training help in handling mode-collapse in generative networks?Body: Of my understanding mode-collapse is when there happen to be multiple classes in the dataset and the generative network converges to only one of these classes and generates images only within this class. On training the model more, the model converges to another class.
+
+In Goodfellow's NeurIPS presentation, he clearly addressed how training a generative network in an adversarial manner avoids mode-collapse. How exactly do GANs avoid mode-collapse? And did previous works on generative networks not try to address this?
+
+Apart from the obvious superior performance (generally), does the fact that GANs address mode-collapse make them far preferred over other ways of training a generative model?
+"
+"['neural-networks', 'convolutional-neural-networks', 'training', 'convolution']"," Title: Wouldn't convolutional neural network models work better without flattening the input in any stages?Body:
+
+The above model is what really helped me understand the implementation of convolutional neural networks, so based on that, I've got a tricky hypothesis that I want to find more about, since actually testing it would involve developing an entirely new training model if the concept hasn't already been tried elsewhere.
+
+I've been building a machine learning project for image recognition and thought about how at certain stages we flatten the input after convoluting and max pooling, but it occurred to me that by flattening the data, we're fundamentally losing positional information. If you think about how real neurons process information based on clusters, it seems obvious that the proximity of the biological neurons is of great significance, rather than thinking of them as flat layers. By designing a neural network training model that takes neuron proximity into account when deciding the structure by which to form connections between neurons, so that positional information can be utilized and kept relevant, it seems that network effectiveness would improve.
+
+Edit, for clarification, I made an image representing the concept I'm asking about:
+
+
+
+Basically: Pixels 1 and 4 are related to each other and that's very important information. Yes we can train our neural network to know those relationships, but that's 12 unique relationships in just a 3x3 pixel grid that our training process needs to successfully teach the network to value, whereas a model that takes proximity of neurons into consideration, like the real world brain would maintain the importance of those relationships since neurons connect more readily to others in proximity.
+
+My question is: Does anyone know of white papers / experiments closely related to the concept I'm hypothesizing? Why would or would that not be a fundamentally better model?
+"
+"['comparison', 'implementation']"," Title: What does AI software look like, and how is it different from other software?Body: What does AI software look like? What is the major difference between AI software and other software?
+"
+"['neural-networks', 'social', 'deepfakes']"," Title: Deepfakes as ""force for good""?Body: As per the law of unintended consequences, could it be that deepfakes will eventually have the opposite effect to what people currently seem to fear most. For example, once it is clear that anyone can be deepfaked to unlimited degrees of precision, wouldn't we have a situation where in regards to
+
+
+- pornography, revenge-porn: no one (including the person being viewed) will actually care anymore. E.g. if a movie star's account gets hacked and nude pictures are released to the public, it becomes a non-story because everyone simply assumes it's one of thousands of other deepfakes that already exist.
+- fake-news, government propaganda: the general public will demand multiple sources, witnesses before believing the next crazy story. That, I assume, is a good thing as well.
+
+"
+"['convolutional-neural-networks', 'batch-normalization']"," Title: How to properly use batch normalization during inferenceBody: I am trying to manually implement calculations of the image classification process using pre-trained weights from the MobilenetV2 network. I know how to apply filter weights to the channels, but not sure what to do with the coefficients from the batch normalization (BN) layers. The model uses BN after each convolution before ReLu6. As explained in many sources, BN has a lot of benefits during model training. The original Mobilenetv2 paper does say that they used BN during training, but nothing about using it during testing. The pre-trained MobilenetV2 model comes with BN layers which contain weights 4 x n_channels (I assume gamma, beta, mean, and std for each input featuremap in the BN layer). The following questions is:
+
+
+- How do I apply the four coefficients to a featuremap during inference? (This article explains it, but I still don't get it - aren't those imported coefficients already pre-calculated, so the operation on a featuremap is reduced to a multiply-add?)
+
+
+The original paper on BN in section 3.1 says:
+
+
+ ... Since the means and variances are fixed during inference,
+ the normalization is simply a linear transform applied to
+ each activation. It may further be composed with the scaling by γ and shift by β, to yield a single linear transform
+ that replaces BN(x)...
+
+
+Does this mean that during inference I would use only gamma and beta coefficients to ""scale and shift"" each pixel of a corresponding feature map? That is, something like:
+
+for ch
+    for row
+        for col
+            out_feature[row][col][ch] = in_feature[row][col][ch] * BN[gamma][ch] + BN[beta][ch]
+
+
+Could anyone, please, confirm and explain if this is correct and re-iterate what exactly is expected from the BN layer output in terms of value ranges (before ReLu6)?
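+
+For comparison, here is a minimal sketch of the folded inference-time transform as I understand it from the BN paper, using all four stored statistics (gamma, beta, mean, variance); eps stands for the small constant added for numerical stability (its value here is an assumption):
+
+import numpy as np
+
+def bn_inference(in_feature, gamma, beta, mean, var, eps=1e-3):
+    # in_feature: (rows, cols, channels); gamma, beta, mean, var: (channels,)
+    # Fold the normalization into a single per-channel linear transform: scale * x + shift
+    scale = gamma / np.sqrt(var + eps)
+    shift = beta - mean * scale
+    return in_feature * scale + shift  # broadcasts over rows and cols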
+"
+"['reinforcement-learning', 'deep-rl', 'policy-gradients', 'reinforce', 'ddpg']"," Title: Purpose of using actor-critic algorithms under deterministic MDP dynamics?Body: One of the main disadvantages of the MC Policy Gradient algorithm (REINFORCE) as described say here is the fact that it has high variance (returns, which we sample, will significantly vary from episode to episode). Therefore it is perfectly reasonable to use a critic to reduce variance and this is what for example Deep Deterministic Policy Gradient (DDPG) does.
+
+Now, let's assume that we're given an MDP with completely deterministic dynamics. In that case, if we start from a specific state and follow a certain deterministic policy we will always obtain the exact same return (therefore we have zero bias and zero variance). If we follow a certain stochastic policy the return will vary depending on how much we explore, but under an almost-deterministic policy our variance will be quite small. In any case, there's no contribution to the variance from the deterministic MDP dynamics.
+
+In deep reinforcement learning for portfolio optimization, many researchers (Xiong at al. for example) use historical market data for model training. The resulting MDP dynamics is of course completely deterministic (if historical prices are used as states) and there's no real sequentiality involved. Consequently, all return variance stems from the stochasticity of the policy itself. However, most researchers still use DDPG as a variance reduction mechanism.
+
+What's the point of using DDPG for variance reduction when the underlying MDP used for training has deterministic dynamics? Why not simply use the Reinforce algorithm?
+"
+"['deep-learning', 'voice-recognition']"," Title: AI natural voice generatorBody: I want to create a solution, which clones my voice. I tried my commercial solutions or implementation of Tacotron. Unfortunately, results not sound natural, generated voice sounds like a robot. Anybody could recommend good alternative?
+"
+"['machine-learning', 'speech-recognition']"," Title: Keyword spotting with custom keywords and why not use speech recognition insteadBody: My question regards performing keyword spotting for custom keywords and justifying the use of keyword spotting models instead of speech recognition.
+
+I have been doing some searching around Keyword Spotting and I realized there is not so much work out there. Probably the most common dataset I have found people using is the Speech Commands Dataset. However, this dataset has only 30 keywords.
+
+If I want keyword spotting for my own custom application, then to the best of my knowledge I need either a pre-trained model or my own data to train a model on. However, to the best of my knowledge, there is no model pre-trained on a dataset with a large enough set of keywords that is likely to cover a lot of applications. Correct me if I am wrong in this.
+
+I have come to the conclusion that I need to train my own model and the only two ways I could train models on custom keywords is to get that data myself, either by crowdsourcing or by performing speech recognition on large datasets, picking up segments which include the words of interest and then doing some manual work to check if these segments truly include the keywords I want. Does someone think that this would be a good or bad idea and why?
+
+Lastly, why would I even bother going the keyword detection route and not just use a speech recognition model that will recognize the words a human speaks and see if any of them match my keyword? Is the performance that much better with keyword detection?
+"
+['object-detection']," Title: Two Models vs One Model for Person Detection and Object DetectionBody: Is it possible to do person detection and object detection within one model? The training data would be images annotated with bounding boxes for objects and people. Because normally object detection and person detection are done separately? Is there any research about models that simultaneously detect both people and objects?
+"
+"['machine-learning', 'terminology', 'math', 'generative-adversarial-networks']"," Title: Why does a Lipschitz continuous discriminator in GANs assure statistical boundedness?Body: I have been reading the paper which introduced spectral normalization in GANs.
+
+At some point the paper mentions the following:
+
+
+ The machine learning community has been pointing out recently that the
+ function space from which the discriminators are selected crucially
+ affects the performance of GANs. A number of works (Uehara et al.,
+ 2016; Qi, 2017; Gulrajani et al., 2017) advocate the importance of
+ Lipschitz continuity in assuring the boundedness of statistics.
+
+
+What does it mean that the Lipschitz continuity assures the boundedness of statistics and why does that happen?
+"
+"['machine-learning', 'deep-learning', 'keras', 'deep-neural-networks', 'models']"," Title: Which deep neural networks are appropriate for the detection of bombs?Body: This is a follow-up question from my previous post here about explosion detection. I gathered a dataset of explosions. As I'm new to Deep Learning in Keras, I'm trying to see what architecture best suits this problem, given that here we have a cloud of smoke/fire as opposed to an object. Any suggestions?
+
+For instance, I've learned about Faster RCNN or RetinaNet, but that is mostly for object detection. Is it going to be better than say a basic ResNet50? And here real-time prediction requirements are not an issue. So shall I assume a heavier model (e.g. NASNet Large or a Resnet-152 model) is better than a basic ResNet-50 model?
+"
+"['neural-networks', 'deep-learning', 'deep-neural-networks', 'computational-learning-theory']"," Title: Is there anything theoretically revolutionary about Deep Neural Networks?Body: In recent years, we have seen quite a lot of impressive display of Deep Neural Network (DNN), as demonstrated most famously by AlphaGo and its cousin programs.
+But if I understand correctly, a deep neural network is just a normal neural network with a lot of layers. We have known about the principles of neural networks since the 1970s (?), and a deep neural network is just the generalization of a one-layer neural network to many layers.
+From here, it doesn't seem like the recent explosion of DNN has anything to do with a theoretical breakthrough, such as some new revolutionary learning algorithms or particular topologies that have been theoretically proven effective. It seems like DNN successes can be entirely (or mostly) attributed to better hardware and more data, and not to any new theoretical insights or better algorithms.
+I would go even as far as saying that there are no new theoretical insights/algorithms that contribute significantly to the DNN's recent successes; that the most important (if not all) theoretical underpinnings of DNNs were done in the 1970s or prior.
+Am I right on this? How much weight (if any) do theoretical advancements have in contributing to the recent successes of DNNs?
+"
+"['machine-learning', 'reinforcement-learning', 'training', 'datasets', 'adversarial-ml']"," Title: In adversarial machine learning, how does an attacker have access to the test and training dataset in order to poison it?Body: In the field of adversarial machine learning, machine learning models are vulnerable to attacks both on the test and training data set. However, how does the attacker get access to these datasets? How do these datasets get manipulated/tampered with?
+"
+"['neural-networks', 'math', 'definitions', 'notation']"," Title: What do the subscripts mean in $N_{t,n,\sigma,L}$?Body: A neural network can apparently be denoted as $N_{t,n,\sigma,L}$. What do these subscripts $t, n, \sigma$ and $L$ mean? Could you link me to a paper, article or webpage with an explanation for this?
+"
+"['neural-networks', 'python', 'image-recognition', 'keras', 'support-vector-machine']"," Title: How do I combine models trained on different data to increase classification accuracy?Body: I have two trained models. One is using a LinearSVC algorithm and is trained on numerical data from medical examination from patients with diabetic retinopathy. The second one is a neural network trained on images of retina scans from patients with the same disease.
+
+The models predict if the patient has retinopathy or not. Both are written using Python 3.6 and Keras and have accuracy around 0.84.
+
+Is it possible to combine those two models in any way to increase the accuracy of predictions?
+
+I'm not sure in what way it could be achievable, as they are using different kinds of data. I have tried using ensembling methods but didn't get better scores with them.
+"
+"['generative-adversarial-networks', 'adversarial-ml', 'deepfakes']"," Title: Isn't deep fake detection bound to fail?Body: Deep fakes are a growing concern: the ability to credibly alter a video may have great (negative) impacts on our society. It is so much of a concern, that the biggest tech companies launched a specific challenge: https://deepfakedetectionchallenge.ai/.
+
+However, from what I understand, most deep fake generation techniques rely on the use of adversarial models. One model generates a new image, while another model tries to detect if the image is doctored or not. Both models ""learn"" from being confronted with the other.
+
+That being said, if a good deep fake detection model emerges (from the previous challenge, or not), wouldn't it be rendered useless almost instantly by learning from it in an adversarial setting?
+"
+"['deep-learning', 'philosophy', 'game-ai', 'agi', 'open-ai']"," Title: Do algorithms like OpenAI's ""think up strategies""?Body: I was discussing with a friend whether current AI does anything remotely similar to 'thinking' and he argued that AIs that play games must think up strategies.
+
+While thinking may not be precisely defined, my understanding of algorithms like OpenAI was that they just minimize a very non-convex objective, but still play the game based on examples, and not by coming up with intentional strategies. Is my understanding incorrect?
+"
+"['deep-learning', 'convolutional-neural-networks', 'object-recognition', 'object-detection', 'regression']"," Title: Pose estimation using CNNs on Point cloudsBody: In the case of single shot detection of point clouds, that is the point cloud of an object is taken only from one camera view without any registration. Can a Convolutional Network estimate the 6d pose of objects (initially primitive 3D objects -- cylinders, spheres, cuboids)?
+The dataset will be generated by simulating a depth sensor using a physics engine (ex:gazebo) and primitive 3D objects are spawned with known 6d pose as ground truth. The resulting training data will be the single viewed point cloud of the object with the ground truth label (6d pose)?
+"
+"['ai-design', 'generative-adversarial-networks', 'generative-model', 'implementation', 'image-generation']"," Title: Context-based gap-fill face posture-mapper GANBody:
+
+These images are handmade, not auto-generated like they will be in production. Apologies for inaccuracies in the graph overlay.
+
+I am trying to build an AI like that displayed in the diagram: when given a training set of images with their corresponding node maps of face/nose posture, and an image with a missing section (just a gap) with a node map, I would like it to reconstruct the initial image. My thoughts immediately went to GANs for this, but after some searching, the closest I could find were:
+
+
+- Face recreation without context/not filling gaps, just following pose (DeepFake)
+- Filling gaps in images, but with no node reference
+- Filling gaps from reference drawings/mappings, but with no way to provide sample images
+
+
+I would like to hear about any implementations of such an algorithm, if possible optimised for faces, and if none exists, I would like to hear of how I would go about altering the generator of the GAN to work with the context/gap-fill bit (e.g a paper which talks about this idea, but doesn't implement it). Any guidance on the NN that is best for this type of task is also appreciated.
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'classification', 'adversarial-ml']"," Title: What are causative and exploratory attacks in Adversarial Machine Learning?Body: I've been researching Adversarial Machine Learning and I know that causative attacks are when an attacker manipulates training data. An exploratory attack is when the attacker wants to find out about the machine learning model. However, there is not a lot of information on how an attacker can manipulate only training data and not the test data set.
+
+I have read about scenarios where the attacker performs an exploratory attack to find out about the ML model and then performs malicious input in order to tamper with the training data so that the model gives the wrong output. However, shouldn't such input manipulation affect both the test and training data sets? How does such tampering only affect the training data set and not the test data set?
+"
+"['reinforcement-learning', 'genetic-algorithms']"," Title: Training methods for bipedal robotBody: I am looking to train a bipedal robot using unity as a scape with a genetic algorithm. I will import the CAD into unity so the hardware is exact. My questions:
+
+
+- Is Unity physics accurate enough to train a neural network that will perform in the real world?
+- Should I optimize the network using reinforcement learning in the real world (after trained in scape)?
+- I am looking to use air muscles for my build. If the physics aren’t exactly right in unity (elasticity, max length, torque) will the bot still perform in the real world?
+- Are there any other programs that would be better than unity to train a robot inside a scape?
+- Any other approaches or new ideas on how to train the bot more efficiently would be greatly appreciated.
+
+"
+"['machine-learning', 'computer-vision', 'image-processing']"," Title: How to calculate the size of a 3d object from an image?Body: I am wondering how to calculate the size of a 3d object in an image without knowing the focal length of the camera but the distance from the camera to the object.
+"
+"['ai-design', 'optimization', 'a-star', 'constraint-satisfaction-problems']"," Title: How can I assign agents to tasks based on time and affinity?Body: I am working on an assignment problem.
+
+Consider $K$ agents $A_1, \dots A_K$ and $N$ tasks $T_1, \dots T_N$. Each task has a certain time $t(T_i)$ to be completed and each agent has a matching (or affinity) value associated with each task $M_{A_j}(T_i), \forall i, j$. The goal is to assign agents to tasks, such that the matching value is maximized and the overall time to complete the tasks is minimized. Moreover, an agent can be assigned to multiple tasks. However, an agent cannot start a new task before finishing the previous one. I want to solve it with a GA + MOA* algorithm. What would be an admissible heuristic function?
+"
+"['recurrent-neural-networks', 'long-short-term-memory', 'sequence-modeling', 'metric']"," Title: What evaluation metric are used for sequence-to-sequence prediction problems?Body: I am solving many sequence-to-sequence prediction problems using RNN/LSTM.
+
+What type of evaluation metrics can be used for sequence prediction problems?
+
+One metric is the mean squared error (MSE), which we can specify as a parameter when training the model. Currently, the accuracy on my sequence-to-sequence problems is very low.
+
+What are other ways through which we can compare the performance of our models?
+"
+"['machine-learning', 'underfitting', 'boosting']"," Title: Why would the application of boosting prevent underfitting?Body: "Why would the application of boosting prevent underfitting?"
+I read in some paper that applying boosting would prevent you from underfitting. Why is that?
+Source:
+https://www.cs.cornell.edu/courses/cs4780/2015fa/web/lecturenotes/lecturenote13.html
+"
+"['convolutional-neural-networks', 'object-detection', 'image-segmentation']"," Title: How to measure the size of an crack which is segmented from an image using Mask-RCNN?Body: I am a masters student going to work in a project to analyze the cracks in underwater concrete structures.
+
+I need some suggestions for data acquisition and length measurement of the crack.
+
+I have decided to do crack segmentation using Mask-RCNN. But I don't know which methodology is best to measure the length of the cracks. While searching about this, I found many ways to measure the crack size when there is another reference object of known size in the image. But in my case, there won't be any reference object and also it is not possible to know the distance between the camera and target since it is underwater.
+
+If the images are stereo pairs, will that solve this issue?
+
+Can anyone help?
+"
+"['convolutional-neural-networks', 'computer-vision', 'object-detection', 'image-processing']"," Title: How to measure object size from the disparity map using CNN?Body: I am a student learning about image processing using CNN. I want to learn how to measure the object size from the disparity map obtained from left and right stereo images.
+"
+"['unsupervised-learning', 'social', 'supervised-learning', 'explainable-ai', 'algorithmic-bias']"," Title: What needs to be done to make a fair algorithm?Body: What needs to be done to make a fair algorithm (supervised and unsupervised)?
+
+In this context, there is no consensus on the definition of fairness, so you can use the definition you find most appropriate.
+"
+"['neural-networks', 'machine-learning', 'regression']"," Title: Imposing physical constraints (previous knowledge) in a neural network for regressionBody: I'm trying to train a neural network to do a multiple non-linear regression $y=f(x_i), i=1,2…N$. So far it works good (low MSE), but some predictions $y$ are “non-physical”, for instance for our application it is known from first principles that when $x_2$ increases, then $y$ also has to increase ($dy/dx_2>0$), but in some instances the neural network’s output doesn’t comply with this constraint. Another example is that $y + x_5 + x_7$ should be less than a constant $K$
+
+I thought about adding a penalty term to the loss function to enforce these constraints, but I am wondering if there is a ""harder"" way to impose such constraints (that is, to ensure that the constraints will always hold, not only that non-physical predictions will be penalized).
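+
+To illustrate what I mean by the penalty approach, here is a rough sketch in TensorFlow (the feature indices and the constant are placeholders for my actual setup, and since this loss also needs the inputs it would have to be wired up via model.add_loss or a custom training loop rather than as a plain two-argument Keras loss):
+
+import tensorflow as tf
+
+K_CONST = 10.0   # placeholder for the known constant K in y + x_5 + x_7 < K
+
+def loss_with_physics_penalty(x_batch, y_true, y_pred, weight=10.0):
+    mse = tf.reduce_mean(tf.square(y_true - y_pred))
+    # penalty is zero when y + x_5 + x_7 < K holds, positive otherwise
+    violation = tf.nn.relu(y_pred[:, 0] + x_batch[:, 4] + x_batch[:, 6] - K_CONST)
+    return mse + weight * tf.reduce_mean(violation)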
+"
+"['natural-language-processing', 'attention', 'transformer', 'gpt', 'inference']"," Title: Is the Mask Needed for Masked Self-Attention During Inference with GPT-2Body: My understanding is that masked self-attention is necessary during training of GPT-2, as otherwise it would be able to directly see the correct next output at each iteration. My question is whether the attention mask is necessary, or even possible, during inference. As GPT-2 will only be producing one token at a time, it doesn't make sense to mask out future tokens that haven't been inferred yet.
+"
+"['terminology', 'math', 'features', 'data-preprocessing', 'conditional-probability']"," Title: What is ""conditioning"" on a feature?Body: On page 98 of Jet Substructure at the Large Hadron Collider: A Review of Recent Advances in Theory and Machine Learning the author writes;
+
+
+ Redacted phase space: Studying the distribution of inputs and the
+ network performance after conditioning on standard physically-inspired
+ features can help to visualize what new information the network is
+ using from the jet. Training the network on inputs that have been
+ conditioned on specific values of known features can also be useful
+ for this purpose.
+
+
+I cannot find other references to conditioning features. What does that mean?
+"
+"['neural-networks', 'machine-learning', 'prediction', 'regression', 'time-series']"," Title: Choosing neural network output for prediction (regression) of a dynamical systemBody: I’m trying to train a neural network to approximate the output of a dynamical system $dy/dt=f\left(y(t), u(t) \right)$, namely, given $y(0)$ and $u(t_i), i=1,2...N$ I want the network to predict $y(t_i), i=1,2...N$. So far I’ve thought of several approaches, namely
+
+
+- Predict the derivative $dy/dt (t_{i+1}) = f_1 \left(y(t_i), u(t_i) \right)$ and then compute $y(t_{i+1}) = dy/dt (t_{i+1}) \cdot dt + y(t_{i})$
+- Predict the increment $\Delta y (t_{i+1})= f_2 \left(y(t_i), u(t_i), \Delta t \right)$ and then compute $y(t_{i+1}) = \Delta y (t_{i+1}) + y(t_{i})$
+- Directly predict the next value $y(t_{i+1}) = f_3 \left(y(t_i), u(t_i), \Delta t \right)$
+
+
+Which option is recommended?
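+
+To make the options concrete, here is a rough sketch of how I would use option 1 at prediction time (f1_model is a hypothetical network trained to output the derivative; scalar $y$ and $u$ and a fixed time step are assumed for simplicity):
+
+import numpy as np
+
+def rollout_option1(f1_model, y0, u, dt):
+    # y0: initial state, u: sequence of inputs u(t_i), dt: fixed time step
+    y_pred = []
+    y = y0
+    for u_i in u:
+        dydt = f1_model.predict(np.array([[y, u_i]]))[0, 0]
+        y = y + dydt * dt          # explicit Euler step
+        y_pred.append(y)
+    return np.array(y_pred)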
+"
+"['neural-networks', 'convolutional-neural-networks', 'python', 'tensorflow', 'keras']"," Title: What is the fastest way to train a CNN with billions of examples?Body: I have a CNN model that I need to train for a large scale genomics application. It is working well with a subset of my training data. I have scaled up to a subset of about 130 million examples and training time is very long, about 3 hours per epoch. I plan to scale up to the hundreds of billions of training examples and I anticipate training time to be not be feasible with my current design. I would appreciate feedback on how I can streamline the training or improve some aspect of my design that I may not be considering. Currently, I am training from a MongoDB. The training examples are not very large. Here is an example.
+
+{
+ 'added': datetime.datetime(2019, 11, 1, 6, 13, 13, 340000),
+ '_id': ObjectId('5dbbccf92464af872756022e'),
+ 'label': 0,
+ 'accession': 'GM_0001',
+ 'data': '34363,30450,9019,19152,8726,22128,59881,17670,15803,64454,64579,28103,52442,64951,29783,64574,652,19243,33498,14775,18803,4700,55446,53912,47645,41465,48257,16305,62071,12334,44698,24371,46515,8445,3000,61849,43228,18120,23587,11105,5453,42707,42739,46122,31285,40773,48162,16653,58783,2928,2836,21330,46947,6719,26992,8852,14520,46212,47362,43554,2147,39372,33885,59716,37384,14825,53387,58763,18065,34070,23278,15641,40237,47950,58811,40015,36880,29841,45351,14904,49660,48224,54638,50358,17202,10701,3564,4829,62655,5684,37207,49724,16369,6769,37827,38144,63885,5070,42882,48960,16178,35758,50554,54253,34556,2383,39431,30176,11482,24459,4472,53825,7764,44500,4869,50875,33037,56353,46848,30769,18729,46026,41409,2826,12092,17086',
+ 'name': 'Example_1'
+}
+
+
+The relevant data is the 'data' field which is a string of 126 integers where each integer is a value between 0 and about 65,000. The other fields are convenient, but not necessary except for the 'label' field. But even this I could insert into the front of the data field. I mention this because I don't think I necessarily need to train from a MongoDB database.
+
+I am using Keras 2.3.0 with TensorFlow 2.0.0. Below is an example of my code. The workflow is 1) Load a text file containing the document ids of all training examples in the MongoDB collection. I do this so I can shuffle the examples before sending them to the model for training. 2) I load the examples in batches of 50,000 using my Custom_Generator class. This class pulls the documents from the MongoDB using the list of document ids. 3) The model is trained. I use 5 epochs. I currently have 5-fold cross-validation but I know this is not feasible on the full training set. For that I will do a single train-test split. I am currently performing this on a Google Cloud instance with 2 Tesla T4 GPUs. The database is on a bucket. With the cloud I have flexibility of hardware architectures. I would appreciate any insight. This is a rather large engineering challenge for me.
+
+Additional background to the problem:
+The objective is to classify organisms into broad classes quickly for downstream analysis. The pool of organisms I want to classify is very large (10s of thousands) and very diverse. I'm essentially reading the genomes of the organisms like a book. The genome (a series of ""A"", ""T"", ""C"", or ""G"") is processed in chunks through a hash function producing integer strings as shown above. Depending on the size of the organism genome, thousands to millions of these integer strings may be produced. So I have many thousands of organisms producing many thousands to millions of examples. To be successful, I feel like I need to capture the diversity of the genomes in the organism pool. To give an example, even though Ecoli and Salmonella are both bacteria, their genomes are quite distinct. I feel like I need to have them both represented in the training set to distinguish them from other organisms I would label as a different class. As far as reducing the dataset, I think I can get by with only training on a representative organism for a give species (since there are many unique genomes available for Ecoli, for example). This will help considerably, but I think the training data set will likely still be in the billions of examples.
+
+import sys
+import time
+from keras.utils import Sequence, to_categorical, multi_gpu_model
+from keras.models import Sequential
+from keras.layers import Dense
+from keras.layers import Flatten
+from keras.layers import Embedding
+from keras.layers.convolutional import Conv1D
+from keras.layers.convolutional import MaxPooling1D
+from sklearn.model_selection import KFold
+from keras.preprocessing.sequence import pad_sequences
+import numpy as np
+import random
+from pymongo import MongoClient
+from bson import ObjectId
+from sklearn.metrics import classification_report, confusion_matrix
+
+
+class Custom_Generator(Sequence) :
+
+ def __init__(self, document_ids, batch_size) :
+ self.document_ids = document_ids
+ self.batch_size = batch_size
+
+
+ def __len__(self) :
+ return (np.ceil(len(self.document_ids) / float(self.batch_size))).astype(np.int)
+
+
+ def __getitem__(self, idx) :
+ client = MongoClient(port=27017)
+ db = client[database]
+ document_ids = self.document_ids[idx * self.batch_size : (idx+1) * self.batch_size]
+ query_results = db[collection].find({'_id': {'$in': document_ids}})
+ batch_x, batch_y = [], []
+ for result in query_results:
+ kmer_list = result['kmers'].split(',')
+ label = result['label']
+ x = [x for x in kmer_list if len(x) > 0]
+ if len(x) < 1:
+ continue
+ batch_x.append(x)
+ one_hot_y = to_categorical(label, 5)
+ batch_y.append(one_hot_y)
+ batch_x = pad_sequences(batch_x, maxlen=126, padding='post')
+ client.close()
+ return np.array(batch_x), np.array(batch_y)
+
+
+# MongoDB database, collection, and document ids of collection
+database = 'db'
+collection = 'collection_subset2'
+docids_file = 'docids_collection_subset2.txt'
+id_ls = []
+# Convert docids strings to MongoDB ObjectID
+with open(docids_file) as f:
+ for line in f:
+ id_ls.append(ObjectId(line.strip()))
+random.shuffle(id_ls)
+
+# Model
+model = Sequential()
+model.add(Embedding(65521, 100, input_length=126))
+model.add(Conv1D(filters=25, kernel_size=5, activation='relu'))
+model.add(MaxPooling1D(pool_size=2))
+model.add(Conv1D(filters=30, kernel_size=3, activation='relu'))
+model.add(MaxPooling1D(pool_size=2))
+model.add(Flatten())
+model.add(Dense(1000, activation='relu'))
+model.add(Dense(5, kernel_initializer=""normal"", activation=""softmax""))
+model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
+parallel_model = multi_gpu_model(model, gpus=2)
+parallel_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
+
+seed = 7
+batch_size = 50000
+
+# Currently training with 5-fold CV. Will only use single test train split
+# on the full-scale dataset.
+kfold = KFold(n_splits=5, shuffle=True, random_state=seed)
+kfold_stats = {}
+accuracy_ls = []
+val_accuracy_ls = []
+confusion_ls = []
+for fold_idx, (train_idx, test_idx) in enumerate(kfold.split(id_ls)):
+ ids_train = np.array(id_ls)[train_idx].tolist()
+ ids_test = np.array(id_ls)[test_idx].tolist()
+ training_batch_generator = Custom_Generator(ids_train, batch_size)
+ validation_batch_generator = Custom_Generator(ids_test, batch_size)
+ print('Number of train files: %d' % len(ids_train))
+ print('Number of test files: %d' % len(ids_test))
+ start = time.time()
+ history = parallel_model.fit_generator(
+ generator=training_batch_generator,
+ steps_per_epoch = int(len(ids_train) // batch_size),
+ epochs = 5,
+ verbose = 2,
+ validation_data = validation_batch_generator,
+ validation_steps = int(len(ids_test) // batch_size),
+ use_multiprocessing=True
+ )
+ sys.stderr.write(""time to train model (seconds): %d\n""%(time.time() - start))
+ sys.stderr.flush()
+ print(history.history)
+ fold_name = 'kfold_%s' % str(fold_idx)
+ kfold_stats.update({fold_name: history.history})
+ accuracy_ls.extend(history.history['accuracy'])
+ val_accuracy_ls.extend(history.history['val_accuracy'])
+ parallel_model.save('model_output_kfold_%s.h5' % str(fold_idx))
+ print(""Kfold %s finished"" % str(fold_idx))
+ Y_pred = parallel_model.predict_generator(validation_batch_generator)
+ y_pred = np.argmax(Y_pred, axis=1)
+ y_true = np.concatenate([np.argmax(batch[1], axis=1) for batch in validation_batch_generator])
+ print('Confusion Matrix')
+ conf = confusion_matrix(y_true, y_pred)
+ print(conf)
+ confusion_ls.append(conf)
+ print('Classification Report')
+ target_names = ['Class_name_1', 'Class_name_2', 'Class_name_3', 'Class_name_4', 'Class_name_5']
+ report = classification_report(y_true, y_pred, target_names=target_names)
+
+
+"
+"['generative-model', 'autoencoders', 'implementation', 'variational-autoencoder']"," Title: Are there any general tips for troubleshooting a VAE when apparently it is not learning?Body: I am trying to train a VAE for anomaly detection. I chose one architecture from this Github repository and I adjusted the input and output to match what I need. In my case, the input (and hence the output) are a 12D vector. I tried several sizes for the latent space, but, for some reason, it's not training. From the beginning, the KL loss in almost zero (around 1e-10), while the reconstruction loss (MSE for Gaussian distribution) is around 1, and they basically vary around these values without learning anything further.
+
+Are there any general tips for troubleshooting a VAE (I never trained one before)?
+
+I am pretty sure that the code is right and the data for sure has a background and signal (the ratio is 10:1), so I am not really sure what I am doing wrong.
+"
+"['deep-learning', 'keras', 'long-short-term-memory']"," Title: How to represent integer values in sequence to sequence prediction task in encoder-decoder LSTM?Body: I have a large 2D grid having 30k rows and 35k columns, so a total of 30x35k grid cells. Each grid cell is represented by a unique integer number (identity of grid cell). I have several trajectories that passes through these grid cells. Each trajectory is represented by a sequence of numbers (that are grid cells through which the trajectory passes through).
+
+I want to solve the problem of trajectory prediction by giving a partial trajectory as input and predicting the full trajectory. This becomes a sequence-to-sequence problem, where all sequences are integer values by default.
+
+I am trying to solve this problem with an encoder-decoder LSTM architecture. Most tutorials/examples about sequence-to-sequence models on the net are on machine translation, in which vocabularies or characters are one-hot encoded to represent the text as integer values. When I one-hot encode my sequence values, the one-hot vectors become very large because there are (30x35)k grid cells, and the program gives a memory overflow error (because each vector has a size of 1 million).
+
+I am confused here: do I need to treat the grid identity as a categorical variable? All grid identities are numbers, but these identities are not comparable (unlike values such as prices).
+
+Do I need to one-hot encode the integer values in my sequences, or is there any other alternative for solving this problem? I would also appreciate pointers to similar tutorials that deal with sequence-to-sequence prediction problems.
+"
+"['neural-networks', 'deep-learning', 'comparison', 'optimization', 'objective-functions']"," Title: What's the difference between RMSE and Euclidean distance, and when to use a custom loss?Body: I'm searching for a loss function that fits my project. Actually, I have two questions, but they are in the same direction. I take a look at the definition of the root mean squared error (RMSE) and the Euclidean distance and they look the same to me. That's why I want to know the difference between the two. What would be the difference if I use RMSE as a loss function or the euclidean distance?
+
+The second question is how to search for a loss function. I mean I know it depends on the problem and common things are MSE for regression and cross-entropy for classification, but let's say I have a specific problem, how do I search for a loss function?
+
+I also saw that some people use a custom loss function and most of the deep learning frameworks allow us to define a custom loss function, but why would I want to use a custom one? How do I get the intuition that I need a custom loss function?
+
+To be more specific, I'm doing a project where I need to reduce the GPS error of a vehicle. I have some vehicle data and my neural network will try to predict the longitude and latitude, so it's a regression problem. That's why I thought that the Euclidean distance would make sense as a loss function, right? Now, somehow MSE also makes sense to me because it is getting the difference between prediction and ground truth. Does this make sense to you as a professional ML engineer or data scientist? And if there would be a custom loss function that you can use, what would you suggest and why?
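+
+For concreteness, this is the kind of custom loss I have in mind for the latitude/longitude case (a sketch in Keras; whether this is actually preferable to plain MSE is exactly what I am unsure about):
+
+from tensorflow.keras import backend as kb
+
+def euclidean_distance_loss(y_true, y_pred):
+    # y_true and y_pred have shape (batch, 2): predicted latitude and longitude
+    return kb.sqrt(kb.sum(kb.square(y_true - y_pred), axis=-1))
+
+# usage: model.compile(optimizer='adam', loss=euclidean_distance_loss)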
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks', 'object-recognition', 'object-detection']"," Title: Is it possible to train a CNN to predict the dimensions of primitive objects from point clouds?Body: Is it possible to train a convolutional neural network (CNN) to predict the dimensions of primitive objects such as (spheres, cylinders, cuboids, etc.) from point clouds?
+
+The input to the CNN will be the point cloud of a single object and the output will be the dimensions of the object (for example, radius and height of the cylinder). The training data will be the point cloud of the object with the ground truth dimensions in a regression final layer?
+
+I think it is possible for images, since it would be similar to bounding box detection, but I am not sure about point clouds.
+"
+"['deep-learning', 'research', 'reference-request', 'computational-learning-theory', 'generalization']"," Title: What are the state-of-the-art results on the generalization ability of deep learning methods?Body: I've read a few classic papers on different architectures of deep CNNs used to solve varied image-related problems. I'm aware there's some paradox in how deep networks generalize well despite seemingly overfitting training data. A lot of people in the data science field that I've interacted with agree that there's no explanation on why deep neural networks work as well as they do.
+
+That's gotten me interested in the theoretical basis for why deep nets work so well. Googling tells me it's kind of an open problem, but I'm not sure of the current state of research in answering this question. Notably, there are these two preprints that seem to tackle this question:
+
+
+
+If anyone else is interested in and following this research area, could you please explain the current state of research on this open problem? What are the latest works, preprints or publications that attempt to tackle it?
+"
+['hardware-evaluation']," Title: What is the reason AMD Radeon is not widely used for machine learning and deep learning?Body: What is the reason AMD Radeon is not widely used for machine learning and deep learning? Is it mainly an issue of lack of software? Or is Radeon's GPU not as good as NVIDIA's?
+"
+"['datasets', 'data-preprocessing']"," Title: How can I merge two datasets?Body: I want to merge 2 data sets in one, but don't know the right approach to do it. The datasets are similar, the last column is the same - will or not them buy a product. In the first dataset, users who only will buy, in second - only who won't buy.
+
+The 1st dataset contains 500 rows and the 2nd 10,000 rows. What would be the right approach to merging them? How can I normalize them? And how do I indicate to the algorithm that the last column is the target it should learn?
+
+Example:
+
+id income date will_buy
+
+23123 200 10.5 Yes
+
+
+and second dataset:
+
+id income date will_buy
+
+2323 100 10.5 No
+
+"
+"['computer-vision', 'datasets', 'object-detection']"," Title: If an image contains two distinct objects, should I create a copy of this image with distinct labels for each copy?Body: Suppose we want to detect whether an object is one of the following classes: $\text{Object}_1, \text{Object}_2, \text{Object}_3$ and $\text{Person}$. Should the annotated images only contain bounding boxes for either a person or an object? In other words, suppose an image has both $\text{Object}_1$ and $\text{Person}$. Should you create a copy of this image where the first version only has a bounding box on the object and the second copy only has a bounding box on the person?
+"
+"['convolutional-neural-networks', 'object-detection']"," Title: Interpreting Keras Yolov3 config fileBody: How does one interpret the ""min_input_size"", ""max_input_size"" and ""anchors"" fields in the Yolov3 config file here. In particular, suppose we have the following:
+
+ ""min_input_size"": 288,
+
+ ""max_input_size"": 448,
+
+ ""anchors"": [55,69, 75,234, 133,240, 136,129, 142,363, 203,290, 228,184, 285,359, 341,260]
+
+
+Do the ""min_input_size"" and ""max_input_size"" fields indicate the maximum number of training images we can have? What do the numbers in the ""anchors"" field indicate? Are they the coordinates of the anchor boxes? Surprisingly, I have not been able to find a good explanation of many of these fields within this file.
+"
+"['convolutional-neural-networks', 'image-segmentation']"," Title: Best approach for 2D Grid Image SegmentationBody: I'm working on a project where I need to extract text from grocery discount flyers like the Costco announcement below (retrieved in a random google search, Costco is not the deal here):
+
+
+
+If I just run OCR (like with Tesseract in python):
+
+import cv2
+import pytesseract
+img = cv2.imread('costco.jpg')
+text = pytesseract.image_to_string(img)
+print(text)
+
+
+I get:
+
+
+ Cadbury Chocolate
+
+ variety pack packet
+
+ ere $12.99 i rom hagst 31026 2012
+>
+> Je laa
+> + a
+>
+> Wrigley’s Excel Gum variety
+>
+> Backol 24
+>
+> $13.79 fom agus 26.202
+
+ OFF
+
+ Solon Extra virgin olive oil [...]
+
+
+This output is quite noisy.
+
+My guess is that splitting the image into its base squares would enhance the recognition.
+
+However, I'm confused on how to do it. I can classify images using a CNN, but am not sure about object recognition.
+
+Should I use a sliding window, train several ""grid box"" objects on a generic CNN, and then pass the window data to be classified? How do I adapt to distinct object window sizes?
+"
+"['natural-language-processing', 'word-embedding']"," Title: Why embedding layer is used in the character-level Natural Language Processing modelsBody: Problem Background
+
+I am working on a problem which requires a character-level deep learning model. Previously I was working with word-level deep NLP (Natural Language Processing) models, and in these models an embedding encoding was almost always used to represent a given word in a lower-dimensional vector form. Furthermore, such embedding encodings allowed similar words to be placed near each other in the new lower-dimensional vector representation (i.e. the man and woman vectors were close in the vector space), which improved learning. Nevertheless, I often see that people use embedding encodings in character-level NLP models, even though the character-level one-hot vectors are quite small in comparison to word-level one-hot vectors (about 36 vs. 32k rows). Furthermore, there is not much correlation between characters; there is nothing like ""similar characters"" analogous to similar words, so some characters shouldn't be placed near each other.
+
+Question
+Why is embedding encoding used in character-level NLP models?
+"
+"['neural-networks', 'deep-learning', 'tensorflow']"," Title: How should I make output layer of my neural network so that I can get outputs ranging from [-20,-1]Body: I am trying to make a neural network which takes in 0 and 1 as it's input and should give me output ranging from [-20,-1].I am using three layers with sigmoid as the activation function .How should I design my output layer?Any sort of code snippet from your side will be helpful .I am using tensorflow.Please help me out with the same
+"
+"['reinforcement-learning', 'papers', 'temporal-difference-methods', 'eligibility-traces', 'td-lambda']"," Title: How is the general return-based off-policy equation derived?Body: I'm wondering how is the general return-based off-policy equation in Safe and efficient off-policy reinforcement learning derived
+$$\mathcal{R} Q(x, a):=Q(x, a)+\mathbb{E}_{\mu}\left[\sum_{t \geq 0} \gamma^{t}\left(\prod_{s=1}^{t} c_{s}\right)\left(r_{t}+\gamma \mathbb{E}_{\pi} Q\left(x_{t+1}, \cdot\right)-Q\left(x_{t}, a_{t}\right)\right)\right]$$
+
+If it is applied to TD($\lambda$), is this equation the forward view of TD($\lambda$)?
+
+What is the difference between trace $c_s$ and eligibility trace?
+"
+"['terminology', 'definitions', 'hyperparameter-optimization', 'bayesian-optimization']"," Title: What is a ""surrogate model""?Body: In the following paragraph from the book Automated Machine Learning: Methods, Systems, Challenges (by Frank Hutter et al.)
+
+
+ In this section we first give a brief introduction to Bayesian optimization, present alternative surrogate models used in it, describe extensions to conditional and constrained configuration spaces, and then discuss several important applications to hyperparameter optimization.
+
+
+What is an ""alternative surrogate model""? What exactly does ""alternative"" mean?
+"
+"['neural-networks', 'deep-learning', 'tensorflow', 'activation-functions', 'hidden-layers']"," Title: What could be the problem when a neural network with four hidden layers with the sigmoid activation function is not learning?Body: I have a large set of data points describing mappings of binary vectors to real-valued outputs. I am using TensorFlow, and would like to train a model to predict these relationships. I used four hidden layers with 500 neurons in each layer, and sigmoidal activation functions in each layer.
+
+The network appears to be unable to learn, and has high loss even on the training data. What might cause this to happen? Is there something wrong with the design of my network?
+"
+"['machine-learning', 'convolutional-neural-networks', 'ai-design', 'training']"," Title: Can you build a pure CNN phoneme classification model?Body: I was making a simple phoneme classification model for a 10 week-long class project and I ran into a small question.
+Is it possible to create a model that takes a 1-second (the longest phoneme is 0.2 second but the large image is kept for context) spectrogram as input? Some people suggest creating an RNN for phoneme classification, but can you build a pure CNN phoneme classification model?
+"
+"['neural-networks', 'q-learning', 'dropout', 'google', 'legal']"," Title: Can Google's patented ML algorithms be used commercially?Body: I just find that Google patents some of the widely used machine learning algorithms. For example:
+
+Does that mean I can't use those algorithms commercially?
+"
+"['machine-learning', 'comparison', 'overfitting', 'cross-validation', 'k-fold-cv']"," Title: Is k-fold cross-validation more effective than splitting the dataset into training and test datasets to prevent overfitting?Body: I want to prevent my model from overfitting. I think that k-fold cross-validation (because it is doing this each time with different datasets) may be more effective than splitting the dataset into training and test datasets to prevent overfitting, but a colleague (who has little experience in ML) says that, to prevent overfitting, the 70/30% split performs better than the k-fold cross-validation. In my opinion, k-fold cross-validation provides a reliable method to test the model performance.
+Is k-fold cross-validation more effective than splitting the dataset into training and test datasets to prevent overfitting? I am not concerned with computational resources.
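+
+For reference, these are the two evaluation setups I am comparing (a scikit-learn sketch; the estimator and data are placeholders):
+
+from sklearn.datasets import make_classification
+from sklearn.linear_model import LogisticRegression
+from sklearn.model_selection import cross_val_score, train_test_split
+
+# placeholder data and estimator, just to illustrate the two setups
+X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
+clf = LogisticRegression(max_iter=1000)
+
+# option A: single 70/30 hold-out split
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
+clf.fit(X_train, y_train)
+holdout_score = clf.score(X_test, y_test)
+
+# option B: 5-fold cross-validation on the same data
+cv_scores = cross_val_score(clf, X, y, cv=5)
+mean_cv_score = cv_scores.mean()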
+"
+"['deep-learning', 'training', 'terminology']"," Title: What does end-to-end training mean?Body: In simple words, what does end-to-end training mean, in the context of deep learning?
+"
+"['convolutional-neural-networks', 'reference-request', 'papers', 'books']"," Title: What are examples of books or papers on the details of convolutional neural networks?Body: I'm studying a master's degree and my final work is going to be about the convolutional neural network.
+
+I have read a lot of books and took Stanford's Convolutional Networks course, but I need more.
+
+Are there books or papers on the details of convolutional neural networks (in particular, convolutional layer)?
+"
+"['convolutional-neural-networks', 'python', 'feature-extraction', 'representation-learning']"," Title: Plot class activation heatmap of Caffe Model in PythonBody: Given the following 3 research papers, the authors have shown different heatmap graphical representations for features of the trained CNN models:
+
+
+- On the performance of Convnet feature for place recognition: link
+
+- NetVLAD: CNN architecture for weakly supervised place recognition: link
+
+- Deep Learning Features at Scale for Visual Place Recognition: link
+
+
+
+Does anyone know the easiest heatmap implementation in Python given deploy.protxt
and model_wights_bias.caffemodel
files?
+
+PS: I am aware of the following answers and packages: answer1, package1 but they do not provide these solutions shown in figures above!
+
+Thanks,
+"
+"['neural-networks', 'deep-learning', 'terminology', 'papers', 'time-series']"," Title: What is the ""semantic level""?Body: I am reading the paper Hierarchical Attention-Based Recurrent Highway Networks for Time Series Prediction (2018) by Yunzhe Tao et al.
+
+In this paper, they use several times the expression ""semantic levels"". Some examples:
+
+
+- HRHN can adaptively select the relevant exogenous features in different semantic levels
+- the temporal information is usually complicated and may occur at different semantic levels
+- The encoder RHN reads the convolved features $(w_1,w_2,···,w_{T−1})$ and models their temporal dependencies at different semantic levels
+- Then an RHN is used to model the temporal dependencies among convolved input features at different semantic levels
+
+
+What is the semantic level?
+"
+"['autoencoders', 'variational-autoencoder']"," Title: Reduce same sample distance in VAE encodingsBody: I'm working on a beta VAE model learning a latent representation used as a similarity metric for image registration.
+
+One of the main problems I'm facing is that the encoder + sampler output doesn't fulfill the requirements for a mathematical metric (https://en.wikipedia.org/wiki/Metric_(mathematics)) - is there a known way to decrease the same-sample distance after encoding + sampling, as well as to promote transitivity (the triangle inequality) and symmetry?
+"
+"['machine-learning', 'reinforcement-learning', 'applications', 'open-ai', 'control-theory']"," Title: Can OpenAI simulations be used in real world applications?Body: I know that classical control systems have been used to solve the problem of the inverted pendulum - inverted pendulum.
+
+But I've seen that people have also used machine learning techniques to solve this nowadays - machine learning of inverted pendulum.
+
+I came across a video on how to apply a machine learning technique called reinforcement learning on openAI gym - OpenAI gym reinforcement learning.
+
+My question is, can I use this simulation and use it to train a controller for a real-world application of inverted pendulum?
+"
+"['machine-learning', 'datasets', 'supervised-learning', 'decision-trees', 'random-forests']"," Title: How could decision tree learning algorithms cope with imbalanced classes?Body: Decision trees and random forests may or not be more suited to solve supervised learning problems with imbalanced labels (or classes) in datasets. For example, see the article Using Random Forest to Learn Imbalanced Data, this Stats SE question and this Medium post. The information across these sources does not seem to be consistent.
+
+How could decision tree learning algorithms cope with imbalanced classes?
+"
+"['reinforcement-learning', 'kaggle']"," Title: Are there any online competitions for Reinforcement Learning?Body: Kaggle is limited to only supervised learning problems. There used to be www.rl-competition.org but they've stopped.
+
+Is there anything else I can do other than locally trying out different algorithms for various RL problems?
+"
+"['deep-learning', 'convolutional-neural-networks', 'classification', 'training', 'keras']"," Title: Semantic issues with predictions made by my trained modelBody: I'm new to Deep Learning. I used Keras
and trained a inception_resnet_v2
model for my binary classification application (fire detection). As suggested from my previous question of a non-X
class, I prepared a dataset of 8000 images of fire, and a larger dataset for non-fire (20,000 random images) to make sure the network also sees images of non-fire to perform classification.
+
+I trained the model, but now, when I load the model and pass it images of fire and non-fire, it shows the same result for all of them:
+
+[[0. 1.]]
+[[0. 1.]]
+[[0. 1.]]
+[[0. 1.]]
+[[0. 1.]]
+
+
+What is going wrong? Am I doing anything wrong? Should I get the result another way?
+
+===============================================
+
+I know it's not SO, but this is my prediction code in case it matters:
+
+from __future__ import print_function
+from keras.models import load_model, model_from_json
+import cv2, os, glob
+import numpy as np
+from keras.preprocessing import image
+
+if __name__ == '__main__':
+ model = load_model('Resnet_26_0.79_model_weights.h5')
+
+ os.chdir(""test"")
+ for file in glob.glob(""*.jpg""):
+ img_path = file
+ img = image.load_img(img_path, target_size=(300, 300))
+ x = image.img_to_array(img)
+ x = np.expand_dims(x, axis=0)
+
+ dictionary = {0: 'non-fire', 1: 'fire'}
+
+ results = model.predict(x)
+ print(results)
+ predicted_class= np.argmax(results)
+ acc = 100*results[0][predicted_class]
+ print(""Network prediction is: file: ""+ file+"", ""+dictionary[predicted_class]+"", %{:0.2f}"".format(acc))
+
+
+And here is the training:
+
+from keras.applications.inception_resnet_v2 import InceptionResNetV2, preprocess_input
+from keras.preprocessing.image import ImageDataGenerator
+from keras.layers import Dense, Activation, Flatten, Dropout
+from keras.models import Sequential, Model
+from keras.optimizers import SGD, Adam
+from keras.callbacks import ModelCheckpoint
+from keras.metrics import binary_accuracy
+import os
+import json
+#==========================
+HEIGHT = 300
+WIDTH = 300
+TRAIN_DIR = ""data""
+BATCH_SIZE = 8 #8
+steps_per_epoch = 1000 #1000
+NUM_EPOCHS = 50 #50
+lr= 0.00001
+#==========================
+FC_LAYERS = [1024, 1024]
+dropout = 0.5
+
+def build_finetune_model(base_model, dropout, fc_layers, num_classes):
+ for layer in base_model.layers:
+ layer.trainable = False
+
+ x = base_model.output
+ x = Flatten()(x)
+ for fc in fc_layers:
+ # New FC layer, random init
+ x = Dense(fc, activation='relu')(x)
+ x = Dropout(dropout)(x)
+
+ # New layer
+ predictions = Dense(num_classes, activation='sigmoid')(x)
+ finetune_model = Model(inputs=base_model.input, outputs=predictions)
+ return finetune_model
+
+train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input, rotation_range=90, horizontal_flip=True, vertical_flip=True
+ ,validation_split=0.2)
+train_generator = train_datagen.flow_from_directory(TRAIN_DIR, target_size=(HEIGHT, WIDTH), batch_size=BATCH_SIZE
+ ,subset=""training"")
+#split validation manually
+validation_generator = train_datagen.flow_from_directory(TRAIN_DIR, target_size=(HEIGHT, WIDTH), batch_size=BATCH_SIZE,subset=""validation"")
+
+base_model = InceptionResNetV2(weights='imagenet', include_top=False, input_shape=(HEIGHT, WIDTH, 3))
+
+root=TRAIN_DIR
+class_list = [ item for item in os.listdir(root) if os.path.isdir(os.path.join(root, item)) ]
+print (""class_list: ""+str(class_list))
+
+finetune_model = build_finetune_model(base_model, dropout=dropout, fc_layers=FC_LAYERS, num_classes=len(class_list))
+
+adam = Adam(lr)
+# change to categorical_crossentropy for multiple classes
+finetune_model.compile(adam, loss='binary_crossentropy', metrics=['accuracy'])
+
+filepath=""./checkpoints/"" + ""Resnet_{epoch:02d}_{acc:.2f}"" +""_model_weights.h5""
+checkpoint = ModelCheckpoint(filepath, monitor=[""val_accuracy""], verbose=1, mode='max', save_weights_only=False)
+callbacks_list = [checkpoint]
+
+history = finetune_model.fit_generator(train_generator, epochs=NUM_EPOCHS, workers=BATCH_SIZE,
+ validation_data=validation_generator, validation_steps = validation_generator.samples,
+ steps_per_epoch=steps_per_epoch,
+ shuffle=True, callbacks=callbacks_list)
+
+"
+"['algorithm', 'monte-carlo-tree-search']"," Title: How to understand the 4 steps of Monte Carlo Tree SearchBody: From many blogs and this one https://web.archive.org/web/20160308070346/http://mcts.ai/about/index.html
+We know that the process of MCTS algorithm has 4 steps.
+
+
+
+ - Selection: Starting at root node R, recursively select optimal child nodes until a leaf node L is reached.
+
+
+
+What does leaf node L mean here? I thought it should be a node representing a terminal state of the game, or in other words, one which ends the game.
+If L is not a terminal node (an end state of the game), how do we decide that the selection step stops at node L? In general algorithmic terms, a leaf node is one that does not have any children.
+
+
+
+ - Expansion: If L is a not a terminal node (i.e. it does not end the game) then create one or more child nodes and select one C.
+
+
+
+From this description I realise that my previous thought was obviously incorrect.
+Then, if L is not a terminal node, it implies that L should have children, so why not continue finding a child from L in the ""Selection"" step?
+Do we have the children list of L at this step?
+From the description of this step itself, when do we create one child node, and when do we need to create more than one child node? Based on what rule/policy do we select node C?
+
+
+
+ - Simulation: Run a simulated playout from C until a result is achieved.
+
+
+
+Because of the confusion from the 1st question, I cannot understand why we need to simulate the game. I thought that from the selection step we could already reach a terminal node, so the game should end at node L on this path, and we wouldn't even need to do ""Expansion"" because node L is the terminal node.
+
+
+- Backpropagation: Update the current move sequence with the simulation result. Fine.
+
+
+Last question, from where did you get the answer to these questions?
+
+Thank you
+"
+"['neural-networks', 'convolutional-neural-networks']"," Title: What could I do to this CNN to achieve a higher accuracy on the cifar10 dataset?Body: I have achieved around 85% accuracy using the following architecture:
+
+
+
+I used a learning rate of 0.001 and trained the model over 125 epochs with a batch size of 64.
+Any suggestions would be much appreciated. Thanks in advance.
+"
+"['reference-request', 'object-detection', 'speech-recognition']"," Title: Are there any good ways of simultaneously incorporating object detection with speech recognition?Body: Are there any good ways of simultaneously incorporating object detection with speech recognition? For example, if you want to identify whether an animal is a dog or cat, we can obviously use visual features (e.g. YOLO, CNNs, etc.). But how would you incorporate speech and sound in this model?
+"
+"['reinforcement-learning', 'sutton-barto', 'reward-design', 'reward-functions', 'reward-hypothesis']"," Title: Counterexamples to the reward hypothesisBody: On Sutton and Barto's RL book, the reward hypothesis is stated as
+
+
+ that all of what we mean by goals and purposes can be well thought of as the maximization of the expected value of the cumulative sum of a received scalar signal (called reward)
+
+
+Are there examples of tasks where the goals and purposes cannot be well thought of as the maximization of the expected value of the cumulative sum of a received scalar signal?
+
+All I can think of are tasks with subjective rewards, like ""writing good music"", but I am not convinced because maybe this is actually definable (perhaps by some super-intelligent alien) and we just aren't smart enough yet. Thus, I'm especially interested in counterexamples that logically or provably fail the hypothesis.
+"
+"['neural-networks', 'reinforcement-learning', 'python', 'open-ai']"," Title: Same implementation, but agent is not learning in Retro Pong EnvironmentBody: I tried to implement the exact same python coding by Andrej Karpathy to train RL agent to play Pong, except that I migrated the environment from Gym to Retro.
+Everything is the same, except that the action space in Retro uses an index array rather than the discrete actions in Gym. The array has a size of 8, and indices 4 and 5 are the actions to move up and down.
+
+But why has this small modification caused the agent to not learn at all, with the running reward stuck at -20 after over 3,000 episodes?
+
+I have checked the frame pre-processing before input to the policy forward neural network and it seems to be normal.
+
+As far as I know, the output of the neural network is the probability that the paddle moves upwards, so I checked it. After a few thousand episodes, the probability of moving up just stayed at 0.5.
+
+I know the problem lies somewhere between the pre-processing and the policy forward pass, but I just cannot locate it. I would appreciate it if someone could help.
+
+The whole code is as follows:
+
+import retro
+import numpy as np
+import _pickle as pickle
+
+H = 200 # number of hidden layer neurons
+batch_size = 10 # every how many episodes to do a param update?
+learning_rate = 1e-4
+gamma = 0.98 # discount factor for reward
+decay_rate = 0.98 # decay factor for RMSProp leaky sum of grad^2
+resume = False # resume from previous checkpoint?
+render = True
+
+
+# model initialization
+D = 80 * 80 # input dimensionality: 80x80 grid
+if resume:
+ model = pickle.load(open('save.p', 'rb'))
+else:
+ model = {}
+ model['W1'] = np.random.randn(H,D) / np.sqrt(D) # ""Xavier"" initialization
+ model['W2'] = np.random.randn(H) / np.sqrt(H)
+
+grad_buffer = { k : np.zeros_like(v) for k,v in model.items() } # update buffers that add up gradients over a batch
+rmsprop_cache = { k : np.zeros_like(v) for k,v in model.items() } # rmsprop memory
+
+def sigmoid(x):
+ return 1.0 / (1.0 + np.exp(-x)) # sigmoid ""squashing"" function to interval [0,1]
+
+def prepro(I):
+ """""" prepro 210x160x3 uint8 frame into 6400 (80x80) 1D float vector """"""
+ I = I[35:195] # crop
+ I = I[::2,::2,0] # downsample by factor of 2
+ I[I == 144] = 0 # erase background (background type 1)
+ I[I == 109] = 0 # erase background (background type 2)
+ I[I != 0] = 1 # everything else (paddles, ball) just set to 1
+ return I.astype(np.float).ravel()
+
+def discount_rewards(r):
+ """""" take 1D float array of rewards and compute discounted reward """"""
+ discounted_r = np.zeros_like(r)
+ running_add = 0
+ for t in reversed(range(0, r.size)):
+ if r[t] != 0: running_add = 0 # reset the sum, since this was a game boundary (pong specific!)
+ running_add = running_add * gamma + r[t]
+ discounted_r[t] = running_add
+ return discounted_r
+
+def policy_forward(x):
+ h = np.dot(model['W1'], x)
+ h[h<0] = 0 # ReLU nonlinearity
+ logp = np.dot(model['W2'], h)
+ p = sigmoid(logp)
+ return p, h # return probability of taking action 2, and hidden state
+
+def policy_backward(eph, epdlogp):
+ """""" backward pass. (eph is array of intermediate hidden states) """"""
+ dW2 = np.dot(eph.T, epdlogp).ravel()
+ dh = np.outer(epdlogp, model['W2'])
+ dh[eph <= 0] = 0 # backpro prelu
+ dW1 = np.dot(dh.T, epx)
+ return {'W1':dW1, 'W2':dW2}
+
+env=retro.make(game='Pong-Atari2600',players=1)
+observation = env.reset()
+prev_x = None # used in computing the difference frame
+xs,hs,dlogps,drs = [],[],[],[]
+running_reward = None
+reward_sum = 0
+episode_number = 0
+
+while True:
+ if render: env.render()
+
+ action=[0,0,0,0,0,0,0,0] #reset the rl action
+ # preprocess the observation, set input to network to be difference image
+ cur_x = prepro(observation)
+ x = cur_x - prev_x if prev_x is not None else np.zeros(D)
+ prev_x = cur_x
+
+ # forward the policy network and sample an action from the returned probability
+ aprob, h = policy_forward(x)
+
+ rlaction = 4 if np.random.uniform() < aprob else 5 # roll the dice!
+ # record various intermediates (needed later for backprop)
+ xs.append(x) # observation
+ hs.append(h) # hidden state
+ y = 1 if rlaction == 4 else 0 # a ""fake label""
+ dlogps.append(y - aprob) # grad that encourages the action that was taken to be taken (see http://cs231n.github.io/neural-networks-2/#losses if confused)
+ action[rlaction]=1
+ # step the environment and get new measurements
+ observation, reward, done, info = env.step(action)
+ reward_sum += reward
+ drs.append(reward) # record reward (has to be done after we call step() to get reward for previous action)
+
+
+ if done: # an episode finished
+
+ episode_number += 1
+
+ # stack together all inputs, hidden states, action gradients, and rewards for this episode
+ epx = np.vstack(xs)
+ eph = np.vstack(hs)
+ epdlogp = np.vstack(dlogps)
+ epr = np.vstack(drs)
+ xs,hs,dlogps,drs = [],[],[],[] # reset array memory
+
+ # compute the discounted reward backwards through time
+ discounted_epr = discount_rewards(epr)
+ # standardize the rewards to be unit normal (helps control the gradient estimator variance)
+ discounted_epr -= np.mean(discounted_epr)
+ discounted_epr /= np.std(discounted_epr)
+
+ epdlogp *= discounted_epr # modulate the gradient with advantage (PG magic happens right here.)
+ grad = policy_backward(eph, epdlogp)
+ for k in model: grad_buffer[k] += grad[k] # accumulate grad over batch
+
+ # perform rmsprop parameter update every batch_size episodes
+ if episode_number % batch_size == 0:
+ for k,v in model.items():
+ g = grad_buffer[k] # gradient
+ rmsprop_cache[k] = decay_rate * rmsprop_cache[k] + (1 - decay_rate) * g**2
+ model[k] += learning_rate * g / (np.sqrt(rmsprop_cache[k]) + 1e-5)
+ grad_buffer[k] = np.zeros_like(v) # reset batch gradient buffer
+
+ # boring book-keeping
+ running_reward = reward_sum if running_reward is None else running_reward * 0.99 + reward_sum * 0.01
+ print(('%d , %d , %f ') % (episode_number-1,reward_sum,running_reward))
+ if episode_number % 20 == 0: pickle.dump(model, open('save.p', 'wb'))
+ reward_sum = 0
+ observation = env.reset() # reset env
+ prev_x = None
+
+"
+"['neural-networks', 'reference-request', 'object-detection']"," Title: Which neural network is appropriate for measuring object dimensions from stereo images?Body: I have stereo pairs (left, right) images of concrete cracks. I want to measure the length of the crack from those image pairs. Which neural network is appropriate for measuring object dimensions from stereo images?
+
+Note: I am required to use an NN-based technique only.
+"
+"['reinforcement-learning', 'sutton-barto', 'episodic-tasks', 'on-policy-distribution']"," Title: In the on-policy state distribution for episodic tasks, why don't we take into account the length of the episode?Body: In Sutton & Barto's ""Reinforcement Learning: An Introduction"", 2nd edition, page 199, they describe the on-policy distribution for episodic tasks in the following box:
+
+
+
+I don't understand how this can be done without taking the length of the episode into account. Suppose a task has 10 states, has probability 1 of starting at the first state, then moves to any state uniformly until the episode terminates. If the episode has 100 time steps, then probability of the first state is proportional to $1 + 100\times 1/10$; if it has $1000$ time steps, it will be proportional to $1 + 1000\times 1/10$. However, the formula given would make it proportional to $1 + 1/10$ in both cases. What am I missing?
+"
+"['machine-learning', 'tensorflow', 'ai-basics']"," Title: Is there has any method to train Tensorflow AI/ML that I focus on detecting background of image more than common objects?Body: Is there has any method to train Tensorflow AI/ML that I focus on detecting background of image more than common objects?
+
+I'm a newbie to the ML field, but I was assigned a job to make an application that can analyze showroom images/places, detect the floor and walls, and then find out which material/ceramic/marble/etc. product they are.
+
+Example: this is a showroom picture,
+
+
+
+and the wall and the floor of the showroom use this product material:
+
+
+
+
+- Is it possible to do something like what I described?
+- How should I start?
+- If I don't want to install TensorFlow on my computer, is there a service that can build a model to use on the device? (my goal is to use the model on an Android device)
+- What method/type of ML should I use: 'Classification', 'Object Detection', or something else?
+
+"
+"['image-recognition', 'computer-vision', 'image-segmentation', 'transfer-learning', 'pretrained-models']"," Title: How can I improve the performance of a model trained to detect vehicle poses?Body: I'm looking for some suggestions on how to improve our vehicle image recognition. We have an online marketplace where customers submit photos of their vehicles. The photos need to meet certain requirements before the advert can be approved.
+
+Customers are required to submit the following vehicle photos: front, back, left-side, right-side, engine (similar to the front photo but with the hood open) and instrument panel cluster. The vehicle must be well framed in the photo, in other words, it must not be too small or so big that the edges touch the frame of the photograph. It also needs to be one of the mentioned types and the camera must be facing the vehicle directly with only small angle variations (a front photo can't include a large piece of the side of a car).
+
+Another developer had a go and built a CNN with Keras which does alleviate some manual grind (about 20,000 photos were used for training - no annotations). The accuracy sits at around 75% for the vehicle photos but only 55% for the engine and instrument cluster. Each photo is still manually checked, but it is a case of agreeing or disagreeing with what was recognised.
+
+I was wondering if it wouldn't be better to detect a vehicle in the image using an existing pre-trained model like ImageAI. Use the bounding box of the vehicle to determine it is correctly placed in the frame of the photograph and within acceptable dimensions. There may be multiple vehicles in the picture so work with the most prominent one.
+
+At that point, would it be worth trying to develop something to work out the pose of the vehicle (idea: https://github.com/johnberroa/CORY), or just do some transfer learning with whatever pre-existing trained model was used and spend some time annotating the images?
+
+
+"
+"['convolutional-neural-networks', 'computer-vision', 'image-recognition', 'data-preprocessing']"," Title: How to deal with images of different sizes, which need to be passed to a model of fixed input size, without losing details and spatial information?Body: I have the following problem while using convolutional neural networks to detect forgeries:
+Resizing the image to fit the required input size may not be a good way because the forgery detection largely relies on the details of images, for example, the noise. Thus the resizing process may change/hurt the details.
+Existing methods mainly use image patches (obtained from cropping) that have the same size. This approach, however, drops the spatial information.
+I'm looking for some suggestions on how to deal with this problem (input size inconsistency) without leaving out the spatial information.
+"
+"['deep-learning', 'applications', 'generative-adversarial-networks', 'autoencoders', 'deepfakes']"," Title: How does deepfake technology work with multiple people in a single frame?Body: I was watching this video from corridor crew, according to them, they have used deepfake technology to create this video. I myself have never made a deepfake videos, but I have enough knowledge in the underlying technology to know that it's hard to swap a face with multiple people existing simultaneously in a frame and let alone swapping faces of multiple people in a single frame. But corridor crews video showed that multiple deepfakes can be done, and that's why I am sceptical about the video of using deepfake technology.
+
+If the video is indeed made with deepfake technology, then what is the mechanism behind this? My own guess is that they might have masked out the other people and concentrated on one person in a frame, then used this masked frame to generate the deepfake, which was then composited back into the original frame. Do you think this is possible? Is there a research article or blog post which explains this process?
+"
+"['machine-learning', 'optimization', 'constraint-satisfaction-problems', 'linear-programming']"," Title: Which algorithm can I use to solve a problem with multiple objectives and constraints?Body: Consider a problem with many objectives. In my case, these are school grades for different courses (or subjects). To be more concrete, suppose that my current grade for the math course is $12/20$ and for the philosophy course is $8/20$. My objective is to get $16/20$ for the math course and $15/20$ for the philosophy course.
+I have the possibility to take different courses, but I need to decide which ones. These courses can have a different impact depending on the subject. Let's say that the impact factor is in the range $[0, 1]$, where $0$ means no impact and $1$ means a big impact. Then the math course could have a big impact (e.g. $0.9$) on the grade, while maybe a philosophy course may not have such a big impact.
+The overall goal is to increase all the grades as much as possible while taking into account the impact of their associated course. In my case, I can have more than two courses and subjects.
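+One way I have tried to formalize this (just my own attempt, so the notation is mine, not from any reference): let $g_j$ be my current grade in subject $j$, $t_j$ the target grade, $a_{ij} \in [0, 1]$ the impact of course $i$ on subject $j$, and $x_i \in \{0, 1\}$ indicate whether I take course $i$. Then I want to choose the $x_i$ (possibly with a limit on how many courses I can take) so that the predicted grades $g_j + \sum_i a_{ij} x_i$ get as close as possible to, or above, the targets $t_j$ for all subjects at once.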
+So, which algorithms can I use to solve this problem?
+"
+"['neural-networks', 'deep-learning', 'generative-model', 'image-generation', 'variational-autoencoder']"," Title: Does MMD-VAE solve the problem of blurred images of vanilla VAEs?Body: I understand that with vanilla VAEs, there are a few reasons justifying the production of blurred out images. The InfoVAE paper describes the case when the decoder is flexible enough to ignore the latent attributes and generate an averaged out image that best reduces the reconstruction loss. Thus the blurred image.
+
+How much of the problem of blurring is really mitigated by the MMD formulation in practical experiments? If someone has experience working with MMD-VAEs, I'd like to know their opinion on what the reconstruction quality of MMD-VAEs is really like.
+
+Also, does the replacement of the MSE reconstruction loss metric by other perceptual similarity metrics improve generated image quality?
+"
+"['deep-learning', 'datasets', 'scene-classification', 'accuracy']"," Title: What does top N accuracy mean?Body: Places205-VGG, a CNN trained model for 205 scene categories of Places Database with 2.5 million images Places205
dataset has top1 accuracy = 58.9%
and top5 accuracy = 87.7%
.
+
+What does top1
and top5
(and, in general, top $N$) accuracy mean in the context of deep learning?
+"
+"['reinforcement-learning', 'markov-decision-process', 'pomdp']"," Title: Finding total number of states in a POMDPBody: I've been working on a question that is posed in a document I've been reading, that models qualifying for a job as a POMDP. In this model, a person takes 3 exams, and must pass all of them in order to get the job. The person make either be qualified or not qualified in the subjects covered by each individual exam, and there is some probability that a person may be qualified for a subject covered in a particular exam, but still not pass (due to nerves). False passes are also possible as well.
+A candidate is allowed a maximum of 2 exam attempts.
+
+In understanding this problem, I've tried to list out all possible states the person might be in (qualified / not qualified for each exam), and have found the following possible states: (where Q is qualified, and N is not qualified)
+
+QQQ
+QQN
+QNN
+QNQ
+NNN
+NQN
+NNQ
+NQQ
+
+So, the total number of possible states is 8.
+
+Have I covered all possible states? I'm wondering if there's an easier way to find out the total number of states, without having to list them all out in the above way. I'm very new to this field, so any help is appreciated.
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'objective-functions']"," Title: Loss function for increasing the quality of the image when labels are not perfectly allignedBody: I am trying to increse the quality of the images that I gather from the microscope. That is a acoustic microscope and there are lots of technical details but in a nutshell the low quality images and its corresponding high quality images that I gather from the same sample are not perfectly alligned because in my setting it is impossible to increase quality without removing the sample from the microscope so when I put it again it is a manual process so they are not perfectly aligned.
+
+The output of my network will be, let's say, a 256 x 256 image, and its corresponding label will be a high-quality 256 x 256 image of, in theory, exactly the same area. If I make a pixel-to-pixel comparison between them, for example taking MSE as the loss function, will it be able to learn? I am not sure, because the pixels are not perfectly aligned and do not represent the same area of the image (the difference is not that great, but they are not perfectly aligned, as I said).
+"
+"['expert-systems', 'lisp']"," Title: Trying to get started with LISA and LispBody: I am trying to to do a sort of block-validator for bitcoin(and alike chains), in it I need to depending on chain and block-height only allow certain operators in the transactions scripts. One thought I had was that this might be something that LISA should be good for. (I might be wrong in this) but shouldn't something like a rule engine be a good fit for that? What I want is a good way to defines the rules for my validator on how to validate that a block and its transactions adhere to the consensus rules?
+
+I am sort of getting to this point
+
+
+
+(defpackage #:chain-validator
+ (:shadowing-import-from #:lisa #:assert)
+ (:use #:lisa
+ #:cl))
+
+(in-package :chain-validator)
+
+(defclass chain-fundamental () ())
+
+(defclass chain-block (chain-fundamental)
+ ((height :initarg :height :initform 1)
+ (chain :initarg :chain :initform :bitcoin)))
+
+(defclass chain-tx (chain-fundamental)
+ ((in-block :initarg :block :initform 'nil)
+ (pk-script :initarg :pk-script)
+ (is-coinbase-tx :initarg :coinbase :initform 'f)))
+
+(defclass chain-OP (chain-fundamental)
+ ((name :initarg :name)
+ (op-code :initarg :op-code)))
+
+(defrule dont-allow-op-mul-after-height-1000
+ ;;; how to write this one?????
+ ;; but if I only want to allow mul on a certain chain after height
+ ;; 2000?
+ )
+
+(defrule startup ()
+ =>
+ (assert-instance
+ (make-instance 'chain-OP :name :PUSHDATA4 :op-code #x4e))
+ (assert-instance
+ (make-instance 'chain-OP :name :EQUAL :op-code #x87))
+ (assert-instance
+ (make-instance 'chain-OP :name :MUL :op-code #x95))
+ (let* ((genesis-blk (make-instance 'chain-block))
+ (later-blk (make-instance 'chain-block :height 2500))
+ (first-coinbase-tx (make-instance 'chain-tx :block genesis-blk))
+ (later-coinbase-tx (make-instance 'chain-tx :block later-blk)))
+ (assert-instance genesis-blk)
+ (assert-instance later-blk)
+ (assert-instance first-coinbase-tx)
+ (assert-instance later-coinbase-tx)))
+
+
+;; How can I use LISA to get the chain-OPs that are allowed for a
+;; transaction belonging to a specific block at some height? I sort of
+;; want to find them all so I can later verify that the pk-script part
+;; only contains those OPs. Could I write rules that would actually
+;; do the validation for me, i.e. rules that would check that
+;; chain-tx.pk-script only contains certain OPs? And if we have
+;; multiple chains, how do I write the rules to take that into account?
+
+
+But after that I don't know how to proceed. The questions I want LISA to answer for me are things like:
+
+
+- What are the valid script operations for a certain block, or transaction?
+- Is this block or transaction valid?
+
+
+Maybe what I need is a primer on rule engines or a good tutorial. I just can't really get my head around how to write the rules.
+"
+"['philosophy', 'intelligence', 'artificial-creativity']"," Title: Is artificial intelligence really just human intelligence?Body: Essentially, AI is created by human minds, so is the intelligence & creativity of algorithms properly an extension of human intelligence & creativity, rather than something independent?
+
+I assume that intelligence does not necessarily require creativity, however, creativity can result from machine learning. (A simple example is AlphaGo discovering novel strategies.)
+"
+"['comparison', 'programming-languages']"," Title: When are compiled vs. interpreted languages more optimal in AI?Body: When are interpreted languages more optimal? When are compiled languages more optimal? What are the qualities and functions that render the so in relation to various AI methods?
+"
+"['deep-learning', 'classification', 'weights']"," Title: Classification with deeplearning : clean start vs continue trainingBody: I trained some weights to identify apples and oranges (using YOLOv3).
+
+If I want to be able to identify peaches, which approach is usually recommended:
+
+
+- Start clean and train the 3 classes.
+- Train the peaches over the already-trained weights (with apples and oranges)
+
+
+- Only train with peaches images
+- Use all available training data (including apples and oranges)
+
+
+
+This is what I have found:
+
+
+- If I start clean, it will take longer until I can get a good result, but the detection is usually better.
+- Every time I add a new class (using 2.2), the detection gets worse for the already-learned objects, but it takes less time until I can get a good result (however, I suspect that apples and oranges become over-fitted?).
+- I haven't tested 2.1, as I think that it won't be able to re-adjust the weights for the apples and the oranges.
+
+
+Is the above expected? What is the recommended course of action?
+"
+"['deep-learning', 'convolutional-neural-networks', 'reference-request', 'object-detection']"," Title: How to count pixels in a object mask which is segmented using Mask R-CNN?Body: I have segmented concrete cracks from concrete structure images using Mask R-CNN. Now I need to measure the length of the segmented masked crack.
+
+Will the pixel counting method work? Can anyone help?
+
+Note: The images are taken at a constant distance from the object.
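+
+To be concrete, by ""pixel counting"" I mean something like the minimal sketch below (this is only my idea, assuming the predicted mask is a binary NumPy array and mm_per_pixel is a hypothetical scale I would calibrate from the known camera distance):
+
+import numpy as np
+
+def crack_pixel_measure(mask, mm_per_pixel):
+    # count the pixels labelled as crack in the predicted mask
+    n_pixels = int(np.count_nonzero(mask))
+    # naively convert the pixel count into a physical measure; whether this
+    # is a reasonable proxy for the crack length is exactly what I am unsure about
+    return n_pixels * mm_per_pixel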
+"
+"['convolutional-neural-networks', 'reference-request', 'image-processing', 'image-segmentation', 'filters']"," Title: What are some references that describe known filters (or kernels) and how we can create new ones?Body: I'm pursuing a master's degree in Artificial Intelligence. My final work is about Convolutional Neural Networks.
+I was looking for information about filters (or kernels) of the convolutional layers. I have found this article: Lode's Computer Graphics Tutorial - Image Filtering, but I need more.
+Do you know of more resources about other filters (ones that are known to work) and about how to create new ones?
+In other words, I want to know how they work and how I can create new ones.
+I've thought to create a C++ program, or with Octave, to test the new kernels.
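+
+For instance, my plan for testing a kernel is roughly the following (sketched here in Python/NumPy only to show the idea, since the actual program would be in C++ or Octave; the sharpen kernel is just a common example, not taken from any particular reference):
+
+import numpy as np
+from scipy.ndimage import convolve
+
+# a common 3x3 sharpen kernel, used here only as an example
+sharpen = np.array([[ 0, -1,  0],
+                    [-1,  5, -1],
+                    [ 0, -1,  0]], dtype=float)
+
+image = np.random.rand(64, 64)        # stand-in for a grayscale MRI slice
+filtered = convolve(image, sharpen)   # apply the kernel to the whole image
+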
+By the way, my research will be focused on image segmentation to process MRIs.
+"
+"['training', 'hyperparameter-optimization', 'hyper-parameters']"," Title: How to organize model training hyperparametersBody: I am working on multiple deep learning projects, most of them in the area of computer vision. For many of them I create multiple models, try different approaches, use various model architectures. And of course I try to optimize hyperparameters for each model.
+
+Now, that itself works fine. However, I'm starting to lose track of all the various parameters and model layouts I have tried. The problem is, sometimes, for example, I want to re-train a model from a past project with a new data set, but using the same hyperparameters from the last (best) successful training. So I need to look up that project's documentation, or I have some hyperparameters saved in a text or Excel file, etc.
+
+For me that feels a bit cumbersome. I bet I am not the only one facing this problem; surely there must be a better way than ""remembering"" all the hyperparameters from all projects / models manually via text files and the like.
+
+What are your experiences, have you found a better software / solution / approach / best practice / workflow for that? I must admit, I would welcome a software to aid with that a lot.
+"
+"['reinforcement-learning', 'math', 'definitions', 'norvig-russell', 'markov-property']"," Title: What does the Markov assumption say about the history of state sequences?Body: Does the Markov assumption say that the conditional probability of the next state only depends on the current state or does it say that the conditional probability depends on a fixed finite number of previous states?
+
+As far as I understand from the related Wikipedia article, the probability of the next state $s'$ appearing depends only on the current state $s$.
+
+However, in the book ""Artificial Intelligence: A Modern Approach"" by Russell and Norvig, on page 568, they say: ""Markov assumption — that the current state depends on only a finite fixed number of previous states"".
+
+To me, the second statement seems contradictory to the first, because it may mean that a state can depend on a history of states, as long as their number is fixed and finite. For example, the current state could depend on the last state and the state before the last state, i.e. 2 sequential previous states (a finite number of states).
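+
+To make the two readings concrete (this is just my own notation for the contrast): the first reading says $P(S_{t+1} \mid S_t, S_{t-1}, \dots, S_0) = P(S_{t+1} \mid S_t)$, while the second reading would allow $P(S_{t+1} \mid S_t, S_{t-1}, \dots, S_0) = P(S_{t+1} \mid S_t, \dots, S_{t-k+1})$ for some fixed, finite $k$.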
+
+Are the Markov assumption and the Markov property the same thing?
+"
+"['image-recognition', 'tensorflow', 'pretrained-models']"," Title: Can mobilenet in some cases perform better than inception_v3 and inception_resnet_v2?Body: I have implemented a multi-label image classification model where I can choose which model to use, I was surprised to find out that in my case mobilenet_v1_224 performed much better (95% Accuracy) than the inception models (around 88% Accuracy), I'm using pretrained models (that I download from here and adding a final layer that I train on my own data (3000 images). I wanted to get your opinion and see if maybe I'm doing something wrong.
+"
+['reinforcement-learning']," Title: Effects of translating RL action probability through non linearityBody: I am training an RL agent (specifically using the PPO algorithm) on a game environment with 2 possible actions left or right.
+
+The actions can be taken with varying ""force""; e.g. go left 17% or go right 69.3%. Currently, I have the agent output 21 actions - 10 for left (in 10% increments), 10 for right in 10% increments and 1 for stay in place (do nothing). In other words, there is a direct 1-1 mapping in 10% increments between the agent output and the force the agent uses to move in the environment.
+
+I am wondering, if instead of outputting 21 possible actions, I change the action space to a binary output and obtain the action probabilities. The probabilities will have the form, say, [70, 30]. That is, go left with 70% probability and go right with 30% probability. Then I take these probabilities and put them through a non-linearity that translates to the actual action force taken; e.g an output of 70% probability to go left, may in fact translate to moving left with 63.8% force.
+
+The non-linear translation is not directly observed by the agent, but it will determine the subsequent state, which is directly observed.
+
+I don't fully understand what the implications of doing this will be. Is there any argument that this would increase performance (rewards) as the agent does not need to learn direct action mappings, rather just a binary probability output?
+"
+"['deep-learning', 'image-processing', 'algorithm-request', 'model-request']"," Title: Is there any existing attempt to create a deep learning model which extracts vector paths from bitmaps?Body: I need an algorithm to trace simple bitmaps, which only contain paths with a given stroke width.
+Is there any existing attempt to create a deep learning model which extracts vector paths from bitmaps?
+It is obviously very easy to generate bitmaps from vector paths, so creating data for a machine learning algorithm is simple. The model could be trained by giving both the vector and bitmap representation. Once trained, it would be able to generate the vector paths from the given bitmap.
+This seems simple, but I could not find any work on this particular task. So, I suppose this problem is not fitted for current deep learning architectures, why?
+The goal is to trace this kind of image, which would be drawn by hand with a thick felt pen and scanned:
+
+So, is there a deep learning architecture fitted for this problem?
+I believe this question could help me understand what is possible to do with deep learning and what is not, and why. Tracing bitmaps is a perfect example of converting sparse data to a dense abstract representation; I have the intuition one can learn a lot from this problem.
+"
+"['natural-language-processing', 'chat-bots', 'benchmarks']"," Title: NLP annotation tool online and other tools to compare performances of different NLP algorithmsBody: I do text annotations (POS tagging, NER, chunking, synset) by using a specific annotation tool for Natural Language Processing. I would like to make the same annotations on different tools to compare the performances of both.
+
+Furthermore, since I found several logical and linguistic errors in the way the algorithm was previously trained, I would like to measure how such anomalies affect the intelligence of the chatbot (that is to say, its ability to understand questions and answers made by the customers with regard to sentences which have been structured in a certain way), by comparing its results with those produced by other NLP engines.
+In other terms, I would like to collect some ""benchmarks"" to have an idea of what level the NLP algorithm developed by the company I work for performs at.
+
+Is there any tool (open source annotation tools based on other NLP algorithms, tools to collect benchmark, etc.) which might help me to perform such a task?
+"
+"['neural-networks', 'python', 'backpropagation']"," Title: Calculation of Neural network biases in backpropagationBody: While learning neural networks I've found a basic Python working example to play with. It has 3 input nodes, 4 nodes in a hidden layer, 1 output node. 5 data sets for training.
+
+The initial code is without biases, which I'm trying to implement in both the forward and backward calculations. From different internet sources I see that a bias is just like the other weights but with a static input value of 1, and that the backpropagation calculation should be similar and even simpler.
+
+But my current code version is not working - with the same input I get very different results from ~0.002 to ~0.99.
+
+Please help me to fix the bias calculations - probably the lines marked with ???. Here is the Python 2 testing code:
+
+import numpy as np
+
+
+# Sigmoid and it's derivative
+def nonlin(x, deriv=False):
+ if (deriv == True):
+ return x*(1-x)
+
+ return 1/(1+np.exp(-x))
+
+
+X = np.array([[0,0,1],
+ [0,1,1],
+ [1,0,1],
+ [1,1,1],
+ [1,1,1]])
+
+Y = np.array([[0],
+ [1],
+ [1],
+ [0],
+ [0]])
+
+# Static initial hidd. layer weights for testing
+wh = np.array([[-0.16258307, 0.43597283, -0.99471565, -0.39715906],
+ [-0.70551921, -0.81601352, -0.62549935, -0.30959772],
+ [-0.20477763, 0.07532473, -0.15920573, 0.3694664 ]])
+# Static initial output layer weights for testing
+wo = np.array([[-0.59572295],
+ [ 0.74949506],
+ [-0.95195878],
+ [ 0.33625405]])
+
+# Hidden layer's biases
+biasH = 2 * np.random.random((1, 4)) - 1 # ???
+# Output neuron's bias
+biasO = 2 * np.random.random((1, 1)) - 1 # ???
+# Static hidden layer's biases input
+biasInputH = np.array([[1, 1, 1, 1]]) # ???
+# Static output layer's bias input
+biasInputO = np.array([[1]]) # ???
+
+
+# Number of iterations to teach
+for j in xrange(60000):
+
+ # Feedforward
+ h = nonlin(np.dot(X, wh) + biasH)
+ o = nonlin(np.dot(h, wo) + biasO)
+
+ # Calculate partial derivatives & errors
+ o_error = Y - o
+
+ if (j % 10000) == 0:
+ print ""Error:"" + str(np.mean(np.abs(o_error)))
+
+ o_delta = o_error * nonlin(o, deriv=True)
+ o_biases = o_error * nonlin(biasO, deriv=True) # ???
+
+ h_error = o_delta.dot(wo.T)
+ h_delta = h_error * nonlin(h, deriv=True)
+ h_biases = h_error * nonlin(biasH, deriv=True) # ???
+
+ # Update weights and biases
+ wo += h.T.dot(o_delta)
+ wh += X.T.dot(h_delta)
+
+ # biasH += biasInputH.dot(h_delta) # ???
+ # biasO += biasInputO.dot(o_delta) # ???
+
+
+# Try new data
+data = np.array([1,0,0])
+
+print ""weights 0:"", wh
+print ""weights 1:"", wo
+print ""biases 0:"", biasH
+print ""biases 1:"", biasO
+print ""input: "", data
+
+h = nonlin(np.dot(data, wh))
+print ""hidden: "", h
+print ""output: "", nonlin(np.dot(h, wo))
+
+"
+['monte-carlo-tree-search']," Title: Formulating MCTS with random outcomes of actions?Body: I am working on implementing MCTS for a scheduling problem where MCTS is formulated each time there are multiple jobs that need to be scheduled. When a job is executed, the resulting state of the system is random. The challenge I'm having is that the implementation I'm currently using relies on the ability to determine if a node is fully expanded. However, there are so many children of the root node that it's not feasible to expect all of them will ever be visited. Is there a suggested method of conducting MCTS in cases where nodes will not likely ever be fully expanded?
+"
+"['neural-networks', 'optimization', 'unsupervised-learning']"," Title: Unsupervised learning to optimize a function of the inputBody: I am looking to build a neural network that takes an input vector $\mathbf{x}$ and outputs a vector $\mathbf{y}$ such at $f(\mathbf{x}, \mathbf{y})$ is minimized, where $f$ is some function. The network will see many different $\mathbf{x}$ during training to adjust its weights and biases; then I will test the network by using the test set $\{\mathbf{x}_1, \dots, \mathbf{x}_n \}$ to calculate $\sum(f(\mathbf{x}_1, \mathbf{y}), \dots, f(\mathbf{x}_n, \mathbf{y}))$ to see if this sum is minimized.
+However, I have no labels for the output $\mathbf{y}$. The loss function I am trying to minimize is based on the input and output, instead of the output and label.
+I tried many standard Keras and TensorFlow loss functions, but they are unable to do the job. Any thoughts on how this might be achieved?
+"
+"['convolutional-neural-networks', 'pooling', 'max-pooling']"," Title: How can max-pooling be applied to find features in words?Body: I'm reading about max-pooling in a dynamic CNN paper. I can see how it can help find features in images, given that the pixel with the highest density gets pooled, but how does it help to find features in words?
+"
+"['probability-distribution', 'bayesian-probability']"," Title: How does maximum approximation of the posterior choose a distribution?Body: I was learning about the maximum a posteriori probability (MAP) estimation for machine learning and I found a nice short video that essentially explained it as finding a distribution and tweaking the parameters to fit the observed data in a way that makes the observations most likely (makes sense).
+
+However, in mathematical terms, how does it determine which distribution best fits the data?
+
+There are so many distributions out there that it could be any of them, and the parameter values you could fit to them are infinitely many.
+"
+"['convolutional-neural-networks', 'autoencoders', 'variational-autoencoder']"," Title: Concrete example of latent variables and observables plugged into the Bayes' ruleBody: In the context of the variational auto-encoder, can someone give me a concrete example of the application of the Bayes' rule
+
+$$p_{\theta}(z|x)=\frac{p_{\theta}(x|z)p(z)}{p(x)}$$
+
+for a given latent variable and observable?
+
+I understand with VAE's we're essentially getting an approximation to $p_{\theta}(z|x)$ that models the distribution that we think approximates the latent variables, but I need a concrete example to really understand it.
+"
+"['neural-networks', 'generative-adversarial-networks', 'randomness']"," Title: Why AI is (or not) a good option for the generation of random numbers?Body: Why AI is (or not) a good option for the generation of random numbers? Would GANs be suited for this purpose?
+"
+"['models', 'regression']"," Title: What is a good model for regression problem with binary features and small data?Body: I am trying to predict the solution time for riddles in which matchsticks are combined into digits and operators. An example of a matchstick riddle is 4-2=8. The solution for this riddle would be obtained by moving one matchstick from the ‘8’ to the ‘-’ sign, resulting in the correct equation 4+2=6. The data consists of 100 riddles and the corresponding solution times. The two types of features that are available for each riddle are:
+
+
+- a 23 dimensional binary vector that indicates which of the available positions are filled with matches
+or
+- a 12-dimensional integer vector that counts the appearance of each token (10 digits, 2 operators)
+
+
+Although neural nets are very popular today, I am not sure that a neural net is the best choice for this particular problem: firstly, because the data set is very small, and secondly, because of the binary inputs. What might be a more effective model for this problem?
+"
+"['deep-learning', 'backpropagation', 'object-detection']"," Title: yolo output and how to define labels for backpropogation on itBody: I want to build the yolo architecture in keras but can't understand the basic idea behind the training of the yolo, like how to define the labels for whether there is no object there what we have to do. Do we have to take the boundary box as 0 or not include the block only for that part? It's quite confusing.
+"
+"['machine-learning', 'objective-functions', 'gradient-boosting']"," Title: How would the ""best function"" been constructed if there are no computationally limitations?Body: I am reading the Wikipedia article on gradient boosting. There is written:
+
+
+ Unfortunately, choosing the best function $h$ at each step for an arbitrary loss function $L$ is a computationally infeasible optimization problem in general. Therefore, we restrict our approach to a simplified version of the problem.
+
+
+How would the ""best function"" been constructed if there are no computationally limitations?
+
+Because the wiki gives me the idea that the model could been better, but that compromises have been made.
+"
+"['machine-learning', 'deep-learning', 'classification', 'keras', 'deep-neural-networks']"," Title: Techniques and semantics in better training of deep learning modelsBody: I'm relatively new to Deep Learning, and trying various models and datasets using Keras
. I'm starting to love it!
+
+Throughout my experiments, I have come across some semantic questions, and I don't know how they can affect the overall accuracy of my trained model. My target application is fire detection in videos (fire vs non-fire). So I'm trying to get tips and tricks from those well experienced in Deep Learning, and here are my semantic questions:
+
+
+- Given that I have to do detection on videos, I've been mostly adding actual frames of videos to my dataset, and fewer photos. Does adding photos from Google ever help (as it enlarges our dataset), or are they actually more like noise and should be removed?
+- I've trained a deep model (ResNet50) as well as a shallow 5-layer model. I realized the ResNet50 model is more sensitive and has a high recall (all fires are definitely detected), but it has false positives as well (strong sources of light like sunlight or lamps are identified as fire). The shallower model is 10x faster, but it can miss fires if the fire is smaller in the image, so it's less sensitive; on the other hand, it has fewer false positives. Is this always the case? What are techniques and tips to fix these issues in each of these models?
+
+
+For instance, the shallow model doesn't see this fire. Should I conclude that it's not complex enough to work well when the scene has many objects inside?
+
+
+
+
+- The sample code I saw resizes photos to 256x256 for training. What's the effect of bigger sizes vs smaller ones, say 300x300? Can I expect that, while bigger sizes increase computation time, they provide higher accuracy?
+- The sample code also converts photos to grayscale and uses antialiasing before passing them in. Does that have good effects? What if I pass the colored version, since fire is mostly about colors?
+- When I see the model is doing badly on certain scenes (say those with sunlight or lamps), I take multiple of those frames and add them to my non-fire dataset. Does this have any positive effect, so that those cases get taken care of? And is it better to add multiple successive frames, or is just one frame enough?
+- My fire dataset has 1800 images and my non-fire dataset has 4500 images. As a rule of thumb, the bigger each class, the better? Of course the non-fire data should be bigger, but we cannot add everything on earth as non-fire, so what should the distribution of the sizes be?
+
+"
+['machine-learning']," Title: Troubleshooting Binary ClassifierBody: I trained a binary classifier using ML.NET's AutoML feature on a small dataset (compared to other, similar models I've trained that seem to work well)-around 500 rows with around 50 features. AutoML used cross-validation with 5 folds.
+
+The training data is balanced to about 200 positive cases to 300 negative cases, which isn't an unreasonable representation of the real world based on domain knowledge.
+
+The model's metrics are poor compared to other, similar models, e.g.:
+
+
+- Accuracy: 0.64
+- Positive Precision: 0.375
+- Positive Recall: 0.09
+- Negative Precision: 0.67
+- Negative Recall: 0.92
+- F1 Score: 0.15
+
+
+When the model is run against unseen data, it predicts the negative case 99% of the time.
+
+If the accuracy were truly as stated in the metric, a correct classification 2/3 of the time would have some practical value in this application. However, the actual predictions of the negative case 99% of the time are surely flawed.
+
+Is the training set too small to expect reasonable results? Is there anything I can do to improve the model?
+"
+"['reinforcement-learning', 'dqn']"," Title: What could be the cause of the drop of the total reward when using DQN to solve the cart-pole environment?Body: I'm trying to use DQN to solve the cart-pole environment. I have 2 networks (target and behavior). Both of them have 3 hidden layers with 24 neurons, using the ReLU activation. The loss is MSE and the optimizer is Adam. I copy the weights of the behavior network to the target network every 15 backpropagation steps.
+
+My agent learns. Below you can see the total reward and running average plots.
+
+
+
+However, it has a lot of ""drops"". Usually, after a couple of perfect sequences, it just ""kills"" the running average with a couple of very short episodes. What may be the reason for this behavior?
+"
+"['reinforcement-learning', 'proofs', 'papers', 'trust-region-policy-optimization']"," Title: In lemma 1 of the TRPO paper, why isn't the expectation over $s'∼P(s'|s,a)$?Body: In the Trust Region Policy Optimization paper, in Lemma 1 of Appendix A, I didn't quite understand the transition from (21) from (20). In going from (20) to (21), $A^\pi(s_t, a_t)$ is substituted with its value. The value of $A^\pi(s_t, a_t)$ is given as $\mathbb{E}_{s'∼P(s'|s,a)}[r(s) + \gamma V_\pi(s') − V_\pi(s)]$ at the very beginning of the proof. But when $A^\pi(s_t, a_t)$ gets substituted, I don't see the expectation (over $s'∼P(s'|s,a)$) appearing anywhere. It will be of great help if somebody lends some light on this.
+"
+"['knowledge-representation', 'ontology']"," Title: How would we define a set that contains itself within a knowledge ontology?Body: How would we define a set that contains itself within a knowledge ontology?
+
+I am thinking that set membership would probably inherit from a generic base class of total containment from which both physical containment and conceptual containment are derived.
+
+
+- total containment
+
+
+- physical containment
+- conceptual containment
+
+
+
+
+"
+"['game-ai', 'monte-carlo-tree-search']"," Title: How could an AI detect whether an enemy in a game can be blocked off/trapped?Body: Imagine a game played on a 10x10 grid system where a player can move up down left or right and imagine there are two players on this grid: An enemy and you. In this game, there are walls on the grid which you can't go through. The objective of this game is to block the enemy in so he can't move around the rest of the board and is effectively ""trapped"".
+
+I want to write an algorithm that detects which nodes on the board I, as a player, need to put blocks in, in order to trap the enemy. There are also some other considerations to think about. You have to be able to place the blocks before the enemy can get out of the box. Also note one more thing: you can move AND place a block in the position that you're moving to at the same time.
+
+Here's a picture as an example of the game.
+
+
+
+EDIT: note that the board in the picture is 5x5, but that's okay for the purposes of the example
+
+In this example, I could go up, then right and place a block, then right and place a block, then up and place a block. If there's more than one way of blocking off the enemy, then I should use the way that's going to give my enemy the least amount of space.
+
+Researching on Google didn't find me anything relevant, although that may have been because I wasn't using relevant search terms. I also thought about using a Monte Carlo tree search algorithm for simultaneous games, but I would need to research that more.
+"
+"['machine-learning', 'convolutional-neural-networks', 'object-detection']"," Title: Should I train different models for detecting subsets of objects?Body: Suppose we have $1000$ products that we want to detect. For each of these products, we have $500$ training images/annotations. Thus we have $500,000$ training images/associated annotations. If we want to train a good object detection algorithm to recognize these objects (e.g. YOLO) would it be better to have multiple detection models? In other words, should we have 10 different YOLO models where each YOLO model is responsible for detecting 100 products? Or is it good enough to have one YOLO model that can detect all 1000 products? Which would be better in terms of mAP/recall/precision?
+"
+"['comparison', 'evolutionary-algorithms', 'game-theory']"," Title: What is the difference between evolutionary game theory and meta-heuristics?Body: Here is a list of meta-heuristic algorithms
+
+
+- Ant colony optimization,
+- Ant lion optimizer,
+- Artificial bee colony algorithm,
+- Bat algorithm,
+- Cat swarm optimization,
+- Crow search algorithm,
+- Cuckoo optimization algorithm,
+- Cuckoo search algorithm,
+- Differential evolution,
+- Firefly algorithm,
+- Genetic algorithm,
+- Glowworm swarm optimization,
+- Gravitational search algorithm,
+- Grey wolf optimizer,
+- Harmony search,
+- Multi-verse optimizer,
+- Particle swarm optimization,
+- Shuffled complex evolution,
+- Simulated annealing,
+- Tabu search,
+- Teaching-learning-based optimization
+
+
+Can anyone explain the similarities and dissimilarities of evolutionary game theory and the meta-heuristic approach?
+"
+"['machine-learning', 'python', 'applications']"," Title: Are there any public real-life code examples of ML applications in Python?Body: Problems I often face at work usually differ from tutorial or book-like examples so I end up with a code that works but it's not elegant and takes too much time to write.
+
+I wanted to ask whether there are some publicly accessible examples or repositories of Python code that deal with the machine learning development and application process but were created in a real company or organisation to develop their real-life products or services.
+
+EDIT: What I am not thinking of are library or package repositories such as TensorFlow. I would like to see the code of projects that, for example, use TensorFlow to create some other product or service.
+"
+"['datasets', 'object-recognition', 'object-detection', 'transfer-learning', 'data-preprocessing']"," Title: learning object recognition of primitive shapes through transfer learning problemBody: Question on transfer learning object classification (MobileNet_v2 with 75% number of parameters) with my own synthetic data:
+
+I made my own dataset of three shapes: triangles, rectangles and spheres. Each category has 460 samples with different sizes, dimensions, and different wobbles at the edges. They look like this:
+
+
+
+
+
+I want the network to classify these primitive shapes in other environments as well with different lighting/color conditions and image statistics.
+
+Even though I'm adding random crops, scaling, and brightness changes, at training step 10 it's already at 100% training and validation accuracy. Cross-entropy keeps going down though. I'm using TensorFlow Hub. The performance of the network in the end could be better in other environments (a virtual 3D space with such shapes). I also trained and tested for ~50 steps to see if the network is overfitting, but that doesn't work too well.
+
+What alterations would you recommend to generalize better? Or shouldn't I train on synthetic data at all to learn primitive shapes? If so, any dataset recommendations?
+
+Thanks in advance
+"
+"['reinforcement-learning', 'online-learning', 'normalisation', 'thompson-sampling', 'normal-distribution']"," Title: Normalizing Normal Distributions in Thompson Sampling for online Reinforcement LearningBody: In my implementation of Thompson Sampling (TS) for online Reinforcement Learning, my distribution for selecting $a$ is $\mathcal{N}(Q(s, a), \frac{1}{C(s,a)+1})$, where $C(s,a)$ is the number of times $a$ has been picked in $s$.
+However, I found that this does not work well in some cases, depending on the magnitude of $Q(s,a)$. For example, if $Q(s_i,a_1) = 100$ and $C(s_i,a_1) = 1$, then this gives a standard deviation of 0.5, which is extremely confident even though the action has only been picked once. Compare that to $a_2$, which may be the optimal action but has never been picked, so $Q(s_i, a_2) = 0$ and $C(s_i,a_2) = 0$. It is unlikely that TS will ever pick $a_2$.
+So, how do I solve this problem?
+I tried normalizing the Q-values such that they range from 0 to 1, but the algorithm returns much lower total returns. I think I have to adapt the magnitude of the standard deviations relative to the Q-values as well. Doing it for 1 normal distribution is pretty straightforward, but I can't figure out how to do it for multiple distributions, which have to take the other distributions into consideration.
+Edit: Counts should be $C(s,a)$ instead of $C(s)$ as Neil pointed out
+"
+"['neural-networks', 'deep-learning', 'backpropagation', 'weights']"," Title: How does the neural-network know how to tweak weights for a specific neuron?Body: I know backpropagation uses cost and gradient descent to tweak the weights to minimize the cost. But how does it know which weights to give more weight to in the first place? Is there something inside each neuron in the hidden layers that defines how this is an important neuron for the correct result in some way? How does the network know how to tweak those weights for that specific neuron?
+"
+"['machine-learning', 'bayesian-networks', 'naive-bayes', 'bayesian-statistics']"," Title: Why do Bayesian algorithms work well with small datasets?Body: I read very often that Bayesian algorithms work well on small datasets. Why is that? I think it is because they might generalize more, but why is that?
+
+See also Investigating the use of Bayesian networks for small dataset problems.
+"
+"['tensorflow', 'keras']"," Title: Multi label Classification using KerasBody: I am trying to build a Multi label classification model, having dataset with different input numerical values and specific label...
+
+Eg:
+
+Value Label
+
+35 X
+
+35.8 X
+
+29 Y
+
+29.8 Y
+
+39 AA
+
+41 CB
+
+So, depending on the input numerical value, the model should specify its label. Please note that the input values won't necessarily exactly match the dataset values; e.g. the dataset has 35 and 35.8 as input values with X as the label, so if the model gets 35.4 as input, X should be the output label. The bottom line is that the output label is based on a range of input values instead of a fixed one.
+
+Can anyone help me with a quick solution? (An example Jupyter notebook would be highly appreciated.)
+"
+"['machine-learning', 'decision-trees', 'feature-selection']"," Title: How does the decision tree implicitly do feature selection?Body: I was talking with an ex-fellow worker and he told me that the decision tree implicitly applies a feature selection. He told me that the most important feature is higher in the tree because of the usage of information gain criteria.
+
+What does he mean with this and how does this work?
+"
+"['machine-learning', 'convolutional-neural-networks', 'datasets', 'object-detection']"," Title: Are there any easy ways to create annotated training images for object detection?Body: For the purposes of object detection, are there any easy ways to create annotated training images? For example, if we have $10,000$ images and want to draw bounding boxes on 2 objects for each image, do we have to physically draw those boxes? Is that what most people do these days to create training data?
+"
+"['search', 'a-star', 'norvig-russell', 'uniform-cost-search']"," Title: A* and uniform-cost search are apparently incompleteBody: Consider the following diagram of a graph representing a search space.
+
+
+
+If we start at $B$ and try to reach goal state $E$, the lowest-cost first search (LCFS) (aka uniform-cost search) algorithm fails to find a solution. This is because, $B$ selects $A$ over $C$ to expand as $f(A)=g(A)=36 < f(C)=g(C)=70$. $f(n)$ is the cost function of node $n$, and $g(n)$ is the cost of reaching node $n$ from the start state. Continuing further, from $A$, LCFS will now select $B$ to expand, which in turn will select $A$ again over $C$. This leads to an infinite loop. This shows LCFS is incomplete (not guaranteed to find a solution, if one exists).
+
+For A*, we define $f(n)=g(n)+h(n)$, where $h(n)$ is the expected cost of reaching goal state from node $n$. If we define Manhattan distance ($L_0$ norm) for $h(\cdot)$, books (such as Artificial Intelligence: A Modern Approach (3rd Ed) by Stuart Russell and Peter Norvig) says A* is bound to find the solution (since it exists). However, I couldn't find how. Using, A*, $B$ will still select $A$ since $f(A)=36+(h(A)=40)=76 < f(C)=70+(h(C)=30+50)=150$. You see, this means, when $A$ expands back $B$, $B$ will again select $A$, and an infinite loop ensues.
+
+What am I missing here?
+"
+"['classification', 'decision-trees', 'time-series']"," Title: Feature extraction timeseries, model compatibilityBody: I've got a timeseries with sensor data (e.g. accelerometer and gyroscope). I now want to extract the activity out of it (e.g. walking, standing, driving, ...). I Followed this Jupyter Notebook. But there are some issues left.
+
+
+- Why do they only pick 500 rows?
+- What's the point of re-arranging the rows/columns?
+- When they build their decision tree learner with the train data, they build it upon extracted features. But how can we then use this tree for new sensor data? Should we extract the features of the new data and pass them as input to the tree? But new sensor data might not have as many features as the train data, e.g. (ValueError: Number of features of the model must match the input. Model n_features is 321 and input n_features is 312)
+
+"
+"['search', 'performance', 'branching-factors', 'heuristic-functions', 'informed-search']"," Title: Why is the effective branching factor used for measuring performance of a heuristic function?Body: For search algorithms with heuristic functions, the performance of heuristic functions are measured by the effective branching factor ${b^*}$, which involves the total number of nodes expanded ${N}$ and the depth of the solution ${d}$.
+I'm not able to find out how different values of ${d}$ affect the performance keeping the same ${N}$. Put another way, why not use just the ${N}$ as the performance measure instead of ${b^*}$?
+"
+"['machine-learning', 'terminology']"," Title: How can an AI train itself if no one is telling it if its answer is correct or wrong?Body: I am a programmer but not in the field of AI. A question constantly confuses me is that how can an AI be trained if we human beings are not telling it its calculation is correct?
+
+For example, news articles usually say something like ""company A has a large human face database so that it can train its facial recognition program more efficiently"". What the news doesn't mention is whether a human engineer needs to tell the AI program whether each of the program's recognition results is accurate or not.
+
+Are there any engineers who are constantly telling an AI whether what it produced is correct or wrong? If not, how can an AI determine whether the result it produces is correct or wrong?
+"
+"['research', 'reference-request', 'generalization']"," Title: References on generalization theory and mathematical abstraction of ML conceptsBody: I'd like to learn about generalization theory for machine learning algorithms. I'm looking for books and other references (in case books aren't available) that provide a gentle introduction to the field for a relative beginner like me.
+
+My background includes exposure to mostly undergrad mathematics and I have enough mathematical maturity to learn graduate-level topics as well.
+
+To be more specific, I'm looking to understand more about mathematical abstraction of ML concepts (e.g. learning algorithm, hypothesis space, complexity of algorithm/hypothesis etc.), the purpose of an ML algorithm as an expected risk minimization exercise, techniques used to get bounds on generalization and so on.
+
+To be even more specific, I'm looking to familiarize myself with concepts, theory and techniques so that I can understand papers (at least on a basic level) like:
+
+
+
+and references therein
+"
+"['comparison', 'terminology', 'computational-learning-theory', 'learning-algorithms', 'hypothesis']"," Title: What is the difference between a learning algorithm and a hypothesis?Body: What's the distinction between a learning algorithm $A$ and a hypothesis $f$?
+I'm looking for a few concrete examples, if possible.
+For example, would the decision tree and random forest be considered two different learning algorithms? Would a shallow neural network (that ends up learning a linear function) and a linear regression model, both of which use gradient descent to learn parameters, be considered different learning algorithms?
+Anyway, from what I understand, one way to vary the hypothesis $f$ would be to change the parameter values, maybe even the hyper-parameter values of, say, a decision tree. Are there other ways of varying $f$? And how can we vary $A$?
+"
+"['neural-networks', 'machine-learning', 'tensorflow', 'backpropagation', 'feedforward-neural-networks']"," Title: Is this TensorFlow implementation of partial derivative of the cost with respect to the bias correct?Body: I have a neural network for MNIST classification which I am hard coding using TensorFlow 2.0. The neural network has an input layer consisting of 784 neurons (28 * 28), one hidden layer having ""hidden_neurons"" number of neurons and an output layer having 10 neurons.
+
+The part of the code that I want to get checked is as follows:
+
+# Partial derivative of cost function wrt b2-
+dJ_db2 = (1 / m) * tf.reshape(tf.math.reduce_sum((A2 - Y), axis = 0), shape = (1, 10))
+
+# Partial derivative of cost function wrt b1-
+dJ_db1 = (1 / m) * tf.reshape(tf.math.reduce_sum(tf.transpose(tf.math.multiply(tf.matmul(W2, tf.transpose((A2 - Y))), relu_derivative(A1))), axis = 0), shape = (1, hidden_neurons))
+
+
+The notation is as follows.
+
+
+- ""b1"" - bias for hidden layer and has the shape (1, hidden_neurons"")
+- ""b2"" - bias for output layer having the shape (1, 10).
+- ""A2"" - is the output of output layer and have the shape (m, c)
+- ""Y"" - is one-hot encoded target and have the shape (m, c)
+- 'm' - is number of training examples
+- 'c' - is number of classes
+- ""A1"" - is the output of hidden layer and has the shape (hidden_neurons, m)
+
+
+I have used multiclass cross-entropy cost function. Hidden layer uses ReLU activation function, while the output layer has softmax activation function.
+
+Are my two lines of code for the partial derivatives of the cost function w.r.t. ""b1"" and ""b2"" correct?
+"
+"['tensorflow', 'recurrent-neural-networks']"," Title: Do we have anything like accuracy and loss in RNN models?Body: I have a paper about trading which has been implemented with RNN on Tensorflow. We have about 2 years of data from trading. Here are some samples :
+
+Date, Open, High, Low, Last, Close, Total Trade Quantity, Turnover (Lacs)
+
+2004-08-25 , 1198.7, 1198.7, 979.0, 985.0, 987.95, 17116372.0, 172587.61
+
+2004-08-26 , 992.0, 997.0, 975.3, 976.85, 979.0, 5055400.0, 49828.65
+
+I need to predict the future of trading (for example, the last 10 days). So, how can I make sure that my model is working correctly? Do we have any ""accuracy"" or ""loss"" like what we have in Deep Learning?
+"
+"['neural-networks', 'machine-learning', 'ai-design', 'ai-safety', 'adversarial-ml']"," Title: What is the relationship between robustness and adversarial machine learning?Body: I have been reading a lot of articles on adversarial machine learning and there are mentions of ""best practices for robust machine learning"".
+
+A specific example of this would be when there are references to ""loss of efficient robust estimation in high dimensions of data"" in articles related to adversarial machine learning. Also, IBM has a Github repository named ""IBM's Adversarial Robustness Toolbox"".
+Additionally, there is a field of statistics called 'robust statistics' but there is no clear explanation anywhere about its relation to adversarial machine learning.
+
+I would therefore be grateful if someone could explain what robustness is in the context of Adversarial Machine Learning.
+"
+"['machine-learning', 'convolutional-neural-networks', 'long-short-term-memory', 'deep-neural-networks', 'speech-recognition']"," Title: What is the difference between Kaldi and DeepSpeech speech recognition systems in their approach?Body: I would like to know how do Kaldi and DeepSpeech speech recognition systems differ algorithmically? Which one would be more accurate for continuous speech in time?
+"
+"['reinforcement-learning', 'policy-gradients']"," Title: How to set the multiple continuous actions with constraintsBody: I want to build a Deep Reinforcement Learning Model for Asset allocation.
+Background:
+I have 7 stock indexes from different markets, and I want to build a policy that produces the action (like whether to sell or buy an index, which index, and how much) by observing market information.
+Question 1:
+I have two ideas for the output of my policy. The first is to produce a vector $w$ of length 8, where each element $w_i$ represents the target ratio of the assets we want to hold (7 stock indexes and 1 cash position), so I need to enforce $w_i > 0$ and $\sum_{i=1}^{8} w_i = 1.$ How should I implement this? I just set the activation function in the last layer of the neural network to be a sigmoid and divide by the sum in the environment. Is this a valid approach? Also, it's not easy to handle the transaction process when buy and sell fees exist, and the training process is slow when I use the policy gradient.
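+To be concrete about idea one, the output head I have in mind is roughly the minimal sketch below (the raw network outputs are hypothetical):
+
+import numpy as np
+
+def to_weights(logits):
+    # logits: raw outputs of the last layer, shape (8,)
+    positive = 1.0 / (1.0 + np.exp(-logits))  # sigmoid keeps every entry strictly positive
+    return positive / positive.sum()          # normalise so the weights sum to 1
+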
+The second is also to produce a vector $w$ of length 8, where each element $w_i$ represents the percentage of stock $i$ to sell when $w_i$ is negative, and the percentage of cash to spend buying stock $i$ when $w_i$ is positive. It can solve the problem I meet in idea one, but I then face a new problem: cash is finite, so I need to decide the order of buying, in other words, which stock to buy first and which ones later.
+Question 2:
+Many papers tell me to have the policy produce distribution parameters and then create the action by sampling from that distribution (e.g. a normal distribution). That makes it more difficult to ensure the action satisfies the constraints above.
+Also, won't the result be unstable if the action is produced by sampling?
+"
+"['image-recognition', 'facial-recognition']"," Title: How Euclidean distance algorithm calculate two different face images are match or not match in face recognition?Body: I am trying to make a face login application. face comparison algorithm is using Euclidean distance to calculate two different face images that are the same or not the same. can anyone help me with how the Euclidean distance algorithm is working?
+"
+"['reinforcement-learning', 'testing', 'algorithmic-trading']"," Title: How can we make sure how well the reinforcement learning agent works on a stock dataset?Body: I read a paper, which is about Deep Reinforcement Learning and it tries to use this method on stock data set. It has been shown that it reaches the maximum return (profit). It has been implemented in Tensorflow.
+My question is, how can we make sure that we achieve the maximum value? I mean, is there a parameter or value that can show us how well the RL agent did its job?
+"
+"['machine-learning', 'random-forests']"," Title: Oposite type of predictions for unbalanced datasetBody: I have a big dataset (28354359 rows) that has some blood values as features (11 features) and the label or outcome variable that tells whether a patient has a virus caused by a Neoplasm or not.
+
+The problem with my dataset is that 2% of the patients in my dataset have the virus and 98% do not have the virus.
+
+I am required to use the random forest algorithm. While my random forest model has a high accuracy score of 92%, the problem is that more than 90% of the patients that have the virus are predicted not to have the virus.
+
+I want the opposite effect: I want my random forest to be more likely to predict that a patient has the virus (even if the patient does not have the virus; ideally I don't want this side effect, but it is better than the opposite).
+
+The idea behind this is that performing an extra test (via an echo) would not harm a patient who does not have the virus, but not testing a patient who does have it would have terrible results for that patient.
+
+Does somebody have advice on how I could tweak my random forest model for this task?
+
+I myself experimented with the SMOTE transformation and other sampling techniques, but maybe you have other suggestions.
+
+I also have tried to apply a cutoff function.
+"
+"['reinforcement-learning', 'learning-algorithms', 'gym']"," Title: Unable to train Coach for Banana-v0 Gym environmentBody: I have just started playing with Reinforcement learning and starting from the basics I'm trying to figure out how to solve Banana Gym with coach.
+
+Essentially, the Banana-v0 env represents a banana shop that buys a banana for \$1 on day 1 and has 3 days to sell it for anywhere between \$0 and \$2, where a lower price means a higher chance to sell. The reward is the sell price less the buy price. If it doesn't sell on day 3, the banana is discarded and the reward is -1 (the banana buy price, with no sale proceeds). That's pretty simple.
+
+Ideally the algorithm should learn to set a high price on day 1 and reducing it every day if it didn't sell.
+
+To start, I took the coach-bundled CartPole_ClippedPPO.py and CartPole_DQN.py preset files and modified them to run the Banana-v0 gym.
+
+The trouble is that I don't see any learning progress regardless of what I try, even after running around 50,000 episodes. In comparison, the CartPole gym successfully trains in under 500 episodes.
+
+I would have expected some improvement after 50k episodes for such a simple task like Banana.
+
+
+
+Is it because the Banana-v0 rewards are not predictable? I.e. whether the banana sells or not is still determined by a random number (with success chance based on the price).
+
+Where should I take it from here? How do I identify which Coach agent algorithm I should start with and try to tune?
+"
+"['machine-learning', 'deep-learning', 'classification', 'training', 'keras']"," Title: How to change the architecture of my simple sequential modelBody: I'm new to Deep Learning with Keras
. With some tutorials online for cat vs non-cat classification, I was able to compile this simple architecture for my own classification problem. However, my target application is fire detection which essentially might have semantic differences with cats.
+
+After training, I realized this model is accurate when the fire scene is simple and clearly visible, but if there are many objects in the scene or the fire is a bit smaller, it fails to detect it. So I thought maybe I could change the architecture by increasing its complexity.
+
+The first thing that came to my mind was increasing the first layer's filters from 32
to 64
by changing to model.add(Conv2D(64, kernel_size = (3, 3), activation='relu', input_shape=(IMAGE_SIZE, IMAGE_SIZE, 1)))
+
+Is it going to help? What are other best practices for changing the architecture? How about increasing the kernel size to kernel_size = (5, 5)
 or adding one more layer, or even changing the images from grayscale to colored? (A sketch of the kind of changes I mean is included after the code below.)
+
+Here is my original training code:
+
+from keras.models import Sequential, load_model
+from keras.layers import Dense, Dropout, Flatten
+from keras.layers import Conv2D, MaxPooling2D
+from keras.layers.normalization import BatchNormalization
+from PIL import Image
+from random import shuffle, choice
+import numpy as np
+import os
+
+IMAGE_SIZE = 256
+IMAGE_DIRECTORY = './data/test_set'
+
+def label_img(name):
+ if name == 'cats': return np.array([1, 0])
+ elif name == 'notcats' : return np.array([0, 1])
+
+def load_data():
+ print(""Loading images..."")
+ train_data = []
+ directories = next(os.walk(IMAGE_DIRECTORY))[1]
+
+ for dirname in directories:
+ print(""Loading {0}"".format(dirname))
+ file_names = next(os.walk(os.path.join(IMAGE_DIRECTORY, dirname)))[2]
+
+ for i in range(200):
+ image_name = choice(file_names)
+ image_path = os.path.join(IMAGE_DIRECTORY, dirname, image_name)
+ label = label_img(dirname)
+ if ""DS_Store"" not in image_path:
+ img = Image.open(image_path)
+ img = img.convert('L')
+ img = img.resize((IMAGE_SIZE, IMAGE_SIZE), Image.ANTIALIAS)
+ train_data.append([np.array(img), label])
+
+ return train_data
+
+def create_model():
+ model = Sequential()
+ model.add(Conv2D(32, kernel_size = (3, 3), activation='relu',
+ input_shape=(IMAGE_SIZE, IMAGE_SIZE, 1)))
+ model.add(MaxPooling2D(pool_size=(2,2)))
+ model.add(BatchNormalization())
+ model.add(Conv2D(64, kernel_size=(3,3), activation='relu'))
+ model.add(MaxPooling2D(pool_size=(2,2)))
+ model.add(BatchNormalization())
+ model.add(Conv2D(128, kernel_size=(3,3), activation='relu'))
+ model.add(MaxPooling2D(pool_size=(2,2)))
+ model.add(BatchNormalization())
+ model.add(Conv2D(128, kernel_size=(3,3), activation='relu'))
+ model.add(MaxPooling2D(pool_size=(2,2)))
+ model.add(BatchNormalization())
+ model.add(Conv2D(64, kernel_size=(3,3), activation='relu'))
+ model.add(MaxPooling2D(pool_size=(2,2)))
+ model.add(BatchNormalization())
+ model.add(Dropout(0.2))
+ model.add(Flatten())
+ model.add(Dense(256, activation='relu'))
+ model.add(Dropout(0.2))
+ model.add(Dense(64, activation='relu'))
+ model.add(Dense(2, activation = 'softmax'))
+
+ return model
+
+training_data = load_data()
+training_images = np.array([i[0] for i in training_data]).reshape(-1, IMAGE_SIZE, IMAGE_SIZE, 1)
+training_labels = np.array([i[1] for i in training_data])
+
+print('creating model')
+model = create_model()
+model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
+print('training model')
+model.fit(training_images, training_labels, batch_size=50, epochs=10, verbose=1)
+model.save(""model.h5"")
+
+"
+"['neural-networks', 'long-short-term-memory']"," Title: Why do regression LSTMs learn high to low inputs significantly better than low to high?Body: The specific problem I have is learning the relation $x^2$. I have an array of 0 through 19 (input values) and a target array of 0, 1, 4, 9, 16, 25, 36 and so on all the way up to $19^2$=361.
+
+I have the following LSTM architecture:
+
+1 input node
+1 output node
+32 hidden units
+
+
+Now, interestingly, I accidentally trained my network wrong, in that I forgot to reverse the expected output list when training. So I trained the network for:
+$$0 \rightarrow 361 \\ 1 \rightarrow 324 \\ 2 \rightarrow 289 \\ 3 \rightarrow 256 \\ ... \\ 17 \rightarrow 4 \\ 18 \rightarrow 1 \\ 19 \rightarrow 0$$
+
+Starting with a learning rate of 1 and halving it every 400 epochs, after 3000 epochs my error (which started somewhere in the millions) was 0.2.
+
+However, when I went to correct this mistake, the error would hardly ever go beneath 100,000. Testing the network shows it does well on the low inputs, but from $16^2$ onwards it really struggles to increase the output values past ~250.
+
+I was just wondering if anyone has an explanation for this, as to why the LSTM struggles to learn to increase exponentially but seems to be able to learn to decrease exponentially just fine.
+
+EDIT WITH CODE:
+
+a = np.array([i for i in range(20)])
+b = np.array([i**2 for i in range(20)])
+np.random.seed(5)
+ls = LSTM(1, 1, 32, learning_rate=1, regression=1)
+# Input size = 1, output size = 1, hidden units = 32
+if 1:
+ for k in range(3000):
+ total = 0
+ for i in range(20):
+ ls * a[i]
+ for i in range(20):
+ total += ls / b[i]
+ if k % 400 == 0:
+ ls.learning_rate *= 0.5
+ print(k, "":"", total)
+ ls.update_params()
+for i in a:
+ print(i, ls*i)
+#ls.save_state('1.trainstate')
+for i in range(20,30):
+ print(ls*i)
+
+
+Note: this code uses a class I wrote using numpy. If wanted, I'll include that code as well; it's just that it is ~300 lines and I don't expect anyone to go through all of that.
+"
+"['objective-functions', 'support-vector-machine']"," Title: A generalized quadratic loss and Newton iteration for Support Vector Regression, why doesn't it generalize well?Body: I'm comparing the results of an Newton optimizer for a modified version of SVM ( a generalized quadratic loss, similar to the one stated in:
+
+A generalized quadratic loss for SVM
+
+) with classic SVM^light for regression. The problem is that it's able to overfit the data (UCI Yacht data-set), but I can't reach the generalization results of SVM^light. I've tried several hyper-parameter grids. I'm solving the primal problem. I'll send you my code if you need it. Any suggestions?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'tensorflow', 'reference-request']"," Title: What are examples of models for traffic sign detection that can be easily implemented?Body: I'm working on a college project about traffic sign detection and I have to choose a paper to implement it, but I have basic knowledge of TensorFlow and I'm afraid of choosing a paper that I can't implement it.
+
+What are examples of models for traffic sign detection that can be easily implemented?
+"
+"['neural-networks', 'deep-learning', 'recurrent-neural-networks', 'neural-turing-machine']"," Title: What is a location-based addressing in a neural Turing machine?Body: In the neural Turing machine (NTM), the content-based addressing and location-based addressing is used for memory addressing. Content-based addressing is similar to the attention-based model, weighting each row of memory which shows the importance of each row of memory (or each location of memory). Then for location-based addressing, by using shift kernel, the attention focus is moved left or right or remains unchanged.
+
+What is location-based addressing? Why was location-based addressing used? What is the concept of ""for location-based addressing, by using shift kernel, the attention focus is moved left or right or remains unchanged.""?
+What is the difference between content-based addressing and location-based addressing?
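+
+For reference, my current reading of the location-based step in the paper is the following sequence (interpolation with the previous weighting, a circular-convolution shift, then sharpening); please correct me if I have misread it:
+$$w^g_t = g_t\, w^c_t + (1-g_t)\, w_{t-1}$$
+$$\tilde{w}_t(i) = \sum_{j=0}^{N-1} w^g_t(j)\, s_t(i-j)$$
+$$w_t(i) = \frac{\tilde{w}_t(i)^{\gamma_t}}{\sum_j \tilde{w}_t(j)^{\gamma_t}}$$
+where $s_t$ is the shift kernel and the indices are taken modulo $N$ (the number of memory locations).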
+"
+"['machine-learning', 'classification']"," Title: Can I perform multiclass classification when the number of features is less than the number of targets?Body: Is it possible to perform multiclass classification on data where the number of features is less than the number of target variables? Do you have any suggestions on how to address a problem where I have 2000 target variables?
+"
+"['convolutional-neural-networks', 'object-detection']"," Title: Multicamera Tracking vs Single Fisheye CameraBody: Suppose you want to detect objects and also track objects and people. Is it better to train a model using a single fisheye camera or using multiple cameras that mimic the view of the fisheye camera?
+
+Also, what can be done to remove objects that are washed out? Like for very small objects, how do you make them more visible? Would multicamera tracking be better in this scenario?
+"
+"['reinforcement-learning', 'markov-decision-process', 'multi-armed-bandits', 'contextual-bandits']"," Title: How do I determine the optimal policy in a bandit problem with missing contexts?Body: Suppose I learn an optimal policy $\pi(a|c)$ for a contextual multi-armed bandit problem, where the context $c$ is a composite of multiple context variables $c = c_1, c_2, c_3$. For example, the context is specified by three Bernoulli variables.
+
+Is there any literature on how to determine the optimal policy in the event where I no longer have access to one of the context variables?
+"
+"['software-evaluation', 'artificial-life', 'self-replicating-machines']"," Title: Why is the status of artificial life software so under-developed?Body: I'm interested in self replicating artificial life (with many agents), so after reviewing the literature with the excellent Kinematic Self-Replicating Machines I started looking for software implementations. I understand that the field is still in the early stages and mainly academic, but the status of artificial life software looks rather poor in 2019.
+
+On wikipedia there is this list of software simulators.
+Going through the list, only ApeSDK, Avida, DigiHive, DOSE and Polyword have been updated in 2019. I did not find a public repo for Biogenesis. ApeSDK, DigiHive and DOSE are single-author programs.
+
+All in all, I don't see a single very active project with a large community around it (I would be happy to have missed something). And this is all the more surprising considering the big momentum of AI and the proliferation of ready-to-use AI tools and libraries.
+
+Why is the status of artificial life software so under-developed, when this field looks promising both from a commercial (see manufacturing, mining or space exploration applications) and an academic (ecology, biology, the human brain and more) perspective? Did the field underdeliver on expectations in past years and receive less funding? Did the field hit a theoretical or computational roadblock?
+"
+"['neural-networks', 'machine-learning', 'objective-functions', 'learning-algorithms', 'cross-validation']"," Title: How do you interpret this learning curve?Body: Loss is MSE; orange is validation loss, blue training loss. The task is NN regression (18 inputs, 2 outputs), one layer 300 hidden units.
+
+
+Tuning the learning rate, momentum and L2 regularization parameters, this is the best validation loss I can obtain. Can this be considered overfitting? Is 1 a bad validation loss value for a regression task?
+"
+"['python', 'resource-request', 'geometric-deep-learning', 'graphs', 'graph-neural-networks']"," Title: Is there an open-source implementation for graph convolution networks for weighted graphs?Body: Currently, I'm using a Python library, StellarGraph, to implement GCN. And I now have a situation where I have graphs with weighted edges. Unfortunately, StellarGraph doesn't support those graphs
+
+I'm looking for an open-source implementation for graph convolution networks for weighted graphs. I've searched a lot, but mostly they assumed unweighted graphs. Is there an open-source implementation for GCNs for weighted graphs?
+"
+"['comparison', 'genetic-algorithms', 'evolutionary-algorithms', 'game-theory']"," Title: What is the difference between genetic algorithms and evolutionary game theory algorithms?Body: What is the difference between genetic algorithms and evolutionary game theory algorithms?
+"
+"['neural-networks', 'backpropagation', 'artificial-neuron', 'sigmoid']"," Title: How can I train a neural network for another input set, without losing the learning of the previous input set?Body: I read this tutorial about backpropagation.
+
+So, using this backpropagation, we train the neural network repeatedly on one input set, say [2, 4], until we reach 100% accuracy of getting 1 as output, and the neural network adjusts its weight values accordingly. Once the neural network is trained this way, suppose we then give it another input set, say [6, 8]; the neural network will update its weight values again (overwriting the previous values), right? Won't this result in losing the previous learning?
+"
+"['machine-learning', 'reinforcement-learning', 'robotics', 'supervised-learning']"," Title: Can supervised learning be used to solve the inverted pendulum problem?Body: I know that reinforcement learning has been used to solve the inverted pendulum problem.
+
+Can supervised learning be used to solve the inverted pendulum problem?
+
+For example, there could be an interface (e.g. a joystick) with the cart-pole system, which the human can use to balance the pole and, at the same time, collect a dataset for supervised learning. Has this been done before?
+"
+"['neural-networks', 'natural-language-processing', 'word-embedding']"," Title: How can I feed any word into a neural network?Body: I am working on an Intent detection problem for a chatbot in Java.
+So I need to convert words from String to a double[] format.
+I tried using wordToVec(deeplearning4j), but it does not return a vector for words not present in the training data.
+
+e.g. My dataset for wordToVec.train() does not contain the word ""morning"". So wordToVec.getWordVector(""morning"") returns a null value.
+
+There is no need to find the correlation between two words (like in word2vec), but it should be able to give me some sort of vector representation for any word.
+
+Here are some things I thought of-
+
+
+- I could use a fixed-length hash function and convert the resultant hash into a vector. (Will hash collisions be frequent enough to be an issue in this case?)
+- I could initialize, for each word, a long zero vector and set its elements to the ASCII value minus 64.
+e.g. keeping the maximum vector length as 10, AND would be represented as
+[1,14,4,0,0,0,0,0,0,0], and then normalize this.
+Is there a better solution to this problem?
+
+
+Here is the code I used to train the model-
+
+
+
+public static void trainModel() throws IOException
+ {
+ //These lines simply generate the dataset into a format readable by wordToVec
+ utilities.GenRawSentences.genRaw();
+
+ dataLocalPath = ""./TrainingData/"";
+ String filePath = new File(dataLocalPath, ""raw_sentences.txt"").getAbsolutePath();
+ //Data Generation ends
+
+ SentenceIterator iter = new BasicLineIterator(filePath);
+ TokenizerFactory t = new DefaultTokenizerFactory();
+ t.setTokenPreProcessor(new CommonPreprocessor());
+
+ VocabCache<VocabWord> cache = new AbstractCache<>();
+ WeightLookupTable<VocabWord> table = new InMemoryLookupTable.Builder<VocabWord>()
+ .vectorLength(64)
+ .useAdaGrad(false)
+ .cache(cache).build();
+
+ Word2Vec vec = new Word2Vec.Builder()
+ .minWordFrequency(1)
+ .iterations(5)
+ .epochs(1)
+ .layerSize(64)
+ .seed(42)
+ .windowSize(5)
+ .iterate(iter)
+ .tokenizerFactory(t)
+ .lookupTable(table)
+ .vocabCache(cache)
+ .build();
+
+ vec.fit();
+
+ //Saves the model for use in other programs
+ WordVectorSerializer.writeWord2VecModel(vec, ""./Models/WordToVecModel.txt"");
+ }
+
+"
+"['philosophy', 'agi', 'meta-learning']"," Title: How important is learning to learn for the development of AGI?Body: Some people say that abstract thinking, intuition, common sense, and understanding cause and effect are important to make AGI.
+
+How important is learning to learn for the development of AGI?
+"
+"['deep-learning', 'reinforcement-learning', 'python', 'q-learning']"," Title: How would one develop an action space for a game that is proprietary?Body: I'm currently trying to develop an RL that will teach itself to play the popular fighting game ""Tekken 7"". I initially had the idea of teaching it to play generally- against actual opponents with various levels of difficulty- but the idea proved to be rather complex. I've liquefied the goal down to ""get a non-active standing opponent to 0 health as fast as possible"".
+
+I have some experience with premade OpenAI environments, and tried making my own environment for this specific purpose, but this proved to be rather difficult as there was no user-friendly documentation.
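+
+To be concrete about what I mean by an action space for a proprietary game: the rough idea I have is just an enumeration of button inputs (purely illustrative, the names below are my own placeholders, not anything from an existing Tekken API):
+
+# Purely illustrative: map each discrete action index to a button (or combination)
+ACTIONS = ['left', 'right', 'up', 'down', 'square', 'triangle', 'cross', 'circle',
+           'square+triangle', 'cross+circle']
+n_actions = len(ACTIONS)  # this would be passed to the agent as n_actions
+
+Is a flat enumeration like this the usual way to define the action space when the game itself cannot be modified?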
+
+Below is a DQN that was coded along with the help of a YouTube tutorial
+
+
+
+import numpy as np
+from tensorflow.keras.layers import Dense, Activation
+from tensorflow.keras.models import Sequential, load_model
+from tensorflow.keras.optimizers import Adam
+
+
+class ReplayBuffer(object):
+ def __init__(self, max_size, input_shape, n_actions, discrete=False):
+ self.mem_size = max_size  # maximum number of transitions kept in memory
+ self.mem_cntr = 0  # counter of how many transitions have been stored
+ self.discrete = discrete  # whether the action space is discrete
+ self.state_memory = np.zeros((self.mem_size, input_shape))
+ self.new_state_memory = np.zeros((self.mem_size, input_shape))
+ dtype = np.int8 if self.discrete else np.float32
+ self.action_memory = np.zeros((self.mem_size, n_actions))
+ self.reward_memory = np.zeros(self.mem_size)
+ self.terminal_memory = np.zeros(self.mem_size, dtype = np.float32)
+
+
+ def store_transition(self, state, action, reward, state_, done):
+ index = self.mem_cntr % self.mem_size
+ self.state_memory[index] = state
+ self.new_state_memory[index] = state_
+ self.reward_memory[index] = reward
+ self.terminal_memory[index] = 1 - int(done)
+ if self.discrete:
+ actions = np.zeros(self.action_memory.shape[1])
+ actions[action] = 1.0  # one-hot encode the chosen discrete action
+ self.action_memory[index] = actions
+ else:
+ self.action_memory[index] = action
+ self.mem_cntr += 1
+
+
+ def sample_buffer(self, batch_size):
+ max_mem = min(self.mem_cntr, self.mem_size)
+ batch = np.random.choice(max_mem, batch_size)
+
+ states = self.state_memory[batch]
+ states_ = self.new_state_memory[batch]
+ rewards = self.reward_memory[batch]
+ actions = self.action_memory[batch]
+ terminal = self.terminal_memory[batch]
+
+ return states, actions, rewards, states_, terminal
+
+
+ def build_dqn(lr, n_actions, input_dims, fcl_dims, fc2_dims):
+ model = Sequential([
+ Dense (fcl_dims, input_shape = (input_dims, )),
+ Activation('relu'),
+ Dense(fc2_dims),
+ Activation('relu'),
+ Dense(n_actions)])
+
+ model.compile(optimizer = Adam(lr = lr), loss = 'mse')
+
+ return model
+
+ class Agent(object):
+ def __init__(self, alpha, gamma, n_actions, epsilon, batch_size,
+ input_dims, epsilon_dec=0.996, epsilon_end=0.01,
+ mem_size = 1000000, fname = 'dqn_model.h5'):
+ self.action_space = [i for i in range(n_actions)]
+ self.n_actions = n_actions
+ self.gamma = gamma
+ self.epsilon = epsilon
+ self.epsilon_dec = epsilon_dec
+ self.epsilon_min = epsilon_end
+ self.batch_size = batch_size
+ self.model_file = fname
+
+ self.memory = ReplayBuffer(mem_size, input_dims, n_actions,
+ discrete = True)
+ self.q_eval = build_dqn(alpha, n_actions, input_dims, 256, 256)
+
+ def remember(self, state, action, reward, new_state, done):
+ self.memory.store_transition(state, action, reward, new_state, done)
+
+ def choose_action(self, state):
+ state = state[np.newaxis, :]
+ rand = np.random.random()
+ if rand < self.epsilon:
+ action = np.random.choice(self.action_space)
+ else:
+ actions = self.q_eval.predict(state)
+ action = np.argmax(actions)
+
+ return action
+
+ def learn(self):#temporal difference learning, delta between steps \
+ #and learns from this
+ #
+ #using numpy.zero approach, only drawback \
+ #is that batch size of memory must be full before learning
+ if self.memory.mem_cntr < self.batch_size:
+ return
+ state, action, reward, new_state, done = \
+ self.memory.sample_buffer(self.batch_size)
+
+
+ action_values = np.array(self.action_space, dtype = np.int8)
+ action_indices = np.dot(action, action_values)
+
+ q_eval = self.q_eval.predict(state)
+ q_next = self.q_eval.predict(new_state)
+
+ q_target = q_eval.copy()
+
+ batch_index = np.arange(self.batch_size, dtype = np.int32)
+
+ q_target[batch_index, action_indices] = reward + \
+ self.gamma*np.max(q_next, axis=1)*done
+
+ _ = self.q_eval.fit(state, q_target, verbose=0)
+
+ self.epsilon = self.epsilon*self.epsilon_dec if self.epsilon > \
+ self.epsilon_min else self.epsilon_min
+
+ def save_model(self):
+ self.q_eval.save(self.model_file)
+
+ def load_model(self):
+ self.q_eval = load_model(self.model_file)
+
+"
+"['machine-learning', 'convolutional-neural-networks', 'comparison', 'long-short-term-memory', 'time-series']"," Title: Can non-sequential deep learning models outperform sequential models in time series forecasting?Body: Can a CNN (or other non-sequential deep learning models) outperform LSTM (or other sequential models) in time series data?
+
+I know this question is not very specific, but I experienced this when predicting daily store sales and I am curious as to why it can happen.
+"
+"['deep-learning', 'time-series']"," Title: Deep Learning on time series tabular dataBody: In this new book release, at the top of page 51 the authors mention that to do deep learning on time series tabular data the developer should structure the tensors such that the channels represent the time periods.
+For example, with a dataset of 17 features where each row represents an hour of a day: the tensor would have 3 dimensions,
+
+x - the 17 features
+y - the # of days
+z - the 24 hours in each day
+
+So each entry in the tensor would represent that day/hour.
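+
+My understanding of that reshaping, as a sketch (the axis order here is my own choice and may differ from the book's):
+
+import numpy as np
+
+n_days, n_hours, n_features = 30, 24, 17
+flat = np.random.rand(n_days * n_hours, n_features)   # one row per hour, as in the original table
+
+# Group the hourly rows so that each sample is one day and the hours form a separate axis
+tensor = flat.reshape(n_days, n_hours, n_features)
+print(tensor.shape)   # (30, 24, 17)
+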
+Is this necessary to capture time series elements? Would the DNN not learn these representations simply by breaking up the date column into: day, hour?
+"
+"['deep-learning', 'convolutional-neural-networks', 'comparison', 'recurrent-neural-networks', 'implementation']"," Title: From an implementation point of view, what are the main differences between an RNN and a CNN?Body: I understand that in general an RNN is good for time series data and a CNN image data, and have noticed many blogs explaining the fundamental differences in the models.
+As a beginner in machine learning and coding, I would like to know from the code perspective, what the differences between an RNN and CNN are in a more practical way.
+For instance, I think most CNNs dealing with image data use Conv1D
or Conv2D
and MaxPooling2D
layers and require reshaping input data with code looks something like this Input(shape=(64, 64, 1))
.
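+
+For reference, the two minimal skeletons I am mentally comparing look roughly like this (my own sketch, not taken from any particular tutorial):
+
+from tensorflow.keras import layers, models
+
+# Minimal CNN skeleton: input is image-like, (height, width, channels)
+cnn = models.Sequential([
+    layers.Input(shape=(64, 64, 1)),
+    layers.Conv2D(16, (3, 3), activation='relu'),
+    layers.MaxPooling2D((2, 2)),
+    layers.Flatten(),
+    layers.Dense(10, activation='softmax')])
+
+# Minimal RNN skeleton: input is sequence-like, (timesteps, features)
+rnn = models.Sequential([
+    layers.Input(shape=(100, 8)),
+    layers.LSTM(32),
+    layers.Dense(10, activation='softmax')])
+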
+What are some other things that distinguish CNNs from RNNs from a coding perspective?
+"
+"['machine-learning', 'terminology', 'definitions', 'probability-distribution', 'notation']"," Title: What is a probability distribution in machine learning?Body: If we were learning or working in the machine learning field, then we frequently come across the term "probability distribution". I know what probability, conditional probability, and probability distribution/density in math mean, but what is its meaning in machine learning?
+Take this example where $x$ is an element of $D$, which is a dataset,
+$$x \in D$$
+Let's say our dataset ($D$) is MNIST with about 70,000 images, so then $x$ becomes any image of those 70,000 images.
+In many papers and web articles, these terms are often denoted as probability distributions
+$$p(x)$$
+or
+$$p\left(z \mid x \right)$$
+
+- What does $p(\cdot)$ even mean, and what kind of output does $p(\cdot)$ give?
+- Is the output of $p(\cdot)$ a scalar, vector, or matrix?
+- If the output is vector or matrix, then will the sum of all elements of this vector/matrix always be $1$?
+
+This is my understanding,
+$p(\cdot)$ is a function which maps the real distribution of the whole dataset $D$. Then $p(x)$ gives a scalar probability value given $x$, which is calculated from real distribution $p(\cdot)$. Similar to $p(H)=0.5$ in a coin toss experiment $D={\{H,T}\}$.
+$p\left(z \mid x \right)$ is another function that maps the real distribution of the whole dataset to a vector $z$ given an input $x$ and the $z$ vector is a probability distribution that sums to $1$.
+Are my assumptions correct?
+An example would be a VAE's data generation process, which is represented in this equation
+$$p_\theta(\mathbf{x}^{(i)}) = \int p_\theta(\mathbf{x}^{(i)}\vert\mathbf{z}) p_\theta(\mathbf{z}) d\mathbf{z}$$
+"
+"['image-recognition', 'comparison', 'facial-recognition', 'image-segmentation']"," Title: How to extract face details from a imageBody: I am trying to make a face login application that authenticates the user when matching the registered face and the given face.
+currently, the issue is I cant extract the face descriptions from the given face when the user is taking the photo in the night or the photo has a backlight.
+currently, I am using JavaScript API for face detection and face recognition in the browser and nodejs with tensorflow.js
+can anyone suggest any good face detection and comparison algorithm that resolve my current issues, that will be very helpful for me
+Now I am extracting face descriptions from the face and using the Euclidean distance equation is used for comparing the similarity of the images. if any good methods for comparison, please suggest
+"
+"['datasets', 'generative-adversarial-networks', 'image-generation', 'training-datasets', 'sample-complexity']"," Title: How many training data is required for GAN?Body: I'm beginning to study and implement GAN to generate more datasets. I'll just try to experiment with state-of-the-art GAN models as described here https://paperswithcode.com/sota/image-generation-on-cifar-10.
+The problem is that I don't have a big dataset (around 1,000 images) for image classification. I have tried to train and test my dataset with GoogleNet and InceptionV3 and the results are mediocre. I'm afraid that a GAN will require a bigger dataset than usual image classification. I couldn't find any detailed guideline on how to prepare datasets properly for a GAN (e.g. the minimum number of images).
+So, how many images are required to produce a good GAN model?
+Also, I'm curious whether I can use my image classification dataset directly to train a GAN.
+"
+"['reinforcement-learning', 'actor-critic-methods', 'soft-actor-critic']"," Title: Where does entropy enter in Soft Actor-Critic?Body: I am currently trying to understand SAC (Soft Actor-Critic), and I am thinking of it as a basic actor-critic with the entropy included. However, I expected the entropy to appear in the Q-function. From SpinningUp-SAC, it looks like the entropy is entering through the value-function, so I'm thinking it enters by the $\log \pi_{\phi}(a_t \mid s_t)$ in the value function?
+
+I'm a little stuck on understanding SAC, can anyone confirm/explain this to me?
+
+Also, side-note question: is being a soft agent equivalent to including entropy in one of the objective functions?
+"
+"['neural-networks', 'deep-learning', 'papers', 'transformer', 'layer-normalization']"," Title: Where should we place layer normalization in a transformer model?Body: In Attention Is All You Need paper:
+
+That is, the output of each sub-layer is $LayerNorm(x+Sublayer(x))$, where $Sublayer(x)$ is the function implemented by the sub-layer itself. We apply dropout to the output of each sub-layer, before it is added to the sub-layer input and normalized.
+
+which makes the final formula $LayerNorm(x+Dropout(Sublayer(x)))$. However, in https://github.com/tensorflow/models/blob/0effd158ae1e6403c6048410f79b779bdf344d7d/official/transformer/model/transformer.py#L278-L288, I see
+def __call__(self, x, *args, **kwargs):
+ # Preprocessing: apply layer normalization
+ y = self.layer_norm(x)
+
+ # Get layer output
+ y = self.layer(y, *args, **kwargs)
+
+ # Postprocessing: apply dropout and residual connection
+ if self.train:
+ y = tf.nn.dropout(y, 1 - self.postprocess_dropout)
+ return x + y
+
+which ends up as $x+Dropout(Sublayer(LayerNorm(x)))$. Plus there are extra LayerNorm
s as final layers in both encoder and decoder stacks.
+In a quick test, the performance of this model seems to be better than if I change back to the paper's order of operations. My question is: why? And could it be predicted in advance?
+I note that Generating Long Sequences with Sparse Transformers uses the $x+Dropout(Sublayer(LayerNorm(x)))$ order, but doesn't discuss it, unlike the other changes it makes to Transformer.
+"
+"['convolutional-neural-networks', 'computer-vision', 'gradient-descent', 'architecture', 'pytorch']"," Title: Can Grad CAM feature maps be used for Training?Body: I am trying to recreate the architecture of the following paper: https://arxiv.org/pdf/1807.03058.pdf
+
+Can someone help me by explaining how the feature maps coming out of Grad-CAM are used in the following conv layers?
+"
+"['problem-solving', 'constraint-satisfaction-problems']"," Title: How can I formulate the k-knights problem as a constraint satisfaction problem?Body: There are three things in every constraint satisfaction problem (CSP):
+
+
+- Variables
+- Domain
+- Constraints
+
+
+In the given scenario, I know how to identify the constraints, but I don't know how to identify the variables and the domain.
+
+The given scenario is:
+
+
+ You are given a $n \times n$ board, where $n \geq 3$. On this board, you have to put $k$ knights where $k < n^2$, such that no knight is attacking the other knight. The knights are expected to be placed on different squares on the board. A knight can move two squares vertically and one square horizontally or two squares horizontally and one square vertically. The knights attack each other if one of them can reach the other in a single move. For example, on a $3 \times 3$ board, we can place $k=5$ knights.
+
+
+So, the input is $m = 3, n = 3, k = 5$. There are two solutions.
+
+Solution 1
+
+K A K
+A K A
+K A K
+
+
+Solution 2
+
+A K A
+K K K
+A K A
+
+"
+"['neural-networks', 'genetic-algorithms', 'neat', 'neuroevolution']"," Title: How are weights updated in a genetic algorithm with neural network?Body: Suppose an AI is to play the game flappy bird. And the fitness function is how long the bird has traveled before the game ends.
+
+Would we have multiple neural networks initialized at the beginning with random weights (as in each bird has its own network) and then we determine the neural networks that have lasted the longest for the game and then we perform a selection of weights from the ""better"" neural networks followed by mutation? Those will then be used as the new weights of a brand new neural network (ie the offspring from two ""better"" neural networks?)?
+
+If that is the case, does that mean there is no backpropagation because there isn't a cost function?
+"
+"['reinforcement-learning', 'policy-gradients']"," Title: What is the effect of picking action deterministicly at inference with Policy Gradient Methods?Body: In policy gradient methods such as A3C/PPO, the output from the network is probabilities for each of the actions. At training time, the action to take is sampled from the probability distribution.
+
+When evaluating the policy in an environment, what would be the effect of always picking the action that has the highest probability instead of sampling from the probability distribution?
+"
+"['convolutional-neural-networks', 'time-series', 'convolution', 'pooling']"," Title: Can CNNs be applied to non-image data, given that the convolution and pooling operations are mainly applied to imagery?Body: When using CNNs for non-image (times series) data prediction, what are some constraints or things to look out for as compared to image data?
+To be more precise, I notice there are different types of layers in a CNN model, as described below, which seem to be particularly designed for image data.
+A convolutional layer that extracts features from a source image. Convolution helps with blurring, sharpening, edge detection, noise reduction, or other operations that can help the machine to learn specific characteristics of an image.
+A pooling layer that reduces the image dimensionality without losing important features or patterns.
+A fully connected layer also known as the dense layer, in which the results of the convolutional layers are fed through one or more neural layers to generate a prediction.
+Are these operations also applicable to non-image data (for example, times series)?
+"
+"['computer-vision', 'object-detection', 'image-processing']"," Title: How can I detect moving objects in a video by OpenCV without using deep learning techniques?Body: I want to detect moving objects in a surveillance video without using machine learning tools (like neural networks).
+Is there a simple way in the OpenCV library? What is an efficient solution for this purpose?
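+
+The kind of simple (non-deep-learning) pipeline I am imagining is background subtraction plus contour detection, sketched below (the file name and the area/threshold values are just placeholders I made up):
+
+import cv2
+
+cap = cv2.VideoCapture('surveillance.mp4')   # placeholder path
+subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
+
+while True:
+    ok, frame = cap.read()
+    if not ok:
+        break
+    mask = subtractor.apply(frame)            # foreground mask of moving pixels
+    mask = cv2.medianBlur(mask, 5)            # suppress speckle noise
+    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
+    for c in contours:
+        if cv2.contourArea(c) > 500:          # ignore tiny blobs
+            x, y, w, h = cv2.boundingRect(c)
+            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
+    cv2.imshow('moving objects', frame)
+    if cv2.waitKey(30) == 27:                 # Esc to quit
+        break
+
+cap.release()
+cv2.destroyAllWindows()
+
+Is something along these lines what is normally used, and is it efficient enough for surveillance video?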
+"
+"['neural-networks', 'convolutional-neural-networks', 'papers', 'overfitting']"," Title: How GoogleNet actually deal with reducing overfitting?Body: Today I was going through a tutorial of Andrew Ng about Inception network. He said that GoogLeNet's hidden layers are also good in prediction and it had somehow a regularization effect, so it reduces overfitting. I also search on this topic and tried to figure out by reading the GoogLeNet paper. But I am not satisfied.
+
+Can anyone give me any mathematical intuition or reasoning about this in detail?
+"
+"['philosophy', 'agi', 'go']"," Title: Could AI kill the joy of competitive sports and games?Body: Lee Sedol, former world champion, and legendary Go player today announced his retirement with the quote ""Even if I become the No. 1, there is an entity that cannot be defeated"".
+
+Is it possible that AIs could kill the joy of competitive games(Go, chess, Dota 2, etc.) or (thinking more futuristic with humanoid AIs) in sports?
+
+What happens if AIs gets better than us at painting and making music. Will we still appreciate it in the same way we do now?
+"
+"['chess', 'google']"," Title: How did MuZero learn the rules of chess?Body: Google says that their new AI program MuZero learnt the rules of chess and some other board games without being told so. How is this even possible?
+
+https://towardsdatascience.com/deepmind-unveils-muzero-a-new-agent-that-mastered-chess-shogi-atari-and-go-without-knowing-the-d755dc80ff08
+"
+['logic']," Title: Does the substituted variable/constant have to appear in the unified term?Body: I'm checking out how to manually apply resolution on a first order predicate logic knowledge base and I'm confused about what is allowed or not in the algorithm.
+
+Let's say that we have the following two clauses (where $A$ and $B$ are constants):
+
+$\neg P(A, B) \vee H(A)$
+
+$\neg L(x_1) \vee P(x_1, y_1)$
+
+If I try to unify these two clauses by making the substitutions $\{x_1/A, y_1/B\}$ do I get $\neg L(A) \vee H(A)$ ? Is it allowed to substitute $y_1$ by $B$ even if $B$ doesn't appear in the unified clause?
+
+Then we have the other way around where:
+
+$\neg P(A, y_1) \vee H(A)$
+
+$\neg L(x_1) \vee P(x_1, B)$
+
+Can I do $\{x_1/A, B/y_1\}$ for $\neg L(A) \vee H(A)$ ?
+
+What about the case where:
+
+$\neg P(A, z_1) \vee H(A)$
+
+$\neg L(x_1) \vee P(x_1, y_1)$
+
+Can I substitute $\{x_1/A, y_1/z_1\}$ and get $\neg L(A) \vee H(A)$ ?
+
+Finally, there are also cases where we have something like this:
+
+$\neg P(x_2, y_2) \vee H(z_1)$
+
+$\neg L(x_1) \vee P(x_1, y_1)$
+
+Can we do $\{x_1/x_2, y_1/y_2\}$ to get $\neg L(x_3) \vee H(z_2)$ ?
+
+I'm really confused about when unification succeeds once we have two clauses with a literal of the same kind (negated in one of them and not in the other) that are candidates for unification.
+"
+"['machine-learning', 'feature-selection', 'weights', 'gaussian-process']"," Title: Interpretability of feature weights from Gaussian process classifierBody: Suppose I trained a Gaussian process classifier with a linear kernel (using GPML toolbox) and got some feature weights for each input feature.
+
+My question is then:
+
+Does it/When does it make sense to interpret the weights to indicate the real-life importance of each feature or interpret at group level the average over the weights of a group of features?
+"
+"['neural-networks', 'feedforward-neural-networks', 'time-complexity', 'computational-complexity']"," Title: Given an input $x \in R^{1\times d}$ and a network with $s$ hidden layers, is the time complexity of the forward pass $O(d^{2}s)$?Body: I have a neural network that takes as an input a vector of $x \in R^{1\times d}$ with $s$ hidden layers and each layer has $d$ neurons (including the output layer).
+
+If I understand correctly the computational complexity of the forward pass of a single input vector would be $O(d^{2}(s-1))$, where $d^{2}$ is the computational complexity for the multiplication of the output of each layer and the weight matrix, and this happens $(s-1)$ times, given that the neural network has $s$ layers. We can ignore the activation function because the cost is $O(d)$.
+
+So, if I am correct so far, and the computational complexity of the forward pass is $O(d^{2}(s-1))$, is the following correct?
+
+$$O(d^{2}(s-1)) = O(d^{2}s - d^{2}) = O(d^{2}s)$$
+
+Would the computational complexity of the forward pass for this NN be $O(d^{2}s)$?
+"
+"['convolutional-neural-networks', 'object-detection', 'pytorch', 'yolo', 'coco-dataset']"," Title: How can I train YOLO with the COCO dataset?Body: I am trying to implement the original YOLO architecture for object detection, but I am using the COCO dataset. However, I am a bit confused about the image sizes of COCO. The original YOLO was trained on the VOC dataset and it is designed to take 448x448 size images. Since I am using COCO, I thought of cropping down the images to that size. But that would mean I would have to change the annotations file and it might make the process of object detection a bit harder because some of the objects might not be visible.
+
+I am pretty new to this, so I am not sure if this is the right way or what are some other things that I can do. Any help will be appreciated.
+"
+"['machine-learning', 'deep-learning', 'regression', 'rule-acquisition']"," Title: Which algorithm and architecture to use for 1:1 matrix transformation of an 8X8 dimension?Body: I would like to map the simplest 8X8 matrices, one to one, but am not sure which AI algorithm would give the best performance. I am thinking about the DeepLearning4j, however, I don't know which neural architecture to use.
+
+I would like to make a simple and a very ""stupid"" bot for playing the chess. The result I am hoping to obtain is a system which can learn the chess rules rather than make intelligent moves (although it would be great if it could do that as well). I understand that chess bots are nothing new, however, I am not interested in making a chess bot but a system that can simply learn by giving nothing else than an 8X8 matrix as an input and 8X8 matrix as an output. Chess is irrelevant, and it can be any other game that can be represented with values within an 8X8 matrix.
+
+I am aware of the newest image mappers that transform horses to zebras, but I need something more precise and one-to-one learning.
+
+The amount of data I can get for learning is also not an issue (I can generate as much as I want).
+"
+"['knowledge-representation', 'programming-languages', 'language-model']"," Title: What would be a good internal language for an AI?Body: For an AI to represent the world, it would be good if it could translate human sentences into something more precise.
+We know, for example, that mathematics can be built up from set theory. So representing statements in the language of set theory might be useful.
+For example
+
+All grass is green
+
+is something like:
+
+$\forall x \in grass: isGreen(x)$
+
+But then I learned that set theory is built up from something more basic. And that theorem provers use a special form of higher-order logic of types. Then there is propositional logic.
+Basically what the AI would need would be some way representing statements, some axioms, and ways to manipulate the statements.
+Thus what would be a good language to use as an internal language for an AI?
+"
+['ethics']," Title: Which AI conference presentation on predicting terrorist movement inside buildings caused protest in the audience and media?Body: I remember reading an online news article a while ago on AI ethics that described a conference presentation on the subject of a military AI system that predicted terrorist movement inside buildings and used drones to shoot them when they exited the building. The presentation caused protests and sharp criticism in the audience due to the dehumanizing nature of the AI usage presented.
+
+The article was in mainstream media, but I am unable to find it with Google search.
+
+Which AI conference presentation was it and are any articles on it available?
+"
+"['python', 'tensorflow', 'keras', 'gradient-descent']"," Title: How to reduce variance of the model loss during training?Body: I know that stochastic gradient descent always gives different results. What are the best practices to reduce this variance today?
+I tried to predict a simple function with two different approaches, and every time I train them I see very different results.
+
+Input data:
+
+def plot(model_out):
+ fig, ax = plt.subplots()
+ ax.grid(True, which='both')
+ ax.axhline(y=0, color='k', linewidth=1)
+ ax.axvline(x=0, color='k', linewidth=1)
+
+ ax.plot(x_line, y_line, c='g', linewidth=1)
+ ax.scatter(inputs, targets, c='b', s=8)
+ ax.scatter(inputs, model_out, c='r', s=8)
+
+a = 5.0; b = 3.0; x_left, x_right = -16., 16.
+NUM_EXAMPLES = 200
+noise = tf.random.normal((NUM_EXAMPLES,1))
+
+inputs = tf.random.uniform((NUM_EXAMPLES,1), x_left, x_right)
+targets = a * tf.sin(inputs) + b + noise
+x_line = tf.linspace(x_left, x_right, 500)
+y_line = a * tf.sin(x_line) + b
+
+
+Keras training:
+
+model = tf.keras.Sequential()
+model.add(tf.keras.layers.Dense(50, activation='relu', input_shape=(1,)))
+model.add(tf.keras.layers.Dense(50, activation='relu'))
+model.add(tf.keras.layers.Dense(1))
+
+model.compile(loss='mse', optimizer=tf.keras.optimizers.Adam(0.01))
+model.fit(inputs, targets, batch_size=200, epochs=2000, verbose=0)
+
+print(model.evaluate(inputs, targets, verbose=0))
+plot(model.predict(inputs))
+
+
+
+
+Manual training:
+
+model = tf.keras.Sequential()
+model.add(tf.keras.layers.Dense(50, activation='relu', input_shape=(1,)))
+model.add(tf.keras.layers.Dense(50, activation='relu'))
+model.add(tf.keras.layers.Dense(1))
+
+optimizer = tf.keras.optimizers.Adam(0.01)
+
+@tf.function
+def train_step(inpt, targ):
+ with tf.GradientTape() as g:
+ model_out = model(inpt)
+ model_loss = tf.reduce_mean(tf.square(tf.math.subtract(targ, model_out)))
+
+ gradients = g.gradient(model_loss, model.trainable_variables)
+ optimizer.apply_gradients(zip(gradients, model.trainable_variables))
+ return model_loss
+
+train_ds = tf.data.Dataset.from_tensor_slices((inputs, targets))
+train_ds = train_ds.repeat(2000).batch(200)
+
+def train(train_ds):
+ for inpt, targ in train_ds:
+ model_loss = train_step(inpt, targ)
+ tf.print(model_loss)
+
+train(train_ds)
+plot(tf.squeeze(model(inputs)))
+
+
+
+"
+"['machine-learning', 'classification', 'fuzzy-logic']"," Title: The membership function of Consequents (Outputs) in Fuzzy classifierBody: The
+problem in Iris data is to classify three species of iris (setosa, versicolor and virginica) by
+four-dimensional attribute vectors consisting of
+
+
+- sepal length (x1)
+- sepal width (x2)
+- petal length (x3)
+- petal width (x4)
+
+
+Every attribute of the fuzzy classifier is assigned with three linguistic
+terms (fuzzy sets): short, medium and long. With normalized attribute values the
+membership functions of these fuzzy sets for all the four attributes are depicted in the
+figure below:
+
+
+
+Now, consider the situation that a set of Rules are given - some examples are as below:
+
+
+- R1: If (x3=short OR medium) AND (x4=short) Then iris setosa
+- R2: If (x2=short OR medium) AND (x3=long) AND (x4=long) Then iris virginica
+
+
+Now, I want to make a fuzzy classifier on the Iris dataset. The problem here is that I need to use a membership function for the consequents in the rules (i.e. the 3 classes) so as to be able to compute the aggregation of the rules and the defuzzification.
+
+
+- What is the proper pre-defined domain and membership function for three classes?
+- Do all the consequents (outputs) in a fuzzy classifier need to have membership functions?
+
+"
+['neural-networks']," Title: Does the neural network calculate different relations between inputs automatically?Body: Suppose you want to predict the price of some stock. Let's say you use the following features.
+
+OpenPrice
+HighPrice
+LowPrice
+ClosePrice
+
+
+Is it useful to create new features like the following ones?
+
+BodySize = ClosePrice - OpenPrice
+
+
+or the size of the tail
+
+TailUp = HighPrice - Max(OpenPrice, ClosePrice)
+
+
+Or we don't need to do that because we are adding noise and the neural network is going to calculate those values inside?
+
+The case of the body size may be a bit different from the tail, because for the tail we need to use a non-linear function (the max operation). So maybe it is important to add the input when it has a non-linear relationship with the other inputs, but not if the relationship is linear?
+
+Another example. Consider a box, with height $X$, width $Y$ and length $Z$.
+And suppose the really important input is the volume: will the neural network discover that the relevant relation is $X * Y * Z$? Or do we need to provide the volume as an input too?
+
+Sorry if it's a dumb question, but I'm trying to understand what the neural network is doing internally with the inputs: is it (somehow) finding all the mathematically possible relations between the inputs, or do we need to specify the relations between the inputs that we consider important (heuristically) for the problem being solved?
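+
+To be concrete, by ""creating new features"" I mean precomputing extra columns like these before feeding the network (the numbers are toy values just for illustration):
+
+import pandas as pd
+
+df = pd.DataFrame({
+    'OpenPrice':  [10.0, 11.0],
+    'HighPrice':  [12.5, 11.8],
+    'LowPrice':   [ 9.5, 10.2],
+    'ClosePrice': [11.2, 10.5]})
+
+# Candidate engineered features
+df['BodySize'] = df['ClosePrice'] - df['OpenPrice']
+df['TailUp'] = df['HighPrice'] - df[['OpenPrice', 'ClosePrice']].max(axis=1)
+print(df)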
+"
+"['reinforcement-learning', 'training']"," Title: What form of output would be needed to train a model on a connect 4 AI?Body: I've had a big interest in machine learning for a while, and I've followed along a few tutorials, but have never made my own project. After losing many games of connect 4 with my friends, I decided to try to make a replica of that, then create a neural network and AI to play against me (Or at least something where I can enter the current board scenario, and it will output which row is the best move). This may be an ambitious first project, but I'm willing to put in the work and research to create something I'm proud of. I created the game using p5.js, and though it may be simple, I'm really happy with how it turned out, as it's one of my first more interesting and unique projects in computer science. Now I don't know a ton about ML, so bear with me. I would like the use pytorch, but I'm open to tensorflow/keras as well.
+
+Here are a few of my questions:
+
+
+- What output do I need to train on? My game currently doesn't have a win condition. Would an array or matrix work, filled with a 0 where there isn't a chip, a 1 where a red chip is, and a 2 for a yellow one? I.e.
+
+
+[0,0,0,0
+ 1,0,0,0
+ 1,0,0,0
+ 1,0,2,0
+ 1,2,2,2]
+
+
+and enter a 1 somewhere to signify this as a win for player 1? Could an AI recognize this 4 in a row pattern as what needs to be done to win?
+
+
+- What is the best way to simulate a lot of games to get my training data? I'm imagining using an RNG to drop chips randomly, exporting the resulting data to a file and then recording whether it was a win for p1, p2, or a tie (a rough sketch of what I mean follows this list).
+- Any other general words of wisdom or links to read?
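+
+Here is the rough sketch of the random-game generator I have in mind (my own attempt, so the win check and the 6x7 board size are assumptions on my part):
+
+import numpy as np
+import random
+
+ROWS, COLS = 6, 7
+
+def drop(board, col, player):
+    # Place a piece in the lowest empty row of the chosen column
+    for r in range(ROWS - 1, -1, -1):
+        if board[r, col] == 0:
+            board[r, col] = player
+            return
+
+def has_won(board, player):
+    # Check every horizontal, vertical and diagonal line of length 4
+    for r in range(ROWS):
+        for c in range(COLS):
+            for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
+                cells = [(r + i * dr, c + i * dc) for i in range(4)]
+                if all(0 <= rr < ROWS and 0 <= cc < COLS and board[rr, cc] == player
+                       for rr, cc in cells):
+                    return True
+    return False
+
+def random_game():
+    board = np.zeros((ROWS, COLS), dtype=int)
+    player = 1
+    while True:
+        open_cols = [c for c in range(COLS) if board[0, c] == 0]
+        if not open_cols:
+            return board, 0          # draw
+        drop(board, random.choice(open_cols), player)
+        if has_won(board, player):
+            return board, player     # 1 or 2 wins
+        player = 3 - player
+
+final_board, winner = random_game()
+print(final_board, winner)
+
+Would storing (final_board, winner) pairs generated like this be a reasonable training set, or do I need the intermediate positions as well?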
+
+
+Thanks so much for reading this and any help you can offer?
+"
+['convolutional-neural-networks']," Title: Is weight pruning applied to all layers or only to dense layers in CNNs?Body: I was reading about weight pruning in convolutional neural networks. Is it applied for all the layers including convolutional layers or only it is done for dense layers?
+"
+"['neural-networks', 'reinforcement-learning', 'game-ai', 'software-evaluation', 'alphazero']"," Title: Building 'evaluation' neural networks for go, reversi, checkers etc, how to train?Body: I'm trying to build neural networks for games like Go, Reversi, Othello, Checkers, or even tic-tac-toe, not by calculating a move, but by making them evaluate a position.
+
+The input is any board situation. The output is a score or estimate for the probability to win, or how favorable a given position is. Where 1 = guaranteed to win and 0 = guaranteed to lose.
+
+In any given turn I can then loop over all possible moves for the current player, evaluate the resulting game situations, and pick the one with the highest score.
+
+Hopefully, by letting this neural network play a trillion games vs itself, it can develop a sensible scoring function resulting in strong play.
+
+Question: How do I train such a network ?
+
+In every game, I can keep evaluating and making moves back and forth until one of the AI players wins. In that case the last game situation (right before the winning move) for the winning player should have a target value of 1, and the opposite situation (for the losing player) has a target value of 0.
+
+Note that I don't intend to make the evaluation network two-sided. I encode the game situation as always ""my own"" pieces vs ""the opponent"", and then evaluate a score from my own (i.e. the current player's) side or perspective. Then after I pick a move, I flip sides so to speak, so the opponent pieces now become my own and vice versa, and then evaluate scores again (now from the other side's perspective) for the next counter move.
+
+So the input to such a network does explicitly encode black and white pieces, or naughts and crosses (in case of tic-tac-toe) but just my pieces vs their pieces. And then evaluates how favorable the given game situation is for me, always assuming it's my turn.
+
+I can obviously assign a desired score or truth value for the last move in the game (1 for win, 0 for loss) but how do I backpropagate that towards earlier situations in a played game?
+
+Should I somehow distribute the 1 or 0 result back a few steps, with a decaying adjustment factor or learning rate? In a game with 40 turns, it might make sense to consider the last few situations as good or bad (being close to winning or losing) but I guess that shouldn't reflect all the way back to the first few moves in the game.
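+
+What I am currently imagining for the decaying adjustment is something like the following sketch (the 0.9 decay and the 0.5 neutral prior are just guesses on my part):
+
+GAMMA = 0.9  # assumed decay: positions close to the end get targets close to the outcome
+
+def targets_for_game(positions, outcome):
+    # positions: board encodings from one player's perspective, in move order
+    # outcome: 1.0 if that player won the game, 0.0 if they lost
+    n = len(positions)
+    # Blend the final outcome with a neutral 0.5 prior, decaying with distance from the end
+    return [0.5 + (outcome - 0.5) * GAMMA ** (n - 1 - i) for i in range(n)]
+
+print(targets_for_game(['p0', 'p1', 'p2', 'p3'], 1.0))
+
+Is assigning decayed targets like this a reasonable way to propagate the final result backwards, or is there a standard method?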
+
+Or am I completely mistaken with this approach and is this not how it's supposed to be done?
+"
+['machine-learning']," Title: Prerequisites for Andrew Ng's Machine Learning CourseBody: I am planning to enroll for Andrew Ng's Machine Learning course https://www.coursera.org/learn/machine-learning. I've no background in math. Is it OK if I start the course and learn math as and when required?
+"
+"['natural-language-processing', 'natural-language-understanding']"," Title: What are the current big challenges in natural language processing and understanding?Body: I'm doing a paper for a class on the topic of big problems that are still prevalent in AI, specifically in the area of natural language processing and understanding. From what I understand, the areas:
+
+
+- Text classification
+- Entity recognition
+- Translation
+- POS tagging
+
+
+are for the most part solved or perform at a high level currently, but areas such as:
+
+
+- Text summarization
+- Conversational systems
+- Contextual systems (relying on the previous context that will impact current prediction)
+
+
+are still relatively unsolved or are a big area of research (although this could very well change soon with the releases of big transformer models from what I've read).
+
+For people who have experience in the field, what are areas that are still big challenges in NLP and NLU? Why are these areas (doesn't have to be ones I've listed) so tough to figure out?
+"
+"['machine-learning', 'classification', 'python']"," Title: How can I minimise the false positives?Body: I have 50,000 samples. Of these 23,000 belong to the desired class $A$. I can sacrifice the number of instances that are classified as belonging to the desired class $A$. It will be enough for me to get 7000 instances in the desired class $A$, provided that most of these instances classified as belonging to $A$ really belong to the desired class $A$. How can I do this?
+
+The following is the confusion matrix in the case the instances are perfectly classified.
+
+[[23000 0]
+ [ 0 27000]]
+
+
+But it is unlikely to obtain this confusion matrix, so I'm quite satisfied with the following confusion matrix.
+
+[[7000 16000]
+ [ 500 26500]]
+
+
+I am currently using the sklearn
library. I mainly use algorithms based on decision trees, as they are quite fast in the calculation.
+"
+"['algorithm', 'monte-carlo-tree-search', 'monte-carlo-methods', 'planning', 'tree-search']"," Title: MCTS: How to choose the final action from the rootBody: When the time allotted to Monte Carlo tree search runs out, what action should be chosen from the root?
+
+
+The game action finally executed by the program in the actual game, is the one corresponding to the child which was explored the most.
+
+
+
+The basic algorithm involves iteratively building a search tree until some predefined computational budget – typically a time, memory or iteration constraint – is reached, at which point the search is halted and the best-performing root action returned.
+[...] The result of the overall search a(BESTCHILD(v0)) is the action a that leads to the best child of the root node v0, where the exact definition of “best” is defined by the implementation.
+[...] As soon as the search is interrupted or the computation budget is reached, the search terminates and an action a of the root node t0 is selected by some mechanism. Schadd [188] describes four criteria for selecting the winning action, based on the work of Chaslot et al [60]:
+
+- Max child: Select the root child with the highest reward.
+
+- Robust child: Select the most visited root child.
+
+- Max-Robust child: Select the root child with both the highest visit count and the highest reward. If none exist, then continue searching until an acceptable visit count is achieved [70].
+
+- Secure child: Select the child which maximises a lower confidence bound.
+
+
+[...] Once some computational budget has been reached, the algorithm terminates and returns the best move found, corresponding to the child of the root with the highest visit count.
+The return value of the overall search in this case is a(BESTCHILD(v0,0)) which will give the action a that leads to the child with the highest reward, since the exploration parameter c is set to 0 for this final call on the root node v0. The algorithm could instead return the action that leads to the most visited child; these two options will usually – but not always! – describe the same action. This potential discrepancy is addressed in the Go program ERICA by continuing the search if the most visited root action is not also the one with the highest reward. This improved ERICA’s winning rate against GNU GO from 47% to 55% [107].
+
+But their algorithm 2 uses the same criterion as the internal-node selection policy:
+$$\operatorname{argmax}_{v'} \frac{Q(v')}{N(v')} + c \sqrt{\frac{2 \ln N(v)}{N(v')}}$$
+which is neither the max child nor the robust child! This situation is quite confusing, and I'm wondering which approach is nowadays considered most successful/appropriate.
+"
+"['neural-networks', 'machine-learning', 'classification', 'training']"," Title: How are the inputs passed to the neural network during training for the XOR classification task?Body: Let's suppose we have to train a neural network for the XOR classification task.
+
+Are the inputs $(00, 01, 10, 11)$ inserted in a sequential way? For example, we first insert the 00 and change the weights, then the 01 and again slightly change them, etc. Or is there another way it can be implemented?
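+
+To make the question concrete, in Keras terms I am essentially asking whether one would use batch_size=1 (update the weights after every single pattern, i.e. the ""sequential"" case) or pass all four patterns at once, as in this sketch (hyperparameters are arbitrary):
+
+import numpy as np
+from tensorflow import keras
+
+X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
+y = np.array([0, 1, 1, 0], dtype=float)
+
+model = keras.Sequential([
+    keras.layers.Dense(4, activation='tanh', input_shape=(2,)),
+    keras.layers.Dense(1, activation='sigmoid')])
+model.compile(optimizer='adam', loss='binary_crossentropy')
+
+# batch_size=1 would update after each pattern; batch_size=4 uses all four per update
+model.fit(X, y, batch_size=4, epochs=2000, verbose=0)
+print(model.predict(X))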
+"
+"['machine-learning', 'algorithm']"," Title: What ML algorithms would you suggest in fraud detection?Body: There are a lot of ML algorithms suggested for fraud detection. Now, I have not been able to find a general overview for all of them. My goal is to create this overview. What algorithms would you suggest and why?
+"
+"['reinforcement-learning', 'policy-gradients', 'proofs', 'policy-gradient-theorem']"," Title: Why is the stationary distribution independent of the initial state in the proof of the policy gradient theorem?Body: I was going through the proof of the policy gradient theorem here: https://lilianweng.github.io/lil-log/2018/04/08/policy-gradient-algorithms.html#svpg
+
+In the section ""Proof of Policy Gradient Theorem"" in the block of equations just under the sentence ""The nice rewriting above allows us to exclude the derivative of Q-value function..."" they set
+$$
+\eta (s) = \sum^\infty_{k=0} \rho^\pi(s_0 \rightarrow s, k)
+$$
+and
+$$
+\sum_s \eta (s) = const
+$$
+Thus, they basically assume that the stationary distribution is not dependent on the initial state. But how can we justify this? If the MDP is described by a block-diagonal transition matrix, in my mind this should not hold.
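+
+To illustrate my concern with a toy example (my own): take a two-state MDP with transition matrix
+$$
+P = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},
+$$
+so an agent starting in state 1 stays in state 1 forever and an agent starting in state 2 stays in state 2 forever; the visitation distribution $\eta$ then clearly depends on the starting state.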
+"
+"['neural-networks', 'machine-learning']"," Title: Can learned feature vectors be considered a good encryption?Body: Considering I have some neural network that, using supervised learning, transforms a string into a learned feature vector where ""close"" strings will result into more close vectors.
+
+I know that, since a NN is not a one-way function, there is a way to retrieve the input data from my output if I have the entire network at hand (if I know the biases, weights, etc.)
+
+My question is: if the network is not known, is there a way for me (using, e.g., some probabilistic distributions) to make assumptions about or even reconstruct the input data?
+"
+['reinforcement-learning']," Title: Does adding a constant to all rewards change the set of optimal policies in episodic tasks?Body: I'm taking a Coursera course on Reinforcement learning. There was a question there that wasn't addressed in the learning material: Does adding a constant to all rewards change the set of optimal policies in episodic tasks?
+
+The answer is Yes - Adding a constant to the reward signal can make longer episodes more or less advantageous (depending on whether the constant is positive or negative).
+
+Can anyone explain why this is so? And why it doesn't change anything in the case of continuing (non-episodic) tasks? I don't see why adding a constant matters - an optimal policy would still want to get the maximum reward...
+
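+Here is my own rough attempt at a calculation (I am not sure it is right): if a constant $c$ is added to every reward, then the return of an episode of length $T$ becomes
+$$
+\sum_{t=1}^{T}(r_t + c) = \sum_{t=1}^{T} r_t + cT,
+$$
+so the extra term $cT$ grows with the episode length, which would seem to make longer episodes look better when $c > 0$ and worse when $c < 0$.
+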
+Can anyone give an example of this?
+"
+"['neural-networks', 'machine-learning', 'training', 'accuracy', 'cross-validation']"," Title: What is the relationship between the training accuracy and validation accuracy?Body: During model training, I noticed various behaviour in between training and validation accuracy. I understand that 'The training set is used to train the model, while the validation set is only used to evaluate the model's performance...', but I'd like to know if there is any relationship between training and validation accuracy and, if yes,
+
+- what exactly is happening when training and validation accuracy change during training and;
+
+- what do different behaviours imply
+
+
+For instance, some believe there is an overfitting problem if training accuracy > validation accuracy. What happens if one is alternately greater than the other, which is the case below?
+Here is the code
+inputs_1 = keras.Input(shape=(10081,1))
+
+layer1 = Conv1D(64,14)(inputs_1)
+layer2 = layers.MaxPool1D(5)(layer1)
+layer3 = Conv1D(64, 14)(layer2)
+layer4 = layers.GlobalMaxPooling1D()(layer3)
+
+inputs_2 = keras.Input(shape=(104,))
+layer5 = layers.concatenate([layer4, inputs_2])
+layer6 = Dense(128, activation='relu')(layer5)
+layer7 = Dense(2, activation='softmax')(layer6)
+
+
+model_2 = keras.models.Model(inputs = [inputs_1, inputs_2], output = [layer7])
+model_2.summary()
+
+
+X_train, X_test, y_train, y_test = train_test_split(df.iloc[:,0:10185], df[['Result_cat','Result_cat1']].values, test_size=0.2)
+X_train = X_train.to_numpy()
+X_train = X_train.reshape([X_train.shape[0], X_train.shape[1], 1])
+X_train_1 = X_train[:,0:10081,:]
+X_train_2 = X_train[:,10081:10185,:].reshape(736,104)
+
+
+X_test = X_test.to_numpy()
+X_test = X_test.reshape([X_test.shape[0], X_test.shape[1], 1])
+X_test_1 = X_test[:,0:10081,:]
+X_test_2 = X_test[:,10081:10185,:].reshape(185,104)
+
+adam = keras.optimizers.Adam(lr = 0.0005)
+model_2.compile(loss = 'categorical_crossentropy', optimizer = adam, metrics = ['acc'])
+
+history = model_2.fit([X_train_1,X_train_2], y_train, epochs = 120, batch_size = 256, validation_split = 0.2, callbacks = [keras.callbacks.EarlyStopping(monitor='val_loss', patience=20)])
+
+model summary
+/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:15: UserWarning: Update your `Model` call to the Keras 2 API: `Model(inputs=[<tf.Tenso..., outputs=[<tf.Tenso...)`
+ from ipykernel import kernelapp as app
+Model: "model_3"
+__________________________________________________________________________________________________
+Layer (type) Output Shape Param # Connected to
+==================================================================================================
+input_5 (InputLayer) (None, 10081, 1) 0
+__________________________________________________________________________________________________
+conv1d_5 (Conv1D) (None, 10068, 64) 960 input_5[0][0]
+__________________________________________________________________________________________________
+max_pooling1d_3 (MaxPooling1D) (None, 2013, 64) 0 conv1d_5[0][0]
+__________________________________________________________________________________________________
+conv1d_6 (Conv1D) (None, 2000, 64) 57408 max_pooling1d_3[0][0]
+__________________________________________________________________________________________________
+global_max_pooling1d_3 (GlobalM (None, 64) 0 conv1d_6[0][0]
+__________________________________________________________________________________________________
+input_6 (InputLayer) (None, 104) 0
+__________________________________________________________________________________________________
+concatenate_3 (Concatenate) (None, 168) 0 global_max_pooling1d_3[0][0]
+ input_6[0][0]
+__________________________________________________________________________________________________
+dense_5 (Dense) (None, 128) 21632 concatenate_3[0][0]
+__________________________________________________________________________________________________
+dense_6 (Dense) (None, 2) 258 dense_5[0][0]
+==================================================================================================
+Total params: 80,258
+Trainable params: 80,258
+Non-trainable params: 0
+
+and the training process
+__________________________________________________________________________________________________
+Train on 588 samples, validate on 148 samples
+Epoch 1/120
+588/588 [==============================] - 16s 26ms/step - loss: 5.6355 - acc: 0.4932 - val_loss: 4.1086 - val_acc: 0.6216
+Epoch 2/120
+588/588 [==============================] - 15s 25ms/step - loss: 4.5977 - acc: 0.5748 - val_loss: 3.8252 - val_acc: 0.4459
+Epoch 3/120
+588/588 [==============================] - 15s 25ms/step - loss: 4.3815 - acc: 0.4575 - val_loss: 2.4087 - val_acc: 0.6622
+Epoch 4/120
+588/588 [==============================] - 15s 25ms/step - loss: 3.7480 - acc: 0.6003 - val_loss: 2.0060 - val_acc: 0.6892
+Epoch 5/120
+588/588 [==============================] - 15s 25ms/step - loss: 3.3019 - acc: 0.5408 - val_loss: 2.3176 - val_acc: 0.5676
+Epoch 6/120
+588/588 [==============================] - 15s 25ms/step - loss: 3.1739 - acc: 0.5663 - val_loss: 2.2607 - val_acc: 0.6892
+Epoch 7/120
+588/588 [==============================] - 15s 25ms/step - loss: 3.2322 - acc: 0.6207 - val_loss: 1.8898 - val_acc: 0.7230
+Epoch 8/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.9777 - acc: 0.6020 - val_loss: 1.8401 - val_acc: 0.7500
+Epoch 9/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.8982 - acc: 0.6429 - val_loss: 1.8517 - val_acc: 0.7365
+Epoch 10/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.8342 - acc: 0.6344 - val_loss: 1.7941 - val_acc: 0.7095
+Epoch 11/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.7426 - acc: 0.6327 - val_loss: 1.8495 - val_acc: 0.7162
+Epoch 12/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.7340 - acc: 0.6531 - val_loss: 1.7652 - val_acc: 0.7162
+Epoch 13/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6680 - acc: 0.6616 - val_loss: 1.8097 - val_acc: 0.7365
+Epoch 14/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6922 - acc: 0.6786 - val_loss: 1.7143 - val_acc: 0.7500
+Epoch 15/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6161 - acc: 0.6786 - val_loss: 1.6960 - val_acc: 0.7568
+Epoch 16/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6054 - acc: 0.6905 - val_loss: 1.6779 - val_acc: 0.7297
+Epoch 17/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6072 - acc: 0.6684 - val_loss: 1.6750 - val_acc: 0.7703
+Epoch 18/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5907 - acc: 0.6871 - val_loss: 1.6774 - val_acc: 0.7432
+Epoch 19/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5854 - acc: 0.6718 - val_loss: 1.6609 - val_acc: 0.7770
+Epoch 20/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5621 - acc: 0.6905 - val_loss: 1.6709 - val_acc: 0.7365
+Epoch 21/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5515 - acc: 0.6854 - val_loss: 1.6904 - val_acc: 0.7703
+Epoch 22/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5749 - acc: 0.6837 - val_loss: 1.6862 - val_acc: 0.7297
+Epoch 23/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6212 - acc: 0.6514 - val_loss: 1.7215 - val_acc: 0.7568
+Epoch 24/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6532 - acc: 0.6633 - val_loss: 1.7105 - val_acc: 0.7230
+Epoch 25/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.7300 - acc: 0.6344 - val_loss: 1.6870 - val_acc: 0.7432
+Epoch 26/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.7355 - acc: 0.6650 - val_loss: 1.6733 - val_acc: 0.7703
+Epoch 27/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6336 - acc: 0.6650 - val_loss: 1.6572 - val_acc: 0.7297
+Epoch 28/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6018 - acc: 0.6803 - val_loss: 1.7292 - val_acc: 0.7635
+Epoch 29/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5448 - acc: 0.7143 - val_loss: 1.8065 - val_acc: 0.7095
+Epoch 30/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5724 - acc: 0.6820 - val_loss: 1.8029 - val_acc: 0.7297
+Epoch 31/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6622 - acc: 0.6650 - val_loss: 1.6594 - val_acc: 0.7568
+Epoch 32/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6211 - acc: 0.6582 - val_loss: 1.6375 - val_acc: 0.7770
+Epoch 33/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5911 - acc: 0.6854 - val_loss: 1.6964 - val_acc: 0.7500
+Epoch 34/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5050 - acc: 0.7262 - val_loss: 1.8496 - val_acc: 0.6892
+Epoch 35/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6012 - acc: 0.6752 - val_loss: 1.7443 - val_acc: 0.7432
+Epoch 36/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5688 - acc: 0.6871 - val_loss: 1.6220 - val_acc: 0.7568
+Epoch 37/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4843 - acc: 0.7279 - val_loss: 1.6166 - val_acc: 0.7905
+Epoch 38/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4707 - acc: 0.7449 - val_loss: 1.6496 - val_acc: 0.7905
+Epoch 39/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4683 - acc: 0.7109 - val_loss: 1.6641 - val_acc: 0.7432
+Epoch 40/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4671 - acc: 0.7279 - val_loss: 1.6553 - val_acc: 0.7703
+Epoch 41/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4479 - acc: 0.7347 - val_loss: 1.6302 - val_acc: 0.7973
+Epoch 42/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4355 - acc: 0.7551 - val_loss: 1.6241 - val_acc: 0.7973
+Epoch 43/120
+588/588 [==============================] - 14s 25ms/step - loss: 2.4286 - acc: 0.7568 - val_loss: 1.6249 - val_acc: 0.7973
+Epoch 44/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4250 - acc: 0.7585 - val_loss: 1.6248 - val_acc: 0.7770
+Epoch 45/120
+588/588 [==============================] - 14s 25ms/step - loss: 2.4198 - acc: 0.7517 - val_loss: 1.6212 - val_acc: 0.7703
+Epoch 46/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4246 - acc: 0.7568 - val_loss: 1.6129 - val_acc: 0.7838
+Epoch 47/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4237 - acc: 0.7517 - val_loss: 1.6166 - val_acc: 0.7973
+Epoch 48/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4287 - acc: 0.7432 - val_loss: 1.6309 - val_acc: 0.8041
+Epoch 49/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4179 - acc: 0.7381 - val_loss: 1.6271 - val_acc: 0.7838
+Epoch 50/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4164 - acc: 0.7381 - val_loss: 1.6258 - val_acc: 0.7973
+Epoch 51/120
+588/588 [==============================] - 14s 24ms/step - loss: 2.1996 - acc: 0.7398 - val_loss: 1.3612 - val_acc: 0.7973
+Epoch 52/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1387 - acc: 0.8265 - val_loss: 1.4811 - val_acc: 0.7973
+Epoch 53/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1607 - acc: 0.8078 - val_loss: 1.5060 - val_acc: 0.7838
+Epoch 54/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1783 - acc: 0.8129 - val_loss: 1.4878 - val_acc: 0.8176
+Epoch 55/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1745 - acc: 0.8197 - val_loss: 1.4762 - val_acc: 0.8108
+Epoch 56/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1764 - acc: 0.8129 - val_loss: 1.4631 - val_acc: 0.7905
+Epoch 57/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1637 - acc: 0.8078 - val_loss: 1.4615 - val_acc: 0.7770
+Epoch 58/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1563 - acc: 0.8112 - val_loss: 1.4487 - val_acc: 0.7703
+Epoch 59/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1396 - acc: 0.8146 - val_loss: 1.4362 - val_acc: 0.7905
+Epoch 60/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1240 - acc: 0.8316 - val_loss: 1.4333 - val_acc: 0.8041
+Epoch 61/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1173 - acc: 0.8333 - val_loss: 1.4369 - val_acc: 0.8041
+Epoch 62/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1228 - acc: 0.8384 - val_loss: 1.4393 - val_acc: 0.8041
+Epoch 63/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1113 - acc: 0.8316 - val_loss: 1.4380 - val_acc: 0.8041
+Epoch 64/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1102 - acc: 0.8452 - val_loss: 1.4217 - val_acc: 0.8041
+Epoch 65/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.0961 - acc: 0.8469 - val_loss: 1.4129 - val_acc: 0.7973
+Epoch 66/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.0903 - acc: 0.8537 - val_loss: 1.4019 - val_acc: 0.8041
+Epoch 67/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.0890 - acc: 0.8503 - val_loss: 1.3850 - val_acc: 0.8176
+Epoch 68/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.0878 - acc: 0.8520 - val_loss: 1.4035 - val_acc: 0.7635
+Epoch 69/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.0984 - acc: 0.8469 - val_loss: 1.4060 - val_acc: 0.8041
+Epoch 70/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.0893 - acc: 0.8418 - val_loss: 1.3981 - val_acc: 0.7973
+Epoch 71/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.0876 - acc: 0.8605 - val_loss: 1.3951 - val_acc: 0.8041
+
+Notice how at first acc is lower than val_acc and later is greater than val_acc. Can someone please shed some light on what could be happening here? Thank you
+"
+['neural-networks']," Title: How can I train a neural network to detect subliminal messages?Body: Is there a way to train a neural network to detect subliminal messages? Where can I find the dataset on which to train the neural network?
+
+If I have to create the dataset, how would I go about it?
+
+United Nations has defined subliminal messages as perceiving messages without being aware of them, it is unconscious perception, or perception without awareness. Like you may be aware of a message but cannot consciously perceive that message in the form of text, etc.
+
+There are two main types of subliminal messages: one which can be made through visual means, another which can be made through audio.
+
+In visual means, I'm referring to these types:
+
+
+- Messages which are flashed for very short while on the screen.
+- Messages whose opacity is changed to blend with the background.
+- Messages whose colors are varied slightly to blend with the background.
+
+
+Example of the 3rd type of subliminal message: if there is a red background, a message can be shown on it in a slightly different shade of red. Since the conscious mind can't distinguish between such close shades of red, people will take the entire thing to be a red block, but the subconscious mind notices the slight variation in color and registers the message, because humans can see millions of colors.
+"
+"['deep-learning', 'reinforcement-learning', 'ai-design', 'game-ai']"," Title: Designing a reinforcement learning AI for a game of connect 4Body: I've made a connect 4 game in javascript, and I want to design an AI for it. I made a post the other day about what output would be needed, and I think I could use images of the board and a CNN. I did some research into Reinforcement learning, and I think that's what I need to do. I don't have much experience with ML in general, much less RL with Q-learning, but that is what I'd like to do.
+
+Now, I don't really know how to start out with such a big project. I have a few questions first:
+
+
+- What do I do with my input? I'm thinking I give the AI 7 options for moves to make, one for each column of the board. How do I implement a way that it can ""look"" at the board? Can I just import an image of the current board state?
+- How do I make a reward table? How should I do the points system for the Q-learning? I'm thinking something like: If it drops a chip it gets a point, if it lines up 2 chips in a row it gets 5 points, if it gets 3 in a row it gets 30, and if it gets 4 in a row it gets 100. Would that be an effective way to do this? How do I implement this?
+- Is there a library I can use to do any of the work where I make an algorithm and board states and reward tables? Or do I have to hard code any of it?
+- I've done some research, are there any links or tutorials you think I should read or follow along with? Any other general advice or help for me?
+
+
+I greatly appreciate anyone who answers one or all of these questions! Thank you so much!
+"
+"['convolutional-neural-networks', 'geometric-deep-learning']"," Title: How should I handle different input sizes in graph convolution networks?Body: I'm a student beginning to study deep learning, and would like to practice with a simple project using a Graph Convolutional Network.
+
+However, I'm not quite sure how to handle different input sizes of graphs for the GCN. How would I do this?
+
+Is zero-padding the only way to solve this problem? While zero-padding is applicable to CNNs, I'm not sure if it is for a GCN.
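+
+To clarify what I mean by zero-padding in the graph setting, here is a toy numpy sketch (my own) of padding node features and the adjacency matrix to a fixed maximum size:
+
+import numpy as np
+
+def pad_graph(features, adjacency, max_nodes):
+    # features: (n, d) node feature matrix; adjacency: (n, n) matrix; n <= max_nodes
+    n, d = features.shape
+    f = np.zeros((max_nodes, d))
+    a = np.zeros((max_nodes, max_nodes))
+    f[:n, :] = features
+    a[:n, :n] = adjacency
+    return f, a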
+"
+"['machine-learning', 'optimization', 'logistic-regression']"," Title: Is logistic regression used for unconstrained or constrained optimisation problems?Body: Is logistic regression used for unconstrained or constrained optimization problems, and why?
+"
+"['convolutional-neural-networks', 'papers', 'object-detection', 'r-cnn', 'faster-r-cnn']"," Title: In Faster R-CNN, how can I get the predicted bounding box given the neural network's output?Body: The RPN loss in Faster RCNN paper is
+$$
+L({p_i}, {t_i}) = \frac{1}{N_{cls}} \sum_{i} L_{cls}(p_i,p_i^*) + \lambda \frac{1}{N_{reg}} \sum_i p_i^* L_{reg}(t_i, t_i^*)
+$$
+For regression problems, we have the following parametrization
+$$t_x=\frac{x - x_a}{w_a}, \\ t_y=\frac{y−y_a}{h_a}, \\ t_w= \log \left( \frac{w}{w_a} \right),\\ t_h= \log \left(\frac{h}{h_a} \right)$$
+and the ground-truth labels are
+$$t_x^*=\frac{x^* - x_a}{w_a},\\ t_y^*=\frac{y^*−y_a}{h_a}, \\ t_w^*= \log \left( \frac{w^*}{w_a} \right), \\ t_h^*= \log \left(\frac{h^*}{h_a} \right)$$
+where
+
+- $x$ and $y$ are the two coordinates of the center, $w$ the width, and $h$ the height of the predicted box.
+
+- $x_a$ and $y_a$ are the two coordinates of the center, $w_a$ the width, and $h_a$ the height of the anchor box.
+
+- $L_{reg}(t_i, t_i^*) = R(t_i − t_i^*)$, where $R$ is a robust loss function (smooth $L_1$)
+
+
+These equations are unclear to me, so here are my two questions.
+
+- How can I get the predicted bounding box given the neural network's output?
+
+- What exactly is the smooth $L_1$ here? How is it defined?
+
+
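+For question 1, here is my current guess at inverting the parametrization above (purely my own sketch, not taken from the paper or any library):
+
+import numpy as np
+
+def decode_box(t, anchor):
+    # t = (t_x, t_y, t_w, t_h) predicted by the network for one anchor
+    # anchor = (x_a, y_a, w_a, h_a) in center/size format
+    t_x, t_y, t_w, t_h = t
+    x_a, y_a, w_a, h_a = anchor
+    x = t_x * w_a + x_a      # inverts t_x = (x - x_a) / w_a
+    y = t_y * h_a + y_a      # inverts t_y = (y - y_a) / h_a
+    w = w_a * np.exp(t_w)    # inverts t_w = log(w / w_a)
+    h = h_a * np.exp(t_h)    # inverts t_h = log(h / h_a)
+    return x, y, w, h
+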
+"
+"['convolutional-neural-networks', 'image-processing', 'convolution']"," Title: Can I shuffle image channel data as a form of data augmentation?Body: If I want to augment my dataset, is shuffling or permuting the channels (RGB) of an image a sensible augmentation for training a CNN? IIRC, the way convolutions work is that a kernel operates over parts of the image but maintains the order of the kernels.
+
+For example, the kernel has $k \times k$ weights for each channel and the resulting output is the multiplication of the weights and the pixel values of the image and is finally averaged to form a new pixel in the next feature map.
+
+In this case, if we shuffle the channels of the image (GBR, BGR, RBG, GRB, etc.), a CNN that is only trained on the ordering RGB would do poorly on such images. Therefore, is it not sensible to shuffle the channels of the image as a form of data augmentation? Or will this have a regularizing effect on the CNN model?
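+
+To be explicit about the augmentation I have in mind, it would be something like this (numpy sketch, assuming a channels-last HxWx3 image):
+
+import numpy as np
+
+def shuffle_channels(img, rng=np.random):
+    # img: H x W x 3 array (channels last); returns a copy with the
+    # channel order permuted, e.g. RGB -> GBR
+    perm = rng.permutation(img.shape[-1])
+    return img[..., perm]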
+"
+"['machine-learning', 'datasets', 'object-recognition', 'object-detection']"," Title: Is it possible to use AI for detecting the volume of a cupBody: I was just wondering if it's possible to use Machine Learning to train a model from a dataset of images of cups with a given volume in the image and then use object detection to detect other cups and assume the volume of the cup,
+
+Basically the end goal is to detect the volume of a cup using object detection with a phone's camera,
+
+I would highly appreciate it if someone can point me to the right direction.
+"
+"['implementation', 'variational-autoencoder', 'hyper-parameters']"," Title: What is the correct dimension of mu/logvar and z in the VAE?Body: I'm having a problem to understand the needed dimensions of a VAE, especially for mu
, logvar
and z
layer.
+Let's say I have an input of 512x512, 1 color channel (CT images), batch size 32. Then my encoder/decoder looks like the following:
+self.encoder = nn.Sequential(
+ nn.Conv2d(1, 32, kernel_size=3, stride=1, padding=1), # 32x512x512
+ nn.ReLU(True),
+ nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1), # 32x256x256
+ nn.ReLU(True),
+ nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1), # 32x128x128
+ nn.ReLU(True),
+ nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1), # 32x64x64
+ nn.ReLU(True),
+ nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1), # 32x32x32
+ nn.ReLU(True))
+
+self.decoder = nn.Sequential(
+ nn.ConvTranspose2d(32, 32, kernel_size=4, stride=2, padding=1),
+ nn.ReLU(True),
+ nn.ConvTranspose2d(32, 32, kernel_size=4, stride=2, padding=1),
+ nn.ReLU(True),
+ nn.ConvTranspose2d(32, 32, kernel_size=4, stride=2, padding=1),
+ nn.ReLU(True),
+ nn.ConvTranspose2d(32, 32, kernel_size=3, stride=1, padding=1),
+ nn.ReLU(True),
+ nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
+ nn.Sigmoid())
+
+What is the correct dimension of mu/logvar and z? latent_dim = 1000, filter_depth = 32.
+I'm not sure if the input of the linear layer mu/logvar is right or not.
+mu = nn.Linear(self.filter_depth * 32 * 32, self.latent_dim)
+logvar = nn.Linear(self.filter_depth * 32 * 32, self.latent_dim)
+z = nn.Linear(self.latent_dim, self.filter_depth * 32 * 32)
+
+"
+"['pattern-recognition', 'matlab']"," Title: How can I recognise possibly overlapping line segments in 2D?Body: I am given a 2-dimensional picture (black & white, white background) and it is assumed that there are some 'sticks' (basically 'thick lines' with different widths and lengths) that are (mostly) overlapping with one another.
+I want to somehow recognize these sticks (where they are and how big they are).
+
+Is there any approach you would recommend or, even better, anything that already exists? I am working with MATLAB, but a general (theoretical) approach would also be fine! I am open to machine learning, but I'd prefer classical algorithms here.
+"
+"['computer-vision', 'object-detection', 'image-processing']"," Title: Do backgroundSubtractor functions in opencv only detect moving objects?Body: There are some backgroundsubtractor functions in opencv like backgroundsubtractormog2 , backgroundsubtractorGMG and ... . It seems that these functions only detect moving objects in a video.
+
+But I understand from the concept of these functions that they do some clustering in an image. Do these functions only detect moving objects? Why? Or am I wrong?
+
+Any help will be appreciated
+"
+"['neural-networks', 'deep-learning', 'tensorflow', 'computer-vision', 'ensemble-learning']"," Title: Are there ensemble methods for regression?Body: I have heard of ensemble methods, such as XGBoost, for binary or categorical machine learning models. However, does this exist for regression? If so, how are the weights for each model in the process of predictions determined?
+
+I am looking to do this manually, as I was planning on training two different models using separate frameworks (YoloV3 aka Darknet and Tensorflow for bounding box regression). Is there a way I can establish a weight for each model in the overall prediction for these boxes?
+
+Or is this a bad idea?
+"
+"['reinforcement-learning', 'open-ai', 'gym']"," Title: Are there OpenAI Gym continuing environments (other than inverted pendulum) and baselines?Body: I would like to use OpenAI Gym to solve a continuing environment, that is, a problem with a single, never-ending episode (please note I don't mean a continuous environment with continuous state and actions).
+
+The only continuing environment I found in their repository was the classic inverted pendulum problem, and I found no baseline methods (algorithms) that don't require episodic environments.
+
+So I have two questions:
+
+
+- are there any continuing environments other than the inverted pendulum one?
+- is there an OpenAI Gym baseline method that I can use to solve the inverted pendulum problem as well as other continuing environments?
+
+"
+"['neural-networks', 'tensorflow']"," Title: How can I find what does an specific neuron do in neural network?Body: How can I know what each neuron does in NN?
+Consider the Playground from Tensorflow, there are some hidden layers with some neurons in each. Each of them shows a line(horizontal or vertical or ...). Where these shapes come from. I think they are understandable for nn not a person!
+"
+"['machine-learning', 'models', 'model-based-methods']"," Title: Correlating two models to predict the output of one that corresponds to an output of the otherBody: I am currently working on a problem and now got stuck to implement one of it's steps. This is a simple attempt to explain what I am currently facing, which is something that I am aiming to implement in my python simulation.
+
+The idea is that I will feed some input parameters into my simulation; however, the simulation is not able to perfectly capture all the dynamics involved in a real scenario. Hence, what I am aiming to do is to feed some inputs of the real scenario into my simulation and perform the simulation for all cases in which I have real data. So I will have the same amount of data for technically the same situation in both the real and the simulated scenario.
+
+With my simulated data I can find out the optimal parameters (for the simulation), so the idea now is to correlate my simulated model with the real data and then, with this correlation, find out what the equivalent of the optimal simulated parameters would be in terms of the optimal real parameters. Here is a not-really-precise diagram that might help with the visualization of the problem:
+
+
+
+I have already seen a lot of machine learning being utilized to fit a set of data, but I haven't really seen anything that could help me with the task that I currently have in hand, i.e. ""fitting models"" to each other. So here comes the question: how do I correlate the models and use this correlation to extract the optimal parameters?
+
+I hope that I managed to be succinct and precise despite the length of the text. I would really appreciate your help on this one!
+"
+"['convolutional-neural-networks', 'classification']"," Title: structure of neural network for classification problems with large amounts of null classificationsBody: I am building a Convolution neural network to predict certain categories based on images (the location of a pointer on a surface) . However in many cases there will be no pointer in the view or something that is not the pointer. Initially I was just going to train it with outputs of the different classifications including the null classification. However given that the null classification is far more common than the others (perhaps 1000 times more likely) would it be better to have a separate null classifier and then if this outputted non null then the second classifier would be used.
+
+Any suggestions?
+"
+"['machine-learning', 'gradient-descent', 'policy-gradients', 'softmax-policy']"," Title: Eligibility vector for softmax policy with policy gradientsBody: There is this nice result for policy gradients that the gradient of some performance measure, $\nabla v_{\pi_{\theta}}(s_0)$ (here, in the episodic case for the starting state $s_0$ and policy $\pi$, parametrised by some weights $\theta$) is equal to the expectation gradient of the logarithm of the policy, i.e.
+
+$$\nabla v_{\pi_{\theta}}(s_0)=\mathbb{E}\Big{[}\sum_{t=0}^{T-1}\nabla_\theta\log(\pi_{\theta}(a_t|s_t))\cdot G_t\Big{]},$$
+
+where $G_t$ is the discounted future reward from state $s_t$ onward and $s_T$ the final state of some trajectory $(s_0, a_0, s_1, a_1, ..., s_{T-1}, a_{T-1}, s_T)$.
+
+Now, when using a softmax policy, $\nabla_\theta\log(\pi_{\theta}(a_t|s_t))$ can be written as
+
+$$\nabla_\theta\log(\pi_{\theta}(a_t|s_t))=\phi(s_t,a_t)-\mathbb{E}[\phi(s_t,\cdot)],$$
+
+where $\phi(s,a)$ is some input vector of a state-action tuple.
+
+However: what exactly is this vector? A typical input with policy gradients (for example in a neural network) is a feature vector for the state and the output a vector with dimensions equal to the number of actions, e.g. $(14, 15, 11, 17)^T$ for four possible actions. The softmax function now scales these outputs, which results in the probabilities $(.042, .114, .002, .842)^T$ in this example.
+
+What I would usually do in neural networks is take some input vector, for example something that describes if there are borders in a grid world, e.g. $\phi(s)=(1, 0, 0, 1)^T$, and multiply that with my weight matrix $\theta$ (and add biases b), i.e. $\theta\phi(s)+b$. So, continuing above example, $1\cdot \theta_{1,1} + 0\cdot \theta_{1,2} + 0\cdot \theta_{1,3} + 1\cdot \theta_{1,4} = 14$ and $1\cdot \theta_{2,1} + 0\cdot \theta_{2,2} + 0\cdot \theta_{2,3} + 1\cdot \theta_{2,4} = 15$.
+
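+In code, what I usually do looks roughly like this (my own numpy sketch, with made-up weights; in the example above the logits would be $(14, 15, 11, 17)^T$):
+
+import numpy as np
+
+phi_s = np.array([1.0, 0.0, 0.0, 1.0])   # state features phi(s)
+theta = np.random.randn(4, 4)            # one row of weights per action
+b = np.zeros(4)
+
+logits = theta @ phi_s + b               # one score per action
+policy = np.exp(logits - logits.max())
+policy = policy / policy.sum()           # softmax over the four actions
+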
+But what is $\phi(s,a)$ here? And how would I compute $\nabla_\theta\log(\pi_{\theta}(a|s))=\phi(s,a)-\mathbb{E}[\phi(s,\cdot)]$?
+"
+"['machine-learning', 'ai-design', 'tensorflow', 'keras', 'regression']"," Title: How to perform regression with multiple numeric (positive and negative) inputs and one numeric output?Body: I have a dataset with different types of numerical values (both negative and positive numerical values) for the inputs (for example, -40, -35, 1, 25, 39, etc., that is, multiple inputs) and single output numerical value (either negative or positive).
+
+I have tried to use linear regression, but I haven't been so successful and I think one of the reasons is negative values.
+
+What is the best way to deal with this scenario? What model should I use?
+
+I am using Keras for my AI model.
+"
+"['game-ai', 'search', 'minimax', 'alpha-beta-pruning', 'checkers']"," Title: To deal with infinite loops, should I do a deeper search of the best moves with the same value, in alpha-beta pruning?Body: I have implemented minimax with alpha-beta pruning to play checkers. As my value heuristic, I am using only the summation of material value on the board regardless of the position.
+My main issue lies in actually finishing games. A search with depth 14 draws against depth 3, since the algorithm becomes stuck in a loop of moving kings back and forth in a corner. The depth 14 player has a significant material advantage, with four kings and a piece against a single king; however, it moves only one piece.
+I have randomly selected a move from the list of equally valued moves and this leads to more interesting games (thus preventing the loop). However, whichever player used this random tactic ended up far worse off.
+I am not quite sure how to solve this problem. Should I do a deeper search of the best moves with the same value? Or is the heuristic at fault? If so, what changes would you suggest?
+So far I have tried a simple genetically generated algorithm that optimizes a linear scoring function (that accounts for the position). However as the algorithm optimized, it led to only draws and the same king loop.
+Any suggestions for how to stop this king loop are very welcome!
+"
+"['machine-learning', 'deep-learning', 'natural-language-processing']"," Title: How to detect any native language when written in Latin characters?Body: Assume somebody knows only to write in Latin characters. If they write words of any other language (example: Hindi, French, Latin) using the Latin alphabet, how can I detect that language?
+
+Example: if they write a Hindi word using the Latin alphabet:
+
+ kya kar raha hai
+
+ >> the output is Hindi language
+
+"
+"['deep-learning', 'convolutional-neural-networks', 'regression']"," Title: Optimisation of dependence of efficiency of CNN on training dataBody: I got a large dataset of images (dimensions of 16 x 16, 250k samples) and corresponding spherical coordinates (distributed uniformly in each coordinate). On these, I trained a convolutional regression network to directly yield the coordinates for a provided image. The network itself is rather simple and consists of multiple convolutional layers where the last of them is flattened and followed by some dense layers to get the desired output. Since the input image size is rather small, pooling layers are obsolete I think (doesn't make much difference if used).
+
+If I now train on all of the data I will get reasonable results in the end. But, if I filter them before training, i.e. only use coordinates which are limited by a certain radius, the network will increase its performance quite a bit, but will only work well if my input image corresponds to the parameter space used during training.
+
+So my question is whether the network isn't deep enough or has the wrong architecture to perform on the complete dataset with high confidence, or whether this is expected behaviour. One naive approach would be to train the network for different coordinate ranges and to store the weights for each of them. Then, you could train a classifier to decide in which range you are and use the previously determined weights for the network accordingly. But this seems strange to me, as a single network should somehow be able to achieve the same without this weird architecture, I think.
+
+I would be pleased if someone has an idea how I could optimise the performance of my network to yield the best results over the whole coordinate space.
+"
+"['neural-networks', 'machine-learning']"," Title: Finding the right modelBody: Let us say that i have two ball throwing machines which has some algorithm running in the back-end for releasing the balls. One machine shows it throws 5 balls in 1 sec. Other shows the exact distribution of how many balls were thrown in each 0.2 secs (say the distribution is: 2,1,0,1,1) but the sum is 5 balls/sec for this machine too. Can i use this data and some other independent parameters like speed, direction etc as inputs and predict the similar distribution for lower accuracy machine.?
+
+Re-framing my question:
+
+I am searching for an apt supervised model for the following use case:
+
+If I have a sum (say 10) and it can be distributed in a predefined number of bins (say 5) in a number of ways for instance:
+
+1. 1, 2, 5, 0, 2
+
+2. 0, 0, 3, 7, 0
+
+etc.
+
+The distribution bins always have whole numbers and the sum is also a whole number. The distribution depends on a number of factors and patterns which can be learned from the volumetric data. Hence, if I am able to load more than one sum (say n sums) and output the corresponding n*5 distributions, it will be better for precise prediction (as per my intuition). I tried using some networks, but they are not doing much good.
+"
+['unsupervised-learning']," Title: In unsupervised learning, what is meant by ""finding the probability of an image""?Body: The specific problem I'm having is with a Fully Visible Belief Network. It is an explicit density model (though I don't know what quantifies something being such) that uses the chain rule to decompose the likelihood of an image x into a product of 1-d distributions.
+
+
+
+What is meant by ""the likelihood of an image x""? With respect to what? I assume it refers to how common this image would be in the data set it is selected from? Like if you had 1000 images, 800 of which were white and 200 of which were black, the model should ideally output 0.2 for any black image inputted? Obviously with more complicated clustering like dogs vs cats it'd be a bit different, but that's my intuition. Is that correct?
+
+Also as a side question, that equation looks very wrong. If you have an image of $1048\times720$ pixels, and say every pixel evaluates to have a probability of 0.9, you'd expect the final probability of the image to be 0.9 or 90%. But according to that equation, it's $0.9^{720*1048}$, which is stupidly small, essentially 0. What's going on here?
+"
+"['neural-networks', 'keras', 'multilayer-perceptrons']"," Title: Applying Machine Learning to 2D Laser Scanner DataBody: We are using 2D Laser Scanner to scan various objects of different geometric shapes for e.g. cylinder, spiked, cylinder with notch, cylinder with curved edges e.t.c. The dataset contains points in the format [x, y] with the dimension of 1 complete scan being 160x2. The goal is to use these scan points to classify the various shapes.
+
+I have used a multilayer NN with sigmoid as the final layer and Adadelta optimizer for this problem but the accuracy reaches only upto 70%.
+
+Can anyone recommend a proper model that can be used for Laser Scanner Data Classification?
+
+
+
+ MODEL
+
+import keras
+from keras import optimizers
+from keras.models import Sequential
+from keras.layers import Dense, Dropout
+
+def baseline_model():
+    model = Sequential()
+    model.add(Dense(2048, input_dim=160, activation='relu'))
+    model.add(Dropout(0.1))
+    model.add(Dense(1024, activation='relu'))
+    model.add(Dropout(0.1))
+    model.add(Dense(512, activation='relu'))
+    model.add(Dropout(0.2))
+    model.add(Dense(256, activation='relu'))
+    model.add(Dropout(0.3))
+    model.add(Dense(128, activation='relu'))
+    model.add(Dropout(0.4))
+    model.add(Dense(64, activation='relu'))
+    model.add(Dropout(0.5))
+    model.add(Dense(32, activation='relu'))
+    model.add(Dropout(0.5))
+    model.add(Dense(6, activation='softmax'))  # 6 shape classes
+    Adam = keras.optimizers.Adam(lr=0.001)     # defined but not used below
+    Adadelta = optimizers.Adadelta(lr=1)
+    model.compile(loss='categorical_crossentropy', optimizer=Adadelta, metrics=['accuracy'])
+    return model
+
+"
+"['machine-learning', 'classification', 'decision-trees', 'random-forests', 'k-means']"," Title: How can I classify instances into two categories and then into sub-categories, when the number of features is high?Body: I'm working with a problem where I have a lot of variables for different cases of different users. Depending on the values of the different variables of a concrete user in a concrete case, the algorithm must classify that user in that case as:
+
+- Positive
+- Negative
+
+But if the user is classified as positive, it must be classified as:
+
+
+- Positive normal
+- Positive high
+- Positive extra-high
+
+
+If a case is positive, depending on the values of a part of the parameters, we know that the probability of it being, for example, positive normal is higher or lower.
+
+To sum up, I see the problem as a spam detector with different spam types.
+
+May this work if I apply an algorithm like:
+
+
+- Random Forest
+- Decision Tree
+
+
+Or maybe I can include the negative case as a new group and then implement a K-means algorithm? Maybe this would help to find new groups of parameters that will say that the concrete case forms part of a group for sure.
+
+Which one will fit best with a lot of parameters?
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'object-recognition', 'object-detection']"," Title: FasterRCNN's RPN network trainingBody: I would like to know if my understanding of RPN training is correct, and if never training the RPN on some specific anchor box is bad (i.e if the anchor never sees good nor bad examples).
+
+To make my point clear, assume we have two functions. $f_{\theta_1}$ which represents the backbone that outputs a feature map of size $n$ (assume flattened) for an image of size $m$ (WLOG assume the image is flattened)
+$$
+f_{\theta_1}: \mathbb{R}^m \to \mathbb{R}^n
+$$
+and $f_{\theta_2}$ that represents the 'objectness' of each anchor box. We can suppose that $f_{\theta_2}$ and $f_{\theta_1}$ are convolutional neural networks, where $\theta_1, \theta_2$ are the networks' parameters. For simplicity, assume the RPN does not output bounding boxes correction, and only outputs the probabilities that an anchor box is an object or not.
+$$ f_{\theta_2}: \mathbb{R}^n \to \mathbb{R}^{k \cdot n}$$
+We can assume $k=1$, which is the number of boxes per anchor.
+
+If my understanding is correct, we select $p$ good proposals $G_p$ and $p$ bad proposals $B_p$ for training the RPN, which are indices of good and bad predictions. In other terms, if $x$ is an image (assume flattened), then $f_{\theta_2}(f_{\theta_1}(x)) = y$, and we only back-propagate the loss for the coordinates $B_p$ and $G_p$ of $y$. For instance, if $p=1$, $G_p = \{i\}$ and $B_p = \{j\}$ with $1 \leq i \neq j \leq n$, then we only compute the loss of the RPN for coordinates $i$ and $j$ of $y$.
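+
+To make my understanding concrete, here is a rough sketch (my own, framework-agnostic numpy, all names hypothetical) of what I think the training step does with the sampled indices:
+
+import numpy as np
+
+def rpn_loss_on_samples(y_pred, labels, good_idx, bad_idx):
+    # y_pred: objectness probabilities for all n anchors (output of f_theta2 composed with f_theta1)
+    # labels: 1 for anchors indexed by good_idx, 0 for anchors indexed by bad_idx
+    idx = np.concatenate([good_idx, bad_idx])
+    p = y_pred[idx]
+    t = labels[idx]
+    # binary cross-entropy only over the sampled anchors; the other
+    # coordinates of y_pred receive no gradient at all
+    eps = 1e-7
+    return -np.mean(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps))
+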
+My questions are:
+
+1- Is my understanding correct? and if not, how do we perform training?
+
+2- Assuming my understanding is right or partially right about the last step, what happens if we never train component $y_0$ of the RPN's output, for example (i.e. we never back-propagate the loss through some components of $y$)? Wouldn't this be a problem (i.e. hurt performance, or make network training not go well at all in some cases)?
+"
+"['reinforcement-learning', 'q-learning']"," Title: Does using the softmax function in Q learning not defeat the purpose of Q learning?Body: It is my understanding that, in Q-learning, you are trying to mimic the optimal $Q$ function $Q*$, where $Q*$ is a measure of the predicted reward received from taking action $a$ at state $s$ so that the reward is maximised.
+
+I understand for this to be properly calculated, you must explore all possible game states, and as that is obviously intractable, a neural network is used to approximate this function.
+
+In a normal case, the network is updated based on the MSE of the actual reward received and the network's predicted reward. So a simple network that is meant to choose a direction to move would receive a positive gradient for all state predictions for the entire game and do a normal backprop step from there.
+
+However, to me, it makes intuitive sense to have the final layer of the network be a softmax function for some games. This is because in a lot of cases (like Go for example), only one ""move"" can be chosen per game state, and as such, only one neuron should be active. It also seems to me that would work well with the gradient update, and the network would learn appropriately.
+
+But the big problem here is, this is no longer Q learning. The network no longer predicts the reward for each possible move, it now predicts which move is likely to give the greatest reward.
+
+Am I wrong in my assumptions about Q learning? Is the softmax function used in Q learning at all?
+"
+"['neural-networks', 'computer-vision', 'terminology', 'papers', 'action-recognition']"," Title: What is ""temporal depth""?Body: I need some explanation about the following paragraph (page 3) from the paper A Novel Approach for Robust Multi Human Action Detection and Recognition based on 3-Dimentional Convolutional Neural Networks.
+
+
+ We introduce a 3D convolution neural network with the following notations: $I(x, y, d)$ as an input video with a size of $x \times y$ and $d$ the temporal depth
+
+
+What is ""temporal depth""? Is it the number of frames?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'artificial-neuron']"," Title: Building an AI that generates text by itselfBody: Now I know this might break some StackExchange rules and I am definitely open for taking the thread down if it does!
+I am trying to build an AI that can write its own book, and I have no idea where to start or what the appropriate algorithms and approaches are.
+How should I start and what do I exactly need for such a project?
+"
+"['machine-learning', 'natural-language-processing']"," Title: Need examples for the following definitionsBody: I am currently reading the paper ""Similarity of Narratives"" by Loizos Michael (link below) and I am having a hard time figuring out the definitions listed (p.107 - p.109).
+
+Could someone please give me a practical example for each of the definitions?
+
+Article: http://narrative.csail.mit.edu/cmn12/proceedings.pdf
+"
+"['deep-learning', 'sequence-modeling']"," Title: Sequence-to-Sequence models without specifying the start and end of sentencesBody: Is there a seq-to-seq model which does not require to know the start and end of a sentence? I need to model a system which gets a long sequence of words and creates a long sequence of tokens as long as the input. For example it takes a sequence of 1000 words and creates a sequence of 1000 tokens, each token corresponds to an input word.
+"
+"['machine-learning', 'convolutional-neural-networks', 'object-detection', 'adversarial-ml']"," Title: Can a trained object detection model deal with variations of the input?Body: Suppose an object detection algorithm is good at detecting objects and people when an object and person is close to a camera and upright. If the person walks farther away from the camera and is ""upside-down"" from the perspective of the camera (e.g. a fisheye camera), should the algorithm still be good at detecting people and objects in this position?
+"
+"['game-ai', 'graph-theory', 'path-finding']"," Title: How to solve Snake Game with a Hamiltonian graph algorithm?Body: I wonder if there is a way to solve the snake game using a Hamiltonian cycle algorithm.
+If there is a way:
+
+
+- how to apply it?
+- what data structure will be used with algorithm?
+- time complexity and space complexity?
+- is this algorithm an optimal solution, or is there a better way?
+
+"
+"['game-ai', 'optimization']"," Title: How should I weight the factors that affect the choice of an action in a strategy board game with multiple actions?Body: I have written an AI that plays a strategy board game. There are lots of different types of moves (e.g. attack, defend, help ally colony, etc.).
+
+I calculate the best moves to do depending on a variety of factors, such as the value of nearby enemy colonies, the number of armies the colony currently has, etc (each of these has separate weightings). I'm trying to find the optimal weighting for each of the different factors.
+
+Currently, I decide the best configuration of parameters in a King of the Hill style tournament. I choose random values between a suitable range for each of the different parameters and then play two of these AI against each other 20 times. I have a total of 100 AI that play against the king, and then take the final king as the best AI.
+
+The problem is that this is quite slow and I feel it's very inefficient, as a lot of the AI don't play well at all (probably due to the randomness of parameter values).
+
+I'm wondering if there's a more efficient way to determine the optimal value of parameters?
+"
+"['natural-language-processing', 'sentiment-analysis', 'text-classification']"," Title: What is the most accurate pretrained sentiment analysis model by 2019?Body: I've been using OpenAI's 2017 Sentiment Neuron implementation (https://github.com/openai/generating-reviews-discovering-sentiment) for a while, because it was easy to set up and was the most accurate on benchmarks. What is the most accurate alternative now that I should use?
+"
+"['neural-networks', 'q-learning']"," Title: Is the following the correct implementation of the Q learning algorithm for a neural network?Body: I just wanted to confirm that my understanding of Q learning was correct (with respect to a neural network).
+
+The network, Q, is initialised randomly.
+for n ""episodes"":
+ The state, s1, is initialised randomly
+ while s1 != terminal state:
+ s1 is fed into Q to get action vector q1
+ action a1 is chosen based off the max of q1 (or randomly)
+ state s2 is found by progressing s1 based on a1
+
+ s2 is fed into Q to get action vector q2
+
+ The expected output for Q at q1, y, is found by:
+ {If s2 is terminal, it is the reward at s2
+ {Otherwise: reward at s2 + gamm*max(q2)
+ (The ""otherwise"" doesn't match bellmans equation as α=1)
+
+ Do gradient step where error = (y - max(q1))^2, only the max of q1 gets any gradient
+ s1 = s2
+
+
+This does not directly follow equations found by searching Q-learning as I find them rather ambiguous.
+
+I am also not taking into account storing network states (in this case, the network is called Q) for proper learning to avoid catastrophic forgetting, as I'm more concerned on getting the specifics right before good practice.
+"
+['dqn']," Title: DQN card game - how to represent the actions?Body: I want to train a DQN for a card game named Witches. It consists of 60 cards (1-14 of Yellow, Blue, Green, Red Cards + 4 Wizzards). The color of the first laid card has to be respected by the other players (if they have a card of this color in hand). The one who has the card with the highest number gets the played cards. Each collected red card gives you -1 point.
+
+With respect to this answer, I set up the inputs / state of the NN as binary, meaning I have 180 bool values (60 for whether card x is currently on the table, 60 for whether card x is in the AI's hand, 60 for whether card x has already been played).
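+
+A minimal sketch of how I would build such a binary state vector (card indices 0-59; the helper lists stand in for the real game state):
+
+import numpy as np
+
+def encode_state(on_table, in_hand, already_played):
+    # three 60-dim one-hot blocks -> one 180-dim binary state vector
+    state = np.zeros(180, dtype=np.float32)
+    state[list(on_table)] = 1.0
+    state[[60 + c for c in in_hand]] = 1.0
+    state[[120 + c for c in already_played]] = 1.0
+    return state
+
+s = encode_state(on_table=[3], in_hand=[10, 25, 59], already_played=[0, 1, 2])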
+
+How to design the outputs / actions?
+
+
+- If the ai is the first player of a round it can play any card
+- If the ai is not first player it has to respect the first played card (or play a wizzard)
+
+
+This means there is actually a list of available options. I then sort this list and have 60 output bools, which I set to 1 if the corresponding option is possible. Among these options the AI should then decide which option is correct. Is this the correct procedure?
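+
+For the output side, what I currently have in mind is masking the 60 network outputs with the list of legal cards before taking the argmax (again just a sketch with made-up indices):
+
+import numpy as np
+
+q_values = np.random.rand(60)            # one output per card
+legal = np.zeros(60, dtype=bool)
+legal[[10, 25, 59]] = True               # cards the rules currently allow
+
+masked = np.where(legal, q_values, -np.inf)
+action = int(np.argmax(masked))          # best card among the legal ones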
+
+Inconsistent / Varying Action Space
+This is what we have to deal with here. As explained here, I think a DQN, as well as policy gradient methods, is not the correct architecture to choose for solving such multi-agent card games. What architecture would you choose?
+
+General procedure?
+
+Assume I have 4 players, so do I have to get the old state before the ai is the next player and the new state is directly after this round is finished?
+
+my_game = game([""Tim"", ""Bob"", ""Lena"", ""Anja""])
+while True:
+ #1. Play unti AI has to move:
+ my_game.play_round_until_ai()
+
+ #2. Get old state:
+ state_old = agent.get_state(my_game)
+
+ #3. Get the action the AI should perform
+ action_ = agent.get_action(state_old, my_game)
+
+ #4. perform new Action and get new state
+ #reward rates how good the current action was
+ #score is the actual score of this game!
+ reward, done, score = my_game.finishRound(action_)
+
+ # 5: Calculate new state
+ state_new = agent.get_state(my_game)
+
+ #6. train short memory base on the new action and state
+ agent.train_short_memory(state_old, action_, reward, state_new, done)
+
+ #7. store the new data into a long term memory
+ agent.remember(state_old, action_, reward, state_new, done)
+
+ if done == True:
+ # One game is over, train on the memory and plot the result.
+ sc = my_game.reset()
+
+
+My code so far is available here: https://github.com/CesMak/witches_ai
+"
+"['machine-learning', 'deep-learning', 'comparison', 'game-theory']"," Title: What is the difference between game theory and machine learning?Body: What is the difference between game theory and machine learning?
+
+I had gone through the papers Deep Learning for Predicting Human Strategic Behavior, by Jason Hartford et al., and When Machine Learning Meets AI and Game Theory, by Anurag Agrawal et al., but I am not able to understand the difference.
+"
+"['convolutional-neural-networks', 'image-processing', 'convolution', 'hidden-layers', 'convolution-arithmetic']"," Title: Does each filter in each convolution layer create a new image?Body: Say I have a CNN with this structure:
+
+
+- input = 1 image (say, 30x30 RGB pixels)
+- first convolution layer = 10 5x5 convolution filters
+- second convolution layer = 5 3x3 convolution filters
+- one dense layer with 1 output
+
+
+So a graph of the network will look like this:
+
+
+
+Am I correct in thinking that the first convolution layer will create 10 new images, i.e. each filter creates a new intermediate 30x30 image (or 26x26 if I crop the border pixels that cannot be fully convolved)?
+
+Then the second convolution layer, is that supposed to apply the 5 filters on all of the 10 images from the previous layer? So that would result in a total of 50 images after the second convolution layer.
+
+And then finally the last FC layer will take all data from these 50 images and somehow combine it into one output value (e.g. the probability that the original input image was a cat).
+
+Or am I mistaken in how convolution layers are supposed to operate?
+
+Also, how to deal with channels, in this case RGB? Can I consider this entire operation to be separate for all red, green and blue data? I.e. for one full RGB image, I essentially run the entire network three times, once for each color channel? Which would mean I'm also getting 3 output values.
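+
+To make my mental model testable, here is a small Keras sketch of the structure I described (my own guess at the details, e.g. valid padding), which one could run to print the intermediate shapes, including how the 3 RGB channels enter as a single 30x30x3 input rather than three separate passes:
+
+from keras.models import Sequential
+from keras.layers import Conv2D, Flatten, Dense
+
+model = Sequential()
+model.add(Conv2D(10, (5, 5), activation='relu', input_shape=(30, 30, 3)))  # -> (26, 26, 10)
+model.add(Conv2D(5, (3, 3), activation='relu'))                            # -> (24, 24, 5)
+model.add(Flatten())
+model.add(Dense(1, activation='sigmoid'))
+model.summary()   # prints the output shape of every layer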
+"
+"['reinforcement-learning', 'ai-design', 'training', 'q-learning', 'genetic-algorithms']"," Title: Reinforcement learning for a 2D game involving two playersBody: I'd like to create an AI for a 2D game involving two players fighting against each other. The map look something like this (The map is a NxN array somehow randomly generated):
+
+
+
+Basically the players can look for objects such as weapons located on platforms, shoot at each other to cause damages etc. The output actions are therefore limited to a few such as going up, left, right, down, shooting angle, shooting boolean...
+
+I'm wondering if Reinforcement Learning using a neural network is a good approach to the problem. If so, how should I proceed for the learning phase? Should I force the AI to compete with a weaker version of itself at each iteration? Would it be computationally reasonable to train on a 4Gb GPU?
+Thanks in advance for your advice !
+"
+"['reinforcement-learning', 'rewards']"," Title: Should an RL agent directly observe the reward?Body: I am training an A2C reinforcement learning agent in a dense reward environment (where rewards are known and explicit at every timestep).
+
+Is it redundant to include the previous reward in the current observation space?
+
+The reward is implicitly observed by the agent when collecting experiences and updating the network parameters. But could it also be useful for the agent to explicitly observe the reward of its previous action?
+"
+"['machine-learning', 'dqn']"," Title: DQN, how to choose the reward function?Body: I built a simple AI system that tries to solve the 8 puzzle using DQN.
+The problem is, if the agent gets only a reward greater than zero when winning, the training will take a long time, so I made a smooth reward function instead: $R=(n/9)^3$, where $n$ is the number of pieces that are in the right position.
+
+The training became quicker, but the AI chose to keep 7 pieces out of 9 matched to get a return of $(7/9)^3/(1-\gamma) = 0.47/(1-\gamma) = 4.7$ for $\gamma=0.9$, so choosing to win and getting a reward of 1 doesn't make sense to the AI. Lowering $\gamma$ will result in the AI choosing instant reward instead of long-term reward, so that will not be very helpful; lowering the rewards of non-winning states will make the training very slow.
+
+So, how do I choose a good reward function?
+"
+"['neural-networks', 'deep-learning', 'deep-neural-networks']"," Title: Should I use deep learning to solve my task?Body: I need to predict the performance (CPI cycles-per-instruction) of 90 machines for the next hour (or day). Each machine has a thousand records (e.g. CPU and memory usage).
+
+Currently, I am using a neural network with one hidden layer for this task. I have 9 inputs (features), 23 neurons in the hidden layer, and one output. I am using the Levenberg-Marquardt algorithm. Examples of the inputs (or features) are the CPU and memory capacity and usage, and the machine_id. The output is the performance. I have 90 machines. Currently, I get an MSE of $0.1$ and an R of $0.80$.
+
+My dataset consists of 30 days. I trained my network on the first 29 days, and I use day 30 to test.
+
+I have been advised to use deep learning to have more flexibility and improve the MSE and R results. Could deep learning be helpful in this case? If yes, which deep learning model could I use to improve the results?
+"
+"['neural-networks', 'comparison']"," Title: What is the difference between artificial intelligence and artificial neural networks?Body: I have made several neural networks by using Brain.Js and TensorFlow.js.
+
+What is the difference between artificial intelligence and artificial neural networks?
+"
+['machine-learning']," Title: Need to analyze input CSV files and determine whether input file is good or bad w.r.t it's dataBody: We have a scenario where we need to implement an Artificial Intelligence solution which will evaluate the input data file of my Azure Data Factory pipeline and let us know whether the file is good or bad with respect to it's data.
+
+For example, I have 10 files with several rows each which are good input files, and 2 files with several rows which are bad input files.
+
+Each file, whether it is good or bad, has 26 columns. The above two files are bad for the reasons below.
+
+
+- One file has all empty values for one column which is not expected.
+- Another file has the value 'TRUE' for all rows of a specific column, which is also not the general scenario (some % of TRUE's and some % of records with FALSE will appear in good files).
+
+
+Like this, there may be several scenarios where the input file may be treated as a bad file.
+
+We want to implement an Artificial Intelligence solution which should analyze all the input files and identify the hidden patterns of the data in file and detect abnormal scenarios like above and should eventually mark the file as bad file.
+
+Please suggest for the approach or what components in Azure can help to achieve this kind of file sanity check.
+
+Thanks.
+"
+"['keras', 'artificial-neuron', 'multilayer-perceptrons']"," Title: Does a varying ANN model accuracy mean underfitting or overfitting?Body: Background:
+This is for a simulated robot with four legs, walking on a flat terrain. The ANN (an MLP) is given inputs as the robot's body angle, positions and angle of each leg with respect to the body and two points of contact with the terrain, on each leg (if there's no contact, the value is zero). The outputs are the four motor rates for each leg.
+I'm using Keras with the CNTK backend to train the network. The network has 30 input nodes (ReLU), one hidden layer with 64 nodes (ReLU) and an output layer with 4 nodes (Sigmoid). The optimizer is Adam.
+
+The training data has 2459 datapoints. Running model.validate with parameters testDataPercentage = 0.25, epochs = 50 and batchSize = 10 gave me: loss: 2.9509 - accuracy: 0.3283 - val_loss: 2.8592 - val_accuracy: 0.3213.
+
+But running model.evaluate multiple times gave me:
+
+['loss', 'accuracy'] [3.10, 0.50]
+['loss', 'accuracy'] [3.04, 0.23]
+['loss', 'accuracy'] [3.01, 0.11]
+['loss', 'accuracy'] [3.45, 0.02]
+['loss', 'accuracy'] [3.17, 0.40]
+['loss', 'accuracy'] [3.03, 0.27]
+['loss', 'accuracy'] [3.012, 0.46]
+
+
+Loss doesn't decrease much over 50 epochs. It reduces from maybe 3 to 2.8. That's it.
+
+Question:
+I don't understand why the accuracy varies so much for each run.
+If I add a hidden layer or even add a dropout of 0.2, the results are similar: loss: 2.9253 - accuracy: 0.2978 - val_loss: 2.9350 - val_accuracy: 0.3148.
+Reducing the number of hidden nodes to 15 gives the same results: loss: 2.9253 - accuracy: 0.2978 - val_loss: 2.9350 - val_accuracy: 0.3148
. Hidden layers with 64 nodes gives the same results. Training data with just 500 data points also gives the same results. Using sigmoid instead of ReLU gives slightly worse results.
+I've been through many tutorials and guides on how to debug or check why the neural network is not working, but they don't teach properly, what these values mean and how to adjust the network.
+Does the loss not decreasing mean that the network is not learning?
+Does the fact that increasing or decreasing the number of layers or the amount of training data makes no difference mean that the network is not learning?
+"
+"['machine-learning', 'reinforcement-learning', 'policy-gradients', 'probability-distribution', 'proximal-policy-optimization']"," Title: Deciding std. deviation for policy network output?Body: When I try to fit a Normal Distribution to the output of a policy network, for a continuous action space problem, what should be its standard deviation? mean for the distribution will directly be the output of the policy network.
+"
+"['neural-networks', 'machine-learning', 'convolutional-neural-networks', 'keras']"," Title: Is CNN capable of extracting the descriptive statistics featuresBody: I was trying to build a CNN model. I used time series data of daily temperature to predict if there is risk of an event, say bacteria growth. I calculated the descriptive statistics of the time series, ie. mean, variance, skewness, kurtosis etc for each observation and added them to input data.
+
+My question:
+
+Is a CNN capable of extracting the effect of the descriptive statistics on the label, meaning that adding these descriptive statistics features manually does not make a difference?
+(I will still try this later, but like to hear what you think about it). Thanks
+"
+"['neural-networks', 'deep-learning', 'datasets', 'gradient-boosting', 'boosting']"," Title: Are there tabular datasets where deep neural networks outperform traditional methods?Body: Are there (complex) tabular datasets where deep neural networks (e.g. more than 3 layers) outperform traditional methods such as XGBoost by a large margin?
+I'd prefer tabular datasets rather than image datasets, since most image dataset are either too simple that even XGBoost can perform well (e.g. MNIST), or too difficult for XGBoost that its performance is too low (e.g. almost any dataset that is more complex than CIFAR10; please correct me if I'm wrong).
+"
+"['neural-networks', 'machine-learning', 'training', 'backpropagation', 'activation-functions']"," Title: What would be the implications of mistakenly adding bias after the activation function?Body: I was looking at the source code for a personal project neural network implementation, and the bias for each node was mistakenly applied after the activation function. The output of each node was therefore $\sigma\big(\sum_{i=1}^n w_i x_i\big)+b$ instead of $\sigma\big(\sum_{i=1}^n w_i x_i + b\big)$. Assuming standard back-propagation algorithms are used for training, what (if any) impact would this have?
+"
+"['reinforcement-learning', 'deep-learning', 'philosophy', 'agi', 'intelligent-agent']"," Title: How can an AI freely make decisions?Body: Suppose a deep neural network is created using Keras or Tensorflow. Usually, when you want to make a prediction, the user would invoke model.predict
. However, how would the actual AI system proactively invoke their own actions (i.e. without the need for me to call model.predict
)?
+"
+"['classification', 'algorithm', 'probability']"," Title: Is there an algorithm for ""contextual recognition"" with probabilities?Body: Example 1
+
+An object is composed of 3 sub-objects.
+
+
+- Object 1: 90% looks like an eye 10% looks like a wheel
+- Object 2: 50% looks like an eye 50% looks like a wheel
+- Object 3: 90% looks like a mouth 10% looks like a roof
+
+
+OK. So now we want to determine what the whole-object is. Using this evidence maybe we find:
+
+
+- Combined Object: 90% looks like a face 10% looks like an upside-down car.
+
+
+But now, given this, we go back an reclassify the sub-objects.
+
+
+- Given whole object is a car object 2 is 99% an eye.
+
+
+I'm looking for an algorithm that sort of goes back and forth between the context and the sub-objects to classify both an object and its parts.
+
+(This is related to the rabbit-duck illusion: once such an algorithm has decided something is a rabbit, it classifies its parts as rabbit parts.)
+
+In other words, the algorithm needs to calculate conditional probabilities $P(A \mid B)$, but $B$ depends on what all the $A$'s are! So it's a feedback loop.
+
+Example 2
+There is a word ""funny"". The sub-letters are classified (wrongly) as F U M Y. It guesses that the word is FUNNY and then goes back and tries to reclassify the letters. Using this reclassification it is even more certain the word is FUNNY. And much more certain the letters in the middle are NN and not M.
+(Perhaps later the word is used in a sentence ""The fumes from this fire are really fumy"". And then using this new evidence it has to go back and reclassify the word again and now it thinks the word is FUMY with 80% probability).
+
+I have an idea: I would write all the conditional probabilities in a matrix $M$, then give starting probabilities $S_0$, and iterate like this: $S_{n+1}=M S_n$, until hopefully it converges on something.
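+
+A tiny NumPy sketch of that iteration (the matrix values are made up, just to show the fixed-point loop I have in mind):
+
+import numpy as np
+
+M = np.array([[0.7, 0.4],       # made-up conditional probabilities
+              [0.3, 0.6]])
+S = np.array([0.5, 0.5])        # starting probabilities S_0
+
+for _ in range(100):
+    S_next = M @ S
+    S_next /= S_next.sum()      # keep it a probability vector
+    if np.allclose(S_next, S):
+        break
+    S = S_next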
+"
+"['human-like', 'heuristics']"," Title: What are some common heuristics that might be innate?Body: Here's a question I might ask an AI to solve:
+
+""Colour the states of the USA using just 4 colours"".
+
+
+Now, a common heuristic a human might use is to start at one state and ""work their way out"". Or start at an edge state. Now this seems to work best rather than just colouring states in a random order like a computer might do. And it means a human is often better than a computer because a computer might just start colouring random states and get into trouble very quickly.
+
+(Also I wonder if this is a learned heuristic or would a child develop this on his/her own?)
+
+Now the question is, whether this heuristic is an innate optimisation strategy, or just laziness on the part of the human. i.e. colouring things close together takes less effort. Either way it leads to a good strategy.
+
+But I wonder if there are any other examples of heuristics that humans innately use without realising it, that lead to good strategies?
+
+One heuristic that computers often don't know is
+
+""If you're trying to play a game don't keep turn around and go the other way for no reason.""
+
+
+Again, a human would not do this, but again this could be laziness on the part of a human. It takes more effort to turn around than keep going in one direction to explore it.
+"
+"['reinforcement-learning', 'policy-gradients', 'proximal-policy-optimization']"," Title: Should I consider mean or sampled value for action selection in ppo algorithm?Body: When considering the policy network in PPO algorithm, we need to fit a Gaussian distribution to the neural network output (for a continuous action space problem). When I use this network to obtain action, should I sample from the fitted distribution or directly consider the mean output from the policy network? Also, what should be the standard deviation of the fitted distribution?
+"
+"['natural-language-processing', 'training', 'transformer', 'attention', 'gpu']"," Title: How does a transformer leverage the GPU to be trained faster than RNNs?Body: How does a transformer leverage the GPU to be trained faster than RNNs?
+I understand the parameter space of the transformer might be significantly larger than that of the RNN. But why can the transformer structure leverage multiple GPUs, and why does that accelerate its training?
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks', 'resource-request']"," Title: Which neural network should I use to transform the pixels of a video overtime?Body: I want to train a network with video data and have it transform pixel values overtime on an input video. This is for an art project and does not need to be super elaborate, but the videos I want to render out of this might be big in resolution and frame count.
+
+Which neural network would be appropriate for this task? I think I'm looking for a convolutional network (but I'm not too sure of that either). Which framework could easily allow me to do this?
+
+Now, I'm no proper programmer, but self-learned on the go. I know some Javascript, but rather would like to learn more Python. Ideally, the easier and simpler the better though: I would be perfectly happy with something like ""Uber Ludwig"" (except maybe that it's from Uber).
+"
+"['deep-learning', 'convolutional-neural-networks', 'batch-normalization']"," Title: Why do current models use multiple normalization layers?Body: In most current models, the normalization layer is applied after each convolution layer. Many models use the block $\text{convolution} \rightarrow \text{batch normalization} \rightarrow \text{ReLU}$ repeatedly. But why do we need multiple batch normalization layers? If we have a convolution layer that receives a normalized input, shouldn't it spit out a normalized output? Isn't it enough to place normalization layers only at the beginning of the model?
+"
+"['machine-learning', 'object-recognition', 'optical-character-recognition']"," Title: How can I recognise the name of a molecule given an image of its structure?Body: I want to recognize the name of the chemical structure from the image of the chemical structure. For example, in the image below, it is a benzene structure, and I want to recognize that it is benzene from the image (I should be able to recognize all these structures as benzene).
+
+How can I recognize the name of a molecule given an image of its structure?
+
+
+"
+"['convolutional-neural-networks', 'objective-functions', 'architecture', 'image-segmentation', 'u-net']"," Title: Using U-NET for image semantic segmentationBody: I'm getting literally crazy trying to understand how U-NET works. Maybe it is very easy, but I'm stuck (and I have a terrible headache). So, I need your help.
+
+I'm going to segment MRI to find white matter hyperintensities. I have a dataset with MRI brain images, and another dataset with the WMH. For each one of the brain images, I have one black image with white dots on it in the WMH dataset. These white dots represent where is a WMH on its corresponding brain image.
+
+This is an image from the MRI brain images:
+
+
+
+And this is the corresponding WMH image from the WMH dataset:
+
+
+
+How can I use the other images in network validation?
+
+I suppose there will be a loss function and this network is supervised learning.
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'dropout', 'regularization']"," Title: Why is dropout favoured compared to reducing the number of units in hidden layers?Body: Why is dropout favored compared to reducing the number of units in hidden layers for the convolutional networks?
+
+If a large set of units leads to overfitting and dropping out ""averages"" the response units, why not just suppress units?
+
+I have read different questions and answers on the dropout topic including these interesting ones, What is the "dropout" technique? and this other Should I remove the units of a neural network or increase dropout?, but did not get the proper answer to my question.
+
+By the way, it is weird that this publication A Simple Way to Prevent Neural Networks from Overfitting (2014), Nitish Srivastava et al., is cited as being the first on the subject. I have just read one that is from 2012:
+Improving neural networks by preventing co-adaptation of feature detectors.
+"
+"['convolutional-neural-networks', 'generative-adversarial-networks', 'image-generation', 'variational-autoencoder']"," Title: What makes GAN or VAE better at image generation than NN that directly maps inputs to imagesBody: Say a simple neural network's input is a collection of tags (encoded in some way), and the output is an image that corresponds to those tags. Say this network consists of some dense layers and some reverse (transpose) convolution layers.
+
+What is the disadvantage of this network, that directs people to invent fairly complicated things like GANs or VAEs?
+"
+"['neural-networks', 'deep-learning', 'backpropagation', 'regression']"," Title: Regression using neural networkBody: I'd like to ask for any kind of assistance regarding the following problem:
+
+I was given the following training data: 100 numbers, each one is a parameter, they together define a number X(also given).This is one instance,I have 20 000 instances for training.Next, I have 5000 lines given, each containing the 100 numbers as parameters.My task is to predict the number X for these 5000 instances.
+
+I am stuck because I only know of the sigmoid activation function so far, and I assume it's not suitable for cases like this where the output values aren't either 0 or 1.
+
+So my question is this : What's a good choice for an activation function and how does one go about implementing a neural network for a problem such as this?
+"
+"['machine-learning', 'deep-learning', 'classification', 'training', 'deep-neural-networks']"," Title: Relationship between model complexity (depth) and dataset sizeBody: I'm new to deep learning. I was wondering what's the relationship between a deep model complexity (e.g. total number of parameters, or depth) and the dataset size?
+
+Assuming I want to do a binary classification with 10K data for a problem like fire detection. How should I know what complexity I should go for?
+"
+"['neural-networks', 'machine-learning', 'classification']"," Title: Can a deep neural network be trained to classify an integer N1 as being divisible by another integer N2?Body: So I’ve been working on my own little dynamic architecture for a deep neural network (any number of hidden layers with any number of nodes in every layer) and got it solving the XOR problem efficiently. I moved on to trying to see if I could train my network on how to classify a number as being divisible by another number or not while experimenting with different network structures and have noticed some odd things. I know this is a weird thing to try and train a neural network to do but I just thought it might be easy because I can simply generate the training data set and test data set programmatically.
+
+From what I’ve tested, it seems that my network is only really good at identifying whether or not a number is divisible by a number who is a power of 2. If you test divisibility by a power of two, it converges on a very good solution very quickly. And it generalizes well on numbers outside of the training set - which I guess it kind of makes sense, as I’m inputting the numbers into the network in binary representation, so all the network has to learn is that a number n is only divisible by 2^m if the last m digits in the binary input vector are 0 (i.e. fire the output neuron if the last m neurons on the input layer don't fire, else don't). When checking divisibility by non-powers of two, however, there does not seem to be as much of a ""positional"" (maybe that's the word, maybe not) relationship between the input bits and whether or not the number is divisible. I thought though, that if I threw more neurons and layers at the problem that it might be able to solve classifying divisibility by other numbers – but that is not the case. The network seems to converge on not-so-optimal local minima on the cost function (for which I am using mean-squared-error) when dividing by numbers that are not powers of 2. I’ve tried different learning rates as well to no avail.
+
+Do you have any idea what would cause something like this or how to go about trying to fix it? Or are plain deep neural networks maybe just not good at solving these types of problems?
+
+Note: I should also add that I've tried using different activation functions for different layers (like having leaky-relu activation for your first hidden layer, then sigmoid activation for your output layer, etc.) which has also not seem to have made a difference
+
+Here is my code if you feel so inclined as to look at it: https://github.com/bigstronkcodeman/Deep-Neural-Network/blob/master/Neural.py
+
+(beware: it was all written from scratch by me in the quest to learn so some parts (namely the back-propagation) are not very pretty - I am really new to this whole neural network thing)
+"
+"['variational-autoencoder', 'graphs', 'evidence-lower-bound']"," Title: Why does the ELBO come to a steady state and the latent space shrinks?Body: I'm trying to train a VAE using a graph dataset. However, my latent space shrinks epoch by epoch. Meanwhile, my ELBO plot comes to a steady state after a few epochs.
+I tried to play around with parameters and I realized, by increasing the batch size or training data, this happens faster, and ELBO comes to a steady state even faster.
+Is this a common problem, with a general solution?
+With these signs, which part of the algorithm is more possible to cause the issue? Is it an issue from computing loss function? Does it look like the decoder is not trained well? Or it is more likely for the encoder not to have detected features that are informative enough?
+
+Edit:
+I figured out that the problem is probably caused by the loss function. My loss function is a combination of the KL term and reconstruction loss. In the github page for graph auto-encoders, it is suggested that the loss function should include normalization factors according to the number of nodes in the graph. I haven't figured it out exactly, but by adding a factor of 100 to my reconstruction loss and a factor of 0.5 to my KL loss, the algorithm is working fine. I would appreciate it if someone can expand on how this exactly is supposed to be set up.
+"
+"['research', 'papers', 'glove']"," Title: Are most things generally discovered because they work empirically and later justified mathematically, or vice-versa?Body: In the original GloVe paper, the authors discuss group theory when coming up with the equation (4). Is it possible that the authors came up with this model, figured out it was good, and then later found out various group theory justifications that justified it? Or was it discovered sequentially as it is described in the paper?
+More generally: In AI research, are most things discovered because they work empirically and later justified mathematically, or is it the other way around?
+"
+['machine-learning']," Title: How are data assimilation and machine learning different?Body: This might seem like a really silly question, however I have not been able to find any answers to it on the internet.
+
+From my rough understanding of data assimilation, it combines data with numerical models by having weights on the adjustment initial condition parameters? That sounds really similar to what machine learning/neural network does.
+
+What are the distinct differences?
+"
+"['natural-language-processing', 'recurrent-neural-networks', 'sequence-modeling']"," Title: Can a character-level Seq2Seq setup learn to perfectly reconstruct structured data like name strings?Body: If not perfect, how well can they do? For example, if I give the Seq2Seq setup a name it did not see in the training process, can it output the same name without error?
+
+Example
+
+
+
+name = ""Will Smith""
+output = DecoderRNN(EncoderRNN(name))
+can_this_be_true = name == output
+
+"
+"['convolutional-neural-networks', 'video-classification']"," Title: Most suitable model for video classification with a fixed cameraBody: Consider a fixed camera that records a given area. Three things can happen in this area:
+
+
+- No action
+- People performing action A
+- People performing action B
+
+
+I want to train a model to detect when action B happens. A human observer could typically recognize action B even with a single frame, but it would be much easier with a short video (a few seconds at low FPS).
+
+What are the most suitable models for this task? I read this paper where different types of fusion are performed in order to feed different frames to a CNN. Are there better alternatives?
+"
+"['machine-learning', 'deep-learning', 'object-detection', 'optical-character-recognition']"," Title: How can I detect diagram region and extract (crop) it from a research paperBody: How can I detect diagram region and extract(crop) it from a research paper
+
+"
+['linear-algebra']," Title: CSP Formulation of an algebraic problemBody:
+
+Is anyone able to explain how to do this? I'm not looking for the complete answer, I would settle for a ""how to for dummies"" explanation of how this is supposed to be solved.
+
+I understand constraints, but in the first example it would seem to me that the second half of the first partial assignment $x_2=-1$ is a violation of the constraint 2, that says $x_1 > x_2$... when down below it says both $x_1$, and $x_2$, are $-1$
+"
+"['dqn', 'deep-rl']"," Title: Is the training of multi-version of the same system at the same time affecting the results?Body: I'm using DQN to train multi-version of the same system and there is a small difference when I run them both separately. However, my result suddenly dropped in both versions if I run them both at the same time. I tried again but I got the same results with slightly different. Would it be affecting my results if I run multi-version of my system at the same time?
+
+Is there any explanation for that and How can I get accurate results when I train multi-version of the same system at the same time?
+"
+"['neural-networks', 'tensorflow', 'keras', 'recurrent-neural-networks', 'long-short-term-memory']"," Title: What is the correct input shape for my LSTM network?Body: My professor gave us a workshop where we have to do classification of a dataset of ECG signals between healthy and unhealthy types using LSTM. Each signal consists of 1,285 time steps.
+
+What my prof did was to cut up each signal into segments 24 time steps long, advancing 6 steps for the next segment. In other words, for the following signal
+
+0, 1, 2, ... 1283, 1284, 1285
+
+
+It will be cut up into the following segments
+
+0, 1, 2, ... 21, 22, 23
+6, 7, 8, ... 27, 28, 29
+12, 13, 14, ... 33, 34, 35
+...
+
+
+These segments are the input to the LSTM model for each signal to be classified.
+
+Using the code that my prof used to cut the signal into segments, and feeding that into Tensorflow-Keras InputLayer, it tells me that the output shape is (None, 211, 24)
.
+
+However, I am told by a classmate that the correct implementation for Tensorflow-Keras LSTM should be (None, 24, 211)
. He tried clarifying with the prof but it seems the prof doesn't really understand what my classmate's trying to say. I tried to Google but examples I can find online are either of the two cases below:
+
+
+- The format of an input signal should be
(None, # timesteps, # features)
. However, in my problem the signal only has one feature that is chopped up into segments. In this example I found online, there's no mention of segments.
+- The only example of signals being cut into segments I can find is when each segment is a single input. That is not the case for me. My input is a single signal, cut up into 211 segments which collectively make up a single input.
+
+
+Which is the correct output shape for my LSTM model? My prof used his method and achieved 96% classification accuracy, and our assignment is to surpass that rate, but when I tested using what my classmate said is the correct shape, my LSTM model with the exact same architecture, hyperparameters etc. gave me a flat 74.04% accuracy from the 1st epoch all the way to the end without ever changing. So which is wrong?
+"
+"['machine-learning', 'deep-learning']"," Title: Would AI be appropriate for converting unstructured text into an XML?Body: I need to understand whether it is better to use AI algorithms (ML, DL, etc.) instead of the classic parser (based onto grammars with regular expression and automaton) for the following task: structuring in XML an unstructured plain text.
+
+The text is a legal document, so the structure is well defined and a classic parser could do a good job.
+
+In the case AI could be a viable way, what would be an appropriate approach for the task?
+"
+"['machine-learning', 'pattern-recognition', 'clustering']"," Title: How to detect patterns in salary distribution if we are suspecting malicious distribution based on employee's region?Body: We are having suspects in salary distribution in our organisation due to employee's region. The data we have is as the following:
+
+Name
+Region
+Work Position (4 main positions)
+Salary
+Gender
+
+
+What technique should we use in Machine learning to check and detect malicious salary distribution? By using clustering?
+"
+"['deep-learning', 'definitions', 'convergence', 'generalization']"," Title: When exactly is a model considered over-parameterized?Body: When exactly is a model considered over-parameterized?
+There are some recent researches in Deep Learning about the role of over-parameterization toward generalization, so it would be nice if I can know what exactly can be considered as such.
+A hand-wavy definition is: over-parameterized model is often used to described when you have a model bigger than necessary to fit your data.
+In some papers (for example, in A Convergence Theory for Deep Learning
+via Over-Parameterization), over-parameterization is described as:
+
+they have much more parameters than the number of training samples
+meaning that the number of neurons is polynomially large comparing to the input size
+the network width is sufficiently large: polynomial in $L$, the number of layers, and in $n$, the number of samples
+
+Shouldn't this definition depend on the type of input data as well?
+For example, I fit:
+
+- 1M-parameters model on 10M samples of 2 binary features, then it should not be over-parameterized, or
+
+- 1M-parameters model on 0.1M samples of 512x512 images, then is over-parameterized, or
+
+- the model in the paper Exploring the Limits of
+Weakly Supervised Pretraining "IG-940M-1.5k ResNeXt-101 32×48d" with 829M parameters, trained on 1B Instagram images, is not over-parameterized
+
+
+"
+"['reinforcement-learning', 'philosophy', 'agi', 'chinese-room-argument', 'artificial-curiosity']"," Title: Why is reinforcement learning not the answer to AGI?Body: I previously asked a question about How can an AI freely make decisions?. I got a great answer about how current algorithms lack agency.
+The first thing I thought of was reinforcement learning, since the entire concept is oriented around an agent getting rewarded for performing a correct action in an environment. It seems to me that reinforcement learning is the path to AGI.
+I'm also thinking: what if an agent was proactive instead of reactive? That would seem like a logical first step towards AGI. What if an agent could figure out what questions to ask based on their environment? For example, it experiences an apple falling from a tree and asks "What made the Apple fall?". But it's similar to us not knowing what questions to ask about say the universe.
+"
+"['computer-vision', 'autonomous-vehicles', 'robotics']"," Title: How can I avoid displaying the velocity in the updated tracklets?Body: In target tracking, the dimensions of objects change, especially if they are detected using a LIDAR sensor. Also, the static objects in consecutive frames are not 100% static, their position changes a little bit, due to the point cloud segmentation algorithms (which is somehow expected).
+
+After I associate a tracklet (which maintains the objects previous dimension) and a measurement (that has changed its dimension in the current frame) and perform a Kalman update, a small velocity is induced in the new updated tracklet, even if my object is static (I am considering the reference point of the object and tracking its center).
+
+Is there any solution for not inducing and displaying such a velocity in the updated tracklets?
+"
+"['deep-learning', 'recommender-system']"," Title: Anyone familiar with Bilateral Recommendation System? And suggest any related papers?Body: I'm working on Bilateral Recommendation System. But not able to find much related papers. Could anyone suggest any papers relative?
+
+Thanks
+"
+"['reinforcement-learning', 'rewards']"," Title: Why cannot an AI agent adjust the reward function directly?Body: In standard Reinforcement Learning the reward function is specified by an AI designer and is external to the AI agent. The agent attempts to find a behaviour that collects higher cumulative discounted reward. In Evolutionary Reinforcement Learning the reward function is specified by
+the agent’s genetic code and evolved in simulated Darwinian evolution over multiple generations.
+Here too the AI agent cannot directly adjust the reward function and instead adjusts its behaviour
+towards collecting higher rewards. Why do both approaches prevent the AI agent from changing its reward function at will? What happens if we do allow the AI agent to do so?
+"
+"['neural-networks', 'convolutional-neural-networks', 'classification', 'image-segmentation', 'u-net']"," Title: How to train image segmentation task with only one class?Body: Is there a neural network that has architecture optimizations for segmenting only one class (object and background)? I have tried U-net but it is not providing good enough results.
+
+I am wondering if this can be due to the fact that my dataset has different image resolutions/aspect ratios.
+"
+"['object-detection', 'image-processing']"," Title: Recognition of lines in a chalkboardBody: I'm trying to develop a real-time application that, from the sequence of chalkboard images captured by a webcam, recognizes the lines being draw on it.
+
+It must be able of recognize the lines from the chalkboard background, filter the presence in the image of the teacher, and translate these lines to some representation, something as a list of basic events like ""start of line at xxx,xxx"", ""continue line at xxx,xxx"", ...
+
+After several days looking for references and bibliography, none is found. The most similar are the character recognition applications, in particular when they have a stroke recognition stage.
+
+Any hint ?
+
+Input will be a sequence as this one, this one or this one (just without the presence of the students). I've expect the teacher not hidding his hand. We could imagine a start with an empty chalkboard.
+
+Thanks.
+
+Note: I am looking for more than an answer which says only something similar to ""you can use a deep learning training it with two classes"", without details or references.
+"
+"['neural-networks', 'machine-learning', 'learning-algorithms', 'hebbian-learning']"," Title: How can we prove that an autoassociator network will continue to perform if we zero the diagonal elements of a weight matrix?Body: How can we prove that an auto-associator network will continue to perform if we zero the diagonal elements of a weight matrix that has been determined by the Hebb rule? In other words, suppose that the weight matrix is determined from $W = PP^T- QI$, where $Q$ is the number of prototype vectors.
+
+I have been given a hint: show that the prototype vectors continue to be eigenvectors of the new weight matrix.
+
+This is a question from Neural Network Design (2nd Edition) book by
+Martin T. Hagan, Howard B. Demuth, Mark H. Beale, Orlando De Jesus .
+
+Resource : E7.5 p 224-225
+"
+"['neural-networks', 'convolutional-neural-networks', 'keras', 'filters', 'convolution-arithmetic']"," Title: How to compute the number of weights of a CNN?Body: How can we theoretically compute the number of weights considering a convolutional neural network that is used to classify images into two classes:
+
+- INPUT: 100x100 gray-scale images.
+- LAYER 1: Convolutional layer with 60 7x7 convolutional filters (stride=1, valid
+padding).
+- LAYER 2: Convolutional layer with 100 5x5 convolutional filters (stride=1, valid
+padding).
+- LAYER 3: A max pooling layer that down-samples Layer 2 by a factor of 4 (e.g., from 500x500 to 250x250)
+- LAYER 4: Dense layer with 250 units
+- LAYER 5: Dense layer with 200 units
+- LAYER 6: Single output unit
+
+Assume the existence of biases in each layer. Moreover, the pooling layer has a weight (similar to AlexNet)
+How many weights does this network have?
+Here would be the corresponding model in Keras, but note that I am asking for how to calculate this with a formula, not in Keras.
+import keras
+from keras.models import Sequential
+from keras.layers import Dense
+from keras.layers import Conv2D, MaxPooling2D
+
+model = Sequential()
+model.add(Conv2D(60, (7, 7), input_shape = (100, 100, 1), padding="same", activation="relu")) # Layer 1
+model.add(Conv2D(100, (5, 5), padding="same", activation="relu")) # Layer 2
+model.add(MaxPooling2D(pool_size=(2, 2))) # Layer 3
+model.add(Dense(250)) # Layer 4
+model.add(Dense(200)) # Layer 5
+
+model.summary()
+
+"
+"['natural-language-processing', 'classification', 'bert', 'language-model']"," Title: How to use BERT as a multi-purpose conversational AI?Body: I'm looking to make an NLP model that can achieve a dual purpose. One purpose is that it can hold interesting conversations (conversational AI), and another being that it can do intent classification and even accomplish the classified task.
+
+To accomplish this, would I need to use multimodal machine learning, where you combine the signal from two models into one? Or can it be done with a single model?
+
+In my internet searches, I found BERT, developed by Google engineers (although apparently not a Google product), which is an NLP model trained in an unsupervised fashion on 3.3 billion words or more and seems very capable.
+
+How can I leverage BERT to make my own conversational AI that can also carry out tasks? Is it as simple as copying the weights from BERT to your own model?
+
+Any guidance is appreciated.
+"
+"['convolutional-neural-networks', 'classification', 'computer-vision']"," Title: How to draw bounding boxes for gender classification?Body: I wonder what is the better way of drawing rectangles on images for gender classification. My task is to create a classifier (CNN based) to detect gender from pictures of entire bodies (not just faces). When I started labeling pictures I noticed that I am not sure whether I should draw it around an entire person like example 1 (including hands and legs and some background space between them) or just the inner part like example 2 (where there is almost no background), in order to achieve better results?
+
+Example 1
+
+
+
+Example 2
+
+
+"
+['computer-vision']," Title: Ghost camera or video overlays for example in sportsBody: Secondary camera, ghost overlay, video merge... I do not know if what I mean has a more specific name.
+
+I wonder if this is a thing. This could be insightful for example in racing sports where participants race one after another e.g. alpine skiing, downhill mountain bike, showjumping etc. E.g. comparing the current starter to the leader.
+
+Given the camera position is fixed and only the camera angle and zoom is varying to focus on the current starter, the tasks to be able to overlay videos would be to:
+
+
+- match the timing, i.e. both videos start when the timer starts
+- align and overlay the videos according to specific marker points. Keypoint detection and tracking.
+- get the opacity right so that both videos are visible
+
+
+My question is if there is any research on this. If so, what keywords do I need to search for?
+
+
+
+Edit:
+My search led me to SIFT (Scale Invariant Feature Transform) and SURF (Speeded-Up Robust Features). Feature matching should be possible with kNN or brute force. A lot can be done with OpenCV.
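+
+For reference, a rough OpenCV sketch of the align-and-blend step I have in mind (file names are placeholders and I have not verified this end to end):
+
+import cv2
+import numpy as np
+
+ref = cv2.imread('leader_frame.png')        # placeholder file names
+cur = cv2.imread('current_frame.png')
+
+sift = cv2.SIFT_create()
+k1, d1 = sift.detectAndCompute(ref, None)
+k2, d2 = sift.detectAndCompute(cur, None)
+
+matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
+good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe ratio test
+
+src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
+dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
+H, _ = cv2.findHomography(src, dst, cv2.RANSAC)
+
+warped = cv2.warpPerspective(ref, H, (cur.shape[1], cur.shape[0]))
+overlay = cv2.addWeighted(cur, 0.5, warped, 0.5, 0)                # the opacity step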
+"
+"['convolutional-neural-networks', 'backpropagation', 'gradient', 'pooling', 'max-pooling']"," Title: How can we compute the gradient of max pooling with overlapping regions?Body: While studying backpropagation in CNNs, I can't understand how can we compute the gradient of max pooling with overlapping regions.
+That's also a question from this quiz and can be also found on this book.
+"
+"['reinforcement-learning', 'stochastic-policy']"," Title: What's the value of making the RL agent's output stochastic opposed to deterministic?Body: I have a question about a reinforcement learning problem.
+
+I'm training an agent to add or delete pixels in a [12 x 12] 2D space (going to be 3D in the future).
+Its action space consists of two discrete outputs: x[0-12] and y[0-12].
+
+What would be the value of instead outputting a (continuous) probabilistic output representation, like the [12 x 12] space with each pixel as a probability, and sampling from it. E.g. a softmax function applied to 144 (12*12) output nodes.
+
+My environment is deterministic itself: taking action 𝑎 in state 𝑠 always results in the same next state 𝑠′.
+
+I understand that this may be more difficult to train since the output space becomes continuous instead of discrete, and therefore bigger, but does stochastic/probabilistic output have any benefits over 1 discrete output?
+
+Thanks!
+"
+"['machine-learning', 'ai-design', 'applications', 'prediction']"," Title: How can I predict the nutrients in dishes given the ingredients used to prepare them?Body: I want to know which algorithm will work most efficiently for calculating nutrients present in a food dish if I am giving the ingredients used in the food. Basically, let us assume that I want to make a health status for a person A based on the intake of food and based on it create a diet for him.
+"
+"['machine-learning', 'definitions', 'data-preprocessing', 'data-science', 'data-mining']"," Title: How to define the ""Pre-Processing"" in machine learning?Body: Is every process (such as data acquisition, splitting the data for validation, data cleaning, or feature engineering) that is done on the data before we train the model always called the pre-processing part? Or are there some processes that are not included?
+"
+"['deep-learning', 'applications', 'generative-adversarial-networks']"," Title: Is it useful to train a CycleGAN in a supervised setting?Body: I am very interested in the application of CycleGANs. I understand the concept of unpaired data and it makes sense to me.
+But now a question comes to my mind: what if I have enough paired image data, is then a CycleGAN an over-engineering, if I use it in a "supervised" setting (input matches with the label - but still a CycleGAN)? For what kind of application could it be useful? Would it be more useful to process it using a "normal" supervised setting?
+So, basically, my question is whether it is useful to train a CycleGAN in a supervised setting?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'prediction', 'randomness']"," Title: Can a neural network be used to predict a sequence of integers based on dataset of previously produced random numbers?Body: What i really want to do, is to predict an integer sequence of (5 numbers with values from 1 to 50) for example based on a big dataset of other 5 numbers sequences with same values range created by the same random number generator. I suppose there is a way to train based on the dataset and the program will find a pattern or based on the most common numbers predict the next number sequence. The more numbers will predict in the sequence correctly the better of course. Any help, directions and preferably python code would be greatly appreciated.
+
+I recently read the following: can-a-neural-network-be-used-to-predict-the-next-pseudo-random-number, and I am new to the AI field. The proposed code, while it creates a sequence of 25 numbers, ends up showing 20 numbers; I do not understand why. It seems they try to do something similar, if I understand correctly.
+
+I tried The code here can-a-neural-network-be-used-to-predict-the-next-pseudo-random-number
+
+It always shows the same numbers no matter how many epochs and/or iterations I do. Is that normal?
+Is the last code close to what i want to accomplish?
+
+Thanks in advance.
+"
+"['neural-networks', 'decision-trees']"," Title: What is the difference between Inductive Learning and Connectionist Learning?Body: According to what we know about inductive and connectionist learning, what is the difference between them ?
+
+For those who do not know about :
+
+Inductive Learning, like what we have in decision tree and make a decision based on amount of samples
+
+Connectionist Learning, like what we have in artificial neural network
+"
+['gpu']," Title: Good model and training algorithm to store texture data for fast gpu inferenceBody: Now, the following may sound silly, but I want to do it for my better understanding of performance and implementation of GPU inference for a set of deep learning problems.
+
+What I want to do is to replace a surface texture for a 3d model by a NN that stores that texture data in some way and allows to infere the rgb color of an arbitrary texel from its UV coordinates. So basically it should offer the same functionality as the texture itself.
+
+A regular texture lookup takes a UV coordinate and returns the (possibly filtered) RGB color at these texture coordinates.
+
+So, I want to train a network that takes two floats in [0,1] range as input and outputs three floats of rgb color.
+
+I further want to then train that network to store my 4096x4096 texture. So the training data I have available are 4096*4096=16777216 <float2, float3> pairs.
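+
+For concreteness, this is roughly the kind of model I have in mind - a minimal Keras-style sketch where the layer sizes and activations are just my guesses, not something I have validated:
+
+import numpy as np
+import tensorflow as tf
+
+# hypothetical coordinate-to-colour regressor: 2 inputs (u, v), 3 outputs (r, g, b)
+model = tf.keras.Sequential([
+    tf.keras.layers.Input(shape=(2,)),
+    tf.keras.layers.Dense(256, activation='relu'),
+    tf.keras.layers.Dense(256, activation='relu'),
+    tf.keras.layers.Dense(3, activation='sigmoid'),   # colours normalised to [0, 1]
+])
+model.compile(optimizer='adam', loss='mse')
+
+# training pairs: every texel centre as (u, v), its colour as the target
+# uv.shape == (16777216, 2), rgb.shape == (16777216, 3)
+# model.fit(uv, rgb, batch_size=65536, epochs=10)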
+
+Finally I want to evaluate the trained network in my (OpenGL 4 or directX11) pixel shader, feeding it for every rendered pixel the interpolated UV coordinates at this pixel and retrieving the RGB value from it.
+
+It's clear that this will
+
+
+- have lower fidelity than just using the texture directly
+- use likely more memory than just using the texture directly
+- be slower than using the texture directly
+
+
+and as such may be silly to do, but I'd still like to try to do this somewhat optimally, especially in terms of inference performance (I'd like to be able to run it at interactive framerates at 1080p resolutions).
+
+Can someone point me to a class of networks or articles or describe a model and training algorithm that would be well suited for this task (especially in terms of implementing inference for the pixel shader)?
+"
+"['neural-networks', 'machine-learning', 'convolutional-neural-networks']"," Title: Train a competitive layer on nonnormalized vectors using LVQ techniqueBody: How can we train a competitive layer on non-normalized vectors using LVQ technique ?
+
+an example is given below from Neural Network Design (2nd Edition) book
+
+The net input expression for LVQ networks calculates the distance between the input
+and each weight vector directly, instead of using the inner product. The result is that the
+LVQ network does not require normalized input vectors. This technique can also be
+used to allow a competitive layer to classify non-normalized vectors. Such a network is
+shown in figure below.
+
+
+Use this technique to train a two-neuron competitive layer on the (non-normalized)
+vectors below, using a learning rate $\alpha=0.5$
+
+$p_1=\begin{bmatrix}
+1 \\
+1
+\end{bmatrix}, p_2=\begin{bmatrix}
+-1 \\
+2
+\end{bmatrix}, p_3=\begin{bmatrix}
+-2 \\
+-2
+\end{bmatrix}$
+
+Present the vectors in the following order : $p_1, p_2, p_3, p_2, p_3, p_1$
+
+Initial weights : $W_1=\begin{bmatrix}
+0 \\
+1
+\end{bmatrix}, W_2=\begin{bmatrix}
+1 \\
+0
+\end{bmatrix}$
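+
+For reference, this is the distance-based competitive update I am trying to apply, written out as a small sketch (my own code, not from the book):
+
+import numpy as np
+
+def competitive_step(W, p, alpha=0.5):
+    # net input is the negative distance between the input and each weight row
+    distances = np.linalg.norm(W - p, axis=1)
+    winner = np.argmin(distances)
+    # Kohonen rule: move only the winning weight vector toward the input
+    W[winner] = W[winner] + alpha * (p - W[winner])
+    return W, winner
+
+W = np.array([[0.0, 1.0], [1.0, 0.0]])   # initial weight vectors W_1, W_2 as rows
+for p in [[1, 1], [-1, 2], [-2, -2], [-1, 2], [-2, -2], [1, 1]]:
+    W, winner = competitive_step(W, np.array(p, dtype=float))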
+"
+"['convolutional-neural-networks', 'computer-vision', 'object-detection', 'affine-transformations']"," Title: If you have a very distorted image, would affine transformations applied to images make object detection algorithms make more mistakes?Body: If you have a very distorted video/image, would affine transformations of the images make object detection algorithms make more mistakes compared to a normal camera?
+"
+"['deep-learning', 'natural-language-processing', 'chat-bots', 'question-answering']"," Title: Are there any approaches other than deep learning to deal with unexpected questions in a question answering system?Body: I'm working on a question answering bot as my graduation project. The main concept is having a text file with many sentences, and building a question answering bot which answers a user's question based on the text file in hand.
+Until now, I used tf-idf and cosine similarity and the results are somewhat satisfactory. The main problem is, if the user was to ask a question that doesn't have a word that is in the text file, my bot can't deduce what to bring back as an answer. For example, if I have a sentence in my text file that says "I have a headache because my heart rate is low", if the user was to ask "Why do you have a headache?", my bot chooses the correct sentence, but if he asked "What's wrong with you?" my bot doesn't know what to do.
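+For context, my current retrieval step is essentially the following (a simplified sketch of what I already have, using scikit-learn):
+
+from sklearn.feature_extraction.text import TfidfVectorizer
+from sklearn.metrics.pairwise import cosine_similarity
+
+sentences = ['I have a headache because my heart rate is low', 'I slept well last night']
+vectorizer = TfidfVectorizer()
+sentence_vectors = vectorizer.fit_transform(sentences)
+
+def answer(question):
+    # pick the sentence whose tf-idf vector is closest to the question
+    q_vec = vectorizer.transform([question])
+    scores = cosine_similarity(q_vec, sentence_vectors)[0]
+    return sentences[scores.argmax()]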
+All I've seen on the web until now are deep learning methods and neural networks, such as LSTM and such. I was wondering if there are any pure NLP approaches to go with my requirements.
+"
+"['natural-language-processing', 'algorithm', 'generative-model']"," Title: Giving an AI a purpose to talkBody: I am trying to teach my AI to talk. The problem is I'm struggling to find a good scenario in which it needs to.
+
+Some ideas I had were:
+""Describe a geometric scene"" - Then together with a parser we could see how close the generated instructions came to the official geometric language.
+
+""Give another AI instructions of where to find some food"" e.g. ""Go straight on passed the box then turn left until you get to the tree. Look under the rock.""
+
+Another one might be ""Find out more information about a scene by asking questions of another AI in order to navigate a scene blindfolded"". This is quite an extreme example!
+
+I need it to talk in formal English sentences (not some kind of made up secret langauge.)
+
+Basically instead of just interpreting a language and following instructions, I want my AI to generate instructions.
+
+So the things I want to teach it are the following:
+
+
+- Ability to ask questions + ability to use the information gathered
+- Ability to give instructions
+
+
+Do you know of any projects like this?
+"
+"['convolutional-neural-networks', 'recurrent-neural-networks', 'optimization', 'convergence']"," Title: Imposing contraints on sequence of image classificationsBody: Are there example implementations of networks that apply constraints across sequences of image classifications where class labels are ordinal numbers? For example, to cause the output of a CNN to monotonically increase across frames, where the number may increase either more or less steeply but only in one direction across the entire sequence, or as another example, to smoothly vary rather than jumping precipitously from frame to frame. In my first example, the output can jump quickly from one frame to the next, as long as only in one direction, whereas in my second example, they can either increase or decrease as long as not too ""fast"" from one frame to the next as if being passed through a low pass boxcar filter. The first is a monotonicity constraint and the second is a smoothness constraint, but in both examples, the key is for adjacent frames to have an effect on the conclusions for a given frame.
+
+Thank you,
+Andy
+"
+"['machine-learning', 'computer-vision', 'reference-request', 'facial-recognition']"," Title: Which AI methods are most appropriate for login face recognition?Body: I want to make a face authentication application. I need to approve the face during the login based on whether the registered face and the login face match.
+
+Which are the possible appropriate AI methods or technologies for this task?
+"
+"['deep-learning', 'sequence-modeling', 'machine-translation']"," Title: Can sequence-to-sequence models be used to convert source code from one programming language to another?Body: Sequence-to-sequence models have achieved good performance in natural language translation. Could these models also be applied to convert source code written in one programming language to source code written in another language? Could they also be used to convert JSON to XML? Has this been done?
+
+There are plenty of models that generate source code (which looks like real source code) using RNNs, although the generated source code doesn't make logical sense. I haven't been able to find any models or examples that take valid existing code and convert it into different valid code.
+"
+"['ai-design', 'models', 'performance', 'gpu', 'real-time']"," Title: What are the properties of a model that is well suited for for high performance real-time inferenceBody: What are general best practices or considerations in designing a model that is optimized for real-time inference?
+"
+"['genetic-algorithms', 'evolutionary-algorithms', 'novelty-search']"," Title: Are there any strategies that would help me visualize the 'behavior space' and make a novelty function?Body: In “Abandoning Objectives: Evolution through the Search for Novelty Alone”, it is explained how the novelty search is a function that is domain specific, depending on the differing behaviors that can potentially emerge.
+
+The primary test is a deceptive maze and it seems like they define novelty as a function that is dependent on each actor's ending position as a distance from other actors' ending position.
+
+I am wanting to try implementing this on some tasks. Some simple AI tasks such as playing pong, or recreating MarI/O, or sticking them in an arena as an actor who can move, turn, and shoot (with other actors in the arena with them).
+
+I have a really hard time thinking of how to model the behavior functions for these kinds of instances without making it into an objective.
+For pong, I imagine I could determine novelty by the AI's point score, but isn't this basically making the score an objective since it can only go up? For MarI/O, I've seen some implementations that look at the list of unique grid locations that Mario visited in what order, but I didn't come up with that myself.
+
+For the arena example, my first impulse is to have a score based on how long the actor survived and how many other actors the AI eliminated; but again, this can only go up and seems to me like it is defining an objective.
+
+Are there any strategies or ways to think about the problems that would help me better visualize the 'behavior space' and make a better novelty function?
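+
+For reference, my current understanding of the metric used in the paper is an average distance to the k nearest neighbours in behaviour space, something like this sketch (the behaviour vectors would be whatever characterisation I end up choosing):
+
+import numpy as np
+
+def novelty(behaviour, archive, k=15):
+    # behaviour: vector describing one individual, e.g. its final (x, y) position
+    # archive: array of behaviour vectors from the current population plus the archive
+    distances = np.linalg.norm(archive - behaviour, axis=1)
+    nearest = np.sort(distances)[:k]
+    return nearest.mean()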
+"
+['reinforcement-learning']," Title: Reinforcement learning without trajectoriesBody: Does it make sense to use Reinforcement Learning methods in an environment that does not have trajectories?
+
+I have a lot of states and actions in my environment. However, there are no trajectories.
+
+If the agent takes action $a$ in the state $s$, it reaches $s'$. In $s'$, it takes action $a'$ and reaches state $s''$. However, if it takes the reverse order of actions $a'$ and then $a$, it would reach the same state $s''$.
+
+How do reinforcement learning methods handle this?
+"
+"['neural-networks', 'convolutional-neural-networks', 'optimization']"," Title: Optimizer effects on neural network with two outputsBody: I'm confused about the following issue. Let assume that we have a neural network that takes one input and two outputs. I try to visualize my model like as follows:
+
+ / --- First stream --- > output_1
+Input --
+ \ ---- Second stream ---> output_2
+
+
+I used SGD with momentum. Is there any difference between using one optimizer for both streams and using a separate optimizer for each stream? In other words, if I use one optimizer, can the optimization of one stream affect the other stream? If it can, how is that possible?
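+
+For concreteness, here is a minimal Keras-style sketch of the two-stream model I have in mind (the layer sizes are made up):
+
+import tensorflow as tf
+
+inputs = tf.keras.layers.Input(shape=(64,))
+stream1 = tf.keras.layers.Dense(32, activation='relu')(inputs)
+output_1 = tf.keras.layers.Dense(1, name='output_1')(stream1)
+stream2 = tf.keras.layers.Dense(32, activation='relu')(inputs)
+output_2 = tf.keras.layers.Dense(1, name='output_2')(stream2)
+
+model = tf.keras.Model(inputs=inputs, outputs=[output_1, output_2])
+# a single optimizer minimises the sum of both losses
+model.compile(optimizer=tf.keras.optimizers.SGD(momentum=0.9), loss=['mse', 'mse'])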
+"
+"['deep-learning', 'logic']"," Title: Implementing Logic Inference with Deep LearningBody: Short question
+How can I implement Logic Inference with Deep Learning?
+Long question
+In symbolic logic, chaining multiple predicates (a short example is a syllogism) is a method of implementing logic inference. This programmatic approach is suitable for many cases; however, when combined with NLP, it involves too much hand-written programming.
+If someone wants to implement that logic inference not programmatically but with machine learning, what model should be adopted, and what labelled data should be fed into the model?
+"
+"['tensorflow', 'keras', 'recurrent-neural-networks', 'long-short-term-memory', 'batch-normalization']"," Title: How are batch statistics computed in Recurrent Batch Normalization?Body: I'm implementing recurrent BN per this paper in Keras, but looking at it and those citing it, a detail remains unclear to me: how are batch statistics computed? Authors omit explicit clarification, but state (pg. 3) (emphasis mine):
+
+
+ At training time, the statistics E[h] and Var[h] are estimated by the sample mean and sample variance of the current minibatch
+
+
+Yet another paper (pg. 3) using and citing it describes:
+
+
+ We subscript BN by time (BN_t) to indicate that each time step tracks its own mean and variance. In practice, we track these statistics as they change over the course of training using an exponential moving average (EMA)
+
+
+My question's thus two-fold:
+
+
+- Are minibatch statistics computed per immediate minibatch, or as an EMA?
+- How are the inference parameters, shared across all timesteps, gamma and beta, computed? Is the computation in (1) simply averaged across all timesteps? (e.g. average EMA_t for all t; a small sketch of what I mean by EMA is at the end of this post)
+
+
+
+
+Existing implementations: in Keras and TF below, but all are outdated, and I am unsure regarding their correctness
+
+
+- Keras, TF-A, and TF-B
+- All above agree that during training, immediate minibatch statistics are used, and that beta and gamma are updated as an EMA of these minibatches
+- Problem: the bn operation (in A, and presumably B & C) is applied on a single timestep slice, to be passed to the K.rnn control flow for re-iteration. Hence, EMA is computed w.r.t. minibatches and timesteps - which I find questionable:
+
+
+- EMA is used in place of a simple average when population statistics are dynamic (e.g. minibatch-to-minibatch), whereas we have access to all timesteps in a minibatch prior to having to update gamma and beta
+- EMA is a worse but at times necessary alternative to a simple average, but per the above, we can use the latter - so why don't we? Timestep statistics can be cached, averaged at the end, then discarded - this also holds for stateful=True
+
+
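+To be explicit about what I mean by EMA in (1), this is the kind of update I have in mind (my own pseudocode, not taken from either paper):
+
+def ema_update(moving_stat, batch_stat, momentum=0.99):
+    # exponential moving average of the per-minibatch statistic, kept for inference
+    return momentum * moving_stat + (1.0 - momentum) * batch_stat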
+"
+"['deep-learning', 'tensorflow', 'accuracy', 'efficiency']"," Title: Why don't people always use TensorFlow Lite, if it doesn't decrease the accuracy of the models?Body: I have been exploring edge computation for AI, and I came across multiple libraries or frameworks, which can help to convert the model into a lite format, which is suitable for edge devices.
+
+- TensorFlow Lite will help us to convert the TensorFlow model into TensorFlow lite.
+- OpenVino will optimise the model for edge devices.
+
+Questions
+
+- If we have a library to optimise the model for edge devices (e.g. TensorFlow Lite), after conversion, could it make the accuracy decrease?
+
+- If not, then why don't people always use e.g. TensorFlow Lite?
+
+
+"
+"['computer-vision', 'comparison', 'image-processing', 'principal-component-analysis', 'singular-value-decomposition']"," Title: What is the difference between principal component analysis and singular value decomposition in image processing?Body: What is the difference between principal component analysis and singular value decomposition in image processing? Which one performs better, and why?
+"
+"['neural-networks', 'reinforcement-learning', 'python', 'policy-gradients']"," Title: Reinforcement Learning on quantum circuitBody: I am trying to teach an agent to make any random 1-qubit state reach uniform superposition. So basically, the full circuit will be State -> measurement -> new_state (|0> if 0, |1> if 1) -> Hadamard gate
. It just needs to perform 2 actions
. That's all. So it's more of an RL problem rather than QC.
+
+I am using reinforcement learning to train the model but it doesn't seem to learn anything. The reward keeps on decreasing and even after 3 million episodes, the agent doesn't seem to converge anywhere. This is how I am training:
+
+def get_exploration_rate(self, time_step):
+ return self.epsilon_min + (self.epsilon - self.epsilon_min)*\
+ math.exp(1.*time_step*self.epsilon_decay)
+
+def act(self, data,t): #state
+ rate = self.get_exploration_rate(t)
+ if random.random() < rate:
+ options = self.model.predict(data) #state
+ options = np.squeeze(options)
+ action = random.randrange(self.action_size)
+ else:
+ options = self.model.predict(data) #state
+ options = np.squeeze(options)
+ action = options.argmax()
+ return action, options, rate
+
+def train(self):
+
+ batch_size = 200
+ t = 0 #increment
+ states, prob_actions, dlogps, drs, proj_data, reward_data =[], [], [], [], [], []
+ tr_x, tr_y = [],[]
+ avg_reward = []
+ reward_sum = 0
+ ep_number = 0
+ prev_state = None
+ first_step = True
+ new_state = self.value
+ data_inp = self.data
+
+ while ep_number<3000000:
+ prev_data = data_inp
+ prev_state = new_state
+ states.append(new_state)
+ action, probs, rate = self.act(data_inp,t)
+ prob_actions.append(probs)
+ y = np.zeros([self.action_size])
+ y[action] = 1
+ new_state = eval(command[action])
+ proj = projection(new_state, self.final_state)
+ data_inp = [proj,action]
+ data_inp = np.reshape(data_inp,(1,1,len(data_inp)))
+ tr_x.append(data_inp)
+ if(t==0):
+ rw = reward(proj,0)
+ drs.append(rw)
+ reward_sum+=rw
+
+ elif(t<4):
+ rw = reward(new_state, self.final_state)
+ drs.append(rw)
+ print(""present reward: "", rw)
+ reward_sum+=rw
+ elif(t==4):
+ if not np.allclose(new_state, self.final_state):
+ rw = -1
+ drs.append(rw)
+ reward_sum+=rw
+ else:
+ rw = 1
+ drs.append(rw)
+ reward_sum+=rw
+
+ print(""reward till now: "",reward_sum)
+ dlogps.append(np.array(y).astype('float32') * probs)
+ print(""dlogps before time step: "", len(dlogps))
+ print(""time step: "",t)
+ del(probs, action)
+ t+=1
+ if(t==5 or np.allclose(new_state,self.final_state)): #### Done State
+ ep_number+=1
+ ep_x = np.vstack(tr_x) #states
+ ep_dlogp = np.vstack(dlogps)
+ ep_reward = np.vstack(drs)
+ disc_rw = discounted_reward(ep_reward,self.gamma)
+ disc_rw = disc_rw.astype('float32')
+ disc_rw -= np.mean(disc_rw)
+ disc_rw /= np.std(disc_rw)
+
+ tr_y_len = len(ep_dlogp)
+ ep_dlogp*=disc_rw
+ if ep_number % batch_size == 0:
+ input_tr_y = prob_actions - self.learning_rate * ep_dlogp
+ input_tr_y = np.reshape(input_tr_y, (tr_y_len,1,6))
+
+ self.model.train_on_batch(ep_x, input_tr_y)
+ tr_x, dlogps, drs, states, prob_actions, reward_data = [],[],[],[],[],[]
+ env = Environment()
+ new_state = env.reset()
+ proj = projection(state, self.final_state)
+ data_inp = [proj,5]
+ data_inp = np.reshape(data_inp,(1,1,len(data_inp)))
+ print(""State after resetting: "", new_state)
+ t=0
+
+
+I have tried various things like changing the inputs, changing the reward function, and even adding an exploration rate. I have assigned the max time step as 5 even though it should complete in just 2.
+
+What am I doing wrong? Any suggestions?
+"
+"['game-ai', 'minimax', 'alpha-beta-pruning']"," Title: Could you share with me the tree size, search time and search depth of your implementation of Gomoku with minimax and alpha-beta prunning?Body: Currently, I am working on a Gomoku AI implementation with minimax + alpha-beta pruning.
+I'm targeting these two rules for an 'acceptable implementation' in terms of search time and search depth:
+
+- Search time (over 0.5 seconds is "bad", less 0.5 seconds is ok)
+- Search depth (less than 10 search depth levels is "bad", over 10 search depth levels is ok)
+
+The minimax algorithm generates, by recursive function calls, a tree of nodes, each node represented by a function call with a specific game state.
+Increasing the depth search increases the number of nodes in the tree, and therefore search time.
+There is a compromise between search time and search depth.
+Alpha-beta pruning tends to help this compromise by pruning useless nodes search and reducing tree size. The pruning is directly related to the evaluation/heuristic function. Bad implementation of heuristic may lead to bad efficiency of alpha-beta pruning.
+
+If you are working on or have done a Gomoku AI, sharing your stats of tree size, search depth and time search from your implementation at some game steps, and explain how you reach it may help to investigate.
+
+The implementation at this time is not acceptable to me, having a search time over 1 sec for a search depth of 4 at the first step ... on an Intel Core i7 3.60GHz CPU ...
+Here are the properties of the actual implementation:
+
+- Board of size 19x19
+- Implements search window of size 5x5 around stones to reduce search nodes
+- Implements heuristic computation at each node on the played stone, instead of computing over the whole board at every leaf node.
+- Implements alpha-beta pruning
+- No multi thread
+
+
+Here are the current stats it is reaching for search depth of 4 at the first step:
+
+- Timing minimax algorithm: 1.706175 seconds
+- Number of nodes that compose the tree: 2850
+
+. . . . . . . . . . . . . . . . . . . 00
+. . . . . . . . . . . . . . . . . . . 01
+. . . . . . . . . . . . . . . . . . . 02
+. . . . . . . . . . . . . . . . . . . 03
+. . . . . . . . . . . . . . . . . . . 04
+. . . . . . . . . . . . . . . . . . . 05
+. . . . . . . . . . . . . . . . . . . 06
+. . . . . . . . . . . . . . . . . . . 07
+. . . . . . . . . . . . . . . . . . . 08
+. . . . . . . . . . . . . . . . . . . 09
+. . . . . . . . . . . . . . . . . . . 10
+. . . . . . . . . . . . . . . . . . . 11
+. . . . . . . . . . . . . . . . . . . 12
+. . . . . . . . . . . . . . . . . . . 13
+. . . . . . . . . . . . . . . . . . . 14
+o x . . . . . . . . . . . . . . . . . 15
+. . . . . . . . . . . . . . . . . . . 16
+. . . . . . . . . . . . . . . . . . . 17
+. . . . . . . . . . . . . . . . . . . 18
+A B C D E F G H I J K L M N O P Q R S
+Player: o - AI: x
+
+Bad stats might be due to bad heuristics causing inefficient pruning. Waiting for other stats/replies to validate this hypothesis may help.
+Edit 1
+Coming back from a new search campaign on this question.
+
+- The implementation was facing a 19*19 loop index at each heuristic computation ... Removed this by heuristic computation at a specific index (not the entire board)
+
+- The implementation was facing a 19*19 loop index to check win state ... Removed this by checking only around played index any alignment at each step.
+
+- The implementation was facing a 19*19 loop index to check where it can play (even with the windows) ...
+Removed by propagating indexes array of valid indexes through the recursion updated at each step.
+The array is a dichotomic array (with $O(n)$ insertion, $O(\log n)$ search and $O(1)$ deletion by index)
+
+- The implementation was lacking a Zobrist hash table, a very nice idea from the below answer. It is now implemented with unit tests to prove that implementation is working. An array sorted by hash is updated at each new node, with the hash-node association. The array is a dichotomic array (with $O(n)$ insertion, $O(\log n)$ search and $O(1)$ deletion by index)
+
+- The implementation is at each step trying each index in a random way (not computation order or evaluation score order).
+
+
+The before edit example is not great because it is playing on a sideboard and the allowed indexes window is half max size.
+Here are the newly obtained performances :
+
+- with Zobrist table off and seed at 42 for search depth of 4 at the first step
+
+- Timing minimax algorithm: 0.083288 seconds
+- Number of nodes that compose the tree: 6078
+
+
+
+. . . . . . . . . . . . . . . . . . . 00
+. . . . . . . . . . . . . . . . . . . 01
+. . . . . . . . . . . . . . . . . . . 02
+. . . . . . . . . . . . . . . . . . . 03
+. . . . . . . . . . . . . . . . . . . 04
+. . . . . . . . . . . . . . . . . . . 05
+. . . . . . . . . . . . . . . . . . . 06
+. . . . . . . . . . . . . . . . . . . 07
+. . . . . . . . . . . . . . . . . . . 08
+. . . . . . . . . . . . . . . . . . . 09
+. . . . . . . . . . . . . . . . . . . 10
+. . . . . . . . . x . . . . . . . . . 11
+. . . . . . . . o . . . . . . . . . . 12
+. . . . . . . . . . . . . . . . . . . 13
+. . . . . . . . . . . . . . . . . . . 14
+. . . . . . . . . . . . . . . . . . . 15
+. . . . . . . . . . . . . . . . . . . 16
+. . . . . . . . . . . . . . . . . . . 17
+. . . . . . . . . . . . . . . . . . . 18
+A B C D E F G H I J K L M N O P Q R S
+Player: o - AI: x
+
+
+- with Zobrist table on and seed at 42 for search depth of 4 at the first step
+
+- Timing minmax_algorithm: 0.434098 seconds
+- Number of nodes that compose the tree: 9320
+
+
+
+. . . . . . . . . . . . . . . . . . . 00
+. . . . . . . . . . . . . . . . . . . 01
+. . . . . . . . . . . . . . . . . . . 02
+. . . . . . . . . . . . . . . . . . . 03
+. . . . . . . . . . . . . . . . . . . 04
+. . . . . . . . . . . . . . . . . . . 05
+. . . . . . . . . . . . . . . . . . . 06
+. . . . . . . . . . . . . . . . . . . 07
+. . . . . . . . . . . . . . . . . . . 08
+. . . . . . . . . . . . . . . . . . . 09
+. . . . . . x . . . . . . . . . . . . 10
+. . . . . . . . . . . . . . . . . . . 11
+. . . . . . . . o . . . . . . . . . . 12
+. . . . . . . . . . . . . . . . . . . 13
+. . . . . . . . . . . . . . . . . . . 14
+. . . . . . . . . . . . . . . . . . . 15
+. . . . . . . . . . . . . . . . . . . 16
+. . . . . . . . . . . . . . . . . . . 17
+. . . . . . . . . . . . . . . . . . . 18
+A B C D E F G H I J K L M N O P Q R S
+Player: o - AI: x
+
+Actually, it is ok for search depth 4, but not for more than 6. The node number is becoming exponential (over 20 000) ...
+Found here a great implementation in the same language/technology that can go to depth 10 in less than 1 sec, without Zobrist or smart tricks, and followed the same logic.
+The issue must be somewhere else, causing exponential growth of nodes - inefficient pruning.
+"
+"['genetic-algorithms', 'crossover-operators', 'mutation-operators', 'constrained-optimization', 'chromosomes']"," Title: How can we design the mutation and crossover operations when the order of the genes in the chromosomes matters?Body: Consider an optimization problem that involves a set of tasks $T = \{1,2,3,4,5\}$, where the goal is to find a certain order of these tasks.
+I would like to solve this problem with a genetic algorithm, where each chromosome $C = [i, j, k, l, m]$ corresponds to a specific order of these five tasks, so each gene in $C$ corresponds to a task in $T$.
+So, for example, $C = [1,3,5,4,2]$ and $C' = [1,5,4,2,3]$ would be two chromosomes that correspond to two different orders of the tasks.
+In this case, how could we design the mutation and cross-over operations so that these constraints are maintained during evolution?
+The genetic algorithm should produce the three best chromosomes or order of tasks.
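+
+For illustration, the kind of order-preserving operator I suspect is needed looks something like a swap mutation (my own sketch; the crossover would presumably need something similar, e.g. order crossover):
+
+import random
+
+def swap_mutation(chromosome):
+    # swapping two genes keeps the chromosome a valid permutation of the tasks
+    c = list(chromosome)
+    i, j = random.sample(range(len(c)), 2)
+    c[i], c[j] = c[j], c[i]
+    return c
+
+# e.g. swap_mutation([1, 3, 5, 4, 2]) might return [1, 2, 5, 4, 3]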
+"
+['reinforcement-learning']," Title: How to perform Interpretability analysis toward a simple reinforcement learning networkBody: We are currently using a RL network with the following simple structure to train a model which helps to solve a transformation task:
+
+Environment (a binary file) + reward ---> LSTM (embedding) --> FC layer --> FC layer --> FC layer --> decision (to select and apply a kind of transformation toward the environment from a pool of transformations)
+
+The model will receive a simple reward and also take the input to make the decision. And we have a condition to stop each episode.
+
+So the current workflow, although simple, seems to have learned something, and with multiple episodes of training we can observe that the accumulated reward per episode increases. So right now, what we are thinking of doing is to interpret the model - well, a fancy term.
+
+So basically, we are thinking of letting the model tell us from which component of the Environment (the input file) it makes the decision to select a transformation to apply. And I have read a bunch of interpretability articles, which basically use an activation map (e.g., link) to highlight certain components of the input.
+
+However, the problem is that we don't have any sort of CNN layer in our simple RL model. In that sense, the aforementioned method cannot be applied, right? I have also learned a number of techniques from this book, but still, I don't see any specific techniques applicable to RL models.
+
+So here is my question: for our simple RL model, how can we do some ""interpretability"" analysis and therefore get a better idea of which part of the ""Environment"" leads to each decision step? Thank you very much.
+"
+"['neural-networks', 'artificial-neuron', 'biology']"," Title: What effect does a negative output of a neuron have on neighbouring neurons?Body: Artificial neural networks are composed of multiple neurons that are connected to each other. When the output of an artificial neuron is zero, it does not have any effect on neighboring neurons. When the output is positive, the neuron has an effect on neighboring neurons.
+
+What does it mean when the output of a neuron is negative (which can e.g. occur when the activation function of a neuron is the hyperbolic tangent)? What effect would this output have on neighboring neurons?
+
+Do biological neural networks also have this property?
+"
+"['machine-learning', 'regression']"," Title: Finding the optimal combination of inputs which return maximal outputBody: I am currently working on a problem and now got stuck to implement one of it's steps. This is a simple attempt to explain what I am currently facing, which is something that I am aiming to implement in my regression simulation in python.
+
+Let's say that I fit a non-linear model to my data. Now, I want to find the combination of inputs within a specified range that returns the highest outcome. When I am using a quadratic function or only a few inputs, this task is quite simple. However, the problem comes when trying to apply the same logic to more complex models. Supposing that I have 9 variables as inputs, I would have to test all possible combinations, and that would be computationally unfeasible to do with meshgrid if you want to cover each range with several intervals in between.
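+
+To make that concrete, the brute-force search I am describing looks roughly like this (a sketch only; model stands for whatever fitted regressor I end up with):
+
+import numpy as np
+from itertools import product
+
+# hypothetical ranges for each of the 9 inputs; model is assumed to be my already
+# fitted regressor with a scikit-learn style predict() method
+bounds = [(0.0, 1.0)] * 9
+grids = [np.linspace(low, high, 10) for low, high in bounds]   # already 10**9 combinations
+best_x, best_y = None, -np.inf
+for x in product(*grids):
+    y = float(model.predict(np.array(x).reshape(1, -1))[0])
+    if y > best_y:
+        best_x, best_y = x, y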
+
+So, here comes my question: is there a way to avoid having to go through this computationally costly process in order to find the combination of inputs, within a given range, that returns the optimal output?
+"
+"['branching-factors', 'checkers']"," Title: Maximum Single Ply Branching Factor for Legal Checkers BoardsBody: I am writing a checkers move generation function in C that will be extended to Python. It is much easier to handle the possible boards in a fixed size array to pass back to Python.
+
+Basically, I build out possible boards and add them to this array:
+
+uint32_t boards[x][3];
+
+
+
+Therefore, the optimal value for x should be the maximum single ply branching factor out of all possible legal board states.
+
+I am not sure if I am being very clear, so here is an example:
+
+For tic-tac-toe this value would be 9, as the first move has the greatest number of possible directly resulting board states, out of all of the legal boards.
+
+Has this value been calculated for checkers? Has a program like Chinook derived a reasonably close number?
+
+Thank you for your help!
+"
+['reinforcement-learning']," Title: How does reinforcement learning with video data work?Body: My goal is to train an agent to play MarioKart on the Nintendo DS. My first approach (in theory) was to set up an emulator on my PC and let the agent play for ages. But then a colleague suggested training the agent first on pre-recorded, human-played video data, to achieve some sort of base level, and then, for further improvement, letting the agent play on its own with the emulator.
+But I have no clue how training with video data works. E.g. I wonder how to calculate a loss since there is no reward. Or am I getting the intuition wrong?
+I would appreciate it if someone could explain this technique to me.
+"
+"['convolutional-neural-networks', 'terminology']"," Title: What do the terms ""front-end"" and ""back-end"" refer to in this article?Body: I found the terms front-end and back-end in the article (or blog post) How to Develop a CNN for MNIST Handwritten Digit Classification. What do they mean here? Are these terms standard in this context?
+"
+"['tensorflow', 'python', 'keras', 'speech-synthesis']"," Title: Can't figure out what's going wrong with my dataset construction for multivariate regressionBody: TL;DR: I can't figure out why my neural network wont give me a sensible output. I assume it's something to do with how I'm presenting the input data to it but I have no idea how to fix it.
+
+Background:
+
+I am using matched pairs of speech samples to generate a model which morphs one person's voice into another. There are some standard pre-processing steps which have been done and can be reversed in order to generate a new speech file.
+
+With these I am attempting to generate a very simple neural network that translates the input vector into the output one and then reconstructs a waveform.
+
+I understand what I'm trying to do mathematically but that's not helping me make keras/tensorflow actually do it.
+
+Inputs:
+
+As inputs to my model I have vectors containing Fourier Transform values from the input speech sample matched with their counterpart target vectors.
+
+These vectors contain the FT values from each 25 ms fragment of utterance and are in the form $[r_1, i_1, ..., r_n, i_n]$, where $r$ is the real part of the number and $i$ is the imaginary one.
+
+I am constructing these pairs into a dataset reshaping each input vector as I do so:
+
+# assuming these imports for the snippets below
+import numpy as np
+import tensorflow as tf
+from pathlib import Path
+
+def create_dataset(filepaths):
+ """"""
+ :param filepaths: array containing the locations of the relevant files
+ :return: a tensorflow dataset constructed from the source data
+ """"""
+ examples = []
+ labels = []
+
+ for item in filepaths:
+ try:
+ source = np.load(Path(item[0]))
+ target = np.load(Path(item[1]))
+
+ # load mapping
+ with open(Path(item[2]), 'r') as f:
+ l = [int(s) for s in list(f.read()) if s.isdigit()]
+ it = iter(l)
+ mapping = zip(it, it)
+
+ for entry in mapping:
+ x, y = entry
+ ex, lab = source[x], target[y]
+ ex_ph, lab_ph = np.empty(1102), np.empty(1102)
+
+ # split the values into their real and imaginary parts and append to the appropriate array
+ for i in range(0, 1102, 2):
+ idx = int(i / 2)
+
+ ex_ph[i] = ex[idx].real
+ ex_ph[i+1] = ex[idx].imag
+ lab_ph[i] = lab[idx].real
+ lab_ph[i+1] = lab[idx].imag
+
+ examples.append(ex_ph.reshape(1,1102))
+
+ # I'm not reshaping the labels based on a theory that doing so was messing with my loss function
+ labels.append(lab_ph)
+
+ except FileNotFoundError as e:
+ print(e)
+
+ return tf.data.Dataset.from_tensor_slices((examples, labels))
+
+
+This is then being passed to the neural network:
+
+
+
+def train(training_set, validation_set, test_set, filename):
+ model = tf.keras.Sequential([tf.keras.layers.Input(shape=(1102,)),
+ tf.keras.layers.Dense(551, activation='relu'),
+ tf.keras.layers.Dense(1102)])
+
+ model.compile(loss=""mean_squared_error"", optimizer=""sgd"")
+
+ model.fit(training_set, epochs=1, validation_data=validation_set)
+
+ model.evaluate(test_set)
+ model.save(f'../data/models/{filename}.h5')
+ print(model.summary())
+
+
+and I get out... crackling. Every time, no matter how much data I throw at it. I assume I'm doing something obviously and horribly wrong with the way I'm setting this up.
+"
+['fuzzy-logic']," Title: How can I prove that all the a-cuts of any fuzzy set A defined on $R^n$ are convex?Body: How can I prove that all the a-cuts of any fuzzy set A defined on $R^n$
+are convex if and only if
+
+$$\mu_A(\lambda r + (1-\lambda)s) \geq min \{\mu_A(r), \mu_A(s)\}$$
+
+such that $r, s \in R^n$, $\lambda \in [0, 1]$ ?
+
+That's a fuzzy question on my assignment. Any idea on how to start?
+"
+"['computer-vision', 'comparison', 'terminology', 'papers', 'slam']"," Title: What is the difference between tracking and mapping (TAM) and localization and mapping (LAM)?Body: In the paper Visual SLAM algorithms: a survey from 2010 to 2016 by Takafumi Taketomi, Hideaki Uchiyama and Sei Ikeda it is mentioned
+
+It should be noted that tracking and mapping (TAM) is used instead of using localization and mapping. TAM was first used in Parallel Tracking and Mapping (PTAM) [15] because localization and mapping are not simultaneously performed in a traditional way. Tracking is performed in every frame with one thread whereas mapping is performed at a certain timing with another thread. After PTAM was proposed, most of vSLAM algorithms follows the framework of TAM. Therefore, TAM is used in this paper.
+
+I do not quite follow the difference between localization and mapping versus tracking and mapping.
+
+- What is the difference?
+
+- What are some advantages of TAM?
+
+- Why is SLAM not called STAM?
+
+
+"
+"['minimax', 'expectiminimax']"," Title: Is there a probabilistic version of minimax?Body: How would a probabilistic version of minimax work?
+
+For example, we may choose a move that could result in a very bad outcome, but that outcome might just be extremely unlikely so we might think it would be worth the risk.
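+
+For instance, I imagine something along these lines (my own rough sketch of what I believe is called expectiminimax; the node interface here is hypothetical), where chance nodes weight the values of their children by probability instead of taking a hard min or max:
+
+def expectiminimax(node, depth):
+    # node is assumed to expose is_terminal, kind ('max', 'min' or 'chance'),
+    # children(), probability(child) and value() - all hypothetical names
+    if depth == 0 or node.is_terminal:
+        return node.value()
+    values = [expectiminimax(child, depth - 1) for child in node.children()]
+    if node.kind == 'max':
+        return max(values)
+    if node.kind == 'min':
+        return min(values)
+    # chance node: expected value over children
+    return sum(node.probability(c) * v for c, v in zip(node.children(), values))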
+"
+"['machine-learning', 'natural-language-processing', 'learning-algorithms', 'human-computer-interaction']"," Title: How would an AI learn the concept of the words ""repeat twice""?Body: In a hypothetical conversation:
+
+Person A - ""Repeat the word 'cat' twice"".
+Person B - ""cat cat"".
+
+
+I'm thinking about how a human or AI can learn the concept of ""repeat twice"". In reinforcement learning it would require that after the first sentence the AI would go through every random sentence until it got it right and hence got a reward.
+
+Another way might be the AI or human overhearing the conversation. Then, on hearing a repetition of a word, it may trigger some neurons in the brain related to detecting repetition. Thus, by Pavlovian learning, it may associate the word ""repeat"" or ""twice"" with these neurons. When given the stimulus of the word ""repeat"", these neurons may get triggered, making the brain run some repetition algorithm. (This is my favorite theory).
+
+I suppose a third way might be as follows:
+
+Person A - ""Hello! Hello!""
+Person B - ""Stop repeating yourself"".
+
+
+It might learn to associate repeating with the word ""repeating"" in this way.
+
+I think either way the brain must have some neurons dedicated to detecting repetitions and possibly enacting them. (I don't think any standard RNN has this capability).
+
+What do you think is the most likely way?
+"
+['deep-learning']," Title: Why are the error rates in table 3 and table 4 different in the paper ""deep residual learning for image recognition""?Body: Why are the error rates in Table 3 and Table 4 different in the paper Deep Residual Learning for Image Recognition (2015)?
+
+They are both error rates on the validation sets by single model.
+
+
+- Why are there different rates for the same architecture?
+
+"
+['constraint-satisfaction-problems']," Title: What is exactly the role of the commutative property in a Constraint Satisfaction Problem?Body: I have been looking into backtracking search for CSPs, and understand that if we just plainly do a typical depth-limited search, we have a vast tree with $n!d^n$ leaves, where $n$ is the number of variables and $d$ the domain size. It can also be easily understood that there exist only $d^n$ complete assignments. So the reason for the tree being so large is attributed to the fact that we are ignoring the commutativity of variable assignments in CSP. Can anyone please explain how exactly this commutative property comes into play?
+"
+"['search', 'minimax', 'breadth-first-search', 'depth-first-search', 'adversarial-search']"," Title: Why does the adversarial search minimax algorithm use Depth-First Search (DFS) instead of Breadth-First Search (BFS)?Body: I understand that the actual algorithm calls for using Depth-First Search, but is there a functionality reason for using it over another search algorithm like Breadth-First Search?
+"
+"['neural-networks', 'machine-learning', 'artificial-neuron']"," Title: Are neurons in layer $l$ only affected by neurons in the previous layer?Body: Are artificial neurons in layer $l$ only affected by those in layer $l-1$ (providing inputs) or are they also affected by neurons in layer $l$ (and maybe by neurons in other layers)?
+"
+"['neural-networks', 'recurrent-neural-networks', 'learning-algorithms']"," Title: How can I write out the Real-TIme Recurrent Learning Gradient equations for a network?Body: This question is about Real-Time Recurrent Learning Gradient on a Recurrent neural network .
+
+How can I write out the RTRL equations for a network ?
+
+Before present an example give let's introduce some notation :
+
+Notation
+
+
+So the network for which we want to write the RTRL equations is the following :
+
+Network
+
+
+A similar question can be found here at page 561 for another network .
+"
+"['deep-learning', 'training', 'features', 'inference']"," Title: Training and inference for highly-context-sensitive informationBody: What is the best way to train / do inference when the context matters highly as to what the inferred result should be?
+
+For example in the image below all people are standing upright, but because of the perspective of the camera, their location highly affects their skeletal pose. If the 2D inferred skeleton of the person on the right were located where the middle person is in pixel space, it should not be considered upright even though it should be considered upright where it is now.
+
+I assume the location would be fed in during both training and inference somehow, but I don't know the names of the techniques that should be used. Are there any best practices for this type of scenario?
+
+
+"
+"['neural-networks', 'long-short-term-memory', 'attention']"," Title: Why can't LSTMs keep track of the ""important parts"" of a sequence?Body: I keep reading about how LSTMs can't remember the ""important parts"" of a sequence which is why attention-based mechanisms are required. I was trying to use LSTMs to find people's name format.
+
+For example, ""Millie Bobby Brown"" can be seen as first_name middle_name last_name format, which I'll denote as 0, but then there's ""Brown, Millie Bobby"" which is last_name, first_name middle_name, which I'll denote as 1.
+
+The LSTM seems to be overfitting to one classification of format. I suspect it's because it's not paying special attention to the comma which is a key feature of what format it could be. I'm trying to understand why an LSTM won't work for a case like this. It makes sense to me because LSTMs are better at identifying sequence to sequence generation and things such as summarization and sentiment analysis usually require attention. I suspect another reason why the LSTM is not able to infer the format is that the comma can be placed in different indexes of the sequence, so it could be losing its importance in the hidden state the longer the sequence is (not sure if that makes sense). Anyone else has any theories? I'm trying to convince my fellow researchers that a pure LSTM won't be sufficient for this problem.
+"
+"['machine-learning', 'reinforcement-learning', 'agi', 'artificial-creativity']"," Title: Are current AI models sufficient to achieve Artificial General Intelligence?Body: I read an interesting essay about how far we are from AGI. There were quite a few solid points that made me re-visit the foundation of AI today. A few interesting concepts arose:
+
+imagine that you require a program with a more ambitious functionality: to address some outstanding problem in theoretical physics — say the nature of Dark Matter — with a new explanation that is plausible and rigorous enough to meet the criteria for publication in an academic journal.
+Such a program would presumably be an AGI (and then some). But how would you specify its task to computer programmers? Never mind that it’s more complicated than temperature conversion: there’s a much more fundamental difficulty. Suppose you were somehow to give them a list, as with the temperature-conversion program, of explanations of Dark Matter that would be acceptable outputs of the program. If the program did output one of those explanations later, that would not constitute meeting your requirement to generate new explanations. For none of those explanations would be new: you would already have created them yourself in order to write the specification. So, in this case, and actually in all other cases of programming genuine AGI, only an algorithm with the right functionality would suffice. But writing that algorithm (without first making new discoveries in physics and hiding them in the program) is exactly what you wanted the programmers to do!
+
+The concept of creativity seems like the initial thing to address when approaching a true AGI. The same type of creativity that humans have to ask the initial question or generate new radical ideas to long-lasting questions like dark matter.
+Is there current research being done on this?
+I've seen work with generating art and music, but it seems like a different approach.
+
+In the classic ‘brain in a vat’ thought experiment, the brain, when temporarily disconnected from its input and output channels, is thinking, feeling, creating explanations — it has all the cognitive attributes of an AGI. So the relevant attributes of an AGI program do not consist only of the relationships between its inputs and outputs.
+
+This is an interesting concept behind why reinforcement learning is not the answer. Without input from the environment, the agent has nothing to improve upon. However, with the actual brain, if you had no input or output, it is still in a state of "thinking".
+"
+"['robotics', 'decision-trees', 'path-planning']"," Title: What is the most suitable AI technique to use for path planning?Body: I am making a firetruck using Arduino Uno with flame sensors and ultrasonic sensors to detect how to move and where to go. As this is a project for my university, I am asked to implement AI in it for path planning.
+
+I am not sure whether to use something like A* technique or ID3 decision tree or if there is something better than both to implement path planning for my robot. Any suggestions?
+"
+"['neural-networks', 'pytorch', 'implementation', 'incremental-learning', 'elastic-weight-consolidation']"," Title: Why are the current means and the old ones the same in this implementation of Elastic Weight Consolidation?Body: I'm trying to re-implement Elastic Weight Consolidation (EWC) as outlined in this paper. As a reference, I am also using this Github repository (another implementation).
+
+My model/idea is pretty straightforward. Train the network to do the bit operation AND (e.g 1 && 0 = 0), then using EWC, train it to use OR (e.g 1 || 0 = 1). I've got three inputs: bit1, bit2 and operation (0 stands for AND and 1 for OR) and one output neuron - the output of the operation. For example, if I have 0 1 0 the ground truth should be 0.
+
+The problem, however, comes when calculating the EWC loss.
+
+def penalty(self, model: nn.Module):
+ loss = 0
+ for n, p in model.named_parameters():
+ _loss = self._precision_matrices[n] * (p - self._means[n]) ** 2
+ loss += _loss.sum()
+ return loss
+
+
+I've got two problems:
+
+
+- The current means (p) and the old ones (self._means[n]) are always the same, resulting in multiplication by 0, which completely negates EWC.
+- As I have just one output neuron the calculation of the fisher's matrix is a bit different than the repo. The one I have written seems to be wrong. Any ideas?
+
+
+I initialise the self._means[n] and self._precision_matrices (fisher's matrix) in the init method of the EWC model:
+
+class EWC(object):
+def __init__(self, model: nn.Module, dataset: list, device='cpu'):
+
+ self.model = model
+ self.dataset = dataset
+ self.device = device
+
+ self._means = {}
+ self._precision_matrices = self._diag_fisher()
+
+ for n, p in self.model.named_parameters():
+ self._means[n] = p.data.clone()
+
+def _diag_fisher(self):
+ precision_matrices = {}
+
+ # Set it to zero
+ for n, p in self.model.named_parameters():
+ params = p.clone().data.zero_()
+ precision_matrices[n] = params
+
+ self.model.eval()
+
+ for input in self.dataset:
+ input = input.to(self.device)
+
+ self.model.zero_grad()
+
+ output = self.model(input)
+ label = torch.sigmoid(output).round()
+ loss = F.binary_cross_entropy_with_logits(output, label)
+ # loss = F.nll_loss(F.log_softmax(output, dim=1), label)
+ loss.backward()
+
+ for n, p in self.model.named_parameters():
+ precision_matrices[n].data += p.grad.data ** 2 / len(self.dataset)
+
+ precision_matrices = {n: p for n, p in precision_matrices.items()}
+ return precision_matrices
+
+
+And this is the actual training:
+
+# Train the model EWC
+for epoch in tqdm(range(EPOCS)):
+
+ # Get the loss
+ ls = ewc_train(model, opt, loss_func, dataloader[task], EWC(model, old_tasks), importance, device)
+
+def ewc_train(model: nn.Module, opt: torch.optim, loss_func:torch.nn, data_loader: torch.utils.data.DataLoader, ewc: EWC, importance: float, device):
+ epoch_loss = 0
+
+ for i, (inputs, labels) in enumerate(data_loader):
+ inputs = inputs.to(device).long()
+ labels = labels.to(device).float()
+
+ opt.zero_grad()
+
+ output = model(inputs)
+ loss = loss_func(output.view(-1), labels) + importance * ewc.penalty(model)
+ loss.backward()
+ opt.step()
+
+ epoch_loss += loss.item()
+
+ return loss
+
+
+Note: the loss function that I am using is nn.BCEWithLogitsLoss() and the optimisation is SGD(params=model.parameters(), lr=0.001).
+"
+"['machine-learning', 'tensorflow', 'python', 'autoencoders']"," Title: Why does the denoising autoencoder always returns the same output?Body: I am trying to implement a denoising autoencoder (DAE) to remove noise from 1024-point FFT spectra. I am using two types of spectra: (1) that contain a distinctive high amplitude spectral peak and (2) that contain only noise peaks.
+
+If I understood correctly, I can train the DAE using the corrupted spectra (spectra + noise) and afterwards I can use it to remove noise from new datasets. The problem is that, when testing the DAE, it returns the type (1) spectrum mentioned above, regardless of the input. The same happens when I apply predict on the training data. This is the code I am using (Python/TensorFlow):
+
+# assuming these imports for the snippets below
+import numpy as np
+import tensorflow as tf
+from tensorflow.keras.layers import Input, Dense
+from tensorflow.keras.models import Model, save_model
+from sklearn.model_selection import train_test_split
+from sklearn.utils import shuffle
+
+def BuildModel(nInput):
+ input_dim = Input(shape = (nInput, ))
+
+ # Encoder Layers
+ encoded1 = Dense(896, activation = 'relu')(input_dim)
+ encoded2 = Dense(768, activation = 'relu')(encoded1)
+ encoded3 = Dense(640, activation = 'relu')(encoded2)
+ encoded4 = Dense(512, activation = 'relu')(encoded3)
+ encoded5 = Dense(384, activation = 'relu')(encoded4)
+ encoded6 = Dense(256, activation = 'relu')(encoded5)
+ encoded7 = Dense(encoding_dim, activation = 'relu')(encoded6)
+
+ # Decoder Layers
+ decoded1 = Dense(256, activation = 'relu')(encoded7)
+ decoded2 = Dense(384, activation = 'relu')(decoded1)
+ decoded3 = Dense(512, activation = 'relu')(decoded2)
+ decoded4 = Dense(640, activation = 'relu')(decoded3)
+ decoded5 = Dense(768, activation = 'relu')(decoded4)
+ decoded6 = Dense(896, activation = 'relu')(decoded5)
+ decoded7 = Dense(nInput, activation = 'sigmoid')(decoded6)
+
+ # Combine Encoder and Deocoder layers
+ autoencoder = Model(inputs = input_dim, outputs = decoded7)
+
+ autoencoder.summary()
+ # Compile the Model
+ autoencoder.compile(optimizer=OPTIMIZER, loss='binary_crossentropy')
+ #autoencoder.compile(loss='mean_squared_error', optimizer = RMSprop())
+
+ return autoencoder
+
+X_train, X_test, y_train, y_test = train_test_split(spectra.iloc[:,0:spectra.shape[1]-1], spectra['Class'], test_size=testDatasetSize, stratify=spectra.Class, random_state=seedValue)
+
+X_train, y_train = shuffle(X_train, y_train, random_state=seedValue)
+X_test, y_test = shuffle(X_test, y_test, random_state=seedValue)
+
+X_unseen = X_train.to_numpy()[0:1000,:] # Data not used for training, only for testing
+y_unseen = y_train.to_numpy()[0:1000]
+X_train = X_train.iloc[1000:]
+y_train = y_train.iloc[1000:]
+
+# Scaling
+maxVal = max(X_train)
+X_train = (X_train/maxVal).to_numpy()
+X_test = (X_test/maxVal).to_numpy()
+X_unseen = (X_unseen/maxVal)#.to_numpy()
+
+# Corrupted data
+noise_factor = 0.01
+X_train_noisy = X_train + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=X_train.shape)
+X_test_noisy = X_test + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=X_test.shape)
+X_unseen_noisy = X_unseen + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=X_unseen.shape)
+
+ae = BuildModel(X_train.shape[1])
+PrintConsoleLine('Creating model finished')
+print('')
+
+history = ae.fit(X_train_noisy, X_train, epochs=NB_EPOCH, batch_size=BATCH_SIZE, validation_data=[X_test_noisy, X_test])
+save_model(ae, modelFile, overwrite=True)
+
+# Test
+X = X_unseen
+X_noisy = X_unseen_noisy
+X_denoised = ae.predict(X_noisy) # X_train gives the same result (spectra type (1)) !?!
+N = len(X_denoised[0,:])
+index = 6
+PlotDataSimple(3, np.linspace(0,N-1,N), X[index,:], 'Frequency domain', 'Index', 'Amplitude', None)
+PlotDataSimple(4, np.linspace(0,N-1,N), X_noisy[index,:], 'Frequency domain', 'Index', 'Amplitude', None)
+PlotDataSimple(5, np.linspace(0,N-1,N), X_denoised[index, :], 'Frequency domain', 'Index', 'Amplitude', None)
+
+Dataset shape: (17000, 65, 65, 1) (files, samples X axis, samples Y axis, class)
+Train on 12600 samples, validate on 3400 samples
+Epoch 1/3
+12600/12600 [==============================] - 26s 2ms/sample - loss: 0.6813 - val_loss: 0.4913
+Epoch 2/3
+12600/12600 [==============================] - 14s 1ms/sample - loss: 0.1621 - val_loss: 0.0578
+Epoch 3/3
+12600/12600 [==============================] - 16s 1ms/sample - loss: 0.0230 - val_loss: 0.0169
+
+
+The results I am getting (Column 1 - Initial signal, Column 2 - Corrupted signal, Column 3 - Denoised signal):
+
+
+
+So why does the DAE output the same spectra regardless of the inputs? Am I misunderstanding the DAE principle or is there a problem in my implementation?
+"
+"['machine-learning', 'generative-model', 'variational-autoencoder']"," Title: What is the input for the prior model of VQ-VAE?Body: I'm trying to implement the VQ-VAE model. In there, a continuous variable $x$ is encoded in an array $z$ of discrete latent variables $z_i$ that are mapped each to an embedding vector $e_i$. These vectors can be used to generate an $\hat{x}$ that approximates $x$.
+
+In order to obtain a reasonable generative model $p_\theta(x)=\int p_\theta(x|z)p(z)$, one needs to learn the prior distribution of the code $z$. However, it is not clear in this paper, or its second version, what should be the input of the network that learns the prior. Is it $z=[z_i]$ or $e=[e_i]$? The paper seems to indicate that it is $z$, but if that's the case, I don't understand how I should encode $z$ properly. For example, a sample of $z$ might be an $n\times n$ matrix with discrete values between $0$ and $511$. It is not reasonable to me to use a one-hot encoding, nor to simply use the discrete numbers as if they were continuous, given that there is no defined order for them. On the other hand, using $e$ doesn't have this problem since it represents a matrix with continuous entries, but then the required network would be much bigger.
+
+So, what should be the input for the prior model? $z$ or $e$? If it is $z$, how should I represent it? If it is $e$, how should I implement the network?
+"
+"['machine-learning', 'comparison', 'linear-regression', 'non-linear-regression']"," Title: What is the difference between linear and non-linear regression?Body: In machine learning, I understand that linear regression assumes that parameters or weights in equation should be linear. For Example:
+
+$$y = w_1x_1 + w_2x_2$$
+
+is a linear equation where $x_1$ and $x_2$ are feature variables and $w_1$ and $w_2$ are parameters.
+
+Also
+
+$$y = w_1(x_1)^2 + w_2(x_2)^2$$
+
+is also linear as parameters $w_1$ and $w_2$ are linear with respect to $y$.
+
+Now, I read some articles stating that in the equation like
+
+$$y = \log(w_1)x_1 + \log(w_2)x_2$$
+
+can also be made linear by considering other variables $v_1$ and $v_2$ as:
+
+\begin{align}
+v_1 &= \log(w_1)\\
+v_2 &= \log(w_2)
+\end{align}
+
+Thus,
+
+$$y = v_1x_1 + v_2x_2$$
+
+So, in this sense, any non-linear equation can be made linear, then what is non-linear regression here? I think I am missing something important here. I am a beginner in the field of Machine Learning. Can somebody help me?
+"
+"['training', 'image-recognition', 'datasets']"," Title: Is there a way of automatically drawing bounding boxes around interested objects?Body: Given thousands of images, where some of the images contain target objects and others do not, is there an easy way of drawing bounding boxes on these target objects rather than relying on manual annotation? Wouldn't drawing 4 orientations of an object and their respective bounding boxes and randomly inserting them into the images be a viable option?
+It becomes painful to manually annotate thousands of images by yourself.
+"
+"['neural-networks', 'long-short-term-memory', 'math']"," Title: How does the forget layer of an LSTM work?Body: Can someone explain the mathematical intuition behind the forget layer of an LSTM?
+
+So as far as I understand it, the cell state is essentially long term memory embedding (correct me if I'm wrong), but I'm also assuming it's a matrix. Then the forget vector is calculated by concatenating the previous hidden state and the current input and adding the bias to it, then putting that through a sigmoid function that outputs a vector then that gets multiplied by the cell state matrix.
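+
+For reference, the standard formulation I am trying to make sense of is (as I understand it):
+
+$$f_t = \sigma(W_f [h_{t-1}, x_t] + b_f), \qquad c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$$
+
+where $f_t$ is the forget vector, $c_t$ is the cell state, $i_t$ is the input gate, $\tilde{c}_t$ is the candidate cell state, and $\odot$ denotes element-wise multiplication.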
+
+How does a concatenation of the previous time step's hidden state and the current input, together with the bias, help decide what to forget?
+
+Why is the previous hidden state, current input and the bias put into a sigmoid function? Is there some special characteristic of a sigmoid that creates a vector of important embeddings?
+
+I'd really like to understand the theory behind calculating the cell states and hidden states. Most people just tell me to treat it like a black box, but I think that, in order to have a successful application of LSTMs to a problem, I need to know what's going on under the hood. If anyone has any resources that are good for learning the theory behind why cell state and hidden state calculation extract key features in short and long term memory I'd love to read it.
+"
+['neural-networks']," Title: What is a working configuration of a neuronal network (number of layers, lerning rate and so on) for a specific dataset?Body: I try to solve some easy functions with a neuronal network (aforge-lib):
+
+This is how I generate the dataset:
+
+const int GesamtAnzahl = 200;
+float[,] tempData = new float[GesamtAnzahl, 2];
+float minX = float.MaxValue;
+float maxX = float.MinValue;
+
+Random rnd = new Random();
+// four random boundaries (granzen) that define the two intervals where the output is 1
+var granzen = new List<int>()
+{
+ rnd.Next(1, GesamtAnzahl-1),
+ rnd.Next(1, GesamtAnzahl-1),
+ rnd.Next(1, GesamtAnzahl-1),
+ rnd.Next(1, GesamtAnzahl-1),
+};
+granzen.Sort();
+
+for (int i = 0; i < GesamtAnzahl; i++)
+{
+
+ var x = i;
+ var y = -1;
+ if ((i > granzen[0] && i < granzen[1]) ||
+ (i > granzen[2] && i < granzen[3]))
+ {
+ y = 1;
+ }
+ tempData[i, 0] = x;
+ tempData[i, 1] = y;
+}
+
+
+So this is quite easy: the output is 1 if the input is between the 2 lower randomly generated ""borders"" or between the 2 higher ones. Otherwise, the output is -1.
+
+The input values are standardized to fit between -1 and 1, so an input of 0 maps to -1 and 200 maps to 1.
+
+As a network I used a BackPropagationLearning with a BipolarSigmoidFunction and several configurations like:
+
+Learning Rate: 0,1
+Momentum: 0
+Sigmoids alpha value: 2
+Hidden Layer 1: 4 neurons
+Hidden Layer 2: 2 neurons
+
+
+Learning Rate: 0,1
+Momentum: 0
+Sigmoids alpha value: 2
+Hidden Layer 1: 4 neurons
+Hidden Layer 2: 2 neurons
+Hidden Layer 3: 2 neurons
+
+
+Learning Rate: 0,2
+Momentum: 0
+Sigmoids alpha value: 2
+Hidden Layer 1: 4 neurons
+Hidden Layer 2: 2 neurons
+Hidden Layer 3: 2 neurons
+
+
+and so on. None of them worked. As described here: https://towardsdatascience.com/beginners-ask-how-many-hidden-layers-neurons-to-use-in-artificial-neural-networks-51466afa0d3e it should be enough to have 2 hidden layers. The first one with 4 neurons and the second one with 2.
+
+The configurations which worked best were:
+
+Learning Rate: 0,01
+Momentum: 0
+Sigmoids alpha value: 2
+Hidden Layer 1: 4 neurons
+Hidden Layer 2: 4 neurons
+Hidden Layer 3: 4 neurons
+
+Learning Rate: 0,02
+Momentum: 0
+Sigmoids alpha value: 2
+Hidden Layer 1: 4 neurons
+Hidden Layer 2: 2 neurons
+
+
+This solves the problem about 50% of the time.
+
+As this is quite a simple problem, I wonder if I am doing something wrong. I think there has to be a configuration which gives better results.
+
+What is the best configuration for this problem and why?
+
+Additionally I tried:
+
+
+- Having more data does not help. I created a dataset of 5000 points (GesamtAnzahl = 5000). Then the networks have an even worse success rate.
+- I tried to add an extra constant input (always 1) to the dataset, but this also lowered the success rate.
+
+"
+"['machine-learning', 'datasets', 'objective-functions', 'binary-classification', 'imbalanced-datasets']"," Title: How to perform binary classification when one class is more predominant than the other?Body: Assuming we have big $m \times n$ input dataset, with $m \times 1$ output vector. It's a classification problem with only two possible values: either $1$ or $0$.
+Now, the problem is that almost all elements of the output vector are $0$s with a very few $1$s (i.e. it's a sparse vector), such that if the neural network would "learn" to give always 0 as output, this would produce high accuracy, while I'm also interested in learning when the 1s occurs.
+I thought one possible approach could be to write a custom loss function giving more weight to the 1s, but I'm not completely sure if this would be a good solution.
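+
+As a sketch of what I mean by giving more weight to the 1s (assuming TensorFlow/Keras here; the data, the tiny model and the weight of 50 for the positive class are all placeholders I made up, not tuned values):
+
+from tensorflow import keras
+import numpy as np
+
+X_train = np.random.randn(1000, 20)                    # stand-in features
+y_train = (np.random.rand(1000) < 0.02).astype(int)    # roughly 2% positives, like my data
+
+model = keras.Sequential([
+    keras.layers.Dense(16, activation='relu', input_shape=(20,)),
+    keras.layers.Dense(1, activation='sigmoid'),
+])
+model.compile(optimizer='adam', loss='binary_crossentropy')
+
+# penalise mistakes on the rare class (1) roughly 50x more than on class 0
+model.fit(X_train, y_train, epochs=3, batch_size=64, class_weight={0: 1.0, 1: 50.0})
+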
+What kind of strategy can be applied to detect such outliers?
+"
+"['convolutional-neural-networks', 'probability']"," Title: How can I convert the probability score between 0 to 1 to another format?Body: I have trained a multi-class CNN model using fastai. The model splits out probabilites for each of the three classes, which, of course, sum up to 1. The class with highest probability becomes the predicted class.
+
+Is there any way I can convert them into 0 to 1 scale, where near to 0 value would mean class 1, near to 0.5 would mean class 2 and near to 1 would mean class 3?
+"
+"['machine-learning', 'reinforcement-learning']"," Title: Reinforcement learning possible with big action space?Body: I’m experimenting with reinforcement learning for a 2D pixel plotting task, and am running into an issue that (I think) has to do with the big action space. It goes like this:
+
+The Agent gets two vector inputs each step.
+Each describes an (n x n) 2d matrix composed of zeros and ones.
+
+One is the (n x n) target matrix, containing a certain shape of zeros
+The other is an (n x n) state matrix, containing another shape
+
+Every step, I want my agent to pick an (x, y) coordinate:
+x (picks one of n)
+y (picks one of n)
+
+This will turn a zero into one, or one into zero.
+
+Every step, if the choice is correct, I give a small reward, and the agent gets punished when it is incorrect.
+
+I'm training the agent (a network with 3 layers of 256 hidden units) with PPO and a curiosity term in the loss, and for a 12 x 12 matrix it works quite well (not 100%, but okay; see image). Note that the agent doesn't get enough steps here to fully delete the initial shape when the target shape is empty, which is why it doesn't fully make it. It takes about 800K steps to converge, though.
+
+
+But the agent starts getting stuck in local minima when I increase the size to 32 x 32 and beyond.
+
+This one is at 32 x 32:
+
+
+
+Is this even scalable to bigger matrices? I was hoping to go 3D eventually, reaching 100x100x100.
+
+I do realize that I have a huge input and action space when working with such a grid.
+Is something like that even possible with an RL paradigm? I've tried increasing the network size and decreasing the learning rate, but I'm not satisfied. Any ideas or alternative approaches to plot pixels like this?
+
+Any input is very much appreciated!
+Thanks!
+"
+"['python', 'cross-validation', 'scikit-learn']"," Title: How can I split the data into training and validation sets such that entries with a certain value are kept together?Body: I have the following kind of data frame. These are just example:
+
+A 1 Normal
+A 2 Normal
+A 3 Stress
+B 1 Normal
+B 2 Stress
+B 3 Stress
+C 1 Normal
+C 2 Normal
+C 3 Normal
+
+
+I want to do 5-fold cross-validation, and I am splitting the data using
+
+from sklearn.model_selection import StratifiedKFold
+from fastai.vision import *
+
+skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
+
+data = (ImageList.from_folder(PATH)
+ .split_by_rand_pct(valid_pct=0.2)
+ .label_from_folder()
+ .transform(get_transforms(do_flip=True, flip_vert= True,max_zoom=1.1, max_rotate=10, max_lighting=0.5),size=224)
+ .databunch()
+ .normalize() )
+
+
+It works great: it splits the data randomly, which is expected. However, I want to keep data points that have the same value in column 1 together in either the training or the validation set. So, all the A's would be in either the training or the validation dataset, all the B's would be in either the training or the validation dataset, and so on.
+
+More info on my data:
+I have cell assay images which are labelled into three classes. These images are big, so I split each image into 16 small, non-overlapping tiles to bring the size down to 224 (small enough to feed into the CNN). All the tiles have the same label as the original image, and these tiles are the final input to the CNN. To perform cross-validation, I need to keep the tiles of the same image together in one fold and set.
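+
+To illustrate the grouping behaviour I am after (independently of fastai), here is a rough sketch of what I mean, assuming scikit-learn's GroupKFold; the tiny arrays are made-up stand-ins for my data, and groups holds the column-1 value (or parent image id) of each row:
+
+import numpy as np
+from sklearn.model_selection import GroupKFold
+
+X = np.arange(9).reshape(9, 1)                           # stand-in features
+y = [0, 0, 1, 0, 1, 1, 0, 0, 0]                          # stand-in labels
+groups = ['A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C']   # column-1 value of each row
+
+gkf = GroupKFold(n_splits=3)
+for train_idx, valid_idx in gkf.split(X, y, groups=groups):
+    pass  # rows sharing a group value always land on the same side of the split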
+"
+"['game-theory', 'environment', 'markov-decision-process', 'benchmarks']"," Title: Interesting examples of discrete stochastic gamesBody: SGs are a generalization of MDPs to multiple agents. Like this previous question on MDPs, are there any interesting examples of zero-sum, discrete SGs—preferably with small state and action spaces? I'm hoping to use such examples as benchmarks, but couldn't find much in the literature. One example I can think of is a pursuit-evasion game on a graph.
+"
+"['neural-networks', 'deep-learning', 'batch-normalization']"," Title: How does a batch normalization layer work?Body: I understood that we normalize to input features in order to bring them on the same scale so that weights won't be learned in arbitrary fashion and training would be faster.
+
+Then I studied batch normalization and observed that we can normalize the outputs of the hidden layers in the following way:
+
+Step 1: normalize the output of the hidden layer so that it has zero mean and unit variance, i.e. is standard normal (subtract the mean and divide by the standard deviation of that minibatch).
+
+Step 2: rescale this normalized vector to a new vector with a new distribution having mean $\beta$ and standard deviation $\gamma$, where both $\beta$ and $\gamma$ are trainable.
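+
+To make the two steps concrete, this is the transformation as I understand it, using the mini-batch mean $\mu_B$ and variance $\sigma_B^2$ (the small $\epsilon$ is just for numerical stability):
+
+$$\hat{x} = \frac{x - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad y = \gamma \hat{x} + \beta$$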
+
+I did not understand the purpose of the second step. Why can't we just do the first step, make the vector standard normal, and then move forward? Why do we need to rescale the input of each hidden neuron to an arbitrary distribution which is learned (through beta and gamma parameters)?
+"
+"['self-supervised-learning', 'representation-learning']"," Title: How to understand the concept of self-supervised learning in AI?Body: I am new to self-supervised learning and it all seems a little magical at the moment.
+The only way I can get an intuitive understanding is to assume that, for real-world problems, features are still embedded at a per-object level.
+For example, to detect cats in unseen images, my self-supervised network would still have to be trained exclusively on images of cats.
+So, if I had 100 images of cats and 100 images of dogs, then I thought self-supervised approaches would learn the features of the images. For example, if an image is rotated 90 degrees, it learns what was in the image that was rotated 90 degrees. However, if I wanted to classify just cats using this representation, then I wouldn't be able to do so without separating out what makes a cat a cat and a dog a dog.
+Is my assumption correct?
+"
+"['machine-learning', 'reinforcement-learning', 'convolutional-neural-networks']"," Title: Flattened vector observation or convolutional neural network input?Body: This is more of a general question of how to model/preprocess 'visual' state-observations to an Agent in Reinforcement Learning that I'll illustrate with an example.
+
+Say you have a reinforcement learning problem where the agent has to draw pixels in an n * n 2D state-matrix of 0's and 1's. Say n = 100. The agent can move one step (up, down, left, right) and on its location can additionally switch 0's into 1's or the other way around.
+
+Each step, it needs to take action so that the state-matrix resembles an n * n target-matrix (that has a certain shape). It is rewarded accordingly each step.
+
+The agent will know its location from an x and y position that are given in addition to the state- and target-matrix each step.
+
+Now I'm curious about the best way to represent the state to the agent: using a visual 'prior', or not. Here are two ways:
+
+
+- Based on that you want to give only the essential information to the agent: The agent is presented with a matrix (with target subtracted from state), that will be flattened into one array of n^2. Additionally it'll know its current location as an additional (x, y) vector observation.
+- Based on that (1) would be more difficult to solve for a human, because you'll have to learn from a flattened array how different points are connected (think about how hard a flattened game of chess would be), you can also use a convolutional neural network to encode the current scene. In this case the agent will be e.g. a red dot. Given that it's such a visual task, it seems to me that using this would give the agent a better model of how the environment works, since the spatial relations are kept intact. Also it feels that keeping the 2D shape intact with a CNN would mean that it'd form better representations that generalize to other shapes, but I can't really say why.
+
+
+On the other hand one could say that it's arrogant to assume that our 'human' spatial way of interpreting visual information is the best way for this case. Maybe there's a mathematical solution?
+
+Any ideas?
+"
+"['convolutional-neural-networks', 'training', 'accuracy', 'loss']"," Title: How to explain peak in training history of a convolutional neural network?Body: I am training a simple convolutional neural network to recognize two types of 1024-point frequency spectra (FFT). This is the model I'm using:
+
+from keras.models import Sequential
+from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense, LeakyReLU
+
+cnn = Sequential()
+cnn.add(Conv1D(filters=64, kernel_size=3, activation=LeakyReLU(), input_shape=(nInput,1)))
+cnn.add(Conv1D(filters=64, kernel_size=3, activation=LeakyReLU()))
+cnn.add(MaxPooling1D(pool_size=2))
+cnn.add(Flatten())
+cnn.add(Dense(nFinalDense, activation=LeakyReLU()))
+cnn.add(Dense(nOutput, activation='sigmoid'))
+
+
+However I get the following accuracy and loss during training:
+
+
+Why do I get the large peak in both plots? How can it be explained? Is there a problem with the data I'm using (I mention that I obtain a similar peak when training an autoencoder for denoising using the same data)?
+"
+"['reinforcement-learning', 'deep-rl', 'self-play']"," Title: How to deal with nonstationary rewards in asymmetric self-play reinforcement learning?Body: Suppose we're training two agents to play an asymmetric game from scratch using self play (like Zerg vs. Protoss in Starcraft). During training one of the agents can become stronger (discover a good broad strategy for example) and start winning most of the time, which causes big portion of the state values (or Q(s,a) values) become very high for this agent and low for another, just because the first is generally stronger and receives most of the rewards. Some training time later the other one finds a weakness in the first's play (in many states too) and starts dominating and the reward stream shift the other way.
+
+The problem is, we have to retrain the function approximator (a deep neural net) to fit wildly different value/Q estimates, which slows down and destabilizes learning. For each of the agents, this is similar to a highly nonstationary environment (the opponent), which can be harsh or easy at times.
+
+What do people usually do in such a case? I think what is needed is some kind of slowly changing baseline (similar to advantage in A2C), but applied to the reward values themselves.
+"
+"['gradient-descent', 'pytorch']"," Title: How can I train a neural network to find the hyper-parameters with which the data was generated?Body: I have 10000 tuples of numbers (x1, x2, y)
generated from the equation: y = np.cos(0.583 * x1) + np.exp(0.112 * x2)
. I want to use a neural network, trained with gradient descent, in PyTorch, to find the 2 parameters, i.e. 0.583 and 0.112
+
+Here is my code:
+
+import torch
+import torch.nn as nn
+import torch.optim as optim
+
+class NN_test(nn.Module):
+ def __init__(self):
+ super().__init__()
+ self.a = torch.nn.Parameter(torch.tensor(0.7))
+ self.b = torch.nn.Parameter(torch.tensor(0.02))
+
+ def forward(self, x):
+ y = torch.cos(self.a*x[:,0])+torch.exp(self.b*x[:,1])
+ return y
+
+model = NN_test().cuda()
+
+lrs = 1e-4
+optimizer = optim.SGD(model.parameters(), lr = lrs)
+loss = nn.MSELoss()
+
+epochs = 30
+for epoch in range(epochs):
+ model.train()
+ for i, dtt in enumerate(my_dataloader):
+ optimizer.zero_grad()
+
+ inp = dtt[0].float().cuda()
+ output = dtt[1].float().cuda()
+
+ ls = loss(model(inp),output)
+
+ ls.backward()
+ optimizer.step()
+ if epoch%1==0:
+ print(""Epoch: "" + str(epoch), ""Loss Training: "" + str(ls.data.cpu().numpy()))
+
+
+where x
contains the 2 numbers x1
and x2
. In theory, it should work easily, but the loss doesn't go down. What am I doing wrong?
+"
+"['neural-networks', 'deep-learning', 'autoencoders', 'latent-variable']"," Title: What are some new deep learning models for learning latent representation of data?Body: I know that autoencoders are one type of deep neural networks that can learn the latent representation of data. I guess there should be several other models like autoencoders.
+
+What are some new deep learning models for learning latent representation of data?
+"
+"['deep-learning', 'keras', 'dimensionality', 'video-classification']"," Title: How to handle a high dimensional video (large number of frames per video) data for training a video classification networkBody: I have a video dataset as follows.
+
+Dataset size: 1k videos
+
+Frames per video: 4k (average) and 8k (maximum)
+
+Labels: Each video has one label.
+
+So the size of my input will be (N, 8000, 64, 64, 3),
+where 64 is the height and width of the video frames. I use Keras. I am not really sure how to do end-to-end training with this kind of dataset. I was thinking of dividing each input into blocks of frames, (N, 80, 100, 64, 64, 3), for training, but that still won't allow end-to-end network training.
+
+I am not in favor of dropping the frames. That might be my last choice.
+
+Any help will be appreciated. Thanks in advance.
+"
+"['neural-networks', 'classification', 'image-recognition', 'video-classification']"," Title: How to classify human actions?Body: I'm quite new to machine learning (I followed the Coursera course of Andrew Ng and now starting deeplearning.ai courses).
+
+I want to classify human actions real-time like:
+
+
+- Left-arm bended
+- Arm above shoulder
+- ...
+
+
+I first did some research for pre-trained models, but I didn't find any.
+Because I'm still quite new, I want to have advice about how I should solve this.
+
+
+- I thought maybe I need to create for every action enough pictures and from there on I can do image classification.
+- Or I use PoseNet from TensorFlow so that I have the pose estimation points. And from there on I create videos of a couple of seconds with every pose I want to track and I save the estimation points. From there on, I use a classification algorithm (neural network) to classify those points.
+
+
+What is the most efficient option or are they both bad and is there a better way to do this?
+"
+"['neural-networks', 'deep-learning', 'math', 'memory', 'neural-turing-machine']"," Title: How does the memory mechanism (reading and writing) work in a neural Turing machine?Body: In neural Turing machine (NTM), reading memory is represented as
+
+\begin{align}
+r_t \leftarrow \sum\limits_i^R w_t(i) \mathcal{M}_t(i) \tag{2}
+\end{align}
+
+and writing to memory is represented as
+
+Step1: Erase
+
+\begin{align}
+\mathcal{M}_t^{erased}(i) \leftarrow \mathcal{M}_{t-1}(i)[\mathbf{1} - w_t(i) e_t ] \tag{3}
+\end{align}
+
+Step2: Add
+
+\begin{align}
+\mathcal{M}_t(i) \leftarrow \mathcal{M}_t^{erased}(i) + w_t(i) a_t \tag{4}
+\end{align}
+
+In the reading mechanism, if we take these example values and apply them to the above formula, instead of a vector we get a scalar value of 2.
+
+M_t =[[1,0,1,0],
+ [0,1,0,0],
+ [1,1,1,0]]
+
+w_t = [1,1,1]
+
+
+The same thing happens in writing as well; here we take the dot product of two vectors, $w_t(i) e_t$, with a scalar value as output. According to the paper, unless $w_t$ or $e_t$ are zeros, it will erase all values in the memory matrix.
+
+My own idea about NTM memory was that it uses the weights to find the indices or rows inside the memory matrix corresponding to a certain task.
+
+How does the memory in NTM work?
+
+How is the memory for a particular task stored? That is, is it stored row-wise, or is it stored across the whole matrix?
+"
+"['deep-learning', 'object-recognition', 'papers', 'yolo']"," Title: YOLO 9000 about Better StrongerBody: In this paper, YOLO has three features compared to YOLO v1. This question is about Better and Faster.
+
+In the Better section, there are many techniques, such as batch norm, anchor boxes and so on. In the Faster section, there is only Darknet.
+Darknet has 19 conv layers, but it doesn't use layer norm or a passthrough layer. So, I think that Darknet doesn't use the Better-section techniques.
+
+Is the Better-section model different from the Faster-section model?
+In my understanding, there are three models named YOLO v2: the first is the Better YOLO v2, the second is the Faster YOLO v2, and the third is the Stronger YOLO v2. Is this right?
+"
+"['q-learning', 'temporal-difference-methods']"," Title: N-tuple based tic tac toe diverges in temporal difference learningBody: I have n-tuple based tic tac toe. I already have perfect minimax player and perfectly trained table-based player. My n-tuple network consists of 8 different rows of 3 of the board as triplets having possible empty, X or O, and one bit defining who's move is now, so totally 2 * 3^3 = 54 states in tuple. I train and update weights with the idea of the pseudo code from ""Learning to Play Othello with N-Tuple Systems"" by Simon Lucas:
+
+public void inGameUpdate(double[] prev, double[] next) {
+ double op = tanh(net.forward(prev));
+ double tg = tanh(net.forward(next));
+ double delta = alpha * (tg - op) * (1 - op * op);
+ net.updateWeights(prev, delta);
+}
+
+public void terminalUpdate(double[] prev, double tg) {
+ double op = tanh(net.forward(prev));
+ double delta = alpha * (tg - op) * (1 - op * op);
+ net.updateWeights(prev, delta);
+}
+
+
+And the score is the sum of the weights of those rows of 3. The temporal difference training generally works for n-tuple-based tic-tac-toe, and after several thousand games it mostly plays perfectly. But after a while it diverges from perfection and oscillates between perfect and near-perfect play. I realized this happens in situations like this:
+
+OXO
+X-O
+-XX
+
+
+I suspect this is because a row that prevents the opponent from winning has a big value, and having two such rows seems to be better (to the network) than the loss that follows later.
+
+I know I can get a perfect player based on this particular n-tuple network. I could just stop training after I reach perfection, but in bigger games I can't do that. I fiddled with different alpha values in the range 0.1-0.0001 and e-greedy epsilon values of 1%-50%, or adaptive ones. Increasing epsilon to about 50% somewhat mitigates this effect, but this value is generally far too big to use in other games.
+
+Here are a couple of questions:
+
+
+- Does this effect have a name in the machine learning world? The network values preventing the opponent from winning, but if the opponent has more opportunities to win, this value becomes bigger, so that it exceeds the (negative) value of losing.
+- Aside from probably using different n-tuple networks and tweaking hyper-parameters, what can I do to mitigate or eliminate this effect?
+- In bigger games, this learning and n-tuple system gives fairly good results, but I see big oscillations after certain points. For example, in the game Breakthrough against 1-ply minimax, after it reaches about a 60% winrate (testing 10000 games after training every 10000 games against itself), its winrate goes slightly up, but in testing its winrate oscillates between 45% and 65%. Can this effect be caused by the problem I mentioned in 1.?
+
+"
+"['machine-learning', 'decision-trees', 'metric']"," Title: Why information gain with entropy as impurity function can't be used as a splitting method for Decision Tree Regression?Body: In Decision Tree Regression, we can use 'Reduction in Variance' or MSE (Mean Squared Errors) as splitting methods. There are methods like Gini Index, Information Gain, Chi-Square for splitting on classification trees. Now, I read somewhere that we cannot use Information gain (with impurity function as entropy) as a splitting method for regression trees. Why is it so, and what other methods are there which we can and cannot use, and why?
+
+EDITS:
+
+Please suggest a reference to understand the maths behind it.
+
+The references I used are :
+
+https://www.analyticsvidhya.com/blog/2016/04/complete-tutorial-tree-based-modeling-scratch-in-python/
+
+https://www.python-course.eu/Regression_Trees.php
+
+https://towardsdatascience.com/https-medium-com-lorrli-classification-and-regression-analysis-with-decision-trees-c43cdbc58054
+
+In the first article, it is mentioned that:
+
+
+ Gini Index, Chi-Square and Information gain (impurity function as entropy) algorithms are used for Classification trees while Reduction in Variance is used for Regression Trees.
+
+
+In the second article, it is mentioned that:
+
+
+ Since our target feature is continuously scaled, the IGs of the categorically scaled descriptive features are no longer appropriate splitting criteria.
+
+ As stated above, the task during growing a Regression Tree is in principle the same as during the creation of Classification Trees. Though, since the IG turned out to be no longer an appropriate splitting criteria (neither is the Gini Index) due to the continuous character of the target feature we must have a new splitting criteria.
+ Therefore we use the variance which we will introduce now.
+
+
+In the third article, it is mentioned that:
+
+
+ ""Entropy as a measure of impurity is a useful criterion for classification. To use a decision tree for regression, however, we need an impurity metric that is suitable for continuous variables, so we define the impurity measure using the weighted mean squared error (MSE) of the children nodes instead""
+
+
+Thank You!
+"
+"['machine-learning', 'computer-vision', 'object-detection', 'image-processing', 'image-segmentation']"," Title: Find the nearest object in a image which is captured from camera?Body: Objective : To find the nearest object (closer distance object) in the single camera image. But Image Contains multiple objects shown below:
+
+
+
+I searched on the net and found this formula to calculate the distance of an object from the camera:
+
+F = (P x D) / W (click on the link for further detail)
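+
+If I understand that formula correctly, F is the focal length in pixels, obtained from a calibration image in which an object of known width W appears P pixels wide at a known distance D; the distance to a new detection then follows from the same triangle-similarity relation. A rough sketch of what I mean (all numbers and variable names below are mine, just for illustration):
+
+W_cm = 20.0      # known real width of the reference object (made-up number)
+D_cm = 100.0     # known distance at which the calibration photo was taken (made-up)
+P_px = 250.0     # apparent width of the object in the calibration image, in pixels (made-up)
+
+focal_px = (P_px * D_cm) / W_cm          # F = (P x D) / W
+
+p_px = 125.0     # apparent width of a detected object in a new image (made-up)
+distance_cm = (W_cm * focal_px) / p_px   # larger apparent width means a closer object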
+
+Is there any better approach to find the nearest object in an image?
+
+Thanks in Advance!!!
+"
+['datasets']," Title: Should I add some noise when the dataset is small?Body: I want to create a small dataset (about 10 classes and 20-30 images each), should I add some noise (wrong label sample) in the training, validation and test datasets, and why?
+"
+"['autoencoders', 'pytorch', 'variational-autoencoder']"," Title: Why is my variational auto-encoder generating random noise?Body: This is my first variational autoencoder. Background info: I am using the MNIST digits dataset. The model is created and trained in PyTorch. The model is able to get a reasonably low loss, but the images that it generates are just random noise. Here are my script and the images that were generated by the model: https://github.com/jweir136/PyTorch-Variational-Autoencoder-Latent-Space-Visualization.
+
+Any advice or answers on how to solve this problem are greatly appreciated.
+"
+"['convolutional-neural-networks', 'python', 'keras', 'gradient-descent']"," Title: How many parameters are being optimised over in a simple CNN?Body: Okay so here's my CNN (simple example from a tutorial) along with some arithmetic to get the total number of free parameters.
+
+We've got a dataset of 28*28 grayscale image (MNIST).
+
+
+- First layer is a 2D convolution using 32 3x3 kernels. Dimensionality of the output is 26x26x32 (kernel stride length was 1 and we have 32 feature maps of 26x26). Running parameter count: 288
+- Second layer is 2x2 MaxPool with a 2x2. Dimensionality of the output is 13x13x32 but then we flatten so we got a vector of length 5408. No extra parameters here.
+- Third layer is Dense. A 5408x100 matrix. Dimensionality of the output is 100. Running parameter count: 541088
+- Fourth layer is Dense also. A 100x10 matrix. Dimensionality of the output is 10. Running parameter count: 542088
+
+
+Then we're supposed to do stochastic gradient descent in a 542088-dimensional parameter space (ignoring the bias terms)!
+
+That feels like a ridiculously big number to me. And this is meant to be the hello world problem of CNNs. Am I missing something fundamental in my understanding of how this is meant to work? Or maybe the number is correct but it's not actually a big deal for a computer to crunch?
+
+In case it helps. Here is how the model was built in Keras:
+
+from keras.models import Sequential
+from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
+from keras.optimizers import SGD
+
+def define_model():
+    model = Sequential()
+ model.add(Conv2D(32, (3,3), activation = 'relu', kernel_initializer = 'he_uniform', input_shape=(28,28,1)))
+ model.add(MaxPooling2D((2,2)))
+ model.add(Flatten())
+ model.add(Dense(100, activation='relu', kernel_initializer='he_uniform'))
+ model.add(Dense(10, activation='softmax'))
+ opt = SGD(lr=0.01, momentum=0.9)
+ model.compile(optimizer=opt, loss='categorical_crossentropy', metric=['accuracy'])
+ return model
+
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'feedforward-neural-networks']"," Title: How to make DNN learn multiplication/division?Body: A single neuron with 2 weights and identity activation can learn addition/subtraction as the 2 weights will converge to 1 and 1 (addition), or 1 and -1 (subtraction).
+
+However, for multiplication and division, it's not that easy. Can a single neuron learn multiplication or division? If not, how many layers of DNN can learn these?
+"
+"['generative-model', 'graphs']"," Title: Improving graph decoder networkBody: I have been using a network to generate graphs. The architecture that I have been using is the following:
+
+
+
+In this figure, $D_1$ is the signal generator and $D_2$ is the graph topology generator, whose output is a square, symmetric matrix that indicates which node is connected to which. In this network, $l$ denotes linear layers and $a$ denotes activation functions; here we are using the leaky ReLU activation function.
+
+The problem that I am experiencing is that, after training the network, my output is only a chain of nodes, meaning that only the subdiagonal and superdiagonal elements have non-zero values, and it is very rare to get other forms of graph. I was wondering if anyone has a suggestion for improving the output. Note that my training data is diverse and contains all kinds of graphs.
+"
+"['classification', 'computer-vision', 'object-detection']"," Title: Can an image recognition model used for human pose estimation?Body: I am currently writing my thesis about human pose estimation and wanted to use Google's inception network, modify it for my needs and use transfer learning to detect human key joints. I wanted to ask if that could be done in that way?
+
+Assuming I have n keypoints, I would generate n feature maps, use transfer learning, cut off the final classification layers, and replace them with an FCN that guesses the key joints. I am asking myself if this might be possible.
+
+However, these feature maps should output heatmaps with the highest probability as well. Is this assumption valid?
+"
+"['deep-learning', 'convolutional-neural-networks', 'object-detection', 'precision', 'recall']"," Title: How to calculate the precision and recall given the predictions and targets in this case?Body: I'm using three pre-trained deep learning models to detect vehicles and count from an image data set. The vehicles belong to one of these classes ['car', 'truck', 'motorcycle', 'bus']. So, for a sample I have manually counted number of vehicles in each image. Also, I employed the three deep learning models and obtained the vehicle counts. For example:
+
+ Actual | model 1 count| model 2 count | model 3 count
+------------------------------------------------------------------
+ 4 cars, 1 bus | 2 cars | 2 cars, 1 truck| 4 cars
+ 2 cars | 0 | 1 truck | 1 car, 1 bus
+
+
+In this case, how can I measure accuracy scores such as precision and recall?
+"
+['datasets']," Title: Ambulance dataset neededBody: Could I get a dataset that can classify ambulances?
+I have searched everywhere, but, couldn't seem to get hold of a set of annotated images for ambulances.
+"
+"['neural-networks', 'batch-normalization']"," Title: What is a ""batch"" in batch normalization?Body: I'm working on an example of CNN with the MNIST hand-written numbers dataset. Currently I've got convolution -> pool -> dense -> dense, and for the optimiser I'm using Mini-Batch Gradient Descent with a batch size of 32.
+
+Now this concept of batch normalization is being introduced. We are supposed to take a ""batch"" after or before a layer, and normalize it by subtracting its mean, and dividing by its standard deviation.
+
+So what is a ""batch""? If I feed a sample into a 32 kernel conv layer, I get 32 feature maps.
+
+
+- Is each feature map a ""batch""?
+- Are the 32 feature maps the ""batch""?
+
+
+Or, if I'm doing Mini-Batch Gradient Descent with a batch size of 64,
+
+
+- Are 64 sets of 32 feature maps the ""batch""? So in other words, the batch from Mini-Batch Gradient Descent, is the same as the ""batch"" from batch-optimization?
+
+
+Or is a ""batch"" something else that I've missed?
+"
+"['machine-learning', 'keras', 'academia']"," Title: How to describe an keras Model in a scientific reportBody: how would you describe a machine learning model in a scientific report? It should be detailed but I just listed the hyperparameters... Have you got more important properties?
+"
+"['comparison', 'word-embedding', 'feature-extraction']"," Title: Is word embedding a form of feature extraction?Body: Feature extraction is a concept concerning the translation of raw data into the inputs that a particular machine learning algorithm requires. These derived features from the raw data that are actually relevant to tackle the underlying problem. On the other hand, word embeddings are basically distributed representations of text in an n-dimensional space.
+
+As far as I understand, word embedding is somehow a feature extraction technique. Am I wrong? I had an argument with a friend who believes the two topics are totally separate. Is he right? What are the similarities and dissimilarities between word embeddings and feature extraction?
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'capsule-neural-network']"," Title: Is the number of neurons in each capsule in a capsule neural network hardcoded?Body: The capsule neural networks have been formally introduced in the paper Dynamic Routing Between Capsules.
+
+Much ado has been made about how the capsules output a vector (magnitude = probability that an entity is present, orientation space = the instantiated parameters), which can then allow it to maintain more information than a max-pooled operation which outputs only a scalar.
+
+Within these vector representations, the dimensions of this space turn out to be the parameters by which a written digit could vary: scale, stroke thickness, skew, width, etc. There are 16 neurons in each capsule to represent 16 dimensions.
+
+It is unclear to me, from reading the paper, if these parameters emerged through training, or if they were hand-coded a priori. If these parameters were not hand-coded, why do such ""clean"" dimensions emerge? Why don't mixed-selective neurons emerge within the 16?
+"
+"['convolutional-neural-networks', 'training', 'keras', 'transfer-learning', 'fine-tuning']"," Title: Is my fine-tuned model learning anything at all?Body: I am practicing with Resnet50 fine-tuning for a binary classification task. Here is my code snippet.
+import keras
+from keras.applications.resnet50 import ResNet50
+from keras.layers import Dropout
+from keras.optimizers import SGD
+from keras.preprocessing.image import ImageDataGenerator
+
+base_model = ResNet50(weights='imagenet', include_top=False)
+x = base_model.output
+x = keras.layers.GlobalAveragePooling2D(name='avg_pool')(x)
+x = Dropout(0.8)(x)
+model_prediction = keras.layers.Dense(1, activation='sigmoid', name='predictions')(x)
+model = keras.models.Model(inputs=base_model.input, outputs=model_prediction)
+opt = SGD(lr = 0.01, momentum = 0.9, nesterov = False)
+
+model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy']) #
+
+train_datagen = ImageDataGenerator(rescale=1./255, shear_range=0.2, zoom_range=0.2, horizontal_flip=False)
+
+test_datagen = ImageDataGenerator(rescale=1./255)
+train_generator = train_datagen.flow_from_directory(
+ './project_01/train',
+ target_size=(input_size, input_size),
+ batch_size=batch_size,
+ class_mode='binary')
+
+validation_generator = test_datagen.flow_from_directory(
+ './project_01/val',
+ target_size=(input_size, input_size),
+ batch_size=batch_size,
+ class_mode='binary')
+
+hist = model.fit_generator(
+ train_generator,
+ steps_per_epoch= 1523 // batch_size, # 759 + 764 NON = 1523
+ epochs=epochs,
+ validation_data=validation_generator,
+ validation_steps= 269 // batch_size) # 134 + 135NON = 269
+
+I plotted a figure of the model after training for 50 epochs:
+
+You may have noticed that train_acc and val_acc have fluctuated highly, and train_acc merely reaches 52%, which means that the network isn't learning, let alone over-fitting the data.
+As for the losses, I haven't got any insights.
+Before training starts, the network outputs:
+Found 1523 images belonging to 2 classes.
+Found 269 images belonging to 2 classes.
+
+Is my fine-tuned model learning anything at all?
+I'd appreciate it if someone could guide me to solve this issue.
+"
+"['reference-request', 'agi', 'research', 'state-of-the-art']"," Title: Which AGI systems have already been implemented and tested?Body: I wish to compile a (somewhat) comprehensive list of AGI systems that have actually been created and tested (to whatever degrees of success) instead of those that simply advertise they are going to 'do' something about it or have patented theoretical concepts.
+
+For the purposes of this question, we can use the following definition of AGI:
+
+
+ Artificial general intelligence (AGI) is the intelligence of a machine that can understand or learn any intellectual task that a human being can
+
+"
+"['neural-networks', 'gradient-descent', 'overfitting', 'capacity']"," Title: Is running more epochs really a direct cause of overfitting?Body: I've seen some comments in online articles/tutorials or Stack Overflow questions which suggest that increasing the number of epochs can result in overfitting. But my intuition tells me that there should be no direct relationship at all between the number of epochs and overfitting. So I'm looking for an answer which explains if I'm right or wrong (or whatever's in between).
+Here's my reasoning though. To overfit, you need to have enough free parameters (I think this is called "capacity" in neural networks) in your model to generate a function that can replicate the sample data points. If you don't have enough free parameters, you'll never overfit. You might just underfit.
+So really, if you don't have too many free parameters, you could run infinite epochs and never overfit. If you have too many free parameters, then yes, the more epochs you have the more likely it is that you get to a place where you're overfitting. But that's just because running more epochs revealed the root cause: too many free parameters. The real loss function doesn't care about how many epochs you run. It existed the moment you defined your model structure before you ever even tried to do gradient descent on it.
+In fact, I'd venture as far as to say: assuming you have the computational resources and time, you should always aim to run as many epochs as possible because that will tell you whether your model is prone to overfitting. Your best model will be the one that provides great training and validation accuracy, no matter how many epochs you run it for.
+EDIT
+While reading more into this, I realise I forgot to take into account that you can arbitrarily vary the sample size as well. Given a fixed model, a smaller sample size is more prone to overfitting. And then that kind of makes me doubt my intuition above. Still happy to get an answer though!
+"
+"['machine-learning', 'deep-learning', 'tensorflow', 'keras', 'transfer-learning']"," Title: Reasoning behind $Zero$ validation accuracy in the following ResNet50 model for classificationBody: I have written this code to classify Cats and dogs using Resnet50. Actually while studying I came to the conclusion that Transfer learning gives very good accuracy for deep learning models, but I ended getting a far worse result and I didn't understand the cause for it. Any description with reasoning would be very helpful. The dataset contains 2000 images of cats and dogs as training and 1000 images as the validation set.
+
+The following summarises my model
+
+
+
+import tensorflow as tf
+from tensorflow.keras.applications import ResNet50
+from tensorflow.keras.models import Sequential
+from tensorflow.keras.layers import Dense, InputLayer, Flatten, GlobalAveragePooling2D
+num_classes = 2
+IMG_SIZE = 224
+IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)
+my_new_model=tf.keras.applications.ResNet50(include_top=False, weights='imagenet', input_shape=IMG_SHAPE, pooling='avg', classes=2)
+my_new_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
+
+
+from tensorflow.keras.preprocessing.image import ImageDataGenerator
+from tensorflow.keras.applications.resnet50 import preprocess_input
+train_datagen = ImageDataGenerator(
+ preprocessing_function=preprocess_input,
+ rotation_range=40,
+ width_shift_range=0.2,
+ height_shift_range=0.2,
+ shear_range=0.2,
+ zoom_range=0.2,
+ horizontal_flip=True,)
+
+# Note that the validation data should not be augmented!
+test_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
+
+train_generator = train_datagen.flow_from_directory(
+ train_dir, # This is the source directory for training images
+ target_size=(224,224), # All images will be resized to 224x224
+ batch_size=20,
+ class_mode='binary')
+
+validation_generator = test_datagen.flow_from_directory(
+ validation_dir,
+ target_size=(224, 224),
+ class_mode='binary')
+
+my_new_model.fit_generator(
+ train_generator,
+ epochs = 8,
+ steps_per_epoch=100,
+ validation_data=validation_generator)
+
+
+For this I get the training logs as,
+
+Train for 100 steps, validate for 32 steps
+Epoch 1/8
+100/100 - 49s - loss: 7889.4051 - accuracy: 0.0000e+00 - val_loss: 7834.5318 - val_accuracy: 0.0000e+00
+Epoch 2/8
+100/100 - 35s - loss: 7809.7583 - accuracy: 0.0000e+00 - val_loss: 7775.1556 - val_accuracy: 0.0000e+00
+Epoch 3/8
+100/100 - 35s - loss: 7808.4858 - accuracy: 0.0000e+00 - val_loss: 7765.3964 - val_accuracy: 0.0000e+00
+Epoch 4/8
+100/100 - 35s - loss: 7808.0520 - accuracy: 0.0000e+00 - val_loss: 7764.0735 - val_accuracy: 0.0000e+00
+Epoch 5/8
+100/100 - 35s - loss: 7807.7891 - accuracy: 0.0000e+00 - val_loss: 7762.4891 - val_accuracy: 0.0000e+00
+Epoch 6/8
+100/100 - 35s - loss: 7807.6872 - accuracy: 0.0000e+00 - val_loss: 7762.1766 - val_accuracy: 0.0000e+00
+Epoch 7/8
+100/100 - 35s - loss: 7807.6633 - accuracy: 0.0000e+00 - val_loss: 7761.9766 - val_accuracy: 0.0000e+00
+Epoch 8/8
+100/100 - 35s - loss: 7807.6514 - accuracy: 0.0000e+00 - val_loss: 7761.9346 - val_accuracy: 0.0000e+00
+<tensorflow.python.keras.callbacks.History at 0x7f5adff722b0>
+
+
+If I change class_mode to 'categorical', it gives the following error:
+Incompatible shapes: [20,2] vs. [20,2048].
+"
+['categorical-data']," Title: Categorizing text into dynamic amount of categoriesBody: I'm looking for a supervized system/approach, that could learn how to categorize incoming texts/documents, where new categories can be added over time and the training set will be small. The trained model should not be static and should be able to evolve with adding new categories or evaluating new documents.
+
+For each document it should first give it's suggestion that can be then corrected.
+"
+"['recurrent-neural-networks', 'terminology']"," Title: What is the term for an RNN that is a completely connected directed graph?Body: There seems to be a severe problem with the taxonomy of neural network topologies. What I'd like to know is the term I should use to search for the most general topology: completely connected directed cyclic graph (henceforth CCDCGRNN). This is because all other topologies degenerate by constraint from CCDCGRNN. This includes topologies that are often confused with CCDCGRNN such as Elfman and Jordan networks* and more legitimately-so than, say LSTMs.
+
+I know there are claims such as this question at stats.stackexchange.com (including cites) that unqualified ""RNN"" refers to CCDCGRNN but this is not true if one looks a little deeper. Examples include not only the Wikipedia article on ""RNN"" (who trusts WP anyway, right?), but a ""mostly complete"" catalog of neural network topologies.
+
+There must have been, at some point in the ancient past, research into the methods by which one can, in a principled manner, degenerate the CCDCGRNNs, or at least into why it isn't worth studying in its own right.
+
+*RNNs containing feed-through time delays are a degenerate case of CCDCGRNNs where a time delay of N out of a node is accomplished by allocating N neurons constrained to have only one input with weight of 1 (and a linear transfer function with slope 1).
+"
+"['machine-learning', 'computational-learning-theory', 'pac-learning', 'vc-dimension', 'vc-theory']"," Title: Are PAC learning and VC dimension relevant to machine learning in practice?Body: Are PAC learning and VC dimension relevant to machine learning in practice? If yes, what is their practical value?
+
+To my understanding, there are two hits against these theories. The first is that the results all are conditioned on knowing the appropriate models to use, for example, the degree of complexity. The second is that the bounds are very bad, where a deep learning network would take an astronomical amount of data to reach said bounds.
+"
+"['neural-networks', 'convolutional-neural-networks', 'support-vector-machine']"," Title: Is it possible to combine multiple SVMs that were trained on sublayers of a CNN into one combined SVM?Body: I have created a CNN for use on the MNIST dataset for now (so I have 10 classes). I have trained SVMs on the sublayers of this trained CNN and wish to combine them into a combined SVM as to give a combined score.
+
+So far, I trained two individual SVMs at two of the sublayers of my neural network.
+What is the best method I can go about combining the two SVMs and what are the different options available to me? Is it simply a case of taking the maximum/average of each SVM prediction for a class and using that as the score for the combined SVM class prediction?
+
+Thanks
+"
+"['neural-networks', 'convolutional-neural-networks', 'classification', 'objective-functions']"," Title: How to understand my CNN's training results?Body: I created a multi-label classification CNN to classify chest X-ray images into zero or more possible lung diseases. I've been doing some configuration tests on it and analyzing its results and I'm having a hard time understanding some things about it.
+
+First of all, these are the graphs that I got for different configurations:
+
+Results of CNN with different configurations
+
+Note 1: I've only changed the dataset size and the number of color channels in each configuration
+Note 2: In case you're wondering why I tested the network with both 1 and 3 color channels, it's because the images are technically grayscale, but I am using the AlexNet architecture, which was made to take as input 224 x 224 images with 3 channels, so I wanted to see if the network somehow performed better with 3 channels instead of just the one
+
+These are the things about it I don't understand:
+
+
+- Why does the sensitivity and specificity of the network vary so much between different epochs?
+- Is it normal for the validation loss of the network barely ever change as the number of epochs increase?
+- Looking at the results I got, it looks like 2 epochs is where there tends to be the best results. Does that make sense? I've heard of people training their networks with dozens of epochs sometimes.
+- Why is it that, many times, when the sensitivity of the network increases between epochs, the specificity tends to decrease, and vice-versa?
+
+
+Sorry if some of these questions are dumb, I'm still a newbie at this. Also, my total dataset is drastically larger than what I present in these results (~110,000 images). I just haven't done tests with more images due to the time the network takes to train.
+
+Network Architecture:
+
+
+- Base Architecture: AlexNet
+- Loss Function: Sigmoid Cross-Entropy Loss
+- Optimizer: Adam Optimization Algorithm with learning rate of 0.001
+
+
+EDIT: I forgot to mention that the number of diseases to predict is 15, and that the network sees 0's much more than 1's due to the imbalance of classes. I've considered changing the loss function to a weighted version of sigmoid cross-entropy because of that, but I'm not sure if that would help the network much.
+"
+"['deep-learning', 'gradient-descent']"," Title: How is the loss value calculated in order to compute the gradient?Body: The gradient descent step is the following
+
+\begin{align}
+\mathbf{W}_i = \mathbf{W}_{i-1} - \alpha * \nabla L(\mathbf{W}_{i-1})
+\end{align}
+
+where $L(\mathbf{W}_{i-1})$ is the loss value, $\alpha$ is the learning rate, and $\nabla L(\mathbf{W}_{i-1})$ is the gradient of the loss.
+
+So, how do we obtain the value $L(\mathbf{W}_{i-1})$ in order to calculate the gradient $\nabla L(\mathbf{W}_{i-1})$? As an example, we can initialize all the weights $\mathbf{W}$ to 0.5. Can you explain it to me?
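+
+To show the kind of calculation I am asking about, here is my attempt at one concrete step with a single weight $w_0 = 0.5$, one sample $(x, y) = (2, 3)$, squared-error loss and $\alpha = 0.1$ (please correct me if this is not how it works):
+
+$$L(w_0) = (w_0 x - y)^2 = (0.5 \cdot 2 - 3)^2 = 4, \qquad \nabla L(w_0) = 2(w_0 x - y)x = -8, \qquad w_1 = 0.5 - 0.1 \cdot (-8) = 1.3$$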
+"
+"['neural-networks', 'activation-functions', 'hidden-layers', 'network-design']"," Title: Do all neurons in a layer have the same activation function?Body: I'm new to machine learning (so excuse my nomenclature), and not being a python developer, I decided to jump in at the deep (no pun intended) end writing my own framework in C++.
+In my current design, I have given each neuron/cell the possibility to have a different activation function. Is this a plausible design for a neural network? A lot of the examples I see use the same activation function for all neurons in a given layer.
+Is there a model which may require this, or should all neurons in a layer use the same activation function? Would I be correct in using different activation functions for different layers in the same model, or would all layers have the same activation function within a model?
+"
+"['reinforcement-learning', 'deep-rl', 'hyperparameter-optimization']"," Title: Hyperparameter optimisation over entire range or shorter range of training episodes in Deep Reinforcement LearningBody: I am optimising hyperparameters for my deep reinforcement learning project (using PPO2, DQN and A2C) and was wondering:
+
+Should I find the optimum hyperparameters to get maximum reward from training over my entire range of training (e.g. 50 million steps) or can I optimise over less time (e.g. 1 million steps)?
+
+What is the conventional approach and why?
+"
+"['machine-learning', 'feature-selection', 'regularization']"," Title: What is the $\ell_{2, 1}$ norm?Body: I'm reading this paper and it says:
+
+
+ In this paper, we present a multi-class embedded feature selection method called as sparse optimal scoring with adjustment (SOSA), which is capable of addressing the data heterogeneity issue. We propose to perform feature selection on the adjusted data obtained by estimating and removing the unknown data heterogeneity from original data. Our feature selection is formulated as a sparse optimal scoring problem by imposing $\ell_{2, 1}$-norm regularization on the coefficient matrix which hence can be solved effectively by proximal gradient algorithm. This allows our method can well handle the multi-class feature selection and classification simultaneously for heterogenous data
+
+
+What is the $\ell_{2, 1}$ norm regularization? Is it L1 regularization or L2 regularization?
+"
+"['deep-learning', 'performance', 'accuracy']"," Title: Sample size for the evaluation of Deep Learning ModelsBody: I'm evaluating the performance and accuracy in detecting objects for my data set using three deep learning algorithms. In total there are 24,085 images. I measure the performance in terms of time taken to detect the objects. To measure the accuracy, I manually count the number of objects in each image and then calculate recall and precision values for three algorithms.
+
+However, since I'm manually counting to get the actual object count, I selected only 30 images. Will that sample be enough to conclude that algorithm 1 is better than the others in terms of performance and accuracy?
+"
+"['machine-learning', 'reference-request', 'prediction']"," Title: Is there an AI technology that can predict human behaviour?Body: Is there an AI technology out there or being developed that can predict human behaviour, given that we as humans are irrational decision-makers?
+I'm looking at this from an economic standpoint - the issue with current economic models is that they assume that humans are perfectly rational, but obviously this isn't the case. Could AI develop better models and therefore produce better models of recessions?
+"
+['convolutional-neural-networks']," Title: How to create a fully connected(matrix) layer with vector inputBody: I am trying to replace last fully connected layer of size 4096/2048 with a matrix of size 100x300 with previous fc layer output of 2048.
+
+
+
+I've tried
+
+
+- 2D convolution, to map from 2048 --> 100x300 (which is not realizable)
+- Intermediate projections (a rough sketch is given below):
+  2048 --> 100
+  [100x1] X [1x300] --> [100x300] (possible but complicated)
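+
+The sketch of what I mean by the second option (assuming PyTorch here; the input x is just a random stand-in for the batch of 2048-dimensional outputs of the previous layer):
+
+import torch
+import torch.nn as nn
+
+x = torch.randn(4, 2048)               # stand-in for the previous FC layer output (batch of 4)
+
+fc = nn.Linear(2048, 100)              # 2048 -> 100 projection
+v = nn.Parameter(torch.randn(1, 300))  # learned 300-dimensional row vector
+
+out = fc(x).unsqueeze(-1) @ v          # [4, 100, 1] x [1, 300] -> [4, 100, 300]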
+
+
+I am looking for a simple and effective solution with the fewest linear transformations.
+
+
+"
+"['tensorflow', 'distributed-computing']"," Title: Why do we average gradients and not loss in distributed training?Body: I'm running some distributed trainings in Tensorflow with Horovod. It runs training separately on multiple workers, each of which uses the same weights and does forward pass on unique data. Computed gradients are averaged within the communicator (worker group) before applying them in weight updates. I'm wondering - why not average the loss function across the workers? What's the difference (and the potential benefits) of averaging gradients?
+"
+"['reinforcement-learning', 'actor-critic-methods', 'temporal-difference-methods']"," Title: What is the intuition behind the TD(0) equation with average reward, and how is it derived?Body: In chapter 10 of Sutton and Barto's book (2nd edition) is given the equation for TD(0) error with average reward (equation 10.10):
+
+$$\delta_t = R_{t+1} - \bar{R} + \hat{v}(S_{t+1}, \mathbf{w}) - \hat{v}(S_{t}, \mathbf{w})$$
+
+What is the intuition behind this equation? And how exactly is it derived?
+
+Also, in chapter 13, section 6, is given the Actor-Critic algorithm, which uses the TD error. How can you use 1 error to update 3 distinct things - like the average reward, value function estimator (critic), and the policy function estimator (actor)?
+
+Average Reward update rule: $\bar{R} \leftarrow \bar{R} + \alpha^{\bar{R}}\delta$
+
+Critic weight update rule: $\mathbf{w} \leftarrow \mathbf{w} + \alpha^{\mathbf{w}}\delta\nabla \hat{v}(s,\mathbf{w})$
+
+Actor weight update rule: $\mathbf{\theta} \leftarrow \mathbf{\theta} + \alpha^{\mathbf{\theta}}\delta\nabla ln \pi(A|S,\mathbf{\theta})$
+"
+"['classification', 'training', 'implementation']"," Title: How difficult is this sound classification?Body: I want a microphone to pick up sounds around me (let's say beyond a 3 foot radius), but ignore sounds made at my desk, such as the rustling of paper, clicking a mouse and typing, my hands brushing up on the table, putting a pen down, etc.
+
+How hard would it be for AI to be able to distinguish these sounds from surrounding sounds, such as someone knocking on my door or a random loud sound from further away? How would you implement this? Is it possible that a pre-trained model could accomplish this, and work reliably for most people at their desk? I don't have any experience in AI.
+"
+"['convolutional-neural-networks', 'training', 'hyper-parameters']"," Title: Which CNN hyper-parameters are most sensitive to centered versus off centered data?Body: Which hyper-parameters of a convolutional neural network are likely to be the most sensitive to depending on whether the training (and test and inference) data involves only accurately centered images versus off-centered images.
+
+More convolutional layers, wider convolution kernels, more dense layers, wider dense layers, more or less pooling, or ???
+
+e.g. If I can preprocess the data to include only accurately centered images, which hyper-parameters should I experiment with changing to create a smaller CNN model (for a power and memory constrained inference engine)? Or conversely, if I have a minimized model trained on centered data, which hyper-parameters would I most likely need to increase to get similar loss and accuracy on uncentered (shifted in XY) data?
+"
+"['gaming', 'incomplete-information']"," Title: How do AI that play games of incomplete information decide their opening strategy?Body: This question was inspired by watching AlphaStar play Starcraft 2, but I'm also interested in the concept in general.
+
+How does the AI decide what build order to start with? In Starcraft, and many other games, the player must decide what strategy or class of strategies to follow as soon as the game begins. To use a Starcraft-specific example, one must decide to 6-pool Zerg Rush before any scouting information has been gathered. Delaying the rush to wait for info means the opponent will be stronger when the rush arrives; the opponent may even discover the rush and prepare a dedicated counter.
+
+This is not limited to deciding between a risky early all-or-nothing attack. Some long-term strategies also preclude others. Terran players must decide early on how heavily they will invest in mech units. They can focus on biological units like marines, or vehicular units like siege tanks and hellions. Going equally into both, however, often means a weaker army overall, because you must spend resources on the overhead costs of both tech trees. You must upgrade your vehicle weapons as well as your infantry weapons for instance, meaning less resources can be spent on more units. Suffice to say, Terran players usually must decide very early on what they will focus on.
+
+How can AI make these kinds of choices given incomplete and often uncertain information?
+"
+"['neural-networks', 'machine-learning', 'overfitting', 'computational-learning-theory', 'generalization']"," Title: Why can neural networks generalize at all?Body: Neural networks are incredibly good at learning functions. We know by the universal approximation theorem that, theoretically, they can take the form of almost any function - and in practice, they seem particularly apt at learning the right parameters. However, something we often have to combat when training neural networks is overtfitting - reproducing the training data and not generalizing to a validation set. The solution to overfitting is usually to simply add more data, with the rationalization that at a certain point the neural network pretty much has no choice but to learn the correct function.
+
+But this never made much sense to me. There is no reason, in terms of loss, that a neural network should prefer a function that generalizes well (i.e. the function you are looking for) over a function that does incredibly well on the training data and fails miserably everywhere else. In fact, there is usually a loss advantage to overfitting. Equally, there is an infinite number of functions that fit the training data and have no success on anything but.
+
+So why is it that neural networks almost always (especially for simpler data) stumble upon the function we want, as opposed to one of the infinite other options? Why is it that neural networks are good at generalizing, when there is no incentive for them to?
+"
+"['neural-networks', 'classification', 'data-preprocessing', 'feature-selection']"," Title: When using neural networks, should I bin the continuous variables and apply some transformation before performing variable selection and modeling?Body: I come from a background of scorecard development using logistic regression. Steps involved there are:
+
+- binning of continuous variables into intervals (eg age can be binned into 10-15 years, 15-20 years, etc)
+
+- weight of evidence transformation
+
+- coarse classing of bins to ensure event rate has a monotonic relationship with the variable
+
+
+Variable selection is made from the coarse classed transformed variables.
+I was wondering if I should follow the same steps for ANN models. That is, should I bin the continuous variables and apply some transformation before performing variable selection and modeling.
+"
+"['deep-learning', 'convolutional-neural-networks', 'computer-vision']"," Title: What is the reasoning behind the number of filters in the convolution layer?Body: Let's assume an extreme case in which the kernel of the convolution layer takes only values 0 or 1. To capture all possible patterns in input of $C$ number of channels, we need $2^{C*K_H*K_W}$ filters, where $(K_H, K_W)$ is the shape of a kernel. So to process a standard RGB image with 3 input channels with 3x3 kernel, we need our layer to output $2^{27}$ channels. Do I correctly conclude that according to this, the standard layers of 64 to 1024 filters are only able to catch a small part of (perhaps) useful patterns?
+"
+"['neural-networks', 'activation-functions']"," Title: Why is a softmax used rather than dividing each activation by the sum?Body: Just wondering why a softmax is typically used in practice on outputs of most neural nets rather than just summing the activations and dividing each activation by the sum. I know it's roughly the same thing but what is the mathematical reasoning behind a softmax over just a normal summation? Is it better in some way?
+"
+"['deep-learning', 'long-short-term-memory', 'sequence-modeling', 'text-classification', 'text-generation']"," Title: How to use LSTM to generate a paragraphBody: A LSTM model can be trained to generate text sequences by feeding the first word. After feeding the first word, the model will generate a sequence of words (a sentence). Feed the first word to get the second word, feed the first word + the second word to get the third word, and so on.
+
+However, for the next sentence, what should its first word be? The goal is to generate a paragraph of multiple sentences.
+"
+"['natural-language-processing', 'terminology']"," Title: What role do distractors play in natural language processing?Body: I’m doing research on natural language processing (NLP). I’d like to put together my own model. However, I'm running into a concept I am not familiar with, namely, distractors. A google search does not reveal much.
+
+I've been reading this article specifically: https://medium.com/huggingface/how-to-build-a-state-of-the-art-conversational-ai-with-transfer-learning-2d818ac26313
+
+In the section under ""Multi-Tasks Losses"" it reads:
+
+
+ Next-sentence prediction: we pass the hidden-state of the last token
+ (the end-of-sequence token) through a linear layer to get a score and
+ apply a cross-entropy loss to classify correctly a gold answer among
+ distractors.
+
+
+I understand how transformers and cross-entropy work; however, I'm not sure what a distractor or a ""gold answer"" is, for that matter.
+
+In this context, what does the author mean by distractor?
+"
+['natural-language-processing']," Title: Is NLP likely to be sufficiently solved in the next few years?Body: The reason I am asking this question is because I am about to start a PhD in NLP. So I am wondering if there would be as many job opportunities in research in industry as opposed to in academia in the future (~ 5 to 10 years), or would it be mostly a matter of using a library off the shelf. I have done some research and it seems NLP is AI-complete, which means it's probably a problem that will be ""solved"" only when AGI is solved, but still I would appreciate any input.
+"
+"['machine-learning', 'natural-language-processing', 'audio-processing', 'speech-recognition']"," Title: How to use AI for language recognition?Body: Given an audio track, I'm trying to find a way to recognize the audio language. Only within a small set (e.g. English vs Spanish). Is there a simple solution to detect the language in a speech?
+"
+"['machine-learning', 'classification', 'python', 'scikit-learn']"," Title: Interpretation of feature selection based on the modelBody: The description of feature selection based on a random forest uses trees without pruning.
+Do I need to use tree pruning?
+The thing is, if I don't prune the trees, the forest will overfit.
+
+Below in the picture is the importance of features based on 500 trees without pruning.
+
+
+With a depth of 3.
+
+
+I always use the last four features: 27, 28, 29, 30.
+And I try to add features from 0 to 26 to them using loops, going through the possible combinations.
+Empirically, I assume that features 0 and 26 are significant.
+But this is not visible in either picture, although the quality of classification improved after adding features 0 and 26.
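+
+For reference, the kind of importance ranking I am describing comes from something like this (a rough sketch, not my exact code; X and y stand for my feature matrix and class labels):
+
+from sklearn.ensemble import RandomForestClassifier
+import numpy as np
+
+X = np.random.rand(200, 31)          # 31 features (0..30), dummy data for illustration
+y = np.random.randint(0, 2, 200)
+
+forest = RandomForestClassifier(n_estimators=500, max_depth=3)  # or max_depth=None for no pruning
+forest.fit(X, y)
+
+ranking = np.argsort(forest.feature_importances_)[::-1]
+print(ranking[:10])                  # indices of the ten most important features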
+"
+['ai-design']," Title: What can model everything?Body: I've been thinking about what ""mathematical model"" can be used to model every possible thing (including itself).
+
+Examples: a simple neuron network models a function but doesn't model an algorithm. A list of instructions models an algorithm but doesn't model relations between elements...
+
+You might be thinking ""maybe there is nothing that can model everything"" but in reality ""language"" does model everything including itself. The issue is that it's not an organized model and it's not clear how to create it from scratch (e.g. if you will send it to aliens that don't have any common knowledge to start with).
+
+So what is some possible formalization of a mathematical model that models every possible thought that can be communicated?
+
+Edit 1:
+
+The structure formalization I'm looking for has to have a few necessary properties:
+
+
+- Hierarchical: the representation of ideas should rely on other ideas. (E.g. a programming function is a set of programming functions; the concept ""bottle of water"" is the sum of the two concepts ""water"" and ""bottle""...)
+- Uniqueness of elements: When an idea uses another idea in its definition, it must refer to one specific idea, not recreate it each time. For example, when you think of the digit ""9"" and the digit ""8"", you notice that both have a small circle at the top; you don't recreate a new concept ""circle"" every time, instead you use one fixed concept ""circle"" for everything. By contrast, a neural network might recreate the same branch for different inputs. So two representations of concepts must be different iff they have a difference.
+
+"
+"['image-recognition', 'hidden-layers', 'multilayer-perceptrons', 'c++']"," Title: Is it expected that adding an additional hidden layer to my 3-layer ANN reduces accuracy significantly?Body: I've been using several resources to implement my own artificial neural network package in C++.
+
+Among some of the resources I've been using are
+
+https://www.anotsorandomwalk.com/backpropagation-example-with-numbers-step-by-step/
+
+https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/
+
+https://cs.stanford.edu/people/karpathy/convnetjs/intro.html,
+
+as well as several others.
+
+My code manages to replicate the results in the first two resources exactly. However, these are fairly simple networks in terms of depth. Hence the following (detailed) question:
+
+For my implementation, I've been working with the MNIST Database of handwritten digits (http://yann.lecun.com/exdb/mnist/).
+
+Using the ANN package I wrote, I have created a simple ANN with 784 input neurons, one hidden layer with 16 neurons, as well as an output layer with ten neurons. I have implemented ReLU on the hidden layer and the output layer, as well as a softmax on the output layer to get probabilities. The weights and biases are each individually initialized to random values in the range [-1,1].
+
+So the network is 784x16x10.
+
+My backpropagation incorporates weight gradient and bias gradient logic.
+
+With this configuration, I repeatedly get about a 90% hit rate with a total average cost of ~0.07 on the MNIST training set comprising 60,000 digits, and a slightly higher hit rate of ~92.5% on the test set comprising 10,000 digits.
+
+For my first implementation of an ANN, I am pretty happy with that. However, my next thought was:
+
+""If I add another hidden layer, I should get even better results...?"".
+
+So I created another artificial network with the same configuration, except for the addition of another hidden layer of 16 neurons, which I also run through a reLU. So this network is 784x16x16x10.
+
+On this ANN, I get significantly worse results. The hit rate on the training set repeatedly comes out at ~45% with a total average error of ~0.35, and on the test set I also only get about 45%.
+
+This leads me to either one or both of the following conclusions:
+
+A) My implementation of the ANN in C++ is somehow faulty. If so, my bet would be it is somewhere in the backpropagation, as I am not 100% certain my weight gradient and bias gradient calculation is correct for any layers before the last hidden layer.
+
+B) This is an expected effect. Something about adding another layer makes the ANN not suitable for this (digit classification) kind of problem.
+
+Of course, A, B, or A and B could be true.
+
+Could someone with more experience than me give me some input, especially on whether B) is true or not?
+
+If B) is not true, then I know I have to look at my code again.
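+
+For reference, the layer-by-layer recursion I am trying to match in my backpropagation is, as far as I understand it (with $z^{(l)}$ the pre-activations, $a^{(l)}$ the activations and $f$ the ReLU):
+
+$$\delta^{(l)} = \left(W^{(l+1)\top}\,\delta^{(l+1)}\right)\odot f'\!\left(z^{(l)}\right), \qquad \frac{\partial C}{\partial W^{(l)}} = \delta^{(l)}\,a^{(l-1)\top}, \qquad \frac{\partial C}{\partial b^{(l)}} = \delta^{(l)}.$$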
+"
+"['machine-learning', 'classification']"," Title: Is it acceptable to use various training sets for the individual models when using a majority vote classifier?Body: So I am trying to use a majority vote classifier combining different models and I was wondering if it is acceptable to use different training sets for the individual models (including different features) if these sets all come from one larger dataset?
+
+Thanks
+"
+"['neural-networks', 'signal-processing']"," Title: Determine Frequency from Noisy Signal With Neural Networks (With Adeline Model)Body: I'm trying to determine the frequency from a signal with NN. I'm using the Adeline model for my project and I'm taking a few samples in each 0.1-volt step in a true signal and a noisy one.
+
+First question: am I wrong?
+
+Second question: my network works fine as long as the frequency of my test samples equals the frequency of my training samples. Otherwise, my network doesn't work and gives me the wrong answer.
+What do I need to do with this model?
+To solve this problem, I think I must use nonlinear steps, like logarithmic steps. But how do I use logarithmic steps in MATLAB?
+
+Edit: I understand that my problem is not overfitting! I found that my sampling steps are linear while my signal is nonlinear, so this is wrong.
+To solve this problem, I must use nonlinear steps, like logarithmic steps. But how do I use logarithmic steps in MATLAB?
+"
+"['neural-networks', 'keras', 'long-short-term-memory', 'dropout', 'regularization']"," Title: Can dropout layers not influence LSTM training?Body: I am working on a project that requires time-series prediction (regression) and I use LSTM network with first 1D conv layer in Keras/TF-gpu as follows:
+
+
+
+# imports added for completeness (assuming standalone Keras 2.x with the TensorFlow GPU backend)
+from keras.models import Sequential
+from keras.layers import Conv1D, Dense, CuDNNLSTM
+
+model = Sequential()
+model.add(Conv1D(filters=60, activation='relu', input_shape=(x_train.shape[1], len(features_used)), kernel_size=5, padding='causal', strides=1))
+model.add(CuDNNLSTM(units=128, return_sequences=True))
+model.add(CuDNNLSTM(units=128))
+model.add(Dense(units=1))
+
+
+As a result, my model is clearly overfitting:
+
+
+So I decided to add dropout layers, first I added layers with 0.1, 0.3 and finally 0.5 rate:
+
+
+
+from keras.layers import Dropout  # added; the other imports are as in the first snippet
+
+model = Sequential()
+model.add(Dropout(0.5))
+model.add(Conv1D(filters=60, activation='relu', input_shape=(x_train.shape[1], len(features_used)), kernel_size=5, padding='causal', strides=1))
+model.add(Dropout(0.5))
+model.add(CuDNNLSTM(units=128, return_sequences=True))
+model.add(Dropout(0.5))
+model.add(CuDNNLSTM(units=128))
+model.add(Dense(units=1))
+
+
+However, this seems to have no effect on the network's learning process, even though 0.5 is quite a large dropout rate:
+
+
+Is it possible that dropout has little/no effect on the training process of an LSTM, or am I doing something wrong here?
+
+[EDIT] Adding plots of my time series, in a general and a zoomed-in view.
+
+
+
+I also want to add that the time of training increases just a bit (i.e. from 1540 to 1620 seconds) when I add the dropout layers.
+"
+"['reinforcement-learning', 'probability', 'pomdp']"," Title: What are some approaches to estimate the transition and observation probabilities in POMDP?Body: What are some common approaches to estimate the transition or observation probabilities, when the probabilities are not exactly known?
+
+When realizing a POMDP model, the state model needs additional information in the form of transition and observation probabilities. Often these probabilities are not known, and assuming a uniform distribution is not justified either. How can we proceed?
+"
+['object-detection']," Title: Object Detection Algorithm that detects four corners of an arbitrary quadrilateral, not just an axis-aligned rectangleBody: Is there some established Object Detection algorithm that is able to detect the four corners of an arbitrary quadrilateral (x0,y0,x1,y1,x2,y2,x3,y3), as opposed to the more typical axis-aligned rectangle (x,y,w,h)?
+"
+"['machine-learning', 'deep-learning', 'natural-language-processing', 'classification', 'text-classification']"," Title: Is there any classifier that works best in general for NLP based projects?Body: I've written a program to analyse a given piece of text from a website and make conclusary classifications as to its validity. The code basically vectorizes the description (taken from the HTML of a given webpage in real time) and takes in a few inputs from that as features to make its decisions. There are some more features like the domain of the website and some keywords I've explicitly counted.
+
+The highest accuracy I've been able to achieve is with a RandomForestClassifier, (>90%). I'm not sure what I can do to make this accuracy better except incorporating a more sophisticated model. I tried using an MLP but for no set of hyperparameters does it seem to exceed the previous accuracy. I have around 2000 datapoints available for training.
+
+Is there any classifier that works best for such projects? Does anyone have any suggestions as to how I can bring about improvements? (If anything needs to be elaborated, I'll do so.)
+
+Any suggestions on how I can improve on this project in general? Should I include the text on a webpage as well? How should I do so? I tried going through a few sites, but the text doesn't seem to be contained in any specific element, whereas the description is easy to obtain from the HTML. Any help?
+
+What else can I take as features? If anyone could suggest any creative ideas, I'd really appreciate it.
+"
+"['deep-learning', 'terminology', 'agi']"," Title: Are CNN, LSTM, GRU and transformer AGI or computational intelligence tools?Body: Will CNN, LSTM, GRU and transformer be better classified as Computational Intelligence (CI) tools or Artificial General Intelligence (AGI) tools? The term CI arose back when some codes like neural networks, GA, PSO were considered doing magical stuff. These days CI tools do not appear very magical. Researchers want codes to exude AGI. Do the current state of art Deep Learning codes fall in the AGI category?
+"
+"['machine-learning', 'objective-functions', 'plotting']"," Title: What is the best way to smoothen out a loss curve plot?Body: I am currently using a loss averaged over the last 100 iterations, but this leads to artifacts like the loss going down even when the current iteration has an average loss, because the loss 100 iterations ago was a large outlier.
+I thought about using different interval lengths, but I wonder if an average over the last few iterations really is the right way to plot the loss.
+Are there common alternatives? Maybe using decaying weights in the average? What are the best practices for visualizing the loss?
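+
+For concreteness, the decaying-weights idea I have in mind would be something like an exponential moving average of the per-iteration losses (a rough sketch):
+
+def ema(losses, beta=0.98):
+    # beta closer to 1 gives heavier smoothing; losses is a plain list of per-iteration values
+    smoothed, avg = [], 0.0
+    for i, loss in enumerate(losses, start=1):
+        avg = beta * avg + (1 - beta) * loss
+        smoothed.append(avg / (1 - beta ** i))  # bias correction for the first iterations
+    return smoothed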
+"
+"['reinforcement-learning', 'proximal-policy-optimization', 'importance-sampling', 'on-policy-methods', 'imitation-learning']"," Title: Can we use imitation learning for on-policy algorithms?Body: Imitation learning uses experiences of an (expert) agent to train another agent, in my understanding. If I want to use an on-policy algorithm, for example, Proximal Policy Optimization, because of it's on-policy nature we cannot use the experiences generated by another policy directly. Importance Sampling can be used to overcome this limitation, however, it is known to be highly unstable. How can imitation learning be used for such on-policy algorithms avoiding the stability issues?
+"
+['machine-learning']," Title: Why machine learning instead of simple sorting and grouping?Body: I have a hard time formulating this question(I'm not knowledgeable enough I think), so I'll give an example first and then the question:
+
+You have a table of data, let's say the occupancy of a building during the course of the day; each row has columns like ""people_inside_currently"", ""apartment_id"", ""hour_of_day"", ""month"", ""year"", ""name_of_day""(monday-sunday), ""amount_of_kids"", ""average_income"" etc.
+
+You might preprocess two columns into a column ""percent_occupied_during_whole_day"" or something like that, and you want to group the data points in accordance with this as the main focus.
+
+What I'm wondering is: why use machine learning (particularly unsupervised clustering) for this? Why not just put it into an SQL database table (for example), calculate two columns into that new one, sort by descending order, and then split it into ""top 25%, next 25%, next 25%, last 25%"" and output this as ""categories of data""? This is simpler, isn't it? I don't see the value of, for instance, making a Principal Component Analysis on it, reducing columns to some ""unifying columns"" which you don't know what to call anymore, and looking at the output of that, when you can get so much clearer results by just simply sorting and dividing the rows like this? I don't see the use of unsupervised clustering, I've googled a bunch of terms, but only found tutorials and definitions, applications (which seemed unnecessarily complex for such simple work), but no explanation of this.
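+To make the simple alternative concrete, the kind of binning I have in mind is roughly this (a sketch; the DataFrame and column names are placeholders):
+
+import pandas as pd
+
+df = pd.DataFrame({'percent_occupied_during_whole_day': [0.1, 0.4, 0.55, 0.8, 0.9, 0.25, 0.7, 0.35]})
+# split rows into 4 equal-sized groups: 0 = bottom 25%, ..., 3 = top 25%
+df['occupancy_quartile'] = pd.qcut(df['percent_occupied_during_whole_day'], q=4, labels=False)
+print(df)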
+"
+"['neural-networks', 'machine-learning', 'deep-learning']"," Title: Rarely predict minority class imbalanced datasetsBody: I have a dataset in which class A has 99.8%, class B 0.1% and class C 0.1%. If I train my model on this dataset, it predicts always class A. If I do oversampling, it predicts the classes evenly. I want my model to predict class A around 98% of the time, class B 1% and class C 1%. How can I do that?
+"
+['algorithm']," Title: Which algorithm to use to solve this optimization problem?Body:
+- I have items called 'Resources' from 1 to 7.
+- I have to use them in different actions identified from 1 to 10.
+- I can do a maximum of 4 actions each time. This is called 'Operation'.
+- The use of a resource has a cost of 1 per each 'Operation' even if it is used 4 times.
+- The following table indicates the resources needed to do the related actions:
+
+
+
+| | Resources |
+|--------|----------------------------------|
+| Action | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
+|--------|----------------------------------|
+| 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 |
+| 2 | 1 | 1 | 0 | 0 | 1 | 0 | 0 |
+| 3 | 1 | 0 | 1 | 0 | 0 | 1 | 0 |
+| 4 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
+| 5 | 1 | 0 | 1 | 1 | 0 | 1 | 0 |
+| 6 | 1 | 1 | 1 | 0 | 0 | 0 | 0 |
+| 7 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
+| 8 | 1 | 0 | 1 | 0 | 1 | 0 | 0 |
+| 9 | 0 | 1 | 0 | 1 | 0 | 0 | 0 |
+| 10 | 1 | 1 | 1 | 0 | 0 | 0 | 1 |
+
+
+
+The objective is to group all the 'Actions' in 'Operations' that minimize the total cost. For example, a group composed by actions {3, 7, 9} needs the resources {1, 2, 3, 4, 6} and therefore has a cost of 5, but a group composed by actions {4, 7, 9} needs the resources {2, 4} and therefore has a cost of 2.
+
+All the actions need to be completed as economically as possible.
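+
+For illustration, this is how I would compute the cost of one candidate group of actions (a sketch in Python, with the resource requirements transcribed from the table above):
+
+# resources needed by each action, taken from the table above
+needs = {1: {1, 3, 4}, 2: {1, 2, 5}, 3: {1, 3, 6}, 4: {2}, 5: {1, 3, 4, 6},
+         6: {1, 2, 3}, 7: {2}, 8: {1, 3, 5}, 9: {2, 4}, 10: {1, 2, 3, 7}}
+
+def operation_cost(actions):
+    # cost of one operation = number of distinct resources it uses
+    used = set().union(*(needs[a] for a in actions))
+    return len(used)
+
+print(operation_cost({3, 7, 9}))  # 5
+print(operation_cost({4, 7, 9}))  # 2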
+
+Which algorithm can solve this problem?
+"
+"['generative-model', 'data-science', 'speech-recognition', 'state-of-the-art']"," Title: Speaker Identification / Recognition for less size audio filesBody: I am working on speaker identification problem using GMM (Gaussian Mixture Model). I have to just identify one user present in the given audio, so for second class noise or silent audio may use or not just like in image classification for an object we create a non-object class.
+
+When I used a silence class, the model always predicted that the user is present (which is not the case).
+
+Is there any other model that can give better accuracy under the condition that only 30 seconds of audio of a particular user are available, while the given test audio may be much longer?
+"
+"['convolutional-neural-networks', 'object-detection', 'yolo']"," Title: Calculation of FPS on object detection taskBody: How to calculate mean speed in FPS for an object detection model like YOLOv3 or YOLOv3-Tiny? Different object detection models are often presented on charts like this:
+
+I am using the DarkNet framework in my project and I want to create similar charts for my own models based on YOLOv3. Is there some easy way to get mean FPS speed for my model with the ""test video""?
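+
+The naive measurement I can think of is just timing the inference over all frames of the video, roughly like this (a sketch; detect() is a placeholder for whatever DarkNet/YOLOv3 inference call is actually used):
+
+import time
+import cv2
+
+def detect(frame):
+    # placeholder for the actual YOLOv3 inference call (assumption, not the DarkNet API)
+    return []
+
+cap = cv2.VideoCapture('test.mp4')
+frames, start = 0, time.time()
+while True:
+    ok, frame = cap.read()
+    if not ok:
+        break
+    detect(frame)
+    frames += 1
+elapsed = time.time() - start
+print('mean FPS:', frames / elapsed)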
+"
+"['neural-networks', 'deep-learning', 'text-classification']"," Title: How can a system recognize if two strings have the same or similar meaning?Body: How can a system recognize if two strings have the same or similar meaning?
+
+For example, consider the following two strings
+
+
+- Wikipedia provides good information.
+- Wikipedia is a good source of information.
+
+
+What methods are available to do this?
+"
+"['training', 'objective-functions', 'loss']"," Title: Deduce properties of the loss functions from the training loss curvesBody: I have two convex, smooth loss functions to minimise. During the training (a very simple model) using batch SGD (with tuned optimal learning rate for each loss function), I observe that the (log) loss curve of the loss 2 converges much faster and is much more smooth than that of the loss 2, as shown in the figure.
+
+What can I say more about the properties of the two loss functions, for example in terms of smoothness, convexity, etc?
+
+
+"
+"['recurrent-neural-networks', 'machine-translation']"," Title: How to back propagate for implementation of Sequence-to-Sequence with Multi DecodersBody: I am proposing a modified version of Sequence-to-Sequence model with dual decoders. The problem that I am trying to solve is Neural Machine Translation into two languages at once. This is the simplified illustration for the model.
+
+ /--> Decoder 1 -> Language Output 1
+Language Input -> Encoder -|
+ \--> Decoder 2 -> Language Output 2
+
+
+What I understand about backpropagation is that we are adjusting the weights of the network to enhance the signal of the targeted output. However, it is not clear to me how to backpropagate in this network, because I am not able to find similar implementations online yet. I am thinking of doing the backpropagation twice after each training batch, like this:
+
+$$ Decoder\ 1 \rightarrow Encoder $$
+$$ Decoder\ 2 \rightarrow Encoder $$
+
+But I am not sure whether the effect of back propagation from Decoder 2 will affect the accuracy of prediction by Decoder 1. Is this true?
+
+In addition, is this structure feasible? If so, how do I properly back propagate in the network?
+"
+"['neural-networks', 'training', 'overfitting', 'multilayer-perceptrons']"," Title: Why does my model overfit on pseudo-random numbers training data?Body: I am trying to predict pseudo-random numbers using the past numbers with a multiplayer perceptron. The error while training is very low. However, as soon as I test it with a test set, the model overfits and returns very bad results. The correlation coefficient and error metrics are both not performing well.
+
+What would be some of the ways to solve this issue?
+
+For example, if I train it with 5000 rows of data and test it with 1000, I get:
+
+Correlation coefficient 0.0742
+Mean absolute error 0.742
+Root mean squared error 0.9407
+Relative absolute error 146.2462 %
+Root relative squared error 160.1116 %
+Total Number of Instances 1000
+
+
+As mentioned, I can train it with as many training samples as I want and still have the model overfits. If anyone is interested, I can provide/generate some data and post it online.
+"
+"['deep-learning', 'accuracy']"," Title: Can we calculate mean recall and precisionBody: I'm evaluating the accuracy in detecting objects for my image data set using three deep learning algorithms. I have selected a sample of 30 images. To measure the accuracy, I manually count the number of objects in each image and then calculate recall and precision values for three algorithms. Following is a sample:
+
+
+
+Finally, to select the best model for my data set, can I calculate the mean Recall and mean Precision? For example:
+
+
+"
+"['convolutional-neural-networks', 'autoencoders', 'features']"," Title: How are small scale features represented in an Inverse Graphics Network (autoencoder)?Body:
+
+This post refers to Fig. 1 of a paper by Microsoft on their Deep Convolutional Inverse Graphics Network:
+
+https://www.microsoft.com/en-us/research/wp-content/uploads/2016/11/kwkt_nips2015.pdf
+
+Having read the paper, I understand in general terms how the network functions. However, one detail has been bothering me: how does the network decoder (or ""Renderer"") generate small scale features in the correct location as defined by the graphics code? For example, when training the dataset on faces, one might train a single parameter in the graphics code to control the (x,y) location of a small freckle. Since this feature is small, it will be ""rendered"" by the last convolutional layer where the associated kernels are small. What I don't understand is how the information of the location of the freckle (in the graphics code) propagates through to the last layer, when there are many larger-scale unpooling + convolutional layers in-between.
+
+Thanks for the help!
+"
+"['reinforcement-learning', 'q-learning']"," Title: Is the Q value updated at every episode?Body: I trying to understand the Bellman equation for updating the Q table values. The concept of initially updating the value is clear to me. What is unclear is the subsequent updates to the value. Is the value replaced with each episode? It doesn't seem like this would learn from the past. Maybe average the value from the previous episode with the existing value?
+
+Not specifically from the book. I'm using the equation
+
+$$V(s) = \max_a(R(s, a) + \gamma V(s')),$$
+
+where $\gamma$ is the learning rate. $0.9$ will encourage exploration, and $0.99$ will encourage exploitation. I'm working with a simple $3 \times 4$ matrix from YouTube
+"
+"['deep-learning', 'convolutional-neural-networks', 'cross-entropy']"," Title: How to formalize learning in terms of information theory?Body: Consider the following game on a MNIST dataset:
+
+
+- There are 60000 images.
+- You can pick any 1000 images and train your Neural Network without access to the rest of images.
+- Your final result is prediction accuracy on all dataset.
+
+
+How to formalize this process in terms of information theory? I know that information theory works with distributions, but maybe you can provide some hints how to think in terms of datasets instead of distributions.
+
+
+- What is the information content of the whole dataset? My first idea was that each image is i.i.d. from a uniform distribution, so the information content of one image is $-\log_2(1/60000)$. But common sense and empirical results (training a neural network) show that there are similar images and very different images holding a lot more information. For example, if you train a NN only on good-looking images of 1, you will get bad results on unusual 1s.
+- How to formalize that the right strategy is to choose 1000 images that are as different as possible? I was thinking of picking, image by image, the image with the highest entropy relative to the images you already have. How should the distance function be defined?
+- How to show that the whole dataset contains N bits of information, the training dataset contains M bits of information, and there is a way to choose K < 60000 images that hold more than 99.9% of the information?
+
+"
+"['computer-vision', 'image-processing']"," Title: Rendering images and voxelizing the imagesBody: I am using the shapenet dataset. From this dataset, I have 3d models in .obj format. I rendered the images of these 3d models using pyrender library which gives me an image like this :
+
+
+Now I am using raycasting to voxelize this image. The voxel model I get is something like below :
+
+
+
+I am not able to understand why I am getting the white or light brown colored artifacts in the boundary of the object.
+
+The reason I could come up with was maybe the pixels at the boundary of the object contain two colors, so when I traverse the image as numpy array, I get an average of these two colors which gives me these artifacts. But I am not sure if this is the correct reason.
+
+If anyone has any idea about what could be the reason, please let me know
+"
+"['reinforcement-learning', 'math', 'notation']"," Title: What does the notation sup dist mean in distributional RL?Body: I'm trying to understand distributional RL, based on this article. In one of the equations, there is a symbol $\operatorname{sup dist}$.
+
+\begin{align}
+\operatorname{sup dist}_{s, a} (R(s, a) + \gamma Z(s', a^*), Z(s, a)) \\
+s' \sim p(\cdot \mid s, a)
+\end{align}
+
+What does $\operatorname{sup dist}$ mean?
+"
+"['natural-language-processing', 'generative-model', 'geometric-deep-learning']"," Title: Creating an AI than can learn to give instructionsBody: So we think a computer is dumb because it can only follow instructions. Therefor I am trying to create an AI that can give instructions.
+
+The idea is this: Create a geometric scene (A) then make a change in scene such as turning a square red or moving a circle right one unit. This becomes the new scene B. Then the computer compares the scenes A and B and it's goal is to give the shortest possible instruction that will change scene A to scene B. Examples might be:
+
+""Turn the green square red"".
+
+
+or
+
+""Move the yellow square down"".
+
+
+Or when we get more advanced we might have:
+
+""Move the green square below the leftmost purple square down.""
+
+
+Equally, this task could be seen as finding a description of the change. e.g. ""The green square has turned red"".
+
+The way it would work is that there'd be a simplified English parser, and the computer would generate a number of phrases and check whether these achieved the desired result.
+
+I would probably give it some prior knowledge of things like colours, shape-names, and so on. Or it could learn these by example.
+
+Eventually I would hope it to generate more complicated loop-type expressions such as ""Move the square left until it reaches the purple circle."" And these would essentially be little algorithms the AI has generated in words.
+
+I've got some ideas how to do this. But do you know any similar projects that I could look at? If you were implementing this, how would you go about it?
+
+[In other words we have an English parser that is interpreted to change a scene A into a scene B. But we want the AI to learn, given scenes A and B, how to generate instructions.]
+"
+"['backpropagation', 'math', 'resource-request']"," Title: What is the neuron-level math behind backpropagation for a neural network?Body: I am quite new in the AI field. I am trying to create a neural network, in a language (Dart) where I couldn't find examples or premade libraries or tutorials. I've tried looking online for a strictly ""vanilla"" python implementation (without third-party libraries), but I couldn't find any.
+
+I've found a single layer implementation, but it's done only with matrices and it's quite cryptic for a beginner.
+
+I've understood the idea between the feed forwarding, a neuron calculates the sum of its inputs, adds a bias and activates it.
+
+But I couldn't find anything a neuron-level explanation of the math behind backpropagation. (By neuron-level I think of the math down to the single neuron as a sequence of operations instead of multiple neurons treated as matrices).
+
+What is the math behind it? Are there any resources to learn it that are suitable as a beginner?
+"
+['machine-learning']," Title: Given enough graphical data, could you train an AI to plot a polynomial graph based on the input conditions?Body: Good day everyone.
+
+I am curious if it is possible for an AI to plot a time-series graph based on a single input.
+Using free fall impact as an example.
+
+Assuming we drop a ball from height 100m and record the force it receives relative to time.
+We will get a graph that looks something like below.
+
+
+Now we drop the ball from a height of 120m, record the forces, and we get another graph in addition to our original.
+
+
+
+What I am wondering is:
+If we have a large set of data on 60m to 140m (20m interval) height drops, would we be able to generate a regression model that plots the responses when given an arbitrary drop height? (i.e plot force response when dropped from 105m)
+
+Thank you all very much for your time and attention.
+"
+"['machine-learning', 'algorithm-request']"," Title: Is this a problem well suited for machine learning?Body: The light field of a certain scene is the set of all light rays that travel through the volume of that scene at a specific point in time. A light field camera, for example, captures and stores a subset of the light field of a scene.
+I've got an unstructured subsampling of such a scene (a few billion rays, each having a direction and light intensity information).
+What I wish to do now is to create an approximation of the original scene that created this light field, with the approximation consisting of 3 arbitrarily positioned (alpha-)textured 2D planes (in 3D space), where each point on the surface radiates light towards uniformly in all directions based on the pixel color at this position.
+So, I guess, this is like finding regions in the volume where similarly 'colored' rays intersect, such that the planes maximize the number of intersections they can cover.
+So, the available data is the few billions of rays, the desired output is the parameters(position, normal and size) of the three planes plus one RGBA texture for each.
+I'm asking here about experiences and opinions: Is this problem rather well-suited for a machine learning approach or rather not?
+Edit:
+A classical algorithm I could think of to solve this would be to voxelize the volume and use pathtracing to add a color sample for each ray to all cells along its way, then give each cell some value based on how similar all its contained samples are and then search for planar surfaces that intersect as many high rated cells as possible.
+But maybe machine learning is better suited for such a problem?
+"
+"['deep-learning', 'natural-language-processing', 'word-embedding', 'text-classification', 'binary-classification']"," Title: Does summing up word vectors destroy their meaning?Body: For example, I have a paragraph that I want to classify in a binary manner. But because the inputs have to have a fixed length, I need to ensure that every paragraph is represented by a uniform quantity.
+One thing I've done is taken every word in the paragraph, vectorized it using GloVe word2vec, and then summed up all of the vectors to create a "paragraph" vector, which I've then fed in as an input for my model. In doing so, have I destroyed any meaning the words might have possessed?
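+
+Concretely, the summing I am describing looks roughly like this (a sketch; glove here is a hypothetical word-to-vector lookup, not the actual GloVe API):
+
+import numpy as np
+
+def paragraph_vector(tokens, glove):
+    # sum the vectors of all tokens that have an embedding
+    vectors = [glove[w] for w in tokens if w in glove]
+    return np.sum(vectors, axis=0)
+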
+Considering these two sentences would have the same vector:
+
+My dog bit Dave
+
+
+Dave bit my dog
+
+How do I get around this? Am I approaching this wrong?
+What other way can I train my model? If I take every word and feed that into my model, how do I know how many words I should take? How do I input these words? In the form of a 2D array, where each word vector is a column?
+I want to be able to train a model that can classify text accurately.
+Surprisingly, I'm getting a high accuracy (>90%) with a relatively simple model like RandomForestClassifier just by using this summing-up method. Any insights?
+"
+"['genetic-algorithms', 'genetic-programming', 'constraint-satisfaction-problems']"," Title: How can I develop a genetic algorithm with a constraint on the sum of alleles?Body: I'm working on a genetic algorithm with a constraint on the sum of the alleles, e.g. if we use regular binary coding and a chromosome is 5-bits long I'd like to constrain it so that the sum of the bits has to be 3 or less (011100 is valid but 011110 is not). Moreover, the fitness function is such that invalid chromosomes cannot be evaluated.
+
+Any ideas on how this problem could be approached?
+
+I've started looking into the direction of messy GAs (since those can be over-specified) but I'm not sure if there's anything there.
+"
+"['reinforcement-learning', 'comparison', 'policy-gradients', 'reinforce']"," Title: What is the difference between Sutton's and Levine's REINFORCE algorithm?Body: I followed the videos/slides of Berkley RL course, but now I am a bit confused when implementing it. Please see the picture below.
+
+
+
+In particular, what does $i$ represent in the REINFORCE algorithm? If $\tau^i$ is the trajectory for the whole episode $i$, then why don't we average across the episodes $\frac{1}{N}$, which approximates the gradient of the objective function? Instead, it is a sum over the $i$. So, do we update the gradients per episode or have batches of episodes to update it?
+When I compare the algorithm to Sutton's book as shown below, I see that there we update the gradients per episode.
+
+
+
+But wouldn't it then contradict the derivation on the Levine's slide that the gradient of the objective function $J$ is the expectation (therefore sampling) of the gradients of the logs?
+
+Secondly, why do we have a cumulative sum of the returns over $T$ in Sutton's version, but do not do it in Levine's (instead, all returns are summed together)?
+"
+"['machine-learning', 'datasets']"," Title: How exactly does MICE imputation combine multiple datasets into one?Body: I'm trying to understand Multiple Imputation with Chained Equation (MICE) imputation process (a statistical method for imputing missing data). I have read some articles and I have understood how the imputation happens, but I didn't get the "pooling" step.
+After analyzing the resulting datasets with Rubin's rules, how to pool these datasets? How to get only one dataset?
+In the end, do I combine all these datasets? If yes, how? Or do I compare every dataset's estimators with Rubin's estimators and choose one dataset?
+"
+"['neural-networks', 'deep-learning', 'natural-language-processing']"," Title: How can I extract the reason of the legal compensation from a court report?Body: I'm working on a project (court-related). At a certain point, I have to extract the reason of the legal compensation. For instance, let's take these sentences (from a court report)
+
+
+ Order mister X to pay EUR 5000 for compensation for unpaid wages
+
+
+and
+
+
+ To cover damages, mister X must pay EUR 4000 to mister Y
+
+
+I want to make an algorithm that is able from this sentence to extract the motive of legal compensation. For the first sentence
+
+
+ Order mister X to pay EUR 5000 for compensation for unpaid wages
+
+
+the algorithm's output must be ""compensation for unpaid wages"" or ""compensation unpaid wages "".
+
+For the second sentence, the algorithm's output must be ""cover damages"". Output can be a string or a list of string, it doesn't matter.
+
+As I'm not an NLP expert (but I have already worked on a project on sentiment analysis, so I know some stuff about NLP), and there are so many articles, I don't know where to start.
+
+I'm working on French texts, but I can get away with working on English texts.
+"
+"['convolutional-neural-networks', 'autoencoders', 'image-processing']"," Title: Autoencoder produces repeated artifacts after convergenceBody: As experiment, I have tried using an autoencoder to encode height data from the alps, however the decoded image is very pixellated after training for several hours as show in the image below. This repeating patter is larger than the final kernel size, so I would think it would possible to remove these repeating patterns from the image to some extent.
+
+The image is (1, 512, 512) and is sampled down to (16, 32, 32). This is done with pytorch. Here is the relevant sample of the code in which the exact layers are shown.
+
+
+
+ self.encoder = nn.Sequential(
+ # Input is (N, 1, 512, 512)
+ nn.Conv2d(1, 16, 3, padding=1), # Shape (N, 16, 512, 512)
+ nn.Tanh(),
+ nn.MaxPool2d(2, stride=2), # Shape (N, 16, 256, 256)
+ nn.Conv2d(16, 32, 3, padding=1), # Shape (N, 32, 256, 256)
+ nn.Tanh(),
+ nn.MaxPool2d(2, stride=2), # Shape (N, 32, 128, 128)
+ nn.Conv2d(32, 32, 3, padding=1), # Shape (N, 32, 128, 128)
+ nn.Tanh(),
+ nn.MaxPool2d(2, stride=2), # Shape (N, 32, 64, 64)
+ nn.Conv2d(32, 16, 3, padding=1), # Shape (N, 16, 64, 64)
+ nn.Tanh(),
+ nn.MaxPool2d(2, stride=2) # Shape (N, 16, 32, 32)
+ )
+ self.decoder = nn.Sequential(
+ # Transpose convolution operator
+ nn.ConvTranspose2d(16, 32, 4, stride=2, padding=1), # Shape (N, 32, 64, 64)
+ nn.Tanh(),
+ nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), # Shape (N, 32, 128, 128)
+ nn.Tanh(),
+ nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), # Shape (N, 16, 256, 256)
+ nn.Tanh(),
+ nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), # Shape (N, 1, 512, 512)
+ nn.ReLU()
+ )
+
+
+
+
+Relevant image: left side original, right side result from autoencoder
+
+So could these pixellated effects in the above image be resolved?
+"
+"['deep-learning', 'convolutional-neural-networks', 'long-short-term-memory', 'hyperparameter-optimization', 'hyper-parameters']"," Title: How can I do hyperparameter optimization for a CNN-LSTM neural network?Body: I have built a CNN-LSTM neural network with 2 inputs and 2 outputs in Keras. I trained the network with model.fit_generator()
(and not model.fit()
), to load just parts of the training data when needed, because the training data is too large to load at once.
+
+After the training the model was not working. So I checked training data (before and after augmentation). The training data are correct. So I thought the reason why the model does not work must be that I have not found the optimal hyperparameters yet.
+
+But how can I do hyperparameter optimization on a network with multiple inputs and outputs and trained with model.fit_generator()? All I can find online is hyperparameter optimization of networks with a single input and single output and trained with model.fit().
+"
+"['deep-learning', 'keras', 'accuracy', 'gpu', 'batch-size']"," Title: Effect of batch size and number of GPUs on model accuracyBody: I have a data set that was split using a fixed random seed and I am going to use 80% of the data for training and the rest for validation.
+Here are my GPU and batch size configurations
+
+- use 64 batch size with one GTX 1080Ti
+- use 128 batch size with two GTX 1080Ti
+- use 256 batch size with four GTX 1080Ti
+
+All other hyper-parameters such as lr, opt, loss, etc., are fixed. Notice the linearity between the batch size and the number of GPUs.
+Will I get the same accuracy for those three experiments? Why and why not?
+"
+"['word-embedding', 'glove']"," Title: Is there a way to parallelize GloVe cooccur function?Body: I would like to create a GloVe word embedding on a very large corpus (trillions of words). However, creating the co-occurence matrix with the GloVe cooccur script is projected to take weeks. Is there any way to parallelize the process of creating a co-occurence matrix, either using GloVe or another resource that is out there?
+"
+"['neural-networks', 'natural-language-processing', 'feature-extraction']"," Title: Extract product information from email receipt HTMLBody: I am trying to extract product information from email receipts HTML. Most services I have found focus on OCR from paper receipts or PDFs. I would imagine that extraction of product information would be easier from structured HTML. What type of AI approach would be used to support this?
+"
+['image-recognition']," Title: Recognizing Set CARDsBody: Set is a card game and is Nicely described here.
+
+Each set-card has 4 properties:
+
+
+- The number(1,2 or 3)
+- the color (Red, Green or Purple)
+- Fill (Full, Stripes, None)
+- Form (Wave, Oval or Diamond)
+
+
+
+
+converts to 2 Purple Waves No fill (code: 2PWN)
+
+
and
+
+convert to codes 1RON and 3GDN
+
+For every combination there is one card, so in total there are 3^4 = 81 cards. The goal is to identify 3 cards (a set) out of a collection of 12 displayed, randomly chosen set cards, where all properties occur 0, 1 or 3 times.
+
+As a hobby project I want to create an Android app which can - with the camera - capture the 12 (more or less) set cards and indicate the sets present in the collection of 12. I'm looking for ways to leverage image recognition as efficiently as possible.
+
+I've been thinking of taking multiple pictures of all the individual cards, labeling them and feeding them to a trainer (Firebase ML Kit AutoML Vision Edge). But I have the feeling that this is a bit of brute force and takes a lot of time and effort photographing and labeling. I could also take pictures of multiple set cards and provide the different codes as labels.
+
+What would be the best (most efficient) approach to have a model for labelling all cards?
+"
+"['neural-networks', 'machine-learning', 'math', 'learning-curve']"," Title: Is there a mathematical formula that describes the learning curve in neural networks?Body: In training a neural network, you often see the curve showing how fast the neural network is getting better. It usually grows very fast then slows down to almost horizontal.
+
+Is there a mathematical formula that matches these curves?
+
+
+
+Some similar curves are:
+
+$$y=1-e^{-x}$$
+
+$$y=\frac{x}{1+x}$$
+
+$$y=\tanh(x)$$
+
+$$y=1+x-\sqrt{1+x^2}$$
+
+Is there a theoretical reason for this shape?
+"
+"['deep-learning', 'classification']"," Title: Can training a model on a dataset composed by real images and drawings hurt the training process of a real-world application model?Body: I'm training a multi-label classifier that's supposed to be tested on underwater images. I'm wondering if feeding the model drawings of a certain class plus real images can affect the results badly. Was there a study on this? Or are there any past experiences anyone could share to help?
+"
+"['reinforcement-learning', 'deep-rl', 'sarsa']"," Title: Optimal RL function approximation for TicTacToe gameBody: I modeled the TicTacToe game as a RL problem - with an environment and an agent.
+
+At first I made an ""Exact"" agent - using the SARSA algorithm, I saved every unique state, and chose the best (available) action given that state. I made 2 agents learn by competing against each other.
+
+The agents learned fast - it took only 30k games for them to reach a tie stand-off. And the agent clearly knew how to play the game.
+
+I then tried to use function approximation instead of saving the exact state. My function was a FF-NN. My 1st (working) architecture was 9 (inputs) x 36 x 36 x 9 (actions). I used the semi-gradient 1-step SARSA. The agents took much longer time to learn. After about 50k games they were still less good than the exact agent. I then made a stand off between the Exact and the NN agent - the exact agent won 1721 games from 10k, the rest were tied. Which is not bad.
+
+I then tried reducing the number of units in the hidden layers to 12, but didn't get good results (even after playing for 500k+ games total, tweaking stuff). I also tried playing with convolution architectures, again - not getting anywhere.
+
+I am wondering if there's some optimal function approximation solution that can get as-good of results as the exact agent. TicTacToe doesn't seem like such a hard problem for me.
+
+Conceptually I think there should be much less complexity involved in solving it then can be expressed in a 9x36x36x9 network. Am I wrong, and it's just an illusion of simplicity? Or are there better ways? Maybe modeling the problem differently?
+"
+"['neural-networks', 'deep-learning', 'explainable-ai']"," Title: Has anyone attempted to take a bunch of similar neural networks to extract general formulae about the focus area?Body: When a neural network learns something from a data set, we are left with a bunch of weights which represent some approximation of knowledge about the world. Although different data sets or even different runs of the same NN might yield completely different sets of weights, the resulting equations must be mathematically similar (linear combinations, rotations, etc.). Since we usually build NNs to model a particular concrete task (identify cats, pedestrians, tumors, etc.), it seems that we are generally satisfied to let the network continue to act as a black box.
+
+Now, I understand that there is a push for ""understandability"" of NNs, other ML techniques, etc. But this is not quite what I'm getting at. It seems to me that given a bunch of data points recording the behavior of charged particles, one could effectively recover Maxwell's laws using a sufficiently advanced NN. Perhaps that requires NNs which are much more sophisticated than what we have today. But it illustrates the thing I am interested in: NNs could, in my mind, be teaching us general truths about the world if we took the time to analyze and simplify the formulae that they give us1.
+
+For instance, there must be hundreds, if not thousands of NNs which have been trained on visual recognition tasks that end up learning many of the same sub-skills, to put it a bit anthropomorphically. I recently read about gauge CNNs, but this goes the other way: we start with what we know and then bake it into the network.
+
+Has anyone attempted to go the opposite way? Either:
+
+
+- Take a bunch of similar NNs and analyze what they have in common to extract general formulae about the focus area2
+- Carefully inspect the structure of a well-trained NN to directly extract the ""Maxwell's equations"" which might be hiding in them?
+
+
+1 Imagine if we built a NN to learn Newtonian mechanics just to compute a simple ballistic trajectory. It could surely be done, but would also be a massive waste of resources. We have nice, neat equations for ballistic motion, given to us by the ""original neural networks"", so we use those.
+
+2 E.g., surely the set of all visual NNs have collectively discovered near-optimal algorithms for edge/line/orientation detection, etc.). This could perhaps be done with the assistance of a meta-ML algorithm (like, clustering over NN weight matrices?).
+"
+"['natural-language-processing', 'recurrent-neural-networks', 'long-short-term-memory']"," Title: Training an RNN to answer simple quesitonsBody: I would like to train an RNN to follow the sentences:
+
+""Would you like some cheese""? with ""Yes, I would like some cheese.""
+
+So whenever the template ""Would you like some ____?"" appears then RNN produces the sequence above. And it should even work on sentences which are new like ""Would you like some blumf?""
+
+I have thought of various ways of doing this. For example, as well as having 26 outputs for the letters of the alphabet, I could have about 20 more for ""repeat the character that is 14 characters to the left"" and so on.
+
+Has this been done before or is there a better way?
+"
+"['deep-learning', 'convolutional-neural-networks', 'computer-vision', 'definitions', 'papers']"," Title: What is a cascaded convolutional neural network?Body: For a project I am doing, I found the paper Face Alignment in Full Pose Range: A 3D Total Solution.
+
+It is using a cascaded convolutional neural network, but I wasn't able to find the original paper explaining what that is.
+
+In layman's terms and intuitively, how does a cascaded CNN work? What does it solve?
+"
+"['neural-networks', 'deep-learning', 'pattern-recognition', 'structured-data', 'data-mining']"," Title: how to convert one structured data to another without specifying structureBody: I have lots of text documents structured as
+
+{
+{
+ Item1=[
+ {a1=1,
+ a2=2,
+ a3=3},
+ {a1=11,
+ a2=22,
+ a3=33},
+ {a1=41,
+ a2=52,
+ a3=63},
+ {a1=19,
+ a2=29,
+ a3=39}
+ ],
+ Item2=[
+ {a4=1,
+ a5=2,
+ a6=3},
+ {a4=11,
+ a5=22,
+ a6=33},
+ {a4=41,
+ a5=52,
+ a6=63},
+ {a4=19,
+ a5=29,
+ a6=39}
+ ],
+}
+}
+
+
+Now this can be formatted into two csv's as
+
+
+
+and
+
+
+
+I can write regex parser for this but is there a way by which a neural network or deep learning model can be trained for this, which can create these csvs?
+
+The above example has been indented for better visuals, the raw text looks something like
+
+{{Item1=[{a1=1,a2=2,a3=3},{a1=11,a2=22,{a1=41,a2=52,a3=63},{a1=19,a2=29,a3=39}],Item2=[{a4=1,a5=2,a6=3},{a4=11,a5=22,a6=33},{a4=41,a5=52,a6=63},{a4=19,a5=29,a6=39}]}}
+
+"
+"['neural-networks', 'deep-learning', 'math', 'definitions', 'activation-functions']"," Title: What is the mathematical definition of an activation function?Body: What is the mathematical definition of an activation function to be used in a neural network?
+
+So far I have not found a precise one summarizing which criteria (e.g. monotonicity, differentiability, etc.) are required. Any recommendations for literature about this or – even better – the definition itself?
+
+In particular, one major point which is unclear to me is differentiability. In lots of articles, this is required for the activation function, but then, out of nowhere, ReLU (which is not differentiable at $0$) is used. I totally understand why we need to be able to take derivatives of it and I also understand why we can use ReLU in practice anyway, but how does one formalize this?
+"
+"['regularization', 'non-linear-regression']"," Title: Regularization of non-linear parameters?Body: I was wondering whether it is possible to regularize (L1 or L2) non-linear parameters in a general regression model. Say, I have the following non-linear least squares cost function, where $p$ is a $3d$ vector of fitting parameters:
+
+$cost(p) = \left( y(x) - \sin^{p_1}(x) + p_2 e^{p_3 x} \right)^2$
+
+In the above cost function, $p_1$ and $p_3$ are non-linear parameters. How should I go about regularizing them? If they were linear, I can just sum them up together with the linear parameters (absolute values or squares) and add as a penalty to the cost function, right? However, I'm not sure if I'm allowed to do so for non-linear parameters.
+
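+For concreteness, this is roughly what I mean by adding the penalty, assuming the model is $\sin^{p_1}(x) + p_2 e^{p_3 x}$ (a sketch with scipy and made-up data, not something I have verified):
+
+import numpy as np
+from scipy.optimize import least_squares
+
+x = np.linspace(0.1, 2.0, 50)                      # toy data just to run the fit
+y = np.sin(x) ** 1.5 + 0.3 * np.exp(0.2 * x)
+
+lam = 0.1                                          # regularization strength
+
+def residuals(p):
+    model = np.sin(x) ** p[0] + p[1] * np.exp(p[2] * x)
+    # L2 penalty on all parameters, appended as extra residuals
+    return np.concatenate([y - model, np.sqrt(lam) * p])
+
+fit = least_squares(residuals, x0=[1.0, 1.0, 0.1])
+print(fit.x)
+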
+Has anyone considered this problem?
+"
+"['neural-networks', 'deep-learning', 'comparison', 'variational-autoencoder']"," Title: What's the difference between semi-supervised VAEs and conditional VAEs?Body: Can someone explain the difference? I'm assuming the difference is just that the neural nets representing the encoder and decoder are trained in a semi-supervised manner in semi-supervised VAE, which in conditional the approximation to the posterior and the posterior's distributions are conditioned on some labels? So, I'm guessing that semi-supervised VAE affects the loss evaluation, whereas, in conditional VAEs, the inference network is conditioned on another label as well?
+"
+"['reinforcement-learning', 'alphazero', 'self-play']"," Title: In AlphaZero, which policy is saved in the dataset, and how is the move chosen?Body: I've been doing some research on the principles behind AlphaZero.
+Especially this cheat sheet (1) and this implementation (2) (in Connect 4) were very useful.
+Yet, I still have two important questions:
+
+- How is the policy network updated? In (2), board positions are saved in a dataset of tuples (state, policy, value). The value is derived from the result of the self-played game. However, I'm not sure which policy is saved: the number of times that each move has been played, the prior probabilities for each move (I guess not), or something else?
+
+- The cheat sheet says that (for competitive play) the move is chosen with the greatest N (=most visited). Wouldn't it be more logical to choose the move with the highest probability calculated by the policy head?
+
+
+"
+"['deep-learning', 'classification']"," Title: What's the mathematical relationship between number of trainable parameters and size of training set?Body: Let's say that I have a pre-trained model where the training set used to pretrain the model is very different from my training set.
+Let's say I unfreeze layers that have X trainable parameters. What size should the training set be with/without data augmentation for multi-class/multi-label image classification with Y number of labels?
+"
+"['classification', 'datasets']"," Title: How to correctly label images for multi-label classification?Body: I have images that contain lots of elements. Some I know, some I don't. I want to know if it's ok to only label those I do know. Let's take this image for example. I would label the green stuff and the worm but leave the rest unlabeled.
+Is that ok?
+
+Another question I would also like to ask is how concise I should be in labeling. For instance, You can see in the picture a bit of blue behind the green plant. So should I label that bit and say water or leave it unlabeled?
+
+EDIT:
+
+I also want to ask if it's ok to label only the things I'm interested in even if they take up to 30% of the picture? Won't the neural network be confused by all the details in the picture that it perceives and that I label as A for example, even if A is just a part of it?
+
+Another question would be, let's say I have labels A, B and C. I have an image in which I'm a bit confused if a certain object is of label B or A or even a totally different class other than (A,B,C). What should I do in this instance?
+
+I'm having a hard time with the dataset. It would take an expert to label this correctly. But I want to do things as cleanly as possible, so all the effort doesn't go to waste. I would really appreciate your help. Thank you guys.
+
+
+"
+"['neural-networks', 'training', 'randomness']"," Title: How to deal with random weights initialization in hyperparameters tuning?Body: In the step of tuning my neural networks I often encounter a problem that every time I train the exact same network, it gives me different final error due to random initialization of the weights. Sometimes the differences are small and negligible, sometimes they are significant, depending on the data and architecture.
+
+My problem arises when I want to tune some parameters, like the number of layers or neurons, because I don't know if the change in the final error was caused by the recent changes in the network's architecture or is simply an effect of the aforementioned randomness.
+
+My question is how to deal with this issue?
+"
+"['probability', 'probability-distribution']"," Title: Why am I getting the logarithm of the probability bigger than zero when using Neural Spline Flows?Body: I am using a normalizing flow (Neural Spline Flows) to approximate a probability. After some training, the average loss is around 0.5 (so the logarithm of the probability = -0.5). However, when I am trying it on some new test data, I am getting some values of the logarithm of the probability bigger than zero, which would mean that the probability for that element is bigger than one (which doesn't make sense).
+
+Does anyone know what could cause this? Isn't the flow supposed to keep all the probabilities below 1 automatically?
+"
+"['convolutional-neural-networks', 'object-recognition', 'object-detection']"," Title: YOLOv3 Synthetic Data TrainingBody: Suppose we want to train a model to detect various objects. Let's say we have training data of those objects in various backgrounds along with their bounding boxes. Basically these objects have been three dimensionally created and the bounding boxes have been drawn on them. Then these have been ""synthetically inserted"" into various blank backgrounds.
+
+Why would a model trained only on this data do better than a model that has this data along with ""real"" data of these objects with their bounding boxes manually drawn?
+"
+"['neural-networks', 'machine-learning', 'deep-learning']"," Title: How to Layer based Feature extraction?Body: I have read that in deep networks you can engineer each layer for a particular purpose with regards to feature learning. I'm wondering how that is actually done and how it is trained?
+
+In addition, doesn't this conflict with the idea of deep networks having ""automatic"" feature extraction?
+
+For example consider this:
+
+Let's say you want to detect stop signs. How would you teach a deep network to do this in a layer-wise fashion?
+
+People write that one layer of a deep network does edge detection, but how?
+"
+['natural-language-processing']," Title: Is Sanskrit still relevant for NLP/AI?Body: I came across a news article from 2018 where the president of India was saying that Sanskrit is the best language for ML/AI. I have no idea regarding his qualification on either AI or Sanskrit to say this but this idea has been floated earlier in the context of NLP. Specifically, Rick Briggs had said so in 1985.
+
+I know elementary Sanskrit and I know NLP. I can understand the point that, as a strongly declined language, Sanskrit is less ambiguous than, say, English, as the position of words in a sentence is not important. Add to that the fact that verbs are also declined (that's not the technical term for verbs and I am not sure what is), so that verb gender and number help identify which entity they refer to.
+
+However, that point was valid in 1985. My question is: after the deep learning revolution of the last couple of years, is it still relevant? Especially given the fact that humans still have to first learn Sanskrit in case NLP is done in Sanskrit.
+
+Of course, as can be guessed from the tone of the question, I am of the opinion that Sanskrit is not relevant for AI now, but I wanted to know if someone who works in AI thinks otherwise and, if so, what their reason is for thinking so.
+"
+"['machine-learning', 'reinforcement-learning']"," Title: Why is the average reward plot for my reinforcement learning agent different than the usual plots?Body: I'm building an RL agent using SARSA and Q-Learning for testing its capabilities.
+
+The environment is a 10x10 grid, where the agent gets a reward of 1 if it reaches the goal and a reward of -1 every time it takes a step out of the grid. So, it can freely move out of the grid, but every step outside of the grid gets -1.
+
+After tuning the main parameters
+
+
+- alpha_val: 0.25
+- discount: 0.99
+- episode_length: 50
+- eps_val: 0.5
+
+
+I get the following plot for 10000 episodes (The plot is sampled every 100 episodes):
+
+
+
+But when I look at the plots online I see usually plots like this one:
+
+
+
+Since I'm new to RL, I'm asking for comments about my outcome, or any type of suggestion if any of you think that I'm doing something wrong.
+"
+"['computer-vision', 'image-segmentation', 'u-net', 'instance-segmentation', 'semantic-segmentation']"," Title: If I trained a model to perform semantic segmentation on images with only one object, would it also work on images with multiple objects?Body: I'm working on semantic segmentation tasks in the medical space using the U-Net. Let's say that I train a U-Net model on medical images with the goal of segmenting out, say, ligaments, from a medical image. If I train that model on images that contain just a single labelled ligament, it will be able to segment out single ligaments pretty well, I assume. If I present it with an image with multiple ligaments, should it also be able to segment the multiple ligaments well too?
+Based on my understanding, semantic segmentation is just pixel-wise classification. As a result, shouldn't the number of objects in the image be irrelevant, since it's only looking at individual pixels? So, as long as a pixel matches that of a ligament, it should be able to segment it equally well, right?
+Or am I misunderstanding some piece?
+Basically, if I train a U-Net on images with just single ligaments, will it also be able to segment images with multiple ligaments equally as well based on my logic above?
+"
+"['machine-learning', 'deep-learning', 'game-ai', 'models']"," Title: Why isn't there a model playing FPS like CoD or Battlefield already existing?Body: Assuming we had an unlimited time to train a model and a very powerful machine to use our model in real-time (hello quantum computer), I'd like to know why no one could achieve to build an AI able to play a FPS, using ONLY pixels shown on the screen.
+
+Disclaimer: I am not tackling this problem and neither am I planning on doing such a thing, this is pure speculation and curiosity.
+
+
+
+I read this great article: playing FPS Games with Deep Reinforcement Learning (2017) (Guillaume Lample, Devendra Singh Chaplot) where they achieve a 4/1 kills/death ratio on Doom against bots. But this is 3 years old now.
+
+Here is a picture of their model:
+
+
+
+But they made 2 assumptions that we, humans, do not make when we are playing for the first time to a new game like Call of Duty or Battlefield:
+
+
+- Game feature augmentation. To train a part of their model they used the game engine to know if there is (or not) an enemy in the frame they are processing. We obviously can't do this with CoD or Battlefield, and we, as humans, just ""learn"" to recognize an enemy without this information.
+- Changing textures of several maps while training to make the model generalize better (see 5.2 of the paper linked previously). To summarize, they trained the model with 10 maps, changing the texture of some elements to make the model generalize better. Then they tested it on 3 unknown maps. In the real world (i.e. in the scenario where we base our training/testing exclusively on the pixels of the screen), we can't train a model with different textures on the same map. And humans are able to play a deathmatch on an unknown map without re-learning everything (detecting enemies, moving, hiding, reloading...). We just need to construct a 3D model of the map in our head to play our best.
+
+
+
+
+Their agent ""divides the problem into two phases: navigation (exploring the map to collect items and find enemies) and action (fighting enemies when they are observed), and uses separate networks for each phase of the game"".
+
+Would it be wise to use more than 2 models? Let's say:
+
+
+- 1 CNN to detect enemies
+- 1 model to deal with space features (position/navigation of the agent and the enemies)
+- 1 model to choose the actions given all data previous models have found?
+
+
+We could train them independently, at the same time.
+
+I think we'd get better results by manually processing some features (using computer vision techniques), like the minimap to know the positions of enemies and the ammo count, to feed as input to the last model (the action decider).
+
+But there are other problems we'd get: there is a delay between the frame where we choose to pull the trigger, the time the bullet hits the enemy, and the time the ""reward"" appears on the screen (ex: ""100 points, kill [nameOfLateEnemy]"" appears after the 3rd bullet, and if there is ping because we are playing online it may appear 100 ms later). How do we use reinforcement learning when we don't know exactly which action was the one getting the reward? (We can move the agent while changing the look direction while pulling the trigger, all at the same time. It's the combination of these actions that makes the agent kill an enemy.)
+
+If the 2 assumptions they made were easy to get rid of, they would be discarded already.
+However, detecting enemies is basically a simple CNN, and making the navigation network generalize certainly has solutions that I can't think of right now, but that some researchers should have found in this 3-year gap between the paper and today.
+
+So why isn't there a model playing CoD or Battlefield better than humans? What am I missing?
+"
+"['object-detection', 'accuracy', 'precision', 'recall']"," Title: Given the precision and recall of this model, what can I say about it?Body: The following table shows the precision and recall values I obtained for three object detection models.
+
+The goal is to find the best object detection model for that particular data set.
+I evaluate the first two models as the following.
+
+- Model 1 has a high recall and precision values. High precision relates to a low false-positive rate, and high recall relates to a low false-negative rate. High scores for both show that the model is returning accurate results.
+
+- Model 2 has high precision but low recall. This means it returns very few results, but most of its identified objects are correct.
+
+
+How can I evaluate the third one?
+"
+['neural-networks']," Title: Generating 5 numbers with 1 input before loss functionBody: I am trying make an ANN model that takes a constant m (will be changed later but now it is just a constant, let's say 0) as an input and generate 5 non-integer numbers (a1,a2..a5) after some layers like relu, linear,relu ... and then these 5 numbers enter to the loss function layer along with an additional 5 numbers (b1,b2..b5) given by hand directly into the same loss function. In the loss function, S=a1xb1+...+a5xb5 will be calculated and the model should use mean square error with this S and S0, which is given by hand again to tune the 5 numbers generated after the NN layers.
+
+For a dummy like me, this looks like a totally different model than the examples online. I'd really appreciate any guidance here, like the model I should use, examples, etc. I don't even know where to start, even though I believe I understand the generic NN examples that one can find online.
+
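+To make the setup concrete, this is my rough mental picture written as a PyTorch sketch (all layer sizes and names are my own guesses, and I am not sure this is the right way to structure it):
+
+import torch
+import torch.nn as nn
+
+net = nn.Sequential(                 # m -> 5 numbers (a1..a5)
+    nn.Linear(1, 16), nn.ReLU(),
+    nn.Linear(16, 16), nn.ReLU(),
+    nn.Linear(16, 5),
+)
+
+m = torch.tensor([[0.0]])                       # the constant input
+b = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0])     # b1..b5, given by hand
+S0 = torch.tensor(30.0)                         # target, given by hand
+
+optimizer = torch.optim.Adam(net.parameters(), lr=1e-2)
+for _ in range(200):
+    a = net(m).squeeze(0)            # a1..a5
+    S = (a * b).sum()                # S = a1*b1 + ... + a5*b5
+    loss = (S - S0) ** 2             # squared error between S and S0
+    optimizer.zero_grad()
+    loss.backward()
+    optimizer.step()
+
+print(net(m))
+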
+Thanks
+"
+"['natural-language-processing', 'semantics']"," Title: How much knowledge of the world is learnt through words?Body: We know a lot of common sense about the world. Things like ""to buy something you need money"".
+
+I wonder how much of this common sense comes about through someone actually, explicitly telling you the instruction ""You need money to buy things"", which we store in our brains as a sort of rule, as opposed to just intuitively understanding things and picking it up.
+
+I am imagining children playing at shop-keeping and saying things like ""I give you this and you give me that"". And other children not quite understanding the concept of buying things until being told by a teacher.
+
+If so, giving a computer a list of common sense rules like these is no different from teaching a child. So I am wondering why this area of AI research (semantic webs etc.) has been frowned upon in the last decade in favour of trying to learn everything through experience, like deep neural networks?
+"
+"['image-processing', 'image-generation']"," Title: Train AI with shapes + drop shadows to remove background colorsBody: for a screen printing app, I'd like to remove background colors from images.
+
+There is still a white border around text from anti-aliasing.
+Dropshadows also break it.
+
+So, I was thinking I could train an AI by creating images with shapes and text, with and without backgrounds.
+
+The AI input would get a version with a background and the ""goal"" would be the version without the background.
+
+How do I go about doing this? ( total AI noob )
+
+================
+
+Non-AI solution
+
+If anyone is interested... I have made a non-AI solution which takes all colors within a tolerance of the background, then looks at the 4x4 neighbors. From each neighbor (which is a candidate for converting into semi-transparent), it looks at the 3x3 neighbors around the candidate for the furthest color from the removal color (which typically grabs the solid pixels), and then converts the current pixel to an alpha version by copying the RGB values and setting alpha to 255 * (1 - dist_removal_to_current / dist_removal_to_furthest), or something like that.
+
+I should write an article or something... it was an interesting algorithm to write. linkedin me Dan Schumann in wisconsin
+"
+"['machine-learning', 'gradient-descent', 'momentum', 'adam', 'optimizers']"," Title: What is the formula for the momentum and Adam optimisers?Body: In the gradient descent algorithm, the formula to update the weight $w$, which has $g$ as the partial gradient of the loss function with respect to it, is:
+$$w \leftarrow w - r \times g$$
+where $r$ is the learning rate.
+What should the formula be for the momentum optimizer and the Adam (adaptive moment estimation) optimizer? Should something be added to the right-hand side of the formula above?
+"
+"['deep-learning', 'long-short-term-memory']"," Title: Why don't the neural networks inside LSTM cells contain hidden layers?Body: I watched a video explaining how LSTM cells have very rudimentary feed-forward neural networks, basically a 2 layer input-output with no hidden layers.
+
+Why don't LSTM cells have more complex neural networks before each gate, i.e. containing 1 hidden layer?
+
+I would think that if you want more advanced gating decisions, that you would use at least 1 hidden layer to have more complex processing.
+"
+"['machine-learning', 'convolutional-neural-networks', 'classification', 'datasets', 'data-preprocessing']"," Title: Suggestion for finding the stable regions in spiral galaxy data?Body: I am working with a data set that consists of the actual pitch angle (given as PA(Y)
) and the pitch angle at each radii (listed from 1 to 217). In the image below, you can only see radii 1 through 16. The Mode(Y)
in the image below is not of relevance at the moment.
+
+
+
+There are regions that range between certain radii in which the pitch angle measurement does not change (in the image, you'll notice this happens for all the radii values, but they do change after a certain radius that's cut off in the image). These are known as stable regions. My goal is to capture all the ranges in the data in which the pitch angle measurement does not change, and create a program that returns those values.
+
+Is there a machine learning method with which this is possible, or is this just a non-machine-learning problem? I have tried creating plots and have considered creating a CNN that can identify these flat regions, but I feel like this is overkill. My PIs want to use a machine learning method and they have proposed neural networks, which is why I tried the CNN, but I am just not sure whether that is possible.
+
+I should add that, usually, the radii ranges of stable regions are unknown, so the goal is to see whether certain radii ranges can usually predict where a stable region is located.
+
+Moreover, I've thought of using a classifier to determine whether a region is flat or not. I am just very confused as to how to approach this. Are there any similar examples to the problem I'm currently working on that someone can point me to?
+"
+"['convolutional-neural-networks', 'hyperparameter-optimization', 'bayesian-optimization']"," Title: Is it normal to see oscillations in tested hyperparameters during bayesian optimisation?Body: I've been trying out bayesian hyperparameter optimisation (with TPE) on a simple CNN applied to the MNIST handwritten digit dataset. I noticed that over iterations of the optimisation loop, the tested parameters appear to oscillate slowly.
+
+Here's the learning rate:
+
+
+Here's the momentum:
+
+
+I won't add a graph, but the batch size is also sampled from one of 32, 64, or 128. Also note that I did this with a fixed 10 epochs in each trial.
+
+I understand that we'd expect the trialled parameters to converge gradually towards the optimal, but why the longer term movement of the average?
+
+For context here is the score (1 - accuracy) over iterations
+
+
+And also for context, here's the architecture of the CNN.
+
+_________________________________________________________________
+Layer (type) Output Shape Param #
+=================================================================
+conv2d_1 (Conv2D) (None, 26, 26, 32) 320
+_________________________________________________________________
+max_pooling2d_1 (MaxPooling2 (None, 13, 13, 32) 0
+_________________________________________________________________
+conv2d_2 (Conv2D) (None, 11, 11, 64) 18496
+_________________________________________________________________
+conv2d_3 (Conv2D) (None, 9, 9, 64) 36928
+_________________________________________________________________
+max_pooling2d_2 (MaxPooling2 (None, 4, 4, 64) 0
+_________________________________________________________________
+flatten_1 (Flatten) (None, 1024) 0
+_________________________________________________________________
+dense_1 (Dense) (None, 100) 102500
+_________________________________________________________________
+dense_2 (Dense) (None, 10) 1010
+=================================================================
+
+
+Optimization done with mini-batch gradient descent on the cross entropy.
+"
+['machine-learning']," Title: How can I match numbers with expressions?Body: Let's say I have the number 123.45 and the expression one hundred twenty-three and forty-five cents.
+
+Can I develop AI to identify these two values as a match? If I can, how should I do that?
+"
+"['neural-networks', 'convolutional-neural-networks', 'hyperparameter-optimization']"," Title: What are some ways to quickly evaluate the potential of a given NN architecture?Body: Main question
+
+Is there some way we can leverage general knowledge of how certain hyperparameters affect performance, to very rapidly get some sort of estimate for how good a given architecture could be?
+
+Elaboration
+
+I'm working on a handwritten character recognition problem using CNNs. I want to try out a few different architectures (mostly at random) to iterate towards something which might work. The problem is that one run takes a really long time.
+
+So what's a way to quickly verify if a given architecture is promising? And let me elaborate on what I've tried:
+
+
+- Just try it once. Yeah but maybe I chose some bad hyperparameter combination and actually that architecture was going to be the ground breaker.
+- Do Bayesian optimisation. That's still really slow. From examples and trials, I've seen that it takes quite some time for convergence. And besides, I'm not trying to optimise yet, I just want to check if there's any potential.
+
+"
+"['reinforcement-learning', 'dqn']"," Title: NoisyNet DQN with default parameters not exploringBody: I implemented a DQN algorithm that plays OpenAIs Cartpole environment. The NN architecture consists of 3 normal linear layers that encode the state, and one noisy linear layer, that predicts the Q value based on the encoded state.
+My NoisyLinear layer looks like this (it uses math, torch, torch.nn as nn and torch.nn.functional as F):
+
+class NoisyLinear(nn.Module):
+ def __init__(self, in_features, out_features):
+ super(NoisyLinear, self).__init__()
+ self.in_features = in_features
+ self.out_features = out_features
+ self.sigma_zero = 0.5
+ self.weight_mu = torch.empty(out_features, in_features)
+ self.weight_sigma = torch.empty(out_features, in_features)
+ self.weight_epsilon = torch.empty(out_features, in_features, requires_grad=False)
+ self.bias_mu = torch.empty(out_features)
+ self.bias_sigma = torch.empty(out_features)
+ self.bias_epsilon = torch.empty(out_features, requires_grad=False)
+ self.reset_parameters()
+ self.reset_noise()
+
+ def reset_parameters(self):
+ mu_range = 1 / math.sqrt(self.in_features)
+ self.weight_mu.data.uniform_(-mu_range, mu_range)
+ self.weight_sigma.data.fill_(self.sigma_zero / math.sqrt(self.in_features))
+ self.bias_mu.data.uniform_(-mu_range, mu_range)
+ self.bias_sigma.data.fill_(self.sigma_zero / math.sqrt(self.out_features))
+
+ def _scale_noise(self, size):
+ x = torch.randn(size)
+ return x.sign().mul_(x.abs().sqrt_())
+
+ def reset_noise(self):
+ epsilon_in = self._scale_noise(self.in_features)
+ epsilon_out = self._scale_noise(self.out_features)
+ self.weight_epsilon.copy_(epsilon_out.ger(epsilon_in))
+ self.bias_epsilon.copy_(epsilon_out)
+
+ def forward(self, input):
+ return F.linear(input, self.weight_mu + self.weight_sigma * self.weight_epsilon, self.bias_mu + self.bias_sigma * self.bias_epsilon)
+
+
+However, with the default hyperparameter value from the paper (sigma_0 = 0.5), the agent does not explore at all, and even if I crank it up to sigma_0 = 5, it works way worse than epsilon-greedy.
+(When I use noisy nets I don't use epsilon greedy).
+"
+"['machine-learning', 'deep-learning', 'keras', 'comparison']"," Title: Why are traditional ML models still used over deep neural networks?Body: I'm still on my first steps in the Data Science field. I played with some DL frameworks, like TensorFlow (pure) and Keras (on top) before, and know a little bit of some ""classic machine learning"" algorithms like decision trees, k-nearest neighbors, etc.
+
+For example, image classification problems can be solved with deep learning, but some people also use the SVM.
+
+Why are traditional ML models still used over neural networks, if neural networks seem to be superior to traditional ML models? Keras is rather simple to use, so why don't people just use deep neural networks with Keras? What are the pros and cons of each approach (considering the same problem)?
+"
+"['convolutional-neural-networks', 'keras', 'image-processing', 'convolution', 'filters']"," Title: How can I make the kernels non-learnable and set them manually?Body: I'm a newbie in Convolutional Neural Networks. I have found out that kernels in convolutional layers are usually learned while training.
+Suppose I have a kernel that is very good to extract the features that I want to extract. In that case, I don't want the kernels to be learnable. So, how can I make the kernels non-learnable and set them manually?
+Maybe, in that case, I have to use something different from a CNN.
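+What I imagine (but I am not sure it is the right approach) is something along these lines in Keras, where I set the kernel by hand and mark the layer as not trainable; the kernel values and shapes here are just my own example:
+
+import numpy as np
+import tensorflow as tf
+
+# a hand-crafted 3x3 kernel (e.g. an edge detector), just as an example
+kernel = np.array([[-1, -1, -1],
+                   [-1,  8, -1],
+                   [-1, -1, -1]], dtype=np.float32).reshape(3, 3, 1, 1)
+
+layer = tf.keras.layers.Conv2D(filters=1, kernel_size=3, padding='same',
+                               use_bias=False, trainable=False)
+layer.build(input_shape=(None, 28, 28, 1))   # create the weights
+layer.set_weights([kernel])                  # overwrite them with my kernel
+
+image = tf.random.normal((1, 28, 28, 1))
+features = layer(image)                      # fixed convolution, never updated
+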
+"
+"['deep-learning', 'reinforcement-learning', 'q-learning', 'backpropagation', 'dqn']"," Title: How can a DQN backpropagate its loss?Body: I'm currently trying to take the next step in deep learning. I managed so far to write my own basic feed-forward network in python without any frameworks (just numpy and pandas), so I think I understood the math and intuition behind backpropagation. Now, I'm stuck with deep q-learning. I've tried to get an agent to learn in various environments. But somehow nothing works out. So there has to be something I'm getting wrong. And it seems that I do not understand the critical part right at least that's what I'm thinking.
+
+The screenshot is from this video.
+
+What I'm trying to draw here is my understanding of the very basic process of a simple DQN. Assuming this is right, how is the loss backpropagated?
+Since only the selected $Q(s, a)$ values (5 and 7) are further processed in the loss function, how is the impact from the other neurons calculated so their weights can be adjusted to better predict the real q-values?
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks', 'image-recognition', 'training']"," Title: Training dataset for convolutional neural network classification - will images captured on the ground be useful for training aerial imagery?Body: I am an agronomy graduate student looking to classify crops from weeds using convolutional neural networks (CNNs).
+
+The basic idea that I am wanting to get into involves separating crops from weeds from aerial imagery (either captured by drones or piloted aircraft). The idea of the project that I am proposing involves spending some time driving around to different fields and capturing many images of both crops and weeds. These images will then be used to train a CNN that will classify aerial imagery on the location of crops and weeds. After classifying the imagery, a herbicide application map will be generated for site-specific weed control. This involves the integration of CNN classification and GIS technology.
+
+My question is this: If you have an orthomosaic image generated from a drone, will images captured from a digital camera on the ground be effective for training a CNN that will classify high-resolution aerial imagery?
+
+Being new to CNNs, I just didn't know if I had to use aerial imagery to train a CNN to classify aerial imagery, or if a digital camera will work just fine.
+"
+"['neural-networks', 'deep-learning', 'backpropagation', 'objective-functions', 'gradient-descent']"," Title: What's the function that SGD takes to calculate the gradient?Body: I'm struggling to fully understand the stochastic gradient descent algorithm.
+I know that gradient descent allows you to find the local minimum of a function. What I don't know is what exactly that function IS.
+More specifically, the algorithm should work by initializing the network with random weights. Then, if I'm not mistaken, it forward-propagates $n$ times (where $n$ is the mini-batch size). At this point, I have no idea what function I should be searching for, with hundreds of neurons each having hundreds of parameters.
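+My current (possibly wrong) mental picture is that the function is the mini-batch loss seen as a function of the weights, something like this toy sketch:
+
+import numpy as np
+
+def minibatch_loss(w, X_batch, y_batch):
+    # a linear model just for illustration; in a real network this would be
+    # the full forward pass through all layers
+    predictions = X_batch @ w
+    return np.mean((predictions - y_batch) ** 2)
+
+w = np.zeros(3)
+X_batch = np.random.randn(8, 3)      # n = 8 examples in the mini-batch
+y_batch = np.random.randn(8)
+print(minibatch_loss(w, X_batch, y_batch))
+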
+"
+"['genetic-algorithms', 'evolutionary-algorithms', 'crossover-operators', 'chromosomes']"," Title: Are there clever (fitness-based) crossover operators for binary chromosomes?Body: While studying genetic algorithms, I've come across different crossover operations used for binary chromosomes, such as the 1-point crossover, the uniform crossover, etc. These methods usually don't use any "intelligence".
+I found methods like the fitness-based crossover and Boltzmann crossover, which use fitness value so that the child will be created from better parents with a better probability.
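+For reference, the kind of thing I mean by a fitness-based crossover is roughly this (my own sketch, not taken from a paper):
+
+import random
+
+def fitness_biased_crossover(parent_a, parent_b, fit_a, fit_b):
+    # take each bit from parent_a with probability proportional to its fitness
+    p_a = fit_a / (fit_a + fit_b)
+    return [a if random.random() < p_a else b for a, b in zip(parent_a, parent_b)]
+
+child = fitness_biased_crossover([1, 0, 1, 1], [0, 1, 1, 0], fit_a=3.0, fit_b=1.0)
+print(child)
+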
+Is there any other similar method that uses fitness or any other way for an intelligent crossover for binary chromosomes?
+"
+"['neural-networks', 'deep-learning', 'objective-functions', 'pytorch', 'cross-entropy']"," Title: Why does PyTorch use a different formula for the cross-entropy?Body: In my understanding, the formula to calculate the cross-entropy is
+
+$$
+H(p,q) = - \sum p_i \log(q_i)
+$$
+
+But in PyTorch, nn.CrossEntropyLoss is calculated using this formula:
+
+$$
+loss = -\log\left( \frac{\exp(x[class])}{\sum_j \exp(x_j)} \right)
+$$
+
+which, I think, only addresses the $\log(q_i)$ part of the first formula.
+
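+To make my confusion concrete, this is the small check I am looking at (made-up logits; the class index picks out the one-hot $p$):
+
+import torch
+import torch.nn.functional as F
+
+logits = torch.tensor([[2.0, 0.5, -1.0]])   # raw scores x_j for one sample
+target = torch.tensor([0])                  # true class index
+
+# PyTorch's formula: -log(softmax(x)[class])
+loss_builtin = F.cross_entropy(logits, target)
+
+# textbook formula with a one-hot p: -sum_i p_i * log(q_i)
+p = F.one_hot(target, num_classes=3).float()
+q = F.softmax(logits, dim=1)
+loss_manual = -(p * q.log()).sum(dim=1).mean()
+
+print(loss_builtin.item(), loss_manual.item())
+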
+Why does PyTorch use a different formula for the cross-entropy?
+"
+"['convolutional-neural-networks', 'training', 'computer-vision', 'optimization', 'hyperparameter-optimization']"," Title: When training a CNN, what are the hyperparameters to tune first?Body: I am training a convolutional neural network for object detection. Apart from the learning rate, what are the other hyperparameters that I should tune? And in what order of importance? Besides, I read that doing a grid search for hyperparameters is not the best way to go about training and that random search is better in this case. Is random search really that good?
+"
+"['reinforcement-learning', 'deep-learning', 'backpropagation', 'objective-functions', 'batch-learning']"," Title: What is the difference between batches in deep Q learning and supervised learning?Body: How is the batch loss calculated in both DQNs and simple classifiers? From what I understood, in a classifier, a common method is that you sample a mini-batch, calculate the loss for every example, calculate the average loss over the whole batch, and adjust the weights w.r.t the average loss?
+(Please correct me if I'm wrong)
+But is this the same in DQNs? So, you sample a batch from your memory, say 64 transitions. Do I iterate through each transition and adjust the weights "on the fly", or do I calculate the average loss of the batch and THEN in a big step adjust the weights w.r.t the average batch loss?
+"
+"['convolutional-neural-networks', 'reference-request', 'image-segmentation', 'algorithm-request', 'model-request']"," Title: How many ways are there to perform image segmentation?Body: I'm new in Artificial Intelligence and I want to do image segmentation.
+
+Searching I have found these ways
+
+
+- Digital image processing (I have read it in this book: Digital Image Processing, 4th edition)
+- Convolutional neural networks
+
+
+Is there something else that I can use?
+"
+"['machine-learning', 'logistic-regression', 'text-classification']"," Title: How does the weight update formula for logistic regression work?Body: I am trying to use Logistic Regression to make a spam filter, but I am having trouble understanding the weight update part. I have processed my email dataset, and I have an attribute vector of the top n words that are most likely to be contained within a spam.
+
+From my understanding, during training, I will have to implement an optimization formula after each training example in order to update the weights.
+
+$$
+w_l \leftarrow w_l + \eta \cdot \sum_{i=1}^m [ y^{(i)} - P(c_+ \mid \vec{x}^{(i)} )] \cdot x_l^{(i)}
+$$
+
+How does a formula such as this work? How can it be implemented in Python?
+"
+"['neural-networks', 'deep-learning', 'comparison', 'datasets', 'computational-linguistics']"," Title: How can I compare EEG data with accelerometer data in 1 algorithm?Body: I have frequency EEG data from fall and non-fall events and I am trying to incorporate it with accelerometer data that was collected at the same time. One approach is, of course, to use two separate algorithms and find the threshold for each. Then comparing the threshold of each. In other words, if the accelerometer algorithm predicts a fall (fall detected = 1) and the EEG algorithm detects a fall, based on the power spectrum (fall detected = 1), then the system outputs a ""1"" that a fall was truly detected. This approach uses the idea of a simple AND gate between the two algorithms.
+
+I would like to know how to correctly process the data so that I can feed both types of data into one algorithm, perhaps a CNN. Any advice is really appreciated, even a lead to some literature, articles or information would be great.
+"
+"['reinforcement-learning', 'probability-distribution']"," Title: Why we multiply probabilities with support to obtain Q-values in Distributional C51 algorithm?Body: In 'Deep Reinforcement Learning Hands-On' book and chapter about Distributional C51 algorithm I'm reading, that to obtain Q-values from the distribution I need to calculate the weighted sum of the normalized distribution and atom's values.
+
+Why do I have to multiply that distribution by the support? How does it work and what is happening there?
+"
+"['machine-learning', 'proofs', 'probability', 'computational-learning-theory', 'pac-learning']"," Title: Convert a PAC-learning algorithm into another one which requires no knowledge of the parameterBody: This is part of the exercise 2.13 in the book Foundations of Machine Learning (page 28). You can refer to chapter 2 for the notations.
+
+Consider a family of concept classes $\left\{\mathcal{C}_{s}\right\}_{s}$ where $\mathcal{C}_{s}$ is the set of concepts in $\mathcal{C}$ with size at most $s.$ Suppose we have a PAC-learning algorithm $\mathcal{A}$ that can be used for learning any concept class $\mathcal{C}_{s}$ when $s$ is given. Can we convert $\mathcal{A}$ into a PAC-learning algorithm $\mathcal{B}$ that does not require the knowledge of $s ?$ This is the main objective of this problem.
+To do this, we first introduce a method for testing a hypothesis $h,$ with high probability. Fix $\epsilon>0, \delta>0,$ and $i \geq 1$ and define the sample size $n$ by $n=\frac{32}{\epsilon}\left[i \log 2+\log \frac{2}{\delta}\right].$ Suppose we draw an i.i.d. sample $S$ of size $n$ according to some unknown distribution $\mathcal{D}.$ We will say that a hypothesis $h$ is accepted if it makes at most $3 / 4 \epsilon$ errors on $S$ and that it is rejected otherwise. Thus, $h$ is accepted iff $\widehat{R}(h) \leq 3 / 4 \epsilon$
+(a) Assume that $R(h) \geq \epsilon .$ Use the (multiplicative) Chernoff bound to show that in that case $\mathbb{P}_{S \sim D^{n}}[h \text { is accepted}] \leq \frac{\delta}{2^{i+1}}$
+(b) Assume that $R(h) \leq \epsilon / 2 .$ Use the (multiplicative) Chernoff bounds to show that in that case $\mathbb{P}_{S \sim \mathcal{D}^{n}}[h \text { is rejected }] \leq \frac{\delta}{2^{i+1}}$
+(c) Algorithm $\mathcal{B}$ is defined as follows: we start with $i=1$ and, at each round $i \geq 1,$ we guess the parameter size $s$ to be $\widetilde{s}=\left\lfloor 2^{(i-1) / \log \frac{2}{\delta}}\right\rfloor .$ We draw a sample $S$ of size $n$ (which depends on $i$ ) to test the hypothesis $h_{i}$ returned by $\mathcal{A}$ when it is trained with a sample of size $S_{\mathcal{A}}(\epsilon / 2,1 / 2, \widetilde{s}),$ that is the sample complexity of $\mathcal{A}$ for a required precision $\epsilon / 2,$ confidence $1 / 2,$ and size $\tilde{s}$ (we ignore the size of the representation of each example here). If $h_{i}$ is accepted, the algorithm stops and returns $h_{i},$ otherwise it proceeds to the next iteration. Show that if at iteration $i,$ the estimate $\widetilde{s}$ is larger than or equal to $s,$ then $\mathbb{P}\left[h_{i} \text { is accepted}\right] \geq 3 / 8$
+
+Questions (a) and (b) are easy to prove, but I have trouble with question (c). More specifically, I don't know how to use the condition that $\widetilde{s} \geq s$. Can anyone help?
+"
+"['neural-networks', 'deep-learning', 'activation-functions']"," Title: What activation functions are better for what problems?Body: I’ve been reading about neural network architectures. In certain cases, people say that the sigmoid ""more accurately reflects real-life"" and, in other cases, functions like hard limits reflect ""the brain neural networks more accurately"".
+
+What activation functions are better for what problems?
+"
+"['deep-learning', 'reinforcement-learning', 'open-ai', 'pytorch', 'gym']"," Title: Why is this deep Q agent constantly learning just one action?Body: I'm trying to implement deep q learning in the OpenAI's gym ""Taxi-v3"" environment. But my agent only learns to do one action in every state. What am I doing wrong? Here is the Github repository with the code.
+"
+"['convolutional-neural-networks', 'applications', 'image-segmentation', 'u-net']"," Title: Why everyone is using CNN for image segmentation?Body: I'm a newbie in artificial intelligence.
+
+I have started to research how to do image segmentation and all the papers that I have found are about CNN. Most of them use the same network, U-net, but with little variations: with more or fewer layers, different parameter values, etc.; but with not very different results.
+
+It seems that CNNs are in fashion and everyone uses them. Or there are other reasons that I don't know.
+
+If everyone is getting not very different results, why are they using the same approach instead of trying different ones?
+"
+"['reinforcement-learning', 'q-learning', 'monte-carlo-methods']"," Title: Why Monte Carlo epsilon-soft approach cannot compute $\max Q(s,a)$?Body: I am new to Reinforcement learning and am currently reading up on the estimation of Q $\pi(s, a)$ values using MC epsilon-soft approach and chanced upon this algorithm. The link to the algorithm is found from this website.
+
+https://www.analyticsvidhya.com/blog/2018/11/reinforcement-learning-introduction-monte-carlo-learning-openai-gym/
+
+def monte_carlo_e_soft(env, episodes=100, policy=None, epsilon=0.01):
+
+    if not policy:
+        policy = create_random_policy(env)
+    # Create an empty dictionary to store state action values
+    Q = create_state_action_dictionary(env, policy)
+
+    # Empty dictionary for storing rewards for each state-action pair
+    returns = {} # 3.
+
+    for _ in range(episodes): # Looping through episodes
+        G = 0 # Store cumulative reward in G (initialized at 0)
+        episode = run_game(env=env, policy=policy, display=False) # Store state, action and value respectively
+
+        # for loop through reversed indices of episode array.
+        # The logic behind it being reversed is that the eventual reward would be at the end.
+        # So we have to go back from the last timestep to the first one propagating result from the future.
+
+        # episodes = [[s1,a1,r1], [s2,a2,r2], ... [Sn, an, Rn]]
+        for i in reversed(range(0, len(episode))):
+            s_t, a_t, r_t = episode[i]
+            state_action = (s_t, a_t)
+            G += r_t # Increment total reward by reward on current timestep
+
+            # if state - action pair not found in the preceeding episodes,
+            # then this is the only time the state appears in this episode.
+
+            if not state_action in [(x[0], x[1]) for x in episode[0:i]]: #
+                # if returns dict contains a state action pair from prev episodes,
+                # append the curr reward to this dict
+                if returns.get(state_action):
+                    returns[state_action].append(G)
+                else:
+                    # create new dictionary entry with reward
+                    returns[state_action] = [G]
+
+                # returns is a dictionary that maps (s,a) : [G1,G2, ...]
+                # Once reward is found for this state in current episode,
+                # average the reward.
+                Q[s_t][a_t] = sum(returns[state_action]) / len(returns[state_action]) # Average reward across episodes
+
+                # Finding the action with maximum value.
+
+
+
+                Q_list = list(map(lambda x: x[1], Q[s_t].items()))
+                indices = [i for i, x in enumerate(Q_list) if x == max(Q_list)]
+                max_Q = random.choice(indices)
+
+                A_star = max_Q # 14.
+
+                # Update action probability for s_t in policy
+                for a in policy[s_t].items():
+                    if a[0] == A_star:
+                        policy[s_t][a[0]] = 1 - epsilon + (epsilon / abs(sum(policy[s_t].values())))
+                    else:
+                        policy[s_t][a[0]] = (epsilon / abs(sum(policy[s_t].values())))
+
+    return policy
+
+
+This algorithm computes $Q(s, a)$ for all state-action pairs that the policy visits. If $\pi$ is a random policy, then after running through this algorithm and, for each state, taking $\max_a Q(s,a)$ over all possible actions, why would that not be equal to $Q_{\pi^*}(s, a)$ (the optimal Q function)?
+
+From this website, they claim to have been able to find the optimal policy when running through this algorithm.
+
+I have read up a bit on Q-learning and the update equation is different from MC epsilon-soft. However, I can't seem to understand clearly how these 2 approaches are different.
+"
+"['reinforcement-learning', 'convolutional-neural-networks']"," Title: Reinforcement learning CNN input weaknessBody: I'm trying to train a network to navigate a 48x48 2D grid, and switch pixels from on to off or off to on. The agent receives a small reward if correct, and small punishment if incorrect pixel plotted.
+
+I thought, like the Deepmind ""Playing Atari with Deep Reinforcement Learning"" Paper, I could just use only the pixel input, fed through 2 convolutional layers, to solve this task. The output of this is fed into 512 fully connected layer.
+
+Unfortunately, it barely trains.
+When instead using additional vectors as input containing information about nearby pixels' state around the agent, the agent learns the task quite well (yet often orients the wrong awkwardly).
+
+Each step, the agent moves up, down, left, or right, and either plots a pixel or not.
+The agent is visualized in the environment as a red square with a white center dot (I also tried a single red pixel). On-pixels within the red square are colored purple.
+
+Is there something I can try to make the agent learn visual input better?
+
+The orange line is the training with only visual observations, the grey one contained vector observations about the immediate neighboring pixel state as well.
+
+"
+"['machine-learning', 'proofs', 'probability', 'computational-learning-theory', 'generalization']"," Title: How to Prove This Inequality, Related to Generalization Error (Not Using Rademacher Complexity)?Body: This is an inequality on page 36 of the Foundations of Machine Learning by Mohri, but the author only states it without proof.
+$$
+\mathbb{P}\left[\left|R(h)-\widehat{R}_{S}(h)\right|>\epsilon\right] \leq 4 \Pi_{\mathcal{H}}(2 m) \exp \left(-\frac{m \epsilon^{2}}{8}\right)
+$$
+Here the growth function $\Pi_{\mathcal{H}}: \mathbb{N} \rightarrow \mathbb{N}$ for a hypothesis set $\mathcal{H}$ is defined by:
+$$
+\forall m \in \mathbb{N}, \Pi_{\mathcal{H}}(m)=\max _{\left\{x_{1}, \ldots, x_{m}\right\} \subseteq X}\left|\left\{\left(h\left(x_{1}\right), \ldots, h\left(x_{m}\right)\right): h \in \mathcal{H}\right\}\right|
+$$
+Given a hypothesis h $\in \mathcal{H},$ a target concept $c \in \mathcal{C}$ and an underlying distribution $\mathcal{D},$ the generalization error or risk of $h$ is defined by
+$$
+R(h)=\underset{x \sim D}{\mathbb{P}}[h(x) \neq c(x)]=\underset{x \sim D}{\mathbb{E}}\left[1_{h(x) \neq c(x)}\right]
+$$
+where $1_{\omega}$ is the indicator function of the event $\omega$.
+And the empirical error or empirical risk of $h$ is defined
+$$
+\widehat{R}_{S}(h)=\frac{1}{m} \sum_{i=1}^{m} 1_{h\left(x_{i}\right) \neq c\left(x_{i}\right)}
+$$
+In the book, the author proves another inequality that differs from this one by only a constant using Rademacher complexity, but he says that the stated inequality can be proved without using Rademacher complexity. Does anyone know how to prove it?
+"
+"['machine-learning', 'proofs', 'computational-learning-theory', 'pac-learning']"," Title: A problem about the relation between 1-oracle and 2-oracle PAC modelBody: This problem is about two-oracle variant of the PAC model. Assume that positive and negative examples are now drawn from two separate distributions $\mathcal{D}_{+}$ and $\mathcal{D}_{-} .$ For an accuracy $(1-\epsilon),$ the learning algorithm must find a hypothesis $h$ such that:
+$$
+\underset{x \sim \mathcal{D}_{+}}{\mathbb{P}}[h(x)=0] \leq \epsilon \text { and } \underset{x \sim \mathcal{D}_{-}}{\mathbb{P}}[h(x)=1] \leq \epsilon$$
+
+Thus, the hypothesis must have a small error on both distributions. Let $\mathcal{C}$ be any concept class and $\mathcal{H}$ be any hypothesis space. Let $h_{0}$ and $h_{1}$ represent the identically 0 and identically 1 functions, respectively. Prove that $\mathcal{C}$ is efficiently PAC-learnable using $\mathcal{H}$ in the standard (one-oracle) PAC model if and only if it is efficiently PAC-learnable using $\mathcal{H} \cup\left\{h_{0}, h_{1}\right\}$ in this two-oracle PAC model.
+
+However, I wonder if the problem is correct. In the official solution, when showing that 2-oracle implies 1-oracle, the author returns $h_0$ and $h_1$ when the distribution is too biased towards positive or negative examples. However, in the problem, it is required that only in the 2-oracle case can we return $h_0$ and $h_1$. Therefore, in this too-biased case, it seems that there may not exist a 'good' hypothesis at all.
+
+Is this problem wrong? Or am I making a mistake somewhere?
+"
+"['reinforcement-learning', 'python', 'actor-critic-methods', 'advantage-actor-critic', 'a3c']"," Title: Why I got the same action when testing the A2C?Body: I'm working on an advantage actor-critic (A2C) reinforcement learning model, but when I test the model after I trained for 3500 episodes, I start to get almost the same action for all testing episodes. While if I trained the system for less than 850 episodes, I got different actions. The value of state
is always different, and around 850 episodes, the loss
becomes zero.
+Here are the actor and critic networks:
+ with g.as_default():
+ #==============================actor==============================#
+ actorstate = tf.placeholder(dtype=tf.float32, shape=n_input, name='state')
+ actoraction = tf.placeholder(dtype=tf.int32, name='action')
+ actortarget = tf.placeholder(dtype=tf.float32, name='target')
+
+ hidden_layer1 = tf.layers.dense(inputs=tf.expand_dims(actorstate, 0), units=500, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer())
+ hidden_layer2 = tf.layers.dense(inputs=hidden_layer1, units=250, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer())
+ hidden_layer3 = tf.layers.dense(inputs=hidden_layer2, units=120, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer())
+ output_layer = tf.layers.dense(inputs=hidden_layer3, units=n_output, kernel_initializer=tf.zeros_initializer())
+ action_probs = tf.squeeze(tf.nn.softmax(output_layer))
+ picked_action_prob = tf.gather(action_probs, actoraction)
+
+ actorloss = -tf.log(picked_action_prob) * actortarget
+ # actorloss = tf.reduce_mean(tf.losses.huber_loss(picked_action_prob, actortarget, delta=1.0), name='actorloss')
+
+ actoroptimizer1 = tf.train.AdamOptimizer(learning_rate=var.learning_rate)
+
+ if var.opt == 2:
+ actoroptimizer1 = tf.train.RMSPropOptimizer(learning_rate=var.learning_rate, momentum=0.95,
+ epsilon=0.01)
+ elif var.opt == 0:
+ actoroptimizer1 = tf.train.GradientDescentOptimizer(learning_rate=var.learning_rate)
+
+ actortrain_op = actoroptimizer1.minimize(actorloss)
+
+ init = tf.global_variables_initializer()
+ saver = tf.train.Saver(max_to_keep=var.n)
+
+ p = tf.Graph()
+ with p.as_default():
+ #==============================critic==============================#
+ criticstate = tf.placeholder(dtype=tf.float32, shape=n_input, name='state')
+ critictarget = tf.placeholder(dtype=tf.float32, name='target')
+
+ hidden_layer4 = tf.layers.dense(inputs=tf.expand_dims(criticstate, 0), units=500, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer())
+ hidden_layer5 = tf.layers.dense(inputs=hidden_layer4, units=250, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer())
+ hidden_layer6 = tf.layers.dense(inputs=hidden_layer5, units=120, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer())
+ output_layer2 = tf.layers.dense(inputs=hidden_layer6, units=1, kernel_initializer=tf.zeros_initializer())
+ value_estimate = tf.squeeze(output_layer2)
+
+ criticloss= tf.reduce_mean(tf.losses.huber_loss(output_layer2, critictarget,delta = 0.5), name='criticloss')
+ optimizer2 = tf.train.AdamOptimizer(learning_rate=var.learning_rateMADDPG_c)
+ if var.opt == 2:
+ optimizer2 = tf.train.RMSPropOptimizer(learning_rate=var.learning_rate_c, momentum=0.95,
+ epsilon=0.01)
+ elif var.opt == 0:
+ optimizer2 = tf.train.GradientDescentOptimizer(learning_rate=var.learning_rateMADDPG_c)
+
+ update_step2 = optimizer2.minimize(criticloss)
+
+ init2 = tf.global_variables_initializer()
+ saver2 = tf.train.Saver(max_to_keep=var.n)
+
+
+
+This is the choice of action.
+def take_action(self, state):
+ """Take the action"""
+ action_probs = self.actor.predict(state)
+ action = np.random.choice(np.arange(len(action_probs)), p=action_probs)
+ return action
+
+This is the actor.predict function.
+def predict(self, s):
+ return self._sess.run(self._action_probs, {self._state: s})
+
+Any idea what is causing this?
+Update
+Changing the learning rate, the state, and the reward solved the problem: I reduced the size of the state and also added a switching cost to the reward.
+"
+"['proofs', 'computational-learning-theory', 'vc-dimension']"," Title: How can I show that the VC dimension of the set of all closed balls in $\mathbb{R}^n$ is at most $n+3$?Body: How can I show that the VC dimension of the set of all closed balls in $\mathbb{R}^n$ is at most $n+3$?
+
+For this problem, I only try the case $n=2$ for 1. When $n=2$, consider 4 points $A,B,C,D$ and if one point is inside the triangle formed by the other three, then we cannot find a circle that only excludes this point. If $ABCD$ is convex assume WLOG that $\angle ABC + \angle ADC \geq 180$ then use some geometric argument to show that a circle cannot include $A,C$ and exclude $B,D$.
+
+For the general case I’m thinking of finding $n+1$ points so that a ball should be quite ‘large‘ to include them, and that this ball can not exclude the other 2 points. However, in high-dimensional case I do not know how to use maths language to describe what is ‘large’.
+
+Can anyone give some ideas to this question please?
+"
+"['computer-vision', 'object-detection', 'non-max-suppression']"," Title: How does non-max suppression work when one or multiple bounding boxes are predicted for the same object?Body: My understanding of how non-max suppression works is that it suppresses all overlapping boxes that have a Jaccard overlap smaller than a threshold (e.g. 0.5). The boxes to be considered are on a confident score (may be 0.2 or something). So, if there are boxes that have a score over 0.2 (e.g. the score is 0.3 and overlap is 0.4) the boxes won't be suppressed.
+In this way, one object will be predicted by many boxes, one high score box, and many low confident score boxes, but I found that the model predicts only one box for one object. Can someone enlighten me?
+I currently viewing the ssd from https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Object-Detection
+Here is the code.
+#Finding Jaccap Overlap and sorting scotes
+class_scores, sort_ind = class_scores.sort(dim=0, descending=True)
+class_decoded_locs = class_decoded_locs[sort_ind] # (n_min_score, 4)
+overlap = find_jaccard_overlap(class_decoded_locs, class_decoded_locs)
+suppress = torch.zeros((n_above_min_score), dtype=torch.uint8).to(device)
+
+for box in range(class_decoded_locs.size(0)):
+# If this box is already marked for suppression
+ if suppress[box] == 1:
+ continue
+ suppress = torch.max(suppress, overlap[box] > max_overlap)
+ suppress[box] = 0
+
+"
+['neural-networks']," Title: Is greedy layer-wise pretraining obsolete?Body: I was looking into the use of a greedy layer-wise pretraining to initialize the weights of my network.
+
+Just for the sake of clarity: I'm referring to the use of gradually deeper and deeper autoencoders to teach the network gradually more abstract representations of the input one layer at the time.
+
+However, reading HERE, I read:
+
+
+ Nevertheless, it is likely better performance may be achieved using modern methods such as better activation functions, weight initialization, variants of gradient descent, and regularization methods.
+
+
+and
+
+
+ Today, we now know that greedy layer-wise pretraining is not required to train fully connected deep architectures, but the unsupervised pretraining approach was the first method to succeed.
+
+
+My question is then: if I'm building a network already using ""modern"" techniques, such as ReLU activations, batch normalization, adam optimizers, etc, is the good-ol' greedy layer-wise pretraining useless? Or can it still provide an edge in the initialization of the network?
+"
+"['algorithm', 'optimization', 'problem-solving', 'graph-theory']"," Title: Solving a planning if finding the goal state is part of the problemBody: I having trouble finding some starting points for solving an occupancy problem which seems like a good candidate for ai.
+
+Assume the following situation:
+In a company I have n cars and m employees. Not every employee can drive every car (e.g. a special driving license may be required). A car can only be used by one employee at a specific point in time.
+
+There is a plan which states which employee must be somewhere within some time window (therefore they must use a car, so the car is blocked for that amount of time).
+
+The goal is to find a near optimal occupancy of the cars according to that plan.
+
+This problem is easy to specify, but I'm stumped as to which methods to implement.
+
+As it can be represented by a graph, I think the right way to solve such a problem is to use search techniques, but a problem here is that I don't know the goal state (and there is no efficient way to compute it - that's the task I want the AI to do...). Rather, finding the goal state is in fact part of the problem.
+
+So my question is: what AI techniques could be used to solve such a problem?
+
+Edit: Some clarification:
+
+Assume we have two sets - one of employees (E) and one of cars (C). |C| < |E| is most likely true.
+Each car has an assigned priority which corresponds to the cost of using it (for example, using a Ferrari costs more than using a Dacia, therefore the Dacia has a higher priority (e.g. 1) compared to the Ferrari (e.g. 10)).
+Assume further that having employees who are not using a car in a specific time slice is a bad thing - each of them costs an individual penalty (you want the employees to be at the customer and sell things, etc.).
+The goal is to find the occupation of employees and cars which has a low total cost.
+
+One Example: If you assign an employee to a car at a specific time slice it may turn out that another employee gets no car within that time slice.
+This can be either because
+
+
+- a car is free, but he has no license for it
+- because a car is free, but the cost of using this car would be higher than having the employee stay at the headquarters
+- because no car is free anymore
+
+
+Of course, it could be better in terms of cost to change the assignment and give the employee who got no car in this solution a car, at the price of another employee getting no car, or not all cars being used, or ...
+
+Note: There is no need to find an exactly optimal solution (= the lowest total cost over all possible assignments), as this would require checking all possible assignments in the exponential solution space.
+Instead, finding a more or less good approximation with a near-optimal low total cost is sufficient.
+"
+"['neural-networks', 'reference-request', 'artificial-neuron', 'neural-architecture-search']"," Title: Regional specialization in neural networks (especially for language processing)?Body: What is the status of the research on regional specialization of the artificial neural networks? Biology knows that such specialization exists in the brain and it is very important for the functioning of the brain. My thinking is that specialization can solve the transfer learning/catastrophic forgetting by creating centers of sophisticated skills if such sophistication is necessary. Actually - there is no much alternative to specialization? If specialization exists then there can be small decision centers that routes the request to the actual part and such decision centers can be efficient. But if specialization does not exist then routing happens in the total soup/pudding of neuronal see and such all-to-all routing should be very inefficient. Of course there should be some mixing of specialization vs pudding, because there is always mixing between rationality and emotions, between execution and exploration, but nevertheless - specialization should happen, at least partially.
+
+The problem is that I cannot find any focused article about such specialization, and I cannot find how such specialization could be trained. There is research on hierarchical reinforcement learning - but that is about imposing an external, fixed structure on a set of neural networks, which is not how nature works - nature implements such hierarchy within the neural network and not by imposing rigorous, symbolic structures.
+
+Are there some notions, terms, keywords, research trends, or important articles (and researchers) devoted to such specialization (including the machine learning of such specialization)?
+
+Of course, my topic is very large, but the actual work on it is small or nonexistent, and that is why it is focused.
+
+There is work on convolutional neural networks, but maybe there is another approach for language processing where the parts can be specialized in parsing, understanding, anaphora resolution, translation, etc.? And is convolution the kind of specialization I am seeking?
+
+Maybe the notion of attention is somehow connected with my question. But usually attention is associated with single neurons and not with regions? Maybe there are notions about a hierarchy of attentions - one level of attention values refers to high-level tasks/skills, while another level of attention values refers to subskills, etc.
+"
+"['deep-learning', 'generative-adversarial-networks']"," Title: How does the generator in GAN's work?Body: After reading a lot of articles (for instance, this one - https://developers.google.com/machine-learning/gan/generator), I've been wondering: how does the generator in GAN's work?
+
+What is the input to the generator? What is the meaning behind ""input noise""?
+
+As I've read, the only input that the generator receives is a random noise, which is weird.
+
+If I wanted to create a picture similar to $x$ and put in, as input, a matrix of random numbers (noise), it would take A LOT of training until I got some sort of picture $x^*$ that is similar to the source picture $x$.
+
+The algorithm should receive some type of reference or a basic dataset (for instance, the set of $x$'s) in order to start the generation of the fake image $x^*$.
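+
+For concreteness, here is a minimal sketch (my own, with hypothetical layer sizes) of what I understand a generator to be: it takes only a random noise vector $z$ as input and outputs an image. Is my understanding correct that the real images $x$ never enter this network directly?
+
+import numpy as np
+import tensorflow as tf
+
+latent_dim = 100  # size of the random noise vector z (arbitrary choice)
+
+# A toy generator: noise in, 28x28 image out
+generator = tf.keras.Sequential([
+    tf.keras.layers.Dense(128, activation='relu', input_shape=(latent_dim,)),
+    tf.keras.layers.Dense(28 * 28, activation='tanh'),
+    tf.keras.layers.Reshape((28, 28)),
+])
+
+z = np.random.normal(size=(1, latent_dim))  # the input noise
+fake_image = generator(z)                   # x* produced only from noise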
+"
+"['recurrent-neural-networks', 'long-short-term-memory', 'prediction', 'gated-recurrent-unit']"," Title: RNN models displays upper limit on predictionsBody: I have trained a RNN, GRU, and LSTM on the same dataset, and looking at their respective predictions I have observed, that they all display an upper limit on the value they can predict. I have attached a graph for each of the models, which shows the upper limit quite clearly. Each dot is a prediction, and the orange graph is simply there to illustrate the ground truth (i.e. ground truth on both axis).
+
+
+
+
+My dataset is split in 60% for training, 20% for test, and 20% for validation and then each of the splits are shuffled. The split/shuffle is the same for all three models, so each model uses the exact same split/shuffle of data for its predictions too. The models are quite simple (2 layers, nothing fancy going on). I have used grid search to find the most optimal hyperparameters for each model. Each model is fed 20 consecutive inputs (a vector of features, e.g. coordinates, waiting time, etc) and produces a single number as output which is the expected remaining waiting time.
+I know this setup strongly favours LSTM and GRU over RNN, and the accuracy of the predictions definitively shows this too.
+
+However, my question is why do each model display an upper limit on its predictions? And why does it seem like such a hard limit?
+
+I cannot wrap my head around what the cause of this is, and so I am not able to determine whether it has anything to do with the models used, how they are trained, or if it is related to the data. Any and all help is very much appreciated!
+
+
+
+Hyperparameters for the models are:
+
+RNN: 128 units pr layer, batch size of 512, tanh activation function
+
+GRU: 256 units pr layer, batch size of 512, sigmoid activation function
+
+LSTM: 256 units pr layer, batch size of 256, sigmoid activation function
+
+All models have 2 layers with a dropout in between (with probability rate 0.2), use a learning rate of $10^{-5}$, and are trained over 200 epochs with early stopping with a patience of 10. All models use SGD with a momentum of 0.8 , no nesterov and 0.0 decay. Everything is implemented using Tensorflow 2.0 and Python 3.7. I am happy to share the code used for each model if relevant.
+
+
+
+EDIT 1
+I should point out the graphs are made up of 463.597 individual data points, most of which are placed very near the orange line of each graph. In fact, for each of the three models, of the 463.597 data points, the number of data points within 30 seconds of the orange line is:
+
+RNN: 327.206 data points
+
+LSTM: 346.601 data points
+
+GRU: 336.399 data points
+
+In other words, the upper limit on predictions shown on each graph consists of quite a small number of samples compared to the rest of the graph.
+
+EDIT 2
+In response to Sammy's comment I have added a graph showing the distribution of all predictions in 30 second intervals. The y-axis represents the base 10 logarithm of the number of samples which fall into a given 30 second interval (the x-axis). The first interval ([0;29]) consists of approximately 140.000 predicted values, out of the roughly 460.000 total number of predicted values.
+
+"
+"['neural-networks', 'matlab', 'neural-architecture-search', 'thought-vectors']"," Title: Is it possible to train a neural network with 3 inputs and 12 outputs?Body: The selection of experimental data includes a set of vectors of different dimensions. The input is a 3-dimensional vector, and the output is a 12-dimensional vector. The sample size is 120 pairs of input 3-dimensional and output 12-dimensional vectors.
+
+Is it possible to train such a neural network (in MATLAB)? Which structure of the neural network is best suited for this?
+"
+"['philosophy', 'agi']"," Title: Can we just switch off a malicious artificial intelligence?Body: Let us assume we have a general AI that can improve itself and is at least as intelligent as humans.
+
+It has wide access to technical systems including the internet, and it can communicate with humans.
+
+The AI could become malicious.
+
+Can we just switch off a rogue AI?
+"
+['game-theory']," Title: Understanding the nature of psychological defense system by artificial intelligenceBody: Do you think psychological defense system, for example, repression, regression, reaction, formation, isolation, undoing, projection, introjection, sublimation, etc., could be created by artificial intelligence systems? Can AI also be used to better understand the psychological defense system?
+
+If yes, what tools do we need? Maybe supervised learning algorithms, such as PSO or ANN, are better suited for these levels?
+
+That doesn't seem to be that easy, and I think it needs to have a more general understanding of these algorithms. So I asked here.
+
+On the other hand, what do you think is the appropriate workspace or workspace available for this job?
+
+For example, I think the interactions between robots that use different levels of the defense system by our selection as a game theory round play, could be good and suited to the community which is equal to the psychological connection of this part of us and as a testing environment.
+
+But the robots themselves are dealing with a real problem that makes it even more difficult, so, according to this approach, by selecting a more nonlinear problem to solve we can count the usage of the levels of these defense mechanisms (for example, in a simple problem 30% 1st level, 20% 2nd level, and ...).
+"
+"['neural-networks', 'deep-learning', 'backpropagation']"," Title: How is back-propagation useful in neural networks?Body: I am reading about backpropagation and I wonder why I have to backpropagate.
+
+For example, I would update the network by randomly choosing a weight to change, $w$. I would have $X$ and $y$. Then, I would choose $dw$, a random number from $-0.1$ to $0.1$, for example. Then, I would do two predictions of the neural network and get their losses with the original neural network and one with $w$ changed by $dw$ to get the respective losses $L_{\text{original}}$ and $L_{\text{updated}}$. $L_{\text{updated}} - L_{\text{original}}$ is $dL$. I would update $w$ by $\gamma \frac{d L}{dw}$, where $\gamma$ is the learning rate and $L$ is the loss.
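+
+Here is a minimal sketch (purely illustrative; model.weights, model.num_weights and model.predict are hypothetical stand-ins, and I use a minus sign so that the loss is decreased) of the update I have in mind:
+
+import numpy as np
+
+def perturbation_update(model, X, y, loss_fn, gamma=0.01):
+    # pick one weight at random and a small random perturbation
+    i = np.random.randint(model.num_weights)
+    dw = np.random.uniform(-0.1, 0.1)
+
+    L_original = loss_fn(model.predict(X), y)
+    model.weights[i] += dw                     # perturb the chosen weight
+    L_updated = loss_fn(model.predict(X), y)
+    model.weights[i] -= dw                     # restore it
+
+    dL = L_updated - L_original
+    model.weights[i] -= gamma * dL / dw        # step against the estimated slope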
+
+This does not need gradient backpropagation throughout the system, and it must somehow have a disadvantage, because no one uses it. What is this disadvantage?
+"
+"['neural-networks', 'deep-learning', 'python', 'math']"," Title: Can we get the inverse of the function that a neural network represents?Body: I was wondering if it's possible to get the inverse of a neural network. If we view a NN as a function, can we obtain its inverse?
+
+I tried to build a simple MNIST architecture, with an input of shape (784,) and an output of shape (10,), train it to reach good accuracy, and then invert the predicted value to try and get back the input - but the results were nowhere near what I started with. (I used the pseudo-inverse for the W matrix.)
+
+My NN is basically the following function:
+
+$$
+f(x) = \theta(xW + b), \;\;\;\;\; \theta(z) = \frac{1}{1+e^{-z}}
+$$
+
+I.e.
+
+import numpy as np
+import matplotlib.pyplot as plt
+
+def rev_sigmoid(y):
+    # invert the sigmoid: log(y / (1 - y))
+    return np.log(y/(1-y))
+
+def rev_linear(z, W, b):
+    # undo the affine map z = xW + b using the pseudo-inverse of W
+    return (z - b) @ np.linalg.pinv(W)
+
+# model is the trained network; W and b are taken from its dense layer
+y = model.predict(x_train[0:1])
+z = rev_sigmoid(y)
+x = rev_linear(z, W, b)
+x = x.reshape(28, 28)
+plt.imshow(x)
+
+
+
+
+^ This should have been a 5:
+
+
+
+Is there a reason why it failed? And is it ever possible to get inverse of NN's?
+
+EDIT: it is also worth noting that doing the opposite does yield good results. I.e. starting with the y's (a 1-hot encoding of the digits) and using it to predict the image (an array of 784 bytes) using the same architecture: input (10,) and output (784,) with a sigmoid. This is not exactly equivalent, to an inverse as here you first do the linear transformation and then the non-linear. While in an inverse you would first do (well, undo) the non-linear, and then do (undo) the linear. I.e. the claim that the 784x10 matrix is collapsing too much information seems a bit odd to me, as there does exist a 10x784 matrix that can reproduce enough of that information.
+
+
+"
+"['comparison', 'tensorflow', 'pytorch']"," Title: What are the differences between TensorFlow and PyTorch?Body: What are the differences between TensorFlow and PyTorch, both in terms of performance and functionality?
+"
+"['recurrent-neural-networks', 'feedforward-neural-networks', 'neural-architecture-search', 'thought-vectors']"," Title: How do I determine the best neural network architecture for a problem with 3 inputs and 12 outputs?Body: This post continues the topic in the following post:
+ Is it possible to train a neural network with 3 inputs and 12 outputs?.
+
+I conducted several experiments in MATLAB and selected those neural networks that best approximate the data.
+
+Here is a list of them:
+
+
+- Cascade-forward backpropagation
+- Elman backpropagation
+- Generalized regression
+- Radial basis (exact fit)
+
+
+I did not notice a fundamental difference in quality, except for Elman's backpropagation, which had a higher error than the rest.
+
+How to justify the choice of the structure of the neural network in this case?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'natural-language-processing', 'bert']"," Title: Can Bert be used to extract embedding for large categorical features?Body: I've lot of training data points (i.e in millions) and I've around few features but the issue with that is all the features are categorical data with 1 million+ categories in each.
+
+So, I couldn't use one-hot encoding because it's not efficient, so I went with the other option, which is an embedding of fixed length. I've just used neural nets to compute the embeddings.
+
+My question is: can we use advanced NLP models like BERT to extract embeddings for categorical data from my corpus? Is it possible? I'm asking because I've only heard that BERT is good for sentence embeddings.
+
+Thank you.
+"
+"['machine-learning', 'comparison', 'objective-functions', 'mean-squared-error', 'categorical-crossentropy']"," Title: In which cases is the categorical cross-entropy better than the mean squared error?Body: In my code, I usually use the mean squared error (MSE), but the TensorFlow tutorials always use the categorical cross-entropy (CCE). Is the CCE loss function better than MSE? Or is it better only in certain cases?
+"
+"['neural-networks', 'machine-learning', 'unsupervised-learning', 'clustering']"," Title: How is clustering used in the unsupervised training of a neural network?Body: How is clustering used in the unsupervised training of a neural network? Can you provide an example?
+"
+['real-time']," Title: Searching for powerful AI modules to improve teef glovesBody: I have seen the teef glove of Navid Azodi and Thomas Pryor, like this:
+
+
+and have also seen this post, in which the following has been said about the problems with this kind of work:
+
+
+ Their six-page letter, which Padden passed along to the dean, points
+ out how the SignAloud gloves—and all the sign-language translation
+ gloves invented so far—misconstrue the nature of ASL (and other sign
+ languages) by focusing on what the hands do. Key parts of the grammar
+ of ASL include “raised or lowered eyebrows, a shift in the orientation
+ of the signer’s torso, or a movement of the mouth,” reads the letter.
+ “Even perfectly functioning gloves would not have access to facial
+ expressions.” ASL consists of thousands of signs presented in
+ sophisticated ways that have, so far, confounded reliable machine
+ recognition. One challenge for machines is the complexity of ASL and
+ other sign languages. Signs don’t appear like clearly delineated beads
+ on a string; they bleed into one another in a process that linguists
+ call “coarticulation” (where, for instance, a hand shape in one sign
+ anticipates the shape or location of the following sign; this happens
+ in words in spoken languages, too, where sounds can take on
+ characteristics of adjacent ones). Another problem is the lack of
+ large data sets of people signing that can be used to train
+ machine-learning algorithms.
+
+
+So I would like to know which AI modules you know of that could make Navid's work better, as a first step by adding geometric position analysis to it. I would like to use popular AI building blocks like TensorFlow for this kind of analysis, working fast and online, with modules that are kept up to date by a large community of users.
+
+Update:
+
+I think some virtual-reality position analyzers must already exist, so which one is popular and free to contribute to, with a large community?
+
+thanks for your attention.
+"
+"['neural-networks', 'deep-learning', 'tensorflow', 'javascript']"," Title: Why does the bias need to be a vector in a neural network?Body: I am learning to use tensorflow.js. I am also using the tfvis library to print information about the neural net to the web browser. When I create a create a dense neural net with a layer with 5 neurons and another layer with 2 neurons, each layer has a bias vector of length 5 and 2 respectively. I checked the docs (https://js.tensorflow.org/api/0.6.1/#layers.dense), and it says that there is indeed a bias vector for each dense layer. Isn't a vector redundant? Doesn't each layer only need a single number for the bias? See the code below:
+
+
+
+//Create tensorflow neural net
+this.model = tf.sequential();
+
+this.model.add(tf.layers.dense({units: 5, inputShape: [1]}))
+this.model.add(tf.layers.dense({units: 2}))
+
+const surface = { name: 'Layer Summary', tab: 'Model Inspection'};
+tfvis.show.layer(surface, this.model.getLayer(undefined, 0))
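+
+For reference, this is how I understand what a dense layer with $u$ units computes for an input $x \in \mathbb{R}^n$:
+
+$$y_j = \sum_{i=1}^{n} W_{ij}\, x_i + b_j, \qquad j = 1, \dots, u$$
+
+so the layer with 5 units has $b \in \mathbb{R}^5$ and the layer with 2 units has $b \in \mathbb{R}^2$.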
+
+"
+"['reinforcement-learning', 'markov-decision-process', 'papers', 'monte-carlo-methods']"," Title: Why is this Monte Carlo approach scalable for a growing number of states variables and action variables?Body: I am reading a research paper on the formulation of MDP problems to ICU treatment decision making: Treatment Recommendation in Critical Care: A Scalable and Interpretable Approach in Partially Observable Health States. The paper applies a Monte Carlo approach to approximate the value function. Below is a screenshot of the excerpt that I came across.
+
+
+
+The last sentence of the excerpt reads ""The approach is scalable for growing number of states variables and action variables"".
+
+What does it mean when the author says that the Monte Carlo approach is scalable for a growing number of states variables and action variables? Wouldn't the amount of data needed to approximate the value function increase with the higher dimensionality of states? Or does the Monte Carlo approach scale better in time complexity as compared to traditional Q-learning methods?
+"
+"['machine-learning', 'facial-recognition']"," Title: Lego minifigure facial recognition: where to start?Body: I'm interested in starting a project that will identify the face of a Lego mini figure from a digital photo. My goal is to eventually map the expression of a person's face to the Lego mini figure.
+
+I don't have any experience working with image recognition technology (my technical experience is mainly in web technology), and I am looking for recommended platforms or resources that I could get started with.
+
+Most helpful would be recommendations for image recognition technologies (Python would be great!) that I could start to experiment with.
+
+NOTE: I'm aware of SparkAR as a library designed to for Instagram camera effects specifically, and even though I'm not interested in Instagram, I wonder if there are comparable libraries/studios/products for working with image recognition development.
+"
+"['convolutional-neural-networks', 'computer-vision', 'papers', 'support-vector-machine']"," Title: Scoring feature vector with Support Vector MachineBody: I am reading the R-CNN paper by Ross Girshick1 et al. (link) and I fail to understand how they do the inference. This is described in the section 2.2.Test-time Detection in the paper. I quote:
+
+
+ At test time, we run selective search on the test image to extract around 2000 region proposals (we use selective search’s “fast mode” in all experiments). We warp each proposal and forward propagate it through the CNN in order to read off features from the desired layer. Then, for each class, we score each extracted feature vector using the SVM trained for that class.
+
+
+I do not understand how a Support Vector Machine (SVM) can score a feature vector since SVM does not tell you class probability, it only tells you if an object belongs to a class or not. How is this possible?
+
+It seems that the detection flow is: get the image, run it through the CNN and get a feature vector, score this feature vector, and run Non-Maximum Suppression (NMS). But for running NMS we need the feature vector scored, and again, SVMs do not score predictions, right?
+
+Actually, when represented in the same paper, the SVM does not provide a score as you can see in the next image (taken from the same paper).
+
+
+
+So, how does this make sense?
+"
+"['convolutional-neural-networks', 'computer-vision']"," Title: Which deep learning models are suitable for image-to-image mapping?Body: I am working on a problem in which I need to train a neural network to map one or more input images to one or more output images (1 channel for image). Below I report some examples of input&output. In this case I report 1 input and 1 output image, but may need to pass to more inputs and outputs, maybe by encoding this in channels. However, the images are all of this kind, maybe rotated, tralated or changed a bit in shape. (fyi, they are fields defined by fluid dynamics simulations)
+
+I was thinking about CNN, but the standard architecture used for image classification (convolutional layers + fully connected layers) seems not to be the best choice. Instead, I tried using the U-net architecture, composed of compression+decompression convolutional layers. This works quite fine, but maybe there is some other architecture that could be more suited to my problem.
+
+Any suggestion would be appreciated!
+
+
+
+"
+"['neural-networks', 'training', 'genetic-algorithms', 'evolutionary-algorithms', 'neuroevolution']"," Title: What are evolutionary algorithms for topology and weights evolving of ANN (TWEANN) other than NEAT?Body: I wonder, if there are other than NEAT approaches to evolving architectures and weights of artificial neural networks?
+
+To be more specific: I am looking for projects/frameworks/libraries that use evolutionary/genetic algorithms to simultaneously evolve both the topology and the weights of ANNs, other than the NEAT approach. By 'other' I mean similar to NEAT but not based entirely on NEAT. I hope to find different approaches to the same problem.
+"
+"['neural-networks', 'datasets', 'data-preprocessing', 'computational-linguistics']"," Title: EEG and Accelerometer Neural NetworkBody: I have frequency EEG data from fall and non-fall events and I am trying to incorporate it with accelerometer data that was collected at the same time. One approach is, of course, to use two separate algorithms and find the threshold for each. Then comparing the threshold of each. In other words, if the accelerometer algorithm predicts a fall (fall detected = 1) and the EEG algorithm detects a fall, based on the power spectrum (fall detected = 1), then the system outputs a ""1"" that a fall was truly detected. This approach uses the idea of a simple AND gate between the two algorithms.
+
+I would like to know how to correctly process the data so that I can feed both types of data into a CNN. Any advice is really appreciated, even a lead to some literature, articles or information would be great.
+"
+"['reinforcement-learning', 'comparison', 'policies', 'off-policy-methods', 'importance-sampling']"," Title: Can the importance sampling estimator have a non-stationary behaviour policy even if the target policy is stationary?Body: The inverse propensity score (IPS) estimator, which is used for off-policy evaluation in a contextual bandit problem, is well explained in the paper Doubly Robust Policy Evaluation and Optimization.
+
+The old policy $\mu$, or the behavior policy, is okay to be non-stationary in the IPS estimator even if the new policy $\nu$, or the target policy, should be stationary.
+
+Is this true for the importance sampling (IS) estimator, which seems to be a variant of IPS, for off-policy evaluation in a reinforcement learning problem?
+
+IS estimator is explained in this paper Doubly Robust Off-policy Value Evaluation for Reinforcement Learning.
+
+The target policy should be stationary, but can the old policy be non-stationary in the IS estimator?
+"
+"['reinforcement-learning', 'q-learning', 'sarsa', 'epsilon-greedy-policy', 'softmax-policy']"," Title: What is the difference between the $\epsilon$-greedy and softmax policies?Body: Could someone explain to me which is the key difference between the $\epsilon$-greedy policy and the softmax policy? In particular, in the contest of SARSA and Q-Learning algorithms. I understood the main difference between these two algorithms, but I didn't understand all the combinations between algorithm and policy
+
+- SARSA + $\epsilon$-greedy
+- SARSA + Softmax
+- Q-Learning + $\epsilon$-greedy
+- Q-Learning + Softmax
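+
+To make the question concrete, this is how I currently picture the two action-selection rules (a rough NumPy sketch of my own, assuming a tabular q of shape (n_actions,)):
+
+import numpy as np
+
+def epsilon_greedy(q, epsilon):
+    # with probability epsilon explore uniformly, otherwise exploit
+    if np.random.rand() < epsilon:
+        return np.random.randint(len(q))
+    return int(np.argmax(q))
+
+def softmax_policy(q, temperature):
+    # Boltzmann distribution over actions: higher-valued actions are more likely
+    prefs = np.exp((q - np.max(q)) / temperature)
+    probs = prefs / prefs.sum()
+    return int(np.random.choice(len(q), p=probs))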
+
+"
+"['reinforcement-learning', 'comparison', 'monte-carlo-methods', 'temporal-difference-methods', 'td-lambda']"," Title: What is the intuition behind TD($\lambda$)?Body: I'd like to better understand temporal-difference learning. In particular, I'm wondering if it is prudent to think about TD($\lambda$) as a type of ""truncated"" Monte Carlo learning?
+"
+"['classification', 'autoencoders']"," Title: What class of problem is this?Body: If I have a lot of input output pairs as training data
+
+<float Xi, float Yi>
+
+and I have a parametrized approximation function (I know the function algorithm, but not the values of the many many parameters it contains) which shall approximate the process by which the original data pairs were generated. The function takes two input values:
+
+// c is a precomputed classifier for x and can have values from 0 to 255, so there can be up to 256 different classes
+y = f(float x, int c)
+
+
+the hidden parameters of the function are some big lookup tables (a lot of free parameters, but still much fewer than the amount of data points in the training data)
+
+Now I want to fit all the hidden parameters that f contains AND compute for each Xi a ci, such that for the fitted function the error over all i of Yi - f(Xi, ci) is minimized.
+
+So, using some algorithm, I want to fit the parameters of f and also classify the inputs Xi so that f(Xi, ci) approximates Yi.
+
+What is this kind of problem called, and what kind of algorithm is used to solve it?
+
+I assume it's possible to initialize all hidden parameters as well as all ci with random values and then somehow use back propagation of the error to iteratively find parameters and ci such that the function works well.
+
+What I don't know is whether this is a well known class of problem and I just don't know the name of it, so I'm asking for pointers.
+
+Or maybe in other words:
+I have a function with a certain layout (for performance reasons) which I want to use to approximate and interpolate my training data, and I want to tune the parameters of this function such that it approximates the original data well. Since the data points fall into some 'categories', I want to pre-classify the data points to make it easier for the function to do its job. What kind of algorithm do I use to find the function's parameters and to pre-classify the input?
+"
+"['reinforcement-learning', 'deep-rl', 'dqn', 'objective-functions', 'generalization']"," Title: How is the DQN able to generalise the learning to unseen states with such a loss function?Body: I am trying to understand how deep Q learning (DQN) works. To my current understanding, each $Q(s, a)$ functions is estimated to be a function of a feature vector of its state $\phi$(s) and the weight of the network $\theta$.
+The loss function to minimise is $||\delta_{t+1}||^2$ where $\delta_{t+1}$ is shown below. The loss function is from the website talking about function approximation. Even though it is not explicitly deep Q learning, the loss function to minimise is similar.
+$$\delta_{t+1}=R_{t+1}+\max_{a\in A} \boldsymbol{\theta}^{\top} \Phi\left(s_{t+1}, a\right)-\boldsymbol{\theta}^{\top} \Phi\left(s_{t}, a\right)$$
+Source: https://towardsdatascience.com/function-approximation-in-reinforcement-learning-85a4864d566.
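+
+As a sanity check of my reading, here is a rough sketch (my own, not from either source) of the update I think this loss corresponds to, for a linear approximator $Q(s,a) = \theta^{\top} \Phi(s,a)$; the formula above has no discount factor, so I add a $\gamma$ only for generality:
+
+import numpy as np
+
+def semi_gradient_update(theta, phi, s, a, r, s_next, actions, alpha=0.01, gamma=0.99):
+    # TD error: delta = r + gamma * max_a' theta.phi(s', a') - theta.phi(s, a)
+    q_next = max(theta @ phi(s_next, a2) for a2 in actions)
+    delta = r + gamma * q_next - theta @ phi(s, a)
+    # gradient step that reduces the squared TD error w.r.t. the online estimate
+    return theta + alpha * delta * phi(s, a)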
+Intuitively, I am not able to understand why the loss function is defined as such. Once the network converges to a $\theta$ using gradient descent, does that mean that the $Q_{max}(s,a)$ is found?
+In essence, I am not able to grasp intuitively how the neural network is able to generalise the learning to unseen states.
+The algorithm I am looking at to help me understand the deep Q networks is below.
+
+Source: https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf
+"
+"['deep-learning', 'backpropagation', 'optimization', 'activation-functions', 'relu']"," Title: In deep learning, is it possible to use discontinuous activation functions?Body: In deep learning, is it possible to use discontinuous activation functions (e.g. one with jump discontinuity)?
+(My guess: for example, ReLU is non-differentiable at a single point, but it still has a well-defined derivative. If an activation function has a jump discontinuity, then its derivative is supposed to have a delta function at that point. However, the backpropagation process is incapable of considering that delta function, so the convex optimization process will have some problem?)
+"
+['deep-learning']," Title: Getting started with creating a general AI based on textual and then image based data?Body: I have a pool of knowledge that I want to mine for information and allow an AI to deduce likely conclusions from this information.
+
+My goal is to give the AI a set of textual data that is rated on a scale of 0 to 100 ranging from false (0) to unequivocally true (100). Based on ongoing learning I want to be able to ask it about it's data and to make relational conclusions as to not simply whether things are true or false, but to extrapolate likelihoods, conclusions and so forth... or to simply tell me it can't understand something which would then trigger me to give it more information and to train it with additional material - even if it's my own limited answers.
+
+Ultimately I'll deal with image data as well, but that's a bit down the road.
+
+I'm new to the area of neural nets and deep learning and so I'm hoping someone could point me in the right direction in the way of terminology to search for / research as well as where perhaps I should start.
+
+I wouldn't mind working in C predominantly if possible, but other languages (especially Ruby) is fine.
+
+The field is moving so fast and there's so much research now that seems to trump information available from just a couple years ago now and so I'm hoping to jump into information that takes advantage of more general learning algorithms so that this can be as robust possible while taking care of current trends.
+
+Where do I go from here?
+"
+"['object-detection', 'facial-recognition', 'clustering']"," Title: Finding unique faces in a videoBody: I am trying to find unique (distinct) faces in multiple videos files. What is the best way to do that?
+"
+"['regularization', 'sigmoid']"," Title: Do L2 regularization and input normalization depend on sigmoid activation functions?Body: Following the online courses with Andrew Ng, he talks about L2 regularization (a.k.a. weight decay) and input normalization. Now, the argument is that L2 regularization make the weights smaller, which makes the sigmoid activation functions (and thus the whole network) ""more"" linear.
+
+Question 1: can this rather handwavey explanation be formalized? Can we define ""more linear"" in a mathematical sense, and demonstrate that smaller weights in fact achieve this?
+
+Question 2: in contrast to sigmoid, ReLU activations have a single point where it is nonlinear - i.e. the breaking point at x=0. No scaling of the input changes the shape (i.e. derivative) of this function, the only effect is reducing the magnitude of positive outputs. Does the argument still hold? Why?
+
+Input normalization is given as a good practice, but it seems to me that the network should just compensate for varying magnitude between components of the input by scaling the weights appropriately. The only exception I can think of is again under L2 regularization, which would penalize large weights (associated with small inputs).
+
+Question 3: Is this correct, and is input scaling thus mostly important with L2 regularization, or is there some reason why the network would fail to adjust the weights without scaling?
+"
+"['reinforcement-learning', 'comparison', 'q-learning', 'sarsa']"," Title: Do we need an explicit policy to sample $A'$ in order to compute the target in SARSA or Q-learning?Body: I would much appreciate if you could point me in the right direction regarding this question about targets for SARSA and Q-learning (notation: $S$ is the current state, $A$ is the current action, $R$ is the reward, $S'$ is the next state and $A'$ is the action chosen from that next state).
+
+Do we need an explicit policy for the Q-learning target to sample $A'$ from? And for SARSA?
+
+I guess this is true for Q-learning, since we need to get the max Q-value, which determines which action $A'$ we'll use for the update. For SARSA, we update $Q(S, A)$ depending on which action was actually taken (no need for the max). Please correct me if I'm wrong.
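+
+For reference, these are the two update rules as I understand them:
+
+$$\text{Q-learning:}\quad Q(S,A) \leftarrow Q(S,A) + \alpha\left[R + \gamma \max_{a} Q(S',a) - Q(S,A)\right]$$
+
+$$\text{SARSA:}\quad Q(S,A) \leftarrow Q(S,A) + \alpha\left[R + \gamma\, Q(S',A') - Q(S,A)\right]$$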
+"
+"['machine-learning', 'optical-character-recognition']"," Title: OCR - Text recognition from ImageBody: I plan to develop OCR application using tensorflow to get the value from the image. Text in the image may handwritting or text printed.
+
+
+
+From the image, my OCR application should be able to get the values below:
+1. ChequeDate
+2. Payee Name
+3. Legal Amount
+4. Courtesy Amount
+
+How can the OCR application get the values I want, highlighted in red? Does it need to crop them out as smaller images for the OCR?
+"
+"['neural-networks', 'machine-learning', 'object-detection']"," Title: What is the expected value of an IOU in this case?Body: I have a detection problem. An object with a probability of 0.5 is in a box with coordinates ((0,0), (2, 2)) and with a probability of 0.5 a box with coordinates ((2,0), (4,2)).
+
+What is the maximum expected value of the intersection over union (IOU) with the object that a constant detection algorithm producing one box can reach? I can't understand how I should compute the expected value here. P.S. The IOU is 0 when the intersection is empty.
+"
+"['reinforcement-learning', 'q-learning']"," Title: Q-learning problem wrong policyBody: I am coding out a simple 4x4 grid game whereby the agent starts at a particular state and his aim is to reach the terminal state. The agent is supposed to avoid traps along the way and reach the end goal with high reward. The below picture illustrates the environment.
+
+
+
+The code that I am running is shown below:
+
+# 4x4 Grid
+import random
+
+
+gamma = 1
+grid = [[-0.1 for i in range(4)] for j in range(4)]
+episodes = 500000
+epsilon = 1 # start greedy
+decay = 0.999
+min_epsilon = 0.1
+alpha = 0.65
+# set terminal states
+grid[1][0] = -1
+grid[2][2] = -1
+grid[0][3] = 1
+
+# Set up Q tables
+# 0: up, 1: down, 2: left, 3: right
+# Q = {(0,0): {0: z, 1: x, 2: c, 3: v}, ... }}
+Q = {}
+
+# 4 rows
+for row in range(4):
+ # 4 columns
+ for column in range(4):
+ Q[(row,column)] = {}
+ # 4 actions
+ for k in range(4):
+ Q[(row,column)][k] = 0
+
+
+def isTerminal(state):
+ if state == (1,0) or state == (2,2) or state == (0,3):
+ return True
+ return False
+
+def get_next_state_reward(state, action):
+
+ row = state[0]
+ col = state[1]
+ #print(row, col)
+ if action == 0: # up
+ # out of grid
+ if (row - 1) < 0:
+ return (state, grid[row][col])
+
+ if action == 1: # down
+
+ # out of grid
+ if (row + 1) > len(grid) - 1:
+ return (state, grid[row][col])
+
+ if action == 2: # left
+
+ if (col - 1 < 0):
+ return (state, grid[row][col])
+
+ if action == 3: # right
+
+ if (col + 1 > len(grid[row]) - 1):
+ return (state, grid[row][col])
+
+ if action == 0:
+
+ row -= 1
+ return ((row,col), grid[row][col])
+
+ if action == 1:
+
+ row += 1
+ return ((row,col), grid[row][col])
+
+ if action == 2:
+
+ col -= 1
+ return ((row,col), grid[row][col])
+
+ if action == 3:
+
+ col += 1
+ return ((row,col), grid[row][col])
+
+state_visit = {}
+for row in range(4):
+ # 4 columns
+ for column in range(4):
+ state_visit[(row,column)] = 0
+
+for episode in range(episodes):
+
+ # let agent start at start state
+ state = (3,0)
+
+ while not isTerminal(state):
+
+ r = random.uniform(0,1)
+
+ if r < epsilon:
+ action = random.randint(0,3)
+ else:
+ action = max(Q[state], key=lambda key: Q[state][key])
+
+
+ next_state, reward = get_next_state_reward(state, action)
+
+ TD_error = reward + gamma * max(Q[next_state]) - Q[state][action]
+
+ Q[state][action] = Q[state][action] + alpha * TD_error
+
+ state = next_state
+
+ state_visit[next_state] += 1
+ epsilon = max(min_epsilon, epsilon*decay)
+
+ #input()
+
+
+policy = {}
+# get optimal policies for each state
+for states in Q:
+ policy[states] = max(Q[states], key=lambda key: Q[states][key])
+
+
+When I finish running the algorithm however, I am unable to achieve the optimal policy no matter how many tweaks I do to the number of episodes, or epsilon decay, or the alpha value.
+
+Particularly, the Q values that I attain for state (2,0), (0,1) and (0,0) have Q values that are equal values for three directions except for the last direction which brings the agent to the terminal state.
+
+For example, these are the Q-values that I get for state (0,0), (0,1) and (2,0) respectively.
+
+(0,0): {0: 2.0, 1: 2.9, 2: 2.9, 3: 2.9}
+
+(0,1): {0: 2.9, 1: 2.0, 2: 2.9, 3: 2.9}
+
+(2,0): {0: 2.9, 1: 2.9, 2: 2.9, 3: 2.9}
+
+I am not sure why the Q-values for the 3 directions should be the same because each extra step that the agent takes incurs a negative reward.
+
+Would anyone be able to help ? Thank you so much !
+"
+"['computer-vision', 'emotional-intelligence', 'facial-recognition', 'voice-recognition', 'audio-processing']"," Title: State of the art in voice recognitionBody: In the media there's lot of talk about face recognition, mainly with respect to identifying faces (= assigning to persons). Less attention is paid to the recognition of facially expressed emotions but there's a lot of research done into this direction, too. Even less attention is paid to the recognition of facially expressed emotions of a single person (which could be much more detailed) - even though this would be a very interesting topic.
+
+What holds for faces similarly holds for voices. With the help of artificial intelligence, voices can be identified (= assigned to persons) and emotions as expressed by voice can be recognized - on a general and on an individual's level.
+
+My general question goes into another direction: As huge progress has been made in visual scene analysis (""what is seen in this scene?"") there has probably been some progress made in auditory scene analysis: ""What is heard in this scene?""
+
+My specific question is: Are there test cases and results where some AI software was given some auditory data with a lot of ""voices"" and could tell how many voices there were?
+
+As a rather easy specific test case consider some Gregorian chant sung in perfect unison. (See also here.)
+"
+"['reinforcement-learning', 'math', 'actor-critic-methods']"," Title: How does the update rule for the one-step actor-critic method work?Body: Can you please elucidate the math behind the update rule for the critic? I've seen in other places that just a squared distance of $R + \hat{v}(S', w) - \hat{v}(S,w)$ is used, but Sutton suggests an update rule (and the math behind) that is beyond my understanding?
+
+
+
+Also, why do we need $I$?
+"
+"['neural-networks', 'machine-learning', 'ai-design']"," Title: How to track performance of your model during experimenting?Body: During weeks and months of your work, many things may change, for example :
+
+
+- You may modify the loss function
+- Your training or validation datasets may change
+- You modify data augmentation
+
+
+Which tools or processes do you use to track the modifications you have made, and how they affected the model?
+"
+"['training', 'terminology', 'meta-learning']"," Title: What does ""episodic training"" mean?Body: I'm reading the book Hands-On Meta Learning with Python, and in Prototypical networks said:
+
+So, we use episodic training—for each episode, we randomly sample a
+few data points from each class in our dataset and we call that a
+support set and train the network using only the support set, instead
+of the whole dataset.
+
+I think, but I'm not sure, I have understood what "episodic training" is, but what is the meaning of "episodic" or "episode" here?
+I'm sorry, I'm not a native English speaker and I can't work out the meaning just by searching in a dictionary. I know what an episode is in general, but I don't know what an episode means in this context of training.
+"
+"['convolutional-neural-networks', 'representation-learning', 'principal-component-analysis', 'dimensionality-reduction']"," Title: What are examples of approaches to dimensionality reduction of feature vectors?Body: Given a pre-trained CNN model, I extract feature vector of images in reference and query dataset with several thousands of elements.
+
+I would like to apply some dimensionality reduction techniques to reduce the feature vector dimension, to speed up the cosine similarity/Euclidean distance matrix calculation.
+
+I have already come up with the following two methods in my literature review:
+
+
+- Principal Component Analysis (PCA) + Whitening
+- Locality-Sensitive Hashing (LSH)
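+
+For context, this is roughly how I would apply the first option with scikit-learn (assuming a feature matrix named features of shape (n_images, n_dims) already extracted from the CNN):
+
+import numpy as np
+from sklearn.decomposition import PCA
+
+# features: (n_images, n_dims) array of CNN descriptors (assumed precomputed)
+pca = PCA(n_components=256, whiten=True)   # 256 is an arbitrary target dimension
+reduced = pca.fit_transform(features)
+
+# L2-normalise so that cosine similarity becomes a simple dot product
+reduced /= np.linalg.norm(reduced, axis=1, keepdims=True)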
+
+
+Are there more approaches to perform dimensionality reduction of feature vectors? If so, what are the pros/cons of each perhaps?
+"
+"['machine-learning', 'natural-language-processing', 'terminology']"," Title: What is ""Computational Linguistics""?Body: It's not clear to me whether or not someone whose work aims to improve an NLP system may be called a ""Computational Linguist"" even when she/he doesn't modify the algorithm directly by coding.
+
+Let's consider the following activities:
+
+Annotation for Machine Learning:
+analysis of Morphology, Syntax, POS tagging
+Annotation, analysis, and annotation of entities (NER) and collocations; supporting content categorization; chunking; word sense disambiguation.
+Recording of technical issues of the annotation tool to improve its reliability.
+Recording of linguistic and logical particular rules adopted by the research team who develops the NLP algorithm to improve consistency between annotation and criteria previously adopted to train the NLP.
+
+May be these activities considered ""Computational Linguistics""? If not, which is their professional category and how should they be included in the resume in a word which synthesizes them?
+"
+"['computer-vision', 'image-processing']"," Title: What's the best solution to find distance of an object to camera?Body: I have an object with known size and I want to know that's the distance from the camera and camera angle. Is there any way to do this? I have a single source (camera).
+"
+['convolutional-neural-networks']," Title: What is generally the best way to combine tabular image metadata with image data in a convolutional neural network?Body: I have 26 features from tabular data (clinical variables from patients like age gender etc) that I want to add to my cnn which is using xray images from patients. I am using the inception network. Right now I am just concatenating these features to the final fully connected layer just before the softmax activation.
+
+My concern though is that since this final layer also contains 2048 image features that these 26 features will not contribute much in the classification.
+
+Empirically, these 26 features should contribute more than the 2048 image features. A random forest trained on only 26 features did better than the cnn using only the images. I'd like to get a model that does better than either of them separately so I thought I should add these metadata features to the cnn.
+
+Are my concerns warranted? What is the best approach?
+"
+"['autoencoders', 'learning-rate']"," Title: Autoencoder network for feature selection not convergingBody: I am training an undercomplete autoencoder network for feature selection. I am using one hidden layer in the encoder and decoder networks each. The ELU activation function is used for each layer. For optimization, I am using the ADAM optimizer. Moreover, to improve convergence, I have also introduced learning rate decay. The model shows good convergence initially, but later starts to generate losses (12 digits) in the same range of values for several epochs, and is not converging. How to solve this issue?
+"
+"['reinforcement-learning', 'comparison', 'terminology', 'norvig-russell']"," Title: What is the difference between the concepts ""known environment"" and ""deterministic environment""?Body: According to the book ""Artificial Intelligence: A Modern Approach"", ""In a known environment, the outcomes (or outcome probabilities if the environment is stochastic) for all actions are given."", and in a deterministic environment, ""the next state of the environment is completely determined by the current state and the action executed by the agent..."".
+
+What's the difference between the two terms? Don't they mean the same thing?
+"
+"['reinforcement-learning', 'planning']"," Title: Can I solve this assignment problem with RL or AI planning, and if yes how?Body: I have a list of positive nonzero integers $T=[v_1,\dots,v_𝑛|v_𝑖\in Z^{\neq}]$ which sum up to $V=\sum_i v_i$. Typically, the length of T (number of integers) goes from 100 to 1000. The list is not sorted, i.e., there's no guarantee that $v_i\leq v_{i+1}\ \forall\ i.$ Each integer can be be assigned either to a set $S_1$ or a set $S_2$: equivalently, it can be labeled as $l_1$ or $l_2$. The objective is to label $v_1,\dots,v_n$ so that
+
+$$\sum_{v_i\in S_1} v_i = 0.3V \tag{1}$$
+$$\sum_{v_i\in S_2} v_i = 0.7V \tag{2}$$
+
+i.e., to minimize the cost
+$$L=\left(\sum_{v_i\in S_1} v_i - 0.3V\right)^2$$
+
+Up to this point, the problem would be fairly trivial and it definitely wouldn't require AI. However, there are a couple additional details: the integers must be labeled in the sequence they appear ($v_1$ first, then $v_2$, etc.), and each time we ""switch"" label, we incur a cost. In other words, if the agent assigns $v_1$ to $S_1$, then $v_2$ to $S_2$, then $v_3$ to $S_1$, etc., it should be penalized for that.
+
+I was thinking of formalizing this by counting the number $m$ of switches (of course $m\geq 1$) and adding it to $L$, i.e. by modifying the cost function to
+
+$$L'=\left(\sum_{v_i\in S_1} v_i - 0.3V\right)^2+\beta m^2$$
+
+where $\beta$ is a positive parameter, which I could use to weight the two objectives.
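+
+To make the objective completely explicit, here is a small sketch (plain Python, purely illustrative; I count a change of label between consecutive items as one switch) of how I would evaluate $L'$ for a given labelling:
+
+def cost(values, labels, beta):
+    # values: the positive integers v_1..v_n
+    # labels: a list of 1s and 2s assigning each v_i to S_1 or S_2
+    V = sum(values)
+    s1 = sum(v for v, l in zip(values, labels) if l == 1)
+    switches = sum(1 for a, b in zip(labels, labels[1:]) if a != b)
+    return (s1 - 0.3 * V) ** 2 + beta * switches ** 2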
+
+Would it make sense to cast this as a Reinforcement Learning problem? Or is it more appropriately an AI planning problem? Can you suggest an efficient algorithm to solve it?
+"
+"['deep-learning', 'convolutional-neural-networks', 'ensemble-learning']"," Title: How can we combine different deep learning models?Body: I know that ensembles can be made by combining sklearn models with a VotingClassifier, but is it possible to combine different deep learning models? Will I have to make something similar to Voting Classifiers?
+"
+"['deep-learning', 'recurrent-neural-networks', 'long-short-term-memory']"," Title: How to process data in a data stream for a LSTMBody: How can a data stream for a RNN (LSTM) be handled, when the stream contains data sets belonging to different prediction classes?
+
+Training phase:
+I have trained an LSTM to predict a class out of a sequence of letters. For the training phase I used a fixed data array where the beginning and the ending of a sequence belonged to a class. Of course there is a little noise, but the whole data set was labeled with a class. E.g.:
+
+Seq. is Class
+ABC is One
+CBA is Two
+ABD is Three
+
+
+The network predicts well when it sees a static data array.
+
+Problem in Prediction Phase:
+During prediction, the LSTM will receive a data stream in which there is a sequence of arrays but no delimiter. The data sets cannot be distinguished or separated. I am not sure how it would perform when I have a data stream for different classes like
+ ABCABCCBAABD.
+
+I guess in speech recognition one must face similar problems.
+"
+"['neural-networks', 'tensorflow']"," Title: TensorFlow fit() and GradientTape - number of epochs are differentBody: if I define the architecture of a neural network using only dense fully connected layers and train them such that there are two models which are trained using model.fit() and GradientTape. Both the methods of training use the same model architecture.
+
+The randomly initialized weights are shared between the two models and all other parameters such as optimizer, loss function and metrics are also the same.
+
+Dimensions of training and testing sets are:
+X_train = (960, 4), y_train = (960,), X_test = (412, 4) & y_test = (412,)
+
+import pandas as pd, numpy as np
+import matplotlib.pyplot as plt
+import seaborn as sns
+from sklearn.model_selection import train_test_split
+from sklearn.preprocessing import StandardScaler, MinMaxScaler, RobustScaler
+from sklearn.preprocessing import LabelEncoder
+from sklearn.metrics import accuracy_score, precision_score, recall_score, confusion_matrix
+
+import tensorflow as tf
+from tensorflow.keras.models import Sequential
+from tensorflow.keras.layers import Dense
+import tensorflow_model_optimization as tfmot
+from tensorflow_model_optimization.sparsity import keras as sparsity
+
+
+def create_nn():
+ """"""
+ Function to create a
+ Neural Network
+ """"""
+ model = Sequential()
+
+ model.add(
+ Dense(
+ units = 4, activation = 'relu',
+ kernel_initializer = tf.keras.initializers.GlorotNormal(),
+ input_shape = (4,)
+ )
+ )
+
+ model.add(
+ Dense(
+ units = 3, activation = 'relu',
+ kernel_initializer = tf.keras.initializers.GlorotNormal()
+ )
+ )
+
+ model.add(
+ Dense(
+ units = 1, activation = 'sigmoid'
+ )
+ )
+
+ """"""
+ # Compile the defined NN model above-
+ model.compile(
+ loss = 'binary_crossentropy', # loss = 'categorical_crossentropy'
+ optimizer = tf.keras.optimizers.Adam(lr = 0.001),
+ metrics=['accuracy']
+ )
+ """"""
+
+ return model
+
+
+# Instantiate a model-
+model = create_nn()
+
+# Save weights for fair comparison-
+model.save_weights(""Random_Weights.h5"", overwrite=True)
+
+
+# Create datasets to be used for GradientTape-
+# Use tf.data to batch and shuffle the dataset
+train_ds = tf.data.Dataset.from_tensor_slices(
+ (X_train, y_train)).shuffle(100).batch(32)
+
+test_ds = tf.data.Dataset.from_tensor_slices(
+ (X_test, y_test)).shuffle(100).batch(32)
+
+# Define early stopping-
+callback = tf.keras.callbacks.EarlyStopping(
+ monitor='val_loss', patience=3,
+ min_delta = 0.001, mode = 'min' )
+
+# Train defined model-
+history_orig = model.fit(
+ x = X_train, y = y_train,
+ batch_size = 32, epochs = 500,
+ validation_data = (X_test, y_test),
+ callbacks = [callback],
+ verbose = 1 )
+
+
+# Instantiate a model-
+model_gt = create_nn()
+
+# Restore random weights as used by the previous model for fair comparison-
+model_gt.load_weights(""Random_Weights.h5"")
+
+
+# Choose an optimizer and loss function for training-
+loss_fn = tf.keras.losses.BinaryCrossentropy()
+optimizer = tf.keras.optimizers.Adam(lr = 0.001)
+
+# Select metrics to measure the error & accuracy of model.
+# These metrics accumulate the values over epochs and then
+# print the overall result-
+train_loss = tf.keras.metrics.Mean(name = 'train_loss')
+train_accuracy = tf.keras.metrics.BinaryAccuracy(name = 'train_accuracy')
+
+test_loss = tf.keras.metrics.Mean(name = 'test_loss')
+test_accuracy = tf.keras.metrics.BinaryAccuracy(name = 'train_accuracy')
+
+
+# Use tf.GradientTape to train the model-
+
+@tf.function
+def train_step(data, labels):
+ """"""
+ Function to perform one step of Gradient
+ Descent optimization
+ """"""
+
+ with tf.GradientTape() as tape:
+ predictions = model_gt(data)
+ loss = loss_fn(labels, predictions)
+
+ gradients = tape.gradient(loss, model_gt.trainable_variables)
+ optimizer.apply_gradients(zip(gradients, model_gt.trainable_variables))
+
+ train_loss(loss)
+ train_accuracy(labels, predictions)
+
+
+@tf.function
+def test_step(data, labels):
+ """"""
+ Function to test model performance
+ on testing dataset
+ """"""
+
+ predictions = model_gt(data)
+ t_loss = loss_fn(labels, predictions)
+
+ test_loss(t_loss)
+ test_accuracy(labels, predictions)
+
+
+EPOCHS = 100
+
+# User input-
+minimum_delta = 0.001
+patience = 3
+
+patience_val = np.zeros(patience)
+
+
+# Dictionary to hold scalar metrics-
+history = {}
+
+history['accuracy'] = np.zeros(EPOCHS)
+history['val_accuracy'] = np.zeros(EPOCHS)
+history['loss'] = np.zeros(EPOCHS)
+history['val_loss'] = np.zeros(EPOCHS)
+
+for epoch in range(EPOCHS):
+ # Reset the metrics at the start of the next epoch
+ train_loss.reset_states()
+ train_accuracy.reset_states()
+ test_loss.reset_states()
+ test_accuracy.reset_states()
+
+ for x, y in train_ds:
+ train_step(x, y)
+
+ for x_t, y_t in test_ds:
+ test_step(x_t, y_t)
+
+ template = 'Epoch {0}, Loss: {1:.4f}, Accuracy: {2:.4f}, Test Loss: {3:.4f}, Test Accuracy: {4:4f}'
+
+ history['accuracy'][epoch] = train_accuracy.result()
+ history['loss'][epoch] = train_loss.result()
+ history['val_loss'][epoch] = test_loss.result()
+ history['val_accuracy'][epoch] = test_accuracy.result()
+
+ print(template.format(epoch + 1,
+ train_loss.result(), train_accuracy.result()*100,
+ test_loss.result(), test_accuracy.result()*100))
+
+ if epoch > 2:
+ # Computes absolute differences between 3 consecutive loss values-
+ differences = np.abs(np.diff(history['val_loss'][epoch - 3:epoch], n = 1))
+
+ # Checks whether the absolute differences is greater than 'minimum_delta'-
+ check = differences > minimum_delta
+
+ # print('differences: {0}'.format(differences))
+
+ # Count unique element with it's counts-
+ # elem, count = np.unique(check, return_counts=True)
+ # print('\nelem = {0}, count = {1}'.format(elem, count))
+
+ if np.all(check == False):
+ # if elem.all() == False and count == 2:
+ print(""\n\nEarlyStopping Evoked! Stopping training\n\n"")
+ break
+
+
+In ""model.fit()"" method, it takes around 82 epochs, while GradientTape method takes 52 epochs.
+
+Why is there this discrepancy in the number of epochs?
+
+Thanks!
+"
+['reinforcement-learning']," Title: Is the temperature equal to epsilon in Reinforcement Learning?Body: This is a piece of code from my homework.
+
+# action policy: implements epsilon greedy and softmax
+def select_action(self, state, epsilon):
+ qval = self.qtable[state]
+ prob = []
+ if (self.softmax):
+ # use Softmax distribution
+ prob = sp.softmax(qval / epsilon)
+ #print(prob)
+ else:
+ # assign equal value to all actions
+ prob = np.ones(self.actions) * epsilon / (self.actions -1)
+ # the best action is taken with probability 1 - epsilon
+ prob[np.argmax(qval)] = 1 - epsilon
+ return np.random.choice(range(0, self.actions), p = prob)
+
+
+This is a method to select the best action according to the two policies, I think. My question is: why is the epsilon parameter used as the temperature in the softmax computation? Is it really the same thing? Are they different? I think they should be two different variables. Should the temperature be a fixed value over time? I ask because, when I use the epsilon-greedy policy, my epsilon decreases over time.
+"
+"['machine-learning', 'probability', 'supervised-learning', 'statistical-ai', 'bayesian-deep-learning']"," Title: How can supervised learning be viewed as a conditional probability of the labels given the inputs?Body: In the literature and textbooks, one often sees supervised learning expressed as a conditional probability, e.g.,
+$$\rho(\vec{y}|\vec{x},\vec{\theta})$$
+where $\vec{\theta}$ denotes a learned set of network parameters, $\vec{x}$ is an arbitrary input, and $\vec{y}$ is an arbitrary output. If we assume we have already learned $\vec{\theta}$, then, in words, $\rho(\vec{y}|\vec{x},\vec{\theta})$ is the probability that the network will output an arbitrary $\vec{y}$ given an arbitrary input $\vec{x}$.
+I am having a hard time reconciling how, after learning $\vec{\theta}$, there is still a probabilistic aspect to it. Post training, a network is, in general, a deterministic function, not a probability. For any specific input $\vec{x}$, a trained network will always produce the same output.
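+
+To make this concrete, here is a tiny sketch of my own (a toy linear classifier with a softmax head, not any particular network) of the reading in which the output is interpreted as the vector of conditional probabilities $\rho(\vec{y}|\vec{x},\vec{\theta})$; the forward pass itself is deterministic:
+
+import numpy as np
+
+def softmax(z):
+    e = np.exp(z - z.max())
+    return e / e.sum()
+
+theta = np.array([[0.2, -1.0, 0.5], [1.5, 0.3, -0.7]])  # fixed, already-learned parameters
+x = np.array([0.5, -1.2, 3.0])                           # an arbitrary input
+
+p = softmax(theta @ x)    # read as rho(y | x, theta): one probability per label
+print(p, p.sum())         # the same x always yields the same distribution
+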
+Any insight would be appreciated.
+"
+"['machine-learning', 'recurrent-neural-networks', 'time-series', 'forecasting']"," Title: How to exploit translational symmetry for extrapolation in video generation using machine learningBody: I'll try to rephrase my problem in the context of video processing. Imagine that initial frame of video has some translational symmetry. The frame evolves according to an update rule.
+
+I generate a time series for how an edge, say right up edge, of the frame evolves. I generate another time series for how a larger edge, including the smaller right up edge, evolves. Since there is translational symmetry, I should be able to find how the smaller edge is related to the larger edge. The final goal is to use the obtained correlation to extrapolate to larger edges. I want to find the correlation between these two multivariate time series using machine learning (ML) methods.
+
+I want to know
+
+1 - which one of ML methods can be used in general for this task?
+
+2 - if I use neural networks, the input and output shapes would be (values at time steps, number of variables). For the input it makes sense, but how can I define the output layer (for example, for LSTM in tensorflow)?
+"
+"['machine-learning', 'math', 'notation']"," Title: What is the difference between the notations $\|x\|_1, \|x\|_2$ and $|x|$?Body: What is the difference between the notations $\|x\|_1, \|x\|_2$ and $|x|$? I think $|x|$ is the magnitude of $x$.
+"
+"['reinforcement-learning', 'open-ai', 'chess', 'gym', 'go']"," Title: How powerful is OpenAI's Gym and Universe in board games area?Body: I'm a big fan of computer board games and would like to make Python chess/go/shogi/mancala programs. Having heard of reinforcement learning, I decided to look at OpenAI Gym.
+
+But first of all, I would like to know, is it possible using OpenAI Gym/Universe to create a chess bot that will be nearly as strong as Stockfish and create a go bot that will play as good as AlphaGo?
+
+Is it worth learning OpenAI?
+"
+"['neural-networks', 'classification']"," Title: Neural nets not learning mnist datasetBody: I tried training a 2 hidden layer network using the mnist dataset, but I am not getting any results. I have tried tuning the learning rate(tried 0.1 and 0.0001) and the number of epochs(tried 10 and 50). I even changed the size of hidden layer from 10 to 250. First i had initialized the weights between 0 and 1 and was getting the same classification for all test samples but added (-) sign to 50% of them(chose the figure of 50% by myself) and that problem was solved. Now I cant figure out why it is not working.
+
+import numpy as np
+import pandas as pd
+import matplotlib.pyplot as plt
+import math
+from sklearn.preprocessing import StandardScaler
+
+scaler = StandardScaler()
+
+def to_array(img):
+ img = np.asarray(img)
+ return img
+
+'''def standardize(gray):
+ st_gray = (gray-np.amin(gray))/(np.amax(gray)-np.amin(gray))
+ return st_gray
+'''
+def activ_func(x):
+ for i in range(x.shape[0]):
+ '''x[i][0]=(1-np.e**(-2*x[i][0]))/(1+np.e**(-2*x[i][0]))'''
+ x[i][0] = 1/(1+np.e**(-x[i][0]))
+ return x
+
+def deriv_activ_func(x):
+ for i in range(x.shape[0]):
+ '''x[i][0] = 1-math.pow(x[i][0],2)'''
+ x[i][0] = (x[i][0])*(1-x[i][0])
+ return x
+
+def cost(out_layer, label, ind):
+ cost = (out_layer[ind]-label)**2
+ return cost
+
+def update(x, grad, r):
+ for i in range(x.shape[0]):
+ x[i][0] = x[i][0]+r*grad[i][0]
+ return x
+
+path = ""mnist/mnist_train.csv""
+gray = pd.read_csv(path)
+labels = gray['label']
+gray = gray.drop(['label'], axis=1)
+gray = to_array(gray)
+labels = to_array(labels)
+st_gray = np.empty(shape=(gray.shape[1],1))
+
+def rand_sign(w):
+ n = np.random.randint(2,size=w.shape[0]*w.shape[1]).reshape(w.shape[0],w.shape[1])
+ for i in range(w.shape[0]):
+ for j in range(w.shape[1]):
+ if(n[i][j]==1):
+ w[i][j]=(-1)*w[i][j]
+ return w
+
+def initialize():
+ in_layer = np.empty(shape=(st_gray.shape[0],1))
+ out_layer = np.unique(labels).reshape(-1,1)
+ w1 = rand_sign(np.random.rand(250,in_layer.shape[0]))
+ b1 = rand_sign(np.random.rand(250,1))
+ l1 = np.empty(shape=(250,1))
+ w2 = rand_sign(np.random.rand(250,l1.shape[0]))
+ b2 = rand_sign(np.random.rand(250,1))
+ l2 = np.empty(shape=(250,1))
+ w3 = rand_sign(np.random.rand(out_layer.shape[0],l2.shape[0]))
+ b3 = rand_sign(np.random.rand(out_layer.shape[0],1))
+ l3 = np.empty_like(out_layer)
+ return l1,l2,l3,w1,w2,w3,b1,b2,b3,in_layer,out_layer
+
+def feed_forward(l1,l2,l3,w1,w2,w3,b1,b2,b3,in_layer,i):
+ st_gray = scaler.fit_transform(gray[i][:].reshape(-1,1))
+ in_layer = st_gray
+ l1 = np.dot(w1,in_layer)+b1
+ l1 = activ_func(l1)
+ l2 = np.dot(w2,l1)+b2
+ l2 = activ_func(l2)
+ l3 = np.dot(w3,l2)+b3
+ l3 = activ_func(l3)
+ return l1,l2,l3,w1,w2,w3,b1,b2,b3,in_layer
+
+def one_hot(out_layer,label):
+ for j in range(out_layer.shape[0]):
+ if(out_layer[j][0]==label):
+ out_layer[j][0] = 1
+ else:
+ out_layer[j][0] = 0
+ return out_layer
+
+def back_prop(l1,l2,l3,w1,w2,w3,b1,b2,b3,in_layer,out_layer,lr):
+ error = out_layer-l3
+ grad = np.dot(error*deriv_activ_func(l3),l2.T)
+ w3 = update(w3, grad, lr)
+ grad = error*deriv_activ_func(l3)
+ b3 = update(b3, grad, lr)
+ grad = np.dot(w3.T,error*deriv_activ_func(l3))
+ error = grad
+ grad = error*deriv_activ_func(l2)
+ w2 = update(w2, grad, lr)
+ grad = error*deriv_activ_func(l2)
+ b2 = update(b2, grad, lr)
+ grad = np.dot(w2.T,error*deriv_activ_func(l2))
+ error = grad
+ grad = error*deriv_activ_func(l1)
+ w1 = update(w1, grad, lr)
+ grad = error*deriv_activ_func(l1)
+ b1 = update(b1, grad, lr)
+ return l1,l2,l3,w1,w2,w3,b1,b2,b3,in_layer
+
+def predict(l3):
+ out = np.amax(l3)
+ count = 0
+ for j in range(l3.shape[0]):
+ count=count+1
+ if(l3[j]==out):
+ break
+ return count
+
+def trainer():
+ l1,l2,l3,w1,w2,w3,b1,b2,b3,in_layer,out_layer = initialize()
+ for epochs in range(50):
+ for i in range(gray.shape[0]):
+ out_layer = np.unique(labels).reshape(-1,1)
+ l1,l2,l3,w1,w2,w3,b1,b2,b3,in_layer = feed_forward(l1,l2,l3,w1,w2,w3,b1,b2,b3,in_layer,i)
+ out_layer = one_hot(out_layer,labels[i])
+ l1,l2,l3,w1,w2,w3,b1,b2,b3,in_layer = back_prop(l1,l2,l3,w1,w2,w3,b1,b2,b3,in_layer,out_layer,0.0001)
+ print(""End of epoch :"",epochs+1)
+ return l1,l2,l3,w1,w2,w3,b1,b2,b3,in_layer,out_layer
+
+l1,l2,l3,w1,w2,w3,b1,b2,b3,in_layer,out_layer = trainer()
+
+path = ""mnist/mnist_train.csv""
+gray = pd.read_csv(path)
+labels = gray['label']
+gray = gray.drop(['label'], axis=1)
+gray = to_array(gray)
+labels = to_array(labels)
+st_gray = np.empty(shape=(gray.shape[1],1))
+
+for i in range(10):
+ st_gray = scaler.fit_transform(gray[i][:].reshape(-1,1))
+ in_layer = st_gray
+ l1 = np.dot(w1,in_layer)+b1
+ l1 = activ_func(l1)
+ l2 = np.dot(w2,l1)+b2
+ l2 = activ_func(l2)
+ l3 = np.dot(w3,l2)+b3
+ l3 = activ_func(l3)
+ count = predict(l3)
+ print(""Expected: "",labels[i],"" Predicted: "",count)
+
+
+"
+"['reinforcement-learning', 'open-ai', 'rewards', 'gym']"," Title: Simulating successful trajectories in Montezuma's Revenge turns out to be unsuccessfulBody: I have written code in OpenAI's gym to simulate a random playing in Montezuma's Revenge where the agent randomly samples actions from the action space and tries to play the game. A success for me is defined as the case when the agent is atleast able to successfully retrieve the key (Gets a reward of 100). And such cases I dump in a pickle file. I got 44 successful cases when I kept it to run for a day or so. Here is the code I use to generate the training set :
+
+import numpy
+import gym
+import cma
+import random
+import time
+import pickle as pkl
+
+env = gym.make('MontezumaRevenge-ram-v0')
+
+observation = env.reset()
+
+#print(observation)
+#print(env.action_space.sample())
+
+obs_dict = []
+action_dict = []
+success_ctr = 0
+
+for i in range(0, 1000000):
+ print('Reward for episode',i+1)
+ done = False
+ rew = 0
+ action_list = []
+ obs_list = []
+ while not done:
+ action = env.action_space.sample()
+ observation, reward, done, _ = env.step(action)
+ action_list.append(action)
+ obs_list.append(observation)
+
+ rew += reward
+ env.render()
+ time.sleep(0.01)
+ if done:
+ env.reset()
+ if rew > 0:
+ success_ctr += 1
+ print(action_list)
+ action_dict.append(action_list)
+ obs_dict.append(obs_list)
+ pkl.dump(obs_dict, open(""obslist.pkl"", ""wb""))
+ pkl.dump(action_dict, open(""action.pkl"", ""wb""))
+
+ print(rew)
+ time.sleep(1)
+
+try:
+ print(obs_dict.shape)
+except:
+ pass
+
+print(""Took key:"", success_ctr)
+
+
+I loaded the successful cases from my generated pickle file and replayed the agent using those exact same action sequences. But the agent never receives a reward of 100. I don't understand why this is happening. A little searching online suggested it could be because of noise in the game, so I added a sleep time before running each episode. It still doesn't work. Can someone please explain why this is happening, and suggest a way I could go about generating the training set?
+"
+"['prediction', 'data-science', 'regression', 'time-series']"," Title: Predicting a day's dataBody: I have a dataset containing timestamp and temperature. For each day, I have 1440 values viz., I have data for every minute of that day(60minutes * 24hrs = 1440).
+
+The Dataset looks like this:
+
+
+
+As an initial step, I used day 1 data to predict day 2 data. I have tried AR, ARIMA and SARIMAX models, but I didn't get any positive results. I think this is multivariate, since the time and the temperature values change with respect to the date. I need guidance on choosing an ML model that will suit my dataset and that will be able to predict the next day / next month.
+"
+"['convolutional-neural-networks', 'classification', 'image-recognition', 'image-segmentation']"," Title: What make a CNN suitable for image classification or semantic segmentation?Body: I've just started with CNN and there is something that I haven't understood yet:
+
+How do you ""ask"" a network: ""classify me these images"" or ""do semantic segmentation""?
+
+I think it must be something in the architecture, or in the loss function, or something similar that makes the network classify its input or do semantic segmentation.
+
+I suppose its output will be different on classification and on semantic segmentation.
+
+Maybe the question could be rewritten to:
+
+What do I have to do to use a CNN for classification or for semantic segmentation?
+"
+"['machine-learning', 'math', 'objective-functions', 'notation']"," Title: How to understand the average l2 loss?Body: In the snippet below, the highlighted part is the average norm, but since $1/|p_i|$ is outside the summation, it is very confusing to understand.
+
+
+- is $|p_i|$ l2-norm(as per wolfram) or l1-norm or absolute value as per wiki.
+- Should the $i$ inside summation be considered for $1/|p_i|$, which is outside the summation?
+
+
+
+"
+"['comparison', 'optimization', 'applications', 'meta-heuristics']"," Title: What are advantages of using meta-heuristic algorithms on optimization problems?Body: What are the advantages and disadvantages of using meta-heuristic algorithms on optimization problems? Simply, why do we use meta-heuristic algorithms, like PSO, over traditional mathematical techniques, such as linear, non-linear and dynamic programming?
+
+I actually have a good understanding of meta-heuristic algorithms and I know how they work. For example, one advantage of this kind of algorithms is that they can find an optimal solution in a reasonable time.
+
+However, my lack of knowledge about other methods and techniques brought this question to my mind.
+"
+"['neural-networks', 'prediction', 'probability']"," Title: Predicting probabilities of events using neural networksBody: I've got a few thousands of sequences like
+
+1.23, 2.15, 3.19, 4.30, 5.24, 6.22
+
+
+where the numbers denote times on which an event happened (there's just a single kind of events). The events are sort of periodical and the period is known to be exactly one, however, the exact times varies. Sometimes, events are missing and there are other irregularities, but let's ignore them for now.
+
+I'd like to train an neural network for predicting the probability that there'll be a next even in a given time interval. The problem is that I have no probabilities for the training.
+
+All I have are the above sequences. If I had five sequences like
+
+1.23, 2.15, 3.19, 4.30, 5.24, 6.05
+1.23, 2.15, 3.19, 4.30, 5.24, 6.83
+1.23, 2.15, 3.19, 4.30, 5.24, 6.27
+1.23, 2.15, 3.19, 4.30, 5.24, 6.22
+1.23, 2.15, 3.19, 4.30, 5.24, 6.17
+
+
+then I could say that the probability of an event in the interval [6.10, 6.30] is 60% and use this value for learning. However, all my sequences are different. I could try to group them somehow so that I can define something like a probability, but this sounds way more complicated than what I'm trying to achieve.
+
+Instead, I could try to use the sequence
+
+1.23, 2.15, 3.19, 4.30, 5.24, 6.22
+
+
+to learn that after the prefix 1.23, 2.15, 3.19, 4.30, 5.24, there will be an event in the interval [6.10, 6.30] for sure (value to learn equal to one); if there was 6.05 instead of 6.22, the value to learn would be zero. A learned network would produce the average value (let's say 0.60).
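+
+A minimal sketch of that framing (my own illustration, with made-up helper names): each sequence yields one binary training target saying whether its last event falls inside the queried interval:
+
+import numpy as np
+
+def make_example(seq, lo, hi):
+    prefix = np.array(seq[:-1])                    # events observed so far
+    target = 1.0 if lo <= seq[-1] <= hi else 0.0   # 1 if the next event fell in [lo, hi]
+    return prefix, target
+
+x, y = make_example([1.23, 2.15, 3.19, 4.30, 5.24, 6.22], 6.10, 6.30)
+print(x, y)   # target 1.0 here; it would be 0.0 if the last event were 6.05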
+
+However, the error would never converge to zero, so there'd be no quality criterion and probably a big chance of overtraining leading to non-sense results.
+
+Is there a way to handle this?
+"
+['fuzzy-logic']," Title: How can I formulate a fuzzy inference system to approximate the tangent function?Body: Consider the function $f(x)=\tan(2x)$. How can I determine a fuzzy system that approximates $f(x)$? How to choose membership functions and how to determine fuzzy rules? Any help would be appreciated.
+"
+"['ai-design', 'game-ai']"," Title: Best AI Approach for 2D to-down space shooterBody: I am building a 2d top-down space game, which involves several objects, such as asteroids, drones, spaceships, space litter and power-ups.
+It follows the rules of space gravity, with speed and acceleration.
+The idea is that a player controls his own spaceship, can fire bullets, and is allowed to spawn 3 support drones that will attack the enemy. Its goal is to deal the most damage to the opponent spaceship in 3 minutes.
+
+It involves certain dynamics, such as ""Destroying an asteroid spawns a power-up"", ""touching an asteroid deals damage to your spaceship"" etc.
+
+What would be the best approach to define an AI agent for it?
+
+I was thinking of Reinforcement learning, but maybe the game is too complex and I wouldn't have the computational power for it?
+"
+"['deep-learning', 'convolutional-neural-networks', 'python', 'training', 'deep-neural-networks']"," Title: How many layers exists in my neural network?Body: I have a neural network model defined as below. How many layers exist there? Not sure which ones to count when we are asked about the number.
+
+def create_model():
+ channels = 3
+ model = Sequential()
+ model.add(Conv2D(32, kernel_size = (5, 5), activation='relu', input_shape=(IMAGE_SIZE, IMAGE_SIZE, channels)))
+ model.add(MaxPooling2D(pool_size=(2,2)))
+ model.add(BatchNormalization())
+ model.add(Conv2D(64, kernel_size=(3,3), activation='relu'))
+ model.add(MaxPooling2D(pool_size=(2,2)))
+ model.add(BatchNormalization())
+ model.add(Conv2D(128, kernel_size=(3,3), activation='relu'))
+ model.add(MaxPooling2D(pool_size=(2,2)))
+ model.add(BatchNormalization())
+ model.add(Conv2D(256, kernel_size=(3,3), activation='relu'))
+ model.add(MaxPooling2D(pool_size=(2,2)))
+ model.add(BatchNormalization())
+
+ model.add(Conv2D(64, kernel_size=(3,3), activation='relu'))
+ model.add(MaxPooling2D(pool_size=(2,2)))
+ model.add(BatchNormalization())
+ model.add(Dropout(0.2))
+ model.add(Flatten())
+ model.add(Dense(256, activation='relu'))
+ model.add(Dropout(0.2))
+ model.add(Dense(128, activation='relu'))
+ model.add(Dense(2, activation = 'softmax'))
+
+ return model
+
+"
+"['convolutional-neural-networks', 'weights']"," Title: Param count in last layer high, how can I decrease?Body: Not sure where to put this... I am trying to create a convolutional architecture for a DQN in keras, and I want to know why my param count is so high for my last layer compared to the rest of the network. I've tried slowly decreasing the dimensions of the layers above it, but it performs quite poorly. I want to know if there's anything I can do to decrease the param count of that last layer, besides the above.
+
+Code:
+
+#Import statements.
+import random
+import numpy as np
+import tensorflow as tf
+import tensorflow.keras.layers as L
+from collections import deque
+import layers as mL
+import tensorflow.keras.optimizers as O
+import optimizers as mO
+import tensorflow.keras.backend as K
+
+
+#Conv function.
+def conv(x, units, kernel, stride, noise=False, padding='valid'):
+ y = L.Conv2D(units, kernel, stride, activation=mish, padding=padding)(x)
+ if noise:
+ y = mL.PGaussian()(y)
+ return y
+
+#Network
+ x_input = L.Input(shape=self.state)
+ x_goal = L.Input(shape=self.state)
+ x = L.Concatenate(-1)([x_input, x_goal])
+ x_list = []
+ for i in range(2):
+ x = conv(x, 4, (7,7), 1)
+ for i in range(2):
+ x = conv(x, 8, (5,5), 2)
+ for i in range(10):
+ x = conv(x, 6, (3,3), 1, noise=True)
+ x = L.Conv2D(1, (3,3), 1)(x)
+ x_shape = K.int_shape(x)
+ x = L.Reshape((x_shape[1], x_shape[2]))(x)
+ x = L.Flatten()(x)
+ crit = L.Dense(1, trainable=False)(x)
+ critic = tf.keras.models.Model([x_input, x_goal], crit)
+ act1 = L.Dense(self.action, trainable=False)(x)
+ act2 = L.Dense(self.action2, trainable=False)(x)
+ act1 = L.Softmax()(act1)
+ act2 = L.Softmax()(act2)
+ actor = tf.keras.models.Model([x_input, x_goal], [act1, act2])
+ actor.compile(loss=mish_loss, optimizer='adam')
+ actor.summary()
+
+
+
+actor.summary():
+
+________________________________________________________________
+Layer (type) Output Shape Param # Connected to
+==================================================================================================
+input_2 (InputLayer) [(None, 300, 300, 3) 0
+__________________________________________________________________________________________________
+input_3 (InputLayer) [(None, 300, 300, 3) 0
+__________________________________________________________________________________________________
+concatenate (Concatenate) (None, 300, 300, 6) 0 input_2[0][0]
+ input_3[0][0]
+__________________________________________________________________________________________________
+conv2d_52 (Conv2D) (None, 294, 294, 4) 1180 concatenate[0][0]
+__________________________________________________________________________________________________
+conv2d_53 (Conv2D) (None, 288, 288, 4) 788 conv2d_52[0][0]
+__________________________________________________________________________________________________
+conv2d_54 (Conv2D) (None, 142, 142, 8) 808 conv2d_53[0][0]
+__________________________________________________________________________________________________
+conv2d_55 (Conv2D) (None, 69, 69, 8) 1608 conv2d_54[0][0]
+__________________________________________________________________________________________________
+conv2d_56 (Conv2D) (None, 67, 67, 6) 438 conv2d_55[0][0]
+__________________________________________________________________________________________________
+p_gaussian (PGaussian) (None, 67, 67, 6) 1 conv2d_56[0][0]
+__________________________________________________________________________________________________
+conv2d_57 (Conv2D) (None, 65, 65, 6) 330 p_gaussian[0][0]
+__________________________________________________________________________________________________
+p_gaussian_1 (PGaussian) (None, 65, 65, 6) 1 conv2d_57[0][0]
+__________________________________________________________________________________________________
+conv2d_58 (Conv2D) (None, 63, 63, 6) 330 p_gaussian_1[0][0]
+__________________________________________________________________________________________________
+p_gaussian_2 (PGaussian) (None, 63, 63, 6) 1 conv2d_58[0][0]
+__________________________________________________________________________________________________
+conv2d_59 (Conv2D) (None, 61, 61, 6) 330 p_gaussian_2[0][0]
+__________________________________________________________________________________________________
+p_gaussian_3 (PGaussian) (None, 61, 61, 6) 1 conv2d_59[0][0]
+__________________________________________________________________________________________________
+conv2d_60 (Conv2D) (None, 59, 59, 6) 330 p_gaussian_3[0][0]
+__________________________________________________________________________________________________
+p_gaussian_4 (PGaussian) (None, 59, 59, 6) 1 conv2d_60[0][0]
+__________________________________________________________________________________________________
+conv2d_61 (Conv2D) (None, 57, 57, 6) 330 p_gaussian_4[0][0]
+__________________________________________________________________________________________________
+p_gaussian_5 (PGaussian) (None, 57, 57, 6) 1 conv2d_61[0][0]
+__________________________________________________________________________________________________
+conv2d_62 (Conv2D) (None, 55, 55, 6) 330 p_gaussian_5[0][0]
+__________________________________________________________________________________________________
+p_gaussian_6 (PGaussian) (None, 55, 55, 6) 1 conv2d_62[0][0]
+__________________________________________________________________________________________________
+conv2d_63 (Conv2D) (None, 53, 53, 6) 330 p_gaussian_6[0][0]
+__________________________________________________________________________________________________
+p_gaussian_7 (PGaussian) (None, 53, 53, 6) 1 conv2d_63[0][0]
+__________________________________________________________________________________________________
+conv2d_64 (Conv2D) (None, 51, 51, 6) 330 p_gaussian_7[0][0]
+__________________________________________________________________________________________________
+p_gaussian_8 (PGaussian) (None, 51, 51, 6) 1 conv2d_64[0][0]
+__________________________________________________________________________________________________
+conv2d_65 (Conv2D) (None, 49, 49, 6) 330 p_gaussian_8[0][0]
+__________________________________________________________________________________________________
+p_gaussian_9 (PGaussian) (None, 49, 49, 6) 1 conv2d_65[0][0]
+__________________________________________________________________________________________________
+conv2d_66 (Conv2D) (None, 47, 47, 1) 55 p_gaussian_9[0][0]
+__________________________________________________________________________________________________
+reshape (Reshape) (None, 47, 47) 0 conv2d_66[0][0]
+__________________________________________________________________________________________________
+flatten (Flatten) (None, 2209) 0 reshape[0][0]
+__________________________________________________________________________________________________
+dense_1 (Dense) (None, 2000) 4420000 flatten[0][0]
+__________________________________________________________________________________________________
+dense_2 (Dense) (None, 200) 442000 flatten[0][0]
+__________________________________________________________________________________________________
+softmax (Softmax) (None, 2000) 0 dense_1[0][0]
+__________________________________________________________________________________________________
+softmax_1 (Softmax) (None, 200) 0 dense_2[0][0]
+==================================================================================================
+Total params: 4,869,857
+Trainable params: 7,857
+Non-trainable params: 4,862,000
+__________________________________________________________________________________________________
+
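+For reference, the counts follow from simple parameter arithmetic: the flattened 47x47 map has 2209 units, so dense_1 needs 2209 * 2000 weights + 2000 biases = 4,420,000 parameters and dense_2 needs 2209 * 200 + 200 = 442,000; together that is 4,862,000, which is essentially the whole model.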
+"
+"['convolutional-neural-networks', 'tensorflow', 'keras', 'bayesian-deep-learning', 'uncertainty-quantification']"," Title: Why do CNN's sometimes make highly confident mistakes, and how can one combat this problem?Body: I trained a simple CNN on the MNIST database of handwritten digits to 99% accuracy. I'm feeding in a bunch of handwritten digits, and non-digits from a document.
+
+I want the CNN to report errors, so I set a threshold of 90% certainty below which my algorithm assumes that what it's looking at is not a digit.
+
+My problem is that the CNN is 100% certain of many incorrect guesses. In the example below, the CNN reports 100% certainty that it's a 0. How do I make it report failure?
+
+
+
+My thoughts on this:
+Maybe the CNN is not really 100% certain that this is a zero. Maybe it just thinks that it can't be anything else, and it's being forced to choose (because of normalisation on the output vector). Is there any way I can get insight into what the CNN ""thought"" before I forced it to choose?
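+
+One way I can think of to look at this (a rough sketch of my own, untested on this exact model) is to read out the activations that feed the final softmax by building a second Keras model that stops at the penultimate layer; if the softmax is fused into the last Dense layer, that layer would instead have to be rebuilt with a linear activation:
+
+from tensorflow import keras
+
+# assuming 'model' is the trained classifier and the softmax is its own final layer
+pre_softmax = keras.Model(inputs=model.input, outputs=model.layers[-2].output)
+raw_scores = pre_softmax.predict(images)   # un-normalised scores before the forced choice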
+
+PS: I'm using Keras on Tensorflow with Python.
+
+Edit
+
+Because someone asked. Here is the context of my problem:
+
+This came from me applying a heuristic algorithm for segmentation of sequences of connected digits. In the image above, the left part is actually a 4, and the right is the curve bit of a 2 without the base. The algorithm is supposed to step through segment cuts, and when it finds a confident match, remove that cut and continue moving along the sequence. It works really well for some cases, but of course it's totally reliant on being able to tell if what it's looking at is not a good match for a digit. Here's an example of where it kind of did okay.
+
+
+
+My next best option is to do inference on all permutations and maximise combined score. That's more expensive.
+"
+"['neural-networks', 'machine-learning', 'prediction', 'multilayer-perceptrons']"," Title: How I can predict the next number in a sequence with a neural network?Body: I've been dabbling with machine learning and neural networks (namely, resnet50) for a few months now, mostly doing image recognition. I am currently trying to make a program that, given a string of numbers as input, can predict the next number in this sequence. For example, the input could be 1, 2, 3, 4
and the output should be 5
.
+
+I read something that said this could be done with a multilayer perceptron neural net, but that didn't elaborate much.
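+
+For reference, this is the kind of minimal sliding-window setup I have been picturing (toy data, hypothetical window size of 4), though I am not sure it is the right framing:
+
+import numpy as np
+from tensorflow import keras
+
+X = np.array([[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]], dtype='float32')  # input windows
+y = np.array([5, 6, 7], dtype='float32')                                   # next numbers
+
+model = keras.Sequential([
+    keras.layers.Dense(16, activation='relu', input_shape=(4,)),
+    keras.layers.Dense(1)   # regression output: the predicted next value
+])
+model.compile(optimizer='adam', loss='mse')
+model.fit(X, y, epochs=200, verbose=0)
+print(model.predict(np.array([[4, 5, 6, 7]], dtype='float32')))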
+
+Any ideas, or links to tutorials/code?
+"
+"['generative-adversarial-networks', 'generative-model', 'papers', 'image-generation']"," Title: Is it feasible to use GAN for high-quality image synthesis other than human faces?Body: The famous Nvidia paper Progressive Growing of GANs for Improved Quality, Stability, and Variation, the GAN can generate hyperrealistic human faces. But, in the very same paper, images of other categories are rather disappointing and there hasn't seemed to be any improvements since then. Why is it the case? Is it because they didn't have enough training data for other categories? Or is it due to some fundamental limitation of GAN?
+
+I have come across a paper talking about the limitations of GAN: Seeing What a GAN Cannot Generate.
+
+Anybody using GAN for image synthesis other than human faces? Any success stories?
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'convergence', 'function-approximation']"," Title: Why does reinforcement learning using a non-linear function approximator diverge when using strongly correlated data as input?Body: While reading the DQN paper, I found that randomly selecting and learning samples reduced divergence in RL using a non-linear function approximator (e.g a neural network).
+So, why does Reinforcement Learning using a non-linear function approximator diverge when using strongly correlated data as input?
+"
+"['deep-learning', 'convolutional-neural-networks', 'representation-learning']"," Title: Can a neural network learn to predict a number given a binarized image of a rectangle?Body: Let's assume that we have a regression problem. Our input is just binarized image that contains a single rectangle and we want to predict just a float number. Actually, this floating-point number depends on rectangle angle, rectangle size and rectangle location. Is this problem can be solved by a neural network?
+
+I think it cannot be solved by a neural network, because the rectangle angle, size and location are latent variables, and without learning these latent variables the above problem cannot be solved. What do you think?
+"
+"['tensorflow', 'recurrent-neural-networks']"," Title: Is there a way to use RNN (in tensorflow) to do something like a batch Kalman with the weight dynamics specified in the loss?Body: Or would you simply do this as a time series of models.
+
+Basically I think you can think of time series of weights as the hidden states and the dynamics driving the weight time series as the RNN weights. I'm not sure if the data gradients are avoiding look ahead in this context though.
+
+I am basically thinking of smoothing (data assimilation) formulation of the filtering problem. Usually smoothing has look-ahead bias but with stop_gradients (and a large graph) it should be possible to do ""batch"" filtering.
+"
+"['search', 'minimax', 'alpha-beta-pruning']"," Title: Why is it possible to eliminate this branch with alpha-beta pruning?Body: Can someone explain to me why it is possible to eliminate the rest of the middle branch in this image for alpha-beta pruning? I am confused because it seems the only information you know is that Helen would pick at least a 2 at the top (considering if we iterate from left to right in DFS), and Stavros would absolutely not pick anything above 7. This leaves 5 possible numbers that the rest of the branch could take on that Helen could potentially end up picking, but can't because we've eliminated those possibilities via pruning.
+
+
+"
+"['deep-learning', 'convolutional-neural-networks', 'comparison', 'image-segmentation', 'transfer-learning']"," Title: What is the difference between using a backbone architecture and transfer learning?Body: I'm super new to deep learning and computer vision, so this question may sound dumb.
+
+In this link (https://github.com/GeorgeSeif/Semantic-Segmentation-Suite), there are pre-trained models (e.g., ResNet101) called front-end models. And they are used for feature extraction. I found these models are called backbone models/architectures generally. And the link says some of the main models (e.g. DeepLabV3 or PSPNet) rely on pre-trained ResNet.
+
+Also, transfer learning is to take a model trained on a large dataset and transfer its knowledge to a smaller dataset, right?
+
+
+- Do the main models that rely on pre-trained ResNet do transfer learning basically (like from ResNet to the main model)?
+- If I use a pre-trained network, like ResNet101, as the backbone architecture of the main model(like U-Net or SegNet) for image segmentation, is it considered as transfer learning?
+
+"
+"['neural-networks', 'deep-learning', 'ai-design', 'long-short-term-memory']"," Title: How should I design this LSTM network to perform stock prediction?Body: I'm trying to develop a stock predictor.
+
+I'm using LSTM but I am unsure about the structure of the Neural Network. For example, I'm assuming that the Neural Network is a many-to-one since we have many inputs (i.e Open, Close etc) and one output (stock price).
+
+My misunderstanding comes with how to construct the nodes. For example, what input goes into the ""Cell"" (or node)? I.e., does a 60-timestep setup mean 60 days of 'Open Price' are fed into the Neural Network at t and then 60 days of 'Close' at t + 1, until we use all inputs to produce an output?
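+
+To make the question concrete, this is the shape convention I am trying to map my data onto (a sketch with made-up numbers, assuming 60 days of history and two features per day):
+
+from tensorflow import keras
+
+timesteps, n_features = 60, 2   # e.g. 60 days, each with [Open, Close]
+model = keras.Sequential([
+    keras.layers.LSTM(32, input_shape=(timesteps, n_features)),
+    keras.layers.Dense(1)       # single predicted price
+])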
+
+If someone could explain the process of how LSTM are used with stock predictions that would be appreciated.
+"
+"['machine-learning', 'classification', 'features']"," Title: Can we train the model to detect real users with only positive labels?Body: We have hundreds of thousands of customers records, and we need to take the benefits of our data to train a model that will recognize fake entries or unrealistic ones for our platform, where customers are asked to enter their names, phone number and zip code.
+
+So, our attributes are name, phone number, zip code and IP address to train the model with. We have only data associated with real users. Can we train a model provided with only positive labels (as we do not have a negative dataset to train the model with)?
+"
+"['convolutional-neural-networks', 'object-detection', 'text-classification']"," Title: Text detection on English and Chinese languageBody: https://arxiv.org/abs/1910.07954
+In this paper, we have a convolutional character neural network where we have object detection by taking a character as a basic unit. First, we do character detection and recognition and then we go for text detection.
+
+
+
+Here (Page number 5 under the subheading Iterative character detection) it is written that a model trained on English and Chinese texts will generalize well in regards to text detection.
+But how are English and Chinese texts good for generalization in text detection?
+If you have any queries regarding the paper you can ask me in the comment section
+Thanks in advance!
+"
+"['deep-learning', 'reference-request', 'long-short-term-memory']"," Title: Did people analyze dynamics of very simple LSTMs?Body: I wonder if researchers tried to understand how LSTMs work by analyzing the dynamics of simple LSTM (e.g. with 2 units)? For example, how the hidden state evolves depending on the properties of weight matrices.
+It seems like a very natural thing to try (especially because it is easy to draw hidden states with 2 units on a 2D plane). However, I haven't found any papers that would play with such a toy example.
+Are there such papers at all?
+Or with such a simple example, it is impossible to get some understanding because of its over-simplicity? (which I doubt because even logistic maps generate very complicated behavior)
+"
+"['machine-learning', 'convolutional-neural-networks', 'padding', 'transpose-convolution']"," Title: Transpose convolution in TiF-GAN: How does ""same"" padding works?Body: This question should be quite generic but I faced the problem in the case of the TiF-GAN generator so I am going to use it as an example. (Link to paper)
+
+If you check the penultimate page in the paper you can find the architecture design of the generator.
+
+The generator has a dense layer and then a reshape layer converting the hidden layer feature map to a dimensionality of 8x4x512 (given that $s = 64$)
+
+Then what follows is a transpose convolution operation with a kernel size of 12x3 with $512$ filters and a stride of $2$ in all dimensions. The output of this layer should be then 16x8x512.
+
+After fiddling with some coding I found out that the authors also used the setting padding=same
in their tensor flow code.
+
+So, my question is: How and what do you pad when you perform such a transpose convolution to get those output dimensions?
+
+Without any padding I would assume that you should get an output of 26x9x1534 assuming that each output dimension is equal to dim = kernel_dim + strides * (input_dim - 1)
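+
+Plugging in the numbers: with a 12x3 kernel, stride 2 and an 8x4 input, that formula gives 12 + 2 * (8 - 1) = 26 and 3 + 2 * (4 - 1) = 9, i.e. a 26x9 spatial output without padding, whereas the table in the paper expects 16x8, which is exactly the input size times the stride.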
+"
+"['neural-networks', 'tensorflow']"," Title: How to handle set-like size agnostic input formatBody: Let's set up some hypothetical simplified scenario: Each instance $i$ of my imaginary dataset $D=\{i_{1}, \ldots, i_{MAX}\}$ has different number $k_{i}$ of $n$-dimensional vectors as input into my neural network. Each of them will be transformed with $m \times n$ matrix $M$ (so, matrices with same parameters) and acted point-wise with some non-linearity $\sigma_{1}$.
+
+Now there are 2 possibilities I want to consider separately:
+
+
+- Case: I want to average all those outputs, thus depending of total number of vectors $k_{i}$.
+- Case: I want to choose 2 subsets (no necessarily mutually disjoint) and average vectors from those subsets.
+
+
+For the later layers (in both cases) I'd use some the ""standard"" neural networks architectures and loss functions. Notice, I'll probably go with more complex connections than averaging, this is only for the purpose of this technical question.
+
+My questions are:
+ 1. Is there a ""simple"" way (meaning: high-level library like
+ tensorflow) which allows me to create custom layer with shared
+ weights which is also size-agnostic (not size of vectors, but number
+ of vectors) Is it possible to parallelise computation like this?
+
+I have some knowledge of tf, but I still didn't use all obscure low-level details and don't know all capabilities of $2.0$ version. Also, I'd know to do this stuff in numpy with handcrafted back-propagation implementation. I'd like to do it in more high level way which will handle backpropagation and parallelisation for me.
+
+I'd like to avoid having tensor of vectors which will be padded, but I'm not sure is it possible. My real problem is more complex but has same characteristics: I need to use same weights on differently connected layers, and I would handle normalization of the input to the following layers to the same scale manually (i.e. with ""normal"" averaging or weighted attention-like averaging,...). Not always I'd connect all vectors from first layers to all from second. It needs to be flexible for each instance of data.
+
+So, in short: I want to reuse the same weight matrices but connect them differently for each instance of dataset. I want to do it in framework which already has automatic differentiation/backpropagation and parallelisation. Is it possible?
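+
+For Case 1 above, this is roughly what I would like to express (a sketch of my own in TF 2.x eager style, with the shared matrix $M$ and non-linearity as a single Dense layer and plain averaging over however many vectors an instance has):
+
+import tensorflow as tf
+
+m = 16
+shared = tf.keras.layers.Dense(m, activation='tanh')   # same M (and sigma_1) for every vector
+
+def encode(vectors):                  # vectors: shape (k_i, n), k_i differs per instance
+    transformed = shared(vectors)     # (k_i, m), identical weights regardless of k_i
+    return tf.reduce_mean(transformed, axis=0)   # (m,) passed on to the later layers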
+
+Does eager execution in tf help in solving problems like this? I guess nothing could be done with prebuilt computation graphs in 1.x versions.
+"
+"['convolutional-neural-networks', 'pytorch', 'batch-normalization', 'inference']"," Title: Why does the BatchNormalization layer produce different outputs during training and inference?Body: I modified resnet50 architecture to get a regression network. I just add batchnorm1d and ReLU layers just before the fully connected layer. During the training, the output of batchnorm1d layer is nearly equal to 3 and this gives good results for training. However, during inference, output of batchnorm1d layer is about 30 so this leads to too low accuracy for test results. In other words, outputs of batchnorm1d layers give very different normalized output during training and inference.
+
+What is the reason for this situation, and how can I solve it? I am using PyTorch.
+"
+"['natural-language-processing', 'text-generation']"," Title: How to tell if two hotel reviews addressing the same thingBody: I am playing with a large dataset of hotel reviews, which contains both positive and negative reviews (the reviews are labeled). I want to use this dataset to perform textual style transfer - given a positive review, output a negative review which address the same thing. For example, if the positive review mentioned how spacious the rooms are, I want the output to be a review that complains about the small and claustrophobic rooms.
+
+However, I don't have positive review-negative review pairs for the training. I was thinking that maybe I could create those pairs myself, but I'm not sure what is the best way to do that. Simple heuristics like jaccard index and such didn't give the desired results.
+"
+"['deep-learning', 'activation-functions', 'exploding-gradient-problem', 'residual-networks', 'vanishing-gradient-problem']"," Title: Why do ResNets avoid the vanishing gradient problem?Body: I read that, if we use the sigmoid or hyperbolic tangent activation functions in deep neural networks, we can have some problems with the vanishing of the gradient, and this is visible by the shapes of the derivative of these functions. ReLU solves this problem thanks to its derivative, even if there may be some dead units. ResNet uses ReLU as activation function, but looking online what I understood is that ResNet solves the vanishing of the gradient thanks to its identity map, and I do not totally agree with that. So what's the purpose of the identity connections in ResNet? Are they used for solving the vanishing of the gradient? And ReLU really solves the vanishing of the gradient in deep neural networks?
+"
+['datasets']," Title: Labeling for multilabel image classificationBody: with a friend of mine, we got in an argument over how to label images for multi-label.
+
+Note:
+Groups of a species and the species of catfish is important to recognize.
+
+The labels are:
+
+
+- 'I': an individual fish of any type except catfish
+- 'R': A group of same species
+- 'K': Catfish
+
+
+First conflict:
+
+For an image containing a bank of fish, our conflicting opinions are:
+
+
+- I and R
+- R
+
+
+Second conflict:
+
+If in an image there's actually a bank of catfish, our conflicting opinions are:
+
+
+- K and R
+- K
+
+
+Third conflict:
+
+If in an image there's actually a group of same species of fish and other individual fish from different species, our conflicting opinions are:
+
+
+- R
+- I and R
+
+
+Summary:
+
+
+- I think one very important difference of opinion here, is if in multi-label, we should give two labels to one object in image(say group and fish) or if an object should have only one label.
+- Should the overwhelming presence of an object, overshadow the presence of another (group of fish of same species and another individual fish)
+
+
+What do you think?
+"
+"['datasets', 'object-recognition', 'object-detection', 'resource-request']"," Title: Small size datasets for object detection, segmentation and localizationBody: I am looking for a small size dataset on which I can implement object detection, object segmentation and object localization.
+
+Can anyone suggest me a dataset less than 5GB? Or do I need to know anything before implementing these algorithms?
+"
+"['neural-networks', 'python', 'artificial-neuron']"," Title: Applying Artificial neural network into kaggle's house prices data set gave bad predicted valuesBody: I am trying to solve the kaggle's house prices using neural network. I've already made it with ensembling several models (XGBoost, GradientBooster and Ridge) and I've got a great score ranking me between the top 25%.
+
+I imagined that by adding a new model to the ensembled models like ANN would increase prediction accuracy, so I did the following:
+
+import keras
+
+model = keras.models.Sequential()
+
+model.add(keras.layers.Dense(235, activation='relu', input_shape=(235,)))
+model.add(keras.layers.Dense(235, activation='relu'))
+model.add(keras.layers.Dense(235, activation='relu'))
+model.add(keras.layers.Dense(235, activation='relu'))
+model.add(keras.layers.Dense(235, activation='relu'))
+model.add(keras.layers.Dense(1))
+
+model.compile(optimizer='adam', loss='mean_squared_error')
+model.summary()
+model.fit(dataset_ann, y, epochs=100, callbacks=[keras.callbacks.EarlyStopping(patience=3)])
+y_pred = model.predict(X_test_ann)
+
+
+I chose 235 neurons for each layer, as the training set has 235 features.
+
+For model ensembling:
+
+y_p = (0.1*model.predict(X_test_ann)+0.2*gbr.predict(testset)+0.3*xgb.predict(testset)+0.1*regressor.predict(testset)+0.1*elastic.predict(testset)+0.2*ridge.predict(testset))
+
+
+The shape of y_p is (1459, 1459) instead of (1459, ), where the columns all have the same values, so taking y_p[0] would be more than enough.
+
+I submitted the result into kaggle and went from top 25% into bottom 60%.
+
+Is it because of the number of hidden layers and their inputs? Or because there is too little data to train on (1,460 rows in the train set) and the neural network needs more than that? Or is it because of the number of neurons in each layer?
+
+I tried with epoch = 30, 100, 1000 and got nearly the same bad ranking.
+"
+"['neural-networks', 'reinforcement-learning', 'open-ai', 'markov-property']"," Title: Is a neural network able to optimize itself for speed?Body: I am experimenting with OpenAI Gym and reinforcement learning. As far as I understood, the environment is waiting for the agent to make a decision, so it's a sequential operation like this:
+
+decision = agent.decide(state)
+state, reward, done = environment.act(decision)
+agent.train(state, reward)
+
+
+Doing it in this sequential way, the Markov property is fulfilled: the new state is a result of the old state and the action. However, a lot of games will not wait for the player to make a decision. The game will continue to run and perhaps, the action comes too late.
+
+Has it been observed, or is it even possible, that a neural network adjusts its weights so that the PC computes the result faster and thus makes the ""better"" decision? E.g. one AI beats another because it is faster.
+
+Before posting an answer like ""there are always the same amount of calculations, so it's impossible"", please be aware that there is caching (1st level cache versus RAM), branch prediction and maybe other stuff.
+"
+"['machine-learning', 'prediction']"," Title: Why is my model accuracy high in train-test split but actually worse than chance in validation set?Body: I have trained a XGboost model to predict survival for the Kaggle Titanic ML competition.
+
+As with all Kaggle competitions there is a train
dataset with the target variable included and a test
dataset without the target variable which is used by Kaggle to compute the final accuracy score that determines your leaderboard ranking.
+
+My problem:
+
+I have build a fairly simple ensemble classifier (based on XGboost) and evaluated it via standard train-test-splits of the train
data. The accuracy I get from this validation is ~80% which is good but not amazing by public leaderboard standards (excluding the 100% cheaters).
+
+The results and all the KPIs I looked at of this standard model do not indicate severe overfitting, etc. to me.
+
+However when I submit my predictions for the test
set my public score is ~35% which is way below even a random chance model. It is sooo bad I even improved my score by simply reversing all predictions from the model.
+
+Why is my model so much worse on the test
?
+
+I know that Kaggle computes their scores a bit differently than I do locally, additionally there is probably some differences between the datasets. Most who join the competition notices at least some difference between their local test scores and the public scores.
+
+However my difference is really drastic and indeed reversing the predictions improves my score. This does not make sense to me because reversing the predictions on my local validations leads to garbage predictions, so this is not a simple problem of generally reversed predictions.
+
+So can you help me understand how those two issues happen at the same time:
+
+
+- Drastic difference between local accuracy and public score
+- Reversing actually leads to the better public score.
+
+
+Here is my notebook for the code (please ignore the errors, they are simply because the code does not work on kaggle kernels only locally):
+
+https://www.kaggle.com/fnguyen/titanicrising-test
+"
+"['reinforcement-learning', 'rewards', 'ddpg', 'reward-shaping', 'reward-design']"," Title: How to avoid rapid actuator movements in favor of smooth movements in a continuous space and action space problem?Body: I'm working on a continuous state / continuous action controller. It shall control a certain roll angle of an aircraft by issuing the correct aileron commands (in $[-1, 1]$).
+To this end, I use a neural network and the DDPG algorithm, which shows promising results after about 20 minutes of training.
+I stripped down the presented state to the model to only the roll angle and the angular velocity, so that the neural network is not overwhelmed by state inputs.
+So it's a 2 input / 1 output model to perform the control task.
+In test runs, it looks mostly good, but sometimes the controller starts thrashing, i.e. it outputs flittering commands, like a very fast bang-bang control, which causes a rapid movement of the elevator.
+
+Even though this behavior kind of maintains the desired target value, this behavior is absolutely undesirable. Instead, it should keep the output smooth. So far, I was not able to detect any special disturbance that starts this behavior. Yet it comes out of the blue.
+Does anybody have an idea or a hint (maybe a paper reference) on how to incorporate some element (maybe reward shaping during the training) to avoid such behavior? How to avoid rapid actuator movements in favor of smooth movements?
+I tried to include the last action in the presented state and add a punishment component in my reward, but this did not really help. So obviously, I do something wrong.
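+
+For reference, the punishment component I tried looks roughly like this (my own sketch, with a hand-tuned coefficient k_smooth):
+
+# penalise fast changes of the commanded deflection between consecutive steps
+smoothness_penalty = k_smooth * abs(action - last_action)   # k_smooth > 0, hand-tuned
+reward = tracking_reward - smoothness_penalty
+last_action = action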
+"
+"['deep-learning', 'classification', 'comparison', 'terminology', 'prediction']"," Title: Why are the terms classification and prediction used as synonyms in the context of deep learning?Body: Why are the terms classification and prediction used as synonyms especially when it comes to deep learning? For example, a CNN predicts the handwritten digit.
+
+To me, a prediction is telling the next step in a sequence, whereas classification is to put labels on (a finite set of) data.
+"
+"['machine-learning', 'algorithm', 'genetic-algorithms', 'problem-solving', 'intelligence']"," Title: In machine learning, how can we overcome the restrictive nature of conjunctive space?Body: In machine learning, problem space can be represented through concept space, instance space version space and hypothesis space. These problem spaces used the conjunctive space and are very restrictive one and also in the above-mentioned representations of problem spaces, it is not sure that the true concept lies within conjunctive space.
+
+So, let's say, if we have a bigger search space and want to overcome the restrictive nature of conjunctive space, then how can we represent our problem space? Secondly, in a given scenario which algorithm is used for our problem space to represent the learning problem?
+"
+"['computer-vision', 'human-computer-interaction']"," Title: How can a new metric applied for humans causing danger on railtracks?Body: I am writing myself and was thinking about, what kind of metric can be applied to measure the ""dangerousness"" of a human being on a railtrack? For example detecting if a human is running on the rails?
+
+Maybe predicting the human movement towards rails, the distance to the vehicle (ego perspective) or something else?
+"
+"['convolutional-neural-networks', 'filters', 'convolutional-layers', 'convolution-arithmetic', 'feature-maps']"," Title: How is the depth of the filters of convolutional layers determined?Body: I am a bit confused about the depth of the convolutional filters in a CNN.
+At layer 1, there are usually about 40 3x3x3 filters. Each of these filters outputs a 2d array, so the total output of the first layer is 40 2d arrays.
+Does the next convolutional filter have a depth of 40? So, would the filter dimensions be 3x3x40?
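+
+One way I thought of to check this guess (a quick Keras sketch, not taken from any particular reference) is to look at the parameter counts, since they expose the kernel depths:
+
+from tensorflow import keras
+
+model = keras.Sequential([
+    keras.layers.Conv2D(40, (3, 3), input_shape=(32, 32, 3)),  # 3*3*3*40 + 40 = 1120 params
+    keras.layers.Conv2D(40, (3, 3)),                           # 3*3*40*40 + 40 = 14440 params
+])
+model.summary()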
+"
+"['neural-networks', 'ai-design', 'python', 'image-processing', 'image-generation']"," Title: How can I generate unique random patterns (similar to the ones in Nutella jars)?Body: How can I generate unique patterns, as they did for these Nutella jars? See, for example, the video Algorithm designs seven million different jars of Nutella.
+
+
+"
+"['machine-learning', 'reinforcement-learning', 'training']"," Title: What effect does increasing the actions in RL have?Body: Consider a 2D snake game, where the snake has to eat food to become longer. It must avoid hitting walls and biting into her tail.
+
+Such a game could have a different amount of actions:
+
+
+- 3 actions: go straight, turn left, turn right (relative to crawling direction)
+- 4 actions: north, east, south, west (absolute direction on the 2D map)
+- 7 actions: a combination of option A and option B (leaves the preferred choice to the player)
+
+
+While the game in principle is always the same, I would like to understand the impact of the number of actions on the training of a neural network. One obvious thing is the number of output nodes of the neural network.
+
+In case A (3 actions), the neural network cannot perform an incorrect action. Any of the 3 choices are valid moves.
+
+In case B (4 actions), the net IMHO needs to learn that going in the opposite direction does not have the desired effect and the snake continues moving in the old direction.
+
+In case C (7 actions), the net needs to learn both, 1 action is always illegal and the 3 relative actions somehow map to the 3 absolute actions.
+
+How can I consider the learning curve in these situations? Does option B need 25% more training than option A to achieve the same results (same fitness) (similar: option C needs 125% more training time)?
+
+Is giving a negative reward for an impossible move considered cheating, because I do code the rules of the game into the reward logic?
+"
+"['reinforcement-learning', 'comparison', 'terminology', 'off-policy-methods', 'model-free-methods']"," Title: Are model-free and off-policy algorithms the same?Body: In respect of RL, is model-free and off-policy the same thing, just different terminology? If not, what are the differences? I've read that the policy can be thought of as 'the brain', or decision making part, of machine learning application, where it stores its learnings and refers to it when a new action is required in the new state.
+"
+"['objective-functions', 'pytorch']"," Title: RealNVP gives wrong probabilitiesBody: I am trying to use RealNVP with some data I have (the input size is a 1D vector of size 22). Here is the link to the RealNVP paper and here is a nice, short explanation of it (the paper is pretty long). My code is mainly based on this code from GitHub and below are the main piece that I am using (with slight adjustments). The problem is that the loss is getting negative, which in the definition of my code means that the log-probability of the my data is positive, which in turn means that the probabilities are bigger than 1. This is impossible mathematically, and I see no way how this can happen, from a mathematical point of view. I also couldn't find a mistake in my code. Can someone help me with this? Is there a mistake there? Am I missing something with my understanding of normalizing flows? Thank you!
+
+# imports needed by the snippet below (not shown in the original excerpt)
+import torch
+import torch.nn as nn
+from torch.distributions import MultivariateNormal
+
+
+class NormalizingFlowModel(nn.Module):
+
+ def __init__(self, prior, flows):
+ super().__init__()
+ self.prior = prior
+ self.flows = nn.ModuleList(flows)
+
+ def forward(self, x):
+ m, _ = x.shape
+ log_det = torch.zeros(m).cuda()
+ for flow in self.flows:
+ x, ld = flow.forward(x)
+ log_det += ld
+ z, prior_logprob = x, self.prior.log_prob(x)
+ return z, prior_logprob, log_det
+
+ def inverse(self, z):
+ m, _ = z.shape
+ log_det = torch.zeros(m).cuda()
+ for flow in self.flows[::-1]:
+ z, ld = flow.inverse(z)
+ log_det += ld
+ x = z
+ return x, log_det
+
+ def sample(self, n_samples):
+ z = self.prior.sample((n_samples,))
+ x, _ = self.inverse(z)
+ return x
+
+
+class FCNN_for_NVP(nn.Module):
+ """"""
+ Simple fully connected neural network to be used for Real NVP
+ """"""
+ def __init__(self, in_dim, out_dim):
+ super().__init__()
+ self.network = nn.Sequential(
+ nn.Linear(in_dim, 32),
+ nn.Tanh(),
+ nn.Linear(32, 32),
+ nn.Tanh(),
+ nn.Linear(32, 64),
+ nn.Tanh(),
+ nn.Linear(64, 64),
+ nn.Tanh(),
+ nn.Linear(64, 32),
+ nn.Tanh(),
+ nn.Linear(32, 32),
+ nn.Tanh(),
+ nn.Linear(32, out_dim),
+ )
+
+ def forward(self, x):
+ return self.network(x)
+
+
+class RealNVP(nn.Module):
+ """"""
+ Non-volume preserving flow.
+
+ [Dinh et. al. 2017]
+ """"""
+ def __init__(self, dim, base_network=FCNN_for_NVP):
+ super().__init__()
+ self.dim = dim
+ self.t1 = base_network(dim // 2, dim // 2)
+ self.s1 = base_network(dim // 2, dim // 2)
+ self.t2 = base_network(dim // 2, dim // 2)
+ self.s2 = base_network(dim // 2, dim // 2)
+
+ def forward(self, x):
+ lower, upper = x[:,:self.dim // 2], x[:,self.dim // 2:]
+ t1_transformed = self.t1(lower)
+ s1_transformed = self.s1(lower)
+ upper = t1_transformed + upper * torch.exp(s1_transformed)
+ t2_transformed = self.t2(upper)
+ s2_transformed = self.s2(upper)
+ lower = t2_transformed + lower * torch.exp(s2_transformed)
+ z = torch.cat([lower, upper], dim=1)
+ log_det = torch.sum(s1_transformed, dim=1) + torch.sum(s2_transformed, dim=1)
+ return z, log_det
+
+ def inverse(self, z):
+ lower, upper = z[:,:self.dim // 2], z[:,self.dim // 2:]
+ t2_transformed = self.t2(upper)
+ s2_transformed = self.s2(upper)
+ lower = (lower - t2_transformed) * torch.exp(-s2_transformed)
+ t1_transformed = self.t1(lower)
+ s1_transformed = self.s1(lower)
+ upper = (upper - t1_transformed) * torch.exp(-s1_transformed)
+ x = torch.cat([lower, upper], dim=1)
+ log_det = torch.sum(-s1_transformed, dim=1) + torch.sum(-s2_transformed, dim=1)
+ return x, log_det
+
+flow = RealNVP(dim=data.size(1))
+flows = [flow for _ in range(1)]
+prior = MultivariateNormal(torch.zeros(data.size(1)).cuda(), torch.eye(data.size(1)).cuda())
+model = NormalizingFlowModel(prior, flows)
+model = model.cuda()
+
+for i in range(10):
+ for j, dtt in enumerate(my_dataloader_bkg_only):
+ optimizer.zero_grad()
+ x = dtt[0].float()
+ z, prior_logprob, log_det = model(x)
+ logprob = prior_logprob + log_det
+ loss = -torch.mean(prior_logprob + log_det)
+ loss.backward()
+ optimizer.step()
+ if i % 1 == 0:
+ print(""Saved"")
+ best_loss = logprob.mean().data.cpu().numpy()
+ print(logprob.mean().data.cpu().numpy(), prior_logprob.mean().data.cpu().numpy(),
+ log_det.mean().data.cpu().numpy())
+
+"
+"['machine-learning', 'comparison', 'geometric-deep-learning', 'graphs', 'semi-supervised-learning']"," Title: What is the difference between graph semi-supervised learning and normal semi-supervised learning?Body: Whenever I look for papers involving semi-supervised learning, I always find some that talk about graph semi-supervised learning (e.g. A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning).
+
+What is the difference between graph semi-supervised learning and normal semi-supervised learning?
+"
+"['reinforcement-learning', 'applications']"," Title: How can I perform lane detection with reinforcement learning?Body: I'm quite new to reinforcement learning and my project will consist of detecting lanes with RL.
+
+I'm using Q-learning and I'm having a hard time figuring out what my Q-table should look like, i.e. what could represent a state. My main idea is to feed the machine a frame containing a road picture to which an edge-detection function has been applied (thus getting lots of candidate lines in the frame), and to train the machine to decide which lines are the correct lane lines. I already have a deterministic function that recognizes the lanes, and it will be the function that teaches the machine. I have already organized some lane parameters, such as lane length, lane coordinates, lane color (white or yellow have a higher probability of being a lane), lane diameter and lane incline.
+
+Now, my only issue is how I should construct the Q-table: basically, what could represent a state, and which lanes or decisions should be rewarded.
+"
+"['reinforcement-learning', 'training', 'game-ai']"," Title: How can I train a RL agent to play board games successfully without human play?Body: How would you go about training an RL Tic Tac Toe (well, any board game, really) application to learn how to play successfully (win the game), without a human having to play against the RL?
+
+Obviously, it would take a lot longer to train the AI if I have to sit and play ""real"" games against it. Is there a way for me to automate the training?
+
+I guess creating ""a human player"" that just selects random positions on the board won't help the AI learn properly, as the AI would never be up against an opponent that actually uses a strategy to try to beat it.
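+
+To make this more concrete, the kind of automation I am imagining is self-play, where a single tabular Q-learner plays both sides of the board. A rough sketch (the negamax-style update, rewards and hyperparameters are all my own assumptions):
+
+import random
+from collections import defaultdict
+
+# one shared Q-function used by both sides; values are from the perspective of the player to move
+Q = defaultdict(float)
+ALPHA, GAMMA, EPS = 0.1, 1.0, 0.1
+LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
+
+def winner(board):
+    for a, b, c in LINES:
+        if board[a] != 0 and board[a] == board[b] == board[c]:
+            return board[a]
+    return 0
+
+def legal_moves(board):
+    return [i for i, v in enumerate(board) if v == 0]
+
+def choose(board):
+    if random.random() < EPS:
+        return random.choice(legal_moves(board))
+    return max(legal_moves(board), key=lambda a: Q[(board, a)])
+
+for episode in range(50000):
+    board, player = (0,) * 9, 1
+    while True:
+        action = choose(board)
+        nxt = list(board); nxt[action] = player; nxt = tuple(nxt)
+        if winner(nxt) == player:                 # the move just played wins
+            target = 1.0
+        elif not legal_moves(nxt):                # draw
+            target = 0.0
+        else:                                     # opponent moves next: their best value is our worst
+            target = -GAMMA * max(Q[(nxt, a)] for a in legal_moves(nxt))
+        Q[(board, action)] += ALPHA * (target - Q[(board, action)])
+        if winner(nxt) or not legal_moves(nxt):
+            break
+        board, player = nxt, -player
+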
+"
+"['neural-networks', 'biology', 'spiking-neural-networks']"," Title: Is it a good idea to first train a spiking neural network and then convert it to a conventional neural network?Body: In many papers about artificial spiking neural networks (SNNs), the performance of them is not up to par with traditional ANNs. I have read how some people have converted ANNs to SNNs using various techniques.
+
+There has been work done on using unsupervised learning in SNN to recognise MNIST digits through spike-timing-dependent plasticity (for example, the paper Unsupervised learning of digit recognition using spike-timing-dependent plasticity, by Diehl and Cook, 2015). This form of learning is not possible in traditional ANNs due to their synchronous nature.
+
+I was wondering whether it would be a good idea to first train an SNN in an unsupervised manner to learn some of the structure of the data, and then convert it to a traditional ANN to take advantage of the latter's superior performance with some more training. I can see this being useful for training a network on a sparsely labelled dataset.
+
+I am quite a novice in this area, so I was looking for feedback on any immediate barriers as to why this would not work or if it is even worth doing.
+"
+"['neural-networks', 'deep-learning', 'prediction', 'time-series']"," Title: What's the best architecture for time series prediction with a long dataset?Body: I have to build a neural network without any architecture limitations which have to predict the next value of a time series.
+
+The dataset is composed of 400,000 values, which are given in hex format. For example:
+
+0xbfb22b14
+0xbfb22b10
+0xbfb22b0c
+0xbfb22b18
+0xbfb22b14
+
+
+I think an LSTM is suitable for this problem, but I am worried about the length of the input. Would it be a good idea to use a CNN?
+
+from keras.models import Sequential
+from keras.layers import LSTM, Dense
+
+def structure(step,n_features):
+ # define model
+ model = Sequential()
+ model.add(LSTM(50, activation='relu', return_sequences=True, input_shape=(step, n_features)))
+ model.add(Dense(1))
+ model.compile(optimizer='adam', loss='mse')
+ return model
+
+
+What about this one?
+
+""model"": {
+ ""loss"": ""mse"",
+ ""optimizer"": ""adam"",
+ ""save_dir"": ""saved_models"",
+ ""layers"": [
+ {
+ ""type"": ""lstm"",
+ ""neurons"": 999,
+ ""input_timesteps"": 998,
+ ""input_dim"": 1,
+ ""return_seq"": true
+ },
+ {
+ ""type"": ""dropout"",
+ ""rate"": 0.05
+ },
+ {
+ ""type"": ""lstm"",
+ ""neurons"": 100,
+ ""return_seq"": false
+ },
+ {
+ ""type"": ""dropout"",
+ ""rate"": 0.05
+ },
+ {
+ ""type"": ""dense"",
+ ""neurons"": 1,
+ ""activation"": ""linear""
+        }
+    ]
+}
+
+"
+"['machine-learning', 'comparison', 'computational-learning-theory', 'pac-learning', 'no-free-lunch-theorems']"," Title: Are PAC learnability and the No Free Lunch theorem contradictory?Body: I am reading the Understanding Machine Learning book by Shalev-Shwartz and Ben-David and based on the definitions of PAC learnability and No Free Lunch Theorem, and my understanding of them it seems like they contradict themselves. I know this is not the case and I am wrong, but I just don't know what I am missing here.
+
+So, a hypothesis class is (agnostic) PAC learnable if there exists a learner A and a function $m_{H}$ s.t. for every $\epsilon,\delta \in (0,1)$ and for every distribution $D$ over $X \times Y$, if $m \geq m_{H}$ the learner can return a hypothesis $h$, with a probability of at least $1 - \delta$
+$$ L_{D}(h) \leq \min_{h'\in H} L_{D}(h') + \epsilon $$
+
+But, in layman's terms, the NFL theorem states that for prediction tasks, for every learner there exists a distribution on which the learner fails.
+
+For a hypothesis class to be PAC learnable, there needs to exist a learner that is successful (in the sense defined above) for every distribution $D$ over $X \times Y$, but, according to the NFL theorem, there exists a distribution for which the learner will fail. Aren't these theorems contradicting each other?
+
+What am I missing or misinterpreting here?
+"
+"['reinforcement-learning', 'monte-carlo-methods', 'sutton-barto']"," Title: What does the figure ""Blackjack Value Function..."" from Sutton represent?Body: I came across this graph in David Silver's youtube lecture and Sutton's book on reinforcement learning.
+
+Can anyone help me understand the graph?
+From the graph, for 10000 episodes, what I see is that, when we don't have a usable ace, we always lose the game unless the sum is 20 or 21, but, when we have a usable ace, there are chances to win even when our sum is below 20. I don't know how this is possible.
+
+
+"
+"['convolutional-neural-networks', 'object-detection', 'regression']"," Title: Possible model to use to find pixel locations of objectsBody: I want to make a model that outputs the centre pixel of objects appearing in an image.
+
+My current method involves using a CNN with L2 loss to output an image of equivalent size to the input where each pixel has a value of 1 if it is the center of an object and 0 otherwise. Each input image has roughly ~80 objects.
+
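+Concretely, the target map described above is built roughly like this (a small NumPy sketch; the image size and centre coordinates are placeholders):
+
+import numpy as np
+
+# 1 at each object centre, 0 everywhere else (~80 centres per image in practice)
+height, width = 256, 256
+centres = [(40, 52), (120, 200), (90, 13)]
+
+target = np.zeros((height, width), dtype=np.float32)
+for y, x in centres:
+    target[y, x] = 1.0
+
+print(target.sum(), target.size)   # only a tiny fraction of pixels are non-zero
+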
+The problem with this is the CNN learns the easiest way to reduce the error, which is having the entire output be 0, because for 97% of cases that's correct. As such, error decreases but it learns nothing.
+
+What is another potential method for training a network to do something similar? I also tried adding dropout, which made the output a lot noisier, and it seemed to learn OK, but it eventually ended up in the same state as before, with the entire output being 0, never really seeming to learn how to output the locations of objects.
+"
+"['reinforcement-learning', 'monte-carlo-methods', 'temporal-difference-methods', 'bias-variance-tradeoff']"," Title: How does Monte Carlo have high variance?Body: I was going through David Silver's lecture on reinforcement learning (lecture 4). At 51:22 he says that Monte Carlo (MC) methods have high variance and zero bias. I understand the zero bias part. It is because it is using the true value of value function for estimation. However, I don't understand the high variance part. Can someone enlighten me?
+"
+"['reinforcement-learning', 'markov-decision-process', 'probability', 'probability-distribution']"," Title: Formulation of a Markov Decision Process ProblemBody: Given a list of $N$ questions. If question $i$ is answered correctly (given probability $p_i$), we receive reward $R_i$; if not the quiz terminates. Find the optimal order of questions to maximize expected reward. (Hint: Optimal policy has an ""index form"".)
+
+I am fairly new to reinforcement learning and Markov decision processes (MDPs). I am aware that the goal of the problem is to maximize the expected reward, but I am not sure how exactly to formulate this as an MDP.
+
+This is the approach I thought of:
+
+1) Assume only 2 questions. Then the state space is $S\in \{1,2\}$.
+
+2) Compute the expected total reward $J = E(R)$ for both cases, when we start with question $1$ and question $2$ and then find the maximum of the two.
+
+3) If we start with $1$, then $$J(S_0 = 1) = p_1(1-p_{2})R_1 + (R_1 + R_2)p_1p_2$$
+
+4) Similarly, if we start with $2$, $$J(S_0 = 2) = p_2(1-p_{1})R_2 + (R_1 + R_2)p_1p_2$$.
+
+To determine the maximum reward of the two, the required condition for $1$ to be the optimal starting question is $$R_1p_1 - R_2p_2 + p_1p_2(R_2 - R_1) \gt 0$$ If the above expression is negative, then we should start with $2$.
+
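+(For a small number of questions, the expected reward of every ordering can also be brute-forced directly; below is a quick sketch with placeholder probabilities and rewards, which agrees with the two-question expressions above.)
+
+import itertools
+
+p = [0.9, 0.5, 0.3]      # placeholder success probabilities
+R = [1.0, 4.0, 10.0]     # placeholder rewards
+
+def expected_reward(order):
+    # reward R[i] is collected only if every question up to and including i is answered correctly
+    total, prob_alive = 0.0, 1.0
+    for i in order:
+        prob_alive *= p[i]
+        total += prob_alive * R[i]
+    return total
+
+best = max(itertools.permutations(range(len(p))), key=expected_reward)
+print(best, expected_reward(best))
+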
+I would like to know if the approach is correct and how to proceed further. I am also not sure how to define the action space in this case. Can a dynamic programming approach be used here to find the optimal policy?
+"
+"['neural-networks', 'convolutional-neural-networks', 'recurrent-neural-networks', 'prediction', 'time-series']"," Title: Convert input dataset given in hex addresses to intBody: I have created an LSTM Neural Network which take as input the following format in an .csv file
+
+sinewave
+0.841470985
+0.873736397
+0.90255357
+0.927808777
+0.949402346
+0.967249058
+0.98127848
+0.991435244
+
+
+How can I write some code so that it can take hex addresses as input and convert them to int?
+
+e.g. the following .xlsx file containing 400,000 samples:
+
+0xbfb22b18
+0xbfb22b14
+0xbfb22b10
+0xbfb22b0c
+0xbfb22b18
+0xbfb22b14
+0xbfb22b10
+0xbfb22b0c
+0xbfb22b18
+0xbfb22b14
+0xbfb22b10
+0xbfb22b0c
+
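+To be clear, the kind of conversion I am after is roughly this (a minimal pandas sketch; the file name and column handling are just assumptions):
+
+import pandas as pd
+
+# read the hex strings from the spreadsheet (file name is a placeholder)
+df = pd.read_excel('addresses.xlsx', header=None, names=['address'])
+
+# convert strings like '0xbfb22b18' to integers
+df['address_int'] = df['address'].apply(lambda s: int(str(s), 16))
+
+# the LSTM would then take the numeric values (optionally normalised) as input
+values = df['address_int'].astype('float64').to_numpy()
+print(values[:5])
+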
+"
+"['long-short-term-memory', 'convergence']"," Title: Why does the error of my LSTM not decrease after 10 epochs?Body: Despite the problem being very simple, I was wondering why an LSTM network was not able to converge to a decent solution.
+import numpy as np
+import keras
+
+X_train = np.random.rand(1000)
+y_train = X_train
+X_train = X_train.reshape((len(X_train), 1, 1))
+
+model= keras.models.Sequential()
+model.add(keras.layers.wrappers.Bidirectional(keras.layers.LSTM(1, dropout=0., recurrent_dropout=0.)))
+model.add(keras.layers.Dense(1))
+
+optimizer = keras.optimizers.SGD(lr=1e-1)
+
+model.build(input_shape=(None, 1, 1))
+model.compile(loss=keras.losses.mean_squared_error, optimizer=optimizer, metrics=['mae'])
+history = model.fit(X_train, y_train, batch_size=16, epochs=100)
+
+After 10 epochs, the algorithm seems to have reached its optimal solution (around 1e-4 RMSE) and is not able to improve the results any further.
+A simple Flatten + Dense network with similar parameters is however able to achieve 1e-13 RMSE.
+I'm surprised the LSTM cell does not simply let the value through. Is there something I'm missing with my parameters? Are LSTMs only good for classification problems?
+"
+"['machine-learning', 'terminology', 'feedforward-neural-networks', 'residual-networks']"," Title: What is the name of this neural network architecture with layers that are also connected to non-neighbouring layers?Body: Consider a feedforward neural network. Suppose you have a layer of inputs, which is feedforward to a hidden layer, and feedforward both the input and hidden layers to an output layer. Is there a name for this architecture? A layer feeds forward around the layer after it?
+"
+"['search', 'planning', 'strips', 'pddl']"," Title: FastDownward PDDL Planner LimitationsBody: I recently had a look at automated planners and experimented a little bit with FastDownward. As I wanted to start a toy project, I created a PDDL model for the ordinary 3D Rubik's Cube (of course using a planner may not be the most efficient approach).
+
+Although my model may not necessarily be ""totally correct"" yet, so far it consists of 24 predicates and 12 actions for the respective moves (each with 8 typed parameters: 4 ""edge subcubes"" and 4 ""corner subcubes""). For each of the movable subcubes I have a domain object whose position is basically determined by the respective predicate; overall, at first glance, this seemed to me like a model of quite moderate size.
+
+It was indeed not a very complex task to come up with this model, and although it currently does not consider orientations of subcubes yet, I simply wanted to give it a try with an instance where only a single move (so the application of one action) has to be conducted — I assumed that such a planning graph should level off quite soon, basically after the first layer in which a goal state can be reached.
+
+However, as I started the planner, it soon ran out of memory and started to page.
+
+Previously, I had only read a bit about PDDL and the respective sections of Russell & Norvig. I took a closer look at the planner itself and found that it transforms the PDDL description into some intermediate representation.
+
+I tried to only execute the transformation, and after cutting off a third of my Rubik's Cube it — at least — terminated. I investigated the transformed model file and found that the planner/solver actually instantiates (flattens) the actions with their parameters.
+
+Since the Rubik's Cube has a quite unrestricted domain and each action apparently has 8 parameters (4 corners, 4 edges), this inherently results in a huge number of flattened actions. Even worse, although I added precondition constraints to ensure distinctness of the parameters (so the very same subcube cannot be both edge cube $e_1$ and $e_2$), the flattened version still contains these invalid actions.
+
+I have the following questions:
+
+
+- Are state-of-the-art planners even suitable for such a problem, or are they designed for problems where flattening the actions is of great advantage (e.g. few parameters, a moderate number of objects per class, etc.)? IMHO this would be a major limitation for their applicability in contrast to, e.g., CP solvers.
+- Can anyone recommend another planner that is more appropriate and that maybe does not perform the model transformation, which seems to be quite expensive for my PDDL spec? (E.g., from this list, https://ipc2018-classical.bitbucket.io, I have chosen FastDownward since it seemed to be among the best.)
+- Does PDDL, or even FastDownward, allow one to specify that the parameters have to be distinct (I just found this: https://www.ida.liu.se/~TDDC17/info/labs/planning/writing.shtml — search for distinct — but this is more than vague)? This alone may already lead to a significant reduction in flattened actions.
+- I'd also be happy about any other recommendations or remarks.
+
+"
+"['neural-networks', 'terminology', 'definitions']"," Title: Is the following statement about neural networks overclaimed?Body: Is the following statement about neural networks overclaimed?
+
+
+ Neural networks are iterative methods that minimize a loss function defined on the output layer of neurons.
+
+
+I wrote this statement in the Introduction section of a conference paper. The reviewer got back to me saying that ""this statement is over claimed without proper citations"". How is this statement overclaimed? Is it not obvious that neural networks are iterative methods that try to minimize a cost function?
+"
+"['recurrent-neural-networks', 'sentiment-analysis']"," Title: What is the appropriate RNN structure to do Sentiment Analysis with multiple dependent ratings?Body: Suppose we are doing sentiment analysis for a restaurant. Customers can rate the restaurant by #1: how expensive the restaurant is, #2:how good is the food and #3: how likely they will come again. The ratings are dependent,i.e. the more expensive the restaurant is (higher #1), the less likely they will come back (lower #3), but whey will if the food is good (higher #2).
+
+My question is: is there a good RNN structure (review as input, #1-#3 as output) that can capture and model the dependency among #1-#3?
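+
+To make the question concrete, the baseline structure I can think of is a shared encoder with separate output heads, roughly like this (a Keras sketch; all layer sizes are placeholders):
+
+from tensorflow.keras import layers, Model
+
+review = layers.Input(shape=(None,), dtype='int32')
+x = layers.Embedding(input_dim=20000, output_dim=128)(review)
+x = layers.LSTM(64)(x)
+
+price = layers.Dense(1, activation='sigmoid', name='price')(x)        # rating #1
+food = layers.Dense(1, activation='sigmoid', name='food')(x)          # rating #2
+# rating #3 also sees the other two ratings, to model the dependency
+come_again = layers.Dense(1, activation='sigmoid', name='come_again')(
+    layers.Concatenate()([x, price, food]))
+
+model = Model(review, [price, food, come_again])
+model.compile(optimizer='adam', loss='mse')
+model.summary()
+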
+"
+"['machine-learning', 'gradient-descent', 'variational-autoencoder', 'calculus', 'math']"," Title: Is Gradient Descent algorithm a part of Calculus of Variations?Body: As in https://en.wikipedia.org/wiki/Calculus_of_variations
+
+
+ The calculus of variations is a field of mathematical analysis that
+ uses variations, which are small changes in functions and functionals,
+ to find maxima and minima of functionals
+
+
+The Gradient Descent algorithm is also a method to find minima of a function. Is it a part of the Calculus of Variations?
+"
+"['machine-learning', 'math', 'statistics']"," Title: Why is Standard Deviation based on L2 Variance and not L1 VarianceBody: Standard deviation and variance are in statistics but the formula for variance is somehow related to the L1 and L2.
+
+Mathematically (L2 in machine learning sense),
+$$Variance = \dfrac{(X_1-Mean)^2+\ldots+(X_n-Mean)^2}{N}$$
+and,
+$$Standard\ Deviation = \sqrt{Variance}$$
+
+Why shouldn't it be (L1 in machine learning sense):
+$$Variance = \dfrac{|X_1-Mean|+\ldots+|X_n-Mean|}{N}$$
+and,
+$$Standard\ Deviation = Variance$$
+"
+"['machine-learning', 'object-recognition']"," Title: Recognize carp and give them a unique idBody: For my internship assignment I have to implement a proof of concept for an application that is supposed to scan a picture with a carp on it and identify which carp this is. All of the carps that are going to be scanned are known and they all exist in the database, so no new carps are scanned.
+
+Is this possible? I've been searching a lot about this topic and the only thing I found is customvision.ai; however, to use it I need at least 15 pictures of the same carp per tag, and the client only has 1 picture per carp.
+
+What are your recommendations or do you think this is not possible?
+"
+"['neural-networks', 'machine-learning', 'math', 'autoencoders', 'variational-autoencoder']"," Title: What is the mean in the variational auto-encoder?Body: Here's a diagram of a variational auto-encoder.
+
+
+
+There are 2 nodes before the sample (encoding vector). One is the mean, one is the standard deviation. The mean one is confusing.
+
+Is it the mean of values or is it the mean deviation?
+
+$$\text{mean} = \dfrac{X_1+\ldots+X_n}{N}$$
+
+$$\text{mean deviation} = \dfrac{|X_1|+\ldots+|X_n|}{N}$$
+"
+"['machine-learning', 'computational-learning-theory', 'math']"," Title: Mathematical foundations of the ability to learnBody: I am an undergraduate student in applied mathematics with an interest in artificial intelligence. I am currently exploring topics where I could do research. Coming from a mathematical background I am interested in the question: Can we mathematically establish that a certain AI system has the ability to learn a task given some examples of how it should be done?
+I would like to know what research has been done on this topic and also what mathematical tools could be helpful in answering such questions.
+"
+"['machine-learning', 'deep-learning', 'classification', 'deep-neural-networks', 'object-detection']"," Title: Post-classification after inferenceBody: I designed a fire detection using Deep Learning based classification approach. In my training dataset, I have both fire and fire smokes are supposed to be detected (all under ""fire""; mostly real fires are detected. Fire smokes are less accurate).
+
+Now, after months, I need to differentiate them in my detection results. It would be difficult to retrain each class separately now. Another option that comes to mind is building a binary classifier after the main one, which takes the main detections as input and says which of the two classes each belongs to. However, I believe I may miss some fire smoke because it is detected less accurately.
+
+Are there any other approaches? What are the pros/cons of the various approaches?
+"
+['turing-test']," Title: Nuances of Turing test requirementsBody: My understanding is that there is no singular ""The Turing Test Ruleset"" and competitions don't all do it the exact same way. Still, I'm wondering about some commonly accepted rules and their nuances. My Googling is not producing any specifics about this.
+
+I think most people agree that the purpose of the humans who talk to the judges is to just act normal. In the so-called ""passed Turing test"" instances where the humans tried to fool judges into thinking they were AIs, I would say the tests should be thrown out and I've seen critics in the field agree with that.
+
+But this question is more about how the judge should act.
+
+Let's say we are doing a competition where the threshold is for 40% of judges to call an AI a human after 5 minutes of chat.
+
+During the chat, is the judge supposed to try to trick the potential AI into revealing itself, or is the judge supposed to attempt to act as unbiased and natural as possible, as if it is accepted the potential AI is human, and judge based on a completely natural conversation?
+
+For example, asking ""What is the value of PI out 10 digits?"" or ""What is 123456 times 654321?"" or ""If you saw a bunny and a dime stuck on the road with a truck about to hit them, which one would you pick up?"" would be trying to trick AIs into revealing themselves because you are relying on exploiting the fact that the AI might tell you the correct answer or the inhuman answer.
+
+This is as opposed to simply carrying on a natural and normal conversation, with no biases or expectations. If you came upon someone on the street you would not spend 5 minutes trying to hurriedly ask ridiculous interrogation questions in an effort to prove the other person was an AI.
+
+So is the point of a Turing test generally assumed to be an attempt to flush out AIs or an attempt to judge their natural human conversation without interrogation-like prejudice?
+"
+"['neural-networks', 'supervised-learning']"," Title: Specific Neural Network Subtype for Automatic Web Scraping (Hyperlink Identification)Body: For a personal project, I'm trying to download files from a specific set of websites using a web scraper. The scraper has to navigate multiple webpages to get to the files I want to download. I'd like to use AI to find each successive link in order to avoid the inflexibility of hardcoded DOM paths. In other words, I want an AI system that replicates the way that a human would click through links to get to a download page.
+
+This seems like a task for a supervised neural network, where the input would be the HTML page and the output would be the href link to the next webpage in the search. But that's about the extent of my AI knowledge.
+
+Which subtype of neural network would be most effective for this kind of problem?
+
+Note, I have looked at this related question. My problem would probably fall under bullet two.
+"
+"['neural-networks', 'comparison', 'objective-functions', 'gradient-descent', 'hyperparameter-optimization']"," Title: Is there a reason to choose regular momentum over Nesterov momentum for neural networks?Body: I've been reading about Nesterov momentum from here and it seems like a nice improvement over regular momentum with no extra cost whatsoever.
+
+However, is this really the case? Are there instances where regular momentum performs better than Nesterov momentum, or does Nesterov momentum perform at least as well as regular momentum all the time?
+"
+"['convolutional-neural-networks', 'data-preprocessing']"," Title: Acoustic Input Data: Decibel or PascalsBody: In acoustics decibel levels were defined to solve an issue with showing values that are interpretive, understandable, and easy to communicate in contrast to intensity or pressure in Pascals.
+
+$dB = 10\log_{10}\left(\frac{p^2}{p_{ref}^2}\right)$
+
+This log scale helps human understanding of an acoustic signal because human hearing is capable of discerning a difference of about 1 dB ref 20 $\mu Pa$. However, if I have input data for a 2D CNN, would it make more sense to have my input data in decibels, where it is on a logarithmic scale, or to have raw pressures?
+
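+In code, the conversion I am comparing against raw pressure is simply the following (a small NumPy sketch, assuming the standard 20 micro-pascal reference):
+
+import numpy as np
+
+P_REF = 20e-6   # 20 micro-pascal reference pressure
+
+def pressure_to_db(p_pascal):
+    return 10.0 * np.log10((p_pascal ** 2) / (P_REF ** 2))
+
+# roughly 0 dB, 40 dB and 100 dB
+print(pressure_to_db(np.array([2e-5, 2e-3, 2.0])))
+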
+Would one or the other benefit my model's learning?
+"
+"['convolutional-neural-networks', 'image-generation']"," Title: Best architecture to learn image rotation?Body: Given an input image and an angle I want the output to be the image rotated at the given angle.
+
+So I want to train a neural network to do this from scratch.
+
+What sort of architecture do you think would work for this if I want it to be lossless?
+
+I'm thinking of this architecture:
+
+256x256 image
+
+--> convolutions to 64x64 image with 4 channels
+
+--> convolutions to 32x32 image with 16 channels and so on
+
+until a 1 pixel image with 256x256 channels.
+
+And then combine this with the input angle, and then a series of deconvolutions back up to 256x256.
+
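+In code, the idea above would look roughly like this (a minimal PyTorch sketch; the kernel sizes, channel counts and the way the angle is injected are all assumptions on my part):
+
+import torch
+import torch.nn as nn
+
+class RotationNet(nn.Module):
+    def __init__(self):
+        super().__init__()
+        # 8 stride-2 convolutions: 256x256 -> 1x1, channels 3 -> 256
+        enc_ch = [3, 4, 16, 32, 64, 128, 256, 256, 256]
+        self.encoder = nn.Sequential(*[
+            layer
+            for cin, cout in zip(enc_ch[:-1], enc_ch[1:])
+            for layer in (nn.Conv2d(cin, cout, 4, stride=2, padding=1), nn.ReLU())
+        ])
+        # 8 stride-2 transposed convolutions back to 256x256; +1 input channel for the angle
+        dec_ch = [257, 256, 256, 128, 64, 32, 16, 4, 3]
+        dec_layers = []
+        for i, (cin, cout) in enumerate(zip(dec_ch[:-1], dec_ch[1:])):
+            dec_layers.append(nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1))
+            if i < len(dec_ch) - 2:
+                dec_layers.append(nn.ReLU())
+        self.decoder = nn.Sequential(*dec_layers)
+
+    def forward(self, image, angle):
+        # image: (B, 3, 256, 256), angle: (B,) e.g. in radians
+        code = self.encoder(image)                 # (B, 256, 1, 1)
+        angle = angle.view(-1, 1, 1, 1)            # broadcast the angle as a 1x1 channel
+        code = torch.cat([code, angle], dim=1)     # (B, 257, 1, 1)
+        return self.decoder(code)                  # (B, 3, 256, 256)
+
+net = RotationNet()
+out = net(torch.rand(2, 3, 256, 256), torch.tensor([0.5, 1.0]))
+print(out.shape)   # torch.Size([2, 3, 256, 256])
+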
+Do you think this would work? Could this be trained as a general rotation machine? Or is there a better architecture?
+
+I would also like to train the same architecture to do other transforms.
+"
+"['tensorflow', 'gaussian-process']"," Title: Error when using tensorflow HMC to marginalise GPR hyperparametersBody: I originally posted on SO (original post) but was suggested to post here.
+
+I would like to use TensorFlow (version 2) with Gaussian process regression to fit some data, and I found the Google Colab example online here 1. I have turned some of this notebook into the minimal example below.
+
+Sometimes the code fails with the following error when using MCMC to marginalize the hyperparameters, and I was wondering if anyone has seen this before or knows how to get around it:
+
+tensorflow.python.framework.errors_impl.InvalidArgumentError: Input matrix is not invertible.
+ [[{{node mcmc_sample_chain/trace_scan/while/body/_168/smart_for_loop/while/body/_842/dual_averaging_step_size_adaptation___init__/_one_step/transformed_kernel_one_step/mh_one_step/hmc_kernel_one_step/leapfrog_integrate/while/body/_1244/leapfrog_integrate_one_step/maybe_call_fn_and_grads/value_and_gradients/value_and_gradient/gradients/leapfrog_integrate_one_step/maybe_call_fn_and_grads/value_and_gradients/value_and_gradient/PartitionedCall_grad/PartitionedCall/gradients/JointDistributionNamed/log_prob/JointDistributionNamed_log_prob_GaussianProcess/log_prob/JointDistributionNamed_log_prob_GaussianProcess/get_marginal_distribution/Cholesky_grad/MatrixTriangularSolve}}]] [Op:__inference_do_sampling_113645]
+
+Function call stack:
+do_sampling
+
+
+1 https://colab.research.google.com/github/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Gaussian_Process_Regression_In_TFP.ipynb#scrollTo=jw-_1yC50xaM
+
+Note that some of the code below is a bit redundant in some sections, but it should be able to reproduce the error.
+
+Thanks!
+
+import time
+
+import numpy as np
+import tensorflow.compat.v2 as tf
+import tensorflow_probability as tfp
+tfb = tfp.bijectors
+tfd = tfp.distributions
+tfk = tfp.math.psd_kernels
+tf.enable_v2_behavior()
+
+import matplotlib
+import matplotlib.pyplot as plt
+from mpl_toolkits.mplot3d import Axes3D
+#%pylab inline
+# Configure plot defaults
+plt.rcParams['axes.facecolor'] = 'white'
+plt.rcParams['grid.color'] = '#666666'
+#%config InlineBackend.figure_format = 'png'
+
+def sinusoid(x):
+ return np.sin(3 * np.pi * x[..., 0])
+
+def generate_1d_data(num_training_points, observation_noise_variance):
+ """"""Generate noisy sinusoidal observations at a random set of points.
+
+ Returns:
+ observation_index_points, observations
+ """"""
+ index_points_ = np.random.uniform(-1., 1., (num_training_points, 1))
+ index_points_ = index_points_.astype(np.float64)
+ # y = f(x) + noise
+ observations_ = (sinusoid(index_points_) +
+ np.random.normal(loc=0,
+ scale=np.sqrt(observation_noise_variance),
+ size=(num_training_points)))
+ return index_points_, observations_
+
+# Generate training data with a known noise level (we'll later try to recover
+# this value from the data).
+NUM_TRAINING_POINTS = 100
+observation_index_points_, observations_ = generate_1d_data(
+ num_training_points=NUM_TRAINING_POINTS,
+ observation_noise_variance=.1)
+
+def build_gp(amplitude, length_scale, observation_noise_variance):
+ """"""Defines the conditional dist. of GP outputs, given kernel parameters.""""""
+
+ # Create the covariance kernel, which will be shared between the prior (which we
+ # use for maximum likelihood training) and the posterior (which we use for
+ # posterior predictive sampling)
+ kernel = tfk.ExponentiatedQuadratic(amplitude, length_scale)
+
+ # Create the GP prior distribution, which we will use to train the model
+ # parameters.
+ return tfd.GaussianProcess(
+ kernel=kernel,
+ index_points=observation_index_points_,
+ observation_noise_variance=observation_noise_variance)
+
+gp_joint_model = tfd.JointDistributionNamed({
+ 'amplitude': tfd.LogNormal(loc=0., scale=np.float64(1.)),
+ 'length_scale': tfd.LogNormal(loc=0., scale=np.float64(1.)),
+ 'observation_noise_variance': tfd.LogNormal(loc=0., scale=np.float64(1.)),
+ 'observations': build_gp,
+})
+
+x = gp_joint_model.sample()
+lp = gp_joint_model.log_prob(x)
+
+print(""sampled {}"".format(x))
+print(""log_prob of sample: {}"".format(lp))
+
+# Create the trainable model parameters, which we'll subsequently optimize.
+# Note that we constrain them to be strictly positive.
+
+constrain_positive = tfb.Shift(np.finfo(np.float64).tiny)(tfb.Exp())
+
+amplitude_var = tfp.util.TransformedVariable(
+ initial_value=1.,
+ bijector=constrain_positive,
+ name='amplitude',
+ dtype=np.float64)
+
+length_scale_var = tfp.util.TransformedVariable(
+ initial_value=1.,
+ bijector=constrain_positive,
+ name='length_scale',
+ dtype=np.float64)
+
+observation_noise_variance_var = tfp.util.TransformedVariable(
+ initial_value=1.,
+ bijector=constrain_positive,
+ name='observation_noise_variance_var',
+ dtype=np.float64)
+
+trainable_variables = [v.trainable_variables[0] for v in
+ [amplitude_var,
+ length_scale_var,
+ observation_noise_variance_var]]
+# Use `tf.function` to trace the loss for more efficient evaluation.
+@tf.function(autograph=False, experimental_compile=False)
+def target_log_prob(amplitude, length_scale, observation_noise_variance):
+ return gp_joint_model.log_prob({
+ 'amplitude': amplitude,
+ 'length_scale': length_scale,
+ 'observation_noise_variance': observation_noise_variance,
+ 'observations': observations_
+ })
+
+# Now we optimize the model parameters.
+num_iters = 1000
+optimizer = tf.optimizers.Adam(learning_rate=.01)
+
+# Store the likelihood values during training, so we can plot the progress
+lls_ = np.zeros(num_iters, np.float64)
+for i in range(num_iters):
+ with tf.GradientTape() as tape:
+ loss = -target_log_prob(amplitude_var, length_scale_var,
+ observation_noise_variance_var)
+ grads = tape.gradient(loss, trainable_variables)
+ optimizer.apply_gradients(zip(grads, trainable_variables))
+ lls_[i] = loss
+
+print('Trained parameters:')
+print('amplitude: {}'.format(amplitude_var._value().numpy()))
+print('length_scale: {}'.format(length_scale_var._value().numpy()))
+print('observation_noise_variance: {}'.format(observation_noise_variance_var._value().numpy()))
+
+
+num_results = 100
+num_burnin_steps = 50
+
+
+sampler = tfp.mcmc.TransformedTransitionKernel(
+ tfp.mcmc.HamiltonianMonteCarlo(
+ target_log_prob_fn=target_log_prob,
+ step_size=tf.cast(0.1, tf.float64),
+ num_leapfrog_steps=8),
+ bijector=[constrain_positive, constrain_positive, constrain_positive])
+
+adaptive_sampler = tfp.mcmc.DualAveragingStepSizeAdaptation(
+ inner_kernel=sampler,
+ num_adaptation_steps=int(0.8 * num_burnin_steps),
+ target_accept_prob=tf.cast(0.75, tf.float64))
+
+initial_state = [tf.cast(x, tf.float64) for x in [1., 1., 1.]]
+
+# Speed up sampling by tracing with `tf.function`.
+@tf.function(autograph=False, experimental_compile=False)
+def do_sampling():
+ return tfp.mcmc.sample_chain(
+ kernel=adaptive_sampler,
+ current_state=initial_state,
+ num_results=num_results,
+ num_burnin_steps=num_burnin_steps,
+ trace_fn=lambda current_state, kernel_results: kernel_results)
+
+t0 = time.time()
+samples, kernel_results = do_sampling()
+t1 = time.time()
+print(""Inference ran in {:.2f}s."".format(t1-t0))
+
+"
+"['convolutional-neural-networks', 'convolution']"," Title: When is max pooling exactly applied in convolutional neural networks?Body: When using convolutional networks on images with multiple channels, do we max pool after we sum the feature map from each channel, or do we max pool each feature map separately and then sum?
+
+What's the intuition behind this, or is there a difference between the two?
+"
+"['deep-learning', 'datasets', 'object-detection']"," Title: On which data evaluate an object detection model ? (similar or real life data ?)Body: I'm training an object detection model (SSD300) to detect and classify body poses in thermal images.
+
+Even though I have more than 2k different poses, the background does not change much (I have only 5 different points of view).
+
+I trained my model on these images (70% for the training and 30% for validation).
+
+Now, I want to evaluate the model on an unbiased dataset.
+
+Should I hold out images from my dataset for this purpose, or should I use a real-life dataset?
+
+(A good solution would be to have a real-life training set, but I don't have one.)
+
+I tried both, but as expected, I have an mAP=0.9 when evaluated on similar pictures and mAP=0.5 when evaluated on completely different images.
+
+Bonus question: is mAP a relevant metric when I want to show results to a client? (E.g., a client doesn't understand if I tell them ""my model has a mAP=0.7"".)
+
+Precision-recall? (But then I have to choose a pose classification threshold...)
+"
+"['neural-networks', 'machine-learning', 'convolutional-neural-networks', 'computer-vision', 'interpolation']"," Title: Interpolating image to increase resolution before feeding it to a neural networkBody: Interpolation is a common way to make an image fit the right input shape for a neural network.
+
+But is there any point in using interpolation to make it easier for the network to learn?
+
+I assume interpolation adds no extra information to the input; it only uses existing information to increase the resolution and fill in missing values.
+
+However, I have sometimes observed that, while I cannot see anything with my own eyes, using some kind of advanced interpolation technique, such as B-spline interpolation, makes it crystal clear that the object I am looking for is in the image, especially in the domain of infrared images.
+
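+For reference, the kind of preprocessing I am talking about is just this (a small SciPy sketch; the frame size and zoom factor are placeholders):
+
+import numpy as np
+from scipy.ndimage import zoom
+
+# toy low-resolution frame (values are placeholders for an infrared image)
+low_res = np.random.rand(32, 32).astype(np.float32)
+
+# nearest-neighbour vs cubic B-spline upsampling to the network input size
+nearest = zoom(low_res, 4, order=0)   # 128x128, no new structure
+bspline = zoom(low_res, 4, order=3)   # 128x128, smooth spline interpolation
+
+print(nearest.shape, bspline.shape)
+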
+So, is there any benefit to using interpolation rather than feeding the low-resolution image to a neural network?
+"
+"['deep-learning', 'training', 'accuracy', 'cross-validation', 'early-stopping']"," Title: Should I choose the model with highest validation accuracy or the model with highest mean of training and validation accuracy?Body: I'm training a deep network in Keras
on some images for a binary classification (I have around 12K images). Once in a while, I collect some false positives and add them to my training sets and re-train for higher accuracy.
+I split my training into 20/80 percent for training/validation sets.
+Now, my question is: which resulting model should I use? Always the one with higher validation accuracy, or maybe the higher mean of training and validation accuracy? Which one of the two would you prefer?
+Epoch #38: training acc: 0.924, validation acc: 0.944
+Epoch #90: training acc: 0.952, validation acc: 0.932
+
+"
+"['neural-networks', 'deep-learning', 'computational-learning-theory', 'vc-dimension', 'capacity']"," Title: How to estimate the capacity of a neural network?Body: Is it possible to estimate the capacity of a neural network model? If so, what are the techniques involved?
+"
+"['deep-learning', 'recurrent-neural-networks', 'audio-processing', 'speech-recognition', 'speech-synthesis']"," Title: How do I train a multiple-speaker model (speech synthesis) based on Tacotron 2 and espnet?Body: I'm new to Speech Synthesis & Deep Learning. Recently, I got a task as described below:
+
+I have problem in training a multi-speaker model which should be created by Tacotron2. And I was told I can get some ideas from espnet, which is a end-to-end audio tools library. In this way, I found a good dataset called libritts: http://www.openslr.org/60/. And it's also found at espnet: https://github.com/espnet/espnet#tts-results
+
+
+
+
+Here is my initial thought:
+
+
+- Download the LibriTTS corpus / read the espnet code (../espnet/egs/libritts/tts1/run.sh), learning how to train on the LibriTTS corpus with the PyTorch backend.
+
+
+
+ Difficulty: I cannot figure out how the author trained libritts.tacotron2.v1, as I didn't find anything about Tacotron 2 in the shell scripts related to run.sh. Maybe the author didn't make that code open-source.
+
+
+
+- Read the Tacotron 2 code and turn it into a multi-speaker network:
+
+
+
+ Difficulty: I found the code really complex... I just got lost reading it, without a clear understanding of how to adapt this model, because Tacotron 2 was designed around the LJSpeech dataset (only 1 speaker).
+
+
+
+- Train the multi-speaker model with a tiny subset of the dataset (http://www.openslr.org/60/) to save time.
+
+
+
+
+
+ It contains data from about 110 speakers, which should be enough for my scenario.
+
+
+In the end:
+
+Could you please help me with my questions? I've been puzzled by this problem for a long time...
+"
+"['machine-learning', 'autoencoders', 'variational-autoencoder', 'notation']"," Title: Why is exp used in encoder of VAE instead of using the value of standard deviation alone?Body: There's one VAE example here:
+https://towardsdatascience.com/teaching-a-variational-autoencoder-vae-to-draw-mnist-characters-978675c95776.
+
+And the source code of encoder can be found at the following URL: https://gist.github.com/FelixMohr/29e1d5b1f3fd1b6374dfd3b68c2cdbac#file-vae-py.
+
+The author is using $e$ (natural exponential) for calculating values of the embedding vector:
+
+$$z = \mu + \epsilon \times e^{\sigma}$$
+
+where $\mu$ is the mean, $\epsilon$ a small random number and $\sigma$ the standard deviation.
+
+Or in code
+
+z = mn + tf.multiply(epsilon, tf.exp(sd))
+
+
+This is not related to the code (practical programming), but why use the natural exponential instead of:
+
+$$z = \mu + \epsilon \times \sigma$$
+"
+"['deep-learning', 'computer-vision', 'image-segmentation']"," Title: What are the current tools and techniques for image segmentation in order of pragmatism?Body: To explain what I mean I'll depict the two extremes and something in the middle.
+
+1) Most pragmatic: If you need to just segment a few images for a design project, forget AI. Go into Adobe Photoshop and hand select the outline of the object you need to extract.
+
+2) Middle ground: If you need to build a reasonably accurate app for human aided segmentation of images, use a pre-trained model on a well known architecture.
+
+3) Least pragmatic: If you need to reach unprecedented levels of accuracy on a large volume of images. Do heavily funded research on new and better methods of image segmentation.
+
+So I'm most interested in painting out the spectrum for that middle ground. That is, how much of the wheel needs to be reinvented versus the complexity of the problem.
+
+For example (and this is what led me here), I need to segment out dogs from several hundred photos that owners have taken. The dog is probably going to be among the main subjects of the photos. Do I need to reinvent the wheel (design an architecture)? Probably not. Do I even need to change the tyres (train my own model)? I'm guessing not. Do I need to code at all? I'm not sure.
+
+While I'm happy to get answers about my use case, it would be awesome if someone could map out the spectrum on my unfinished rainbow.
+"
+['deep-learning']," Title: Improve prediction with LSTMs when data have no particular trend (complex)Body: I have a deep learning problem, I am working with the CMAPSS dataset, which contains data simulating the degradation of several aircraft engines. The aim is to predict from data collected on a machine in full operation, the remaining useful time at this machine. My problem is the following when the features (sensor data) have a specific trend (either up or down), my model (LSTMs) predicts good results but when the data have no trend, my deep learning model gets a very bad score. I must specify that I work with sequential data. my dataset contains several aircraft engines with data recorded by the sensors. my question is how to process the data that has no particular trend in deep learning.
+
+You will find below some pictures of my dataset, where RUL is the remaining life of a machine, unit is the machine identity, and s1 to s3 are the sensor data.
+
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks', 'geometric-deep-learning']"," Title: How to represent and work with the feature matrix for graph convolutional network (GCN) if the number of features for each node is different?Body: I have a question regarding features representation for graph convolutional neural network.
+
+In my case, all nodes have a different number of features, and, for now, I don't really understand how I should work with these constraints. I cannot just reduce the number of features or add meaningless features in order to make the number of features on each node the same, because it would add too much extra noise to the network.
+
+Are there any ways to solve this problem? How should I construct the feature matrix?
+
+I'd appreciate any help, as well as any links to papers that address this problem.
+"
+"['optimization', 'hyperparameter-optimization', 'adam']"," Title: Are there optimizers that schedule their learning rate, momentum etc. autonomously?Body: I'm aware there are some optimizer such as Adam that adjust the learning rate for each dimension during training. However, afaik, the maximum learning rate they can have is still determined by the user's input.
+
+So, I wonder whether there are optimizers that can increase/decrease their overall learning rate and other parameters (such as momentum or even weight decay) autonomously, depending on some metric, e.g. the validation loss, a running average of the gradients, etc.
+"
+"['reinforcement-learning', 'environment', 'value-functions']"," Title: In reinforcement learning, is the value of terminal/goal state always zero?Body: Let's assume we are in a $3 \times 3$ grid world with states numbered as $0,1, \dots, 8$. Suppose that the goal state is $8$, the reward of landing in the goal state is $10$, and the reward of just wandering around in the grid world is $0$. Is the state-value of state $8$ always $0$?
+"
+"['objective-functions', 'variational-autoencoder', 'latent-variable', 'kl-divergence', 'evidence-lower-bound']"," Title: Why is the evidence equal to the KL divergence plus the loss?Body: Why is the equation $$\log p_{\theta}(x^1,...,x^N)=D_{KL}(q_{\theta}(z|x^i)||p_{\phi}(z|x^i))+\mathbb{L}(\phi,\theta;x^i)$$ true, where $x^i$ are data points and $z$ are latent variables?
+I was reading the original variational autoencoder paper and I don't understand how the marginal is equal to the RHS of the equation. How does the marginal equal the KL divergence of $p$ with its approximate distribution plus the variational lower bound?
+"
+"['neural-networks', 'machine-learning', 'probability', 'feedforward-neural-networks']"," Title: Neural network seems to just figure out the probability of a specific resultBody: I am really new to neural networks, so i was following along with a video series, created by '3blue1brown' on youtube. I created an implementation of the network he explained in c++. I am attempting to train the network to recognize hand written characters, using the MNIST data set. What seems to be happening is, rather than actually learn how to recognize the characters, it is just learning how many of each input there is in the data set and doing it with probability. When testing on a smaller dataset this is more noticeable, for example i was testing on a set with 100, and the numbers that were more frequent would always have a slightly higher activation at the end, and others where very close to 0. Here is my code if it helps:
+
+
+
+#include <random>
+#include <vector>
+#include <iostream>
+#include <fstream>
+#include <cmath>
+
+double weightMutationRate = 1;
+double biasMutationRate = 1;
+
+//Keeps track of the weights, biases, last activation and the derivatives
+//of the weights and biases for a single node in a neural network.
+struct Node
+{
+ std::vector<double> weights;
+ std::vector<double> derivWeights;
+ double activation;
+ double derivActivation;
+ double bias;
+ double derivBias;
+};
+
+//Struct to hold the nodes in each layer of a network.
+struct Layer
+{
+ std::vector<struct Node> nodes;
+};
+
+//Struct to hold the layers in a network.
+struct Network
+{
+ std::vector<struct Layer> layers;
+ double cost;
+};
+
+//Stores the inputs and outputs for a single training example.
+struct Data
+{
+ std::vector<double> inputs;
+ std::vector<double> answers;
+};
+
+//Stores all of the data to be used to train the neural network.
+struct DataSet
+{
+ std::vector<struct Data> data;
+};
+
+//Generates a double by creating a uniform distribution between the two arguments.
+double RandomDouble(double min, double max)
+{
+ std::random_device seed;
+ std::mt19937 random(seed());
+ std::uniform_real_distribution<> dist(min, max);
+ return dist(random);
+}
+
+//Constructs a network with the node count in each layer defined with 'layers'.
+//the first layer will not have any weights and biases and will simply have
+//the activation of the input data.
+struct Network CreateNetwork(std::vector<int> layers, double minWeight = -1, double maxWeight = 1, double minBias = -1, double maxBias = 1)
+{
+ //Network to construct.
+ struct Network network;
+ //Used to store the nodes in the previous layer.
+ int prevLayerNodes;
+ //Iterates through the layers vector and constructs a neural network with the values in
+ //the vector determining how many nodes that are in each of the layers.
+ bool isFirstLayer = true;
+ for (int layerNodes : layers)
+ {
+ //Layer to construct.
+ struct Layer layer;
+ //Creating the nodes for the current layer.
+ for (int i = 0; i < layerNodes; i++)
+ {
+ //Node to construct
+ struct Node node;
+ //Checks to see if the current layer is not the input layer, which does not have
+ //any weights or biases.
+ if (!isFirstLayer)
+ {
+ //Creating weights for the connections between this node
+ //and the nodes in the previous layer.
+ for (int i = 0; i < prevLayerNodes; i++)
+ {
+ //Getting a random double for the weight, between the bounds set in the arguments.
+ double inputWeight = RandomDouble(minWeight, maxWeight);
+ //Adding the inputWeight to the current node.
+ node.weights.push_back(inputWeight);
+ //Adding a 0 to the deriv weights for the weight just added.
+ node.derivWeights.push_back(0.0);
+ }
+ //Getting a random double for the bias, between the bounds set in the arguments.
+ double bias = RandomDouble(minBias, maxBias);
+ //Adding the bias to the current node.
+ node.bias = bias;
+            }
+            //Adding the node to the layer.
+            layer.nodes.push_back(node);
+ }
+ //Updating the isFirstLayer variable if the current layer is the input layer.
+ if (isFirstLayer)
+ {
+ isFirstLayer = false;
+ }
+ //Updating the prevLayerNodes variable for use in the next layer.
+ prevLayerNodes = layerNodes;
+ //Adding the layer to the network.
+ network.layers.push_back(layer);
+ }
+ //Returning the constructed network.
+ return network;
+}
+
+//Outputs the network passed to the networkPrint.txt file.
+void PrintNetwork(struct Network network)
+{
+ std::cout << ""Printing network ..."" << std::endl;
+ std::ofstream networkPrintFile;
+ networkPrintFile.open(""networkPrint.txt"");
+    //Iterates through each of the layers in the network.
+ for (int i = 0; i < network.layers.size(); i++)
+ {
+ std::cout << ""Layer : "" << i << std::endl;
+ networkPrintFile << ""Layer "" << i << "":"" << std::endl;
+ //Iterates through each of the nodes in the current layer.
+ for (int j = 0; j < network.layers[i].nodes.size(); j++)
+ {
+ networkPrintFile << ""\t"" << ""Node "" << j << "":"" << std::endl;
+ //Outputs the node's activation into networkPrintFile.
+ double activation = network.layers[i].nodes[j].activation;
+ networkPrintFile << ""\t\t"" << ""Activation"" << "": "" << activation << std::endl;
+ //Outputs the node's derivActivation into networkPrintFile.
+ double derivActivation = network.layers[i].nodes[j].derivActivation;
+ networkPrintFile << ""\t\t"" << ""Deriv Activation"" << "": "" << derivActivation << std::endl;
+ double bias = network.layers[i].nodes[j].bias;
+ double derivBias = network.layers[i].nodes[j].derivBias;
+ //Outputs the bias and derivative of the bias.
+ networkPrintFile << ""\t\t"" << ""Bias"" << "": "" << bias << std::endl;
+ networkPrintFile << ""\t\t"" << ""Deriv Bias"" << "": "" << derivBias << std::endl;
+ //Iterates through all of the inputWeights in the current node.
+ networkPrintFile << ""\t\t"" << ""Weights"" << "":"" << std::endl;
+ for (int k = 0; k < network.layers[i].nodes[j].weights.size(); k++)
+ {
+ double inputWeight = network.layers[i].nodes[j].weights[k];
+ double derivWeight = network.layers[i].nodes[j].derivWeights[k];
+ networkPrintFile << ""\t\t\t"" << ""Weight "" << k << "":"" << std::endl;
+ networkPrintFile << ""\t\t\t\t"" << ""Value"" << "":"" << inputWeight << std::endl;
+ networkPrintFile << ""\t\t\t\t"" << ""Derivative"" << "":"" << derivWeight << std::endl;
+ }
+ }
+ }
+ std::cout << ""Done"" << std::endl;
+}
+
+//Takes an input and performs a mathematical sigmoid
+//function on it and returns the value.
+// 1
+// σ(x) = ---------
+// 1 + e^x
+double Sigmoid(double input)
+{
+ double expInput = std::exp(-input);
+ double denom = expInput + 1;
+ double value = 1 / denom;
+ return value;
+}
+
+//Returns the activation of the node passed in give the previous layer.
+double CalculateNode(struct Node &node, struct Layer &prevLayer)
+{
+ //Keeps a runing total of the weights and activations added up so far.
+ double total = 0.0;
+ int weightCount = node.weights.size();
+ //Iterated through each of the weights, and thus each of the
+ //nodes in the previous layer to find the weight * activation.
+ for (int i = 0; i < weightCount; i++)
+ {
+ //Calculated the current weight and activation and
+ //adds it to the 'total' variable.
+ double weight = node.weights[i];
+ double input = prevLayer.nodes[i].activation;
+ double value = weight * input;
+ total += value;
+ }
+ //Add the node's bias to the total.
+ total += node.bias;
+ //Normalises the node's activation value by passing it through
+ //a sigmoid function, which bounds it between 0 and 1.
+ double normTotal = Sigmoid(total);
+ //Returns the caclulated value for this node.
+ return normTotal;
+}
+
+//Adds the activation values to a layer passed in, given the previous layer.
+void CaclulateLayer(struct Layer &layer, struct Layer &prevLayer)
+{
+ //Iterates through all of the nodes and calculated their activations.
+ for (struct Node &node : layer.nodes)
+ {
+ double activation = CalculateNode(node, prevLayer);
+ //Setting the activation to the node.
+ node.activation = activation;
+ }
+}
+
+//Takes in the first layer of the neural network and iterates through the
+//nodes and sets each input to each node in a loop.
+void SetInputs(struct Layer &layer, std::vector<double> inputs)
+{
+ for (int i = 0; i < layer.nodes.size(); i++)
+ {
+ //Setting the node's activation to the corrosponding input.
+ layer.nodes[i].activation = inputs[i];
+ }
+}
+
+//Takes in a network and inputs and calculates the value of
+//activation for every node for a single input vector.
+void CalculateNetwork(struct Network &network, std::vector<double> inputs)
+{
+ //Setting the activations of the first layer to the inputs vector.
+ SetInputs(network.layers[0], inputs);
+ //Iterates through all of the layers, apart from the first layer, and
+ //calculated the activations of the nodes in that layer.
+ for (int i = 1; i < network.layers.size(); i++)
+ {
+ //Getting the layer to calculate to activations on and the
+ //previous layer, which already has it's activations calculated.
+ struct Layer currentLayer = network.layers[i];
+ struct Layer prevLayer = network.layers[i - 1];
+ //Calculating the nodes on the current layer.
+ CaclulateLayer(currentLayer, prevLayer);
+ //Setting the currentLayer back into the network struct with
+ //all of the activations in it now calculated.
+ network.layers[i] = currentLayer;
+ }
+}
+
+//Calculates the sum of the differences between the outputs and the correct
+//values squared.
+//
+// Cost = Σ((a-y)^2)
+//
+double CalculateCost(struct Network &network, std::vector<double> correctOutputs)
+{
+ //Keeps track of the current sum of the costs.
+ double totalCost = 0.0;
+ //The layer of the network that holds the calculated values, the
+ //last layer in the network.
+ struct Layer outputLayer = network.layers[network.layers.size() - 1];
+    //Loops through all the nodes in the output layer and compares them
+    //to their corresponding correctOutput value, calculates the cost
+    //and adds it to the running total, totalCost.
+ for (int i = 0; i < outputLayer.nodes.size(); i++)
+ {
+ struct Node node = outputLayer.nodes[i];
+ double calculatedActivation = node.activation;
+ double correctActivation = correctOutputs[i];
+ double diff = calculatedActivation - correctActivation;
+ double modDiff = diff * diff;
+ //Adding the cost to the sum of the other costs.
+ totalCost += modDiff;
+ }
+ //Returning the value of the calculated cost.
+ return totalCost;
+}
+
+//Takes in the output layer of the network and calculates the derivatives of the
+//cost function with respect to the activations in each node. this value is then
+//stored on the Node struct.
+void LastLayerDerivActivations(struct Layer &layer, std::vector<double> correctOutputs)
+{
+ //Iterating through all the nodes in the layer.
+ for (int i = 0; i < layer.nodes.size(); i++)
+ {
+ //Getting the values of the node output and correct output.
+ double activation = layer.nodes[i].activation;
+ double correctOutput = correctOutputs[i];
+ //Calculating the partial derivative of the cost function with respect
+ //to the current node's activation value.
+ double activationDiff = activation - correctOutput;
+ double derivActivation = 2 * activationDiff;
+ //Setting the activation partial derivative to the layer passed in.
+ layer.nodes[i].derivActivation = derivActivation;
+ }
+}
+
+//Returns the derivative of the sigmoid function.
+// d
+// ---- σ(x) = σ(x)(1 - σ(x))
+// dx
+double DerivSigma(double input)
+{
+ double sigma = Sigmoid(input);
+ double value = sigma * (1 - sigma);
+ return value;
+}
+
+//Takes in a node and the layer that the node takes inputs from, and adds
+//to the derivWeight and derivBias of the node and to each of the
+//derivActivations in the previous layer, so they can be used when this
+//function is later called on the nodes of that previous layer.
+void NodeDeriv(struct Node &node, struct Layer &prevLayer)
+{
+ //Starting the total at the bias.
+ double total = node.bias;
+ //Looping through all the weights and biases to find z(x).
+ // z(x) = a_1*w_1 + a_2*w_2 + ... + a_n*w_n + b
+ for (int i = 0; i < node.weights.size(); i++)
+ {
+ double weight = node.weights[i];
+ double activation = prevLayer.nodes[i].activation;
+ double value = weight * activation;
+ //Adding to the running total for z(x).
+ total += value;
+ }
+ //Finding the derivative of the cost function with respect to the
+ //z(x) by multiplying the DerivSigma() by the node's derivActivation
+ //using the chain rule.
+ double derivAZ = DerivSigma(total);
+ double derivCZ = derivAZ * node.derivActivation;
+ //The derivative of the cost with respect to the bias is the same as
+ //the derivative of the cost function with respect to z(x) since
+ //d/db z(x) = 1
+ node.derivBias += derivCZ;
+ //Iterating through all of the nodes and weights to find the derivatives
+ //of all of the weights for the node and the activations on the
+ //previous layer.
+ for (int i = 0; i < node.weights.size(); i++)
+ {
+ // dc/dw = dc/dz * activation
+ double derivCW = derivCZ * prevLayer.nodes[i].activation;
+ // dc/da = dc/dz * weight
+ double derivCA = derivCZ * node.weights[i];
+ //Adding the weights and activations to the node objects.
+ node.derivWeights[i] += derivCW;
+ prevLayer.nodes[i].derivActivation += derivCA;
+ }
+ //Resetting the activation derivative.
+ node.derivActivation = 0;
+}
+
+//Takes in a layer and iterates through all the nodes in order to find
+//the derivatives of the weights and biases in the current layer and the
+//activations in the previous layer.
+void LayerDeriv(struct Layer &layer, struct Layer &prevLayer)
+{
+ for (int i = 0; i < layer.nodes.size(); i++)
+ {
+ NodeDeriv(layer.nodes[i], prevLayer);
+ }
+}
+
+//Takes in a network and uses backpropagation to find the derivatives of
+//all the nodes for a single training example.
+void NetworkDeriv(struct Network &network, std::vector<double> expectedOutputs)
+{
+ //Calculating the derivatives of the activations in the last layer.
+ LastLayerDerivActivations(network.layers[network.layers.size() - 1], expectedOutputs);
+ //Looping through all the layers to find the derivatives of all of
+ //the weights and activations in the network for this training example.
+ for (int i = network.layers.size() - 1; i > 0; i--)
+ {
+ LayerDeriv(network.layers[i], network.layers[i - 1]);
+ }
+}
+
+//Takes in an input string and char and will return a vector of the string split
+//by the char. The char is lost in this conversion.
+std::vector<std::string> SplitString(std::string stringToSplit, char delimiter)
+{
+ //Creating the output vector.
+ std::vector<std::string> outputVector;
+ //Initialising the lastDelimiter to -1, since the first string should be split as if
+ //the char before it was the splitter.
+ int lastDelimiterIndex = -1;
+ for (int i = 0; i < stringToSplit.size(); i++)
+ {
+ //Getting the current char.
+ char chr = stringToSplit[i];
+ //If the current char is the delimiter, create a new substring in the vector.
+ if (chr == delimiter)
+ {
+ //Creating the new substring at the delimiter and adding it to the end
+ //of the output vector.
+ std::string subString = stringToSplit.substr(lastDelimiterIndex + 1, i - lastDelimiterIndex - 1);
+ outputVector.push_back(subString);
+ //Setting the last delimiter variable to the current character.
+ lastDelimiterIndex = i;
+ }
+ }
+ //Adding the last section of the string to the output vector, since there is no
+ //delimiter and will not be added in the for loop.
+ std::string subString = stringToSplit.substr(lastDelimiterIndex + 1, stringToSplit.size() - lastDelimiterIndex - 1);
+ outputVector.push_back(subString);
+ //Returning the split string as a vector of strings.
+ return outputVector;
+}
+
+//Takes in a vector of strings and converts it to a vector of doubles. normalise argument
+//sets what value will be taken to be 1, and other numbers will be a fraction of that.
+//Set normalise to 0 to disable normalisation.
+std::vector<double> ConvertStringVectorToDoubleVector(std::vector<std::string> input, int normalise = 0)
+{
+ std::vector<double> convertedVector;
+ //Iterating through all the strings in the input vector.
+ for (std::string str : input)
+ {
+ //Converting the string into a double.
+ double value = stod(str);
+ //Checks to see if normalisation is enabled.
+ if (normalise != 0)
+ {
+ //Normalising the double.
+ value /= normalise;
+ }
+ //Adding the double to the output vector.
+ convertedVector.push_back(value);
+ }
+ //Returning the converted vector.
+ return convertedVector;
+}
+
+//Takes in a string of data and uses it to create a DataSet object to
+//be used in the training of the neural network.
+struct DataSet FormatData(std::string dataString)
+{
+ struct DataSet dataSet;
+ //Splitting the input string into the separate images.
+ std::vector<std::string> imageSplit = SplitString(dataString, '|');
+ //Looping through all of the images.
+ for (int i = 0; i < imageSplit.size(); i++)
+ {
+ //Getting the current image string.
+ std::string imageData = imageSplit[i];
+ //Splitting the image between the inputs and expected outputs.
+ std::vector<std::string> ioSplit = SplitString(imageData, '/');
+ std::string inputs = ioSplit[0];
+ std::string outputs = ioSplit[1];
+ //converting the input and output strings into string arrays of the values.
+ std::vector<std::string> inputVectorString = SplitString(inputs, ',');
+ std::vector<std::string> outputVectorString = SplitString(outputs, ',');
+ //Converting the string arrays into double arrays and normalising the input doubles.
+ std::vector<double> inputVector = ConvertStringVectorToDoubleVector(inputVectorString, 255);
+ std::vector<double> outputVector = ConvertStringVectorToDoubleVector(outputVectorString);
+ //Creating a new Data object.
+ struct Data data;
+ data.inputs = inputVector;
+ data.answers = outputVector;
+ //Adding the object to the dataset.
+ dataSet.data.push_back(data);
+ }
+ //Returning the completed dataset.
+ return dataSet;
+}
+
+//Takes in a filename and extracts all of the ascii data from the
+//file and calls the FormatData function to create a DataSet object.
+struct DataSet CreateDataSetFromFile(std::string fileName)
+{
+ //Opening file.
+ std::ifstream dataFile;
+ dataFile.open(fileName);
+ //Storing file data in a string.
+ std::string data;
+ dataFile >> data;
+ //Creating DataSet object.
+ struct DataSet dataSet = FormatData(data);
+ //Returning completed DataSet object.
+ return dataSet;
+}
+
+//Takes in a network and a Data object and runs the network and adds
+//to the derivatives of the network for that one training example.
+void NetworkIteration(struct Network &network, struct Data data)
+{
+ //Extracting the input and output data from the data object.
+ std::vector<double> inputs = data.inputs;
+ std::vector<double> outputs = data.answers;
+ //Calculating the activations of the network for this data.
+ CalculateNetwork(network, inputs);
+ //Calculating the cost for this iteration and adding it to the total.
+ double cost = CalculateCost(network, outputs);
+ network.cost += cost;
+ //Calculating the derivatives for the network weights and biases
+ //for this training example.
+ NetworkDeriv(network, outputs);
+}
+
+//Takes in a node and calculates the average of the derivatives over
+//the dataset and then multiplies them by a fixed mutation rate and
+//applies the derivatives to the node's values.
+void GradientDecentNode(struct Node &node, int dataCount)
+{
+ //Iterating through all of the weights of the node.
+ for (int i = 0; i < node.weights.size(); i++)
+ {
+ double weight = node.weights[i];
+ double derivWeight = node.derivWeights[i];
+ //Getting the average over all of the training data.
+ derivWeight /= dataCount;
+ //Applying a constant multiplier to alter the rate at which it mutates.
+ derivWeight *= weightMutationRate;
+ //Subtracting the derivative from the weight.
+ node.weights[i] -= derivWeight;
+ //Resetting the weight derivative.
+ node.derivWeights[i] = 0;
+ }
+ double derivBias = node.derivBias;
+ //Getting the average over all of the training data.
+ derivBias /= dataCount;
+ //Applying a constant multiplier to alter the rate at which it mutates.
+ derivBias *= biasMutationRate;
+ //Subtracting the derivative from the bias.
+ node.bias -= derivBias;
+ //Resetting the bias derivative.
+ node.derivBias = 0;
+}
+
+//Takes in a layer and iterates through all of the nodes in the layer
+//and applies all of their derivatives to them.
+void GradientDecentLayer(struct Layer &layer, int dataCount)
+{
+ for (struct Node &node : layer.nodes)
+ {
+ GradientDecentNode(node, dataCount);
+ }
+}
+
+//Takes in a network and iterates through all of the layers and applies
+//all of the derivatives to them.
+void GradientDecentNetwork(struct Network &network, int dataCount)
+{
+ for (int i = 1; i < network.layers.size(); i++)
+ {
+ GradientDecentLayer(network.layers[i], dataCount);
+ }
+}
+
+//Iterates through all of the training data in dataSet and calculates the derivatives
+//of the weights and biases and then performs gradient descent using the derivatives.
+void TrainNetworkSingle(struct Network &network, struct DataSet dataSet)
+{
+ //Iterating through all of the training data.
+ for (struct Data data : dataSet.data)
+ {
+ //Calculating the network for a single training example.
+ NetworkIteration(network, data);
+ }
+ //Performing gradient descent using the accumulated derivatives.
+ GradientDecentNetwork(network, dataSet.data.size());
+}
+
+void TrainNetwork(struct Network &network, struct DataSet dataSet, int iterations)
+{
+ for (int i = 0; i < iterations; i++)
+ {
+ TrainNetworkSingle(network, dataSet);
+ std::cout << network.cost << std::endl;
+ network.cost = 0;
+ }
+}
+
+int main()
+{
+ struct Network network = CreateNetwork({784, 784, 16, 16, 10});
+ struct DataSet dataSet = CreateDataSetFromFile(""data.txt"");
+ TrainNetwork(network, dataSet, 100);
+ PrintNetwork(network);
+ return 0;
+}
+```
+
+"
+"['machine-learning', 'computer-vision', 'support-vector-machine', 'image-recognition', 'bag-of-features']"," Title: Does the bag-of-visual-words method improve the classification accuracy?Body: I'm a beginner in computer vision. I want to know which option among the following two can get better accuracy of image classification.
+
+- SIFT features + SVM
+- Bag-of-visual-words features + SVM
+
+Here's a reference: https://www.mathworks.com/help/vision/ug/image-classification-with-bag-of-visual-words.html.
+"
+"['machine-learning', 'training', 'cross-validation']"," Title: While we split data in training and test data, why we have two pairs of each?Body: Why do we split the data into two parts, and then split those segments into training and testing data? Why do we have two sets of data for each training and test data?
+"
+"['search', 'norvig-russell', 'uniform-cost-search', 'pseudocode']"," Title: Understanding the pseudocode of uniform-cost search from the book ""Artificial Intelligence: A Modern Approach""Body: On page 84 of Russell & Norvig's book ""Artificial Intelligence: A Modern Approach Book"" (3rd edition), the pseudocode for uniform cost search is given. I provided a screenshot of it here for your convenience.
+
+I am having trouble understanding the highlighted line if child.STATE is not in explored **or** frontier then
+
+
+
+Shouldn't that be
+if child.STATE is not in explored **and** frontier then
+
+The way it's written, it seems to suggest that if the child node has already been explored, but not currently in the frontier, then we add it to the frontier, but I don't think that's correct. If it's already been explored, then it means we already found the optimal path for this node previously and should not be processing it again. Is my understanding wrong?
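+
+To make my reading concrete, here is a rough Python sketch (hypothetical helper names, not from the book) of how I currently read that line, i.e. the child is only added when its state is in neither collection:
+
+def handle_child(child, frontier, frontier_states, explored):
+    # 'not in explored or frontier' read as: not in (explored union frontier)
+    if child.state not in explored and child.state not in frontier_states:
+        frontier.push(child)
+        frontier_states.add(child.state)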
+"
+"['keras', 'generative-adversarial-networks', 'time-series', 'sequence-modeling']"," Title: Generation of realistic real-valued sequences using Wasserstein GAN failsBody: My goal is to generate artificial sequences of real-valued data (e.g. time series) with GANs. Starting simple I tried to generate realistic sine-waves using a Wasserstein GAN. But even on this simple task it fails to generate any useful samples.
+
+This is my model:
+
+Generator
+
+from keras.models import Sequential
+from keras.layers import LSTM, Dense, Reshape, Conv1D, MaxPooling1D, LeakyReLU, Flatten
+
+model = Sequential()
+model.add(LSTM(20, input_shape=(50, 1)))
+model.add(Dense(40, activation='linear'))
+model.add(Reshape((40, 1)))
+
+
+Critic
+
+model = Sequential()
+model.add(Conv1D(64, kernel_size=5, input_shape=(40, 1), strides=1))
+model.add(MaxPooling1D(3, strides=2))
+model.add(LeakyReLU(alpha=0.2))
+model.add(Conv1D(64, kernel_size=5, strides=1))
+model.add(MaxPooling1D(3, strides=2))
+model.add(LeakyReLU(alpha=0.2))
+model.add(Flatten())
+model.add(Dense(1))
+
+
+Is this model capable of learning such a task or should I use a different model architecture?
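+
+For context, the critic update I am assuming follows the usual WGAN recipe (Wasserstein loss as the mean of label times score, weight clipping after each critic batch). This is only a rough sketch: generator and critic stand for the two Sequential models above, and sample_real_batch is a placeholder for my data loader.
+
+import numpy as np
+from keras import backend as K
+
+def wasserstein_loss(y_true, y_pred):
+    return K.mean(y_true * y_pred)
+
+critic.compile(optimizer='rmsprop', loss=wasserstein_loss)
+
+batch_size = 64
+real = sample_real_batch(batch_size)                      # (batch_size, 40, 1) real sine windows
+noise = np.random.normal(size=(batch_size, 50, 1))
+fake = generator.predict(noise)
+
+critic.train_on_batch(real, -np.ones((batch_size, 1)))    # push the critic score up on real data
+critic.train_on_batch(fake, np.ones((batch_size, 1)))     # push the critic score down on fakes
+
+for layer in critic.layers:                               # weight clipping from the original WGAN
+    layer.set_weights([np.clip(w, -0.01, 0.01) for w in layer.get_weights()])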
+"
+"['machine-learning', 'comparison', 'regularization', 'l2-regularization', 'l1-regularization']"," Title: Which is a better form of regularization: lasso (L1) or ridge (L2)?Body: Given a ridge and a lasso regularizer, which one should be chosen for better performance?
+
+An intuitive graphical explanation (intersection of the elliptical contours of the loss function with the region of constraints) would be helpful.
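+
+For reference, the two constrained forms I have in mind (for a linear model) are
+$$\min_w \|y - Xw\|_2^2 \quad \text{s.t.} \quad \|w\|_1 \le t \qquad \text{(lasso, L1)}$$
+$$\min_w \|y - Xw\|_2^2 \quad \text{s.t.} \quad \|w\|_2^2 \le t \qquad \text{(ridge, L2)}$$
+or, equivalently, the penalised objectives $\|y - Xw\|_2^2 + \lambda\|w\|_1$ and $\|y - Xw\|_2^2 + \lambda\|w\|_2^2$.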
+"
+['neural-networks']," Title: Strategy of using intermediate layers of a neural network as features?Body: There is a popular strategy of using a neural network trained on one task to produce features for another related task by ""chopping off"" the top of the network and sewing the bottom onto some other modeling pipeline.
+
+Word2Vec models employ this strategy, for example.
+
+Is there an industry-popular term for this strategy? Are there any good resources that discuss its use in general terms?
+"
+"['variational-autoencoder', 'latent-variable', 'evidence-lower-bound']"," Title: In this VAE formula, why do $p$ and $q$ have the same parameters?Body: In $$\log p_{\theta}(x^1,...,x^N)=D_{KL}(q_{\theta}(z|x^i)||p_{\phi}(z|x^i))+\mathbb{L}(\phi,\theta;x^i),$$ why does $p(x^1,...,x^N)$ and $q(z|x^i)$ have the same parameter $\theta?$
+Given that $p$ is just the probability of the observed data and $q$ is the approximation of the posterior, shouldn't they be different distributions and thus their parameters different?
+"
+['natural-language-processing']," Title: Creating a noising model for NLP that models human noisingBody: I'm trying to create a noising model that accurately reflects how people would noise name data. I was thinking of randomly switching out characters and creating a probability over which character gets switched in based on keyboard closeness and how similar anatomically another character looks to it. For example, ""l"" has a higher prob of being switched in with ""|"" and ""k"" cause ""k"" is close by on the keyboard and ""|"" looks like ""l"", but that requires a lot of hard coding and reward for that seemed low because that's not the only 2 ways people can noise things. I also had the same idea above except use template matching of every character to every other character but itself and that would give it a similarity score then divide that by the sum over all chars to get the probs. Any other suggestions? My goal is the maximize closeness to actual human noising.
+"
+"['machine-learning', 'deep-learning']"," Title: Trained a regression network and getting EXACT same result on validation set, on every epochBody: I trained this network from this github.
+
+The training went well, and returns nice results for new, unseen images.
+
+On training, the loss changed (decreased), thus I must assume the weights changed as well.
+
+On training, I saved a snapshot of the net every epoch.
+
+When trying to run a validation set through each epoch's snapshot, I get the exact same results on every epoch.
+
+How can this be possible? What's causing this?
+"
+"['search', 'path-planning', 'path-finding']"," Title: How to report the solution path of a search algorithm on a graph?Body: I'm working on a problem where we are given a graph and asked to perform various search algorithms (BFS, DFS, UCS, A*, etc.) and the goal state is to visit all nodes in the graph. After all nodes are visited, we need to print out the ""solution path."" However, I am a bit confused on what ""path"" means in AI.
+
+For simplicity, let's just consider a graph of 3 nodes: A, B and C with 2 undirected edges (A, B) (A, C). If we perform BFS on this graph starting at node A and traversing alphabetically, we'd visit A, then B, then C. So, in this case, is the solution path A -> B -> C, i.e. the order in which the nodes are visited? Or is the solution path A -> B -> A -> C? Basically saying that we go from A to B, but to go from B to C, we must go through A again.
+"
+"['neural-networks', 'deep-learning', 'weights', 'weights-initialization']"," Title: How are newer weight initialization techniques better than zero or random initialization?Body: How do newer weight initialization techniques (He, Xavier, etc) improve results over zero or random initialization of weights in a neural network? Is there any mathematical evidence behind this?
+"
+"['reinforcement-learning', 'meta-learning']"," Title: What are the state-of-the-art meta-reinforcement learning methods?Body: This question can seem a little bit too broad, but I am wondering what are the current state-of-the-art works on meta reinforcement learning. Can you provide me with the current state-of-the-art in this field?
+"
+"['neural-networks', 'deep-learning', 'long-short-term-memory']"," Title: Is the LSTM component a neuron or a layer?Body: Given the standard illustrative feed-forward neural net model, with the dots as neurons and the lines as neuron-to-neuron connection, what part is the (unfold) LSTM cell (see picture)? Is it a neuron (a dot) or a layer?
+
+
+"
+"['genetic-algorithms', 'definitions', 'crossover-operators']"," Title: How does crossover work in a genetic algorithm?Body: If I had the weights of a certain number of ""parents"" that I wanted to crossbreed, and I used whatever method to pick out the ""best parents"" (I used a roulette wheel option, if that's any relevant), would I be doing this correctly?
+
+For example, suppose I have picked the following two parents.
+
+\begin{align}
+P_1 &= [0.5, -0.02, 0.4, 0.1, -0.9] \\
+P_2 &= [0.42, 0.55, 0.18, -0.3, 0.12]
+\end{align}
+
+When I'm iterating through each index (or gene) of the parents, I am selecting a weight from one parent only. I called this rate the ""cross-rate"", which in my case is $0.2$ (i.e. with $20$% chance, I will switch to choosing the other parents' weight).
+
+So, using our example above, this is what would happen:
+
+\begin{align}
+P_1 &=[\mathbf{0.5}, \mathbf{-0.02}, 0.4, 0.1, \mathbf{-0.9}] \\
+P_2 &= [0.42, 0.55, \mathbf{0.18}, \mathbf{-0.3}, 0.12]
+\end{align}
+
+So the child would be
+
+$$C = [0.5, -0.02, 0.18, -0.3, -0.9]$$
+
+I would choose $0.5$ from $P_1$, but for every time I choose a weight from $P_1$, there's a 20% chance that I actually choose the corresponding gene from $P_2$. But, for the first weight, I end up not landing on that 20% chance. So I move onto the second weight, $-0.02$. This time, we hit the 20% chance, so now we swap over. Our next weight is now from $P_2$, which is $0.18$. And so on, until we hit another 20% chance.
+
+We keep doing this until we hit the end of the indexes ($P_1$ and $P_2$ have the same number of indexes, of course).
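+
+In code, the procedure I am describing looks roughly like this (a minimal, untested sketch of my understanding):
+
+import random
+
+def crossover(p1, p2, cross_rate=0.2):
+    child = []
+    current, other = p1, p2
+    for i in range(len(p1)):
+        child.append(current[i])          # take the gene from the current parent
+        if random.random() < cross_rate:  # 20% chance to switch the source parent
+            current, other = other, current
+    return child
+
+child = crossover([0.5, -0.02, 0.4, 0.1, -0.9], [0.42, 0.55, 0.18, -0.3, 0.12])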
+
+Is this the correct way to form a child from 2 parents? Is this the correct ""crossbreeding"" method when it comes to genetic algorithms?
+"
+"['neural-networks', 'natural-language-processing', 'gpt']"," Title: Can we use GPT-2 to smooth out / correct text?Body: Are we able to use models like GPT-2 to smooth out/correct text? For instance if I have two paragraphs that need some text to make the transition easier to read, could this text be generated? And, could it find inconsistencies between the paragraphs and fix them?
+As an example, imagine we're reordering some text so that we can apply the pyramid principle. What I'd like to do is reorder the sentences/paragraphs and still have a coherent story. The following three sentences, for instance, start with a statement and then have some facts to support it. What's missing is the story that joins them together; right now they're three independent sentences.
+
+The strawberry is the best fruit based on its flavor profile, its coloring and texture and the nutritional profile.
+Strawberries are very rich in antioxidants and plant compounds, which may have benefits for heart health and blood sugar control.
+Strawberries have a long history and have been enjoyed since the Roman times.
+
+Feel free to point me at things to read, I have not been able to find anything like this in my searches.
+"
+"['reinforcement-learning', 'definitions', 'monte-carlo-tree-search', 'pomdp', 'self-play']"," Title: How exactly does self-play work, and how does it relate to MCTS?Body: I am working towards using RL to create an AI for a two-player, hidden-information, a turn-based board game. I have just finished David Silver's RL course and Denny Britz's coding exercises, and so am relatively familiar with MC control, SARSA, Q-learning, etc. However, the course was focused on single-player, perfect-information games, and I haven't managed to find any examples similar to the type of game I have, and would like advice on how to proceed.
+
+I am still unsure how self-play works, and how it relates to MCTS. For example, I don't know if this involves using the latest agent to play both sides, or playing an agent against older versions, or training multiple opposing agents simultaneously. Are there good examples (or repositories) for learning self-play and MCTS for two-player games?
+"
+['variational-autoencoder']," Title: Why can't VAE do sequence to sequence name generation?Body: I'm working on research in this sector where my supervisor wants to do cannonicalization of name data using VAEs, but I don't think it's possible to do, but I don't know explicitly how to show it mathematically. I just know empirically that VAEs don't do good on discrete distributions of latents and observed variables(Because in order to do names you need your latent to be the character at each index and it can be any ASCII char, which can only be represented as a distribution). So the setup I'm using is a VAE with 3 autoencoders, for latents, one for first, middle and last name and all of them sample each character of their respective names from the gumbel-softmax distribution(A form a categorical that is differentiable where the parameters is a categorical dist). From what I've seen in the original paper on the simple problem of MNIST digit image generation, the inference and generative network both did worse as latent dimension increased and as you can imagine the latent dimension of my problem is quite large. That's the only real argument for why this can't work, that I have. The other would have been it's on a discrete distribution, but I solved that by using a gumbel softmax dist instead.
+
+This current setup isn't working at all, the name generations are total gibberish and it plateaus really early. Are there any mathematical intuitions or reasons that VAEs won't work on a problem like this?
+
+As a note I've also tried semi-supervised VAEs and it didn't do much better. I even tried it for seq2seq of just first names given a first and it super failed as well and I'm talking like not even close to generation of names or the original input.
+"
+"['deep-learning', 'convolutional-neural-networks', 'activation-functions', 'pooling', 'max-pooling']"," Title: Is a non-linear activation function needed if we perform max-pooling after the convolution layer?Body: Is there any need to use a non-linear activation function (ReLU, LeakyReLU, Sigmoid, etc.) if the result of the convolution layer is passed through the sliding window max function, like max-pooling, which is non-linear itself? What about the average pooling?
+"
+"['reinforcement-learning', 'markov-decision-process']"," Title: Are there reinforcement learning algorithms not based on Markov decision processes?Body: Are all RL algorithms based on the MDP? If not, could you give examples of some which aren't? I've looked elsewhere, but I haven't seen it explicitly said.
+"
+"['neural-networks', 'image-recognition', 'go']"," Title: Image-to-Image Regression for GO territory classificationBody: I'm trying to implement a neural network that is able to generate an image indicating territory occupation given a board state for GO (a strategy board game). Input images are 19x19x1 grayscale images, with white pixels indicating white pieces, black pixels indicating black pieces, and gray pixels indicating unoccupied areas. Output images are 19x19x1 grayscale images with white pixels indicating white territory, black pixels indicating black territory, and gray areas indicating unassigned territories. A sample input and desired output image is as follows:
+
+
+
+
+
+The images are quite small, so just to give an overview of trends I noticed: - Pixels surrounded by pixels of opposite colors are 'captured' pieces and therefore part of opponent territory - Two 'eyes' or closed groups of pieces comprising at least two open intersections are invincible or confirmed territory
+
+While I'm not looking for exact specifications of network layers etc., I was hoping I could be given some direction as to what type of network to use, and what it should comprise. Looking at MATLAB documentation, I've found info about semantic segmentation, and autoencoder networks but neither of these seem particularly helpful. I know the question is a little broad, but I just need some direction more than anything. This kind of image recognition problem is a first for me.
+"
+"['reinforcement-learning', 'q-learning']"," Title: How to represent a state in a card game environment? (Wizard)Body: We are attempting to build an AI that manages to play the cardgame Wizard. So far er have a working network (based on the YOLO object-detection) that is abled to detect which cards are played. When asked it returns the color and rank of the cards on the table.
+
+But now, when starting to build an agent for the actual training, I just can't figure out how to represent the states for this game.
+
+In each round, each player gets the number of cards corresponding to the round (one card in round one, two in round two and so on). Based on that, the players estimate how many tricks they will win in this round. When the round ends, the players calculate their points w.r.t. their estimation.
+
+So the agent has to estimate its future tricks and has to play according to that strategy. So how do I encode that into a form that a neural network can work with?
+"
+"['neural-networks', 'natural-language-processing']"," Title: Sign Language to Speech conversionBody: Is there any solution about sign language to speech conversion for mobiles? Can anyone suggest me the flow and tools so that I may implement the solution for mobiles?
+"
+['data-preprocessing']," Title: How to automatically detect and correct false information in columnar data?Body: I'm working on data cleaning and I'm stuck. I have a data set with 3 columns: id, age, and weight.
+
+Supposing I have an entry:
+
+id:1 | age:3 (years) | weight: 150 (kg)
+
+
+How can we detect that the information is wrong, assuming I have a thousand lines?
+
+And how can I correct it (using Python)?
+
+Is there any function in Python that I can use or should I use machine learning techniques?
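+
+For example, is a simple rule/outlier check like this (a pandas sketch with made-up thresholds and file name) the usual approach, or is there something smarter?
+
+import pandas as pd
+
+df = pd.read_csv('people.csv')   # columns: id, age, weight
+# hypothetical plausibility rule: a child under 5 should not weigh more than 30 kg
+suspect = df[(df['age'] < 5) & (df['weight'] > 30)]
+print(suspect)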
+"
+"['recurrent-neural-networks', 'papers', 'geometric-deep-learning']"," Title: Can GraphRNN be used with very large graphs?Body: In the GraphRNN paper, the authors only implement the algorithm up to a graph size of 2k nodes. Would this still work on much larger graphs (on the order of $10^7$)? Or would the computation just become too substantial?
+"
+"['deep-learning', 'training', 'object-detection', 'data-preprocessing', 'accuracy']"," Title: Choosing Data Augmentation smartly for different applicationBody: I'm trying to understand the role of data augmentation and how it can affect the performance/accuracy of a deep model. My target application is fire detection (on video frames), with almost 15K positive and negative samples, and I was using the following data augmentation techniques. Does using ALL the followings always increase the performance? Or we have to choose them somehow smartly given our target application?
+
+rotation_range=20, width_shift_range=0.2, height_shift_range=0.2,zoom_range=0.2, horizontal_flip=True
+
+
+When I think a bit more, fire is always straight up, so I think rotation or shift might in fact worsen the results, given that it makes the image sides stretch like this, which is irrelevant to fires in video frames. Same with rotation. So I think maybe I should only keep zoom_range=0.2, horizontal_flip=True and remove the first three, because I see some false positives when we have a scene transition effect in videos.
+
+Is my argument correct? Should I keep them or remove them?
+
+
+
+
+"
+"['convolutional-neural-networks', 'python', 'feature-selection']"," Title: Binarize ConvNet Feature vectorBody: Given a pre-trained CNN model, I extract feature vector of 3450 reference images FV_R
as follows:
+
+FV_R = [ [-8.2, -52.2, 9.07, -1.1, -0.08, -9.1, ........, -4.11],
+ [7.8, -3.8, 6.4, -4.27, -2.2, -5.0, ............., 3.6],
+ [-1.2, -0.8, 49.3, 1.73, -1.74, -7.1, ..........., 2.41],
+ [-1.2, -.8, 49.3, 0.6, -1.24, -1.04, .........., -2.06],
+ .
+ .
+ .
+ [-1.2, -.8, 49.3, 12.77. -2.2, -5.0, .........., -51.1]
+ ]
+
+
+and FV_Q for 1200 query images:
+
+FV_Q = [ [-0.13, 2.6, -3.7, -0.5, -1.02, -0.6, ........, -0.11],
+ [0.3, -3.8, 6.4, -1.6, -2.2, -5.0, ............., 0.97],
+ [-6.4, -0.08, 8.0, 7.3, -8.07, -5.6, ..........., 0.01],
+ [-6.09, -.8, 0.5, -8.9, -0.74, -0.08, .........., -8.9],
+ .
+ .
+ .
+ [-1.2, -.8, 49.3, 12.77. -2.2, -5.0, .........., -51.1]
+ ]
+
+
+The size info:
+
+>>> FV_R.shape
+(3450, 64896)
+
+
+Query images:
+
+>>> FV_Q.shape
+(1200, 64896)
+
+
+I would like to binarize the CNN feature vectors (descriptors) and calculate Hamming Distance. I am already aware of this answer to probably use np.count_nonzero(a!=b) (if a.shape == b.shape) but does anyone know a method to binarize a feature vector with different size?
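+
+For example, is something along these lines reasonable (a rough numpy sketch; the 0 threshold is only a placeholder, any per-dimension threshold such as the median would do)?
+
+import numpy as np
+
+R_bin = (FV_R > 0).astype(np.uint8)   # (3450, 64896) binary codes
+Q_bin = (FV_Q > 0).astype(np.uint8)   # (1200, 64896) binary codes
+
+# Hamming distances between one query code and every reference code:
+# the row counts differ, but the feature length (64896) is the same, so this broadcasts fine.
+dists = np.count_nonzero(R_bin != Q_bin[0], axis=1)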
+
+Cheers,
+"
+"['keras', 'long-short-term-memory']"," Title: LSTM implementation in KERASBody: I would like to build an LSTM to predict the correct words order given a sentence. My dataset is composed of sentences, where each sentence has a variable number of words (each word is embedded). The dataset then is an array of matrices, where each matrix is an array of embedded words.
+
+Now, I'm looking to implement it with Keras but I'm not sure how to fit the necessary parameters wanted by the LSTM layer in Keras, like timesteps and batch_size.
+
+Reading on the web, I notice that timesteps is the length of the sequence, so in my case I believe that corresponds to the length of the sentence. But I want to train my LSTM with one sentence at a time, so would the batch_size be 1?
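+
+To make this concrete, is something along these lines the intended usage (a rough Keras sketch; the embedding size, the output layer and the dummy data are just placeholders): a variable number of timesteps and one sentence per batch?
+
+import numpy as np
+from keras.models import Sequential
+from keras.layers import LSTM, Dense, TimeDistributed
+
+EMB = 300                                  # example embedding size
+model = Sequential()
+model.add(LSTM(64, return_sequences=True, input_shape=(None, EMB)))  # None = variable sentence length
+model.add(TimeDistributed(Dense(EMB)))     # placeholder output layer
+model.compile(optimizer='adam', loss='mse')
+
+sentence = np.random.rand(1, 7, EMB)       # one sentence of 7 embedded words
+target = np.random.rand(1, 7, EMB)
+model.fit(sentence, target, batch_size=1, epochs=1)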
+"
+"['computational-learning-theory', 'generalization']"," Title: Why does the discrepancy measure involve a supremum over the hypothesis space?Body: I am referring specifically to the disc defined by Kuznetsov and Mohri in https://arxiv.org/pdf/1803.05814.pdf
+
+
+
+This is a kind of worst case path dependent generalization error. But what is the intuitive way of seeing why a worst case is needed? I am probably missing something or reading something incorrectly.
+"
+"['neural-networks', 'convolutional-neural-networks', 'recurrent-neural-networks', 'terminology', 'feedforward-neural-networks']"," Title: Are there names for neural networks with a well-defined layer or neuron characteristics?Body: Are there names for neural networks with a well-defined layer or neuron characteristics?
+
+For example, a matrix that has the same number of rows and columns is called a square matrix.
+
+Is there an equivalent for classifying different neural network structures. Specifically, I am interested if there is a name for a neural network with x number of layers, but each layer has the same number of neurons?
+"
+"['learning-algorithms', 'legal', 'business']"," Title: Intellectual property in the age of Industry 4.0Body: I am looking for specific references describing guidance principles around the interplay between IP (intellectual property) and Artificial Intelligence algorithms. For example, Company A has a large dataset and Company B has advanced algorithmic capabilities (assume near-AI). How might Company A protect itself in a joint partnership with company B? Apologies if AI SE is not the place for this - any suggestions for the right site?
+"
+"['python', 'keras', 'batch-normalization', 'handwritten-characters']"," Title: Does this tutorial use normalization the right way?Body: There is this video on pythonprogramming.net that trains a network on the MNIST handwriting dataset. At ~9:15, the author explains that the data should be normalized.
+
+The normalization is done with
+
+x_train = tf.keras.utils.normalize(x_train, axis=1)
+x_test = tf.keras.utils.normalize(x_test, axis=1)
+
+
+The explanation is that values in a range of 0 ... 1 make it easier for a network to learn. That might make sense, if we consider sigmoid functions, which would otherwise map almost all values to 1.
+
+I could also understand that we want black to be pure black, so we want to adjust any offset in black values. Also, we want white to be pure white and potentially stretch the data to reach the upper limit.
+
+However, I think the kind of normalization applied in this case is incorrect. The image before was:
+
+
+
+After the normalization it is
+
+
+
+As we can see, some pixels which were black before have become grey now. Columns with few black pixels before result in black pixels. Columns with many black pixels before result in lighter grey pixels.
+
+This can be confirmed by applying the normalization on a different axis:
+
+
+
+Now, rows with few black pixels before result in black pixels. Rows with many black pixels result in lighter grey pixels.
+
+Is normalization used the right way in this tutorial? If so, why? If not, would my normalization be correct?
+
+What I expected was a per pixel mapping from e.g. [3 ... 253] (RGB values) to [0.0 ... 1.0]. In Python code, I think this should do:
+
+import numpy as np
+import imageio
+image = imageio.imread(""sample.png"")
+image = (image - np.min(image))/np.ptp(image)
+
+"
+"['reinforcement-learning', 'deep-rl', 'rewards']"," Title: Immediate reward received in Atari game using DQNBody: I am trying to understand the different reward functions modelled in a reinforcement learning problem. I want to be able to know how the temporal credit assignment problem, (where the reward is observed only after many sequences of actions, and hence no immediate rewards observed) can be mitigated.
+
+From reading the DQN paper, I am not able to work out how the immediate rewards are being modelled when $Q_{target}(s,a; \theta) = r_s + \max_{a'}Q(s',a'; \theta)$. What is $r_s$ in the case where the score has not changed? Therefore, what are the immediate rewards being modelled for temporal credit assignment problems in Atari games?
+
+If $r_s$ is indeed 0 until score changes, would it affect the accuracy of the DQN ? it seems like the update equation would not be accurate if you do not even know what is the immediate reward if you take that action.
+
+What are some of the current methods used to solve the temporal credit assignment problem ?
+
+Also, I can't seem to find many papers that address the temporal credit assignment problem
+"
+"['terminology', 'definitions']"," Title: What is an end-to-end AI project?Body: I often read about the so-called end-to-end AI (or analytics) projects, but I couldn't find a definition of it. What is an end-to-end AI project? Can someone explain what is meant/expected when someone asks you ""Have you already implemented an end-to-end AI project""?
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'classification', 'keras']"," Title: Hand-Signs Recognition using Deep Learning Convolutional Neural NetworksBody: I am developing a CNN model to recognize 24 hand-signs of American Sign Language. I have 2500 Images/hand-sign. The data split is:
+Training = 1250 Images/hand-sign
+Validation = 625 Images/hand-sign
+Testing = 625 Images/hand-sign
+
+How should I proceed with training the model?:
+1. Should I develop a model starting from fewer hand-signs (like 5) and then increase them gradually?
+2. Should I start models from scratch or use transfer learning (VGG16 or other)
+Applying data augmentation, I did some tests with VGG16 and added a dense classifier at the end and received these accuracies:
+Train: 0.87610877
+Validation: 0.8867307
+Test: 0.96533334
+
+Accuracy and Loss Graph
+
+Test parameters:
+NUM_CLASSES = 5
+EPOCHS = 50
+STEPS_PER_EPOCH = 125
+VALIDATION_STEPS = 75
+TEST_STEPS = 75
+Framework = Keras, Tensorflow
+OPTIMIZER = adam
+
+Model:
+
+from keras.models import Sequential
+from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
+
+model = Sequential([
+ Conv2D(32, (3, 3), activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
+ MaxPooling2D(pool_size=(2,2)),
+
+ Conv2D(64, (3, 3), activation='relu'),
+ MaxPooling2D(pool_size=(2,2)),
+
+ Conv2D(128, (3, 3), activation='relu'),
+ MaxPooling2D(pool_size=(2,2)),
+
+ Conv2D(256, (3, 3), activation='relu'),
+ MaxPooling2D(pool_size=(2,2)),
+
+ Conv2D(512, (3, 3), activation='relu'),
+ MaxPooling2D(pool_size=(2,2)),
+
+ Flatten(),
+ Dense(512, activation='relu'),
+
+ Dense(NUM_CLASSES, activation='softmax')
+])
+
+
+If I try images with slightly different background and predict the classes (predict_classes()), I do not get accurate results. Any suggestions on how to make the model robust?
+"
+"['neural-networks', 'machine-learning', 'training', 'stochastic-gradient-descent']"," Title: How does batch size affect model size?Body: I'm suffering from a significant brain fart while trying to get my head around how does batch size affect overall model size e.g for CNNs. Does it serve as an additional dimension for all the weight tensors?
+
+Considering:
+
+
+- VGG16 model
+- batch_size of 16
+- image size of 224x224x3
+- conv_1 being the initial 1x1 convolution with stride 1 and 3:64 channels mapping
+
+
+The input will be a tensor of [16, 224, 224, 3]
shape. Will the output of convolution layer be [16, 224, 224, 64]
and therefore - will all the weights have additional 'batch size' dimension and thus - impose a linear increase of model size with respect to the batch size?
+"
+"['search', 'time-complexity', 'computational-complexity', 'best-first-search', 'space-complexity']"," Title: Why is the space-complexity of greedy best-first search is $\mathcal{O}(b^m)$?Body: I am reading through Artificial Intelligence: Modern Approach and it states that the space complexity of the GBFS (tree version) is $\mathcal{O}(b^m)$.
+
+While I am reading, at some points, I found GBFS similar to DFS. It expands the whole branches and goes after one according to the heuristic function. It doesn't expand the rest like BFS. Perceiving this as similar to what depth-first search does, I understand that the worst time complexity is $\mathcal{O}(b^m)$. But I don't understand the space complexity.
+
+Shouldn't it be the same as DFS, $\mathcal{O}(bm)$, since it only will be expanding $b*m$ nodes during the search in one path?
+"
+"['neural-networks', 'deep-learning', 'applications', 'overfitting', 'early-stopping']"," Title: Should I prefer the model with the lowest validation loss or the highest validation accuracy to deploy?Body: I trained a ResNet20 on Cifar10 and obtained the following learning curves.
+
+
+
+From the figures, I see at epoch 52, my validation loss is 0.323 (the lowest), and my validation accuracy is 89.7%.
+
+On the other hand, at the end of the training (epoch 120), my validation loss is 0.413 and my validation accuracy is 91.3% (the highest).
+
+Say I'd like to deploy this model on some real-world application. Should I prefer the snapshotted model at epoch 52, the one with lowest validation loss, or the model obtained at the end of training, the one with highest validation accuracy?
+"
+"['deep-learning', 'tensorflow', 'comparison', 'research', 'pytorch']"," Title: Is there a reason to use TensorFlow over PyTorch for research purposes?Body: I've been using PyTorch to do research for a while and it seems to be quite easy to implement new things with. Also, it is easy to learn and I didn't have any problem with following other researchers code so far.
+
+However, I wonder whether TensorFlow has any advantage over PyTorch. The only advantage I know is, it's slightly faster than PyTorch.
+
+In general, does TensorFlow have any concrete advantages over PyTorch apart from performance, in particular for research purposes?
+"
+"['reinforcement-learning', 'rewards']"," Title: Curiosity Driven Learning affect optimal policyBody: I am trying to understand some of the different approaches used to overcome sparse rewards in a reinforcement learning setting for a research project. Particularly, I have looked at curiosity driven learning, where an agent learns an intrinsic reward function based on the uncertainty of the next state that the agent will end up in as he takes action a in state s. The greater the uncertainty of the next state, the higher the rewards. This will incentive agent's to be more exploratory and it is used particularly in some games where a huge number of steps is needed before the agent reaches the terminal state where is he only then rewarded.
+
+The curiosity driven approach as demonstrated in this paper:
+https://pathak22.github.io/noreward-rl/ is able to learn faster than if a 0 rewards were used for each state, action.
+
+To my knowledge, using different reward functions will affect the optimal policy obtained. Would curiosity driven learning therefore lead to a different policy as compared to whether a 0 reward was used ? Assume that for a 0 immediate reward system, it is able to derive a policy that reaches the goal state. Which of these 2 policies will be more optimal ?
+"
+"['machine-learning', 'deep-learning', 'audio-processing']"," Title: Noise Cancellation on live audio streamBody: I want to build an application which takes a live audio from source (mic) and filtering the noise (unwanted sounds like chattering, traffic noises) and fetch into an application for further processing.
+
+I want to apply Machine Learning Framework (TensorFlow, Keras) and Deep Learning neural Networks (i.e RNN) for filtering the noise from the audio. I want to do this in a real time environment. My inference device will be a Nvidia Jetson Device. Please guide me where I can find the related documents and how to proceed with the project.
+
+If there is any solution available in any website please refer the link.
+"
+"['objective-functions', 'image-segmentation']"," Title: Tversky Loss paper implementation: Recall/Precision do not improve as statedBody: I have been trying to implement this paper and I am very much intrigued. I am working on a medical image problem where I have to segment very small specimens on Whole Slide Images (gigapixel resolution). Therefore my dataset is highly unbalanced and I am having a high false positives rate.
+
+I did my research and found that paper that describes the implementation of Tversky Loss and Focal Tversky Loss. It also describes some modifications to the network architecture which I am postponing for now.
+
+I implemented the loss (Pytorch) and ran some experiments with several alpha/beta combinations. Well, the results are easy to understand: higher alpha results in higher precision and a lower beta increases the recall and pushes the precision down. Basically, what this loss is doing is balancing my recall and precision, only. That is good, I can solve my False Positives issue but since this is a medical problem, a good recall is mandatory. In the paper, the results show that there is an improvement in the Precision/Recall and I cannot understand how is that possible and how I cannot replicate that. I am just weighing false positives and penalizing them, it does not seem enough to improve the model overall.
+
+Regards
+
+
+"
+"['transformer', 'gpt', 'text-generation']"," Title: Pretrained Models for Keyword-Based Text GenerationBody: I'm looking for an implementation that allows me to generate text based on a pre-trained model (e.g. GPT-2).
+An example would be gpt-2-keyword-generation (click here for demo). As the author notes, there is
+
+[...] no explicit mathematical/theoetical basis behind the keywords
+aside from the typical debiasing of the text [...]
+
+Hence my question: Are there more sophisticated ways of keyword-based text generation or at least any other alternatives?
+Thank you
+"
+['long-short-term-memory']," Title: LSTM model on different time scalesBody: I am a newbie to machine learning. I have an LSTM model that predicts the next output n+1
+
+time 1, params 1, output 1
+
+time 2, params 2, output 2
+
+time 3, params 3, output 3
+
+.
+.
+
+time n, params n, , output n
+
+time n+1 --> predicts output n+1
+
+ Here the times are all in minutes, so I can predict the next output in the series which is going to be the next minute. My question is that what if I want to predict the next 5 minutes. One solution was to throw out all the data except in steps of 5 minutes so the next step is automatically would be 5 minutes. This is clearly a waste of all the data that I have gathered. Can you please recommend what I can do about the prediction on different time scales?
+"
+"['philosophy', 'agi', 'models', 'papers']"," Title: Will AI always depend on models and thus approximations?Body: In section 3 of the paper The Limits of Correctness (1985) Brian Cantwell Smith writes
+
+
+ When you design and build a computer system, you first formulate a model of the problem you want it to solve, and then construct the computer program in its terms.
+
+
+He then writes
+
+
+ computers have a special dependence on these models: you write an explicit description of the model down inside the computer, in the form of a set
+ of rules or what are called representations - essentially linguistic formulae encoding, in the terms of the model, the facts and data
+ thought to be relevant to the system's behavior. It is with respect to these representations that computer systems work. In fact, that's really what computers are (and how they differ from other machines): they run by manipulating representations, and representations are always formulated
+ in terms of models. This can all be summarized in a slogan: no computation without representation.
+
+
+And then he says
+
+
+ Models have to ignore things exactly because they view the world at a level of abstraction
+
+
+He then writes in section 7
+
+
+ The systems that land airplanes are hybrids - combinations of computers and people - exactly because the unforeseeable happens, and because what
+ happens is in part the result of human action, requiring human interpretation
+
+
+As quoted above, computers depend on models, which are abstractions (i.e. they ignore a lot of details), which are written inside the computer. Therefore, the true world cannot really be encoded into an algorithm, but only an abstraction and thus simplification of the world can.
+
+So, will AI always depend on models and thus approximations? Can it get rid of or overcome this limitation?
+"
+"['reinforcement-learning', 'deep-rl', 'rewards', 'reward-shaping', 'inverse-rl']"," Title: Can recovering a reward function using IRL lead to better policies compared to reward shaping?Body: I am working on a research project about the different reward functions being used in the RL domain. I have read up on Inverse Reinforcement Learning (IRL) and Reward Shaping (RS). I would like to clarify some doubts that I have with the 2 concepts.
+In the case of IRL, the goal is to find a reward function based on the policy that experts take. I have read that recovering the reward function that experts were trying to optimize, and then finding an optimal policy from those expert demonstrations has a possibility of resulting in a better policy (e.g. apprenticeship learning). Why does it lead to a better policy?
+"
+"['neural-networks', 'models', 'anomaly-detection']"," Title: What models will you suggest to use in Industrial Anomaly Detection and Predictive analysis on live streamed data?Body: I have been working on industrial data, that is fed live, I want to explore a few models which might suit for this the best.
+
+The data are KPI data from the manufacturing Industry.
+"
+"['python', 'tensorflow', 'long-short-term-memory']"," Title: How do I make my LSTM model more sensitive to changes in the sequence?Body: I have a many to one LSTM model for multiclass classification. For reference, this is the architecture of the model
+
+ from keras.models import Sequential
+ from keras.layers import LSTM, Dense
+
+ model = Sequential()
+ model.add(LSTM(147, input_shape=(1000, 147)))
+ model.add(Dense(5, activation='softmax'))
+ model.compile(loss='categorical_crossentropy',
+ optimizer='rmsprop',
+ metrics=['accuracy'])
+
+
+The model is trained on 5 types of sequences and is able to effectively classify each sequence I feed into the model with high accuracy. Now my new objective is to combine these sequences together to form a new sequence.
For e.g
I denote the elements from the class '1' with the sequence:
+
[1,1,1,1,1,1,1]
So when I input the above sequence into the LSTM model for prediction, it classifies the sequence as class '1' with accuracy of 0.99
+
And I denote the elements from class '2' with the sequence:
+
[2,2,2]
+Likewise for the above sequence, the LSTM model will classify the sequence as class '2' with accuracy of 0.99
+
+Now I combine these sequences together and feed it into the model :
+
New sequence : [1,1,1,1,1,1,1,2,2,2]
+
However the model does not seem to be sensitive to the presence of the class '2' sequence and still classifies the sequence as class '1' with accuracy of 0.99.
+
+How do I make the model more ""sensitive"", meaning that I would expect the LSTM model to still maybe predict class '1' but with a drop in accuracy? Or is the LSTM incapable of detecting the inclusion of class '2' sequences?
+
+Thanks.
+"
+"['reinforcement-learning', 'intelligent-agent']"," Title: What kind of enemy to train a good RL-agentsBody: So I want to create an RL-agent for two players-board game. I want to use a simple DQN for the first player (my RL-agent). Then, what kind of algorithm that should I use on the second player (my RL-agent's enemy)?
+
+I have three options in my mind:
+
+
+- a random agent that act randomly
+- a rule-based agent that acts by some defined rules
+- another RL-agent
+
+
+I have tried the first and second options. When I use a random agent as an enemy, the first player gets a high score and wins easily. But I think it's not really smart as the enemy is a random agent. When I use the second option, the first player got difficulties to train itself, as it can't win any game.
+
+what should I choose and why?
+"
+"['neural-networks', 'machine-learning', 'recurrent-neural-networks', 'definitions', 'teacher-forcing']"," Title: What is teacher forcing?Body: In the paper Neural Programmer-Interpreters, the authors use the teacher forcing technique, but what exactly is it?
+"
+"['neural-networks', 'convolutional-neural-networks', 'graphs']"," Title: How to solve the problem of variable-sized AST as input for a (convolutional) neural network model?Body: In my work I have a given source code for a module. From this module I generate an AST, whose size is dependent on the size of the module (e.g. more source code -> bigger AST). I want to train a neural network model which will learn a general structure of a module and be able to rate (on a scale of 0 to 1) how ""good"" a module is structure wise (if requires are at the beginning, followed by local functions, variables and finally returns). Now I have learnt that Convolutional NNs are quite convenient for this, but the problem I can't seem to solve is that they require a fixed sized input which I can't produce. If I add zero-padding then the outcome will be skewed and the accuracy will suffer. Is there a clear solution to this problem?
+"
+"['neural-networks', 'deep-learning', 'reinforcement-learning', 'alphazero']"," Title: How to encode board before input into the neural net?Body: Currently I'm working on an educational project (implementation of AlphaZero approach to different types of board games).
+My biggest concern at the moment is how to encode board before input into the neural network?
+
+For example, how can this be done for Kalah game?
+
+Kalah board has two straight rows of six pits, and two large score-houses.
+Each pit contains 6 seeds (so there are 72 seeds)
+"
+"['neural-networks', 'convolutional-neural-networks', 'training']"," Title: What is the use of concatenate layer in CNN?Body: I am not asking what does concatenate layer does in general in point of mathematical operation. But at feature level, what significance does it provide. Does it helps removing false negatives or does it prevents over-fitting? Do give the reference of papers regarding this topic.
+"
+"['machine-learning', 'programming-languages', 'resource-request']"," Title: Machine learning frameworks for esoteric languagesBody: Is there a machine learning framework/library for any of the esoteric languages, such as the ones listed here ?
+"
+['deep-learning']," Title: Details on body measurements predictionBody: If someone wants to build a mobile app for body measurement prediction, what are the necessary things to start with? I need a detailed explanation of this.
+"
+"['neural-networks', 'artificial-neuron']"," Title: In a neural network, can colors be used for neurons in place of floating points and would there be any benefit in doing so?Body: Firstly, some context. I have been reading and watching videos on the subject for around 3 years, but I am still very much a beginner in machine learning and artificial intelligence. That said, I might not know what I'm even talking about here. So bear with me.
+
+If I understand correctly, each node in a neural network (neuron) is represented by some floating point number between 0 and 1, that are arranged in layers and have corresponding weights. Right? While a color has RGB values, CMYK values, and HSV values that are all interrelated to each other.
+
+My question is would there be any benefit to having each node represented by a color instead of a single floating point number?
+
+My thinking is that each neuron could select any of the values (r, g, b, c, m, y, k, h, s, or v) contained within the color in some meaningful way, while the Alpha value could possibly represent the weight associated with that neuron.
+
+Thoughts? Would it not work like that? Could you use it to have multiple congruent networks running on 3 different channels? Again, would there be any benefit to doing this than just using a single number? Or would it over-complicate (or even break) the network? Would it be useless?
+
+Although I've also dabbled in Unity3D (which is how I got the idea in the first place), I'm too much of a beginner to know how to even begin an attempt at testing this myself.
+"
+"['reinforcement-learning', 'comparison', 'value-functions']"," Title: Why is the state-action value function used more than the state value function?Body: In reinforcement learning, the state-action value function seems to be used more than the state value function. Why is it so?
+"
+"['classification', 'tensorflow', 'video-classification']"," Title: I need to select the image from a predefined dataset that are the closest to the input, is this possible or do I even need to use ML/AI?Body: So as the title states, I have a set of images and I want to process input images and need to select the image that ""looks"" the most like the input image.
+
+I know I've seen something similar where the code could guess whose face was in a picture; I guess I want something like that, but for general images.
+
+Sorry if this is a stupid question, but any suggestions or points at resources would be greatly appreciated.
+"
+"['proofs', 'variational-autoencoder', 'kl-divergence']"," Title: Why does the KL divergence not satisfy the triangle inequality?Body: The KL divergence is defined as
+$$D_{KL}=\sum_i p(x_i)\log\left(\frac{p(x_i)}{q(x_i)}\right)$$
+Why does $D_{KL}$ not satisfy the triangle inequality?
+Also, can't you make it satisfy the triangle inequality by taking the absolute value of the information at every point?
+"
+"['reinforcement-learning', 'q-learning', 'rewards', 'convergence']"," Title: How do we know that the algorithm has converged and ensures the highest possible reward?Body: I started learning about Q table from this blog post Introduction to reinforcement learning and OpenAI Gym, by Justin Francis, which has a line as below -
+
+After so many episodes, the algorithm will converge and determine the optimal action for every state using the Q table, ensuring the highest possible reward. We now consider the environment problem solved.
+
+The Q table was updated by Q-learning formula
+Q[state,action] += alpha * (reward + np.max(Q[state2]) - Q[state,action])
+I ran 100,000 episodes, from which I got the following:
+Episode 99250 Total Reward: 9
+Episode 99300 Total Reward: 7
+Episode 99350 Total Reward: 6
+Episode 99400 Total Reward: 14
+Episode 99450 Total Reward: 10
+Episode 99500 Total Reward: 10
+Episode 99550 Total Reward: 9
+Episode 99600 Total Reward: 14
+Episode 99650 Total Reward: 5
+Episode 99700 Total Reward: 7
+Episode 99750 Total Reward: 3
+Episode 99800 Total Reward: 5
+
+I don't know what the highest reward is. It does not look like it has converged. Yet, the following graph
+
+shows a trend in convergence but it was plotted for a larger scale.
+What should the sequence of actions be when the game is reset() but the "learned" Q table is available? How do we know that sequence, and the reward obtained in that case?
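+
+To make the last question concrete, I imagine something like the following greedy rollout (just a sketch; env and Q are the gym environment and the table from the blog post):
+
+state = env.reset()
+done = False
+total_reward = 0
+while not done:
+    action = np.argmax(Q[state])                  # greedy action from the learned table
+    state, reward, done, info = env.step(action)
+    total_reward += reward
+print(total_reward)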
+"
+"['convolutional-neural-networks', 'python', 'comparison', 'computer-vision', 'image-recognition']"," Title: What is the difference between exhaustive nearest neighbor search and k-nearest neighbour search?Body: I have two lists of feature vectors calculated from pre-trained CNN for image retrieval task:
+
+Query: FV_Q and Reference: FV_R.
+
+>>> FV_R.shape
+(3450, 128)
+
+>>> FV_Q.shape
+(3450, 128)
+
+
+I am a little confused between the concept of exhaustive nearest neighbor search and k-nearest neighbor search.
+
+In Python, I use from sklearn.neighbors import KDTree to extract the top k = 5 similar images from the reference database, given the query image.
+
+Can somebody explain if there might be any similarities/differences between these two concepts?
+
+Am I making a mistake somewhere in my feature vector comparison?
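+
+For reference, this is a sketch of what I understand by the two search strategies, applied to my own FV_Q and FV_R arrays (the brute-force call is my assumption of what exhaustive nearest neighbour search means):
+
+import numpy as np
+from sklearn.neighbors import KDTree, NearestNeighbors
+
+# KD-tree search (what I am doing now)
+tree = KDTree(FV_R)
+dist_tree, idx_tree = tree.query(FV_Q, k=5)
+
+# Exhaustive (brute-force) search over every reference vector
+brute = NearestNeighbors(n_neighbors=5, algorithm='brute').fit(FV_R)
+dist_brute, idx_brute = brute.kneighbors(FV_Q)
+
+# Both should return the same 5 neighbours per query; only the search strategy differs
+print(np.array_equal(idx_tree, idx_brute))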
+"
+['machine-learning']," Title: Where can I find good tutorials on user tailored recommendation system for web?Body: I'm currently working on my uni project, but I have no idea where to start for the user-tailored recommendation system on the web. Where can I find a good guide on it, preferably for languages like PHP and JavaScript?
+"
+"['classification', 'proofs', 'multilayer-perceptrons', 'perceptron']"," Title: Is there a mathematical theory behind why MLP can classify handwritten digits?Body: I'm trying to really understand how multi-layer perceptrons work. I want to prove mathematically that MLP's can classify handwritten digits. The only thing I really have is that each perceptron can operate exactly like a logical operand, which obviously can classify things, and, with backpropagation and linear classification, it's obvious that, if a certain pattern exists, it'll activate the correct gates in order to classify correctly, but that is not a mathematical proof.
+"
+"['machine-learning', 'image-processing', 'google-cloud']"," Title: What are some of the best methods in detecting facial movement using state-of-the-art machine learning models?Body: I am currently working on implementing a lip reading system in Python using machine learning and image processing. Currently, two initial implementations have provided promising results, albeit not perfect: the LipNet model and the Google Cloud AutoML Vision API. Before I dive head on into these two different systems, I was wondering if anyone on here had been exposed to any other alternatives for lip reading or any other facial detection issue, and their experiences with these models.
+
+Any information will be greatly appreciated.
+"
+"['reinforcement-learning', 'q-learning', 'healthcare', 'sarsa']"," Title: Evaluation a policy learned using Q - learningBody: I have been reading literature on reinforcement learning in healthcare. I am slightly confused between the policy evaluation for both SARSA and Q-learning.
+
+To my knowledge, I believe that SARSA is used for policy evaluation, to find the Q values of following an already existing policy. This is usually the clinician's policy.
+
+Q-learning, on the other hand, seeks to find another policy, different from the clinician's, such that the policy learned at different states always maximises the Q-values. This leads to a better treatment policy.
+
+Suppose the Q-values are learned for both policies. If the Q-values from Q-learning are higher than those from SARSA, can we say that the policy learned by Q-learning is better than the clinician's?
+
+EDIT
+
+From my readings, I have found out that computing the state-value function is usually used to compare how good policies are. I believe that new data has to be generated to apply the policy learned from Q-learning and compute the state-value function for following this learnt policy.
+
+Why can't the Q-values learnt from SARSA and Q-learning be used for comparison instead? Also, for model-free approaches (e.g. a continuous state space), how is policy evaluation usually carried out?
+"
+"['reinforcement-learning', 'actor-critic-methods', 'experience-replay']"," Title: Can I apply experience on naive actor critic directly? Should it work?Body: Can I apply experience replay on naive actor-critic directly? Should it work?
+
+I have tried that but unfortunately it didn't work.
+"
+"['reinforcement-learning', 'actor-critic-methods', 'experience-replay']"," Title: Implementing Actor-Critic with Experience Replay for Continuous Action SpacesBody: I have been trying to implement the ACER algorithm for continuous action spaces in reinforcement learning. The paper for the algorithm can be found here:
+
+
+
+I have implemented parts of the algorithm, but I have encountered some roadblocks that I have not been able to figure out.
+
+The following is the pseudo-code provided in the paper:
+
+
+
+Here is what I have implemented so far:
+
+
+
+import numpy as np
+import tensorflow as tf
+import tensorflow_probability as tfp
+
+tfd = tfp.distributions
+
+states = tf.convert_to_tensor(trajectory.state)
+actions = tf.squeeze(tf.convert_to_tensor(trajectory.action), axis=-1)
+rewards = tf.convert_to_tensor(trajectory.reward)
+dones = tf.convert_to_tensor(trajectory.done)
+
+explore_means, state_values, action_values = actor_critic(states, actions)
+
+average_means, *_ = brain.average_actor_critic(states)
+
+k = len(trajectory.state)
+d = env.action_space.shape[0]
+
+# Policies
+explore_policies = k*[None]
+behavior_policies = k*[None]
+average_policies = k*[None]
+
+# Tracking
+explore_actions = np.zeros([k, d])
+importance_weights = np.zeros([k, 1])
+explore_importance_weights = np.zeros([k, 1])
+truncation_parameters = np.zeros([k, 1])
+
+for i in range(k):
+
+ behavior_policy = tfd.MultivariateNormalDiag(
+ loc=trajectory.statistics[i],
+ scale_diag=tf.ones(d)*POLICY_STD
+ )
+
+ explore_policy = tfd.MultivariateNormalDiag(
+ loc=explore_means[i],
+ scale_diag=tf.ones(d)*POLICY_STD
+ )
+
+ average_policy = tfd.MultivariateNormalDiag(
+ loc=average_means[i],
+ scale_diag=tf.ones(d)*POLICY_STD
+ )
+
+ explore_action = explore_policy.sample()
+
+ importance_weight = explore_policy.prob(actions[i]) / behavior_policy.prob(actions[i])
+ explore_importance_weight = explore_policy.prob(explore_action) / behavior_policy.prob(explore_action)
+
+ truncation_parameter = min(1, (importance_weight)**d)
+
+
+ behavior_policies[i] = behavior_policy
+ explore_policies[i] = explore_policy
+ average_policies[i] = average_policy
+ explore_actions[i] = explore_action
+ importance_weights[i] = importance_weight
+ explore_importance_weights[i] = explore_importance_weight
+ truncation_parameters[i] = truncation_parameter
+
+
+explore_actions = tf.convert_to_tensor(explore_actions, dtype=tf.float32)
+importance_weights = tf.convert_to_tensor(importance_weights, dtype=tf.float32)
+explore_importance_weights = tf.convert_to_tensor(explore_importance_weights, dtype=tf.float32)
+truncation_parameters = tf.convert_to_tensor(truncation_parameters, dtype=tf.float32)
+
+
+q_ret = state_values[-1] if not dones[-1] else tf.zeros(1)
+q_opc = tf.identity(q_ret)
+
+for i in reversed(range(k - 1)):
+
+ q_ret = rewards[i] + GAMMA*q_ret
+ q_opc = rewards[i] + GAMMA*q_opc
+
+
+ # Compute quantities needed for trust region updating
+ c = TRUNCATION_PARAMETER
+
+ with tf.GradientTape(persistent=True) as tape:
+
+ tape.watch(explore_policies[-2].loc)
+
+ log_prob = explore_policies[-2].log_prob(actions[-2])
+ explore_log_prob = explore_policies[-2].log_prob(explore_actions[-2])
+
+ kl_div = tfp.distributions.kl_divergence(average_policies[-2], explore_policies[-2])
+
+
+ lp_grad = tape.gradient(log_prob, explore_policies[-2].loc)
+ elp_grad = tape.gradient(explore_log_prob, explore_policies[-2].loc)
+ kld_grad = tape.gradient(kl_div, explore_policies[-2].loc)
+
+
+ term1 = min(c, importance_weights[-2])*lp_grad*(q_opc - state_values[-2])
+ term2 = tf.nn.relu(1 - (c / explore_importance_weights[-2]))*(action_values[-2] - state_values[-2])*elp_grad
+
+ g = term1 + term2
+
+
+So the goal here was to implement it exactly the way they have it in the paper and then afterwards optimize it for doing batches of trajectories. For now, however, it is sufficient for the purposes of learning.
+
+My confusion comes from the use of differentials in this algorithm. I don't know what the specific type is for them, such as whether they are using it for the loss value to optimize on or if they are storing the gradients that will be used for updating. Another issue I am having is that it is not clear what they mean by this line:
+
+
+I don't understand why they are using a partial derivative here if there is clearly more than one parameter in the neural network. Maybe they mean the gradient; I am not sure, however.
+
+So what would be helpful is if anybody has some guidance as to what they are getting at in this portion of the paper or if anybody has some advice as to what steps need to be taken in TensorFlow 2.0 to implement this algorithm.
+
+Any help would be greatly appreciated! Thanks!!
+"
+['object-detection']," Title: Similarity of Images (CBIR) for two different camerasBody: Suppose we have a top-down picture of an object (let's say it is a shoe) from an overhead camera. Also suppose we have a database of various objects from a closeup camera. If we feed the top-down picture of the shoe into a CBIR model, then this picture would obviously have very low similarity with the images in the database. But would the image with the highest similarity score in the database still be a shoe even though the absolute score is small?
+"
+['audio-processing']," Title: Is there an AI that can complete Deezer Spleeter work?Body: I have used Deezer Spleeter, but it produces echoes alongside the stems, so I wonder if there is already an AI that removes such echo noise.
+"
+"['reinforcement-learning', 'notation', 'value-functions']"," Title: What is the purpose of the arrow $\leftarrow$ in this formula?Body: What is the purpose of the arrow $\leftarrow$ in the formula below?
+
+$$V(S_t) \leftarrow V(S_t) + \alpha \left[ G_t - V(S_t) \right]$$
+
+I presume it's not the same as 'equals'.
+"
+"['machine-learning', 'classification', 'objective-functions', 'cross-entropy', 'categorical-crossentropy']"," Title: How should I penalize the model proportionally to the error?Body: I am making an MNIST classifier. I am using categorical cross-entropy as my loss function. I want to make it so that if the correct label is 3, then it will penalize the model less heavily if it classifies a 4 than a 7 because 4 is closer numerically to 3 than 7 is. How do I do this?
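+
+Here is a sketch of the kind of loss I have in mind (assuming one-hot labels and a softmax output; whether this weighting is a sensible way to do it is exactly my question):
+
+import tensorflow as tf
+
+def distance_weighted_ce(y_true, y_pred):
+    # y_true: one-hot labels (batch, 10); y_pred: softmax outputs (batch, 10)
+    ce = -tf.reduce_sum(y_true * tf.math.log(y_pred + 1e-7), axis=-1)
+    true_digit = tf.cast(tf.argmax(y_true, axis=-1), tf.float32)
+    pred_digit = tf.cast(tf.argmax(y_pred, axis=-1), tf.float32)
+    penalty = 1.0 + tf.abs(true_digit - pred_digit)   # guesses further from the true digit cost more
+    return ce * penalty                               # per-sample weighted cross-entropy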
+"
+"['reinforcement-learning', 'rewards']"," Title: Is there a good ratio between the positive and negative rewards in reinforcement learning?Body: Is there an ideal ratio in reinforcement learning between the positive and negative rewards?
+
+Suppose I have the scenario of moving a robot across a river. There are two options: walk across the bridge or walk across the river. If it walks across the river, then the robot breaks, so the idea is to reinforce the robot to walk across the bridge. What would be the best reward values? Does this ratio vary between cases?
+
+Option 1:
+
+Bridge: +10
+River: -10
+
+Option 2:
+
+Bridge: +10
+River: -1
+
+Option 3:
+
+Bridge: +1
+River: -10
+
+"
+"['markov-decision-process', 'value-iteration']"," Title: Unable to understand V* at infinite time horizon using Bellman equation for solving MDPBody: I've been following the Berkeley cs188's assignment (I'm not taking the course). Currently, they don't show the solution in the gradescope unless I get it correct.
+
+
+My reasoning was
+
+$V^*(a)$ = 10 fixed, because the optimal action is to terminate and receive the reward 10.
+
+$V^*(b) = 10 \times 0.2 = 2$, using the Bellman optimality equation $V^*(s) = R(s) + \gamma \max_{a} \sum_{s'} P(s'|s,a) V^*(s')$, where the optimal action from $b$ is to move left.
+
+Similarly, I get $V^*(c) = 10 \times (0.2)^2 = 0.4$
+
+For the state $d$, it is optimal to move to the right and exit at $e$ to receive 1, therefore $V^*(d) = 1 \times 0.2 = 0.2$.
+
+And $V^*(e) = 1$ fixed.
+
+However, the autograder says it's incorrect and doesn't show an explanation. Can anyone explain what the right approach or answer is?
+"
+"['reinforcement-learning', 'actor-critic-methods', 'temporal-difference-methods', 'eligibility-traces', 'td-lambda']"," Title: Why not more TD(𝜆) in actor-critic algorithms?Body: Is there either an empirical or theoretical reason that actor-critic algorithms with eligibility traces have not been more fully explored? I was hoping to find a paper or implementation or both for continuous tasks (not episodic) in continuous state-action spaces.
+
+This has been the only related question on SE-AI that I have been able to find Why are lambda returns so rarely used in policy gradients?.
+
+Although I appreciated the dialog and found it useful, I was wondering if there was any further detail or reasoning that might help explain the void.
+"
+"['machine-learning', 'deep-learning', 'training', 'deep-neural-networks', 'metric']"," Title: Steps to train and re-train a good modelBody: I'm still a bit new to deep learning. What I'm still struggling, is what is the best practice in re-training a good model over time?
+
+I've trained a deep model for my binary classification problem (fire vs non-fire) in Keras. I have 4K fire images and 8K non-fire images (they are video frames). I train with a 0.2/0.8 validation/training split. Now I test it on some videos, and I found some false positives. I add those to my negative (non-fire) set, load the best previous model, and retrain for 100 epochs. Among those 100 models, I take the one with the lowest val_loss value. But when I test it on the same video, while those false positives are gone, new ones are introduced! This never ends, and I don't know if I'm missing something or am doing something wrong.
+
+How should I know which of the resulting models is the best? What is the best practice in training/re-training a good model? How should I evaluate my models?
+
+Here is my simple model architecture if it helps:
+
+from tensorflow.keras.models import Sequential, load_model
+from tensorflow.keras.layers import Conv2D, MaxPooling2D, BatchNormalization, Dropout, Flatten, Dense
+
+def create_model():
+ model = Sequential()
+ model.add(Conv2D(32, kernel_size = (3, 3), activation='relu', input_shape=(300, 300, 3)))
+ model.add(MaxPooling2D(pool_size=(2,2)))
+ model.add(BatchNormalization())
+ model.add(Conv2D(64, kernel_size=(3,3), activation='relu'))
+ model.add(MaxPooling2D(pool_size=(2,2)))
+ model.add(BatchNormalization())
+ model.add(Conv2D(128, kernel_size=(3,3), activation='relu'))
+ model.add(MaxPooling2D(pool_size=(2,2)))
+ model.add(BatchNormalization())
+ model.add(Conv2D(128, kernel_size=(3,3), activation='relu'))
+ model.add(MaxPooling2D(pool_size=(2,2)))
+ model.add(BatchNormalization())
+ model.add(Conv2D(64, kernel_size=(3,3), activation='relu'))
+ model.add(MaxPooling2D(pool_size=(2,2)))
+ model.add(BatchNormalization())
+ model.add(Dropout(0.2))
+ model.add(Flatten())
+ model.add(Dense(256, activation='relu'))
+ model.add(Dropout(0.2))
+ model.add(Dense(64, activation='relu'))
+ model.add(Dense(2, activation = 'softmax'))
+
+ return model
+
+#....
+if retrain_from_prior_model == False:
+ model = create_model()
+ model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
+else:
+ model = load_model(""checkpoints/model.h5"")
+
+"
+"['deep-learning', 'datasets', 'hyperparameter-optimization', 'cross-validation']"," Title: How to fairly conduct a model performance with 5-fold cross validation after augmentation?Body: I have, say, a (balanced) data-set with 2k images for binary classification. What I have done is that
+
+
+- randomly divided the data-set into 5 folds;
+- copy-pasted all 5-fold data-set to have 5 exact copies of data-set (folder_1 to folder_5, all absolutely same data-set)
+- first fold in folder_1 is saved as the test folder and the remaining folds (fold_2, fold_3, fold_4, fold_5) are combined as one train folder
+- second fold in folder_2 is saved as the test folder and the remaining folds (namely, fold_1, fold_3, fold_4, fold_5) are combined as one train folder
+- third fold in folder_3 is saved as the test folder and the remaining folds (namely, fold_1, fold_2, fold_4, fold_5) are combined as one train folder.
+- a similar process has been done on folder_4 and folder_5.
+
+
+I hope, by now, you got the idea of how I distributed the data-set.
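+
+A minimal sketch of the split described above (assuming scikit-learn's KFold and a list image_paths holding all 2k image paths):
+
+from sklearn.model_selection import KFold
+
+kf = KFold(n_splits=5, shuffle=True, random_state=42)
+for i, (train_idx, test_idx) in enumerate(kf.split(image_paths), start=1):
+    train_files = [image_paths[j] for j in train_idx]   # goes into folder_i's train folder
+    test_files = [image_paths[j] for j in test_idx]     # goes into folder_i's test folder
+    # augmentation is then applied to the train files only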
+
+The reason I did so is as follows:
+
+I have augmented the training data (the train folder) in each of the folders and used the respective test folders to evaluate (ROC-AUC score). So now I have 5 ROC-AUC scores, each evaluated on the corresponding test folder, and I take the average value of those 5 scores.
+
+(Assuming the above cross-validation process is done right) If I were to perform some manual hyperparameter optimizations (like an optimizer, learning rate, batch size, dropout, activation) and perform the above cross-validation with data augmentation and find the best so-called ""mean ROC-AUC"", does it mean I successfully conducted hyper-parameter optimization?
+
+FYI: I have no problem with computing power and/or time at all to loop through the hyper-parameters for this type of cross-validation with data augmentation.
+"
+"['convolutional-neural-networks', 'image-segmentation']"," Title: How do I generate a feature representation of a saliency map (or mask)?Body: Generally, CNNs are used to extract feature representations of an image. I'm right now dealing with the class of CNN that produces saliency maps, which are generally in the format of a mask. I'm trying to generate a feature representation of that specific Mask. What could be the best way to approach this problem?
+"
+"['neural-networks', 'convolutional-neural-networks', 'tensorflow']"," Title: Is this neural network architecture appropriate for CIFAR-10?Body: I have a CNN architecture for CIFAR-10 dataset which is as follows:
+
+
+ Convolutions: 64, 64, pool
+
+ Fully Connected Layers: 256, 256, 10
+
+ Batch size: 60
+
+ Optimizer: Adam(2e-4)
+
+ Loss: Categorical Cross-Entropy
+
+
+When I train this model, the training and testing accuracy, along with the loss, have a very jittery behavior and do not converge properly.
+
+Is the defined architecture correct? Should I have a max-pooling layer after every convolution layer?
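+
+For reference, this is roughly how I would write the architecture above in Keras (a sketch; the 3x3 kernels, 'same' padding and input shape are my assumptions, since I only listed the layer widths):
+
+from tensorflow.keras import layers, models, optimizers
+
+model = models.Sequential([
+    layers.Conv2D(64, (3, 3), activation='relu', padding='same', input_shape=(32, 32, 3)),
+    layers.Conv2D(64, (3, 3), activation='relu', padding='same'),
+    layers.MaxPooling2D((2, 2)),
+    layers.Flatten(),
+    layers.Dense(256, activation='relu'),
+    layers.Dense(256, activation='relu'),
+    layers.Dense(10, activation='softmax'),
+])
+model.compile(optimizer=optimizers.Adam(2e-4),
+              loss='categorical_crossentropy',
+              metrics=['accuracy'])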
+"
+"['machine-learning', 'reinforcement-learning']"," Title: Is there a family tree for reinforcement learning algorithms?Body: Can anyone point me in the direction of a nice graph that depicts the ""family tree"", or hierarchy, of RL algorithms (or models)? For example, it splits the learning into TD and Monte Carlo methods, under which is listed all of the algorithms with their respective umbrella terms. Beneath each algorithm is shown modifications to those algorithms, etc. I'm having difficulty picturing where everything lies within the RL landscape.
+"
+['convolutional-neural-networks']," Title: Can neural style transfer work on the image style in this question or is there a better technique?Body: I've been working with this neural style paper https://arxiv.org/pdf/1508.06576v2.pdf
+to try and transfer the style from this image to photos of pets. In case you're not familiar with the technique, I'll leave a brief explanation of it at the end.
+
+
+
+After taking the time to understand some of the concepts in the paper I've come to my own conclusion that the method won't work. Here's why I think this:
+
+
+- The style is not localised/fine-grained enough. If I take a small piece of this image, I might just get a solid color, or two colors separated by a zig-zag boundary. So the first few convolutional layers won't find any characteristics of interest.
+- The style depends on long-distance correlations (I made that term up). If you follow some of these zig-zag interfaces with your eyes you'll see that they traverse up to 1/3 of the characteristic size of the image. So bunches of pixels that take up a lot of space are correlated via the style. And from my intuition of CNNs, you can distill lots of pixels into coarser information (like ""this is a dog"") but you can't really go backwards. I don't think the CNN encodes the required logic to trace out a zig-zag over a long distance.
+- The color palette is restricted. I'm not sure why, but my intuition tells me the Neural Style Transfer technique won't be able to produce a restricted color palette like in the style example. I'm guessing again, that the CNN doesn't encode any logic around the number of colors used.
+
+
+So my question is around whether or not the technique could work, and if not, what's a better technique for this problem.
+
+(optional read) Summary of deep style transfer
+
+
+- Take a pre-trained model like VGG19.
+- Feed in a photo. For each conv layer you can reconstruct a content representation of what that layer encodes about the photo. You do this by treating the feature maps of that layer as a desired output, and doing gradient descent on a white noise image as the input, where the loss function is the RMS between the original photo and the generated image.
+- Feed in a painting. For each conv layer you can reconstruct a style representation of what that layer encodes about the painting in a similar way as step 2. This time though, your loss function is the RMS between the gram matrix of all the features maps produced using a white noise input, and the gram matrix of all the feature maps produced using the painting as input. And here, you sum loss over all features maps of prior layers as well, not just the layer you are considering now.
+- Jointly minimise the loss functions described in 2 and 3 (so you're minimising content loss and style loss together) in order to produce an image with the content of the photo and the style of the painting.
+
+
+EDIT
+
+I have tried this. Here is an example of my results (left is input, right is output). So it's kind of cool to see some color map reduction happening, and what kind of looks like accentuation of texture, but definitely not getting these illustrated zig zag boundaries that the human mind so readily perceives as fur.
+
+
+"
+"['reinforcement-learning', 'math', 'value-functions', 'probability-theory']"," Title: How can the V and Q functions take the expectation over a sum where the number of summands is random?Body: Assume the existence of a Markov Decision Process consisting of:
+
+- State space $S$
+- Action space $A$
+- Transition model $T: S \times A \times S \to [0,1]$
+- Reward function $R: S \times A \times S \to \mathbb{R}$
+- Distribution of initial state $p_0: S \to [0,1]$
+
+and a policy $\pi: S \to A$.
+The $V$ and $Q$-functions take expectations of the sum of future rewards.
+Let's start off by $r_0:= R(x_0,\pi(x_0),x_1)$, where $\pi$ is the current policy while $x_0 \sim p_0$ and $x_1 \sim T(x_0,\pi(x_0),-)$ are random variables. With setting $\mu_i:= T(x_i,\pi(x_i),-),\rho_i:=R(x_i,\pi(x_i),-)$, I obtain
+$$E[r_0]= \int_{\mathbb{R}} r d\mu_0^{\rho_0} = \int_S R(x_0,\pi(x_0),-)d\mu_0,$$
+where $\mu_i^{\rho_i}:= \mu_i\circ \rho_i^{-1}$ is the pushforward of $\mu_i$ under the random variable $\rho_i$. But the above quantity still depends on $x_0$, as both $\mu_0$ and $\rho_0$ depend on $x_0$. Intuitively, I would guess that one has to calculate the integral over every occurring random variable to obtain the overall expectation, i.e. $$E[r_0]= \int_S\int_S R(x_0,\pi(x_0),x_1)d\mu_0(x_1)dp_0(x_0).$$ Is that correct?
+Now, the $V$ and $Q$-functions take the expectation over the sum $R_{\tau} = \sum^T_{t=\tau}\gamma^{t-\tau}r_t$, where the instant of termination $T$ itself is a random variable, and, besides that, the agent does not know its distribution, as it is not even included in the MDP model.
+How can I take the expectation over a sum where the number of summands is random?
+We cannot just calculate $\sum^{E[T]}(\dots)$, because $E[T]$ might not even be an integer.
+"
+"['reinforcement-learning', 'comparison', 'minimax', 'model-based-methods', 'model-free-methods']"," Title: Is the minimax algorithm model-based?Body: Trying to get my head around model-free and model-based algorithms in RL. In my research, I've seen the search trees created via the minimax algorithm. I presume these trees can only be created with a model-based agent that knows the full environment/rules of the game (if it's a game)? If not, could you explain to me why?
+"
+"['machine-learning', 'regularization', 'support-vector-machine', 'adversarial-ml', 'causation']"," Title: How do I poison an SVM with manifold regularization?Body: I'm working on Adversarial Machine Learning, and have read multiple papers on this topic, some of them are mentioned as follows:
+
+
+
+However, I am not able to find any literature on data poisoning for SVMs using Manifold regularization. Is there anyone who has knowledge about that?
+"
+"['natural-language-processing', 'recurrent-neural-networks']"," Title: How to produce documents like factset blackline?Body: Factset blackline reports essentially can compare two 10-Q SEC filings and show you the difference between the two documents. It highlights added items in green and removed items in red + strikethrough (essentially, it's a document difference, but longer-term I would like to run algorithms on the differences).
+I don't care to change colors, but what I would like to do is to produce similar extracts that summarize addition and deletions.
+Which AI/ML algorithm could do the same?
+"
+['neural-networks']," Title: In neural networks, what does the term depth generally mean?Body: Is it
+
+
+- number of units in a layer
+- number of layers
+- overall complexity of the network (both 1 and 2)
+
+"
+"['math', 'multi-agent-systems']"," Title: Formal proof that every purely reactive agent has behaviorally equivalent standard agentBody: It kind of makes sense intuitively but I'm not sure about a formal proof. I'll start with briefly listing definitions from Intro to Multiagent systems, Wooldridge, 2002 and then give you my reasoning attempts thus far.
+
+$E$ is a finite set of discrete, instantaneous states, $E=(e, e',...)$. $Ac$ is a repertoire of possible actions (also finite) available to an agent, which transform the environment, $Ac=(\alpha, \alpha', ...)$. A run is a sequence of interleaved environment states and actions, $r=(e_0, \alpha_0, e_1, \alpha_1,..., \alpha_{u-1}, e_u)$, set of all such possible finite sequences (over $E$ and $Ac$) is $R$, $R^E$ is a subset of $R$ containing the runs that end with an env. state.
+
+Purely reactive agent is modeled as: $Ag_{pure}: E\mapsto Ac$, a standard agent is modeled as $Ag_{std}: R^E\mapsto Ac$.
+
+So, if $R^E$ is a sequence of agent's actions and environment states, then it just makes sense that $E\subset R^E$. Hence, $Ag_{std}$ can map to every action to which $Ag_{pure}$ can. And behavioral equivalence with respect to environment $Env$ is defined as $R(Env, Ag_{1}) = R(Env, Ag_{2})$; where $Env=\langle E,e_{0},t \rangle$, $e_{0}$ - initial environment state, $t$ - transformation function (definition irrelevant for now).
+
+Finally, if $Ag_{pure}: E\mapsto Ac$ and $Ag_{std}: R^E\mapsto Ac$, and $E\subset R^E$, we can say that $R(Env,Ag_{pure}) = R(Env, Ag_{std})$ (might be too bold of an assumption). Hence, every purely reactive agent has behaviorally equivalent standard agent. The opposite might not be true, since $E\subset R^E$ means that all elements of $E$ belong to $R^E$, while not all elements $R^E$ belong to $E$.
+
+It's a textbook problem, but I couldn't find an answer key to check my solution. If anyone has formally (and perhaps mathematically) proven this before, can you post your feedback, thoughts, proofs in the comments? For instance, set of mathematical steps to infer $E\subset R^E$ from their definitions: $E=(e_{0}, e_{1},..., e_{u})$ and $R^E$ is ""all agent runs that end with an environment state"" (no formal equation found) is not clear to me.
+"
+"['neural-networks', 'machine-learning', 'statistical-ai']"," Title: Is traditional machine learning obsolete given that neural networks typically outperform them?Body: I have been coming across visualizations showing that the neural nets tend to perform better as compared to the traditional machine learning algorithms (Linear regression, Log regression, etc.)
+
+Assuming that we have sufficient data to train deep/neural nets, can we ignore the traditional machine learning topics and concentrate more on the neural network architectures?
+
+Given the huge amount of data, are there any instances where traditional algorithms outperform neural nets?
+"
+"['convolutional-neural-networks', 'classification']"," Title: Can I provide a CNN with hints?Body: Let's say I want to classify a dataset of handwritten digits (CNNs on their own can get 99.7% on the MNIST dataset but let's pretend they can only get 90% for the sake of this question).
+
+Now, I already have some classical computer vision techniques which might be able to give me a clue. For instance, I can count the intersection points of the pen stroke
+
+
+- 1,2,3,5,7 will usually have no intersection points
+- 6,9 will usually have one intersection point each
+- 4,8 will usually have two intersection points each (usually a 4-way crossover yields two intersection points which are close together)
+
+
+So if I generate some meta-data telling me how many intersection points each sample has, how can I feed that into the CNN training so that it can take advantage of that knowledge?
+
+My best guess is to slot it into the last fully connected layer just before classification.
+"
+"['intelligent-agent', 'utility']"," Title: What happens if an agent has two contrasting utility functions?Body: What would happen if an agent were programmed with two utility functions which are one the opposite of the other? Would one prevail or will they cancel each other out? Are there studies in this direction?
+"
+['loss']," Title: Crossing of training and validation lossBody: During training of my models I often encounter the following situation about training (green) and validation (gray) loss:
+
+
+
+Initially, the validation loss is significantly lower than the training loss. How is this possible? Does this tell me anything important about my data or model?
+
+One explanation might be that training and validation data are not properly split, i.e. the validation set might primarily contain data that the model can easily represent. But then why do the curves cross after epoch 30? If this is because of overfitting, then I would expect the validation loss to increase, but so far both losses are (slowly) decreasing.
+
+There is a related question at Data Science SE, but it doesn't give a clear answer.
+"
+"['neural-networks', 'tensorflow', 'comparison', 'regularization', 'early-stopping']"," Title: What is the difference between TensorFlow's callbacks and early stopping?Body: What is the difference between TensorFlow's callbacks and early stopping?
+"
+['convolutional-neural-networks']," Title: Specifying resolution for objects with known dimensions using CNNBody: I would like to ask you for advice. I deal with beekeeping but I am also a bit a programmer and an electronics specialist. And this is where my 3 interests come together, actually 4 because recently deep learning has joined this group.
+I would like to analyze (I already run some tests) bee behavior using deep learning mechanisms based on images (photos or videos) with bees. These images, of course, can be very different, this applies primarily to the area that can be shown. It can literally be 2x3cm like a macro picture or an area of 40x30cm and 3000 bees on it. Trying to create a network analyzing such different areas is a nightmare. And because I am interested in aspects of bees separately, it seems logical to divide the image into these parts, which will contain single bees in their entirety, reduce their resolution to a minimum ensuring recognition of the necessary details and only then analyze each part of the large image separately.
+This approach seems even more right to me when I started to read the details, e.g. YOLOv3. Thanks to this, I would avoid a large amount of additional calculations.
+
+And here comes the first stage to be implemented. I need to specify the image resolution, more precisely the number of pixels that on average falls on one bee length. If it is too small, I do nothing but say that the resolution is too small for analysis. If it is suitable (at least 50 pixels for the length of the bee), I algorithmically cut the input image into smaller ones so that they contain individual insects and I subject them to further analysis.
+
+Fortunately, the sizes of bees and bee cells are precisely known to within 0.1mm. And there may be hundreds of such objects (a bee or a hexagonal bee cell), and I can decide how many of them will be present at the stage of preparing training data. I have thousands of photos and videos from which I can easily create training data for the network, in the form of pairs: an image of the objects and the bee size expressed in pixels in that image.
+
+Time for question:
+At the output I want to get a number saying that one bee in the image is X pixels long, e.g. 43.02 or 212.11
+Has anyone of you dealt with a similar case of determining resolution for known objects using a neural network (probably CNN with an output element with rel function) and can share your experience, e.g. a network structure that would be suitable for this purpose?
+"
+"['natural-language-processing', 'classification', 'text-classification']"," Title: When is it time to switch to deep neural networks from simple networks in text classification problems?Body: I did an out of domain detection task (as a binary classification problem) and tried LR and Naive Bayes and BERT but the deep neural network didn't perform better than LR and NB. For the LR I just used BOW and it beats the 12-layer BERT.
+
+In a lecture, Andrew Ng suggests ""Build First System Quickly, Then Iterate"", but it turns out that sometimes we don't need to iterate the model into a deep neural network and most of the time traditional shallow neural networks are good/competitive enough and much simpler for training.
+
+As this tweet (and its replies) indicate, together with various papers [1, 2, 3, 4 etc], traditional SVM, LR, and Naive Bayes can beat RNN and some complicated neural networks.
+
+Then my two questions are:
+
+
+- When should we switch to complicated neural networks like RNN, CNN, and transformer and etc? How can we see that from the data set or the results (by doing error analysis) of the simple neural networks?
+- The aforementioned experiments may be caused by the simple test set, then (how) is it possible for us to design a test set that can fail the traditional models?
+
+"
+"['natural-language-processing', 'word-embedding']"," Title: Using word embedding to extend words for searching POI namesBody: I am developing my own mobile app related to digital map. One of the functions is searching POIs (points of interest) in the map according to relevance between user query and POI name.
+
+Besides the POIs whose names contain exact words in the query, the app also needs to return those whose names are semantically related. For example, searching 'flower' should return POI names that contain 'flower' as well as those that contain 'florist'. Likewise, searching 'animal' should return 'animal' as well as 'veterinary'.
+
+That said, I need to extend words in the query semantically. For example, 'flower' has to be extended to ['flower', 'florist']. I have tried to use word embeddings: using the words corresponding to most similar vectors as extensions. Due to the fact I don't have user review data right now and most of the POI names are very short, I used trained word2vec model published by Google. But the results turn out to be not what I expect: most similar words of 'flower' given by word2vec are words like 'roses'and 'orchid', and 'florist' is not even in the top 100 most similar list. Likewise, 'animal' gives 'dog', 'pets', 'cats' etc. Not very useful for my use case.
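+
+Concretely, this is the kind of lookup I tried (a sketch; the file name is a placeholder for the pre-trained GoogleNews vectors I loaded with gensim):
+
+from gensim.models import KeyedVectors
+
+wv = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
+print(wv.most_similar('flower', topn=10))   # returns 'flowers', 'roses', ... but no 'florist'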
+
+I think simply using word embedding similarity may not be enough. I may need to build some advanced model based on word embedding. Do you have any suggestions?
+"
+"['neural-networks', 'deep-learning', 'training', 'objective-functions']"," Title: How to reduce fluctuation of a neural network?Body: I've modeled an AlexNet neural network, with 50 epochs and a batch size of 64. I used a stochastic gradient descent optimizer with a learning rate of 0.01. I attached the train and validation loss and accuracy plots.
+
+
+
+
+
+How can I reduce the fluctuation of the first epochs?
+"
+"['monte-carlo-tree-search', 'combinatorics', 'inductive-programming', 'program-synthesis']"," Title: How can I reduce combinatorial explosion in an MCTS-like algorithm for program induction?Body: I'd like to develop an MCTS-like (Monte Carlo Tree Search) algorithm for program induction, i.e. learning programs from examples.
+
+My initial plan is for nodes to represent programs and for the search to expand nodes by revising programs.
+
+Many of these expansions revise a single program: randomly resample a subtree of the program, replace a constant with a variable, etc. It looks straightforward to use these with MCTS.
+
+Some expansions, however, generate a program from scratch (e.g. sample a new program). Others use two or more programs to generate a single output program (e.g. crossover in Genetic Programming).
+
+These latter types of moves seem nonstandard for vanilla MCTS.
+
+One idea I've had is to switch from nodes representing programs to nodes representing tuples of programs. The root node would represent the empty tuple $()$, to which expansions could be applied only if they can generate a program from scratch. The first such expansion would produce some program $p$, so the root would now have child $(p)$. The second expansion would produce $p'$, so the root would now also have child $(p')$ as well as the pair $(p, p')$. Even assuming some reasonable restrictions (e.g. moves can use at most 2 programs, pairs cannot have identical elements, element order doesn't matter), the branching factor will grow combinatorially.
+
+What techniques from the MCTS literature (or other literatures) might reduce the impact of this combinatorial explosion?
+"
+"['machine-learning', 'convolutional-neural-networks', 'computer-vision', 'datasets']"," Title: Weird border artifacts when training a CNNBody: I've been trying to use this DeepLabv3+ implementation with my dataset (~1000 annotated images of the same box, out of the same video sequence): https://github.com/srihari-humbarwadi/person_segmentation_tf2.0
+
+But I get border artifacts like this:
+
+
+
+Any ideas what could be causing it? Note that if I use bigger batches and train for more epochs, the borders tend to get thinner but never disappear. They also appear randomly around the image.
+
+Any clues what could be causing them and how to solve it?
+"
+"['reinforcement-learning', 'deep-rl', 'monte-carlo-methods', 'model-free-methods', 'policy-evaluation']"," Title: How does policy evaluation work for continuous state space model-free approaches?Body: How does policy evaluation work for continuous state space model-free approaches?
+
+Theoretically, a model-based approach for the discrete state and action space can be computed via dynamic programming and solving the Bellman equation.
+
+Let's say you use a DQN to find another policy, how does model-free policy evaluation work then? I am thinking of Monte Carlo simulation, but that would require many many episodes.
+"
+"['reinforcement-learning', 'ai-design', 'open-ai', 'gym']"," Title: How to define an action space when an agent can take multiple sub-actions in a step?Body: I'm attempting to design an action space in OpenAI's gym and hitting the following roadblock. I've looked at this post which is closely related but subtly different.
+
+The environment I'm writing needs to allow an agent to make between $1$ and $n$ sub-actions in each step. Leaving it up to the agent to decide how many sub-actions it wants to take. So, something like (sub-action-category, sub-action-id, action) where the agent can specify between $1$ and $n$ such tuples.
+
+It doesn't seem possible to define a Box space without specifying bounds on the shape, which is what I need here. I'm trying to avoid defining an action space where each sub-action is explicitly enumerated by the environment, like an (action) tuple with n entries for each sub-action.
+
+Are there any other spaces I could use to dynamically scale the space?
+"
+"['deep-learning', 'reinforcement-learning', 'deep-rl']"," Title: How to estimate the error during training in deep reinforcement learningBody: How do I calculate the error during the training phase for deep reinforcement learning models?
+
+Deep reinforcement learning is not supervised learning as far as I know. So how can the model know whether it predicts right or wrong? In literature, I find that the ""actual"" Q-value is calculated, but that sounds like the whole idea behind deep RL is obsolete. How could I even calculate/know the real Q-value if there is not already a world model existing?
+"
+"['reinforcement-learning', 'convolutional-neural-networks', 'dqn', 'deep-rl', 'data-preprocessing']"," Title: Should I grey-scale the coloured frames/channels to build the approximation of the state?Body: I'm doing reinforcement learning, and I have a visual observation that I will use to build an input state for my agent. In the DeepMind's Atari paper, they greyscale the input image before they input it into the CNN to reduce the input space's size, which makes sense to me.
+In my environment, I have, for each pixel, 5 possible channels, which are represented in black, white, blue, red, and green. This also makes intuitive sense to me since it's like a bit-encoding.
+Any thoughts on what could be better? Greyscaling into 2 shades of grey and black and white also maintains the information, but feels somehow less direct, since my environment's visual space is categorical, which makes more sense in a categorical encoding.
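+A sketch of the categorical encoding I have in mind (the palette values are placeholders for my environment's actual colours):
+
+import numpy as np
+
+PALETTE = np.array([[0, 0, 0],        # black
+                    [255, 255, 255],  # white
+                    [0, 0, 255],      # blue
+                    [255, 0, 0],      # red
+                    [0, 255, 0]])     # green
+
+def to_onehot(frame):
+    # frame: (H, W, 3) uint8 image that only uses the palette colours
+    matches = (frame[:, :, None, :] == PALETTE[None, None, :, :]).all(axis=-1)
+    return matches.astype(np.float32)  # (H, W, 5) one-hot planes, one per colour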
+"
+"['reinforcement-learning', 'rewards', 'apprenticeship-learning']"," Title: Does apprenticeship learning require prospective data?Body: I am thinking of applying apprenticeship learning on retrospective data. From looking at this paper by Ng https://ai.stanford.edu/~ang/papers/icml04-apprentice.pdf which talks about apprenticeship learning, it seems to me that at the 5th step of the algorithm,
+
+
+- Compute (or estimate) $\mu^{(i)} = \mu(\pi^{(i)})$,
+where $\mu^{(i)} = E[\sum_{t=0}^{\infty}\gamma^{t}\phi(s_{t}) \mid \pi^{(i)}]$ and $\phi(s_{t})$ is the reward feature vector at state $s_t$.
+
+
+From my understanding, a sequence of $s_0, s_1, s_2 ..$ trajectory would have to be generated at this step, following this policy $\pi^{(i)}$. Hence, applying this algorithm on retrospective data would not work?
+"
+"['long-short-term-memory', 'pytorch', 'time-series', 'attention']"," Title: time-series prediction : loss going down, then stagnates with very high varianceBody: I am trying to design a model based on LSTM cells to do time-series prediction. The output value is an integer in [0,13]. I have noticed that one-hot encoding it and using cross-entropy loss gives better results than MSE loss.
+
+Here is my problem : no matter how deep I make the network or how many fully connected layers I add I always obtain pretty much the same behavior. Changing the optimizer also doesn't really help.
+
+
+- The loss function quickly decreases then stagnates with a very high variance and never goes down again.
+- The prediction seems to be offset around the value 9; I really do not understand why, since I have one-hot encoded the input and the output.
+
+
+Here is an example of the results of a typical training phase, with the total loss:
+
+Do you have any tips/ideas as to how I could improve this, or what could have gone wrong? I am a bit of a beginner in ML, so I might have missed something. I can also include the code (in PyTorch) if necessary.
+"
+"['convolutional-neural-networks', 'explainable-ai']"," Title: Can a NN be configured to indicate which points of the input influenced its prediction and how?Body: Suppose I want to classify a dataset like the MNIST handwritten dataset, but it has added distractions. For example, here we have a 6 but with extra strokes around it that don't add value.
+
+
+
+I suppose a good model would predict a 6, but maybe with less than 100% certainty (or maybe with 100% certainty - I don't know that it matters for the purpose of this question).
+
+Is there any way to get information about which pixels most strongly influenced the decision of the CNN, and which pixels were not so important? So to represent that visually, green means that those pixels were important:
+
+
+
+Or conversely, is it possible to highlight pixels which did not contribute to the outcome (or which cast doubt on the outcome thereby reducing the certainty from 100%)
+
+
+"
+"['natural-language-processing', 'tensorflow', 'recurrent-neural-networks', 'long-short-term-memory']"," Title: Simple sequential model with LSTM which doesn't convergeBody: I'm actually trying to create a sequential neural network in order to translate a ""human"" sentence into a ""machine"" sentence understandable by an algorithm. Since that didn't work, I've tried to create a NN that understands whether the input is a unit or not.
+
+Even this NN doesn't work, and I don't understand why. I tried with different optimizers/losses/metrics, with an RNN and with an LSTM.
+
+So there is one array of unit and one array with lambda words.
+I send to the NN :
+
+input -> word in OneHotEncoding where each char is a vector
+
+output -> a vector with a 1 at the relative position of the unit in the array, e.g. [0,0,0,1,0]. If it's not a unit, the vector is composed of 0s.
+
+I'm actually using LSTM layers and sigmoid activation because I will need it for my ""big"" NN.
+
+
+
+model = tf.keras.Sequential()
+model.add(tf.keras.Input(shape=(13,vocab_size),batch_size=80))
+model.add(tf.keras.layers.LSTM(32, return_sequences=True))
+model.add(tf.keras.layers.LSTM(6, return_sequences=False))
+model.add(tf.keras.layers.Dense(6, activation=""sigmoid""))
+model.reset_states()
+model.summary()
+model.compile(optimizer= 'adam', loss='categorical_crossentropy', metrics=['categorical_accuracy'])
+model.fit(encoded_word, my_targets, batch_size=80, epochs=100, validation_data=(encoded_word, my_targets))
+
+
+
+Model: ""sequential_38""
+_________________________________________________________________
+Layer (type) Output Shape Param #
+=================================================================
+lstm_57 (LSTM) (80, 13, 32) 6400
+_________________________________________________________________
+lstm_58 (LSTM) (80, 6) 936
+_________________________________________________________________
+dense_32 (Dense) (80, 6) 42
+=================================================================
+Total params: 7,378
+Trainable params: 7,378
+Non-trainable params: 0
+
+
+Epoch 1/100
+10000/10000 [==============================] - 6s 619us/sample - loss: 0.7441 - categorical_accuracy: 0.2898 - val_loss: 0.6181 - val_categorical_accuracy: 0.3388
+Epoch 2/100
+10000/10000 [==============================] - 2s 233us/sample - loss: 0.5768 - categorical_accuracy: 0.3388 - val_loss: 0.5382 - val_categorical_accuracy: 0.3388
+Epoch 3/100
+10000/10000 [==============================] - 2s 229us/sample - loss: 0.5039 - categorical_accuracy: 0.3979 - val_loss: 0.4640 - val_categorical_accuracy: 0.5084
+Epoch 4/100
+10000/10000 [==============================] - 2s 229us/sample - loss: 0.4207 - categorical_accuracy: 0.4759 - val_loss: 0.3709 - val_categorical_accuracy: 0.5041
+
+
+My NN is converging towards 0.5 all the time.
+
+Thank you in advance for your answers !
+"
+"['deep-learning', 'convolutional-neural-networks', 'deep-neural-networks', 'vanishing-gradient-problem']"," Title: How to detect vanishing gradients?Body: Can vanishing gradients be detected by the change in distribution (or lack thereof) of my convolution's kernel weights throughout the training epochs? And if so how?
+For example, if only 25% of my kernel's weights ever change throughout the epochs, does that imply an issue with vanishing gradients?
+Here are my histograms and distributions, is it possible to tell whether my model suffers from vanishing gradients from these images? (some middle hidden layers omitted for brevity)
+
+
+
+
+"
+"['neural-networks', 'keras', 'function-approximation', 'model-request']"," Title: Which neural network can approximate the function $y = x^2 + b$?Body: I am new to ANN. I am trying out several 'simple' algorithms to see what ANN can (or cannot) be used for and how. I played around with Conv2d once and had it recognize images successfully. Now I am looking into trend line analyses. I have succeeded in training a network where it solved for linear equations. Now I am trying to see if it can be trained to solve for $y$ in the formula $y = b + x^2$.
+No matter what parameters I change, or the number of dense layers, I get high values for loss and validation loss, and the predictions are incorrect.
+Is it possible to solve this equation, and with what network? If it is not possible, why not? I am not looking to solve a practical problem, rather build up understanding and intuition about ANNs.
+See the code I tried with below
+#region Imports
+from __future__ import absolute_import, division, print_function, unicode_literals
+import math
+import numpy as np
+import tensorflow as tf
+from tensorflow.keras import models, optimizers
+from tensorflow.keras.models import Sequential
+from tensorflow.keras.layers import Activation, Dropout, Flatten, Dense, Lambda
+import tensorflow.keras.backend as K
+#endregion
+
+#region Constants
+learningRate = 0.01
+epochs: int = 1000
+batch_size = None
+trainingValidationFactor = 0.75
+nrOfSamples = 100
+activation = None
+#endregion
+
+#region Function definitions
+def CreateNetwork(inputDimension):
+ model = Sequential()
+ model.add(Dense(2, input_dim=2, activation=activation))
+ model.add(Dense(64, use_bias=True, activation=activation))
+ model.add(Dense(32, use_bias=True, activation=activation))
+ model.add(Dense(1))
+ adam = optimizers.Adam(learning_rate=learningRate)
+ # sgd = optimizers.SGD(lr=learningRate, decay=1e-6, momentum=0.9, nesterov=True)
+ # adamax = optimizers.Adamax(learning_rate=learningRate)
+ model.compile(loss='mse', optimizer=adam)
+ return model
+
+def SplitDataForValidation(factor, data, labels):
+ upperBoundary = int(len(data) * factor)
+
+ trainingData = data[:upperBoundary]
+ trainingLabels = labels[:upperBoundary]
+
+ validationData = data[upperBoundary:]
+ validationLabels = labels[upperBoundary:]
+ return ((trainingData, trainingLabels), (validationData, validationLabels))
+
+def Train(network, training, validation):
+ trainingData, trainingLabels = training
+ history = network.fit(
+ trainingData
+ ,trainingLabels
+ ,validation_data=validation
+ ,epochs=epochs
+ ,batch_size=batch_size
+ )
+
+ return history
+
+def subtractMean(data):
+ mean = np.mean(data)
+ data -= mean
+ return mean
+
+def rescale(data):
+ max = np.amax(data)
+ factor = 1 / max
+ data *= factor
+ return factor
+
+def Normalize(data, labels):
+ dataScaleFactor = rescale(data)
+ dataMean = subtractMean(data)
+
+ labels *= dataScaleFactor
+ labelsMean = np.mean(labels)
+ labels -= labelsMean
+
+def Randomize(data, labels):
+ rng_state = np.random.get_state()
+ np.random.shuffle(data)
+ np.random.set_state(rng_state)
+ np.random.shuffle(labels)
+
+def CreateTestData(nrOfSamples):
+ data = np.zeros(shape=(nrOfSamples,2))
+ labels = np.zeros(nrOfSamples)
+
+ for i in range(nrOfSamples):
+ for j in range(2):
+ randomInt = np.random.randint(1, 5)
+ data[i, j] = (randomInt * i) + 10
+ labels[i] = data[i, 0] + math.pow(data[i, 1], 2)
+
+ Randomize(data, labels)
+ return (data, labels)
+#endregion
+
+allData, allLabels = CreateTestData(nrOfSamples)
+Normalize(allData, allLabels)
+training, validation = SplitDataForValidation(trainingValidationFactor, allData, allLabels)
+
+inputDimension = np.size(allData, 1)
+network = CreateNetwork(inputDimension)
+
+history = Train(network, training, validation)
+
+prediction = network.predict([
+ [2, 2], # Should be 2 + 2 * 2 = 6
+ [4, 7], # Should be 4 + 7 * 7 = 53
+ [23, 56], # Should be 23 + 56 * 56 = 3159
+ [128,256] # Should be 128 + 256 * 256 = 65664
+])
+print(str(prediction))
+
+"
+"['deep-learning', 'autoencoders', 'recommender-system']"," Title: How to get top 5 movies recommendations from Auto-EncoderBody: I have trained a model using Auto-encoder on movielens dataset. Below is how i trained the model.
+
+
+
+r = model.fit_generator(
+ generator(A, mask),
+ validation_data=test_generator(A_copy, mask_copy, A_test_copy, mask_test_copy),
+ epochs=epochs,
+ steps_per_epoch=A.shape[0] // batch_size + 1,
+ validation_steps=A_test.shape[0] // batch_size + 1,
+)
+
+
+It is giving good results, but now I am confused about how I should get the top 5 recommendations for a given user input.
+
+Just wanted to print the result on console. Can anyone help me please?
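+
+This is roughly what I am trying to write (a sketch; user_row is one user's rating vector in the same format as a row of A, and seen marks the movies that user has already rated):
+
+import numpy as np
+
+pred = model.predict(user_row[None, :])[0]   # reconstructed ratings for every movie
+pred[seen] = -np.inf                         # do not recommend movies the user already rated
+top5 = np.argsort(pred)[-5:][::-1]           # indices of the 5 highest predicted ratings
+print(top5)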
+"
+"['reinforcement-learning', 'tensorflow', 'keras', 'game-ai']"," Title: Can we use a neural network that is trained using Reinforcement Learning for dynamic game level difficulty designing in realtime?Body: I am a newbie to Machine Learning and AI. As per my understanding, with the use of reinforcement learning
+(reward/punishment environment), we can train a neural network to play a game. I would like to know whether it is possible to use this trained model for deciding the difficulty of the next game level dynamically, in real time, according to a player's skill level. As an example, please consider a neural network trained using reinforcement learning to play a mobile game (chess/puzzle, etc.). The game does not consist of a previously designed static set of game levels. After the training, can this model be used to detect a particular player's playing style (score, elapsed time) to dynamically decide the difficulty of the next game level and provide customized game levels for each player in real time?
+
+Thank you very much and any help will be greatly appreciated.
+"
+"['machine-learning', 'classification', 'learning-algorithms', 'probability-distribution']"," Title: Why does the machine learning algorithm need to learn a set of functions in the case of missing data?Body: I am currently studying the textbook Deep Learning by Goodfellow, Bengio, and Courville. Chapter 5.1 Learning Algorithms says the following:
+
+
+ Classification with missing inputs: Classification becomes more challenging if the computer program is not guaranteed that every measurement in its input vector will always be provided. To solve the classification task, the learning algorithm only has to define a single function mapping from a vector input to a categorical output. When some of the inputs may be missing, rather than providing a single classification function, the learning algorithm must learn a set of functions. Each function corresponds to classifying $\mathbf{x}$ with a different subset of its inputs missing. This kind of situation arises frequently in medical diagnosis, because many kinds of medical tests are expensive or invasive. One way to efficiently define such a large set of functions is to learn a probability distribution over all the relevant variables, then solve the classification task by marginalizing out the missing variables. With $n$ input variables, we can now obtain all $2^n$ different classification functions needed for each possible set of missing inputs, but the computer program needs to learn only a single function describing the joint probability distribution. See Goodfellow et al. (2013b) for an example of a deep probabilistic model applied to such a task in this way. Many of the other tasks described in this section can also be generalized to work with missing inputs; classification with missing inputs is just one example of what machine learning can do.
+
+
+I was wondering if people would please help me better understand this explanation. Why is it that, when some of the inputs are missing, rather than providing a single classification function, the learning algorithm must learn a set of functions? And what is meant by ""each function corresponds to classifying $\mathbf{x}$ with a different subset of its inputs missing.""?
+
+I would greatly appreciate it if people would please take the time to clarify this.
+"
+"['deep-learning', 'convolutional-neural-networks', 'keras', 'test-datasets', 'standardisation']"," Title: Wouldn't training the model with this data lead to inaccuracies since the testing data would not be normalized in a similar way?Body: I was trying to normalize my input data images for feeding to my convolutional neural network and wanted to use standardize my input data.
+I referred to this post, which says that featurewise_center and featurewise_std_normalization scale the images to the range [-1, 1].
+Wouldn't training the model with this data lead to inaccuracies, since the testing data would not be normalized in a similar way and would only range between [0, 1]? How can I standardize my data while keeping its range between [0, 1]?
+"
+"['neural-networks', 'machine-learning', 'semi-supervised-learning']"," Title: Semi-supervised: Can I predict the label of purposely unlabelled observations?Body: Let's say I have a data set with of length N. A small proportion N2 is labeled. Can I remove some labels and then 'reverse' this action with a trained neural network? I could then use the same process to fill the other (N - N2) rows with labels.
+"
+"['machine-learning', 'natural-language-processing']"," Title: Are POS tagging, Chunking, Disambiguation, etc. subtasks of annotation?Body: I wonder about the legitimacy of using the terms ""POS tagging"", ""Chunking"", ""Disambiguation"" and ""Categorization"" to describe an activity that doesn't include writing code and database queries, or interacting with the NLP algorithm and database directly.
+
+More specifically, let's suppose I use the following tools:
+
+
+- an ""Annotator"" for analyzing the input text (e.g. sentences copypasted from online newspapers) and choose and save proper values as regards to ""POS"" of tokens and words, ""Words""(entities and collocations) and ""Chunk"". Tokens are already detected by default. I have to decide which words are entities and/or collocations or not and their typology, though. May the performed tasks be called ""POS tagging"", ""Chunking"" and ""support to categorization""?
+- A knowledge base, for searching and choosing the proper synsets of the lemmas and assigning them to the words analyzed in the previous Annotation tool. May such a task be called ""Disambiguation""?
+- A graphical user interface which shows how the NLP analyzes by default the input texts as regards to Lemmas, POS, Chunks, Senses, entities, domains, main concepts, dependency tree, in order to make analyses consistent with it.
+
+
+If I want to define these activities in a few words, ""Machine Learning annotation"" may be the most correct.
+
+But what if I want to be more specific? I don't know whether or not the terms ""POS tagging"", ""Chunking"", ""Disambiguation"" and ""Support to categorization"" may be appropriate for they generally come within ""programming contexts"", as far as I know. In other terms, do they involve writing algorithms and programming or are they / may they be referred to the ""less-technical"" activities described above?
+"
+"['machine-learning', 'generative-adversarial-networks', 'generative-model', 'geometric-deep-learning']"," Title: Which generative methods are better for generating graphs, while preserving node and edge labels?Body: I started to dig into the topic of graph generation and I have a question - which out of generative methods (autoregressive, variational autoencoders, GANs, any other?) are better for generating graphs while preserving both node and edge labels? (in other words, I want to generate graphs with node and edge labels, similar to what I have in my dataset)
+
+I checked over some papers, but I didn't find exact clues about how to choose the method for graph generation. If anybody is more familiar with the topic, I'll appreciate any link to papers or any advice/recommendations.
+"
+"['machine-learning', 'convolutional-neural-networks', 'convolution']"," Title: How do I optimize the number of filters in a convolution layer?Body: I’m trying to figure out how to write an optimal convolutional neural network with respect to maximizing and minimizing filters in a convolution 2D layer. This is my thinking and I’m not sure if it's correct.
+
+If we have a dataset of 32x32 images, we could start with a Conv2D layer with a 3x3 filter and a 1x1 stride. Therefore, the number of positions this filter can take in a 32x32 image would be 30 along each axis, i.e. newImageX * newImageY, where
+
+newImageX = (imageX - filterX + 1)
+newImageY = (imageY - filterY + 1)
+
+
+Am I right in thinking that, because there are only newImageX * newImageY patterns in the 32x32 image, the maximum number of filters should be newImageX * newImageY, and any more would be redundant?
+
+So, the following code is the maximum possible filters given that we have 3x3 filter, 1x1 stride and 32x32 images?
+
+Conv2D((30*30), kernel=(3,3), stride=(1,1), input_shape=(32,32,1))
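+
+To make the arithmetic concrete, here is a minimal sketch (assuming tf.keras; not my real model) that just checks the resulting output shape:
+
+import tensorflow as tf
+
+model = tf.keras.Sequential([
+    tf.keras.layers.Conv2D(900, kernel_size=(3, 3), strides=(1, 1), input_shape=(32, 32, 1))
+])
+model.summary()  # output shape is (None, 30, 30, 900), since 32 - 3 + 1 = 30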
+
+
+Is there any reason to go above 30*30 filters, and are there any reasons to go below this number, assuming that kernel, stride and input_shape remain the same?
+
+If you knew you were looking for one specific filter, would you have to use the maximum amount of filters to ensure that the one you were looking for was present, or can you include it another way?
+"
+"['machine-learning', 'convolutional-neural-networks', 'training', 'overfitting', 'loss']"," Title: Why is the loss of one of the outputs of a model with multiple outputs increasing while the others are decreasing?Body: I'm a newbie in neural networks. I'm trying to fit my neural network that has 3 different outputs:
+
+
+- semantic segmentation,
+- box mask and
+- box coordinates.
+
+
+When my model is training, the loss of semantic segmentation and box coordinates are decreasing, but the loss of the box mask is increasing too much.
+
+My neural network is a CNN and it's based on Chargrid from here. The architecture is this:
+
+
+
+For semantic segmentation outputs, it's expected to have 15 classes, for box mask it's expected to have 2 classes (foreground vs background) and for box coordinates it's expected to have 5 classes (1 for each corner of bounding box + 1 for None).
+
+Loss details
+
+Step 1
+
+Here's the loss/accuracy for each of the three outputs at the end of the first step.
+
+
+- Semantic segmentation (Output 1) - 0.0181/0.958
+- Box mask (Output 2) - 13.88/0.946
+- Box coordinates (Output 3) - 0.2174/0.0000000867
+
+
+Last step
+
+Here's the loss/accuracy at the last step.
+
+
+- Semantic segmentation (Output 1) - 0.0157/0.963
+- Box mask (Output 2) - 73.02/0.935
+- Box coordinates (Output 3) - 0.06/0.82
+
+
+
+
+Is that normal? How can I interpret these results?
+
+I will leave the output of the model fit below.
+
+
+
+
+"
+"['reinforcement-learning', 'tensorflow', 'dqn', 'gym', 'sarsa']"," Title: Why isn't my implementation of DQN using TensorFlow on the FrozenWorld environment working?Body: I am trying to test DQN on FrozenWorld environment in gym using TensorFlow 2.x. The update rule is (off policy)
+$$Q(s,a) \leftarrow Q(s,a)+\alpha \left(r+\gamma \max_{a'}Q(s',a')-Q(s,a)\right)$$
+
+I am using an epsilon greedy policy.
+In this environment, we get a reward only if we succeed. So I explored 100% of the time until I had 50 successes. Then I saved the data from failures and successes in different bins. Then I sampled (with replacement) from these bins and used the samples to train the Q network. However, no matter how long I train, the agent doesn't seem to learn.
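+
+For reference, the target I fit the network towards follows the update rule above; schematically (NumPy-style pseudocode, not the actual Colab code):
+
+# for a sampled batch of transitions (s, a, r, s_next, done)
+q_next = q_network.predict(s_next)                      # Q(s', .) for every action
+target = r + (1.0 - done) * gamma * q_next.max(axis=1)  # r + gamma * max_a' Q(s', a')
+# the network is then fitted so that Q(s, a) for the taken action moves towards target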
+
+The code is available in Colab. I have been working on this for a couple of days.
+
+PS: I modified the code for SARSA and Expected SARSA; nothing works.
+"
+"['deep-learning', 'convolutional-neural-networks', 'pooling', 'fully-convolutional-networks', 'max-pooling']"," Title: Does a fully convolutional network share the same translation invariance properties we get from networks that use max-pooling?Body: Does a fully convolutional network share the same translation invariance properties we get from networks that use max-pooling?
+
+If not, why do they perform as well as networks which use max-pooling?
+"
+"['neural-networks', 'deep-learning']"," Title: Can neural network be trained to solve this problem?Body: I'm working on a problem that given a dataset; where each train example is a binary matrix $X_i$ with dimension $(N_i,D_i)$ (think a training example is a feature matrix) each entry is either 1 or 0.
+
+Also, each training example $X_i$ has a corresponding label $Y_i$, which is a correlation matrix of dimension $(D_i,D_i)$.
+
+My goal is to construct a model that takes the input $X_i$ and outputs a $\hat{Y}_i$ that matches $Y_i$.
+
+The major challenge here is that each training example $X_i$ can have a different dimensionality $(N_i,D_i)$: a different number of data points and a different number of variables.
+
+I'm wondering if there is any neural network architecture that can handle a case like this?
+"
+"['machine-learning', 'classification', 'accuracy', 'metric', 'multi-label-classification']"," Title: Why is there more than one way of calculating the accuracy?Body: Some sources consider the true negatives (TN) when computing the accuracy, while some don't.
+
+Source 1:
+https://medium.com/greyatom/performance-metrics-for-classification-problems-in-machine-learning-part-i-b085d432082b
+
+
+
+Source 2:https://www.researchgate.net/profile/Mohammad_Sorower/publication/266888594_A_Literature_Survey_on_Algorithms_for_Multi-label_Learning/links/58d1864392851cf4f8f4b72a/A-Literature-Survey-on-Algorithms-for-Multi-label-Learning.pdf
+
+
+
+which can be translated as
+
+
+
+Which one of these should I consider for my multi-label model?
+"
+"['reference-request', 'social', 'ethics', 'green-ai']"," Title: What AI applications exist to solve sustainability issues?Body: The Sustainable Development Goals of the United Nations describe a normative framework which states what future development until 2030 should strive for. On a more abstract level a basic definition describes sustainable development as
+
+
+ development that meets the needs of the present without compromising the ability of future generations to meet their own needs.
+
+
+Through their energy consumption alone, AI technologies already have a (negative) impact on sustainability.
+
+What AI applications already exist, are being researched, or are at least conceivable, from which sustainability would benefit?
+"
+"['neural-networks', 'training', 'accuracy', 'binary-classification', 'alexnet']"," Title: Why am I getting a difference between training accuracy and accuracy calculated with Keras' predict_classes on a subset of the training data?Body: I'm trying to solve a binary classification problem with AlexNet. I split the original dataset into training and validation datasets using a 70/30 ratio. I have trained my neural network with a dataset of 11200 images, and I obtained a training accuracy of 99% and a validation accuracy was 96%. At the end of the training, I saved my model's weights to a file.
+After training, I loaded the saved weights to the same neural network. I chose 738 images out of the 11200 training images, and I tried to predict the class of each of them with my model, and compare them with true labels, then again I calculated the accuracy percentage and it was 74%.
+What is the problem here? I guess its accuracy should be about 96% again.
+Here's the code that I'm using.
+prelist=[]
+for i in range(len(x)):
+ prediction = model.predict_classes(x[i])
+ prelist.append(prediction)
+count = 0
+for i in range(len(x)):
+ if(y[i] == prelist[i]):
+ count = count + 1
+test_precision = (count/len(x))*100
+print (test_precision)
+
+When I use predict_classes on the 11200 images that I used to train the neural network and compare the results with the true labels, the accuracy is 91%.
+"
+"['math', 'deep-rl', 'policy-gradients', 'stochastic-policy']"," Title: In the policy gradient equation, is $\pi(a_{t} | s_{t}, \theta)$ a distribution or a function?Body: I am learning about policy gradient methods from the Deep RL Bootcamp by Peter Abbeel and I am a bit stumbled by the math presented. In the lecture, he derives the gradient logarithm likelihood of a trajectory to be
+
+$$\nabla_\theta \log P(\tau^{i};\theta) = \sum_{t=0}^{T}\nabla_{\theta}\log\pi(a_{t}|s_t, \theta).$$
+
+Is $\pi(a_{t} | s_{t}, \theta)$ a distribution or a function? Because a derivative can only be taken wrt a function. My understanding is that $\pi(a_{t},s_{t}, \theta)$ is usually represented as a distribution of actions over states, since input of a neural network for policy gradient would be the $s_t$ and output would be $\pi(a_t, s_t)$, using model weights $\theta$.
+"
+"['heuristics', 'graphs', 'constraint-satisfaction-problems']"," Title: Can the degree and minimum remaining values heuristics be used in conjunction?Body: I am currently studying constraint satisfaction problems and have come across two heuristics for variable selection. The minimum remaining values(MRV) heuristic and the degree heuristic.
+
+The MRV heuristic tells you to choose a variable that has the least legal assignments, while the degree heuristic tells you to choose a variable that has the biggest effect on the remaining unassigned variables.
+
+Can these 2 heuristics for variable selection be used in conjunction with each other? Some books say that the degree heuristic can be used for the first variable selection. Which heuristic is better to follow after that: MRV or the degree heuristic?
+
+
+
+The picture shows the case where degree heuristic is used. If MRV were to be used, then at the 3rd step, the leftmost side of the map would be coloured blue.
+"
+"['neural-networks', 'comparison', 'recurrent-neural-networks', 'linear-regression', 'statistical-ai']"," Title: What is the difference between an generalised estimating equation and a recurrent neural network?Body: What is the difference between a generalised estimating equation (GEE) model and a recurrent neural network (RNN) model, in terms of what these two models are doing? Apart from the differences in the structure of these two models, where GEE is an extension of generalised linear model (GLM) and RNN is of a neural network structure, it seems to me that these 2 models are doing the same thing?
+"
+"['neural-networks', 'machine-learning']"," Title: Using tensor networks as machine learning modelsBody: Tensor networks (check this paper for a review) are a numerical method originally introduced in condensed matter physics to model complex quantum systems. Roughly speaking, such systems are described by a very high-dimensional tensor (where the indices take a number of values scaling exponentially with the number of system constituents) and tensor networks provide an efficient representation of the latter as an outer product and contraction of many low-dimensional tensors.
+
+More recently, a specific kind of tensor network (called Matrix Product State in physics) found interesting applications in machine-learning through the so-called Tensor-Train decomposition (I do not know of a precise canonical reference in this context, so I will abstain from citing anything).
+
+Now, over the last few years, several works from the physics community seemed to push for a generalized use of tensor networks in machine learning (see this paper, a second one and a third one and this article from Google AI for context). As a physicist, I am glad to learn that tools initially devised for physics may find interdisciplinary applications. However, at the same time, my critical mind tells me that from the machine learning research community's perspective, these results may not look that intriguing. After all, machine learning is now a very established field and it takes probably more than a suggestion for a new machine learning model and a basic benchmarking on a trivial dataset (as the MNIST one) -which is what the papers essentially do in my humble opinion- to attract any attention in the area. Besides, as I believe to know, there already exists quite a solid body of knowledge on tensor analysis techniques for machine learning (e.g. tensor decompositions), which may cast doubt on the originality of the contribution.
+
+I would therefore be very curious to have the opinion of machine learning experts on this line of research: is it really an interesting direction to look into, or is it just about surfing on the current machine learning hype with a not-so-serious proposal?
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks']"," Title: How is the receptive field of a CNN affected by transposed convolution?Body: When computing receptive field recursively through a CNN, does a transposed convolution affect the receptive field the same way that a convolution does if the kernel and stride is the same?
+"
+"['recurrent-neural-networks', 'long-short-term-memory', 'hyperparameter-optimization']"," Title: When stacking LSTM's, should the hidden units increase?Body: I'm using Weights and Biases to do some hyperparameter sweeping for a supervised sequence-to-sequence problem I'm working on. One thing I noticed is that the sweeps with a gradually increasing number of hidden units tend to have a lower validation loss:
+
+
+
+I'm wondering if this is generally true or just a function of my particular problem?
+"
+"['deep-learning', 'object-recognition', 'object-detection']"," Title: What is the best way to detect and recognize traffic signs in a picture?Body: I'm working on a project for my college to recognize traffic signs in pictures. I searched a lot but can't find the best method to do it.
+Can someone recommend a paper, an article, or even a GitHub link that describes the best way to achieve this? It would be helpful.
+"
+"['ai-design', 'activation-functions', 'data-preprocessing', 'text-classification', 'knowledge-base']"," Title: Language Learning feedback with AIBody: Is there a program under development that uses AI technology, like Siri, to ""hold hands"" so to speak with a language learner and coach them on accent, colloqiual expressions, or to let them guide the language learning process using an archive of language knowledge?
+Also, could this sort of program be used to learn things in a language one already knows, or in a new language, say for the purposes of travel or to learn about related hyperlinks in an online database?
+"
+"['reinforcement-learning', 'reference-request', 'proofs', 'convergence', 'temporal-difference-methods']"," Title: Is there a simple proof of the convergence of TD(0)?Body: Does anybody know a simple proof of the convergence of the TD(0) value function prediction algorithm?
+
+
+"
+"['reinforcement-learning', 'open-ai']"," Title: OpenAI spinning up convolutional networks with PPOBody: I am using pytorch version of PPO and I have image input that I need to process with convolutional neural networks, are there any examples on how to set up the network? I know that stable baselines support this to some extend, but I had better performance with spinning up so I would prefer to keep using these.
+"
+"['neural-networks', 'deep-learning', 'genetic-algorithms', 'neat', 'neuroevolution']"," Title: Are connections genes in a genome ever deleted or just disabled?Body: When a new node is added, the previous connection is disabled and not removed.
+
+
+
+Is there any situation in which a connection gene is removed? For example, in the above diagram connection gene with innovation number 2 is not present. It could be because some other genome used that innovation number for a different connection that isn't present in this genome. But are there cases where a connection gene has to be removed?
+"
+"['generative-model', 'image-generation']"," Title: Suggestion on image inpainting algorithmBody: Currently, many algorithms are available for image inpainting. In my application, I have some special restriction on training dataset-
+
+
+- Let's consider the training dataset of human facial images.
+- Although all human face has the same general structure, they may have subtle differences depending on racial characteristics.
+- Consider that in the training dataset, we have ten facial images from each race.
+
+
+Now, in my learning algorithm, can we come up with a two-step method, where in the first step we learn the general facial structure more accurately using all of the training data, and in the next step we learn the subtle features of each race by learning only from the ten images associated with that race? This might restore a distorted image more accurately.
+
+Suppose we have a distorted facial image of a person from race 'A,' where the nasal area of the image is lost. With the first step, we can learn the nasal structure more accurately by using all of the training data, and in the second step, using only the ten images associated with race 'A,' we can fine-tune the generated data. As we have only ten images for race 'A,' if we use only that small subset to learn the whole model, then we will probably not be able to capture the general structure of the face in the first place.
+
+P.S. I am not from a Computer Science/ML background, so my problem description is probably a little vague. It would be great if someone provided an edit/tag suggestion.
+"
+"['deep-learning', 'reinforcement-learning', 'deep-rl', 'ddpg']"," Title: How does adding noise to the action in DDPG help in learning?Body: I can't understand how playing with the action generated by the actor network in DDPG by adding the noise term helps in exploration.
+"
+"['neural-networks', 'comparison', 'recurrent-neural-networks', 'long-short-term-memory', 'recurrent-layers']"," Title: What is the difference between LSTM and RNN?Body: What is the difference between LSTM and RNN? I know that RNN is a layer used in neural networks, but what exactly is an LSTM? Is it also a layer with the same characteristics?
+"
+"['reinforcement-learning', 'policy-gradients']"," Title: Is the negative of the policy loss function in a simple policy gradient algorithm an estimator of expected returns?Body: Let
+
+$$
+\nabla_\theta J(\pi_\theta) = \mathbb{E}_{\tau \sim \pi_\theta} \left[ \sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t|s_t) R(\tau) \right]
+$$
+be the expanded expression for a simple policy gradient, where $\theta$ are the parameters of the policy $\pi$, $J$ denotes the expected return function, $\tau$ is a trajectory of states and actions, $t$ is a timestep index, and $R$ gives the sum of rewards for a trajectory.
+
+Let $\mathcal{D}$ be the set of all trajectories used for training. An estimator of the above policy gradient is given by
+
+$$
+\hat{g} = \frac{1}{|\mathcal{D}|} \sum_{\tau \in \mathcal{D}} \sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t|s_t) R(\tau).
+$$
+A loss function associated with this estimator, given a single trajectory with $T$ timesteps, is given by
+$$
+L(\tau) = -\sum_{t = 0}^T \log \pi_\theta (a_t|s_t) R(\tau).
+$$
+Minimizing $L(\tau)$ by SGD or a similar algorithm will result in a working policy gradient implementation.
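+
+For concreteness, this is roughly how I picture that loss in code (a minimal, untested NumPy sketch; logps is assumed to hold $\log \pi_\theta(a_t|s_t)$ for the actions taken in one trajectory):
+
+import numpy as np
+
+def pg_loss(logps, R):
+    # logps: log-probabilities of the actions actually taken along the trajectory
+    # R: total reward R(tau) of the trajectory
+    return -np.sum(logps) * R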
+
+My question is what is the proper terminology for this loss function? Is it an (unbiased?) estimator for the expected returns $J(\pi_\theta)$ if summed over all trajectories? If someone is able to provide a proof that minimizing $L$ maximizes $J(\pi_\theta)$, or point me to a reference for this, that would be greatly appreciated.
+"
+"['deep-learning', 'reinforcement-learning', 'robotics']"," Title: Drone Deployment Platform for Neural NetworksBody: Good day everyone,
+
+I would just like to ask if anyone part of a lab or company doing research on aerial robotics has any suggestions of a good platform for deploying computer vision algorithms for aerial robots?
+
+Currently, our lab has a set of DJI Matrice drones, but they are too heavy for our liking. We really wanted to use the Skydio R2 drone for our future research projects, but found out later that the SDK does not allow access for implementing our own Deep Learning or Reinforcement Learning networks (it only allows the user to code their own preset movements in Python). We also took a look at the Parrot AR drones, but found that they were discontinued and do not have the computing power that the Skydio has, although, as an alternative, they do have the capability to stream the video feed to an external computer for online processing instead of on-board.
+
+I suggested that we stick with the DJI Matrice and just use an Nvidia Jetson for deployment but I am still curious to know if anyone knows of other available platforms. Does anyone know of a more compact platform available for research purposes?
+
+Thank you :)
+"
+"['philosophy', 'agi', 'intelligence-testing']"," Title: What event would confirm that we have implemented an AGI system?Body: I was listening to a podcast on the topic of AGI and a guest made an argument that if strong music generation were to happen, it would be a sign of "true" intelligence in machines because of how much creative capability creating music requires (even for humans).
+It got me wondering, what other events/milestones would convince someone, who is more involved in the field than myself, that we might have implemented an AGI (or a "highly intelligent" system)?
+Of course, the answer to this question depends on the definition of AGI, but you can choose a sensible definition of AGI in order to answer this question.
+So, for example, maybe some of these milestones or events could be:
+
+- General conversation
+- Full self-driving car (no human intervention)
+- Music generation
+- Something similar to AlphaGo
+- High-level reading/comprehension
+
+What particular event would convince you that we've reached a high level of intelligence in machines?
+It does not have to be any of the events I listed.
+"
+"['deep-learning', 'reinforcement-learning', 'convolutional-neural-networks', 'tensorflow', 'numpy']"," Title: How can I perform lossless compression of images so that they can be stored to train a CNN?Body: I have a set of images, which are quite large in size (1000x1000), and as such do not easily fit into memory. I'd like to compress these images, such that little information is missing. I am looking to use a CNN for a reinforcement learning task which involves a lot of very small objects which may disappear when downsampling. What is the best approach to handle this without downscaling/downsampling the image and losing information for CNNs?
+"
+"['deep-learning', 'classification', 'optimization', 'hyperparameter-optimization', 'optimizers']"," Title: What kind of optimizer is suggested to use for binary classification of similar images?Body: I have spent some time searching Google and wasn't able to find out what kind of optimization algorithm is best for binary classification when images are similar to one another.
+
+I'd like to read some theoretical proofs (if any) to convince myself that a particular optimizer has better results than the rest.
+
+And, similarly, what kind of optimizer is better for binary classification when images are very different from each other?
+"
+"['neat', 'activation-functions', 'neuroevolution']"," Title: How to choose the activation function in neuroevolution?Body: I am developing a NEAT flappy bird game, and it doesn't work, the system stays stupid for 300 generations. I chose tanh() for activation, just because it's included in JS.
+
+I can't find a good discussion of activation functions in the context of neuroevolution on the internet; most of what I see is about derivatives and other gradient-descent issues, which I suspect are irrelevant to forward-only networks.
+
+If you need a fixed point to answer, I have 8 inputs, one output and the problem is a classification (""jump"", ""don't jump""). But please explain your answer. I currently use tanh() for all the hidden and output nodes, and the output is considered ""jump"" if the output neuron value is >0.85
+
+For some context, the code is here: https://github.com/nraynaud/nraygame and the game here: https://nraynaud.github.io/nraygame/
+"
+"['convolutional-neural-networks', 'tensorflow', 'keras', 'unsupervised-learning']"," Title: Training an unsupervised convolutional neural network to learn a general representation of a Lua moduleBody: I am trying to train a CNN in keras to learn a general representation of a Lua module, e.g. requires at the beginning, local variables, local functions, interface (returns) and in between some runnable code (labeled ""other""). So for each module (source code) I generate an AST which I then encode in a json file. The file contains the order of the node in the AST, the text it represents and the type of node it is (require, variable, function, interface, other)
. It can contain other metrics but so far I have settled on these three, where only the order and type of node will be converted into a vector to serve as input to the CNN. Now I don't have any labels for these modules (I want to treat one module as one input to the CNN), so I have concluded that I need to use unsupervised learning. In keras this should translate to using autoencoders, where I use the encoder part to learn weights for the representation and then connect a fully-connected layer and generate an output. Now before I specify the output, I want to specify the input more closely. In my mind It should be a 3D vector let's say (x,y,z)
. Here, x represents the number of AST nodes taken into consideration, y represents the local neighborhood of said node (for now I have settled on 5 nodes), and z represents the node itself, i.e. its order and type. So with that, I would want the output of the network to be in (almost) the same dimension: I want x outputs, one for every node that was given as input, each a number (ideally between 0 and 1) specifying how ""correct"" the node under consideration is with respect to the learned representation. My question, as a beginner to neural networks, is: how feasible is this, and are there any points which are simply impossible to do or wrongly interpreted on my part?
+"
+"['backpropagation', 'weights']"," Title: Function to update weights in back-propagationBody: I am trying to wrap my head around how weights get updated during back propagation. I've been going through a school book and I have the following setup for an ANN with 1 hidden layer, a couple of inputs and a single output.
+
+
+The first line gives the error that will be used to update the weights going from the hidden layer to the output layer. $t$ represents the target output, $a$ represents the activation and the formula is using the derivative for the sigmoid function $(a(1-a))$. Then, the weights are updated with the learning rate, multiplied by the error and the activation of the given neuron which uses the weight $w_h$. Then, the next step is moving on to calculate the error with respect to the input going into the hidden layer from the input layer (sigmoid is the activation function on both the hidden and the output layer for this purpose). So we have the total error * derivative of the activation for the hidden layer * the weight for the hidden layer.
+
+I am following this train of thought as it was provided, but my question is — if the activation is changed to $tanh$ for example and the derivative of $tanh$ is $1-f(x)^2$, then would we have the error formula update to $(t-a)*(1-a^2)$ where $a$ represents the activation function so $1-a^2$ is the derivative of $tanh$?
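+
+As a quick numeric sanity check of that derivative (a throwaway NumPy snippet, not from the book):
+
+import numpy as np
+
+x = 0.3
+eps = 1e-6
+numeric = (np.tanh(x + eps) - np.tanh(x - eps)) / (2 * eps)
+print(numeric, 1 - np.tanh(x) ** 2)  # both print roughly 0.9151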
+"
+"['prediction', 'regression']"," Title: Predicting population density from satellite imageryBody: I have very high resolution images from LANDSAT 8 (5 out of 12 bands), which are of various administrative regions of a country. Each image is of variable dimensions, but generally of the order of [1500 X 1200 X 5].
+
+My aim is to predict the population density from urban features visible on the images.
+
+Since the number of images (and hence data points) is small, what is the best implementation strategy to build a model that can predict a value for population density based on these images?
+"
+"['reinforcement-learning', 'math', 'proofs', 'convergence', 'temporal-difference-methods']"," Title: Does TD(0) prediction require Robbins-Monro conditions to converge to the value function?Body: Does the learning rate parameter $\alpha$ require the Robbins-Monro conditions below for the TD(0) algorithm to converge to the true value function of a policy?
+
+$$\sum \alpha_t =\infty \quad \text{and}\quad \sum \alpha^{2}_t <\infty$$
+"
+"['recurrent-neural-networks', 'long-short-term-memory', 'forecasting', 'survival']"," Title: Recurrent neural Network for survival analyses: Dealing with forecast data as feature which can exceed the number of days untill a event occursBody: I am building a Recurrent Neural network (LSTM) for predicting the number of days until a Pollen season starts (when the cumulative of the year exceeds X). One of the features I am including in my model is the weather forecast.
+
+However, I do not feel confident about the way I defined the model while including this weather forecast. Currently, the 7-day weather forecast is included as one of the predictors. However, when the label (the number of days until the season starts) is smaller than the forecast horizon, I am training the model on forecast data that is completely irrelevant to determining the start of the season (e.g. if the season starts in 2 days and I include the 7-day forecast as a predictor, I am also training the model on the 5 days after the season has already started, which are completely irrelevant).
+
+My feeling is that this is not right when training RNNs for survival analysis. Does anyone know a way to deal with this? Or does anyone have an example where someone dealt with a similar issue?
+
+Thanks a lot!
+"
+"['neural-networks', 'computational-learning-theory', 'regularization', 'vc-dimension', 'capacity']"," Title: Are there any rules of thumb for having some idea of what capacity a neural network needs to have for a given problem?Body: To give an example. Let's just consider the MNIST dataset of handwritten digits. Here are some things which might have an impact on the optimum model capacity:
+
+- There are 10 output classes
+- The inputs are 28x28 grayscale pixels (I think this indirectly affects the model capacity. eg: if the inputs were 5x5 pixels, there wouldn't be much room for varying the way an 8 looks)
+
+So, is there any way of knowing what the model capacity ought to be? Even if it's not exact? Even if it's a qualitative understanding of the type "if X goes up, then Y goes down"?
+Just to accentuate what I mean when I say "not exact": I can already tell that a 100 variable model won't solve MNIST, so at least I have a lower bound. I'm also pretty sure that a 1,000,000,000 variable model is way more than needed. Of course, knowing a smaller range than that would be much more useful!
+"
+"['neural-networks', 'recurrent-neural-networks', 'long-short-term-memory', 'sequence-modeling', 'language-model']"," Title: How to pad sequences during training for an encoder decoder modelBody: I've got an encoder-decoder model for character level English language spelling correction, it is pretty basic stuff with a two LSTM encoder and another LSTM decoder.
+
+However, up until now, I have been pre-padding the input sequences, like below:
+
+abc -> -abc
+defg -> defg
+ad -> --ad
+
+
+And next I have been splitting the data into several groups with the same output length, e.g.
+
+train_data = {'15': [...], '16': [...], ...}
+
+
+where the key is the length of the output data and I have been training the model once for each length in a loop.
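+
+So the training loop is roughly this (simplified sketch; make_batches is a stand-in for my own code that turns one bucket into padded arrays):
+
+for length, pairs in train_data.items():
+    enc_in, dec_out = make_batches(pairs)
+    model.fit(enc_in, dec_out, epochs=10, batch_size=64)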
+
+However, there has to be a better way to do this, such as padding after the EOS character etc. But if this is the case, how would I change the loss function so that this padding isn't counted into the loss?
+"
+"['neural-networks', 'machine-learning', 'overfitting', 'regularization', 'generalization']"," Title: Is there a way to ensure that my model is able to recognize an unseen example?Body: My question is more theoretical than practical. Let's say that I am training my cat classifier with a dataset that I feel is pretty representative of cat images in general. But then a new breed of cat is created that is distinct from other cats and it does not exist in my dataset.
+My question is: is there a way to ensure that my model is still able to recognize this unseen breed, even though I didn't know it would come into existence when I originally trained my model?
+I have been trying to answer this question by intentionally designing my validation and test sets such that they contain examples that are quite distantly related to those that exist in the training set (think of it like intentionally leaving out specific breeds of cats from the training set).
+The results are interesting. For example, slight changes to parameters can dramatically change performance on the distantly related test examples, while not changing performance very much for the more closely related examples. I was wondering if anyone has done a deeper analysis of this phenomenon.
+"
+"['object-detection', 'object-recognition', 'stochastic-gradient-descent', 'adam', 'adadelta']"," Title: Is the choice of the optimiser relevant when doing object detection?Body: Suppose that we have 4 types of dogs that we want to detect (Golden Retriever, Black Labrador, Cocker Spaniel, and Pit Bull). The training data consists of png images of a data set of dogs along with their annotations. We want to train a model using YOLOv3.
+Does the choice of optimizer really matter in terms of training the model? Would the Adam optimizer be better than the Adadelta optimizer? Or would they all basically be the same?
+Would some optimizers be better because they allow most of the weights to achieve their "global" minima?
+"
+['object-detection']," Title: CBIR and object detectionBody: How does CBIR (content based image recognition) fit into the problem of object detection? Let's say we want to detect 4 types of dogs (Golden Retriever, Cocker Spaniel, Greyhound, and Labrador). We have an ""average"" model trained using YOLOv3. So it might, for example, have a lot of false positives and false negatives.
+
+How could we use CBIR to improve the detections from this ""average"" YOLOv3 model?
+"
+"['machine-learning', 'deep-learning', 'comparison', 'transfer-learning', 'meta-learning']"," Title: What are the differences between transfer learning and meta learning?Body: What are the differences between meta-learning and transfer learning?
+
+I have read 2 articles on Quora and TowardDataScience.
+
+
+ Meta learning is a part of machine learning theory in which some
+ algorithms are applied on meta data about the case to improve a
+ machine learning process. The meta data includes properties about the
+ algorithm used, learning task itself etc. Using the meta data, one can
+ make a better decision of chosen learning algorithm(s) to solve the
+ problem more efficiently.
+
+
+and
+
+
+ Transfer learning aims at improving the process of learning new tasks
+ using the experience gained by solving predecessor problems which are
+ somewhat similar. In practice, most of the time, machine learning
+ models are designed to accomplish a single task. However, as humans,
+ we make use of our past experience for not only repeating the same
+ task in the future but learning completely new tasks, too. That is, if
+ the new problem that we try to solve is similar to a few of our past
+ experiences, it becomes easier for us. Thus, for the purpose of using
+ the same learning approach in Machine Learning, transfer learning
+ comprises methods to transfer past experience of one or more source
+ tasks and makes use of it to boost learning in a related target task.
+
+
+The comparisons still confuse me, as both seem to share a lot of similarities in terms of reusability. Meta-learning is said to be ""model agnostic"", yet it uses metadata (hyperparameters or weights) from previously learned tasks. The same goes for transfer learning, as it may partially reuse a trained network to solve related tasks. I understand that there is a lot more to discuss, but, broadly speaking, I do not see much difference between the two.
+
+People also use terms like ""meta-transfer learning"", which makes me think both types of learning have a strong connection with each other.
+
+I also found a similar question, but the answers seem not to agree with each other. For example, some may say that multi-task learning is a sub-category of transfer learning, others may not think so.
+"
+"['deep-learning', 'probability-distribution']"," Title: Many of the best probabilistic models represent probability distributions only implicitlyBody: I am currently studying Deep Learning by Goodfellow, Bengio, and Courville. In chapter 5.1.2 The Performance Measure, P, the authors say the following:
+
+The choice of performance measure may seem straightforward and objective, but it is often difficult to choose a performance measure that corresponds well to the desired behavior of the system.
+In some cases, this is because it is difficult to decide what should be measured. For example, when performing a transcription task, should we measure the accuracy of the system at transcribing entire sequences, or should we use a more fine-grained performance measure that gives partial credit for getting some elements of the sequence correct? When performing a regression task, should we penalize the system more if it frequently makes medium-sized mistakes or if it rarely makes very large mistakes? These kinds of design choices depend on the application.
+In other cases, we know what quantity we would ideally like to measure, but measuring it is impractical. For example, this arises frequently in the context of density estimation. Many of the best probabilistic models represent probability distributions only implicitly. Computing the actual probability value assigned to a specific point in space in many such models is intractable. In these cases, one must design an alternative criterion that still corresponds to the design objectives, or design a good approximation to the desired criterion.
+
+It is this part that interests me:
+
+Many of the best probabilistic models represent probability distributions only implicitly.
+
+I don't have the experience to understand what this means (what does it mean to represent distributions "implicitly"?). I would greatly appreciate it if people would please take the time to elaborate upon this.
+"
+"['deep-learning', 'natural-language-processing', 'tensorflow', 'training', 'bert']"," Title: Why is my loss (binary cross entropy) converging on ~0.6? (Task: Natural Language Inference)Body: I’m trying to debug my neural network (BERT fine-tuning) trained for natural language inference with binary classification of either entailment or contradiction. I've trained it for 80 epochs and its converging on ~0.68. Why isn't it getting any lower?
+
+Thanks in advance!
+
+
+
+Neural Network Architecture:
+
+
+
+Training details:
+
+
+- Loss function: Binary cross entropy
+- Batch size: 8
+- Optimizer: Adam (learning rate = 0.001)
+- Framework: Tensorflow 2.0.1
+- Pooled embeddings used from BERT output.
+- BERT parameters are not frozen.
+
+
+Dataset:
+
+
+- 10,000 samples
+- balanced dataset (5k each for entailment and contradiction)
+- dataset is a subset of data mined from wikipedia.
+- Claim example: ""'History of art includes architecture, dance, sculpture, music, painting, poetry literature, theatre, narrative, film, photography and graphic arts.'""
+- Evidence example: ""The subsequent expansion of the list of principal arts in the 20th century reached to nine : architecture , dance , sculpture , music , painting , poetry -LRB- described broadly as a form of literature with aesthetic purpose or function , which also includes the distinct genres of theatre and narrative -RRB- , film , photography and graphic arts .""
+
+
+Dataset preprocessing:
+
+
+- Used [SEP] to separate the two sentences instead of using separate embeddings via 2 BERT layers. (Hence, segment ids are computed as such)
+- BERT's FullTokenizer for tokenization.
+- Truncated to a maximum sequence length of 64.
+
+
+See below for a graph of the training history. (Red = train_loss, Blue = val_loss)
+
+
+"
+"['machine-learning', 'applications', 'research', 'generative-adversarial-networks', 'regression']"," Title: Have GANs been used to solve regression problems?Body: I've noticed that in the last 2 years GANs have become really popular. I know that initially they have been proposed for image classification but I was curious if any of you are aware of any papers where GANs are used to solve regression problems?
+"
+"['unsupervised-learning', 'supervised-learning', 'probability-distribution', 'conditional-probability']"," Title: Solving the supervised learning problem of learning $p(y \vert \mathbf{x})$ by using traditional unsupervised technologies to learn $p(\mathbf{x}, y)$Body: I am currently studying Deep Learning by Goodfellow, Bengio, and Courville. In chapter 5.1.2 The Performance Measure, $P$, the authors say the following:
+
+Unsupervised learning and supervised learning are not formally defined terms. The lines between them are often blurred. Many machine learning technologies can be used to perform both tasks. For example, the chain rule of probability states that for a vector $\mathbf{x} \in \mathbb{R}^n$, the joint distribution can be decomposed as
+$$p(\mathbf{x}) = \prod_{i = 1}^n p(x_i \vert x_1, \dots, x_{i - 1} ).$$
+This decomposition means that we can solve the ostensibly unsupervised problem of modeling $p(\mathbf{x})$ by splitting it into $n$ supervised learning problems. Alternatively, we can solve the supervised learning problem of learning $p(y \vert \mathbf{x})$ by using traditional unsupervised technologies to learn the joint distribution $p(\mathbf{x}, y)$, then inferring
+$$p(y \vert \mathbf{x} ) = \dfrac{p(\mathbf{x}, y)}{\sum_{y'}p(\mathbf{x}, y')}.$$
+
+I found this part vague:
+
+Alternatively, we can solve the supervised learning problem of learning $p(y \vert \mathbf{x})$ by using traditional unsupervised technologies to learn the joint distribution $p(\mathbf{x}, y)$, then inferring
+$$p(y \vert \mathbf{x} ) = \dfrac{p(\mathbf{x}, y)}{\sum_{y'}p(\mathbf{x}, y')}.$$
+
+Can someone please elaborate on this, and also explain more clearly the role of $p(y \vert \mathbf{x} ) = \dfrac{p(\mathbf{x}, y)}{\sum_{y'}p(\mathbf{x}, y')}$?
+I would greatly appreciate it if people would please take the time to clarify this.
+"
+"['convolutional-neural-networks', 'image-recognition', 'image-processing']"," Title: Should I apply image processing techniques to the inputs of convolution networks?Body: After working for some time with feature-based pattern recognition, I am switching to CNN to see if I can get a higher recognition rate.
+
+In my feature-based algorithm, I do some image processing on the picture before extracting the features, such as some convolution filters to reduce noise, segmentation into foreground and background, and finally identification and binarization of objects.
+
+Should I do the same image processing before feeding data into my CNN, or is it possible to feed raw data to a CNN and expect that the CNN will adapt automatically without per-image-processing steps?
+"
+"['neural-networks', 'training']"," Title: Is this model overfitted or not?Body: I am training a neural network and plot model accuracy and model loss.
+I am a little confused about overfitting. Is my model overfitted or not? How can I interpret it?
+
+
+
+
+
+EDIT: here is a sample of my input data, I have a binary image classification
+
+"
+"['neural-networks', 'natural-language-processing', 'classification', 'text-classification']"," Title: Is it possible to derive meaning from text by providing multiple ways of saying the same thing to a neural network?Body: Let's say I feed a neural network with multiple string sentences that mean roughly the same thing but are formulated differently. Will the neural network be able to derive patterns of meaning in the same way that it is being done with images. Is this an approach currently used in Natural Language Processing?
+
+With images of dogs the neural network will get the underlying patterns that define a dog. Could it be the same thing with sentences?
+"
+"['neural-networks', 'deep-learning', 'tensorflow', 'convergence']"," Title: Is it possible to use deeplearning with spark (with a distributed databases as HDFS or Cassandra)?Body: If it is possible, will it be really useful or the model will end up converging very early(with a typical optimum learning rate) ? Any content on this topic will be helpful for me.
+"
+"['machine-learning', 'math', 'artificial-neuron', 'activation-functions']"," Title: What is the equation of the separation line for this neuron with identity activation?Body: I have a single neuron with 2 inputs, and identity activation, where f
is the activation function and u is the output:
+
+$u = f(w_1x_1 + w_2x_2 + b) = w_1x_1 + w_2x_2 + b$
+
+My guess for the separation line equation:
+
+$u = w_1x_1 + w_2x_2 + b$
+$\implies x_2 = \dfrac{u - w_1x_1 - b}{w_2}$
+$\implies x_2 = (\dfrac{-w_1}{w_2})x_1 + \dfrac{u-b}{w_2}$
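+
+As a quick numeric check of that rearrangement (a throwaway snippet with made-up weights):
+
+w1, w2, b, u = 0.5, -1.5, 0.2, 0.0
+x1 = 2.0
+x2 = (-w1 / w2) * x1 + (u - b) / w2
+print(w1 * x1 + w2 * x2 + b)  # prints 0.0, i.e. u, so the point (x1, x2) lies on the line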
+
+And the questions are:
+
+1) Is the separation line equation above correct?
+
+2) And when f is not the identity function, is the separation line equation still the same, or different?
+"
+"['neural-networks', 'machine-learning', 'regression', 'transformer']"," Title: How does the regression layer in the localization network of a spatial transformer work?Body: I am trying to understand the spatial transformer network mentioned in this paper https://papers.nips.cc/paper/5854-spatial-transformer-networks.pdf. I am clear about the last two stages of the spatial transformer i.e. the grid generator and sampler. However I am unable to understand the localization network which outputs the parameters of the transformation that is applied to the input image. So here are my doubts.
+
+
+- Is the network trained on various affine/projective transforms of the input or only the standard input with a standard pose?
+- If the answer to question 1 is no, then how does the regression layer correctly regress the values of the transformation applied to the image? In other words how does the regression layer know what transformation parameters are required when it has never seen those inputs before?
+
+
+Thanks in advance.
+"
+"['reinforcement-learning', 'policy-gradients', 'markov-decision-process']"," Title: How can I constraint the actions with dependent coordinates?Body: I am working on a customized RL environment where each action is represented as a tuple $a = (a_1,a_2,\cdots,a_n)$ such that certain condition must be satisfied for entries of $a$ (for instance, $a_1+a_2+\cdots+a_n \leq \text{constant}$).
+
+I am using the policy gradient method, but I am having some difficulty modeling the underlying probability distribution of actions. Is there any work done in this direction?
+
+For the constraint $a_1+a_2+\cdots+a_n \leq \text{constant}$, I was thinking about generating $n+1$ uniform random variables $U_1,U_2,\cdots,U_n, U$, and setting $a_i = \text{constant}\times U \times \frac{U_i}{\sum_{j=1}^n U_j}$. The problem is that the joint density, which is needed to get the negative log-likelihood, is a bit messy to calculate. I am curious about how such an issue is handled in practice.
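+
+In code, the sampling scheme I have in mind is roughly this (a quick NumPy sketch, separate from the training code):
+
+import numpy as np
+
+def sample_action(n, constant):
+    u = np.random.uniform(size=n)        # U_1, ..., U_n
+    scale = np.random.uniform()          # U
+    a = constant * scale * u / u.sum()   # entries are nonnegative and sum to constant * U <= constant
+    return a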
+"
+"['convolutional-neural-networks', 'dqn', 'convolution', 'deepmind', 'pooling']"," Title: What's the difference in using multiple convolutional layers and no pooling versus using a single convolutional layer and a single max pooling layer?Body: I'm currently working on a college project in which I'm designing a Deep Q-Network that takes images/frames as an input.
+
+I've been searching online to see how other people have designed their convolutional stage and I've seen many different implementations.
+
+Some projects, such as DeepMinds Atari 2600 project, use 3 convolutional layers and no pooling (from what I can see).
+
+However, other projects use fewer convolutional layers and add a pooling layer onto the end.
+
+I understand what both layers do, I was just wondering is there a benefit to how DeepMind did it and not use pooling or should I be using a pooling layer and fewer convolutional layers?
+
+Or have I completely missed out on something? Is Deep Mind actually using pooling after each convolutional layer?
+"
+"['planning', 'pddl']"," Title: Can two planning PDDL actions be taken simultaneously?Body: We are discussing planning algorithms currently, and the question is to describe the steps to check if actions could be taken simultaneously. This is a really open-ended question so I'm not sure where to start.
+"
+"['deep-rl', 'policy-gradients', 'function-approximation']"," Title: Monte Carlo updates on policy gradient with no terminal stateBody: Consider some MDP with no terminal state. We can apply bootstrapping methods (like TD(0)) to learn in these cases no problem, but in policy gradient algorithms that have only a simple monte carlo update, it requires us to supply a complete trajectory (which is impossible with no terminal state).
+
+Naturally, one might let the MDP run for 1000 periods, and then terminate as an approximation. If we feed these trajectories into a Monte Carlo update, I imagine that samples for time periods t=1,2,...,100 would give very good estimates of the value function, due to the discount factor. However, for the time periods 997, 998, 999, 1000, we'd have an expected value for those trajectories far different from V(s), due to their proximity to the cutoff of 1000.
+
+The question is this:
+
+
+- Should we even include these later-occurring data points when we update our function approximation?
+
+
+OR
+
+
+- Is the assumption that these points become really sparse in our updates, so they won't have much effect in our training?
+
+
+OR
+
+
+- Is it usually implied that the final data reward in the trajectory is bootstrapped in these cases (i.e., we have some TD(0)-like behavior in this case)?
+
+
+OR
+
+
+- Are monte carlo updates for policy gradient algorithms even appropriate for non-terminating MDPs due to this issue?
+
+"
+"['tensorflow', 'prediction', 'object-detection']"," Title: Irregular results while prediction identical object on same imageBody: I used the pre-trained model faster_rcnn_resnet101_coco.config with my own dataset.
+
+I have two issues
+
+
+- some objects are not detected, even though I trained with a high number of steps and tested on the same training data; some objects are still missed
+- sometimes, identical objects are predicted correctly in some images and missed in others
+
+
+Any help would be appreciated.
+"
+"['computer-vision', 'yolo', 'darknet', 'models', 'filters']"," Title: YOLOv3 Model Structure: Why is filters = (classes + coords + 1) * num?Body: Here's a tutorial about doing custom training of YOLO (Darknet): https://medium.com/@manivannan_data/how-to-train-yolov3-to-detect-custom-objects-ccbcafeb13d2
+
+The tutorial shows how to set values in the .cfg files:
+
+
+- classes = Number of classes, OK
+- filters = (classes + 5) * 3
+
+
+Why is it 'plus 5' then 'times 3'?
+
+Some say it's (classes + coords + 1) * num, but I can't figure out what it means.
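+
+For example, with a single class the tutorial's formula gives filters = (1 + 5) * 3 = 18; the second formula gives the same 18 if coords = 4 and num = 3, but I still don't see what those numbers stand for.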
+"
+"['papers', 'backpropagation', 'notation', 'regularization', 'batch-normalization']"," Title: How is the gradient with respect to weights derived in batch normalization?Body: At the bottom of page 2 of the paper L2 Regularization versus Batch and Weight Normalization, the equation for the gradient of the output with respect to the weights is given as:
+$$
+\triangledown y_{BN} (X; w, \gamma, \beta) = \frac{X}{\sigma(X)}\gamma g'(z).
+$$
+Can someone break down into smaller steps on how the author got to that equation?
+"
+"['reinforcement-learning', 'math', 'proofs', 'convergence', 'sarsa']"," Title: What are the conditions for the convergence of SARSA to the optimal value function?Body: Is it correct that for SARSA to converge to the optimal value function (and policy)
+
+
+- The learning rate parameter $\alpha$ must satisfy the conditions:
+$$\sum_k \alpha_{n^k(s,a)} =\infty \quad \text{and}\quad \sum_k \alpha_{n^k(s,a)}^{2} <\infty \quad \forall (s,a) \in \mathcal{S} \times \mathcal{A}$$
+where $n^k(s,a)$ denotes the $k^\text{th}$ time $(s,a)$ is visited
+- $\epsilon$ (of the $\epsilon$-greedy policy) must be decayed so that the policy converges to a greedy policy.
+- Every state-action pair is visited infinitely many times.
+
+
+Are any of these conditions redundant?
+"
+"['reinforcement-learning', 'proofs', 'convergence', 'temporal-difference-methods']"," Title: Does SARSA(0) converge to the optimal policy in expectation if the Robbins-Monro conditions are removed?Body: The conditions of convergence of SARSA(0) to the optimal policy are :
+
+
+- The Robbins-Monro conditions above hold for $\alpha_t$.
+- Every state-action pair is visited infinitely often
+- The policy is greedy with respect to the policy derived from $Q$ in the limit
+- The controlled Markov chain is communicating: every state can be reached from any other with positive probability (under some policy).
+- $\operatorname{Var}{R(s, a)} < \infty$, where $R$ is the reward function
+
+
+The original proof of the convergence of TD(0) prediction (page 24 of the paper Learning to Predict by the Method of Temporal Differences) was for convergence in the mean of the estimation to the true value function. This did not require the learning rate parameter to satisfy Robbins-Monro conditions.
+
+I was wondering: if the Robbins-Monro conditions are removed from the SARSA(0) assumptions, would the policy still converge, in some notion of expectation, to the optimal policy?
+"
+"['reinforcement-learning', 'q-learning', 'convergence', 'epsilon-greedy-policy', 'exploration-strategies']"," Title: Is there an advantage in decaying $\epsilon$ during Q-Learning?Body: If the agent is following an $\epsilon$-greedy policy derived from Q, is there any advantage to decaying $\epsilon$ even though $\epsilon$ decay is not required for convergence?
+"
+"['reinforcement-learning', 'q-learning', 'math', 'dqn']"," Title: How is the expected value in the loss function of DQN approximated?Body: In Deep Q Learning the parametrized Q-functions $Q_i$ are optimised by performing gradient descent on the series of loss functions
+
+$L_i(\theta_i)= E_{(s,a)\sim p}[(y_i-Q(s,a;\theta_i))^2]$, where
+
+$y_i = E_{s' \sim \mathcal{E}}[r+\gamma \max_{a'}Q(s',a';\theta_{i-1})\mid s,a]$.
+
+In the actual algorithm, however, the expected value is never computed. Also, I think it cannot be computed since the transition probabilities of the underlying MDP remain hidden from the agent. Instead of the expected value, we compute $y_i = r_i + \gamma \max_a Q(\phi_{i+1},a;\theta)$. I assume some sort of stochastic approximation is taking place here. Can someone explain the details?
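+
+To make my reading of the algorithm concrete, here is a rough sketch (my own code, not the paper's; q_target is an assumed function returning the action values of a state) of how I picture the sampled targets replacing the expectation over a replay minibatch:
+
+import numpy as np
+
+def sampled_targets(minibatch, q_target, gamma):
+    # minibatch: transitions (s, a, r, s_next, done) sampled uniformly from replay memory.
+    # Each target is a single-sample estimate of E[r + gamma * max_a' Q(s', a')].
+    ys = []
+    for s, a, r, s_next, done in minibatch:
+        if done:
+            ys.append(r)
+        else:
+            ys.append(r + gamma * np.max(q_target(s_next)))
+    return np.array(ys)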
+"
+"['reinforcement-learning', 'ai-design', 'q-learning', 'applications']"," Title: How should I define the state space for this life science problem?Body: I would like to ask for a piece of advice with regard to Q-learning. I am studying RL and would like to do a basic project applied to life science and calculate the reward. I have been trying to get my head around how to define all possible states of the environment.
+
+My states are $S = \{ \text{health } (4 \text{ levels}), \text{shape } (3 \text{ levels}) \}$. My actions are $A=\{a_1, a_2, \dots, a_4 \}$. My possible states are $60=4 * 3 * 5$. Could you advise whether these are correct?
+
+$(s_{w_0, sh_0}, a_1, s'_{w_1, sh_1})$ is a tuple of the initial state $s_{w_0, sh_0}$, the first action $a_1$ and the next state $s'_{w_1, sh_1}$, where $w$ is the health level, $sh$ is the shape of the tumor.
+"
+"['philosophy', 'intelligent-agent', 'intelligence', 'goal-based-agents']"," Title: How do we define intention if there is no free will?Body: There is an idea that intentionality may be a requirement of true intelligence, here defined as human intelligence.
+
+But all I know for certain is that we have the appearance of free will. Under the assumption that the universe is purely deterministic, what do we mean by intention?
+
+(This seems an important question given that intention is not just a philosophical matter in relation to definitions of AI, but involves ethics in the sense of application of AI, ""offloading responsibility to agents that cannot be meaningfully punished"" as an example. Also touches on goals, implied by intention, whether awareness is a requirement, and what constitutes awareness. I'm interested in all angles, but was inspired by the question ""does true art require intention, and, if so, is that the sole domain of humans?"")
+"
+"['deep-learning', 'convolutional-neural-networks', 'image-recognition', 'datasets']"," Title: Is it legal to construct a public image database (for deep learning) with images from the internet?Body: I am trying to put together a public agricultural image database of corn and soybeans, to train convolutional neural networks. The main method of image collection will be through taking pictures of various fields in the growing season. The images will be uploaded to a public data sharing site which will be accessible by many.
+
+However, I could compile many more images if I were to take some from, say, Google Images. Is there anything wrong with this? Would there be any issues with copyright infringement if I find the images on a publicly available search engine? I need a lot of images, so I thought this would be a good method of increasing my image numbers.
+"
+"['neural-networks', 'machine-learning', 'game-ai', 'genetic-algorithms', 'models']"," Title: Using ML for Enemy Generation in Video GamesBody: I am attempting to make a 2-D platformer game where the player traverses through an evil factory that is producing killer robots. The robots spawn at multiple specific locations in each level and impede the player's progress.
+
+Enemies are procedurally generated using machine learning. Early levels have ""garbage"" robots that plop down and can't really do anything. After generations of training, the robots begin having more refined bodies and are able to move about and attack the player. Later levels produce enemies that are more challenging.
+
+Enemies consist of a body and up to 4 limbs. The body is simply a circle of a certain radius, while the limbs are just a bar with a certain length. Limbs can pivot and/or contract/extend. Additionally, each limb can have one of three types of ""motor"" (wheel, spring, or hover). This makes for about 20-25 input parameters:
+
+
+ BodySize, Limb1Enabled, Limb1PivotPoint, Limb1Length, Limb1Angle, Limb1MotorType, Limb1MotorStrength, Limb2Enabled, Limb2PivotPoint, Limb2Length, Limb2Angle, Limb2MotorType, Limb2MotorStrength, Limb3Enabled, Limb3PivotPoint, Limb3Length, Limb3Angle, Limb3MotorType, Limb3MotorStrength, Limb4Enabled, Limb4PivotPoint, Limb4Length, Limb4Angle, Limb4MotorType, Limb4MotorStrength
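+
+To make the parameter list concrete, here is a rough, engine-agnostic sketch (Python, my own naming; the actual game would be in Unity) of how I picture one robot's genome being flattened for the GA and the networks:
+
+from dataclasses import dataclass
+from typing import List, Optional
+
+@dataclass
+class Limb:
+    pivot_point: float    # where on the body the limb attaches
+    length: float
+    angle: float
+    motor_type: int       # 0 = wheel, 1 = spring, 2 = hover
+    motor_strength: float
+
+@dataclass
+class RobotGenome:
+    body_size: float
+    limbs: List[Optional[Limb]]   # up to 4 entries; None means the limb is disabled
+
+    def to_vector(self) -> List[float]:
+        # Flatten into the ~25 numbers listed above, ready for the GA and the control brains.
+        v = [self.body_size]
+        for limb in self.limbs:
+            if limb is None:
+                v += [0.0] * 6
+            else:
+                v += [1.0, limb.pivot_point, limb.length, limb.angle,
+                      float(limb.motor_type), limb.motor_strength]
+        return v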
+
+
+My thoughts are that a genetic algorithm (or something similar) would be used to generate a body, while a neural network would control that body by using the same inputs to generate outputs that control the limbs and motors.
+
+There would actually be 3 ""control brains"" that would have to be trained using the same inputs, but having different fitness goals: Moving Right/Left, Moving Up, and Attacking the Player. (Gravity exists in 2-D platformers, so moving down isn't necessary.)
+
+A fourth, ""master brain"" would take the player's relative location, score, and maybe time elapsed, as inputs, and would output one of the goals for the robot to achieve (move left, move right, and attack).
+
+The master brain's fitness would be determined by the ""inverse"" of the player's ""progress"", while each control brain's fitness would be determined by how well it was able to perform the task assigned by the master brain. Finally, the overall fitness for the body's genetic algorithm would be an average (or some other function like min, max, etc.) of the three control brain's fitness values.
+
+Now that I have all this ""down on paper"", where do I start? I had planned on doing this in Unity, but early attempts have been a bit confusing for me. I've been able to procedurally generate a body with random limbs (no motors) that wiggle about randomly, but there's no neural network or any machine learning going on whatsoever. I am not exactly sure how to expose my parameters to be used as inputs, and am barely grasping how I should take those outputs to control what I want them to. Are there any libraries I should look at, or should I write this all from scratch?
+
+Also, before I get too far ahead of myself, what are the flaws in my approach (as I'm sure there are plenty)? I want my project to be practical in scope; if training can't feasibly be done while a player traverses a level, this might just be a dead project idea.
+
+Anyways, that all being said, thank you for your help.
+"
+"['training', 'computer-vision', 'object-detection', 'yolo', 'darknet']"," Title: What are the reasons behind slow YOLO training?Body: I'm testing out YOLOv3 using the 'darknet' binary, and custom config. It trains rather slow.
+
+My test uses only 1 image and 1 class, and YOLOv3-tiny instead of the full YOLOv3, but the training of YOLOv3-tiny isn't as fast as expected for 1 class/1 image.
+
+The accuracy reached nearly 100% after around 3000 or 4000 batches, which took roughly 3 to 4 hours.
+
+Why is it slow with just 1 class/1 image?
+"
+"['deep-learning', 'geometric-deep-learning', 'graphs', 'scikit-learn']"," Title: Suitable deep learning algorithms for spatial / geometric dataBody: I have a task of classifying spatial data from a geographic information system. More precisely, I need a way to filter out unnecessary line segments from the CAD system before loading into the GIS (see the attached picture, colors for illustrative purposes only).
+
+
+
+The problem is that there are many more variations of objects than in the picture. The task is difficult to solve in an algorithmic way.
+
+I tried to apply a bunch of classification algorithms from the Scikit-learn package and, in general, got significant results. GradientBoostingClassifier and ExtraTreeClassifier achieve an accuracy of about 96-98%, but:
+
+
+- this accuracy is achieved in the context of individual segments into which I explode the source objects (hundreds of thousands of objects). After reverse aggregation of the objects, it may turn out that one of the segments in each object is classified incorrectly, so the error in the context of the source objects is high
+- significant computational resources and time are required for the
+preparation of the source data, and calculation of features for
+classifiers
+- it is impossible to use the source 2d coordinates of objects in
+algorithms, but only their derivatives
+
+
+I tried to find good examples of using deep neural networks for this kind of task / for spatial data, but I only found articles, quite difficult to understand, about the use of such networks for point-cloud classification, and some information on geometric deep learning for graphs. I do not have enough knowledge to adapt them to my case.
+
+Can someone provide good examples of using neural networks directly with 2D coordinates and, maybe, good articles on this theme written in simple language?
+
+Thanks
+"
+"['social', 'computation']"," Title: Is a dystopian surveillance state computationally possible?Body: This isn't really a conspiracy theory question. More of an inquire on the global computational power and data storage logistics question.
+
+Most recording instruments, such as cameras and microphones, are typically voluntary opt-in devices, in that they have to be activated before they start recording. What happens if all of these devices were permanently activated and started recording data to some distributed global data storage?
+
+There are 400 hours of video uploaded to YouTube every minute.
+
+Let’s do some very rough math.
+
+I’m going to assume for the rest of this post that the average video is 1080p, which is about 2.5GB (a gigabyte is $10^9$ bytes) per hour. From that, we get about 400 hours of video per minute × 60 × 24 ≈ 576,000 hours of video per day, times 2.5GB per hour ≈ 1.5 petabytes (a petabyte is $10^{15}$ bytes) per day.
+
+But YouTube video posts are voluntary, and they are far from continuous video streams.
+
+There are about 3.5 billion smartphones in the world. If video were continuously streamed and recorded, the same per-hour figure gives $3.5 \times 10^9 \times 24 \times 2.5 \times 10^{9} \approx 2.1 \times 10^{20}$ bytes, i.e. roughly 210 exabytes (an exabyte is $10^{18}$ bytes) per day.
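+
+A quick back-of-the-envelope script for these figures, using the same rough 2.5 GB/hour assumption as above:
+
+GB = 1e9  # bytes
+video_rate = 2.5 * GB                         # bytes per hour of 1080p video
+youtube_per_day = 400 * 60 * 24 * video_rate  # 400 hours uploaded per minute, all day
+phones = 3.5e9
+phones_per_day = phones * 24 * video_rate     # every phone streaming around the clock
+print(f'YouTube uploads: {youtube_per_day / 1e15:.1f} PB/day')   # ~1.4 PB/day
+print(f'All smartphones: {phones_per_day / 1e18:.0f} EB/day')    # ~210 EB/day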
+
+The IDC projects there will be 175 zettabytes (or $10^{21}$ bytes) in 2025.
+
+Unless my math is very wrong, it would seem as though smartphone cameras alone could produce, within a few years, on the order of all the data the IDC projects will exist in 2025.
+
+This, so far, has only been about the data recording, but, to implement a surveillance state, all recorded data would need to be processed by AI to intelligently flag data that is significant. How much processing power would be needed to filter hundreds of exabytes per day into relevant information?
+
+Overall, this question is motivated by the spread of dystopian surveillance media like Edward Snowden NSA whistle blowing leaks or George Orwell's sentiment of ""Big Brother is Watching You"".
+
+Computationally, could we be surveilled, and to what extent? I imagine text message surveillance would be the easiest; does the world have the computational power to surveil all text messages? How about audio? Or video?
+"
+"['deep-learning', 'time-series', 'sequence-modeling', 'machine-translation', 'signal-processing']"," Title: What's the best method to predict/generate signal from one sensor (source) to signal from another another (target)?Body: I was wondering what is the best method out there to find relationship between two 1D signals so that I can predict/generate one (source) from the other (target). For example, let's say that in response to an event, my sensor A's readings vary in a certain way for 5 seconds. For the same event sensor B's readings vary as well for 5 seconds. Sensor A and B are not measuring the same physical quantities but respond to the same event and seem to have a relationship.
+
+What can I do to use the signal from sensor A to learn what the signal from sensor B would look like for that event? What is the state of the art in deep learning?
+"
+"['search', 'proofs', 'heuristics', 'admissible-heuristic', 'consistent-heuristic']"," Title: Is the summation of consistent heuristic functions also consistent?Body: Imagine that we have a set of heuristic functions $\{h_i\}_{i=1}^N$, where each $h_i$ is both admissible and consistent (monotonic). Is $\sum_{i=1}^N h_i$ still consistent or not?
+
+Is there any proof or counterexample to show the contradiction?
+"
+['convolutional-neural-networks']," Title: How do 3 channels affect a network when detecting human skin (CNN)?Body: Yeah I know, best title ever. Anyway,
+
+I want to make a neural network which is fed with frames coming from a USB camera.
+Don't wanna be so specific, so I'm just gonna say that the network's goal is to classify human hand gestures, therefore I need to make sure it can effectively learn how the hand moves around.
+
+My problem is that I have no idea what happens when there are 3 channels instead of 1; I only know that (for 3 channels) it does 3 separate convolution operations with the same kernel, actually resulting in 3 separate layers.
+How do these 3 channels affect the network? Does it learn from the movement 3 parallel times, then mix together these 3 ""separate movements""? Do I need to make it single channel to help it detect the hand?
+
+PS: the text is probably confusing, but that's because I'm confused too; that's why I'm asking.
+"
+"['machine-learning', 'generative-adversarial-networks', 'generative-model', 'generator', 'discriminator']"," Title: GANs: Should Generator update weights when Discriminator says false continuouslyBody: My GANs is like this:
+
+
+- Train an autoencoder (VAE), get the decoder part and use as Generator
+- Train Discriminator
+
+
+After training, do the generation in these steps:
+
+
+- Call Generator to generate an image
+- Call the Discriminator to classify the image to see whether it's acceptable
+
+
+The problem is that the Discriminator says 'false' a lot, which means the generated image is not useful.
+
+How should the Generator change (update weights) when Discriminator doesn't accept its generated image?
+"
+"['reinforcement-learning', 'applications', 'real-time']"," Title: Is reinforcement learning suited for real-time systems?Body: From what I have seen, any results involving RL almost always take a massive number of simulations to reach a remotely good policy.
+
+Will any form of RL be viable for real-time systems?
+"
+"['machine-learning', 'recommender-system']"," Title: How to model personalized threshold problem with machine learningBody: Assume that I have a candidate selection system to generate product/user pairs for recommendation. Currently, in order to hold a quality bar for the recommended product, we trained a model to optimize for the click of the link, denoting as pClick(product, user) model, the output of the model is a score of (0,1) representing how likely the user will click on the recommended product.
+
+For our initial launch, we set a manually selected threshold, say T, for all users. Only when the score passes T do we send the user the recommendation.
+
+Now we realize this is not optimal: Some users care less about recommendation quality while some other users have a high bar of recommendation quality. And a personalized threshold, instead of the global T can help us improve the overall relevance.
+
+The goal is to output the threshold for each user, assume we have training data for each user's activity and user/product attributes.
+
+The question is: How should we model this problem with machine learning? Any references or papers are highly appreciated.
+"
+"['deep-learning', 'backpropagation', 'gradient-descent']"," Title: Why gradients are so small in deep learning?Body: The learning rate in my model is 0.00001
and the gradients of the model is within the distribution of [-0.0001, 0.0001]
. Is it normal?
+"
+"['machine-learning', 'unsupervised-learning', 'machine-translation', 'unlabeled-datasets']"," Title: How to do machine translation with no labeled data?Body: Is it be possible to train a neural network, with no parallel bilingual data, for machine translation?
+"
+"['machine-learning', 'reinforcement-learning', 'markov-chain']"," Title: In the Markov chain, how are the directions to each successive state defined?Body: I'm watching the David Silver series on YT which has raised a couple of questions:
+
+In the Markov process (or chain), how are the directions to each successive state defined? For example, how are the arrow directions defined for the MP below? What's stopping our sample episodes from choosing A -> C -> D -> F?
+
+
+
+Also, how is the probability transition matrix populated? From David's example, the probabilities seem to have already been set. For example:
+
+
+"
+"['convolutional-neural-networks', 'object-detection']"," Title: How to adapt MTCNN to large images with relatively small ROIsBody: This question could be generalised to how to adapt state-of-the-art object detection models to large images with small ROIs.
+
+In my particular case I'm trying to use this implementation of MTCNN to get bounding boxes for the faces of images of statues.
+
+One challenge is that the face could take up a large proportion of the image like this:
+
+
+
+Or a very small proportion of the image like this:
+
+
+
+Where I'll zoom in on the statue's face so you can see it:
+
+
+
+Bonus
+
+If anyone has additional comments on my overall approach to this particular problem, happy to hear them.
+"
+"['machine-learning', 'math', 'support-vector-machine']"," Title: What are the variables used in a Gaussian radial basis kernel in the context of SVMs?Body: If I have the Gaussian kernel
+
+$$
+k(x, x') = \operatorname{exp}\left( -\| x - x' \|^2 / 2\sigma^2 \right)
+$$
+
+What are $x$ and $x'$ in the context of training an SVM?
+"
+"['neural-networks', 'attention', 'game-theory']"," Title: Impact or applications of introducing attention in deep networks modelling multi-agent systemsBody: I have been reading quite a lot about the research progress in the domain of self attention-based neural networks that were introduced by Google Inc. in their paper titled "Attention is all you need".
+The concept of introducing attention to neural networks in order to free ourselves from a fixed context vector is really unique, and moreover, using the same concept to model sequences without recurrent neural networks, as introduced in the paper, is extremely elegant.
+I have been trying to figure out how this concept of attention could aid deep networks that model multi-agent systems in which game-theoretic factors come into play for the network to learn.
+I was looking for some direction or a toy example/explanation or even possible previous research done to try to test these concepts together.
+P.S - I'm just tinkering with some ideas hoping to build something experimental.
+"
+"['convolutional-neural-networks', 'datasets', 'data-preprocessing']"," Title: Using three image datasets with different image sizes to train a CNNBody: I've just started with AI and CNN networks.
+
+I have two NIFTI image datasets, one with (240, 240) dimensions and the other one with (256, 132). Each dataset is from a different hospital and machine.
+
+If I want to use both to train my model, what do I have to do?
+
+The model needs all the training data to have the same shape. I've thought of reshaping all the data to a common shape, but I don't know if I'm going to lose information if I resize the images.
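+
+For example, this is roughly the kind of resizing I had in mind (a sketch using scikit-image; the target shape is an arbitrary choice of mine):
+
+from skimage.transform import resize
+
+TARGET_SHAPE = (240, 240)  # an arbitrary common shape
+
+def to_common_shape(image):
+    # Interpolate a 2-D slice to the common shape; this resamples the data,
+    # so some fine detail may be smoothed away.
+    return resize(image, TARGET_SHAPE, preserve_range=True, anti_aliasing=True)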
+
+By the way, I have also a third dataset with (232, 256).
+"
+"['ai-design', 'research', 'papers', 'meta-learning']"," Title: What AI conferences in Europe should I consider submitting papers to explaining the ongoing work on RefPerSys?Body: https://afia.asso.fr/journee-hommage-j-pitrat/ is a seminar on March 6th, 2020, in Paris (France, European Union), in honor of the late Jacques Pitrat, who advocated during all his professional life a meta-knowledge and reflective approach. (You need to register to attend that seminar).
+
+Pitrat's blog is available (in spring 2020) on http://bootstrappingartificialintelligence.fr/WordPress3/ (and some snapshot of his CAIA system is downloadable here - but no documentation; however, you might try to type L EDITE on stdin to caia). He wrote the Artificial Beings: the conscience of a conscious machine book describing the software architecture of, and the motivations for, (some previous version of) CAIA. See also this A Step toward an Artificial Artificial Intelligence Scientist paper by J. Pitrat.
+
+What AI conferences (or AGI workshops) in Europe should I consider submitting papers to explaining the ongoing work on RefPerSys?
+
+That RefPerSys project (open source, open science, work-in-progress, with contact information) is explicitly following J.Pitrat's meta-knowledge approach. Feel free to follow or join that ambitious open-source (actually free software) project.
+"
+"['neural-networks', 'machine-learning']"," Title: Where can I find people solving this smoothing, filtering, temporal learning problem?Body: Consider a prediction problem for example. This is a loss function (negative log likelihood) that I am roughly talking about:
+
+\begin{align*}
+ J_{\text{train}} &= -\sum_t \log L\left(\theta_t, X_t, Y_t\right) \\
+ J_{\text{test}} &= -\sum_t \log L\left(\theta_t, X_{t+1}, Y_{t+1}\right) \\
+ J_{\text{dyn}} &= - \sum_t \frac{\left(\theta_{t+1} - \text{stop_gradient}({\theta_t})\right)^2}{2 \sigma_{\theta}^2} \\
+ \theta_0 &\sim \Psi
+\end{align*}
+
+Here, the dynamics are simply first-order smoothness (Brownian motion) in the time-dependent parameters $\text{d}\theta_t = \sigma_{\theta} \text{d} W_t$.
+
+This is basically what they do in weather systems and in papers like A Bayesian Approach to Data Assimilation.
+
+The main difference is, we now have the ability to use stop gradients in the smoothness, which changes the problem somewhat. Other issues might arise.
+
+Who else is doing this? Feel free to also provide links to papers on related work. I am not finding enough hits, so maybe I am missing something. Are RNNs, if twisted slightly, falling into this framework and I am just not seeing it?
+"
+"['machine-learning', 'reinforcement-learning', 'game-ai', 'markov-chain']"," Title: How is the probability transition matrix populated in the Markov process (chain) for a board game?Body: Following on from my other (answered) question:
+
+With regard to the Markov process (chain), if an environment is a board game and its states are the various positions the game pieces may be in, how would the transition probability matrix be initialised? How would it be updated (if it is)?
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'experience-replay']"," Title: Intutitive explanation of why Experience Replay is used in a Deep Q Network?Body: I understand that Experience Replay is used for data efficiency reasons and to remove correlations in sequences of data. How exactly do these sequences of correlated data affect the performance of the algorithm?
+"
+"['neural-networks', 'deep-learning', 'training', 'backpropagation', 'batch-size']"," Title: What is the purpose of the batch size in neural networks?Body: Why is a batch size needed to update the weights of a neural network?
+
+According to that YouTube video from 3B1B, the weights are updated by calculating the error between the expectation and the outcome of the neural net. Based on that, the chain rule is applied to calculate the new weights.
+
+Following that logic, why would I pass a complete batch through the net? The first entries wouldn't have an impact on the weighting.
+
+Do I need to define a batch size when I use backpropagation?
+"
+"['admissible-heuristic', 'consistent-heuristic', 'heuristic-functions']"," Title: If $h_i$ are consistent and admissible, are their sum, maximum, minimum and average also consistent and admissible?Body: Consider the following question:
+
+$n$ vehicles occupy squares $(1, 1)$ through $(n, 1)$ (i.e., the bottom row) of an $n \times n$ grid. The vehicles must be moved to the top row but in reverse order; so the vehicle $i$ that starts in $(i, 1)$ must end up in $(n − i + 1, n)$. On each time step, every one of the $n$ vehicles can move one square up, down, left, or right, or stay put; but if a vehicle stays put, one other adjacent vehicle (but not more than one) can hop over it. Two vehicles cannot occupy the same square.
+
+Suppose that each heuristic function $h_i$ is both admissible and consistent. Now what I want to know is to check the admissibility and consistency of the following heuristics:
+
+- $h= \Sigma_i h_i$
+
+- $h= \min_i (h_i)$
+
+- $h= \max_i (h_i)$
+
+- $h = \frac{\Sigma_i h_i}{n}$
+
+
+P.S: As a lemma, we know that consistency implies the admissibility of the heuristic function.
+Problem Explanation
+From this link, I have found that the first heuristic is neither admissible, nor consistent.
+I know that the second and the fourth heuristics are either consistent, or admissible.
+I have faced with one contradiction in the third heuristic:
+
+Here we see that if car 3 hops twice, the total cost of moving all the cars to their destinations is 3, whereas the heuristic $\max(h_1, \dots, h_n) = 4$.
+Problem
+So, $\max(h_1, ..., h_n)$ must be consistent and admissible, but the above example shows that it's not. What is my mistake?
+"
+"['math', 'probability', 'hidden-markov-model']"," Title: Expected duration in a stateBody: I am going through Rabiner 1989 and he writes that the discrete probability density function of duration $d$ in state $i$ (that is, staying in a state for duration $d$, conditioned on starting in that state) is $$p_i(d) = {a_{ii}}^{d-1}(1-a_{ii})$$
+
+($a_{ii}$ is the state transition probability from state $i$ to state $i$ - that is, staying in the same state).
+
+He then continues to say that the expected duration in a state, conditioned on starting in that state, is $$\overline d_i = \sum_{d=1}^\infty d \, p_i(d) = \frac{1}{1-a_{ii}}$$
+
+Where does the coefficient $d$ (in $\sum_{d=1}^\infty d \, p_i(d)$) come from?
+"
+"['machine-learning', 'probability-distribution', 'generalization']"," Title: What does ""the expectation is taken across different possible inputs, drawn from the distribution of inputs we expect the system to encounter"" mean?Body: I am currently studying Deep Learning by Goodfellow, Bengio, and Courville. In chapter 5.2 Capacity, Overfitting and Underfitting, the authors say the following:
+
+
+ Typically, when training a machine learning model, we have access to a training set; we can compute some error measure on the training set, called the training error; and we reduce this training error. So far, what we have described is simply an optimization problem. What separates machine learning from optimization is that we want the generalization error, also called the test error, to be low as well. The generalization error is defined as the expected value of the error on a new input. Here the expectation is taken across different possible inputs, drawn from the distribution of inputs we expect the system to encounter in practice.
+
+
+I found this part unclear:
+
+
+ Here the expectation is taken across different possible inputs, drawn from the distribution of inputs we expect the system to encounter in practice.
+
+
+The language used here is confusing me, because it is discussing a ""distribution"", as in a ""probability distribution"", but then refers to inputs, which are data gathered from outside of any probability distribution. Based on the limited information my studying of machine learning has taught me so far, my understanding is that the machine learning algorithm (or, rather, some machine learning algorithms) uses training data to implicitly construct some probability distribution, right? So is this what it is referring to here? Is the ""distribution of inputs we expect the system to encounter in practice"" the so called ""test set""? I would greatly appreciate it if people would please take the time to clarify this.
+"
+"['neural-networks', 'machine-learning', 'deep-learning']"," Title: How can I normalize gamestates in order to use with a machine learning library?Body: I have currently collected 150000 gamestates from playing a Monte Carlo Tree Search AI player against a basic rule based AI at the game of Castle. The information captured represents the information available to the MCTS player on the start of each of their turn and whether they won the game in the end. They are stored within CVS files.
+Example gamestate entry:
+
+For example the entry above shows that:
+
+- HAND: MCTS player's hand contains the cards 7,5,4,9,9,9 (suit has been omitted because it has no bearing on the game). (list of cards)
+- CASTLE_FU: MCTS face up cards are 8,2,10 (list of cards)
+- CASTLE_FD_SIZE: MCTS has 3 cards face down (int)
+- OP_HAND_SIZE: The opponent has 3 cards in their hand (int)
+- OP_CASTLE_FU: The opponent's face up cards are Jack, Queen, Ace. (list of cards)
+- OP_CASTLE_FD_SIZE: The opponent has 3 cards face down (int)
+- TOP: The top of the discard pile is a 4 (single value)
+- DECK_EMPTY: The deck in which players pick up cards is not empty (boolean)
+- WON: The MCTS player ended up winning the hand (boolean)
+
+I hope to input this data into machine learning algorithms to produce an evaluation function for the MCTS algorithm to use.
+How can I normalize this data so I can use it in Keras/Scikit-Learn?
+EDIT:
+I'm not sure normalizing is the right term here. Encoding or mapping may be more accurate for what I am trying to achieve. Another difficulty I've encountered is the fact that the player's hand size can in theory vary up to almost holding the full deck (although this would be incredibly rare in practice). However, this is the only column that can be of a size greater than 3.
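+
+One encoding I am considering (my own sketch, not from any library) is to turn every list-of-cards column into a fixed-length vector of rank counts, which also side-steps the variable hand size:
+
+import numpy as np
+
+RANKS = list(range(2, 15))   # 2..10, Jack=11, Queen=12, King=13, Ace=14
+
+def rank_counts(cards):
+    # e.g. [7, 5, 4, 9, 9, 9] -> a fixed 13-dim vector of how many of each rank is held
+    counts = np.zeros(len(RANKS), dtype=np.float32)
+    for card in cards:
+        counts[RANKS.index(card)] += 1
+    return counts
+
+def encode_state(row):
+    # row: one gamestate record parsed into Python types (keys follow my CSV columns)
+    return np.concatenate([
+        rank_counts(row['HAND']),
+        rank_counts(row['CASTLE_FU']),
+        rank_counts(row['OP_CASTLE_FU']),
+        np.array([row['CASTLE_FD_SIZE'], row['OP_HAND_SIZE'], row['OP_CASTLE_FD_SIZE'],
+                  row['TOP'], float(row['DECK_EMPTY'])], dtype=np.float32),
+    ])
+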
+EDIT 2:
+I've come up with this model to represent the data. Does this look suitable?
+
+"
+"['long-short-term-memory', 'prediction']"," Title: How should I go about selecting an optimal num_units within a LSTM cell for different sequence sizesBody: I am currently working on a stock market prediction model which incorporates sentiments along with historical price for next day price prediction.
+
+I wanted to test different window / sequence sizes, e.g. (3 days, 4 days, ..., 10 days), to identify which window size is most optimal for predicting the next day's prices.
+However, the selection of num_units in model.add(LSTM(units=num_units)) varies for different window sizes.
+
+If a smaller window size is paired with a larger num_units, there is over-fitting, where the model's prediction for the price at day t+1 is almost equal to the price at day t.
+
+Hence I am unable to make a fair comparison between different window sizes without varying num_units
+
+I have referred to this How to select number of hidden layers and number of memory cells in an LSTM? however am unable to come to a conclusion.
+
+Is there a predefined guideline for the num_units to use within a LSTM cell for timeseries prediction based on the sequence length?
+"
+['classification']," Title: How to calculate the confidence of a classifier's output?Body: I'm training a classifier and I want to collect incorrect outputs for a human to double-check.
+
+The output of the classifier is a vector of probabilities for the corresponding classes, for example, [0.9,0.05,0.05].
+
+This means the probability of the current object being class A is 0.9, whereas for class B it is only 0.05, and 0.05 for C too.
+
+In this situation, I think the result has a high confidence, as A's probability dominates B's and C's.
+
+In another case, [0.4,0.45,0.15], the confidence should be low, as A and B are close.
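+
+One simple candidate I can think of is the gap between the two largest probabilities, e.g.:
+
+import numpy as np
+
+def top2_margin(probs):
+    # gap between the largest and second-largest class probability
+    p = np.sort(probs)[::-1]
+    return p[0] - p[1]
+
+print(top2_margin([0.9, 0.05, 0.05]))   # ~0.85 -> high confidence
+print(top2_margin([0.4, 0.45, 0.15]))   # ~0.05 -> low confidence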
+
+What's the best formula to use to calculate this confidence?
+"
+"['natural-language-processing', 'python', 'word-embedding', 'word2vec', 'weights']"," Title: Why I have a different number of terms in word2vec and TFIDF? How I can fix it?Body: I need multiply the weigths of terms in TFIDF matrix by the word-embeddings of word2vec matrix but I can't do it because each matrix have a different number of terms. I am using the same corpus for get both matrix, I don't know why each matrix have a different number of terms
+.
+
+My problem is that I have a matrix TFIDF with the shape (56096, 15500)
(corresponding to: number of terms, number of documents) and matrix Word2vec with the shape (300, 56184)
(corresponding to : number of word-embeddings, number of terms).
+And I need the same numbers of terms in both matrix.
+
+I use this code to get the matrix of Word2vec word embeddings:
+
+import nltk
+import numpy as np
+from gensim.models import word2vec
+
+def w2vec_gensim(norm_corpus):
+    wpt = nltk.WordPunctTokenizer()
+    tokenized_corpus = [wpt.tokenize(document) for document in norm_corpus]
+    # Set values for various parameters
+    feature_size = 300      # Word vector dimensionality
+    window_context = 10     # Context window size
+    min_word_count = 1      # Minimum word count
+    sample = 1e-3           # Downsample setting for frequent words
+    w2v_model = word2vec.Word2Vec(tokenized_corpus, size=feature_size, window=window_context,
+                                  min_count=min_word_count, sample=sample, iter=100)
+    words = list(w2v_model.wv.vocab)
+    vectors = []
+    for w in words:
+        vectors.append(w2v_model.wv[w].tolist())
+    # One column per term: (embedding dimension, number of terms)
+    embedding_matrix = np.array(vectors).T
+    print(embedding_matrix.shape)
+
+    return embedding_matrix
+
+
+And this code to get the TFIDF matrix:
+
+from sklearn.feature_extraction.text import TfidfVectorizer
+
+tv = TfidfVectorizer(min_df=0., max_df=1., norm='l2', use_idf=True, smooth_idf=True)
+
+
+def matriz_tf_idf(datos, tv):
+ tv_matrix = tv.fit_transform(datos)
+ tv_matrix = tv_matrix.toarray()
+ tv_matrix = tv_matrix.T
+ return tv_matrix
+
+
+And I need the same number of terms in each matrix. For example, if I have 56096 terms in TFIDF, I need the same number in the embedding matrix; I mean a TFIDF matrix with the shape (56096, 15500) and a Word2vec embedding matrix with the shape (300, 56096). How can I get the same number of terms in both matrices?
+I can't just delete terms without more information, because I need the multiplication to make sense; my goal is to get the embeddings from the documents.
+
+Thank you very much in advance.
+"
+"['reinforcement-learning', 'markov-decision-process', 'policies', 'reward-functions', 'reward-shaping']"," Title: Is the policy really invariant under affine transformations of the reward function?Body: In the context of a Markov decision process, this paper says
+
+it is well-known that the optimal policy is invariant to positive affine transformation of the reward function
+
+On the other hand, exercise 3.7 of Sutton and Barto gives an example of a robot in a maze:
+
+Imagine that you are designing a robot to run a maze. You decide to give it a reward of +1 for escaping from the maze and a reward of zero at all other times. The task seems to break down naturally into episodes—the successive runs through the maze—so you decide to treat it as an episodic task, where the goal is to maximize expected total reward (3.7). After running the learning agent for a while, you find that it is showing no improvement in escaping from the maze. What is going wrong? Have you effectively communicated to the agent what you want it to achieve?
+
+It seems like the robot is not being rewarded for escaping quickly (escaping in 10 seconds gives it just as much reward as escaping in 1000 seconds). One fix seems to be to subtract 1 from each reward, so that each timestep the robot stays in the maze, it accumulates $-1$ in reward, and upon escape it gets zero reward. This seems to change the set of optimal policies (now there are way fewer policies which achieve the best possible return). In other words, a positive affine transformation $r \mapsto 1 \cdot r - 1$ seems to have changed the optimal policy.
+How can I reconcile "the optimal policy is invariant to positive affine transformation of the reward function" with the maze example?
+"
+"['comparison', 'recurrent-neural-networks', 'seq2seq', 'bidirectional-rnn']"," Title: Do Seq2Seq models and the Bidirectional RNN do the same thing?Body: It seems to me that Seq2Seq models and Bidirectional RNNs try to do the same thing. Is that true?
+Also, when would you recommend one setup over another?
+"
+"['deep-learning', 'computer-vision', 'yolo']"," Title: Intuition behind single-shot object detectionBody: Is there a good way to understand how single-shot object detection works? The most basic way to do detection is use a sliding-window detector and look at the output of the NN to detect if a class is there or not.
+
+I'm wondering if there is a way to understand how many of the single-shot detectors work. Internally, is there some form of sliding window going on? Or is it basically the same detector learned at each point?
+"
+"['deep-learning', 'natural-language-processing', 'stochastic-gradient-descent']"," Title: Why is the sample size of stochastic gradient descent a power of 2?Body: I watched the video lecture of cs224: Stanford CS224N: NLP with Deep Learning | Winter 2019 | Lecture 2 – Word Vectors and Word Senses.
+They take the sample size of the window to be $2^5 = 32$ or $2^6 = 64$. Why is the sample size of stochastic gradient descent a power of 2? Why can't we take 42 or 53 as the sample window size?
+Btw, how do I identify the best minimum window sample size?
+"
+"['neural-networks', 'deep-learning', 'comparison', 'activation-functions']"," Title: What are the pros and cons of the common activation functions?Body: I have heard that sigmoid activation functions should not be used on neural networks with many hidden layers as the gradients tend to vanish in deep networks.
+
+When should each of the common activation functions be used, and why?
+
+
+- ReLu
+- Sigmoid
+- Softmax
+- Leaky ReLu
+- TanH
+
+"
+"['neural-networks', 'machine-learning', 'computer-vision', 'pattern-recognition']"," Title: Suitable algorithms for classifying terrain condition (asphalt, dirt etc) for motor vehiclesBody: I am required to obtain data through a sensor located on the vehicle reading speed, vibration, roll and tilt, within a sample time, to classify the current road condition using machine learning for a high school project.
+
+Which algorithm/approach may be most suitable for this task? Suggestions for learning resources (books, tutorials) would also be appreciated, as I am new to AI and ML.
+"
+"['training', 'yolo', 'loss']"," Title: Data scan not making sense for coco datasetBody: I am doing a simple scan to see how dataset size affects training. Basically, I took 10% of the coco dataset and trained a yolov3 net (from scratch) to just look for people. Then I took 20% of the coco dataset and did the same thing.... all the way to 100%. What is strange is that all 9 nets are getting similar loss at the end (~7.5). I must be doing something wrong, right? I expected to see an exponential curve where loss started out high and assymptotically approached some value as the dataset increased to 100%. If it didn't approach a value (and still had a noticeable slope at 100%), then that meant more data could help my algorithm.
+
+This is my .data file:
+classes= 1
+train = train-run-less.txt
+valid = data/coco/5k.txt
+names = data/humans.names
+backup = backup
+
+I am trying to train just one class (person) from the coco dataset. Something is not making sense, and in a sanity test, I discovered that the loss drops even if the training folder only contains 1 image (which doesn't even have people in it). I thought the way this worked was that it trained on the ""train"" images, then it tested the neural net on the ""valid"" images. How is it getting better at finding people in the ""valid"" images if it hasn't trained on a single one?
+
+Basically I am trying to answer the question: ""how much accuracy can I expect to gain as I increase the data?""
+"
+"['deep-learning', 'ai-design', 'computer-vision', 'image-processing']"," Title: Train an AI to infer accurate mathematical calculations by simply “looking” at images of shapes/objectsBody: I’d like to build a model that has an understanding of geometry, where it can be applied to question and answering system. Specifically, it would be nice if it could determine the volume of an object by simply looking at pictures of it.
+
+If there are any pre-trained models out there that I can utilize that would certainly make things easier.
+
+Otherwise, are there any suggestions on the kind of model(s) I should use to do this?
+
+Also, I read something online about how Facebook trained an AI to solve complex math problems just by looking at them. They approached the problem as a language translation problem, not as a math problem. I wonder if this is the way to go?
+"
+"['machine-learning', 'deep-learning', 'probability-distribution', 'variational-autoencoder']"," Title: Why do we regularize the variational autoencoder with a normal distribution?Body: When we define the loss function of a variational autoencoder (VAE), we add the Kullback-Leibler divergence between the sample taken according to a normal distribution of parameters:
+
+$$ N(\mu,\sigma) $$
+
+and we compare it with a normal distribution of parameters
+
+$$ N(0,1) $$
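+
+For reference, for a univariate Gaussian this KL term has the well-known closed form
+
+$$ D_{KL}\big(N(\mu, \sigma) \,\|\, N(0, 1)\big) = \frac{1}{2}\left(\mu^2 + \sigma^2 - \log \sigma^2 - 1\right) $$
+
+where $\sigma$ denotes the standard deviation.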
+
+My intuition is that it is clever to have samples drawn from a distribution centered around zero, but I don't understand why we want the samples to be drawn from a normal distribution.
+"
+"['neural-networks', 'machine-learning', 'dropout']"," Title: Does the performance of a model increase if dropout is disabled at evaluation time?Body: I know dropout layers are used in neural networks during training to provide a form of regularisation in an attempt to mitigate over-fitting.
+
+Would you not get an increased fitness if you disabled the dropout layers during evaluation of a network?
+"
+"['reinforcement-learning', 'markov-decision-process', 'policies']"," Title: How to understand and visualize a trained RL agent's policy when the state space is high dimensional?Body: What are typical ways to understand and visualize a trained RL agent's policy when the state space is of high dimension (but not images)?
+
+For example, suppose state and action are denoted by $s=(s_1,s_2,\cdots,s_n)$ and $a=(a_1,a_2,\cdots,a_k)$. How do I determine which attribute of the state (e.g. an image pixel of video game) is most responsible for a particular action $a_j$? I would like to have, for each action $a_j, j=1,2,...,k$, a table that ranks the attributes of the observation.
+
+My question may be a little bit vague, but if you have any thoughts on how to improve it please let me know!
+"
+"['machine-learning', 'datasets', 'math', 'regression', 'function-approximation']"," Title: Is there a possibility that there is no relationship between some inputs and outputs?Body: I'm doing machine learning projects. I took a look at many datasets I worked with, mostly there are already famous datasets that everyone uses.
+
+Let's say I decided to make my own dataset. Is there a possibility that my data are so random that no relationship exists between my inputs and outputs? This is interesting because, if this is possible, then no machine learning model will be able to find an input-output relationship in the data, and it will fail to solve the regression or classification problem.
+
+Moreover, is it mathematically possible that some values have absolutely no relationship between them? In other words, there is no function (linear or nonlinear) that can map those inputs to the outputs.
+
+Now, I thought about this problem and concluded that, if there is a possibility for this, then it will likely happen in regression, because maybe the target outputs are in the same range and the same feature values can correspond to different output values, which will confuse the machine learning model.
+
+Have you ever encountered this or a similar issue?
+"
+"['neural-networks', 'tensorflow', 'keras', 'dropout']"," Title: Why is my validation/test accuracy higher than my training accuracyBody:
+
+Is this due to my dropout layers being disabled during evaluation?
+
+I'm classifying the CIFAR-10 dataset with a CNN using the Keras library.
+
+There are 50000 samples in the training set; I'm using a 20% validation split for my training data (10000:40000). I have 10000 instances in the test set.
+"
+"['image-recognition', 'keras', 'unsupervised-learning', 'autoencoders', 'dimensionality-reduction']"," Title: How can I use autoencoders to analyze patterns and classify them?Body: I generated a bunch of simulation data from a complex physical simulation that spits out patterns. I am trying to apply unsupervised learning to analyze the patterns and ideally classify them into whatever categories the learning technique identifies. Using PCA or manifold techniques such as t-SNE for this problem is rather straightforward, but applying neural networks (autoencoders, specifically) becomes non-trivial, as I am not sure splitting my dataset into test and training data is the right way.
+
+Naively, I was thinking of the following approaches:
+
+
+- Train an autoencoder with all the data as training data and train it for a large number of epochs (overfitting is not a problem in this case per se, I would think)
+- Keras offers a model.predict option which enables me to just construct the encoder section of the autoencoder and obtain the bottleneck values
+- Carry out some data augmentation and split the data as one might into training and test data and carry out the workflow as normal (This approach makes me a little uncomfortable as I am not attempting to generalize a neural network or should I be?)
+
+
+I would appreciate any guidance on how to proceed or if my understanding of the application of autoencoders is flawed in this context.
+"
+"['reinforcement-learning', 'deep-rl']"," Title: Unexpected results when comparing a greedy policy to a DQN policyBody: I am trying to work on a variation of the Access-Control Queuing Task problem presented in Chapter 10 of Sutton’s reinforcement learning book [1].
+
+Specific details of my setup are as follows:
+
+
+- I have different types of tasks that arrive at the system
+(heavy/moderate/light, with heavy tasks requiring more time to be
+processed). The specific task type is chosen uniformly at random. The task inter-arrival time is $0.1s$ on average.
+- I have different classes of servers that can process these tasks
+(low-capacity; medium capacity; high capacity; with high capacity
+servers having a faster processing time). When I select a specific
+server from a given class, it becomes unavailable during the
+processing time of the task assigned to it. Note that the set of servers (and as a result the number of servers of each class) is not fixed, it instead changes periodically, according to the dataset used to model the set of servers (so specific servers may disappear and new ones may appear, as opposed to the unavailability caused by the assignment). The maximum number of servers of each class is $10$.
+- My goal is to decide which class of server should process a given
+task, in a way that minimizes the sum of the processing times over
+all tasks.
+
+
+The specific reinforcement learning formulation is as follows:
+
+
+- State: the type of task (heavy/moderate/light) ; the number of available low
+capacity servers; the number of available medium capacity servers; the number
+of available high capacity servers
+- Actions: (1) Assign the task to a low capacity server (2) assign the
+task to a medium capacity server (3) assign the task to a high
+capacity server (4) a dummy action that has a worse processing time
+than the servers with low capacity. It is selected when there are no free servers.
+- Rewards: the opposite of the processing time, where the processing times are as follows (in seconds):
+
+
+
+
+| | Slow server | Medium Server | Fast server | ""Dummy action"" |
+|---------------|-------------|---------------|-------------|----------------|
+| Light task | 0.5 | 0.25 | 0.166 | 0.625 |
+| Moderate task | 1.5 | 0.75 | 0.5 | 1.875 |
+| Heavy task | 2.5 | 1.25 | 0.833 | 3.125 |
+
+
+
+My intuition for formulating the problem as an RL problem is that 'Even though assigning Light tasks to High capacity servers (i.e. being greedy) might lead to a high reward in the short term, it may reduce the number of High capacity servers available when a Heavy task arrives. As a result, Heavy tasks will have to be processed by lower capacity servers which will reduce the accumulated rewards'.
+
+However, when I implemented this (using a deep Q-network[2] specifically), and compared it to the greedy policy, I found that both approaches obtain the same rewards. In fact, the deep Q-network ends up learning the greedy policy.
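+
+For clarity, the greedy policy here is roughly the following (a simplified sketch; the exact fallback order when a class is busy is an implementation detail):
+
+def greedy_action(task_type, n_slow_free, n_medium_free, n_fast_free):
+    # Always take the fastest class that still has a free server,
+    # otherwise fall back to the dummy action.
+    if n_fast_free > 0:
+        return 'fast'
+    if n_medium_free > 0:
+        return 'medium'
+    if n_slow_free > 0:
+        return 'slow'
+    return 'dummy'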
+
+I am wondering why such a behaviour occurred, especially since I expected the DQN approach to learn a better policy than the greedy one. Could this be related to my RL problem formulation? Or is there no need for RL to address this problem?
+
+[1]Sutton, R. S., & Barto, A. G. (1998). Introduction to reinforcement learning (Vol. 135). Cambridge: MIT press.
+
+[2]Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., ... & Petersen, S. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529-533.f
+"
+"['optimization', 'gradient-descent']"," Title: Oscillating around the saddle point in gradient descent?Body: I was reading a blog post that talked about the problem of the saddle point in training.
+
+
+ In the post, it says if the loss function is flatter in the direction of x (local minima here) compared to y at the saddle point, gradient descent will oscillate to and from the y direction. This gives an illusion of converging to a minima. Why is this?
+
+ Wouldn’t it continue down in the y direction and hence escape the saddle point?
+
+
+
+
+Link to post:
+https://blog.paperspace.com/intro-to-optimization-in-deep-learning-gradient-descent/
+
+Please go to Challenges with Gradient Descent #2: Saddle Points.
+"
+"['natural-language-processing', 'data-preprocessing']"," Title: Is it recommended to remove stop words before named entity recognition?Body: Removing stop words can significantly speed up named entity recognition (NER) modeling by reducing the number of tokens in a document.
+
+Are stop words critical to get correct NER performance?
+"
+"['neural-networks', 'convolutional-neural-networks', 'computer-vision', 'r-cnn', 'pooling']"," Title: In Fast R-CNN, how are input RoIs mapped to the respective RoIs in the feature map before RoI pooling?Body: I've been reading the Fast R-CNN paper.
+My understanding is that the input to one forward pass is the whole input image plus a list of RoIs (generated by selective search or another region proposal method). Then I understand that on the last convolution layer's feature map (let's call it FM), each corresponding RoI gets RoI-pooled, where now the corresponding ROIs are a rectangular (over height and width) slice of the FM tensor over all channels.
+But I'm having trouble with two concepts:
+
+- How is the input RoI mapped to the corresponding RoI in FM? Each neuron comes from a very wide receptive field, so, in a deep neural network, there's no way of making a 1:1 mapping between input neurons and neurons in the last convolution layer, right?
+
+- Disregarding that I'm confused in point 1, once we have a bunch of RoIs in FM and we do the RoI pooling, we have N pooled feature vectors. Do we now run each of these through one FC network one by one? Or do we have N branches of FC networks? (that wouldn't make sense to me)
+
+
+I have also read the Faster R-CNN paper. In the same way, I'm also interested in how the proposed regions from the RPN map to the input of the RoI pooling in the Fast R-CNN layers, because those proposed regions actually live in the space of the input image, not in the space of the deep feature map.
+"
+"['philosophy', 'definitions', 'intelligence', 'pseudocode']"," Title: Can you provide some pseudocode examples of what constitutes an AI?Body: After years of learning, I still can't understand what is considered to be an AI. What are the requirements for an algorithm to constitute Artificial Intelligence? Can you provide pseudocode examples of what constitutes an AI?
+"
+"['convolutional-neural-networks', 'classification', 'image-recognition', 'convolution']"," Title: Is there any difference between the convolution operation applied to images and applied to other numerical 2D data?Body: Is there any difference between the convolution operation applied to images and applied to other numerical 2D data?
+
+For example, we have a pretty good CNN model trained on a number of $64 \times 64$ images to detect two classes. On the other hand, we have a number of $64 \times 64$ numerical 2D matrices (which are not considered images), which also have two classes. Can we use the same CNN model to classify the numerical dataset?
+"
+"['reinforcement-learning', 'gym', 'reward-design', 'reward-functions', 'trust-region-policy-optimization']"," Title: How can I implement the reward function for an 8-DOF robot arm with TRPO?Body: I need to get an 8-DOF (degrees of freedom) robot arm to move a specified point. I need to implement the TRPO RL code using OpenAI gym. I already have the gazebo environment. But I am unsure of how to write the code for the reward functions and the algorithm for the joint space motion.
+"
+"['reference-request', 'game-ai', 'algorithm-request']"," Title: What are examples of approaches to create an AI for a fighting robot in an MMO game?Body: I have an MMO game where I have players. I wanted to invent something new to the game, and add player-bots to make the game be single-playable as well. The AI I want to add is simply only for fighting other players or other player-bots that he sees around at his level.
+So, I thought of implementing my fighting strategy, exactly how I play, in the bot, which is basically a set of if statements and randoms (see the sketch below). For example, when the opponent has low health and the bot has enough special attack power, it will take the chance and use its special attack to try to knock the opponent down. If the bot has low health, it will eat in time, but not too much, because there is a point in risking fights: if you eat too much, your opponent will do the same. Or, for example, if the bot detects that the opponent is eating too much and gaining health, it will do the same.
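+
+Concretely, the rule set above would look roughly like this (the attribute and action names are made up for illustration):
+
+import random
+
+def choose_action(bot, opponent):
+    # bot/opponent attributes are hypothetical; this just encodes the rules described above
+    if opponent.health < 20 and bot.special_power >= 50:
+        return 'special_attack'   # finish off a weak opponent
+    if bot.health < 30 and bot.food > 0 and random.random() < 0.7:
+        return 'eat'              # heal, but keep some risk in the fight
+    if opponent.is_eating and bot.food > 0:
+        return 'eat'              # mirror the opponent's healing
+    return 'attack'
+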
+I told this idea of the implementation to one of my friends and he simply responded with: This is not AI, it's simply just a set of conditions, it does not have any heuristic functions.
+For that type of game, what are some ideas to create a real AI to achieve these conditions?
+Basically, the AI should know what to do in order to beat the opponent, based on the opponent's data such as current health, Armour and weapons, and level, if he risks his health or not and so on.
+I am a beginner and it really interests me to do it in the right way.
+"
+"['deep-learning', 'natural-language-processing', 'transformer', 'attention', 'positional-encoding']"," Title: How does positional encoding work in the transformer model?Body: In the transformer model, to incorporate positional information of texts, the researchers have added a positional encoding to the model. How does positional encoding work? How does the positional encoding system learn the positions when varying lengths and types of text are passed at different time intervals?
+To be more concrete, let's take these two sentences.
+
+- "She is my queen"
+- "Elizabeth is the queen of England"
+
+How would these sentences be passed to the transformer? What would happen to them during the positional encoding part?
+Please explain with less math and with more intuition behind it.
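+
+For reference, this is the sinusoidal formula from the paper as I understand it, written as a small script rather than math (d_model = 8 is chosen only for illustration); each of the 4 token positions in "She is my queen" gets its own vector, independent of which words occupy them:
+
+import numpy as np
+
+def positional_encoding(seq_len, d_model):
+    # PE(pos, 2i) = sin(pos / 10000^(2i/d_model)), PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model))
+    pos = np.arange(seq_len)[:, np.newaxis]
+    i = np.arange(d_model)[np.newaxis, :]
+    angle = pos / np.power(10000, (2 * (i // 2)) / d_model)
+    pe = np.zeros((seq_len, d_model))
+    pe[:, 0::2] = np.sin(angle[:, 0::2])
+    pe[:, 1::2] = np.cos(angle[:, 1::2])
+    return pe
+
+print(positional_encoding(4, 8))   # 4 tokens in 'She is my queen'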
+"
+"['neural-networks', 'machine-learning', 'training', 'feedforward-neural-networks', 'early-stopping']"," Title: What happens if I train a network for more epochs, without using early stopping?Body: I have a question about training a neural network for more epochs even after the network has converged without using early stopping criterion.
+
+Consider the MNIST dataset and a LeNet 300-100-10 dense fully-connected architecture, where I have 2 hidden layers having 300 and 100 neurons and an output layer having 10 neurons.
+
+Now, usually, this network takes about 9-11 epochs to train and reaches a validation accuracy of around 98%.
+
+What happens if I train this network for 25 or 30 epochs, without using early stopping criterion?
+"
+"['neural-networks', 'ai-design', 'datasets', 'regression', 'architecture']"," Title: Is a basic neural network architecture better with small datasets?Body: I'm currently trying to predict 1 output value with 52 input values. The problem is that I only have around 100 rows of data that I can use.
+
+Will I get more accurate results when I use a small architecture than when I use multiple layers with a higher amount of neurons?
+
+Right now, I use 1 hidden layer with 1 neuron, because of the fact that I need to solve (in my opinion) a basic regression problem.
+"
+"['training', 'python', 'bert', 'fine-tuning', 'training-datasets']"," Title: How does one continue the pre-training in BERT?Body: I need some help with continuing pre-training on Bert. I have a very specific vocabulary and lots of specific abbreviations at hand. I want to do an STS task. Let me specify my task: I have domain-specific sentences and want to pair them in terms of their semantic similarity. But as very uncommon language is used here, I need to train Bert on it.
+
+- How does one continue the pre-training (I read the GitHub release from google about it, but don't really understand it) Any examples?
+- What structure does my training data need to have, so that BERT can understand it?
+- Maybe training BERT from scratch would be even better. I guess it's the same process as continuing the pretraining just the starting checkpoint would be different. Is that correct?
+
+Also, very happy about all other tips from you guys.
+"
+"['deep-learning', 'convolutional-neural-networks', 'python', 'representation-learning']"," Title: Convolutional Feature Encoding Methods in DCNNBody: In Computer Vision, feature encoding methods are used on pre-trained DCNN to increase the feature robustness to certain conditions such as viewpoint/appearance variations ref.
+
+I was wondering whether there are any well-established methods for this in the AI community, preferably with Python implementations.
+
+I found the following ones in literature but without any tutorial or code example:
+
+
+- Multi-layer pooling ref
+- Cross convolutional layer Pooling ref
+- Holistic Pooling ref
+
+"
+"['deep-learning', 'q-learning', 'deep-neural-networks']"," Title: How to use convolution neural network in Deep-Q?Body: I currently have a grid of pixels 20x20. Each pixel can be red green blue or black. So I have one hot-encoded the pixels giving a 20x20x4 array for each screen.
+
+For my Deep-Q Network, I have attached two successive screenshots of the screen together giving a 20x20x4x2 array.
+
+I am trying to build a Convolutional Neural Network to estimate the Q values, but I am not sure if my current architecture is a good idea. It is currently as shown below:
+
+    # Sequential, Conv3D, Activation, Dropout, Flatten, Dense and Adam are imported
+    # from tf.keras elsewhere in my script
+    def create_model(self):
+        model = Sequential()
+        # input: the 20x20 grid, one-hot over 4 colours, stacked over 2 successive frames
+        model.add(Conv3D(256, (4, 4, 2), input_shape=(20, 20, 4, 2)))
+        model.add(Activation('relu'))
+        model.add(Dropout(0.2))
+
+        model.add(Conv3D(256, (2, 2, 1), input_shape=self.input_shape))
+        model.add(Activation('relu'))
+
+        model.add(Flatten())
+        model.add(Dense(64))
+        model.add(Dense(self.num_actions, activation='linear'))
+        model.compile(loss='mse', optimizer=Adam(self.learning_rate), metrics=['accuracy'])
+        return model
+
+
+Is a 3d convolution a good idea?
+Is 256 filters a good idea?
+Are the filters (4,4,2) and (2,2,1) suitable?
+I realise answers may be highly subjective but I'm just looking for someone to point out any immediate flaws in the architecture.
+"
+"['natural-language-processing', 'classification', 'pytorch', 'transformer']"," Title: Is it possible to do token classification using a model such as GPT-2?Body: I am trying to use PyTorch's transformers as a part of a research project to do sentiment analysis of several types of review data (laptop and restaurant).
+
+To do this, my team is taking a token-based approach and we are using models that can perform token analysis.
+
+One problem we have encountered is that many of the models in PyTorch's transformers do not support token classification, but do support sequence classification. One such model we wanted to test is GPT-2.
+
+In order to overcome this, we proposed using sequence classifiers on single tokens which should work in theory, but possibly at reduced accuracy.
+
+This raises the following questions:
+
+
+- Is it possible to do token classification using a model such as GPT-2 using PyTorch's transformers?
+- How do sequence classifiers perform on single token sequences?
+
+"
+"['convolutional-neural-networks', 'objective-functions', 'facial-recognition']"," Title: Face recognition model loss not decreasingBody: I wrote a script to do train a Siamese Network style model for face recognition on LFW dataset but the training loss doesnt decrease at all. Probably there's a bug in my implementation. Could you please point it out.
+Right now my code does:
+
+
+- Each epoch has 0.5M triplets all generated in an online way from data (since the exhaustive number of triplets is too big).
+- Triplet sampling method: We have a dictionary of {class_id: list of file paths with that class id}. We then create a list of classes which we can use as the positive class (some classes have only 1 image, so they can't be used as a positive class). At any iteration we randomly sample a positive class from this refined list and a negative class from the original list. We randomly sample 2 images from the positive class (as Anchor or A and Positive as P) and 1 from the negative class (Negative or N). A, P, N form our triplet.
+- The model used is a ResNet whose final (512, 1000) softmax layer is replaced with a (512, 128) Dense layer (no activation). To avoid overfitting, only the last Dense layer and layer4 are kept trainable and the rest are frozen.
+- During training we find triplets which are semi-hard in a batch (Loss between 0 and margin) and use only those to do backprop (they mention this in the FaceNet paper)
+
+
+from torchvision import models, transforms
+from torch.utils.data import Dataset, DataLoader
+import torch, torch.nn as nn, torch.optim as optim
+from torch.utils.tensorboard import SummaryWriter
+import os, glob
+import numpy as np
+from PIL import Image
+
+image_size = 224
+batch_size = 512
+margin = 0.5
+learning_rate = 1e-3
+num_epochs = 1000
+
+model = models.resnet18(pretrained=True)
+model.fc = nn.Linear(model.fc.in_features, 128, bias=False)
+
+for param in model.parameters():
+ param.requires_grad = False
+for param in model.fc.parameters():
+ param.requires_grad = True
+for param in model.layer4.parameters():
+ param.requires_grad = True
+
+optimizer = optim.Adam(params=list(model.fc.parameters())+list(model.layer4.parameters()), lr=learning_rate, weight_decay=0.05)
+
+device = torch.device(""cuda"" if torch.cuda.is_available() else ""cpu"")
+model = nn.DataParallel(model).to(device)
+writer = SummaryWriter(log_dir=""logs/"")
+
+class TripletDataset(Dataset):
+ def __init__(self, rootdir, transform):
+ super().__init__()
+ self.rootdir = rootdir
+ self.classes = os.listdir(self.rootdir)
+ self.file_paths = {c: glob.glob(os.path.join(rootdir, c, ""*.jpg"")) for c in self.classes}
+ self.positive_classes = [c for c in self.classes if len(self.file_paths[c])>=2]
+ self.transform = transform
+
+ def __getitem__(self, index=None):
+ class_pos, class_neg = None, None
+ while class_pos == class_neg:
+ class_pos = np.random.choice(a=self.positive_classes, size=1)[0]
+ class_neg = np.random.choice(a=self.classes, size=1)[0]
+
+ fp_a, fp_p = np.random.choice(a=self.file_paths[class_pos], size=2, replace=False)
+ fp_n = np.random.choice(a=self.file_paths[class_neg], size=1)[0]
+
+ return {
+ ""fp_a"": fp_a,
+ ""fp_p"": fp_p,
+ ""fp_n"": fp_n,
+ ""A"": self.transform(Image.open(fp_a)),
+ ""P"": self.transform(Image.open(fp_p)),
+ ""N"": self.transform(Image.open(fp_n)),
+ }
+
+ def __len__(self):
+ return 500000
+
+
+def triplet_loss(a, p, n, margin=margin):
+ d_ap = (a-p).norm(p='fro', dim=1)
+ d_an = (a-n).norm(p='fro', dim=1)
+ loss = torch.clamp(d_ap-d_an+margin, min=0)
+ return loss, d_ap.mean(), d_an.mean()
+
+transform = transforms.Compose([
+ transforms.RandomResizedCrop(image_size),
+ transforms.RandomHorizontalFlip(),
+ transforms.ToTensor(),
+ transforms.Normalize([0.596, 0.436, 0.586], [0.2066, 0.240, 0.186])
+ ])
+train_dataset = TripletDataset(""lfw"", transform)
+nw = 4 if torch.cuda.is_available() else 0
+train_dataloader = DataLoader(train_dataset, batch_size=batch_size, num_workers=0, shuffle=True)
+
+num_batches = len(train_dataloader)
+model.train()
+running_loss = 0
+
+for epoch in range(num_epochs):
+ for batch_id, dictionary in enumerate(train_dataloader):
+ a, p, n = dictionary[""A""], dictionary[""P""], dictionary[""N""]
+ a, p, n = a.to(device), p.to(device), n.to(device)
+ emb_a, emb_p, emb_n = model(a), model(p), model(n)
+ losses, d_ap, d_an = triplet_loss(a=emb_a, p=emb_p, n=emb_n)
+
+ semi_hard_triplets = torch.where((losses>0) & (losses<margin))
+ losses = losses[semi_hard_triplets]
+ loss = losses.mean()
+ loss.backward()
+ optimizer.step()
+ running_loss += loss.item()
+
+ print(""Epoch {} Batch {}/{} Loss = {} Avg AP dist = {} Avg AN dist = {}"".format(epoch, batch_id, num_batches, loss.item(), d_ap.item(), d_an.item()), flush=True)
+ writer.add_scalar(""Loss/Train"", loss.item(), epoch*num_batches+batch_id)
+ writer.add_scalars(""AP_AN_Distances"", {""AP"": d_ap.item(), ""AN"": d_an.item()}, epoch*num_batches+batch_id)
+
+ print(""Epoch {} Avg Loss {}"".format(epoch, running_loss/num_batches), flush=True)
+ writer.add_scalar(""Epoch_Loss"", running_loss/num_batches, epoch)
+ torch.save(model.state_dict(), ""facenet_epoch_{}.pth"".format(epoch))
+
+
+Loss graphs: https://tensorboard.dev/experiment/8TgzPTjuRCOFkFV5lr5etQ/
+Please let me know if you need some other information to help you help me.
+"
+"['machine-learning', 'deep-learning', 'reinforcement-learning', 'ai-design', 'rewards']"," Title: Which model should I choose to maximise reward of having chosen two numbers from a list?Body: I am looking for a technique to train a machine learning model to choose two items from a list.
+
+So, given a list $x=[x_1, x_2, x_3, x_4, \dots, x_n]$, the model needs to choose two elements $(x_i, x_j)$. I have a function $R(x, x_i, x_j)$, which will output the reward of choosing $(x_i, x_j)$ given $x$.
+
+What type of models should I use, and how should I train it to maximize the reward?
+
+I've tried using deep reinforcement learning, but I ran into the following problems with implementing the Q-Network:
+
+
+- Variable-length inputs (fixed by using RNN, I think)
+- The output size grows quadratically (for an input set of n elements, there are n choose 2 = n(n-1)/2 ways to pick 2 elements, so the network needs to output n choose 2 expected rewards); see the sketch after this list for a workaround I considered
+
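+One workaround I considered: instead of a Q-network whose output layer has one unit per pair, score each candidate pair with a single shared scorer and take the argmax. A minimal sketch (score_fn is a hypothetical learned scorer):
+
+import itertools
+
+def best_pair(x, score_fn):
+    # score_fn(x_i, x_j) returns a predicted reward for one pair at a time,
+    # so no output layer has to grow with n choose 2
+    pairs = itertools.combinations(range(len(x)), 2)
+    return max(pairs, key=lambda ij: score_fn(x[ij[0]], x[ij[1]]))
+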
+"
+"['natural-language-processing', 'ai-design', 'philosophy', 'applications']"," Title: Use cases for AI inside the software companyBody: This question is a bit philosophic and is about making new use cases for software companies. Let me describe what exist for now, why it is not enough, and what is needed.
+
+I know that there is a lot of existing research on applying ML to software (please don't simply point to this one!), but none of it considers the application of ML to the software company as a whole, rather than to the software alone.
+
+Existing approaches that apply AI to software engineering tasks consider it as follows:
+
+human1 -> software (big code) <- human2
+
+
+That means that human1 makes some part of the software (which is a part of big code), and human2 reuses some knowledge from it. It may be a bugfix pattern (as e.g. DeepCode does), an API usage pattern, code repair, code summarization, code search, or whatever else. I think the main reason for this is the original naturalness hypothesis:
+
+
+ The naturalness hypothesis. Software is a form of human communication; software corpora have similar statistical properties to natural language corpora, and these properties can be exploited to build better software engineering tools.
+
+
+(from Allamanis et al, page 3)
+
+But imagine one software company. It has:
+
+
+- Some number of engineers,
+- Some number of managers,
+- The software product,
+- Information related to the software product (documentation, bug/task tracking system, code review),
+- Some number of formal management processes (waterfall, scrum or whatever else),
+- Some number of informal processes
+
+
+But none of these models consider the software as a product itself. I mean that we should consider the model as follows:
+
+company -> software product -> customers
+ |
+ v
+ big code
+
+
+or even
+
+engineer1 -> |
+ |
+engineer2 -> |
+ |
+... | ----> software product ----> customers
+ | |
+engineerN -> | |
+ | |
+manager --> | |
+ v
+ big code
+
+
+So my questions are:
+
+
+- Are there any cases of investigation of such models?
+- Are there any similar cases in related fields, say in general companies (not specifically software ones)?
+- Are there any analogies (not specifically from software-related domains) where some knowledge can be transferred from a bigger object (big code in our case) to a smaller one (software product)?
+
+Any ideas are welcome.
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'experience-replay']"," Title: Can experience replay be used for training after completing every single epoch?Body: The DQN implements replay memory. Based on my research, I believe the replay memory starts to get used for training once there is enough experience in the memory buffer. This means the neural network gets trained while the game plays.
+
+My question is: if I were to play the game for 10000 epochs, store all the experiences, and only then train from those experiences, would that have the same effect as training while running through the 10000 epochs? Is it frowned upon to do it this way? Are there any advantages?
+"
+"['neural-networks', 'datasets']"," Title: Can you let specific data impact a neural network more than other data?Body: I have a lot of empty values in my dataset, so I want to let my neural network 'learn more' on the rows that have no empty values because these rows are of higher importance.
+
+Is there a way to do this?
+"
+"['machine-learning', 'image-recognition', 'datasets', 'image-processing']"," Title: Image dataset for pomegranate plant diseaseBody: I am implementing a project on pomegranate plant disease in Machine learning. I want a dataset of all kind images of a healthy and unhealthy part of the pomegranate plant. I got a dataset from Fruit360 but that is only for pomegranate fruits but need for leaves also. Is there anyone who knows any website, link, version control system repository and/or any resource from which I get a dataset for leaves.
+"
+"['philosophy', 'agi', 'proofs', 'risk-management']"," Title: Is an oracle that answers only with a ""yes"" or ""no"" dangerous?Body: I was thinking about the risks of Oracle AI and it doesn't seem as safe to me as Bostrom et al. suggest. From my point of view, even an AGI that only answers questions could have a catastrophic impact. Thinking about it a little bit, I came up with this Proof:
+
+Lemma
+
+
+ We are not safe even by giving the oracle the ability to only answer yes or no.
+
+
+Proof
+
+
+ Let's say that our oracle must maximize an utility function $\phi$, there is a procedure that encodes the optimality of $\phi$. Since a procedure is, in fact, a set of instructions (an algorithm), each procedure can be encoded as a binary string, composed solely of 0 and 1, Therefore we will have $\phi \in {\{0,1\}^n}$, assuming that the optimal procedure has finite cardinality. Shannon's entropy tells us that every binary string can be guessed by answering only yes/no to questions like: is the first bit 0? and so on, therefore we can reconstruct any algorithm via binary answers (yes / no).
+
+
+Is this reasoning correct and applicable to this type of AI?
+"
+"['deep-learning', 'generative-adversarial-networks', 'overfitting']"," Title: Am I overfitting my GAN model?Body: I'm training a DCGAN model on a 320x320 dataset of images and after an hour of training the generator started to generate (on the same latent space noise as during training) images that are identical to the dataset. For example, if my dataset is images of cars, I should expect to see unexisting designs of cars, right? Am I understanding this wrong? I know this is a very general question but I was wondering if this is what should happen and if I should try on different latent space values and then see proper results and not just copies of my dataset?
+"
+"['machine-learning', 'datasets', 'long-short-term-memory', 'cross-entropy']"," Title: Are sentences from the same document independent and identically distributed?Body: I am trying to build an LSTM model to generate Shakspeare-like poems. I have data set $\{s_1, s_2, \dots, s_m \}$, which are sentences of Shakespeare poems, and each sentence contains words $\{w_1, w_2, \dots, w_n \}$.
+
+I am wondering: Are the different $s_i$ ($i=1, \dots, m$) independent and identically distributed (IID) samples? Are the $w_i$ ($i=1, \dots, n$) within each sentence IID?
+"
+"['convolutional-neural-networks', 'tensorflow', 'keras', 'overfitting', 'convolution']"," Title: Is it a sign of overfitting when validation_loss dips and then goes up with increasingly bigger swings?Body: I am experimenting with a ConvNet to categorize images taken with a depth camera. So far I have 4 sets of 15 images each. So 4 labels. The original images are 680x880 16-bit grayscale. They are scaled down before feeding it to the ImageDataGenerator to 68x88 RGB (each color channel with equal value). I am using the ImageDataGenerator (IDG) to create more variance on the sets. (The IDG does not seem to be able to handle 16-bit grayscale images, nor 8-bit images well, so hence I converted them to RGB).
+
+I estimate the images to be low on features compared to regular RGB images, because they represent depth. To get a feel for the images, here are a few downscaled examples:
+
+
+
+
+
+I let it train for 4,096 epochs, to see how that would go.
+
+This is the result of the model and validation loss.
+
+
+You can see that in the early epochs the validation (test / orange line) loss dips, and then goes up and starts to show big swings. Is this a sign of overfitting?
+
+Here is a zoomed in image of the early epochs.
+
+
+The model loss (train / blue line) reached relatively low values, with an accuracy of 1.000. Training again repeatedly shows the same kind of graphs.
+Here are the last epochs.
+
+Epoch 4087/4096
+7/7 [==============================] - 0s 10ms/step - loss: 0.1137 - accuracy: 0.9286 - val_loss: 216.2349 - val_accuracy: 0.7812
+Epoch 4088/4096
+7/7 [==============================] - 0s 10ms/step - loss: 0.0364 - accuracy: 0.9643 - val_loss: 234.9622 - val_accuracy: 0.7812
+Epoch 4089/4096
+7/7 [==============================] - 0s 10ms/step - loss: 0.0041 - accuracy: 1.0000 - val_loss: 232.9797 - val_accuracy: 0.7812
+Epoch 4090/4096
+7/7 [==============================] - 0s 10ms/step - loss: 0.0091 - accuracy: 1.0000 - val_loss: 238.7082 - val_accuracy: 0.7812
+Epoch 4091/4096
+7/7 [==============================] - 0s 10ms/step - loss: 0.0248 - accuracy: 1.0000 - val_loss: 232.4937 - val_accuracy: 0.7812
+Epoch 4092/4096
+7/7 [==============================] - 0s 10ms/step - loss: 0.0335 - accuracy: 0.9643 - val_loss: 273.6542 - val_accuracy: 0.7812
+Epoch 4093/4096
+7/7 [==============================] - 0s 10ms/step - loss: 0.0196 - accuracy: 1.0000 - val_loss: 258.2848 - val_accuracy: 0.7812
+Epoch 4094/4096
+7/7 [==============================] - 0s 10ms/step - loss: 0.0382 - accuracy: 0.9643 - val_loss: 226.6226 - val_accuracy: 0.7812
+Epoch 4095/4096
+7/7 [==============================] - 0s 10ms/step - loss: 0.0018 - accuracy: 1.0000 - val_loss: 226.2943 - val_accuracy: 0.7812
+Epoch 4096/4096
+7/7 [==============================] - 0s 11ms/step - loss: 0.0201 - accuracy: 1.0000 - val_loss: 207.3653 - val_accuracy: 0.7812
+
+
+Not sure if it is required to know the architecture of the neural network to judge whether this is overfitting on this data set. Anyway, here is the setup.
+
+import tensorflow as tf
+from tensorflow.keras.models import Sequential
+from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
+# inputShape, learning_rate and nr_of_classes are defined elsewhere in my script
+
+kernelSize = 3
+kernel = (kernelSize, kernelSize)
+
+model = Sequential()
+model.add(Conv2D(16, kernel_size=kernel, padding='same', input_shape=inputShape, activation='relu'))
+model.add(MaxPooling2D(pool_size=(2, 2)))
+
+model.add(Conv2D(32, kernel_size=kernel, padding='same', activation='relu'))
+model.add(MaxPooling2D(pool_size=(2, 2)))
+
+model.add(Conv2D(64, kernel_size=kernel, padding='same', activation='relu'))
+model.add(MaxPooling2D(pool_size=(2, 2)))
+
+model.add(Flatten()) # this converts our 3D feature maps to 1D feature vectors
+model.add(Dense(32, activation='relu'))
+model.add(Dropout(0.2))
+model.add(Dense(nr_of_classes, activation='softmax'))
+
+sgd = tf.keras.optimizers.SGD(lr=learning_rate, decay=1e-6, momentum=0.4, nesterov=True)
+model.compile(loss='categorical_crossentropy',
+ optimizer=sgd,
+ metrics=['accuracy'])
+
+"
+"['convolutional-neural-networks', 'object-detection']"," Title: Dealing with very similar object classes in object detectionBody: I'm working on an object detection problem using Faster R-CNN. I need to identify two object classes, and they are very similar to one another. Furthermore they are similar to a third type of object which should be considered as background. Also, all three of these objects have a lot of variation within them.
+
+In my particular example the two objects of interest are 1) a statue of a particular named person who appears in many statues, 2) a statue of anyone else
+
+Examples:
+
+
+Also, I want to treat living flesh people, or non-humanoid statues as background.
+
+Now here are some interesting results:
+
+
+
+The RPN losses follow the expected trajectory for such a problem, but on the other hand, I really had to have faith and hang in there for the detector losses. They take a while to start decreasing, and I presume it's because there is a relatively sharp trough leading to the minimum of the loss function with respect to the weights (because the labelled classes are so similar to one another). Miraculously (at least I think so), it does start to kind of work, but not as well as I'd like it to.
+
+My question is as in the title but here are some of my thought processes:
+
+
+- Does the class similarity spoil the bounding box regression? And then does that spoil the class inference in turn?
+- Would it be better to just detect humanoid statues in general, then train a classifier from scratch on the output of that? (I don't know about this one. Maybe the relevant info is already encoded in the detector of the Faster R-CNN and the added bonus is that the Faster R-CNN gets context from outside of the bounding box... unless the bounding box regression is being spoiled by the class ambiguity)
+
+"
+"['machine-learning', 'deep-learning', 'comparison', 'semi-supervised-learning']"," Title: What's the intuition behind contrastive learning?Body: Recently, I have seen a surge of papers w.r.t contrastive learning (a subset of semi-supervised learning).
+
+Can anyone give a detailed explanation of this approach, with its advantages/disadvantages, and the cases in which it gives better results?
+
+Also, why is it gaining traction amongst the ML research community?
+"
+"['machine-learning', 'time-series']"," Title: How to predict time series with accuracy?Body: I am trying to predict Forex time series. The nature of the market is that 80% of the time the price can not be predicted, but in 20% of the time it can be. For example, if the price drops down very deep, there is 99% probability that there will be a recovery process, and this is what I want to predict.
+
+So, how do I train a feed-forward network such that it only predicts those cases that have 99% certainty of taking place and, for the rest of the cases, outputs an ""unpredictable"" status?
+
+Imagine that my data set has 24 hours of continuous price data as input (as 1-minute samples), and as output I want the network to predict 1 hour of future price data. The only restriction I need to implement is that if the network is not ""sure"" that the price is predictable, it should output 0s. So, how do I implement safety in the predictions the network is outputting?
+
+It seems that my problem is similar to Google Compose, where it predicts the next word as you are typing , for example, if you type ""thank you"", it would add "" very much"" and this would be like 95% correct. I want the same, but it is just that my problem has too much complexity. Google uses RNNs, so maybe I should try a deep network of many layers of RNNs?
+"
+"['neural-networks', 'classification', 'objective-functions', 'graphs', 'graph-theory']"," Title: How to update edge features in a graph using a loss function?Body: Given a directed, edge attributed graph G, where the edge attribute is a probability value, and a particular node N (with binary features f1 and f2) in G, the algorithm that I want to implement is as follows:
+
+
+- List all the outgoing edges from N, let this list be called edgelist_N.
+- For all the edges in edgelist_N, randomly assign to the edge attribute a probability value such that the sum of all the probabilities assigned to the edges in the edgelist_N equals to 1.
+- Take the top x edges (x can be a hyperparameter).
+- List the nodes in which the edges from step 3 are incoming.
+- Construct a subgraph with node N, the nodes from step 4 and the edges from step 3.
+- Embed the subgraph (preferably using a GNN) and obtain it's embedding and use it with a classifier to predict say f1/f2.
+- Propagate the loss so as to update the edge probabilities, that was assigned randomly in step 2.
+
+
+I do not understand how to do step 7, i.e. update the edge attribute with the loss, so that edges which are more relevant in constructing the subgraph can be assigned a higher probability value.
+
+Any suggestion would be highly appreciated.
+Thank you very much.
+"
+"['neural-networks', 'classification', 'multilayer-perceptrons']"," Title: Model unfit for some part of spiral data despite low errorBody: I'm current testing a model for spiral data. After 500 epoches, loss is 0.04 but the result is still unmatch with some part of the training data. (bottom left)
+(source: upsieutoc.com)
+The model has 2 hidden layers of 16 tanh units each, running with ml5.js. I chose tanh because it seems to be smoother than ReLU; apart from that, they're the same.
+Is this thing caused by the model or by ml5 itself?
+"
+"['natural-language-processing', 'python', 'graph-theory']"," Title: How do I turn this formula of the average degree of a graph into Python code?Body: I am working through the textbook "Graph-Based Natural Language Processing and Information Retrieval", where I've got a question on implementation of this first Latex looking formula/algorithm.
+Can you help me turn the formula under 1.2 Graph Properties into python code? Yes, I know there are many other languages, but python is more user-friendly so I'm starting there, and will eventually rewrite it into C.
+As I read the above node example, sorry the D and E nodes were cut off.
+Node A has two outflowing arrows notating it as their head node, and it is the one tail node.
+This first sentence references the Graphs:
+To traverse from A to B: if the A-to-B value is sufficient (above Nx), go to B. If the A-to-B value is below Nx, go A to C to D to A to B; the total cost is 5.2+7+1+8 = 21.20 traverse cost, which makes sense.
+
+This sentence refers to the Latex formula in the book.
+Then, to start the formula calculation, the average degree of a graph "a" is equal to one over N times the sum of the in-degrees of the vertices? Asserting that the sum runs over a non-zero integer between 1 and N?
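+
+Here is my attempt so far at turning that into Python, assuming the formula just means the sum of the (in-)degrees divided by the number of vertices, and representing the graph as an adjacency dict:
+
+def average_degree(graph):
+    # graph maps each node to the list of nodes its outgoing edges point to
+    n = len(graph)
+    total_degree = sum(len(neighbours) for neighbours in graph.values())
+    return total_degree / n   # summing in-degrees instead gives the same value
+
+example = {'A': ['B', 'C'], 'B': [], 'C': ['D'], 'D': ['A']}
+print(average_degree(example))   # 4 edges / 4 nodes = 1.0
+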
+Ok, I only loaded one page and hope that's not a TOS violation or causes issue, it's a challenge to find people who understand graph theory.
+Let me know what questions you have, but I'm just wanting to get clarification if my understanding is what this page is saying.
+"
+"['long-short-term-memory', 'cross-entropy', 'maximum-likelihood']"," Title: Can the cross-entropy loss be used for a NLP task with LSTM?Body: I am trying to build an LSTM model to generate Shakspeare-like poems. I have training set $\{s_1,s_2, \dots,s_m\}$, which are sentences of Shakespeare poems, and each sentence contains words $\{w_1,w_2, \dots,w_n\}$.
+
+To my understanding, each sentence $s_i$, for $i=1, \dots,m$, is a random sequence containing the words $w_j$, for $j=1, \dots,n$. The LSTM model is estimated by applying the maximum likelihood estimation (MLE) method, which uses the cross-entropy loss for optimization. The use of MLE requires that the samples in the random sequence be independent and identically distributed (i.i.d.); however, the word sequence $w_j$ is not i.i.d. (since it is non-Markov). Therefore, I am suspicious about using the cross-entropy loss for training an LSTM for this NLP task (which seems to be the common practice).
+"
+"['recurrent-neural-networks', 'transformer', 'speech-recognition']"," Title: Can transformer be better than RNN for online speech recognition?Body: Does transformer have the potential to replace RNN end-to-end models for speech recognition for online speech recognition? This mainly depends on accuracy/latency and deploy cost, not training cost. Can transformer support low latency online use case and have comparable deploy cost and better result than RNN models?
+"
+"['reference-request', 'symbolic-ai', 'meta-learning', 'meta-rules']"," Title: What are recent AI software systems and research papers close to J. Pitrat's ideas?Body: J. Pitrat (born in 1934) was a French leading artificial intelligence scientist (the first to get a Ph.D. in France mentioning "artificial intelligence"). His blog is still online and of course refer to most of his papers (e.g. A Step toward an Artificial Artificial Intelligence Scientist, etc.) and books, notably Artificial Beings: the conscience of a conscious machine (his last book). He passed away in October 2019. I attended (and presented a talk) at a seminar in his memory.
+What are recent AI systems or research papers related to the idea of symbolic AI, introspection, declarative metaknowledge, meta-learning, meta-rules, etc.?
+Most of those I know are more than 20 years old (e.g. Lenat Eurisko; I am aware of OpenCyC). I am interested in papers or systems published after 2010 (perhaps AGI papers with actual complex open source software prototypes).
+-see also the RefPerSys system-
+"
+"['tensorflow', 'probability', 'tensorflow-probability']"," Title: What are the prerequisites to start using the TensorFlow Probability library?Body: I have some familiarity with the regular Tensorflow library and have been able to create a number of working models with it. There are more than enough resources out there to get up and running and answer most questions on the standard library.
+
+But I recently came across the video on some high-level capabilities of the Tensorflow Probability library, TensorFlow Probability: Learning with confidence (TF Dev Summit '19), and I would like to learn it.
+
+The issue is that there are very few resources out there on TFP and given my lack of a formal background in math/statistics, I find myself aimlessly googling to get a grasp of what's going on in the docs. I'm more than willing to invest the time needed, but I just need to know where I can start in terms of resources I can access online. Specifically, I'm looking to get the necessary domain knowledge needed to work with the library given the lack of courses/tutorials on the library itself.
+"
+"['reinforcement-learning', 'robotics', 'path-planning', 'control-theory']"," Title: How to deal with approximate states when doing path planning?Body: If one is interested in implementing a path planning algorithm that is grid-based, one needs to consider the fact that your grid points will never represent the true state of the robot.
+
+How is this dealt with?
+
+Suppose we're doing path planning using a grid-based search on the side of the control for a desired grid position as an output state.
+
+How would you handle the discrepancy between your actual starting position and your discretized starting position?
+
+I understand that normally you may use an MPC instead, which continually recalculates an optimal path using some type of nonlinear solver, but suppose we don't do this - suppose we restrict ourselves to only a grid search and suppose at after every action the state of the robot has to be considered as living in a particular grid point.
+"
+['probability-distribution']," Title: How can I use the success and failure data to estimate parameters of a Dirichlet distribution?Body: I have used Beta function to estimate the performance of the agent. I have failure and success data of the task that runs on the agent.
+The parameter $\alpha$ is the number of successful tasks, while $\beta$ is the number of failures. Thus, I can estimate the performance by using the expected value of the Beta distribution, $$\mu = \frac{\alpha}{\alpha+\beta}$$
+
+So, I am looking for a similar model, such that its parameter can be estimated from the success and failure data. So far I found Dirichlet distribution.
+
+What is the expected value of the Dirichlet distribution? How can I use the success and failure data to estimate the parameters of this distribution?
+
+Let's check the following example:
+
+Suppose that we use a Dirichlet prior represented by $Dirichlet(1, 1, 1)$ and observe $13$ results with $8$ Successful, $2$ Missing, and $3$ Failures. Then we get the posterior to be $Dirichlet(1+8, 1+2, 1+3)$. Then if you define the performance value $\alpha$ to be the expectation of $P(x=Successful)$,
+then $\alpha$ will be $(1+8)/[(1+8)+(1+2)+(1+3)] = 0.56$
+
+Now
+Suppose that we use a Beta prior represented by $Beta(1,1)$ and observe $13$ results with $8$ Successful, and $3$ Failures. Then we get the posterior to be $Beta(1+8, 1+3)$. Then if you define the performance value Pr to be the expectation of $P(x=Successful)$,
+then $\alpha = (1+8)/[(1+8)+(1+3)] = 0.69$
+
+Are my calculations and concept right?
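+
+A quick script I wrote to sanity-check the two posterior means above (using the fact that the Dirichlet mean of component $i$ is $\alpha_i / \sum_j \alpha_j$):
+
+dirichlet_post = [1 + 8, 1 + 2, 1 + 3]                       # successful, missing, failed
+alpha_dirichlet = dirichlet_post[0] / sum(dirichlet_post)    # about 0.56
+
+beta_post = (1 + 8, 1 + 3)                                   # successful, failed
+alpha_beta = beta_post[0] / sum(beta_post)                   # about 0.69
+
+print(round(alpha_dirichlet, 2), round(alpha_beta, 2))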
+"
+"['probability', 'ensemble-learning']"," Title: Why does the error ensemble use the ceiling of the number of classifiers?Body:
+What is $y$? Why is $k$ the ceil of $n/2$? What is $y \geq k$?
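+
+My current guess is that $y$ counts how many of the $n$ base classifiers are wrong at the same time, and $k = \lceil n/2 \rceil$ is the smallest number of wrong votes that makes the majority vote wrong. A small script I used to sanity-check that reading (assuming independent base-classifier errors; the numbers are just an example):
+
+from math import comb, ceil
+
+def ensemble_error(n_classifiers, error_rate):
+    # probability that at least k = ceil(n/2) of the n classifiers are wrong
+    k = ceil(n_classifiers / 2)
+    return sum(comb(n_classifiers, y) * error_rate**y * (1 - error_rate)**(n_classifiers - y)
+               for y in range(k, n_classifiers + 1))
+
+print(ensemble_error(11, 0.25))   # about 0.034, well below the 0.25 base error rate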
+"
+"['recurrent-neural-networks', 'gradient-descent', 'stochastic-gradient-descent', 'momentum']"," Title: Why do momentum techniques not work well for RNNs?Body: AFAIK, momentum is quite useful when training CNNs, and can speed-up the training substantially without any drop in validation accuracy.
+I've recently learned that it is not as helpful for RNNs, where plain SGD is preferred.
+For example, Deep Learning by Goodfellow et. al says (section 10.11, page 401):
+
+Both of these approaches have largely been replaced by simply using SGD (even without momentum) applied to LSTMs.
+
+The author talks about LSTMs, and "both of these approaches" refers, according to my understanding, to second-order methods and to first-order SGD methods with momentum, respectively.
+What causes this discrepancy?
+"
+"['machine-learning', 'bayesian-optimization']"," Title: Understanding Bayesian Optimisation graphBody: I came across the concept of Bayesian Occam Razor in the book Machine Learning: a Probabilistic Perspective. According to the book:
+
+
+ Another way to understand
+ the Bayesian Occam’s razor effect is to note that probabilities must
+ sum to one. Hence $\sum_D' p(D' |m) = 1$, where the sum is over all possible data sets. Complex
+ models, which can predict many things, must spread their probability mass thinly, and hence
+ will not obtain as large a probability for any given data set as simpler models. This is sometimes called the conservation of probability mass principle.
+
+
+The figure below is used to explain the concept:
+
+
+
+
+
+ Image Explanation: On the vertical axis we plot the predictions of 3 possible models: a simple one, $M_1$ ; a medium one, $M_2$ ; and a complex one, $M_3$ . We also indicate the actually observed
+ data $D_0$ by a vertical line. Model 1 is too simple and assigns low probability to $D_0$ . Model 3
+ also assigns $D_0$ relatively low probability, because it can predict many data sets, and hence it
+ spreads its probability quite widely and thinly. Model 2 is “just right”: it predicts the observed data with a reasonable degree of confidence, but does not predict too many other things. Hence model 2 is the most probable model.
+
+
+
+What I do not understand is this: when a complex model is used, it will likely overfit the data, and hence the plot for a complex model should look bell-shaped with its peak at $D_0$, while simpler models should have a broader bell shape. But the graph here shows something else entirely. What am I missing here?
+"
+"['deep-learning', 'convolutional-neural-networks', 'keras', 'performance']"," Title: How can I merge outputs of two separate layers so that the overall performance improves?Body: I am training a combined model (fine-tuned VGG16 for images and shallow FCN for numerical data) to do a binary classification. However, the overall AUC score is not what I expected it to be.
+
+Image-only mean AUC after 5-fold cross-validation is about 0.73 and numeric data only 5-fold mean AUC is 0.65. I was hoping to improve the mean AUC by combining the models into one and merging output layers using concatenate in Keras.
+
+img_output = Dense(256, activation=""sigmoid"")(x_1)
+
+
+and
+
+numeric_output = Dense(128, activation=""relu"")(x_2)
+
+
+are the output layers of the two models. And,
+
+concat = concatenate([img_output, numeric_output])
+hidden1 = Dense(64, activation=""relu"")(concat)
+main_output = Dense(1, activation='sigmoid', name='main_output')(hidden1)
+
+
+is the way I concatenated them.
+
+Since image-only performance was better, I decided that it might be reasonable to have a larger dense layer for img_output (256 units) and ended up using 128 in numeric_output. I could only reach a mean AUC of 0.67 using the combined model. I think I should rearrange the concatenation of the two outputs somehow (by introducing another learnable parameter (like formula (10) in section 3.3 of this work), a bias, or something else) to get more of a boost in mean AUC. However, I was not able to find what options were available.
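+
+One option I sketched (untested) is a learned gate that weights the two branches before the final classifier, instead of a plain concatenation; the 64-unit sizes are arbitrary and img_output / numeric_output are the layers defined above:
+
+from tensorflow.keras import layers   # assuming tf.keras; adjust to your own imports
+
+img_proj = layers.Dense(64, activation='relu')(img_output)
+num_proj = layers.Dense(64, activation='relu')(numeric_output)
+gate = layers.Dense(64, activation='sigmoid')(layers.concatenate([img_proj, num_proj]))
+inv_gate = layers.Lambda(lambda g: 1.0 - g)(gate)   # weight for the numeric branch
+merged = layers.add([layers.multiply([gate, img_proj]),
+                     layers.multiply([inv_gate, num_proj])])
+hidden1 = layers.Dense(64, activation='relu')(merged)
+main_output = layers.Dense(1, activation='sigmoid', name='main_output')(hidden1)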
+
+Hope you have some ideas worth trying.
+"
+"['deep-learning', 'math', 'gradient-descent', 'meta-learning', 'model-agnostic-meta-learning']"," Title: Understanding the derivation of the first-order model-agnostic meta-learningBody: According to the authors of this paper, to improve the performance, they decided to
+
+
+ drop backward pass and using a first-order approximation
+
+
+I found a blog which discussed how to derive the math but got stuck along the way (please refer to the embedded image below):
+
+
+- Why
+
disappeared in the next line.
+- How come
(which is an Identity matrix)
+
+
+
+
+Update: I also found another math solution for this. To me it looks less intuitive but there's no confusion with the disappearance of 𝜃 as in the first solution.
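+
+Writing the step out as far as I can reconstruct it (my own notation, so it may differ slightly from the blog's): with the inner update $\theta' = \theta - \alpha \nabla_\theta \mathcal{L}_{\text{train}}(\theta)$, the chain rule gives
+
+$$\nabla_\theta \mathcal{L}_{\text{test}}(\theta') = \left(\frac{\partial \theta'}{\partial \theta}\right)^\top \nabla_{\theta'} \mathcal{L}_{\text{test}}(\theta') = \left(I - \alpha \nabla^2_\theta \mathcal{L}_{\text{train}}(\theta)\right)^\top \nabla_{\theta'} \mathcal{L}_{\text{test}}(\theta') \approx \nabla_{\theta'} \mathcal{L}_{\text{test}}(\theta'),$$
+
+where $\partial \theta / \partial \theta = I$ is presumably where the identity matrix comes from, and dropping the Hessian term is the first-order approximation. I am not sure this matches the blog's notation exactly, hence the two questions above.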
+
+"
+"['deep-learning', 'convolutional-neural-networks']"," Title: Is there a way to add ""focus"" on parts of the image when using CNNs?Body: I'm building a CNN/3DCNN model that classifies hand gestures. The problem is that the actual gesture occupies only like 1% of the whole image. That means that an enormous amount of convolutional operations is done on the ""empty"" parts of the image, which is useless.
+
+Is there a way to solve this problem? I was thinking about a MaxPooling layer with a giant pool size, but nearby features extracted from the gesture will probably be ""compressed"" into only 1 feature.
+"
+"['reinforcement-learning', 'rewards']"," Title: How to incentivise snake to go straight to apple?Body: I have made a Deep Q Network for the game snake but unfortunately, the snake exhibits some unwanted behavior. It generally does quite well but sometimes it gets stuck in an infinite loop that it can't escape and at the start of the game it takes a very long route to the apple rather than taking a more direct route.
+
+The discount factor per time step is 0.99. The Snake gets a reward of +9 for getting an Apple and -1 for dying. Does anybody have any recommendations on how I should tune the hyperparameters/reward function to minimize this unwanted behavior?
+
+I was thinking that reducing the discount factor may be a good idea?
+"
+"['ai-design', 'tensorflow', 'keras']"," Title: How to deal with features which are here just for training?Body: I'm new to the Data Science field and last week I started to learn about Neural Networks and Deep Learning. To practice, I decided to do a small project: design a Neural Network to predict the winner of an NBA game given the two teams playing. Also, for each match I have 2 stats (let's say number of points and number of free throws) for each of the teams.
+
+In the end, the dataset looks like:
+
+| ID | Home | Away | H_Pts | H_Fts | A_Pts | A_Fts | H_win |
+|:---:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
+| 1 | Team1 | Team2 | 45 | 10 | 47 | 8 | 1 |
+| 2 | Team3 | Team4 | 56 | 6 | 70 | 13 | 0 |
+| ... | ... | ... | ... | ... | ... | ... | ... |
+
+
+I implemented the model with TensorFlow/Keras (with the help of this tutorial: Classify structured data with feature columns | TensorFlow Core).
+
+The code is pretty concise:
+
+
+
+import tensorflow as tf
+from tensorflow import feature_column
+from tensorflow.keras import layers
+
+batch_size = 16
+train_ds, test_ds, val_ds = get_datasets() # The function mainly uses tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
+team_names = get_team_names()
+
+feature_columns = []
+for column_name in ['Home', 'Away']:
+ team = feature_column.categorical_column_with_vocabulary_list(column_name, team_names)
+ feature_columns.append(feature_column.indicator_column(team))
+
+for column_name in ['H_Pts', 'H_Fls', 'A_Pts', 'A_Fls']:
+ feature_columns.append(feature_column.numeric_column(column_name))
+
+feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
+
+model = tf.keras.Sequential([
+ feature_layer,
+ layers.Dense(128, activation='relu'),
+ layers.Dense(128, activation='relu'),
+ layers.Dense(1)
+])
+
+
+model.compile(optimizer='adam', loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), metrics=['accuracy'])
+
+model.fit(train_ds, validation_data=val_ds, epochs=10)
+
+loss, accuracy = model.evaluate(test_ds)
+print(f""Model evaluation: Loss = {loss} | Accuracy = {accuracy}"")
+
+
+Trained with just 100 games, I get a great accuracy: 99%. Of course: as it is, the test dataset given to model.evaluate(test_ds) contains everything except the target label H_win. Because H_win can easily be deduced from H_Pts and A_Pts, I get a high accuracy. But this model can't work because by definition you don't know the number of points of each team before the game...
+
+How should I deal with features like these, which I do not want to predict (so they're not labels) but which should still be considered during training? Does this kind of feature have a name?
+"
+"['neural-networks', 'machine-learning', 'image-recognition']"," Title: How can I train a neural network to describe the characteristics of a picture?Body: I have collected a set of pictures of people with a text explaining the characteristics of the person on the picture, for example, ""Big nose"" or ""Curly hair"".
+
+I want to train some type of model that takes in any picture and returns a description of the picture in terms of characteristics.
+
+However, I have a hard time figuring out how to do this. It is not like labeling ""dog"" or ""apple"", where I can create a set of training data and then evaluate performance; here I cannot. If it were, I would probably have used a CNN, and probably also VGG-16, to help me out.
+
+I only have two ML courses under my belt and have never really encountered a problem like this before. Can someone help me to get in the right direction?
+
+As of now, I have a data set of 13000 labeled images I am very confident it is labeled well. I do not know of any pre-trained datasets that could be of help in this instance, but if you know of one it might help.
+
+Worth noting is that every label is or should at least be unique. If for example there exist two pictures with the same label of ""Big nose"" it is purely coincidental.
+"
+"['machine-learning', 'natural-language-processing', 'python']"," Title: Owner Search for given Server SNOBody: I am newbie to NLP.
+I have a excel sheet with following columns:
+ Server_SNo,
+ Owner,
+ Hosting Dept,
+ Bus owner,
+ Applications hosted,
+ Functionality,
+ comments
+
+a. Except the Server_SNo, other columns may or may not have data.
+b. For some records there is no data except Server_SNo which is the first column.
+c. One business owner can own more than 1 Server.
+
+So, out of 4000 records, about 50% of the data contains a direct mapping from a server to its owner. The remaining 50% of the data has some combination of the other columns (Owner, Hosting Dept, Bus owner, Applications hosted, Functionality and comments).
+
+Here is my problem: I need to find the owner for a given Server_SNo for the 50% of the data that only has some combination of the other columns (Owner, Hosting Dept, Bus owner, Applications hosted, Functionality and comments).
+
+I have just started to build the code using Python and NLTK.
+
+Is this an NLP problem? Am I going in the right direction using Python and NLTK for NLP?
+
+Any insights are appreciated.
+
+-Mani
+"
+"['reinforcement-learning', 'deep-rl', 'environment', 'action-spaces']"," Title: How should I design the action space of an agent that needs to choose a 2d point and then shoot a cannonball?Body: I'm building a game environment (see the picture below) where an agent should position the mouse on the screen (see the coordinates on the upper right corner) and then click to shoot a cannonball. If the goal (left) is hit. The agent gets a reward based on the elapsed time between this strike and the last one. If three shots are missed, the game is done and the environment will reset.
+
+The env is done so far. But now I wonder what the action space should look like. How can I make the agent choose some x and y coordinates? And how can I combine this with a "shoot" action?
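+
+Something like this is what I have sketched so far, though I'm not sure it's the right way to combine the two (the 800x600 screen size is just a placeholder):
+
+import numpy as np
+from gym import spaces
+
+action_space = spaces.Dict({
+    # where to place the mouse cursor before the click
+    'cursor': spaces.Box(low=np.array([0.0, 0.0]),
+                         high=np.array([800.0, 600.0]), dtype=np.float32),
+    # whether to shoot on this step
+    'shoot': spaces.Discrete(2),
+})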
+"
+"['reinforcement-learning', 'generative-model', 'time-series', 'environment']"," Title: Should the RL agent be trained in an environment with real-world data or with a synthetic model?Body: I want to train a reinforcement learning agent in an environment with parameters (for example, the wind speed, sun irradiation, etc.) that change over time. I have recorded a limited amount of data for these time series.
+
+Should the RL agent be trained in an environment that replays the recorded time series over and over, or should I model the time series with a generative model first and train the agent in an environment with these synthetic time series?
+
+On the one hand, I think the RL algorithm will perform better with the synthetic data, because there are more diverse trajectories. On the other hand, I don't really have more data, because it is modelled after the same data the RL algorithm could learn from in the first place.
+
+Are there any papers that elaborate on this topic?
+"
+"['reinforcement-learning', 'markov-decision-process', 'markov-property']"," Title: How is the Markovian property consistent in reinforcement learning based scheduling?Body: In Reinforcement Learning, an MDP model incorporates the Markovian property. A lot of scheduling applications in a lot of disciplines use reinforcement learning (mostly deep RL) to learn scheduling decisions. For example, the paper Learning Scheduling Algorithms for Data Processing Clusters, which is from SIGCOMM 2019, uses Reinforcement Learning for scheduling.
+
+Isn't scheduling a non-Markovian process, or am I missing some points?
+"
+"['long-short-term-memory', 'supervised-learning', 'time-series', 'sequence-modeling', 'graphs']"," Title: Model for supervised sequence classification taskBody: The Problem
+
+I am currently working on a sequence classification problem that I am trying to solve with machine learning.
+The target variable is the current state of a system.
+This target variable follows a repeating pattern (e.g. [00110200033304...]).
+So transitions are only allowed from or to the ""0"" state, if you imagine the system as a state machine.
+The only deviation is the time the system stays in one state (e.g. iteration_1 = [...0220...], iteration_2 = [...02220...]).
+
+My Question
+
+What would be the best choice of (machine learning) model for this task if one wants to optimize for accuracy?
+
+Restrictions
+
+
+- No restrictions regarding time / space complexity of the model
+- No restrictions regarding the type of model
+- The model is only allowed to make wrong classifications in the state transition phases (e.g. true: [011102...], pred: [001102...]) but must not violate the sequence logic (e.g. true: [011102...], pred: [010102...])
+
+
+Additional Info / Existing Work
+
+
+- With an LSTM neural network (many-to-1) I achieved an overall accuracy of 97% on an unseen test set.
+Unfortunately, the network predicted sequences which violate the sequence logic
+(e.g. true: [011102...], predicted: [010102...]) even though the window length was wide enough to cover at least 3 state transitions.
+- With simple classification models (only one time step per classification; tested models: feed-forward neural network, xgboost / adaboost) an accuracy of ca. 70% is reachable
+- The input signal is acoustic emission in the frequency domain; Ca. 100 frequency bins / 100 features
+
+
+Ideas
+
+
+- Maybe the LSTM would work better in a ""many to many"" design with a drastically reduced input dimensionality and an increased window size?
+- Maybe a combination of the probability output of the lstm with
+a timed Automaton (a state machine with time dependent probability density functions
+about the state changes) or a Markov chain model could significantly improve the result?
+(But this seems really inelegant)
+- Is it eventually possible to impose the restriction of valid sequences onto the lstm model?
+
+"
+"['neural-networks', 'backpropagation', 'relu', 'mean-squared-error']"," Title: How to determine the target value when using ReLU as activation function?Body: Consider the following simple neural network with only one neuron.
+
+
+- The inputs are $x_1$ and $x_2$, where $-250 < x_1 < 250$ and $-250 < x_2 < 250$
+- The weights of the only neuron are $w_1$ and $w_2$
+- The output of the neuron is given by $o = \sigma(x_1w_1 + x_2w_2 + b)$, where $\sigma$ is the ReLU activation function and $b$ the bias.
+- Thus the cost should be $(o - y)^2$.
+
+
+When using the sigmoid activation function, the target for each point is usually $0$ or $1$.
+
+But I'm a little confused which target to use when the activation function is the ReLU, given that it can output numbers greater than 1.
+"
+"['machine-learning', 'natural-language-processing', 'python', 'algorithm']"," Title: What algorithm to use for finding artists/bands in text and differentiating between artists that share the same nameBody: Here's the data I have:
+
+
+- Text from articles from various music blogs & music news sites (title, summary, full content, and sometimes tags).
+- I used a couple different NLP/NER tools (nltk, spacy, and stanford NER) to determine the proper nouns in the text, and gave each proper noun a score based on how many times it appeared, and how many NLP tools recognized it as a proper noun. None of these tools are very accurate by themselves for my data
+- For each proper noun I queried musicbrainz to find artists with that name. (musicbrainz has a lot of data that may be helpful: aliases, discography, associations with other artists)
+- Any links in the article to Spotify, YouTube etc. and the song name & artist for that link
+
+
+I have three goals:
+
+
+- Determine which proper nouns are artists
+- For artists that share the same name, determine which one the text is referring to (based on musicbrainz data)
+- Determine if the artist is important to the article, or if they were just briefly mentioned
+
+
+I have manually tagged some of the data with the correct output for the above 3 goals.
+
+How would you go about this? Which algorithms do you think would be best for these goals?
+Is there any semi-supervised learning I can do to reduce the amount of tagging I need to do?
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'computer-vision', 'datasets']"," Title: Why is the validation performance better than the training performance?Body: I am training a classifier to identify 24 hand signs of American Sign Language. I created a custom dataset by recording videos in different backgrounds for each of the signs and later converted the videos into images. Each sign has 3000 images, that were randomly selected to generate a training dataset with 2400 images/sign and validation dataset with the remaining 600 images/sign.
+
+
+- Total number of images in entire dataset: 3000 * 24 = 72000
+- Training dataset: 2400 * 24 = 57600
+- Validation dataset: 600 * 24 = 14400
+- Image dimension (Width x Height): 1280 x 720 pixels
+
+
+The CNN architecture used for training
+
+model = Sequential([
+ Conv2D(32, (3, 3), activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
+ MaxPooling2D(pool_size=(2,2)),
+ Dropout(0.25),
+
+ Conv2D(32, (3, 3), activation='relu'),
+ MaxPooling2D(pool_size=(2,2)),
+ Dropout(0.25),
+
+ Conv2D(64, (3, 3), activation='relu'),
+ MaxPooling2D(pool_size=(2,2)),
+ Dropout(0.25),
+
+ Conv2D(64, (3, 3), activation='relu'),
+ MaxPooling2D(pool_size=(2,2)),
+ Dropout(0.25),
+
+ Flatten(),
+ Dense(128, activation='relu'),
+ Dropout(0.25),
+
+ Dense(NUM_CLASSES, activation='softmax')
+])
+
+
+Training parameters:
+
+IMG_HEIGHT = 224
+IMG_WIDTH = 224
+BATCH_SIZE = 32
+NUM_CLASSES = 24
+train_datagen = ImageDataGenerator(rescale = 1./255,
+ width_shift_range=0.1,
+ height_shift_range=0.1,
+ zoom_range=0.1,
+ fill_mode='constant')
+EPOCHS = 20
+STEPS_PER_EPOCH = TRAIN_TOTAL // BATCH_SIZE
+VALIDATION_STEPS = VALIDATION_TOTAL // BATCH_SIZE
+
+callbacks_list = [
+ tf.keras.callbacks.EarlyStopping(monitor = 'accuracy',
+ min_delta = 0.005,
+ patience = 3),
+ tf.keras.callbacks.ModelCheckpoint(filepath = 'D:\\Models\\HSRS_ThesisDataset_5Mar_1330.h5',
+ monitor= 'val_loss',
+ save_best_only = True)
+]
+
+optimizer = 'adam'
+
+
+The model accuracy and model loss graph is shown in the figure below:
+
+
+
+The results obtained at the end of the training are
+
+
+- Train acc: 0.8000121
+- Val acc: 0.914441
+
+
+I read this article explaining why the validation loss is lower than the training loss. I want to know:
+
+
+- Is it because of the smaller dataset and random shuffling of the images?
+- Is there any way to improve the condition without changing the dataset?
+- Will this have a very detrimental effect on the model performance in real test cases? If not, can I just focus on improving the training accuracy of the overall model?
+
+"
+"['neural-networks', 'classification']"," Title: Is it okay to have wide variations within one of the classes for binary classification tasks?Body: Say I am using a convolutional network to classify pictures of my face versus anyone else's face in the world.
+
+So let's take 10000 pictures of me, and 10000 pictures of other people.
+
+And let's do three experiments where we train a binary classifier:
+
+1) The 10000 ""other"" pictures are of 1 other person.
+
+2) The 10000 ""other"" pictures are of ~10 other people (approximately balanced, so about 1000 pictures per person).
+
+3) The 10000 ""other"" pictures are of ~10000 other people.
+
+I only have one question but here are some different perspectives on it:
+
+
+- Are any of these cases categorically harder to solve than the others?
+- Are they the same difficulty, or close?
+- Are there known considerations to make when tuning the model for each
+of the cases? (like maybe case (3) has a sharper minimum in the loss
+function than (1) so we need to use a different optimisation approach)
+
+"
+"['objective-functions', 'gradient-descent', 'papers', 'support-vector-machine', 'adversarial-ml']"," Title: How do you perform a gradient based adversarial attack on an SVM based model?Body: I have an SVM currently and want to perform a gradient based attack on it similar to FGSM discussed in Explaining And Harnessing
+Adversarial Examples.
+
+
+
+I am struggling to actually calculate the gradient of the SVM cost function with respect to the input (I am assuming it needs to be w.r.t input).
+
+Is there a way to avoid the maths (I am working in python if that helps?)
+"
+"['machine-learning', 'tensorflow', 'python', 'keras']"," Title: Neural Network Results always the sameBody: I have a GRU model which has 12 features as inputs and I'm trying to predict output power. I really do not understand though whether I choose
+
+
+- 1 layer or 5 layers
+- 50 neurons or 512 neuron
+- 10 epochs with a small batch size or 100 eopochs with a large batch size
+- Different optimizers and activation functions
+- Dropput and L2 regurlarization
+- Adding more dense layer.
+- Increasing and Decreasing learning rate
+
+
+My results are always the same and don't make any sense: the loss and val_loss are very steep in the first 2 epochs and then stay essentially constant for the rest, with small fluctuations in val_loss.
+
+Here is my code and a figure of losses, and my dataframes if needed:
+
+Dataframe1: https://drive.google.com/file/d/1I6QAU47S5360IyIdH2hpczQeRo9Q1Gcg/view
+Dataframe2: https://drive.google.com/file/d/1EzG4TVck_vlh0zO7XovxmqFhp2uDGmSM/view
+
+import pandas as pd
+import tensorflow as tf
+import matplotlib.pyplot as plt
+from sklearn.model_selection import train_test_split
+from sklearn.preprocessing import MinMaxScaler
+from google.colab import files
+from tensorboardcolab import TensorBoardColab, TensorBoardColabCallback
+tbc=TensorBoardColab() # Tensorboard
+from keras.layers.core import Dense
+from keras.layers.recurrent import GRU
+from keras.models import Sequential
+from keras.callbacks import EarlyStopping
+from keras import regularizers
+from keras.layers import Dropout
+
+
+
+
+
+df10=pd.read_csv('/content/drive/My Drive/Isolation Forest/IF 10 PERCENT.csv',index_col=None)
+df2_10= pd.read_csv('/content/drive/My Drive/2019 Dataframe/2019 10minutes IF 10 PERCENT.csv',index_col=None)
+
+X10_train= df10[['WindSpeed_mps','AmbTemp_DegC','RotorSpeed_rpm','RotorSpeedAve','NacelleOrientation_Deg','MeasuredYawError','Pitch_Deg','WindSpeed1','WindSpeed2','WindSpeed3','GeneratorTemperature_DegC','GearBoxTemperature_DegC']]
+X10_train=X10_train.values
+
+y10_train= df10['Power_kW']
+y10_train=y10_train.values
+
+X10_test= df2_10[['WindSpeed_mps','AmbTemp_DegC','RotorSpeed_rpm','RotorSpeedAve','NacelleOrientation_Deg','MeasuredYawError','Pitch_Deg','WindSpeed1','WindSpeed2','WindSpeed3','GeneratorTemperature_DegC','GearBoxTemperature_DegC']]
+X10_test=X10_test.values
+
+y10_test= df2_10['Power_kW']
+y10_test=y10_test.values
+
+
+
+
+# scaling values for model
+
+
+x_scale = MinMaxScaler()
+y_scale = MinMaxScaler()
+
+X10_train= x_scale.fit_transform(X10_train)
+y10_train= y_scale.fit_transform(y10_train.reshape(-1,1))
+X10_test= x_scale.fit_transform(X10_test)
+y10_test= y_scale.fit_transform(y10_test.reshape(-1,1))
+
+
+X10_train = X10_train.reshape((-1,1,12))
+X10_test = X10_test.reshape((-1,1,12))
+
+
+
+Early_Stop=EarlyStopping(monitor='val_loss', patience=3 , mode='min',restore_best_weights=True)
+
+
+
+# creating model using Keras
+model10 = Sequential()
+model10.add(GRU(units=200, return_sequences=True, input_shape=(1,12),activity_regularizer=regularizers.l2(0.0001)))
+model10.add(GRU(units=100, return_sequences=True))
+model10.add(GRU(units=50))
+#model10.add(GRU(units=30))
+model10.add(Dense(units=1, activation='linear'))
+model10.compile(loss=['mse'], optimizer='adam',metrics=['mse'])
+model10.summary()
+
+history10=model10.fit(X10_train, y10_train, batch_size=1500,epochs=100,validation_split=0.1, verbose=1, callbacks=[TensorBoardColabCallback(tbc),Early_Stop])
+
+
+score = model10.evaluate(X10_test, y10_test)
+print('Score: {}'.format(score))
+
+
+
+y10_predicted = model10.predict(X10_test)
+y10_predicted = y_scale.inverse_transform(y10_predicted)
+
+y10_test = y_scale.inverse_transform(y10_test)
+
+
+plt.scatter( df2_10['WindSpeed_mps'], y10_test, label='Measurements',s=1)
+plt.scatter( df2_10['WindSpeed_mps'], y10_predicted, label='Predicted',s=1)
+plt.legend()
+plt.savefig('/content/drive/My Drive/Figures/we move on curve6 IF10.png')
+plt.show()
+
+
+
+"
+['convolutional-neural-networks']," Title: Understanding CNN in a few sentencesBody: I don't know if this is the right place to ask this question. If it is not, please tell me and I'll remove it.
+
+I've just started to learn CNN and I'm trying to understand what they do and how they do it.
+
+I have written some sentences to describe it:
+
+
+- Let's say that CNN is a mathematical function that adjusts its values based on the result obtained and the desired output.
+
+
+- The values are the filters of the convolutional layers (in other
+types of networks would be the weights).
+- To adjust these values there is a backpropagation method as in all networks.
+
+- The result obtained is an image of the same size as the original.
+- In this image you can see the delimited area.
+- The goal of the network is to learn these filters.
+- The overfitting may be because the network has learned where the pixels you are looking for are located.
+- The filters have as input a pixel of the image and return a 1 or a 0.
+
+
+My doubt is:
+
+In your own words, Have I forgotten something?
+
+NOTE:
+
+This is only one question. The six points above are affirmative sentences, not questions.
+
+There is only one question mark, and it is on my question.
+
+I have to clarify this because someone closed my previous question because they thought there was more than one question.
+"
+"['deep-rl', 'control-problem', 'ddpg']"," Title: Continuous control with DDPG: How to eliminate steady state error?Body: Currently I'm working on a continuous control problem using DDPG as my RL algorithm. All in all, things are working out quite well, but the algorithm does not show any tendencies to eliminate the steady state control deviation towards the far end of the episode.
+
+In the graphs you can see what happens:
+
+In the first graph we see the setpoint in yellow and the controlled continuous parameter in purple. In the beginning, the algorithm brings the controlled parameter close to the setpoint fast, but then it ceases its further efforts and does not try to eliminate the remaining steady state error. This control deviation even increases over time.
+
+
+
+In the second graph, the actual reward is depicted in yellow. (Just ignore the other colors.) I use the normalized control deviation to calculate the reward: $r = \frac{1}{1+\frac{|dev|}{k}}$.
+
+This gives me a reward that lies within the interval $]0, 1]$ and has a value of $0.5$ when the deviation $dev$ equals the parameter $k$. (That is the parameter $k$ kind of indicates when half of the work is done)
+
+This reward function is relatively steep for the last fraction of the deviation from $k$ to $0$. So it would definitely be worth the effort for the agent to eliminate the residual deviation.
+
+However, it looks like the agent is happy with the existing state and the control deviation never gets eliminated, even though the reward is stuck at ~0.85 instead of the maximum achievable 1.
+
+
+Any ideas how to push the agent into some more effort to eliminate the steady state error?
+(A PID controller would exactly do this by using its I-term. How can I translate this to the RL-algo?)
+
+The state presented to the algo consists of the current deviation and the speed of change (derivative) of the controlled value. The deviation is not included in the calculation of the reward function, but in the end we want a flat line with no steady state deviation of course.
+
+Any ideas welcome!
+
+Regards,
+Felix
+"
+"['long-short-term-memory', 'computational-learning-theory', 'sequence-modeling', 'vc-dimension']"," Title: How does the number of stacked LSTM layers or units in each layer affect the model complexity?Body: I playing around sequence modeling to forecast the weather using LSTM.
+
+How does the number of layers or units in each layer exactly affect the model complexity (in an LSTM)? For example, if I increase the number of layers and decrease the number of units, how will the model complexity be affected?
+
+I am not interested in rules of thumb for choosing the number of layers or units. I am interested in theoretical guarantees or bounds.
+"
+"['reinforcement-learning', 'comparison', 'model-based-methods', 'model-free-methods', 'policy-iteration']"," Title: How can the policy iteration algorithm be model-free if it uses the transition probabilities?Body: I'm actually trying to understand the policy iteration in the context of RL. I read an article presenting it and, at some point, a pseudo-code of the algorithm is given :
+
+What I can't understand is this line :
+
+
+
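+The line in question is, as far as I can tell, the standard iterative policy evaluation backup (reconstructed here, since the screenshot is not reproduced):
+
+$$V(s) \leftarrow \sum_{s', r} p(s', r \mid s, \pi(s)) \left[ r + \gamma V(s') \right]$$
+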
+From what I understand, policy iteration is a model-free algorithm, which means that it doesn't need to know the environment's dynamics. But, in this line, we need $p(s',r \mid s, \pi(s))$ (which, in my understanding, is the transition function of the MDP that gives us the probability of landing in state $s'$ given the previous state $s$ and the action taken) to compute $V(s)$. So I don't understand how we can compute $V(s)$ with the quantity $p(s',r \mid s, \pi(s))$, since it is a parameter of the environment.
+"
+"['neural-networks', 'deep-learning', 'reference-request', 'applications']"," Title: What are some well-known problems where neural networks don't do very well?Body: Background: It's well-known that neural networks offer great performance across a large number of tasks, and this is largely a consequence of their universal approximation capabilities. However, in this post I'm curious about the opposite:
+Question: Namely, what are some well-known cases, problems or real-world applications where neural networks don't do very well?
+
+Specification:
+I'm looking for specific regression tasks (with accessible data-sets) where neural networks are not the state-of-the-art. The regression task should be "naturally suitable", so no sequential or time-dependent data (in which case an RNN or reservoir computer would be more natural).
+"
+"['philosophy', 'cognitive-science', 'chinese-room-argument', 'cognitive-architecture', 'soar']"," Title: Is the Cognitive Approach (SOAR) equivalent to the Chinese Room argument?Body: Soar is a cognitive architecture.
+
+
+
+There is something called ""the Chinese box"" or ""Chinese room"" argument:
+
+
+
+The ""Chinese room"" seems to be begging its question, but that is not what I am asking. I am asking if there is any literal difference between a tool like ""SOAR"" and the formalism of the ""Chinese box"". Is SOAR identical or equivalent to a ""Chinese Box""?
+"
+"['neural-networks', 'applications', 'reference-request']"," Title: Medical diagnosis systems based on artificial neural networksBody: Are there any medical diagnosis systems that are already used somewhere that are based on artificial neural networks?
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'sparse-rewards', 'hindsight-experience-replay']"," Title: How does the optimization process in hindsight experience replay exactly work?Body: I was reading the following research paper Hindsight Experience Replay. This is the paper that introduces a concept called Hindsight Experience Replay (HER), which basically attempts to alleviate the infamous sparse reward problem. It is based on the intuition that human beings constantly try and learn something useful even from their past failed experiences.
+I have almost completely understood the concept. But in the algorithm posited in the paper, I don't really understand how the optimization works. Once the fictitious trajectories are added, we have a state-goal-action dependency. This means our DQN should predict Q-Values based on an input state and the goal we're pursuing (The paper mentions how HER is extremely useful for multi-RL as well).
+Does this mean I need to add another input feature (goal) to my DQN? An input state and an input goal, as two input features to my DQN, which is basically a CNN?
+Because in the optimization step they have mentioned that we need to randomly sample trajectories from the replay buffer and use those for computing the gradients. It wouldn't make sense to compute the Q-Values without the goal now, because then we'd wind up with duplicate values.
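+In code terms, I imagine (this is just my own sketch, not something from the paper) that the goal would simply be concatenated to the state before being fed to the Q-network, something like:
+
+import numpy as np
+
+def q_network_input(state, goal):
+    # one Q-network, but its input layer is widened to take state and goal together
+    return np.concatenate([state.ravel(), goal.ravel()])
+
+state = np.random.rand(16)   # hypothetical flattened observation
+goal = np.random.rand(4)     # hypothetical goal vector
+x = q_network_input(state, goal)   # shape (20,), fed to the network's input layer
+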
+Could someone help me understand how exactly does the optimization take place here?
+I am training Atari's "Montezuma's Revenge" using a double DQN with Hindsight Experience Replay (HER).
+"
+"['philosophy', 'agi', 'definitions', 'superintelligence', 'singularity']"," Title: What is the idea called involving an AI that will eventually rule humanity?Body: It's an idea I heard a while back but couldn't remember the name of. It involves the existence and development of an AI that will eventually rule the world and that if you don't fund or progress the AI then it will see you as ""hostile"" and kill you. Also, by knowing about this concept, it essentially makes you a candidate for such consideration, as people who didn't know about it won't understand to progress such an AI. From my understanding, this idea isn't taken that seriously, but I'm curious to know the name nonetheless.
+"
+"['deep-learning', 'convolutional-neural-networks', 'classification', 'keras', 'overfitting']"," Title: How do I deal with an erratic validation set loss when the loss on my training set is monotonically decreasing?Body:
+
+Model used
+mobilenet_model = MobileNet(input_shape=in_dim, include_top=False, pooling='avg', weights='imagenet')
+mob_x = Dropout(0.75)(mobilenet_model.output)
+mob_x = Dense(2, activation='sigmoid')(mob_x)
+
+model = Model(mobilenet_model.input, mob_x)
+
+for layer in model.layers[:50]:
+ layer.trainable=False
+
+for layer in model.layers[50:]:
+ layer.trainable=True
+
+model.summary()
+
+The rest of the code
+in_dim = (224,224,3)
+batch_size = 64
+samples_per_epoch = 1000
+validation_steps = 300
+nb_filters1 = 32
+nb_filters2 = 64
+conv1_size = 3
+conv2_size = 2
+pool_size = 2
+epochs = 20
+classes_num = 2
+lr = 0.000004
+train_datagen = ImageDataGenerator(
+ rescale=1. / 255,
+ shear_range=0.2,
+ zoom_range=0.2,
+ horizontal_flip=True)
+test_datagen = ImageDataGenerator(rescale=1./255)
+train_generator = train_datagen.flow_from_directory(
+ 'output/train', # this is the target directory
+ target_size= in_dim[0:2], # all images will be resized to 224*224
+ batch_size=batch_size,
+ class_mode='categorical')
+#Found 6062 images belonging to 2 classes.
+validation_generator = test_datagen.flow_from_directory(
+ 'output/val',
+ target_size=in_dim[0:2],
+ batch_size=batch_size,
+ class_mode='categorical')
+#Found 769 images belonging to 2 classes.
+from keras.callbacks import EarlyStopping
+#set early stopping monitor so the model stops training when it won't improve anymore
+early_stopping_monitor = EarlyStopping(patience=3)
+steps_per_epoch = 10
+from keras import backend as K
+
+def recall_m(y_true, y_pred):
+ true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
+ possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
+ recall = true_positives / (possible_positives + K.epsilon())
+ return recall
+
+def precision_m(y_true, y_pred):
+ true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
+ predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
+ precision = true_positives / (predicted_positives + K.epsilon())
+ return precision
+
+def f1_m(y_true, y_pred):
+ precision = precision_m(y_true, y_pred)
+ recall = recall_m(y_true, y_pred)
+ return 2*((precision*recall)/(precision+recall+K.epsilon()))
+model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc',f1_m,precision_m, recall_m])
+
+
+history = model.fit_generator(
+ train_generator,
+ steps_per_epoch=2000// batch_size ,
+ epochs=50,
+ validation_data=validation_generator,
+ validation_steps=800// batch_size,
+ callbacks = [early_stopping_monitor],
+ )
+test_generator = train_datagen.flow_from_directory(
+ 'output/test',
+ target_size=in_dim[0:2],
+ batch_size=batch_size,
+ class_mode='categorical')
+loss, accuracy, f1_score, precision, recall = model.evaluate(test_generator)
+print("The test set accuracy is ", accuracy)
+#The test set accuracy is 0.9001349538122272
+
+From what I have gathered from this post and this article, I understand that the validation set is much smaller with respect to the training set. I have applied augmentation to the test set due to this and that boosted test set accuracy by 1%.
+Please note that the test train split is "stratified" as here is a breakdown of each individual class
+in test/train/validation folders
+Test: Class 0: 7426
+ Class 1: 631
+Train: Class 0: 928
+ Class 1: 80
+Val: Class 0: 928
+ Class 1: 79
+
+I have used an 80/10/10 split for train/test/val respectively.
+Can someone guide me on what to do so that I can ensure the accuracy is 95%+ and the validation loss graph is less erratic?
+
+- I am thinking of tuning the learning rate though it doesn't seem to be working by much.
+- Another suggestion is to use test time augmentation.
+- Also, the link on fast.ai has a comment like so
+
+
+That is also part of the reasons why a weighted ensemble of different
+performing epoch models will usually perform better than the best
+performing model on your validation dataset. Sometimes choosing the
+best model or the best ensemble to generalize well isn’t as easy as
+selecting the lower loss/higher accuracy model.
+4. Should I use L2 regularization in addition to the current dropout?
+
+Applying augmentation of any kind to the validation set is a strict no-no and the dataset is generated by my company which I cannot get more of.
+"
+"['machine-learning', 'training', 'keras', 'overfitting', 'underfitting']"," Title: Is my GRU model under-fitting given this plot of the training and validation loss?Body: I was running my gated recurrent unit (GRU) model. I wanted to get an opinion if my loss and validation loss graph is good or not, since I'm new to this and don't really know if that is considered underfitting or not
+
+
+
+
+"
+"['ai-design', 'hill-climbing', 'local-search']"," Title: How can I solve the zero subset sum problem with hill climbing?Body: I want to solve the zero subset sum problem with the hill-climbing algorithm, but I am not sure I found a good state space for this.
+
+Here is the problem: consider we have a set of numbers and we want to find a subset of this set such that the sum of the elements in this subset is zero.
+
+My own idea to solve this by hill-climbing is that in the first step, we can choose a random subset of the set (for example, the main set is $X= \{X_1,\dots,X_n\}$ and we chose $X'=\{X_{i_1},\dots,X_{i_k}\}$ randomly), then the children of this state can be built by adding an element from $X-X'$ to $X'$ or deleting an element from $X'$ itself. This means that each state has $n$ children. and the objective function could be the sum of the elements in $X'$ that we want to minimize.
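+
+A minimal sketch of this modeling (assuming the objective is the absolute value of the subset sum, so that it is minimized exactly when the sum is zero; greedy hill climbing, no restarts):
+
+import random
+
+X = [8, -5, 3, -9, 14, -2, 6]   # example instance
+
+def objective(subset):
+    return abs(sum(X[i] for i in subset))
+
+# start from a random subset, represented as a set of indices into X
+current = {i for i in range(len(X)) if random.random() < 0.5}
+
+while True:
+    # neighbours: toggle one element into or out of the subset (n children per state)
+    neighbours = [current ^ {i} for i in range(len(X))]
+    best = min(neighbours, key=objective)
+    if objective(best) >= objective(current):
+        break   # local optimum reached (not necessarily a zero-sum subset)
+    current = best
+
+print(sorted(current), objective(current))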
+
+Is this a good modeling? Are there better modelings or objective functions that can work more intelligently?
+"
+"['neural-networks', 'python', 'neat']"," Title: I created a snake game and fitted the NEAT algorithm and there's issuesBody: Below are my Inputs Outputs and fitness function. The snake is learning at a slow rate, and seems to be stagnant, additionally when the snake collides with the food, it gets deleted from the genome, which doesn't make any sense because that's not specified in the collision. Any input would be greatly appreciated
+
+for x, s in enumerate(snakes):
+ # inserting new snake head and deleting the tale for movement
+
+ # inputs
+ s.x = s.snake_head[0]
+ s.y = s.snake_head[1]
+ snakeheadBottomDis = win_h - s.y
+ snakeheadRightDis = win_w - s.x
+ snake_length = len(s.snake_position)
+ snakefoodDistEuclidean = math.sqrt((s.x - food.x) ** 2 + (s.y - food.y) ** 2)
+ snakefoodDisManhattan = abs(s.x - food.x) + abs(s.y - food.y)
+ xdis = s.Xdis()
+ ydis = s.Ydis()
+ s.dis_list1.append(snakefoodDistEuclidean)
+ s.dis_list2.append(snakefoodDisManhattan)
+ s.dis_list3.append(s.Xdis())
+ s.dis_list4.append(s.Ydis())
+ s.hunger_list.append(s.hunger)
+ #print('Euclidean: ', dis_list1[-1])
+ #print('Manhattan: ', dis_list2[-1])
+ #print('X distance from Wall: ', dis_list3[-1])
+ #print('Y distance from Wall: ', dis_list4[-1])
+
+ output = nets[snakes.index(s)].activate((s.hunger, s.x, s.y, food.x, food.y, snakeheadBottomDis,
+ snakeheadRightDis, snake_length, xdis,ydis,
+ snakefoodDisManhattan, snakefoodDistEuclidean,s.dis_list1[-1],s.dis_list1[-2],
+ s.dis_list2[-1],s.dis_list2[-2],s.dis_list3[-1],s.dis_list3[-2],
+ s.dis_list4[-1],s.dis_list4[-2],s.hunger_list[-1],s.hunger_list[-2]))
+
+
+ #snake moving animation
+ s.snake_position.insert(0, list(s.snake_head))
+ s.snake_position.pop()
+ s.hunger -= 1
+
+ # Checking distance Euclidean and Manhattan current and last
+ if s.dis_list1[-1] > s.dis_list1[-2]:
+ ge[x].fitness -= 1
+
+ if s.dis_list1[-1] < s.dis_list1[-2]:
+ ge[x].fitness += 0.5
+
+ if s.dis_list1[-1] > s.dis_list2[-2]:
+ ge[x].fitness -= 1
+
+ if s.dis_list1[-1] < s.dis_list2[-2]:
+ ge[x].fitness += 0.5
+
+ #checking hunger number and if its decreasing
+ if s.hunger_list[-1] < s.hunger_list[-2]:
+ ge[x].fitness -= 0.1
+
+ # move right
+ if output[0] >= 0 and output[1] < 0 and output[2] < 0 and output[
+ 3] < 0:
+ #and s.x < win_w - s.width and s.y > 0 + s.height:
+ # ge[x].fitness += 0.5
+ s.move_right()
+
+ # move left
+ if output[1] >= 0 and output[0] < 0 and output[2] < 0 and output[
+ 3] < 0:
+ #and s.x < 500 - s.width and s.y > 0 + s.height:
+ #ge[x].fitness += 0.5
+ s.move_left()
+
+ # move down
+ if output[2] >= 0 and output[1] < 0 and output[0] < 0 and output[
+ 3] < 0:
+ #and s.x < 500 - s.width and s.y > 0 + s.height:
+ # ge[x].fitness += 0.5
+ s.move_down()
+
+ # move up
+ if output[3] >= 0 and output[1] < 0 and output[2] < 0 and output[
+ 3] < 0:
+ #and s.x < 500 - s.width and s.y > 0 + s.height:
+ # ge[x].fitness += 0.5
+ s.move_up()
+
+ #adding more fitness if axis aligns
+ if s.snake_head[0] == food.x:
+ ge[x].fitness += 0.1
+ if s.snake_head[1] == food.x:
+ ge[x].fitness += 0.1
+
+ # checking the activation function tanh
+ # print ('output 0: ', output[0])
+ # print('output 1: ', output[1])
+ # print ('output 2: ', output[1])
+ # print ('output 3: ', output[1])
+
+ # snake poping on other side of screen if screen limit reached
+ if s.snake_head[0] >= win_w - s.width:
+ s.snake_head[0] = 12
+ if s.snake_head[0] <= 11 + s.width:
+ s.snake_head[0] = win_w - s.width - 1
+ if s.snake_head[1] >= win_h - s.height:
+ s.snake_head[1] = s.height + 15
+ if s.snake_head[1] <= 11 + s.height:
+ s.snake_head[1] = win_h - s.height - 1
+
+ head = s.snake_position[0]
+ #s.x < 0 + s.width or s.x > win_w - s.width or s.y < 0 + s.height or \
+ #s.y > win_h - s.height or
+
+ #if run into self you die
+ if head in s.snake_position[1:]:
+ ge[x].fitness -= 10
+ snakes.pop(x)
+ nets.pop(x)
+ ge.pop(x)
+
+ #if hunger reaches 0 you die
+ if s.hunger == 0:
+ ge[x].fitness -= 5
+ snakes.pop(x)
+ nets.pop(x)
+ ge.pop(x)
+
+ #if snake collides with food award fitness
+ if s.getRec().colliderect(food.getRec()):
+ ge[x].fitness += 100
+ s.hunger = 100
+ score += 1
+ s.snake_position.insert(0, list(s.snake_head))
+ food.y = random.randint(0 + 24, 500 - 24)
+ food.x = random.randint(0 + 24, 500 - 24)
+
+ # print(s.hunger)
+
+"
+"['reinforcement-learning', 'gradient-descent', 'policy-gradients', 'proximal-policy-optimization']"," Title: What is the purpose of argmax in the PPO algorithm?Body: I'm kinda new to machine learning and still not too solid on math and particularly calculus. I'm currently trying to implement PPO algorithm as described in the spiningUp website :
+
+
+This line is giving me a hard time :
+
+
+
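+The line in question is, as far as I can tell, the PPO-clip policy update from the Spinning Up page, which reads roughly:
+
+$$\theta_{k+1} = \arg\max_{\theta} \; \underset{s,a \sim \pi_{\theta_k}}{\mathrm{E}} \left[ L(s, a, \theta_k, \theta) \right]$$
+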
+What does the $\operatorname{argmax}$ mean, in this context? They are also talking about updating the policy with a gradient ascent. So, is taking argmax with respect to $\theta$ the same as doing:
+
+
+
+where $J$ is the min() function?
+"
+"['natural-language-processing', 'reference-request', 'models', 'state-of-the-art']"," Title: What is the current state-of-the-art in unsupervised cross-lingual representation learning?Body: What is the current state-of-the-art in unsupervised cross-lingual representation learning?
+"
+"['machine-learning', 'natural-language-processing']"," Title: Why is dialogue a hard problem in natural language processing?Body: I've just started learning natural language processing from Dan Jurafsky's videos lectures. In that video, minute 4:56, he is stating that dialogue is a hard problem in natural language processing (NLP). Why?
+"
+"['classification', 'activation-functions', 'perceptron', 'xor-problem']"," Title: What is the simplest classification problem which cannot be solved by a perceptron?Body: What is the simplest classification problem which cannot be solved by a perceptron (that is a single-layered feed-forward neural network, with no hidden layers and step activation function), but it can be solved by the same network if the activation function is swapped out to a differentiable activation function (e.g. sigmoid, tanh)?
+In the first case, the training would be done with the perceptron training rule, in the second case with the delta rule.
+Note that regression problems cannot be solved by perceptrons, so I'm interested in classification only.
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'hindsight-experience-replay', 'random-network-distillation']"," Title: Is this a good approach to solving Atari's ""Montezuma's Revenge""?Body: I'm new to Reinforcement Learning. For an internship, I am currently training Atari's "Montezuma's Revenge" using a double Deep Q-Network with Hindsight Experience Replay (HER) (see also this article).
+HER is supposed to alleviate the reward sparseness problem. But since the reward is annoyingly too sparse, I have also added a Random Network Distillation (RND) (see also this article) to encourage the agent to explore new states, by giving it a higher reward when it reaches a previously undiscovered state and a lower reward when it reaches a state it has previously visited multiple times. This is the intrinsic reward I add to the extrinsic reward the game itself gives. I have also used a decaying greedy epsilon policy.
+How well should this approach work? Because I've set it to run for 10,000 episodes, and the simulation is quite slow, because of the mini-batch gradient descent step in HER. There are multiple hyperparameters here. Before implementing RND, I considered shaping the reward, but that is just impractical in this case. What can I expect from my current approach? OpenAI's paper on RND cites brilliant results with RND on Montezuma's Revenge. But they obviously used PPO.
+"
+"['natural-language-processing', 'comparison', 'models']"," Title: What are the advantages and disadvantages of extrinsic and perplexity model evaluation in NLP?Body: In the video Evaluation and Perplexity by Dan Jurafsky, the author talks about extrinsic and perplexity evaluation in the context of natural language processing (NLP).
+
+What are the advantages and disadvantages of extrinsic and perplexity model evaluation in NLP? Which evaluation method is preferred usually, and why?
+"
+"['neural-networks', 'convolutional-neural-networks', 'autoencoders', 'feedforward-neural-networks']"," Title: Can denoising auto-encoders be convolutional and fully connected?Body: I have been reading lately on autoencoders a lot. I just wanted to summarize my understanding of denoising autoencoders. As far as I understand they can be
+
+
+- Fully connected (in which case, they will be over-complete autoencoders)
+- Convolutional
+
+
+The reason I say it should be over-complete is that the objective is to learn new features, and I think extra neurons in the latent layer would help. There is no reason to have a smaller number of neurons, because compression is not the objective. I just want to understand whether this is the right thinking.
+"
+"['deep-learning', 'convolutional-neural-networks', 'computer-vision', 'image-processing', 'caffe']"," Title: Is there an efficient way of determining the layers with the best performance as feature extractors in GoogleNet?Body: I am using a caffe model of pre-trained GoogleNet trained on ImageNet from here for image retrieval task (place recognition, more specifically).
+
+
+
+I would like to know the layer with best performance in feature extraction. Its official paper suggests that:
+
+
+ The strong performance of shallower networks on this task suggests
+ that the features produced by the layers in the middle of the network
+ should be very discriminative.
+
+
+There is also a project deepdream suggests that:
+
+
+ The complexity of the details generated depends on which layer's
+ activations we try to maximize. Higher layers produce complex
+ features, while lower ones enhance edges and textures, giving the
+ image an impressionist feeling.
+
+
+Searching the web, I found a github page suggesting pool5/7x7_s1
layer as feature extractor without specific convincing reasons.
+
+What I am doing now is quite cumbersome in which I extract features from each individual layer, apply scipy euclidean distance measurement to find a query in the reference database and the judgment is based on precision-recall
curve and my top 3
results are as follows for one dataset:
+
+
+inception_3a/3x3
+inception_4a/5x5
+inception_4b/output
+
+
+Considering large number of convolutional layers in GoogleNet, my approach is undoubtedly quite inefficient and can be changed to another dataset!
+
+Can anyone suggest an efficient way to figure out the layers with the best performance as feature extractors in GoogleNet?
+"
+"['game-ai', 'reference-request']"," Title: What AI technologies were used in earlier versions of StarCraft?Body: I remember back in the 2000s, it was, and still is possible to play against computers in StarCraft BroodWar. They were not as good as pro players, but still reasonably smart.
+
+What AI technologies were used in earlier versions of StarCraft: Brood War?
+"
+"['neural-networks', 'classification', 'data-preprocessing']"," Title: How should I deal with variable input sizes for a neural network classifier?Body: I am currently working on a project, where I have a sensor in a shoe that records the $X, Y, Z$ axes, from an acceleration and gyroscope sensor. Every millisecond, I get 6 data points. Now, the goal is, if I do an action, such a jumping or kicking, I would use the sensor's output to predict that action being done.
+
+The issue: If I jump, for example, one time I may get 1000 data points, but, in another, I may get 1200, meaning the size of the input is different.
+
+The neural networks I've studied so far require the input size to be constant to predict a $Y$ value, however, in this case, it isn't. I've done some research on how to make a neural network with variable sizes, but haven't been able to find one which works. It's not a good idea to crop the input to a certain size, because then I am losing data. In addition, if I just resize the smaller trials by putting extra $0$s, it skews the model.
+
+Any suggestions on a model that would work or how to better clean the data?
+"
+"['reinforcement-learning', 'markov-decision-process', 'implementation', 'convergence', 'policy-evaluation']"," Title: Why isn't the implementation of my policy evaluation for a simple MDP converging?Body: I am trying to code out a policy evaluation algorithm to find the $V^\pi(s)$ for all states. The following diagram below shows the MDP.
+
+
+
+In this case i let p = q = 0.5.
+The rewards for each state are independent of the action, i.e. $r(\sigma_0)$ = $r(\sigma_2)$ = 0, $r(\sigma_1)$ = 1, $r(\sigma_3)$ = 10. The terminal state is $\sigma_3$.
+
+I have the following policy, {0:1, 1:0, 2:0}, where key is the state and value is the action. 0 for $a_0$ and 1 for $a_1$.
+
+#Policy Iteration solver for FUN
+class PolicyEvaluation:
+ def __init__(self, policies):
+ self.N = 3
+ self.pi = policies
+ self.actions = [0, 1] # a0 and a1
+ self.discount = 0.7
+ self.states = [i for i in range(self.N + 1)]
+
+
+ def terminalState(self, state):
+ return state == 3
+
+ # assume p = q = 0.5
+ def succProbReward(self, state):
+ # (newState, probability, reward)
+ spr_list = []
+ if (state == 0 and self.pi[state] == 0):
+ spr_list.append([1, 1.0, 1])
+ elif (state == 0 and self.pi[state] == 1):
+ spr_list.append([2, 1.0, 0])
+ elif (state == 1 and self.pi[state] == 0):
+ spr_list.append([2, 0.5, 0])
+ spr_list.append([0, 0.5, 0])
+ elif (state == 2 and self.pi[state] == 0):
+ spr_list.append([1, 1.0, 0])
+ elif (state == 2 and self.pi[state] == 1):
+ spr_list.append([3, 0.5, 10])
+ spr_list.append([2, 0.5, 0])
+ return spr_list
+
+
+def policyEvaluation(mdp):
+ # initialize
+ V = {}
+ for state in mdp.states:
+ V[state] = 0
+
+ def V_pi(state):
+ return sum(prob * (reward + mdp.discount*V[newState]) for prob, reward, newState in
+ mdp.succProbReward(state))
+
+ while True:
+ # compute new values (newV) given old values (V)
+ newV = {}
+ for state in mdp.states:
+ if mdp.terminalState(state):
+ newV[state] = 0
+ else:
+ newV[state] = V_pi(state)
+
+ if max(abs(V[state] - newV[state]) for state in mdp.states) < 1e-10:
+ break
+ V = newV
+ print(V)
+ print(V)
+
+
+
+pE = PolicyEvaluation({0:1, 1:0, 2:0})
+print(pE.states)
+print(pE.succProbReward(0))
+policyEvaluation(pE)
+
+
+I've tried to run the code above to find the values for each state, however, I am not converging with my values.
+
+Is there something wrong that I did?
+"
+"['reinforcement-learning', 'markov-decision-process']"," Title: How to draw backup diagram when reward is in expectation but next state is iterated?Body: I am working through Sutton and Barto's RL book. So far in the text, when backup diagrams are drawn, the reward and next state are iterated together (i.e. the equations always have $\sum_{s',r}$), because the text uses the four-place function $p(s',r|s,a)$. Starting from a solid circle (state-action pair), each edge has a reward labeled along the edge and the next state labeled on the open circle. (See page 59 for an example diagram, or see Figure 3.4 here.)
+
+However, exercise 3.29 asks to rewrite the Bellman equations in terms of $p(s'|s,a)$ and $r(s,a)$. This means that the reward is an expected value (i.e. we don't want to iterate over rewards like $\sum_r \cdots (r + \cdots)$), whereas the next states should be iterated (i.e. we want something like $\sum_{s'} p(s'|s,a) (\cdots)$).
+
+I think writing the Bellman equations themselves isn't too difficult; my current guess is that they look like this: $$v_\pi(s) = \sum_a \pi(a|s) \left(r(s,a) + \gamma \sum_{s'} p(s'|s,a) v_\pi(s')\right)$$
+
+$$q_\pi(s,a) = r(s,a) + \gamma \sum_{s'} p(s'|s,a) \sum_{a'} \pi(a'|s') q_\pi(s',a')$$
+
+My problem instead is that I want to be able to draw the backup diagrams corresponding to these equations. Given the ""vocabulary"" for backup diagrams given in the book (e.g. solid circle = state-action pair, open circle = state, rewards along the edge, probabilities below nodes, maxing over denoted by an arc), I don't know how to represent the fact that the reward and next state are treated differently. Two ideas that don't seem to work:
+
+
+- If I draw a bunch of edges after the solid circle, that looks like I'm iterating over rewards.
+- If I come up with a special kind of edge that represents an expected reward, then it looks like only a single next state is being considered.
+
+"
+"['natural-language-processing', 'comparison', 'word2vec', 'cbow', 'skip-gram']"," Title: What are the main differences between skip-gram and continuous bag of words?Body: The skip-gram and continuous bag of words (CBOW) are two different types of word2vec models.
+What are the main differences between them? What are the pros and cons of both methods?
+"
+"['machine-learning', 'classification', 'computer-vision', 'terminology', 'regression']"," Title: What is the type of problem requiring to rate images on a scale?Body: I'm new to the topic, but I've used some off the shelf knowledge about computer vision for classifying images.
+
+For example, you can easily generate labels that can determine whether or not e.g. a cloud is in the image. However, what is the general type of problem called where you want to assign a value, or rate the image on a scale - in this example, the degree of cloudiness in the image?
+
+What are useful algorithms or techniques for addressing this type of problem?
+"
+"['reinforcement-learning', 'dqn', 'papers', 'ddpg', 'sample-efficiency']"," Title: Why are reinforcement learning methods sample inefficient?Body: Reinforcement learning methods are considered to be extremely sample inefficient.
+For example, in a recent DeepMind paper by Hessel et al., they showed that in order to reach human-level performance on an Atari game running at 60 frames per second they needed to run 18 million frames, which corresponds to 83 hours of play experience. For an Atari game that most humans pick up within a few minutes, this is a lot of time.
+What makes DQN, DDPG, and others, so sample inefficient?
+"
+"['neural-networks', 'deep-learning', 'papers', 'multilayer-perceptrons', 'multi-label-classification']"," Title: Are the labels updated during training in the algorithm presented in ""An algorithm for correcting mislabeled data""?Body: I am trying to understand an algorithm for correcting mislabeled data in the paper An algorithm for correcting mislabeled data (2001) by Xinchuan Zeng et al. The authors are suggesting to update the output class probability vector using the formula in equation 4 and class label in equation 5.
+
+I am wondering:
+
+
+- Are they updating labels while training, starting from very first back-propagation?
+- It seems like if we train on the same data and then predict labels on the same data, it would be the same as what the authors are suggesting. Does it make sense or I misunderstood?
+
+"
+"['math', 'gaussian-process']"," Title: Interpretation of inverse matrix in mean calculation in Gaussian ProcessBody: The formula for mean prediction using Gaussian Process is $k(x_*, x)k(x, x)^{-1}y$, where $k$ is the covariance function. See e.g. equation 2.23 (in chapter 2) from Gaussian Processes for Machine Learning (2006) by C. E. Rasmussen & C. K. I. Williams.
+
+Oversimplifying, the mean prediction of the new point $y_*$ is the weighted average of previously observed $y$, where the weights are calculated by the $k(x_*,x)$ and normalized by $k(x,x)^{-1}$.
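+
+A small sketch with an RBF kernel may make the formula concrete (the data and length-scale below are arbitrary, purely illustrative):
+
+import numpy as np
+
+def rbf(a, b, length_scale=1.0):
+    # squared-exponential covariance k(a, b)
+    d = a[:, None] - b[None, :]
+    return np.exp(-0.5 * (d / length_scale) ** 2)
+
+x = np.array([0.0, 1.0, 2.0])     # observed inputs
+y = np.array([0.5, -0.3, 0.8])    # observed targets
+x_star = np.array([1.5])          # new input
+
+K = rbf(x, x) + 1e-8 * np.eye(len(x))    # k(x, x), with jitter for numerical stability
+K_star = rbf(x_star, x)                  # k(x_*, x)
+mean = K_star @ np.linalg.solve(K, y)    # k(x_*, x) k(x, x)^{-1} y
+print(mean)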
+
+Now, the first part $k(x_*, x)$ is easy to interpret. The closer the new data point lies to the previously observed data points, the greater their similarity, the higher will be the weight and impact on the prediction.
+
+But how to interpret the second part $k(x, x)^{-1}$? I presume this makes the weight of the points in the clusters greater than the outliers. Am I correct?
+"
+"['reinforcement-learning', 'monte-carlo-tree-search', 'monte-carlo-methods']"," Title: How to apply hyperparameter optimization on Monte Carlo Tree Search?Body: I have a basic MCTS agent for the game of Hex (a turn based game). I want to tune the parameters of UCT (the Cp parameter) and the number of rollouts parameter.
+
+Where do I have to begin? The problem is that the agent is smart enough to always win if it plays first against another agent. So I don't know how to do the evaluation of each pair of hyperparameters.
+
+If anyone has any ideas let me know.
+"
+"['gradient-descent', 'proofs', 'learning-rate']"," Title: How to prove that gradient descent doesn't necessarily find the global optimum?Body: How can I prove that gradient descent doesn't necessarily find the global optimum?
+
+For example, consider the following function
+
+$$f(x_1, x_2, x_3, x_4) = (x_1 + 10x_2)^2 + 5x_2^3 + (x_2 + 2x_3)^4 + 3x_1x_4^2$$
+
+Assume also that we can't find the optimal value for the learning rate because of time constraints.
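+
+A quick numerical sketch (not a proof, just a way to experiment with different starting points and learning rates on this particular function):
+
+def f(x1, x2, x3, x4):
+    return (x1 + 10 * x2) ** 2 + 5 * x2 ** 3 + (x2 + 2 * x3) ** 4 + 3 * x1 * x4 ** 2
+
+def grad(x1, x2, x3, x4):
+    # partial derivatives of f, computed by hand
+    g1 = 2 * (x1 + 10 * x2) + 3 * x4 ** 2
+    g2 = 20 * (x1 + 10 * x2) + 15 * x2 ** 2 + 4 * (x2 + 2 * x3) ** 3
+    g3 = 8 * (x2 + 2 * x3) ** 3
+    g4 = 6 * x1 * x4
+    return (g1, g2, g3, g4)
+
+x = (3.0, -1.0, 0.0, 1.0)   # arbitrary starting point
+lr = 1e-3                   # arbitrary (non-optimal) learning rate
+for _ in range(1000):
+    g = grad(*x)
+    x = tuple(xi - lr * gi for xi, gi in zip(x, g))
+print(x, f(*x))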
+"
+"['machine-learning', 'datasets', 'prediction', 'feature-selection', 'healthcare']"," Title: How should I select the features for predicting diseases (in particular when patients specify their health issues)?Body: My aim is to train a model for predicting diseases. Now, according to this Wikipedia article, diseases are classified based on the following criteria in general:
+
+- Causes (of the disease)
+- Pathogenesis (the mechanism by which the disease progresses)
+- Age
+- Gender
+- Symptoms (of the disease)
+- Damage (caused by the disease)
+- Organ type (e.g. heart disease, liver disease, etc.)
+
+Are these features used for predicting diseases universally (i.e. all types of diseases)? I don't think so. There can be other attributes as well. For example, traveling in the case of coronavirus.
+So, are there better features for predicting diseases?
+Or which ones among them are better than the others, when patients specify their health issues?
+"
+"['neural-networks', 'deep-learning', 'papers', 'multilayer-perceptrons', 'multi-label-classification']"," Title: Recent algorithms for correcting mislabeled data using multilayer perceptronsBody: I am doing literature research on algorithms for correcting mislabeled data using multilayer perceptrons. Found an ""old"" paper An algorithm for correcting mislabeled data (2001) by Xinchuan Zeng et al. Please share if you are aware of recent/current updates with a brief thoughts. Thanks in advance.
+"
+"['machine-learning', 'backpropagation', 'gradient-descent', 'activation-functions', 'vanishing-gradient-problem']"," Title: Which activation functions can lead to the vanishing gradient problem?Body: From this video tutorial Vanishing Gradient Tutorial, the sigmoid function and the hyperbolic tangent can produce the vanishing gradient problem.
+
+What other activation functions can lead to the vanishing gradient problem?
+"
+"['convolutional-neural-networks', 'filters', 'convolutional-layers', 'convolution-arithmetic']"," Title: How to calculate the number of parameters of a convolutional layer?Body: I was recently asked at an interview to calculate the number of parameters for a convolutional layer. I am deeply ashamed to admit I didn't know how to do that, even though I've been working and using CNN for years now.
+
+Given a convolutional layer with ten $3 \times 3$ filters and an input of shape $24 \times 24 \times 3$, what is the total number of parameters of this convolutional layer?
+"
+"['neural-networks', 'clustering', 'multi-label-classification']"," Title: Neural network to extract correlated columnsBody: I want to use a neural network to find correlated columns in a .csv
file and give them as a output. The input .csv
file has multiple columns with 0 and 1 ( like Booleans) in it. The file got the assignment from people to interests in it.
+
+Example .csv
input:
+
+UserID History Math Physics Art Music ...
+User1 0 1 1 0 0 ...
+User2 0 0 0 1 1 ...
+User3 0 1 1 1 1 ...
+User4 1 0 1 1 0 ...
+...
+
+
+The output should be in this case something like: {math,physics}, {art,music}, {history,physics,art} - I exclude here {math,physics,art,music} because in a step afterwards i want to exclude (at least some) which can be created through the combination of others.
+
+At the moment, my problem is that I don't know which type of neural network could complete this task. How can I solve this problem?
+
+The important thing is that a column can correlate with more than one other column, so it's not like simple k-means clustering (as far as I understand it).
+"
+"['natural-language-processing', 'reference-request', 'resource-request', 'machine-translation']"," Title: Is there any resource that describes in detail a naive example-based machine translation algorithm?Body: I'm looking to develop a machine translation tool for a constructed language. I think that the example-based approach is the most suitable because the said language is very regular and I can have a sufficient amount of parallel translations.
+I already know the overall idea behind the example-based machine translation (EBMT) approach, but I can't find any resource that describes a naive EBMT algorithm (or model) that would allow me to easily implement it.
+So, I'm looking for either:
+
+- a detailed description,
+- pseudocode or
+- a sufficiently clear open-source project (maybe a GitHub one)
+
+of a naive EBMT algorithm. So, I'm not looking for a software library that implements this, but I'm looking for a resource that explains/describes in detail a naive/simple EBMT algorithm, so that I am able to implement it.
+Note that there are probably dozens of variations of EBMT algorithms. I'm only looking for the most naive/simple one.
+I have already looked at the project Phrase-based Memory-based Machine Translator, but, unfortunately, it is not purely based on examples but also statistical, i.e. it needs an alignment file generated by, for example, Giza++ or Moses.
+"
+"['chess', 'alphago-zero', 'deep-blue']"," Title: How does (or should) AlphaGoZero (which does chess) fare against Deep Blue?Body: Deep blue is good at chess, but is more ""hand-coded"" or ""top-down"".
+https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)
+
+AlphaGoZero is ""self-taught"", and at Go is very much super-human.
+https://en.wikipedia.org/wiki/AlphaGo_Zero
+
+How does AlphaGoZero fare when it goes head-to-head with DeepBlue? Are there indicators like chess ratings?
+"
+"['convolutional-neural-networks', 'object-detection', 'residual-networks']"," Title: Do deeper residual networks perform better or worse?Body: If you have an $18$ layer residual network versus and a $32$ layer residual network, why would the former do better at object detection than the latter, if you have both models are training using the same training data?
+"
+"['long-short-term-memory', 'activation-functions', 'regression', 'forecasting']"," Title: Using sigmoid in LSTM network for multi-step forecastingBody: I'm trying to develop a multistep forecasting model using LSTM Network. The model takes three times steps as input and predicting two time_steps. both input and output columns are normalised using minmax_scalar within the range of 0 and 1.
+
+Please see the below model architecture
+
+Model Architecture
+
+from keras.models import Sequential
+from keras.layers import LSTM, Dense
+
+model = Sequential()
+model.add(LSTM(80,input_shape=(3,1),activation='sigmoid',return_sequences=True))
+model.add(LSTM(20,activation='sigmoid',return_sequences=False))
+model.add(Dense(2))
+
+
+In this case, is using sigmoid as the activation function correct?
+"
+"['reinforcement-learning', 'objective-functions', 'policy-gradients', 'reinforce']"," Title: How to calculate the advantage in policy gradient functions?Body: From my understanding of the REINFORCE policy gradient method, we gently nudge the probabilities of actions based on the advantages. More specifically, the positive advantages increase the probabilities, negative advantages reduce the probabilities.
+
+So, how do we compute the advantages given the real discounted rewards (aggregated rewards from the episode) and a policy network that only outputs the probabilities of actions?
+"
+"['deep-learning', 'computer-vision', 'image-recognition', 'transfer-learning']"," Title: How can I detect the frame from video streaming that contains a graffiti on city wall?Body: I am working on a graffiti detection project. I need to analyze data stream from a camera mounted sideways on a vehicle to identify graffiti on city walls and notify authorities with the single best capture of graffiti and its geolocation, etc.
+
+I am trying to use a ResNet50 model pre-trained on ImageNet using transfer learning for my graffiti image dataset. The classification will be done on an edge device as network connectivity may not be reliable.
+
+Suppose I have a series of frames that have been detected to contain graffiti, as the vehicle goes past it, but I only need to report one image (so not all frames containing graffiti in the series). How can I do that?
+
+Ideally, I would like to report the frame where the camera is perpendicular to the wall. Why perpendicular? I think that images containing the graffiti when the camera is perpendicular to the wall will more clearly show the graffiti.
+"
+"['deep-learning', 'reinforcement-learning', 'dqn', 'deep-rl', 'self-play']"," Title: How to correctly implement self-play with DQN?Body: I have an environment where an agent faces an equal opponent, and while I've achieved OK performance implementing DQN and treating the opponent as a part of the environment, I think performance would improve if the agent trains against itself iteratively. I've seen posts about it, but never detailed implementation notes. My thoughts were to implement the following (agent and opponent are separate networks for now):
+
+
+- Bootstrap agent and opponent with initial weights (either random or trained against CPU, not sure)
+- Use Annealing Epsilon Greedy strategy for N iterations
+- After M iterations (M > N), copy agent network's weights to opponent's network
+- Reset annealing epsilon (i.e. start performing randomly again to explore new opponent)?
+- Repeat steps 2-4
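+
+A rough, self-contained sketch of the loop described above (all names here are stand-ins, not a real DQN implementation):
+
+import random
+
+class TinyAgent:
+    # stand-in for a DQN agent; only the weight copying / exploration bookkeeping matters here
+    def __init__(self):
+        self.weights = [random.random() for _ in range(4)]
+    def get_weights(self):
+        return list(self.weights)
+    def set_weights(self, w):
+        self.weights = list(w)
+    def train_step(self):
+        # placeholder for one gradient update on the replay buffer
+        self.weights = [w + random.uniform(-0.01, 0.01) for w in self.weights]
+
+agent, opponent = TinyAgent(), TinyAgent()
+opponent.set_weights(agent.get_weights())          # step 1: bootstrap with the same weights
+
+N, M, TOTAL = 1_000, 5_000, 20_000                 # toy values for the schedule lengths
+epsilon, steps_since_reset = 1.0, 0
+
+for step in range(TOTAL):
+    # step 2: annealing epsilon-greedy play against the frozen opponent (env interaction omitted)
+    epsilon = max(0.05, 1.0 - steps_since_reset / N)
+    agent.train_step()
+    steps_since_reset += 1
+    if (step + 1) % M == 0:
+        opponent.set_weights(agent.get_weights())  # step 3: refresh the opponent
+        epsilon, steps_since_reset = 1.0, 0        # step 4: reset exploration for the new opponent
+        # optionally also clear the replay buffer here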
+
+
+Would something like this work? Some specific questions are:
+
+
+- Should I ""reset"" my annealing epsilon strategy every time the opponent is updated? I feel like this is needed because the agent needs sufficient time to explore new strategies for this ""new"" opponent.
+- Should the experience replay buffer be cleared out when the opponent is updated? Again, I think this is needed.
+
+
+Any pointers would be appreciated.
+"
+"['reinforcement-learning', 'policy-gradients', 'matlab']"," Title: Policy Gradient Reward Oscillation in MATLABBody: I'm trying to train a Policy Gradient Agent with Baseline for my RL research. I'm using the in-built RL toolbox from MATLAB (https://www.mathworks.com/help/reinforcement-learning/ug/pg-agents.html) and have created my own Environment. The goal is to train the system to sample an underlying time-series ($x$) given battery constrains ($\epsilon$ is battery cost).
+
+The general setup is as follows:
+
+
+- My Environment is a ""sensor"" system with exogenous input time-series and battery level as my States/Observations (size is 13x1).
+- Actions $A_t$ are binary: 0 = keep a model prediction $(\hat x)$; 1 = sampling time series $(x)$
+- Reward function is
+
+
+$$
+R = -[err(\tilde x, x) + A_t\cdot \epsilon ] + (-100)\cdot T_1 + (100) \cdot T_2
+$$
+
+where $err(\tilde x, x)$ is the RMSE error between the sampled time series $(\tilde x)$, and true time series x.
+
+
+- The Terminal State Rewards are -100 if sensor runs out of battery $T_1$ or 100 if reached the end of the episode with RMSE < threshold and remaining battery level $(T_2)$. The goal is to always end in $T_2$.
+- Each training Episode consists of a time-series of random length, and random initial battery level.
+
+
+My current setup is using mostly default RL setups from MALTAB with learning rate of $10^{-4}$ and ADAM optimizer. The training is slow, and shows a lot of Reward oscillation between the two terminal states. MATLAB RL toolbox also outputs a $Q_0$ value which the state is:
+
+
+ Episode Q0 is the estimate of the discounted long-term reward at the
+ start of each episode, given the initial observation of the
+ environment. As training progresses, Episode Q0 should approach the
+ true discounted long-term reward if the critic is well-designed,
+
+
+
+
+Questions
+
+
+- Is my training and episodes too random? i.e., time-series of different lengths and random initial sensor setup.
+- Should I simplify my reward function to be just $T_2$?
+- Why doesn't $Q_0$ change at all?
+
+"
+"['neural-networks', 'deep-learning', 'object-detection', 'data-preprocessing', 'performance']"," Title: Can the addition of low-quality images to the training dataset increase the network performance?Body: I already trained a deep neural network called YOLO (You Only Look Once) with high-quality images (1920 by 1080 pixels) for a detection task. The result for mAP and IOU were 93% and 89% respectively.
+
+I wanted to decrease the quality of my training data set using some available filters, and then use those low-quality images along with the high-quality images to train the network again.
+
+Does this method increase the accuracy (or, in general, performance) of the deep neural network (for a detection task)? Like mAP and IOU?
+
+My data set is vehicle images.
+
+mAP: mean average precision
+
+IOU: intersection over union ( or overlap)
+"
+"['social', 'healthcare', 'applications']"," Title: What are the AI technologies currently used to fight the coronavirus pandemic?Body: The ongoing coronavirus pandemic of coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), as of 29 September 2020, has affected many countries and territories, with more than 33.4 million cases of COVID-19 have been reported and more than 1 million people have died. The live statistics can be found at https://www.worldometers.info/coronavirus/ or in the World Health Organization (WHO) site. Although countries have already started quarantines and have adopted extreme countermeasures (such as closing restaurants or forbidding events with multiple people), the numbers of cases and deaths will probably still increase in the next weeks.
+Given that this pandemic concerns all of us, including people interested in AI, such as myself, it may be useful to share information about the possible current applications of AI to slow down the spread of SARS-CoV-2, to help infected people or people in the healthcare sector that have been uninterruptedly working for hours to attempt to save more lives, while putting at risk their own.
+What are the existing AI technologies (e.g. computer vision or robotics tools) that are already being used to tackle these issues, such as slowing down the spread of SARS-CoV-2 or helping infected people?
+I am looking for references that prove that the mentioned technologies are really being used. I am not looking for potential AI technologies (i.e. research work) that could potentially be helpful. Furthermore, I am not looking for data analysis tools (e.g. sites that show the evolution of the spread of coronavirus, etc.)
+"
+"['machine-learning', 'deep-learning', 'classification', 'recommender-system', 'efficiency']"," Title: Using AI to enhance customer serviceBody: I'm trying to find out how AI can help with efficient customer service, in fact call routing to the right agent. My usecase is given context of a query from a customer and agents' expertise, how can we do the matching?
+
+Generally, how is this problem solved? What sub-topic within AI is suitable for this problem? Classification, recommender systems, ...? Any pointers to open-source projects would be very helpful.
+"
+"['convolutional-neural-networks', 'papers', 'residual-networks', 'vanishing-gradient-problem']"," Title: If vanishing gradients are NOT the problem that ResNets solve, then what is the explanation behind ResNet success?Body: I often see blog posts or questions on here starting with the premise that ResNets solve the vanishing gradient problem.
+The original 2015 paper contains the following passage in section 4.1:
+
+We argue that this optimization difficulty is unlikely to
+be caused by vanishing gradients. These plain networks are
+trained with BN, which ensures forward propagated
+signals to have non-zero variances. We also verify that the
+backward propagated gradients exhibit healthy norms with
+BN. So neither forward nor backward signals vanish. In
+fact, the 34-layer plain net is still able to achieve competitive accuracy, suggesting that the solver works
+to some extent.
+
+So what's happened since then? I feel like either it became a misconception that ResNets solve the vanishing gradient problem (because it does indeed feel like a sensible explanation that one would readily accept and continue to propagate), or some paper has since proven that this is indeed the case.
+I'm starting with the initial knowledge that it's "easier" to learn the residual mapping for a convolutional block than it is to learn the whole mapping. So my question is on the level of: why is it "easier"? And why does the "plain network" do such a good job but then struggle to close the gap to the performance of ResNet. Supposedly if the plain network has already learned reasonably good mappings, then all it has left to learn to close the gap is "residual". But it just isn't able to.
+"
+"['neural-networks', 'overfitting', 'hyperparameter-optimization', 'regularization', 'underfitting']"," Title: Are there principled ways of tuning a neural network in case of overfitting and underfitting?Body: Whenever I tune my neural network, I usually take the common approach of defining some layers with some neurons.
+
+
+- If it overfits, I reduce the layers, neurons, add dropout, utilize regularisation.
+- If it underfits, I do the opposite.
+
+
+But it sometimes feels illogical doing all these. So, is there a more principled way of tuning a neural network (i.e. find the optimal number of layers, neurons, etc., in a principled and mathematical sound way), in case it overfits or underfits?
+"
+"['comparison', 'evolutionary-algorithms', 'evolutionary-computation']"," Title: What is the difference between evolutionary computation and evolutionary algorithms?Body: A book on evolutionary computation by De Jong mentions both the term evolutionary algorithms (EA) as well as evolutionary computation (EC). However, it remains unclear to me what the difference between the two is. According to Vikhar, EA forms a subset of EC. However, it remains unclear to me what sort of topics/algorithms would be considered EC but not EA. Is there a clear difference between the two? If so, what is this difference?
+"
+"['reinforcement-learning', 'ai-design', 'rewards']"," Title: What are the guidelines for defining a reward function in reinforcement learning (bandit problem)?Body: I'm working currently on a problem and I'm using RL (bandit problem).
+
+In my system, I have an agent that chooses an action among $k$ possible actions, and a user that decides whether the agent chooses the right action or not. If the user is satisfied with the decision made by the agent, he rewards with $+1$, otherwise $-1$.
+
+Is this a good reward function, knowing that in my problem the values are in the range $[0, 1]$?
+
+Are there any guidelines to follow for defining the reward function? Are there any references (books or articles) that tackle this problem and present a solution?
+"
+"['neural-networks', 'convolutional-neural-networks']"," Title: Why does model complexity increase my validation score by a lot?Body: I learned that when creating neural networks the go to was to overfit and then to regularize. However I am now in a situation where, when I make the model more complex (more layers, more filters, ...) my scores become worse.
+
+I am training a CNN to predict pollution 6 hours in advance. The input I give to my model is the pollution of the past 18 hours.
+
+Can I safely say that, because there is probably a lot of noise in this data, this is the reason my model becomes worse when I increase its complexity?
+"
+"['machine-learning', 'principal-component-analysis']"," Title: Multiple-dimension scaling (MDS) objective for MDS and PCABody:
+The following is the MDS Objective.
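+Written out, the objective I am using (it matches the mds_objective function in my code below) is
+
+$$
+f(Z) = \frac{1}{2} \sum_{i < j} \left( \lVert z_i - z_j \rVert_2 - \lVert x_i - x_j \rVert_2 \right)^2.
+$$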
+
+
+Let's consider a scenario where I initialize MDS with the solution I obtained from PCA. Then I calculate the objective function on the initial PCA solution and on the MDS solution (after applying MDS to the former PCA solution). I would assume for sure that the objective function will decrease for the MDS solution compared with the PCA solution. However, when I calculate the objective function for each, the MDS solution yields a higher objective function value. Is this normal?
+I am attaching my code below:
+import os
+import pickle
+import gzip
+import argparse
+import time
+import matplotlib.pyplot as plt
+import numpy as np
+from numpy.linalg import norm
+
+from sklearn.model_selection import train_test_split
+from sklearn.decomposition import PCA
+from sklearn.manifold import TSNE
+from sklearn.neural_network import MLPRegressor, MLPClassifier
+from sklearn.preprocessing import LabelBinarizer
+from sklearn import decomposition
+
+from neural_net import NeuralNet, stochasticNeuralNet
+from manifold import MDS, ISOMAP
+import utils
+
+def mds_objective(Z, X):
+    # stress: 0.5 * sum over pairs (i < j) of (||z_i - z_j|| - ||x_i - x_j||)^2
+    total = 0
+    n, d = Z.shape
+    for i in range(n):
+        for j in range(i + 1, n):
+            total += (norm(Z[i, :] - Z[j, :], 2) - norm(X[i, :] - X[j, :], 2)) ** 2
+    return 0.5 * total
+
+dataset = load_dataset('animals.pkl')
+X = dataset['X'].astype(float)
+animals = dataset['animals']
+n, d = X.shape
+pca = decomposition.PCA(n_components = 5)
+pca.fit(X)
+Z = pca.transform(X)
+plt.figure()
+plt.scatter(Z[:, 0], Z[:, 1])
+for i in range(n):
+ plt.annotate(animals[i], (Z[i,0], Z[i,1]))
+utils.savefig('PCA.png')
+
+print(pca.explained_variance_ratio_)
+print(mds_objective(Z,X))
+
+
+dataset = load_dataset('animals.pkl')
+X = dataset['X'].astype(float)
+animals = dataset['animals']
+n,d = X.shape
+
+model = MDS(n_components=2)
+Z = model.compress(X)
+
+fig, ax = plt.subplots()
+ax.scatter(Z[:,0], Z[:,1])
+plt.ylabel('z2')
+plt.xlabel('z1')
+plt.title('MDS')
+for i in range(n):
+ ax.annotate(animals[i], (Z[i,0], Z[i,1]))
+utils.savefig('MDS_animals.png')
+print(mds_objective(Z,X))
+
+It prints the following:
+
+1673.1096816455256
+1776.8183112784652
+
+"
+"['math', 'backpropagation', 'derivative', 'numpy']"," Title: Why is my derivation of the back-propagation equations inconsistent with Andrew Ng's slides from Coursera?Body: I am using the cross-entropy cost function to calculate its derivatives using different variables $Z, W$ and $b$ at different instances. Please refer image below for calculation.
+
+
+As per my knowledge, my derivation is correct for $dZ, dW, db$ and $dA$, but, if I refer to Andrew Ng's Coursera material, I see an extra $\frac{1}{m}$ for $dW$ and $db$, whereas there is no $\frac{1}{m}$ in $dZ$. Andrew's slides on the left show the derivative, whereas the right side of the slides shows the corresponding NumPy implementation.
+
+
+Can someone please explain why there is:
+
+1) $\frac{1}{m}$ in $dW^{[2]}$ and $db^{[2]}$ in Andrew's slides in NumPy representation
+
+2) missing $\frac{1}{m}$ for $dZ^{[2]}$ in Andrew's slides in both normal and NumPy representation.
+
+Am I missing something or doing it in the wrong way?
+"
+"['natural-language-processing', 'recurrent-neural-networks', 'applications']"," Title: Which NLP applications are based on recurrent neural networks?Body: Some of the NLP applications taken from this link NLP Applications:
+
+
+- Machine Translation
+- Speech Recognition
+- Sentiment Analysis
+- Question Answering
+- Automatic Summarization
+- Chatbots
+- Market Intelligence
+- Text Classification
+- Character Recognition
+- Spell Check
+
+
+Which of these NLP applications are based on recurrent neural networks?
+"
+"['reinforcement-learning', 'comparison', 'sarsa', 'value-functions']"," Title: What is the relationship between the Q and V functions?Body: Suppose we have a policy $\pi$ and we use SARSA to evaluate $Q^\pi(s, a)$, where $a$ is the policy $\pi$.
+
+Can we say that $Q^\pi(s, a) = V^\pi(s)$?
+
+The reason why I think this can be the case is that $Q^\pi(s, a)$ is defined as the value obtained from taking action $a$ and then following policy $\pi$ thereafter. However, the action $a$ taken is the one prescribed by $\pi$ for all $s \in S$. This seems to correspond to the value function equation $V^\pi(s_t) = r(s_t) + \gamma V^\pi(s_{t+1})$.
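+For reference, the general relationship between the two functions that I am aware of is
+
+$$
+V^\pi(s) = \sum_{a} \pi(a \mid s) \, Q^\pi(s, a),
+$$
+
+which, for a deterministic policy, would seem to reduce to $V^\pi(s) = Q^\pi(s, \pi(s))$.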
+"
+"['reinforcement-learning', 'robotics', 'cyborg']"," Title: Why haven't we solved the problem of bipedal walking?Body: This has been a mystery to me.
+
+All the walking robots look like idiots now. But we do have a lot of simulation-based results
+(Flexible Muscle-Based Locomotion for Bipedal Creatures
+), so why can't we just apply the simulation results to a real robot and let it walk, not like an idiot, but like a running ostrich?
+
+With the main loop running at more than 60 fps, I fail to see how the program could possibly fail to stop the robot from losing balance. When I balance a stick on my hand, I can probably only do 5 fps.
+
+We have not only supercomputers connected to the robots, but also reinforcement learning algorithms at our disposal, so what has been the bottleneck in the problem of bipedal walking?
+"
+"['convolutional-neural-networks', 'residual-networks']"," Title: If the point of the ResNet skip connection is to let the main path learn the residual relative to identity, why are there convolutional skips?Body: In the original ResNet paper they talk about using plain identity skip connections when the input and output of a block have the same dimensions.
+
+
+
+When the input and output have different dimensions they propose two options:
+
+(A) Use an identity mapping padded with zeros to make up for the extra dimensions
+
+(B) Use a ""projection"".
+
+
+
+which (after some digging around in other people's code) I see as meaning: do a convolution with a 1x1 kernel with trainable weights.
+
+(B) is confusing to me because it seems to ruin the point of ResNet by making the skip connection trainable. Then the main path is not really learning a ""residual"" relative to an identity transformation. So at this point, I'm no longer sure how to interpret the intent or expected effect of this type of block. And I would think that one should justify doing it in the first place instead of just not putting a skip connection there at all (which in my mind is the status quo before this paper).
+
+So can anyone help explain away my confusion here?
+"
+"['convolutional-neural-networks', 'recurrent-neural-networks', 'long-short-term-memory', 'time-series']"," Title: How to predict an event (or action) based on a window of time-series measurements?Body: I have an input vector $X$, which contains a series of measurements within a period, e.g. 100 measurements in 1 sec. The goal is to predict an event, let's say, moving forward, backward or static.
+
+I don't want to predict the output just by looking at one series of measurements, but by looking at a window of $n$ vectors $X$ of measurements, making it dependent on the previous measurements, because of the noise in the measurements.
+
+Is there a way RNN can help me with this? Many to one architecture? LSTM?
+CNN of 1D + LSTM + dense?
+"
+"['search', 'norvig-russell', 'best-first-search']"," Title: What does the statement with the max do in the recursive best-first search algorithm?Body: I am reading the book ""Artificial Intelligence: A Modern Approach"" by Stuart and Norvig. I'm having trouble understanding a step of the recursive best-first search (RBFS) algorithm.
+
+Here's the pseudocode.
+
+
+
+What does the line s.f <- max(s.g + s.h, node.f)
do?
+
+Here's a diagram of the execution of RBFS.
+
+
+"
+"['neural-networks', 'deep-learning', 'classification', 'image-recognition']"," Title: How to perform insect classification given two images of the same insect?Body: I'm relatively new to image classification. Currently, I am trying to classify insect images, using a convolutional neural network (CNN).
+When I ask a human expert to identify an insect, I usually provide 2 photos: back and face. It seems that sometimes one feature stands out and allows identification with high certainty (""spots on the back - definitely a ladybug""), while other times you need to cross-reference both angles (""grey back could mean a few things, but after cross-referencing with the eyes - it's a moth"").
+
+How is it customary to implement this? Naively I was considering:
+
+
+- Two separate networks, one for backs and one for faces? If so, what formula is best for weighing in their outputs?
+- Single network, but separate dual classifications - e.g. ""moth face"", ""moth back"", ""ladybug face"", ""ladybug back""?
+- A single network, feed everything naively (e.g. moths from different angles, all with the same classification ""moth"") and rely on the NN to sort it out itself?
+
+"
+"['neural-networks', 'deep-learning', 'papers', 'implementation', 'pose-estimation']"," Title: How can I get to a final output of shape $224 \times 224$, without FC layers, from a tensor of specific shape, in OpenPose?Body: I am approaching the implementation of the OpenPose algorithm for realtime human body pose estimation.
+
+According to the official paper OpenPose: Realtime Multi-Person 2D Pose
+Estimation using Part Affinity Fields, $L$ and $S$ fields (body part maps, and part affinity fields) are estimated. These have the same size as the input image, and, according to the paper, these fields should be outputted at a given step in the forward pass (after a given number of $L$ stages, and $S$ stages), but, since before entering these stages the image is passed through the initial layers of the VGG-19 model, the spatial dimension is encoded and the features that finally enter the $L$ and $S$ stages have other dimensionality.
+
+All the network is convolutional, there's no FC layer at all. The VGG-19 part is the only one that contains MaxPooling layers, hence affecting the spatial relations and size of the receptive fields.
+
+My point is, after stage execution, I get tensors of shape [batch_size, filter_number, 28, 28]
. The issue is that the paper is not stating how to decode this information into the $L$ and $S$ maps of size $224 \times 224$.
+
+Following a traditional approach and decoding the final tensors with a linear layer from, let's say, $15000 \rightarrow (224 * 224 * \text{ number of body parts }) + (224 * 224 * \text{ number of limbs } * 2)$ (a very huge number!) is out of the question for any domestic computer; I presume I would need at least 128 GB of RAM installed, which is not the case.
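+To make that size estimate concrete, here is a rough back-of-the-envelope calculation (the channel counts, 19 part heatmaps and 38 PAF channels as in the COCO configuration, are my assumption):
+
+n_inputs = 15000
+n_outputs = 224 * 224 * (19 + 38)      # about 2.86 million output units
+n_weights = n_inputs * n_outputs       # about 4.3e10 weights for a single dense layer
+print(n_weights * 4 / 2**30, 'GiB of float32 weights')   # roughly 160 GiB
+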
+
+Another solution is to remove the max-pooling layers from the VGG-19 part, but then although the map size is preserved to $224$, instead of $28$, the huge amount of computations and values that need to be stored also lead to memory errors.
+
+So, the problem is, how can I get to a final output of $224 \times 224$ without FC layers, from a tensor of shape [batch_size, bodyparts, 28, 28]
?
+
+Not an easy answer. I will check a TensorFlow implementation I have seen around to see how the problem was solved.
+
+Any parallel ideas are greatly welcome.
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks', 'computer-vision']"," Title: Human Aggression Detection Community, Competition and datasetBody: I'm looking for a community or competition website related to human aggression detection using Deep Learning in a video.
+Also, I'm looking for a dataset of human aggression activities.
+
+Any suggestions would be appreciated.
+"
+"['neural-networks', 'deep-learning', 'supervised-learning', 'geometric-deep-learning', 'graphs']"," Title: How can I learn a graph given nodes with features in a supervised fashion?Body: I have a dataset and want to be able to construct a graph from it in a supervised fashion.
+
+Let's assume I have a dataset with N nodes, each node has e.g. 10 features. Out of these N nodes, I want to learn a graph, i.e. an $N \times N$ adjacency matrix. So, I start with $N$ nodes and all I know is a 10-dimensional feature vector for each node. I have no knowledge about the relation between these nodes and want to figure it out.
+
+Here is an example for $N=6$, but in practice $N$ is not fixed.
+
+
+So the output I would like to get here is a $6\times6$ adjacency matrix, representing the relations between the nodes (undirected).
+
+Note: N is arbitrary and not fixed. So an algorithm should be able to perform on any given N.
+
+My dataset is labeled. For the training dataset, I have the desired adjacency matrix for each collection of input nodes, which is filled with $0$s and $1$s.
+
+However, the output of the algorithm could also be an adjacency matrix filled with non-integer numbers in $[0,1]$, giving some kind of probability of the nodes being connected (preferably close to $0$ or $1$ of course). So I could easily give a number as the label for each node. In the above example, the labels for the three connected nodes could be class $1$, and so on.
+
+Is there any kind of supervised learning algorithm (e.g. some sort of graph neural network) that can perform these tasks?
+"
+"['backpropagation', 'math', 'regularization', 'derivative']"," Title: Derivation of regularized cost function w.r.t activation and biasBody: In regularzied cost function a L2 regularization cost has been added.
+
+
+Here we have already calculated cross entropy cost w.r.t $A, W$.
+
+As mentioned in the regularization notebook (see below), in order to derive the regularized cost function $J$, the changes only concern $dW^{[1]}$, $dW^{[2]}$ and $dW^{[3]}$. For each, you have to add the regularization term's gradient. (No impact on $dA^{[2]}$, $db^{[2]}$, $dA^{[1]}$ and $db^{[1]}$?)
+
+
+
+But when I do it using the chain rule, I get changed values for $dA^{[2]}$, $dZ^{[2]}$, $dA^{[1]}$, $dW^{[1]}$ and $db^{[1]}$.
+
+Please refer below to see how I calculated this.
+
+
+Can someone explain why I am getting different results?
+
+What is the derivative of the L2 regularization term w.r.t. $dA^{[2]}$? (in equation 1)
+
+So my questions are
+
+1) Derivative of L2 regularization cost w.r.t $dA^{[2]}$
+
+2) How does adding the regularization term not affect $dA^{[2]}$, $db^{[2]}$, $dA^{[1]}$ and $db^{[1]}$ (i.e. $dA$ and $db$), but change the $dW$'s?
+"
+"['neural-networks', 'deep-learning', 'classification', 'bayesian-deep-learning', 'bayesian-neural-networks']"," Title: How should the neural network deal with unexpected inputs?Body: I recently wrote an application using a deep learning model designed to classify inputs. There are plenty of examples of this using images of irises, cats, and other objects.
+
+If I trained a data model to identify and classify different types of irises and I show it a picture of a cat, is there a way to add in an ""unknown"" or ""not a"" classification or would it necessarily have to guess what type of iris the cat most looks like?
+
+Further, I could easily just add another classification with the label ""not an iris"" and train it using pictures of cats, but then what if I show it a picture of a chair (the list of objects goes on).
+
+Another example would be in natural language processing. If I develop an application that takes the input language and spits out ""I think this is Spanish"", what if it encounters a language it doesn't recognize?
+"
+"['machine-learning', 'keras', 'multilayer-perceptrons', 'linear-algebra']"," Title: Why MLP cannot approximate a closed shape function?Body: [TL;DR]
+
+I generated two classes Red and Blue on a 2D space. Red are points on Unit Circle and Blue are points on a Circle Ring with radius limits (3,4). I tried to train a Multi Layer Perceptron with different number of hidden layers, BUT all the hidden layers had 2 neurons. The MLP never reached 100% accuracy. I tried to visualize how the MLP would classify the points of the 2D space with Black and White. This is the final image I get:
+
+
+
+At first, I was expecting that the MLP could classify 2 classes on a 2D space with 2 neurons at each hidden layer, and I was expecting to see a white circle encapsulating the red points with the rest being black space. Is there a (mathematical) reason why the MLP fails to create a closed shape and instead seems to go from infinity to infinity in the 2D space? (Notice: if I use 3 neurons at each hidden layer, the MLP succeeds quite fast.)
+
+[Notebook Style]
+
+I generated two classes Red and Blue on a 2D space.
+Red are points on Unit Circle
+
+import numpy as np
+import matplotlib.pyplot as plt
+
+size_ = 200
+classA_r = np.random.uniform(low = 0, high = 1, size = size_)
+classA_theta = np.random.uniform(low = 0, high = 2*np.pi, size = size_)
+classA_x = classA_r * np.cos(classA_theta)
+classA_y = classA_r * np.sin(classA_theta)
+
+
+
+and Blue are points on a Circle Ring with radius limits (3,4).
+
+classB_r = np.random.uniform(low = 2, high = 3, size = size_)
+classB_theta = np.random.uniform(low = 0, high = 2*np.pi, size = size_)
+classB_x = classB_r * np.cos(classB_theta)
+classB_y = classB_r * np.sin(classB_theta)
+
+
+I tried to train a Multi Layer Perceptron with different number of hidden layers, BUT all the hidden layers had 2 neurons.
+
+from keras.layers import Input, Dense
+from keras.models import Model
+
+hidden_layers = 15
+inputs = Input(shape=(2,))
+dnn = inputs
+for l_no in range(hidden_layers):
+ dnn = Dense(2, activation='tanh', name = ""layer_{}"".format(l_no))(dnn)
+outputs = Dense(2, activation='softmax', name = ""layer_out"")(dnn)
+
+model = Model(inputs=inputs, outputs=outputs)
+
+model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
+
+
+The MLP never reached 100% accuracy. I tried to visualize how the MLP would classify the points of the 2D space with Black and White.
+
+limit = 4
+step = 0.2
+grid = []
+x = -limit
+while x <= limit:
+ y = -limit
+ while y <= limit:
+ grid.append([x, y])
+ y += step
+ x += step
+grid = np.array(grid)
+
+
+prediction = model.predict(grid)
+
+
+This is the final image I get:
+
+xs = []
+ys = []
+cs = []
+for point in grid:
+ xs.append(point[0])
+ ys.append(point[1])
+for pred in prediction:
+ cs.append(pred[0])
+
+plt.scatter(xs, ys, c = cs, s=70, cmap = 'gray')
+plt.scatter(classA_x, classA_y, c = 'r', s= 50)
+plt.scatter(classB_x, classB_y, c = 'b', s= 50)
+plt.show()
+
+
+
+
+At first, I was expecting that the MLP could classify 2 classes on a 2D space with 2 neurons at each hidden layer, and I was expecting to see a white circle encapsulating the red points with the rest being black space. Is there a (mathematical) reason why the MLP fails to create a closed shape and instead seems to go from infinity to infinity in the 2D space? (Notice: if I use 3 neurons at each hidden layer, the MLP succeeds quite fast.)
+
+What I mean by a closed shape, take a look at the second image which was generated by using 3 neurons at each layer:
+
+for l_no in range(hidden_layers):
+ dnn = Dense(3, activation='tanh', name = ""layer_{}"".format(l_no))(dnn)
+
+
+
+
+[According to Marked Answer]
+
+from keras import backend as K
+def x_squared(x):
+ x = K.abs(x) * K.abs(x)
+ return x
+hidden_layers = 3
+inputs = Input(shape=(2,))
+dnn = inputs
+for l_no in range(hidden_layers):
+ dnn = Dense(2, activation=x_squared, name = ""layer_{}"".format(l_no))(dnn)
+outputs = Dense(2, activation='softsign', name = ""layer_out"")(dnn)
+model = Model(inputs=inputs, outputs=outputs)   # rebuild the model around the new layers
+model.compile(optimizer='adam',
+ loss='mean_squared_error',
+ metrics=['accuracy'])
+
+
+I get:
+
+
+"
+"['reinforcement-learning', 'keras', 'actor-critic-methods', 'advantage-actor-critic']"," Title: How to set the target for the actor in A2C?Body: I did a simple Actor-Critic implementation in Keras using 2 networks where the critic learns the Q-Values of every action, and the actor predicts probabilities for choosing each action. In training, the target probabilities for the actor was a one-hot vector with 1.0
in the maximum Q-Value prediction position and 0.0
in all the rest, and simply used fit
method on the actor model with mean squared error loss function.
+
+However, I'm not sure what to set as the target when switching to A2C. In all the guides I saw it's mentioned that the critic now learns one value per state, not one value per action in the action space.
+
+This change makes it unclear how to set the target vector for the actor. The guides/SE questions I went over did not explain this point and simply said that we can calculate the advantage value using the value function (here, here and here) for the current and next state, which is fine, except we can only do that for the specific action taken and not for every action in the action-space, because we don't know the value of every next state for every action.
+
+In other words, we only know A(s,a)
for our memorized a
, and we know nothing about the advantage of other actions.
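+
+A minimal sketch of what I mean, where v stands for the critic and the names are my own:
+
+def advantage_for_taken_action(v, s, r, s_next, done, gamma=0.99):
+    # TD(0)-style advantage estimate for the single action that was actually taken:
+    # A(s, a) ~ r + gamma * V(s') - V(s); it needs no values for the other actions.
+    bootstrap = 0.0 if done else v(s_next)
+    return r + gamma * bootstrap - v(s)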
+
+One of my guesses was that you still calculate the Q-Values, because after all, the value function is defined by the Q-Values. The value function is the sum over every action a
of Q(s,a)*p(a)
. So does the critic need to learn the Q-Values and sum their multiplications with the probabilities generated by the policy network (actor), and calculate the advantages of every action?
+
+It's even more confusing because in one of the guides they said that the critic actually learns the advantage values, and not the value function (like all the other guides said), which is strange because you need to use the critic to predict the value function of the state and the next state. Also, the advantage function is per-action and in the implementations I see the critic has one output neuron.
+
+I think that what's being done in the examples I saw was to train the actor to fit a one-hot vector for the selected action (not the best action by the critic), but modify the loss-function value using the advantage value (possibly to influence the gradient). Is that the case?
+"
+"['deep-learning', 'comparison', 'transfer-learning', 'meta-learning']"," Title: What is the difference between ""out-of-distribution (generalisation)"" and ""(meta)-transfer learning""?Body: I'm trying to develop a better understanding of the concept of ""out-of-distribution"" (generalization) in the context of Bengio's ""Moving from System 1 DL to System 2 DL"" and the concept of ""(meta)-transfer learning"" in general.
+
+These concepts seem to be very strongly related, maybe even almost referring to the same thing. So, what are similarities and differences between these two concepts? Do these expressions refer to the same thing? If the concepts are to be differentiated from each other, what differentiates the one concept from the other and how do the concepts relate?
+"
+"['neural-networks', 'machine-learning', 'objective-functions']"," Title: Why is the loss associated with my neural network increasing?Body: I am currently learning neural networks using data from Touchscreen Input as a Behavioral Biometric. Basically, I am trying to predict "User ID" by training the neural network model shown below.
+import time
+import os
+import tensorflow as tf
+BATCH_SIZE=32
+embedding_size=256
+sequence_length=200
+BUFFER_SIZE=10000
+input_size=41
+learning_rate=0.001
+
+inputs_as_tensors=tf.data.Dataset.from_tensor_slices(train_data_features_array)
+targets_as_tensors=tf.data.Dataset.from_tensor_slices(train_data_labels_categorical_array)
+training_data=tf.data.Dataset.zip((inputs_as_tensors,targets_as_tensors))
+#training_data=training_data.batch(sequence_length,drop_remainder=True)
+training_dataset=training_data.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
+print(training_dataset)
+
+def build_model(vocab_size,batch_size):
+ modelf=tf.keras.Sequential([
+ tf.keras.layers.Dense(10,activation="sigmoid",input_shape=(None,10)),
+
+ tf.keras.layers.Dense(30,activation="relu",use_bias=True),
+
+ tf.keras.layers.Dropout(0.2),
+ tf.keras.layers.Dense(vocab_size)
+
+
+ ])
+ return modelf
+
+def training_step(inputs,targets,optimizer):
+ with tf.GradientTape() as tape:
+ predictions=model(inputs)
+ loss=tf.reduce_mean(tf.keras.losses.categorical_crossentropy(targets,predictions,from_logits=True))
+
+ grads=tape.gradient(loss,model.trainable_variables)
+ optimizer.apply_gradients(zip(grads,model.trainable_variables))
+ return loss,predictions
+
+model=build_model(input_size,BATCH_SIZE)
+i=0
+inner_loop=0
+checkpoint_dir ='Moses_Model_x'
+checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{i}")
+
+while(1):
+ start = time.time()
+ for x,y in training_dataset:
+ loss,predictions=training_step(x,y,tf.keras.optimizers.RMSprop(learning_rate=0.002))
+ print ('Epoch {} Loss {:.4f}'.format(i, loss))
+ print ('Time taken for iteration {} is {} sec\n'.format(i,time.time() - start))
+ model.save_weights(checkpoint_prefix.format(i=i))
+ i=i+1
+
+However, the loss value is actually increasing. Is there anything I have to change or something wrong in my code?
+
+"
+"['reinforcement-learning', 'dqn', 'policy-gradients', 'policies']"," Title: Representation of state space, action space and reward system for Reinforcement Learning problemBody: I am trying to solve the problem of an agent dynamically discovering(start with no information about the environment) the environment and to explore as much of the environment as possible without crashing into obstacles
+I have the following environment:
+
+
+
+where the environment is a matrix. In this matrix, the obstacles are represented by 0s and the free spaces by 1s. The position of the agent is given by a label such as 0.8 in the matrix.
+
+The agent's initial internal representation of the environment will look something like this, with the agent's position in it.
+
+
+Every time it explores the environment it keeps updating its own map:
+
+
+
+The single state representation is just the matrix containing-
+
+
+- 0 for obstacles
+- 1 for unexplored regions
+- 0.8 for position of the agent
+- 0.5 for the places it has visited once
+- 0.2 for the places it has visited more than once
+
+
+I want the agent to not hit the obstacles and to go around them.
+
+The agent should also not be stuck in one position and try to finish the exploration as quickly as possible.
+
+This is what I plan to do:
+
+In order to prevent the agent from getting stuck in a single place, I want to punish the agent if it visits a single place multiple times. I want to mark the place the agent has visited once as 0.5 and if it has visited it more than once that place will be labelled 0.2
+
+The reason I am marking a place it has visited only once as 0.5 is because if there is a scenario where in the environment there is only one way to go into a region and one way to come out of that region, I don't want to punish this harshly.
+
+Given this problem, I am thinking of using the following reward system-
+
+
+- +1 for every time it takes an action that leads to an unexplored region
+- -1 for when it takes an action that crashes into an obstacle
+- 0 if it visits a place twice (i.e. the 0.5 scenario)
+- -0.75 if it visits a place more than twice (a small sketch of this scheme is shown below)
+
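+A small sketch of this reward scheme (my own naming), keyed on the value of the cell the agent tries to move into:
+
+def reward(target_cell_value, hit_obstacle):
+    if hit_obstacle:                  # moved into a 0 cell or a boundary
+        return -1.0
+    if target_cell_value == 1:        # unexplored cell
+        return 1.0
+    if target_cell_value == 0.5:      # visited once before
+        return 0.0
+    return -0.75                      # 0.2: visited more than once
+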
+
+The action space is just-
+
+
+
+Am I right in approaching the problem this way? Is reinforcement learning the solution for this problem?
+Is my representation of the state, action and reward system correct?
+I am thinking that DQN is not the right way to go because the definition of a terminal state is hard in this problem. What method should I use to solve it?
+"
+"['natural-language-processing', 'comparison', 'long-short-term-memory', 'language-model', 'bidirectional-lstm']"," Title: What are pros and cons of Bi-LSTM as compared to LSTM?Body: What are the pros and cons of LSTM vs Bi-LSTM in language modelling? What was the need to introduce Bi-LSTM?
+"
+"['neural-networks', 'classification', 'definitions', 'computational-learning-theory', 'sigmoid']"," Title: Can neural networks with a sigmoid as the activation function of the output layer approximate continuous functions?Body: Neural networks are commonly used for classification tasks, in fact from this post it seems like that's where they shine brightest.
+
+However, when we want to classify using neural networks, we often have the output layer to take values in $[0,1]$; typically, by taking the last layer to be the sigmoid function $x \mapsto \frac{e^x}{e^x +1}$.
+
+Can neural networks with a sigmoid as the activation function of the output layer approximate continuous functions? Is there an analogue to the universal approximation theorem for this case?
+"
+"['neural-networks', 'backpropagation']"," Title: Simple three layer neural network with backpropagation is not approximating tanh functionBody: I have this simple neural network in Python which I'm trying to use to aproximation tanh function. As inputs I have x - inputs to the function, and as outputs I want tanh(x) = y. I'm using sigmoid function also as an activation function of this neural network.
+
+import numpy
+# scipy.special for the sigmoid function expit()
+import scipy.special
+# library for plotting arrays
+import matplotlib.pyplot
+# ensure the plots are inside this notebook, not an external window
+%matplotlib inline
+
+# neural network class definition
+class neuralNetwork:
+
+
+ # initialise the neural network
+ def __init__(self, inputnodes, hiddennodes, outputnodes, learningrate):
+ # set number of nodes in each input, hidden, output layer
+ self.inodes = inputnodes
+ self.hnodes = hiddennodes
+ self.onodes = outputnodes
+
+ # link weight matrices, wih and who
+ # weights inside the arrays are w_i_j, where link is from node i to node j in the next layer
+ # w11 w21
+ # w12 w22 etc
+ self.wih = numpy.random.normal(0.0, pow(self.hnodes, -0.5), (self.hnodes, self.inodes))
+ self.who = numpy.random.normal(0.0, pow(self.onodes, -0.5), (self.onodes, self.hnodes))
+
+ # learning rate
+ self.lr = learningrate
+
+ # activation function is the sigmoid function
+ self.activation_function = lambda x: scipy.special.expit(x)
+
+ pass
+
+
+ # train the neural network
+ def train(self, inputs_list, targets_list):
+ # convert inputs list to 2d array
+ inputs = numpy.array(inputs_list, ndmin=2).T
+ targets = numpy.array(targets_list, ndmin=2).T
+
+ # calculate signals into hidden layer
+ hidden_inputs = numpy.dot(self.wih, inputs)
+ # calculate the signals emerging from hidden layer
+ hidden_outputs = self.activation_function(hidden_inputs)
+
+ # calculate signals into final output layer
+ final_inputs = numpy.dot(self.who, hidden_outputs)
+ # calculate the signals emerging from final output layer
+ final_outputs = self.activation_function(final_inputs)
+
+ # output layer error is the (target - actual)
+ output_errors = targets - final_outputs
+ # hidden layer error is the output_errors, split by weights, recombined at hidden nodes
+ hidden_errors = numpy.dot(self.who.T, output_errors)
+
+ # BACKPROPAGATION & gradient descent part, i.e updating weights first between hidden
+ # layer and output layer,
+ # update the weights for the links between the hidden and output layers
+ self.who += self.lr * numpy.dot((output_errors * final_outputs * (1.0 - final_outputs)), numpy.transpose(hidden_outputs))
+
+ # update the weights for the links between the input and hidden layers, second part of backpropagation.
+ self.wih += self.lr * numpy.dot((hidden_errors * hidden_outputs * (1.0 - hidden_outputs)), numpy.transpose(inputs))
+ pass
+
+
+ # query the neural network
+ def query(self, inputs_list):
+ # convert inputs list to 2d array
+ inputs = numpy.array(inputs_list, ndmin=2).T
+
+ # calculate signals into hidden layer
+ hidden_inputs = numpy.dot(self.wih, inputs)
+ # calculate the signals emerging from hidden layer
+ hidden_outputs = self.activation_function(hidden_inputs)
+
+ # calculate signals into final output layer
+ final_inputs = numpy.dot(self.who, hidden_outputs)
+ # calculate the signals emerging from final output layer
+ final_outputs = self.activation_function(final_inputs)
+
+ return final_outputs
+
+
+Now I try to query this network. This network has three input nodes, one for each input x. It also has 3 output nodes, so it should map the inputs to the given outputs, where the outputs are y = tanh(x).
+
+# number of input, hidden and output nodes
+input_nodes = 3
+hidden_nodes = 8
+output_nodes = 3
+learning_rate = 0.1
+
+# create instance of neural network
+n = neuralNetwork(input_nodes,hidden_nodes,output_nodes, learning_rate)
+
+realInputs = []
+realInputs.append(1)
+realInputs.append(2)
+realInputs.append(3)
+
+# for x in (-3, 3):
+# realInputs.append(x)
+# pass
+
+expectedOutputs = []
+expectedOutputs.append(numpy.tanh(1));
+expectedOutputs.append(numpy.tanh(2));
+expectedOutputs.append(numpy.tanh(3));
+
+for y in expectedOutputs:
+ print(y)
+ pass
+
+training_data_list = []
+
+# epochs is the number of times the training data set is used for training
+epochs = 200
+
+for e in range(epochs):
+ # go through all records in the training data set
+ for record in training_data_list:
+ # scale and shift the inputs
+ inputs = realInputs
+ targets = expectedOutputs
+ n.train(inputs, targets)
+ pass
+ pass
+
+n.query(realInputs)
+
+
+Outputs: desired vs ones from network with same data as training data:
+
+0.7615941559557649
+0.9640275800758169
+0.9950547536867305
+
+
+array([[-0.21907413],
+ [-0.6424568 ],
+ [-0.25772344]])
+
+
+My results are completely wrong. I'm a beginner with neural networks, so I wanted to build a neural network without frameworks like TensorFlow... Could someone help me? Thank you.
+"
+"['neural-networks', 'comparison', 'regression']"," Title: Which one is better: multivariate regression with basis expansion or neural networks?Body: Assume we are given a training dataset $D = \{ (x_i, y_i)\}_{i=1}^{N}$.
+
+My question is: which is better?
+
+
+- A multivariate regression with basis expansion with independent matrix $X$ and dependent matrix $Y$, such that $X \in K; K \subset \mathbb R^n$ and $Y \in \mathbb R^m$ with training data $D$.
+
+
+Or
+
+
+- A neural network which takes $n$ input variables and returns $m$ output with training data $D$
+
+
+
+ Without a doubt, the multivariate regression option is better with its basis polynomials, because it can adapt to any curve required in any dimension and doesn't need as large a dataset as neural networks do. Then why are neural networks used more than multivariate regression?
+
+
+Note: Prefer explaining the mechanism of neural network used as regression in your answers. To help us know the degree of flexibility of both.
+
+Edit: You may prefer choosing your own loss function if you need to.
+"
+"['monte-carlo-tree-search', 'monte-carlo-methods', 'computer-programming']"," Title: MCTS moves with multiple parentsBody: I'd like to develop an MCTS-like (Monte Carlo Tree Search) algorithm for program induction, i.e. learning programs from examples.
+
+My initial plan is for nodes to represent programs and for the search to expand nodes by revising programs. Each edge represents a possible revision.
+
+Many expansions involve a single program: randomly resample a subtree of the program, replace a constant with a variable, etc. It's straightforward to use these with MCTS.
+
+Some expansions, however, generate a program from scratch (e.g. sample a new program). Others use two or more programs to generate a single output program (e.g. crossover in Genetic Programming).
+
+These latter types of moves seem nonstandard for vanilla MCTS, but I'm not deeply familiar with the literature or what technical terms might be most relevant.
+
+How can I model expansions involving $N \ne 1$ nodes? Are there accepted methods for handling these situations?
+"
+"['training', 'chat-bots']"," Title: Is it better to rely on an intention file or a database for a web chatbot?Body: Currently, I'm making a chatbot that is going to be functioning in a website, so I was wondering, is it better to train the chatbot with intentions files or use the database as the intention file, if it the latter, then how would I do it? With SQLite or with Excel? Any guides or tutorial would be appreciated.
+
+I'm planning to use Flask + Python + Html for the chatbot.
+"
+"['natural-language-processing', 'pytorch', 'transformer']"," Title: Can you train Transformers sequentially?Body: I’m currently trying to train a BART, which is a denoising Transformer created by Facebook researchers. Here’s my Transformer code
+
+import math
+import torch
+from torch import nn
+from Constants import *
+
+class Transformer(nn.Module):
+ def __init__(self, input_dim: int, output_dim: int, d_model: int = 200, num_head: int = 8, num_e_layer: int = 6,
+ num_d_layer: int = 6, ff_dim: int = 1024, drop_out: float = 0.1):
+ '''
+ Args:
+ input_dim: Size of the vocab of the input
+ output_dim: Size of the vocab for output
+ num_head: Number of heads in mutliheaded attention models
+ num_e_layer: Number of sub-encoder layers
+ num_d_layer: Number of sub-decoder layers
+ ff_dim: Dimension of feedforward network in mulihead models
+ d_model: The dimension to embed input and output features into
+ drop_out: The drop out percentage
+ '''
+ super(Transformer, self).__init__()
+ self.d_model = d_model
+ self.transformer = nn.Transformer(d_model, num_head, num_e_layer, num_d_layer, ff_dim, drop_out,
+ activation='gelu')
+ self.decoder_embedder = nn.Embedding(output_dim, d_model)
+ self.encoder_embedder = nn.Embedding(input_dim, d_model)
+ self.fc1 = nn.Linear(d_model, output_dim)
+ self.softmax = nn.Softmax(dim=2)
+ self.positional_encoder = PositionalEncoding(d_model, drop_out)
+ self.to(DEVICE)
+
+ def forward(self, src: torch.Tensor, trg: torch.Tensor, src_mask: torch.Tensor = None,
+ trg_mask: torch.Tensor = None):
+ embedded_src = self.positional_encoder(self.encoder_embedder(src) * math.sqrt(self.d_model))
+ embedded_trg = self.positional_encoder(self.decoder_embedder(trg) * math.sqrt(self.d_model))
+ output = self.transformer.forward(embedded_src, embedded_trg, src_mask, trg_mask)
+ return self.softmax(self.fc1(output))
+
+class PositionalEncoding(nn.Module):
+ def __init__(self, d_model, dropout=0.1, max_len=5000):
+ super(PositionalEncoding, self).__init__()
+ self.dropout = nn.Dropout(p=dropout)
+ pe = torch.zeros(max_len, d_model)
+ position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
+ div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
+ pe[:, 0::2] = torch.sin(position * div_term)
+ pe[:, 1::2] = torch.cos(position * div_term)
+ pe = pe.unsqueeze(0).transpose(0, 1)
+        self.register_buffer('pe', pe)
+
+    def forward(self, x):
+        # standard positional-encoding forward (needed because the Transformer above
+        # calls self.positional_encoder(...) on the embedded inputs)
+        x = x + self.pe[:x.size(0), :]
+        return self.dropout(x)
+
+
+and here’s my training code
+
+def train(x: list):
+ optimizer.zero_grad()
+ loss = 0.
+ batch_sz = len(x)
+ max_len = len(max(x, key=len)) + 1 # +1 for EOS xor SOS
+ noise_x = noise(x)
+ src_x = list(map(lambda s: [SOS] + [char for char in s] + [PAD] * ((max_len - len(s)) - 1), noise_x))
+ trg_x = list(map(lambda s: [char for char in s] + [EOS] + [PAD] * ((max_len - len(s)) - 1), x))
+ src = indexTensor(src_x, max_len, IN_CHARS).to(DEVICE)
+ trg = targetsTensor(trg_x, max_len, OUT_CHARS).to(DEVICE)
+ names = [''] * batch_sz
+
+ for i in range(src.shape[0]):
+ probs = transformer(src, trg[:i + 1])
+ loss += criterion(probs, trg[i])
+
+ loss.backward()
+ optimizer.step()
+
+ return names, loss.item()
+
+
+As you can see in the train code. I am training it ""sequentially"" by inputting the first letter of the data then computing the loss with the output then inputting the first and second character and doing the same thing, so on and so forth.
+
+This doesn’t seem to be training properly though as the denoising is totally off. I thought maybe there’s something wrong with my code or you can’t train Transformers this way.
+
+I'm taking first-name data, noising it, and then training the Transformer to denoise it, but the output of the Transformer doesn't look remotely like the denoised version, or even the noised version, of the name. I built a denoising autoencoder using LSTMs and it did way better, but I feel like BART should be far outperforming LSTMs because it's supposedly a state-of-the-art NLP model.
+"
+"['objective-functions', 'policy-gradients', 'advantage-actor-critic']"," Title: Is A2C loss function taking smaller steps for larger mistakes?Body: A2C loss is usually defined as advantage * (-log(actor_predictions)) * target
where target
is a one-hot vector (with some clipping/noise/etc...) with the selected target.
+
+Does this mean that we get larger losses for smaller mistakes?
+
+If for example the agent has predicted $\pi(a|s)=0.9$ but the advantage is negative, this would mean a larger mistake than if the agent predicted that $\pi(a|s)=0.1$, however, putting the numbers in the formula means a larger loss for the 0.1
prediction.
+
+Assuming advantage=-1
, advantage * (-log(actor_predictions)) * target
would mean:
+
+$$
+-1 * (-log(0.9)) * 1 = log(0.9)=-0.045
+$$
+$$
+-1 * (-log(0.1)) * 1 = log(0.1)=-1
+$$
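+
+(For what it's worth, these numbers assume base-10 logarithms; a quick check:)
+
+import numpy as np
+advantage, target = -1.0, 1.0
+print(advantage * (-np.log10(0.9)) * target)   # about -0.046
+print(advantage * (-np.log10(0.1)) * target)   # exactly -1.0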
+
+Is my understanding correct?
+"
+"['machine-learning', 'reference-request', 'gaussian-process']"," Title: Is there an up-to-date list of suitable kernels for Gaussian processes?Body: Consider a stochastic process $\{X_t \colon t \in T\}$ indexed by a set $T$. We assume for simplicty that $T \in \mathbb{R}^n$. We assume that for any choice of indexes $t_1, \dots, t_n$, the random variables $(X_{t_1}, \dots, X_{t_n})$ are jointly distributed according to a multivariate Gaussian distribution with mean $\mu = (0, \dots, 0)$, for a given covariance matrix $\Sigma$.
+Under these assumptions, the stochastic process is completely determined by the 2nd-order statistics. Hence, if we assume a fixed mean at $0$, then the stochastic process is fully defined by the covariance matrix. This matrix can be defined in terms of the covariance function
+$$
+k(t_i, t_j) = \mbox{cov}(X_{t_i}, X_{t_j}).
+$$
+It is well-known that functions $k$ as defined above are admissible kernels. This fact is often used in probabilistic inference, when performing regression or classification.
+Several functions can be suitable kernels, but only a few are used in practice, depending on the application.
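+For instance, one of the most commonly used choices is the squared-exponential (RBF) kernel
+$$
+k(t_i, t_j) = \sigma^2 \exp\!\left(-\frac{\lVert t_i - t_j \rVert^2}{2\ell^2}\right),
+$$
+with signal variance $\sigma^2$ and length-scale $\ell$, but I am looking for a broader and more current catalogue of such functions.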
+Given a large amount of related literature, can someone provide an up-to-date list of functions commonly used as kernels in this context?
+"
+"['neural-networks', 'ai-design', 'regression']"," Title: Non-linear regression with a neural networkBody: I have to perform a regression on three curves as shown in the following plot. Here, accA
(y-axis) is the dependent variable, and w
(x-axis) is the independent variable. The sum of the three curves adds up to 1.
+
+To perform regression for the three curves, I would like to use a neural network. What architecture/set-up should I use for this task?
+
+
+"
+"['neural-networks', 'deep-learning', 'training', 'alexnet', 'data-augmentation']"," Title: What could cause a big fluctuation of the loss in the last epochs of training an AlexNet?Body: I am training an AlexNet neural network, with about 12000 images which 80% is for training, 10% is for validation and another 10% is for testing.
+I have a problem in my plots. There is a big fluctuation in epoch 47,
+how can I have a smooth plot? What is the problem?
+
+
+
+I tried to increase my validation data because the fluctuation was in the validation loss, but nothing changed.
+I decreased the learning rate, but it gets stuck in a local optimum.
+"
+"['convolutional-neural-networks', 'multilayer-perceptrons', 'artificial-neuron', 'weights', 'filters']"," Title: Can neurons in MLP and filters in CNN be compared?Body: I know they are not the same in working, but an input layer sends the input to $n$ neurons with a set of weights, based on these weights and the activation layer, it produces an output that can be fed to the next layer.
+Aren't the filters the same, in the way that they convert an "image" to a new "image" based on the weights that are in that filter? And that the next layer uses this new "image"?
+"
+"['machine-learning', 'combinatorial-games']"," Title: Can an artificial intelligence be unbeatable at simple games?Body: There are (two-players, perfect information) combinatorial games for which, at any configuration of the game, a winning move (if there is one) can be quickly computed by a short program. This is the case of the following game that starts with a bunch of matches and each player alternatively removes 1,2 or 3 matches, until the player that removes the last one wins.
+This is also the case of the Nim game.
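+For instance, for the matches game just described, a short program can compute a winning move directly:
+
+def winning_move(n_matches):
+    # With 1-3 matches removed per turn and the player taking the last match winning,
+    # positions where n is a multiple of 4 are losing for the player to move.
+    k = n_matches % 4
+    return k if k != 0 else None   # number of matches to take, or None if no winning move
+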
+
+On the other hand, understanding the winning strategy of games like Go or chess seems hopeless.
+However, some machine-learning-based programs like AlphaGo Zero are able to ""learn the strategy"" of complex games, using as input data only the rules of the game. I don't really know how these algorithms work, but here is my vague question:
+
+For a simple game like Nim, can such an algorithm actually find a winning move in any winning configuration of the game?
+
+The number of configurations of Nim is infinite, but the algorithm will consider during its ""training"" only a finite number of configurations. It seems imaginable that if this training phase is long enough, then the program will be able to capture the winning strategy, like a human would do.
+"
+"['search', 'intelligent-agent', 'heuristics', 'multi-agent-systems']"," Title: Agents meeting in a directed connected graphBody: We have a directed connected graph, where nodes are places one can go to and edges are the "roads" between places. We have K agents whose goal is to meet in one node. Agents start in different nodes. Note that more than one agent can be at one node at the same time and that all agents move by one node at every turn(they move synchronously).
+We have two variants of this task:
+
+- in each turn every agent must move
+
+- an agent may pass on moving.
+
+
+For a chosen variant, I have to find an algorithm to complete this task, but it cannot be the state-space search algorithm.
+I've been sitting on this for a while, but I cannot think of anything.
+I've been thinking that if agents could know each other's positions they could use them to choose where to go, but that is a state-space search. I also thought that if agents meet, they could continue together. But I'm looking for an alternative to state-space search algorithms.
+"
+['convolutional-neural-networks']," Title: What are the main points of the top-down vs bottom-up paradigm in neural networks?Body: I've been reading some papers on human pose estimation and I'm starting to see the terms top-down and bottom-up crop up a lot. For example in this paper:
+
+
+ Our hourglass module differs from these designs mainly
+ in its more symmetric distribution of capacity between bottom-up processing
+ (from high resolutions to low resolutions) and top-down processing (from low
+ resolutions to high resolutions)
+
+
+Okay, so what are the main observations or distinctions of top-down vs bottom-up? Why does it make sense to have a paradigm in which we talk about these specifically?
+"
+['algorithm']," Title: Automated explanation - function results - simple attemptBody: I used to work as an analyst in a financial project where we had functions $f$ determining the price, and sometimes the inputs $x$ jumped in such a way to produce anomalous results.
+We had to report an explanation, and I wish to automate the process.
+It's not properly a question of AI, more of information science.
+The idea is that once you can determine, for a generic non-linear $f$, the ranking of relevance of the $x_i$ in explaining the result, you can generate a full explanation by:
+
+
+- decompose $f$ as a composition of $f_j$, which are intermediate results with a definite meaning in the application domain (in this case, finance)
+- apply the algorithm using the $f_j$ instead of $x_i$, and then iterate it to explain the $f_j$ in terms of $x_i$
+
+
+The relevance is quantified by the information gain of each variable. This will be explained for an application in ranking the $x_i$ directly.
+We assume to start on a uniform distribution on the $x$ domain, calculate the derived probability density function for $f$, and the information entropy of $f$.
+Then we fix the $x_i$ one at a time, for each calculate the new p.d.f. of $f$ conditioned on that $x_i$ and the (lower) information entropy of $f$. The information gain is $IG(x_i)$.
+Choose as the first conditioning variable the $x_i$ with the largest information gain, then condition on the remaining variables in decreasing order of $IG_i$.
+So, for example, with $(x_1,x_2,x_3)$ we could condition first on $x_2$, then on $(x_2,x_3)$, and then on $(x_2,x_3,x_1)$, getting the percentage contributions as
+$\frac{IG_i}{H_y}$. The successive terms $IG_i$ always add up to the total entropy $H_y$, since conditioning on all variables gives a point and zero entropy.
+
+Any opinion on how to improve this ""automated function explanation"" is welcome
+"
+"['natural-language-processing', 'comparison', 'unsupervised-learning', 'supervised-learning', 'semi-supervised-learning']"," Title: What are the pros and cons of supervised, semi-supervised and unsupervised relation extraction in NLP?Body: I am following the NLP course taught by Dan Jurafsky. In the video lectures Supervised Relation Extraction and Semi Supervised and Unsupervised Relation Extraction Jurafsky explains supervised, semi-supervised and unsupervised relation extraction.
+
+But what are the pros and cons of every relation extraction method compared with the other two relation extraction methods?
+"
+"['computer-vision', 'math', 'applications']"," Title: Why are conics important in computer vision?Body: The book Multiple View Geometry in Computer Vision by Richard Hartley and Andrew Zisserman talks about lines, points and conics. A conic is a curve described by a second-degree equation in the plane, so a parabola would be an example of a conic. The purpose and usage of points and lines in computer vision are quite clear. For example, a set of points defines an object, or we can project a 3-dimensional point to a 2-dimensional point in the image plane, or a line represents the space to look for the corresponding point in the second plane of another point in the first plane (epipolar geometry). However, probably because I haven't yet read the part of the book related to the applications of conics in computer vision, it's not clear why do we even care about conics in computer vision.
+
+So, why are conics important in computer vision? Note that I know that conics are defined by points and, given the point-line duality, they can also be defined by lines, but this still doesn't enlighten me on the purpose of conics in computer vision. So, I am looking for applications where conics are used to define the underlying CV model, in a similar way that points and lines are used to describe the pinhole camera model.
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'deep-rl', 'resource-request']"," Title: What are some online courses for deep reinforcement learning?Body: What are some (good) online courses for deep reinforcement learning?
+
+I would like the course to be both programming and theoretical. I really liked David Silver's course, but the course dates from 2015. It doesn't really teach deep Q-learning at this time.
+"
+"['search', 'a-star', 'consistent-heuristic']"," Title: What does a consistent heuristic become if an edge is removed in A*?Body: For the A* algorithm, a consistent heuristic is defined as follows:
+if, for every node $n$ and every successor $m$ of $n$ generated by any action $a$, $h(n) \leq c(n,m) + h(m)$.
+
+Suppose the edge $c(n,m)$ is removed. How do we then define consistency for the nodes $n, n'$, since $n'$ is not generated from $n$?
+"
+"['reinforcement-learning', 'temporal-difference-methods']"," Title: Are there known error bounds for TD(0) with a constant learning rate?Body: Is there any known error bounds for the TD(0) algorithm for the value function after a finite number of iterations?
+
+$$ \Delta_t=\max_{s \in \mathcal{S}}|v_t(s)-v_\pi(s)|$$
+$$v_{t+1}(s_t)=v_t(s_t)+\alpha(r+v_t(s_{t+1})-v_t(s_t))$$
+"
+"['neural-networks', 'recurrent-neural-networks', 'long-short-term-memory', 'generative-model']"," Title: Is there any way of generating fixed-length sequences with RNNs?Body: Is there any way of generating fixed-length sequences with RNNs? I want to tell my character level RNN to generate a name of length 3, 4, 5 and so on. I haven't found anything online like this, but my intuition tells me that, if I append the sequence length (e.g. 5) at each RNN input, the RNN should be able to do this. Does anyone know if this task is possible?
+"
+"['deep-learning', 'math', 'variational-autoencoder', 'evidence-lower-bound']"," Title: Why does the variational auto-encoder use the reconstruction loss?Body: VAE is trained to reduce the following two losses.
+
+- KL divergence between inferred latent distribution and Gaussian.
+
+- the reconstruction loss
+
+
+I understand that the first one regularizes VAE to get structured latent space. But why and how does the second loss help VAE to work?
+During the training of the VAE, we first feed an image to the encoder. Then, the encoder infers the mean and variance. After that, we sample $z$ from the inferred distribution. Finally, the decoder gets the sampled $z$ and generates an image. So, in this way, the VAE is trained to make the generated image equal to the original input image.
+Here, I cannot understand why the sampled $z$ should reconstruct the original image: since $z$ is sampled, it seems that $z$ does not have any relationship with the original image.
+But, as you know, VAE works well. So I think I miss something important or understand it in a totally wrong way.
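+For reference, the sampling step I am describing looks roughly like this (a sketch with the usual reparameterization; the names are mine, not from a specific implementation):
+
+import torch
+
+def vae_forward(encoder, decoder, x):
+    mu, log_var = encoder(x)                 # inferred mean and (log-)variance for this image
+    eps = torch.randn_like(mu)               # noise, independent of x
+    z = mu + torch.exp(0.5 * log_var) * eps  # sampled z still depends on x through mu and log_var
+    x_hat = decoder(z)                       # reconstruction compared against the original x
+    return x_hat, mu, log_var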
+"
+"['planning', 'pddl']"," Title: Can't solve Towers of Hanoi in PDDLBody: I'm using PDDL to generate a plan to solve this tower of Hanoi puzzle. I'll give the problem, the rules, the domain and fact sheet for everything.
+
+PDDL is telling me that the goal can be simplified to false; however, I know for a fact that this puzzle is solvable.
+
+Puzzle:
+
+
+
+
+ There are 3 posts. Each has rings on it. From bottom to top on each post. The first post has the
+ second largest ring. The second post has the smallest ring, with the second smallest ring on top of it.
+ The third post has the third largest ring, with the largest ring stacked on top of it.
+
+
+Rules:
+
+
+ The rules of this game are that you may only stack a ring on top of a larger ring. Your goal is to get all of the rings onto the same post, stacked from largest to smallest.
+
+
+My Code
+
+Domain
+
+(define (domain hanoi)
+ (:requirements :strips)
+ (:predicates (clear ?x) (on ?x ?y) (smaller ?x ?y))
+
+ (:action move
+ :parameters (?disc ?from ?to)
+ :precondition (and (smaller ?to ?disc) (on ?disc ?from)
+ (clear ?disc) (clear ?to))
+ :effect (and (clear ?from) (on ?disc ?to) (not (on ?disc ?from))
+ (not (clear ?to))))
+ )
+
+
+Problem
+
+(define (problem hanoi5)
+ (:domain hanoi)
+ (:objects peg1 peg2 peg3 d1 d2 d3 d4 d5)
+ (:init
+ (smaller peg1 d1) (smaller peg1 d2) (smaller peg1 d3)
+ (smaller peg1 d4) (smaller peg1 d5)
+ (smaller peg2 d1) (smaller peg2 d2) (smaller peg2 d3)
+ (smaller peg2 d4) (smaller peg2 d5)
+ (smaller peg3 d1) (smaller peg3 d2) (smaller peg3 d3)
+ (smaller peg3 d4) (smaller peg3 d5)
+
+ (smaller d2 d1) (smaller d3 d1) (smaller d3 d2) (smaller d4 d1)
+ (smaller d4 d2) (smaller d4 d3) (smaller d5 d1) (smaller d5 d2)
+ (smaller d5 d3) (smaller d5 d4)
+
+ ;(clear peg2) (clear peg3) (clear d1)
+ ;(on d5 peg1) (on d4 d5) (on d3 d4) (on d2 d3) (on d1 d2))
+ (clear d2) (clear d4) (clear d1)
+ (on d2 peg1) (on d5 peg2) (on d4 d5) (on d3 peg3) (on d1 d3))
+
+ (:goal (and (on d5 d4) (on d4 d3) (on d3 d2) (on d2 d1)))
+)
+
+
+I'm really at a loss here. Thank you!
+"
+"['neural-networks', 'comparison', 'computational-learning-theory', 'no-free-lunch-theorems', 'universal-approximation-theorems']"," Title: Are No Free Lunch theorem and Universal Approximation theorem contradictory in the context of neural networks?Body: To my understanding NFL states that, we cannot have an hypothesis (let's assume it is an approximator like NN in this case) class that can't achieve certain accuracy parameters $\leq \epsilon$ with probability greater than a certain $p$ given the number of points from which we can sample is upper bounded to $m$.
+
+Whereas, the UAC states that an approximator like a NN given enough hidden units can approximate any function (to my knowledge the function must be bounded).
+
+The point where these 2 clashes (as per my knowledge) is that if we increase the paramters in a NN, the UAC will start to hold good, but the VC dimension will increase (or hypothesis class becomes richer) and for the same $m$ our $\epsilon$ increases or $p$ decreases (not sure which one is affected).
+
+So what are the gaps in my knowledge here? How do we make these 2 consistent with each other?
+"
+"['deep-learning', 'reinforcement-learning', 'dqn']"," Title: Why is my DQN model not getting better?Body: I've created a deep Q network. My model does not get better, and can't see what I'm doing wrong. I'm new to RL.
+
+Replay Memory
+
+# Imports assumed by this snippet (omitted in my post); the Keras objects may come
+# from keras or tensorflow.keras depending on the setup.
+import random
+import numpy as np
+from tensorflow.keras.models import Sequential
+from tensorflow.keras.layers import Dense
+from tensorflow.keras.optimizers import Adam
+
+class ReplayMemory(object):
+
+    def __init__(self, input_shape, mem_size=100000):
+        self.states = np.zeros((mem_size, input_shape))
+        self.actions = np.zeros(mem_size, dtype=np.int32)
+        self.next_states = np.zeros((mem_size, input_shape))
+        self.rewards = np.zeros(mem_size)
+        self.terminals = np.zeros(mem_size)
+
+        self.mem_size = mem_size
+        self.mem_count = 0
+
+    def push(self, state, action, next_state, reward, terminal):
+
+        idx = self.mem_count % self.mem_size
+
+        self.states[idx] = state
+        self.actions[idx] = action
+        self.next_states[idx] = next_state
+        self.rewards[idx] = reward
+        self.terminals[idx] = terminal
+
+        self.mem_count += 1
+
+    def sample(self, batch_size):
+        batch_index = np.random.randint(0, min(self.mem_count, self.mem_size), batch_size)
+
+        states = self.states[batch_index]
+        actions = self.actions[batch_index]
+        next_states = self.next_states[batch_index]
+        rewards = self.rewards[batch_index]
+        terminals = self.terminals[batch_index]
+
+        return (states, actions, next_states, rewards, terminals)
+
+    def __len__(self):
+        return min(self.mem_count, self.mem_size)
+
+
+DQN Agent
+
+class DQN_Agent(object):
+
+ def __init__(self, n_actions,n_states, ALPHA=0.001, GAMMA=0.99, eps_start=1 , eps_end=0.01, eps_decay=0.005):
+ self.n_actions = n_actions
+ self.n_states = n_states
+
+ self.memory = ReplayMemory(n_states)
+
+ self.ALPHA = ALPHA
+ self.GAMMA = GAMMA
+
+ self.eps_start = eps_start
+ self.eps_end = eps_end
+ self.eps_decay = eps_decay
+
+
+ self.model = self.create_net()
+ self.target = self.create_net()
+
+ self.target.set_weights(self.model.get_weights())
+
+ self.steps_counter = 0
+
+ def create_net(self):
+
+ model = Sequential([
+ Dense(64, activation=""relu"", input_shape=(self.n_states,)),
+ Dense(32, activation=""relu""),
+ Dense(self.n_actions)
+ ])
+
+ model.compile(loss=""huber_loss"", optimizer=Adam(lr=0.0005))
+
+ return model
+
+    def select_action(self, state):
+        ratio = self.eps_end + (self.eps_start-self.eps_end)*np.exp(-1*self.eps_decay*self.steps_counter)
+        rand = random.random()
+        self.steps_counter += 1
+
+        if ratio > rand:
+            #print(""random"")
+            return np.random.randint(0, self.n_actions)
+        else:
+            #print(""not random"")
+            return np.argmax(self.model.predict(state))
+
+
+    def train_model(self, batch_size):
+        if len(self.memory) < batch_size:
+            return None
+
+        states, actions, next_states, rewards, terminals = self.memory.sample(batch_size)
+
+        q_curr = self.model.predict(states)
+        q_next = self.target.predict(next_states)
+        q_target = q_curr.copy()
+
+
+        batch_index = np.arange(batch_size, dtype=np.int32)
+
+        q_target[batch_index, actions] = rewards + self.GAMMA*np.max(q_next, axis=1)*terminals
+
+        _ = self.model.fit(states, q_target, verbose = 0)
+
+        if self.steps_counter % 10 == 0:
+            self.target.set_weights(self.model.get_weights())
+
+
+My training loop
+
+n_games = 50000
+agent = DQN_Agent(2, 4)
+
+scores = []
+avg_scores = []
+
+for epoch in range(n_games):
+ done = False
+ score = 0
+ state = env.reset()
+
+ while not done:
+ #env.render()
+ action = agent.select_action(state.reshape(1,-1))
+ next_state, reward, done, _ = env.step(action)
+
+
+ score += reward
+
+ agent.memory.push(state, action, next_state, reward, done)
+
+ state = next_state
+ agent.train_model(64)
+
+
+ avg_score = np.mean(scores[max(0, epoch-100):epoch + 1])
+ avg_scores.append(avg_score)
+ scores.append(score)
+ print(score, avg_score)
+
+"
+"['reinforcement-learning', 'papers', 'rewards', 'apprenticeship-learning']"," Title: Do all expert trajectories have the same starting state in apprenticeship learning?Body: In the apprenticeship learning algorithm described by Ng et al. in Apprenticeship Learning via Inverse Reinforcement Learning, they mention that expert trajectories come in the form of $\{s_0^i, s_1^i\, ...\}_{i=1}^m$. However, they also mentioned that $s_0 $ is drawn from distribution D. Do all expert trajectories then have to have the same starting state? Why is it not possible to compute the feature expectation based on a single trajectory?
+"
+"['classification', 'pattern-recognition']"," Title: Data classification model to detect a process in an event logBody: There are many example in python which has a ready made data set, for example there is T-Shirt pre-trained data and thousands images, within few minutes it will tell how many t-shirt images are there in those folders
+
+but how that data detect.tflite
itself is created from scratch step by step, If I wanted to manually write those data in csv how it should be done
+
+Ultimately I have data which says, click event, keyboard event, process exe
name, title of the window, and timestamp
+
+I want to detect what exactly that user doing in his computer, who told do to what to him I want the software to tell me in words and diagrams
+
+this is definitely a big software, but data classification part unclear to me
+
+I need some guidance
+"
+"['reinforcement-learning', 'probability-distribution', 'control-theory']"," Title: What does a joint probability density function have to do with Stochastic Optimal Control and Reinforcement Learning?Body: I stumbled upon a job offer from a company that was looking for someone who was good with Reinforcement Learning (applied to finance) and something in their offer caught my eye. It goes something like this:
+
+
+ We want you to be able to study the price dynamic (of a stock I suppose) and its evolution in order to extract a Joint PDF that will be used in the Optimal Stochastic Control of a Loss Function (or gain)
+
+
+The thing is, I understand what each of these things means and how they are used separately (from my background in Control theory & dynamical systems), and I have worked with fitting Joint PDFs and Copulas before, but I don't understand how a Joint PDF would help with the ""Optimal Stochastic Control of a Loss Function""? Thanks.
+"
+"['gaming', 'deepmind']"," Title: Is AlphaStar still competing in the Star Craft ladder?Body: Last year it was announced that Deepmind's Starcraft playing bot AlphaStar was taking on human players in the Starcraft ladder system (some kind of league system as far as I can tell) and that it had reached the Grandmaster level.
+
+Since then I haven't really heard anything anymore about the progress of Alphastar. Given that I don't know anything about Starcraft I was wondering whether somebody has a better clue as to what Alphastar is up to? Is it still playing online? Or when did it stop playing? What was the improvement trajectory during the time it played online? Basically, how did this pan out, as seen from the perspective of the Starcraft community?
+"
+"['objective-functions', 'generative-adversarial-networks', 'papers', 'pytorch']"," Title: Why does GAN loss converge to log(2) and not -log(2)?Body: In Goodfellow's paper, he says:
+
+
+ Hence, by inspecting Eq. 4 at $D^*_G (\mathbf{x}) = \frac{1}{2}$, we find $C(G) = \log \frac{1}{2}+ \log \frac{1}{2} = − \log 4$. To see that this is the best possible value of $C(G)$
+
+
+i.e. $D$ and $G$ loss should converge to $\log \frac{1}{2}$. This makes perfect sense. When I train a GAN in PyTorch with BCEloss, the loss for $D$ and $G$ converge to $\log(2)$, the negative of what Goodfellow states and what I'd expect.
+
+What am I missing?
+"
+"['deep-learning', 'optimization', 'batch-normalization', 'vanishing-gradient-problem', 'exploding-gradient-problem']"," Title: What effect does batch norm have on the gradient?Body: Batch norm is a technique where they essentially standardize the activations at each layer, before passing it on to the next layer. Naturally, this will affect the gradient through the network. I have seen the equations that derive the back-propagation equations for the batch norm layers. From the original paper: https://arxiv.org/pdf/1502.03167.pdf
+
+
+
+However, I have trouble understanding if there is an intuitive understanding of what effect it actually has on the network. For instance, does it help with the exploding gradient problem, since the activations are rescaled, and the variance of them is constrained?
+"
+"['search', 'constraint-satisfaction-problems']"," Title: How can I formulate a nonogram problem as a constraint satisfaction problem?Body: I've just started learning CSP and I find it quite exciting. Now I'm facing a nonogram solving problem and I want to solve it using backtracking with CSP.
+The first problem that I face is that I cannot quite figure out what the variables, domains, and constraints could be. My first thought was to make every field (those squares) a variable with the domain $D_i = \{1,0\}$, where 1 means that a certain field has been colored black and 0 means white.
+So far I've been mostly learning binary constraints, and I was thinking of using the AC-3 or forward checking algorithm for propagation during the backtracking algorithm.
+As far as I know, constraints of any arity can be represented as a set of binary constraints, so that would enable me to use the algorithms I mentioned. But that leads me to the problem of defining the constraints. If every field were a variable, then each line and column would be a constraint, based on how certain lines should be colored (the numbers defining a line, for example 2,3,2).
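+To make this concrete, here is roughly what I have in mind for a single line constraint (a sketch, not a full solver):
+
+def satisfies_clue(cells, clue):
+    # cells: list of 0/1 values for one row/column; clue: e.g. (2, 3, 2)
+    runs, run = [], 0
+    for c in cells:
+        if c == 1:
+            run += 1
+        elif run:
+            runs.append(run)
+            run = 0
+    if run:
+        runs.append(run)
+    return tuple(runs) == tuple(clue)
+
+print(satisfies_clue([1, 1, 0, 1, 1, 1, 0, 1, 1], (2, 3, 2)))  # True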
+But it's all new and quite hard for me to imagine and come up with. I've been reading some articles and papers on that but they were too advanced for me.
+So, does anybody have an idea of how I can formulate a nonogram problem as a constraint satisfaction problem?
+"
+"['neural-networks', 'training', 'backpropagation', 'objective-functions']"," Title: How do weights changes handles during back-propagation when there are unknown labelsBody: I have a question about how weights are updated during back-propagation for some of my samples that have unknown labels (please note, unknown, not missing). The reason they are unknown is because this is genomic data and to generate these data would take 8 years of lab work! Nevertheless I have genomic data for samples that have multiple-labels, sex age organ etc. this is a multi-class multi-label problem.
+
+For most classes, ALL labels are complete. For two or three classes, there are unknown labels. an example would be the developmental stage of samples at age x, the developmental stage of sample at age y are known. the developmental stage of samples at age Z are unknown! (generating this data is what would take most time)... I would therefore like to include all this data during training as it is indispensable. I would like to generate the sigmoid probability and assign unknown label 'Z' as belonging to developmental stage 0 or 1 (known classes) based on a threshold (say >= 0.5)... When one-hot encoding the unknown labels simply have no ground truth, 0 for class developing and 0 for not-developing as follows (example of 3 samples shown for class in question):
+
+ [[1., 0.
+ [0., 1.
+ [0., 0. ......]]
+
+
+the first row is known sample 1, second is known sample 2 and 3rd is unknown, and therefore has no ground truth. It is this sample i would like to assign a label of known class 1 or 2 based on the 'highest probability'.. based on reading and discussions, this is the direction i will be taking for this task, as it can be validated in the lab later... so the approach is, include in training and see what the network 'thinks' it is.
+
+My question is: how does back-propagation handle these known and unknown samples with respect to weight updates?
+
+I should note i have trained the network with ~90% validation performance. for all classes for which is there is complete data, the predictions are great. and the same for classes for which there is unknown data. It can accurately classify the samples for which there is known developmental stages... and it does assign a probability value to those samples that have the 'unknown' label (0,0), so i would really like to know how back-prop is handling these samples for the classes where there are unknown ground truth labels.
+
+thank you!
+"
+"['deep-learning', 'comparison', 'gradient-descent', 'stochastic-gradient-descent', 'mini-batch-gradient-descent']"," Title: What is the difference between batch and mini-batch gradient decent?Body: I am learning deep learning from Andrew Ng's tutorial Mini-batch Gradient Descent.
+
+Can anyone explain the similarities and dissimilarities between batch GD and mini-batch GD?
+"
+"['neural-networks', 'machine-learning', 'data-preprocessing']"," Title: How should I deal with variable-length inputs for neural networks?Body: I am a very beginner in the field of AI. I am basically a Pharma Professional without much coding experience. I use GUI-based tools for the neural network.
+I am trying to develop an ANN that receives as input a protein sequence and produces as output a drug molecule. Drug molecules can be represented as fixed-length binary (0-1). This length is 881 bits.
+However, I do not know how to transform protein sequences of variable length into a fixed-length binary representation.
+So, how should I deal with variable-length inputs for a neural network? What is the best way?
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks', 'computer-vision', 'fully-convolutional-networks']"," Title: Can a fully convolutional network always return an image of the same size as the original?Body: I'm trying to perform a segmentation task on images of multiple sizes using fully convolutional neural networks.
+
+Currently, I'm using EfficientNet as a feature extractor, and adding a deconvolution/backwards convolution/transposed convolution layer as described in the original Fully Convolutional Networks for Semantic Segmentation paper.
+
+But this transposed convolution layer doesn't return a filter of a size equivalent to the original image for images of varying sizes.
+
+For example, let's assume the original image is $100 \times 100$, and the last layer contains filters of size $50 \times 50$. To get a filter of the same size as the original, you would need a transposed convolution layer of size $51 \times 51$.
+
+Now, assume you passed in an image of size $200 \times 200$. The last layer would contain filters of size $100 \times 100$. That same transposed convolutional filter of size $51 \times 51$ would result in an output of size $150 \times 150$.
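+A quick check of that arithmetic with the usual transposed-convolution size formula (assuming stride 1 and no padding):
+
+def transposed_conv_out(size_in, kernel, stride=1, padding=0):
+    return (size_in - 1) * stride - 2 * padding + kernel
+
+print(transposed_conv_out(50, 51))    # 100, matching the 100x100 original
+print(transposed_conv_out(100, 51))   # 150, i.e. only 150x150 for the 200x200 original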
+
+Is there any way to make it so that a fully convolutional network always returns an image of the same size as the original?
+"
+"['reinforcement-learning', 'q-learning', 'dqn']"," Title: Whats the correct loss function to use during deep Q-learning (discrete action space)Body: After playing around with normal Q-learning I have decided to switch to deep Q-learning and I have encountered this problem.
+
+As I understand, for a task with a discrete action space where there are 4 possible actions (let's say left, right, up, down), my DQN needs to have four outputs. Then the argmax of the prediction needs to be taken, which will be my predicted action (if argmax(prediction)==2 then I will pick the third action, in my case up).
+
+If I use Mean Squared Error loss for my network, then the dimension of the output needs to be the same as the dimension of the expected target variables. I am calculating the target variables using the following code:
+
+target = rewards[i] + GAMMA * max(Qs_next_state[i])
+
+This gives me a single number (while the predicted output is four-dimensional), so as a workaround I decided to use a custom loss function:
+
+def custom_loss(y_true, y_pred): # where y_true is target, y_pred is output of neural net
+ return ((max(y_pred) - y_true)**2) / BATCH_SIZE
+
+
+But I am not sure if it is correct: from what I have read in tutorials/papers, loss functions do not have this max() in them. But how else am I going to end up with the same dimensionality between the targets and the NN outputs? What is the correct approach here?
+"
+"['machine-learning', 'reinforcement-learning', 'deep-learning', 'comparison', 'algorithmic-trading']"," Title: What are the pros and cons of deep learning and machine learning to develop a trading system?Body: As I want to start coding a new Trading AI this year (first based on Python and later maybe in C++) I stumbled over the following question:
+Today, I would like to make a pro/contra list with you in the area of deep learning vs machine learning. The difference should be clear to most of you. If not, here is a nice explanation from Hackernoon.
+Up to now, I was convinced that this future project will be based on Tensorflow, Keras, etc. However, the following came to my mind afterward.
+Most of you will probably have heard of Pluribus already. Dr. Sandholm and Mr. Brown (as work for his Ph.D.) were the first to program an AI that won against 6 poker world champions in No-Limit-Texas Holdem Poker. This seemed impossible because poker is a game of imperfect information. If you haven't read/seen their work until now, here's a link to a Facebook blog post Facebook, Carnegie Mellon build first AI that beats pros in 6-player poker. See also the paper Superhuman AI for multiplayer poker and this video.
+From the work, it is clear that they wrote the whole thing in C++ and WITHOUT the use of any deep learning library but exclusively on the basis of machine learning. This was also confirmed to me by both of them via email. So it was possible to bring the AI in under 24H to a level that could beat 6 Poker World Champions without any problems.
+The stock and crypto market is nothing else. A game of imperfect information. The prices of a crypto coin or stock are influenced by an incredible number of factors. This includes of course prices from the past but also current media (as currently seen with covid-19) and data from the economy.
+We want to grab the data out of the "Big Players", like CoinCap API, CryptoAPI.io for all kinds of historical and new charts prices, etc. The same we will do with the Yahoo Finance API to grab data out of the stock market. Depending on the size of this project and how it will develop, we want to implement also some kind of NLP to grab the most out of Economy Data like dailyfx news to predict some in/decreases for some stocks but this is a future feature.
+So, basically, the main question is: should we use neural networks (deep learning) or machine learning?
+All this leads me to the conclusion that I am not sure what the better option for a trading bot would be. I know that the training of AI-based on deep learning would take much longer than based on machine learning, but is it safe to say that the results are really better?
+What are the pros and cons of deep learning and machine learning to develop a trading system?
+"
+"['machine-learning', 'deep-learning', 'reinforcement-learning', 'ai-design']"," Title: How can I develop a reinforcement learning agent that plays memory cards game?Body: I am new to RL, and I am thinking of doing a little project. The goal of the project is to learn an agent play the memory game with cards.
+
+I already created the program for detecting the cards on the table (with YOLO) and classifying them what kind of object they are.
+
+I want an agent to be able to play the memory game by itself, without being explicitly told the rules and such.
+
+Any tips on how to get started to make the RL process easier?
+"
+"['neural-networks', 'natural-language-processing', 'data-preprocessing', 'structured-data']"," Title: Is text preprocessing really all that necessary for NLP?Body: As a first step in many NLP courses, we learn about text preprocessing. The steps include lemmatization, removal of rare words, correcting typos etc. But I am not so sure about the actual effectiveness of doing such a step; in particular, if we are learning a neural network for a downstream task, it seems like modern state of the art (BERT, GPT-2) just take essentially raw input.
+
+For instance, this ACL paper seems to show that the result of text preprocessing is mixed, to say the least.
+
+So is text preprocessing really all that necessary for NLP? In particular, I want to contrast/compare against vision and tabular data, where I have empirically found that standardization usually actually does help. Feel free to share your personal experiences/what use cases where text preprocessing helps!
+"
+"['proofs', 'computational-learning-theory', 'vc-dimension', 'vc-theory', 'hypothesis-class']"," Title: How do I prove that $\mathcal{H}$, with $\mathcal{VC}$ dimension $d$, shatters all subsets with size less than $d-1$?Body: If a certain hypothesis class $\mathcal{H}$ has a $\mathcal{VC}$ dimension $d$ over a domain $X$, how can I prove that $H$ will shatter all subsets of $X$ with size less than $d$, i.e. $\mathcal{H}$ will shatter $A \subset X$ where $|A| \leq d-1$?
+"
+"['neural-networks', 'deep-learning', 'math', 'activation-functions', 'relu']"," Title: Is ReLU a non-linear activation function?Body: According to this blog post
+
+
+ The purpose of an activation function is to add some kind of non-linear property to the function
+
+
+The sigmoid is typically used as an activation function of a unit of a neural network in order to introduce non-linearity.
+
+Is ReLU a non-linear activation function? And why? If not, then why is it used as an activation function of neural networks?
+"
+"['deep-learning', 'applications', 'programming-languages']"," Title: Is there any programming language designed by deep learning?Body: I know that AI can be used to design printed circuit boards (PCBs), so it can be used to solve complex tasks.
+Is there any programming language designed by deep learning (or any other AI technique)?
+"
+"['machine-learning', 'activation-functions', 'relu']"," Title: Are PreLU and Leaky ReLU better than ReLU in the case of noisy labels?Body: Let's assume I want to build a semantic segmentation algorithm, based on Multires-UNET. My GT-masks are messy and generated by a GAN, but they are getting better and better over time. The goal is knowledge expansion (based on the paper Noisy-Student).
+
+Can you generally say that PreLU and Leaky Relu are better for noisy labels (or imperfect ones), like the situation in GANs in general?
+"
+"['natural-language-processing', 'long-short-term-memory', 'bert']"," Title: Building a template based NLG system to generate a report from dataBody: I am a newbie to NLP and NLG. I am tasked to develop a system to generate a report based on a given data table. The structure of the report and the flow is predefined. I have researched on several existing python libraries like BERT, SimpleNLG but they don't seem to fit my need.
+
+For example:
+input_data(country = 'USA', industry = 'Coal', profit = '4m', trend = 'decline')
+output: The coal industry in USA declined by 4m.
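+A sketch of the kind of template filling I mean (purely illustrative; here the values are already inflected, while a real system would still need to turn 'decline' into 'declined'):
+
+row = {'country': 'USA', 'industry': 'coal', 'profit': '4m', 'trend': 'declined'}
+print('The {industry} industry in {country} {trend} by {profit}.'.format(**row))
+# -> The coal industry in USA declined by 4m.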
+
+The input data array can be different combinations (and dynamic) based on a data table. I would like to know if there is any python package available, or any resource discussing a practical approach for this.
+"
+"['reinforcement-learning', 'notation', 'expectation', 'apprenticeship-learning']"," Title: What does the notation ${s'\sim T(s,a,\cdot)}$ mean?Body: I have been seeing notations on Expectations with their respective subscripts such as $E_{s_0 \sim D}[V^\pi (s_0)] = \Sigma_{t=0}^\infty[\gamma^t\phi(s_t)]$. This equation is taken from https://ai.stanford.edu/~ang/papers/icml04-apprentice.pdf and $Q^\pi(s,a,R) = R(s) + \gamma E_{s'\sim T(s,a,\cdot)}[V^\pi(s',R)]$ ,in the case of the Bayesian IRL paper.(https://www.aaai.org/Papers/IJCAI/2007/IJCAI07-416.pdf)
+
+I understand that $s_0 \sim D$ means that the starting state $s_0$ is drawn from a distribution of starting states $D$. But how do we understand the latter with subscript ${s'\sim T(s,a,\cdot)}$ ? How is $s'$ drawn from a distribution of transition probabilities?
+"
+"['reinforcement-learning', 'q-learning', 'algorithm-request', 'tic-tac-toe', 'zero-sum-games']"," Title: Non-Neural Network algorithms for large state space in zero sum gamesBody: I was reading online that tic-tac-toe has a state space of $3^9 = 19,683$. From my basic understanding, this sounds too large to use tabular Q-learning, as the Q table would be huge. Is this correct?
+If that is the case, can you suggest other (non-NN) algorithms I could use to create a TTT bot to play against a human player?
+"
+"['convolutional-neural-networks', 'classification', 'object-detection', 'yolo']"," Title: Object detection: combine many classes into one?Body: I am trying to train a model that detects logos in documents. Since I am not really interested in what kind of logo there is, but simply if there is a logo, does it make sense to combine all logos into 1 logo class?
+
+Or are ""logos"" too diverse to group them together (like some logos are round, some are rectangular, some are even text based etc.) and the diversity of features will just make it hard for the neural network to learn? Or doesn't it matter?
+
+(I am currently trying out the YOLOv3 architecture to begin with. Any other suggestions better suited are also welcome)
+"
+['constraint-satisfaction-problems']," Title: Where can I find example data for Nonogram solver?Body: I made CSP nonogram solver and I wanna test it on some bigger data.
+
+Where can I find such data to test my program? I've been looking on the internet but I couldn't find anything.
+"
+"['philosophy', 'agi', 'human-like']"," Title: Would a new human-like general artificial intelligence be more similar, in terms of eduction, to a toddler or an adult human?Body: The naive concept of a general AI, or strong AI, or artificial general intelligence, is some kind of software that can answer questions like
+
+
+ What is the volume of a cube that is 1 m wide?
+
+
+or even
+
+
+ Why are there only two political parties in the US?
+
+
+The second question requires external knowledge and high-level reasoning: for example, that US means USA in this context, knowledge of the constitution, and that having two parties is caused by the mathematical properties of the election system.
+
+But I would expect that a newborn human child is intelligent in the sense of intelligence that is used in general artificial intelligence, yet toddlers cannot answer these questions.
+
+That is not because an infant is not intelligent, but because it is not educated, I think.
+
+What is the apparent level of education of an artificial intelligence that could be called human-like?
+"
+"['deep-learning', 'classification', 'long-short-term-memory', 'sequence-modeling']"," Title: length independent sequence classification methodsBody: I am looking to do sequence classification using deep learning. The length of my sequences can vary from a few hundred to several tens of thousands of characters. I was wondering what is a good approach for doing this. I had success with splitting a sequence into subsequences a few hundred characters long and using LSTMs, but then one is faced with the task of putting the results of each of those together and it is nontrivial as well. Any help would be appreciated.
+"
+"['deep-learning', 'math', 'objective-functions', 'calculus']"," Title: Is there any wrong in my focal loss derivation?Body: Assume $\mathbf{X} \in R^{N, C}$ is the input of the softmax $\mathbf{P} \in R^{N, C}$, where $N$ is number of examples and $C$ is number of classes:
+
+$$\mathbf{p}_i = \left[ \frac{e^{x_{ik}}}{\sum_{j=1}^C e^{x_{ij}}}\right]_{k=1,2,...C} \in R^{C} \mbox{ is a row vector of } \mathbf{P}$$
+
+Consider example $i$-th, because softmax function $\mathbf{p}:R^C \mapsto R^C$ (eliminate subscript $i$ for ease notation), so the derivative of vector-vector mapping is Jacobian matrix $\mathbf{J}$:
+
+$$\mathbf{J}_{\mathbf{p}}(\mathbf{x}) = \left[ \frac{\partial \mathbf{p}}{\partial x_1}, \frac{\partial \mathbf{p}}{\partial x_2}, ..., \frac{\partial \mathbf{p}}{\partial x_C} \right] =
+\begin{bmatrix}
+ \frac{\partial p_1}{\partial x_1} & \frac{\partial p_1}{\partial x_2} & \dots & \frac{\partial p_1}{\partial x_C} \\
+ \frac{\partial p_2}{\partial x_1} & \frac{\partial p_2}{\partial x_2} & \dots & \frac{\partial p_2}{\partial x_C} \\
+ \dots & \dots & \dots & \dots \\
+ \frac{\partial p_C}{\partial x_1} & \frac{\partial p_C}{\partial x_2} & \dots & \frac{\partial p_C}{\partial x_C}
+\end{bmatrix}
+\in R^{C, C}
+$$
+
+$\mathbf{J}_{\mathbf{p}}(\mathbf{x})$ is called the derivative of vector ${\mathbf{p}}$ with respect to vector $\mathbf{x}$
+
+$$\mbox{1) Derivative in diagonal:}\frac{\partial p_{k}}{\partial x_{k}} = \frac{\partial}{\partial x_{k}}\left( \frac{e^{x_k}}{\sum_{j=1}^C e^{x_j}} \right)$$
+
+$$ = \frac{\left( \frac{\partial e^{x_k}}{\partial x_k} \right)\sum_{j=1}^C e^{x_j} - e^{x_k}\left(\frac{\partial \sum_{j=1}^C e^{x_j}}{\partial x_k} \right)}{\left(\sum_{j=1}^C e^{x_j}\right)^2} \mbox{ (Quotient rule) }$$
+
+$$ = \frac{e^{x_k}\sum_{j=1}^C e^{x_j} - e^{x_k} e^{x_k}}{\left(\sum_{j=1}^C e^{x_j}\right)^2} = \frac{e^{x_k}(\sum_{j=1}^C e^{x_j} - e^{x_k})}{\left(\sum_{j=1}^C e^{x_j}\right)^2}$$
+
+$$ = \frac{e^{x_k}}{\sum_{j=1}^C e^{x_j}} \left(\frac{\sum_{j=1}^C e^{x_j} - e^{x_k}}{\sum_{j=1}^C e^{x_j}}\right) = \frac{e^{x_k}}{\sum_{j=1}^C e^{x_j}} \left(1 - \frac{e^{x_k}}{\sum_{j=1}^C e^{x_j}}\right)$$
+
+$$\rightarrow \frac{\partial p_{k}}{\partial x_{k}} = p_{k}(1-p_{k})$$
+
+$$\mbox{2) Derivative not in diagonal } k \neq c \mbox{ :} \frac{\partial p_{k}}{\partial x_{c}} = \frac{\partial}{\partial x_{c}}\left( \frac{e^{x_k}}{\sum_{j=1}^C e^{x_j}} \right)$$
+
+$$ = \frac{\left( \frac{\partial e^{x_k}}{\partial x_c} \right)\sum_{j=1}^C e^{x_j} - e^{x_k}\left(\frac{\partial \sum_{j=1}^C e^{x_j}}{\partial x_c} \right)}{\left(\sum_{j=1}^C e^{x_j}\right)^2} \mbox{ (Quotient rule) }$$
+
+$$ = \frac{0 - e^{x_k} e^{x_c}}{\left(\sum_{j=1}^C e^{x_j}\right)^2} = -\frac{e^{x_k}}{\sum_{j=1}^C e^{x_j}}\frac{e^{x_c}}{\sum_{j=1}^C e^{x_j}}$$
+
+$$\rightarrow \frac{\partial p_{k}}{\partial x_{c}} = -p_{k}p_{c}$$
+
+$$\rightarrow \mathbf{J}_{\mathbf{p}}(\mathbf{x}) =
+\begin{bmatrix}
+ p_1(1-p_1) & -p_1p_2 & \dots & -p_1p_C \\
+ -p_2p_1 & p_2(1-p_2) & \dots & -p_2p_C \\
+ \vdots & \vdots & \ddots & \vdots \\
+ -p_Cp_1 & -p_Cp_2 & \dots & p_C(1-p_C)
+\end{bmatrix}$$
+
+
+- Focal Loss: $\displaystyle FL = -\sum_{k=1}^C y_{k} \alpha_{k}(1-p_k)^\gamma \log (p_k)$
+
+
+$$\nabla_{\mathbf{x}} FL = \nabla_{\mathbf{p}}FL (\mathbf{J}_{\mathbf{p}}(\mathbf{x}))^T $$
+
+$$\nabla_{\mathbf{p}} FL =
+\begin{bmatrix}
+ \frac{\partial FL}{\partial p_1}\\
+ \frac{\partial FL}{\partial p_2}\\
+ \vdots \\
+ \frac{\partial FL}{\partial p_C}
+\end{bmatrix} \mbox{ where } \frac{\partial FL}{\partial p_k}
+= - y_k\alpha_k \left(-\gamma(1-p_k)^{\gamma-1} \log(p_k) + \frac{(1-p_k)^\gamma}{p_k} \right) = y_k \alpha_k\gamma(1-p_k)^{\gamma-1}\log(p_k) - y_k\alpha_k\frac{(1-p_k)^\gamma}{p_k}$$
+
+$$\nabla_{\mathbf{x}} FL =
+\begin{bmatrix}
+ \frac{\partial FL}{\partial p_1}\\
+ \frac{\partial FL}{\partial p_2}\\
+ \vdots \\
+ \frac{\partial FL}{\partial p_C}
+\end{bmatrix}^T
+\begin{bmatrix}
+ \frac{\partial p_1}{\partial x_1} & \frac{\partial p_1}{\partial x_2} & \dots & \frac{\partial p_1}{\partial x_C} \\
+ \frac{\partial p_2}{\partial x_1} & \frac{\partial p_2}{\partial x_2} & \dots & \frac{\partial p_2}{\partial x_C} \\
+ \vdots & \vdots & \ddots & \vdots \\
+ \frac{\partial p_C}{\partial x_1} & \frac{\partial p_C}{\partial x_2} & \dots & \frac{\partial p_C}{\partial x_C}
+\end{bmatrix} =
+\begin{bmatrix}
+ \sum_{k=1}^C \left(\frac{\partial FL}{\partial p_k}\frac{\partial p_k}{\partial x_1}\right)\\
+ \sum_{k=1}^C \left(\frac{\partial FL}{\partial p_k}\frac{\partial p_k}{\partial x_2}\right)\\
+ \vdots \\
+ \sum_{k=1}^C \left(\frac{\partial FL}{\partial p_k}\frac{\partial p_k}{\partial x_C}\right)
+\end{bmatrix}^T \in R^C
+$$
+
+$$\mbox{Case 1: }\displaystyle \frac{\partial FL}{\partial p_k}\frac{\partial p_k}{\partial x_k} \forall k=1,2,...,C$$
+
+$$\frac{\partial FL}{\partial p_k} \frac{\partial p_k}{\partial x_k} = y_k \alpha_k\gamma(1-p_k)^{\gamma-1}\log(p_k)p_k(1-p_k) - y_k\alpha_k\frac{(1-p_k)^\gamma}{p_k}p_k(1-p_k)$$
+
+$$ = y_k \alpha_k (1-p_k)^{\gamma}(\gamma p_k \log(p_k) - 1 + p_k) $$
+
+$$\mbox{Case 2: } (k \neq c)\displaystyle \frac{\partial FL}{\partial p_k}\frac{\partial p_k}{\partial x_c}$$
+
+$$\frac{\partial FL}{\partial p_k} \frac{\partial p_k}{\partial x_c} = - y_k\alpha_k\gamma(1-p_k)^{\gamma-1}\log(p_k)p_kp_c + y_k\alpha_k\frac{(1-p_k)^\gamma}{p_k}p_kp_c$$
+
+$$ = - y_k\alpha_k (1-p_k)^{\gamma-1}p_c(\gamma p_k \log(p_k) - 1 + p_k) $$
+
+$$\mbox{For each } d=1,2,...,C \mbox{ : }\sum_{k=1}^C \left(\frac{\partial FL}{\partial p_k} \frac{\partial p_k}{\partial x_d}\right) = y_d \alpha_d (1-p_d)^{\gamma}(\gamma p_d \log(p_d) - 1 + p_d) + \sum_{c \neq d}^C \left( - y_d\alpha_d (1-p_d)^{\gamma-1}p_c(\gamma p_d \log(p_d) - 1 + p_d) \right) = y_d\alpha_d(1-p_d)^{\gamma-1}(\gamma p_d \log(p_d) - 1 + p_d)\left(1-p_d -\sum_{c \neq d}^C(p_c)\right) $$
+
+$$\rightarrow \nabla_{\mathbf{x}} FL = \left[ y_d\alpha_d(1-p_d)^{\gamma-1}(\gamma p_d \log(p_d) - 1 + p_d)\left(1-p_d -\sum_{c \neq d}^C(p_c)\right) \right]_{d=1,2,...,C}$$
+
+However, the problem is $\left(1-p_d -\sum_{c \neq d}^C(p_c)\right) = 0$ (because sum of all probabilities is 1) then the whole expression collapses to $0$.
+
+Is there any wrong in my focal loss derivation?
+
+
+
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'recurrent-neural-networks', 'feedforward-neural-networks']"," Title: How do we minimize loss for a single neuron with a feedback?Body: Suppose we had a series of single-dimensional data points $X = \{x_1, x_2, \dots, x_n \}$, where $n$ is the number of data points and there corresponding output values $T = \{t_1, t_2, \dots, t_n \}$.
+
+Now, I want to train a single neuron network given below to learn from the data (the model is bad, but I just wanted to try it out as an exercise).
+
+
+
+The output function of this neuron would be a recursive function as:
+
+$$
+y = f(a_0 + a_1x + a_2 y)
+$$
+
+where
+
+$$
+f(x) = \frac{1}{1 + e^{-x}}
+$$
+
+for a given $x$.
+
+The error function for such a model would be:
+
+$$
+e = \sum_{i=1}^N (y_i - t_i)^2
+$$
+
+How should I minimise this loss function? What are the derivatives that I need to use to update the parameters?
+
+(Also, I am new to this problem, therefore it would be really helpful if you tell me sources/books to read about such problems.)
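+For concreteness, this is how I currently evaluate the model numerically (a sketch of my own: for each $x$ the output $y$ is the fixed point of $y = f(a_0 + a_1 x + a_2 y)$, found by simple iteration):
+
+import numpy as np
+
+def f(z):
+    return 1.0 / (1.0 + np.exp(-z))
+
+def predict(x, a0, a1, a2, n_iter=50):
+    y = 0.0
+    for _ in range(n_iter):
+        y = f(a0 + a1 * x + a2 * y)   # iterate the recursion until y settles
+    return y
+
+def loss(params, X, T):
+    a0, a1, a2 = params
+    return sum((predict(x, a0, a1, a2) - t) ** 2 for x, t in zip(X, T))
+
+print(loss((0.1, 0.5, -0.3), [0.0, 1.0, 2.0], [0.2, 0.6, 0.8]))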
+"
+"['reinforcement-learning', 'optimization', 'deep-rl']"," Title: Which deep reinforcement learning algorithm is appropriate for my problem?Body: My task is to solve an optimization problem with deep reinforcement learning. I read about several algorithms like DQN, PPO, DDPG, and A2C/A3C but use cases always seem to be problems like video games (sparse rewards, etc.) or robotics (continuous action spaces, etc.). Since my problem is an optimization issue, I wonder which algorithm is appropriate for my setting:
+
+
+- limited number of discrete actions (like 20)
+- high-dimensional states (like 250 values)
+- instant reward after every single action (not only at the end of an episode)
+- a single action can affect the state quite a lot
+
+
+There's no ""goal"" like in a video game, an episode ends after a certain number of actions. I'm not quite sure which algorithm is appropriate for my use case.
+"
+"['deep-learning', 'recurrent-neural-networks', 'long-short-term-memory', 'q-learning']"," Title: Do RNN solves the need for LSTM and/or multiple states in Deep Q-Learning?Body: Introduction
+I am trying to set up a Deep Q-Learning agent. I have looked at the papers Playing Atari with Deep Reinforcement Learning and Deep Recurrent Q-Learning for Partially Observable MDPs, as well as at the question How does LSTM in deep reinforcement learning differ from experience replay?.
+Current setup
+I currently take the current state of the game, not a picture but rather the position of the agent, the way the agent is facing, and some inventory items. I currently only feed the state $S_t$ as an input to a 3-layer NN (1 input, 2 hidden, 3 output) to estimate the Q-values of each action.
+The algorithm that I use is almost the same as the one used in Playing Atari with Deep Reinforcement Learning, with the only difference that I do not train after each timestep but rather sample mini_batch*T at the end of each episode and train on that, where T is the number of time-steps in that episode.
+The issue
+At the current state, the agent does not learn within 100 00 episodes, which is about 100 00 * 512 training iterations. This made me consider that something is not working, and this is where I realised that I do not take into account any of the history of the previous steps.
+What I currently struggle with is sending multiple time-step states to the NN. The reason for this is the complexity of the game/program I am using. According to Deep Recurrent Q-Learning for Partially Observable MDPs, an LSTM could be a solution for this; however, I would prefer to manually code a RNN rather than use an LSTM. Would a RNN with something like the following structure not have a chance of working?
+
+Also, as far as I know, RNN need the inputs to be fed in sequence and not randomly sampled?
+"
+"['neural-networks', 'ai-design']"," Title: How can I process neural network with 25000 input nodes?Body: I'm trying to build a neural network between protein sequence and its drug fingerprint. My input size is 20000. The output size is 881. The sample size is 610.
+
+Can I process this huge neural network? But how? And in which tool?
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'backpropagation', 'relu']"," Title: Why do DeconvNet use ReLU in the backward pass?Body: Why does DeconvNet (Zeiler, 2014) use ReLU in the backward pass (after unpooling)? Are not the feature maps values already positive due to the ReLU in the forward pass? So, why do the authors apply the ReLU again coming back to the input?
+
+update: I better explain my problem:
+
+given an input image $x$ and ConvLayer $CL$ composed of:
+
+
+- a convolution
+- an activation function ReLU
+- a pool operation
+
+
+$f$ is the output of ConvLayer given an input $x$, i.e. $f=CL(x)$.
+
+So, the Deconv target is to ""reverse"" the output $f$ (the feature map) to restore an approximate version of $x$. To this aim, the authors define a function $CL^{-1}$ composed of 3 subfunctions:
+
+a. unpool
+
+b. activation function ReLU (useless in my opinion, because $f$ is already positive due to the application of the 2. step in $CL(f)$)
+
+c. transposed convolution.
+
+In other words $x\simeq CL^{-1}(f)$ where $CL^{-1} (f) = transpconv(relu(unpool(f)))$. But, if $f$ is the output computed as $f=CL(x)$, it is already positive, so the b. step is useless.
+
+This is what I understood from the paper. Where am I wrong?
+"
+"['neural-networks', 'convolutional-neural-networks']"," Title: Interpreting I/O Transformation Matrix in ConvolutionBody: I've been reading this article on convolutional neural networks (I'm a beginner) - and I'm stuck at a point.
+
+What I understand: We have a 4x4 input, and want to transform it to a 2x2 grid. I'm visualising this as a kernel sliding over the 4x4 grid, with just the right number of strides, so as to get 4 outputs which constitute the 2x2 grid (there are animations right above this part of the page in the link attached). The model chooses to represent the 4x4 grid as a vector of length 16. Also, the 4x16 transformation matrix when pre-multiplied to the input vector, produces the output vector, which is mapped back to a 2D grid. Is this right?
+Moving on, another screenshot from the same page-
+
+Is it that both the matrices are really the same, and a lot of weights just happen to be zero in the weight matrix, which is what the second matrix is depicting? In that case even, why so many zeros?
+P.S. Thanks a lot for being so patient and reading to the end of this post, I really appreciate it. I'm a beginner, and really interested in these topics, and I'm self studying from various online resources - hence, any help is appreciated. I hope this is the right platform for this post.
+"
+"['convolutional-neural-networks', 'signal-processing']"," Title: Aren't all discrete convolutions (not just 2D) linear transforms?Body:
+
+The image above, a screenshot from this article, describes discrete 2D convolutions as linear transforms. The idea used, as far as I understand, is to represent the 2 dimensional $n$x$n$ input grid as a vector of $n^2$ length, and the $m$x$m$ output grid as a vector of $m^2$ length. I don't see why this can't be generalised to higher-dimensional convolutions, since a transformation matrix can be constructed from one vector to another, no matter the length (right?)
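+To make my reasoning concrete, a small 1D check (my own example): the sliding-window computation and the multiplication by a banded matrix give the same result.
+
+import numpy as np
+
+x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
+k = np.array([1.0, 0.0, -1.0])
+
+direct = np.correlate(x, k, mode='valid')   # sliding-window (cross-correlation) computation
+
+M = np.zeros((3, 5))
+for i in range(3):
+    M[i, i:i + 3] = k                       # each output row holds a shifted copy of k
+as_matrix = M @ x
+
+print(direct, as_matrix)                    # both give [-2. -2. -2.]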
+
+My question: Aren't all discrete convolutions (not just 2D) linear transforms?
+
+Are there cases where such a transformation matrix cannot be found?
+"
+"['neural-networks', 'deep-learning', 'classification', 'computer-vision', 'performance']"," Title: How to estimate the accuracy upper limit of any CNN model over a computer vision classification taskBody: We are given a computer vision classification task, that is, a task that asks us to predict the category of an image over $n$ predefined classes (the so-called closed set classification problem).
+
+Question: Is it possible to give an estimate on what is the best accuracy one is likely achieve using an end-to-end CNN model (possibly, using a popular backbone) in this task? Do the performances of state-of-the-arts models on open datasets serve as a good reference? If someone claims that they achieve certain performance with some popular CNN architecture, how do we know s/he is not bragging?
+
+You may or may not have access to the training dataset yet. The testing dataset shall be something close to the real-world production scenario. I know this is too vague, but just assume you have a fair judge.
+
+Background: Product teams sometimes asks engineering teams for quick (and dirty) solutions. Engineering teams want to assess the feasibility before say ""Yes we can do $95\%$"" and officially launch (and be responsible) the projects.
+"
+"['deep-learning', 'performance', 'mean-squared-error']"," Title: Is it normal to have the root mean squared error greater on the test dataset than on the training dataset?Body: I am new to deep learning.
+
+I am training a model and I am getting a root mean squared error (RMSE) greater on the test dataset than on the training dataset.
+
+
+
+What could be the reason behind this? Is this acceptable to get the RMSE greater in test data?
+"
+"['machine-learning', 'classification', 'math', 'constraint-satisfaction-problems', 'adversarial-ml']"," Title: How do I decide which norm to use for placing a constraint on my adversarial perturbation?Body: I am performing an adversarial machine learning attack on a neural network for network traffic classification. For adding adversarial perturbations in features such as packet interarrival times and packet size, what norm should I use as a constraint? (eg. l1 norm, l2 norm, l-infinity norm, etc.)
+"
+"['machine-learning', 'convolutional-neural-networks', 'backpropagation', 'objective-functions', 'gradient-descent']"," Title: How are the weights retained for filters for a particular class in a CNN?Body: I am new to CNN. What I have learned so far about the filters is that when we are giving a training example to our model, our model updates the weights by gradient descent to minimize the loss function.
+So my question is how the weights are retained for a particular class label?
+
+The question is vague as my knowledge is vague. It's my 4th hour to CNN.
+
+For example, if I am talking about the MNIST dataset with 10 labels. Let's say I am giving 1 image to my model initially. It will have a bigger loss for the forward pass. Let's say now it came for the back pass and adjusted the weights for and minimized the loss function for that label.
+Now, when a new label arrives for training, how will it update the weights for filters which have already been updated according to the previous label?
+"
+"['neural-networks', 'reference-request', 'neuromorphic-engineering', 'control-theory']"," Title: Are there examples of neural networks (used for control) implemented on a FPGA or on a neurochip?Body: Greetings to all respected colleagues!
+
+I want to consult on the use of FPGAs and neurochips. I plan to use it in my laboratory project for programming control systems on neural networks.
+
+In my work, there are a lot of applications of neural networks, and I became interested in their programming on FPGAs and neurochips.
+But I don’t know a single example of a really made and working laboratory prototype in which a neural network is implemented on an FPGA or on a neurochip and controls something. If someone shares the link, I would carefully study it.
+"
+"['convolutional-neural-networks', 'keras', 'autoencoders', 'convolutional-layers', 'dense-layers']"," Title: How to add a dense layer after a 2d convolutional layer in a convolutional autoencoder?Body: I am trying to implement a convolutional autoencoder with a dense layer at the bottleneck to do some dimensional reduction. I have seen two approaches for this, which aren't particularly scalable. The first was to introduce 2 dense layers (one at the bottleneck and one before & after that has the same number of nodes as the conv2d layer that precedes the dense layer in the encoder section:
+input_image_shape=(200,200,3)
+encoding_dims = 20
+
+encoder = Sequential()
+encoder.add(InputLayer(input_image_shape))
+encoder.add(Conv2D(32, (3,3), activation="relu, padding="same"))
+encoder.add(MaxPooling2D((2), padding="same"))
+encoder.add(Flatten())
+encoder.add(Dense(32*100*100, activation="relu"))
+encoder.add(Dense(encoding_dims, activation="relu"))
+
+#The decoder
+decoder = Sequential()
+decoder.add(InputLayer((encoding_dims,)))
+decoder.add(Dense(32*100*100, activation="relu"))
+decoder.add(Reshape((100, 100, 32)))
+decoder.add(UpSampling2D(2))
+decoder.add(Conv2D(3, (3,3), activation="sigmoid", padding="same"))
+
+It's easy to see why this approach blows up as there are two densely connected layers with (32100100) nodes each or more or in that ballpark which is nuts.
+Another approach I have found which makes sense for b/w images such as the MNIST stuff is to introduce an arbitrary number of encoding dimensions and reshape it (https://medium.com/analytics-vidhya/building-a-convolutional-autoencoder-using-keras-using-conv2dtranspose-ca403c8d144e). The following chunk of code is copied from the link, I claim no credit for it:
+#ENCODER
+inp = Input((28, 28,1))
+e = Conv2D(32, (3, 3), activation='relu')(inp)
+e = MaxPooling2D((2, 2))(e)
+e = Conv2D(64, (3, 3), activation='relu')(e)
+e = MaxPooling2D((2, 2))(e)
+e = Conv2D(64, (3, 3), activation='relu')(e)
+l = Flatten()(e)
+l = Dense(49, activation='softmax')(l)
+#DECODER
+d = Reshape((7,7,1))(l)
+d = Conv2DTranspose(64,(3, 3), strides=2, activation='relu', padding='same')(d)
+d = BatchNormalization()(d)
+d = Conv2DTranspose(64,(3, 3), strides=2, activation='relu', padding='same')(d)
+d = BatchNormalization()(d)
+d = Conv2DTranspose(32,(3, 3), activation='relu', padding='same')(d)
+decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(d)
+
+So, is there a more rigorous way of adding a dense layer after a 2d convolutional layer?
+"
+"['machine-learning', 'convolutional-neural-networks', 'comparison', 'data-preprocessing']"," Title: What is the difference between training a model with RGB images and using only the color channels separately?Body: What is the difference between training a model with RGB images and using only the color channels separately (like only the red channel, green channel, etc.)? Would the model also learn patterns between the different colors in the first case?
+
+If for me the single-channel results are relevant but also the patterns between different channels are relevant, it would be beneficial to use them together?
+
+I am asking this because I want to apply this to signals of an accelerometer that has x, y, z-axis data. And I want to increase the resolution of the data. Will the model learn to combine all features from different axis if I input (1024, 3) length, channels of a one-dimensional signal into my one-dimensional CNN?
+"
+"['deep-learning', 'deep-neural-networks', 'activation-functions']"," Title: Are there any commonly used discontinuous activation functions?Body: Are there any commonly used activation functions (e.g. that take values in $(0,.5)\cup (.5,1)$)? Preferably for classification?
+
+Why? I was looking for commonly used activation functions on Google, and I noticed that all activation functions are continuous. However, I believe this is not needed in Hornik's paper.
+
+When I did a bit of testing myself, with a discontinuous activation function on the MNIST dataset, the results were good. So I was curious if anyone else used this kind of activation function.
+"
+"['backpropagation', 'gradient-descent']"," Title: Different methods of calculating gradients of cost function(loss function)Body: We require to find the gradient of loss function(cost function) w.r.t to the weights to use optimization methods such as SGD or gradient descent. So far, I have come across two ways to compute the gradient:
+
+
+- BackPropagation
+- Calculating gradient of loss function by calculus
+
+
+I found many resources for understanding backpropagation.
+The 2nd method I am referring to is the image below(taken for a specific example, e is the error: difference between target and prediction):
+
+
+Also, the proof was mentioned in this paper:here
+
+Moreover, I found this method while reading this blog.(You might have to scroll down to see the code: gradient = X.T.dot(error) / X.shape[0] )
+
+My question is are the two methods of finding gradient of cost function same? It appears different and if yes, which one is more efficient( though one can guess it is backpropagation)
+
+Would be grateful for any help. Thanks for being patient(it's my 1st time learning ML).
+"
+"['neural-networks', 'deep-learning', 'feature-selection', 'feature-extraction']"," Title: What are examples of commonly used feature and readout maps?Body: It is well-known that deep feedforward networks can approximate any continuous function from $\mathbb{R}^k$ to $\mathbb{R}^l$, (uniformly on compacts).
+
+However, in practice feature maps are typically used to improve the learning quality and likewise, readout maps are used to make neural networks suited for specific learning tasks.
+
+For example:
+
+
+- Classification: networks are composed with the softmax (readout) function so they take values in $(0,1)^l$.
+
+
+What are examples of commonly used feature and readout maps?
+"
+"['reference-request', 'datasets', 'bert', 'question-answering', 'knowledge-base']"," Title: Will structured knowledge bases continue to be used in question answering with the likes of BERT gaining popularity?Body: This may come across as an open and opinion-based question, I definitely want to hear expert opinions on the subject, but I am also looking for references to materials that I can read deeply.
+One of the ways question answering systems can be classified is by the type of data source that they use:
+
+- Structured knowledge bases with ontologies (DBPedia, WikiData, Yago, etc.).
+
+- Unstructured text corpora that contain the answer in natural language (Wikipedia).
+
+- Hybrid systems that search for candidate answers in both structured and unstructured data sources.
+
+
+From my reading, it appears as though structured knowledge bases/knowledge graphs were much more popular back in the days of the semantic web and when the first personal assistants (Siri, Alexa, Google Assistant) came onto the scene.
+Are they dying out now in favor of training a deep learning model over a vast text corpus like Bert and/or Meena? Do they have a future in question answering?
+"
+"['neural-networks', 'reinforcement-learning', 'dqn']"," Title: Atari Breakout InfrastructureBody:
+
+This is how they describe their infrastructure in https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf. I want to implement the game of Atari Breakout.
+
+import torch
+import torch.nn as nn
+import torch.optim as optim
+import torch.nn.functional as F
+
+class DQN(nn.Module):
+ def __init__(self, height, width):
+ super(DQN, self).__init__()
+
+ self.height = height
+ self.width = width
+
+ self.conv1 = nn.Conv2d(in_channels=4, out_channels=16, kernel_size=8, stride=4)
+ self.conv2 = nn.Conv2d(in_channels=6, out_channels=32, kernel_size=4, stride=2)
+
+ self.fc = nn.Linear(in_features=????, out_features=256)
+ self.out = nn.Linear(in_features=256, out_features=4)
+
+ def forward(self, state):
+
+ # (1) Hidden Conv. Layer
+ self.layer1 = F.relu(self.conv1(state))
+
+ #(2) Hidden Conv. Layer
+ self.layer2 = F.relu(self.conv2(self.layer1))
+
+ #(3) Hidden Linear Layer
+ self.layer3 = self.fc(self.layer2)
+
+ #(4) Output
+ actions = self.out(self.layer3)
+
+ return actions
+
+
+I will probably instantiate my policy network and my target network the following way :
+
+policy_net = DQN(envmanager.get_height(), envmanager.get_width()).to(device)
+target_net = DQN(envmanager.get_height(), envmanager.get_width()).to(device)
+
+
+I am very new to the world of Reinforcement Learning. I would like to implement their infrastructure in DQN(), but I think I am wrong in several places. Am I good here? If not, how can I fix it so that it reflects the infrastructure from the above picture?
+
+UPDATE
+
+I know that the formula to calculate the output size is equal to
+
+$O=\frac{W−K+2P}{S}+1$
+
+where $O$ is the output height/length, $W$ is the input height/length, $K$ is the filter size, $P$ is the padding, and $S$ is the stride.
+
+I obtained for self.fc = nn.Linear(in_features=????, out_features=256) that in_features must be equal to $32*9*9$.
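+
+As a sanity check of that number (assuming the 84x84 preprocessed frames used in the paper):
+
+# Hypothetical check, assuming 84x84 preprocessed frames as in the paper
+def conv_out(w, k, s, p=0):
+    return (w - k + 2 * p) // s + 1
+
+h = conv_out(84, k=8, s=4)   # 20 after conv1 (kernel 8, stride 4)
+h = conv_out(h, k=4, s=2)    # 9 after conv2 (kernel 4, stride 2)
+print(32 * h * h)            # 2592, i.e. 32 * 9 * 9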
+"
+"['reference-request', 'papers', 'history', 'exploding-gradient-problem']"," Title: Which work originally introduced gradient clipping?Body: The Deep Learning book mentions that it's been used for years but the oldest sources it mentions are from 2012:
+
+
+ A simple type of solution has been in use by practitioners for many years: clipping the gradient. There are different instances of this idea (Mikolov, 2012; Pascanu et al., 2013). One option is to clip the parameter gradient from a mini-batch element-wise (Mikolov, 2012), just before the parameter update. Another is to clip the norm $||g||$ of the gradient $g$ (Pascanu et al., 2013) just before the parameter update.
+
+
+But I find it hard to believe that the first uses and mentions of gradient clipping are from 2012. Does anyone know the origins of the solution?
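+
+(For reference, my own rough illustration of the two variants described in the quote - element-wise clipping and norm clipping - not code taken from either paper:)
+
+import numpy as np
+
+def clip_elementwise(grad, threshold):
+    # Clip each gradient component independently to [-threshold, threshold]
+    return np.clip(grad, -threshold, threshold)
+
+def clip_by_norm(grad, threshold):
+    # Rescale the whole gradient vector if its norm exceeds the threshold
+    norm = np.linalg.norm(grad)
+    if norm > threshold:
+        grad = grad * (threshold / norm)
+    return grad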
+"
+['pytorch']," Title: XOR-solving neural network is suffering from local minimaBody: import torch
+import torch.nn as nn
+import torch.optim as optim
+from torch.autograd import Variable
+import matplotlib.pyplot as plt
+import numpy as np
+x_data = torch.tensor([[0, 0], [0, 1], [1, 0], [1, 1]]).float()
+y_data = torch.tensor([0, 1, 1, 0]).float()
+
+class Model(nn.Module):
+ def __init__(self, input_size, H1, output_size):
+ super().__init__()
+ self.linear_input = nn.Linear(input_size, 2)
+ self.linear_output = nn.Linear(2, output_size)
+ self.sigmoid = torch.nn.Sigmoid()
+
+ def forward(self, x):
+ x = self.sigmoid(self.linear_input(x))
+ x = self.sigmoid(self.linear_output(x))
+ return x
+
+ def predict(self, x):
+ return (self.forward(x) >= 0.5).float()
+
+model = Model(2,2,1)
+lossfunc = nn.BCELoss()
+optimizer = optim.Adam(model.parameters(), lr=0.01)
+epochs = 2000
+losses = []
+
+for i in range(epochs):
+ pred = model(x_data).view(-1)
+ loss = lossfunc(pred, y_data)
+ print(""epochs:"", i, ""loss:"", loss.item())
+ losses.append(loss.item())
+ optimizer.zero_grad()
+ loss.backward()
+ optimizer.step()
+
+def cal_score(X, y):
+ y_pred = model.predict(X)
+ score = float(torch.sum(y_pred.squeeze(-1) == y.byte().float())) / y.shape[0]
+ return score
+
+print('test score :', cal_score(x_data, y_data))
+def plot_decision_boundray(X):
+ x_span = np.linspace(min(X[:, 0]), max(X[:, 0]))
+ y_span = np.linspace(min(X[:, 1]), max(X[:, 1]))
+ xx, yy = np.meshgrid(x_span, y_span)
+ grid = torch.tensor(np.c_[xx.ravel(), yy.ravel()]).float()
+ pred_func = model.forward(grid)
+ z = pred_func.view(xx.shape).detach().numpy()
+ plt.contourf(xx, yy, z)
+ plt.show()
+
+plot_decision_boundray(x_data)
+
+
+As you can see, it's a simple neural network which consists of one hidden layer using BCELoss and Adam.
+
+
+
+Normally, it results in the correct one like above.
+
+
+
+However, it is sometimes stuck in a local minima and a decision boundary becomes awkward.
+
+Because the input data is limited, I guess that preprocessing of those data might not be possible and only initial weights matter in this problem. I tried initializing them with normal distribution but it didn't work. How can I approach this problem?
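+
+For reference, this is roughly how I tried the normal-distribution initialisation (using the model and imports from the code above):
+
+# Roughly how I tried initialising the weights from a normal distribution
+def init_weights(m):
+    if isinstance(m, nn.Linear):
+        nn.init.normal_(m.weight, mean=0.0, std=1.0)
+        nn.init.zeros_(m.bias)
+
+model.apply(init_weights)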
+"
+"['deep-learning', 'comparison', 'gradient-descent', 'regularization', 'hyper-parameters']"," Title: What is relation between gradient descent and regularization in deep learning?Body: Gradient descent is used to reduce the loss and regularization is used to fight over-fitting.
+
+Is there any relation between gradient descent and regularization, or both are independent of each other?
+"
+"['neural-networks', 'machine-learning', 'generative-model', 'papers', 'data-preprocessing']"," Title: Do I have to downsample the input and upsample the output of the neural network when implementing the NICE algorithm?Body: Consider that my input is an RGB image. The size of my image is $N\times N$. I'm trying to implement NICE algorithm presented by Dinh. The bijective function $f: \mathbb{R}^d \to \mathbb{R}^d$ maps $X$ to $Z$. So I have $p_Z(Z)=p_X(X)$.
+
+What I can't understand is that $N$ is much bigger than $d$. Does this mean that I should downsample the inputs? Does the resulting loss function change if I add a downsampling layer at the beginning of the neural net and also add an upsampling layer at the end of the net?
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'implementation', 'computational-complexity']"," Title: How much time does it take to train DQN on Atari environment?Body: I am trying to build a DQN model for the Atari Pong game, but I am not sure whether the model is learning at all.
+
+I am using the architecture described in the paper Playing Atari with Deep Reinforcement Learning. I tested the model on a simpler environment (like CartPole), which worked great, but I am not seeing any progress at all with Pong: I have been training the model for 2-3 hours and its performance is no better than taking random actions.
+
+Should I just keep waiting, or might there be something wrong with my code? Around how many episodes should it take before I see some positive results?
+"
+"['game-ai', 'deepfakes']"," Title: Can I use deepfake to rotoscope for animation?Body: I'm a big fan of animation and have kept an eye on the deepfake's ability to replicate full body motion.
+
+So I ask
+
+
+- Is there deepfake software available that I can use to extract animation from a video?
+- Are there any publicly released deepfake body-tracking tools out there? Even for normal video?
+
+"
+"['neural-networks', 'deep-learning', 'training', 'keras', 'datasets']"," Title: What does it mean to have epochs=30 in Keras' fit method given certain data?Body: I have read a lot of information about several notions, like batch_size, epochs, iterations, but because of explanation was without numerical examples and I am not native speaker, I have some kind of problem of understanding still about those terms, so I decided to work with data. Let us suppose we have the following data
+
+
+
+Of course, it is just a subset of the original data. I want to build a neural network with three hidden layers: the first layer contains 500 nodes, takes three input variables, and applies a sigmoid activation on each node; the next layer contains 100 nodes with sigmoid activation; the third one contains 50 nodes with sigmoid again; and finally we have one output with sigmoid to convert the result into 0 or 1, classifying whether a person with those attributes is female or male.
+
+I trained the model using Keras Tensorflow with the following code
+
+model.fit(X_train,y_train,epochs=30)
+
+
+With this data, what does epochs=30 mean? Does it mean that all 177 rows (with 3 inputs at a time) will go through the model 30 times? What about batch_size=None in the model.fit parameters?
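+
+For illustration, my current understanding in code form (assuming Keras falls back to its default batch size of 32 when batch_size is not given) would be:
+
+import math
+
+# Hypothetical numbers for my 177-row dataset and the assumed default batch size of 32
+rows, batch_size, epochs = 177, 32, 30
+updates_per_epoch = math.ceil(rows / batch_size)   # 6 weight updates per pass over the data
+total_updates = updates_per_epoch * epochs          # 180 updates over 30 passes
+print(updates_per_epoch, total_updates)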
+"
+"['philosophy', 'agi', 'superintelligence']"," Title: What assumptions are made when positing the emergence of superintelligence?Body: Many experts seem to think that artificial general intelligence, or AGI, (on the level of humans) is possible and likely to emerge in the near-ish future. Some make the further step to say that superintelligence (much above the level of AGI) will appear soon after, through mechanisms like recursive self-improvement from AGI (from a survey).
+
+However, other sources say that such superintelligence is unlikely or impossible (example, example).
+
+What assumptions do those who believe in superintelligence make? The emergence of superintelligence has generally been regarded as something low-probability but possible (e.g. here). However, I can't seem to find an in-depth analysis of what assumptions are made when positing the emergence of superintelligence. What specific assumptions do those who believe in the emergence of superintelligence make that are unlikely, and what have those who believe in the guaranteed emergence of superintelligence gotten wrong?
+
+If the emergence of superintelligence is to be seen as a low-probability event in the future (on par with asteroid strikes, etc.), which seems to be the dominant view and is the most plausible, what assumptions exactly makes it low-probability?
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'deep-rl', 'pseudocode']"," Title: Can this be a possible deep q learning pseudocode?Body: I am not using replay here. Can this be a possible deep q learning pseudocode?
+
+s - state
+a - action
+r - reward
+n_s - next state
+q_net - neural network representing q
+
+step()
+{
+
+ get s,a,r,n_s
+ q_target[s,a]=r+gamma*max(q_net[n_s,:])
+ loss=mse(q_target,q_net[s,a])
+ loss.backprop()
+
+}
+
+while(!terminal)
+{
+ totalReturn+=step();
+}
+
+"
+"['reinforcement-learning', 'objective-functions', 'dqn', 'deep-rl']"," Title: How should I define the loss function when using DQN to estimate the probability density?Body: I'm doing a Deep Q-learning project. All of my rewards are positive and there are two terminal states. One of them has a zero reward and the other has a high positive reward.
+
+The rewards are stochastic and my Q-network must generate non-zero Q-values for all states and actions. Based on my project, I must use these numbers to create a probability density. In other words, the normalized Q-values of each state generated by network define a probability density for choosing an action.
+
+How should I define my loss function? Is there a project or paper which I could look at and decide how to define the loss function? I am searching for similar projects and their proposed Deep Q-learning algorithms.
+"
+"['machine-learning', 'natural-language-processing', 'natural-language-understanding']"," Title: What are the current research trends in recognizing narrative similarity?Body: I am currently working on a term paper on the topic of Narrative Similarity, based on Loizos Michael's work ""Similarity of Narratives"". I am trying to find the latest trends within this field of study for the literature overview in my assignement. However up untill now I wasn't able to find any new work on this particular subject.
+
+I would appreciate any literature recommendations from anyone out there who has worked on this topic or is currently doing so.
+
+Link to Michael's work:
+https://www.researchgate.net/publication/316280937_Similarity_of_Narratives
+"
+"['neural-networks', 'convolutional-neural-networks', 'reference-request', 'image-segmentation', 'u-net']"," Title: What are some good alternatives to U-Net for biomedical image segmentation?Body: Soon I will be working on biomedical image segmentation (microscopy images). There will be a small amount of data (a few dozens at best).
+
+Is there a neural network that can compete with U-Net in this case?
+
+I've spent the last few hours searching through scientific articles dealing with this topic, but haven't found a clear answer, and I would like to know what the other possibilities are. The best answers I found are that I could consider using ResU-Net (R2U-Net), SegNet, X-Net and backing techniques (article).
+
+Any ideas (with evidence, not necessarily)?
+"
+"['agi', 'proofs', 'ai-safety', 'theory-of-computation']"," Title: Does Rice's theorem prove safe AI is undecidable?Body: According to Wikipedia
+
+
+ In computability theory, Rice's theorem states that all non-trivial,
+ semantic properties of programs are undecidable. A semantic property
+ is one about the program's behavior (for instance, does the program
+ terminate for all inputs), unlike a syntactic property (for instance,
+ does the program contain an if-then-else statement). A property is
+ non-trivial if it is neither true for every computable function, nor
+ false for every computable function.
+
+
+A syntactic property asks a question about a computer program like ""is there a while loop?""
+
+A semantic property asks a question about the behavior of the computer program. For example, does the program loop forever (which is the Halting problem, which is undecidable, i.e., in general, there's no algorithm that can tell you if an arbitrarily given program halts or terminates for a given input)?
+
+So, Rice's theorem proves all non-trivial semantic properties are undecidable (including whether or not the program loops forever).
+
+AI is a computer program (or computer programs). These program(s), like all computer programs, can be modeled by a Turing machine (Church-Turing thesis).
+
+Is safety (for Turing machines, including AI) a non-trivial semantic question? If so, is AI safety undecidable? In other words, can we determine whether an AI program (or agent) is safe or not?
+
+I believe that this doesn't require formally defining safety.
+"
+"['neural-networks', 'convolutional-neural-networks', 'comparison', 'generative-adversarial-networks']"," Title: What is the difference between using dense layers as opposed to convolutional layers in my networks when dealing with images?Body: I am thinking about developing a GAN.
+
+What is the difference between using dense layers as opposed to convolutional layers in my networks when dealing with images?
+"
+"['reinforcement-learning', 'markov-decision-process', 'markov-property']"," Title: What are the most common non-Markov RL paradigms?Body: I am interested in doing model-free RL but not using the Markov assumptions typical for MDPs or POMDPs.
+
+What are alternative paradigms that don't rely on the Markov assumptions? Are there any common approaches when this assumption is violated?
+
+EDIT: I am asking for mathematical models that do not make the Markov assumption and so could be used for problems where the Markov assumption does not hold
+"
+"['reinforcement-learning', 'markov-decision-process']"," Title: Efficient algorithm to obtain near optimal policies for an MDPBody: Given a discrete, finite Markov Decision Process (MDP) with its usual parameters $(S, A, T, R, \gamma)$, it is possible to obtain the optimal policy $\pi^{*}$ and the optimal value function $V^{*}$ through one of many planning methods (policy iteration, value iteration or solving a linear program).
+
+I am interested in obtaining a random near-optimal policy $\pi$, with the value function associated with the policy given by $V^{\pi}$, such that
+$$ \epsilon_1 < ||V^{*} - V^{\pi}||_{\infty} < \epsilon_2$$
+
+I wish to know an efficient way of achieving this goal. A possible approach is to generate random policies and then to use the given MDP model to evaluate these policies and verify that they satisfy the criteria.
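+
+(For reference, the kind of exact policy evaluation I have in mind for the generate-and-check approach - a numpy sketch on a small made-up MDP, using $V^{\pi} = (I - \gamma P^{\pi})^{-1} r^{\pi}$:)
+
+import numpy as np
+
+# Sketch: exact evaluation of a fixed policy pi on a small made-up MDP
+n_states, n_actions, gamma = 4, 2, 0.9
+rng = np.random.default_rng(0)
+P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
+R = rng.uniform(0, 1, size=(n_states, n_actions))                  # R[s, a]
+
+pi = rng.integers(0, n_actions, size=n_states)                      # random deterministic policy
+P_pi = P[np.arange(n_states), pi]                                    # (n_states, n_states)
+r_pi = R[np.arange(n_states), pi]                                    # (n_states,)
+
+V_pi = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
+print(V_pi)   # compare against V* and check the epsilon_1 / epsilon_2 bounds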
+
+If only an upper bound were needed, the idea that near optimal value functions induce near optimal policies could be used, that is, we can show that, if
+$$||V - V^{*}||_{\infty} < \epsilon, \quad \epsilon > 0$$
+and if $\pi$ is the policy that is greedy with respect to the value function $V$, then
+$$ ||V^{\pi} - V^{*}||_{\infty} < \frac{2\gamma\epsilon}{1 - \gamma}$$
+So by picking a suitable $\epsilon$ for the given $\gamma$, we can be sure of any upper bound $\epsilon_2$.
+
+However, I would also like that the policy $\pi$ not be ""too good"", hence the requirement for a lower bound.
+
+Any inputs regarding an efficient solution or reasons for the lack thereof are welcome.
+"
+"['neural-networks', 'convolutional-neural-networks', 'game-ai', 'monte-carlo-tree-search', 'evaluation-functions']"," Title: Is this a good approach to evaluate the game state with a neural network?Body: I've written a Monte Carlo Tree Search player for the game of Castle (AKA Shithead, Shed, Palace...). I have set this MCTS player to play against a basic rule-based AI for ~30000 games and collected ~1.5 million game states (as of now) along with whether the MCTS player won that particular game in the end after being in that particular game state. The game has a large chance aspect, and, currently, the MCTS player is winning ~55% of games. I want to see how high I can push it. In order to do this, I aim to produce a NN that will act as a game state evaluation function to use within the MCTS.
+With this information, I've already tried an SVM, but came to the conclusion that the game space is too large for the SVM to classify a given state accurately.
+I hope to be able to train a NN to evaluate a given state and return how good that state is for the MCTS player. Either with a binary GOOD/BAD or I think it would be more helpful to return a value between 0-1.
+The input to the NN is a $4 \times 41$ NumPy array of binary values (0, 1) representing the MCTS player's hand, MCTS face-up cards, OP face-up cards, MCTS no. of face-down cards, and OP no. of face-down cards. Shown below.
+
+Describes the np.array:
+
+The np.array is made from the database entries of game states. An example of this information is below. However, I am currently omitting the TOP & DECK_EMPTY columns in this model. WON (0, 1) is used as the label.
+
+This is my keras code:
+model = tf.keras.models.Sequential()
+model.add(tf.keras.layers.Flatten())
+model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
+model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
+model.add(tf.keras.layers.Dense(2, activation=tf.nn.softmax))
+
+model.compile(optimizer='adam',
+ loss='sparse_categorical_crossentropy',
+ metrics=['accuracy'])
+
+model.fit(X_train, y_train, epochs=3)
+
+This model isn't performing.
+
+- Do you think it is possible to obtain a useful NN with my current approach?
+
+- What layers should I look to add to the NN?
+
+- Can you recommend any training/learning material that I could use to try and get a better understanding?
+
+
+"
+"['reinforcement-learning', 'training', 'gym']"," Title: Should I always start from the same start state in reinforcement learning?Body: In an episodic training of an RL agent, should I always start from the same initial state or I can start from several valid initial states?
+
+For example, in a gym environment, should my env.reset()
function always resets me to the same start state or it can start from different states at each training episode?
+"
+"['machine-learning', 'deep-learning', 'math', 'proofs']"," Title: Is the derivative of the loss wrt a single scalar parameter proportional to the loss?Body: I am wondering about the correlation between the loss and the derivative of the loss wrt a single scalar parameter, with the same sample. That means: considering a machine learning model with parameters $\theta \in R$, I want to figure out the relationship between $Loss(x)$ and $\frac{\partial Loss(x)}{\partial \theta_i}$, where $i \in \{1,2,3,...,n\}$
+
+Intuitively, I would like to consider that they are in a positive correlation, is it right? If it is right, how can I prove it in a mathematical way?
+"
+"['neural-networks', 'backpropagation', 'optimization', 'matlab']"," Title: How can I train a neural network if I don't have enough data?Body: I have created a neural network that is able to recognize images with the numbers 1-5. The issue is that I have a database of 16x5 images which ,unfortunately, is not proving enough as the neural network fails in the test set. Are there ways to improve a neural network's performance without using more data? The ANN has approximately a 90% accuracy on the training sets and a 50% accuracy in the test ones.
+
+Code:
+
+clear
+graphics_toolkit(""gnuplot"")
+sigmoid = @(z) 1./(1 + exp(-z));
+sig_der = @(y) sigmoid(y).*(1-sigmoid(y));
+
+
+parse_image; % This external f(x) loads the images so that they can be read.
+%13x14
+num=0;
+for i=1:166
+ if mod(i-1,10)<=5 && mod(i-1,10) > 0
+ num=num+1;
+ data(:,num) = dlmread(strcat(""/tmp/"",num2str(i)))(:);
+ end
+end
+
+
+
+function [cost, mid_layer, last_layer] = forward(w1,w2,data,sigmoid,i)
+ mid_layer(:,1)=sum(w1.*data(:,i));
+ mid_layer(:,2)=sigmoid(mid_layer(:,1));
+ last_layer(:,1)=sum(mid_layer(:,2).*w2);
+ last_layer(:,2)=sigmoid(last_layer(:,1));
+ exp_res=rem(i,5);
+ if exp_res==0
+ exp_res=5;
+ end
+ exp_result=zeros(5,1); exp_result(exp_res)=1;
+ cost = exp_result-last_layer(:,2);
+end
+
+function [w1, w2] = backprop(w1,w2,mid_layer,last_layer,data,cost,sig_der,sigmoid,i)
+ delta(1:5) = cost;
+ delta(6:20) = sum(cost' .* w2,2);
+ w2 = w2 + 0.05 .* delta(1:5) .* mid_layer(:,2) .* sig_der(last_layer(:,1))';
+ w1 = w1 + 0.05 .* delta(6:20) .* sig_der(mid_layer(:,1))' .* data(:,i);
+end
+
+w1=rand(182,15)./2.*(rand(182,15).*-2+1);
+w2=rand(15,5)./2.*(rand(15,5).*-2+1);
+
+for j=1:10000
+ for i=[randperm(85)]
+ [cost, mid_layer, last_layer] = forward(w1,w2,data,sigmoid,i);
+ [w1, w2] = backprop(w1,w2,mid_layer,last_layer,data,cost,sig_der,sigmoid,i);
+ cost_mem(j,i,:)=cost;
+ end
+end
+
+"
+"['machine-learning', 'optimization']"," Title: What is the role of convex optimisation in AI systems?Body: Convex optimisation is defined as:
+
+$$\min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad g_i(x) \le 0,\; i=1,\dots,m, \qquad h_j(x) = 0,\; j=1,\dots,p,$$
+
+where $f$ and the $g_i$ are convex functions and the $h_j$ are affine.
+
+I have seen a lot of talk about convex loss functions in Neural Networks and how we are optimising rewards or penalty in AI/ML systems. But I have never seen any loss function formulated in the aforementioned way. So my question is:
+
+Is there any role of convex optimization in AI? If so, in what algorithms or problem settings or systems?
+"
+"['reinforcement-learning', 'policy-gradients', 'papers', 'temporal-difference-methods']"," Title: Is the TD-residual defined for timesteps $t$ past the length of the episode?Body: Let $\mathcal{S}$ be the state-space in a reinforcement learning problem where rewards are in $\mathbb{R}$, and let $V:\mathcal{S} \to \mathbb{R}$ be an approximate value function. Following the GAE paper, the TD-residual with discount $\gamma \in [0,1]$ is defined as $\delta_t^V = r_t + \gamma V(s_{t + 1}) - V(s_t)$.
+
+I am confused by the formula for the GAE-$\lambda$ advantage estimator, which is
+$$
+\hat{A}_t^{\text{GAE}(\gamma, \lambda)} = \sum_{l = 0}^\infty (\gamma \lambda)^l \delta_{t + l}^V.
+$$
+
+This seems to imply that $\delta_t^V$ is defined for $t > N$, where $N$ is the length of the current trajectory/episode. It looks like in implementations of this advantage estimator, it is just assumed that $\delta_t^V = 0$ for $t > N$, since the sums are finite. Is there a justification for this assumption? Or am I missing something here?
+"
+"['reinforcement-learning', 'ai-design', 'rewards']"," Title: How can I find the appropriate reward value for my reinforcement learning problem?Body: I am wondering how can I find the appropriate reward value for each specific problem. I know this is a highly empirical process, but I am sure that the value is not set totally at random. I want to know what are the general guidelines and practices to find the appropriate reward value for any reinforcement learning problem.
+"
+"['reinforcement-learning', 'value-functions', 'expectation', 'bellman-equations']"," Title: Why is there an expectation sign in the Bellman equation?Body: In chapter 3.5 of Sutton's book, the value function is defined as:
+
+$$v_\pi(s) \doteq \mathbb{E}_\pi\left[ G_t \mid S_t = s \right] = \mathbb{E}_\pi\!\left[ \sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \;\middle|\; S_t = s \right]$$
+
+Can someone give me some clarification about why there is an expectation sign over the entire equation? Considering that the agent is following a fixed policy $\pi$, why should there be an expectation when the trajectory of the future possible states is fixed (or maybe I am getting it wrong and it's not)? In short, if the expectation here means averaging over a set of trajectories, what are those trajectories and what are their weights when we compute the expected value over them, according to this Wikipedia definition of the expected value?
+"
+"['reinforcement-learning', 'markov-decision-process', 'pomdp']"," Title: Why can the core reinforcement learning algorithms be applied to POMDPs?Body: Why can an AI, like AlphaStar, work in StarCraft, although the environment is only partially observable? As far as I know, there are no theoretical results on RL in the POMDP environment, but it appears the core RL techniques are being used in partially observable domains.
+"
+"['reinforcement-learning', 'ai-design', 'markov-decision-process', 'rewards']"," Title: How should I define the reward function for the Connect Four game?Body: I'm creating an RL application for the game Connect Four. I've researched the different strategies for the game and which positions are more favourable to lead to a win.
+
+Should I be assigning greater rewards when the application places a token in those particular positions? If so, that would mean the application/algorithm is a Connect Four specific application, and not generic?
+"
+"['reinforcement-learning', 'ai-design', 'game-ai']"," Title: Should I be trying to create a generic or specific (to particular game) reinforcement learning agent?Body: I'm creating an RL application for the game Connect Four.
+
+In general, should I be aiming to create an application that's more generic, which would 'learn' different games, or specific to a particular game (e.g. Connect Four, by assigning greater rewards to certain token positions in the C4 grid)?
+
+Does the difference between the two approaches just come down to adapting their respective reward functions to reward specific achievements or positions (in a board game setting), or something else?
+"
+"['agi', 'ai-safety']"," Title: Does this prove AI Safety is undecidable?Body: Does this prove AI Safety is undecidable?
+
+Proof:
+
+Output meaning output to computer program.
+
+[A1] Assume we have a program that decides which outputs are “safe”.
+
+[A2] Assume we have an example of an unsafe output: “unsafe_output”
+
+[A3] Assume we have an example of safe output: “safe_output”.
+
+[A4] Define a program to be safe if it always produces safe output.
+
+[A5] Assume we have a second program (safety_program
) that decides which programs are safe.
+
+[A6] Write the following program:
+
+def h()
+ h_is_safe := safety_program(h)
+ if (h_is_safe):
+ print unsafe_output
+ else:
+ print safe_output
+
+
+Clearly h halts.
+
+If the safety_program said h was safe, then h prints out unsafe_output.
+
+If the safety_program said h was not safe, then h prints out safe_output.
+
+Therefore safety_program doesn’t decide h correctly.
+
+This is a contradiction. Therefore we made a wrong assumption: Either safe output cannot be decided, or safe programs cannot be decided.
+
+Therefore, in general, the safety of computer programs, including Artificial Intelligence, is undecidable.
+
+Therefore AI Safety is undecidable.
+"
+"['machine-learning', 'game-ai', 'prediction']"," Title: How can artificial intelligence predict the next possible moves of the player?Body: When you play video games, sometimes there is an AI that attempts to predict what are you going to do.
+
+For example, in the Candy Crush game, if you finish the level and still have moves remaining, you get to see fish or other powers destroying the remaining candies. But instead of making you watch 10 minutes of combos without moving at all after completing a level (like this Longest video game combo ever, probably), it shows an alert that says 'tap to skip'. So, basically, the AI is predicting all the possible combos that will keep proceeding automatically and calculating every automatic move.
+
+How can artificial intelligence predict such a thing?
+"
+"['reinforcement-learning', 'comparison', 'value-functions', 'reward-functions']"," Title: What is the relationship between the reward function and the value function?Body: To clarify it in my head, the value function calculates how 'good' it is to be in a certain state by summing all future (discounted) rewards, while the reward function is what the value function uses to 'generate' those rewards for it to use in the calculation of how 'good' it is to be in the state?
+"
+"['neat', 'neuroevolution', 'crossover-operators', 'mutation-operators']"," Title: Do I have to crossover my node genes in NEAT, and how?Body: I'm currently trying to code the NEAT algorithm by myself, but I got stuck with two questions. Here they are:
+What happens if during crossover a node is removed (or disabled) and there's a connection that was previously connected to that specific node? Because, in that case, some connections are no longer useful. Do I keep the useless connections or do I prevent this from happening? Or maybe I'm missing something?
+Someone on AI SE said that:
+
+You could:
+1.) Use only the connection genes in crossover, and derive your node genes from the connection genes
+2.) Test if every node is in use, and delete the ones that are not
+
+But the problem with that is that my genomes will lose some complexity. Maybe I can use the nodes during crossover, and then disable the connections that were using this node. That way, I'm keeping the genotype complex, but the phenotype is still working.
+Is there another way to workaround this problem or this is the best way?
+"
+"['machine-learning', 'math', 'statistical-ai', 'cross-entropy', 'maximum-likelihood']"," Title: Is maximum likelihood estimation meaningless for a dataset of only outliers?Body: From my understanding, maximum likelihood estimation chooses the set of parameters for the estimator that maximizes likelihood with the ground truth distribution.
+
+I always interpreted it as the training set having a tendency to have most examples near the mean or the expected value of the true distribution. Since most training examples are close to the mean (since they have been sampled from this distribution) maximizing the estimator's chance of sampling these examples gets the estimated distribution close to the ground truth distribution.
+
+This would mean that any MLE procedure on a dataset of outliers should fail miserably. Are this interpretation and conclusion correct? If not, what is wrong with the mentioned interpretation of maximizing likelihood for an estimator?
+"
+"['math', 'generative-adversarial-networks', 'papers', 'notation']"," Title: What does equation in the ""related work"" section of the GAN paper mean?Body: I was going through the paper on GAN by Ian Goodfellow. Under the related work section, there is an equation. I cannot decipher the equation. Can anyone help me understand the meaning of the equation?
+
+$$\lim_{\sigma \to 0} \nabla_{\mathbf x} \mathbb E_{\epsilon \sim \mathcal N(0, \sigma^2 \mathbf I)} f(\mathbf x+\epsilon) = \nabla_x f(\mathbf x)$$
+
+Also, any guide to understanding mathematical notation for reading research paper is highly appreciated.
+"
+"['reinforcement-learning', 'swarm-intelligence']"," Title: What kind of artificial intelligence is this? A decentralized swarm intelligence where the input and output is split among the agentsBody: I have an AI design for deciding the length of green and red lamps of the traffic. In my design, every crossroads has its own agent. This agent has input the amount of vehicle in each road in a single junction. AI then decide how long is the red lamp and the green lamp in each junction. The fitness function is the average commute time in the city. Each agent may communicate with each other, and give reward or punishment to other AI. What AI algorithm works like this?
+"
+"['machine-learning', 'data-preprocessing', 'linear-regression', 'normalisation']"," Title: Do I need to denormalise results in linear regression?Body: I have learned so far how to linear regression with one or multiple features. So far, so good, everything seems to work fine, at least for my first simple examples.
+However, I now need to normalise my features for training. I'm doing this by calculating the mean and the standard deviation per feature, and then calculate the normalised feature by subtracting the mean, taking the absolute value, and dividing by the standard deviation. Again, so far, so good, the results of my tensors which I use for training look good.
+I understand why I need to normalise input data, and I also understand why one can do it like this (I know that there are other ways as well, e.g. to map values to a 0-1 interval).
+Now I was wondering about two things:
+
+- First, after having trained my network, when I want to make a prediction for a specific input – do I need to normalise this as well, or do I use the un-normalised data? Does it make a difference? My gut feeling says, I should normalise it, as it should make a difference, but I'm not sure. What should I do here, and why?
+- Second, either way, I get a result. Now I was wondering whether I need to denormalise this? I mean, it should make a difference, shouldn't it? If so, how? How do I get from the normalised result value to a denormalised one? Do I just need to reverse the calculation with mean and standard deviation, to get the actual value?
+
+It would be great if someone could shed some light on this.
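+
+For reference, this is the kind of transformation I mean (a minimal numpy sketch with made-up numbers; 'reversing' it would just be multiplying by the standard deviation and adding the mean back):
+
+import numpy as np
+
+# Made-up training targets, just to illustrate normalising and de-normalising
+y = np.array([120.0, 150.0, 180.0, 210.0])
+mean, std = y.mean(), y.std()
+
+y_norm = (y - mean) / std          # what the network is trained on
+y_back = y_norm * std + mean       # reversing the normalisation
+
+print(np.allclose(y, y_back))      # True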
+"
+"['reinforcement-learning', 'algorithm', 'rewards']"," Title: In RL, if I assign the rewards for better positional play, the algorithm is learning nothing?Body: I'm creating an RL application for the game Connect Four.
+
+If I tell the algorithm which moves/token positions will receive greater rewards, surely it's not actually learning anything; it's just a basic lookup for the algorithm? ""Shall I place the token here, or here? Well, this one receives a greater reward, so I choose this one.""
+
+For example, some pseudocode:
+
+function get_reward()
+ if 2 in a line
+ return 1
+ if 3 in a line
+ return 2
+ if 4 in a line
+ return 10
+ else
+ return -1
+
+foreach columns
+ column_reward_i = get_reward(column_i)
+ if column_reward_i >= column_rewards
+ place_token(column_i)
+
+"
+"['neat', 'neuroevolution', 'crossover-operators']"," Title: In NEAT, is it a good idea to give the same ID to node genes created from the same connection gene?Body: Do I have to prevent nodes created from the same connection gene to have different IDs/innovation number? In this example, the node 6 is created from the connection going from node 3 to node 4:
+
+
+
+In the case where that specific node was already globally created, is it useful to give it the same ID for crossover? Because the goal of NEAT is to do meaningful crossover by doing historical marking. The paper from Kenneth O. Stanley says at page 108:
+
+
+ [...] by keeping a list of the innovations that occurred in the current generation, it
+ is possible to ensure that when the same structure arises more than once through independent mutations in the same generation, each identical mutation is assigned the
+ same innovation number.
+
+
+Why don't we do that for node genes too?
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'rewards', 'atari-games']"," Title: How was the DQN trained to play many games?Body: Some people claim that DQN was used to play many Atari games. But what actually happened? Was DQN trained only once (with some data from all games) or was it trained separately for each game? What was common to all those games? Only the architecture of the RL agent? Did the reward function change for each game?
+"
+"['neural-networks', 'long-short-term-memory', 'overfitting']"," Title: Is this LSTM model underfitting?Body: I think this model is underfitting. Is this correct?
+
+
+
+
+
+_________________________________________________________________
+Layer (type) Output Shape Param #
+=================================================================
+lstm_1 (LSTM) (50, 60, 100) 42400
+_________________________________________________________________
+dropout_1 (Dropout) (50, 60, 100) 0
+_________________________________________________________________
+lstm_2 (LSTM) (50, 60) 38640
+_________________________________________________________________
+dropout_2 (Dropout) (50, 60) 0
+_________________________________________________________________
+dense_1 (Dense) (50, 20) 1220
+_________________________________________________________________
+dense_2 (Dense) (50, 1) 21
+=================================================================
+
+
+The above is a summary of the model.
+Any advice on how the model could be improved?
+"
+"['chat-bots', 'resource-request']"," Title: Does there exist a resource for vetting banned words for chatbots?Body: So, Tay the racist tweeter bot... one thing that could have prevented this would have been to have a list of watchwords to not respond to, with some logic similar to foreach (word in msg) {if (banned_words.has(word)) disregard()}
.
+
+Even if that wouldn't, what I'm getting at is obvious: I am building a chatterbot that must be kid-friendly. For my sake and for the sake of whoever finds this question, is there a resource consisting of a .csv or .txt of such words that one might want to handle? I remember once using a site-blocking productivity extension that had visible its list of banned words; not just sexually charged words, but racial slurs, too.
+"
+"['markov-decision-process', 'pomdp']"," Title: What is the intuition behind grid-based solutions to POMDPs?Body: After spending some time reading about POMDP, I'm still having a hard time understanding how grid-based solutions work.
+
+I understand the finite horizon brute-force solution, where you have your current belief distribution, enumerate every possible collection of action/observation combinations for a given depth and find the expected reward.
+
+I have tried to read some sources about grid-based approximations, for example, these slides describe the grid-based approach.
+
+However, it's not clear to me what exactly is going on. I'm not understanding how the value function is actually computed. After you take an action, how do you update your belief states to be consistent with the grid? Does the grid-based solution simply reduce the set of belief states? How does this reduce the complexity of the problem?
+
+I'm not seeing how this reduces the number of actions, observation combinations needed to be considered for a finite-horizon solution.
+"
+"['deep-learning', 'classification', 'computer-vision']"," Title: Running 10 epochs on the Food-101 datasetBody: I’m currently working on the Food-101 dataset. I want to train a model that is greater than 85% accuracy for top-1 for the test set, using a ResNet50 or smaller network with a reasonable set of augmentations. I’m running 10 epochs using ResNet34 and I’m currently on the 8th epoch. This is how its doing:
+
+epoch train_loss valid_loss error_rate time
+0 2.526382 1.858536 0.465891 25:21
+1 1.981913 1.566125 0.406881 27:21
+2 1.748959 1.419548 0.372129 27:16
+3 1.611638 1.315319 0.346980 25:16
+4 1.568304 1.250232 0.328069 24:43
+5 1.438499 1.193816 0.313762 24:26
+6 1.378019 1.156924 0.307426 24:30
+7 1.331075 1.131671 0.299010 24:26
+8 1.314978 1.115857 0.297079 24:24
+
+
+As you can see, it doesn’t seem like I’m going to do better than 71% accuracy at this point. The dataset size is 101,000. It has 101 different kinds of food and each food has a 1000 images. Training this definitely takes long but what are some things I can do to improve its accuracy?
+"
+"['natural-language-processing', 'resource-request', 'information-retrieval']"," Title: What are examples of tutorials and blogs for beginners to master the cross-lingual information retrieval?Body: Currently, I am following the Dan Jurofsky NLP Tutorial and CS 224 Stanford 2019. Can you list tutorials and blogs for beginners to master the cross-lingual information retrieval?
+"
+"['machine-learning', 'comparison', 'models', 'probability-distribution', 'statistics']"," Title: What is the difference between model and data distributions?Body: Is there any difference between the model distribution and data distribution, or are they the same?
+"
+"['classification', 'recurrent-neural-networks', 'long-short-term-memory']"," Title: How to implement a LSTM for multilabel classification problem?Body: I would like to develop an LSTM because I have a variable input matrix. I am zero-padding to a specific length of 800.
+
+However, I am not sure of how to classify a certain situation when each input matrix has multiple labels inside, i.e. 0, 1 and 2. Do I need to use multi-label classification?
+
+
+
+Data shape
+
+(250,800,4)
+
+x_train(150,800,4)
+y_train(150,800,1)
+x_test(100,800,4)
+y_test(100,800,1)
+
+
+
+
+Building LSTM
+
+model = Sequential()
+model.add(LSTM(100, input_shape=(800, 4)))  # 800 timesteps, 4 features, from the data shapes above
+model.add(Dropout(0.5))
+model.add(Dense(100, activation='relu'))
+model.add(Dense(800, activation='softmax'))
+model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
+
+
+I am not sure of how to build the LSTM for training and testing. If I train my model with a 3D shape, it will mean that for real-time predictions I should also have a 3D input shape, but the idea is to have a 2D matrix as the input.
+"
+"['image-recognition', 'backpropagation', 'markov-chain', 'hidden-markov-model']"," Title: How can I use a Hidden Markov Model to recognize images?Body: How could I use a 16x16 image as an input in a HMM? And at the same time how would I train it? Can I use backpropagation?
+"
+"['convolutional-neural-networks', 'tensorflow', 'python', 'keras', 'image-processing']"," Title: How to handle extremely 'long' images?Body: After transforming timeseries into an image format, I get a width-height ratio of ~135. Typical image CNN applications involve either square or reasonably-rectangular proportions - whereas mine look nearly like lines:
+
+
+
+Example dimensions: (16000, 120, 16) = (width, height, channels).
+
+Are 2D CNNs expected to work well with such aspect ratios? What hyperparameters are appropriate - namely, in Keras/TF terms, strides, kernel_size (is 'unequal' preferred, e.g. strides=(16, 1))? Relevant publications would help.
+
+
+
+Clarification: width == timesteps. The images are obtained via a transform of the timeseries, e.g. Short-time Fourier Transform. channels are the original channels. height is the result of the transform, e.g. frequency information. The task is binary classification of EEG data (w/ sigmoid output).
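+
+For concreteness, this is roughly what I mean by 'unequal' kernels/strides (a sketch with made-up layer sizes, striding hard along the long time axis only):
+
+import tensorflow as tf
+
+# Input is (height, width, channels) = (120, 16000, 16); layer sizes are made up
+model = tf.keras.Sequential([
+    tf.keras.layers.Conv2D(32, kernel_size=(3, 16), strides=(1, 16), activation='relu',
+                           input_shape=(120, 16000, 16)),
+    tf.keras.layers.GlobalAveragePooling2D(),
+    tf.keras.layers.Dense(1, activation='sigmoid'),
+])
+model.summary()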
+
+Relevant thread
+"
+"['natural-language-processing', 'computational-linguistics']"," Title: How does one detect linguistic recursion so as to know how much nesting there is, if any?Body: To be clear, recursion in linguistics is here better called "nesting" in this CS context to avoid confusing it with the other recursion. How does one detect nesting? I am particularly interested in the example case of conjunctions. For example: say that I want to look for sentences that look like this:
+
+Would you rather have ten goldfish or a raccoon?
+
+Seems straightforward: a binary choice. However, how do you distinguish a binary choice with nesting from a ternary (or n-ary) choice?
+
+Would you rather have (one or two dogs) or (a raccoon)?
+Would you rather have (two dogs) or (ten goldfish) or (a raccoon)?
+
+Ditto for implied uses of "or," which is more common than the latter of the above:
+
+Would you rather have (one or two dogs),[nothing] (ten goldfish), or (a raccoon)?
+
+Given the available tools for NLP (POS-taggers and the like), how do you count the number of conjunctions to say "there are n surface-level clauses in the sentence, with n-or-zero clauses nested within."?
+"
+"['deep-learning', 'recurrent-neural-networks', 'long-short-term-memory', 'time-series', 'gated-recurrent-unit']"," Title: Inner working of Bidirectional RNNsBody: I'm trying to understand how Bidirectional RNNs work.
+
+Specifically, I want to know whether a single cell is used with different states, or two different cells are used, each having independent parameters.
+
+In pythonic pseudocode,
+
+Implementation 1:
+
+cell = rev_cell = RNNCell()
+cell_state = cell.get_initial_state()
+rev_cell_state = rev_cell.get_initial_state()
+for i in range(len(series)):
+ output, cell_state = cell(series[i], cell_state)
+ rev_output, rev_cell_state = rev_cell(series[-i-1], rev_cell_state)
+ final_output = concatenate([output, rev_output])
+
+
+Implementation 2:
+
+cell = RNNCell()
+rev_cell = RNNCell()
+cell_state = cell.get_initial_state()
+rev_cell_state = rev_cell.get_initial_state()
+for i in range(len(series)):
+ output, cell_state = cell(series[i], cell_state)
+ rev_output, rev_cell_state = rev_cell(series[-i-1], rev_cell_state)
+ final_output = concatenate([output, rev_output])
+
+
+Which of the above implementations is correct? Or is the working of Bidirectional RNNs completely different altogether?
+"
+"['terminology', 'generative-adversarial-networks']"," Title: What does ""shape information"" mean in terms of GAN(generative adversarial networks)?Body: A paper says
+
+
+ However, annotations used as inputs to C-GAN are typically based only on shape information, which can result in undesirable intensity distributions in the resulting artificially-created images.
+
+
+What does ""shape information"" mean here?
+
+I am aware of the basic concept of GANs (generative adversarial networks), though I don't understand what ""shape information"" refers to.
+
+
+
+I am aware the image above is an illustration of image segmentation. Would I think of any one of the segmented areas (red, green, blue) as a shape for a GAN?
+
+Could someone please give a hint? Thanks in advance.
+"
+"['neural-networks', 'machine-learning', 'ai-design', 'datasets', 'hyperparameter-optimization']"," Title: Why do we need both the validation set and test set?Body: I know that this has been asked a hundred times before, however, I was not able to find a question (and an answer) which actually answered what I wanted to know, respectively, which explained it in a way I was able to understand. So, I'm trying to rephrase the question…
+
+When working with neural networks, you typically split your data set into three parts:
+
+
+- Training set
+- Validation set
+- Test set
+
+
+I understand that you use the training set for, well, train the network, and that you use the test set to verify how well it has learned: By measuring how well the network performs on the test set, you know what to expect when actually using it later on. So far, so good.
+
+Now, a model has hyper parameters, which – besides the weights – need to be tuned. If you change these, of course, you get different results. This is where in all explanations the validation set comes into play:
+
+
+- Train using the training set
+- Validate how well the model performs using the validation set
+- Repeat this for a number of variants which differ in their hyperparameters (or do it in parallel, right from the start)
+- Finally, select one and verify its performance using the test set
+
+
+Now, my question is: Why would I need steps 2 and 3? I could as well train multiple version of my model in parallel, and then run all of them against the test set, to see which performs best, and then use this one.
+
+So, in other words: Why would I use the validation set for comparing the model variants, if I could directly use the test set to do so? I mean, I need to train multiple versions either way. What is the benefit of doing it like this?
+
+Probably, there is some meaning to it, and probably I got something wrong, but I can't figure out what. Any hints?
+"
+['computer-vision']," Title: Before GAN, what are the commonly used techniques for image-to-image translation?Body: As per a post, image-to-image translation is a type of CV problem.
+
+I guess I understand the concept of image-to-image translation.
+
+
+
+I am aware that GANs (generative adversarial networks) are good at this kind of problem.
+
+I just wondered what the commonly used techniques were for this kind of problem before GANs.
+
+Could someone please give a hint? Thanks in advance.
+"
+"['neural-networks', 'python', 'training', 'backpropagation', 'stochastic-gradient-descent']"," Title: Stochastic gradient descent does not behave as expected, even with different activation functionsBody: I have been working on my own AI for a while now, trying to implemented SGD with momentum from scratch in python. After looking around and studying all the maths behind it, i finally managed to implement SGD in a neural network that i trained to recognize the classic MNIST digits dataset.
+As activation function I always used sigmoid for both hidden and output neurons, and everything seems to work more or less OK, but now I wanted to step it up a bit and try to let SGD operate with different activations, so I added 2 other functions to my code: relu and tanh.
+The behaviours that I expected, based on articles, documentation and ""tutorials"" found online, were:
+tanh: should be slightly better than sigmoid
+relu: should be much better than sigmoid and tanh
+(By better I mean faster convergence or at least higher accuracy at the end, or a mix of both.)
+
+Using tanh it looks like it's much slower converging to a minimum compared to sigmoid
+
+Using relu...well, the results were very, VERY horrible
+Here's the outputs with the different activations (Learning rate: 0.1, Epochs: 5, MiniBatch size: 10, Momentum: 0.9)
+
+Sigmoid training
+
+
+[Sigmoid for hidden layers, sigmoid for output layer]
+Epoch: 1/5 (14.3271 s): Loss: 0.0685, Accuracy: 0.6231, Learning rate: 0.10000
+Epoch: 2/5 (14.0060 s): Loss: 0.0503, Accuracy: 0.6281, Learning rate: 0.10000
+Epoch: 3/5 (14.0081 s): Loss: 0.0482, Accuracy: 0.6382, Learning rate: 0.10000
+Epoch: 4/5 (13.8516 s): Loss: 0.0471, Accuracy: 0.7085, Learning rate: 0.10000
+Epoch: 5/5 (13.9411 s): Loss: 0.0374, Accuracy: 0.7990, Learning rate: 0.10000
+
+
+Tanh training
+
+
+[Tanh for hidden layers, sigmoid for output layer]
+Epoch: 1/5 (13.7553 s): Loss: 0.3708, Accuracy: 0.4171, Learning rate: 0.10000
+Epoch: 2/5 (13.7666 s): Loss: 0.2580, Accuracy: 0.4623, Learning rate: 0.10000
+Epoch: 3/5 (13.5550 s): Loss: 0.2289, Accuracy: 0.4824, Learning rate: 0.10000
+Epoch: 4/5 (13.7311 s): Loss: 0.2211, Accuracy: 0.5729, Learning rate: 0.10000
+Epoch: 5/5 (13.6996 s): Loss: 0.2142, Accuracy: 0.5779, Learning rate: 0.10000
+
+
+Relu training
+
+
+[Relu for hidden layers, sigmoid for output layer]
+Epoch: 1/5 (14.2100 s): Loss: 0.7725, Accuracy: 0.0854, Learning rate: 0.10000
+Epoch: 2/5 (14.6218 s): Loss: 0.1000, Accuracy: 0.0854, Learning rate: 0.10000
+Epoch: 3/5 (14.2116 s): Loss: 0.1000, Accuracy: 0.0854, Learning rate: 0.10000
+Epoch: 4/5 (14.1657 s): Loss: 0.1000, Accuracy: 0.0854, Learning rate: 0.10000
+Epoch: 5/5 (14.1427 s): Loss: 0.1000, Accuracy: 0.0854, Learning rate: 0.10000
+
+
+Another run with relu
+
+
+Epoch: 1/5 (14.7391 s): Loss: 15.4055, Accuracy: 0.1658, Learning rate: 0.10000
+Epoch: 2/5 (14.8203 s): Loss: 59.2707, Accuracy: 0.1709, Learning rate: 0.10000
+Epoch: 3/5 (15.3785 s): Loss: 166.1310, Accuracy: 0.1407, Learning rate: 0.10000
+Epoch: 4/5 (14.9285 s): Loss: 109.9386, Accuracy: 0.1859, Learning rate: 0.10000
+Epoch: 5/5 (15.1280 s): Loss: 158.9268, Accuracy: 0.1859, Learning rate: 0.10000
+
+
+For these examples the epochs are just 5 but incrementing the epochs the results dont change, tanh and relu for me perform worse than sigmoid.
+
+Here is my python code reference for SGD:
+
+SGD with momentum
+
+This method was created to accept different activation functions to dynamically use them when creating the neural network object
+
+The activation functions and their derivatives:
+
+Activation functions and derivatives
+
+The loss function i used is the mean squared error:
+
+
+def mean_squared(output, expected_result):
+ return numpy.sum((output - expected_result) ** 2) / expected_result.shape[0]
+
+
+def mean_squared_derivative(output, expected_result):
+ return output - expected_result
+
+
+
+Is there some concept i am missing? Am i using the activation functions the wrong way? I really cannot find the answer to this even after searching for a long time.
+I feel like the problem is somewhere in the backpropagation but i can't find it.
+Any kind of help would be greatly appriciated
+
+PS: I hope i posted this in the right place, i am pretty new to asking questions here, so if there is any problem i will move the question somewhere else
+
+Edit:
+
+I tried to implement this with tensorflow, using relu for hidden layers and sigmoid for output. The results i get with this implementation are the same as the ones i mentioned in my question, so unless i am doing something wrong in both situations i am left to think i cannot use relu with sigmoid, which makes sense cause relu can have very high values while sigmoid pushes them down between 0 and 1, therefore most of the times giving values very close to 1.
+Code reference:
+TensorFlow implementation
+"
+"['machine-learning', 'datasets', 'terminology']"," Title: What are ""proxy data sets"" in machine learning?Body: The paper Assessment of Deep Generative Models for High-Resolution Synthetic Retinal Image Generation of Age-Related Macular Degeneration uses the term ""proxy data sets"" in this way
+
+
+ To develop DL techniques for synthesizing high-resolution realistic fundus images serving as proxy data sets for use by retinal specialists and DL machines.
+
+
+I googled that term, but didn't find a definition of ""proxy data sets"". What are ""proxy data sets"" in machine learning?
+
+The paper Analysis of Manufacturing Process Sequences, Using Machine Learning on Intermediate Product States (as Process Proxy Data) mentions a similar term
+
+
+ The advantage of the product state-based view is the focus on the product itself to structure the information and data involved throughout the process. Using the intermediate product states as proxy data for this purpose
+
+
+Does ""proxy data"" mean the same thing as ""proxy data sets"" does?
+"
+"['machine-learning', 'pattern-recognition']"," Title: How I can identify holes in a 3D CAD file?Body: How I can identify holes in a 3D CAD file? I want to identify different types of holes, counterbored or countersunk holes. My program lets me extract, for example, the faces and adjacency of the faces.
+I am talking about Siemens NX, for example.
+The different types of holes can be seen here:
+https://www.google.com/url?sa=i&url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D_VwwfDrdggc&psig=AOvVaw1kIYdOt2qxazSYFXwHuXpb&ust=1586329824363000&source=images&cd=vfe&ved=0CAIQjRxqFwoTCOjd1qHh1egCFQAAAAAdAAAAABAJ
+"
+"['terminology', 'backpropagation', 'history', 'multilayer-perceptrons']"," Title: Why is it called back-propagation?Body: While looking at the mathematics of the back-propagation algorithm for a multi-layer perceptron, I noticed that in order to find the partial derivative of the cost function with respect to a weight (say $w$) from any of the hidden layers, we're just writing the error function from the final outputs in terms of the inputs and hidden layer weights and then canceling all the terms without $w$ in it as differentiating those terms with respect to $w$ would give zero.
+
+Where is the back-propagation of error while doing this? This way, I can find the partial derivatives of the first hidden layer first and then go towards the other ones if I wanted to. Is there some other method of going about it so that the Back Propagation concept comes into play? Also, I'm looking for a general method/algorithm, not just for 1-2 hidden layers.
+
+I'm fairly new to this and I'm just following what's being taught in class. Nothing I found on the internet seems to have proper notation so I can't understand what they're saying.
+"
+"['machine-learning', 'python', 'feature-selection', 'signal-processing', 'feature-extraction']"," Title: Feature extraction for exponentially damped signalsBody: I am looking into exponentially damped signals where it is a stationary signal (after implementing the Adfuller statistical test) and I would like to look into how can I extract meaningful features out of the signal in order to do pattern recognition with machine learning. Can anyone guide me on where I can find articles/blogs of signal processing techniques and feature extraction of exponentially damped signal?
+
+My situation:
+
+I want to look into features that relate to the damping of the signal. I already looked at it in the frequency domain and I found out that, from my datasets (considering the first 3 natural frequencies/modes), the peaks are almost the same (there is roughly the same deviation of only about ±0.5 from the frequency values). Looking into the damping factor, I found out that only the second damping ratio was different, but still with a small deviation of around ±0.5. So, I thought that it would be difficult for machine learning to identify the difference between cases. One of my ideas is to look into energy dissipation, as it might be related to damping, but I don't know how to approach it or from which domain I need to go in order to get the features.
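+As an illustration of one damping-related feature (a rough sketch with made-up parameters, assuming a single dominant mode and that scipy is available), the logarithmic decrement gives the damping ratio directly from consecutive peak amplitudes:
+
+import numpy as np
+from scipy.signal import find_peaks
+
+# Synthetic single-mode damped oscillation (illustrative values only)
+fs, f0, zeta_true = 1000.0, 20.0, 0.03
+t = np.arange(0, 2.0, 1.0 / fs)
+x = np.exp(-zeta_true * 2 * np.pi * f0 * t) * np.sin(2 * np.pi * f0 * np.sqrt(1 - zeta_true**2) * t)
+
+peaks, _ = find_peaks(x)
+delta = np.log(x[peaks[0]] / x[peaks[1]])            # logarithmic decrement between consecutive peaks
+zeta_est = delta / np.sqrt(4 * np.pi**2 + delta**2)  # damping ratio estimate, should be close to 0.03
+print(zeta_est)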
+
+Side Question:
+
+I have several questions regarding signal processing:
+
+
+- Say I have a signal and would like to extract features from it: what steps or points should I know in order to implement signal processing? (I am using Python.)
+- When I used a signaltonoise function found online (Python) to check the signal-to-noise ratio, I got a positive SNR. However, if I pass the signal through, for example, a band-pass filter to concentrate on a certain frequency band, I get a negative SNR. Why is that?
+- How can I extract features from the STFT? I also know about wavelets and the HHT; what are the uses of both algorithms, and how can features also be extracted from them?
+
+"
+"['natural-language-processing', 'long-short-term-memory', 'vanishing-gradient-problem']"," Title: How do LSTM and GRU avoid to overcome the vanishing gradient problem?Body: I'm watching the video Recurrent Neural Networks (RNN) | RNN LSTM | Deep Learning Tutorial | Tensorflow Tutorial | Edureka where the author says that the LSTM and GRU architecture help to reduce the vanishing gradient problem. How do LSTM and GRU prevent the vanishing gradient problem?
+"
+"['neural-networks', 'backpropagation']"," Title: How to perform back propagation with different sized layers?Body: I'm developing my first neural network, using the well known MNIST database of handwritten digit. I want the NN to be able to classify a number from 0 to 9 given an image.
+
+My neural network consists of three layers: the input layer (784 neurons, one for every pixel of the digit), a hidden layer of 30 neurons (it could also be 100 or 50, but I'm not too worried about hyperparameter tuning yet), and the output layer of 10 neurons, each one representing the activation for every digit. That gives me two weight matrices: one of 30x784 and a second one of 10x30.
+
+I know and understand the theory behind back propagation, optimization and the mathematical formulas behind that, that's not a problem as such. I can optimize the weights for the second matrix of weights, and the cost is indeed being reduced over time. But I'm not able to keep propagating that back because of the matrix structure.
+
+Knowing that, I have to find the derivative of the cost w.r.t. the weights:
+
+d(cost) / d(w) = d(cost) / d(f(z)) * d(f(z)) / d(z) * d(z) / d(w)
+
+
+(Being f the activation function and z the dot product plus the bias of a neuron.)
+
+So I'm in the rightmost layer, with an output array of 10 elements. d(cost) / d(f(z)) is the subtraction of the observed and predicted values. I can multiply that by d(f(z)) / d(z), which is just f'(z) of the rightmost layer, also a one-dimensional vector of 10 elements, having now d(cost) / d(z) calculated. Then, d(z) / d(w) is just the input to that layer, i.e. the output of the previous one, which is a vector of 30 elements. I figured that I can transpose d(cost) / d(z) so that T( d(cost) / d(z) ) * d(z) / d(w) gives me a matrix of (10, 30), which makes sense because it matches the dimension of the rightmost weight matrix.
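+A minimal numpy sketch of the shape algebra just described (random placeholders instead of real activations):
+
+import numpy as np
+
+a_hidden = np.random.rand(30)        # output of the hidden layer, i.e. d(z)/d(w) for the last layer
+d_cost_d_pred = np.random.rand(10)   # observed minus predicted, shape (10,)
+d_pred_d_z = np.random.rand(10)      # f'(z) of the output layer, shape (10,)
+
+d_cost_d_z = d_cost_d_pred * d_pred_d_z       # shape (10,)
+d_cost_d_w = np.outer(d_cost_d_z, a_hidden)   # shape (10, 30), matches the rightmost weight matrix
+print(d_cost_d_w.shape)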
+
+But then I get stuck. The dimension of d(cost) / d(f(z)) is (1, 10), for d(f(z)) / d(z) it is (1, 30), and for d(z) / d(w) it is (1, 784). I don't know how to come up with a result for this.
+
+This is what I've coded so far. The incomplete part is the _propagate_back method. I'm not caring about the biases yet, because I'm just stuck with the weights and first I want to figure this out.
+
+
+
+import random
+from typing import List, Tuple
+
+import numpy as np
+from matplotlib import pyplot as plt
+
+import mnist_loader
+
+np.random.seed(42)
+
+NETWORK_LAYER_SIZES = [784, 30, 10]
+LEARNING_RATE = 0.05
+BATCH_SIZE = 20
+NUMBER_OF_EPOCHS = 5000
+
+
+def sigmoid(x):
+ return 1 / (1 + np.exp(-x))
+
+
+def sigmoid_der(x):
+ return sigmoid(x) * (1 - sigmoid(x))
+
+
+class Layer:
+
+ def __init__(self, input_size: int, output_size: int):
+ self.weights = np.random.uniform(-1, 1, [output_size, input_size])
+ self.biases = np.random.uniform(-1, 1, [output_size])
+ self.z = np.zeros(output_size)
+ self.a = np.zeros(output_size)
+ self.dz = np.zeros(output_size)
+
+ def feed_forward(self, input_data: np.ndarray):
+ input_data_t = np.atleast_2d(input_data).T
+ dot_product = self.weights.dot(input_data_t).T[0]
+ self.z = dot_product + self.biases
+ self.a = sigmoid(self.z)
+ self.dz = sigmoid_der(self.z)
+
+
+class Network:
+
+ def __init__(self, layer_sizes: List[int], X_train: np.ndarray, y_train: np.ndarray):
+ self.layers = [
+ Layer(input_size, output_size)
+ for input_size, output_size
+ in zip(layer_sizes[0:], layer_sizes[1:])
+ ]
+ self.X_train = X_train
+ self.y_train = y_train
+
+ @property
+ def predicted(self) -> np.ndarray:
+ return self.layers[-1].a
+
+ def _normalize_y(self, y: int) -> np.ndarray:
+ output_layer_size = len(self.predicted)
+ normalized_y = np.zeros(output_layer_size)
+ normalized_y[y] = 1.
+
+ return normalized_y
+
+ def _calculate_cost(self, y_observed: np.ndarray) -> int:
+ y_observed = self._normalize_y(y_observed)
+ y_predicted = self.layers[-1].a
+
+ squared_difference = (y_predicted - y_observed) ** 2
+
+ return np.sum(squared_difference)
+
+ def _get_training_batches(self, X_train: np.ndarray, y_train: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
+ train_batch_indexes = random.sample(range(len(X_train)), BATCH_SIZE)
+
+ return X_train[train_batch_indexes], y_train[train_batch_indexes]
+
+ def _feed_forward(self, input_data: np.ndarray):
+ for layer in self.layers:
+ layer.feed_forward(input_data)
+ input_data = layer.a
+
+ def _propagate_back(self, X: np.ndarray, y_observed: int):
+ """"""
+ der(cost) / der(weight) = der(cost) / der(predicted) * der(predicted) / der(z) * der(z) / der(weight)
+ """"""
+ y_observed = self._normalize_y(y_observed)
+ d_cost_d_pred = self.predicted - y_observed
+
+ hidden_layer = self.layers[0]
+ output_layer = self.layers[1]
+
+ # Output layer weights
+ d_pred_d_z = output_layer.dz
+ d_z_d_weight = hidden_layer.a # Input to the current layer, i.e. the output from the previous one
+
+ d_cost_d_z = d_cost_d_pred * d_pred_d_z
+ d_cost_d_weight = np.atleast_2d(d_cost_d_z).T * np.atleast_2d(d_z_d_weight)
+
+ output_layer.weights -= LEARNING_RATE * d_cost_d_weight
+
+ # Hidden layer weights
+ d_pred_d_z = hidden_layer.dz
+ d_z_d_weight = X
+
+ # ...
+
+ def train(self, X_train: np.ndarray, y_train: np.ndarray):
+ X_train_batch, y_train_batch = self._get_training_batches(X_train, y_train)
+ cost_over_epoch = []
+
+ for epoch_number in range(NUMBER_OF_EPOCHS):
+ X_train_batch, y_train_batch = self._get_training_batches(X_train, y_train)
+
+ cost = 0
+ for X_sample, y_observed in zip(X_train_batch, y_train_batch):
+ self._feed_forward(X_sample)
+ cost += self._calculate_cost(y_observed)
+ self._propagate_back(X_sample, y_observed)
+
+ cost_over_epoch.append(cost / BATCH_SIZE)
+
+ plt.plot(cost_over_epoch)
+ plt.ylabel('Cost')
+ plt.xlabel('Epoch')
+ plt.savefig('cost_over_epoch.png')
+
+
+training_data, validation_data, test_data = mnist_loader.load_data()
+X_train, y_train = training_data[0], training_data[1]
+
+network = Network(NETWORK_LAYER_SIZES, training_data[0], training_data[1])
+network.train(X_train, y_train)
+
+
+This is the code for mnist_loader, in case someone wanted to reproduce the example:
+
+import pickle
+import gzip
+
+
+def load_data():
+ f = gzip.open('data/mnist.pkl.gz', 'rb')
+ training_data, validation_data, test_data = pickle.load(f, encoding='latin-1')
+ f.close()
+
+ return training_data, validation_data, test_data
+
+
+"
+"['natural-language-processing', 'math', 'logic', 'natural-language-understanding', 'automated-theorem-proving']"," Title: What are the challenges faced by using NLP to convert mathematical texts into formal logic?Body: From what I've figured
+
+(a) converting mathematical theorems and proofs from English to formal logic is a straightforward job for mathematicians with sufficient background, except that it takes time.
+
+(b) once converted to formal logic, computer verification of the proof becomes straightforward.
+
+If we can automate (a), a lot of time and intellectual labour (that could be dedicated elsewhere) is saved in doing (b) on published research papers.
+
+Note that if solving (a) in its entirety is hard, we could expect the mathematicians to meet the computer system halfway and avoid writing lengthy English paragraphs that are hard to convert. If it becomes doable enough, submitting a formal logical version of your paper could even become a standard procedure that is expected.
+
+Additional benefit of solving (a) would be to do the process in reverse: mathematicians could delegate smaller tasks and lemmas (both trivial and non-trivial tasks) to an automated theorem prover (ATP). Assisted theorem proving will become more popular and boost productivity, maybe even surprise us once in a while by coming up with proofs that the paper writer couldn't. This is further of value if we predict a sharp upward trajectory of the capability of ATPs in the future. If anything, this could be self-fulfilling, as the demonstration of potential for good ATPs combined by a large corpus of proofs and problems in formal logical format could drive an increase in research on ATPs.
+
+Forgive me if I sound like a salesman, but how doable is this? What will be the main challenges faced in developing NLP-based AI to convert papers, and how tractable are these challenges given today's state of the field?
+
+P.S. I understand that proofs generated by ATPs are often hard to understand intuitively and can end up proving results without clearly exposing the underlying proof method used. But it is still a benefit to be able to use the final results.
+"
+"['neural-networks', 'recurrent-neural-networks']"," Title: Is this a correct visual representation of a recurrent neural network (RNN)?Body:
+
+This is a picture of a recurrent neural network (RNN) found on a udemy course (Deep Learning A-Z). The axis at the bottom is ""time"".
+
+In a time series problem, each yellow row from left to right would represent a sequence of a feature. In this picture, then, there are 6 sequences from 6 different features that are being fed to the network.
+
+I am wondering if the arrows in this picture are completely accurate in an RNN. Shouldn't every yellow node also connect to every other blue node along its depth dimension? By depth dimension here I mean the third dimensional axis of the input tensor.
+
+For example, the yellow node at the bottom left of this picture, which is closest to the viewer, should have an arrow pointing to all the blue nodes in the array of blue nodes that is at the very left, and not just to the blue node directly above it.
+"
+"['ai-design', 'classification', 'support-vector-machine']"," Title: Why is my SVM not reaching good accuracy when trained to perform binary classification of search results?Body: I am trying to perform binary classification of search results based on the relevance to the query.
+I followed this tutorial on how to make an SVM, and I got it to work with a small iris dataset. Now, I am attempting to use the LETOR 4.0 MQ2007 dataset by Microsoft to classify. The dataset has 21 input features as well as a score from 0 to 2. I classified 0 as -1, and 1 and 2 as 1. My algorithm reaches 57.4% accuracy after 1000 epochs with 500 samples of each classification. My learning rate is 0.0001. Here is my code.
+
+from tqdm import tqdm
+import numpy as np
+from sklearn.metrics import accuracy_score
+
+
+print(""-------------------------------------"")
+choice = input(""Train or Test: "")
+print(""-------------------------------------"")
+
+# HYPERPARAMETERS
+feature_num = 21
+epochs = 1000
+sample_size = 500
+learning_rate = 0.0001
+
+if choice == ""Train"":
+
+ out_file = open('weights.txt', 'w')
+ out_file.close()
+
+ print(""Serializing Train Data..."")
+
+ # SERIALIZE DATA
+ file = open('train.txt')
+ train_set = file.read().splitlines()
+ positive = []
+ negative = []
+
+ # GRAB TRAINING SAMPLES
+ for i in train_set:
+ if (i[0] == '1' or i[0] == '2') and len(positive) < sample_size:
+ positive.append(i)
+ if (i[0] == '0') and len(negative) < sample_size:
+ negative.append(i)
+
+ train_set = positive+negative
+ file.close()
+
+ features = []
+ query = []
+
+ # CREATE TRAINING VECTORS
+ alpha = np.full(feature_num, learning_rate)
+ weights = np.zeros((len(train_set), feature_num))
+ output = np.zeros((len(train_set), feature_num))
+ score = np.zeros((len(train_set), feature_num))
+
+ for i in tqdm(range(len(train_set))):
+ elements = train_set[i].split(' ')
+ if int(elements[0]) == 0:
+ score[i] = [-1] * feature_num
+ else:
+ score[i] = [1] * feature_num
+
+ query.append(int(elements[1].split(':')[1]))
+ tmp = []
+ for feature in elements[2:2+feature_num]:
+ if feature.split(':')[1] == 'NULL':
+ tmp.append(0.0)
+ else:
+ tmp.append(float(feature.split(':')[1]))
+ features.append(tmp)
+
+ features = np.asarray(features)
+
+ print(""-------------------------------------"")
+ print(""Training Initialized..."")
+
+ # TRAIN MODEL
+ for i in tqdm(range(epochs)):
+
+ # FORWARD y = sum(wx)
+ for sample in range(len(train_set)):
+ output[sample] = weights[sample]*features[sample]
+ output[sample] = np.full((feature_num), np.sum(output[sample]))
+
+ # NORMALIZE NEGATIVE SIGNS
+ output = output*score
+ # UPDATE WEIGHTS
+ count = 0
+ for val in output:
+ if(val[0] >= 1):
+ cost = 0
+ weights = weights - alpha * (2 * 1/epochs * weights)
+ else:
+ cost = 1 - val[0]
+ # WEIGHTS = WEIGHTS + LEARNING RATE * [X] * [Y]
+ weights = weights + alpha * (features[count] * score[count] - 2 * 1/epochs * weights)
+
+ count += 1
+
+ # EXPORT WEIGHTS
+ out_file = open('weights.txt', 'a+')
+ for i in weights[0]:
+ out_file.write(str(i)+'\n')
+ out_file.close()
+
+elif choice == ""Test"":
+
+ print(""Serializing Test Data..."")
+
+ # SERIALIZE DATA
+ file = open('train.txt')
+ train_set = file.read().splitlines()
+ positive = []
+ negative = []
+ for i in train_set:
+ if (i[0] == '1' or i[0] == '2') and len(positive) < sample_size:
+ positive.append(i)
+ if (i[0] == '0') and len(negative) < sample_size:
+ negative.append(i)
+
+ test_set = positive+negative
+
+ file = open('weights.txt', 'r').read().splitlines()
+ weights = np.zeros((len(test_set), feature_num))
+
+ # CREATE TEST SET
+ for i in range(len(weights)):
+ weights[i] = file
+ features = []
+ query = []
+ output = np.zeros((len(test_set), feature_num))
+ score = np.zeros((len(test_set)))
+
+ for i in tqdm(range(len(test_set))):
+ elements = test_set[i].split(' ')
+ if int(elements[0]) == 0:
+ score[i] = -1
+ else:
+ score[i] = 1
+
+ query.append(int(elements[1].split(':')[1]))
+ tmp = []
+ for feature in elements[2:2+feature_num]:
+ if feature.split(':')[1] == 'NULL':
+ tmp.append(0.0)
+ else:
+ tmp.append(float(feature.split(':')[1]))
+ features.append(tmp)
+
+ features = np.asarray(features)
+
+ for sample in range(len(test_set)):
+ output[sample] = weights[sample]*features[sample]
+ output[sample] = np.full((feature_num), np.sum(output[sample]))
+
+ predictions = []
+ for val in output:
+ if(val[0] > 1):
+ predictions.append(1)
+ else:
+ predictions.append(-1)
+
+ print(""-------------------------------------"")
+ print(""Predicting..."")
+ print(""-------------------------------------"")
+ print(""Prediction finished with ""+str(accuracy_score(score, predictions)*100)+""% accuracy."")
+
+
+
+My training algorithm
+
+if(val[0] >= 1):
+ cost = 0
+ weights = weights - alpha * (2 * 1/epochs * weights)
+else:
+ cost = 1 - val[0]
+ # WEIGHTS = WEIGHTS + LEARNING RATE * [X] * [Y]
+ weights = weights + alpha * (features[count] * score[count] - 2 * 1/epochs * weights)
+
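+For reference, the textbook soft-margin (hinge loss) SGD step for a single sample looks like the following sketch; the regularization strength lambda_reg here is an assumed hyperparameter standing in for the 1/epochs factor above:
+
+import numpy as np
+
+learning_rate, lambda_reg = 0.0001, 0.01   # assumed values, for illustration only
+w = np.zeros(21)
+x, y = np.random.rand(21), 1.0             # one feature vector and its label in {-1, +1}
+
+if y * np.dot(w, x) >= 1:
+    w -= learning_rate * (2 * lambda_reg * w)          # margin satisfied: only shrink the weights
+else:
+    w -= learning_rate * (2 * lambda_reg * w - y * x)  # margin violated: shrink and move towards the sample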
+
+
+What could I do to help the model train? Am I not giving it enough time? Is the algorithm wrong? Are the hyperparameters ok?
+Thanks for all your help.
+"
+"['machine-learning', 'cross-validation', 'validation-datasets']"," Title: What is the theoretical basis for the use of a validation set?Body: Let's say we use an MLE estimator (implementation doesn't matter) and we have a training set. We assume that we have sampled the training set from a Gaussian distribution $\mathcal N(\mu, \sigma^2)$.
+Now, we split the dataset into training, validation and test sets. The result will be that each will have maximum likelihoods for the following Gaussian distributions $\mathcal N(\mu_{training}, \sigma^2_{training}), \mathcal N(\mu_{validation}, \sigma^2_{validation}), \mathcal N(\mu_{test}, \sigma^2_{test})$.
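+(A quick numerical illustration of this: draw one sample from $\mathcal N(0, 1)$ and split it; the three empirical means will generally all differ.)
+
+import numpy as np
+
+rng = np.random.default_rng(0)
+data = rng.normal(loc=0.0, scale=1.0, size=300)   # the assumed true distribution N(0, 1)
+train, val, test = data[:180], data[180:240], data[240:]
+print(train.mean(), val.mean(), test.mean())      # three different empirical means, as described above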
+Now, let's assume the case where $\mu_{validation}<\mu_{training}<\mu_{test}$ and $\mu_{training}<\mu<\mu_{test}$.
+Clearly, if we perform validation using this split, then the model that gets selected will be closer to $\mu_{validation}$, which will worsen the performance on actual data, whereas if we only used the training set, the performance could actually be better (this is the simplest case without taking into account the effect of variance).
+So, we will have $4!$ combinations between the means, and each one might improve or worsen the performance (probably in $50 \%$ of cases performance will be worsened, assuming symmetry).
+So, what am I missing here? Were my aforementioned assumptions wrong? Or does the validation set have a completely different purpose?
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks']"," Title: CNN High Variance across multiple trained models, what does it mean?Body: Background:
+
+I have a 2D CNN model that I am applying to a regression task with some uniquely extracted spectrograms. The specifics of the data set are mostly irrelevant and very domain specific, so I won't go into detail, but it is essentially just image classification with an MSE loss function for each label and a unique image of 100x4000. When I re-train the model from scratch multiple times and then provide it my testing data set, it has predictions that vary significantly across each iteration and thus a high variance. Supposedly, the only difference between one trained model and another would be the random initialization of weights and the random train/validation split. I feel that the train/validation split has been ruled out because, when I've done k-fold cross-validation, my model has done very well for all segments of my train/validation splits and acquired good results for the validation in each split. But these same models persist in having high variance on the test data set.
+
+Question:
+
+If I am seeing a high variance for the predictions from my trained model across multiple different runs of re-training, what do I attack first to reduce my variance on my predictions for my test data set?
+
+I've found many articles talking about bias and variance in the data set but not as much criticism directed towards model design. What things can I explore in my dataset or model design, and/or tools I can use to strengthen my model? Does my model need to be bigger/smaller?
+
+Ideas/Solutions:
+A few Ideas I'd like to acquire some criticism for.
+
+
+- Regularization applied to model such as L1/L2 Regularization, dropout, or early stopping.
+- Data augmentation applied to dataset (inconveniently not an option right now, but in a more general scenario it could be).
+- Bigger or smaller model?
+- Is the random initialization of weight actually very important? Maybe train multiple models and take the average of their collective answers to get the best prediction on real world data (test set).
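+A minimal sketch of that averaging idea (placeholder predictions instead of real models, just to show the aggregation and the across-model spread):
+
+import numpy as np
+
+# In practice each row would come from model_i.predict(test_set); here they are placeholders
+per_model_predictions = np.random.uniform(1, 4, size=(10, 50))   # 10 models x 50 test samples
+
+ensemble_prediction = per_model_predictions.mean(axis=0)     # average over models
+across_model_variance = per_model_predictions.var(axis=0)    # the variance across models discussed above
+print(ensemble_prediction[:5], across_model_variance[:5])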
+
+
+Personal Note:
+I have had experience with all these items before on other projects and with personal projects and have some moderate confidence justifying regularization and data augmentation. However, I lack some perspective as to any other tools that might be useful to explore the cause of model variance. I wanted to ask this question here to start a discussion in a general sense of this problem.
+
+Cheers
+
+EDIT: CLARIFICATION. When I say 'variance' I mean specifically variance across models, not variance of predictions from one trained model across the test set. Example: let's say I am trying to predict a value somewhere between 1 and 4 (expected_val=3). I train 10 models to do this, and 4 of the models accurately predict 3 with a VERY low standard deviation across all the test set samples. Thus there is a low variance and high accuracy/precision for these 4 models. But the other 6 models predict wildly: some predict 1 very confidently every time and the others could be 4. And I've even had models that predicted negative values, even though I have NO training or testing samples that have negative labels.
+"
+"['neural-networks', 'deep-learning', 'backpropagation', 'computational-complexity', 'symbolic-computing']"," Title: What is symbol-to-number differentiation?Body: I recently came across symbol-to-symbol and symbol-to-number differentiation, out of which symbol to symbol seemed fairly straightforward - the computational graph is extended to include gradient calculations and relationships between gradients.
+
+I have a problem in understanding what exactly symbol-to-number differentiation is. Does it map directly every variable in the backprop to its relevant gradient? If yes, how does it do this without knowing about the rest of the computational graph?
+
+If the question is unclear, to increase context - TensorFlow uses symbol to symbol differentiation whereas torch uses symbol to number (apparently).
+
+Came across this in section 6.5.5 of Deep Learning, the book. The material mentioned has a convincing explanation for symbol-to-symbol differentiation but could not say the same for symbol-to-number differentiation.
+"
+"['computer-vision', 'image-recognition', 'terminology']"," Title: What is a landmark in computer vision?Body: I guess I understand the concept of face detection, a technique specifies the location of multiple objects in the image, and draws bounding boxes on the target.
+
+
+
+The question is related to the concept of a landmark. For example, the bottom guy in the image above, pointed out by the red arrow, has 18 green dots on his face. Is any one of the dots a landmark?
+
+What is the size of a landmark? What is the acceptable error of its position? For example, the landmark in the middle of his nose has to be in what kind of range?
+
+Could someone please give a hint?
+"
+"['natural-language-processing', 'python', 'training', 'sentiment-analysis']"," Title: Should I be balancing the data before creating the vocab-to-index dictionary?Body: My question is about when to balance training data for sentiment analysis.
+
+Upon evaluating my training dataset, which has 3 labels (good, bad, neutral), I noticed there were twice as many neutral labels as the other 2 combined, so I used a function to drop neutral labels randomly.
+
+However, I wasn't sure if I should do this before or after creating the vocab2index mappings.
+
+To explain, I am numericizing my text data by creating a vocabulary of words in the training data and linking them to numbers using enumerate. I then use that dictionary of vocab2index values to numericise the training data. I also use that same dictionary to numericise the testing data, dropping any words that do not exist in the dictionary.
+
+When I took a class on this, they had balanced the training data AFTER creating the vocab2index dictionary. However, when I thought about this in my own implementation, it did not make sense. What if some words from the original vocabulary are gone completely? Then we aren't training the machine learning classifier on those words, but they would not be dropped from the testing data either (since words are dropped from X_test based on whether they are in the vocab2index dictionary).
+
+So should I be balancing the data BEFORE creating the vocab2index dictionary?
+
+I linked the code to create X_train and X_test below in case it helps.
+
+def create_X_train(training_data='Sentences_75Agree_csv.csv'):
+ data_csv = pd.read_csv(filepath_or_buffer=training_data, sep='.@', header=None, names=['sentence','sentiment'], engine='python')
+ list_data = []
+ for index, row in data_csv.iterrows():
+ dictionary_data = {}
+ dictionary_data['message_body'] = row['sentence']
+ if row['sentiment'] == 'positive':
+ dictionary_data['sentiment'] = 2
+ elif row['sentiment'] == 'negative':
+ dictionary_data['sentiment'] = 0
+ else:
+ dictionary_data['sentiment'] = 1 # For neutral sentiment
+ list_data.append(dictionary_data)
+ dictionary_data = {}
+ dictionary_data['data'] = list_data
+ messages = [sentence['message_body'] for sentence in dictionary_data['data']]
+ sentiments = [sentence['sentiment'] for sentence in dictionary_data['data']]
+
+ tokenized = [preprocess(sentence) for sentence in messages]
+ bow = Counter([word for sentence in tokenized for word in sentence])
+ freqs = {key: value/len(tokenized) for key, value in bow.items()} #keys are the words in the vocab, values are the count of those words
+
+ # Removing 5 most common words from data
+ high_cutoff = 5
+ K_most_common = [x[0] for x in bow.most_common(high_cutoff)]
+ filtered_words = [word for word in freqs if word not in K_most_common]
+
+ # Create vocab2index dictionary:
+ vocab = {word: i for i, word in enumerate(filtered_words, 1)}
+ id2vocab = {i: word for word, i in vocab.items()}
+ filtered = [[word for word in sentence if word in vocab] for sentence in tokenized]
+
+ # Balancing training data due to large number of neutral sentences
+ balanced = {'messages': [], 'sentiments':[]}
+ n_neutral = sum(1 for each in sentiments if each == 1)
+ N_examples = len(sentiments)
+ # print(n_neutral/N_examples)
+ keep_prob = (N_examples - n_neutral)/2/n_neutral
+ # print(keep_prob)
+ for idx, sentiment in enumerate(sentiments):
+ message = filtered[idx]
+ if len(message) == 0:
+ # skip this sentence because it has length 0
+ continue
+ elif sentiment != 1 or random.random() < keep_prob:
+ balanced['messages'].append(message)
+ balanced['sentiments'].append(sentiment)
+
+ token_ids = [[vocab[word] for word in message] for message in balanced['messages']]
+ sentiments_balanced = balanced['sentiments']
+
+ # Unit test:
+ unique, counts = np.unique(sentiments_balanced, return_counts=True)
+ print(np.asarray((unique, counts)).T)
+ print(np.mean(sentiments_balanced))
+ ##################
+
+ # Left padding and truncating to the same length
+ X_train = token_ids
+ for i, sentence in enumerate(X_train):
+ if len(sentence) <=30:
+ X_train[i] = ((30-len(sentence)) * [0] + sentence)
+ elif len(sentence) > 30:
+ X_train[i] = sentence[:30]
+ return vocab, X_train, sentiments_balanced
+
+
+def create_X_test(test_sentences, vocab):
+ tokenized = [preprocess(sentence) for sentence in test_sentences]
+ filtered = [[word for word in sentence if word in vocab] for sentence in tokenized] # X_test filtered to only words in training vocab
+ # Alternate method with functional programming:
+ # filtered = [list(filter(lambda a: a in vocab, sentence)) for sentence in tokenized]
+ token_ids = [[vocab[word] for word in sentence] for sentence in filtered] # Numericise data
+
+ # Remove short sentences in X_test
+ token_ids_filtered = [sentence for sentence in token_ids if len(sentence)>10]
+ X_test = token_ids_filtered
+ for i, sentence in enumerate(X_test):
+ if len(sentence) <=30:
+ X_test[i] = ((30-len(sentence)) * [0] + sentence)
+ elif len(sentence) > 30:
+ X_test[i] = sentence[:30]
+ return X_test
+
+"
+"['neural-networks', 'classification', 'text-classification', 'bayesian-deep-learning', 'bayesian-neural-networks']"," Title: Are bayesian neural networks suited for text (or document) classification?Body: I've tried to do my research on Bayesian neural networks online, but I find most of them are used for image classification. This is probably due to the nature of Bayesian neural networks, which may be significantly slower than traditional artificial neural networks, so people don't use them for text (or document) classification. Am I right? Or is there a more specific reason for that?
+
+Are bayesian neural networks suited for text (or document) classification?
+"
+"['machine-learning', 'data-preprocessing', 'geometric-deep-learning', 'graph-neural-networks']"," Title: How should I deal with multi-dimensional tensors for nodes in a graph convolution network?Body: How to work with GCN when the features of each node is not a 1D vector? For example, if the graph has N nodes and each node has features of the form $C \times D \times E$.
+
+Also, is there an open-source implementation for such a case?
+"
+"['deep-learning', 'computer-vision', 'applications', 'generative-adversarial-networks']"," Title: What are some real-world products or applications that can be developed using GANs?Body: GANs have shown good progress across a wide variety of domains ranging from image translation, image generation, text to image synthesis, audio/video generation, image super-resolution and many more.
+
+Although these concepts have great research potential, what are some real-world products or applications that can be developed using GANs?
+
+A few I know are drug discovery and product customization. What else can you suggest?
+"
+"['reinforcement-learning', 'proofs', 'temporal-difference-methods', 'function-approximation', 'off-policy-methods']"," Title: Equivalence between expected parameter increments in ""Off-Policy Temporal-Difference Learning with Function Approximation""Body: I am having a hard time understanding the proof of theorem 1 presented in the "Off-Policy Temporal-Difference Learning with Function Approximation" paper.
+Let $\Delta \theta$ and $\Delta \bar{\theta}$ be the sum of the parameter increments over an episode under on-policy $T D(\lambda)$
+and importance sampled $T D(\lambda)$ respectively, assuming
+that the starting weight vector is $\theta$ in both cases. Then
+$E_{b}\left\{\Delta \bar{\theta} | s_{0}, a_{0}\right\}=E_{\pi}\left\{\Delta \theta | s_{0}, a_{0}\right\}, \quad \forall s_{0} \in \mathcal{S}, a_{0} \in \mathcal{A}$
+We know that:
+$$
+\begin{aligned}
+&\Delta \theta_{t}=\alpha\left(R_{t}^{\lambda}-\theta^{T} \phi_{t}\right) \phi_{t}\\
+&R_{t}^{\lambda}=(1-\lambda) \sum_{n=1}^{\infty} \lambda^{n-1} R_{t}^{(n)}\\
+&R_{t}^{(n)}=r_{t+1}+\gamma r_{t+2}+\cdots+\gamma^{n-1} r_{t+n}+\gamma^{n} \theta^{T} \phi_{t+n}
+\end{aligned}
+$$
+and
+$$\Delta \bar{\theta_{t}}=\alpha\left(\bar{R}_{t}^{\lambda}-\theta^{T} \phi_{t}\right) \phi_{t} \rho_{1} \rho_{2} \cdots \rho_{t}$$
+$$
+\begin{aligned}
+\bar{R}_{t}^{(n)}=& r_{t+1}+\gamma r_{t+2} \rho_{t+1}+\cdots \\
+&+\gamma^{n-1} r_{t+n} \rho_{t+1} \cdots \rho_{t+n-1} \\
+&+\gamma^{n} \rho_{t+1} \cdots \rho_{t+n} \theta^{T} \phi_{t+n}
+\end{aligned}
+$$
+And it is proven that:
+$$
+E_{b}\left\{\bar{R}_{t}^{\lambda} | s_{t}, a_{t}\right\}=E_{\pi}\left\{R_{t}^{\lambda} | s_{t}, a_{t}\right\}
+$$
+Here is the proof, it begins with:
+$E_{b}\{\Delta \bar{\theta}\}=E_{b}\left\{\sum_{t=0}^{\infty} \alpha\left(\bar{R}_{t}^{\lambda}-\theta^{T} \phi_{t}\right) \phi_{t} \rho_{1} \rho_{2} \cdots \rho_{t}\right\}$
+$=E_{b}\left\{\sum_{t=0}^{\infty} \sum_{n=1}^{\infty} \alpha(1-\lambda) \lambda^{n-1}\left(\bar{R}_{t}^{(n)}-\theta^{T} \phi_{t}\right) \phi_{t} \rho_{1} \rho_{2} \cdots \rho_{t}\right\}$.
+which I believe is incorrect since,
+$E_{b}\{\Delta \bar{\theta}\}=E_{b}\left\{\sum_{t=0}^{\infty} \alpha\left(\bar{R}_{t}^{\lambda}-\theta^{T} \phi_{t}\right) \phi_{t} \rho_{1} \rho_{2} \cdots \rho_{t}\right\}$
+$=E_{b}\left\{\sum_{t=0}^{\infty} \alpha \left(\sum_{n=1}^{\infty}(1-\lambda) \lambda^{n-1}\bar{R}_{t}^{(n)}-\theta^{T} \phi_{t}\right) \phi_{t} \rho_{1} \rho_{2} \cdots \rho_{t}\right\}$.
+and taking out the second sigma will lead to a sum over constant terms.
+Furthermore, it is claimed that in order to prove the equivalence above, it is enough to prove the equivalence below:
+$$
+\begin{array}{c}
+E_{b}\left\{\sum_{t=0}^{\infty}\left(\bar{R}_{t}^{(n)}-\theta^{T} \phi_{t}\right) \phi_{t} \rho_{1} \rho_{2} \cdots \rho_{t}\right\} \\
+=E_{\pi}\left\{\sum_{t=0}^{\infty}\left(R_{t}^{(n)}-\theta^{T} \phi_{t}\right) \phi_{t}\right\}
+\end{array}
+$$
+Which I don't understand why. and even if it is the case there are more ambiguities in the proof:
+$E_{b}\left\{\sum_{t=0}^{\infty}\left(\bar{R}_{t}^{(n)}-\theta^{T} \phi_{t}\right) \phi_{t} \rho_{1} \rho_{2} \cdots \rho_{t}\right\}$
+$$=\sum_{t=0}^{\infty} \sum_{\omega \in \Omega_{t}} p_{b}(\omega) \phi_{t} \prod_{k=1}^{t} \rho_{k} E_{b}\left\{\bar{R}_{t}^{(n)}-\theta^{T} \phi_{t} | s_{t}, a_{t}\right\}$$
+(given the Markov property, and I don't understand why Markovian property leads to conditional independence !)
+$$=\sum_{t=0}^{\infty} \sum_{\omega \in \Omega_{t}} \prod_{j=1}^{t} p_{s_{j-1}, s_{j}}^{a_{j-1}} b\left(s_{j}, a_{j}\right) \phi_{t} \prod_{k=1}^{t} \frac{\pi\left(s_{k}, a_{k}\right)}{b\left(s_{k}, a_{k}\right)} \cdot \left(E_{b}\left\{\bar{R}_{t}^{(n)} | s_{t}, a_{t}\right\}-\theta^{T} \phi_{t}\right)$$
+$$= \sum_{t=0}^{\infty} \sum_{\omega \in \Omega_{t}} \prod_{j=1}^{t} p_{s_{j-1}, s_{j}}^{a_{j-1}} \pi\left(s_{j}, a_{j}\right) \phi_{t} \cdot\left(E_{b}\left\{\bar{R}_{t}^{(n)} | s_{t}, a_{t}\right\}-\theta^{T} \phi_{t}\right)$$
+$$=\sum_{t=0}^{\infty} \sum_{\omega \in \Omega_{t}} p_{\pi}(\omega) \phi_{t}\left(E_{\pi}\left\{R^{(n)} | s_{t}, a_{t}\right\}-\theta^{T} \phi_{t}\right)$$
+(using our previous result)
+$$=E_{\pi}\left\{\sum_{t=0}^{\infty}\left(R_{t}^{(n)}-\theta^{T} \phi_{t}\right) \phi_{t}\right\} . \diamond$$
+I'd be grateful if anyone could shed a light on this.
+"
+"['machine-learning', 'natural-language-processing', 'recurrent-neural-networks', 'long-short-term-memory', 'transformer']"," Title: Why does the transformer do better than RNN and LSTM in long-range context dependencies?Body: I am reading the article How Transformers Work where the author writes
+
+
+ Another problem with RNNs, and LSTMs, is that it’s hard to parallelize the work for processing sentences, since you have to process word by word. Not only that but there is no model of long and short-range dependencies.
+
+
+Why exactly does the transformer do better than RNN and LSTM in long-range context dependencies?
+"
+"['reinforcement-learning', 'markov-decision-process', 'papers']"," Title: How is the state-visitation frequency computed in ""Maximum Entropy Inverse Reinforcement Learning""?Body: I am trying to understand the formulation of the maximum entropy Inverse RL method by Brian Ziebart. Particularly, I am stuck on how to understand the computation of state - visitation frequencies.
+
+In order to do so, they utilize a dynamic programming approach to compute the visitation frequency, in which the next state frequency is calculated based upon the state visitation frequency at the previous time step.
+
+This is the algorithm below, where $D_{s_i,t}$ is the probability of state $s_i$ being visited at time step $t$.
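+A rough sketch of my reading of that recursion (assuming a small tabular MDP with known transition probabilities P[a, s, s'] and a stochastic policy pi[s, a]; this is not the paper's exact pseudo-code):
+
+import numpy as np
+
+n_states, n_actions, T = 5, 2, 10
+rng = np.random.default_rng(0)
+
+# Assumed toy quantities: P[a, s, :] and pi[s, :] are valid probability distributions
+P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
+pi = rng.dirichlet(np.ones(n_actions), size=n_states)
+p0 = np.full(n_states, 1.0 / n_states)            # initial state distribution
+
+D = np.zeros((n_states, T))
+D[:, 0] = p0
+for t in range(T - 1):
+    for s_next in range(n_states):
+        # D[s', t+1] = sum over s, a of D[s, t] * pi(a | s) * P(s' | s, a)
+        D[s_next, t + 1] = sum(D[s, t] * pi[s, a] * P[a, s, s_next]
+                               for s in range(n_states) for a in range(n_actions))
+
+print(D.sum(axis=1))   # expected state-visitation frequencies over the horizon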
+
+
+
+What is the difference between this way of computing state visitation frequency compared to the naive method of summing the total number of times state $s_i$ appears in the trajectory divided by the trajectory length?
+"
+"['datasets', 'decision-trees', 'naive-bayes']"," Title: What is the meaning of test data set in naive bayes classifier or decision trees?Body: What is the benefit of a test data set, especially for naive bayes estimator or decision tree construction?
+
+When using a naive bayes classifier the probabilities are a fact. As far as I know there is nothing one could tune (like the weights in a neural net). So what is the purpose of the test data set? Simply to know if one can apply naive bayes or not?
+
+Similarly, what is the benefit of the test data set when constructing a decision tree? We already use the Gini impurity to construct the best possible decision tree, and there is nothing we could do when we get bad results with the test data set.
+"
+"['python', 'hyperparameter-optimization', 'recommender-system']"," Title: Why can't I Hyper tune my KNNBasic Algorithm?Body: I've been trying to hyper tuning my KNNBasic algorithm by the help of grid search for recommendation system for movie review data. The problem is that both of my KNNBasicTuned and KNNBasicUntuned shows the same result. Here is my code for KNNTuning..
+I have tried the SVD algo tuning and it worked perfectly so all my libraries are working perfectly. However my all libraries are in my github linke : https://github.com/iSarcastic99/KNNBasicTuning
+
+Code of KNNBasicTuning :
+
+
+
+# -*- coding: utf-8 -*-
+""""""
+Created on Sat Apr 4 01:25:40 2020
+
+@author: rahulss
+""""""
+#My libraries
+
+from MovieLens import MovieLens
+from surprise import KNNBasic
+from surprise import NormalPredictor
+from Evaluator import Evaluator
+from surprise.model_selection import GridSearchCV
+
+import random
+import numpy as np
+
+#loading my working data
+def LoadMovieLensData():
+ ml = MovieLens()
+ print(""Loading movie ratings..."")
+ data = ml.loadMovieLensLatestSmall()
+ print(""\nComputing movie popularity ranks so we can measure novelty later..."")
+ rankings = ml.getPopularityRanks()
+ return (ml, data, rankings)
+
+np.random.seed(0)
+random.seed(0)
+
+# Load up common data set for the recommender algorithms
+(ml, evaluationData, rankings) = LoadMovieLensData()
+
+print(""Searching for best parameters..."")
+param_grid = {'n_epochs': [10, 30], 'lr_all': [0.005, 0.010],
+ 'n_factors': [50, 90]}
+gs = GridSearchCV(KNNBasic, param_grid, measures=['rmse', 'mae'], cv=3)
+
+gs.fit(evaluationData)
+
+# best RMSE score
+print(""Best RMSE score attained: "", gs.best_score['rmse'])
+
+# combination of parameters that gave the best RMSE score
+print(gs.best_params['rmse'])
+
+# Construct an Evaluator to, you know, evaluate them
+evaluator = Evaluator(evaluationData, rankings)
+
+params = gs.best_params['rmse']
+KNNBasictuned = KNNBasic(n_epochs = params['n_epochs'], lr_all = params['lr_all'], n_factors = params['n_factors'])
+evaluator.AddAlgorithm(KNNBasictuned, ""KNN - Tuned"")
+
+KNNBasicUntuned = KNNBasic()
+evaluator.AddAlgorithm(KNNBasicUntuned, ""KNN - Untuned"")
+
+
+# Evaluating all algorithms
+evaluator.Evaluate(False)
+
+evaluator.SampleTopNRecs(ml, testSubject=85, k=10)
+
+"
+"['neural-networks', 'machine-learning', 'tensorflow', 'ddpg']"," Title: Why does the result when restoring a saved DDPG model differ significantly from the result when saving it?Body: I save the trained model after a certain number of episodes with the special save() function of the DDPG class (the network is saved when the reward reaches zero), but when I restore the model again using saver.restore(), the network gives out a reward equal to approximately -1800. Why is this happening, maybe I'm doing something wrong?
+My network:
+
+import tensorflow as tf
+import numpy as np
+import gym
+
+epsiode_steps = 500
+
+# learning rate for actor
+lr_a = 0.001
+
+# learning rate for critic
+lr_c = 0.002
+gamma = 0.9
+alpha = 0.01
+memory = 10000
+batch_size = 32
+render = True
+
+
+class DDPG(object):
+ def __init__(self, no_of_actions, no_of_states, a_bound, ):
+ self.memory = np.zeros((memory, no_of_states * 2 + no_of_actions + 1), dtype=np.float32)
+
+ # initialize pointer to point to our experience buffer
+ self.pointer = 0
+
+ self.sess = tf.Session()
+
+ self.noise_variance = 3.0
+
+ self.no_of_actions, self.no_of_states, self.a_bound = no_of_actions, no_of_states, a_bound,
+
+ self.state = tf.placeholder(tf.float32, [None, no_of_states], 's')
+ self.next_state = tf.placeholder(tf.float32, [None, no_of_states], 's_')
+ self.reward = tf.placeholder(tf.float32, [None, 1], 'r')
+
+ with tf.variable_scope('Actor'):
+ self.a = self.build_actor_network(self.state, scope='eval', trainable=True)
+ a_ = self.build_actor_network(self.next_state, scope='target', trainable=False)
+
+ with tf.variable_scope('Critic'):
+ q = self.build_crtic_network(self.state, self.a, scope='eval', trainable=True)
+ q_ = self.build_crtic_network(self.next_state, a_, scope='target', trainable=False)
+
+ self.ae_params = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='Actor/eval')
+ self.at_params = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='Actor/target')
+
+ self.ce_params = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='Critic/eval')
+ self.ct_params = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='Critic/target')
+
+ # update target value
+ self.soft_replace = [
+ [tf.assign(at, (1 - alpha) * at + alpha * ae), tf.assign(ct, (1 - alpha) * ct + alpha * ce)]
+ for at, ae, ct, ce in zip(self.at_params, self.ae_params, self.ct_params, self.ce_params)]
+
+ q_target = self.reward + gamma * q_
+
+ td_error = tf.losses.mean_squared_error(labels=(self.reward + gamma * q_), predictions=q)
+
+ self.ctrain = tf.train.AdamOptimizer(lr_c).minimize(td_error, name=""adam-ink"", var_list=self.ce_params)
+
+ a_loss = - tf.reduce_mean(q)
+
+ # train the actor network with adam optimizer for minimizing the loss
+ self.atrain = tf.train.AdamOptimizer(lr_a).minimize(a_loss, var_list=self.ae_params)
+
+ tf.summary.FileWriter(""logs2"", self.sess.graph)
+
+ # initialize all variables
+
+ self.sess.run(tf.global_variables_initializer())
+ self.saver = tf.train.Saver()
+ self.saver.restore(self.sess, ""Pendulum/nn.ckpt"")
+
+
+ def choose_action(self, s):
+ a = self.sess.run(self.a, {self.state: s[np.newaxis, :]})[0]
+ a = np.clip(np.random.normal(a, self.noise_variance), -2, 2)
+
+ return a
+
+ def learn(self):
+ # soft target replacement
+ self.sess.run(self.soft_replace)
+
+ indices = np.random.choice(memory, size=batch_size)
+ batch_transition = self.memory[indices, :]
+ batch_states = batch_transition[:, :self.no_of_states]
+ batch_actions = batch_transition[:, self.no_of_states: self.no_of_states + self.no_of_actions]
+ batch_rewards = batch_transition[:, -self.no_of_states - 1: -self.no_of_states]
+ batch_next_state = batch_transition[:, -self.no_of_states:]
+
+ self.sess.run(self.atrain, {self.state: batch_states})
+ self.sess.run(self.ctrain, {self.state: batch_states, self.a: batch_actions, self.reward: batch_rewards,
+ self.next_state: batch_next_state})
+
+ # we define a function store_transition which stores all the transition information in the buffer
+ def store_transition(self, s, a, r, s_):
+ trans = np.hstack((s, a, [r], s_))
+
+ index = self.pointer % memory
+ self.memory[index, :] = trans
+ self.pointer += 1
+
+ if self.pointer > memory:
+ self.noise_variance *= 0.99995
+ self.learn()
+
+    # we define the function build_actor_network for building our actor network, and afterwards the critic network
+    def build_actor_network(self, s, scope, trainable):
+ with tf.variable_scope(scope):
+ l1 = tf.layers.dense(s, 30, activation=tf.nn.tanh, name='l1', trainable=trainable)
+ a = tf.layers.dense(l1, self.no_of_actions, activation=tf.nn.tanh, name='a', trainable=trainable)
+ return tf.multiply(a, self.a_bound, name=""scaled_a"")
+
+ def build_crtic_network(self, s, a, scope, trainable):
+ with tf.variable_scope(scope):
+ n_l1 = 30
+ w1_s = tf.get_variable('w1_s', [self.no_of_states, n_l1], trainable=trainable)
+ w1_a = tf.get_variable('w1_a', [self.no_of_actions, n_l1], trainable=trainable)
+ b1 = tf.get_variable('b1', [1, n_l1], trainable=trainable)
+ net = tf.nn.tanh(tf.matmul(s, w1_s) + tf.matmul(a, w1_a) + b1)
+
+ q = tf.layers.dense(net, 1, trainable=trainable)
+ return q
+
+ def save(self):
+ self.saver.save(self.sess, ""Pendulum/nn.ckpt"")
+
+env = gym.make(""Pendulum-v0"")
+env = env.unwrapped
+env.seed(1)
+
+no_of_states = env.observation_space.shape[0]
+no_of_actions = env.action_space.shape[0]
+
+a_bound = env.action_space.high
+ddpg = DDPG(no_of_actions, no_of_states, a_bound)
+
+total_reward = []
+
+no_of_episodes = 300
+# for each episodes
+for i in range(no_of_episodes):
+ # initialize the environment
+ s = env.reset()
+
+ # episodic reward
+ ep_reward = 0
+
+ for j in range(epsiode_steps):
+
+ env.render()
+
+ # select action by adding noise through OU process
+ a = ddpg.choose_action(s)
+
+ # peform the action and move to the next state s
+ s_, r, done, info = env.step(a)
+
+ # store the the transition to our experience buffer
+ # sample some minibatch of experience and train the network
+ ddpg.store_transition(s, a, r, s_)
+
+ # update current state as next state
+ s = s_
+
+ # add episodic rewards
+ ep_reward += r
+
+ if int(ep_reward) == 0 and i > 200:
+ ddpg.save()
+ print(""save"")
+ quit()
+
+ if j == epsiode_steps - 1:
+ total_reward.append(ep_reward)
+ print('Episode:', i, ' Reward: %i' % int(ep_reward))
+
+ break
+
+"
+"['machine-learning', 'philosophy', 'genetic-programming', 'ai-completeness', 'inductive-programming']"," Title: Why is creating an AI that can code a hard task?Body: For people who have experience in the field, why is creating AI that has the ability to write programs (that are syntactically correct and useful) a hard task?
+
+What are the barriers/problems we have to solve before we can solve this problem? If you are in the camp that this isn't that hard, why hasn't it become mainstream?
+"
+"['machine-learning', 'convolutional-neural-networks', 'computer-vision', 'terminology']"," Title: What does ""off-the-shelf"" mean?Body: I encountered the phrase/concept off-the-shelf CNN in this paper in which authors used off-the-shelf CNN representation, OverFeat
, with simple classifiers to address different recognition tasks.
+
+If I understand it correctly, it literally means something is ready to be used for a task without alteration.
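+A minimal sketch of that usage, assuming a torchvision/scikit-learn setup rather than the paper's OverFeat pipeline: take a network pretrained on ImageNet, keep its weights frozen, use the penultimate activations as features and fit a simple classifier on top.
+
+import torch
+import torch.nn as nn
+from torchvision import models
+from sklearn.linear_model import LogisticRegression
+
+backbone = models.resnet18(pretrained=True)   # the off-the-shelf network, used as-is
+backbone.fc = nn.Identity()                   # drop the ImageNet classification head
+backbone.eval()
+
+with torch.no_grad():
+    images = torch.randn(4, 3, 224, 224)      # placeholder batch instead of a real dataset
+    features = backbone(images)               # (4, 512) feature vectors
+
+labels = [0, 1, 0, 1]                         # placeholder labels
+clf = LogisticRegression(max_iter=1000).fit(features.numpy(), labels)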
+
+Can somebody explain in simple words what off-the-shelf CNN technically means in the context of AI and convolutional neural networks?
+"
+"['natural-language-processing', 'reference-request', 'dialogue-systems', 'human-robot-interaction']"," Title: Is there any literature on the design of dialogue systems for interviews and questionnaire administration?Body: For my master thesis I am working on a dialogue system that should be deployed in hospitals to administer simple questionnaires to patients. I already did literature research and I'm fine with what I found since I don't have to replicate something which has been already done, but I noticed that there are really few papers regarding this specific 'robot interviewer' topic.
+
+Let me explain the task a bit more in details: in a real interview a human interviewer usually starts with greetings and with an explanation of the questionnaire to administer, and then he/she starts asking some more or less structured questions to the person to interview. The idea here is to replace a human interviewer with a dialogue system.
+
+Now, at a first glimpse it seems like a task that can be easily hand coded, and indeed lot of real application simply use systems in which specific questions are stored in memory in association with some already made answers to choose from (here's an example), and the system simply show them (or read in the case of humanoid robots) to the people being interviewed, the system then wait for the answer and then move on with the next questions.
+
+The point is that in real interviews the conversation flow is obviously much more smooth and natural. A human being can detect doubts in the voice of the interviewed person, which can also explicitly ask for explanations, a human being can also understand when an answer comes with emotional implications ('yes I feel sad every day') and we are able to automatically react to these hidden implications with emotional fillers ('I'm sorry to hear that'). All these aspects require to train some natural language understanding module in order to be replicated in an artificial agent (and this is actually what I'm currently working on), so I though I would have found more papers on this.
+
+Now, despite having found a tons of paper related to open domain dialogue systems, affective and attentive systems, even systems able to reply with irony, I did not found many papers about dialogue systems for smooth interviews or questionnaire administration, which in my opinion sounds like an much easier task to tackle (especially if compared to open domain conversations).
+The only two papers that I found which truly focused on interviewer systems are:
+
+
+
+So my question is: did I miss something, like some specific keywords? Or is there actually a gap in the literature with regard to the design of dialogue systems for interviews and questionnaire (or surveys) administration? I'm interested in any link or hint from anyone working on similar applications, thank you in advance!
+"
+"['machine-learning', 'deep-learning', 'training', 'hyperparameter-optimization', 'cross-validation']"," Title: After having selected the best model with cross-validation, for how long should I train it?Body: When using k-fold cross-validation in a deep learning problem, after you have computed your hyper-parameters, how do you decide how long to train your final model? My understanding is that, after the hyperparameters are selected, you train your model one more time on the entire set of data, but it's not clear to me when you decide to stop training.
+"
+"['deep-learning', 'autoencoders', 'probability-distribution', 'variational-autoencoder']"," Title: Can a variational auto-encoder learn images composed of random noise at each pixel (each drawn from the same distribution)?Body: Can a variational auto-encoder (VAE) learn images whose pixels have been generated from a Gaussian distribution (e.g. $N(0, 1)$), i.e. each pixel is a sample from $N(0, 1)$?
+
+My gut feeling says no, because the VAE adds additional noise $\epsilon$ to the original image in the latent space, and if all images and pixels are random from the same distribution it would be impossible do decode/reconstruct into the particular input image. However, VAEs are a bit of a mystery to me internally. Any help would be appreciated!
+"
+"['machine-learning', 'probability', 'probability-distribution', 'efficiency', 'storage']"," Title: What is the most efficient data type to store probabilities?Body: In ML we often have to store a huge amount of values ranging from 0 to 1, mostly being probabilities. The most common data structure to do so seems to be a floating point? Indeed, the range of floating points is huge. This makes them imprecise in the desired interval and inefficient, right?
+
+This question suggests using the biggest integer value to represent a 1 and the smallest for 0. Also, it points to the Q number format, where all bits can be chosen as fractional, which sounds very efficient.
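+To make the trade-off concrete, a small sketch of a fixed-point style encoding (a uint16 scaled to [0, 1], purely as an illustration):
+
+import numpy as np
+
+probs = np.random.rand(1_000_000)
+
+encoded = np.round(probs * 65535).astype(np.uint16)      # 2 bytes per value instead of 4 or 8
+decoded = encoded.astype(np.float64) / 65535
+
+print(encoded.nbytes, probs.astype(np.float32).nbytes)   # 2,000,000 vs 4,000,000 bytes
+print(np.abs(decoded - probs).max())                     # worst-case rounding error around 7.6e-06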
+
+Why have these data types still not found their ways into numpy, tensorflow etc.? Am I missing something?
+"
+"['reinforcement-learning', 'models', 'planning', 'model-based-methods']"," Title: Isn't a simulation a great model for model-based reinforcement learning?Body: Most reinforcement learning agents are trained in simulated environments. The goal is to maximize performance in (often) the same environment, preferably with a minimum amount of interactions. Having a good model of the environment allows to use planning and thus drastically improves the sample efficiency!
+
+Why is the simulation not used for planning in these cases? It is a sampling model of the environment, right? Can't we try multiple actions at each or some states, follow the current policy to look several steps ahead and finally choose the action with the best outcome? Shouldn't this allow us to find better actions more quickly compared to policy gradient updates?
+
+In this case, our environment and the model are kind of identical and this seems to be the problem. Or is the good old curse of dimensionality to blame again? Please help me figure out, what I'm missing.
+"
+"['computational-learning-theory', 'vc-theory', 'information-theory', 'minimum-description-length', 'feature-engineering']"," Title: Can feature engineering change the selection of the model according to the minimum description length?Body: The definition of MDL according to these slides is:
+
+
+ The minimum description length (MDL) criteria in machine learning says that the best description of the data is given by the model which compresses it the best. Put another way, learning a model for the data or predicting it is about capturing the regularities in the data and any regularity in the data can be used to compress it. Thus, the more we can compress a data, the more we have learnt about it and the better we
+ can predict it.
+
+ MDL is also connected to Occam’s Razor used in machine learning which states that ""other things being equal, a simpler explanation is better than a more complex one."" In MDL, the simplicity (or rather complexity) of a model is interpreted as the length of the code obtained when that model is used to compress
+ the data.
+
+
+To put it in short according to MDL principle, we prefer predictors with relatively smaller description length (i.e can be described within a certain length) for a given description language (this definition is without delving into exact technical details as it is not necessary to the question).
+
+Since MDL is dependent on the description language we use, can we say feature engineering can cause a change in the selection of the predictor?
+For example as this picture shows:
+
+
+
+To me, it seems that, in the first picture, we will require a predictor with a longer description length in Cartesian coordinates, as compared to a predictor in polar coordinates (just a single discerning radius needs to be specified). So, to me, it seems feature engineering changed the selection of the predictor to a relatively simple one (in the sense that it will have a shorter description length). Thus feature engineering has changed the description length required for our predictor. Did I make any wrong assumptions? If so, why?
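+A small numerical sketch of that intuition (two concentric classes, assuming scikit-learn; the number of tree nodes is used as a crude stand-in for description length):
+
+import numpy as np
+from sklearn.tree import DecisionTreeClassifier
+
+rng = np.random.default_rng(0)
+r = np.concatenate([rng.uniform(0, 1, 500), rng.uniform(2, 3, 500)])   # inner disc vs outer ring
+theta = rng.uniform(0, 2 * np.pi, 1000)
+X_cartesian = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
+X_polar = r.reshape(-1, 1)                                             # the radius alone
+y = np.array([0] * 500 + [1] * 500)
+
+tree_cartesian = DecisionTreeClassifier().fit(X_cartesian, y)
+tree_polar = DecisionTreeClassifier().fit(X_polar, y)
+print(tree_cartesian.tree_.node_count, tree_polar.tree_.node_count)    # many nodes vs just 3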
+"
+"['machine-learning', 'reference-request', 'sequence-modeling']"," Title: How can I use machine learning to predict properties (such as the area) of simple polygons?Body: Imagine a set of simple (non-self-intersecting) polygons given by the coordinate pairs of their vertices $[(x_1, y_1), (x_2, y_2), \dots,(x_n, y_n)]$. The polygons in the set have a different number of vertices.
+
+
+
+How can I use machine learning to solve various supervised regression and classification problems for these polygons such as prediction of their areas, perimeters, coordinates of their centroids, whether a polygon is convex, whether its centroid is inside or outside, etc?
+
+Most machine learning algorithms require inputs of the same size but my inputs have a different number of coordinates. This may probably be handled by recurrent neural networks. However, the coordinates of my input vectors can be circularly shifted without changing the meaning of the input. For example, $$[(x_1, y_1), (x_2, y_2),...,(x_n, y_n)]$$ and $$[(x_n, y_n), (x_1, y_1),...,(x_{n-1}, y_{n-1})]$$ represent the same polygon where a starting vertex is chosen differently.
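+(For example, a quick numpy check: a cyclic shift of the vertex list leaves a target such as the area unchanged.)
+
+import numpy as np
+
+def shoelace_area(poly):
+    # poly: (n, 2) array of vertices in order; standard shoelace formula
+    x, y = poly[:, 0], poly[:, 1]
+    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
+
+poly = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0]])   # a 2x1 rectangle
+shifted = np.roll(poly, shift=1, axis=0)                            # same polygon, different starting vertex
+print(shoelace_area(poly), shoelace_area(shifted))                  # both 2.0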
+
+Which machine learning algorithm is both invariant to a circular shifting of its input coordinates and can work with inputs of different sizes?
+
+Intuitively, an algorithm could learn to split each polygon into non-overlapping triangles, calculate areas or perimeters of each triangle, and then aggregate these computations somewhere in the output layer. However, the labels (areas or perimeters) are given only for the whole polygons, not for the triangles. Also, the perimeter of the polygon is not the sum of the perimeters of the triangles. Is thinking about this problem in terms of triangles misleading?
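+As a side note, the labels themselves can be generated exactly; for example, the area can be computed with the shoelace formula (a small sketch below, with a made-up square), and that label is indeed invariant to the circular shift mentioned above:
+
+import numpy as np
+
+def polygon_area(vertices):
+    # Shoelace formula: exact area for any simple polygon given in vertex order.
+    x = np.array([v[0] for v in vertices], dtype=float)
+    y = np.array([v[1] for v in vertices], dtype=float)
+    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
+
+square = [(0, 0), (2, 0), (2, 2), (0, 2)]
+shifted = square[1:] + square[:1]          # circularly shifted vertex list
+print(polygon_area(square), polygon_area(shifted))   # both 4.0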
+
+Could you please provide references on machine learning algorithms that solve such tasks? Or any advice on how to approach this task? It does not have to be a neural network and does not have to learn exact analytic formulas. Approximate results would be enough.
+"
+"['machine-learning', 'reinforcement-learning', 'overfitting']"," Title: How can I handle overfitting in reinforcement learning problems?Body: So this is my current result (loss and score per episode) of my RL model in a simple two players game:
+
+I use DQN with CNN as a policy and target networks. I train my model using Adam optimizer and calculate the loss using Smooth L1 Loss.
+In a normal "Supervised Learning" situation, I can deduce that my model is overfitting. And I can imagine some methods to tackle this problem (e.g. Dropout layer, Regularization, Smaller Learning Rate, Early Stopping).
+
+- But would those solutions also work in an RL problem?
+- Or are there any better solutions to handle overfitting in RL?
+
+"
+"['neural-networks', 'deep-learning', 'gradient-descent']"," Title: What is the degree of linearity in the error propagated by Gradient Descent?Body: Neural Network is trained to learn a non-linear function, the more layers it has, the more is the quality of the prediction and the ability to match the real-world function correctly (lets leave aside overfitting for now).
+
+Now, given that the derivative of a function (if it is not a line) is only defined at a single point, and, worse than that, it is defined as a tangent line (so it is linear in essence), and that backpropagation uses this tangent line to project weight updates, does this mean that Gradient Descent is linear at its fundamental level?
+
+Then, if Gradient Descent is using linearity to propagate back weight updates, imagine how big the error would be for a non-linear function. For example, you have calculated that weight number 129 of your network has to be decreased by 0.003412, but at that point, with this new weight, the function might have already reversed its direction, and the real update for this weight should be in the opposite direction! Isn't this the reason that our deep fully connected networks have such difficulty learning: with more layers stacked up, the model becomes more non-linear, and thus the weight updates we propagate back to lower layers could be treated as ""best guess"" values instead of something that can be fully trusted?
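+To make my concern concrete, here is a tiny, purely illustrative 1D example of the behaviour I am asking about (the function, starting point and learning rate are arbitrary choices of mine):
+
+# Gradient descent on f(w) = w**4 - 3*w**2, a non-linear function whose
+# derivative (the local linear approximation) changes along the way.
+def grad(w):
+    return 4 * w**3 - 6 * w
+
+w = 2.0
+lr = 0.01
+for step in range(200):
+    w -= lr * grad(w)   # each update only trusts the tangent at the current point
+print(w)  # converges close to 1.2247 (a minimum of f), despite the local linearity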
+
+Am I correct in assuming that Gradient Descent is not calculating the correct weight updates on each backward step, and that the only reason the network eventually converges to the required model is that these imprecisions are fixed in a loop (called an epoch)? To use an analogy, Gradient Descent would be like navigating the world on a boat with maps drawn under the assumption that the Earth is flat. With such maps you can sail in nearby areas, but if you travelled around the world without knowing that the Earth is round, you would never arrive at your destination, and that's exactly what we experience when we train deep fully connected networks without making them converge.
+
+This means that, if Gradient Descent is broken in such a way, a correct Gradient Descent algorithm would only have to do a SINGLE backward step and update all the weights in one pass, giving the minimum error that is theoretically possible in 1 epoch. Am I right?
+
+So, my question basically is: is Gradient Descent a really broken algorithm, or am I missing something?
+"
+"['recurrent-neural-networks', 'long-short-term-memory', 'multi-label-classification']"," Title: How to train a LSTM with multidimensional dataBody: I am trying to train a LSTM, but I have some problems regarding the data representation and feeding it into the model.
+
+My data is a numpy array of three dimensions: one sample consists of a 2D matrix of size (600, 5), i.e. 600 timesteps and 5 features. However, I have 160 samples or files that represent the behavior of a user over multiple days. Altogether, my data has a dimension of (160, 600, 5).
+
+The label set is an array of 600 elements which describes certain patterns of each 2D matrix. The shape of the output should be (600,1).
+
+My question is: how can I train the LSTM with the corresponding label set? What would be the best approach to handle this problem? The idea is that the output should be an array of (600, 1) with 3 possible labels inside.
+
+Multiple_outputs {0,1,2}
+ Output: 0000000001111111110000022222220000000000000
+ -------------600 samples ------------------
+
+Input: (1, 600, 5)
+Output: (600, 1)
+Training: (160,600,5)
+
+
+I look forward to some ideas!
+
+# dataset has shape (160, 600, 5): 160 samples, 600 timesteps, 4 features + 1 label
+import numpy as np
+from sklearn.model_selection import train_test_split
+from tensorflow.keras.models import Sequential
+from tensorflow.keras.layers import InputLayer, Embedding, Bidirectional, LSTM, TimeDistributed, Dense, Activation
+from tensorflow.keras.optimizers import Adam
+
+X_train, X_test, y_train, y_test = train_test_split(dataset[:,:,0:4], dataset[:,:,4:5], test_size=0.30)
+
+model = Sequential()
+model.add(InputLayer(batch_input_shape = (92,600,5 )))
+model.add(Embedding(600, 128))
+#model.add(Bidirectional(LSTM(256, return_sequences=True)))
+model.add(TimeDistributed(Dense(2)))
+model.add(Activation('softmax'))
+
+model.compile(loss='categorical_crossentropy',
+ optimizer=Adam(0.001),
+ metrics=['accuracy'])
+
+model.summary()
+
+
+model.fit(X_train,y_train, batch_size=92, epochs=40, validation_split=0.2)
+
+"
+"['neural-networks', 'gradient-descent', 'linear-regression', 'mean-squared-error']"," Title: How to implement Mean square error loss function in mini batch GDBody: I have a vectorized implementation of the neural network in c++. I successfully solve the classification problems of Fashion MNIST and CIFAR.
+
+Now I am modifying my code to do the Linear regression. I am stuck at a point. I have to use MSE loss function here instead of squared error.
+
+My questions are:
+1) In linear regression tasks, is there a difference between MSE and squared error? (As far as I understand, the only difference is the mean, i.e. dividing by the mini-batch size.)
+2) My C++ implementation of this network is given below. To implement MSE, should I modify my loss function line and divide it by the batch size?
+
+__global__ void loss(double* X, double* Y, double *Z, size_t n) {
+
+ size_t index = blockIdx.x * blockDim.x + threadIdx.x;
+
+ if (index < n) {
+ Z[index] = ((X[index] - Y[index]));
+ }
+}
+void forward_prop(){
+ L_F( b1, b_x, w1, a1, BATCH_SIZE, layer_0_nodes, layer_1_nodes );
+ tan_h(a1, BATCH_SIZE, layer_1_nodes);
+
+ L_F( b2, a1, w2, a2, BATCH_SIZE, layer_1_nodes, layer_2_nodes );
+ tan_h(a2, BATCH_SIZE, layer_2_nodes);
+
+ L_F( b3, a2, w3, a3, BATCH_SIZE, layer_2_nodes, layer_3_nodes );
+}
+void backward_prop(){
+
+ cuda_loss(a3, b_y, loss_m, BATCH_SIZE, layer_3_nodes);
+ L_B(a2, loss_m, dw3, layer_2_nodes, BATCH_SIZE, layer_3_nodes, true);
+ tan_h_B(a2, BATCH_SIZE, layer_2_nodes);
+
+ L_B(loss_m, w3, dz2, BATCH_SIZE, layer_3_nodes, layer_2_nodes, false);
+ cuda_simple_dot_ab(dz2, a2, BATCH_SIZE, layer_2_nodes);
+ L_B(a1, dz2, dw2, layer_1_nodes, BATCH_SIZE, layer_2_nodes, true);
+ tan_h_B(a1, BATCH_SIZE, layer_1_nodes);
+
+ L_B(dz2, w2, dz1, BATCH_SIZE, layer_2_nodes, layer_1_nodes, false);
+ cuda_simple_dot_ab(dz1, a1, BATCH_SIZE, layer_1_nodes);
+ L_B(b_x, dz1, dw1, layer_0_nodes, BATCH_SIZE, layer_1_nodes, true);
+}
+__global__ void linearLayerForward( double *b, double* W, double* A, double* Z, size_t W_x_dim, size_t W_y_dim, size_t A_x_dim) {
+
+ size_t row = blockIdx.y * blockDim.y + threadIdx.y;
+ size_t col = blockIdx.x * blockDim.x + threadIdx.x;
+
+ size_t Z_x_dim = A_x_dim;
+ size_t Z_y_dim = W_y_dim;
+
+ double Z_value = 0;
+
+ if (row < Z_y_dim && col < Z_x_dim) {
+ for (size_t i = 0; i < W_x_dim; i++) {
+ Z_value += W[row * W_x_dim + i] * A[i * A_x_dim + col];
+ }
+ Z[row * Z_x_dim + col] = Z_value + b[col];
+ }
+}
+
+__global__ void linearLayerBackprop(double* W, double* dZ, double *dA,
+ size_t W_x_dim, size_t W_y_dim,
+ size_t dZ_x_dim) {
+
+ size_t col = blockIdx.x * blockDim.x + threadIdx.x;
+ size_t row = blockIdx.y * blockDim.y + threadIdx.y;
+
+ // W is treated as transposed
+ size_t dA_x_dim = dZ_x_dim;
+ size_t dA_y_dim = W_x_dim;
+
+ double dA_value = 0.0f;
+
+ if (row < dA_y_dim && col < dA_x_dim) {
+ for (size_t i = 0; i < W_y_dim; i++) {
+ dA_value += W[i * W_x_dim + row] * dZ[i * dZ_x_dim + col];
+ }
+ dA[row * dA_x_dim + col] = dA_value;
+ }
+}
+
+
+__global__ void tanhActivationForward(double* Z, size_t n) {
+
+ size_t index = blockIdx.x * blockDim.x + threadIdx.x;
+
+ if (index < n) {
+ Z[index] = tanh(Z[index]);
+ }
+}
+
+__global__ void tanhActivationBackward(double* Z, size_t n) {
+
+ size_t index = blockIdx.x * blockDim.x + threadIdx.x;
+
+ if (index < n) {
+ Z[index] = 1-(tanh(Z[index]) * tanh(Z[index]));
+ }
+}
+```
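+For what it's worth, here is question 1 restated as a tiny numpy sketch (illustrative only, independent of the CUDA code above): the two losses differ only by the constant factor 1/batch_size, which scales the gradient by the same constant.
+
+import numpy as np
+
+predictions = np.array([0.2, 0.7, 1.3, 0.4])
+targets     = np.array([0.0, 1.0, 1.0, 0.5])
+batch_size  = len(targets)
+
+errors = predictions - targets
+sse = np.sum(errors ** 2)            # summed squared error
+mse = sse / batch_size               # mean squared error
+
+# The gradients w.r.t. the predictions differ by the same 1/batch_size factor,
+# so dividing the loss (or the gradient) by the batch size only rescales the
+# effective learning rate; it does not change which minimum is found.
+grad_sse = 2 * errors
+grad_mse = grad_sse / batch_size
+print(mse, sse, grad_mse, grad_sse)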
+
+"
+"['neural-networks', 'machine-learning', 'ai-design', 'meta-learning', 'multi-task-learning']"," Title: How do I format task features with a one-hot task identification vector to ensure separate weight matrices for each task in multi-task RL?Body: I am on Lecture 2 of Stanford CS330 Multi-Task and Meta-learning, and on slide 10, the professor describes using a one-hot input vector to represent the task, and she also explained that there would be independent weight matrices for each task
+
+How is the input to a multi-task network encoded to allow the features of all the tasks to be associated with different weights?
+
+Would you have an input vector containing all the features for every task, and then multiply the input vectors by the task ID vector? Is there a more efficient way to approach this problem?
+
+In other terms, here’s what I’m thinking:
+
+network_input[i] = features[i] * task[i]
+
+
+where features is a 2D matrix of feature vectors for every task, and task is a one-hot vector corresponding to the task number. Is that multiplicative conditioning?
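+To make the question concrete, here is a tiny numpy sketch of the two options I can think of (the sizes and both variants are my own guesses, not from the lecture): concatenating the one-hot task vector to the shared features versus multiplying replicated per-task feature blocks by the one-hot vector.
+
+import numpy as np
+
+num_tasks, feat_dim = 3, 5
+task_id = 1
+one_hot = np.eye(num_tasks)[task_id]                  # e.g. [0., 1., 0.]
+features = np.random.randn(feat_dim)                  # shared input features
+
+# Option A: concatenate the task descriptor with the features; a single weight
+# matrix then learns task-dependent behaviour from the extra inputs.
+concat_input = np.concatenate([features, one_hot])    # shape (feat_dim + num_tasks,)
+
+# Option B: multiplicative conditioning; keep one feature block per task and
+# zero out all blocks except the active one, which effectively selects a
+# separate slice of the first weight matrix per task.
+per_task = np.tile(features, (num_tasks, 1))          # shape (num_tasks, feat_dim)
+gated = (per_task * one_hot[:, None]).reshape(-1)     # only task 1's block is non-zero
+print(concat_input.shape, gated.shape)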
+"
+"['recurrent-neural-networks', 'transformer', 'time-complexity', 'forward-pass', 'seq2seq']"," Title: What is the time complexity of the forward pass and back-propagation of the sequence-to-sequence model with and without attention?Body: I keep looking through the literature, but can't seem to find any information regarding the time complexity of the forward pass and back-propagation of the sequence-to-sequence RNN encoder-decoder model, with and without attention.
+
+The paper Attention is All You Need by Vaswani et. al in 2017 states the forward pass cost is $O(n^3)$, which makes sense to me (with 1 hidden layer). In terms of $X$ the input length, and $Y$ the output length, it then looks like $O(X^3 + Y^3)$, which I understand.
+
+However, for training, it seems to me like one back-propagation is at worst $O(X^3 + Y^3)$, and we do $Y$ of them, so $O(Y(X^3 + Y^3))$.
+
+This is the following diagram, where the green blocks are the hidden states, the red ones are the input text and the blue ones are output text.
+
+
+
+If I were to add global attention, as introduced by Luong et. al in 2015, the attention adds an extra $X^2$ in there due to attention multiplication, to make an overall inference of $O(X^3 + X^2 Y^3)$, and training even worse at $O(XY(X^3 + X^2 Y^3))$ since it needs to learn attention weights too.
+
+The following diagram shows the sequence-to-sequence model with attention,
+where $h$'s are the hidden states, $c$ is the context vector and $y$ is the output word, $Y$ of such output words and $X$ such inputs. This setup is described in the paper Effective Approaches to Attention-based Neural Machine Translation, by Luong et al. in 2015.
+
+
+
+Is my intuition correct?
+"
+"['python', 'genetic-algorithms', 'neat', 'fitness-functions']"," Title: How to perform classification with NEAT-Python?Body: I am trying to do classification using NEAT-python for the first time, and I am having difficulty getting the accuracy rate. I tried the same problem with an ANN and was able to get a good accuracy rate (96%+), but NEAT-Python gives barely 40%.
+
+Here's how I set up:
+
+
+- Problem: Train 100 probability values to predict classification (1-10)
+- Input and output setup: the inputs are the 100 probability values, and the output is 10 probability values associated with the 10 classes.
+- Activation: I applied ReLU for feedforward, then applied softmax
+
+
+- Fitness function: I used the log-likelihood (a sketch of this evaluation is right below this list). I was unsure about how to set up the fitness function. I also tried the mean accuracy rate as the genome fitness. Both gave similar results.
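+For reference, a simplified sketch of the fitness evaluation described above (the placeholder data, the softmax and the 1e-12 smoothing are my own choices; the real config and data differ):
+
+import neat
+import numpy as np
+
+rng = np.random.default_rng(0)
+X_train = rng.random((200, 100))          # placeholder: 100 probability values per sample
+y_train = rng.integers(0, 10, size=200)   # placeholder: class ids 0..9
+
+def softmax(z):
+    e = np.exp(z - np.max(z))
+    return e / e.sum()
+
+def eval_genomes(genomes, config):
+    for genome_id, genome in genomes:
+        net = neat.nn.FeedForwardNetwork.create(genome, config)
+        log_likelihood = 0.0
+        for x, label in zip(X_train, y_train):
+            probs = softmax(np.array(net.activate(x)))     # 10 raw outputs -> probabilities
+            log_likelihood += np.log(probs[label] + 1e-12)
+        genome.fitness = log_likelihood / len(y_train)     # NEAT maximises this value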
+
+
+
+In terms of hyperparameters, I am trying various values and haven't had any luck with it. Today I am trying with an increase in population size and generations. I have another feature input that can be used.
+
+Are there any resources that discuss how to handle mixed data for NEAT?
+
+Any help is greatly appreciated.
+"
+['text-classification']," Title: Can artificial intelligence classify textual records?Body: I am a records manager and I am being asked if I recommend Office 365. I'm having a hard time making a recommendation because I am missing an essential piece of information: can Office 365 replace the manual process of placing records into categories based on organizational function? It is important that this is done accurately, because the category determines how long the records should be kept before they are irretrievably destroyed. James Lappin seems to say that yes, Office 365 is underwritten by Project Cortex, and it is capable of doing this.
+
+My sense is that artificial intelligence is not yet capable of determining a conceptual category for records. For this to be true, a machine would have to replicate a complex human process: reading a free-text, free-form document; identifying the relevant pieces of information in a document, while ignoring others to determine what the document is ""about""; then taking the answer of what the document is ""about"" and matching it to a predefined set of major organizational activity.
+
+Are there any AI experts who can comment on how realistic it is to expect Project Cortex to do this?
+"
+"['reinforcement-learning', 'dqn', 'atari-games']"," Title: Why isn't my DQN agent improving when trained on Atari Breakout?Body: Lately, I have implemented DQN for Atari Breakout. Here is the code:
+
+https://github.com/JeremieGauthier/AI_Exercices/blob/master/Atari_Breakout/DQN_Breakout.py
+
+I have trained the agent for over 1500 episodes, but the training leveled off at a score of around 5. Could someone look at the code and point out a few things that should be corrected?
+
+
+
+Actually, the average score is not going above 5. Is there a way to improve the performance?
+"
+"['deep-learning', 'computer-vision', 'terminology']"," Title: What is ""natural image domain""?Body: I see some papers use the term ""natural image domain"". I googled that but didn't find any explanation of it.
+
+I guess I understand the normal meaning of ""natural image"", such as the image people take by phones. The images in ImageNet database are all natural images.
+
+Is ""natural image domain"" a subfield of computer vision?
+"
+"['machine-learning', 'natural-language-processing', 'keras', 'long-short-term-memory']"," Title: Could zero-padding affect learning in a negative way?Body: I implemented an LSTM
with Keras
to perform word ordering task (given a syntactically unordered sentence, the goal is to label each word of the sentence with the right position in this one.)
+So, my dataset is composed by numerical vectors and each numerical vector represents a word.
+
+I train my model to learn the local order of a syntactic subtree composed of words that have syntactic relationships (for example, a subtree could be a set of three words in which the root is the verb and the children are the subject and object relations).
+
+I padded each subtree to a length of 20, which is the maximum subtree length that I found in my dataset. By introducing padding, I inserted a lot of vectors composed only of zeros.
+
+My initial dataset shape is (700000, 837), but knowing that Keras wants a 3D dataset, I reshaped it to (35000, 20, 837) and did the same for my labels (from 700000 to (35000, 20)).
+
+As loss function, I'm using the ListNet algorithm loss function, which takes a list of words and for each computes the probability of the element to be ranked in the first position (then, ranking these scores, I obtain the predicted labels of each word).
+
+The current implementation is the following:
+
+model = tf.keras.Sequential()
+model.add(LSTM(units=100, activation='tanh', return_sequences=True, input_shape=(timesteps, features)))
+model.add(Dense(1, activation='sigmoid'))
+
+model.summary()
+
+model.compile(loss=listnet_loss, optimizer=keras.optimizers.Adam(learning_rate=0.00005, beta_1=0.9, beta_2=0.999, amsgrad=True), metrics=[""accuracy""])
+
+model.fit(training_dataset, training_dataset_labels, batch_size=1, epochs=number_of_epochs, workers=10, verbose=1, callbacks=[SaveModelCallback()])
+
+
+And SaveModelCallback simply saves each model during training.
+
+At the moment I obtain, at each epoch, very very similar results:
+
+Epoch 21/50
+39200/39200 [==============================] - 363s 9ms/step - loss: 2.5483 - accuracy: 0.8246
+Epoch 22/50
+39200/39200 [==============================] - 359s 9ms/step - loss: 2.5480 - accuracy: 0.8245
+Epoch 23/50
+39200/39200 [==============================] - 360s 9ms/step - loss: 2.5478 - accuracy: 0.8246
+
+
+I have two questions:
+
+
+- Could zero-padding affect learning in a negative way? And if yes, how could we avoid considering this padding? (See the sketch after this list.)
+- Is it a good model for what I have to do?
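+Regarding the first question, this is the kind of masking I was wondering about but have not tried yet (a minimal sketch; the layer sizes match my setup above, everything else is illustrative):
+
+from tensorflow.keras.layers import Masking, LSTM, Dense
+from tensorflow.keras.models import Sequential
+
+timesteps, features = 20, 837
+
+masked_model = Sequential()
+# Timesteps whose feature vector is all zeros (the padding) are skipped by
+# downstream layers that support masking, such as LSTM.
+masked_model.add(Masking(mask_value=0.0, input_shape=(timesteps, features)))
+masked_model.add(LSTM(units=100, activation='tanh', return_sequences=True))
+masked_model.add(Dense(1, activation='sigmoid'))
+masked_model.summary()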
+
+"
+"['machine-learning', 'game-ai', 'reference-request', 'chess']"," Title: What are some resources for coding some artificial intelligence techniques in the context of games?Body: I know the most basic rudimentary theory on AI, and I want to delve into actual practical coding with AI and machine learning. I already know a decent bit of coding in C++ and I'm learning Python syntax now.
+
+I think I want to start implementing artificial intelligence techniques for simple games (like snake or maybe chess, which isn't really a simple game, but I know a lot about it), and then move on to more complex methods and algorithms.
+
+So, what are some resources (e.g. tutorials, guides, books, etc.) for coding some artificial intelligence techniques in the context of games?
+"
+"['reinforcement-learning', 'implementation', 'convergence', 'bellman-equations', 'policy-evaluation']"," Title: Why can the Bellman equation be turned into an update rule?Body: In chapter 4.1 of Sutton's book, the Bellman equation is turned into an update rule by simply changing the indices of it. How is it mathematically justified? I didn't quite get the initiation of why we are allowed to do that?
+
+$$v_{\pi}(s) = \mathbb E_{\pi}[G_t|S_t=s]$$
+
+$$ = \mathbb E_{\pi}[R_{t+1} + \gamma G_{t+1}|S_t=s]$$
+
+$$= \mathbb E_{\pi}[R_{t+1} + \gamma v_{\pi}(S_{t+1})|S_t=s]$$
+
+$$ = \sum_a \pi(a|s)\sum_{s',r} p(s',r|s,a)[r+ \gamma v_{\pi}(s')]$$
+
+from which it goes to the update equation:
+
+$$v_{k+1}(s) = \mathbb E_{\pi}[R_{t+1} + \gamma v_{k}(S_{t+1})|S_t=s]$$
+
+$$=\sum_a \pi(a|s)\sum_{s',r} p(s',r|s,a)[r+ \gamma v_{k}(s')]$$
+"
+"['terminology', 'objective-functions', 'gradient-descent']"," Title: What do these numbers represent in this picture of a surface?Body: The following image is a screenshot from a video tutorial that illustrates the concept of gradient descent algorithm with a 3D animation.
+
+Do the numbers on the top of the balls pointed out by the red arrows represent the gradient?
+
+
+"
+"['reinforcement-learning', 'reference-request', 'deep-rl']"," Title: What are the most common deep reinforcement learning algorithms and models apart from DQN?Body: Recently, I have completed Atari Breakout (https://arxiv.org/pdf/1312.5602.pdf) with DQN.
+
+Similar to DQN, what are the most common deep reinforcement learning algorithms and models in 2020? It seems that DQN is outdated and policy gradients are preferred.
+"
+"['datasets', 'reference-request', 'resource-request']"," Title: Are there any training datasets using standard font text rather than hand written ones?Body: Are there any training datasets using standard font text rather than handwritten ones?
+
+I tried using the MNIST handwritten one on font based chars, but it didn't work well.
+"
+"['reinforcement-learning', 'multi-armed-bandits', 'upper-confidence-bound']"," Title: How do we reach at the formula for UCB action-selection in multi-armed bandit problem?Body: I came across the formula for Upper Confidence Bound Action Selection (while studying multi-armed bandit problem), which looks like:
+
+$$
+A_t \dot{=} \operatorname{argmax}_a \left[ Q_t(a) + c \sqrt{ \frac{\ln t}{N_t(a)} } \right]
+$$
+
+Although I understand what the second term in the sum actually means, I am not able to figure out how and from where the exact expression comes. What is the log doing there? What effect does $c$ have? And why a square root?
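+For context, this is how I currently read the formula in code form (a toy sketch I wrote to convince myself; the arm means, the value of $c$ and the horizon are made up):
+
+import numpy as np
+
+rng = np.random.default_rng(0)
+true_means = [0.2, 0.5, 0.7]          # made-up bandit arms
+c = 2.0
+Q = np.zeros(3)                        # value estimates Q_t(a)
+N = np.zeros(3)                        # pull counts N_t(a)
+
+for t in range(1, 1001):
+    # Pull every arm once first so N_t(a) > 0, then follow the UCB rule.
+    if t <= 3:
+        a = t - 1
+    else:
+        a = np.argmax(Q + c * np.sqrt(np.log(t) / N))
+    reward = rng.normal(true_means[a], 0.1)
+    N[a] += 1
+    Q[a] += (reward - Q[a]) / N[a]     # incremental mean update
+
+print(N)   # most pulls should go to the arm with the highest true mean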
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'computer-vision', 'papers']"," Title: Understanding the results of ""Visualizing and Understanding Convolutional Networks""Body: I am trying to understand the results of the paper Visualizing and Understanding Convolutional Networks, in particular the following image:
+
+
+
+What are these 3x3 blocks and their 9 cells representing?
+
+From my understanding, each 3x3 block of the i-th layer corresponds to a randomly chosen feature map in that layer (e.g. for the layer-1 they randomly chose 9 feature maps, for layer-2 16 feature maps etc). On the left part (grayish images), the j-th 3x3 block shows 9 visualizations obtained by mapping the top-9 activations (single values) of that particular feature map to the ""pixel space"" (using a deconvolutional network). On the right part, the j-th block shows the 9 patches of input images, corresponding to the top-9 activations (e.g. in the first layer and i-th feature map, the j-th image patch is the local region of input image which is seen by the j-th neuron of that feature map). Is my understanding correct?
+
+However, it's not entirely clear to me how the top-9 activations are chosen. It seems that for each layer and each feature-map, an activation is picked for a different input image (that's why we see e.g. different persons in layer-3, row-1, col-1, and different cars in layer-3, row-2, col-2). So within each block, the top-9 activations are obtained from 9 different images (but images of the same class) of the entire dataset (but in principle it could be that more than one activations are coming from the same image).
+"
+"['reinforcement-learning', 'deep-rl', 'rewards', 'proximal-policy-optimization']"," Title: How does normalization of the inputs work in the context of PPO?Body: What does the normalization of the inputs mean in the context of PPO? At each time step of an episode, I only know the values of this time step and of the previous ones, if I take track of them. This means that for each observation and for each reward at each time step I will do:
+
+value = (value - mean) / std
+
+
+before passing them to the NN, right? Specifically, I compute mean and std by keeping track of the values for the whole episode and at each time step, I add the new values to an array. Is this a valid approach?
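+Concretely, at every step I do something like this (a simplified sketch of my current approach; the dummy values stand in for per-step rewards or observations):
+
+import numpy as np
+
+values = []                       # history of values seen so far in the episode
+
+def normalize(new_value):
+    values.append(new_value)
+    mean = np.mean(values)
+    std = np.std(values)
+    return (new_value - mean) / max(std, 1e-8)   # avoid division by zero early on
+
+for v in [3.0, -1.0, 0.5, 10.0]:  # dummy stream of values
+    print(normalize(v))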
+
+Also, how can I handle negative rewards so that they become positive?
+"
+"['convolutional-neural-networks', 'gradient-descent', 'support-vector-machine']"," Title: What is the gradient of a non-linear SVM with respect to the input?Body: The objective function of an SVM is the following:
+$$J(\mathbf{w}, b)=C \sum_{i=1}^{m} \max \left(0,1-y^{(i)}\left(\mathbf{w}^{t} \cdot \mathbf{x}^{(i)}+b\right)\right)+\frac{1}{2} \mathbf{w}^{t} \cdot \mathbf{w}$$
+where
+
+- $\mathbf{w}$ is the model's feature weights and $b$ is its bias parameter
+- $\mathbf{x}^{(i)}$ is the $i^\text{th}$ training instance's feature vector
+- $y^{(i)}$ is the target class ($-1$ or $1$) for the $i^\text{th}$ instance
+- $m$ is the number of training instances
+- $C$ is the regularisation hyper-parameter
+
+And if I was to use a kernel, this would become:
+$$J(\mathbf{w}, b)=C \sum_{i=1}^{m} \max \left(0,1-y^{(i)}\left(\mathbf{u}^{t} \cdot \mathbf{K}^{(i)}+b\right)\right)+\frac{1}{2} \mathbf{u}^{t} \cdot \mathbf{K} \cdot \mathbf{u}$$
+where the kernel can be the Gaussian kernel:
+$$K(\mathbf{u}, \mathbf{v})=e^{-\gamma\|\mathbf{u}-\mathbf{v}\|^{2}}$$
+How would I go about finding its gradient with respect to the input?
+I need to know this as to then apply this to a larger problem of a CNN with its last layer being this SVM, so I can then find the gradient of this output wrt the input of the CNN.
+"
+"['reinforcement-learning', 'deep-rl', 'proximal-policy-optimization']"," Title: NaNs after a while in training of PPOBody: My problem is that every time I am trying to train my PPO agent I get NaN values after a while. The diagnostic that I get is the following:
+
+ep= 3| t= 144/450000| 0.8 sec| rew=31.3005| ploss=-2.5e-02| vfloss=4.7e+01| kl=1.3e-03| ent=8.5e+00| clipfrac=0.0e+00
+ep= 6| t= 288/450000| 1.1 sec| rew=31.2144| ploss=-2.2e-02| vfloss=4.1e+01| kl=1.3e-03| ent=8.5e+00| clipfrac=0.0e+00
+ep= 9| t= 432/450000| 1.4 sec| rew=28.2668| ploss=-2.9e-02| vfloss=3.5e+01| kl=1.6e-03| ent=8.5e+00| clipfrac=0.0e+00
+ep= 12| t= 576/450000| 1.7 sec| rew=28.2910| ploss=-2.7e-02| vfloss=3.6e+01| kl=1.7e-03| ent=8.5e+00| clipfrac=0.0e+00
+ep= 15| t= 720/450000| 2.0 sec| rew=27.4817| ploss=-2.3e-02| vfloss=3.0e+01| kl=1.8e-03| ent=8.5e+00| clipfrac=0.0e+00
+ep= 18| t= 864/450000| 2.3 sec| rew=29.8415| ploss=-4.5e-02| vfloss=3.4e+01| kl=4.0e-03| ent=8.5e+00| clipfrac=2.8e-02
+ep= 21| t= 1008/450000| 2.6 sec| rew=29.1447| ploss=-2.7e-02| vfloss=2.7e+01| kl=2.0e-03| ent=8.5e+00| clipfrac=6.9e-03
+ep= 24| t= 1152/450000| 3.0 sec| rew=30.2001| ploss=-3.5e-02| vfloss=2.8e+01| kl=1.7e-03| ent=8.5e+00| clipfrac=6.9e-03
+ep= 27| t= 1296/450000| 3.3 sec| rew=31.4069| ploss=-2.9e-02| vfloss=3.7e+01| kl=3.0e-03| ent=8.5e+00| clipfrac=2.1e-02
+ep= 30| t= 1440/450000| 3.6 sec| rew=27.7963| ploss=-4.6e-02| vfloss=2.3e+01| kl=7.3e-03| ent=8.5e+00| clipfrac=1.7e-01
+ep= 33| t= 1584/450000| 3.9 sec| rew=30.8561| ploss=-5.9e-02| vfloss=2.5e+01| kl=9.6e-03| ent=8.5e+00| clipfrac=2.6e-01
+ep= 36| t= 1728/450000| 4.2 sec| rew=27.3002| ploss=-6.9e-02| vfloss=2.2e+01| kl=1.3e-02| ent=8.5e+00| clipfrac=3.1e-01
+ep= 39| t= 1872/450000| 4.5 sec| rew=28.0270| ploss=-5.6e-02| vfloss=2.1e+01| kl=8.9e-03| ent=8.5e+00| clipfrac=2.0e-01
+ep= 42| t= 2016/450000| 4.9 sec| rew=28.0624| ploss=-5.8e-02| vfloss=2.0e+01| kl=7.5e-03| ent=8.5e+00| clipfrac=2.4e-01
+ep= 45| t= 2160/450000| 5.2 sec| rew=28.6224| ploss=-8.4e-02| vfloss=2.3e+01| kl=7.2e-03| ent=8.5e+00| clipfrac=2.0e-01
+ep= 48| t= 2304/450000| 5.5 sec| rew=32.3889| ploss=-4.3e-02| vfloss=2.6e+01| kl=7.1e-03| ent=8.5e+00| clipfrac=2.9e-01
+ep= 51| t= 2448/450000| 5.8 sec| rew=31.4241| ploss=-1.0e-01| vfloss=2.7e+01| kl=7.0e-03| ent=8.5e+00| clipfrac=2.1e-01
+ep= 54| t= 2592/450000| 6.1 sec| rew=33.4760| ploss=-5.1e-02| vfloss=2.5e+01| kl=7.3e-03| ent=8.5e+00| clipfrac=2.4e-01
+ep= 57| t= 2736/450000| 6.4 sec| rew=31.0780| ploss=-8.8e-02| vfloss=2.3e+01| kl=6.9e-03| ent=8.5e+00| clipfrac=3.0e-01
+ep= 60| t= 2880/450000| 6.7 sec| rew=34.1286| ploss=-6.9e-02| vfloss=2.7e+01| kl=7.6e-03| ent=8.5e+00| clipfrac=3.1e-01
+ep= 63| t= 3024/450000| 7.1 sec| rew=31.0017| ploss=-6.1e-02| vfloss=2.5e+01| kl=1.4e-02| ent=8.5e+00| clipfrac=3.7e-01
+ep= 66| t= 3168/450000| 7.4 sec| rew=32.3697| ploss=-1.1e-01| vfloss=2.2e+01| kl=1.2e-02| ent=8.5e+00| clipfrac=4.6e-01
+ep= 69| t= 3312/450000| 7.7 sec| rew=31.4455| ploss=-7.4e-02| vfloss=2.4e+01| kl=8.4e-03| ent=8.5e+00| clipfrac=3.7e-01
+ep= 72| t= 3456/450000| 8.0 sec| rew=32.1896| ploss=-9.2e-02| vfloss=2.0e+01| kl=1.3e-02| ent=8.4e+00| clipfrac=4.1e-01
+ep= 75| t= 3600/450000| 8.3 sec| rew=31.3721| ploss=-9.4e-02| vfloss=2.4e+01| kl=1.4e-02| ent=8.4e+00| clipfrac=4.2e-01
+ep= 78| t= 3744/450000| 8.6 sec| rew=35.5718| ploss=-1.0e-01| vfloss=3.0e+01| kl=1.0e-02| ent=8.4e+00| clipfrac=4.6e-01
+ep= 81| t= 3888/450000| 9.0 sec| rew=32.2289| ploss=-1.3e-01| vfloss=2.9e+01| kl=1.6e-02| ent=8.4e+00| clipfrac=5.5e-01
+ep= 84| t= 4032/450000| 9.3 sec| rew=31.7656| ploss=-1.0e-01| vfloss=2.3e+01| kl=1.3e-02| ent=8.4e+00| clipfrac=4.4e-01
+ep= 87| t= 4176/450000| 9.6 sec| rew=35.4555| ploss=-8.8e-02| vfloss=3.3e+01| kl=5.8e-03| ent=8.4e+00| clipfrac=3.1e-01
+ep= 90| t= 4320/450000| 9.9 sec| rew=33.2766| ploss=-1.2e-01| vfloss=2.4e+01| kl=1.5e-02| ent=8.4e+00| clipfrac=5.7e-01
+ep= 93| t= 4464/450000| 10.2 sec| rew=32.5218| ploss=-1.1e-01| vfloss=2.4e+01| kl=1.8e-02| ent=8.4e+00| clipfrac=5.6e-01
+ep= 96| t= 4608/450000| 10.5 sec| rew=34.7137| ploss=-9.7e-02| vfloss=2.5e+01| kl=1.5e-02| ent=8.4e+00| clipfrac=4.3e-01
+ep= 99| t= 4752/450000| 10.9 sec| rew=35.3797| ploss=-8.2e-02| vfloss=2.5e+01| kl=1.6e-02| ent=8.4e+00| clipfrac=4.9e-01
+ep= 102| t= 4896/450000| 11.2 sec| rew=nan| ploss=nan| vfloss=nan| kl=nan| ent=nan| clipfrac=0.0e+00
+C:\Users\ppo.py:154: RuntimeWarning: invalid value encountered in greater
+ adv_nrm = (adv_nrm - adv_nrm.mean()) / max(1.e-8,adv_nrm.std()) # standardized advantage function estimate
+ep= 105| t= 5040/450000| 11.5 sec| rew=nan| ploss=nan| vfloss=nan| kl=nan| ent=nan| clipfrac=0.0e+00
+
+
+Any ideas why this arises?
+"
+"['machine-learning', 'ai-security', 'adversarial-ml']"," Title: To perform a white-box adversarial attack, would the use of a numerical gradient suffice?Body: I am trying to perform a white-box attack on a model.
+Would it be possible to simply use the numerical gradient of the output wrt input directly rather than computing each subgradient of the network analytically? Would this (1) work and (2) actually be a white box attack?
+As I would not be using a different model to 'mimic' the results but instead be using the same model to get the outputs, am I right in thinking that this would still be a white-box attack?
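+What I have in mind is a central finite difference like the following (a rough sketch; the model here is just a stand-in scalar scoring function, not a real network):
+
+import numpy as np
+
+def numerical_gradient(model, x, eps=1e-4):
+    # Central finite differences of the model's scalar output w.r.t. the input.
+    grad = np.zeros_like(x, dtype=float)
+    for i in range(x.size):
+        x_plus, x_minus = x.copy(), x.copy()
+        x_plus.flat[i] += eps
+        x_minus.flat[i] -= eps
+        grad.flat[i] = (model(x_plus) - model(x_minus)) / (2 * eps)
+    return grad
+
+# Stand-in model: any function mapping an input array to a scalar score.
+toy_model = lambda x: float(np.sum(x ** 2))
+print(numerical_gradient(toy_model, np.array([1.0, -2.0, 0.5])))  # about [2, -4, 1]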
+"
+"['natural-language-processing', 'papers', 'transformer', 'attention', 'bert']"," Title: What is the intuition behind the dot product attention?Body: I am watching the video Attention Is All You Need by Yannic Kilcher.
+My question is: what is the intuition behind the dot product attention?
+$$A(q,K, V) = \sum_i\frac{e^{q.k_i}}{\sum_j e^{q.k_j}} v_i$$
+becomes:
+$$A(Q,K, V) = \text{softmax}(QK^T)V$$
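+In code, my current reading of the second expression is roughly this (toy shapes and random matrices, just my own illustration, not from the video):
+
+import numpy as np
+
+def softmax(z, axis=-1):
+    e = np.exp(z - z.max(axis=axis, keepdims=True))
+    return e / e.sum(axis=axis, keepdims=True)
+
+n_queries, n_keys, d = 2, 4, 8
+rng = np.random.default_rng(0)
+Q = rng.standard_normal((n_queries, d))
+K = rng.standard_normal((n_keys, d))
+V = rng.standard_normal((n_keys, d))
+
+weights = softmax(Q @ K.T)        # each row: how much one query attends to every key
+attended = weights @ V            # convex combination of the values per query
+print(weights.sum(axis=1))        # each row of weights sums to 1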
+"
+"['agi', 'reference-request']"," Title: What are some online courses on artificial general intelligence?Body: Although no artificial general intelligence (AGI) has yet been created, probably, there are already some courses on the topic. So, what are some online (preferably free) courses on AGI?
+"
+"['neural-networks', 'machine-learning', 'terminology']"," Title: What is it called in AI when a program is designed to make ""x in the style of y""?Body: Simplified: What is it called in AI when a program is designed to make ""x in the style of y;"" when it trains off of two types of sources in order to make a thing from source one, informed by features from source two? For example, if a network made up of two smaller networks were to take sheet music of a specific compositional style in network A and audio samples from a certain genre in B and through an interface creates music from a certain genre in a certain compositional style; the is comes from One, the seems comes from Two.
+
+For more coarse and obvious examples:
+
+
+- ""Compose synthpop in the style of Beethoven""
+- ""Draw impressionism in the style of Mondrian""
+- ""Generate casserole recipes using only ingredients most likely to fluctuate in price given current market data""
+- ""Sketch baseballs that look like they're made of espresso foam""
+
+"
+"['neural-networks', 'convolutional-neural-networks', 'image-recognition', 'image-processing', 'image-segmentation']"," Title: Can neural network help me with detecting center coordinates of particles in an image?Body: I have an image of some nano particles that was taken with Scanning Electron Microscope (SEM) attached here. I want to obtain center points coordinates (x,y) for each particle. Doing it by hand is very tedious. Since I just started to learn Machine Learning and got introduced to Artificial Neural Networks and kinda understand that they they are helpful with image classification, I am curious if I can use these tools to achieve my goal.
+
+I found this article where they discuss kind of similar work, but I am curious if you have seen anything practical or if you can give me some steps on where and how to start; that would be really helpful.
+Any guidance is appreciated.
+
+
+"
+"['python', 'datasets', 'unsupervised-learning', 'clustering']"," Title: How can I cluster this data frame with several features and observations?Body: How can I cluster the data frame below with several features and observations? And how would I go about determining the quality of those clusters? Is k-NN appropriate for this?
+
+id Name Gender Dob Age Address
+1 MUHAMMAD JALIL Male 1987 33 Chittagong
+1 MUHAMMAD JALIL Male 1987 33 Chittagong
+2 MUHAMMAD JALIL Female 1996 24 Rangpur
+2 MRS. JEBA Female 1996 24 Rangpur
+3 MR. A. JALIL Male 1987 33 Sirajganj
+3 MR. A. JALIL Male 1987 33 Sirajganj
+3 MD. A. JALIL Male 1987 33 Sirajganj
+4 MISS. JEBA Female 1996 24 Rangpur
+4 PROF. JEBA Female 1996 24 Rangpur
+1 MD. A. JALIL Male 1987 33 Chittagong
+1 MUHAMMAD A. JALIL Male 1987 33 Chittagong
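+For reference, this is the kind of approach I was considering (a minimal sketch with pandas and scikit-learn; which columns to use, the number of clusters, and the quality metric are all just guesses on my part):
+
+import pandas as pd
+from sklearn.cluster import KMeans
+from sklearn.metrics import silhouette_score
+from sklearn.preprocessing import StandardScaler
+
+df = pd.DataFrame({
+    'Name': ['MUHAMMAD JALIL', 'MRS. JEBA', 'MR. A. JALIL', 'PROF. JEBA'],
+    'Gender': ['Male', 'Female', 'Male', 'Female'],
+    'Age': [33, 24, 33, 24],
+    'Address': ['Chittagong', 'Rangpur', 'Sirajganj', 'Rangpur'],
+})
+
+# One-hot encode the categorical columns and scale the numeric one.
+X = pd.get_dummies(df[['Gender', 'Address']])
+X['Age'] = StandardScaler().fit_transform(df[['Age']]).ravel()
+
+kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
+print(kmeans.labels_)
+print(silhouette_score(X, kmeans.labels_))   # one way to judge cluster quality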
+
+"
+"['neural-networks', 'computational-learning-theory', 'universal-approximation-theorems', 'vc-theory', 'no-free-lunch-theorems']"," Title: If a neural network is a universal function approximator, can it have any prior beliefs?Body: Let us confine ourselves to the case where we have a $n$ dimensional input and a $+1$ or $-1$ output. It can be shown that:
+
+For every $n$, there exists a dense NN of depth 2, such that it contains all functions from ${±1}^n$ to ${±1}$. (given sign activation functions and some other very simple assumptions).
+
+Check section 20.3 for the proof.
+So, if a neural net can approximate any function, then it has an infinite $\mathcal{VC}$ dimension (considering the $n$-dimensional set of points as our universe).
+Thus, it can realize all types of functions (or its hypothesis set contains all sets of functions), and hence cannot have prior knowledge (prior knowledge in the sense used in the No Free Lunch theorem).
+Are my deductions correct? Or did I make wrong assumptions? Are there actually any prior beliefs in a neural network that I am missing?
+A detailed explanation would be nice.
+"
+"['prediction', 'chat-bots']"," Title: What is the best approach to build a self-learning AI chatbot?Body: I am a novice in AI and I like to build a chatbot to predict diseases using patient narration as input. Initially, I simply want to train my chatbot on 1 disease only. And once this initial milestone is accomplished then I want to train it on other diseases.
+
+I want to know whether I should build and train my model first and then move towards building a chatbot or should i create a chatbot first and then train it on a disease.
+
+Also please justify which approach is better and why?
+"
+"['reinforcement-learning', 'policy-gradients', 'experience-replay', 'off-policy-methods', 'on-policy-methods']"," Title: Could we update the policy network with previous trajectories using supervised learning?Body: I believe to understand the reason why on-policy methods cannot reuse trajectories collected from earlier policies: the trajectory distribution change with the policy and the policy gradient is derived to be an expectation over these trajectories.
+
+Doesn't the following intuition from the OpenAI Vanilla Policy Gradient description indeed propose that learning from prior experience should still be possible?
+
+
+ The key idea underlying policy gradients is to push up the probabilities of actions that lead to higher return, and push down the probabilities of actions that lead to lower return.
+
+
+The goal is to change the probabilities of actions. Actions sampled from previous policies are still possible under the current one.
+
+I see that we cannot reuse the previous actions to estimate the policy gradient. But couldn't we update the policy network with previous trajectories using supervised learning? The labels for the actions would be between 0 and 1 based on how good an action was. In the simplest case, just 1 for good actions and 0 for bad ones. The loss could be a simple sum of squared differences with a regularization term.
+
+Why is that not used/possible? What am I missing?
+"
+"['neural-networks', 'datasets', 'feedforward-neural-networks', 'multilayer-perceptrons']"," Title: What are standard datasets for fully connected neural networks?Body: I am looking for datasets that are used as a testing standard in the fully connected neural networks (FCNN). For example, in the image recognition and CNN, CIFAR datasets are used in most of the papers, but can't find anything like that for the FCNN.
+"
+['deep-learning']," Title: How to choosing the random value for parameter w in deep learning network?Body: I did watch the course DeepLearning of Andrew Ng and he told that we should create parameter w
small like:
+
+parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) * 0.001
+
+
+But in the last application assignment, they chose another way:
+
+import numpy as np
+
+layers_dims = [12288, 20, 7, 5, 1]
+def initialize_parameters_deep(layer_dims):
+ np.random.seed(3)
+ parameters = {}
+ L = len(layer_dims)
+
+ for l in range(1, L):
+ parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) / np.sqrt(layer_dims[l - 1])
+ parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
+ assert (parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l - 1]))
+ assert (parameters['b' + str(l)].shape == (layer_dims[l], 1))
+ return parameters
+
+
+And the result of this approach is very good, but if I choose w the old way above, it only gets 34% accuracy!
+
+So, can you explain why?
+"
+"['machine-learning', 'ai-design', 'classification']"," Title: Which classifier should I use for a dataset with one feature?Body: I have a labeled dataset composed of 3000 data. Its single feature is the price of the house and its label is the number of bedrooms.
+
+Which classifier would be a good choice to classify these data?
+"
+"['linear-regression', 'statistical-ai', 'linear-algebra']"," Title: Is there any way to apply linear transformations on a vector other than matrix multiplication?Body: I am trying to optimize the cost function calculation in regression analysis using a non-matrix multiplication based approach.
+
+More specifically, I have a point $x = (1, 1, 2, 3)$, to which I want to apply a linear transformation $n$ times. If the transformation is denoted by a $4 \times 4$ matrix $A$, then the final transformation would be given by $A^n * x$.
+
+Given that matrix multiplication can be computationally expensive, is there a way we can speed up the computation, assuming we would need to run multiple iterations of this simulation?
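+For illustration, this is the comparison I have in mind (numpy; the matrix and the value of n are arbitrary): repeated multiplication versus exponentiation by squaring, which needs only about log2(n) matrix products.
+
+import numpy as np
+
+A = np.arange(16, dtype=float).reshape(4, 4) / 10.0   # arbitrary 4x4 transformation
+x = np.array([1.0, 1.0, 2.0, 3.0])
+n = 10
+
+# Naive: n matrix-vector products, repeated for every new n.
+result_naive = x.copy()
+for _ in range(n):
+    result_naive = A @ result_naive
+
+# Exponentiation by squaring: A**n computed with O(log n) matrix-matrix products,
+# then reused for as many vectors as needed.
+result_power = np.linalg.matrix_power(A, n) @ x
+
+print(np.allclose(result_naive, result_power))   # True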
+"
+"['machine-learning', 'robotics']"," Title: Is possible to train a robot or AI to prune fruit trees?Body: I live in a rural area where there is a growing necessity for people with knowledge to prune Pear trees, this process is crucial for the industry, but as people go to the big cities, this skill is being lost, and in a few years there will be no one to do it.
+
+I want to know if it is possible to train a robot using AI to do this, and what it would take to make this work!
+
+Keep in mind that this would be viable only in an ""industrial"" way. The trees all have approximately the same size and are disposed in a certain preset way (distance between each other, height, etc).
+"
+"['deep-learning', 'reinforcement-learning', 'objective-functions']"," Title: Is Mean Squared Error Loss function a good loss function for continuous variables $0 < x < 1$Body: Suppose I am utilising a neural network to predict the next state, $s'$ based on the current $(s, a)$ pairs.
+
+All my neural network inputs are between 0 and 1, and the loss function for this network is defined as the mean squared error of the difference between the current state and next state. Because the variables are all between 0 and 1, the MSE difference between the actual difference and predicted difference is smaller than the actual difference.
+
+Suppose the difference in next state and current state for $s \in R^2$ is $[0.4,0.5]$ and the neural network outputs a difference of $[0.2,0.4]$. The mean squared loss is therefore 0.05 $(0.2^2 + 0.1^2) = 0.05$ whereas the neural network does not really predict the next state very well due to a difference of $(0.2, 0.1)$.
+
+Although whichever loss function is used does not matter, it was deceptive to see the loss function outputting low values, since this is mainly due to the squared term keeping the value small.
+
+Is Mean Squared Error loss function still a good loss function to be used here ?
+"
+"['machine-learning', 'deep-learning', 'data-science', 'algorithmic-bias']"," Title: Possible approaches to dealing with unbalanced dataset and highly biased deep learning algorithmBody: I have an extremely unbalanced video dataset for a two class video classification problem.All my videos in my current video dataset is $40$ second long with $900p$ resolution.However the dataset is highly unbalanced with $3000$ samples for class A vs $300$ samples to class B. Due to high imbalance, i added the following class weight implementation to my deep learning model.
+
+https://scikit-learn.org/stable/modules/generated/sklearn.utils.class_weight.compute_class_weight.html
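+This is roughly how I compute the weights (a sketch; the label array below is a stand-in for my real labels):
+
+import numpy as np
+from sklearn.utils.class_weight import compute_class_weight
+
+# Stand-in labels: 3000 samples of class 0 (A) and 300 of class 1 (B).
+y = np.array([0] * 3000 + [1] * 300)
+classes = np.unique(y)
+weights = compute_class_weight(class_weight='balanced', classes=classes, y=y)
+class_weight = dict(zip(classes, weights))
+print(class_weight)   # roughly {0: 0.55, 1: 5.5}; this dict is then passed to model.fit(..., class_weight=...)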
+
+However my model was still heavily biased due to high imbalance in data. I am considering the following options:
+
+
+- Adding more video data to balance the dataset: My only concern here is my current dataset is uniform with same duration of videos and similar resolution of around $900p$. Will it matter if I add very low resolution videos to balance the dataset?
+- Adding video augmentation to current dataset.
+
+
+I am looking for any other recommendations that I could make use of. Are there any pros or cons of these methods that I should consider to prevent bias?
+"
+"['comparison', 'computational-learning-theory', 'vc-dimension', 'universal-approximation-theorems']"," Title: How can neural networks approximate any continuous function but have $\mathcal{VC}$ dimension only proportional to their number of parameters?Body: Neural networks typically have $\mathcal{VC}$ dimension that is proportional to their number of parameters and inputs. For example, see the papers Vapnik-Chervonenkis dimension of recurrent neural networks (1998) by Pascal Koirana and Eduardo D. Sontag and VC Dimension of Neural Networks (1998) by Eduardo D. Sontag for more details.
+
+On the other hand, the universal approximation theorem (UAT) tells us that neural networks can approximate any continuous function. See Approximation by Superpositions of a Sigmoidal Function (1989) by G. Cybenko for more details.
+
+Although I realize that the typical UAT only applies to continuous functions, the UAT and the results about the $\mathcal{VC}$ dimension of neural networks seem to be a little bit contradictory, but this is only if you don't know the definition of $\mathcal{VC}$ dimension and the implications of the UAT.
+
+So, how come that neural networks approximate any continuous function, but, at the same time, they usually have a $\mathcal{VC}$ dimension that is only proportional to their number of parameters? What is the relationship between the two?
+"
+"['bayesian-networks', 'bayesian-optimization', 'bayesian-probability']"," Title: How can I draw a Bayesian network for this problem with birds?Body: I am working on the following problem to gain an understanding of Bayesian networks and I need help drawing it:
+
+
+ Birds frequently appear in the tree outside of your window in the morning and evening; these include finches, cardinals and robins. Finches appear more frequently than robins, and robins appear more frequently than cardinals (the ratio is 7:4:1). The finches will sing a song when they appear 7 out of every 10 times in the morning, but never in the evening. The cardinals rarely sing songs and only in the evenings (in the evening, they sing 1 of every 10 times they appear). Robins sing once every five times they appear regardless of the time of day. Every tenth cardinal and robin will stay in the tree longer than five minutes. Every fourth finch will stay in the tree longer than five minutes.
+
+
+I have tried drawing two versions of the network and would love some feedback. Currently, I am leaning more towards the right side network.
+
+
+"
+"['machine-learning', 'reinforcement-learning', 'q-learning', 'dqn', 'deep-rl']"," Title: Are Q values estimated from a DQN different from a duelling DQN with the same number of layers and filters?Body: I am confused about the Q values of a duelling deep Q network (DQN). As far as I know, duelling DQNs have 2 outputs
+
+
+- Value: how good it is to be in a particular state $s$
+- Advantage: the advantage of choosing a particular action $a$
+
+
+We can make these two outputs into Q values (reward for choosing particular action $a$ when in state $s$) by adding them together.
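+As far as I understand, the usual dueling aggregation is not a plain sum but subtracts the mean advantage (Wang et al., 2016), roughly like this (toy numbers of my own):
+
+import numpy as np
+
+value = 1.5                                   # V(s): scalar state value
+advantages = np.array([0.2, -0.1, 0.4])       # A(s, a) for each action
+
+# Subtracting the mean advantage keeps V and A identifiable instead of
+# simply summing the two streams.
+q_values = value + (advantages - advantages.mean())
+print(q_values)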
+
+However, in a DQN, we get Q values from the single output layer of the network.
+
+Now, suppose that I use the same DQN model with the very same weights in my input and hidden layers, but change the output layer that gives us Q values into advantage and value outputs. Then, during training, if I add them together, will it give me the same Q value for a particular state, supposing all the parameters of both my algorithms are the same except for the output layers?
+"
+"['machine-learning', 'vc-dimension', 'vc-theory']"," Title: Why does the growth function need to be polynomial in order for the learning algorithm to be consistent?Body: Could someone please explain to me why in VC theory, specifically, when calculating the VC dimension, the growth function needs to be polynomial in order for the learning algorithm to be consistent? Why polynomial, and where does the name growth function come from exactly?
+"
+"['tensorflow', 'objective-functions', 'softmax']"," Title: Why does TensorFlow docs discourage using softmax as activation for the last layer?Body: The beginner colab example for tensorflow states:
+
+
+ Note: It is possible to bake this tf.nn.softmax in as the activation function for the last layer of the network. While this can make the model output more directly interpretable, this approach is discouraged as it's impossible to provide an exact and numerically stable loss calculation for all models when using a softmax output.
+
+
+My question is, then, why? What do they mean by impossible to provide an exact and numerically stable loss calculation?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'convolutional-neural-networks', 'computer-vision']"," Title: Creating Dataset for Image ClassificationBody:
+I want to develop a CNN model to identify 24 hand signs in American Sign Language. I created a custom dataset that contains 3000 images for each hand sign i.e. 72000 images in the entire dataset.
+
+For training the model, I would be using 80-20 dataset split (2400 images/hand sign in the training set and 600 images/hand sign in the validation set). My question is:
+Should I randomly shuffle the images when creating the dataset? And Why?
+
+
PS: Based on my previous experience, it led to validation loss being lower than training loss and validation accuracy more than training accuracy.
+"
+"['machine-learning', 'deep-learning', 'resource-request']"," Title: Is there an online tool that can predict accuracy given only the dataset?Body: Is there an online tool that can predict accuracy given only the dataset as input (i.e. without the compiled model)?
+
+That would help to understand how data augmentation/distribution standardization, etc., is likely to change the accuracy.
+"
+"['reinforcement-learning', 'math', 'implementation', 'gym', 'policy-evaluation']"," Title: How can I implement policy evaluation when reward is tied to an action outcome?Body: I'm following Stanford reinforcement learning videos on youtube. One of the assignments asks to write code for policy evaluation for Gym's FrozenLake-v0 environment.
+
+In the course (and books I have seen), they define policy evaluation as
+
+$$V^\pi_k(s)=r(s,\pi(s))+\gamma\sum_{s'}p(s'|s,\pi(s))V^\pi_{k-1}(s')$$
+
+My confusion is that in the frozen lake example, the reward is tied to the result of the action. So, for each pair state-action, I have a list that contains a possible next-state, the probability to get to that next-state and the reward. For example, being in the target state and performing any action brings a reward of $0$, but being in any state that brings me to the target state gives me a reward of $1$.
+
+Does this mean that, for this example, I need to rewrite $V^\pi_k(s)$ as something like this:
+
+$$V^\pi_k(s)= \sum_{s'} p(s'|s,\pi(s)) [r(s,\pi(s), s')+ \gamma V^\pi_{k-1}(s')]$$
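+For concreteness, this is how I would implement that second form directly from Gym's transition model (a sketch; the fixed policy and the convergence threshold are arbitrary, and access to env.P may differ slightly between gym versions):
+
+import gym
+import numpy as np
+
+env = gym.make('FrozenLake-v0')
+n_states = env.observation_space.n
+policy = np.zeros(n_states, dtype=int)      # placeholder deterministic policy (always action 0)
+gamma, theta = 0.99, 1e-8
+
+V = np.zeros(n_states)
+while True:
+    delta = 0.0
+    for s in range(n_states):
+        v_new = 0.0
+        # env.P[s][a] is a list of (prob, next_state, reward, done) tuples,
+        # i.e. the reward is attached to the transition, not just to the state.
+        for prob, s_next, reward, done in env.P[s][policy[s]]:
+            v_new += prob * (reward + gamma * V[s_next])
+        delta = max(delta, abs(v_new - V[s]))
+        V[s] = v_new
+    if delta < theta:
+        break
+print(V.reshape(4, 4))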
+"
+"['terminology', 'agi', 'history']"," Title: Who first coined the term ""artificial general intelligence""?Body: Similarly to the question Who first coined the term Artificial Intelligence?, who first coined the term ""artificial general intelligence""?
+"
+"['reinforcement-learning', 'ai-design', 'markov-decision-process']"," Title: How can I formalise a non-zero-sum game of $N$ agent as Markov game?Body: I coded a non-zero-sum game of $N$ agents in a discrete dynamic environment to RL with Q-learning and DQN agents.
+It's like a marathon. Only two actions are available per agent: $\{ G \text{ (move forward)}, S \text{ (stay at its position)} \}$. Every agent has $m$ possible individual positions, and the other agents cannot interfere with its path to the terminal position. Only when exactly one agent reaches its terminal position does it get the full reward. When every agent reaches its terminal state, all get $0$ reward. If more than $1$ but fewer than $N$ reach their terminals, they get a small reward.
+Now, I try to formalize it as a Markov Game (MG), but I don't have a solid mathematical background.
+My first question is:
+
+- When we model a problem as an RL problem, the transition probability (TP) distribution is not required, while an MDP and MG require TP. But then how are all RL problems modeled first into MDP or MG?
+
+As I have read in literature, I understand that I will treat the action sets of all other players as a "team" joint set of actions.
+Second question:
+
+- How can I specialize the TP function to the specific problem I want to model? Should I just mention the general function equation?
+
+What I have tried so far is to explicitly describe it, but I think I am not getting something:
+
+
+- The probability of the transition from $s$ to $s'$, in which $k$ agents move a step forward, is equal to $1$, given that those $k$ agents all chose action $G \in A$ and that the remaining $n-k$, if any, all chose action $S \in A$, where $k$ is an integer with $1 \leq k \leq n$.
+
+- The probability of the transition from $s$ to $s'$, in which $k$ agents earn the high payoff, is equal to $1$, given that $k=1$, that agent's position is equal to $m-1$, it chooses action $G \in A$, and the remaining $n-k$ all chose action $S \in A$, where $m$ is the maximum possible position for each agent.
+
+- The probability of the transition from $s$ to $s'$, in which $k$ agents earn a low payoff, is equal to $1$, given that $k>1$, their positions are equal to $m-1$, they all chose action $G \in A$, and the remaining $n-k$ all chose action $S \in A$, where $k$ is an integer with $1 < k \leq n$ and $m$ is the maximum possible position for each agent.
+
+
+
+"
+"['neural-networks', 'training', 'genetic-algorithms', 'mean-squared-error']"," Title: What does it mean if classification error is equal between two networks but the MSE is different?Body: I'm experimenting with training a feedforward neural network using a genetic algorithm and I've done a few tests using both the mean squared error and classification error functions as fitness heuristic in the GA.
+
+When I use MSE as error function, my GA tends to converge around an MSE of 0.1 (initial conditions have an MSE of around 0.9). Testing system accuracy with this network gives me 95%+ for both training and testing data.
+
+But, when I use classification error as my heuristic, my GA tends to converge around when the MSE is about 0.3. System accuracy is still around the same at 95%+.
+
+
+ My question is, if you had two networks, one showing an MSE of 0.1 and one an MSE of 0.3, but both perform approximately the same in terms of accuracy, what can I deduce from the differences in MSE?
+
+
+In other words: which network is ""better"", even if the accuracy is the same? Does a lower MSE mean anything below a certain amount? I could train my network for 100x as many generations and get a better MSE but not necessarily a better accuracy. Why?
+
+For some context:
+
+
+
+When the MSE is approximately 1.5 (epoch 250), the accuracy seems to match when the MSE is approximately 2.0 (epoch 50). Why does the accuracy not increase despite MSE decreasing?
+"
+"['philosophy', 'social', 'ethics', 'healthcare']"," Title: Would it be ethical to allow an AI to make life-or-death medical decisions?Body: Would it be ethical to allow an AI to make life-or-death medical decisions?
+
+For instance, where there is an insufficient number of ventilators during a respiratory pandemic, not every patient can have one. It seems like a straightforward question, but before you answer, consider:
+
+
+- Human decision-making in this regard is a form of algorithm.
+
+
+(For instance, the statistics and rules that determine who gets kidney transplants.)
+
+
+- Even if the basis for the decision is statistical, the ultimate decision making process could be heuristic, so at least the bias could be identified.
+
+
+In other words, the goal of this process, specifically, is favoring one patient over another, but doing so in the way that has the greatest utility.
+
+
+- Statistical bias is a core problem of Machine Learning, but human decision making is also subject to this condition.
+
+
+One of the arguments in favor might be that at least the algorithm would be impartial, here in relation to human bias.
+
+Finally, where there is scarcity, utilitarianism becomes more imperative. (Part of the trolley problem is you only have two tracks.) But the trolley problem is also relevant because it can be a commentary on the burden of responsibility.
+"
+"['reinforcement-learning', 'definitions', 'markov-decision-process', 'reward-functions', 'sutton-barto']"," Title: Why does the definition of the reward function $r(s, a, s')$ involve the term $p(s' \mid s, a)$?Body: Sutton and Barto define the state–action–next-state reward function, $r(s, a, s')$, as follows (equation 3.6, p. 49)
+$$
+r(s, a, s^{\prime}) \doteq \mathbb{E}\left[R_{t} \mid S_{t-1}=s, A_{t-1}=a, S_{t}=s^{\prime}\right]=\sum_{r \in \mathcal{R}} r \frac{p(s^{\prime}, r \mid s, a )}{\color{red}{p(s^{\prime} \mid s, a)}}
+$$
+Why is the term $p(s' \mid s, a)$ required in this definition? Shouldn't the correct formula be $\sum_{r \in \mathcal{R}} r p(s^{\prime}, r \mid s, a )$?
+"
+"['machine-learning', 'classification', 'datasets', 'weights', 'cross-entropy']"," Title: How are weights for weighted x-entropy loss on imbalanced data calculated?Body: I am trying to build a classifier which should be trained with the cross entropy loss. The training data is highly class-imbalanced. To tackle this, I've gone through the advice of the tensorflow docs
+
+and now I am using a weighted cross entropy loss where the weights are calculated as
+
+weight_for_class_a = (1 / samples_for_class_a) * total_number_of_samples/number_of_classes
+
+
+following the mentioned tutorial.
+
+It works perfectly, but why is there this factor total_number_of_samples/number_of_classes?
+The mentioned tutorial says this
+
+
+ [...] helps keep the loss to a similar magnitude.
+
+
+But I do not understand why. Can someone clarify?
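+To make my confusion explicit, here is the arithmetic I did with tiny made-up class counts:
+
+total, n_classes = 1000, 2
+samples = {'a': 900, 'b': 100}
+
+weights = {c: (1 / n) * total / n_classes for c, n in samples.items()}
+print(weights)   # {'a': 0.555..., 'b': 5.0}
+
+# Summed over the whole dataset, the weights average to 1, so the weighted loss
+# stays on the same overall scale as the unweighted loss:
+average_weight = sum(weights[c] * samples[c] for c in samples) / total
+print(average_weight)   # 1.0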
+"
+"['neural-networks', 'image-processing', 'regression']"," Title: Can a neural network whose output is uniformly equal to zero learn its way out of it?Body: I am performing a regression task on sparse images. The images are a result of a physical process with meaningful parameters (actually, they are a superposition of cone-like shapes), and I am trying to train a regressor for these parameters.
+
+Here, sparse images mean that the data, and thus the expected output, is made of 2D square tensors, with only one channel, and that roughly 90% of the image is expected to be equal to zero. However, in my system, the data is represented as dense tensors.
+
+I built a neural network with an encoder mapping the image on an output for which I have chosen activation and shape such that it corresponds with those meaningful parameters.
+
+I then use custom layers to build an image from these parameters in a way that matches closely the physical process, and train the network by using the L2 distance between the input image and the output image.
+
+However, for a large set of parameters, the output image will be equal to zero, since these are sparse images. This is the case in general for the initial network.
+
+Is it possible that, through training, the neural network will learn its way out of this all-zero parameterization?
+
+My intuition is that, in the beginning, the loss will be equal to the L2 norm of the input image, and the gradient will be uniformly zero, hence, no learning.
+
+Can anyone confirm?
+"
+"['machine-learning', 'computer-vision', 'applications']"," Title: Could machine learning be used to measure the distance between two objects from a picture or live camera?Body: Could machine learning be used to measure the distance between two objects from a picture or live camera?
+
+An example of this is the measurement between the centre of each eye pupil.
+
+This area is all new to me, so any advice and suggestions would be greatly appreciated.
+"
+"['reinforcement-learning', 'reference-request', 'deepmind']"," Title: Which simulation platform is used by DeepMind (and others) to handle inverse kinematics musculoskeletal?Body: Which simulation platform is used by DeepMind and others to handle inverse kinematics musculoskeletal simulation, etc., for reinforcement learning simulations and agents?
+
+I thought they use Unity or Unreal but I assume that would be resource-heavy.
+"
+"['neural-networks', 'deep-learning', 'classification', 'objective-functions', 'activation-functions']"," Title: Single label classification into hierarchical categories using a neural networkBody: I am working on a classification problem into progressive classes. In other words, there is some hierarchy of categories in such a way, that A < B < C, e.g. low, medium, high, very high. What loss function and activation function for the output layer should I use to take advantage of the class hierarchy, so that true A and predicted C is penalized more than true A and predicted B?
+
+My ideas are:
+
+1) Assign some value to each category, use one output unit with the sigmoid activation and an RMS loss function, and then assign each class to an interval, e.g. 0-0.33 - class A, 0.33-0.66 - class B, 0.66-1 - class C (a rough sketch of this mapping is below). It seems to do the trick, but it can favor the extreme categories over the middle ones.
+
+2) Use K softmax output units, integer labels instead of one-hot encoded and the sparse categorical crossentropy loss function. In this case I am not sure how exactly sparse categorical crossentropy works and if it really takes into account the hierarchy.
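+
+To make the interval mapping from idea 1) concrete, here is a minimal sketch; the thresholds and predicted scores are made up:
+
+import numpy as np
+
+# Hypothetical sigmoid outputs of the single output unit for a batch of 4 examples
+y_pred = np.array([0.10, 0.45, 0.70, 0.95])
+
+# Interval boundaries for 3 ordered classes A < B < C
+bins = np.array([0.33, 0.66])
+
+# np.digitize maps each score to the interval it falls into: 0 -> A, 1 -> B, 2 -> C
+predicted_class = np.digitize(y_pred, bins)
+print(predicted_class)   # [0 1 2 2]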
+"
+"['machine-learning', 'math', 'computational-learning-theory', 'vc-dimension', 'vc-theory']"," Title: Understanding relation between VC Symmetrization Lemma and Generalization BoundsBody: I am new in the field of Machine Learning so I wanted to start of by reading more about mathematics and history behind it.
+
+I am currently reading, in my opinion, a very good and descriptive paper on Statistical Learning Theory - ""Statistical Learning Theory: Models, Concepts, and Results"". In section 5.5 Generalization bounds, it states that:
+
+
+ It is sometimes useful to rewrite (17) ""the other way round"". That is, instead of fixing $\epsilon$ and then computing the probability that the empirical risk deviates from the true risk by more than $\epsilon$, we specify the probability with which we want the bound to hold, and then get a statement which tells us how close we can expect the risk to be to the empirical risk. This can be achieved by setting the right-hand side of (17) equal to some $\delta > 0$, and then solving for $\epsilon$. As a result, we get the statement that with a probability at least $1−\delta$, any function $f \in F$ satisfies
+
+
+
+
+Equation (17) is VC Symmetrization lemma to which we applied union bound and then Chernoff bound:
+
+
+
+What I fail to understand is the part where we are rewriting (17) ""the other way around"". I fail to grasp intuitive understanding of relation between (17) and (18), as well as understanding generalization bounds in general.
+
+Could anyone help me with understanding these concepts or at least provide me with additional resources (papers, blog posts, etc.) that can help?
+"
+"['neural-networks', 'machine-learning', 'hyperparameter-optimization']"," Title: How is a validation set used to tune the hyperparameters in a non-biased way, if the new models depends on the values of these?Body: I've built a neural network from the scratch, choosing arbitrary numbers for the hyperparameters: learning rate, number of hidden layers and neurons for these, number of epochs and size of mini batches. Now that I've been able to build something potentially useful (~93% of accuracy with test data, unseen by the model before), I want to focus on hyperparameter tuning.
+
+The conceptual difference between training and validation sets is clear and makes a lot of sense. It's obvious that the model is biased towards the training set, so it wouldn't make sense to use it to tune the hyperparameters, nor for evaluating its performance.
+
+But, how can I use the validation set for this, if changing any of the hyperparameters forces me to rebuild the model? The final prediction depends on the values of X MxN matrices (weights) and X N-dimensional vectors (biases), whose values depend on the learning rate, batch size and number of epochs, and whose dimensions depend on the number and size of hidden layers. If I change any of these, I'd need to rebuild my model again. So I'd be using this validation set for training different models, ending up as in the first step: fitting a model from scratch.
+
+To sum up: I fall in a recursive problem in which I need to fine tune the hyperparameters of my model with unseen data, but changing any of these hyperparameters implies rebuilding the model.
+"
+"['machine-learning', 'deep-learning', 'objective-functions', 'ethics', 'algorithmic-bias']"," Title: How to add weights to one specific input feature to ensure fair training in the network?Body: I am trying to create a multiclass product-rating network based on product reviews and other input features. Two of the other input features are ""product category"" and ""gender"". However, I want to avoid unfair bias in the classification task between male/female. Since some product categories are more likely to be reviewed by males or females (hence, not balanced), I am seeking for an approach to solve this ""imbalance""-like issue.
+
+The options and things that I consider at the moment are:
+
+
+- Downsample the training examples in each product category to balance for gender
+- Add weights to the training examples for gender, or
+- Add weights to the loss function (either log-likelihood or cross-entropy)
+
+
+Even though downsampling might be the easiest option, I would like to explore the options of adding weights in the network in some way (a rough sketch of what I mean is below). However, most of the literature only discusses adding weights to the loss function in order to deal with imbalanced data related to the target value (which is not the issue that I am addressing).
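+
+To illustrate the second option (weights on the training examples), this is roughly what I have in mind, with a made-up data frame; the inverse-frequency scheme is just one possible choice:
+
+import pandas as pd
+
+# Hypothetical data: one row per review
+df = pd.DataFrame({
+    'category': ['shoes', 'shoes', 'shoes', 'games', 'games', 'games'],
+    'gender':   ['f', 'f', 'm', 'm', 'm', 'f'],
+})
+
+# Weight each example by the inverse of its gender frequency within its product category,
+# so both genders contribute equally to the loss inside every category
+group_counts = df.groupby(['category', 'gender'])['gender'].transform('count')
+category_counts = df.groupby('category')['gender'].transform('count')
+df['sample_weight'] = category_counts / (2 * group_counts)
+print(df)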
+
+Can someone help me or point me in the right direction to solve this challenge?
+"
+"['reinforcement-learning', 'python', 'pytorch']"," Title: Why are the rewards of my RL agent for the Atari Breakout game decreasing after a certain number of episodes?Body:
+
+The agent is trying to master the Atari Breakout game.
+
+Here is my code
+
+Is it normal that reward_100 decreased that much after it hits 4.5? Is there a way to avoid that behavior?
+
+Be aware that reward_100 is simply mean_reward = np.mean(self.total_rewards[-100:]). In other words, it is the mean over the last 100 rewards. On the graph, reward_100 is the y-axis and the number of episodes is the x-axis.
+"
+"['reinforcement-learning', 'comparison']"," Title: Is RL just a less rigorous version of stochastic approximation theory?Body: After reading some literature on reinforcement learning (RL), it seems that stochastic approximation theory underlies all of it.
+
+There's a lot of substantial and difficult theory in this area requiring measure theory leading to martingales and stochastic approximations.
+
+The standard RL texts at best mention the relevant theorem and then move on.
+
+Is the field of RL really stochastic approximation theory in disguise? Is RL just a less rigorous version of stochastic approximation theory?
+"
+"['reinforcement-learning', 'proximal-policy-optimization', 'on-policy-methods']"," Title: Action masking for on policy algorithm like PPOBody: I have an environment, in which my agent learns according to PPO. The environment has a maximum of 80 actions, however not all of them are always allowed. My idea was to mask them, by setting the probabilities of the non valid actions to 0, and renormalizing the remaining actions. However, this would be not the predicted policy anymore, and thus the agent wouldn't act on policy. Is there a better way to mask PPO agents actions, or does it simply not constitute a big problem?
+"
+"['natural-language-processing', 'reference-request', 'natural-language-understanding', 'text-summarization']"," Title: How would you build an AI to output the primary concept of a paragraph?Body: My thinking is you input a paragraph, or sentence, and the program can boil it down to the primary concept(s).
+Example:
+Input:
+
+Sure, it would be nice if morality was simply a navigation toward greater states of conscious well-being, and diminishing states of suffering, but aren't there other things to value independent of well-being? Like truth, or beauty?
+
+Output:
+
+Questioning moral philosophy.
+
+
+Is there any group that's doing this already? If not, why not?
+"
+"['reinforcement-learning', 'q-learning']"," Title: How are n-dimensional vectors state vectors represented in Q-learning?Body: Using this code:
+
+import gym
+import numpy as np
+import time
+
+""""""
+SARSA on policy learning python implementation.
+This is a python implementation of the SARSA algorithm in the Sutton and Barto's book on
+RL. It's called SARSA because - (state, action, reward, state, action). The only difference
+between SARSA and Qlearning is that SARSA takes the next action based on the current policy
+while qlearning takes the action with maximum utility of next state.
+Using the simplest gym environment for brevity: https://gym.openai.com/envs/FrozenLake-v0/
+""""""
+
+def init_q(s, a, type=""ones""):
+ """"""
+ @param s the number of states
+ @param a the number of actions
+ @param type random, ones or zeros for the initialization
+ """"""
+ if type == ""ones"":
+ return np.ones((s, a))
+ elif type == ""random"":
+ return np.random.random((s, a))
+ elif type == ""zeros"":
+ return np.zeros((s, a))
+
+
+def epsilon_greedy(Q, epsilon, n_actions, s, train=False):
+ """"""
+ @param Q Q values state x action -> value
+ @param epsilon for exploration
+ @param s number of states
+ @param train if true then no random actions selected
+ """"""
+ if train or np.random.rand() < epsilon:
+ action = np.argmax(Q[s, :])
+ else:
+ action = np.random.randint(0, n_actions)
+ return action
+
+def sarsa(alpha, gamma, epsilon, episodes, max_steps, n_tests, render = True, test=False):
+ """"""
+ @param alpha learning rate
+ @param gamma decay factor
+ @param epsilon for exploration
+ @param max_steps for max step in each episode
+ @param n_tests number of test episodes
+ """"""
+ env = gym.make('Taxi-v3')
+ n_states, n_actions = env.observation_space.n, env.action_space.n
+ Q = init_q(n_states, n_actions, type=""ones"")
+ print('Q shape:' , Q.shape)
+
+ timestep_reward = []
+ for episode in range(episodes):
+ print(f""Episode: {episode}"")
+ total_reward = 0
+ s = env.reset()
+ print('s:' , s)
+ a = epsilon_greedy(Q, epsilon, n_actions, s)
+ t = 0
+ done = False
+ while t < max_steps:
+ if render:
+ env.render()
+ t += 1
+ s_, reward, done, info = env.step(a)
+ total_reward += reward
+ a_ = epsilon_greedy(Q, epsilon, n_actions, s_)
+ if done:
+ Q[s, a] += alpha * ( reward - Q[s, a] )
+ else:
+ Q[s, a] += alpha * ( reward + (gamma * Q[s_, a_] ) - Q[s, a] )
+ s, a = s_, a_
+ if done:
+ if render:
+ print(f""This episode took {t} timesteps and reward {total_reward}"")
+ timestep_reward.append(total_reward)
+ break
+# print('Updated Q values:' , Q)
+ if render:
+ print(f""Here are the Q values:\n{Q}\nTesting now:"")
+ if test:
+ test_agent(Q, env, n_tests, n_actions)
+ return timestep_reward
+
+def test_agent(Q, env, n_tests, n_actions, delay=0.1):
+ for test in range(n_tests):
+ print(f""Test #{test}"")
+ s = env.reset()
+ done = False
+ epsilon = 0
+ total_reward = 0
+ while True:
+ time.sleep(delay)
+ env.render()
+ a = epsilon_greedy(Q, epsilon, n_actions, s, train=True)
+ print(f""Chose action {a} for state {s}"")
+ s, reward, done, info = env.step(a)
+ total_reward += reward
+ if done:
+ print(f""Episode reward: {total_reward}"")
+ time.sleep(1)
+ break
+
+
+if __name__ ==""__main__"":
+ alpha = 0.4
+ gamma = 0.999
+ epsilon = 0.9
+ episodes = 200
+ max_steps = 20
+ n_tests = 20
+ timestep_reward = sarsa(alpha, gamma, epsilon, episodes, max_steps, n_tests)
+ print(timestep_reward)
+
+
+from :
+
+https://towardsdatascience.com/reinforcement-learning-temporal-difference-sarsa-q-learning-expected-sarsa-on-python-9fecfda7467e
+
+A sample Q table generated is :
+
+[[ 1. 1. 1. 1. 1. 1. ]
+ [ 0.5996 0.5996 0.5996 0.35936 0.5996 1. ]
+ [ 0.19936016 0.35936 0.10336026 0.35936 0.35936 -5.56063984]
+ ...
+ [ 0.35936 0.5996 0.35936 0.5996 1. 1. ]
+ [ 1. 0.5996 1. 1. 1. 1. ]
+ [ 0.35936 0.5996 1. 1. 1. 1. ]]
+
+
+The columns represent the actions and the rows represent the corresponding states.
+
+Can the state be represented by a vector? The Q table cells are not contained by vectors of size > 1, so how should these states be represented? For example, if I'm in the state [2], can this be represented as an n-dimensional vector?
+
+Put another way, if Q[1,3] = 4, can the Q state 1 with action 3 be represented as a vector [1, 3, 2, 12, 3]? If so, then is the state_number -> state_attributes mapping stored in a separate lookup table?
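+
+To illustrate what I mean by a separate lookup table, here is a small made-up sketch:
+
+import numpy as np
+
+# Hypothetical mapping from discrete state index to a vector of state attributes,
+# while the Q table itself stays indexed by (state_index, action_index)
+state_attributes = {
+    0: [0, 0, 1, 4, 2],
+    1: [1, 3, 2, 12, 3],
+    2: [2, 1, 0, 7, 1],
+}
+Q = np.ones((3, 6))   # 3 states x 6 actions
+s, a = 1, 3
+print(Q[s, a], state_attributes[s])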
+"
+"['neural-networks', 'machine-learning', 'convolutional-neural-networks', 'reference-request']"," Title: Given the coordinates of an object in an image, is it possible to predict the coordinates of the same object in a different perspective?Body: I am trying to figure out how to approach this.
+Given training data of images and the pixel coordinates of the centre of an object in that image, would it be possible to predict the pixel coordinates of the object in the same "scene" in a different perspective, but with the object removed?
+"
+"['machine-learning', 'deep-learning', 'training', 'iid']"," Title: If the i.i.d. assumption holds, shouldn't the training and validation trends be exactly the same?Body: If the i.i.d. (independent and identically distributed) assumption holds for a training-validation set pair, shouldn't their loss trends be exactly the same, since every batch from the validation set is equivalent to having a batch from the training set instead?
+
+If the assumption was to be true wouldn't that make any method that was aware of the fact that there were two separate sets (regularization methods such as early stopping) meaningless?
+
+Do we work with the fact that there is a certain degree of wrongness to the assumption or am I interpreting it wrongly?
+
+P.S - The question stems from an observation made on MNIST (where I suppose the i.i.d assumption holds strongly). The training and validation trends (losses and accuracy both) on MNIST were almost exactly identical for any network (convolutional and feedforward) trained using negative log-likelihood, making regularization meaningless.
+"
+"['reinforcement-learning', 'q-learning', 'monte-carlo-methods']"," Title: How does Monte Carlo Exploring Starts work?Body:
+
+I'm having trouble understanding the 5th step in the flowchart.
+
+For the 5th step, the 'update the Q function by taking the average of returns' is confusing.
+
+From what I understand, the Q function is basically the state-action pair values put in a table (the Q table). To update it means to make adjustments to the state-action pair value of the individual states and their respective actions (e.g state 1 action 1, state 3 action 1, state 3 action 2, so on and so forth).
+
+I'm not sure what 'average of returns' means though. Is it asking me to take the average of the returns after $x$ episodes? From my understanding, returns is the sum of rewards in a full episode (So, AVG=sum of returns for x episodes/x).
+
+And what do I do with that 'average'?
+
+I'm a little confused when they say 'update the Q function' because the Q function consists of many parameters that must be updated (the individual state-action pair value), and I'm not sure which one they are referring to.
+
+What is the point of calculating the average of returns, given that the state-action pair value for a particular state and a particular action will always be the same (e.g. if I always take action 3 in state 4, I will always get value=2 forever)?
+"
+"['reinforcement-learning', 'q-learning', 'terminology', 'value-functions']"," Title: Is the Q value the same as the state-action pair value?Body: Am I right to say that the Q value of a particular state and action is the same as the state-action pair value of that same state and action?
+"
+"['convolutional-neural-networks', 'implementation', 'convolution']"," Title: How is the convolution layer is usually implemented in practice?Body: Following an earlier question, I'm interested in understanding the basics of Conv2d
and especially how the kernel is applied, summed, and the propagated.
+I understand that:
+
+
+- a kernel has size W x H, and more than one kernel is applied (e.g., S x W x H) where S is the number of kernels.
+- A stride or step is used when iterating the network input
+- Padding may or may not be used by being added to the network input.
+
+
+What I would ideally like to see is either a description or a python sample (pytorch or tensorflow) of how that is done, what the dimensionality of the output is, and any operation I may be missing (some YouTube videos say that the kernel output is summed and then divided to hold one new unique value representing the feature activation?).
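+
+For reference, this is roughly the kind of sample I am after; the sizes are made up and I am mainly interested in how the output shape comes about:
+
+import torch
+import torch.nn as nn
+
+# Hypothetical layer: 16 kernels (S), each 3 x 3 (W x H), stride 1, padding 1
+conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=1, padding=1)
+
+x = torch.randn(8, 3, 32, 32)   # (batch, channels, height, width)
+y = conv(x)
+print(y.shape)                  # torch.Size([8, 16, 32, 32])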
+"
+"['reinforcement-learning', 'deep-rl', 'rewards']"," Title: What is the best measurement for how good an action of a reinforcement learning agent really is?Body: Even when we get a valuable reward signal after every single action, this immediate reward only approximates the short term goodness of the action.
+
+To consider the long term effect of an action, we can use the return of an episode, the action value function $Q(s,a)$ or the advantage $A(s,a) = Q(s,a) - V(s)$. However, these measures do not rate the action in isolation but take all the following actions until the end of an episode into account.
+
+Are there ways to more precisely approximate how good a single action really is considering its short and long term effects?
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'keras']"," Title: Is using a filter of size (1, x, y) on a 3D convolutional layer the same as using a filter of size (x,y) on a 2D convolutional layer?Body: I'm trying to predict some properties of videos with Keras using the following rough architecture:
+
+
+- Feed each frame through the same 2-D convolutional layer.
+- Take the outputs of this 2-D convolutional layer and feed them through a 3-D convolutional layer.
+
+
+There are more hidden layers, but these are the main ones that matter and are messing with my dimensionality. The input of Conv2D should be (batch_size, height, width, channels). Each movie has dimensionality (number_of_frames, height, width, channels). I first had the idea to neglect batching of movies entirely, and treat the batch size and the number of frames equivalently. Then, Conv2D would output a 4-D tensor, and I would increase its dimensionality to make the output a 5-D tensor that I could input into Conv3D. To do this, Conv3D could only accept inputs of batch size 1.
+
+I decided against this, because I wanted to batch movies. My current thought is to do this:
+
+conv1 = Conv3D(filters=1, kernel_size =(1,12,12), strides=(1,1,1), data_format='channels_last')
+conv2 = Conv3D(filters=1,kernel_size=(10,10,10), strides=(1,1,1), data_format='channels_last')
+
+
+conv1 would represent the 2-D convolutional layer while conv2 would represent the 3-D convolutional layer. Would this idea work? I figure there is the advantage that I can batch now, and when I train the 2-D filter, the same 2-D filter is running over every single movie frame. I'm just worried that the filter in conv1 will fail to go over certain frames, or that it will somehow overlap frames when I want the filter to go over every frame individually.
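+
+As a quick sanity check of the shapes (all sizes made up, assuming the tf.keras API), this is what I would expect the (1, x, y) kernel to do:
+
+import numpy as np
+from tensorflow.keras.layers import Conv3D, Input
+from tensorflow.keras.models import Model
+
+# Hypothetical movie batch: (batch, frames, height, width, channels)
+x = np.random.rand(2, 16, 64, 64, 1).astype('float32')
+
+inp = Input(shape=(16, 64, 64, 1))
+out = Conv3D(filters=1, kernel_size=(1, 12, 12), strides=(1, 1, 1), data_format='channels_last')(inp)
+model = Model(inp, out)
+print(model.predict(x).shape)   # (2, 16, 53, 53, 1): frames untouched, spatial dims reduced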
+"
+"['convergence', 'relu', 'c++', 'sigmoid']"," Title: Neural network doesn't seem to converge with ReLU but it does with Sigmoid?Body: I'm not really sure if this is the sort of question to ask on here, since it is less of a general question about AI and more about the coding of it, however I thought it wouldn't fit on stack overflow.
+
+I have been programming a multilayer perceptron in C++, and it seems to be working with a sigmoid function; however, when I change the activation function to ReLU it does not converge and stays at an average cost of 1 per training example. This is because all of the network's output neurons output a 0.
+
+With the sigmoid function it converges rather nicely, I did a bit of testing and after about 1000 generations it got to an average cost of 0.1 on the first 1000 items in the MNIST dataset.
+
+I will show you the code I changed first for the activation functions, and then I will put the whole block of code in.
+
+Any help would be greatly appreciated!
+
+Sigmoid:
+
+
+
+inline float activation(float num)
+{
+ return 1 / (1 + std::exp(-num));
+}
+
+inline float activation_derivative(float num)
+{
+ return activation(num) * (1 - activation(num));
+}
+
+
+ReLU:
+
+
+
+inline float activation(float num)
+{
+ return std::max(num, 0.0f);
+}
+
+inline float activation_derivative(float num)
+{
+ return num > 0 ? 1.0f : 0.0f;
+}
+
+
+And here's the whole block of code (I collapsed the region of code for benchmarking and the region for creating the dataset):
+
+
+
+#include <iostream>
+#include <fstream>
+#include <vector>
+#include <random>
+#include <chrono>
+#include <cmath>
+#include <string>
+#include <algorithm>
+
+#pragma region benchmarking
+#pragma endregion
+
+class Network
+{
+public:
+ float cost = 0.0f;
+ std::vector<std::vector<std::vector<float>>> weights;
+ std::vector<std::vector<std::vector<float>>> deriv_weights;
+ std::vector<std::vector<float>> biases;
+ std::vector<std::vector<float>> deriv_biases;
+ std::vector<std::vector<float>> activations;
+ std::vector<std::vector<float>> deriv_activations;
+ void clear_deriv_activations()
+ {
+ for (unsigned int i = 0; i < deriv_activations.size(); ++i)
+ {
+ std::fill(deriv_activations[i].begin(), deriv_activations[i].end(), 0.0f);
+ }
+ }
+ int get_memory_usage()
+ {
+ int memory = 4;
+ memory += get_vector_memory_usage(weights);
+ memory += get_vector_memory_usage(deriv_weights);
+ memory += get_vector_memory_usage(biases);
+ memory += get_vector_memory_usage(deriv_biases);
+ memory += get_vector_memory_usage(activations);
+ memory += get_vector_memory_usage(deriv_activations);
+ return memory;
+ }
+};
+
+struct DataSet
+{
+ std::vector<std::vector<float>> training_inputs;
+ std::vector<std::vector<float>> training_answers;
+ std::vector<std::vector<float>> testing_inputs;
+ std::vector<std::vector<float>> testing_answers;
+};
+
+
+Network create_network(std::vector<int> layers)
+{
+ Network network;
+ int layer_count = layers.size() - 1;
+ network.weights.reserve(layer_count);
+ network.deriv_weights.reserve(layer_count);
+ network.biases.reserve(layer_count);
+ network.deriv_biases.reserve(layer_count);
+ network.activations.reserve(layer_count);
+ network.deriv_activations.reserve(layer_count);
+ int nodes_in_prev_layer = layers[0];
+ for (unsigned int i = 0; i < layers.size() - 1; ++i)
+ {
+ int nodes_in_layer = layers[i + 1];
+ network.weights.emplace_back();
+ network.weights[i].reserve(nodes_in_layer);
+ network.deriv_weights.emplace_back();
+ network.deriv_weights[i].reserve(nodes_in_layer);
+ network.biases.emplace_back();
+ network.biases[i].reserve(nodes_in_layer);
+ network.deriv_biases.emplace_back(nodes_in_layer, 0.0f);
+ network.activations.emplace_back(nodes_in_layer, 0.0f);
+ network.deriv_activations.emplace_back(nodes_in_layer, 0.0f);
+ for (int j = 0; j < nodes_in_layer; ++j)
+ {
+ network.weights[i].emplace_back();
+ network.weights[i][j].reserve(nodes_in_prev_layer);
+ network.deriv_weights[i].emplace_back(nodes_in_prev_layer, 0.0f);
+ for (int k = 0; k < nodes_in_prev_layer; ++k)
+ {
+ float input_weight = (2 * (float(std::rand()) / RAND_MAX)) - 1;
+ network.weights[i][j].push_back(input_weight);
+ }
+ float input_bias = (2 * (float(std::rand()) / RAND_MAX)) - 1;
+ network.biases[i].push_back(input_bias);
+ }
+ nodes_in_prev_layer = nodes_in_layer;
+ }
+ return network;
+}
+
+void judge_network(Network &network, const std::vector<float>& correct_answers)
+{
+ int final_layer_index = network.activations.size() - 1;
+ for (unsigned int i = 0; i < network.activations[final_layer_index].size(); ++i)
+ {
+ float val_sq = (network.activations[final_layer_index][i] - correct_answers[i]);
+ network.cost += val_sq * val_sq;
+ }
+}
+
+inline float activation(float num)
+{
+ return std::max(num, 0.0f);
+}
+
+void forward_propogate(Network& network, const std::vector<float>& input)
+{
+ const std::vector<float>* last_layer_activations = &input;
+ int last_layer_node_count = input.size();
+ for (unsigned int i = 0; i < network.weights.size(); ++i)
+ {
+ for (unsigned int j = 0; j < network.weights[i].size(); ++j)
+ {
+ float total = network.biases[i][j];
+ for (int k = 0; k < last_layer_node_count; ++k)
+ {
+ total += (*last_layer_activations)[k] * network.weights[i][j][k];
+ }
+ network.activations[i][j] = activation(total);
+ }
+ last_layer_activations = &network.activations[i];
+ last_layer_node_count = network.weights[i].size();
+ }
+}
+
+void final_layer_deriv_activations(Network& network, const std::vector<float>& correct_answers)
+{
+ int final_layer_index = network.activations.size() - 1;
+ int final_layer_node_count = network.activations[final_layer_index].size();
+ for (int i = 0; i < final_layer_node_count; ++i)
+ {
+ float deriv = network.activations[final_layer_index][i] - correct_answers[i];
+ network.deriv_activations[final_layer_index][i] = deriv * 2;
+ }
+}
+
+inline float activation_derivative(float num)
+{
+ return num > 0 ? 1.0f : 0.0f;
+}
+
+void back_propogate_layer(Network& network, int layer)
+{
+ int nodes_in_layer = network.activations[layer].size();
+ int nodes_in_prev_layer = network.activations[layer - 1].size();
+ for (int i = 0; i < nodes_in_layer; ++i)
+ {
+ float total = network.biases[layer][i];
+ for (int j = 0; j < nodes_in_prev_layer; ++j)
+ {
+ total += network.weights[layer][i][j] * network.activations[layer - 1][j];
+ }
+ float dzda = activation_derivative(total);
+ float dzdc = dzda * network.deriv_activations[layer][i];
+ for (int j = 0; j < nodes_in_prev_layer; ++j)
+ {
+ network.deriv_weights[layer][i][j] += network.activations[layer - 1][j] * dzdc;
+ network.deriv_activations[layer - 1][j] += network.weights[layer][i][j] * dzdc;
+ }
+ network.deriv_biases[layer][i] += dzdc;
+ }
+}
+
+void back_propogate_first_layer(Network& network, std::vector<float> inputs)
+{
+ int nodes_in_layer = network.activations[0].size();
+ int input_count = inputs.size();
+ for (int i = 0; i < nodes_in_layer; ++i)
+ {
+ float total = network.biases[0][i];
+ for (int j = 0; j < input_count; ++j)
+ {
+ total += network.weights[0][i][j] * inputs[j];
+ }
+ float dzda = activation_derivative(total);
+ float dzdc = dzda * network.deriv_activations[0][i];
+ for (int j = 0; j < input_count; ++j)
+ {
+ network.deriv_weights[0][i][j] += inputs[j] * dzdc;
+ }
+ network.deriv_biases[0][i] += dzdc;
+ }
+}
+
+void back_propogate(Network& network, const std::vector<float>& inputs, const std::vector<float>& correct_answers)
+{
+ network.clear_deriv_activations();
+ final_layer_deriv_activations(network, correct_answers);
+ for (int i = network.activations.size() - 1; i > 0; --i)
+ {
+ back_propogate_layer(network, i);
+ }
+ back_propogate_first_layer(network, inputs);
+}
+
+void apply_derivatives(Network& network, int training_example_count)
+{
+ for (unsigned int i = 0; i < network.weights.size(); ++i)
+ {
+ for (unsigned int j = 0; j < network.weights[i].size(); ++j)
+ {
+ for (unsigned int k = 0; k < network.weights[i][j].size(); ++k)
+ {
+ network.weights[i][j][k] -= network.deriv_weights[i][j][k] / training_example_count;
+ network.deriv_weights[i][j][k] = 0;
+ }
+ network.biases[i][j] -= network.deriv_biases[i][j] / training_example_count;
+ network.deriv_biases[i][j] = 0;
+ network.deriv_activations[i][j] = 0;
+ }
+ }
+}
+
+void training_iteration(Network& network, const DataSet& data)
+{
+ int training_example_count = data.training_inputs.size();
+ for (int i = 0; i < training_example_count; ++i)
+ {
+ forward_propogate(network, data.training_inputs[i]);
+ judge_network(network, data.training_answers[i]);
+ back_propogate(network, data.training_inputs[i], data.training_answers[i]);
+ }
+ apply_derivatives(network, training_example_count);
+}
+
+void train_network(Network& network, const DataSet& dataset, int training_iterations)
+{
+ for (int i = 0; i < training_iterations; ++i)
+ {
+ training_iteration(network, dataset);
+ std::cout << ""Generation "" << i << "": "" << network.cost << std::endl;
+ network.cost = 0.0f;
+ }
+}
+
+#pragma region dataset creation
+
+#pragma endregion
+
+int main()
+{
+ Timer timer;
+ DataSet dataset = create_dataset_from_file(""data.txt"");
+ Network network = create_network({784, 128, 10});
+ train_network(network, dataset, 1000);
+ std::cout << timer.get_duration() << std::endl;
+ std::cin.get();
+}
+
+"
+"['reinforcement-learning', 'q-learning', 'policies', 'value-functions']"," Title: Why does the policy $\pi$ affect the Q value?Body: From my understanding, the policy $\pi$ is basically how the agent acts (i.e. the actions it will take in each state).
+
+However, I am confused about the Q value and how it is ""affected"" by a policy. This answer says
+
+
+ $Q^\pi(s, a)$ is the action-value function. It is the expected return starting from state $s$, following policy $\pi$, taking action $a$. It's focusing on the particular action at the particular state.
+
+
+From this, I infer that the $Q$ value (the action-value function) will be affected by the policy $\pi$. Why? So, why does the Q value change according to policy $\pi$?
+
+Shouldn't the Q value be constant, because the same action taken in the same state will always give the same yield (and hence remain constantly good/bad)?
+
+All the policy does is find out the max Q values and bases its policy on that information.
+"
+"['reinforcement-learning', 'q-learning', 'terminology', 'policies', 'return']"," Title: Is my understanding of the value function, Q function, policy, reward and return correct?Body: I'm a beginner in the RL field, and I would like to check that my understanding of certain RL concepts.
+
+Value function: How good it is to be in a state S following policy π.
+
+
+So, the value functions here are 0.3 and 0.9
+
+
+Q function(also called state-action value, or just action value): How good it is to be in a state S and perform action A while following policy π. It uses reward to measure the state-action value
+
+
+
+So, the state-action values here are 0.03,0.02,0.5 and 0.9
+
+
+Q value: The overall expected reward after performing action A in state S, and continuing with policy π until the end of the episode. So, essentially, I can only calculate the Q value if I know all the state-action values of the actions I will be taking in the single episode (because the Q value takes into account the actions after the current action A, till the end of the episode, following policy π).
+
+Reward: The metric used to tell the agent how good/bad its action was. It is a constant value.
+For example:
+
+ 1. Fall in pond --> -1
+ 2. On stone path --> +1
+ 3. Reach home--> +10
+
+
+Return: The sum of rewards in a single episode
+
+Policy π: A set of specific instructions an agent will follow in an episode. For example, the policy will look like:
+
+In state 1, take action 3 ( which takes me to state 2)
+
+In state 2, take action 2 ( which takes me to state 3)
+
+In state 3, take action 1 ( Which takes me to state 4)
+
+In state 4, take action 2 ( Which takes me to terminal state)
+
+1 episode completed
+
+
+And my policy will keep updating each episode to get the best return
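+
+Just to check my understanding, a deterministic policy like the one above could be written as a simple mapping from state to action (state and action numbers made up):
+
+policy = {1: 3, 2: 2, 3: 1, 4: 2}
+action = policy[1]   # in state 1, take action 3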
+"
+"['search', 'minimax', 'alpha-beta-pruning']"," Title: How can I apply the alpha-beta pruning algorithm to the ""1-2 steal marbles"" problem?Body: I have the following problem called ""1-2 steal marbles"".
+
+Initially, there are 6 marbles on the board. One of the players can choose to remove 1 or 2 marbles, leaving 5 or 4. After that, the other player can do the same, again choosing to take 1 or 2 marbles from the board. The process continues until there is only one marble on the board. The player who wins is the one that leaves the last marble on the board. (For example: if there are 3 marbles and it's my turn, then I will choose to remove 2 to leave one on the board and win.)
+
+How can I draw the search tree that represents the application of the alpha-beta pruning to this ""1-2 steal marbles"" with 13 marbles? I would like to see the maximizer and minimizer nodes and the value at the nodes too.
+"
+"['machine-learning', 'classification', 'supervised-learning', 'computational-learning-theory', 'vc-dimension']"," Title: An infinite VC dimensional space vs using hierarchical subspaces of finite but growing VC dimensionsBody: I have the following scenario. I have a binary classification problem, whose underlying function is a step function. The probability distribution of feature vectors is a uniform over the domain.
+
+Case 1: I have a classifier which fits the training samples perfectly, no matter what the size of the data. The space of functions $H$ has an infinite VC dimension. As the number of data points goes to infinity, the hypothesized function converges pointwise to the underlying step function.
+
+Case 2: Here I have divided the same hypothesis space into a number of hierarchical subspaces $H_1 \subset H_2 \subset H_3 \subset \dots \subset H_n$ ($n$ goes to infinity). The VC dimension of each of the spaces is finite and grows with $n$ to infinity. Now, given any data of $n$ points, I compute the minimum VC dimension required to fit the data exactly, say $d_n$, and use that space $H_{d_n}$ as the hypothesis class. I do the same as the data size $n$ goes to infinity, at each $n$ using the hypothesis space with just enough VC dimension to fit the data. In this approach also, as the data size goes to infinity, the hypothesized function converges pointwise to the underlying step function.
+
+Is there a difference between these two approaches to the same problem? Is there any theoretical difference? Is either method better than the other, in some sense?
+"
+"['neural-networks', 'deep-learning', 'backpropagation', 'gradient-descent', 'batch-normalization']"," Title: Why do we update all layers simultaneously while training a neural network?Body:
+ Very deep models involve the composition of several functions or layers. The gradient tells how to update each parameter, under the assumption that the other layers do not change. In practice, we update all of the layers simultaneously.
+
+
+The above is an extract from Ian Goodfellow's Deep Learning - which talks about the need for batch normalization.
+
+Why do we update all the layers simultaneously? Instead, if we update layers one at a time during backpropagation - it will eliminate the need for batch normalization, right?
+
+Reference: A Gentle Introduction to Batch Normalization for Deep Neural Networks
+
+P.S. The attached link says: Because all layers are changed during an update, the update procedure is forever chasing a moving target. Apart from the main question, it would be great if someone could explain why exactly a moving target is being referred to in the above sentence.
+"
+"['reinforcement-learning', 'reference-request', 'proofs', 'bellman-equations', 'policy-evaluation']"," Title: What is the proof that policy evaluation converges to the optimal solution?Body: Although I know how the algorithm of iterative policy evaluation using dynamic programming works, I am having a hard time realizing how it actually converges.
+
+It appeals to intuition that, with each iteration, we get a better and better approximation for the value function and we can thus assure its convergence, but with this said simply, it seems that this method is very inefficient contrary to the reality that it actually is quite efficient.
+
+What is the rigorous mathematical proof of the convergence of the policy evaluation algorithm to the actual answer? How is it that the value function obtained this way is close to the actual values computed by solving the set of bellman equations?
+"
+"['machine-learning', 'deep-learning', 'comparison', 'academia', 'education']"," Title: What are the pros and cons of studying machine learning before deep learning?Body: I'm a biotech student and I'm currently working on single-particle tracking. For my work, I need to use aspects of deep learning (CNN, RNN and object segmentation) but I'm not familiar with these topics. I have some prior knowledge in python.
+
+So, do I have to learn machine learning first before going into deep learning, or can I skip ML?
+
+What are the pros and cons of studying machine learning before deep learning?
+"
+"['search', 'breadth-first-search']"," Title: 2 Partition ProblemBody: I want to solve the two partition problem (https://en.wikipedia.org/wiki/Partition_problem) using an uninformed search algorithm (BFS or uniform cost).
+
+The states can be represented by three sets S1, S2, S, where the set S contains the unassigned values and S1 and S2 contain the values assigned to each partition, respectively. In the initial state, S contains all values and S1 and S2 are empty. The actions consist in moving a value from S to S1 or S2. The objective is to find a complete assignment (S is empty) where abs(sum(S1)-sum(S2)) is minimum. As you can see, I have all the elements except the cost of the actions.
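+
+To make the state representation concrete, here is a small sketch of the states and the two actions per value that I have in mind (the numbers are made up):
+
+# A state is a triple (S1, S2, S) of tuples; the actions move the next unassigned value
+def successors(state):
+    s1, s2, s = state
+    if not s:
+        return []
+    v, rest = s[0], s[1:]
+    return [(s1 + (v,), s2, rest),    # move v to S1
+            (s1, s2 + (v,), rest)]    # move v to S2
+
+start = ((), (), (4, 7, 1, 3))
+print(successors(start))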
+
+
+- How can I assign costs to the actions in order to apply one of those algorithms? (Costs must be positive.)
+
+
+I know it is not the best way to solve this problem but there must be a way to do it because the problem is formulated this way in the book.
+"
+"['reinforcement-learning', 'policies', 'value-functions', 'value-iteration']"," Title: Why do I need an initial arbitrary policy to implement value iteration algorithmBody: I've been recently given an assignment based on Reinforcement Learning and I'm supposed to implement the value iteration algorithm in a grid environment.
+
+The assignment:
+
+
+
+
+
+My doubt is: why do I even need an initial arbitrary policy, as given in the parameters (in the assignment), to implement the value iteration algorithm? If the policy is not actually needed, then the change in the values of a and b shouldn't affect the algorithm. Am I correct in my thinking about it?
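+
+For reference, this is the kind of value-iteration update I have in mind, which only needs an initial value function rather than an initial policy (toy 2-state MDP, all numbers made up):
+
+states, actions, gamma = [0, 1], [0, 1], 0.9
+P = {0: {0: {0: 1.0, 1: 0.0}, 1: {0: 0.2, 1: 0.8}},
+     1: {0: {0: 0.0, 1: 1.0}, 1: {0: 0.5, 1: 0.5}}}   # P[s][a][s2]
+R = {0: {0: 0.0, 1: 1.0}, 1: {0: 0.0, 1: 2.0}}        # R[s][a]
+V = {s: 0.0 for s in states}
+for _ in range(50):
+    V = {s: max(R[s][a] + gamma * sum(P[s][a][s2] * V[s2] for s2 in states)
+                for a in actions) for s in states}
+print(V)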
+"
+"['reinforcement-learning', 'natural-language-processing', 'generative-adversarial-networks', 'rewards', 'text-generation']"," Title: How should I design a reward function for a NLP problem where two models interoperate?Body: I would like to design a reward function. I am training two models from the first model that classify set of texts (paragraphs and keywords) and I also got some hidden states. The second model is trying to generate keywords for those paragraphs.
+
+I want to use those hidden states from the first model to give rewards for the key phrases that are generated by the second model. I want to know how I can implement this reward function, since I have never used one before.
+"
+"['reinforcement-learning', 'q-learning', 'markov-decision-process', 'semi-mdp']"," Title: Relationship between the reward rate and the sampled reward in a Semi-Markov Decision ProcessBody: In the paper: Reinforcement learning methods for continuous-time Markov decision problems, the authors provide the following update rule for the Q-learning algorithm, when applied to Semi-Markov Decision Processes (SMDPs):
+$Q^{(k+1)}(x,a) = Q^{(k)}(x,a) + \alpha_k [ \frac{1-e^{-\beta \tau}}{\beta}r(x,y,a) + e^{-\beta \tau} \max_{a'} Q^{(k)}(y,a') - Q^{(k)}(x,a) ] $
+where $\alpha_k$ is the learning rate, $\beta$ is the continuous time discount factor and $\tau$ is the time taken to transition from state $x$ to state $y$.
+It is not clear to me what is the relationship between the sampled reward $r(x,y,a)$ and the reward rate $\rho(x,a)$ specified in the objective function $\mathbb{E}[ \int_{0}^{\infty} e^{-\beta t}\rho(x(t),a(t)) dt ]$.
+In particular, how do they determine $r(x,y,a)$ in the experiments in Section 6? In this experiment, they consider a routing problem in an M/M/2 queuing system, where the reward rate is:
+$c_1 n_1(t) + c_2 n_2(t)$. $c_1$ and $c_2$ are scalar cost factors and $n_1(t)$ and $n_2(t)$ are the number of customers in queue 1 and 2, respectively.
+"
+"['natural-language-processing', 'text-classification']"," Title: Top Frequent occurrence word effect in Model Efficiency?Body: Assume that I have a Dataframe with the text column.
+Problem: Classification / Prediction
+ sms_text
+0 Go until jurong point, crazy.. Available only ...
+1 Ok lar... Joking wif u oni...
+2 Free entry in 2 a wkly comp to win FA Cup fina...
+3 U dun say so early hor... U c already then say...
+4 Nah I don't think he goes to usf, he lives aro...
+
+After preprocessing the text
+
+From the above WordCloud, we can find the most frequently occurring words, like
+Free
+Call
+Text
+Txt
+
+As these are the most frequent words, they carry less importance for prediction/classification, since they appear a lot. (My opinion.)
+
+My questions are:
+Will removing the most frequent (most occurring) words improve the model score?
+How does this impact model performance?
+Is it OK to remove the most frequently occurring words?
+
+"
+"['training', 'dropout']"," Title: How to make binary neural networks resilient to flipped activation values?Body: Assume I am given a binary neural network where the activation values are constrained to be 0 or 1 (by clipping the ReLU function). Additionally, assume the neural network is supposed to work in a noisy environment where some of the activation values may be randomly flipped, i.e. 0 -> 1 or 1 -> 0.
+
+I am trying to train such a neural network in a way that makes it resilient to the noisy environment. I assume training with dropout would make the neural network somewhat resilient to noise where a one is flipped to a zero (1 -> 0).
+
+What are some ways that allow me to make the neural network resilient to the other kind of noise which flips zeros to ones (0 -> 1)? Is it theoretically valid to introduce a dropout-like algorithm which flips some zeros to ones during training but does not backpropagate the gradients through those flipped nodes?
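+
+For concreteness, here is a rough PyTorch sketch of the kind of training-time flipping layer I have in mind (flipping in both directions here; how to treat the gradients of the flipped entries is exactly the part I am unsure about):
+
+import torch
+
+def flip_noise(x, p=0.05):
+    # x is assumed to hold 0/1 activations; flip each entry with probability p (both 0 -> 1 and 1 -> 0)
+    flip = (torch.rand_like(x) < p).float()
+    return x * (1 - flip) + (1 - x) * flip
+
+a = torch.tensor([0., 1., 1., 0., 0.])
+print(flip_noise(a, p=0.5))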
+"
+"['computational-learning-theory', 'vc-dimension', 'vc-theory', 'sample-complexity', 'hypothesis-class']"," Title: How does size of the dataset depend on VC dimension of the hypothesis class?Body: This might be a little broad question, but I have been watching Caltech youtube videos on Machine Learning, and in this video prof. is trying to explain how we should interpret the VC dimension in terms of what it means in layman terms, and why do we need it in practice.
+The first part I think I understand, please correct me if I am wrong. VC Dimension dictates the number of effective parameters (i.e. degrees of freedom) that model has. In other words, the number of parameters the model needs in order to cover all possible label combinations for the chosen dataset. Now, the second part is not clear to me. The professor is trying to answer the question:
+
+How does knowing the VC dimension of the hypothesis class affect number of samples we need for training?
+
+Again, I apologize if all of this may be trivial, but I am new to the field and wish to learn as much as I can, so I can implement better and more efficient programs in practice.
+"
+"['neural-networks', 'weights']"," Title: Is having binary randomized unchanging neural network weights a good idea?Body: I am creating a neural network to experiment with, and I was wondering:
+
+
+- If I have weights randomly initialized to be either 1 or 0 for each neuron, and then I made it so that the weights cannot be changed, would that ruin the neural network? What would happen?
+
+
+Note: There is no bias in this network.
+"
+"['convolutional-neural-networks', 'keras', 'architecture']"," Title: Heavily mixing signal differentiation from Open Set of backgrounds via CNNBody: I am currently attempting to detect a signal from background noise. The signal is pretty well known but the background has a lot of variability. I've since come to know this problem as Open Set Recognition. Another complicating factor is that the signal mixes with the background noise (think equivalent to a transparent piece of glass in-front of scenery for a picture, or picking out the sound of a pin drop in an office space).
+When I started this project, it seemed like the current state of the art in this space was generating Spectrograms and feeding them to a CNN and this is the path I've followed. I'm at a place where I think I've overcome most of the initial problems you might encounter but I'm still not getting good enough results for a project solution.
+Here's the overall steps I've gone through:
+
+- Generate 17000 ground truth "signals" and 17000 backgrounds (negatives or other classes depending on what nn scheme I'm training)
+
+- Generate separate test samples (not training samples but external model validation samples: "blind test") where I take the backgrounds and randomly overlay the signal into it at various intensities.
+
+- My first attempt was with a pre-built library training solution (ImageAI) with a resnet50 base model. This solution is a multiclass classifier, so I had 400 each of the signal + 5 other classes that were the background. It did not work well at classifying the signal. I don't think I ever got this off the ground for two reasons: a) my spectrogram pictures were not optimised (way too large) and b) I couldn't adjust the image input shape via the library. It mostly just ended up classifying one background class.
+
+- I then started building my own neural nets. The first reason to make sure my spectrogram input shape was matched in the input shape of the CNN. The second reason was to test various neural net schemes to see what worked best.
+
+- The first net I built was a simple feed forward net with a couple of dense layers. This trains to .9998 val_acc. It (like the rest of what I try) produces poor results on my blind tests, in the range of 60% true positive.
+
def build(width, height, depth, classes):
+ # initialize the model along with the input shape to be
+ # "channels last" and the channels dimension itself
+ model = Sequential()
+ inputShape = (height, width, depth)
+ chanDim = -1
+
+ # if we are using "channels first", update the input shape
+ # and channels dimension
+ if K.image_data_format() == "channels_first":
+ inputShape = (depth, height, width)
+ chanDim = 1
+ model.add(Flatten())
+ model.add(Dense(512, input_shape=(inputShape),activation="relu"))
+ model.add(Dense(128, activation="relu"))
+ model.add(Dense(32, activation="relu"))
+ # sigmoid classifier
+ model.add(Dense(classes))
+ model.add(Activation("sigmoid"))
+
+ # return the constructed network architecture
+ return model
+
+
+- I then try a "VGG Light" model. Again, trains to .9999 but gives me only 62% true positive results on my blind tests
+
def build(width, height, depth, classes):
+ # initialize the model along with the input shape to be
+ # "channels last" and the channels dimension itself
+ model = Sequential()
+ inputShape = (height, width, depth)
+ chanDim = -1
+
+ # if we are using "channels first", update the input shape
+ # and channels dimension
+ if K.image_data_format() == "channels_first":
+ inputShape = (depth, height, width)
+ chanDim = 1
+
+ # CONV => RELU => POOL
+ model.add(Conv2D(32, (3, 3), padding="same", input_shape=inputShape))
+ model.add(Activation("relu"))
+ model.add(BatchNormalization(axis=chanDim))
+ model.add(MaxPooling2D(pool_size=(3, 3)))
+ model.add(Dropout(0.25))
+
+ # (CONV => RELU) * 2 => POOL
+ model.add(Conv2D(64, (3, 3), padding="same"))
+ model.add(Activation("relu"))
+ model.add(BatchNormalization(axis=chanDim))
+ model.add(Conv2D(64, (3, 3), padding="same"))
+ model.add(Activation("relu"))
+ model.add(BatchNormalization(axis=chanDim))
+ model.add(MaxPooling2D(pool_size=(2, 2)))
+ model.add(Dropout(0.25))
+
+ # (CONV => RELU) * 2 => POOL
+ model.add(Conv2D(128, (3, 3), padding="same"))
+ model.add(Activation("relu"))
+ model.add(BatchNormalization(axis=chanDim))
+ model.add(Conv2D(128, (3, 3), padding="same"))
+ model.add(Activation("relu"))
+ model.add(BatchNormalization(axis=chanDim))
+ model.add(MaxPooling2D(pool_size=(2, 2)))
+ model.add(Dropout(0.25))
+ model.add(GaussianNoise(.05))
+
+ # first (and only) set of FC => RELU layers
+ model.add(Flatten())
+ model.add(Dense(1024))
+ model.add(Activation("relu"))
+ model.add(BatchNormalization())
+ model.add(Dropout(0.5))
+ model.add(Dense(512))
+ model.add(Activation("relu"))
+ model.add(BatchNormalization())
+ model.add(Dropout(.5))
+ model.add(Dense(128))
+ model.add(Activation("relu"))
+ model.add(BatchNormalization())
+ model.add(GaussianDropout(0.5))
+
+ # sigmoid classifier
+ model.add(Dense(classes))
+ model.add(Activation("sigmoid"))
+
+ # return the constructed network architecture
+ return model
+
+
+- I then try a "full VGG" net. This again trains to .9999 but only a blind test true positive result of 63%.
+
def build(width, height, depth, classes):
+ # initialize the model along with the input shape to be
+ # "channels last" and the channels dimension itself
+ model = Sequential()
+ inputShape = (height, width, depth)
+ chanDim = -1
+
+ # if we are using "channels first", update the input shape
+ # and channels dimension
+ if K.image_data_format() == "channels_first":
+ inputShape = (depth, height, width)
+ chanDim = 1
+
+ #CONV => RELU => POOL
+ model.add(Conv2D(64, (3, 3), padding="same", input_shape=inputShape))
+ model.add(Activation("relu"))
+ model.add(BatchNormalization(axis=chanDim))
+ model.add(MaxPooling2D(pool_size=(3, 3)))
+ #model.add(Dropout(0.25))
+
+ # (CONV => RELU) * 2 => POOL
+ model.add(Conv2D(128, (3, 3), padding="same"))
+ model.add(Activation("relu"))
+ model.add(BatchNormalization(axis=chanDim))
+ model.add(Conv2D(128, (3, 3), padding="same"))
+ model.add(Activation("relu"))
+ model.add(BatchNormalization(axis=chanDim))
+ model.add(MaxPooling2D(pool_size=(2, 2)))
+ #model.add(Dropout(0.25))
+
+ # (CONV => RELU) * 2 => POOL
+ model.add(Conv2D(256, (3, 3), padding="same"))
+ model.add(Activation("relu"))
+ model.add(BatchNormalization(axis=chanDim))
+ model.add(Conv2D(256, (3, 3), padding="same"))
+ model.add(Activation("relu"))
+ model.add(BatchNormalization(axis=chanDim))
+ model.add(MaxPooling2D(pool_size=(2, 2)))
+ #model.add(Dropout(0.25))
+
+ # (CONV => RELU) * 2 => POOL
+ model.add(Conv2D(512, (3, 3), padding="same"))
+ model.add(Activation("relu"))
+ model.add(BatchNormalization(axis=chanDim))
+ model.add(Conv2D(512, (3, 3), padding="same"))
+ model.add(Activation("relu"))
+ model.add(BatchNormalization(axis=chanDim))
+ model.add(MaxPooling2D(pool_size=(2, 2)))
+ #model.add(Dropout(0.25))
+
+ # (CONV => RELU) * 2 => POOL
+ model.add(Conv2D(1024, (3, 3), padding="same"))
+ model.add(Activation("relu"))
+ model.add(BatchNormalization(axis=chanDim))
+ model.add(Conv2D(1024, (3, 3), padding="same"))
+ model.add(Activation("relu"))
+ model.add(BatchNormalization(axis=chanDim))
+ model.add(MaxPooling2D(pool_size=(2, 2)))
+ #model.add(Dropout(0.25))
+ model.add(GaussianNoise(.1))
+
+ # first (and only) set of FC => RELU layers
+ model.add(Flatten())
+ model.add(Dense(8192))
+ model.add(Activation("relu"))
+ model.add(BatchNormalization())
+ model.add(Dropout(0.5))
+ model.add(Dense(4096))
+ model.add(Activation("relu"))
+ model.add(BatchNormalization())
+ model.add(Dropout(0.5))
+ model.add(Dense(1024))
+ model.add(Activation("relu"))
+ model.add(BatchNormalization())
+ model.add(GaussianDropout(0.5))
+
+ # sigmoid classifier
+ model.add(Dense(classes))
+ model.add(Activation("sigmoid"))
+
+ # return the constructed network architecture
+ return model
+
+
+- All of the above are binary_crossentropy trained in keras.
+
+- I've tried multi-class with these models as well but when testing them on the blind test they usually pick the background rather than the signal.
+
+- I've also messed around with Autoencoders to try and get the encoder to rebuild the signal well and then compare to known results but haven't been successful yet though I'd be willing to give it another try if everyone thought that might produce better results.
+
+- In the beginning I ran into unbalanced classification problems (I was noob) but under all the models shown above the classes all have the same number of samples.
+
+
+I'm at the point where the larger VGG models trained on 34,000 samples are taking days, and I don't see any better results than a basic feed-forward NN that takes 4 minutes to train.
+Does anyone see the path forward here?
+"
+"['convolutional-neural-networks', 'generative-adversarial-networks']"," Title: How would semantic segmentation work with a non convolutional neural networkBody: Listening to lectures, convolutional neural network seems to be an improvement over a simple neural network, where for example, you take every pixel in the image, flatten it to a vector, and feed it to ANN with a couple layers. Therefore semantic segmentation should be possible to perform on a classic ANN.
+
+I don't understand exactly how a CNN can classify each pixel in the image to do semantic segmentation.
+
+The way semantic segmentation is explained in lectures is that the output of the CNN is fed back into it backwards. Is that the same as with a GAN?
+
+If the output of a CNN is a value between 0 and 1 for each class, how exactly can those values be fed back through a CNN backwards to classify each pixel in the image? Through back propagation?
+
+My understanding is that it should be possible to do the same with the regular ANN described above.
+Can someone explain why it would or wouldn't be possible, and how semantic segmentation feeds the output back through the network to classify each pixel?
+
+Thanks,
+"
+"['research', 'papers']"," Title: Is the paper ""Reducing the Dimensionality of Data with Neural Networks"" by Hinton relevant?Body: Is the paper ""Reducing the Dimensionality of Data with Neural Networks"" by G. Hinton and R. Salakhutdinov relevant?
+
+It seems that the deep learning textbook by Goodfellow, Bengio & Courville (2016) doesn't cite that paper.
+
+Does that indicate that paper is not as important as others to Deep learning? If yes, I would skip this one to accelerate my process of learning.
+"
+"['reinforcement-learning', 'python', 'dqn', 'pytorch']"," Title: Replace epsilon greedy action selection and the standard DQN by an Independent Gaussian Noise Network ModelBody: Here is my code
+
+Recently, I solved the game of Atari Breakout using a classic DQN model. The convergence of the mean reward slowly improved over three days. I was interested in learning a method which may help me improve the convergence speed. I found the following article: https://arxiv.org/pdf/1706.10295v3.pdf. It says I can use Independent Gaussian Noise to outperform a standard DQN.
+
+Here is my Noisy DQN model :
+
+import math
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import numpy as np
+
+
+class NoisyLinear(nn.Linear): #Independent Gaussian Noise used with NoisyDQN model
+ def __init__(self, in_features, out_features, sigma_init=0.017, bias=True):
+ super(NoisyLinear, self).__init__(in_features, out_features, bias=bias)
+
+ self.sigma_weight = nn.Parameter(torch.full((out_features, in_features), sigma_init))
+ self.register_buffer(""epsilon_weight"", torch.zeros(out_features, in_features))
+
+ if bias:
+ self.sigma_bias = nn.Parameter(torch.full((out_features,), sigma_init))
+ self.register_buffer(""epsilon_bias"", torch.zeros(out_features))
+
+ self.reset_parameters()
+
+ def reset_parameters(self):
+ std = math.sqrt(3/self.in_features)
+ self.weight.data.uniform_(-std, std)
+ self.bias.data.uniform_(-std, std)
+
+ def forward(self, input):
+ self.epsilon_weight.normal_()
+ bias = self.bias
+ if bias is not None:
+ self.epsilon_bias.normal_()
+ bias = bias + self.sigma_bias * self.epsilon_bias.data
+ return F.linear(input, self.weight + self.sigma_weight * self.epsilon_weight.data, bias)
+
+
+class NoisyDQN(nn.Module):
+ """"""
+ Look at https://arxiv.org/pdf/1706.10295v3.pdf
+ """"""
+
+ def __init__(self, input_shape, num_actions):
+ super(NoisyDQN, self).__init__()
+
+ self.conv = nn.Sequential(
+ nn.Conv2d(in_channels=input_shape[0], out_channels=32, kernel_size=8, stride=4),
+ nn.ReLU(),
+ nn.Conv2d(in_channels=32, out_channels=64, kernel_size=4, stride=2),
+ nn.ReLU(),
+ nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1),
+ nn.ReLU(),
+ )
+
+ self.conv_output_size = self.get_output_size(input_shape)
+
+ self.linear = nn.Sequential(
+ NoisyLinear(in_features=self.conv_output_size, out_features=512),
+ nn.ReLU(),
+ NoisyLinear(in_features=512, out_features=num_actions)
+ )
+
+ def get_output_size(self, input_shape):
+ output = self.conv(torch.zeros((1, *input_shape)))
+ return int(np.prod(output.shape))
+
+ def forward(self, input):
+ self.layer1 = self.conv(input)
+ self.layer1 = self.layer1.reshape(-1, self.conv_output_size)
+
+ return self.linear(self.layer1)
+
+
+The idea is to replace the epsilon greedy action selection and my standard DQN model by the Noisy Network you can see just above.
+
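+For clarity, during training I now pick actions greedily from the noisy network instead of using an epsilon schedule. Roughly like this (simplified, variable names are mine):
+
+state_t = torch.tensor(np.array([state], copy=False)).float()
+q_values = net(state_t)                                # the NoisyLinear layers resample their noise here
+action = int(torch.argmax(q_values, dim=1).item())
+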
+The code runs successfully, but the agent doesn't improve even a bit. How can I fix that?
+
+UPDATE
+
+
+
+After nearly 200k episodes, I am still between 1.5 and 2 reward. The maximum reward I can get on the Atari Breakout game is about 500. With a standard DQN, after 100k episodes, I am near 11 as reward.
+
+On the above picture the X-axis is the number of episodes and the Y-axis is the mean reward over the last 100 episodes. The Y-axis is described by
+mean_reward = np.mean(self.total_rewards[-100:])
+
+UPDATE
+
+After about 8 hours of training, I got this
+
+
+
+As you can see, it is not working as well as in the paper. I experimented a lot with the hyperparameters, but nothing changed.
+"
+"['machine-learning', 'natural-language-processing', 'unsupervised-learning', 'clustering']"," Title: Could clustering be used to parse pdf documents to get headings and titles?Body: I'm a bit new to AI and I'd like to use some kind of clustering algorithm to solve a problem:
+
+I'm trying to parse pdf documents to get headings and titles. I can parse pdf to html and I'm then able to get some information on the lines of the document. I've identified some properties that can be useful for identifying the headings.
+
+
+- font-size (int): of course it's quite usual that heading's font-size is bigger than normal text
+- font-family (string): it's possible for headings to be bold so font-family may differ
+- left property (int): it's also possible that headings are aligned a bit to the right, there's an indentation that's not always there on normal paragraphs
+- bonus boolean: I have identified some properties that I can combine to get a boolean value. When the boolean is set to true it can increase the chances of the paragraph being a heading.
+
+
+Of course, these are not rules that apply to all headings. Some headings may follow some of these but not all of them. It could also be possible that some 'normal' paragraphs follow all these points, but what I've seen is that, in general, those rules were what made headings different from paragraphs.
+
+With this information, is there a way of doing what I'm looking for? As I said, I'm new to AI even though I have a background in CS and mathematics. I thought clustering could be interesting since I'm trying to create 2 clusters: headings and normal paragraphs.
+
+What algorithm do you think might work for this use case? Should I look outside clustering?
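+
+To make the setup concrete, a minimal sketch of the kind of clustering I have in mind (feature names are only illustrative, and I have not verified this is the right approach):
+
+import numpy as np
+from sklearn.preprocessing import StandardScaler
+from sklearn.cluster import KMeans
+
+# one row per parsed line: [font_size, left_indent, is_bold, bonus_flag]
+lines = np.array([
+    [18.0, 40.0, 1, 1],
+    [11.0, 20.0, 0, 0],
+    [11.0, 20.0, 0, 0],
+    [16.0, 40.0, 1, 0],
+])
+
+features = StandardScaler().fit_transform(lines)    # put the features on a common scale
+labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
+print(labels)   # hoping one cluster corresponds to headings, the other to normal paragraphs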
+"
+"['neural-networks', 'machine-learning', 'keras']"," Title: Why does this model have 12 parameters?Body: I guess the model shown in this image (img_1)
+
+
+
+is the same as the one in this image (img_2)
+
+
+
+I was trying to build a neural net like that.
+
+This keras code is to do the job.
+
+model = Sequential()
+model.add(Dense(3, input_dim=3, activation='relu'))
+model.add(Dense(1, activation='sigmoid'))
+plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)
+
+
+However, print(model.summary()) outputs
+
+Model: ""sequential_17""
+_________________________________________________________________
+Layer (type) Output Shape Param #
+=================================================================
+dense_31 (Dense) (None, 3) 12
+_________________________________________________________________
+dense_32 (Dense) (None, 1) 4
+=================================================================
+
+
+There are 3 w's and 1 b in the hidden layer. Why does this model have 12 parameters?
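+
+To make the question concrete, this is how I inspect the parameters per layer (just a diagnostic I ran, not an answer):
+
+for layer in model.layers:
+    weights = layer.get_weights()           # for a Dense layer: [kernel, bias]
+    print(layer.name, [w.shape for w in weights])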
+"
+"['reference-request', 'papers', 'computational-learning-theory', 'pac-learning', 'books']"," Title: What are some resources on computational learning theory?Body: Pretty soon I will be finishing up Understanding Machine Learning: From Theory to Algorithms by Shai Ben-David and Shai Shalev-Shwartz. I absolutely love the subject and want to learn more, the only issue is I'm having trouble finding a book that could come after this. Ultimately, my goal is to read papers in JMLR's COLT.
+
+
+- Is there a book similar to ""Understanding Machine Learning: From Theory to Algorithms"" that would progress my knowledge further and would go well after reading UML?
+- Is there any other materials (not a book) that could allow me to learn more or prepare me for reading a journal like the one mentioned above?
+
+
+(Also, taking courses in this is not really an option, so this will be for self-study).
+
+(Note that I have also asked this question here on TCS SE, but it was recommended I also ask here.)
+"
+"['neural-networks', 'deep-learning', 'ai-design']"," Title: Leaky Discriminators and Siamese GANsBody: Is it useful to use Siamese network
structure for GANs
like sharing latent space
between generators in cGAN
, or also with discriminators
.
+
+Thinking about it, it is like giving the generator tips about the knowledge base of the discriminator, to target the problem of discriminator forgetting and increase the chance of convergence. Because then the discriminator's prediction confidence depends on the generator's construction (which is anyway the case, but now at a system level).
+
+What do you think? I didn't see it in recent papers that often, just in this one, but that is more of a pix2pix transformation, and it just works so well because they use the segmentation masks of A to get good segmentation results on B' (A transformed to B). I didn't find any approaches to something like leaky discriminators.
+"
+"['game-ai', 'monte-carlo-tree-search']"," Title: MCTS RAVE performing badly in Board Game AIBody: I'm using Monte Carlo Tree Search with UCT selection to try and build an AI player for a complex multiplayer board game. My regular UCT MCTS seems to be working fine, winning with random and basic greedy players or low-depth 'paranoid' alpha-beta variant player, but I've been looking for some methods to improve it and I found RAVE.
+
+""In RAVE, for a given game tree node N, its child nodes Ci store not only the statistics of wins in playouts started in node N but also the statistics of wins in all playouts started in node N and below it, if they contain move i (also when the move was played in the tree, between node N and a playout). This way the contents of tree nodes are influenced not only by moves played immediately in a given position but also by the same moves played later."".
+
+I've found a lot of literature about it and it was supposed to give good results - a 70%-80% win rate against basic UCT on a game of TicTacToe3D. I implemented it as a sort of benchmark, a 4x4x4 version, before trying it on my target game. But, however I tried tuning the parameters, I've been getting worse results; the win rate is at best around 46%.
+
+I've been calculating the node values like this:
+
+visits[i] is a number of visits for child i of parent p that selection is performed on, wins[i] is a number of wins according to UCT, AMAFvisits and AMAFwins are assigned based on the node's source action -> updated after a finished simulation if a sourceAction (the action that changed the game state into this state) was played in the simulation by the player of the MCTS tree root node.
+
+for (int i = 0; i < nChildren; i++) {
+ if (visits[i] < 1) {
+ value = Double.MAX_VALUE - rnd.nextDouble();
+ }
+ else if (m[i] < 1) {
+ double vUCT = wins[i]/visits[i] + C*Math.sqrt(Math.log(sumVisits)/(visits[i]));
+ value = vUCT;
+ }
+ else {
+ double beta = Math.sqrt(k/(3*visits[i] + k));
+ double vRAVE = (AMAFscores[i])/(m[i]) + C*Math.sqrt(Math.log(mChildren)/(m[i]));
+ double vUCT = (wins[i])/(visits[i])+ C*Math.sqrt(Math.log(sumVisits)/(visits[i]));
+ value = beta * vRAVE + (1 - beta) * vUCT;
+ value += rnd.nextDouble() * eps;
+ /*double beta = Math.sqrt(k/(3*visits[i] + k));
+ double vRAVE = (AMAFscores[i])/(m[i]);
+ double vUCT = (wins[i])/(visits[i]);
+ value = beta * vRAVE + (1 - beta) * vUCT;
+ value += C*Math.sqrt(Math.log(sumVisits)/(visits[i]));
+ value += rnd.nextDouble() * eps;*/
+ }
+ if (maxValue <= value) {
+ maxValue = value;
+ index = i;
+ }
+}
+chosen = tree.getTreeNode(children.get(index));
+
+
+Here's a paint rendition of my understanding of how RAVE should work -> https://imgur.com/a/MM4K1HE.
+Am I missing something? Is my implementation wrong? Here's the rest of the code responsible for traversing the tree in a 'rave way': https://www.paste.org/104476. The expand function on tree expands the tree for all actions, and returns a random one which then gets visited, the others are to be visited in other iterations.
+
+I first tested the code on k = 250 like the authors of the benchmark paper https://dke.maastrichtuniversity.nl/m.winands/documents/CIG2016_RAVE.pdf suggested and on 100, 1000 and 10000 iterations, with tree depth 20 or 50. I also experimented with other k values and other params.
+"
+"['reinforcement-learning', 'deep-rl', 'supervised-learning', 'efficiency', 'active-learning']"," Title: Is it possible to guide a reinforcement learning algorithm?Body: I have just started to study reinforcement learning and, as far as I understand, existing algorithms search for the optimal solution/policy, but do not allow the possibility for the programmer to suggest a way to find the solution (to guide their learning process). This would be beneficial for finding the optimal solution faster.
+
+Is it possible to guide the learning process in (deep) reinforcement learning?
+"
+"['tensorflow', 'pretrained-models']"," Title: How does a software license apply to pretrained models?Body: Google provides a lot of pretrained tensorflow models, but I cannot find a license.
+
+I am interested in the tfjs-models. The code is licensed Apache-2.0, but the models are downloaded by the code, so the license of the repository probably does not apply to the models and I am not able to find anywhere a note about the license of the pretrained models.
+
+How should I handle this, especially when I may want to distribute models derived from the pretrained Google models?
+"
+"['recurrent-neural-networks', 'models', 'control-problem']"," Title: Neural networks with internal dynamics in the state-space formBody: Neural networks with feedback (Hopfield, Hamming, etc.) differ from ordinary neural networks (multilayer perceptrons, etc.), which turns them into a dynamic element with its own internal dynamics (if we consider them as a separate dynamic link). The following question naturally arises - is it possible to represent them in the form of state spaces?
+
+The nuance is that the feedback is created by introducing a delay element, which means the neural network can only be written in a discrete form. Is a continuous formulation possible? What acts as the matrices A, B, C, D? How does the presence of nonlinear activation functions affect this?
+The only more or less useful information that I managed to find is in this article:
+
+On neural networks in identification and control of dynamic systems. 3.2 Paragraph. Page 8
+
+But my assumptions are only confirmed there, which does not clarify the situation.
+
+In general, if someone has come across this and can assist in studying the issue, please share links, possibly examples, etc.
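+
+To make the question concrete, the kind of discrete-time form I have in mind (my own guess at the notation, not taken from the article) is $x_{k+1} = \sigma(A x_k + B u_k)$, $y_k = C x_k + D u_k$, and I am asking whether a continuous analogue exists and what plays the role of $A$, $B$, $C$, $D$ once the activation $\sigma$ is nonlinear.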
+"
+"['deep-learning', 'recurrent-neural-networks', 'long-short-term-memory', 'deep-neural-networks', 'deepmind']"," Title: Why Pixel RNN (Row LSTM) can capture triangular contexts?Body: I'm reading the paper Pixel Recurrent Neural Network. I have a question about Row LSTM. Why Row LSTM can capture triangular contexts?
+
+In this paper,
+
+
+ the kernel of the one-dimensional convolution has size $k \times 1$ where $k \geq 3$; the larger value of $k$ the broader the context that is captured.
+
+
+The one-dimensional kernel can capture only the left context. (Is this correct?)
+
+The $n \times n$ kernel such as
+
+$$
+\begin{bmatrix}
+1 & 1 & 1 \\
+0 & 0 & 0 \\
+0 & 0 & 0
+\end{bmatrix}
+$$
+
+can capture triangular contexts.
+
+Is this correct?
+"
+"['papers', 'neat', 'crossover-operators', 'mutation-operators', 'genetic-operators']"," Title: What does ""In each generation, 25% of offspring resulted from mutation without crossover"" mean in the context of NEAT?Body: I am reading through the NEAT paper. In parameter settings, page 15, there is:
+
+
+ In each generation, 25% of offspring resulted from mutation without crossover.
+
+
+What does it mean?
+"
+"['backpropagation', 'gradient-descent', 'feedforward-neural-networks', 'stochastic-gradient-descent', 'mini-batch-gradient-descent']"," Title: What exactly is averaged when doing batch gradient descent?Body: I have a question about how the averaging works when doing mini-batch gradient descent.
+
+I think I now understood the general gradient descent algorithm, but only for online learning. When doing mini-batch gradient descent, do I have to:
+
+
+- forward propagate
+- calculate error
+- calculate all gradients
+
+
+...repeatedly over all samples in the batch, and then average all gradients and apply the weight change?
+
+I thought it would work that way, but recently I have read somewhere that you basically only average the error of each example in the batch, and then calculate the gradients at the end of each batch. That left me wondering though, because, the activations of which sample in the mini-batch am I supposed to use to calculate the gradients at the end of every batch?
+
+It would be nice if somebody could explain what exactly happens during mini-batch gradient descent, and what actually gets calculated and averaged.
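+
+To make the question concrete, here is a toy numpy check of the two bookkeeping variants I am asking about, using a single linear neuron with squared error (my own example):
+
+import numpy as np
+
+rng = np.random.default_rng(0)
+X = rng.normal(size=(8, 3))     # mini-batch of 8 samples, 3 features
+y = rng.normal(size=(8,))
+w = rng.normal(size=(3,))
+
+# Variant 1: compute the gradient per sample, then average the gradients.
+grads = []
+for x_i, y_i in zip(X, y):
+    err = x_i @ w - y_i
+    grads.append(2 * err * x_i)                  # d/dw of (x_i.w - y_i)^2
+grad_avg_of_grads = np.mean(grads, axis=0)
+
+# Variant 2: average the loss over the batch first, then take one gradient.
+err = X @ w - y
+grad_of_avg_loss = 2 * (X.T @ err) / len(y)      # d/dw of the mean squared error
+
+print(np.allclose(grad_avg_of_grads, grad_of_avg_loss))   # prints True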
+"
+"['papers', 'vanishing-gradient-problem']"," Title: Does the paper ""On the difficulty of training Recurrent Neural Networks"" (2013) assume, falsely, that spectral radii are $\ge$ square matrix norms?Body: (link to paper in arxiv)
+
+In section 2.1 the authors define $\gamma$ as the maximum possible value of the derivative of the activation function (e.g. 1 for tanh.) Then they have this to say:
+
+
+ We first prove that it is sufficient for $\lambda_1 < \frac{1}{\gamma}$, where $\lambda_1$ is the absolute value of the largest eigenvalue of the recurrent weight matrix $W_{rec}$, for the vanishing gradient problem to occur.
+
+
+Then they use the submultiplicity ($\|AB\| \le \|A\|\|B\|$) of the 2-norm of the Jacobians to obtain the following inequality:
+
+$$ \forall x, \| \frac{\partial x_{k+1}}{\partial x_k} \| \le \| W_{rec}^\top \| \| diag(\sigma'(x_k))\| < \frac{1}{\gamma} \gamma < 1 $$
+
+Here
+
+
+- $x_k$ is the pre-activated state vector of the RNN
+- $W_{rec}$ is the weight matrix between states (i.e. $x_k = W_{rec} \times \sigma(x_{k-1}) + b$ )
+- $\sigma()$ is the activation function for the state vector
+- $diag(v)$ is the diagonal matrix version of a vector $v$
+
+
+They appear to be either substituting the norm of the weight matrix $\|W_{rec}^\top\|$ for its largest eigenvalue $|\lambda_1|$ (eigenvalues are the same for transposes) or just assuming that this norm is less than or equal to the eigenvalue. This bothers me because the norm of a matrix is bounded below, not above, by this eigenvalue/spectral radius (see lemma 10 here and this math SE question)
+
+They seem to assume that
+
+$$\| W_{rec}^\top \| \le \lambda_1 < \frac{1}{\gamma} $$
+
+But really
+
+$$ \| W_{rec}^\top \| \ge \lambda_1 $$
+"
+"['tensorflow', 'pytorch', 'text-summarization', 'gpt', 'text-generation']"," Title: How can I use GPT-2 to modify seed text of one form into a different form (LENGTH INVARIANT) whilst retaining meaning?Body: I am currently starting a research project whereby I am trying to convert text of one form into another.
+i.e. If I were to write a seed sentence of the form ""Scientists have finally achieved the ability to induce dreams of electric sheep in the minds of anaesthetized robots"", I would like GPT-2 to convert this into ""Robots have finally had dreams of electric sheep whilst being anaesthetized by scientists."" or some coherent permutation of the underlying structure whereby the main logic of the text is conveyed, albeit roughly.
+
+The current open source implementation of GPT-2 seeks to predict the next word, i.e. the seed text is given ""Scientist have finally"" and the generated text would be "" started being paid enough!""
+
+My first presumption was to use some form of GAN, however it became quickly evident that:
+
+
+ Recent work has shown
+ that when both quality and diversity is considered, GAN-generated text
+ is substantially worse than language model generations (Caccia et al.,
+ 2018; Tevet et al., 2018; Semeniuta et al., 2018).
+
+
+How could I most effectively achieve this? Thanks.
+"
+"['reinforcement-learning', 'reference-request', 'policy-gradients', 'implementation', 'reinforce']"," Title: Is there a good and easy paper to code policy gradient algorithms (REINFORCE) from scratch?Body: I am interested in learning about policy gradient algorithms and REINFORCE. Can you suggest a good and easy paper that I can use to code them from scratch?
+"
+"['machine-learning', 'unsupervised-learning', 'generalization', 'pac-learning']"," Title: Is there a notion of generalization in unsupervised learning?Body: I've been learning a little bit about generalization theory, and in particular, the PAC (and PAC-Bayes) approach to thinking about this problem.
+
+So, I started to wonder if there is an analogous version of ""generalization"" in Unsupervised Learning? I.e., is there a general framework that encapsulates how ""good"" an unsupervised learning method is? There's reconstruction error for learning lower dimensional representations, but what about unsupervised clustering?
+
+Any ideas?
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'deep-rl', 'value-functions']"," Title: What is the target Q-value in DQNs?Body: I understand that in DQNs, the loss is measured by taking the MSE of outputted Q-values and target Q-values.
+What do the target Q-values represent? And how are they obtained/calculated by the DQN?
+"
+"['machine-learning', 'computational-learning-theory', 'vc-dimension', 'vc-theory']"," Title: What do we mean by saying ""VC dimension gives a LOOSE, not TIGHT bound""?Body: From what I understand VC dimension is what establishes the feasibility of learning for infinite hypothesis sets, the only kind we would use in practice.
+
+But, the literature (i.e. Learning from Data) states that VC gives a loose bound, and that in real applications, learning models with lower VC dimension tend to generalize better than those with higher VC dimension. So, a good rule of thumb would be to require at least 10xVC dimension examples in order to get decent generalization.
+
+I am having trouble interpreting what loose bound means. Is the VC generalization bound loose due to its universality? Meaning, its results apply to all hypothesis sets, learning algorithms, input spaces, probability distributions, and binary target functions.
+"
+"['pattern-recognition', 'time-series']"," Title: What method to identify markers in data series via machine learningBody: I have data that is collected from several different instruments simultaneously that is generally analyzed on a location-by-location basis. A skilled interpreter can identify ""markers"" in the data that represent a certain change in conditions with depth - each marker only occurs once in each series of data. However, it is possible that a maker is absent either due to missing data or that physical condition not existing.
+
+Often, there are dozens of these markers per location and thousands, if not 10's of thousands of measurements that need to be interpreted. The task is not that difficult and there are many strong priors that can be used to guide interpretation. E.g., if marker A is in location #1, and location #2 is very close to location #1, it is likely that marker A will be present in a very similar relative position. Also, if you have markers A, B, and C they will always be in that order. Although it could be that you have A/B/C or A/C or B/C, etc.
+
+I am including a hand-sketched example below with 4 example locations and one data stream (I normally have 4-5 data streams per location).
+
+I am looking for guidance on the type of algorithm to apply to this problem. I have explored Dynamic Time warping, but the issue is that with 10-20k data samples per location, and thousands of locations, the problem becomes computationally challenging.
+
+Also, in general you may have 10000 locations, with maybe 100 that have been hand interpreted by an expert.
+
+
+"
+"['convolutional-neural-networks', 'models', 'pytorch']"," Title: What is the difference between FC and MLP in as used in PointNet?Body: I am trying to understand the PointNet network for dealing with point clouds and struggling with understanding the difference between FC and MLP:
+
+
+ ""FC is fully connected layer operating on each point. MLP is
+ multi-layer perceptron on each point.""
+
+
+I understand how fully connected layers are used to classify, and I previously thought that an MLP was the same thing, but it seems academic papers define it differently from one another and from general online courses. In PointNet, how is a shared MLP different from a standard feedforward fully connected network?
+
+
+"
+"['reinforcement-learning', 'supervised-learning']"," Title: Reinforcement learning with industrial continuous processBody: I am new to RL and wish to realize a RL control for an industrial process. The goal is to control the temperature and humidity in a vegetal food production chamber.
+
+States: External temperature and humidity, internal temperature and humidity, percentage of the proportional valves controlling heater, cooler and steam for humidity. The goal is to keep temperature and humidity in the chamber (measures) as close as possible to the desired values (the setpoints).
+
+Agent actions: Increase/decrease the percentage of the proportional valves controlling the actuators.
+
+Rewards: Deviation between measure and setpoints (small deviation => high reward, high deviation => low reward).
+
+I have data available: the history of states and actions from a real system. The actions are made by several PID controllers (some of them in cascade). So far I have about 3 months of data sampled every minute (with some stops, for example when a chamber is cleaned). The data are continuously logged and every month I get more data. The data include bad/unwanted states.
+
+For training the RL agent, I am planning to simulate the environment using a supervised learning model (with the predict function), probably XGBoost. Is this feasible, and are there pitfalls to avoid in this case?
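+
+To make this concrete, the kind of wrapper I have in mind is a rough sketch like the following (names are made up; model is assumed to be a multi-output regressor fitted on the logged (state, action) -> next (temperature, humidity) transitions):
+
+import numpy as np
+
+class SimulatedChamberEnv:
+    # Rough sketch of simulating the chamber with a fitted supervised model.
+    def __init__(self, model, setpoints, initial_state):
+        self.model = model                                    # e.g. XGBoost wrapped for multi-output
+        self.setpoints = np.asarray(setpoints, dtype=float)   # desired temperature and humidity
+        self.state = np.asarray(initial_state, dtype=float)   # current temperature and humidity
+
+    def step(self, action):
+        x = np.concatenate([self.state, np.asarray(action, dtype=float)]).reshape(1, -1)
+        next_measures = np.asarray(self.model.predict(x)).reshape(-1)
+        reward = -float(np.abs(next_measures - self.setpoints).sum())   # small deviation => high reward
+        self.state = next_measures
+        return self.state, reward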
+"
+"['classification', 'image-recognition', 'multi-label-classification']"," Title: Can I do image classification with Multi Layers Perceptron (MLP)?Body: I'm seeking guidence here.
+Can I use a Multi-Layer Perceptron (MLP), i.e. a regular flat neural network, for image classification?
+
+Will they perform better than Fisher Faces?
+
+Is it difficult to do image classification with an MLP network?
+
+It's at a basic level, like classifying objects, not detailed structures and patterns.
+
+What is important to me is that the MLP needs to be trained with pictures that can have noise in the background and different lighting and shadows.
+"
+"['philosophy', 'artificial-consciousness', 'turing-test', 'mythology-of-ai', 'chinese-room-argument']"," Title: What are the implications of the statement ""If you can't tell, does it matter?"" in relation to AI?Body: "If you can't tell, does it matter?" was one of the first lines of dialogue of the Westworld television series, presented as a throwaway in the first episode of the first season, in response to the question "Are you real?"
+In the sixth episode of the third season, the line becomes a central realization of one of the main characters, The Man in Black.
+This is, in fact, a central premise of the show—what is reality, what is identity, what does it mean to be alive?—and has a basis in the philosophy of Philip K. Dick.
+
+- What are the implications of this statement in relation to AI? In relation to experience? In relation to the self?
+
+"
+"['deep-learning', 'autoencoders']"," Title: Can I use an autoencoder with high latent representational space?Body: I am trying to use a neural network to predict the next state output given the current state and action pairs. Both input and outputs are continuous variables. Due to the high dimensionality of each input, ( ~50 dimensional input ) and 48 dimensional output, I am not able to achieve an achieve a satisfiable enough accuracy.
+
+I am thinking of using an auto-encoder to learn a latent representation of the state. Would a latent representation from an auto-encoder help to improve the prediction accuracy? And can the latent representation have a higher-dimensional space compared to the original state?
+"
+"['machine-learning', 'convolutional-neural-networks', 'prediction']"," Title: What does the model predict if it has never seen the image before?Body: I've been messing around with an Open Set, Binary Classifier and am having trouble with it. I'm sure there are a lot of reasons for that trouble.
+One thing I am struggling with is, what does the model predict if it has never seen the image before?
+An example would be if I'm trying to detect sheep across all background scenes. If I train a binary classification set with one class having lots of sheep in it and the other class having lots of various backgrounds, what would the model predict if it came across a background it had never seen before with no sheep in it? [mine is telling me "sheep" and I don't know why]
+"
+"['neural-networks', 'convolutional-neural-networks', 'image-recognition', 'image-processing']"," Title: Do Multi-resolution CNN exist?Body: I am currently working on a problem for which the topographic data is in very different resolution. Let say I have data of 20x20 with 1km2 tiles and also high resolution data of 50m2 tiles. I would like to combine both for input in a CNN.
+To make things more spicy, I don't care about the 50m2 data when it is far away from the center; that is why I would like to use a multi-resolution 'image', i.e. low resolution at the edges but higher in the center. That would be like human vision, only highly detailed in the center...
+Then I would combine that multi-resolution image with my 1km2 data
+
+Do you know of any research done on such a CNN?
+
+Only found that one for now: Multi-Resolution Feature Fusion for Image Classification of Building Damages with Convolutional Neural Networks
+
+Thank you for your help
+"
+"['agi', 'research', 'cognitive-science', 'academia', 'neuroscience']"," Title: What are the scientific journals dedicated to artificial general intelligence?Body: Apart from Journal of Artificial General Intelligence (a peer-reviewed open-access academic journal, owned by the Artificial General Intelligence Society (AGIS)), are there any other journals (or proceedings) completely (or partially) dedicated to artificial general intelligence?
+
+If you want to share a journal that is only partially dedicated to the topic, please, provide details about the relevant subcategory or examples of papers on AGI that were published in such a journal. So, a paper that talks about e.g. an RL technique (that only claims that the idea could be useful for AGI) is not really what I am looking for. I am looking for journals where people publish papers, reviews or surveys that develop or present theories and implementations of AGI systems. It's possible that these journals are more associated with the cognitive science or neuroscience communities and fields.
+"
+"['recurrent-neural-networks', 'prediction', 'time-series', 'state-of-the-art']"," Title: What are modern state-of-the-art solutions in prediction of time-series?Body: I wanted to ask you about the newest achievements in time series analysis (mostly prediction).
+What state-of-the-art solutions (as in frameworks, papers, related projects) do you know that can be used for analysing and predicting time series?
+
+I am interested in something possibly better than just RNN, LSTM and GRU :)
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'tensorflow']"," Title: Finding patterns in binary files using deep learningBody: I am a newbie in deep learning and wanted to know if the problem I have at hand is a suitable fit for deep learning algorithms. I have thousands of fragments each of about 1000 bytes size (i.e. numbers in the range of 0 to 255). There are two classes in the fragments:
+
+
+- Some fragments have a high frequency of two particular byte values appearing next to one another: ""0 and 100"". This kind of pattern roughly appears once every 100 to 200 bytes.
+- In the other class, the byte values are more randomly distributed.
+
+
+We have the ability to produce as many numbers of instances of each class as needed for training purposes. However, I would like to differentiate with a machine learning algorithm without explicitly identifying the ""0 and 100"" pattern in the 1st class myself. Can deep learning help us solve this? If so, what kind of layers might be useful?
+
+As a preliminary experiment, we tried to train a deep learning network made up of 2 hidden layers of TensorFlow's ""Dense"" layers (of size 512 and 256 nodes in each of the hidden layers). However, unfortunately, our accuracy was indicative of simply a random guess (i.e. 50% accuracy). We were wondering why the results were so bad. Do you think a Convolutional Neural Network will better solve this problem?
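+
+Roughly, the baseline we tried looks like this (a simplified reconstruction with the layer sizes described above; the exact preprocessing is omitted):
+
+import tensorflow as tf
+
+model = tf.keras.Sequential([
+    tf.keras.layers.Dense(512, activation='relu', input_shape=(1000,)),  # 1000 byte values per fragment
+    tf.keras.layers.Dense(256, activation='relu'),
+    tf.keras.layers.Dense(1, activation='sigmoid'),                      # binary class label
+])
+model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])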
+"
+"['reinforcement-learning', 'value-functions']"," Title: Do policy independent state and action values exist in reinforcement learning?Body: The state value function $V(s)$ is defined as the expected return starting in state $s$ and acting according to the current policy $\pi(a|s)$ till the end of the episode. The state-action values $Q(s,a)$ are similarly dependent on the current policy.
+
+Is it also possible to get a policy independent value of a state or an action?
+Can the immediate reward $r(s,a,s')$ be considered a noisy estimate of the action value?
+"
+"['reinforcement-learning', 'off-policy-methods', 'sarsa', 'on-policy-methods', 'expected-sarsa']"," Title: Is Expected SARSA an off-policy or on-policy algorithm?Body: I understand that SARSA is an On-policy algorithm, and Q-learning an off-policy one.
+Sutton and Barto's textbook describes Expected Sarsa thusly:
+
+
+ In these cliff walking results Expected Sarsa was used on-policy, but
+ in general it might use a policy different from the target policy to
+ generate behavior, in which case it becomes an off-policy algorithm.
+
+
+I am fundamentally confused by this - specifically, how do we define when Expected SARSA adopts or disregards policy. The Coursera Course states that it is On-Policy, further confusing me.
+
+My confusion became concrete when tackling the Udacity course, specifically a section visualizing Expected SARSA for a simple gridworld (see sections 1.11 and 1.12 in the link below). Note that the course defines Expected Sarsa as on-policy.
+https://www.zhenhantom.com/2019/10/27/Deep-Reinforcement-Learning-Part-1/
+
+You'll notice the calculation for the new state value Q(s0,a0) as
+
+
+ Q(s0, a0) <— 6 + 0.1( -1 + [0.1 x 8] + [0.1 x 7] + [0.7 x 9] + [0.1
+ x 8] - 6) = 6.16.
+
+
+This is also the official answer. But this would mean that it is running off-policy, given that it is stated that the action taken at S1 corresponds to a shift right, and hence Expected SARSA (on-policy) should yield:
+
+
+ Q(s0, a0) <— 6 + 0.1( -1 + [0.1 x 8] + [0.1 x 7] + [0.1 x 9] + [0.7 x
+ 8] - 6) = 6.1
+
+
+The question does state
+
+
+ (Suppose that when selecting the actions for the first two timesteps
+ in the 100th episode, the agent was following the epsilon-greedy
+ policy with respect to the Q-table, with epsilon = 0.4.)
+
+
+But as this same statement existed for the regular SARSA example (which also yields 6.1 as A1 is shift right, as before), I disregarded it.
+
+Any advice is welcome.
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'reference-request']"," Title: Are there any good research papers on image identification with limited data?Body: I'm a newbie in machine learning and I am interested in neural networks.
+
+Are there any good research papers on image identification with limited data?
+"
+"['autoencoders', 'image-processing']"," Title: How fast are autoencoders?Body: I was exploring image/video compression using Machine Learning. In there I discovered that autoencoders are used very frequently for this sort of thing. So I wanted to enquire:-
+
+
+- How fast are autoencoders? I need something that can compress an image in milliseconds.
+- How many resources do they take? I am not talking about the training part but rather the deployment part. Could it work fast enough to compress a video on a Mi phone (like a Note 8 maybe)?
+
+
+Do you know of any particularly new and interesting research in AI that has enabled a technique to do this fast and efficiently?
+"
+"['reinforcement-learning', 'markov-decision-process', 'rewards', 'decision-theory']"," Title: Can optimizing for immediate reward result in a policy maximizing the return?Body: The goal of a reinforcement learning agent is to maximize the expected return which is often a discounted sum of future rewards. The return indeed is a very noisy random variable as future rewards depend on the state-transition-probabilities and the often stochastic policy. Lots of trajectories have to be sampled to approximate its expected value.
+
+The immediate reward indeed does not have these dependencies. Therefore the questions:
+
+If we train a policy to maximize the immediate reward, will it also perform well in the long term? What properties would the reward function need to fulfill?
+"
+"['natural-language-processing', 'reference-request', 'natural-language-understanding', 'state-of-the-art']"," Title: How well can NLP techniques recognize connotations in natural languages?Body: What is the state of the art with respect to recognizing connotations in natural languages?
+
+For instance:
+
+
+- Trump is a better president than Obama. [Praising]
+- Trump is the worst president since Obama. [Insulting]
+
+
+or:
+
+
+- The rock star did not infect over 100 groupies. [Defending against rumor]
+- The rock star infected no more than 100 groupies. [Attacking (0 is no more than 100)]
+
+
+In each example, both statements logically mean exactly the same thing, but any human hearing them would interpret them as having quite opposite meanings.
+
+How well can current natural language processors recognize the difference between logically equivalent statements?
+"
+"['neural-networks', 'machine-learning', 'ai-design', 'hyper-parameters']"," Title: Why is the number of neurons used in various neural networks power of 2?Body: I have noticed that almost all tutorials take the number of neurons as a power of 2. Is there any proper mathematical and well-proven reason for that?
+
+If I sometimes change it to some other odd value, then I get a very long and weird error, with a traceback about a page long complaining about a dozen things. Is there any reason for that?
+
+
+ I had tried it on a text-predicting RNN with some GRU and LSTM layers mixed (bidirectional). I changed the number of neuron units and it also resulted in an error. So, any ideas/theories?
+
+"
+"['neural-networks', 'deep-learning', 'objective-functions']"," Title: Loss function for choosing a subset of objectsBody: I'm trying to train a neural net to choose a subset from some list of objects.
+The input is a list of objects $(a,b,c,d,e,f)$ and for each list of objects the label is a list composed of 0/1 - 1 for every object that is in the subset, for example $(1,1,0,1,0,1)$ represents choosing $a,b,d,f$. I thought about using MSE loss to train the net but that seemed like a naive approach, is there some better loss function to use in this case?
+"
+"['reinforcement-learning', 'comparison', 'policies', 'value-iteration', 'policy-iteration']"," Title: Why do value iteration and policy iteration obtain similar policies even though they have different value functions?Body: I am trying to implement value and policy iteration algorithms. My value function from policy iteration looks vastly different from the values from value iteration, but the policy obtained from both is very similar. How is this possible? And what could be the possible reasons for this?
+"
+"['machine-learning', 'neuromorphic-engineering', 'spiking-neural-networks', 'neuroscience', 'reservoir-computing']"," Title: What are examples of machine learning techniques inspired by neuroscience?Body: What are examples of machine learning techniques (i.e. models, algorithms, etc.) inspired (to different extents) by neuroscience?
+
+Particularly, I'm interested in recent developments, say less than 10 years old, that have their basis in neuroscience to some degree.
+"
+"['reinforcement-learning', 'deep-rl', 'statistical-ai', 'bias-variance-tradeoff']"," Title: Do the variance and bias belong to the policy or value functions?Body: Recently, I read many papers on variance and bias. But I am still confused by the two notions, the variance or bias belongs to who? Policy or value? If the variance or bias is large or low, what results will we get?
+"
+"['neural-networks', 'generative-adversarial-networks', 'generative-model', 'artificial-creativity']"," Title: Using DCGAN on a (very small) dataset of artBody: I am developing a DCGAN using the this tutorial in PyCharm. As my usage of this tutorial suggests, I am quite new to DCGANs as I've previously only had a few experiences with machine learning algorithms on classifying problems. My goal is to feed my DCGAN a dataset of paintings of a specific painter, and get 'new' paintings in return. Needless to say, a painter does not paint thousands of paintings in his life, leaving me with a dataset of around 60 paintings. One of the smallest datasets I have ever worked with. I have two, related, questions:
+
+
+- Is it realistic to properly train a DCGAN on this type of dataset? If not, would there be any alternative you would suggest?
+- What would be a good set of parameters to start off from to properly train this DCGAN?
+
+
+Thanks in advance!
+"
+"['reinforcement-learning', 'monte-carlo-tree-search', 'alphazero', 'pomdp']"," Title: Is Monte Carlo tree search needed in partially observable environments during gameplay?Body: I understand that with a fully observable environment (chess / go etc) you can run an MCTS with an optimal policy network for future planning purposes. This will allow you to pick actions for gameplay, which will result in max expected return from that state.
+
+However, in a partially observable environment, do we still need to run MCTS during gameplay? Why can't we just pick the max action from the trained optimal policy given the current state? What utility does MCTS serve here?
+
+I am new to reinforcement learning and am trying to understand the purpose of MCTS / planning in partially observable environments.
+"
+"['neural-networks', 'machine-learning', 'hardware']"," Title: How does optical computing work and deal with nonlinearity?Body: This article states that:
+
+
+ One of the algorithms that photonics is very good at implementing is matrix multiplication
+
+
+But how are parameters stored and updated(in backpropagation)?
+
+One more serious problem is that there are nonlinear operations in neural networks, then how does the photonic neural network deal with activation functions?
+"
+"['reinforcement-learning', 'comparison', 'game-theory', 'minimax', 'bellman-equations']"," Title: How are the Bellman optimality equations and minimax related?Body: Is the philosophy between Bellman equations and minimax the same?
+
+Both the algorithms look at the full horizon and take into account potential gains (Bellman) and potential losses (minimax).
+
+However, do the two differ besides the obvious fact that Bellman equations use discounted potential rewards, while minimax deals with potential losses without the discount? Is this enough to say they are similar in philosophy, or are they dissimilar? If so, then in what sense?
+"
+"['reinforcement-learning', 'intelligent-agent', 'architecture']"," Title: RL: What should be the output of the NN for an agent trying to learn how to play a game?Body: Say the game is tic tac toe.
+I found two possible output layers:
+
+
+- Vector of length 9: each float of the vector represents 1 action
+(one of the 9 boxes in Tic Tac Toe). The agent will play the corresponding action with the highest value. The agent learns the rules through trial and error. When the agent tries to make an illegal move (i.e. placing a piece on a box where there is already one), the reward will be harshly negative (-1000 or so).
+- A single float: the float represents who is winning (positive = ""the agent is winning"", negative = ""the other player is winning""). The agent does not know the rules of the game. Each turn the agent is presented with all the possible next states (resulting from playing each legal action) and it chooses the state with the highest output value.
+
+
+What other options are there?
+
+I like the first option because it's cleaner, but it's not feasible for games that have thousands or millions of actions. Also, I am worried that the agent might not really learn the rules.
+E.g. Say that in state S the action A is illegal. Say that state R is extremely similar to state S but action A is legal in state R (and maybe in state R action A is actually the best move!). Isn't there the risk that by learning not to play action A in state S it will also learn not to play action A in state R? Probably not an issue in Tic Tac Toe, but likely one in any game with more complex rules.
+What are the disadvantages of option 2?
+
+Does the choice depend on the game? What's your rule of thumb when choosing the output layer?
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'deep-rl']"," Title: Why are Dueling Q Networks not used more often to approximate Q-values in reinforcement learning algorithms?Body: I've just learned about Dueling Network Architectures to estimate $Q$-values and am wondering why this architecture is not used more often in deep RL algorithms? DDPG and TD3 estimate the $Q$-function using Double Q Learning instead of the empirically better Dueling Approach.
+"
+"['deep-learning', 'multilayer-perceptrons', 'perceptron']"," Title: Why can't MLPs perform non-linear regression and classification?Body: In this page it's told:
+
+
+ In Single Perceptron / Multi-layer Perceptron(MLP), we only have linear separability because they are composed of input and output layers(some hidden layers in MLP)
+
+
+What does it mean? I thought the MLP was a non-linear classifier. Could you explain it to me?
+"
+"['reinforcement-learning', 'research', 'operant-conditioning']"," Title: Has there been any work done on AI-driven operant conditioning?Body: For example, a RL algorithm that gains points when a rat presses a lever and loses points when it dispenses a pellet, water, treat, and/or sugar water. After a few days of controlling the rewards given to a rat, all rewards are stopped and the longer/more times the rat presses the lever before giving up, the higher the score.
+
+This would be a situation in which both the input and the outputs are discrete with very low data density over time and with outputs having very long-term affects on the environment.
+
+What kind of RL architecture would be appropriate here?
+"
+"['neural-networks', 'tensorflow', 'training', 'optimization', 'batch-normalization']"," Title: How to improve neural network training against a large data set of points with varying magnitudeBody: I am currently using TensorFlow and have simply been trying to train a neural network directly against a large continuous data set, e.g. $y = [0.014, 1.545, 10.232, 0.948, ...]$ corresponding to different points in time. The loss function in the fully connected neural network (input layer: 3 nodes, 8 inner layers: 20 nodes each, output layer: 1 node) is just the squared error between my prediction and the actual continuous data. It appears the neural network is able to learn the high magnitude data points relatively well (e.g. Figure 1 at time = 0.4422). But the smaller magnitude data points (e.g. Figure 2 at time = 1.1256) are quite poorly learned without any sharpness and I want to improve this. I've tried experimenting with different optimizers (e.g. mini-batch with Adam, full batch with L-BFGS), compared reduce_mean
and reduce_sum
, normalized the data in different ways (e.g. median, subtract the sample mean and divide by the standard deviation, divide the squared loss term by the actual data), and attempted to simply make the neural network deeper and train for a very long period of time (e.g. 7+ days). But after approximately 24 hours of training and the aforementioned tricks, I am not seeing any significant improvements in predicted outputs especially for the small magnitude data points.
+
+
+
+Figure 1
+
+
+
+
+
+Figure 2
+
+
+
+
+
+Therefore, do you have any recommendations on how to improve training particularly when there are different data points of varying magnitude I am trying to learn? I believe this is a related question, but any explicit examples of implementations or techniques to handle varying orders of magnitude within a single large data set would be greatly appreciated.
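+
+For reference, the 'divide the squared loss term by the actual data' variant mentioned above looks roughly like this in my code (simplified; eps only avoids division by zero):
+
+eps = 1e-6
+loss = tf.reduce_mean(tf.square((y_pred - y_true) / (tf.abs(y_true) + eps)))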
+"
+"['computational-learning-theory', 'information-theory', 'bayesian-deep-learning', 'minimum-description-length']"," Title: How can a machine learning problem be reduced as a communication problem?Body: I once heard that the problem of approximating an unknown function can be modeled as a communication problem. How is this possible?
+"
+"['reinforcement-learning', 'rewards', 'policy-gradients', 'ddpg']"," Title: Appropriate algorithm for RL problem with sparse rewards, continuous actions and significant stochasticityBody: I'm working on a RL problem with the following properties:
+
+
+- The rewards are extremely sparse i.e. all rewards are 0 except the terminal non-zero reward. Ideally I would not use any reward engineering as that would lead to a different optimization problem.
+- Actions are continuous. Discretization should not be used.
+- The amount of stochasticity in the environment is very high i.e. for a fixed deterministic policy the variance of returns is very high.
+
+
+More specifically, the RL agent represents the investor, the terminal reward represents the utility of the terminal wealth (hence the sparsity), actions represent portfolio positions (hence the continuity) and the environment represents the financial market (hence the high stochasticity).
+
+I've been trying to use DDPG with a set of ""commonly used"" hyperparameters (as I have no idea how to tune them besides experimenting, which lasts too long), but so far (after 10000 episodes) it seems that nothing is happening.
+
+
+
+My questions are the following:
+
+
+- Given the nature of the problem I'm trying to solve (sparse rewards, continuous actions, stochasticity) is there a particular (D)RL algorithm that would lend itself well to it?
+- How likely is it that DDPG simply won't converge to a reasonable solution (due to the peculiarities of the problem itself) no matter what set of hyperparameters I choose?
+
+"
+"['reinforcement-learning', 'tensorflow', 'training', 'environment']"," Title: Why does this tutorial on reinforced learning not check whether the environment is 'game over' during training?Body: I am following the tutorial Train a Deep Q Network with TF-Agents. It uses the hello world
environment of reinforced learning: cart pole.
+
+At the end, the agent is trained with experience on the training environment (train_env). When performing an action in the environment, a time_step is returned containing the observation, the reward, and whether the environment signals the end, which basically says 'game over'. This can be due to the pole reaching an angle which is too high, or to 200 time steps having been reached (which is the max score, at least in the tutorial).
+
+When the compute_avg_return method evaluates an agent's performance on an environment, the environment is checked for being game over using time_step.is_last.
+
+Why is time_step.is_last not considered when training the agent at the end of the tutorial? Nor do I see that the environment is reset during training; at least, I do not see it in the code presented. Is it checked internally? Looking at the graph, it never goes over an average return (the score) of 200 time steps, so it does seem to check for time_step.is_last. Do I overlook something, or how does this work?
+
+See the code block below. I would expect the check for time_step.is_last after collect_step(train_env, agent.collect_policy, replay_buffer), which would be followed by resetting the environment if it was true.
+
+# Reset the train step
+agent.train_step_counter.assign(0)
+
+# Evaluate the agent's policy once before training.
+avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
+returns = [avg_return]
+
+for _ in range(num_iterations):
+
+ # Collect a few steps using collect_policy and save to the replay buffer.
+ for _ in range(collect_steps_per_iteration):
+ collect_step(train_env, agent.collect_policy, replay_buffer)
+
+ # Sample a batch of data from the buffer and update the agent's network.
+ experience, unused_info = next(iterator)
+ train_loss = agent.train(experience).loss
+
+ step = agent.train_step_counter.numpy()
+
+ if step % log_interval == 0:
+ print('step = {0}: loss = {1}'.format(step, train_loss))
+
+ if step % eval_interval == 0:
+ avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
+ print('step = {0}: Average Return = {1}'.format(step, avg_return))
+ returns.append(avg_return)
+
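+What I expected to see somewhere in the collection loop is something along these lines (my guess, not taken from the tutorial):
+
+time_step = train_env.current_time_step()
+if time_step.is_last():
+    train_env.reset()
+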
+"
+"['reinforcement-learning', 'deep-rl', 'monte-carlo-tree-search']"," Title: Why AlphaGo didn't use Deep Q-Learning?Body: In the previous research, in 2015, Deep Q-Learning shows its great performance on single player Atari Games. But why do AlphaGo's researchers use CNN + MCTS instead of Deep Q-Learning? is that because Deep Q-Learning somehow is not suitable for Go?
+"
+"['reinforcement-learning', 'monte-carlo-methods', 'off-policy-methods']"," Title: Understanding the W term in off policy monte carlo learningBody: In Sutton and Barto's RL textbook they included the following pseudocode for off policy Monte Carlo learning. I am a little confused, however, because to me it looks like the W term will become infinitely large after a couple thousand iterations (and this is exactly what happens when I implement the algorithm).
+
+For example, say that the MC algorithm always follows the behavioral policy for each episode (ignoring epsilon soft/greedy for example's sake). If the probability of the action specified by the policy is 0.9, then after 10,000 iterations W would have a value of 1.11^10,000. I understand that the ratio of W to C(a,s) is what matters, however this ratio cannot be computed once W becomes infinite. Clearly I am misunderstanding something.
+
+
+"
+"['natural-language-processing', 'similarity']"," Title: Levenshtein Distance between each word in a given stringBody: From Calculate Levenshtein distance between two strings in Python it is possible to calculate distance and similarity between two given strings(sentences).
+
+And from Levenshtein Distance and Text Similarity in Python to return the matrix for each character and distance for two strings.
+
+Are there any ways to calculate distance and similarity between each word in a string and print the matrix for each word in a string(sentences)?
+
+a = ""This is a dog.""
+b = ""This is a cat.""
+
+import numpy as np
+from difflib import ndiff
+
+def levenshtein(seq1, seq2):
+ size_x = len(seq1) + 1
+ size_y = len(seq2) + 1
+ matrix = np.zeros ((size_x, size_y))
+ for x in range(size_x):
+ matrix [x, 0] = x
+ for y in range(size_y):
+ matrix [0, y] = y
+
+ for x in range(1, size_x):
+ for y in range(1, size_y):
+ if seq1[x-1] == seq2[y-1]:
+ matrix [x,y] = min(
+ matrix[x-1, y] + 1,
+ matrix[x-1, y-1],
+ matrix[x, y-1] + 1
+ )
+ else:
+ matrix [x,y] = min(
+ matrix[x-1,y] + 1,
+ matrix[x-1,y-1] + 1,
+ matrix[x,y-1] + 1
+ )
+ print (matrix)
+ return (matrix[size_x - 1, size_y - 1])
+
+levenshtein(a, b)
+
+
+Outputs
+
+>> 3
+
+
+Matrix
+
+[[ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14.]
+ [ 1. 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13.]
+ [ 2. 1. 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12.]
+ [ 3. 2. 1. 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11.]
+ [ 4. 3. 2. 1. 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.]
+ [ 5. 4. 3. 2. 1. 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
+ [ 6. 5. 4. 3. 2. 1. 0. 1. 2. 3. 4. 5. 6. 7. 8.]
+ [ 7. 6. 5. 4. 3. 2. 1. 0. 1. 2. 3. 4. 5. 6. 7.]
+ [ 8. 7. 6. 5. 4. 3. 2. 1. 0. 1. 2. 3. 4. 5. 6.]
+ [ 9. 8. 7. 6. 5. 4. 3. 2. 1. 0. 1. 2. 3. 4. 5.]
+ [10. 9. 8. 7. 6. 5. 4. 3. 2. 1. 0. 1. 2. 3. 4.]
+ [11. 10. 9. 8. 7. 6. 5. 4. 3. 2. 1. 1. 2. 3. 4.]
+ [12. 11. 10. 9. 8. 7. 6. 5. 4. 3. 2. 2. 2. 3. 4.]
+ [13. 12. 11. 10. 9. 8. 7. 6. 5. 4. 3. 3. 3. 3. 4.]
+ [14. 13. 12. 11. 10. 9. 8. 7. 6. 5. 4. 4. 4. 4. 3.]]
+
+
+General Levenshtein distance for character level shown in below fig.
+
+
+
+ Is it possible to calculate Levenshtein Distance for Word Level?
+
+
+Required Matrix
+
+ This is a cat
+
+This
+is
+a
+dog
+
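+A minimal sketch of what I mean by 'word level' (my own attempt, shown only to clarify the question) would be to run the same DP over word tokens instead of characters:
+
+a_words = a.split()
+b_words = b.split()
+print(levenshtein(a_words, b_words))   # the function above already works on any sequence; this should give 1.0
+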
+"
+"['natural-language-processing', 'gpt', 'fine-tuning', 'gpt-2']"," Title: GPT-2: (Hardware) requirements for fine-tuning the 774M modelBody: I wonder if there's anyone who has actually succeeded in fine-tuning GPT-2's 774M model without using cloud TPU's. My GeForce RTX 2070 SUPER couldn't handle it in previous attempts.
+I'm running TensorFlow 1.14.0 with CUDA V 9.1 on Ubuntu 18.04. For fine-tuning I'm using gpt-2-simple.
+When fine-tuning using the 774M model, I keep running into OOM errors, such as:
+W tensorflow/core/common_runtime/bfc_allocator.cc:314] Allocator (GPU_0_bfc) ran out of memory trying to allocate 6.25MiB (rounded to 6553600). Current allocation summary follows.
+So far I've tried:
+
+- Using a different optimizer (RMSPropOptimizer instead of AdamOptimizer)
+- Setting batch-size to 1
+- use_memory_saving_gradients
+- only_train_transformer_layers (see the call sketch below)
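+
+For reference, the call I am running looks roughly like this (steps and paths are placeholders; flag names as I understand gpt-2-simple, so please correct me if they are wrong):
+
+import gpt_2_simple as gpt2
+
+sess = gpt2.start_tf_sess()
+gpt2.finetune(sess,
+              dataset='corpus.txt',        # placeholder path to my training text
+              model_name='774M',
+              batch_size=1,
+              steps=1000,
+              use_memory_saving_gradients=True,
+              only_train_transformer_layers=True)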
+
+Fine-tuning works smoothly on the 355M model.
+So what I'm really asking is:
+
+- is it possible to fine-tune GPT-2's 774M model without industrial-sized hardware?
+- if so, please tell me about your successful attempts
+- apart from hardware-recommendations, how could fine-tuning be optimized to make 774M fit in memory?
+
+"
+"['neural-networks', 'hyperparameter-optimization', 'hyper-parameters']"," Title: How to know if the hyperparameters of a neural network relate to each other?Body: According this thread some hyperparameters are independent from each other while some are directly related.
+
+One of the answers give an example where two hyperparameters affect each other.
+
+
+ For example, if you're using stochastic gradient descent (that is, you train your model one example at a time), you probably do not want to update the parameters of your model too fast (that is, you probably do not want a high learning rate), given that a single training example is unlikely to be able to give the error signal that is able to update the parameters in the appropriate direction (that is, the global or even local optimum of the loss function).
+
+
+How would someone creating a neural network know how the hyperparameters affect each other?
+
+In other words, what are the heuristics for hyperparameter selection when trying to build a robust model?
+"
+"['reinforcement-learning', 'ai-design', 'dqn', 'policy-gradients']"," Title: How can I design a DQN or policy gradient model to explore and collect all optimal solutions?Body: I am working to use DQN and Policy Gradient reinforcement learning models to solve classic maze escaping problems.
+
+So far, I have been able to train a model, which, after around 100 episodes, quickly explored ONE optimal solution to escape mazes.
+
+However, it is easy to see that for many maze designs, the optimal solutions could be multiple, and I would like to take one step further to collect all optimal and distinguishable solutions.
+
+However, I tried some searches online and, so far, the only material I can find is this Learning Diverse Skills. But this seems like an obstacle to me. I somewhat believe this is a classic (?) and simpler problem that should be addressed in textbooks.
+
+Could someone shed light on this matter?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'supervised-learning']"," Title: Is radial basis function network appropriate for small datasets?Body: I'm a computer engineering student and I'm about to work on my master thesis. My professor gave me a small dataset with brain Computed Axial Tomography records. I would like to use deep learning to help doctors diagnose a certain disease (obviously, I've also got the data for doing supervised learning).
+
+Since the dataset is small, is a radial basis function network a good solution? What do you think?
+
+Btw, if you have any tips on using the RBF network for this kind of project, I would be really grateful.
+"
+"['reinforcement-learning', 'deep-rl', 'pytorch', 'actor-critic-methods', 'notation']"," Title: What does the notation $\partial \theta_{\pi}$ mean in this actor-critic update rule?Body: One of the steps in the actor-critic algorithm is $$\partial \theta_{\pi} \gets \partial \theta_{\pi} + \nabla_{\theta}\log\pi_{\theta} (a_i | s_i) (R - V_{\theta}(s_i))$$
+
+For me, $\theta$ are just the weights. Can you explain to me what $\partial \theta_{\pi}$ means?
+
+The whole algorithm comes from Maxim Lapan's book Deep Reinforcement Learning Hands-On, page 269.
+
+Here is a picture of the algorithm :
+
+
+"
+"['comparison', 'search', 'hill-climbing', 'best-first-search']"," Title: What is the difference between hill-climbing and greedy best-first search algorithms?Body: While watching MIT's lectures about search, 4. Search: Depth-First, Hill Climbing, Beam, the professor explains the hill-climbing search in a way that is similar to the best-first search. At around the 35 mins mark, the professor enqueues the paths in a way similar to greedy best-first search in which they are sorted, and the closer nodes expanded first.
+
+However, I have read elsewhere that hill climbing is different from the best first search. What's the difference between the two then?
+"
+"['neural-networks', 'machine-learning', 'backpropagation', 'math']"," Title: Why are all weights of a neural net updated and not just the weights of the first layerBody: Why are all weights of a neural net updated and not only the weights of the first hidden layer?
+
+The influence of each weight of a neural net on the prediction error is calculated using the chain rule. However, the chain rule tells us how the first variable influences the second variable, and so on. Following that logic, we should only update the weights of the first hidden layer. My thought is that, if we backtrack the influence of the first variable but also change the values of the subsequent weights (of the subsequent hidden layers), there is no need to calculate the influence of the first weights in the first place. Where am I wrong?
+"
+"['neural-networks', 'tensorflow', 'training', 'objective-functions', 'optimization']"," Title: Can neural networks handle redundant inputs?Body: I have a fully connected neural network with the following number of neurons in each layer [4, 20, 20, 20, ..., 1]
. I am using TensorFlow and the 4 real-valued inputs correspond to a particular point in space and time, i.e. (x, y, z, t), and the 1 real-valued output corresponds to the temperature at that point. The loss function is just the mean square error between my predicted temperature and the actual temperature at that point in (x, y, z, t). I have a set of training data points with the following structure for their inputs:
+
+
+
+(x,y,z,t):
+
+(0.11,0.12,1.00,0.41)
+
+(0.34,0.43,1.00,0.92)
+
+(0.01,0.25,1.00,0.65)
+
+...
+
+(0.71,0.32,1.00,0.49)
+
+(0.31,0.22,1.00,0.01)
+
+(0.21,0.13,1.00,0.71)
+
+
+
+Namely, what you will notice is that the training data all have the same redundant value in z, but x, y, and t are generally not redundant. Yet what I find is that my neural network cannot train on this data due to the redundancy. In particular, every time I start training the neural network, it appears to fail and the loss function becomes nan. But, if I change the structure of the neural network such that the number of neurons in each layer is [3, 20, 20, 20, ..., 1], i.e. now data points only correspond to an input of (x, y, t), everything works perfectly and training is all right. But is there any way to overcome this problem? (Note: it occurs whichever of the variables is the redundant one, e.g. either x, y, or t could be redundant and cause this error.)
+
+My question: is there any way to still train the neural network while keeping the redundant z as an input? It just so happens that the particular training data set I am considering at the moment has all z values identical, but in general I will have data coming from different z in the future. Therefore, I am looking for a way to ensure the neural network can robustly handle such inputs.
+"
+"['reinforcement-learning', 'multi-agent-systems', 'distributed-computing']"," Title: Do I need to maintain a separate population in each distributed environment when implementing PBT in a MARL context?Body: I have questions regarding on how to implement PBT as described in Algorithm 1 (on page 5) in the paper, Population Based Training of Neural Networks to train agents in a MARL (multi-agent reinforcement learning) environment.
+
+In a single agent RL environment, the environments can be distributed & agents trained in parallel & there will be a need to maintain some sort of centralized population of weights & hyperparameters. (Please correct me if I'm wrong.)
+
+In a MARL context, do I need to also maintain a centralized population for all agents in all environments or do I need to maintain a separate population for agents in each distributed environment? Which is a correct or more effective approach?
+
+Any pointers would be appreciated. Thank you.
+"
+"['convolutional-neural-networks', 'geometric-deep-learning', 'graph-theory', 'similarity', 'graph-neural-networks']"," Title: How to estimate the convolutional representation of a graph from its similarity to other graph convolutional representation?Body: Suppose we have two graphs A and B disconnected to each other (let's say 2-hops each), within a larger graph. If the convolutional representation of graph A is known, is it possible to estimate the definitive convolutional representation of graph B based on its similarity to graph A?
+
+If yes, what do you think is the simplest (arithmetically) way to do this, which algorithm can help me to do this? You can assume that precision requirements are not important.
+"
+"['ai-design', 'game-ai', 'game-theory']"," Title: What should be a good playing strategy in this 2-player simultaneous game?Body: This is a bot making problem from here.
+I am detailing the problem.
+
+
+The picture above shows the initial configuration of the game. P1 represents player1 and P2 represents player2. A scotch bottle is kept(initially) at the position #5 on the number line. Both players start with 100 dollars in hand. Note that players don't move, only the bottle moves.
+
+Rules of the game:
+
+
+- The first player makes a secret bid followed by a secret bid by the second player.
+- The bottle moves one position closer to the winning bidder.
+- In case of a drawn bid, the winner is the player who has the draw advantage.
+- The draw advantage alternates between the two players, that is, the first draw is won by the first player, the second draw, if it occurs, is won by the second player, and so on.
+- The winning bid is deducted from the player's hand, the loser keeps his bid.
+- The bottle moves one position closer to the winning bidder.
+- Each bid must be greater than 0 dollar. In the case when there's no money left, the player has no choice but to bid 0 dollar. Only integral bids are allowed.
+
+
+The player who gets the bottle wins. If no one gets it, the game ends in a draw.
+
+Both the players,thus,have complete knowledge of the history of biddings of each other, and the location of bottle at the current time.
+
+So far, I know this is an instance of poorman bidding games. I have used some strategies, like intentionally losing some bids and letting the opponent spend his money, in the hope that the difference in money increases to the point of allowing a winning strategy to emerge. Also, I pull the bottle back harder as it moves further away. This isn't performing well against other bots.
+
+What should be the strategy of a bot playing this game?
+"
+"['machine-learning', 'classification', 'terminology', 'time-series']"," Title: Does it classify as Machine Learning?Body: I have a gaussian distributed time series ($X_t$) with some parameters in my experiment. Suppose I want to know the mean $\mu$. If I define another time series $Y_t$ such that $Y_t=X_t-a$ for all $t$. Now say I vary this parameter $a$ and generate altogether different time series for each $a$, say $Y_t(a)$. I look at the mean of $Y_t$ for each $a$. The value of a, where I get the mean of $Y_t$ closest to $0$, will be my estimate of $\mu$. Say I will eventually use this learnt value of $\mu$ to generate $Y_t$ as my final goal. Can this be called ML? I am using some training data of $X_t$ to learn about its parameter and then using test data of $X_t$ to generate $Y_t$.
+Now why am I working so hard on this simple problem? Well, actually I am not. I am doing something else, which will have lots of parameters in the time series and will be used to generate other time series after similar parameter extraction. That will be too complicated to discuss here. I just wanted to clear my basics using an over-simplified example.
+"
+"['reinforcement-learning', 'python', 'policy-gradients']"," Title: Subtracting the entropy from our policy gradient will prevent our agent from being stuck in the local minimum?Body:
+ In the information theory, the entropy is a measure of uncertainty in
+ some system. Being applied to agent policy, entropy shows how much the
+ agent is uncertain about which action to make. In math notation,
+ entropy of the policy is defined as : $$H(\pi) = -\sum \pi(a|s) \log
+ \pi(a|s)$$ The value of entropy is always greater than zero and has a
+ single maximum when the policy is uniform. In other words, all actions
+ have the same probability. Entropy becomes minimal when our policy has
+ 1 for some action and 0 for all others, which means that the agent is
+ absolutely sure what to do. To prevent our agent from being stuck in
+ the local minimum, we are subtracting the entropy from the loss
+ function, punishing the agent for being too certain about the action
+ to take.
+
+
+The above excerpt is from Maxim Lapan in the book Deep Reinforcement Learning Hands-on page 254.
+
+In code, it might look like :
+
+ optimizer.zero_grad()
+ logits= PG_network(batch_states_ts)
+ log_prob = F.log_softmax(logits, dim=1)
+ log_prob_actions = batch_scales_ts * log_prob[range(params[""batch_size""]), batch_actions_ts]
+ loss_policy = -log_prob_actions.mean()
+
+ prob = F.softmax(logits, dim=1)
+ entropy = -(prob * log_prob).sum(dim=1).mean()
+ entropy_loss = params[""entropy_beta""] * entropy
+ loss = loss_policy - entropy_loss
+
+
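+For intuition on the two extremes mentioned in the quote, here is a quick check with the same softmax / log-softmax calls (4 actions, made-up logits):
+
+import torch
+import torch.nn.functional as F
+
+for logits in (torch.zeros(1, 4),                          # uniform policy
+               torch.tensor([[10., -10., -10., -10.]])):   # near-deterministic policy
+    prob = F.softmax(logits, dim=1)
+    log_prob = F.log_softmax(logits, dim=1)
+    entropy = -(prob * log_prob).sum(dim=1).mean()
+    print(entropy.item())  # ~1.386 (= ln 4) for uniform, ~0 for near-deterministic
+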
+I know that a disadvantage of using policy gradients is that our agent can get stuck in a local minimum. Can you explain mathematically why subtracting the entropy from our loss will prevent our agent from being stuck in a local minimum?
+"
+"['reinforcement-learning', 'terminology', 'temporal-difference-methods', 'environment']"," Title: What are episodic and non-episodic domains in reinforcement learning?Body: I was reading about the temporal difference (TD) learning and I read that:
+
+
+ TD handles continuing, non-episodic domains
+
+
+Assuming that continuing means non-terminating, what does non-episodic or episodic domain mean?
+"
+"['reinforcement-learning', 'definitions', 'value-iteration', 'policy-iteration']"," Title: What is generalized policy iteration?Body: I am reading Sutton and Barto's material now. I know value iteration, which is an iterative algorithm taking the maximum value of adjacent states, and policy iteration. But what is generalized policy iteration?
+"
+"['reinforcement-learning', 'proofs', 'monte-carlo-methods', 'policy-iteration', 'policy-improvement']"," Title: Monte Carlo epsilon-greedy Policy Iteration: monotonic improvement for all cases or for the expected value?Body: I was going through university slides and this particular slide is trying to prove that in a Monte Carlo Policy Iteration algorithm using an epsilon-greedy policy, the state Values (V-Values) are monotonically improving.
+
+
+
+My question is about the first line of computation.
+
+
+
+Isn't this actually the formula for the expected value of Q? It is calculating a probability of occurrence following the policy times actual Q values, then doing the summation.
+
+If that is the case, could you help me understand the relationship between the expected value of Q and the expected value of V ?
+
+Also, if above is true, in a real world scenario, depending on how many episodes we sample and on stochasticity, does it mean that the V values of the new policy could be worse than the V values of the old policy ?
+"
+"['neural-networks', 'ai-basics', 'activation-functions', 'hidden-layers', 'weights']"," Title: How are non-linear surfaces formed in the training of a neural network?Body: Desperate trying to understand something for couple of weeks. All those questions are actually one big question.Please help me. Time-codes and screens in my question refer to this great(IMHO) 3d explanation:
+
+https://www.youtube.com/watch?v=UojVVG4PAG0&list=PLVZqlMpoM6kaJX_2lLKjEhWI0NlqHfqzp&index=2
+
+....
+Here is the case: Say I have 2 inputs (let's call them X1 and X2) into my ANN. Say X1 = person's age and X2 = years of education.
+
+1) First question: do I plug those numbers as is or normalize them 0-1 as a ""preprocessing""?
+
+2) As I have 2 weights and 1 bias, I am actually going to plug my inputs into the formula X1*W1+X2*W2=output. This is a 2D plane in a 3D space, if I am not mistaken (time-code 5:31):
+
+
+Thus, when I plug in my variables, like in regression, I will get my output on the Z axis. So the second question is: am I right up to here?
+
+-----------------From here come real important couple of questions.
+
+3) My output (before I plug it into the activation function) is just a simple number; IT IS NOT A PLANE and NOT A SURFACE, but a simple scalar, without any sign on it that it came from a 2D surface in a 3D space (though it does come from there). Thus, when I plug this number (which was the Z value in the previous step) into the activation function (say a sigmoid), my number enters there on the X axis, and we get some Y value as an output. As I understand it, this operation was a totally 2D operation; it was a 2D sigmoid and not some kind of 3D sigmoidal surface.
+
+So here is the question: If I am right, why do we see in this movie (and couple of other places) such an explanation?
+(time-code 12:55):
+
+4) Now let's say that I was right in the previous step, and as an output from the activation function I do get a simple number, not a 2D surface and not a 3D one. I just have some number, like I had at the very beginning of the ANN as an input (age, education, etc.). If I want to add another layer of neurons, this very number enters it as is, not telling anyone the ""secret"" that it was created by some kind of sigmoid. In this next layer this number is about to undergo similar transformations to what happened to age and education in the previous layer; it is going to be Xn in just the same scenario: sigmoid(XnWn+XmWm=output), and in the end we will get once again just a number. If I am right, why do they say in the movie (time-code 14:50) that when we add together two activation functions we get something non-linear? They show the result of such ""addition"" first as 2D (time-codes 14:50 and 14:58).
+
+So here comes my question: how can they ""add"" two activation functions, if what reaches the second activation function is just a simple number which, as said above, is not telling anyone the ""secret"" that it was created by some kind of sigmoid?
+
+5) And then again, they show this addition of 3d surfaces (time-code 19:39 )
+
+How is it possible? I mean, again, there should be no addition of surfaces, because no surface passes to the next step, only a number. What am I missing?
+"
+"['reinforcement-learning', 'convolutional-neural-networks', 'policy-gradients', 'reinforce']"," Title: How can I sample the output distribution multiple times when pruning the filters with reinforcement learning?Body: I was reading the paper Learning to Prune Filters in Convolutional Neural Networks, which is about pruning the CNN filters using reinforcement learning (policy gradient). The paper says that the input for the pruning agent (the agent is a convolutional neural network) is a 2D array of shape (N_l, M_l)
+, where N_l is the number of filters and M_l = m x h x w (m, h and w are the filter dimensions), and the output is an array of actions (each element is 0 (unnecessary filter) or 1 (necessary)). It says that, in order to approximate the gradients, we have to sample the output M times (using the REINFORCE algorithm).
+
+Since I have one input, how can I sample the output distribution multiple times (without updating the CNN parameters)?
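+
+To make the question concrete, here is a minimal sketch (PyTorch, made-up sizes) of what I mean by sampling the same output distribution M times without updating any parameters in between:
+
+import torch
+
+M = 5                    # number of samples per input (made up)
+N_l = 64                 # number of filters in the layer (made up)
+
+# Stand-in for the pruning agent's output: a keep-probability per filter.
+probs = torch.rand(N_l)
+
+with torch.no_grad():    # no parameter update while sampling
+    # Each row is one sampled action vector: 1 = keep the filter, 0 = prune it.
+    actions = torch.bernoulli(probs.repeat(M, 1))
+
+print(actions.shape)     # torch.Size([5, 64])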
+
+If I'm missing something, please, tell me where I'm wrong
+"
+"['machine-learning', 'comparison', 'computational-learning-theory', 'pac-learning', 'statistics']"," Title: What is the relationship between PAC learning and classic parameter estimation theorems?Body: What are the differences and similarities between PAC learning and classic parameter estimation theorems (e.g. consistency results when estimating parameters, e.g. with MLE)?
+"
+"['machine-learning', 'deep-learning', 'reinforcement-learning', 'math', 'deep-rl']"," Title: How can a single sample represent the expectation in gradient temporal difference learning?Body: I was reading the gradient temporal difference learning version 2(GTD2) from rich Sutton's book page-246. At some point, he expressed the whole expectation using a single sample from the environment. But how a single sample can represent the whole expectation.
+
+I marked this point in this image.
+
+
+"
+"['algorithm', 'automation', 'inference']"," Title: Algorithm which learns to select from proposed optionsBody: My goal is to write a program that automatically selects a routing out of multiple proposed options.
+
+The data consists of the multiple proposed options, each with the attributes time, costs and whether there is a transhipment, and also which of the options was selected.
+
+Example of data:
+
+
+
+My idea at the moment is that I have to apply some type of inference to learn which attribute (time, costs, transhipment) has the highest impact on how to choose the best option. But I don't know exactly where to start with this.
+
+Is there a ""best"" ML algorithm for this? Or how should I approach this?
+
+The dataset currently consists of 1000 samples, in case this is important.
+
+Thanks in advance for your responses.
+"
+"['objective-functions', 'dqn']"," Title: How does the DQN loss from td_targets against q_values make sense?Body: Why td_loss
is calculated from (td_targets
against q_values
)?
+
+Why I am lost is because:
+
+
+- q_values is just the probability of an action. It does not include a reward or a discount.
+- td_targets does include rewards + discounts * next_q_values. Moreover, next_q_values comes from the next state.
+
+
+How can both td_targets and q_values be subtracted (or put through a Huber or MSE loss) to get a working loss?
+
+td_error = valid_mask * (td_targets - q_values)
+td_loss = valid_mask * td_errors_loss_fn(td_targets, q_values)
+
+
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'computer-vision', 'architecture']"," Title: Why are denser layers needed in computer vision neural nets?Body: Many neural net architectures for computer vision tasks use several convolutional layers and then several fully-connected (or dense) layers. While the reasons for using convolutional layers are clear to me, I don't understand why the dense layers are needed. Can't high accuracy be achieved with only convolutional layers?
+"
+"['machine-learning', 'quantum-computing']"," Title: Are there any novel quantum machine learning algorithms that are fundamentally different from ""classical"" ones?Body: Generally, if one googles ""quantum machine learning"" or anything similar the general gist of the results is that quantum computing will greatly speed up the learning process of our ""classical"" machine learning algorithms. However, ""speed up"" itself does not seem very appealing to me as the current leaps made in AI/ML are generally due to novel architectures or methods, not faster training.
+
+Are there any quantum machine learning methods in development that are fundamentally different from ""classical"" methods? By this I mean that these methods are (almost*) impossible to perform on ""classical"" computers.
+
+*except for simulation of the quantum computer of course
+"
+"['python', 'genetic-algorithms']"," Title: Genetic Algorithm Python Snake not improvingBody: So, i have created Snake game using Pygame and Python. Then i wanted to create an AI with Genetic algorithm and a simple NN to play it. Seems pretty fun, but things aren't working out.
+
+This is my genetic algorithm:
+
+import random
+
+import numpy
+
+import config  # project module that holds the GA / NN hyperparameters used below
+# (Fitness, used in calculate_fitness, is the project class shown further down)
+
+
+def calculate_fitness(population):
+ """"""Calculate the fitness value for the entire population of the generation.""""""
+ # First we create all_fit, an empty array, at the start. Then we proceed to start the chromosome x and we will
+ # calculate his fit_value. Then we will insert, inside the all_fit array, all the fit_values for each chromosome
+ # of the population and return the array
+ all_fit = []
+ for i in range(len(population)):
+ fit_value = Fitness().fitness(population[i])
+ all_fit.append(fit_value)
+ return all_fit
+
+
+def select_best_individuals(population, fitness):
+ """"""Select X number of best parents based on their fitness score.""""""
+ # Create an empty array of the size of number_parents_crossover and the shape of the weights
+ # after that we need to create an array with x number of the best parents, where x is NUMBER_PARENTS_CROSSOVER
+ # inside config file. Then we search for the fittest parents inside the fitness array created by the
+ # calculate_fitness function. Numpy.where return (array([], dtype=int64),) that satisfy the query, so we
+ # take only the first element of the array and then it's value (the index inside fitness array). After we have
+ # the index of the element we just need to take all the weights of that chromosome and insert them as a new
+ # parent. Finally we change the fitness value of the fitness value of that chromosome inside the fitness
+ # array in order to have all different parents and not only the fittest
+ parents = numpy.empty((config.NUMBER_PARENTS_CROSSOVER, population.shape[1]))
+ for parent_num in range(config.NUMBER_PARENTS_CROSSOVER):
+ index_fittest = numpy.where(fitness == numpy.max(fitness))
+ index_fittest = index_fittest[0][0]
+ parents[parent_num, :] = population[index_fittest, :]
+ fitness[index_fittest] = -99999
+ return parents
+
+
+def crossover(parents, offspring_size):
+ """"""Create a crossover of the best parents.""""""
+ # First we start by creating and empty array with the size equal to offspring_size we want. The type of the
+ # array is [ [Index, Weights[]] ]. If the parents size is only 1 than we can't make crossover and we return
+ # the parent itself, otherwise we select 2 random parents and then mix their weights based on a probability
+ offspring = numpy.empty(offspring_size)
+ if parents.shape[0] == 1:
+ offspring = parents
+ else:
+ for offspring_index in range(offspring_size[0]):
+ while True:
+ index_parent_1 = random.randint(0, parents.shape[0] - 1)
+ index_parent_2 = random.randint(0, parents.shape[0] - 1)
+ if index_parent_1 != index_parent_2:
+ for weight_index in range(offspring_size[1]):
+ if random.uniform(0, 1) < 0.5:
+ offspring[offspring_index, weight_index] = parents[index_parent_1, weight_index]
+ else:
+ offspring[offspring_index, weight_index] = parents[index_parent_2, weight_index]
+ break
+ return offspring
+
+
+def mutation(offspring_crossover):
+ """"""Mutating the offsprings generated from crossover to maintain variation in the population.""""""
+ # We cycle though the offspring_crossover population and we change x random weights, where x is a parameter
+ # inside the config file. We select a random index, generate a random value between -1 and 1 and then
+ # we sum the original weight with the random_value, so that we have a variation inside the population
+ for offspring_index in range(offspring_crossover.shape[0]):
+ for _ in range(offspring_crossover.shape[1]):
+ if random.uniform(0, 1) < config.MUTATION_PERCENTAGE:  # '<' rather than '==': a random float virtually never equals the threshold exactly
+ index = random.randint(0, offspring_crossover.shape[1] - 1)
+ random_value = numpy.random.choice(numpy.arange(-1, 1, step=0.001), size=1, replace=False)
+ offspring_crossover[offspring_index, index] = offspring_crossover[offspring_index, index] + random_value
+ return offspring_crossover
+
+
+My neural network is formed using 7 inputs:
+
+is_left_blocked, is_front_blocked, is_right_blocked, apple_direction_vector_normalized_x,
+snake_direction_vector_normalized_x, apple_direction_vector_normalized_y,snake_direction_vector_normalized_y
+
+
+Basically if you can go left, front, right, direction to the apple and snake direction.
+Then i have an hidden layer with 8 neurons and finally 3 output that indicate left, keep going or right.
+
+The Neural Network forward() is calculate like this:
+
+self.get_weights_from_encoded()
+Z1 = numpy.matmul(self.__W1, self.__input_values.T)
+A1 = numpy.tanh(Z1)
+Z2 = numpy.matmul(self.__W2, A1)
+A2 = self.sigmoid(Z2)
+A2 = self.softmax(A2)
+return A2
+
+
+where self.__W1 and self.__W2 are the weights from input to hidden layer and then the weights from hidden layer to the output. Softmax(A2) return the index of the matrix[1,3] where the value is the biggest, then i use that index to indicate the direction that my neural network choose.
+
+This is the config file that contains the parameters:
+
+# GENETIC ALGORITHM
+NUMBER_OF_POPULATION = 500
+NUMBER_OF_GENERATION = 200
+NUMBER_PARENTS_CROSSOVER = 50
+MUTATION_PERCENTAGE = 0.2
+
+# NEURAL NETWORK
+INPUT = 7
+NEURONS_HIDDEN_1 = 8
+OUTPUT = 3
+NUMBER_WEIGHTS = INPUT * NEURONS_HIDDEN_1 + NEURONS_HIDDEN_1 * OUTPUT
+
+
+And this is the main:
+
+for generation in range(config.NUMBER_OF_GENERATION):
+
+ snakes_fitness = genetic_algorithm.calculate_fitness(population)
+
+ # Selecting the best parents in the population.
+ parents = genetic_algorithm.select_best_individuals(population, snakes_fitness)
+
+ # Generating next generation using crossover.
+ offspring_crossover = genetic_algorithm.crossover(parents,
+ offspring_size=(pop_size[0] - parents.shape[0], config.NUMBER_WEIGHTS))
+
+ # Adding some variations to the offspring using mutation.
+ offspring_mutation = genetic_algorithm.mutation(offspring_crossover)
+
+ # Creating the new population based on the parents and offspring.
+ population[0:parents.shape[0], :] = parents
+ population[parents.shape[0]:, :] = offspring_mutation
+
+
+I have 2 problems:
+
+1) I don't see an improvement over the new generations
+
+2) I'm actually running the game inside the for loop, but waiting for all the snakes of a generation to die before repeating with the new one is really time consuming. Isn't there a way to launch all, or at least more than one, instance of the game and keep filling the array with the results?
+
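+For problem 2, what I have in mind is something like this sketch (assuming the game can run headless and that the fitness call can be pickled for multiprocessing; I have not verified either):
+
+from multiprocessing import Pool
+
+def evaluate(chromosome):
+    # Runs one full game for a single chromosome and returns its score.
+    return Fitness().fitness(chromosome)
+
+def calculate_fitness_parallel(population, workers=4):
+    # Evaluate several chromosomes at the same time instead of one after another.
+    with Pool(processes=workers) as pool:
+        return pool.map(evaluate, list(population))
+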
+This is Fitness().fitness(population[i])
+
+def fitness(self, weights):
+ game_manager = GameManager(weights)
+ self.__score = game_manager.play_game()
+ return self.__score
+
+
+This is where it's called inside the for loop
+
+def calculate_fitness(population):
+ """"""Calculate the fitness value for the entire population of the generation.""""""
+ # First we create all_fit, an empty array, at the start. Then we proceed to start the chromosome x and we will
+ # calculate his fit_value. Then we will insert, inside the all_fit array, all the fit_values for each chromosome
+ # of the population and return the array
+ all_fit = []
+ for i in range(len(population)):
+ fit_value = Fitness().fitness(population[i])
+ all_fit.append(fit_value)
+ return all_fit
+
+
+This the function that launch the game (GameManager(weights)) and return the score of the snake.
+
+This is my first time working on AI, so this code could be a mess. Don't worry about pointing out what I did wrong, just please don't say ""It's all wrong"", because I won't be able to learn otherwise.
+"
+"['computer-vision', 'image-processing', 'fourier-transform']"," Title: What does the Fourier transformed image mean?Body: I have been trying to figure out what the Fourier transformed image represents. I am aware of Fourier transformation in general, but I can't explain myself the image it forms after transformation.
+
+In the given image, what do the outlined white lines mean?
+
+
+"
+"['intelligent-agent', 'goal-based-agents', 'utility-based-agents', 'dijkstras-algorithm', 'prims-algorithm']"," Title: What types of AI agents are Djikstra's algorithm and Prim's Minimum Spanning Tree algorithm?Body: From the perspective of the type of AI Agents, I would like to discuss Prim's Minimum Spanning Tree algorithm and Dijkstra's Algorithm.
+Both are model-based agents and both are "greedy algorithms".
+Both have their memory to store the history of vertices and their path distance. Prim's is more greedy than Dijkstra's algorithm whereas Dijkstra's algorithm is more efficient than Prim's.
+Can we say that Dijkstra's algorithm is a utility-based agent, whereas Prim's is a goal-based agent, with the justification that Prim's is more goal-oriented as compared to finding the optimum (shortest) path?
+"
+"['reinforcement-learning', 'math', 'policy-gradients', 'calculus']"," Title: How is the log-derivative trick of a trajectory derived?Body: I am looking at this formula which breaks down the gradient of $P(\tau |\theta)$ the first part is clear as is the derivative of $\log(x)$, but I do not see how the first formula is rearranged into the second.
+
+
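+(For context, in case the screenshot does not load: the step in question appears to be the standard log-derivative identity
+$$\nabla_{\theta} P(\tau \mid \theta) = P(\tau \mid \theta)\, \nabla_{\theta} \log P(\tau \mid \theta),$$
+which follows from rearranging $\nabla_{\theta} \log P(\tau \mid \theta) = \frac{\nabla_{\theta} P(\tau \mid \theta)}{P(\tau \mid \theta)}$.)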
+"
+"['reinforcement-learning', 'comparison', 'off-policy-methods', 'sarsa', 'on-policy-methods']"," Title: What are the differences between 1-step SARSA and SARSA?Body: SARSA is on-policy, while n-step SARSA is off-policy. But when n = 1, is it like an off-policy version of SARSA? Any similarity and difference between 1-step SARSA and SARSA?
+"
+"['ai-design', 'fuzzy-logic']"," Title: How do we determine the membership functions and values for this problem?Body: The goal of our project is to identify the quality of a grain and we have two values, A
+and B.
+
+A can take the values low L, medium M and high H. And B can take the values of Low, Medium and High (L, M, H).
+Specifically, the ranges for A are -> 5-10 (low), 10-15 (medium), 15-20 (high).
+And the ranges for B are -> 50-75 (low), 75-90 (medium), 90-110 (high).
+
+The output for these is bad B, average A and good G.
+
+
+
+How do we determine the membership functions and values for this?
+
+We want to write Python code for the fuzzy system, but we are beginners and have no idea how to go about this. Any help would be appreciated.
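+
+As a starting point, this is the kind of thing we imagine (a sketch using the scikit-fuzzy library and triangular membership functions; we have no idea whether these are reasonable choices):
+
+import numpy as np
+import skfuzzy as fuzz
+
+# Universe of discourse for A (5 to 20) and candidate triangular membership functions.
+a_range = np.arange(5, 20.01, 0.1)
+a_low = fuzz.trimf(a_range, [5, 5, 10])        # fully 'low' at 5, fades out by 10
+a_medium = fuzz.trimf(a_range, [10, 12.5, 15])
+a_high = fuzz.trimf(a_range, [15, 20, 20])
+
+# Membership degrees of a concrete measurement, e.g. A = 11.
+print(fuzz.interp_membership(a_range, a_medium, 11))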
+"
+"['machine-learning', 'ai-design', 'datasets', 'supervised-learning']"," Title: How can I approximate a function that determines the priority of objects?Body: I am facing the following supervised learning problem:
+
+An object is fully characterized by its position in $R^n$. There are $m$ objects. They are fully observable (i.e. their positions are always known).
+
+At each time step $t$, exactly one of these objects is activated. Activation is fully observable, i.e. the index $a_t$ ($a_t \in [1,m]$) of the object activated at time $t$ is known.
+
+We know that, under the hood, activation works this way: there is a priority function $f$ ($f: R^n \to R$ ), which computes, for each time step, the priority score of each object. The object for which the priority score was the highest is activated.
+
+The goal is to find (the approximation of) one of the possible priority functions that would match a given data-set. A data-set is of size $(m*n+1)*t$ ($m$ positions of dimension $n$, plus the index of the activated object, over $t$ time steps).
+
+As an example, if it turns out there is a hidden fixed beacon, and at each time step $t$ the object the closest to the beacon is activated, then a possible function would be $f(o_{it})=1/d_{it}$, where $d_{it}$ is the distance between the beacon and the object $o_i$ at time $t$.
+
+(If several objects have the same highest priority score, then only one of them is activated, selected randomly).
+
+The function found by the algorithm may be parametric and encoded by a neural network, if this is applicable.
+
+Is there a method for finding one such function ?
+"
+"['convolutional-neural-networks', 'computer-vision', 'object-detection']"," Title: How come a detection works after global average pooling 2D?Body: I use an off-the-shelf convolutional neural network, where at the end of the convolutional part, the depth of the last convolutional layer is expanded and then its 2D average is computed (such that for a tensor of say 8x8x512, you get its 2D average, which is of 1x512). It is a commonly used operation in deep networks, called Global Average Pooling 2D.
+
+The only tensor that is input to the fully-connected part is that 2D averaged 1x512 tensor, i.e., a tensor that should not preserve the 2D information. Yet, my fully-connected last layer neurons, which have been trained to predict the 2D location of objects, work very well.
+
+I thought about it for a long time and couldn't find any convincing explanation how come the network preserved the 2D information in the averaged tensor.
+
+Any idea?
+"
+"['neural-networks', 'deep-learning', 'architecture']"," Title: Should batch-normalization/dropout/activation-function layers be used after the last fully connected layer?Body: I am using the following architechture:
+
+3*(fully connected -> batch normalization -> relu -> dropout) -> fully connected
+
+
+Should I add the batch normalization -> relu -> dropout
part after the last fully connected layer as well (the output is positive anyway, so the relu wouldn't hurt I suppose)?
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'filters']"," Title: What happens to the channels after the convolution layer?Body: I wonder what happens to the 'channels' dimension (usually 3 for RGB images) after the first convolution layer in CNNs?
+
+In books and other sources, it is always said that the depth of the output from convolutional layers is the number of kernels (filters) in that layer.
+
+But, if the input image has 3 channels and we convolve each of them with $K$ kernels, shouldn't the depth of the output be $K * 3$? Are they somehow 'averaged' or in other way combined with each other?
+"
+"['neural-networks', 'deep-learning', 'architecture']"," Title: Should neural nets be deeper the more complex the learning problem is?Body: I know it's not an exact science. But would you say that generally for more complicated tasks, deeper nets are required?
+"
+"['natural-language-processing', 'bert', 'pretrained-models']"," Title: How to use pre-trained BERT to extract the vectors from sentences?Body: I'm trying to extract the vectors from the sentences. Spent soo much time searching for the pre-trained BERT models but found nothing.
+
+Is it possible to get the vectors using pre-trained BERT from the data?
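+
+To be clear, something along these lines is what I am hoping for (a sketch assuming the Hugging Face transformers library; I have not verified the details):
+
+import torch
+from transformers import BertModel, BertTokenizer
+
+tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
+model = BertModel.from_pretrained('bert-base-uncased')
+model.eval()
+
+input_ids = tokenizer.encode('This is a sentence.', return_tensors='pt')
+with torch.no_grad():
+    last_hidden_state = model(input_ids)[0]          # shape: (1, num_tokens, 768)
+sentence_vector = last_hidden_state.mean(dim=1)      # one crude option: average the token vectors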
+
+"
+"['machine-learning', 'deep-learning', 'ai-design', 'computational-learning-theory']"," Title: A model for each sub-problem vs one model for the whole problemBody: Let's say one wants to use a neural net to learn some function $g(x)$. Let's say that we know that $g$ is a combination of two functions (or two sub-problems), $g(x)=f_2(f_1(x))$, and that we have two datasets
+
+
+- composed of $x$ samples and their corresponding $g(x)$ labels, and
+- composed of $x$ samples and their corresponding $f_1(x)$ labels.
+
+
+Should we use two nets, one to learn the mapping from $x$ samples to $f_1(x)$ using dataset 1 and another net to learn the mapping from $f_1(x)$ to $g(x)$ (note that we can build a dataset composed of $f_1(x)$ samples and $g(x)$ labels with the trained net), or just one net to learn mappings from $x$ to $g(x)$ using dataset 1?
+
+Intuitively, the first option seems to be better since we take advantage of our knowledge that $f_1$ is a ""sub-problem"" of $g$.
+"
+"['deep-learning', 'models', 'hidden-layers']"," Title: How to work on different models for a given problem?Body: I am working on the MNIST data on my own. The idea is to use different values for the number of hidden layers, number of nodes in a given layer, etc. How do you organize these things while you are working on creating a model for a problem? DO you do everything in one code file or you use different code files for choosing the best?
+"
+"['reinforcement-learning', 'policies', 'off-policy-methods']"," Title: How to estimate a behavior policy for off-policy learning based on data?Body: I have a dataset which includes states, actions, and reward. The dataset includes information on the transition, i.e., $p(r,s' \mid s,a)$.
+
+Is there a way to estimate a behavior policy from this dataset so that it can be used in an off-policy learning algorithm?
+"
+"['deep-learning', 'backpropagation', 'relu']"," Title: Does net with ReLU not learn when output < 0?Body: The derivative of ReLU is 0 if its output is lower than 0 - $d ReLU(x)/dReLU$ is $0$ if $x < 0$. Let's denote some net's output by $Out$, so if this net's last layer is ReLU then we get that $dOut/dReLU$ is $0$ if $Out < 0$. Subsequently, for every parameter $p$ in the net we would get that $dOut/dp$ is $0$. Does that mean that for every sample $x$ such that $Out(x) < 0$ the net doesn't learn at all from that sample since the derivative for each parameter is $0$?
+"
+"['computer-vision', 'object-detection']"," Title: If two objects are too close to each other, would an object detector do a poor job of correctly classifying them?Body: Suppose we have an object detector that is trained to detect $20$ products. If two objects are too close to each other, in general, would an object detector do a poor job of correctly classifying them? If they were far apart in the scene, would the object detector to a better job of correctly classifying them?
+"
+"['neural-networks', 'machine-learning', 'support-vector-machine']"," Title: Is an SVM the same as a neural network without a hidden layer?Body: A neural network without a hidden layer is the same as just linear regression.
+
+If I then use squared hinge loss and incorporate the L2 regularisation term, is it fair to then call this network the same as a linear SVM?
+
+Going by this assumption, if I then need to implement a multiclass SVM, I can just have n output nodes (where n is the number of classes). Would this then be equivalent to having n SVMs, similar to a one-vs-rest method?
+
+If I then wanted to incorporate a kernel into my SVM, could I use an activation function or layer prior to the final output nodes (where I compute the loss and add regularisation), which would then transfer the data into another feature space, the same as an SVM kernel does?
+
+This is my current hunch, but would like some confirmation or correction where my understanding is incorrect.
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'terminology']"," Title: Does the ""lowest layer"" refer to the first or last layer of the neural network?Body: People sometimes use 1st layer, 2nd layer to refer to a specific layer in a neural net. Is the layer immediately follows the input layer called 1st layer?
+
+How about the lowest layer and highest layer?
+"
+"['convolutional-neural-networks', 'convolution', 'filters', 'convolutional-layers', 'fully-convolutional-networks']"," Title: What is the point of using 1D and 2D convolutions with a kernel size of 1 and 1x1 respectively?Body: I understand the gist of what convolutional neural networks do and what they are used for, but I still wrestle a bit with how they function on a conceptual level. For example, I get that filters with kernel size greater than 1 are used as feature detectors, and that number of filters is equal to the number of output channels for a convolutional layer, and the number of features being detected scales with the number of filters/channels.
+However, recently, I've been encountering an increasing number of models that employ 1- or 2D convolutions with kernel sizes of 1 or 1x1, and I can't quite grasp why. It feels to me like they defeat the purpose of performing a convolution in the first place.
+What is the advantage of using such layers? Are they not just equivalent to multiplying each channel by a trainable, scalar value?
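+
+For concreteness, this is the kind of layer I am asking about (a small, hypothetical PyTorch example):
+
+import torch
+import torch.nn as nn
+
+conv1x1 = nn.Conv2d(in_channels=64, out_channels=16, kernel_size=1)
+x = torch.randn(1, 64, 32, 32)
+
+print(conv1x1(x).shape)       # torch.Size([1, 16, 32, 32]) - spatial size unchanged
+print(conv1x1.weight.shape)   # torch.Size([16, 64, 1, 1])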
+"
+"['neural-networks', 'backpropagation']"," Title: Class of functional equations that backpropagation can solveBody: There is a theorem that states that basically a neural network can approximate any function whatsoever. However, this does not mean that it can solve any equation. I have some notes where it states that backpropagation allows us to solve problems of the following kind
+
+$$ F(x_i, t) = y_i $$
+
+Can someone point me to what exactly this means?
+"
+"['computer-vision', 'object-detection', 'papers']"," Title: Why do we set offset (0.5) in single shot detector?Body: In the paper SSD: Single Shot MultiBox Detector, under section 2.2 - (4), why do we add an offset of 0.5 to x, y in generating the anchor boxes across feature maps?
+"
+"['natural-language-processing', 'resource-request', 'books', 'named-entity-recognition']"," Title: Are there any good resources (preferably books) about techniques used for entity extraction?Body: Given some natural language sentences like
+
+I would like to talk to Mr. Smith
+
+I would like to extract entities, like the person "Smith".
+I know that frameworks, which are capable of doing so (f. e. RASA or spaCy), exist, but I would like to dive deeper and understand the theory behind all this.
+At university, I learned a few of the basic models like CRF's or SVM's used for this task, but I wonder if there are any good resources (preferably books) about this topic.
+"
+"['deep-learning', 'objective-functions', 'regression', 'loss', 'mean-squared-error']"," Title: How MSE should be appliead with multi target deep network?Body: I'm having a problem understanding how the MSE should be used when working with a multidimensional target, e.g 3 dimensiones. (My outputs are continuois values, not categorical)
+
+Let us say I have a batch size of 2, to make it simple; I pass my input in the network and my y_pred would be a 2x3 tensor.
+The same happens for y_true, 2x3 itself.
+
+Now, the thing I'm not sure of: first I take the difference, diff = y_true - y_pred; this maintains the dimension.
+Now, for MSE, I square diff, obtaining again a 2x3 tensor, is that right ?
+
+Now the tricky part (for me): I have to Mean. Which one should I consider:
+
+- Mean all the (six) values, thus obtaining a scalar? But in this case I do not understand how backpropagation would improve on specific targets.
+
+- Mean by rows, i.e., obtaining a 2x1 tensor, so that I have a mean for each example? Also here I cannot see how the optimization would work.
+
+- Mean by columns, i.e., obtaining a 1x3 tensor, so that I obtain an ""error"" for each target? This seems the most logical to me, but I'm not so sure (see the sketch below).
+
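+To make the three options concrete, here is a small sketch (PyTorch, batch of 2, 3 targets, made-up numbers):
+
+import torch
+
+y_true = torch.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
+y_pred = torch.tensor([[1.5, 2.0, 2.0], [3.0, 5.0, 7.0]])
+
+sq = (y_true - y_pred) ** 2      # 2x3 tensor of squared errors
+
+print(sq.mean())                 # scalar: mean over all six values
+print(sq.mean(dim=1))            # 2 values: one mean per example (row)
+print(sq.mean(dim=0))            # 3 values: one mean per target (column)
+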
+Hope this is clear
+"
+"['reinforcement-learning', 'markov-decision-process']"," Title: Why does it make sense to study MDPs with finite state and action spaces?Body:
+ In the standard Markov Decision Process (MDP) formalization of the reinforcement-learning (RL) problem (Sutton & Barto, 1998), a decision maker interacts with an environment consisting of finite state and action spaces.
+
+
+This is an extract from this paper, although it has nothing to do with the paper's content per se (just a small part of the introduction).
+
+Could someone please explain why it makes sense to study finite state and action spaces?
+
+In the real world, we might not be able to restrict ourselves to a finite number of states and actions! Thinking of humans as RL agents, this really doesn't make sense.
+"
+"['machine-learning', 'reinforcement-learning']"," Title: Using AI to find the correct set of object/numbers based on previous dataBody: There are 11 objects of which 4 are ""Bad"" objects. So there are 7 ""Good"" objects.
+You have to choose as many Good objects as possible before proceeding to another set of objects with a different sequence.
+
+How would you train an AI that predicts the position of the Good objects based on previous sets of data?
+"
+"['variational-autoencoder', 'latent-variable', 'bottlenecks']"," Title: Does bottleneck size matter in Disentangled Variational Autoencoders?Body: I suppose that picking an appropriate size for the bottleneck in Autoencoders is neither a trivial nor an intuitive task. After watching this video about VAEs, I've been wondering: Do disentangled VAEs solve this problem?
+
+After all, if the network is trained to use as few latent space variables as possible, I might as well make the bottleneck large enough so that I don't run into any issues during training. Am I wrong somewhere?
+"
+"['reinforcement-learning', 'terminology', 'definitions', 'multi-objective-rl']"," Title: What are preferences and preference functions in multi-objective reinforcement learning?Body: In RL (reinforcement learning) or MARL (multi-agent reinforcement learning), we have the usual tuple:
+
+(state, action, transition_probabilities, reward, next_state)
+
+
+In MORL (multi-objective reinforcement learning), we have two more additions to the tuple, namely, ""preferences"" and ""preference functions"".
+
+What are they? What do we do with them? Can someone provide an intuitive example?
+"
+"['reinforcement-learning', 'comparison', 'markov-decision-process', 'papers']"," Title: How are the classical MDP and the object-oriented MDP views different?Body: I've been reading the attached paper - which aims to model entities in the world as objects, including the learning agent itself!
+
+To say the least, the goal is to navigate through what seems like a maze (path-planning problem) - and drop off passengers in desired destinations, while avoiding walls in the map of the world (5x5 grid for now). The objects involved are, a taxi, passengers, walls and a destination.
+
+Now, a particular paragraph says the following:
+
+
+ ""Whereas in the classical MDP
+ model, the effect of encountering walls is felt as a property of specific locations in the grid, the OO-MDP view is that wall interactions are the same regardless of their location. As such, agents’ experience can transfer gracefully throughout the state space.""
+
+
+What does this mean? How are the classical MDP and the object-oriented MDP views different?
+
+I can't make sense of the above extract, at all. Any help would be appreciated!
+
+P.S. I did not consider posting parts of the extract as separate questions since my problem has more to do with understanding the extract as a whole which inevitably relies on understanding the parts.
+"
+"['reinforcement-learning', 'comparison', 'markov-decision-process', 'value-functions']"," Title: What is the difference between the state transition of an MDP and an action-value?Body: Let's say we have MDP where we have a state transition matrix.
+
+How is this state transition different from action value in reinforcement learning? Is the state transition in MDP stochastic transition, meaning transition to some other state without taking any action?
+"
+"['deep-learning', 'convolutional-neural-networks', 'classification', 'image-recognition', 'optical-character-recognition']"," Title: How should I define the loss function for a multi-object detection problem?Body: I'm trying to create a text recognition project using CNN. I need help regarding the text detection task.
+
+I have the training images and bounding box details for them. But I'm unable to figure out how to create the loss function.
+
+Can anyone help me by telling how to take the output from the CNN model and compare it to the bounding box labels?
+"
+"['deep-learning', 'natural-language-processing']"," Title: Building a spell check modelBody: I have customer review texts. The data consists of the the raw and manually corrected texts of the reviews. I have aligned these pairs by using similarity algorithms and matched the words on them. Since there are some mis-matched words pairs, I have eliminated the pairs under a threshold value for their counts.
+
+Now there are raw and corrected word pairs. What kind of machine learning model can I build for spellcheck by using the data mentioned as well?
+"
+"['machine-learning', 'ai-design', 'classification']"," Title: What are the best classifiers for this type of data?Body: I would like to classify a dataset Credit Scoring, which is composed of 21 attributes, some of them are numeric and others are boolean.
+
+For the output, I want to know if they have a good or bad credit based on those attributes, without calculating any numeric value for the credit score.
+
+I am using Weka for this task. However, I am not sure what are the best/ideal classifiers for that kind of datasets.
+
+Anyone here can put me in the right direction?
+"
+"['machine-learning', 'reference-request', 'algorithm-request']"," Title: Is there a theory that captures the following ideas?Body: A big class of problems that are relevant in today's society are full of uncertainty and are also sometimes computationally intractable. Along our lives we come to realize that we are solving the same type of problem multiple times, sometimes with different strategies and mixed results. I would like to close in on three main problem types: pattern recognition, regression and density estimation.
+An agent (computer program or even a human) that identifies the type of a problem and applies a systematic procedure for finding its solution. A solution is understood in the classical sense for each of the problem types, thus, the solution does not have to be a global optima. This procedure must be implementable.
+Bonus points
+
+- Uses metadata about the problem itself to 'gain insight' about the nature of the problem.
+- Verifies that its solution is correct in some sense.
+- The types or classes of problems can be expanded later on.
+- Works with very limited resources.
+- Works with little information about the problem, or with "small data".
+
+So far, I've found Statistical Learning theory and Bayesian Inference as candidates that implement some of those ideas, but I was wondering if there's something else out there or I just need to take the best of both of those worlds.
+"
+"['machine-learning', 'proofs', 'probability', 'computational-learning-theory', 'statistics']"," Title: Why is probability that at least one hypothesis out of $k$ being consistent with $m$ training examples $k(1- \epsilon)^m$?Body: My question is actually related to the addition of probabilities. I am reading on computational learning theory from Tom Mitchell's machine learning book.
+
+In chapter 7, when proving the upper bound of probabilities for $\epsilon$ exhausted version space (theorem 7.1), it says that the probability that at least one hypothesis out of the $k$ hypotheses in the hypotheses space $|H|$ being consistent with m training examples is at most $k(1- \epsilon)^m$.
+
+I understand that the probability of a hypothesis, $h$, consistent with m training examples is $(1-\epsilon)^m$. However, why is it possible to add the probabilities for $k$ hypotheses? And might the probability be greater than 1 in this case?
+"
+"['machine-learning', 'deep-learning', 'terminology', 'ensemble-learning']"," Title: How is an architecture composed of a second model that validates the first one called in machine learning?Body: I have a mix of two deep models, as follows:
+
+if model A is YES --pass to B--> if model B is YES--> result = YES
+if model A is NO ---> result = NO
+
+
+So basically model B validates if A is saying YES
. My models are actually the same, but trained on two different feature sets of same inputs.
+
+What is this mix called in machine learning terminology? I just call them master/slave architecture, or primary/secondary model.
+"
+['regularization']," Title: Cannot fine-tune L2-regularization parameterBody: I have a data set of 1600 examples. I am using 1280 (80%) for training, 160 (10%) for testing, and 160 (10%) for validation. The training goes one of two ways no matter how I fine-tune the L2 parameter:
+
+1) The validation and training error converge, albeit around 75% error
+
+2) The training error settles to around 0%, but the validation error stays around 75%
+
+I don't think my network is too large either. I have trained networks with two hidden layers, both with the same number of nodes as the input. I also tried dropout layers and that did not seem to help.
+
+Does this just mean that I need to add more training examples? Or how do I know that I have reached the limitations of what I am having the network learn?
+"
+"['deep-learning', 'reinforcement-learning']"," Title: Reinforcement Learning (and specifically REINFORCE algorithm) for one-round ""games""Body: I'm interested about using Reinforcement Learning in a setting that might seem more suitable for Supervised Learning. There's a dataset $X$ and for each sample $x$ some decision needs to be made. Supervised Learning can't be used since there aren't any algorithms to solve or approximate the problem (so I can't solve it on the dataset) but for a given decision it's very easy to decide how good it is (define a reward).
+
+For example, you can think about the knapsack problem - let's say we have a dataset where each sample $x$ is a list (of let's say size 5) of objects each associated with a weight and a value and we want to decide which objects to choose (of course you can solve the knapsack problem for lists of size 5, but let's imagine that you can't). For each solution the reward is the value of the chosen objects (and if the weight exceeds the allowed weight then the reward is 0 or something). So, we let an agent ""play"" with each sample $M$ times, where play just means choosing some subset and training with the given value.
+
+For the $i$-th sample the step can be adjusted to be:
+$$\theta = \theta + \alpha \nabla_{\theta}log \pi_{\theta}(a|x^i)v$$
+for each ""game"" with ""action"" $a$ and value $v$.
+
+instead of the original step:
+$$\theta = \theta + \alpha \nabla_{\theta}log \pi_{\theta}(a_t|s_t)v_t$$
+Essentially, we replace the state with the sample.
+
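+In code, the adjusted step I have in mind would look roughly like this (a PyTorch sketch with a hypothetical policy network and reward function):
+
+import torch
+
+def reinforce_step(policy, optimizer, x, reward_fn):
+    # One 'round': sample an action for sample x, observe its value, take one gradient step.
+    logits = policy(x)                                # scores over the possible actions/subsets
+    dist = torch.distributions.Categorical(logits=logits)
+    action = dist.sample()
+    v = reward_fn(x, action)                          # e.g. total value of the chosen subset, 0 if overweight
+    loss = -dist.log_prob(action) * v                 # ascend on log pi(a|x) * v
+    optimizer.zero_grad()
+    loss.backward()
+    optimizer.step()
+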
+The issue with this is that REINFORCE assumes that an action also leads to some new state where here it is not the case. Anyway, do you think something like this could work?
+"
+"['reinforcement-learning', 'q-learning']"," Title: Is Q-Learning suitable for time-dependent spaces?Body: Many Q-learning techniques have been developed to capture discrete state(observation), actions like a robot in a grid world, and even continuous (state or action) spaces. But I am wondering how we can model the states/space in a time-dependent environment. Please, consider the following example:
+
+There is one smartphone (client) and five compute servers that are addressing/serving many clients (smartphones) at the same time. The smartphone transfers some raw data (e.g., sensor data) to one of those five servers (e.g., every t seconds) and gets the results. Suppose the server computes the stress level of the client in real-time based on the collected data.
+
+Now, a q-learning agent should be deployed to the smartphone to be able to select the best server with minimum response time (i.e., the goal is to minimize the execution/response time). Note that servers are serving different clients and their load is a function of time and varies from time to time.
+
+So, in the above scenario, I am wondering what our ""states"" would be and how we can model the ""environment""?
+"
+"['machine-learning', 'classification', 'terminology', 'papers', 'object-detection']"," Title: What is the meaning of ""easy negatives"" in the context of machine learning?Body: What does the term "easy negatives" exactly mean in the context of machine learning for a classification problem or any problem in general?
+From a quick Google search, I think it means just the negative examples in the training set.
+Can someone please elaborate a bit more on why the term "easy" is brought into the picture?
+Below, there is a screenshot taken from the paper where I found this term, which is underlined.
+
+"
+"['reinforcement-learning', 'terminology', 'papers']"," Title: What is the KWIK framework?Body:
+...for learning transition dynamics...in the KWIK framework.
+
+The above is part of a paper's conclusion - and I don't really seem to understand what the KWIK framework is. In the details of the paper, is a brief highlight of the KWIK conditions for a learning algorithm, which go as follows (I paraphrase):
+
+- All predictions must be accurate (assuming a valid hypothesis class)
+- However, the learning algorithm may also return $\perp$, which indicates that it cannot yet predict the output for this input.
+
+A quick Google search brought me to this paper from ICML 2008, but it is a little difficult to comprehend without a detailed read.
+Could someone please help me understand what the KWIK framework is, and what implication does it have for a learning algorithm to satisfy KWIK conditions? An explanation that starts at simple and goes to fairly advanced discussions is appreciated.
+"
+"['game-theory', 'depth-first-search']"," Title: Why does our AI play worse at even levels of depth?Body: We are building an AI to play a board game. Leaving aside the implementation, we noticed that it plays worse when we set an even (2,4,6,...) level of depth. We use a minimax depth-first strategy.
+Do you have any ideas why it behaves like that?
+
+Edit: for example if we set a game between an AI with 5 levels of depth and an AI with 6 levels of depth, the first one usually wins (and this is weird).
+"
+"['reinforcement-learning', 'deep-rl', 'ddpg', 'off-policy-methods', 'on-policy-methods']"," Title: Why is DDPG an off-policy RL algorithm?Body: In DDPG, if there are no $\epsilon$-greedy and no action noise, is DDPG an on-policy algorithm?
+"
+"['machine-learning', 'math', 'unsupervised-learning', 'clustering']"," Title: How do I approach this problem?Body: Let's say I have a dataset with multiple types of multiple ingredients (salt1,salt2, etc). Each n-th variation of each ingredient vs flavor may be represented by an n×k matrix that where an ingredient corresponds with a particular value of ""flavor"".
+
+A recipe consists of a 1×n vector (where n is the number of ingredients) where each value corresponds to the quantity of ingredient in the recipe.
+
+A particular combination of ingredients, with particular weights, with some transformation, would result in a particular 1×k ""flavor"" profile, in this simple model.
+
+One approach could be to formulate this as a Probabilistic Matrix Factorization problem (I think), with k being the number of flavor parameters. And combining the recipe vector with the flavor matrix might do the trick.
+
+But the problem is, the flavor value of each ingredient (and each variation of the ingredient) in the ingredient-flavor matrix would be very very limited. The recipe flavor profile might have a corresponding flavor vector, that too would be limited, and would not be available, at the beginning. So in order to capture the relationship between the ingredients and the flavor, the system would be dependent on user-submitted data on recipe/ingredient flavors.
+
+Is there a way I could create clusters of recipes based on user flavor ratings and extrapolate these to the constituent ingredients or vice versa? Could this be done via some unsupervised learning algorithm?
+
+I am quite new to this, I would appreciate some help or some pointers to which mathematical approaches I should be looking at to model this problem.
+"
+"['neural-networks', 'applications', 'reference-request', 'autonomous-vehicles']"," Title: What kinds of techniques do autopilots of autonomous cars use?Body: What kinds of techniques do autopilots of autonomous cars (e.g. the ones of Tesla) use? Do they use reinforcement learning? Which types of neural network architecture do they use?
+"
+"['machine-learning', 'reference-request', 'applications', 'algorithm-request', 'algorithmic-trading']"," Title: What are the most popular and effective approaches to leveraging AI for stock price prediction?Body: Currently, what are the most popular and effective approaches to leveraging AI for stock price prediction?
+It seems like there could be several approaches and problem formulations:
+
+- Supervised learning:
+  - Regression: predict the stock price directly
+  - Classification: predict whether the stock price goes up or down
+- Unsupervised learning: find clusters of stocks that move together
+- Reinforcement learning: let the agent directly maximize its stock market return
+- Other AI methods: rules, symbolic systems, etc.
+
+Which are most popular/performant? Are there other ways that people are using machine learning in stock trading (sentiment analysis on financial statements, news, etc.)?
+"
+"['neural-networks', 'machine-learning', 'reinforcement-learning', 'policy-gradients', 'actor-critic-methods']"," Title: What is the gradient of the Q function with respect to the policy's parameters?Body: I have been recently studying Actor-Critic algorithms, and I ran into the following question.
+
+Let $Q_{\omega}$ be the critic network, and $\pi_{\theta}$ be the actor. It is known that in order to maximize the objective return $J(\theta)$, we follow the gradient direction, which could be estimated as follows $$\nabla_{\theta}J=\mathbb{E}[Q_{\omega}(s,a).\nabla_{\theta}log \pi_{\theta} (a|s)].$$ But if we were to calculate the gradient of $Q^{\pi}$ with respect to $\theta$, what are the possible approaches to do so?
+
+More generally, say we have a network $\phi_{\omega}$ that is trained on data generated by another neural network, say a stochastic actor $\pi_{\theta}$, as in classic reinforcement learning frameworks. How do we find the gradient of $\phi_{\omega}$ w.r.t. ${\theta}$?
+"
+"['neural-networks', 'recurrent-neural-networks', 'papers']"," Title: How can Siamese Networks be viewed as RNNs?Body:
+ ""Single-object tracking commonly uses Siamese networks, which can be seen as an RNN unrolled over two time-steps.""
+
+
+(from the SQAIR paper)
+
+I'm wondering how Siamese networks can be viewed as RNNs, as mentioned above. A diagrammatic explanation, or anything that helps understand the same, would help! Thank you!
+"
+"['natural-language-processing', 'bert', 'similarity']"," Title: Similarity score between 2 words using Pre-trained BERT using PytorchBody: I'm trying to compare Glove, Fasttext, Bert on the basis of similarity between 2 words using Pre-trained Models. Glove and Fasttext had pre-trained models that could easily be used with gensim word2vec in python.
+
+Does BERT have any such models?
+Is it possible to check the similarity between two words using BERT?
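+
+For reference, here is a minimal sketch of what I am trying to do (assuming the Hugging Face transformers library and the bert-base-uncased checkpoint; I am not sure this is the right way to extract word vectors from BERT):
+
+    import torch
+    from transformers import BertTokenizer, BertModel
+
+    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
+    model = BertModel.from_pretrained("bert-base-uncased")
+    model.eval()
+
+    def embed(word):
+        # Encode the word and average the last hidden states of its sub-word tokens.
+        inputs = tokenizer(word, return_tensors="pt")
+        with torch.no_grad():
+            outputs = model(**inputs)
+        return outputs.last_hidden_state.mean(dim=1).squeeze(0)
+
+    sim = torch.nn.functional.cosine_similarity(embed("king"), embed("queen"), dim=0)
+    print(sim.item())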
+
+"
+"['machine-learning', 'hyperparameter-optimization', 'principal-component-analysis', 'dimensionality-reduction']"," Title: Is it theoretically possible (or impossible) that principal component analysis worsens the performance of the model?Body: In case I had a prediction model and decided to add a PCA step prior to the model, is it theoretically possible/impossible that the number of output dimensions that is better for all tests may perform worse than the model without PCA?
+My question comes from the fact that I want to add a PCA step prior to a model and hyperparameterize the PCA output dimension from 1 to N (N being the number of dimensions in the original dataset) and I wanted to know if there is any theoretical basis that there is no case in which performing this previous step could have a worse performance than the previous model.
+In particular, my doubt is whether the best PCA case, selected over dimensions 1 to N, is always at least as good as the best case without PCA.
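+
+Concretely, the setup I have in mind is something like the following (a sketch with scikit-learn; the classifier, the grid and the names X, y, N are placeholders for my own data and model):
+
+    from sklearn.pipeline import Pipeline
+    from sklearn.decomposition import PCA
+    from sklearn.linear_model import LogisticRegression
+    from sklearn.model_selection import GridSearchCV
+
+    # N is the number of features in the original dataset, X/y the training data.
+    pipe = Pipeline([("pca", PCA()), ("clf", LogisticRegression())])
+    grid = GridSearchCV(pipe, {"pca__n_components": list(range(1, N + 1))}, cv=5)
+    grid.fit(X, y)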
+"
+"['computer-vision', 'papers', 'generative-model', 'random-variable']"," Title: Why is this variable in equation 2 of the SQAIR paper a random vector of $n$ ones followed by a zero?Body: I've been reading the SQAIR paper lately, and the mathematics involved seems a bit complicated.
+Some background, about the paper: SQAIR stands for Sequential Attend, Infer, Repeat - the paper does generative modelling of moving objects. The idea of Attend, Infer, Repeat is to decompose a static scene into constituent objects, where each object is represented by continuous latent variables. The latent variables, $z^{what}$,$z^{where}$ and $z^{pres}$ encode the appearance, position and presence of an object.
+Here's a screenshot of the first of many things I'm unable to understand -
+
+Why is $z^{pres,1:n+1}$ a random vector of $n$ ones followed by a zero? Why do we need the zero? How does it help?
+Furthermore, an explanation of equation $(2)$ as in the image above, would be great.
+P.S. I hope you all find the paper interesting. I'll ask other questions from the paper in separate posts, so as to not crowd one post with too many queries.
+"
+"['machine-learning', 'ai-design', 'constraint-satisfaction-problems']"," Title: How can I design a system that suggests physical exercises to a person while keeping into account the fatigue?Body: I want to create an exercise suggester. Each day either has a routine or is a rest day. A routine has 4 slots. For each slot we select an exercise. We constrain the legal exercises, only do upper-body today, etc. We, therefore, restrict the number of available exercises. This seems easy.
+
+I want to know how I can extend this model to take fatigue into account. Fatigue is a state such that every exercise reduces it (by a specific value for each exercise) and it recovers with time.
+
+Can this problem even be modeled as a constraint satisfaction problem? What's the best way of modeling and solving this problem?
+
+I'd like to suggest all the legal exercises taking fatigue into account.
+"
+"['reinforcement-learning', 'reference-request', 'multi-agent-systems']"," Title: Are there any board game appropriate to examine the performance of multiple agents that cooperate both inter-group and intra-group?Body: I want to find out scenarios that useful to examine the performance of intra-group and inter-group cooperation in MARL.
+Specifically, I would prefer a board game (like Sudoku) that is suitable for evaluating cooperation.
+
+But there are some differences between my requirements and Go-like games. Every cell on the board should be treated as an agent. The agents are designed to form a situation with both a local utility and a global utility.
+
+Take Sudoku as an example: every cell should choose an appropriate value to reach the Sudoku solution.
+
+Since I am not familiar with traditional MARL scenarios, it would be a great help if you could point me to some keywords or lists of such environments.
+"
+"['genetic-algorithms', 'crossover-operators', 'genetic-operators']"," Title: How does the crossover operator work when my output contains only 2 states?Body: I'm currently working on a project where I am using a basic cellular automata and a genetic algorithm to create dungeon-like maps. Currently, I'm having an incredibly hard time understanding how exactly crossover works when my output can only be two states: DEAD or ALIVE (1 or 0).
+
+I understand crossover conceptually - you find two fit members of the population and they exchange genetic material, hopefully producing a fitter offspring. I also understand this is usually done by performing k-point crossover on bit strings (but can also be done with real numbers).
+
+However, even if I encode my DEAD/ALIVE cells into bits and cross them over, what do I end up with? The cell can only be DEAD or ALIVE. Will the crossover give me some random value that is outside this range?
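+
+To make my confusion concrete, here is roughly how I picture one-point crossover on two flattened grids (a toy sketch; note that the offspring are still just rearrangements of 0s and 1s, which is exactly what puzzles me):
+
+    import random
+
+    def one_point_crossover(parent_a, parent_b):
+        # Parents are flattened grids of 0 (DEAD) / 1 (ALIVE) cells.
+        point = random.randrange(1, len(parent_a))
+        child_a = parent_a[:point] + parent_b[point:]
+        child_b = parent_b[:point] + parent_a[point:]
+        return child_a, child_b
+
+    a = [1, 0, 0, 1, 1, 0, 1, 0]
+    b = [0, 1, 1, 0, 0, 1, 0, 1]
+    print(one_point_crossover(a, b))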
+
+And even if I were to work on floating-point numbers, wouldn't I just end up with a 1 or 0 anyway? In that case, it seems like it would be better to just randomly mutate DEAD cells into ALIVE cells, or vice versa.
+
+I've read several papers on the topic, but none seem to explain this particular issue (in a language I can understand, anyway). Intuitively, I thought maybe I can perform crossover on a neighbourhood of cells - so I find 2 fit neighbourhoods, and then they exchange members (for example, neighbourhood A gives 4 of its neighbours to neighbourhood B). However, I have not seen this idea anywhere, which leads me to believe it must be fundamentally wrong.
+
+Any help would be greatly appreciated, I'm really stuck on this one.
+"
+"['convolutional-neural-networks', 'classification', 'overfitting', 'convergence', 'learning-rate']"," Title: Is it a good idea to overfit on a small part of your data for faster model convergence?Body: I working on a classification problem that needs to detect patterns on a time serie. Basically, there's a catch-all class that means ""no pattern detected"", the other are for the specific patterns. The data is imbalanced (ratio 1/10 at least), but I adapted the class weights.
+
+I'm able to overfit successfully on a few days of data, but when I train on 2 years of data, the model seems stuck on class 1 (""no pattern detected"") for a very long time. I've tried several learning rates, but that doesn't make the convergence happen significantly faster.
+
+Would it be a better starting point for my training to use the overfitted model's weights? Could this allow the model to converge faster?
+"
+"['reinforcement-learning', 'deep-rl', 'monte-carlo-methods', 'convergence']"," Title: When does Monte Carlo linear function approximation converge?Body: In this Stanford lecture (minute 35:47 and 37:00), the professor says that Monte Carlo (MC) linear function approximation does not always converge, and she gives an example. In general, when does MC linear function approximation converge (or not)?
+
+Why do people use that MC linear function approximation if sometimes it doesn't converge?
+
+They also gave the definition of the stationary distribution of a policy, and I am not sure if using it for function approximation converges or not.
+"
+"['neural-networks', 'reinforcement-learning', 'tensorflow', 'deep-rl', 'reinforce']"," Title: Understanding the TensorFlow implementation of the policy gradient methodBody: I was trying to understand the implementation of a basic policy gradient (REINFORCE) method using TensorFlow.
+I think I got almost everything. The only thing that still bothers me is the loss function implementation.
+
+From the theory, we have that after all the manipulation the gradient of the score function is
+
+$$\nabla_{\theta}J(\theta)=\mathop{\mathbb{E}}\left[\nabla_{\theta}(log(\pi(s,a,\theta)))R(\tau) \right]$$
+
+In this Cartpole example the part relative to the loss function is
+
+ neg_log_prob = tf.nn.softmax_cross_entropy_with_logits_v2(logits = NeuralNetworkOutputs, labels = actions)
+ loss = tf.reduce_mean(neg_log_prob * discounted_episode_rewards_)
+
+
+At this point, I do not understand how the definition from above translates into code.
+
+As far as I understood, the functions
+
+tf.nn.softmax_cross_entropy_with_logits_v2(logits = NeuralNetworkOutputs, labels = actions)
+
+
+returns
+
+log(softmax(NeuralNetworkOutputs))*actions
+
+
+Which is then multiplied by the discounted returns
+
+log(softmax(NeuralNetworkOutputs))*actions*discounted_episode_rewards_
+
+
+Within this expression, I do not understand why we should multiply an expression, which already looks like the loss function we want, by the value of the action.
+"
+"['monte-carlo-tree-search', 'minimax', 'alpha-beta-pruning']"," Title: When should Monte Carlo Tree search be chosen over MiniMax?Body: I would like to ask whether MCTS is usually chosen when the branching factor for the states that we have available is large and not suitable for Minimax. Also, other than MCTS simluates actions, where Minimax actually 'brute-forces' all possible actions, what are some other benefits for using Monte Carlo for adversarial (2-player) games?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'convolutional-neural-networks', 'ai-design']"," Title: Automating browser actions using AIBody: I am at a very initial stage of my research so I will try to describe what I am trying to achieve:
+
+I want to create an AI model which learns how to navigate the browser's components, such as clicking or creating new favorites and tabs, navigating the browser's action menu, bookmarking websites, etc. In short, I want to automate browser testing using Selenium and an AI model, so that over time the model learns by itself to navigate the browser and test different functionality, eventually testing functionality it has not seen before. For example, if I show the model that the browser is closed when ""x"" is clicked and minimized when ""-"" is clicked, it should then learn by itself how to maximize the browser.
+
+The initial input could be some recorded videos of navigating the browser using Selenium, which are then fed to the model; with time, the model would learn by itself to go to sections of the browser it has not seen before and still test them.
+
+Is it even possible to combine AI and Selenium to create something like this? If yes, how can I achieve it, and what is the best approach to develop such a model?
+
+Thanks in advance.
+"
+"['machine-learning', 'convolutional-neural-networks', 'backpropagation']"," Title: Should I compute the gradients with respect to the flatten layer in a convolutional neural network?Body: I'm trying to create a convolutional neural network without frameworks (such as PyTorch, TensorFlow, Keras, and so on) with Python.
+
+Here's a description of CNN taken from the Wikipedia article
+
+
+ In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of deep neural networks, most commonly applied to analyzing visual imagery. They are also known as shift invariant or space invariant artificial neural networks (SIANN), based on their shared-weights architecture and translation invariance characteristics. They have applications in image and video recognition, recommender systems, image classification, medical image analysis, natural language processing, and financial time series.
+
+
+A CNN has different types of layers, such as convolution, pooling (max or average), flatten and dense (or fully-connected) layers.
+
+I have a few questions.
+
+
+- Should we compute gradients (such as $\frac{\partial L}{\partial A_i}$,$\frac{\partial L}{\partial Z_i}$,$\frac{\partial L}{\partial A_{i-1}}$ and so on) in flatten layer or not?
+- If no, then how should I compute $\frac{\partial L}{\partial A_i}$ and $\frac{\partial L}{\partial Z_i}$ of the first convolutional layer? With $\frac{\partial L}{[\frac{\partial g(A_i)}{\partial x}]}$ or with
+$\frac{\partial L}{\partial dA_{i+2}}$? (P.S. as you know, the iteration of backpropagation is in reverse, so I used $i+n$ to denote the previous layer.)
+- Or can I compute the derivatives in the flatten layer with
+$$\frac{\partial J}{\partial A} = W_{i+1}^T Z_{i+1}$$
+(where $i+1$ denotes the previous layer in backprop) and
+$$\frac{\partial L}{\partial Z} = \frac{\partial L}{\partial A} * \frac{\partial g(A_i)}{\partial x}$$
+and then reshape to the Conv2D shape?
+
+
+P.S. I found questions like mine (the titles are the same), but they do not answer my question, as I am asking about the formulas.
+"
+"['natural-language-processing', 'datasets', 'data-science', 'audio-processing', 'structured-data']"," Title: What are the best datasets available for music information retrieval?Body: I am interested in doing some work in classification problems in music information retrieval. I know that there are some formats of datasets (such as MIDI, Spectrogram, Piano-roll, MusicXML, etc.) for this work but have been unable to find any nice large datasets for this. What are the best free datasets for such work? I am mainly looking into western classical music.
+"
+"['convolutional-neural-networks', 'backpropagation', 'weights']"," Title: Backpropagation of neural nets with shared weightBody: I am trying to understand the mathematics behind the forward and backward propagation of neural nets. To make myself more comfortable, I am testing myself with an arbitrarily chosen neural network. However, I am stuck at some point.
+
+Consider a simple fully connected neural network with two hidden layers. For simplicity, choose the linear activation function (${f(x) = x}$) at all layers. Now consider that this neural network takes two $n$-dimensional inputs $X^{1}$ and $X^{2}$. However, the first hidden layer only takes $X^1$ as the input and produces the output $H^1$. The second hidden layer takes $H^{1}$ and $X^2$ as the input and produces the output $H^{2}$. The output layer takes $H^{2}$ as the input and produces the output $\hat{Y}$. For simplicity, assume we do not have any bias.
+
+So, we can write that, $H^1 = W^{x1}X^{1}$
+
+$H^2 = W^{h}H1 + W^{x2}X^{2} = W^{h}W^{x1}X^{1} + W^{x2}X^{2}$ [substituting the value of $H^1$]
+
+$\hat{Y} = W^{y}H^2$
+
+Here, $W^{x1}$, $W^{x2}$, $W^{h}$ and $W^{y}$ are the weight matrices. Now, to make it more interesting, consider a shared weight matrix $W^{x} = W^{x1} = W^{x2}$, which leads to $H^1 = W^{x}X^{1}$ and $H^2 = W^{h}W^{x}X^{1} + W^{x}X^{2}$
+
+I do not have any problem doing the forward propagation by hand; however, the problem arises when I try to do the backward propagation and update $W^{x}$.
+
+$\frac{\partial loss}{\partial W^{x}} = \frac{\partial loss}{\partial H^{2}} . \frac{\partial H^{2}}{\partial W^{x}}$
+
+Substituting, $\frac{\partial loss}{\partial H^{2}} = \frac{\partial Y}{\partial H^{2}}. \frac{\partial loss}{\partial Y}$ and $H^2 = W^{h}W^{x}X^{1} + W^{x}X^{2}$
+
+$\frac{\partial loss}{\partial W^{x}}= \frac{\partial Y}{\partial H^{2}}. \frac{\partial loss}{\partial Y} . \frac{\partial}{\partial W^{x}} (W^{h}W^{x}X^{1} + W^{x}X^{2})$
+
+Here I understand that $\frac{\partial Y}{\partial H^{2}} = (W^y)^T$ and $\frac{\partial}{\partial W^{x}} W^{x}X^{2} = (X^{2})^T$, and we can also calculate $\frac{\partial loss}{\partial Y}$ if we know the loss function. But how do we calculate $\frac{\partial}{\partial W^{x}} W^{h}W^{x}X^{1}$?
+"
+"['python', 'image-recognition', 'game-ai', 'open-ai']"," Title: How do i start building an autoclick bot for pubg mobile?Body: I want to make a bot which clicks the fire button on the mobile screen upon seeing an enemies head.
+
+In PUBG Mobile, which is an Android game, you have to control the fire button and the aim, along with many other controls, to kill enemies or other players. I want to automate the fire button while everything else is controlled by me: when I aim at a player's head, the bot should click the fire button instantly.
+
+So there are a few problems with this. The first one is: what if it shoots upon seeing my teammate's head? The other is: what if there are two players at once - which one will it shoot? To handle that, it needs to shoot only when my aim, i.e. the red crosshair, is on a player's head.
+
+I don't know how to get started. I need to make an image recognition app and an auto-clicker, and combine them both. How do I get started, assuming that I only know basic Python?
+"
+"['reinforcement-learning', 'terminology', 'papers', 'action-spaces', 'continuous-action-spaces']"," Title: What is meant by a multi-dimensional continuous action space?Body: In the context of Reinforcement Learning, what does it mean to have a multi-dimensional continuous action space?
+I came across the following in the COBRA Paper
+
+A method for learning a distribution over a multi-dimensional continuous action space. This learned distribution can be sampled efficiently.
+
+and
+
+During the initial exploration phase it explores its environment, in which it can move objects freely with a continuous action space but is not rewarded for its actions.
+
+So, what do the multi-dimensionality and the continuity of the action space refer to? It'd be great if someone could provide an explanation with examples!
+"
+"['reinforcement-learning', 'q-learning', 'adversarial-ml']"," Title: Adversarial Q Learning should use the same Q Table?Body: I'm creating a RF Q-Learning agent for a two player fully-observable board game and wondered, if I was to train the Q Table using adversarial training, should I let both 'players' use, and update, the same Q Table? Or would this lead to issues?
+"
+"['reinforcement-learning', 'game-ai', 'rewards', 'reward-design', 'reward-shaping']"," Title: How should I design the reward function for racing game (where the goal is to reach finishing line before the opponent)?Body: I'm building an agent for a racing game. In this game, there is a randomized map where there are speed boosts for the player to pick up and obstacles that act to slow the player down. The goal of the game is to reach the finishing line before the opponent.
+While working on this problem, I've realized that we can almost forget about the presence of our opponent and just focus on getting the agent to the finish line as quickly as possible.
+I started with a simple
+
+- $-1$ reward for every timestep
+- $+100$ reward for winning, and
+- $-100$ for losing.
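+
+In code, that first scheme is simply the following (a trivial sketch of what I started with; step_info is a placeholder for whatever my environment reports at each step):
+
+    def reward(step_info):
+        if step_info.agent_finished:
+            return 100
+        if step_info.opponent_finished:
+            return -100
+        return -1  # small penalty for every timestep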
+
+When I was experimenting with this, I felt like the rewards may be too sparse, as my agent was converging to pretty poor average returns. I iterated to a function of speed and distance travelled (along with the $+100$ reward), but, after some experimentation, I started feeling like the agent might be able to achieve high returns without necessarily being the fastest to the finish line.
+I'm thinking of returning to the first approach and possibly adding some reward for being in first place (as a function of the opponent's distance behind the agent).
+What else could I try? Should I try to spread the positive rewards out more for good behavior? Should I create additional rewards/penalties for, say, hitting obstacles and using boosts, or can I expect the agent to learn the correlation?
+"
+"['reinforcement-learning', 'reference-request', 'actor-critic-methods', 'advantage-actor-critic']"," Title: What is the original source of the TD Advantage Actor-Critic algorithm?Body: What is the original source of the TD Advantage Actor-Critic algorithm?
+I found this tutorial really helpful for learning the algorithm. However, what is the original source of this algorithm?
+
+"
+"['machine-learning', 'reinforcement-learning', 'policy-gradients', 'environment', 'ddpg']"," Title: What if the rewards induced by an environment are related to the policy too?Body: Assume we have a policy $\pi_{\theta}$ in a classic reinforcement learning setting, and a reward function $R^{\pi}(s,a)$ that changes as long as $\pi$ changes i.e. not only is it predefined by the environment itself, how can we model the popular algorithms (e.g. SAC) according to this change?
+"
+"['reinforcement-learning', 'training', 'q-learning', 'exploration-exploitation-tradeoff']"," Title: Should I just use exploitation after I have trained the Q agent?Body: When using a trained Q-learning algorithm in an actual game, would I just use exploitation and no longer use exploration? Should I use exploration only during the training phase?
+"
+"['convolutional-neural-networks', 'computer-vision', 'image-recognition', 'keras', 'image-segmentation']"," Title: How to use 'Canny/Watershed' algorithm's output as an input for Image Classification ModelBody: I have a very silly problem in hand. I have implemented 2 methods which give me the mask to separate the objects from the background. What I get from one method is the object encapsulated in the red Contour or boundary and the other one makes the background Red.
+
+I am using Keras to classify trash. I wanted to use the output, i.e. the extracted object, as an input to the CNN model. Now I do not see any difference between the images and the output. All there is, is an extra boundary around the object, and I fail to understand how it can help my model.
+
+I could add an extra alpha channel with the second method to make the background transparent, but Keras' ImageDataGenerator does not work with RGBA images. What should I do to improve the model?
+"
+"['reinforcement-learning', 'papers', 'adversarial-ml']"," Title: How can transition models in RL be trained adversarially?Body: To give a little background, I've been reading the COBRA paper, and I've reached the section that talks about the exploration policy, in particular. We figure that a uniformly random policy won't do us any good, since the action space is sparsely populated with objects that the agent must act upon - and a random action is likely to result in no change (an object here occupies only about 1.7% of the space of the screen). Hence we need our agent to learn in the exploration phase a policy that clicks on and moves objects more frequently.
+
+I get that a random policy won't work, but I've difficulty understanding how and why the transition model is trained adversarially. Following is the extract which talks about the same, and I've highlighted parts that I don't completely understand -
+
+
+ ""Our approach is to train the transition model adversarially with an exploration policy that learns to take actions on which the transition model has a high error. Such difficult-to-predict actions should be those that move objects (given that others would leave the scene unchanged). In this way the exploration policy and the transition model learn together in a virtuous cycle. This approach is a form of curiosity-driven exploration, as previously described in both the psychology (Gopnik et al., 1999) and the reinforcement learning literature (Pathak et al., 2017; Schmidhuber, 1990a,b).""
+
+
+
+- How does it help to take actions on which the transition model has a
+high error?
+- I don't exactly see how a virtuous cycle is in action
+
+
+Could someone please explain? Thanks a lot!
+"
+"['reinforcement-learning', 'ddpg']"," Title: How can DDPG handle the discrete action space?Body: I am wondering how can DDPG or DPG handle the discrete action space. There are some papers saying that use Gumbel softmax with DDPG can make the discrete action problem be solved. However, will the Gumbel softmax make the deterministic policy be the stochastic one? If not, how can that be achieved?
+"
+"['recommender-system', 'knowledge-graph']"," Title: What are multi-hop relational paths?Body: What are multi-hop relational paths in the context of knowledge graphs (KGs)?
+
+I tried looking it up online, but didn't find a simple explanation.
+"
+"['reinforcement-learning', 'terminology', 'multi-armed-bandits', 'contextual-bandits']"," Title: Are bandits considered an RL approach?Body: If a research paper uses multi-armed bandits (either in their standard or contextual form) to solve a particular task, can we say that they solved this task using a reinforcement learning approach? Or should we distinguish between the two and use the RL term only when it is associated with an MDP formulation?
+
+In fact, each RL course/textbook usually contains a section about bandits (especially when dealing with the exploration-exploitation tradeoff). Additionally, bandits also have the concept of actions and rewards.
+
+I just want to make sure what the right terminology should be, when describing either approach.
+"
+"['reinforcement-learning', 'papers', 'reward-functions', 'recommender-system', 'knowledge-graph']"," Title: Which reward function works for recommendation systems using knowledge graphs?Body: I've been reading this paper on recommendation systems using reinforcement learning (RL) and knowledge graphs (KGs).
+
+To give some background, the graph has several (finitely many) entities, of which some are user entities and others are item entities. The goal is to recommend items to users, i.e. to find a recommendation set of items for every user such that the user and the corresponding items are connected by one reasoning path.
+
+I'm attaching an example of such a graph for more clarity (from the paper itself) -
+
+
+
+In the paper above, they say
+
+
+ First, we do not have pre-defined targeted items for any user, so it is not applicable to use a binary reward indicating whether the user interacts with the item or not. A better design of the reward function is to incorporate the uncertainty of how an item is relevant to a user based on the rich heterogeneous information given by the knowledge graph.
+
+
+I'm not able to understand the above extract, which talks about the reward function to use - binary, or something else. A detailed explanation of what the author is trying to convey in the above extract would really help.
+"
+"['reinforcement-learning', 'q-learning', 'unsupervised-learning', 'eligibility-traces']"," Title: Applying Eligibility Traces to Q-Learning algorithm does not improve results (And might not function well)Body: I am trying to apply Eligibility Traces
to a currently working Q-Learning algorithm.
+
+The reference code for the Q-learning algorithm was taken from this great blog by DeepLizard, but does not include eligibility traces. Link to the code on Google Colab.
+
+I wish to add the eligibility traces by implementing this pseudocode:
+
+Initialize Q(s,a) arbitrarily and e(s,a) = 0, for all s,a
+Repeat (for each episode):
+ Initialize s,a
+ Repeat (for each step of episode):
+ Take action a, observe r,s’
+ Choose a’ from s’ using policy derived from Q (e.g., ϵ-greedy)
+ δ ← r + γ Q(s’,a’) – Q(s,a)
+ e(s,a) ← e(s,a) + 1
+ For all s,a:
+ Q(s,a) ← Q(s,a) + α δ e(s,a)
+ e(s,a) ← γ λ e(s,a)
+ s ← s’ ; a ← a’
+ until s is terminal
+
+
+Taken from HERE
+
+This is my code as I have implemented the pseudo-code - Link
+
+The part that needs to be improved is here:
+
+#Q learning algorithem
+for episode in range(num_episodes):
+ state = env.reset()
+ et_table = np.zeros((state_space_size,action_space_size))
+ done = False
+ reward_current_episode = 0
+
+ for steps in range(max_steps_per_episode):
+ #Exploration-Explotation trade-off
+ exploration_rate_thresh = random.uniform(0,1)
+ if exploration_rate_thresh > exploration_rate:
+ action = np.argmax(q_table[state,:])
+ else:
+ action = env.action_space.sample()
+
+ new_state, reward, done, info = env.step(action)
+
+ #Update Q-table and Eligibility table
+ delta = reward + discount_rate * np.max(q_table[new_state,:]) - q_table[state,action]
+ et_table[state, action] = et_table[state, action] + 1
+
+ for update_state in range(state_space_size):
+ for update_action in range(action_space_size):
+ q_table[update_state, update_action] = q_table[update_state, update_action] + learning_rate * delta * et_table[update_state, update_action]
+ et_table[update_state, update_action] = discount_rate * gamma * et_table[update_state, update_action]
+
+ state = new_state
+ reward_current_episode = reward
+
+ if done==True:
+ break
+
+ #Exploration rate decay
+ exploration_rate = min_exploration_rate + (max_exploration_rate - min_exploration_rate) * np.exp(-exploration_decay_rate*episode)
+
+ rewards_all_episodes.append(reward_current_episode)
+
+
+For a while, I was getting poor results (the average reward over 1000 episodes was around 0.14, while the original non-ET algorithm was averaging 0.69 over the last 1000 episodes), but now I get these errors:
+
+/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:27: RuntimeWarning: overflow encountered in double_scalars
+/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:22: RuntimeWarning: invalid value encountered in double_scalars
+
+"
+"['reinforcement-learning', 'rewards', 'expectation', 'return']"," Title: Shouldn't expected return be calculated for some faraway time in the future $t+n$ instead of current time $t$?Body: I am learning RL for the first time. It may be naive, but it is a bit odd to grasp this idea that, if the goal of RL is to maximize the expected return, then shouldn't the expected return be calculated for some faraway time in the future ($t+n$) instead of current time $t$? It is because we are building our system for the future using current information. ( I am coming from machine learning background and this makes more sense to me).
+
+Generally, the expected return is:
+$$\mathbb{E}[G_t] = \mathbb{E}[ R_{t+1} + R_{t+2} + R_{t+3}+... R_{t+n}]$$
+
+However, shouldn't the expected return be:
+$$\mathbb{E}[G_{t+n}] = \mathbb{E}[R_{t+1} + R_{t+2} + R_{t+3}+... R_{t+n-1}]$$
+"
+"['neural-networks', 'machine-learning', 'image-recognition']"," Title: MNIST Classification code performing with 88%-90% whereas other codes online perform 95% on first epochBody: I have been trying to write code to implement plain neural net without convolution from scratch. I took some help online here and added my code to my github account.
+
+I don't understand why the prediction made by my code is only 88%-90% accurate after the 1st epoch, whereas his code is 95% accurate after the 1st epoch with the same parameters (same Xavier initialization for the weights, biases not initialized, same number of hidden-layer neurons). While his architecture uses 2 hidden layers, my code performed worse with 2 hidden layers. For 1 hidden layer, his code performs similarly (~96%).
+"
+"['natural-language-processing', 'training', 'metric', 'expectation']"," Title: What is meant by the expected BLEU cost when training with BLEU and SIMILE?Body: Recently I was reading a paper based on a new evaluation metric SIMILE. In a section, validation loss comparison had been made for SIMILE and BLEU. The plot showed the expected BLEU cost when training with BLEU and SIMILE.
+
+What I'm unable to understand is what is meant by the expected BLEU cost when training with BLEU and SIMILE? Are there any separate cost functions defined for these scores?
+
+I'm attaching the image of the graph.
+
+
+"
+"['naive-bayes', 'probability-theory', 'bayes-theorem', 'bayesian-probability']"," Title: Understanding how to calculate $P(x|c_k)$ for the Bernoulli naïve Bayes classifierBody: I'm looking at the Bernoulli naïve Bayes classifier on Wikipedia and I understand Bayes theorem along with Gaussian naïve Bayes. However, when looking at how $P(x|c_k)$ is calculated, I don't understand it. The Wikipedia page says its calculated as follows
+$$P(x|c_k) = \prod^{n}_{i=1} p^{x_i}_{ki} (1-p_{ki})^{(1-x_i)}. $$
+They mention that $p_{ki}$ is the probability of class $c_k$ generating the term $x_i$. Does that mean $P(x|c_k)$? Because, if so, that doesn't make sense, since to calculate it we would need to have calculated it already. So what is $p_{ki}$?
+And in the first part, after the product symbol, are they raising this probability to the power of $x_i$, or does that again just mean 'the probability of class $c_k$ generating the term $x_i$'?
+I also don't understand the intuition behind why or how this calculates $P(x|c_i)$.
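+
+For concreteness, here is a tiny numerical sketch of how I currently read the formula (all numbers are made up; p_k would be the per-feature Bernoulli parameters estimated for one class $c_k$, e.g. the fraction of training documents of that class containing each term):
+
+    import numpy as np
+
+    # Hypothetical per-feature probabilities p_{ki} for one class c_k.
+    p_k = np.array([0.8, 0.1, 0.5])
+
+    # A binary feature vector x (term present = 1, absent = 0).
+    x = np.array([1, 0, 1])
+
+    # P(x | c_k) = prod_i p_{ki}^{x_i} * (1 - p_{ki})^{(1 - x_i)}
+    likelihood = np.prod(p_k**x * (1 - p_k)**(1 - x))
+    print(likelihood)  # 0.8 * 0.9 * 0.5 = 0.36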
+"
+"['reinforcement-learning', 'actor-critic-methods', 'environment', 'advantage-actor-critic']"," Title: What is the advantage of using more than one environment with the advantage actor-critic?Body: make_env = lambda: ptan.common.wrappers.wrap_dqn(gym.make(""PongNoFrameskip-v4""))
+ envs = [make_env() for _ in range(NUM_ENVS)]
+
+
+Here is a code you can look at.
+
+The two above lines of code create multiple environments for the game of Atari Pong with the A2C algorithm.
+
+I understand why it is very useful to have multiple agents working on different instances of the same environment as it is presented in A3C (i.e. an asynchronous version of A2C). However, in the above code, it has a single agent working on different instances of the same environment.
+
+What is the advantage of using more than one environment with a single agent?
+
+UPDATE
+
+class GymEnvVec:
+ def __init__(self, name, n_envs, seed):
+ self.envs = [gym.make(name) for i in range(n_envs)]
+ [env.seed(seed + 10 * i) for i, env in enumerate(self.envs)]
+
+ def reset(self):
+ return [env.reset() for env in self.envs]
+
+ def step(self, actions):
+ return list(zip(*[env.step(a) for env, a in zip(self.envs, actions)]))
+
+"
+"['reinforcement-learning', 'training', 'q-learning']"," Title: Does Q Learning learn from an opponent playing random moves?Body: I've created a Q Learning algorithm to play Connect Four against an opponent who just chooses a random free column. My Q Agent is currently only winning about 0.49 games on average (30,000 episodes). Will my Q Agent actually learn from these episodes, seeing as its opponent isn't 'trying' to beat it, as there's no strategy behind its random choices? Or should this not matter – if the Q Agent is playing enough games, it doesn't matter how good/bad its opponent is?
+"
+"['deep-learning', 'reference-request', 'topology']"," Title: Is there any application of topology to deep learning?Body: Is there any application of topology (as in math discipline) to deep learning? If so, what are some examples?
+"
+"['neural-networks', 'reinforcement-learning', 'dqn', 'deep-rl']"," Title: What should the target be when the neural network outputs multiple Q values in deep Q-learning?Body: I have some gaps in my understanding regarding the performing of the gradient descent in Deep - Q networks. The original deep q network for Atari performs a gradient descent step to minimise $y_j - Q(s_j,a_j,\theta)$, where $y_j = r_j + \gamma max_aQ(s',a',\theta)$.
+
+In the example where I sample a single experience $(s_1,a_2,r_1,s_2)$ and I try to conduct a single gradient descent step, then feeding in $s_1$ to the neural network outputs an array of $Q(s_1,a_0), Q(s_1,a_1), Q(s_1,a_2), \dots$ values.
+
+When doing gradient descent update for this single example, should the target output to set for the network be equivalent to $Q(s_1,a_0), Q(s_1,a_1), r_1 + \gamma max_{a'}Q(s_2,a',\theta), Q(s_1,a_3), \dots$ ?
+
+I know the inputs to the neural network are $s_j$, which give the corresponding Q values. However, I cannot concretize the target values toward which the network should be optimized.
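+
+To make the question concrete, this is roughly how I imagine constructing the target vector for a single sampled experience (a sketch of my assumption, not something I know to be correct; q_net is assumed to be a Keras-style model with a predict method):
+
+    import numpy as np
+
+    def build_target(q_net, s, a, r, s_next, done, gamma=0.99):
+        # Start from the network's own predictions so that the untouched
+        # actions contribute zero error, then overwrite the taken action.
+        target = q_net.predict(s[np.newaxis])[0]
+        target[a] = r if done else r + gamma * np.max(q_net.predict(s_next[np.newaxis])[0])
+        return target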
+"
+"['reinforcement-learning', 'game-theory', 'multi-agent-systems']"," Title: Shouldn't the utility function of two-player zero-sum games be in the range $[-1, 1]$?Body: In Appendix B of MuZero, they say
+
+
+ In two-player zero-sum games the value functions are assumed to be bounded within the $[0, 1]$ interval.
+
+
+I'm confused about the boundary: Shouldn't the value/utility function be in the range of [-1,1] for two-player zero-sum games?
+"
+"['reinforcement-learning', 'rewards', 'intelligent-agent', 'learning-algorithms', 'environment']"," Title: How do you know if an agent has learnt its environment in reinforcement learning?Body: I'm new to reinforcement learning and trying to understand it.
+
+If you train an agent using a reinforcement learning algorithm (discrete or continuous) on an environment (real or simulated), then how do you know if the agent has learnt its environment? Should it reach its goal on every run (episode)? (Any literature references are also welcome)
+
+Is this related to the reward threshold defined in the environment?
+
+What happens if you continue training after the agent has learnt the environment? Will it perform by reaching its goal every time or will there be failed episodes?
+"
+"['deep-learning', 'reinforcement-learning', 'terminology', 'unsupervised-learning', 'supervised-learning']"," Title: How can reinforcement learning be unsupervised learning if it uses deep learning?Body: I was watching a video in my online course where I'm learning about A.I. I am a very beginner in it.
+
+At one point in the course, the instructor says that reinforcement learning (RL) needs a deep learning model (NN) to perform an action. But for that, we need expected results in our model for the NN to learn how to predict the Q-values.
+
+Nevertheless, at the beginning of the course, they told me that RL is an unsupervised learning approach, because the agent performs the action, receives the response from the environment, and finally takes the most likely action, that is, the one with the highest Q-value.
+
+But if I'm using deep learning in RL, for me, RL looks like a supervised learning approach. I'm a little confused about these things, could someone give me clarifications about them?
+
+
+"
+"['reinforcement-learning', 'sutton-barto']"," Title: Doubt regarding the proof of convergence of $\epsilon$ soft policies without exploring startsBody: In page 125 of Sutton and Barto (second last paragraph) the proof for equality of $v_{\pi}$ and $v_*$ for $\epsilon$ soft policies is given. But I could not understand the statement explaining the proof:
+
+Consider a new environment that is just like the original environment, except with the requirement that policies be $\epsilon$-soft “moved inside” the environment. The new environment has the same action and state set as the original
+and behaves as follows. If in state $s$ and taking action $a$, then with probability
+$1 - \epsilon$ the new environment behaves exactly like the old environment. With
+probability $\epsilon$ it repicks the action at random, with equal probabilities, and
+then behaves like the old environment with the new, random action. The best
+one can do in this new environment with general policies is the same as the
+best one could do in the original environment with $\epsilon$-soft policies.
+
+What is the meaning of environment here? And what is this new thing/argument (provided above) the authors are describing to arrive at the proof?
+"
+"['neural-networks', 'non-linear-regression']"," Title: solving xor function using a neural network with no hidden layersBody: xor is a non-linear dataset. It cannot be solved with any number of perceptron based neural network but when the perceptions are applied the sigmoid activation function, we can solve the xor dataset.
+
+But I came across a source where the following statement is stated as False
+
+A two layer (one input layer; one output layer; no hidden layer) neural network can represent the XOR function.
+
+However, I have trained a model with no hidden layers, which gives the following result:
+
+[INFO] data=[0 0],ground-truth=0, pred=0.5161, step=1
+[INFO] data=[0 1],ground-truth=1, pred=0.5000, step=1
+[INFO] data=[1 0],ground-truth=1, pred=0.4839, step=0
+[INFO] data=[1 1],ground-truth=0, pred=0.4678, step=0
+
+
+So, if I apply a softmax classifier, I can separate the XOR dataset with a NN without any hidden layer. This makes the statement incorrect.
+
+Is it true that we cannot separate a non-linear dataset without any hidden layers in a neural network? If yes, where am I wrong in my reasoning, given the training of the NN I have done above?
+"
+"['reinforcement-learning', 'minimax']"," Title: What happens if the opponent doesn't play optimally in minimax?Body: I just read an article about the minimax algorithm. When you design the algorithm, you assume that your opponent is a perfect player, i.e. it plays optimally.
+
+Let's consider the game of chess. What happens if the opponent plays irrationally or sub-optimally? Do you still have a guarantee that you are going to win?
+"
+"['neural-networks', 'vanishing-gradient-problem', 'exploding-gradient-problem']"," Title: What are the common pitfalls that we could face when training neural networks?Body: Apart from the vanishing or exploding gradient problems, what are other problems or pitfalls that we could face when training neural networks?
+"
+"['reinforcement-learning', 'training', 'comparison', 'testing']"," Title: What is the difference between training and testing in reinforcement learning?Body: In reinforcement learning (RL), what is the difference between training and testing an algorithm/agent? If I understood correctly, testing is also referred to as evaluation.
+
+As I see it, both imply the same procedure: select an action, apply to the environment, get a reward, and next state, and so on. But I've seen that, e.g., the Tensorforce RL framework allows running with or without evaluation.
+"
+"['neural-networks', 'deep-learning', 'reference-request']"," Title: What are some resources with exercises related to neural networks?Body: I am asking for a book (or any other online resource) where we can solve exercises related to neural networks, similar to the books or online resources dedicated to mathematics where we can solve mathematical exercises.
+"
+"['natural-language-processing', 'reference-request', 'word-embedding']"," Title: Is there a good book or paper on word embeddings?Body: Is there a good and modern book that focuses on word embeddings and their applications? It would also be ok to provide the name of a paper that provides a good overview of word embeddings.
+"
+"['neural-networks', 'machine-learning', 'neuroevolution', 'neural-architecture-search']"," Title: Can operations like convolution and pooling be discovered with a neural architecture search approach?Body: From Neural Architecture Search: A Survey, first published in 2018:
+
+
+ Moreover, common search spaces are also based on predefined building
+ blocks, such as different kinds of convolutions and pooling, but do
+ not allow identifying novel building blocks on this level; going
+ beyond this limitation might substantially increase the power of NAS.
+
+
+Has anyone tried that? If not, do you have any thoughts about the feasibility of this idea?
+"
+"['reinforcement-learning', 'policy-gradients', 'proximal-policy-optimization']"," Title: PPO algorithm converges on only one actionBody: I have taken some reference implementations of PPO algorithm and am trying to create an agent which can play space invaders . Unfortunately from the 2nd trial onwards (after training the actor and critic N Networks for the first time) , the probability distribution of the actions converges on only action and the PPO loss and the critic loss converges on only one value.
+
+I wanted to understand the probable reasons why this might occur. I really can't run the code in my cloud VMs without being sure that I am not missing anything, as the VMs are very costly to use. I would appreciate any help or advice in this regard; if required, I can post the code as well. The hyperparameters used are as follows:
+
+clipping_val = 0.2, critic_discount = 0.5, entropy_beta = 0.01, gamma = 0.99, lambda = 0.95
+
+Code repo: github.com/superchiku/ReinforcementLearning
+"
+"['reinforcement-learning', 'ddpg']"," Title: A question about the Wolpertinger algorithm (Deep RL in Large Discrete Action Spaces paper)Body: I am trying to reproduce the recommender task experiment from this paper.
+The paper suggests to embed discrete actions into continuous action space and then to use the proposed Wolpertinger agent.
+The Wolpertinger agent is as follows:
+
+DDPG produces so called proto action $f(s)$, then KNN finds k nearest embeddings of discrete actions to this proto action, and then we choose the one of these $k$ embeddings, which has the highest Q-function value. The whole is a full policy, $\pi(\cdot)$.
+
+While training we optimize the critic using only the full policy (DDPG + a choice of a neighbour).
+The actor is optimized using proto action output in order to be differentiable, $Q(s, f_{\theta}(s)) \rightarrow \max_{\theta}$.
+
+The problem is that the critic does not know that it is used to optimize the continuous output of the algorithm. It is trained only to value the embedded actions. As I understand it, we hope that the continuity of the critic will help us with this, but what I observe is that proto actions constantly end up in corners with no real actions, where the Q-function unreasonably takes greater values (because it is simply untrained in such regions). The DDPG output is normalized to match the embeddings' bounds to make these empty spaces not so large.
+
+It seems to me that there is a way to make the embeddings more appropriate for the task and achieve a higher reward. However, when I use $k = |\mathcal{A}|$, proto actions are not considered and the algorithm works pretty well.
+Usually I use $|\mathcal{A}| = 100$ and $k = 10$.
+I have trained them with skip-gram, based on the users' history.
+
+Below are 2D projections of my embeddings onto the first 10 axes (the embeddings are from $\mathbb{R}^{20}$). Independently of the state, the proto actions are about the same. The blue point is the proto action for some fixed state.
+With some state fixed, the value of $Q(s, f(s))$ is always higher than $Q(s, a)$ for any $a \in \mathcal{A}$.
+
+
+
+Would be glad to get any help, especially the help of people familiar with this algorithm.
+Do I need to make the embeddings fill the proto-action range (some hyperrectangle, in case we have the tanh activation in the actor)? What is the way to fill such a domain with embeddings?
+"
+"['neural-networks', 'deep-learning', 'feedforward-neural-networks', 'accuracy', 'sigmoid']"," Title: Accuracy dropped when I ran the program the second timeBody: I was following a tutorial about Feed-Forward Networks and wrote this code for a simple FFN :
+import numpy as np
+import matplotlib.pyplot as plt
+from sklearn.datasets import make_blobs
+from sklearn.model_selection import train_test_split
+from sklearn.metrics import accuracy_score, mean_squared_error
+from tqdm import tqdm_notebook
+
+class FirstFFNetwork:
+
+ #intialize the parameters
+ def __init__(self):
+ self.w1 = np.random.randn()
+ self.w2 = np.random.randn()
+ self.w3 = np.random.randn()
+ self.w4 = np.random.randn()
+ self.w5 = np.random.randn()
+ self.w6 = np.random.randn()
+ self.b1 = 0
+ self.b2 = 0
+ self.b3 = 0
+
+ def sigmoid(self, x):
+ return 1.0/(1.0 + np.exp(-x))
+
+ def forward_pass(self, x):
+ #forward pass - preactivation and activation
+ self.x1, self.x2 = x
+ self.a1 = self.w1*self.x1 + self.w2*self.x2 + self.b1
+ self.h1 = self.sigmoid(self.a1)
+ self.a2 = self.w3*self.x1 + self.w4*self.x2 + self.b2
+ self.h2 = self.sigmoid(self.a2)
+ self.a3 = self.w5*self.h1 + self.w6*self.h2 + self.b3
+ self.h3 = self.sigmoid(self.a3)
+ return self.h3
+
+ def grad(self, x, y):
+ #back propagation
+ self.forward_pass(x)
+
+ self.dw5 = (self.h3-y) * self.h3*(1-self.h3) * self.h1
+ self.dw6 = (self.h3-y) * self.h3*(1-self.h3) * self.h2
+ self.db3 = (self.h3-y) * self.h3*(1-self.h3)
+
+ self.dw1 = (self.h3-y) * self.h3*(1-self.h3) * self.w5 * self.h1*(1-self.h1) * self.x1
+ self.dw2 = (self.h3-y) * self.h3*(1-self.h3) * self.w5 * self.h1*(1-self.h1) * self.x2
+ self.db1 = (self.h3-y) * self.h3*(1-self.h3) * self.w5 * self.h1*(1-self.h1)
+
+ self.dw3 = (self.h3-y) * self.h3*(1-self.h3) * self.w6 * self.h2*(1-self.h2) * self.x1
+ self.dw4 = (self.h3-y) * self.h3*(1-self.h3) * self.w6 * self.h2*(1-self.h2) * self.x2
+ self.db2 = (self.h3-y) * self.h3*(1-self.h3) * self.w6 * self.h2*(1-self.h2)
+
+
+ def fit(self, X, Y, epochs=1, learning_rate=1, initialise=True, display_loss=False):
+
+ # initialise w, b
+ if initialise:
+ self.w1 = np.random.randn()
+ self.w2 = np.random.randn()
+ self.w3 = np.random.randn()
+ self.w4 = np.random.randn()
+ self.w5 = np.random.randn()
+ self.w6 = np.random.randn()
+ self.b1 = 0
+ self.b2 = 0
+ self.b3 = 0
+
+ if display_loss:
+ loss = {}
+
+ for i in tqdm_notebook(range(epochs), total=epochs, unit="epoch"):
+ dw1, dw2, dw3, dw4, dw5, dw6, db1, db2, db3 = [0]*9
+ for x, y in zip(X, Y):
+ self.grad(x, y)
+ dw1 += self.dw1
+ dw2 += self.dw2
+ dw3 += self.dw3
+ dw4 += self.dw4
+ dw5 += self.dw5
+ dw6 += self.dw6
+ db1 += self.db1
+ db2 += self.db2
+ db3 += self.db3
+
+ m = X.shape[1]
+ self.w1 -= learning_rate * dw1 / m
+ self.w2 -= learning_rate * dw2 / m
+ self.w3 -= learning_rate * dw3 / m
+ self.w4 -= learning_rate * dw4 / m
+ self.w5 -= learning_rate * dw5 / m
+ self.w6 -= learning_rate * dw6 / m
+ self.b1 -= learning_rate * db1 / m
+ self.b2 -= learning_rate * db2 / m
+ self.b3 -= learning_rate * db3 / m
+
+ if display_loss:
+ Y_pred = self.predict(X)
+ loss[i] = mean_squared_error(Y_pred, Y)
+
+ if display_loss:
+ plt.plot(loss.values())
+ plt.xlabel('Epochs')
+ plt.ylabel('Mean Squared Error')
+ plt.show()
+
+ def predict(self, X):
+ #predicting the results on unseen data
+ Y_pred = []
+ for x in X:
+ y_pred = self.forward_pass(x)
+ Y_pred.append(y_pred)
+ return np.array(Y_pred)
+
+The data was generated as follows :
+data, labels = make_blobs(n_samples=1000, centers=4, n_features=2, random_state=0)
+labels_orig = labels
+labels = np.mod(labels_orig, 2)
+X_train, X_val, Y_train, Y_val = train_test_split(data, labels, stratify=labels, random_state=0)
+
+When I ran the program yesterday, I had gotten a training accuracy of about 98% and a test accuracy of 94%. But when I ran it today, suddenly the accuracy dropped to 60-70%. I tried to scatter plot the result, and it looked like it behaved as if it were a single sigmoid instead of the Feed-Forward Network.
+ffn = FirstFFNetwork()
+#train the model on the data
+ffn.fit(X_train, Y_train, epochs=2000, learning_rate=.01, display_loss=False)
+#predictions
+Y_pred_train = ffn.predict(X_train)
+Y_pred_binarised_train = (Y_pred_train >= 0.5).astype("int").ravel()
+Y_pred_val = ffn.predict(X_val)
+Y_pred_binarised_val = (Y_pred_val >= 0.5).astype("int").ravel()
+accuracy_train_1 = accuracy_score(Y_pred_binarised_train, Y_train)
+accuracy_val_1 = accuracy_score(Y_pred_binarised_val, Y_val)
+#model performance
+print("Training accuracy", round(accuracy_train_1, 2))
+print("Validation accuracy", round(accuracy_val_1, 2)
+
+I do not understand how this happened and cannot figure it out.
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'backpropagation', 'gradient-descent']"," Title: Backward pass of CNN like Resnet: how to manually compute flops during backprop?Body: I've been trying to figure out how to compute the number of Flops in backward pass of ResNet. For forward pass, it seems straightforward: apply the conv filters to the input for each layer. But how does one do the Flops counts for gradient computation and update of all weights during the backward pass?
+
+Specifically,
+
+
+- how to compute Flops in gradient computations for each layer?
+- what all gradients need to be computed so Flops for each of those can be counted?
+- How many Flops in computation of gradient for Pool, BatchNorm, and Relu layers?
+
+
+I understand the chain rule for gradient computation, but I'm having a hard time formulating how it would apply to the weight filters in the conv layers of ResNet and how many Flops each of those would take. It would be very useful to get any comments about a method to compute the total Flops for the backward pass. Thanks
+"
+"['philosophy', 'agi', 'superintelligence', 'singularity']"," Title: What does a self-improving artificial general intelligence with finite resources and infinite time do?Body: What would happen when an artificial general intelligence can improve itself over a long time, with limited resources?
+
+The assumption is that it has a large but finite amount of computing power, and can not escape that limit to find resources. Let's assume the limit to be on the order 100 human brains or so - the limits are not important, only that it is limited.
+
+Now, we let it run to self-improve as much as it can with the given resources, until it goes to some stable or periodic state.
+
+What it can do is at least one of the following:
+
+
+- converge to some state
+- oscillate between two or more states
+- go into a chaotic state
+- stop doing anything
+- disable itself in some other way than just stop doing anything
+
+
+There are certainly more complex behaviors possible:
+
+
+- become maximally happy, and then periodically redefine what happy means
+- hacking its reward function in some other way
+
+
+I would expect it to converge to a single state - but that seems naive.
+
+Are there any ideas on how it would end up?
+"
+"['deep-learning', 'reinforcement-learning', 'supervised-learning', 'reinforce']"," Title: Can a typical supervised learning problem be solved with reinforcement learning methods?Body: Let's say I want to teach a neural to classify images, and, for some reason, I insist on using reinforcement learning rather than supervised learning.
+
+I have a dataset of images and their matching classes. Then, for each image, I could define a reward function which is $1$ for classifying it right and $-1$ for classifying it wrong (or perhaps even define a more complicated reward function where some mistakes are less costly than others). For each image $x^i$, I can loop through each class $c$ and use a vanilla REINFORCE step: $\theta = \theta + \alpha \nabla_{\theta}log \pi_{\theta}(c|x^i)r$.
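+
+To make the comparison concrete, here is roughly what I mean by that step for a single image (just a sketch of my own idea, assuming PyTorch; the toy linear model and class count are placeholders):
+
+import torch
+import torch.nn as nn
+
+model = nn.Linear(784, 10)                    # toy classifier over 10 classes
+opt = torch.optim.SGD(model.parameters(), lr=1e-2)
+
+x = torch.randn(1, 784)                       # one image
+true_class = 3
+
+log_probs = torch.log_softmax(model(x), dim=1)
+
+opt.zero_grad()
+loss = torch.zeros(())
+for c in range(10):                           # loop over every class, as described above
+    r = 1.0 if c == true_class else -1.0
+    loss = loss - log_probs[0, c] * r         # ascend on log pi(c|x) * r by minimising its negative
+loss.backward()
+opt.step()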
+
+Would that be different than using standard supervised learning methods (for example, the cross-entropy loss)? Should I expect different results?
+
+This method actually seems better, since I could define a custom reward for each misclassification, but I've never seen anyone use something like that.
+"
+"['neural-networks', 'python', 'evolutionary-algorithms', 'neat', 'neuroevolution']"," Title: How to select good inputs and fitness function to achive good results with NEAT for Icy Tower botBody: I'm trying to make a bot to the famous "Icy Tower" game.
+I rebuilt the game using pygame and I'm trying to build the bot using Python-NEAT.
+Every generation a population of 70 characters tries to jump to the next platform and increase their fitness. right now the fitness is the number of platforms they jumped on, each platform gives +10.
+
+The problem I'm facing is that the bot isn't learning well enough: after 1000 generations the best score was around 200 (it can reach 200 even in the first few generations by accident; 200 means 20 platforms, which is not a lot).
+When I look at the characters jumping, it looks like they just always jump and go left or right, not deliberately aiming for the next platform.
+I tried several input configurations to make the bot perform better. but nothing really helped.
+these are the inputs I tried to mess around with:
+
+- pos.x, pos.y
+- velocity.x, velocity.y
+- isOnPlatform (bool)
+- [plat.x, plat.y, plat.width] (list of the 3-7 next platforms locations, also tried distance from character in x,y)
+- [prev.x, prev.y] (2-6 previous character positions)
+
+I'm not so proficient with neuroevolution and I'm probably doing something wrong. glad if you could explain what's causing the bot to be so bad or what's not helping him to learn properly.
+Although I think that the fitness function and the inputs should be the only problem I'm attaching the python-NEAT config file.
+[NEAT]
+fitness_criterion = max
+fitness_threshold = 10000
+pop_size = 70
+reset_on_extinction = False
+
+[DefaultGenome]
+# node activation options
+activation_default = tanh
+activation_mutate_rate = 0.0
+activation_options = tanh
+
+# node aggregation options
+aggregation_default = sum
+aggregation_mutate_rate = 0.0
+aggregation_options = sum
+
+# node bias options
+bias_init_mean = 0.0
+bias_init_stdev = 1.0
+bias_max_value = 30.0
+bias_min_value = -30.0
+bias_mutate_power = 0.5
+bias_mutate_rate = 0.7
+bias_replace_rate = 0.1
+
+# genome compatibility options
+compatibility_disjoint_coefficient = 1.0
+compatibility_weight_coefficient = 0.5
+
+# connection add/remove rates
+conn_add_prob = 0.5
+conn_delete_prob = 0.5
+
+# connection enable options
+enabled_default = True
+enabled_mutate_rate = 0.01
+
+feed_forward = True
+initial_connection = full
+
+# node add/remove rates
+node_add_prob = 0.2
+node_delete_prob = 0.2
+
+# network parameters
+num_hidden = 6
+num_inputs = 11
+num_outputs = 3
+
+# node response options
+response_init_mean = 1.0
+response_init_stdev = 0.0
+response_max_value = 30.0
+response_min_value = -30.0
+response_mutate_power = 0.0
+response_mutate_rate = 0.0
+response_replace_rate = 0.0
+
+# connection weight options
+weight_init_mean = 0.0
+weight_init_stdev = 1.0
+weight_max_value = 30
+weight_min_value = -30
+weight_mutate_power = 0.5
+weight_mutate_rate = 0.8
+weight_replace_rate = 0.1
+
+[DefaultSpeciesSet]
+compatibility_threshold = 3.0
+
+[DefaultStagnation]
+species_fitness_func = max
+max_stagnation = 3
+species_elitism = 2
+
+[DefaultReproduction]
+elitism = 3
+survival_threshold = 0.2
+
+
+- Note: the previous character positions are the position in the previous frame, and if the game runs at 60 fps the previous position is not that different from the current one...
+
+- Note2: the game score is a bit more complex than just jumping on platforms, the bot should also be rewarded for combos that can make him jump higher. the combo system is already implemented but I first want to see the bot aiming to the next platform before he learns to jump combo.
+
+
+"
+"['autoencoders', 'image-processing']"," Title: How to evaluate the performance of an autoencoder trained on image data?Body: I am training an autoencoder on (general) image data.
+
+I use binary crossentropy loss function, but it is not very informative when I want to evaluate the performance of my autoencoder.
+
+An obvious performance metric would be pixel-wise MSE, but it has its own downsides, shown on some toy examples in an image from paper from Pihlgren et al.
+
+
+
+In the same paper, the authors suggest using perceptual loss, but it seems complicated and not well-studied.
+
+I found some other instances of this question, but there doesn't seem to be a consensus.
+
+I understand that it depends on the application, but I want to know if there are some general guidelines as to which performance metric to use when training autoencoders on image data.
+"
+"['reinforcement-learning', 'open-ai', 'environment', 'gym']"," Title: OpenAI Gym: Multiple actions in one stepBody: I'm trying to design an OpenAI Gym environment in which multiple users/players perform actions over time. It's round based and each user needs to take an action before the round is evaluated and the next round starts. The action for one user can be model as a gym.spaces.Discrete(5)
space.
+I want my RL agent to make decisions for all users. I'm wondering how to take multiple actions before progressing time and calculating the reward.
+
+Basically, what I want is:
+
+
+
+obs = env.reset()
+user_actions = []
+for each user:
+ user_actions.append(agent.predict(obs))
+obs, reward, done, _ = env.step(user_actions)
+
+
+So the problem is that I don't immediately know the reward after getting an action since I need to collect all actions before evaluating the round.
+
+I could of course extend actions to include actions of all users in one go. But this would be problematic if I have a really large number of users or it even changes over time, right?
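+
+For reference, this is what I mean by extending the action space into one joint action (a sketch only, assuming a fixed number of users):
+
+from gym import spaces
+
+N_USERS = 4                                                  # assumed fixed for this sketch
+joint_action_space = spaces.MultiDiscrete([5] * N_USERS)     # one Discrete(5) per user
+
+sample = joint_action_space.sample()                         # e.g. array([3, 0, 4, 1])
+print(sample.shape)                                          # (4,)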
+
+I found these two (1, 2) related questions, but they didn't solve my problem.
+"
+"['reinforcement-learning', 'monte-carlo-methods']"," Title: Why do we update $W$ with $\frac{1}{\mu (A_t | S_t)}$ instead of $\frac{\pi (A_t | S_t)}{\mu (A_t | S_t)}$ in off-policy Monte Carlo control?Body: I had the same question when I am reading the RL textbook from Sutton Bartol as posted here.
+
+
+
+Why do we update $W$ with $\frac{1}{\mu (A_t | S_t)}$ instead of $\frac{\pi (A_t | S_t)}{\mu (A_t | S_t)}$?
+
+It seems that, with the updating rule from the textbook, whatever action $\mu$ decides to choose, we automatically assume that $\pi$ will choose it with 100% probability. But $\pi$ is greedy with respect to Q. How does this assumption make sense?
+"
+"['algorithm', 'reference-request', 'unsupervised-learning', 'pattern-recognition']"," Title: When labelled data is not available, what are some common unsupervised learning algorithms for pattern recognition that can be used?Body: In pattern recognition systems, when no labeled data is available, what are some common unsupervised learning algorithms for pattern recognition, that can be used?
+"
+"['neural-networks', 'deep-learning', 'classification']"," Title: Combine two feature vectors for a correct input of a neural networkBody: Let's consider this scenario. I have two conceptually different video datasets, for example a dataset A composed of videos about cats and a dataset B composed of videos about houses. Now, I'm able to extract a feature vectors from both the samples of the datasets A and B, and I know that, each sample in the dataset A is related to one and only one sample in the dataset B and they belong to a specific class (there are only 2 classes).
+
+For example:
+
+Sample x1 AND sample y1 ---> Class 1
+Sample x2 AND sample y2 ---> Class 2
+Sample x3 AND sample y3 ---> Class 1
+and so on...
+
+
+If I extract the feature vectors from samples in both datasets , which is the best way to combine them in order to give a correct input to the classifier (for example a neural network) ?
+
+feature vector v1 extracted from x1 + feature vector v1' extracted from y1 ---> input for classifier
+
+I ask this because I suspect that neural networks only take one vector as input, while I have to combine two vectors
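+
+The simplest combination I can think of is plain concatenation of the two vectors (a sketch, with made-up sizes):
+
+import numpy as np
+
+v1 = np.random.rand(512)              # feature vector extracted from x1 (cat video)
+v2 = np.random.rand(512)              # feature vector extracted from y1 (house video)
+
+combined = np.concatenate([v1, v2])   # a single input vector of size 1024 for the classifier
+print(combined.shape)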
+"
+"['deep-learning', 'ai-design', 'activation-functions', 'regression']"," Title: Can SqueezeNet be used for regression?Body: I want a model that outputs the pixel coordinates of the tip of my forefinger, and whether it's touching something or not. Those would be 3 output neurons: 2 for the X-Y coordinates and 1, with a sigmoid activation, wich predicts the probability whether it's touching or not.
+
+What do I need to change in the squeezenet model in order to do this?
+
+(PS: the trained model needs to be the fastest possible (in latency), that's why I wanted to use SqueezeNet)
+"
+"['reinforcement-learning', 'deep-rl', 'markov-decision-process', 'rewards', 'open-ai']"," Title: State-of-the-art algorithms not working on a custom RL environmentBody: I'm trying to train a RL agent on a custom, highly stochastic environment (MDP). In order to do so I'm using existing implementations of state-of-the-art RL algorithms as provided by Stable Baselines. However, no matter what algorithm I try out and despite extensive hyperparameter tuning, I'm failing to obtain any meaningful result. More precisely, a trivial ""always perform the same action (0.7,0.7) each time"" strategy works better than any of the obtained policies. The environment is highly stochastic (model of a financial market). How likely is it that the environment is simply ""too stochastic"" for any meaningful learning to take place? If interested, here's the environment code:
+
+class environment1(gym.Env):
+def __init__(self):
+ self.t = 0.0 # initial time
+ self.s = 100.0 # initial midprice value
+ self.T = 1.0 # trading period length
+ self.sigma = 2 # volatility constant
+ self.dt = 0.005 # time step
+ self.q = 0.0 # initial inventory
+ self.oldq = 0 # initial old inventory
+ self.x = 0 # initial wealth/cash
+ self.gamma = 0.1 # risk aversion parameter
+ self.k = 1.5 # intensity of arivals of orders
+ self.A = 140 # constant
+ self.done = False
+ self.info = []
+ high = np.array([np.finfo(np.float32).max,
+ np.finfo(np.float32).max,
+ np.finfo(np.float32).max],
+ dtype=np.float32)
+ self.action_space = spaces.Discrete(100)
+ self.observation_space = spaces.Box(-high, high, dtype=np.float32)
+ self.seed()
+ self.state = None
+
+def seed(self, seed=None):
+ self.np_random, seed = seeding.np_random(seed)
+ return [seed]
+
+def step(self, action):
+ old_x, old_q, old_s = self.x, self.q, self.s # current becomes old
+ self.t += 0.005 # time increment
+ P1 = self.dt*self.A*np.exp(-self.k*(action//10)/10) # probabilities of execution
+ P2 = self.dt*self.A*np.exp(-self.k*(action%10)/10)
+ if random.random() < P1: # decrease inventory increase cash
+ self.q -= 1
+ self.x += self.s + (action//10)/10
+ if random.random() < P2: # increase inventory decrease cash
+ self.q += 1
+ self.x -= self.s - (action%10)/10
+ if random.random() < 0.5:
+ self.s += np.sqrt(0.005)*self.sigma
+ else:
+ self.s -= np.sqrt(0.005)*self.sigma
+ self.state = np.array([self.s-100,(self.q-34)/25,(self.t-0.5)/0.29011491975882037])
+ reward = self.x+self.q*self.s-(self.oldx+self.oldq*self.olds)
+ if np.isclose(self.t, self.T):
+ self.done = True
+ self.oldq = self.q
+ self.oldx = self.x
+ self.olds = self.s
+ return self.state, reward, self.done, {}
+
+def reset(self):
+ self.t = 0.0 # initial time
+ self.s = 100.0 # initial midprice value
+ self.T = 1.0 # trading period length
+ self.sigma = 2 # volatility constant
+ self.dt = 0.005 # time step
+ self.q = 0.0 # initial inventory
+ self.oldq = 0.0
+ self.oldx = 0.0
+ self.olds = 100.0
+ self.x = 0.0 # initial wealth/cash
+ self.gamma = 0.1 # risk aversion parameter
+ self.k = 1.5 # intensity of arivals of orders
+ self.A = 140 # constant
+ self.done = False
+ self.info = []
+ self.state = np.array([self.s-100,(self.q-34)/25,(self.t-0.5)/0.29011491975882037])
+ return self.state
+
+
+The state space is mostly normalized. The action space consists of 100 possible discrete actions (integers from 0 to 99, which are then transformed to (0.0,0.0), (0.0,0.1), ..., (1.0,1.0)). The reward is simply given by the change in the portfolio value (cash + stock).
+
+Note: I've also tried transforming the action space into a continuous one in order to use DDPG but all to no avail.
+"
+"['reinforcement-learning', 'q-learning']"," Title: Q table not converging for an arbitrary experimentBody: This is an experiment in order to understand the working of Q table and Q learning.
+
+I have the states as
+
+states = [0,1,2,3]
+
+I have an arbitrary value for each of these states as shown below (assume index-based mapping) -
+
+arbitrary_values_for_states = [39.9,47.52,32.92,37.6]
+
+I want to find the minimum of the state which will give me the minimum value.
+So I have complemented the values as 50 minus the arbitrary value.
+
+inverse_values_for_states = [50-x for x in arbitrary_values_for_states]
+
+Therefore, I defined reward function as -
+
+def reward(s,a,s_dash):
+ if inverse_values_for_states[s]<inverse_values_for_states[s_dash]:
+ return 1
+ elif inverse_values_for_states[s]>inverse_values_for_states[s_dash]:
+ return -1
+ else:
+ return 0
+
+
+Q table is initialized as -
+Q = np.zeros((4,4))
(np is numpy)
+
+The learning is carried out as -
+
+episodes = 5
+steps = 10
+for episode in range(episodes):
+ s = np.random.randint(0,4)
+ alpha0 = 0.05
+ decay = 0.005
+ gamma = 0.6
+ for step in range(steps):
+ a = np.random.randint(0,4)
+ action.append(a)
+ s_dash = a
+ alpha = alpha0/(1+step*decay)
+ Q[s][a] = (1-alpha)*Q[s][a]+alpha*(reward(s,a,s_dash)+gamma*np.max(Q[s_dash]))
+
+ s = s_dash
+
+
+The problem is, the table doesn't converge.
+
+Example. For the above scenario -
+
+np.argmax(Q[0]) gives 3
+np.argmax(Q[1]) gives 2
+np.argmax(Q[2]) gives 2
+np.argmax(Q[3]) gives 2
+
+
+All of the states should give argmax as 2 (which is actually the index[state] of the minimum value).
+
+Another example,
+
+when I increase steps to 1000 and episodes to 50,
+
+np.argmax(Q[0]) gives 3
+np.argmax(Q[1]) gives 0
+np.argmax(Q[2]) gives 1
+np.argmax(Q[3]) gives 2
+
+
+More steps and episodes should ensure convergence, but this is not visible.
+
+I need help where I am going wrong.
+
+PS: This little experiment is needed to make Q-learning applicable to a larger combinatorial problem. Unless I understand this, I don't think I will be able to do that right.
+Also, there is no terminal state because this is an optimization problem. (And I have heard that Q-learning doesn't necessarily need a terminal state.)
+"
+"['machine-learning', 'bayesian-deep-learning', 'bayesian-neural-networks', 'uncertainty-quantification']"," Title: Is there any research on models that provide uncertainty estimation?Body: Is there any research on machine learning models that provide uncertainty estimation?
+
+If I train a denoising autoencoder on words and put through a noised word, I'd like it to return a certainty that it is correct given the distribution of data it has been trained on.
+
+Answering these questions or metrics for uncertainty are both things I am curious about. Just general ways for models to just say ""I'm not sure"" when it receives something far outside the inputs it's been trained to approximate.
+"
+"['reinforcement-learning', 'policy-gradients']"," Title: What does it mean to parameterise a policy in policy gradient methods?Body: Can you explain policy gradient methods and what it means for the policy to be parameterised? I am reading Sutton and Barto book on reinforcement learning and didn't understand well what it is, can you give some examples?
+"
+"['datasets', 'data-preprocessing']"," Title: How can raw data from a motion sensor (like an IMU) reduced to the main points of the dataBody: How can I reduce the caputured movement data of a person in a way, that I have filtered the main features of the movement. Or how can I detect pattern/ main features in that data set?
+
+I captured some data with a device (inertial sensor and motion capture (x,y,z)) attached to a human body. So I have a huge data set. I want to prepare the data set so I have no noise and no ""unnecessary"" data.
+
+In the end I just want to know for example that certain movement behaviour hints to a clueless guy or a person under stress.
+
+I first thought approaches from association analysis could be useful but I think based on what I have read about the application of such algorithms they are not suitable for this data set.
+"
+"['image-recognition', 'object-detection', 'facial-recognition']"," Title: What is 3D face recognition? and how we can check liveness of a face image?Body: Actually what is mean by 3D face recognition? In normal cases we are extracting face encoding s from a 2D image,right?
+Is 3D face recognition is used for liveness detection? how its possible?
+"
+"['python', 'q-learning', 'actor-critic-methods', 'mean-squared-error']"," Title: Why do we calculate the mean squared error loss to improve the value approximation in Advantage Actor-Critic Algorithm?Body: class AtariA2C(nn.Module):
+ def __init__(self, input_shape, n_actions):
+ super(AtariA2C, self).__init__()
+
+ self.conv = nn.Sequential(
+ nn.Conv2d(input_shape[0], 32, kernel_size=8, stride=4),
+ nn.ReLU(),
+ nn.Conv2d(32, 64, kernel_size=4, stride=2),
+ nn.ReLU(),
+ nn.Conv2d(64, 64, kernel_size=3, stride=1),
+ nn.ReLU(),
+ )
+
+        conv_output_size = self._get_conv_out(input_shape)
+
+ self.policy = nn.Sequential(
+ nn.Linear(conv_output_size, 512),
+ nn.ReLU(),
+ nn.Linear(512, n_actions),
+ )
+
+ self.value = nn.Sequential(
+ nn.Linear(conv_output_size, 512),
+ nn.ReLU(),
+ nn.Linear(512, 1),
+ )
+
+ def _get_conv_out(self, shape):
+ o = self.conv(T.zeros(1, *shape))
+ return int(np.prod(o.shape))
+
+ def forward(self, x):
+ x = x.float() / 256
+ conv_out = self.conv(x).view(x.size()[0], -1)
+ return self.policy(conv_out), self.value(conv_out)
+
+
+In Maxim Lapan's book Deep Reinforcement Learning Hands-on
, after implementing the above network model, it says
+
+
+ The forward pass through the network returns a tuple of two tensors:
+ policy and value. Now we have a large and important function, which
+ takes the batch of environment transitions and returns three tensors:
+ batch of states, batch of actions taken, and batch of Q-values
+ calculated using the formula $$Q(s,a) = \sum_{i=0}^{N-1} \gamma^i r_i
+ + \gamma^N V(s_N)$$ This Q_value will be used in two places: to calculate mean squared error (MSE) loss to improve the value
+ approximation, in the same way as DQN, and to calculate the advantage
+ of the action.
+
+
+I am very confused about a single thing. How and why do we calculate the mean squared error loss to improve the value approximation in Advantage Actor-Critic Algorithm?
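+
+To make sure I am reading the formula right, here is my own sketch (not from the book) of how I imagine the Q-value targets and the value MSE loss being computed for one unrolled chunk:
+
+import torch
+import torch.nn.functional as F
+
+def n_step_q_targets(rewards, last_value, gamma=0.99):
+    # Q(s_t, a_t) = sum_i gamma^i * r_{t+i} + gamma^N * V(s_N), computed backwards
+    q = last_value
+    targets = []
+    for r in reversed(rewards):
+        q = r + gamma * q
+        targets.append(q)
+    return list(reversed(targets))
+
+q_targets = torch.tensor(n_step_q_targets([1.0, 0.0, 0.0, 1.0], last_value=0.5))
+
+values = torch.tensor([0.8, 0.3, 0.2, 1.1], requires_grad=True)   # critic outputs V(s_t)
+value_loss = F.mse_loss(values, q_targets)    # the MSE part the book refers to, as I understand it
+value_loss.backward()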
+"
+"['natural-language-processing', 'word-embedding', 'word2vec', 'text-classification']"," Title: Creating Text Features using word2vecBody: My task is to classify some texts. I have used word2vec to represent text words and I pass them to an LSTM as input. Taking into account that texts do not contain the same number of words, is it a good idea to create text features of fixed dimension using the word2vec word representations of the text and then classify the text using these features as an input of a neural network? And in general is it a good idea to create text features using this method?
+"
+"['natural-language-processing', 'chat-bots', 'natural-language-understanding']"," Title: Designing a chatbot personal project with zero coding experience, using an existing platformBody: My girlfriend has a masters degree in linguistics and would like to create an AI chatbot personal project to show potential employers her linguistics skills since she is struggling to find a job.
+
+Unfortunately she doesn't know how to program except for extremely basic Python skills. She has been searching for weeks for tools to create a chatbot without needing to program, refusing to ask on forums for help so I'm asking the StackExchange community. Sort of like a plugin/widget for Slack, Facebook Messenger or website that you can just install on your website and just concentrate on the workflow/data/conversational design in a similar way to programming in Scratch or Node-Red.
+
+I know nothing about NLP, neural networks or anything like that and I can't understand for the life of me what exactly it is a linguist needs to do in AI. Adversely, she doesn't have the computer knowledge to understand how an API works, or how to get some sort of service and a chat interface are needed to bootstrap her conversational designs with some code somewhere.
+
+So my question is: Is there a way for a linguist to create a chatbot without knowing programming or going in too deep, all by themselves? We looked at tools like hubspot.com, where the chatbot design is either multiple choices questions with predefined answers or offers expensive paid solutions for companies. I'm sure there are free educational or community open-source platforms doing this.
+"
+"['reinforcement-learning', 'recommender-system', 'knowledge-graph', 'knowledge-graph-embeddings']"," Title: Why can't pure KG embedding methods discover multi-hop relations paths?Body: According to Reinforcement Knowledge Graph Reasoning for Explainable Recommendation
+
+
+ pure KG embedding methods lack the ability to discover multi-hop relational paths.
+
+
+Why is it so?
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'python', 'computer-vision']"," Title: An Encoder-Decoder based CNN to predict a tensor of pointsBody: So I have with me a data of rendered 2D images of a 3D object and along with that, I have the image projection coordinates (X, Y)
of all the voxels that are in the camera perspective in that image.
+
+The rendered image
+
+The voxel camera projections(This is just for visualization purposes)
+
+
+I wanted to build a CNN which takes as input a rendered image and outputs all the voxel camera projection coordinates (X, Y). I was thinking of trying out an encoder-decoder based network (like a U-Net), but shouldn't the image obtained after decoding have the same dimensions as the input image? I want my decoder to output the tensor of coordinates, but I am having a hard time thinking about how it would do so.
+"
+"['reinforcement-learning', 'comparison', 'value-functions', 'expectation', 'bellman-equations']"," Title: Are these two definitions of the state-action value function equivalent?Body: I have been reading the Sutton and Barto textbook and going through David Silvers UCL lecture videos on YouTube and have a question on the equivalence of two forms of the state-action value function written in terms of the value function.
+
+From Question 3.13 of the textbook I am able to write the state-action value function as
+$$q_{\pi}(s,a) = \sum_{s',r}p(s',r|s,a)(r + \gamma v_\pi(s')) = \mathbb{E}[r + \gamma v_\pi(s')|s,a]\;.$$
+Note that the expectation is not taken with respect to $\pi$ as $\pi$ is the conditional probability of taking action $a$ in state $s$. Now, in David Silver's slides for the Actor-Critic methods of the Policy Gradient lectures, he says that
+$$\mathbb{E}_{\pi_\theta}[r + \gamma v_{\pi_\theta}(s')|s,a] = q_{\pi_\theta}(s,a)\;.$$
+
+Are these two definitions equivalent (in expectation)?
+"
+"['reinforcement-learning', 'policies', 'deepmind', 'alphago-zero', 'alphago']"," Title: How does the AlphaGo Zero policy decide what move to execute?Body: I was going through the AlphaGo Zero paper and I was trying to understand everything, but I just can't figure out this one formula:
+
+$$
+\pi(a \mid s_0) = \frac{N(s_0, a)^{\frac{1}{\tau}}}{\sum_b N(s_0, b)^{\frac{1}{\tau}}}
+$$
+
+Could someone decode how the policy makes decisions following this formula? I pretty much understood all of the other parts of the paper, and also the temperature parameter is clear to me.
+
+It might be a simple question, but I can't figure it out.
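+
+To be concrete, this is how I currently read the formula (a small numeric sketch, my own interpretation rather than anything from the paper):
+
+import numpy as np
+
+N = np.array([10.0, 50.0, 30.0, 10.0])   # visit counts N(s0, a) from MCTS for 4 candidate moves
+tau = 1.0                                 # temperature
+
+pi = N ** (1.0 / tau) / np.sum(N ** (1.0 / tau))
+print(pi)                                 # [0.1, 0.5, 0.3, 0.1] -- sample from this, or take the argmax?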
+"
+"['heuristics', 'multi-agent-systems', 'a-star']"," Title: How does heuristic work with multiple agents?Body: I have a question for heuristic search with multiple agents. I know how heuristic search works with one agent (ex. one Pacman) but I don't really understand it with multiple agents. Let's say we have this problem where Worm A has to get to its goal state A and Worm B to B, knowing that the agents can move only in vertical and horizontal way:
+
+
+If we had only Worm B, the optimal cost from starting position to the goal position would be 9
, since one action costs 1
and it'd follow the path RIGHT-RIGHT-RIGHT-RIGHT-RIGHT-RIGHT-UP-UP-UP
.
+
+My question is, if we have two worms, like in the picture, the optimal cost would be 9 + optimal cost for Worm A
?
+
+Also, strictly for this problem with 2 agents, if we use Manhattan distance as a heuristic for one agent, would it be admissible if we take the average of Worm A and B heuristics for a problem with two agents?
+
+Another question, I know for a fact that sum of two admissible heuristics won't be admissible for one agent but would it be for the problem with two agents?
+
+These two worms are dependent of each other. How? If one worm moves from position X to Y, the position X is marked as a wall and is not an available field to move in. So if one worm has been in a specific position, that position is no more free for moving in.
+
+For example, if we have something like B^^^X^^^
, where B is the Worm B, ^
is an available field and X
is a wall, after one RIGHT action it'll look like XB^^X^^
, after one more RIGHT: XXB^X^^
etc.
+"
+"['convolutional-neural-networks', 'activation-functions', 'convolution']"," Title: Are activation functions applied to feature maps?Body: If I have a convolutional neural network, and I convolve my input tensor with a kernel, the output is a feature map. Is an activation function then applied to this feature map?
+
+If it's an image that is a 2D tensor, would the activation function change every single value of this image?
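+
+For what it is worth, this is the situation I am asking about (a toy sketch in PyTorch):
+
+import torch
+import torch.nn as nn
+
+x = torch.randn(1, 3, 8, 8)             # a toy input image
+conv = nn.Conv2d(3, 4, kernel_size=3)   # produces 4 feature maps
+fmap = conv(x)                          # shape (1, 4, 6, 6)
+activated = torch.relu(fmap)            # applied element-wise to every value of every map?
+print(fmap.shape, activated.shape)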
+"
+"['image-processing', 'image-segmentation']"," Title: How to create vector representation of roadmap like scansBody: What would be the best way to create a vector representation of roadmap like scans? The goal I am trying to achieve is illustrated below. The left side represents the source image, the right side the output in the form of three vectors. The fussiness on the left is a simulation, not the actual source image:
+
+
+
+The actual source image would look more like:
+
+
+
+Currently I am looking at a combination of skeletonization and Hough transform. The result is rather messy though, and seems to warrant quite some extra engineering. Any other suggestions?
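+
+In case it helps, this is roughly what I have been trying (a sketch with a synthetic stroke standing in for the real scan):
+
+import cv2
+import numpy as np
+from skimage.morphology import skeletonize
+
+img = np.zeros((200, 200), dtype=np.uint8)               # synthetic stand-in for a scan
+cv2.line(img, (20, 30), (180, 160), 255, thickness=7)
+
+skeleton = skeletonize(img > 0).astype(np.uint8) * 255    # one-pixel-wide centerline
+
+lines = cv2.HoughLinesP(skeleton, 1, np.pi / 180, threshold=30,
+                        minLineLength=20, maxLineGap=5)   # fit straight segments
+print(None if lines is None else lines.shape)             # (n, 1, 4): x1, y1, x2, y2 per segment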
+"
+"['natural-language-processing', 'long-short-term-memory']"," Title: Which NLP model to use to handle long context?Body: I'm trying to process product data for an e-commerce platform. The goal is to understand products' size.
+
+Just to show you some examples on how messy product dimension description is:
+
+Overall Dimensions: 66 in W x 41 in D x 36 in H
+Overall: 59 in W x 28.75 in D x 30.75 in H
+92w 37d 32h"",
+86.6 in W x 33.9 in D x 24 in H
+W: 95.75\"" D: 36.5\"" H: 28.75\"""",
+W: 96\"" D: 39.25\"" H: 32\"""",
+""118\""W x 35\""D x 33\""T."",
+""28 L x 95 W x 41 H""
+""95\"" W x 26.5\"" H x 34.75\"" D""
+""98\""W x 39\""D x 29\""H""
+""28\"" High x 80\"" Wide x 32\"" Deep""
+
+
+
+Now assume that the product dimension description is short < 60 characters, I trained a two layer bidirectional LSTM, which can handle this task perfectly.
+
+But the problem is, the above dimension is usually embedded in a long context (as a part of the product description). How can I extract the useful information from the long context and understand it? My LSTM can only accept context size of 60.
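+
+The only workaround I have thought of so far is sliding a 60-character window over the long description and running my existing model on each chunk (a rough sketch, not something I am happy with):
+
+def windows(text, size=60, stride=30):
+    # yield overlapping 60-character chunks of a long product description
+    for start in range(0, max(len(text) - size, 0) + 1, stride):
+        yield text[start:start + size]
+
+long_desc = 'This sturdy sofa ... Overall Dimensions: 66 in W x 41 in D x 36 in H ... ships flat.'
+chunks = list(windows(long_desc))
+# each chunk would then be fed to the bidirectional LSTM separately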
+
+What language model is more suitable for this?
+"
+"['genetic-algorithms', 'neat', 'neuroevolution', 'crossover-operators', 'genetic-operators']"," Title: How do I determine the genomes to use for crossover in NEAT?Body: If I have the fitness of each genome, how do I determine which genome will crossover with which, and so on, so that I get a new population?
+
+Unfortunately, I can't find anything about it in the original paper, so I'm asking here.
+"
+"['neural-networks', 'reinforcement-learning', 'actor-critic-methods', 'a3c']"," Title: How should I deal with variable batch size in A3C?Body: I am fairly new to reinforcement learning (RL) and deep RL. I have been trying to create my first agent (using A3C) that selects an optimal path with the reward being some associated completion time (the more optimal the path is, packets will be delivered in less time kind of).
+
+However, in each episode/epoch, I do not have a certain batch size for updating my NN's parameters.
+
+To make it more clear, let's say that, in my environment, on each step, I need to perform a request to the servers, and I have to select the optimal path.
+
+Now, each execution in my environment does not contain the same amount of files. For instance, I might have 3 requests as one run, and then 5 requests for the next one.
+
+The A3C code I have at hand has a batch size of a 100. In my case, that batch size is not known a priori.
+
+Is that going to affect my training? Should I find a way to keep it fixed somehow? And how can one define an optimal batch size for updating?
+"
+"['reinforcement-learning', 'deep-rl', 'open-ai', 'off-policy-methods', 'on-policy-methods']"," Title: Can we combine Off-Policy with On-Policy Algorithms?Body: On-Policy Algorithms like PPO directly maximize the performance objective or an approximation of it. They tend to be quite stable and reliable but are often sample inefficient. Off-Policy Algorithms like TD3 improve the sample inefficiency by reusing data collected with previous policies, but they tend to be less stable. (Source: Kinds of RL Algorithms - Spinning up - OpenAI)
+
+Looking at learning curves comparing SOTA algorithms, we see that off-policy algorithms quickly improve performance at the beginning of training. Here is an example:
+
+
+Can we start training off-policy and after some time use the learned and quickly improved policy to init the policy network of an on-policy algorithm?
+"
+"['convolutional-neural-networks', 'image-recognition']"," Title: My CNN model performs bad on new (self-created) pictures, what are possible reasons?Body: I wanted to train a model that recognizes sign language. I have found a dataset for this and was able to create a model that would get 94% accuracy on the test set. I have trained models before and my main goal is not to have the best model (I know 94% could easiy be tuned up). However these models where always for class exercises and thus were never used on 'real' new data.
+
+So I took a new picture of my hand that I know I wanted to be a certain letter (let's assume A).
+
+Since my model was trained on 28x28 images, I needed to re-size my own image because it was larger. After that I fed this image to my model only to get a wrong classification.
+
+https://imgur.com/a/QE6snTa
+
+These are my pictures (upper-left = my own image (expected class A), upper-right = an image of class A (that my model correctly classifies as A), bottom = picture of class Z (the class my image was classified as)).
+
+You can clearly see that my own image looks far more like the image of class A (the class I wanted my model to predict) than the image of the class it did predict.
+
+What could be the reasons that my model does not work on real-life images? (If code is wanted, I can provide it of course, but since I don't know where I go wrong, it seemed excessive to copy all the code.)
+"
+"['machine-learning', 'classification', 'data-science', 'data-preprocessing']"," Title: Are there any general guidelines for dealing with imbalanced data through upsampling or downsampling?Body: Are there any general guidelines for dealing with imbalanced data through upsampling/downsampling?
+
+This Google developer guide suggests performing downsampling with upweighting, but for the most part I've found upsampling usually works better in practice (some corroboration).
+
+Is there any clear consensus or empirical study of what works in practice, or when to use which? Does it matter which classification algorithm you use?
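+
+For context, this is the kind of upsampling I mean (a sketch using sklearn.utils.resample on a toy frame):
+
+import pandas as pd
+from sklearn.utils import resample
+
+df = pd.DataFrame({'x': range(100), 'y': [0] * 90 + [1] * 10})   # 90 negatives, 10 positives
+majority = df[df.y == 0]
+minority = df[df.y == 1]
+
+# upsample the minority class with replacement to match the majority size
+minority_up = resample(minority, replace=True, n_samples=len(majority), random_state=0)
+df_balanced = pd.concat([majority, minority_up])
+print(df_balanced.y.value_counts())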
+"
+"['deep-learning', 'tensorflow', 'prediction']"," Title: How to implement AI strategy for MastermindBody: I'm looking to implement a AI for the turn-based game Mastermind in Node.JS, using Google's Tensorflow library. Basically the AI needs to predict the 4D input for the optimal 2D output [0,4]
with a given list of 4D inputs and 2D outputs from previous turns in the form of [input][output]
.
+
+The optimal output would be [0,4]
, which would be the winning output. The training data looks like this:
+
+[1,2,3,4][0,1] [0,5,2,6][3,1] [0,2,5,6][2,2] [6,5,2,0][4,0] [5,2,0,6][0,4]
+
+
+So given these previous turns
+
+[1,2,3,4][0,1] [0,5,2,6][3,1] [0,2,5,6][2,2] [6,5,2,0][4,0]
+
+
+the AI would predict an input of [5,2,0,6]
for the output [0,4]
.
+I've looked at this post but it talks about only inferring input for a output without any context. In Mastermind, the context of previous guesses and results from them are critical
+
+My algorithm would need to use the information from previous turns to determine the best input for the winning output ([0,4]
).
+
+So my question is: How can I implement AI for Mastermind?
+"
+"['reference-request', 'healthcare']"," Title: Is AI already being used in the drug industry to combat the COVID-19?Body: We all have heard about how beneficial AI can be in health. There are plenty of papers and research about confronting diseases, like cancer. However, in 2020 with COVID-19 be one of the most serious health problems that have caused thousands of deaths worldwide.
+Is AI already being used in the drug industry to combat the COVID-19? If yes, can you, please, provide a reference?
+"
+"['reinforcement-learning', 'gradient-descent', 'policy-gradients', 'reinforce', 'adam']"," Title: How long should the state-dependent baseline for policy gradient methods be trained at each iteration?Body: How long should the state-dependent baseline be trained at each iteration? Or what baseline loss should we target at each iteration for use with policy gradient methods?
+
+I'm using this equation to compute the policy gradient:
+
+$$
+\nabla_{\theta} J\left(\pi_{\theta}\right)=\underset{\tau \sim \pi_{\theta}}{\mathrm{E}}\left[\sum_{t=0}^{T} \nabla_{\theta} \log \pi_{\theta}\left(a_{t} | s_{t}\right)\left(\sum_{t^{\prime}=t}^{T} R\left(s_{t^{\prime}}, a_{t^{\prime}}, s_{t^{\prime}+1}\right)-b\left(s_{t}\right)\right)\right]
+$$
+
+Here is mentioned to use one or more gradient steps, so is it a hyper-parameter to be found using random search?
+
+Is there some way we can use an adaptive method to find out when to stop?
+
+In an experiment to train Cartpole-v2 using a policy gradient with baseline, I found the results are better when applying 5 updates than when only a single update was applied.
+
+Note: I am referring to the number of updates to take on a single batch of q values encountered across trajectories collected using current policy.
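+
+To show what I mean by the number of updates, here is a sketch of fitting a state-dependent baseline on one batch (all sizes and networks are made up):
+
+import torch
+import torch.nn as nn
+
+baseline = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))
+opt = torch.optim.Adam(baseline.parameters(), lr=1e-3)
+
+states = torch.randn(256, 4)     # states from the collected trajectories
+returns = torch.randn(256, 1)    # corresponding reward-to-go values
+
+n_updates = 5                    # <- the hyperparameter I am asking about
+for _ in range(n_updates):
+    opt.zero_grad()
+    loss = nn.functional.mse_loss(baseline(states), returns)
+    loss.backward()
+    opt.step()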
+"
+"['deep-learning', 'reinforcement-learning', 'statistical-ai']"," Title: Simplification of expected reward under the limit in continuous tasksBody: I was reading the average reward setting for continuous tasks from rich sutton's book (page 202, 2nd edition). There he perform a simplification over the expected reward under the limit approaching to infinite. I mark this point in this picture:
+
+
+The book does not clearly mention the steps to simplify the above expression. I searched the web for the solution, but there is no clear explanation of it. Can anyone explain the marked point?
+"
+"['machine-learning', 'prediction', 'regression', 'linear-regression', 'categorical-data']"," Title: 3d representation of a regression with two independent variables one of them is categorical and another is continuousBody: I have hopefully a fundamental question of Do I understand things right.
+(Thank you in advance and sorry for my English which might be not so good)
+
+1-Preambula 1:
+I know that if we have 2 independent variables, both of a continuous type, it is ok to represent them as a 2d plane in a 3d space:
+
+
+
+2-Preambula 2:
+I have seen that many times when we have to deal with continuous and categorical variables(male\female for example), we represent them like this(note the lines are parallel):
+
+
+
+3-Assumption:
+In the beginning I assumed that it is 2d representation of this 3d case:
+
+4-Discussion 1: But if my assumption above was right, why do they always ""picture"" it with parallel lines? After all, this is a very specific situation. In most cases both regression lines will not be parallel; furthermore, they may have different slope directions (one negative and the other positive). For example:
+
+
+5-Discussion 2: On the other hand, parallel models may be explained in such a way: if we add a regression hyperplane which ""somehow"" fits both groups (male and female), we will get the parallel lines:
+
+
+6-Finally My questions are quite simple.
+
+question 5.1: Did I understand the nature of the parallel lines correctly, as I show it above (in Discussion 2)?
+
+question 5.2: If I was right in 5.1, I assume that in such cases the hyperplane regression is quite a bad predictor. Am I right?
+"
+"['neural-networks', 'machine-learning', 'comparison', 'function-approximation']"," Title: What are the differences between artificial neural networks and other function approximators?Body: Modern artificial neural networks use a lot more functions than just the classic sigmoid, to the point I'm having a hard time really seeing what classifies something as a ""neural network"" over other function approximators (such as Fourier series, Bernstein polynomials, Chebyshev polynomials or splines).
+
+So, what makes something an artificial neural network? Is there a subset of theorems that apply only to neural networks?
+
+Backpropagation is classic, but that is the multi-variable chain rule, what else is unique to neural networks over other function approximators?
+"
+['reinforcement-learning']," Title: Is there 1-dimensional reinforcement learning?Body: From what I can find, reinforcement algorithms work on a grid or 2-dimensional environment. How would I set up the problem for an approximate solution when I have a 1-dimensional signal from a light sensor. The sensor sits some distance away from a lighthouse. The intent would be to take the reading from the sensor to determine the orientation of the lighthouse beam.
+
+The environment would be a lighthouse beam, the state would be the brightness seen at the sensor for a given orientation, and the agent would be the approximate brightness/orientation? What would the reward be? What reinforcement learning algorithm would I use to approximate the lighthouse orientation given sensor brightnesses?
+"
+"['machine-learning', 'reinforcement-learning', 'monte-carlo-tree-search']"," Title: Should Monte Carlo tree search be able to consistently beat me in the connect four game?Body: I've implemented the Monte Carlo tree search (MCTS) algorithm for a connect four game I've built. The MCTS agent beats a random choice agent 90-100% of the time, but I’m still able to beat it pretty easily. It even misses obvious three in a row opportunities where it just needs to add one more token to win (but places it elsewhere instead).
+
+Is this normal behavior, or should the MCTS agent be able to beat me consistently too? I'm allowing it to grow its tree for 2 seconds before getting it to return its chosen action - could it be that it needs longer to think?
+"
+"['neural-networks', 'machine-learning', 'ai-design', 'activation-functions', 'regression']"," Title: Which activation functions should I use for polynomial regression?Body: I am a beginner in machine learning and neural networks. I have only used neural networks for classification problems. My aim is to modify it so that it can work for polynomial regression as well. In my problem, I have three inputs and three outputs. My aim is to predict these three outputs based on these three inputs. The outputs are real-valued, and can take positive and negative values.
+
+How should I choose the activation functions? I have only used sigmoid.
+"
+"['reinforcement-learning', 'markov-decision-process']"," Title: How can blackjack be formulated as a Markov decision process?Body: I am reading sutton barton's reinforcement learning textbook and have come across the finite Markov decision process (MDP) example of the blackjack game (Example 5.1).
+
+Isn't the environment constantly changing in this game? How would the transition probabilities be fixed in such an environment, when both you and the dealer draw cards?
+"
+"['neural-networks', 'deep-learning', 'classification', 'transfer-learning']"," Title: Is the high dimensionality of input vectors a problem for a radial basis function neural network?Body: I have a dataset A of videos. I've extracted the feature vector of each video (with a convolutional neural network, via transfer learning) creating a dataset B. Now, every vector of the dataset B has a high dimension (about 16000), and I would like to classify these vectors using an RBF-ANN (there are only 2 possible classes).
+
+Is the high dimensionality of input vectors a problem for a radial basis function ANN? If yes, is there any way to deal with it?
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'applications', 'autonomous-vehicles']"," Title: Why can't DQN be used for self-driving cars?Body: Why can't DQN be used for self-driving cars?
+Why can't DQN and similar RL algorithms be used for self-driving cars?
+
+The reason why I am curious is that it successfully plays go and other multistate games.
+"
+"['convolutional-neural-networks', 'papers', 'algorithmic-bias', 'softmax']"," Title: What do the authors of this paper mean by the bias term in this picture of a neural network implementation?Body: I am reading a paper implementing a deep deterministic policy gradient algorithm for portfolio management. My question is about a specific neural network implementation they depict in this picture (paper, picture is on page 14).
+
+
+
+The first three steps are convolutions. Once they have reduced the initial tensor into a vector, they add that little yellow square entry to the vector, called the cash bias, and then they do a softmax operation.
+
+The paper does not go into any detail about what this bias term could be, they just say that they add this bias before the softmax. This makes me think that perhaps this is a standard step? But I don't know if this is a learnable parameter, or just a scalar constant they concatenate to the vector prior to the softmax.
+
+I have two questions:
+
+1) When they write softmax, is it safe to assume that this is just a softmax function, with no learnable parameters? Or is this meant to depict a fully connected linear layer, with a softmax activation?
+
+2) If it's the latter, then I can interpret the cash bias as being a constant term they concatenate to the vector before the fully connected layer, just to add one more feature for the cash assets. However, if softmax means just a function, then what is this cash bias? It must be a constant that they implement, but I don't see what the use of that would be, how can you pick a constant scalar that you are confident will have the intended impact on the softmax output to bias the network to put some weight on that feature (cash)?
+
+Any comments/interpretations are appreciated!
+"
+"['reinforcement-learning', 'tensorflow']"," Title: Trying to Train Cards Game with RLBody: I am trying to train a card game called Callbreak I tried inputs like all the opponents discarded cards all hands everything a human can see and calculate with ""common sense"" I fed it to the Agent but it's not learning the way it should after 100M steps it learned as basic as any new/bad player will do in the game.
+I have never tried to train a card game before, so I don't know what I should feed it and how much to feed. Any help will be appreciated.
+
+I have trained for 100M+ steps before; it just keeps repeating the same drops and never gets more reward.
+
+The config I am using is:
+
+default:
+trainer: ppo
+batch_size: 1024
+beta: 5.0e-3
+buffer_size: 10240
+epsilon: 0.2
+hidden_units: 256
+lambd: 0.95
+learning_rate: 3.0e-4
+learning_rate_schedule: linear
+max_steps: 100.0e5
+memory_size: 128
+normalize: false
+num_epoch: 3
+num_layers: 3
+time_horizon: 64
+sequence_length: 64
+summary_freq: 100000
+use_recurrent: false
+vis_encode_type: simple
+reward_signals:
+ extrinsic:
+ strength: 1.0
+ gamma: 0.99
+
+
+Tensorboard
+
+
+New Training Results
+
+default:
+trainer: ppo
+batch_size: 1024
+beta: 5.0e-3
+buffer_size: 10240
+epsilon: 0.2
+hidden_units: 512
+lambd: 0.95
+learning_rate: 3.0e-4
+learning_rate_schedule: linear
+max_steps: 500.0e5
+memory_size: 128
+normalize: false
+num_epoch: 3
+num_layers: 3
+time_horizon: 64
+sequence_length: 64
+summary_freq: 100000
+use_recurrent: false
+vis_encode_type: simple
+reward_signals:
+ extrinsic:
+ strength: 1.0
+ gamma: 0.99
+ curiosity:
+ strength: 0.02
+ gamma: 0.99
+ encoding_size: 256
+
+
+
+
+
+
+
+"
+"['computer-vision', 'linear-algebra', 'projective-transformations']"," Title: How do you find the homography matrix given 4 points in both images?Body: I want to understand the process of finding a homography matrix given 4 points in both images. I am able to do that in python OpenCV, but I wonder how it works behind the scenes.
+
+Suppose I have points $p_1, p_2, p_3, p_4$ in the first image and $p'_1, p'_2, p'_3, p'_4$ in the second. How am I going to generate the homography matrix given these points?
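+
+For context, this is roughly what I am doing in OpenCV (a minimal sketch with example point values):
+
+import cv2
+import numpy as np
+
+src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=np.float32)        # points in image 1
+dst = np.array([[10, 12], [55, 14], [57, 60], [8, 58]], dtype=np.float32)  # points in image 2
+
+H, mask = cv2.findHomography(src, dst)   # what does this actually solve behind the scenes?
+print(H)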
+"
+"['reinforcement-learning', 'comparison', 'q-learning', 'sarsa', 'greedy-policy']"," Title: Are Q-learning and SARSA the same when action selection is greedy?Body: I'm currently studying reinforcement learning and I'm having difficulties with question 6.12 in Sutton and Barto's book.
+
+Suppose action selection is greedy. Is Q-learning then exactly the same algorithm as SARSA? Will they make exactly the same action selections and weight updates?
+
+I think it's true, because the main difference between the two is when the agent explores, and following the greedy policy it never explores, but I am not sure.
+"
+"['computer-vision', 'object-detection', 'r-cnn', 'selective-search', 'faster-r-cnn']"," Title: How does the region proposal method work in Fast R-CNN?Body: I read so many articles and the Fast R-CNN paper, but I'm still confused about how the region proposal method works in Fast R-CNN.
+As you can see in the image below, they say they used a proposal method, but it is not specified how it works.
+What confuses me is, for example, in the VGGnet, the output of the last convolution layer is feature maps of shape 14x14x512, but what is the used algorithm to propose the regions and how does it propose them from the feature maps?
+
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'architecture']"," Title: Can you explain me this CNN architecture?Body: I am starting to get my head around convolutional neural networks, and I have been working with the CIFAR-10 dataset and some research papers that used it. In one of these papers, they mention a network architecture notation for a CNN, and I am not sure how to interpret that exactly in terms of how many layers are there and how many neurons in each.
+This is an image of their structure notation.
+
+
+- Can some give me an explanation as to what exactly this structure looks like?
+
+- In the CIFAR-10 dataset, each image is $32 \times 32$ pixels, represented by 3072 integers indicating the red, green, blue values for each pixel.
+Does that not mean that my input layer has to be of size 3072? Or is there some way to group the inputs into matrices and then feed them into the network?
+
+
+"
+"['ai-design', 'python', 'search']"," Title: Wooden railway search problem AIBody: I have this problem.
+
+A basic wooden railway set contains the pieces shown in Figure 3.32.
+
+The task is to connect these pieces into a railway that has no overlapping tracks and no loose ends where a train could run off onto the floor.
+a. Suppose that the pieces fit together exactly with no slack. Give a precise formulation of the task as a search problem.
+b. Identify a suitable uninformed search algorithm for this task and explain your choice.
+
+I know I have to use a DFS for this problem. But how do I know all the pieces are connected. The last one and the first one are connected.
+Can someone help me with some tips on how to solve this problem and implement it (in Python)?
+"
+"['reinforcement-learning', 'policy-gradients', 'actor-critic-methods']"," Title: Is the reward following after time step $t+1$ collected based on current policy?Body: I am currently learning policy gradient methods from the Deep RL boot camp by Pieter Abbeel in which he explains the actor-critic algorithm derivation.
+
+At around minute 39, he explains that the sum of the rewards from time step $t$ onwards is actually an estimation of $Q^\pi(s,u)$. I understand the definition of $Q^\pi(s,u)$ but I'm not sure why this is the case here. Is the reward following after time step $t+1$ collected based on current policy?
+
+
+"
+"['reinforcement-learning', 'convergence', 'policies']"," Title: Why do RL implementations converge on one action?Body: I have seen this happening in implementations of state-of-the-art RL algorithms where the model converges to a single action over time after multiple training iterations. Are there some general loopholes or reasons why this kind of behavior is exhibited?
+"
+"['reinforcement-learning', 'deep-rl', 'dqn', 'convergence', 'function-approximation']"," Title: Why is it hard to prove the convergence of the deep Q-learning algorithm?Body: Why is it hard to prove the convergence of the DQN algorithm? We know that the tabular Q-learning algorithm converges to the optimal Q-values, and with a linear approximator convergence is proved.
+
+The main difference of DQN compared to Q-Learning with linear approximator is using DNN, the experience replay memory, and the target network. Which of these components causes the issue and why?
+"
+['recurrent-neural-networks']," Title: Reservoir of LSM vs. FF-NN or ELMBody: The reservoir of the Liquid State Machine is an array of random numbers connected to each other with a probability depending on the distance between each other. Because of this connection with each other it apparently has ""recurrence"". The reservoir is followed by a readout stage where the actual weight training is done.
+
+The hidden layer of a FF-NN can be an array of random weights, which is exactly what an Extreme Learning Machine is. ELM has a closed form solution of the second-stage weight (Beta) calculation i.e. only the Beta needs to be trained.
+
+So in both cases you have a second-stage layer or readout layer where weights are trained.
+
+My question is, if the reservoir random weights are very much like the ELM random weights and both don't need to be trained, how are they any different than each other? In other words, both have a set of untrained random weights, so in LSM where is the recurrence exactly happening if the weights are just random? Can't the LSM be reduced to a FF-NN?
+"
+"['reinforcement-learning', 'reference-request', 'state-spaces', 'action-spaces', 'discretization']"," Title: When to do discretization to decrease the state/action space in RL?Body: When to do discretization to decrease the state/action space in RL? Can you give me some references that such a technique is used?
+"
+"['machine-learning', 'randomness', 'generator', 'neural-networks', 'deep-neural-networks']"," Title: Random value generator using a single neuron or DNNBody: AI is supposed to do anything human or traditional computer can do, that is what we expect AI to be.
+
+So 'generating a random value' is also a task included in the scope of what AI should be able to do.
+
+I'm trying to generate a random value using a single neuron, but the outcome isn't very good.
+Any suggestions?
+
+PS.
+Random weight initialisation is allowed because the weights are constants at the start.
+Using 'random' function is forbidden anywhere else.
+"
+['neural-networks']," Title: Is there a parallelizable algorithm for training sparse neural networks?Body: Consider that we want to create a very big neural network. If we consider to use dense layers, we might face some challenges. Now consider that we use sparse layers instead of dense layers. When using a really sparse model, we would have much less parameters and could create much bigger neural networks. Is there an efficient algorithm to parallelize the training of such networks?
+"
+"['reference-request', 'research', 'geometric-deep-learning', 'academia', 'graph-neural-networks']"," Title: What are some conferences for publishing papers on graph convolutional networks?Body: What are some conferences for publishing papers on graph convolutional networks?
+"
+['neural-networks']," Title: How to manage the different pixel size for a CNN?Body: I have a convolutional neural network such as U-Net for a segmentation task and in input images with the same spatial resolution (256x256) but different pixel size due to the acquisition process. Specifically, every image has FoV 370x370mm 256/256 pixel, but a different zoom, for example an image might have 2.7/1.8 px/mm and another image 2.4/1.7 px/mm.
+Considering the FoV the pixel size should be 370/256=1.44mm x pixel but with a zoom of 2.7/1.8 px/mm which is the pixel size in this case ? I thought 1.8/2.7= 0.67mm but I am not sure.
+Why should I have the same in-plane resolution (pixel size) for each image when I train my CNN and not only the same spatial resolution (256x256 px) ?
+"
+"['python', 'computer-vision', 'image-processing', 'filters']"," Title: How does the math behind heat map filters work?Body: I am working on an app that generates heat/ thermal map given a picture. i have been able to get what i expected using python opencv builtin function cv2.applyColorMap(img, cv2.COLORMAP_JET)
. Everything works exactly as expected. But i want to understand how applyColorMap
works at the back end. I am aware how several image filters (like image, edge filters) work by convolution / cross correlation with appropriate kernals, but i can't seem to pull the same concept for color maps.
+For this question, let's consider a color map where we want:
+
+- Brightest ones: RED
+- Medium intensity ones: YELLOW
+- Low intensity ones: BLUE
+
+What I have done:
+
+I tried dividing the pixels into 3 categories and replaced each pixel with one of the colors (red, yellow, blue) depending on its value in the grayscale image (0-255).
+The problem with this approach is that the image ends up with 3 solid colors and no variation in the intensity of each individual color, while in a good heat map the colors blend (increase or decrease) with the intensity. I want to achieve that effect. I would appreciate any help or any lead to understand how heat maps work.
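+
+To make the comparison concrete, here is a minimal NumPy sketch (the anchor colors and thresholds are my own hypothetical choices, not OpenCV's actual JET table) of the hard 3-bucket mapping I tried versus a smooth 256-entry look-up table built by linearly interpolating between the same anchor colors, which is the kind of blending I am after:
+
+import numpy as np
+
+# Hypothetical BGR anchor colors: blue (low), yellow (medium), red (high).
+ANCHORS = np.array([[255, 0, 0],
+                    [0, 255, 255],
+                    [0, 0, 255]], dtype=np.float32)
+
+def hard_buckets(gray):
+    # The 3-bucket approach: each pixel gets exactly one solid color.
+    idx = np.digitize(gray, bins=[85, 170])   # 0, 1 or 2
+    return ANCHORS[idx].astype(np.uint8)
+
+def smooth_lut(gray):
+    # A 256-entry LUT obtained by linear interpolation between the anchors.
+    xs = np.linspace(0, 255, num=len(ANCHORS))
+    lut = np.stack([np.interp(np.arange(256), xs, ANCHORS[:, c]) for c in range(3)], axis=1)
+    return lut[gray].astype(np.uint8)
+
+gray = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
+print(hard_buckets(gray).shape, smooth_lut(gray).shape)   # (4, 4, 3) (4, 4, 3)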
+"
+"['neural-networks', 'convolutional-neural-networks', 'computer-vision', 'architecture']"," Title: Merge two different CNN models into oneBody: I have 2 different models with each model doing a separate function and have been trained with different weights. Is there any way I can merge these two models to get a single model.
+
+If it can be merged
+
+
+- How should I go about it? Will the number of layers remain the same?
+- Will it give me any performance gain? (Intuitively speaking, I should get higher performance.)
+- Will the hardware requirements change when using the new model?
+- Will I need to retrain the model? Can I somehow merge the trained weights?
+
+
+If the models cannot be merged
+
+
+- Why so? After all, convolution is finding the correct pattern in data.
+- Also, if CNNs cannot be merged, then how do skip connections, like those in ResNet50, work?
+
+
+EDIT:
+
+Representation:
+
+What I currently have
+
+Image ---(model A) ---> Temporary image ---(Model B)---> Output image
+
+What I want:
+
+Image ---(model C) ---> Output image
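+
+To make the two diagrams concrete, here is a minimal PyTorch sketch of what I mean, assuming both stages are ordinary nn.Module instances (ModelA and ModelB below are hypothetical stand-ins for my two trained networks):
+
+import torch
+import torch.nn as nn
+
+# Hypothetical placeholders for my two trained image-to-image models.
+class ModelA(nn.Module):
+    def __init__(self):
+        super().__init__()
+        self.conv = nn.Conv2d(3, 3, kernel_size=3, padding=1)
+    def forward(self, x):
+        return torch.relu(self.conv(x))
+
+class ModelB(ModelA):
+    pass
+
+model_a, model_b = ModelA(), ModelB()   # in practice, the trained weights would be loaded here
+
+# What I currently have: run the two models one after the other.
+image = torch.randn(1, 3, 64, 64)
+output = model_b(model_a(image))
+
+# What I want: a single model C that behaves the same way.
+model_c = nn.Sequential(model_a, model_b)
+assert torch.allclose(model_c(image), output)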
+"
+"['reinforcement-learning', 'python', 'pytorch', 'actor-critic-methods', 'environment']"," Title: Once the environments are vectorized, how do I have to gather immediate experiences for the agent?Body: My main purpose right now is to train an agent using the A2C algorithm to solve the Atari Breakout game. So far I have succeeded to create that code with a single agent and environment. To break the correlation between samples (i.i.d), I need to have an agent interacting with several environments.
+
+class GymEnvVec():
+
+ def __init__(self, env_name, n_envs, seed=0):
+ make_env = lambda: gym.make(env_name)
+ self.envs = [make_env() for _ in range(n_envs)]
+ [env.seed(seed + 10 * i) for i, env in enumerate(self.envs)]
+
+ def reset(self):
+ return [env.reset() for env in self.envs]
+
+ def step(self, actions):
+ return list(zip(*[env.step(a) for env, a in zip(self.envs, actions)]))
+
+
+I can use the class GymEnvVec to vectorize my environment.
+
+So I can set my environments with
+
+envs = GymEnvVec(env_name=""Breakout-v0"", n_envs=50)
+
+
+I can get my first observations with
+
+observations = envs.reset()
+
+
+Pick some actions with
+
+actions = agent.choose_actions(observations)
+
+
+The choose_actions method might look like
+
+def choose_actions(self, states):
+ assert isinstance(states, (list, tuple))
+
+ actions = []
+ for state in states:
+ probabilities = F.softmax(self.network(state)[0])
+ action_probs = T.distributions.Categorical(probabilities)
+ actions.append(action_probs.sample())
+
+ return [action.item() for action in actions]
+
+
+Finally, the environments will return the next_states, rewards, and done flags with
+
+next_states, rewards, dones, _ = envs.step(actions)
+
+
+It is at this point that I am a bit confused. I think I need to gather the immediate experiences, batch them all together, and forward them to the agent. My problem is probably with the ""gather immediate experiences"" part.
+
+I propose a solution, but I am far from being sure it is a good answer. At each iteration, I think I must take a random number with
+
+nb = random.randint(0, n_envs - 1)
+
+
+and put the experience in history with
+
+history.append(Experience(states[nb], actions[nb], rewards[nb], dones[nb]))
+
+
+Am I wrong? Can you tell me what I should do?
+"
+"['reinforcement-learning', 'research', 'pytorch']"," Title: What are the procedures to get RL paper results?Body: I finished working on a new algorithm in Reinforcement Learning, I need to compare it to some well-known algorithms. That's why I need to know the step-by-step procedures that RL researchers usually take in order to get their results and compare them to other papers' results. (e.g. running the algorithm multiple times with different random seeds, saving results to .csv
files, plotting).
+
+Can anyone help?
+
+(I am working on Pytorch, PyBullet Environments.)
+"
+"['reinforcement-learning', 'dqn', 'loss']"," Title: Do smaller loss values during DQN training produce better policies?Body: During the training of DQN, I noticed that the model with prioritized experience replay (PER) had a smaller loss in general compared to a DQN without PER. The mean squared loss was an order of magnitude $10^{-5}$ for the DQN with PER, whereas the mean squared loss was an order of magnitude $10^{-2}$.
+
+Do the smaller training errors have any effect on executing the final policy learned by the DQN?
+"
+"['neural-networks', 'activation-functions', 'regression']"," Title: What work has been done with Poisson-style regression via neural networks with exponential activation function?Body: The first neural net I wrote was a classifier. After that, I learned that neural nets can be used for regression tasks, even quantile regression.
+It has become clear to me that the usual games with extensions of OLS linear regression can be applied to neural networks.
+What work has been done with Poisson-style regression via neural networks with log link functions (exponential activation function)?
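+
+To make clear what I have in mind, here is a minimal PyTorch sketch (with made-up toy data) of the kind of model I mean: a network whose output passes through an exponential (log link), trained with the Poisson negative log-likelihood:
+
+import torch
+import torch.nn as nn
+
+# Toy count data generated from a log-linear rate (purely illustrative).
+torch.manual_seed(0)
+x = torch.randn(512, 3)
+rate = torch.exp(0.3 * x[:, 0] - 0.5 * x[:, 1] + 0.1)
+y = torch.poisson(rate)
+
+# The network outputs the log of the rate; exp(output) is the predicted mean count.
+net = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1))
+loss_fn = nn.PoissonNLLLoss(log_input=True)   # loss expects the log of the rate
+opt = torch.optim.Adam(net.parameters(), lr=1e-2)
+
+for _ in range(200):
+    log_rate = net(x).squeeze(-1)
+    loss = loss_fn(log_rate, y)
+    opt.zero_grad()
+    loss.backward()
+    opt.step()
+
+print(loss.item())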
+"
+['reinforcement-learning']," Title: Is it required that taking an action updates the state?Body: For some environments taking an action may not update the environment state. For example, a trading RL agent may take an action to buy shares s. The state at time t which is the time of investing is represented as the interval of 5 previous prices of s. At t+1 the share price has changed but it may not be as a result of the action taken. Does this affect RL learning, if so how ? Is it required that state is updated as a result of taking actions for agent learning to occur ?
+
+In gaming environments, it is clear how actions affect the environment. Can some rules of RL break down if no ""noticeable"" environment change takes place as a result of actions?
+
+Update:
+
+""actions influence the state transitions"", is my understanding correct:
+If transitioning to a new state is governed by epsilon greedy and epsilon is set to .1 then with .1 probability the agent will choose an action from the q table which has max reward reward for the given state. Otherwise the agent randomly chooses and performs an action then updates the q table with discounted reward received from the environment for the given action.
+
+I've not explicitly modeled an MDP; I've just defined the environment and let the agent determine the best actions over multiple episodes by choosing either a random action or the best action for the given state, with the selection governed by epsilon-greedy.
+
+But perhaps I've not understood something fundamental in RL. I'm ignoring MDP in large part as I'm not modeling the environment explicitly. I don't set the probabilities of moving from each state to other states.
+"
+"['reinforcement-learning', 'reference-request', 'k-means', 'dimensionality']"," Title: Can I do state space quantization using a KMeans-like algorithm instead of range buckets?Body: Are there any reference papers where it is used a KMeans-like algorithm in state space quantization in Reinforcement Learning instead of range buckets?
+"
+"['objective-functions', 'image-segmentation', 'loss']"," Title: what will be the best loss function for unet to predict the each pixel values?Body:
+
+I'm predicting the used 9 pictures to predict the last picture
+so (40,40,9) -> unet -> (40,40,1)
+
+but as you see the predict picture
+
+
+It's not just a mask(0or 1) its float
+so which loss function should I define to achieve the best Unet result? and why?
+"
+"['reinforcement-learning', 'markov-decision-process', 'pomdp', 'bayesian-optimization', 'gaussian-process']"," Title: Can we use a Gaussian process to approximate the belief distribution at every instant in a POMDP?Body: Suppose $x_{t+1} \sim \mathbb{P}(\cdot | x_t, a_t)$ denotes the state transition dynamics in a reinforcement learning (RL) problem. Let $y_{t+1} = \mathbb{P}(\cdot | x_{t+1})$ denote the noisy observation or the imperfect state information. Let $H_{t}$ denote the history of actions and observations $H_{t+1} = \{b_0,y_0,a_0,\cdots,y_{t+1}\}$.
+
+For the RL Partially Observed Markov Decision Process (RL-POMDP), the summary of the history is contained in the ""belief state"" $b_{t+1}(i) = \mathbb{P}(x_{t+1} = i | H_{t+1})$, which is the posterior distribution over the states conditioned on the history.
+
+Now, suppose the model is NOT known. Clearly, the belief state can't be computed.
+
+Can we use a Gaussian Process (GP) to approximate the belief distribution $b_{t}$ at every instant $t$?
+
+Can Variational GP be adapted to such a situation? Can universal approximation property of GP be invoked here?
+
+Are there such results in the literature?
+
+Any references and insights into this problem would be much appreciated.
+"
+"['deep-learning', 'reinforcement-learning']"," Title: How's the action represented in MuZero for Atari?Body: MuZero seems to use two different methods to encode actions into planes for Atari games:
+
+
+- For the input action to the representation function, MuZero encodes historical actions as simple bias planes, scaled as $a/18$, where $18$ is the total number of valid actions in Atari (from Appendix E of the paper).
+- For the input action to the dynamics function, MuZero encodes an action as a one-hot vector, which is tiled appropriately into planes (from Appendix F of the paper).
+
+
+I'm not so sure how to make sense of the term ""bias plane"".
+
+About the second, my understanding is that, as an example, for action $4$, we first apply one-hot encoding, which gives us a zero vector of length $18$ with one in the $5$-th position(as there are $18$ actions). Then we tile it and get a zero vector of length $36$, with ones in the $5$-th and $23$-rd positions. At last, this vector is reshaped into a $6\times 6$ plane as follows:
+
+$$
+0, 0, 0, 0, 1, 0\\ 0, 0, 0, 0, 0, 0\\ 0, 0, 0, 0, 0, 0\\
+0, 0, 0, 0, 1, 0\\ 0, 0, 0, 0, 0, 0\\ 0, 0, 0, 0, 0, 0
+$$
+"
+"['machine-learning', 'deep-learning', 'terminology', 'cross-validation']"," Title: What are non-held-out data or non-held-out classes?Body: I'm Spanish and I don't understand the meaning of ""non-held-out"". I have tried Google Translator and online dictionaries like Longman but I can't find a suitable translation for this term.
+
+You can find this term using this Google Search, and in articles like this one:
+
+
+- ""computing SVD on the non-held-out data"" from here.
+- ""The training set consists all the images and annotations containing non-held-out classes while held-out classes are masked as background during the training"" from Few-Shot Semantic Segmentation with Prototype Learning.
+- ""A cross-validation procedure is that non held out data (meaning after holding out the test set) is splitted in k folds/sets"" from here.
+
+
+What is non-held-out data and held-out data or classes?
+"
+"['reinforcement-learning', 'policy-gradients']"," Title: How does the gradient increase the probabilities of the path with a positive reward in policy gradient?Body: Pieter Abbeel in his deep rl bootcamp policy gradient lecture derived the gradient of the utility function with respect to $\theta$ as $\nabla U(\theta) \approx \hat{g} = 1/m\sum_{i=1}^m \nabla_\theta logP(\tau^{(i)}; \theta)R(\tau^{(i)})$, where $m$ is the number of rollouts, and $\tau$ represents the trajectory of $s_0,u_0, ..., s_H, u_H$ state action sequences.
+
+He also explains that the gradient increases the log probabilities of trajectories that have positive reward and decreases the log probabilities of trajectories with negative reward, as seen in the picture. From the equation, however, I don't see how the gradient tries to increase the probabilities of the paths with positive $R$.
+
+From the equation, what I understand is that we would want to update $\theta$ in a way that moves in the direction of $\nabla U(\theta)$ so that the overall utility is maximised, and this entails computing the gradient log probability of a trajectory.
+
+Also, why is $\theta$ omitted in $R(\tau^{(i)})$, since $\tau$ depends on the policy, which is dependent on $\theta$?
+
+
+"
+"['neural-networks', 'convolutional-neural-networks', 'python', 'image-recognition', 'data-preprocessing']"," Title: What pre-processing of the image is needed before feeding it into the convolutional neural network?Body: I can't figure out what preprocessing of the image is needed before feeding it into the convolutional neural network. For example, I want to recognize circles on a 1000 by 1000 px photo. The learning process of a neural network occurs on 100 by 100 px (https://www.kaggle.com/smeschke/four-shapes/data). I'm having a little difficulty wrapping my head around the situation when the circle in the input image is much larger (or smaller) than 100x100 px. How then the convolution neural network determines that circle if it was learned on a dataset of a different picture's size.
+
+For clarity, I want to submit a 454 by 430 px image to the network input:
+
+
+
+Example of the dataset for the learning process (100 by 100 px):
+
+
+
+Finally, I want to recognize all the circles on the input image:
+
+
+"
+"['reinforcement-learning', 'comparison', 'collaborative-filtering']"," Title: How do reinforcement learning and collaborative learning overlap?Body: How do reinforcement learning and collaborative learning overlap? What are the differences and similarities between these fields?
+
+I feel like the results I get via google do not make the distinction clear.
+"
+"['policy-gradients', 'actor-critic-methods']"," Title: Why a single trajectory can be used to update the policy network $\theta$ in A3C?Body: The Deep RL bootcamp on policy gradient techniques gives the update equation for the policy network in A3C as
+
+$\theta_{i+1} = \theta_i + \alpha \times 1/m \sum_{k=1}^m\sum_{t=0}^{H-1}\nabla_{\theta}log\pi_{\theta_i}(u_t^{(k)} | s_t^{(k)})(Q(s_t^{(k)},u_t^{(k)}) - V_{\Phi_i}^\pi(s_t^{(k)})) $
+
+However, in the actual A3C paper, the gradient update is based on a single trajectory, and there is no averaging of the gradient over $m$ trajectories as defined in the video. Why is that? The simple action-value actor-critic algorithm also does not seem to require averaging over $m$ trajectories.
+"
+"['probability', 'text-generation', 'importance-sampling']"," Title: Importance sampling eq. 5 in paper ""Residual Energy-based Models for Text Generation""Body: In the paper ""Residual Energy-Based Models for Text Generation"" (arXiv), on page 5, they write that equation 5 is an instance of importance sampling.
+
+Equation 5 is:
+
+$$ P(x_t \mid x_{<t}) = P_{LM}(x_t \mid x_{<t}) \, \frac{\mathbb{E}_{x'_{>t} \sim P_{LM}(\cdot \mid x_{\leq t})}[\exp(-E_\theta (x_{<t}, \, x_t, \, x'_{>t}))]}{\mathbb{E}_{x'_{\geq t} \sim P_{LM}(\cdot \mid x_{\leq t-1})}[\exp(-E_\theta (x_{<t}, \, x'_t, \, x'_{>t}))]} \ \ .$$
+
+The goal is to approximate sampling from a distribution from which sampling is intractable $P_\theta(Y \mid X) = P_{LM}(Y \mid X) \, \frac{\exp(-E_\theta (X, Y))}{Z_\theta(X)}$, by sampling from $P_{LM}$, from which sampling is cheaper.
+
+I understand that they are marginalizing over $>t$ in eq. 5, and I understand the basic idea of importance sampling to change $\mathbb{E}_{x \sim p}[f(x)]$ into $\mathbb{E}_{x \sim q}[f(x) \frac{p(x)}{q(x)}]$. However, eq. 5 is not a mean or aggregate, it is a probability.
+
+What is happening? I don't see how eq. 5 fits in the importance sampling scheme (or a self-normalizing importance sampling scheme, link). Thanks in advance!
+"
+"['reinforcement-learning', 'policy-gradients', 'actor-critic-methods', 'experience-replay', 'a3c']"," Title: How does being on-policy prevent us from using the replay buffer with the policy gradients?Body:
+One of the approaches to improving the stability of the Policy
+Gradient family of methods is to use multiple environments in
+parallel. The reason behind this is the fundamental problem we
+discussed in Chapter 6, Deep Q-Network, when we talked about the
+correlation between samples, which breaks the independent and
+identically distributed (i.i.d) assumption, which is critical for
+Stochastic Gradient Descent (SDG) optimization. The negative
+consequence of such correlation is very high variance in gradients,
+which means that our training batch contains very similar examples,
+all of them pushing our network in the same direction. However, this
+may be totally the wrong direction in the global sense, as all those
+examples could be from one single lucky or unlucky episode. With our
+Deep Q-Network (DQN), we solved the issue by storing a large amount of
+previous states in the replay buffer and sampling our training batch
+from this buffer. If the buffer is large enough, the random sample
+from it is much better representation of the states distribution at
+large. Unfortunately, this solution won't work for PG methods, at most
+of them are on-policy, which means that we have to train on samples
+generated by our current policy, so, remembering old transitions is
+not possible anymore.
+
+The above excerpt is from Maxim Lapan in the book Deep Reinforcement Learning Hands-on page 284.
+How does being on-policy prevent us from using the replay buffer with policy gradients? Can you explain to me mathematically why we can't use a replay buffer with A3C, for instance?
+"
+"['deep-learning', 'cross-validation']"," Title: Calculating accuracy for cross validationBody: I'm struggling with calculating accuracy
+when I do cross-validation for a deep learning model.
+I have two candidates for doing this.
+1. Train a model with 10 different folds and get the best accuracy of them(so I get 10 best accuracies) and average them.
+2. Train a model with 10 different folds and get 10 accuracy learning curves. Now, average these learning curves by calculating the mean of 10 accuracies of each epoch. So now we get one averaged accuracy learning curve and find the highest accuracy from this curve.
+
+Which of these two candidates is correct?
+"
+"['neural-networks', 'machine-learning', 'statistical-ai']"," Title: How to calculate the data noise variance for a prediction interval?Body: I have a neural network that connects $N$ input variables to $M$ output variables (qoi). By default, neural networks just give out point estimations.
+
+Now, I want to plot some of the quantities of interest and also produce a prediction interval. To calculate the model uncertainty, I use the bootstrap method.
+
+$$\sigma_{model}^2=\frac{1}{B-1}\sum_{i=1}^B(\hat{y}_i^b-\hat{y}_i)^2\qquad \text{with}\quad\hat{y}_i = \frac{1}{B}\sum_{b=1}^B\hat{y}_i^b$$
+$B$ training datasets are resampled from the original dataset with replacement. $\hat{y}_i^b$ is the prediction for the $i$-th sample generated by the $b$-th bootstrap model.
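+
+For concreteness, here is a small NumPy sketch of how I compute $\sigma_{model}^2$ from the $B$ bootstrap predictions (the toy values and array shapes are just my assumptions):
+
+import numpy as np
+
+# preds[b, i] = prediction for the i-th sample by the b-th bootstrap model (toy values).
+B, n = 50, 200
+rng = np.random.default_rng(0)
+preds = rng.normal(loc=1.0, scale=0.1, size=(B, n))
+
+y_hat = preds.mean(axis=0)                                      # average over the B models
+sigma_model_sq = ((preds - y_hat) ** 2).sum(axis=0) / (B - 1)   # per-sample model variance
+
+print(sigma_model_sq.shape)   # (200,) -> one variance estimate per sample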
+
+If I understood it correctly, the model uncertainty (or epistemic uncertainty) is enough to create a confidence interval.
+
+But for the PI I also need the irreducible error $\sigma_{noise,\epsilon}^2$. $$\sigma_y^2= \sigma_{model}^2+\sigma_{noise,\epsilon}^2$$
+
+The aleatoric uncertainty is explained in the following picture:
+
+
+Is there a procedure to calculate this aleatoric uncertainty?
+
+I read the paper High-Quality Prediction Intervals for Deep Learning and watched the corresponding YouTube video. And I read the paper Neural Network-Based Prediction Intervals.
+
+EDIT
+I suggest the following algorithm to estimate the noise variance, but I am not sure if this makes sense:
+
+
+"
+"['deep-learning', 'natural-language-processing', 'math', 'gradient-descent']"," Title: How is the Jacobian a generalisation of the gradient?Body: I came across these slides Natural Language Processing with Deep Learning CS224N/Ling284, in the context of natural language processing, which talk about the Jacobian as a generalization of the gradient.
+
+I know there is a lot of topic regarding this on the internet, and trust me, I've googled it. But things are getting more and more confused for me.
+
+In simple words, how is the Jacobian a generalization of the gradient? How can it be used in gradient descent?
+"
+"['philosophy', 'math', 'agi', 'incompleteness-theorems']"," Title: Does Gödel's second incompleteness theorem put a limitation on artificial intelligence systems?Body: According to Brian Cantwell Smith
+
+no calculation without representation
+
+Therefore, computers depend on models. So, we can say that AI is limited internally by the model and externally by the environment. This problem is discussed here in a previous question I have asked.
+Now, consider Gödel's second incompleteness theorem
+
+a coherent theory does not demonstrate its own coherence
+
+Can we say that Gödel's second incompleteness theorem puts a limitation on artificial intelligence? How could AI bypass Gödel's second incompleteness theorem?
+"
+"['neural-networks', 'machine-learning', 'word-embedding', 'text-summarization']"," Title: What should the dimension of the input be for text summarization?Body: I am trying to build a model for extractive text summarization using keras sequential layers. I am having a hard time trying to understand how to input my x data. Should it be an array of documents with each document containing an array of sentences? or should I further break it down to each sentence containing an array of words?
+
+The y input is basically a binary classification of each sentence to check whether or not they belong to the summary of the document.
+
+The first layer is an embedding layer and I'm using 100d Glove word embedding.
+
+P.s: I am new to machine learning.
+"
+"['machine-learning', 'training', 'definitions', 'testing']"," Title: Is the test time the phase when the model's accuracy is calculated with test data set?Body: When papers talk about the ""test time"", does this mean the phase when the model is passed with new data instances to derive the accuracy of the test data set? Or is ""test time"" the phase when the model is fully trained and launched for real-world input data?
+"
+"['reinforcement-learning', 'definitions', 'actor-critic-methods', 'value-iteration', 'policy-iteration']"," Title: Would you categorize policy iteration as an actor-critic reinforcement learning approach?Body: One way of understanding the difference between value function approaches, policy approaches and actor-critic approaches in reinforcement learning is the following:
+
+
+- A critic explicitly models a value function for a policy.
+- An actor explicitly models a policy.
+
+
+Value function approaches, such as Q-learning, only keep track of a value function, and the policy is directly derived from that (e.g. greedily or epsilon-greedily). Therefore, these approaches can be classified as a ""critic-only"" approach.
+
+Some policy search/gradient approaches, such as REINFORCE, only use a policy representation, therefore, I would argue that this approach can be classified as an ""actor-only"" approach.
+
+Of course, many policy search/gradient approaches also use value models in addition to a policy model. These algorithms are commonly referred to as ""actor-critic"" approaches (well-known ones are A2C / A3C).
+
+Keeping this taxonomy intact for model-based dynamic programming algorithms, I would argue that value iteration is an actor-only approach, and policy iteration is an actor-critic approach. However, not many people discuss the term actor-critic when referring to policy iteration. How come?
+
+Also, I am not familiar with any model-based/dynamic-programming actor-only approaches. Do these exist? If not, what prevents this from happening?
+"
+"['generative-model', 'latent-variable']"," Title: Why do hypercube latent spaces perform poorer than Gaussian latent spaces in generative neural networks?Body: I have a quick question regarding the use of different latent spaces to represent a distribution. Why is it that a Gaussian is usually used to represent the latent space of the generative model rather than say a hypercube? Is it because a Gaussian has most of its distribution centred around the origin rather than a uniform distribution which uniformly places points in a bounded region?
+
+I've tried modelling different distributions using a generative model with both a Gaussian and Uniform distribution in the latent space and the Uniform is always slightly restrictive when compared with a Gaussian. Is there a mathematical reason behind this?
+
+Thanks in advance!
+"
+['decision-trees']," Title: Why do we use a weighted average of child entropies when we calculate information gain?Body: In the decision tree algorithm, why do we use a weighted average of child entropies when we calculate information gain? What is wrong about using the arithmetic mean of entropies?
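+
+For example, with a hypothetical split that sends 90% of the samples to one child and 10% to the other, the two averages can differ noticeably:
+
+import numpy as np
+
+def entropy(p):
+    p = np.array(p, dtype=float)
+    p = p[p > 0]
+    return -(p * np.log2(p)).sum()
+
+# Hypothetical split: the left child receives 90% of the samples, the right child 10%.
+h_left, h_right = entropy([0.5, 0.5]), entropy([0.1, 0.9])
+weighted = 0.9 * h_left + 0.1 * h_right    # the weighted average used in information gain
+unweighted = (h_left + h_right) / 2        # the plain arithmetic mean
+
+print(round(weighted, 3), round(unweighted, 3))   # 0.947 vs 0.734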
+"
+"['neural-networks', 'deep-learning', 'regularization', 'l2-regularization', 'l1-regularization']"," Title: Does L1/L2 Regularization help reach an optimum result faster?Body: I understand that L1 and L2 regularization helps to prevent overfitting. My question is then, does that mean they also help a neural network learn faster as a result?
+The way I'm thinking is that since the regularization techniques reduce weights (to 0 or close to 0 depending on whether it's L1 or L2) that are not important to the neural network, this would, in turn, result in "better values" for the output neurons right? Or perhaps I am completely wrong.
+For example, suppose I have a neural network that is to train a snake to move around an NxN environment. With regularization, will the snake learn faster, in terms of surviving longer in the game?
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'deep-rl', 'off-policy-methods']"," Title: Is it possible to prove that the target policy is better than the behavioural policy based on learned Q values?Body: I have retrospective data for a sort of ""behaviour policy"" which I will use to train a deep q network to learn a target greedy policy. After learning the Q values for this target policy, can we make the conclusion that because the Q value for the target policy, $Q(s,\pi_e(s))$ is higher than the Q values for the behaviour policy, $Q(s,\pi_b(s))$ at all states encountered, where $\pi_e$ is the policy output by deep Q-learning and $\pi_b$ is the behaviour policy, then this target policy has better performance than the behaviour policy?
+
+I know the proper way is to run the policy and do an empirical comparison of some sort. However, that is not possible in my case.
+"
+"['comparison', 'terminology', 'swarm-intelligence']"," Title: What is the difference between artificial intelligence and swarm intelligence?Body: Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind such as learning and problem-solving.
+According to the Wikipedia article on swarm intelligence
+
+Swarm intelligence (SI) is the collective behavior of decentralized, self-organized systems, natural or artificial. The concept is employed in work on artificial intelligence.
+The application of swarm principles to robots is called swarm robotics, while 'swarm intelligence' refers to the more general set of algorithms.
+SI systems consist typically of a population of simple agents or boids interacting locally with one another and with their environment. The inspiration often comes from nature, especially biological systems.
+
+These two terms seem to be related, especially in their application in computer science and software engineering. Is one a subset of the other? Is one tool (SI) used to build a system for the other (AI)? What are their differences, and why are they significant?
+"
+"['neural-networks', 'machine-learning', 'data-preprocessing', 'brain']"," Title: Are my steps correct for a proper classification of a sick brain?Body: I have a dataset with MRI of patients with a specific disease that affects the brain and another dataset with MRI of healthy patients.
+
+I want to create a classifier (using neural networks) to classify if the MRI of a new patient shows the presence of the illness or not.
+
+First of all, I extracted the brain from all the MRIs (the so-called skull stripping) using BET tool found in FSL.
+
+I have three questions for you
+
+
+- As the input to the training phase, I want to give the whole extracted brains (possibly in the nii format). What kind of preprocessing steps do I need to apply once I've extracted the brains (before passing it to the classifier)?
+- Do you know any better tool for skull stripping?
+- Do you know a tool (or library) that takes as input a nii files and allows me to create a classifier that uses neural networks?
+
+"
+"['reinforcement-learning', 'pytorch', 'ddpg', 'benchmarks']"," Title: Benchmarking SAC on PybulletBody: So far I have seen TD3 and DDPG benchmarks on Pybullet environments, but I am looking for SAC benchmarks on Pybullet too, anyone can help?
+"
+"['neural-networks', 'machine-learning', 'comparison', 'fuzzy-logic']"," Title: What is the difference between fuzzy neural networks and adaptive neuro fuzzy inference systems?Body: I have, like you see, just a general question about the combination of fuzziness and neural networks. I understood it as follows
+
+
+- Fuzzy neural networks as a hybrid system: the neural network helps me to find the optimal parameters related to the fuzzy system, for example, the rules or the membership function
+- Adaptive neural fuzzy inference systems (ANFIS): the NN helps me to find the optimal parameters related to the fuzzy inference system. What are some examples here?
+
+
+I cannot intuitively grasp the difference between these two.
+"
+"['reinforcement-learning', 'q-learning', 'rewards', 'reward-design', 'reward-functions']"," Title: Why is the reward function $\text{reward} = 1/{(\text{cost}+1)^2}$ better than $\text{reward} =1/(\text{cost}+1)$?Body: I have implemented a simple Q-learning algorithm to minimize a cost function by setting the reward to the inverse of the cost of the action taken by the agent. The algorithm converges nicely, but there is some difference I get in the global cost convergence for different orders of the reward function. If I use the reward function as:
+$$\text{reward} = \frac{1}{(\text{cost}+1)^2}$$
+the algorithm converges better (lower global cost, which is the objective of the process) than when I use the reward as:
+$$\text{reward} = \frac{1}{(\text{cost}+1)}$$
+What could be the explanation for this difference? Is it the issue of optimism in the face of uncertainty?
+"
+"['neural-networks', 'machine-learning', 'datasets']"," Title: Why is ANFIS important in general?Body: I am actually working with the iris dataset from sklearn and try to understand the ANFIS-Package for python. But that does not really matter! I have a more general question.
+
+During thinking about adaptive neuro-fuzzy inference system (ANFIS), a general question came into my mind. I don't really understand: in general, why is ANFIS necessary?
+
+So, for example, if I want to predict classes for this iris dataset, I also can use a supervised learning method or a neural network and I get the result.
+
+In ANFIS, I do nothing other than splitting the input attributes into linguistic terms and give membership functions to it. At the end of the day, I will receive ""predictions"" for the input values, which are classes.
+
+But - with the ANFIS-Package in Python - I cannot see, if my membership function has changed during the learning time or what rules the network constructed. So, I cannot really see why this is useful. Maybe it is just because I am usually using the iris dataset for supervised learning.
+"
+"['machine-learning', 'reinforcement-learning', 'ai-design', 'reference-request', 'online-learning']"," Title: Is there an online RL algorithm that receives as input a camera frame and produces an action as output?Body: I want to build a reinforcement learning model, which takes a camera picture as input, that learns online (in terms of machine learning). Based on the position of an object on the camera, I want the model to output an action. That action would be a stepper motor, that either moves to the right or left. This process would be repeated until a given goal/position is reached.
+
+I can't go to the lab at the moment, so I wrote a virtual environment and let the agent live in that.
+
+I am trying a neural network with the cross-entropy function. For small environments, this works fine. However, when I increase the size of the environment, the computation becomes really really slow and the model needs a lot of data input until it starts to learn. Also, it only learns offline. But what I would rather want is a model that learns online and only takes a few tries until it understands the underlying pattern. That isn't a problem for the virtual environment, since I can easily get thousands of data samples. But in the real environment, it would take ages this way.
+
+
+- Is there an online reinforcement algorithm that could help me out (instead of training the neural network with the cross-entropy loss function)?
+
+"
+"['reinforcement-learning', 'comparison', 'terminology']"," Title: What is the difference between the prediction and control problems in the context of Reinforcement Learning?Body: What is the difference between the prediction (value estimation) and control problems in reinforcement learning?
+Are there scenarios in RL where the problem cannot be distinctly categorised into the aforementioned problems and is a mixture of the problems?
+Examples where the problem cannot be easily categorised into one of the aforementioned problems would be nice.
+"
+"['deep-learning', 'research', 'papers']"," Title: Why do most deep learning papers not include an implementation?Body: I'm a novice researcher, and as I started to read papers in the area of deep learning I noticed that the implementation is normally not added and is needed to be searched elsewhere, and my question is how come that's the case? The paper's authors needed to implement their models anyway in order to conduct their experimentations, so why not publish the implementation? Plus, if the implementation is not added and there's no reproducibility, what prevents authors from forging results?
+"
+"['convolutional-neural-networks', 'image-processing', 'image-segmentation']"," Title: Super-Resolution with Convolutional Neuronal Networks, why interpolation at the beginning?Body: I have read several papers about super-resolution with CNNs, where a low-resolution image is reconstructed to a high-resolution image.
+What I don't understand is, why it is necessary to interpolate the low-resolution image at the beginning to a size that matches the high-resolution image target.
+
+What is the idea behind that? If I have an image-to-image transformation, what are the benefits, in a neural network, of having the input size be the same as the output size?
+"
+"['reinforcement-learning', 'definitions', 'minimax', 'monte-carlo-methods', 'temporal-difference-methods']"," Title: In what RL algorithm category is MiniMax?Body: Q-learning is a temporal-difference method and Monte Carlo tree search is a Monte Carlo method. In what category is MiniMax?
+"
+"['machine-learning', 'datasets', 'data-preprocessing']"," Title: How to fill missing values in a dataset where some properties can be inputs and outputs?Body: I have a dataset with missing values, I would like to use machine learning methods to fill.
+In more detail, there are $n$
+individuals, for which up to 10 properties are provided, all numerical. The fact is, there are no individuals for which all properties are given. The first rows (each row contains data for a given individual) do look as the following
+
+$$\begin{bmatrix}
+ 1 & NA & 3.6 & 12.1 & NA \\
+1.2 & NA & NA & 4 & NA \\
+ NA & 4 & 5 & NA & 7
+\end{bmatrix}$$
+
+What methods could be applicable in general?
+
+I have some basic experience with classifiers and random forests. Apart from the obvious difference that this is not a classification problem, what I struggle with most is that the same variable (say, the one described in the $n$-th column) is both an input and an output. Say I want to predict the value $A_{2,3}$ in the dataset above. In this case, all the values in the third column could be used as input, excluding of course $A_{2,3}$ itself, which would be an output.
+
+This seems to be different than the more conventional set-up of predicting a property, given a set of other properties (e.g, predict income given education, work sector, seniority, etc.). In this case, sometimes the income is to be predicted, sometimes used for predicting another variable.
+I am aware of methods which, given a vector $X_i$, could approximate a function $F$ and predict responses $Y_i$ with
+
+$$ Y_i = F(X_i)$$
+
+In the scenario I described though, it looks like some implicit function $\Phi$ is to be found, a function of all the variables $Z_i$ (columns in the dataset above)
+
+$$ \Phi (Z_i) = 0$$
+
+What methods could handle this aspect? I understand the question is probably too general, but I could not find much and could do with a starting point. I would be already content with some hints for my further reading, but anything more would be gratefully welcomed, thanks.
+"
+"['python', 'policy-gradients', 'pytorch', 'actor-critic-methods']"," Title: Advantage computed the wrong way?Body: Here is the code written by Maxim Lapan. I am reading his book (Deep Reinforcement Learning Hands-on). I have seen a line in his code which is really weird. In the accumulation of the policy gradient $$\partial \theta_{\pi} \gets \partial \theta_{\pi} + \nabla_{\theta}\log\pi_{\theta} (a_i | s_i) (R - V_{\theta}(s_i))$$ we have to compute the advantage $R - V_{\theta}(s_i)$. In line 138, maxim uses adv_v = vals_ref_v - value_v.detach()
. Visually, it looks fine, but look at the shape of each term.
+
+ipdb> adv_v.shape
+torch.Size([128, 128])
+
+ipdb> vals_ref_v.shape
+torch.Size([128])
+
+ipdb> values_v.detach().shape
+torch.Size([128, 1])
+
+
+In a much simpler code, it is equivalent to
+
+In [1]: import torch
+
+In [2]: t1 = torch.tensor([1, 2, 3])
+
+In [3]: t2 = torch.tensor([[4], [5], [6]])
+
+In [4]: t1 - t2
+Out[4]:
+tensor([[-3, -2, -1],
+ [-4, -3, -2],
+ [-5, -4, -3]])
+
+In [5]: t1 - t2.detach()
+Out[5]:
+tensor([[-3, -2, -1],
+ [-4, -3, -2],
+ [-5, -4, -3]])
+
+
+I have trained the agent with his code and it works perfectly fine. I am very confused about why this is good practice and what it is doing. Could someone enlighten me on the line adv_v = vals_ref_v - value_v.detach()? For me, the right thing to do would have been adv_v = vals_ref_v - value_v.squeeze(-1).
+
+Here is the full algorithm used in his book :
+
+UPDATE
+
+
+
+As you can see from the image, it is converging even though adv_v = vals_ref_v - value_v.detach() looks wrongly implemented. It is not done yet, but I will update the question later.
+"
+"['recurrent-neural-networks', 'word-embedding', 'attention', 'text-classification', 'speech-synthesis']"," Title: How many spectrogram frames per input character does text-to-speech (TTS) system Tacotron-2 generate?Body: I've been reading on Tacotron-2, a text-to-speech system, that generates speech just-like humans (indistinguishable from humans) using the GitHub https://github.com/Rayhane-mamah/Tacotron-2.
+
+I'm very confused about a simple aspect of text-to-speech, even after reading the paper several times. Tacotron-2 generates spectrogram frames for a given input text. During training, the dataset is a text sentence and its generated spectrogram (it seems at a rate of 12.5 ms per spectrogram frame).
+
+
+- If the input is provided as a character string, then how many spectrogram frames does it predict for each character?
+- How does training determine which frames form the expected output from the dataset? Because the training dataset is simply thousands of frames for a sentence, how does it know which frames are the ideal output for a given character?
+
+
+This basic aspect seems just not mentioned clearly anywhere and I'm having a hard time figuring this one out.
+"
+"['neural-networks', 'machine-learning', 'definitions']"," Title: Is my understanding of how AI works correct?Body: In my discussion over my question on Math SE, I explained to a user, how I think AI works, I wrote that with the sigmoid(logistic) function, features of a data set are identified, many such iterations provide learning.
+
+Is my understanding of how this works correct?
+"
+"['natural-language-processing', 'ai-design', 'chat-bots', 'natural-language-understanding']"," Title: How can I make ELIZA more realistic?Body: I’ve coded a simple ELIZA chatbot for a high school coding competition. The chatbot is part of an app that’s designed to help its user cope with depression, anxiety, and similar mental health disorders. It uses sentiment analysis to identify signs of mental illness, and to track it's user's progress toward ""happiness"" over time.
+
+My question is, what steps can I take to make it more realistic (without using some pre-existing software, library, etc, which isn't allowed)? Also, are there any existing tables of questions/responses I can add to my ELIZA bot's repertoire so that it can handle more conversations?
+"
+['convolutional-neural-networks']," Title: Conversion of strided filter gradient to convolutional formBody: I'm implementing strided 2D convolution. My formula looks like this:
+$$y_{i, j} = \sum_{m=0}^{F_h - 1}\sum_{n=0}^{F_w - 1} x_{s\cdot i + m, s\cdot j + n}\,f_{m, n}, \tag{1}$$ where $s$ is the stride
+(some sources might refer to this as 'cross-correlation' but 'convolution' is consistent with PyTorch's definition)
+
+I have calculated the gradient with respect to the filter as:
+
+$$\frac{\partial E}{\partial f_{m', n'}} = \sum_{i=0}^{(x_h - F_h) / s}\sum_{j=0}^{(x_w - F_w) / s} x_{s\cdot i + m', s\cdot j + n'} \frac{\partial E}{\partial y_{i, j}} \tag{2}$$
+
+and some simple dummy index relabeling leads to:
+$$\frac{\partial E}{\partial f_{i, j}} = \sum_{m=0}^{(x_h - F_h) / s}\sum_{n=0}^{(x_w - F_w) / s} x_{s\cdot m + i, s\cdot n + j} \frac{\partial E}{\partial y_{m, n}} \tag{3}$$
+
+Equation $(3)$ looks similar to the first, but not exactly (the $s$ is on the wrong term!). My objective is to convert the second equation into 'convolutional form' so that I can calculate it using my existing, efficient convolution algorithm.
+
+Could someone please help me work this out, or point out any errors that I have made?
+"
+"['neural-networks', 'deep-learning', 'reinforcement-learning', 'dqn', 'software-evaluation']"," Title: How to evaluate a Deep Q-NetworkBody: Good day, it's a pleasure having joined this Stack.
+
+In my master thesis I have to expand a Deep Reinforcement Learning Network, to be precise a Deep Q-Network, which is used to control machines in an electrical grid for power quality management.
+
+What would be the best way to evaluate if a network is doing a good job during training or not? Right now I have access to the reward function as well as the q_value function.
+
+The rewards consist of 4 arrays, one for each learning criterion of the network. The first tuple is a hard criterion (adherence is mandatory), while the latter 3 are soft criteria:
+
+Episode: 1/3000 Step: 1/11 Reward: [[1.0, 1.0, -1.0], [0.0, 0.68, 1.0], [0.55, 0.55, 0.55], [1.0, 0.62, 0.79]]
+Episode: 1/3000 Step: 2/11 Reward: [[-1.0, 1.0, 1.0], [0.49, 0.46, 0.67], [0.58, 0.58, 0.58], [0.77, 0.84, 0.77]]
+Episode: 1/3000 Step: 3/11 Reward: [[-1.0, 1.0, 1.0], [0.76, 0.46, 0.0], [0.67, 0.67, 0.67], [0.77, 0.84, 1.0]]
+
+
+The q_values are arrays which I do not fully understand yet. Could one of you explain them to me? I read the official definition of Q-values (positive False Discovery Rate). Can these values be used to evaluate neural network training? These are the Q-values for step 1:
+
+Q-Values: [[ 0.6934726 -0.24258053 -0.10599071 -0.44178435 0.5393113 -0.60132784
+ -0.07680141 0.97968364 0.7707691 0.57855517 0.16273917 0.44632837
+ 0.00799532 -0.53355324 -0.45182624 0.9229134 -1.0455914 -0.0765233
+ 0.37784138 0.14711905 0.10986999 0.08918551 -0.8189287 0.14438646
+ 0.8869624 -0.43251887 0.7742889 -0.7671829 0.07737591 0.2569678
+ 0.5102049 0.5132051 -0.31643414 -0.0042788 -0.66071266 -0.18251896
+ 0.7762838 0.15322062 -0.06284399 0.18447408 -0.9609979 -0.4508798
+ -0.07925312 0.7503184 0.6858963 -1.0436649 -0.03167241 0.87660617
+ -0.43605536 -0.28459656 -0.5564517 1.2478396 -1.1418368 -0.9335588
+ -0.72871417 0.04163677 0.30343965 -0.30024529 0.08418611 0.19429305
+ 0.44063848 -0.5541725 0.5740701 0.76789933 -0.9621064 0.0272104
+ -0.44953588 0.13415053 -0.07738207 -0.16188647 0.6667519 0.31965214
+ 0.3241703 -0.27273563 -0.07130697 0.49683014 0.32996863 0.485767
+ 0.39242893 0.40508035 0.3413986 -0.5895434 -0.05772913 -0.6172271
+ -0.12423459 0.2693861 0.32966745 -0.16036317 -0.36371914 -0.04342368
+ 0.22878243 -0.09400887 -0.1134861 0.07647536 0.04724833 0.2907955
+ -0.70616114 0.71054566 0.35959414 -1.0539075 0.19137645 1.1948669
+ -0.21796732 -0.583844 -0.37989947 0.09840107 0.31991178 0.56294084]]
+
+
+Are there other ways of evaluating DQNetworks? I would also appreciate literature about this subject. Thank you very much for your time.
+"
+"['reinforcement-learning', 'pytorch', 'actor-critic-methods', 'convergence']"," Title: Why isn't my implementation of A2C for the the atari pong game converging?Body: I have two different implementations with PyTorch of the Atari Pong game using A2C algorithm. Both implementations are similar, but some portion are different.
+
+
+- https://colab.research.google.com/drive/12YQO4r9v7aFSMqE47Vxl_4ku-c4We3B2?usp=sharing
+
+
+The above code is from the following Github repository: https://github.com/PacktPublishing/Deep-Reinforcement-Learning-Hands-On/blob/master/Chapter10/02_pong_a2c.py It converged perfectly well!
+
+You can find an explanation in Maxim Lapan's book Deep Reinforcement Learning Hands-on page 269
+
+Here is the mean reward curve :
+
+
+
+
+- https://colab.research.google.com/drive/1jkZtk_-kR1Mls9WMbX6l_p1bckph8x1c?usp=sharing
+
+
+The above implementation was created by me, based on Maxim Lapan's book. However, the code is not converging. There's a small portion of my code that is wrong, but I can't point out what it is. I've been working on it for nearly a week now.
+
+Here is the mean reward curve :
+
+
+
+Can someone tell me the problem portion of the code and how can I fix it?
+
+UPDATE 1
+
+I have decided to test my code on a simpler environment, i.e. CartPole-v0.
+
+Here is the code : https://colab.research.google.com/drive/1zL2sy628-J4V1a_NSW2W6MpYinYJSyyZ?usp=sharing
+
+Even that code doesn't seem to converge. Still can't see where is my problem.
+
+UPDATE 2
+
+I think the bug might be in the ExperienceSource class or in the Agent class.
+
+UPDATE 3
+
+The following question will help you understand the classes ExperienceSource and ExperienceSourceFirstLast.
+"
+"['reinforcement-learning', 'dqn']"," Title: DQN not showing the agent is learning in a snake grid environment gameBody: I've been trying to train a snake for the snake game in DQN. Which the snake can essentially just move up, down, left and right. I'm having a hard time getting the snake to stay alive longer. So my question is, what are some techniques that I can implement to get the snake to stay alive for longer?
+
+Some of the things that I've attempted, which don't seem to have done much after about 1000 episodes, are:
+
+- Implementing L2 regularization
+- Reducing the exploration decay rate, so it gives the snake more chance to explore
+- Randomizing the starting point for the snake in each episode, to try to reduce ""local exploration""
+- Tweaking some hyperparameters, such as the learning rate and the policy/target network update rate
+
+
+The input neurons are fed with the state of the board. For example, if my board size is 12*12, then there are 144 input neurons, each representing one cell of the environment. I've checked that the loss decreases fairly quickly, but there is no improvement in how long the snake lasts in the game.
+
+As a side note my reward function is simply a +1 for every time step that the snake survives.
+
+I'm out of ideas about what I can do to get the snake to learn. Maybe 1000 episodes is simply not enough? Or maybe my input is not providing good enough information to train the snake?
+"
+"['reinforcement-learning', 'monte-carlo-tree-search', 'heuristics']"," Title: Why aren’t heuristics for Connect Four Monte Carlo tree search improving the agent?Body: I’ve created an agent using MCTS to play Connect Four. It wins against humans pretty well, but I’d like to improve upon it. I decided to add domain knowledge to the MCTS rollout stage. My evaluation function checks how “good” an action is and returns the best/highest value action to the rollout policy as the action to use.
+I created a “gym” application for one agent, who’s not using the evaluation function, to play against an agent who is using the evaluation function.
+I would have expected the agent using the heuristics to perform better than the agent who isn’t, but the inclusion of the heuristics doesn’t seem to make any difference! Any ideas why this might be the case?
+"
+"['convolutional-neural-networks', 'linear-regression', 'relu', 'non-linear-regression']"," Title: If features are always positives, why do we use RELU activation functions?Body: When does it happen that a layer (either first or hidden) outputs negative values in order to justify the use of RELU?
+
+As far as I know, features are never negative or converted to negative in any other type of layer.
+
+Is it that we can use the ReLU with a different ""inflection"" point than zero, so we can make the neuron start describing a linear response just after this ""new zero""?
+"
+"['deep-learning', 'deep-rl', 'hyperparameter-optimization', 'hyper-parameters', 'sample-efficiency']"," Title: Should we start with a small batch-size and increase during training to improve sample efficiency?Body: Just made an interesting observation playing around with the stable-baseline's implementation of PPO and the BipedalWalker environment from OpenAI's Gym. But I believe this should be a general property of deep learning.
+
+Using a small batch size of 512 samples, the walker achieves near-optimal behavior after just 0.5 million steps. The optimized hyperparameters in the RL Zoo suggest using a batch size of 32k steps. This definitely leads to better performance after 5 million steps, but it takes 2 million steps until it reaches near-optimal behavior.
+
+Therefore the question:
+Shouldn't we schedule the batch-size to improve sample efficiency?
+
+I believe it makes sense because, after initialization, the policy is far away from the optimal one and therefore should update quickly to get better. Even when the gradient estimates using small batches are very noisy, they still seem to bring the policy quickly into a quite good state. Thereafter, we can increase the batch size and make fewer but more precise gradient steps. Or am I missing an important point here?
+"
+"['papers', 'geometric-deep-learning', 'graph-neural-networks', 'spectral-analysis']"," Title: Understanding the node information score in the paper ""Hierarchical Graph Pooling with Structure Learning""Body: The paper Hierarchical Graph Pooling with Structure Learning (2019) introduces a distance measure between:
+
+- a graph's node-representation matrix $\text{H}$, and
+- an approximation of this constructed from each node's neighbours' information $\text{D}^{-1}\text{A}\text{H}$:
+
+
+Here, we formally define the node information score as the Manhattan distance between the node representation itself and the one constructed from its neighbors:
+$$\mathbb{p} = \gamma(\mathcal{G}_i) = ||(\text{I}^{k}_{i} - (\text{D}^{k}_{i})^{-1}\text{A}^{k}_{i})\text{H}^{k}_{i}|| $$
+
+(where $\text{A}$ and $\text{D}$ are the Adjacency and Diagonal matrices of the graph, respectively)
+Expanding the product on the RHS we get (ignoring index notation for simplicity):
+$$||\text{H} - (\text{D}^{-1}\text{A}\text{H})||$$
+Problem: I don't see how $\text{D}^{-1}\text{A}\text{H}$ is a "node representation... constructed from its neighbors".
+$\text{I} - \text{D}^{-1}\text{A}$ is clearly equivalent to the Random Walk Laplacian, but it's not immediately obvious to me how multiplying this by $\text{H}$ provides per-node information on how well one can reconstruct a node from its neighbours.
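+
+To check my own reading, here is a tiny NumPy example (a toy graph I made up) suggesting that row $i$ of $\text{D}^{-1}\text{A}\text{H}$ is simply the mean of the representations of node $i$'s neighbours, which would make $\mathbb{p}$ a per-node reconstruction error:
+
+import numpy as np
+
+# Toy undirected graph on 4 nodes with 2-dimensional node features.
+A = np.array([[0, 1, 1, 0],
+              [1, 0, 1, 0],
+              [1, 1, 0, 1],
+              [0, 0, 1, 0]], dtype=float)
+D_inv = np.diag(1.0 / A.sum(axis=1))
+H = np.arange(8, dtype=float).reshape(4, 2)
+
+recon = D_inv @ A @ H                            # candidate neighbour reconstruction
+mean_of_neighbours_0 = H[[1, 2]].mean(axis=0)    # node 0 is adjacent to nodes 1 and 2
+print(recon[0], mean_of_neighbours_0)            # both give [3. 4.]
+
+scores = np.abs(H - recon).sum(axis=1)           # Manhattan distance per node
+print(scores)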
+"
+"['deep-learning', 'convolutional-neural-networks', 'backpropagation']"," Title: How do I calculate the partial derivative with respect to $x$?Body: I am trying to implement CNN using python NumPy. I searched so much, but all I found was for one filter with one channel for convolution.
+Suppose $x$ is an image with the shape (N_Height, N_Width, N_Channel) = (5,5,3).
+Let's say I have 16 filters with the shape (F_Height, F_Width, N_Channel) = (3,3,3), stride=1 and padding=0.
+Forward:
+The output shape after the 2D convolution will be (math.floor((N_Height - F_Height + 2*padding)/stride + 1), math.floor((N_Width - F_Width + 2*padding)/stride + 1), filter_count).
+
+So, the output of this layer will be an array with this shape: (Height, Width, Channel) = (3, 3, 16)
+BackPropagation:
+Suppose $dL/dh$ is the input for my layer in back-propagation with this shape: (3, 3, 16)
+Now, I must find $dL/dw$ and $dL/dx$: $dL/dw$ to update my filters' parameters, and $dL/dx$ to pass to the previous layer as the loss with respect to the input $x$.
+From this answer Error respect to filters weights I found how to calculate $dL/dw$.
+The problem I have in the back-propagation is that I don't know how to calculate $dL/dx$, which has the shape (5, 5, 3), and pass it to the previous layer.
+I read lots of articles in Medium and other sites, but I don't get how to calculate it:
+
+"
+"['machine-learning', 'ai-design', 'reference-request']"," Title: Which machine learning method can take a matrix as input?Body: I am pretty new to the machine learning field. I want to use an $n \times m$ matrix as the input of a model, in order to predict a vector $1 \times m$, both of real numbers. Input data are quite clean, with statistics of about 10000 items.
+
+Do you know a method that can handle that?
+"
+"['reinforcement-learning', 'monte-carlo-methods', 'value-functions', 'importance-sampling']"," Title: How is the incremental update rule derived from the weighted importance sampling in off-policy Monte Carlo control?Body: Here's the approximated value using weighted importance sampling
+
+$$
+V_{n} \doteq \frac{\sum_{k=1}^{n-1} W_{k} G_{k}}{\sum_{k=1}^{n-1} W_{k}}, \quad n \geq 2
+$$
+
+Here's the incremental update rule for the approximated value
+
+$$V_{n+1} \doteq V_{n}+\frac{W_{n}}{C_{n}}\left[G_{n}-V_{n}\right], \quad n \geq 1$$
+
+How is the second equation derived from the first?
+
+These are used for the weighted importance sampling method of off-policy Monte Carlo control.
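+
+For context, here is a quick numerical check (with made-up $W_k$ and $G_k$ values, and assuming $C_n = \sum_{k=1}^{n} W_k$, which is how I read $C_n$) that the two expressions give the same value:
+
+import numpy as np
+
+rng = np.random.default_rng(0)
+W = rng.uniform(0.1, 2.0, size=10)     # made-up importance-sampling ratios
+G = rng.normal(size=10)                # made-up returns
+
+V_batch = np.sum(W * G) / np.sum(W)    # first (batch) formula
+
+V, C = 0.0, 0.0                        # second (incremental) formula
+for w, g in zip(W, G):
+    C += w
+    V += (w / C) * (g - V)
+
+print(V_batch, V)                      # the two values agree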
+"
+"['deep-learning', 'convolutional-neural-networks', 'computer-vision', 'attention']"," Title: How is visual attention mechanism different from a two branch convolutional neural network?Body: I am doing some research on the visual attention mechanism in remote sensing domain (where the features learnt from one layer are highlighted using the attention mask derived from another layer). From what I have observed, the attention mask is learnt in a similar fashion as any other branch in CNN. So, what is so special about the visual attention mask that makes it different from a regular two branch CNN? The reference papers are provided below:
+
+Visual Attention-Driven Hyperspectral Image Classification (IEEE, 2019)
+
+A Two-Branch CNN Architecture for Land Cover Classification of PAN and MS Imagery (MDPI, 2019)
+"
+"['machine-learning', 'reference-request', 'applications', 'few-shot-learning']"," Title: What are some use cases of few-shot learning?Body: Besides computer vision and image classification, what other use cases/applications are for few-shot learning?
+"
+"['neural-networks', 'comparison', 'papers', 'hidden-markov-model']"," Title: What is a Hidden Markov Model - Artificial Neural Network (HMM-ANN)?Body: As far as I know, neural networks have hidden computational units and HMM has hidden states.
+
+Hidden Markov Models can be used to generate a language, that is, list elements from a family of strings. For example, if you have an HMM that models a set of sequences, you would be able to generate members of this family, by listing sequences that would fall into the group of sequences we are modeling.
+
+In this, this and this paper, HMMs are combined with ANNs. But how exactly? What is a Hidden Markov Model - Artificial Neural Network (HMM-ANN)? Is HMM-ANN a hybrid algorithm? In simple words, how is this model or algorithm used?
+"
+"['machine-learning', 'deep-learning', 'comparison']"," Title: What is the difference between deep learning and shallow learning?Body: What is the difference between deep learning and shallow learning?
+What I am interested in knowing is not the definition of deep learning and shallow learning, but understanding the actual difference.
+Links to other resources are also appreciated.
+"
+"['applications', 'computational-learning-theory', 'pac-learning', 'sample-complexity', 'hypothesis-class']"," Title: Is there any practical application of knowing whether a concept class is PAC-learnable?Body: A concept class $C$ is PAC-learnable if there exists an algorithm that can output a hypothesis with probability at least $(1-\delta)$ (the ""probably"" part), and an error that is less than $\epsilon$ (the ""approximately"" part), in time that is polynomial in $1/\epsilon$, $1/\delta$, $n$ and $|C|$.
+
+Tom Mitchell defines an upper bound for the sample complexity, $m \geq \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)$, for finite hypothesis spaces. Based on this bound, he classifies whether target concepts are PAC-learnable or not, for example, the concept class of conjunctions of $n$ boolean literals.
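+
+To make the bound concrete, here is a quick illustrative calculation, assuming $H$ is the space of conjunctions over $n = 10$ boolean literals (so $|H| = 3^{10}$, since each literal can appear positively, negated, or not at all), with $\epsilon = 0.1$ and $\delta = 0.05$:
+
+import math
+
+n, epsilon, delta = 10, 0.1, 0.05      # illustrative values only
+H_size = 3 ** n
+m = (1 / epsilon) * (math.log(H_size) + math.log(1 / delta))
+print(math.ceil(m))                    # about 140 examples suffice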
+
+It seems to me that PAC-learnability seeks to act more like a classification of certain concept classes.
+
+Are there any practical purposes for knowing whether a concept class is PAC-learnable?
+"
+"['applications', 'reference-request', 'chat-bots', 'resource-request', 'healthcare']"," Title: Is there an AI system that, given a patient's symptoms, produces a diagnosis and suggests a treatment?Body: Is there an AI system (preferably, one that interacts with the human, such as a chatbot like this one) that, given some input (e.g. entered into the system by writing text), such as a person's physical history and symptoms (of a certain disease), produces a diagnosis of the disease and/or suggests medications or a treatment to improve the condition of the patient?
+"
+"['deep-learning', 'monte-carlo-tree-search', 'chess', 'alphazero', 'deepmind']"," Title: Where does reinforcement learning actually show up in Deepmind's game engines?Body: From the brief research I've done on the topic, it appears that the way Deepmind's Alphazero or Muzero makes decisions is through Monte Carlo tree searches, where in the randomized simulations allows for a more rapid way to make calculations than traditional alpha-beta pruning. As the simulation space increases, this search approaches that of a classical tree search.
+
+Where exactly did Deepmind use neural networks? Was it in the evaluation portion? And if so, how did they make determinations on what makes a ""good"" or ""bad"" game state? If they deferred to the evaluations of another chess engine like Stockfish, how do we see AlphaZero absolutely demolish Stockfish in head-to-head matches?
+"
+"['neural-networks', 'training', 'prediction']"," Title: Confidence Interval around prediction with bootstrappingBody: I want to generate a confidence interval around my prediction (vector) $\hat{y}$. I have implemented the following procedure. However, I am not sure whether this makes sense in a statistical way:
+
+
+- I have a data set. First, I split it into an 80% training set (2000 measurements), a 10% validation set and a 10% testing set (250 measurements)
+- I resample B ($\sim 100$) training sets from the original training set with replacement.
+
+
+- For each of the B training datasets I train the model $b$ and validate it (every time I use the same validation set).
+- I use the test set from point 1. and make a prediction $\hat{y}_i^b$ (so every time I use the same test set, since I need the predictions for the same input values)
+
+- I calculate the average of the $B$ predictions. $$\bar{\hat{y}}_i=\frac{1}{B}\sum_{b=1}^B\hat{y}_i^b$$.
+- I calculate the variance ($i\in [1,250]$)
+$$\sigma_{\hat{y}_i}^2=\frac{1}{B-1}\sum_{b=1}^B(\hat{y}_i^b -\bar{\hat{y}}_i)^2$$
+- I guess the $95\%$ confidence interval for the prediction $\hat{y}_i$ is $$\hat{y}_i\in \bar{\hat{y}}_i\pm z_{0.025}\frac{\sigma_{\hat{y}_i}}{\sqrt{B}}$$
+with $z_{0.025}=1.96$
+- If I sort the $\hat{y}_i$ values and plot them together with the upper and lower bounds, I will get the prediction curve with a CI (a code sketch of the whole procedure follows below).
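+
+To make the resampling and CI computation concrete, here is a minimal, purely illustrative sketch (fit_model, predict and the synthetic data are hypothetical stand-ins for whatever model and split are actually used):
+
+import numpy as np
+
+rng = np.random.default_rng(0)
+X_train, y_train = rng.normal(size=(2000, 5)), rng.normal(size=2000)   # stand-ins for the real split
+X_test = rng.normal(size=(250, 5))
+
+def fit_model(X, y):
+    # hypothetical placeholder model: ordinary least squares
+    return np.linalg.lstsq(X, y, rcond=None)[0]
+
+def predict(coef, X):
+    return X @ coef
+
+B = 100
+n_train = len(X_train)
+preds = []
+for b in range(B):
+    idx = rng.choice(n_train, size=n_train, replace=True)   # resample with replacement
+    coef_b = fit_model(X_train[idx], y_train[idx])           # train on bootstrap sample b
+    preds.append(predict(coef_b, X_test))                    # predict on the fixed test set
+preds = np.asarray(preds)                                     # shape (B, 250)
+
+mean_pred = preds.mean(axis=0)
+std_pred = preds.std(axis=0, ddof=1)                          # the B-1 denominator
+lower = mean_pred - 1.96 * std_pred / np.sqrt(B)
+upper = mean_pred + 1.96 * std_pred / np.sqrt(B)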
+
+
+My biggest uncertainty relates to step 5). I read in the book Supervised Classification: Quite a Brief Overview by Marco Loog:
+
+
+ When the population standard deviation $\sigma$ is known and the parent population is normally distributed or $N>30$ the $100(1-\alpha)$ CI for the population mean is given by the symmetrical distribution for the standardized normal distribution $z$
+ $$\mu\in \bar{x}\pm z_{a/2}\frac{\sigma}{\sqrt{N}}$$
+
+
+Is it correct to say here $N=B$ (the number of bootstrap models, or the number of resampled training sets, or the number of estimators $\hat{y}_i^b$)? Does the procedure make sense?
+"
+"['reinforcement-learning', 'math', 'policy-gradients']"," Title: Is this the correct gradient for log of softmax?Body: I am currently implementing the very basic version (REINFORCE) of the Monte Carlo policy gradient algorithm. I was wondering if this is the correct gradient for the log of softmax.
+
+\begin{align}
+\nabla_{\theta} \log \pi_{\theta}(s, a)
+&=
+\varphi(s, a)-\mathbb{E}\left[\varphi(s, a)_{\forall a \in A}\right] \\
+&=
+\left(\varphi(s)^T \cdot \theta_{a}\right)-\sum_{\forall a \in A}\left(\varphi(s)^T \cdot \theta_{a}\right)
+\end{align}
+
+where $\varphi(s)$ is the feature vector at state $s$.
+
+I am not sure if my interpretation of the equation is correct. I ask because, in my implementation, my weights ($\theta$) blow up after a few iterations, and I have a feeling the problem is in this line.
+"
+"['datasets', 'unsupervised-learning', 'data-science', 'clustering', 'k-means']"," Title: Is this dataset with only two features suitable for clustering with k-means?Body: I am working with the K-means clustering algorithm for unsupervised learning.
+
+Is the following dataset suitable for the k-means clustering task or not? Why or why not? The dataset has only two features.
+
+
+
+
+"
+"['machine-learning', 'long-short-term-memory', 'data-preprocessing', 'structured-data', 'feature-engineering']"," Title: How to feed key-value features (aggregated data) to LSTM?Body: I have the following time-series aggregated input for an LSTM-based model:
+
+x(0): {y(0,0): {a(0,0), b(0,0)}, y(0,1): {a(0,1), b(0,1)}, ..., y(0,n): {a(0,n), b(0,n)}}
+x(1): {y(1,0): {a(1,0), b(1,0)}, y(1,1): {a(1,1), b(1,1)}, ..., y(1,n): {a(1,n), b(1,n)}}
+...
+x(m): {y(m,0): {a(m,0), b(m,0)}, y(m,1): {a(m,1), b(m,1)}, ..., y(m,n): {a(m,n), b(m,n)}}
+
+
+where x(m) is a timestep, and a(m,n) and b(m,n) are features aggregated by the non-temporal sequential key y(m,n), which might be 0...1,000.
+
+Example:
+
+0: {90: {4, 4.2}, 91: {6, 0.2}, 92: {1, 0.4}, 93: {12, 11.2}}
+1: {103: {1, 0.2}}
+2: {100: {3, 0.1}, 101: {0.4, 4}}
+
+
+Where 90-93, 103, and 100-101 are aggregation keys.
+
+How can I feed this kind of input to LSTM?
+
+Another approach would be to use non-aggregated data. In that case, I'd get the proper input for LSTM. Example:
+
+Aggregated input:
+
+0: {100: {3, 0.1}, 101: {0.4, 4}}
+
+
+Original input:
+
+0: 100, 1, 0.05
+1: 101, 0.2, 2
+2: 100, 1, 0
+3: 100, 1, 0.05
+4: 101, 0.2, 2
+
+
+But in that case, the aggregation would be lost, and the whole purpose of aggregation is to minimize the number of steps so that I get 500 timesteps instead of e.g. 40,000, which is impossible to feed to LSTM. If you have any ideas I'd appreciate it.
+"
+"['ai-design', 'image-recognition', 'resource-request']"," Title: Is there an AI tool to reverse engineer scanned data to obtain its CAD file?Body: Today, if you scan an object and want its CAD file (Solidworks/Autocad), you need to use reverse engineering software (Geomagic). This takes time and you need experience of the software tools.
+
+Is there an AI tool/app that does the job automatically? If not, is this a reasonable idea to develop an AI application capable of doing it? What would be the biggest challenges?
+"
+"['neural-networks', 'machine-learning', 'natural-language-processing', 'transformer', 'attention']"," Title: Why does this multiplication of $Q$ and $K$ have a variance of $d_k$, in scaled dot product attention?Body: In scaled dot product attention, we scale our outputs by dividing the dot product by the square root of the dimensionality of the matrix:
+
+The stated reason is that this constrains the distribution of the output weights to have a standard deviation of 1.
+Quoted from Transformer model for language understanding | TensorFlow:
+
+For example, consider that $Q$ and $K$ have a mean of 0 and variance of 1. Their matrix multiplication will have a mean of 0 and variance of $d_k$. Hence, square root of $d_k$ is used for scaling (and not any other number) because the matmul of $Q$ and $K$ should have a mean of 0 and variance of 1, and you get a gentler softmax.
+
+Why does this multiplication have a variance of $d_k$?
+If I understand this, I will then understand why dividing by $\sqrt{d_k}$ would normalize the variance to 1.
+Trying this experiment on 2x2 arrays I get an output of 1.6 variance:
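+Re-running the experiment with a larger $d_k$ might make the effect easier to see (a 2x2 matmul gives a very noisy variance estimate); this is purely illustrative:
+
+import numpy as np
+
+d_k = 512
+Q = np.random.normal(0, 1, size=(1000, d_k))
+K = np.random.normal(0, 1, size=(1000, d_k))
+scores = Q @ K.T                       # each entry is a sum of d_k products
+print(scores.var())                    # roughly d_k (about 512)
+print((scores / np.sqrt(d_k)).var())   # roughly 1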
+
+"
+"['reinforcement-learning', 'proximal-policy-optimization', 'learning-rate']"," Title: How to best make use of learning rate scheduling in reinforcement learning?Body: How to best make use of learning rate scheduling in reinforcement learning?
+
+To me, a low learning rate towards the end, to fine-tune what you've learned with subtle updates, makes sense. But I don't see why it should be brought down linearly over training time. Wouldn't this also increase overfitting, as it promotes an early-adopted policy that gets further and further fine-tuned for the rest of training?
+Wouldn't it be better to keep it constant over the entire training so that when the agent finds novel experiences later, it still has a high enough learning rate to update its model?
+
+I also don't really know how these modern deep RL papers do it. The starcraft II paper by DeepMind, and the OpenAI hide and seek paper don't mention learning rate schedules for instance.
+
+Or are there certain RL environments where it's actually best to use something like a linear learning rate schedule?
+"
+"['comparison', 'boltzmann-machine', 'deep-belief-network', 'restricted-boltzmann-machine', 'deep-boltzmann-machine']"," Title: What are the differences between a deep belief network, a restricted Boltzmann machine and a deep Boltzmann machine?Body: Can anyone list the differences between deep Belief network (DBN), restricted Boltzmann machine (RBM), deep Boltzmann machine (DBM) using simple examples?
+
+Links to other resources are also appreciated.
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'overfitting']"," Title: Does the concept of validation loss apply to training deep Q networks?Body: In deep learning, the concept of validation loss is to ensure that the model being trained is not currently overfitting the data. Is there a similar concept of overfitting in deep q learning?
+
+Given that I have a fixed number of experiences already in a replay buffer and I train a q network by sampling from this buffer, would computing the validation loss (separate from the experiences in the replay buffer) help me to decide whether I should stop training the network?
+
+For example, If my validation loss increases even though my train loss continues to decrease, I should stop training the training. Does deep learning validation loss also apply in the deep q network case?
+
+Just to clarify again, no experiences are collected during the training of the DQN.
+"
+"['reinforcement-learning', 'comparison', 'q-learning', 'off-policy-methods', 'on-policy-methods']"," Title: What is the difference between on-policy and off-policy for continuous environments?Body: I'm trying to understand RL applied to time series (so with infinite horizon) which have a continous state space and a discrete action space.
+
+First, some preliminary questions: in this case, what is the optimal policy? Given the infinite horizon, there is no terminal state but only an objective to maximise the rewards, so I can't run more than one episode; is that correct?
+
+Consequently, what is the difference between on-policy and off-policy learning given this framework?
+"
+"['reinforcement-learning', 'keras', 'policy-gradients', 'rewards', 'cross-entropy']"," Title: How do you manage negative rewards in policy gradients?Body: This old question has no definitive answer yet, that's why I am asking it here again. I also asked this same question here.
+If I'm doing policy gradient in Keras, using a loss of the form:
+rewards*cross_entropy(action_pdf, selected_action_one_hot)
+
+How do I manage negative rewards?
+I've had success with this form in cases where the reward is always positive, but it does not train with negative rewards. The failure mode is for it to drive itself to very confident predictions all the time, which results in very large negative losses due to induced deviation for exploration. I can get it to train by clipping rewards at zero, but this throws away a lot of valuable information (only carrots, no sticks).
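+
+For reference, here is a minimal sketch of the loss form I mean (names are illustrative; the per-sample return is assumed to be passed in alongside the targets):
+
+import tensorflow as tf
+
+def pg_loss(selected_action_one_hot, action_pdf, rewards):
+    # per-sample cross-entropy between the taken action and the policy output
+    ce = tf.keras.losses.categorical_crossentropy(selected_action_one_hot, action_pdf)
+    # weighting by the (possibly negative) return is where the problem shows up
+    return tf.reduce_mean(rewards * ce)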
+"
+"['markov-chain', 'bayesian-networks']"," Title: What is the difference between a Bayesian Network and a Markov Chain?Body: I am trying to understand the difference between a Bayesian Network and a Markov Chain.
+
+When I search for this one the web, the unanimous solution seems to be that a Bayesian Network is directional (i.e. it's a DAG) and a Markov Chain is not directional.
+
+However, a Markov Chain example is often over time, where the weather today impacts the weather tomorrow, but the weather tomorrow (obviously) does not impact the weather today. So I am quite confused: how is a Markov Chain not directional?
+
+I seem to be missing something here. Can someone please help me understand?
+"
+"['convolutional-neural-networks', 'recurrent-neural-networks', 'terminology', 'geometric-deep-learning', 'graph-neural-networks']"," Title: How do CNNs or RNNs ""stack the feature of nodes by a specific order""?Body: I am trying to understand the following statement taken from the paper Graph Neural Networks: A Review of Methods and Applications (2019).
+
+
+ Standard neural networks like CNNs and RNNs cannot handle the graph input properly in that they stack the feature of nodes by a specific order.
+
+
+This statement is confusing to me. I have not used CNNs/RNNs for non-Euclidean data before, so perhaps that's where my understanding falls off.
+
+How do CNNs/RNNs stack the feature of nodes by a specific order?
+"
+"['deep-learning', 'convolutional-neural-networks', 'classification', 'datasets', 'data-preprocessing']"," Title: How does sampling works in case of imbalanced image datasets?Body: I am solving a problem of image classification of the image dataset for 3 classes. Dataset is highly imbalanced.
+
+How will sampling (either over- or under-sampling) work in that case? Should I remove (or add) any random number of images, or should I follow some pattern?
+
+In the case of CSV data, the general rule is to do PCA, and then remove the data points, but how to do it in the image dataset? Is there any other way to handle this problem?
+"
+"['neural-networks', 'deep-learning', 'reinforcement-learning', 'deep-rl']"," Title: How does the repetition of features across states at different time steps affect learning?Body: Let's say you are training a neural network in an RL setting, where the state (i.e. features/input data) can be the same for multiple successive steps (~typically around 8 steps) of an episode.
+
+For example, an initial state might consist of the following values:
+
+[30, 0.2, 0.5, 1, 0]
+
+
+And then again the same state could be fed into the neural network for e.g. 6-7 times more, resulting in ultimately the following input arrays:
+
+[[30, 0.2, 0.5, 1, 0],
+ [30, 0.2, 0.5, 1, 0],
+ ...,
+ [30, 0.2, 0.5, 1, 0]]
+
+
+I know that a value of 0 in the feature set means that the weight for this feature contributes an insignificant value.
+
+But what about the repetition of values? How does that affect learning, if it does at all? Any ideas?
+
+Edit: I am going to provide more information as requested in the comments.
+
+The reason I did not provide this information in the first place, is because I thought there would be similarities in such cases across problems/domains of application. But it is also fine to make it more specific.
+
+
+- The output of the network is a probability among two paths. Our network has to select an optimal path based on some gathered network statistics.
+- I will be using A3C, as similar work in the bibliography has made progress.
+- The reason the agent is staying in the same state is the fact that the protocol can also make path selection decisions at the same time, without an actual update of network statistics. So in that case, you would have the same RTT for instance.
+
+i. This is a product of concurrency in the protocol
+
+ii. It is expected behavior
+
+"
+"['reference-request', 'human-like', 'facial-recognition', 'complexity-theory', 'information-theory']"," Title: Is there any published research on the information-carrying capacity of the human face?Body:
+- Is there any published research on the information-carrying capacity of the human face?
+
+
+Here I mean ""how much information can be conveyed via facial expressions & micro-expressions"".
+
+This is a subject of interest because the human face is arguably ""the most interesting"" single thing for humans, since it's likely the first real pattern we recognize as infants, and it conveys so much non-verbal communication that can relate to achievement of a goal or identification of a mortal threat. (Dogs similarly are said to have co-evolved to read human faces. Film acting as ""the art of the closeup"" also validates this viewpoint.)
+
+Essentially, I'm trying to get a sense of how complex the set of human facial expressions is, what the computational complexity is of the range of problems related to identifying the range of possible expressions, and of emulating such expressions to imitate a human agent. (I.e., these techniques can be used to ""read"" a human subject or to manipulate a human subject.)
+
+Well researched articles & blogs would also be welcome.
+"
+"['neural-networks', 'recurrent-neural-networks', 'transformer']"," Title: Can you use transformer models to do autocomplete tasks?Body: I've researched online and seen many papers on the use of RNNs (like LSTMs or GRUs) to autocomplete for, say, a search engine, character by character. Which makes sense since it inherently predicts character-by-character in a sequential manner.
+
+Would it be possible to use the transformer architecture instead to do search autocomplete? If so, how might such a model be adapted?
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'reward-design', 'reward-functions']"," Title: How should I handle invalid actions in a grid world?Body: I'm building a really simple experiment, where I let an agent move from the bottom-left corner to the upper-right corner of a $3 \times 3$ grid world.
+I plan to use DQN to do this. I'm having trouble handling the starting point: what if the Q network's prediction is telling the agent to move downward (or leftward) at the beginning?
+Shall I program the environment to immediately give a $-\infty$ reward and end this episode? Will this penalty make the agent "fear" moving left again in the future, even if moving left is a possible choice?
+Any suggestions?
+"
+"['reinforcement-learning', 'policy-gradients', 'eligibility-traces']"," Title: How do I derive the gradient with respect to the parameters of the softmax policy?Body: The gradient of the softmax eligibility trace is given by the following:
+
+\begin{align}
+\nabla_{\theta} \log(\pi_{\theta}(a|s)) &= \phi(s,a) - \mathbb E[\phi (s, \cdot)]\\
+&= \phi(s,a) - \sum_{a'} \pi(a'|s) \phi(s,a')
+\end{align}
+
+How is this equation derived?
+
+The following relation is true:
+
+\begin{align}
+\nabla_{\theta} \log(\pi_{\theta}(a|s)) &= \frac{\nabla_{\theta} \pi_{\theta}(a|s)}{\pi_{\theta}(a|s)} \tag{1}\label{1}
+\end{align}
+
+Thus, the following relation must also be true:
+\begin{align}
+\frac{\nabla_{\theta} \pi_{\theta}(a|s)}{\pi_{\theta}(a|s)} &=\phi(s,a) - \sum_{a'} \pi(a'|s) \phi(s,a')
+\end{align}
+
+Mathematically, why would this be the case? Probably, you just need to answer my question above because \ref{1} is true and it's just the rule to differentiate a logarithm.
+"
+"['neural-networks', 'convolutional-neural-networks', 'hyperparameter-optimization', 'convolution']"," Title: How do we choose the filters for the convolutional layer of a convolution neural network?Body: Since the hidden layers of a CNN work as a trainable feature extractor, more detailed content based on a larger number of pixels shall require bigger filter sizes. But for cases where localized differences are to receive greater attention, smaller filter sizes are required.
+
+I know there are a lot of topics on the internet regarding CNNs, and most of them give a simple explanation of the convolution layer and what it is designed for, but they don’t explain:
+
+
+ How many convolution layers are required?
+
+ What filters should I use in those convolution layers?
+
+"
+"['deep-learning', 'data-preprocessing']"," Title: Can I find a mapping that minimizes the maximum distance ratio of certain vectors?Body: Let's say we have several vector points. My goal is to distinguish the vectors, so I want to make them far from each other. Some of them are already far from each other, but some of them can be positioned very closely.
+
+I want to get a certain mapping function that can separate such points that are close to each other, while still preserving the points that are already far away from each other.
+
+I do not care what is the form of the mapping. Since the mapping will be employed as pre-processing, it does not have to be differentiable or even continuous.
+
+I think this problem is somewhat similar to 'minimizing the maximum distance ratio between the points'. Maybe this problem can be understood as stretching the crushed graph to a sphere-like isotropic graph.
+
+I googled it for an hour, but it seems that the people are usually interested in selecting the points that have such nice characteristics from a bunch of data, rather than mapping an existing vector points to a better one.
+
+So, in conclusion, I could not find anything useful.
+
+Maybe you think 'the neural network will naturally learn it while solving the classification problem'. But it failed, because it is already struggling with too many burdens. This is why I want to help my network with pre-processing.
+"
+"['neural-networks', 'terminology']"," Title: What are mono-variable and multi-variable neural networks?Body: In this document, the terms ""Redes Neuronales estáticas monovariables"" and ""Redes Neuronales estáticas multivariables"" are mentioned.
+
+What are mono-variable and multi-variable neural networks? Is it the same as a multi-layer or uni-layer NN?
+
+I have searched about multivariable/mono-variable static/dynamic neural networks in some books, but at least in those books there's no information about these topics.
+
+I have the idea it refers to the inputs/outputs, but I'm not sure.
+"
+"['reinforcement-learning', 'comparison', 'policy-iteration', 'policy-evaluation', 'dynamic-programming']"," Title: Why is update rule of the value function different in policy evaluation and policy iteration?Body: In the textbook ""Reinforcement Learning: An Introduction"", by Richard Sutton and Andrew Barto, the pseudo code for Policy Evaluation is given as follows:
+
+
+
+The update equation for $V(s)$ comes from the Bellman equation for $v_{\pi}(s)$ which is mentioned below (the update equation) for your convenience:
+$$v_{k+1}(s) = \sum_{a} \pi(a|s)\sum_{s',r}p(s',r|s,a)[r+\gamma v_{k}(s')]$$
+
+Now, in Policy Iteration, the Policy Evaluation comes in stage 2, as mentioned in the following pseudo code:
+
+
+
+Here, in the policy Evaluation stage, $V(s)$ is updated using a different equation:
+$$\begin{align} v_{k+1}(s) = \sum_{s',r}p(s',r|s,\pi (s))[r + \gamma v_{k}(s')] \end{align}$$ where $a = \pi(s)$ is used.
+
+Can someone please help me in understanding why this change is made in Policy Iteration? Are the two equations the same?
+"
+"['neural-networks', 'machine-learning', 'autoencoders', 'information-theory']"," Title: Is it possible to have the latent vector of an auto-encoder with size 1?Body: Given e.g. 1M vectors of $1000$ floating points each, where every point in vectors is sampled from a uniform distribution between $-1$ to $1$:
+
+Is it possible to have the bottleneck of the AE network with size 1? In other words, without caring about generalization, is it possible to train a network, where, given only 1 encoded value, it can recreate any of the 1M examples?
+"
+"['neural-networks', 'machine-learning', 'autoencoders', 'overfitting', 'computational-learning-theory']"," Title: How estimate the minimum size of an autoencoder to overfit the training data?Body: Given e.g. $1$M vectors of $1000$ floating points each, where every point in vectors is sampled from a uniform distribution between $-1$ to $1$, how to estimate the minimum network size required between input ($1000$ units), bottleneck (preferably $1$ unit), and output ($1000$ units) which is cable of overfitting the training data perfectly?
+"
+"['deep-learning', 'research', 'papers', 'academia']"," Title: How does publishing in the deep learning world work, with respect to journals and arXiv?Body: Let's say I implemented a new deep learning model that pushed some SOTA a little bit further, and I wrote a new paper about for publication.
+
+How does it work now? I pictured three options:
+
+
+- Submit it to a conference. Ok, that's the easy one, I submit it to something like NeurIPS or ICML and hope to get accepted. At that point, how do you make your paper accessible? Are there problems in uploading it to arXiv later, in order to get read by more people?
+- Upload it on arXiv directly. If I do that it would not be peer-reviewed, and technically speaking it would be devoid of ""academic value"". Right? It could easily be read by anyone, but there would be no formal ""proof"" of its ""scientific quality"". Correct me if I'm wrong.
+- Submit it to a peer-reviewed journal. Avoid desk rejection, avoid reviewers' rejection, after a long painful process it ends up on some international scientific journal. At that point, since the article is formally the editor's property, can you still upload it on arXiv, or on your blog, so that it can be accessible by many people?
+- How do the big stars of deep learning research do when they have some hot new paper ready for publication? And what publications are the most valued in the professional and the academic world?
+
+"
+"['machine-learning', 'graphs', 'text-classification', 'semantics', 'singular-value-decomposition']"," Title: Applications of polar decomposition in Machine LearningBody: Assume there exists a new and very efficient algorithm for calculating the polar decomposition of a matrix $A=UP$, where $U$ is a unitary matrix and $P$ is a positive-semidefinite Hermitian matrix. Would there be any interesting applications in Machine Learning? Maybe topic modeling? Or page ranking? I am interested in references to articles and books.
+"
+"['reinforcement-learning', 'ai-design', 'open-ai', 'environment', 'gym']"," Title: What should the action space for the card game Crib be?Body: I'm working on creating an environment for a card game, which the agent chooses to discard certain cards in the first phase of the game, and uses the remaining cards to play with. (The game is Crib if you are familiar with it.)
+
+How can I make an action space for these actions? For instance, in this game, we could discard 2 of 6 cards, then choose 1 of 4 remaining cards to play, then 1 of 3 remaining cards, then 1 of 2 remaining cards. How do I model this?
+
+I've read this post on using MultiDiscrete spaces, but I'm not sure how to define this space based on the previously chosen action. Is this even the right approach to be taking?
+"
+"['neural-networks', 'autoencoders']"," Title: Does the reduction of the dimensions over multiple layers allow more details to be stored within the final representation?Body: From : https://debuggercafe.com/implementing-deep-autoencoder-in-pytorch/
+the following autoencoder is defined
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class Autoencoder(nn.Module):
+ def __init__(self):
+ super(Autoencoder, self).__init__()
+
+ # encoder
+ self.enc1 = nn.Linear(in_features=784, out_features=256)
+ self.enc2 = nn.Linear(in_features=256, out_features=128)
+ self.enc3 = nn.Linear(in_features=128, out_features=64)
+ self.enc4 = nn.Linear(in_features=64, out_features=32)
+ self.enc5 = nn.Linear(in_features=32, out_features=16)
+
+ # decoder
+ self.dec1 = nn.Linear(in_features=16, out_features=32)
+ self.dec2 = nn.Linear(in_features=32, out_features=64)
+ self.dec3 = nn.Linear(in_features=64, out_features=128)
+ self.dec4 = nn.Linear(in_features=128, out_features=256)
+ self.dec5 = nn.Linear(in_features=256, out_features=784)
+
+ def forward(self, x):
+ x = F.relu(self.enc1(x))
+ x = F.relu(self.enc2(x))
+ x = F.relu(self.enc3(x))
+ x = F.relu(self.enc4(x))
+ x = F.relu(self.enc5(x))
+
+ x = F.relu(self.dec1(x))
+ x = F.relu(self.dec2(x))
+ x = F.relu(self.dec3(x))
+ x = F.relu(self.dec4(x))
+ x = F.relu(self.dec5(x))
+ return x
+
+net = Autoencoder()
+
+
+From the Autoencoder class, we can see that 784 features are passed through a series of transformations and are converted to 16 features.
+
+The transformations (in_features to out_features) for each layer are:
+
+784 to 256
+256 to 128
+128 to 64
+64 to 32
+32 to 16
+
+
+Why do we perform this sequence of operations? For example, why don't we perform the following sequence of operations instead?
+
+784 to 256
+256 to 128
+
+
+Or maybe
+
+784 to 512
+512 to 256
+256 to 128
+
+
+Or maybe just encode in two layers:
+
+784 to 16
+
+
+Does the reduction of the dimensions over multiple layers (instead of a single layer) allow more details to be stored within the final representation? For example, if we used only the transformation $784 \rightarrow 16$, may this cause some detail not to be encoded? If so, why is this the case?
+"
+"['deep-learning', 'convolutional-neural-networks']"," Title: What is the advantage of using Google's Coral over Nvidia's Xavier?Body: I was reading about the possibility of using Google's Coral for deep learning-based object detection and image classification. I heard it has a good speed in terms of frames/sec.
+
+I also read that Google's Coral is only compatible with quantized models. What does this mean? How will this affect the performance of object detection or classification in terms of accuracy and speed?
+
+What is the advantage of using Google's Coral over Nvidia's Xavier?
+"
+"['neural-networks', 'machine-learning', 'gradient-descent', 'activation-functions']"," Title: If the output of a model is a ridge function, what should the activation functions at all the nodes be?Body: I have the following assignment.
+
+
+
+I can't understand the b part of this question in my assignment. I have completed the 1st part and understand the maths behind it, but the 2nd part has me stumped.
+
+I looked up ridge functions and they basically map real vectors to a single real value, from what I understood. For that reason, I considered that the activation function has to be one that ranges over the real numbers, but that still doesn't clear my doubts.
+
+I don't need a full answer; just an explanation of the question will be very helpful. Here's some text from the book I'm referring to (Russell and Norvig), though I couldn't really grasp how this would help me choose an activation function.
+
+
+ Before delving into learning rules, let us look at the ways in which networks generate complicated functions. First, remember that each unit in a sigmoid network represents a soft threshold in its input space, as shown in Figure 18.17(c) (page 726). With one hidden layer and one output layer, as in Figure 18.20(b), each output unit computes a soft-thresholded linear combination of several such functions. For example, by adding two opposite-facing soft threshold functions and thresholding the result, we can obtain a “ridge” function as shown in Figure 18.23(a). Combining two such ridges at right angles to each other (i.e., combining the outputs from four hidden units), we obtain a “bump” as shown in Figure 18.23(b).
+
+"
+"['reinforcement-learning', 'ai-design', 'game-ai', 'q-learning', 'gym']"," Title: How can I model and solve the Knight Tour problem with reinforcement learning?Body: I've read about the Knight Tour problem. And I wanted to try to solve it with a reinforcement learning algorithm with OpenAI's gym.
+
+So, I want to make a bot that can move on the chess board like the knight. It is given a reward each time it moves without leaving the board or stepping on an already visited square, so it gets a better total reward the longer it survives.
+
+Or is there a better approach to this problem? Also, I would like to display the best knight in each generation.
+
+I'm not very advanced at reinforcement learning (I'm still studying it), but this project really caught my attention. I know machine learning and deep learning well.
+
+Do I need to implement a new OpenAI gym environment from scratch, or is there a better approach?
+"
+"['machine-learning', 'feature-selection', 'swarm-intelligence', 'cat-swarm-optimization']"," Title: How can Cat Swarm Algorithm (CSO) used for feature selection?Body: Cat swarm optimization (CSO) is a novel metaheuristic for evolutionary optimization algorithms based on swarm intelligence which proposed in 2006. See Feature Selection of Support Vector Machine Based on Harmonious Cat Swarm Optimization.
+
+According to Modified Cat Swarm Optimization Algorithm for Feature Selection of Support Vector Machines
+
+
+ CSO imitates the behavior of cats through two sub-modes: seeking and tracing. Previous studies have indicated that CSO algorithms outperform other well-known meta-heuristics, such as genetic algorithms and particle swarm optimization. This study presents a modified version of cat swarm optimization (MCSO), capable of improving search efficiency within the problem space. The basic CSO algorithm was integrated with a local search procedure as well as the feature selection of support vector machines (SVMs).
+
+
+Can someone explain how exactly Cat Swarm Algorithm (CSO) is used for feature selection?
+"
+"['neural-networks', 'word-embedding', 'bert', 'text-summarization', 'pretrained-models']"," Title: How to add a pretrained model to my layers to get embeddings?Body: I want to use a pretrained model found in [BERT Embeddings] https://github.com/UKPLab/sentence-transformers and I want to add a layer to get the sentence embeddings from the model and pass on to the next layer. How do I approach this?
+
+The inputs would be an array of documents and each document containing an array of sentences.
+
+The input to the model itself is a list of sentences where it will return a list of embeddings.
+
+This is what I've tried but couldn't solve the errors:
+
+def get_embeddings(input_data):
+
+ input_embed = []
+ for doc in input_data:
+ doc = tf.unstack(doc)
+ doc_arr = asarray(doc)
+ doc = [el.decode('UTF-8') for el in doc_arr]
+ doc = list(doc)
+ assert(type(doc)== list)
+
+ new_doc = []
+ for sent in doc:
+ sent = tf.unstack(sent)
+ new_doc.append(str(sent))
+ assert(type(sent)== str)
+
+ embedding= model.encode(new_doc) # Accepts lists of strings to return BERT sentence embeddings
+ input_embed.append(np.array(embedding))
+
+ return tf.convert_to_tensor(input_embed, dtype=float)
+
+
+sentences = tf.keras.layers.Input(shape=(3,5)) #test shape
+sent_embed = tf.keras.layers.Lambda(get_embeddings)
+
+
+x = sent_embed(sentences)
+
+
+
+"
+"['reinforcement-learning', 'ai-design', 'q-learning', 'open-ai']"," Title: How to add more than 1 agent in one generation with Q LearningBody: Sometimes the agent learns a bit slow and you want to have multiple agents in one generation. And at each episode you'll draw on the screen only the best of them or all of them. How is that possible?
+
+For clarification purposes, please watch this video on youtube at time 4:10.
+
+I need just a theoretical approach, I'll try the coding myself :).
+
+Thanks for any answer! I really do appreciate it! :)
+"
+"['machine-learning', 'reinforcement-learning', 'real-time']"," Title: How do I set up rewards to account for unmanned aerial vehicle crashes?Body: I am working on a project to implement a collision avoidance algorithm on a real unmanned aerial vehicle (UAV).
+
+I'm interested in understanding the process to set up a negative reward to account for scenarios wherein there is a UAV crash. This can be done very easily during the simulation (if the UAV touches any object, the episode stops giving a negative reward). In the real world, a UAV crash would usually entail it hitting a wall or an obstacle, which is difficult to model.
+
+My initial plan is to stop the RL episode and manually input a negative reward (to the algorithm) each time a crash occurs. Any improvements to this plan would be highly appreciated!
+"
+"['neural-networks', 'reference-request', 'feedforward-neural-networks']"," Title: What are the most common feedforward neural networks?Body: What are the most common feedforward neural networks? What kind of inputs do they receive? For example, do they receive binary numbers, real numbers, vectors, or matrics? Is there such a taxonomy?
+"
+"['reinforcement-learning', 'monte-carlo-methods', 'importance-sampling']"," Title: How can we compute the ratio between the distributions if we don't know one of the distributions?Body: Here is my understanding of importance sampling. If we have two distributions $p(x)$ and $q(x)$, where we have a way of sampling from $p(x)$ but not from $q(x)$, but we want to compute the expectation wrt $q(x)$, then we use importance sampling.
+
+The formula goes as follows:
+
+$$
+E_q[x] = E_p\Big[x\frac{q(x)}{p(x)}\Big]
+$$
+
+The only limitation is that we need a way to compute the ratio. Now, here is what I don't understand. Without knowing the density function $q(x)$, how can we compute the ratio $\frac{q(x)}{p(x)}$?
+
+Because if we know $q(x)$, then we can compute the expectation directly.
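+
+For what it's worth, here is a quick numerical illustration of the identity above, with two known, made-up densities ($p = \mathcal{N}(0,1)$ and $q = \mathcal{N}(1,1)$), purely to show the mechanics:
+
+import numpy as np
+
+rng = np.random.default_rng(0)
+x = rng.normal(0.0, 1.0, size=1_000_000)   # samples from p
+
+def pdf(x, mu):                            # unit-variance normal density
+    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)
+
+w = pdf(x, 1.0) / pdf(x, 0.0)              # q(x) / p(x)
+print(np.mean(x * w))                      # close to 1.0, i.e. E_q[x]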
+
+I am sure I am missing something here, but I am not sure what. Can someone help me understand this?
+"
+"['comparison', 'recurrent-neural-networks', 'long-short-term-memory', 'gated-recurrent-unit']"," Title: What's the difference between LSTM and GRU?Body: I have been reading about LSTMs and GRUs, which are recurrent neural networks (RNNs). The difference between the two is the number and specific type of gates that they have. The GRU has an update gate, which has a similar role to the role of the input and forget gates in the LSTM.
+Here's a diagram that illustrates both units (or RNNs).
+
+With respect to the vanilla RNN, the LSTM has more "knobs" or parameters. So, why do we make use of the GRU, when we clearly have more control over the neural network through the LSTM model?
+Here are two more specific questions.
+
+- When would one use Long Short-Term Memory (LSTM) over Gated Recurrent Units (GRU)?
+
+- What are the advantages/disadvantages of using LSTM over GRU?
+
+
+"
+"['recurrent-neural-networks', 'long-short-term-memory', 'text-classification']"," Title: My LSTM text classification model seems not learn anything in early epochsBody: I am trying to use LSTM to do text classification and monitor the training process with tensorboard. But it seems that this model doesn't learn anything in early epochs. Is it normal for LSTM networks?
+
+Here is the definition of model:
+
+class RNN(nn.Module):
+ """"""
+ RNN model for text classification
+ """"""
+ def __init__(self, vocab_size, num_class, emb_dim, emb_droprate, rnn_cell_hidden, rnn_cell_type, birnn, num_layers, rnn_droprate, sequence_len):
+ super().__init__()
+ self.vocab_size = vocab_size # vocab size
+ self.emb_dim = emb_dim # embedding dimension
+ self.emb_droprate = emb_droprate # embedding droprate
+ self.num_class = num_class # classes
+ self.rnn_cell_hidden = rnn_cell_hidden # hidden layer size
+ self.rnn_cell_type = rnn_cell_type # rnn cell type
+ self.birnn = birnn # wheather use bidirectional rnn
+ self.num_layers = num_layers # number of rnn layers
+ self.rnn_droprate = rnn_droprate # rnn dropout rate before fc
+ self.sequence_len = sequence_len # fix sequence length, so we dont need loop
+ pass
+
+ def build(self):
+ self.embedding = nn.Embedding(self.vocab_size, self.emb_dim)
+ self.emb_dropout = nn.Dropout(self.emb_droprate)
+ if self.rnn_cell_type == ""LSTM"":
+ self.rnn = nn.LSTM(input_size=self.emb_dim, hidden_size=self.rnn_cell_hidden, num_layers=self.num_layers, bidirectional=self.birnn, batch_first=True)
+ elif self.rnn_cell_type == ""GRU"":
+ self.rnn = nn.GRU(input_size=self.emb_dim, hidden_size=self.rnn_cell_hidden, num_layers=self.num_layers, bidirectional=self.birnn, batch_first=True)
+ else:
+ self.rnn = None
+ print(""unsupported rnn cell type, valid is [LSTM, GRU]"")
+ if self.birnn:
+ self.fc = nn.Linear(2 * self.rnn_cell_hidden, self.num_class)
+ else:
+ self.fc = nn.Linear(self.rnn_cell_hidden, self.num_class)
+
+ self.rnn_dropout = nn.Dropout(self.rnn_droprate)
+
+ def forward(self, input_):
+ batch_size = input_.shape[0]
+
+ x = self.embedding(input_)
+ x = self.emb_dropout(x)
+
+ if self.rnn_cell_type == ""LSTM"":
+ if self.birnn:
+ h_0 = torch.zeros(self.num_layers * 2, batch_size, self.rnn_cell_hidden, requires_grad=True).to(device)
+ c_0 = torch.zeros(self.num_layers * 2, batch_size, self.rnn_cell_hidden, requires_grad=True).to(device)
+ else:
+ h_0 = torch.zeros(self.num_layers, batch_size, self.rnn_cell_hidden, requires_grad=True).to(device)
+ c_0 = torch.zeros(self.num_layers, batch_size, self.rnn_cell_hidden, requires_grad=True).to(device)
+ output, (h_n, c_n) = self.rnn(x, (h_0, c_0))
+ elif self.rnn_cell_type == ""GRU"":
+ if self.birnn:
+ h_0 = torch.zeros(self.num_layers * 2, batch_size, self.rnn_cell_hidden, requires_grad=True).to(device)
+ else:
+ h_0 = torch.zeros(self.num_layers, batch_size, self.rnn_cell_hidden, requires_grad=True).to(device)
+ output, h_n = self.rnn(x, h_0)
+
+ if self.birnn:
+ x = h_n.view(self.num_layers, 2, batch_size, self.rnn_cell_hidden)
+ x = torch.cat((x[-1, 0, : , : ], x[-1, 1, : , : ]), dim = 1)
+ else:
+ x = h_n.view(self.num_layers, 1, batch_size, self.rnn_cell_hidden)
+ x = x[-1, 0, : , : ]
+
+ x = x.view(batch_size, 1, -1) # shape: [batch_size, 1, 2 or 1 * rnn_cell_hidden]
+ x = self.rnn_dropout(x)
+
+ x = self.fc(x)
+ x = x.view(-1, self.num_class) # shape: [batch_size, num_class]
+
+ return x
+
+
+Parameters of this model:
+
+
+- vocab size: 4805
+- number of classes: 27
+- embedding dimension: 300
+- embedding dropoutrate: 0.5
+- rnn cell type: LSTM
+- rnn cell hidden size: 1000
+- bidirectional rnn: False
+- number of lstm layers: 1
+- dropout rate at last lstm layer hidden: 0.5
+- padded sequence length: 64
+
+
+The Optim:
+
+criterion = nn.CrossEntropyLoss().to(device)
+optimizer = optim.Adam(model.parameters(), lr=args.lr, weight_decay=1e-6)
+
+
+The learning rate here is 0.001, and the batch size is 32.
+
+The tensorboard graph:
+
+
+
+
+
+It seems that this model starts learning after epoch 15. Is it normal?
+"
+"['convolutional-neural-networks', 'prediction', 'time-series', 'convolution', 'forecasting']"," Title: Is there a difference between using 1d conv layers and 2d conv layers with kernel with size of 1 along other than time dimension?Body: Let's assume I use convolutional networks for time-series prediction. Data I feed to the network have 1 channel depth, height of number of periods and number of features is the width, so the frame size is:
+[1, periods, features]. Batch size is not relevant here.
+
+Is there a difference between using 1d convolutions along the time (height) dimension and 2d convolutions that have a kernel size of, for example, (3, 1) or (5, 1), so that the larger number convolves along the time dimension and there is no convolution along the features dimension?
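+
+For concreteness, these are the two constructions being compared, in PyTorch (sizes are illustrative):
+
+import torch
+import torch.nn as nn
+
+periods, features = 64, 12                     # illustrative sizes
+x2d = torch.randn(1, 1, periods, features)     # (batch, 1, periods, features)
+x1d = x2d.squeeze(1).transpose(1, 2)           # (batch, features, periods) for Conv1d
+
+conv2d = nn.Conv2d(1, 8, kernel_size=(5, 1))   # convolves along the time dimension only
+conv1d = nn.Conv1d(features, 8, kernel_size=5) # features become channels, kernel slides along time
+print(conv2d(x2d).shape, conv1d(x1d).shape)    # (1, 8, 60, 12) and (1, 8, 60)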
+"
+"['machine-learning', 'reinforcement-learning', 'dqn']"," Title: Can deep reinforcement learning algorithms be deterministic in their reproducibility in results?Body: I ran a deep q learning algorithm (DQN) for $x$ number of epochs and got policy $\pi_1$.
+I reran the same script for the same $x$ number of epochs and got policy $\pi_2$. I expected $\pi_1$ and $\pi_2$ to be similar because I ran the same script. However, when computing the actions on the same test set, I realised the actions were very different.
+
+Is this supposed to be normal when training deep Q networks, or is there something that I am missing?
+
+I am using prioritised experience replay when training the model.
+"
+"['reinforcement-learning', 'proximal-policy-optimization', 'exploration-exploitation-tradeoff']"," Title: How can I increase the exploration in the Proximal Policy Optimation algorithm?Body: How can I increase the exploration in the Proximal Policy Optimation reinforcement learning algorithm? Is there a variable assigned for this purpose? I'm using the stable-baseline implementation: https://stable-baselines.readthedocs.io/en/master/modules/ppo2.html
+"
+"['neural-networks', 'comparison', 'autoencoders']"," Title: What are the main differences between sparse autoencoders and convolution autoencoders?Body: What are the main differences and similarities between sparse autoencoders and convolution autoencoders?
+
+When should one be preferred over the other? What are their applications?
+
+(References are welcome. Somehow I was not able to find any comparisons of these autoencoders although I looked in a few textbooks and searched for material online. I was able to find the descriptions of each autoencoder separately, but what I am interested in is the comparison.)
+"
+"['neural-networks', 'convolutional-neural-networks', 'deep-neural-networks']"," Title: How can I find the similar non-zero connections between different levels of sparsity of the same network?Body: I am pruning a neural network (CNN and Dense) and for different sparsity levels, I have different sub-networks. Say for sparsity levels of 20%, 40%, 60% and 80%, I have 4 different sub-networks.
+
+Now, I want to find the non-zero connections that they have in common. Any idea how to visualize this or compute this?
+
+I am using Python 3.7 and TensorFlow 2.0.
+
+After the convergence of a neural network following the random weight initialization, some weights/connections increase (magnitude), while other weights decrease. You can then prune the smallest magnitude weights. I want to compare the remaining weights for say two networks having the same level of sparsity of say 50%. The idea is to have an idea of which weights were pruned away and which weights/connections remain.
+"
+"['comparison', 'search', 'best-first-search', 'tree-search', 'graph-search']"," Title: What is a example showing that the tree-based variant for the greedy best-first search is incomplete?Body: I understand that a tree-based variant will have nodes repeatedly added to the frontier. How do I craft an example where a particular goal node is never found. Is this example valid.
+
+
+
+On the other hand, how do I explain that the graph-based version of the greedy best-first search is complete?
+"
+"['deep-learning', 'math', 'proofs', 'coq']"," Title: Can deep learning be used to help mathematical research?Body: I am currently learning about deep learning and artificial intelligence and exploring his possibilities, and, as a mathematician at heart, I am inquisitive about how it can be used to solve problems in mathematics.
+
+Seeing how well recurrent neural networks can understand human language, I suppose that they could also be used to follow some simple mathematical statements and maybe even come up with some proofs. I know that computer-assisted proofs are more and more frequent and that some software can now understand simple mathematical language and verify proofs (e.g. Coq). Still, I've never heard of deep learning applied to mathematical research.
+
+Can deep learning be used to help mathematical research? So, I am curious about whether systems like Coq could be combined with deep learning systems to help mathematical research. Are there some exciting results?
+"
+"['neural-networks', 'recurrent-neural-networks', 'long-short-term-memory', 'bert', 'text-generation']"," Title: Is the number of bidirectional LSTMs in seq2seq model equal to the maximum length of input text/characters?Body: I'm confused about this aspect of RNNs while trying to learn how seq2seq encoder-decoder works at https://machinelearningmastery.com/configure-encoder-decoder-model-neural-machine-translation/.
+
+It seems to me that the number of LSTMs in the encoder would have to be the same as the number of words in the text (if word embeddings are being used) or the number of characters in the text (if char embeddings are being used). For char embeddings, each embedding would correspond to 1 LSTM in 1 direction and 1 encoder hidden state.
+
+
+- Is this understanding correct?
+E.g. If we have another model that uses encoder-decoder for a different application (say text-to-speech synthesis described here https://ai.googleblog.com/2017/12/tacotron-2-generating-human-like-speech.html) that uses 256 LSTMs in each direction of the bidirectional encoder, does it mean the input to this encoder is limited to 256 characters of text?
+- Does the decoder output have to be the same length as the encoder input, or can it be different? If different, what factor determines what the decoder output length should be?
+
+"
+"['natural-language-processing', 'transformer']"," Title: Transformer encoding for regressionBody: I have a string of characters encoding a molecule. I want to regress some properties of those molecules. I tried using an LSTM that encodes all one hot encdoed characters, and then I take the last hidden state fed into a linear layer to regress the property. This works fine, but I wanted to see if transformers can do better, since they are so good in NLP.
+
+However, I am not quite sure about two things:
+
+
+- Pytorch transformer encoder layer has two masking parameters: ""src_mask"" and ""src_key_padding_mask"". The model needs the whole string to do the regression, so I don't think I need ""src_mask"", but I do padding with 0 for parallel processing; is that what ""src_key_padding_mask"" is for?
+- What output from the transformer do I feed into the linear regression layer? For the LSTM I took the last hidden output. For the transformer, since everything is processed in parallel, I feel like I should rather use the sum of all outputs, but it doesn't work well. Instead, using only the last state works better, which seems arbitrary to me. Any ideas on how to properly do this, e.g. how do sentiment analysis models do it?
+
+"
+"['reinforcement-learning', 'convergence', 'temporal-difference-methods']"," Title: What are the conditions of convergence of temporal-difference learning?Body: In reinforcement learning, temporal difference seem to update the value function in each new iteration of experience absorbed from the environment.
+
+What would be the conditions for temporal-difference learning to converge in the end? How is it guaranteed to converge?
+
+Any intuitive understanding of those conditions that lead to the convergence?
+"
+"['neural-networks', 'deep-learning', 'natural-language-processing', 'attention']"," Title: What is the intuition behind the attention mechanism?Body:
+ Attention idea is one of the most influential ideas in deep learning. The main idea behind attention technique is that it allows the decoder to ""look back” at the complete input and extracts significant information that is useful in decoding.
+
+
+I am really having trouble understanding the intuition behind the attention mechanism. I mean how the mechanism works and how to configure.
+
+In simple words (and maybe with an example), what is the intuition behind the attention mechanism?
+
+What are some applications, advantages & disadvantages of attention mechanism?
+"
+"['machine-learning', 'python', 'training', 'models']"," Title: Training a model for text document transformation?Body: I have a bunch of text documents, split into source documents and transformed documents. These text documents have multiple lines and are edited at specific locations, in a specific way.
+
+I make use of the difflib package available in Python to identify the associated transformation for each source document and the resulting transformed document.
+
+I wish to train and implement a ML technique which will help in identifying and automating this conversion activity.
+
+
+
+Here is a sample of what the transformation result looks like:
+(NOTE: This example contains only one line, but my actual use case contains several lines)
+
+import difflib
+
+Initial = 'This is my initial state'
+Final = 'This is what I transform into'
+
+diff = difflib.SequenceMatcher(None, Initial, Final)
+
+for tag,i1,i2,j1,j2 in diff.get_opcodes():
+ print('{:7} Initial[{:}:{:}] --> Final[{:}:{:}] {:} --> {:}'.format(tag,i1,i2,j1,j2,Initial[i1:i2],Final[j1:j2]))
+
+#Result:
+equal Initial[0:8] --> Final[0:8] This is --> This is
+insert Initial[8:8] --> Final[8:23] --> what I transfor
+equal Initial[8:9] --> Final[23:24] m --> m
+delete Initial[9:10] --> Final[24:24] y -->
+equal Initial[10:13] --> Final[24:27] in --> in
+delete Initial[13:14] --> Final[27:27] i -->
+equal Initial[14:15] --> Final[27:28] t --> t
+replace Initial[15:24] --> Final[28:29] ial state --> o
+
+
+This helps in outlining the transformation steps to transform Initial to Final. I wish to make use of ML to identify the common patterns in such transformations across a large collection of text documents and train a model that I can use in the future.
+
+
+
+What will be the best method to approach this problem? My difficulty is not in identifying and classifying text data, but in identifying the nature of the editing and transformation of the strings.
+"
+"['deep-learning', 'reinforcement-learning', 'policy-gradients', 'actor-critic-methods']"," Title: Learning policy where action involves discrete and continuous parametersBody: Typically it seems like reinforcement learning involves learning over either a discrete or a continuous action space. An example might be choosing from a set of pre-defined game actions in Gym Retro or learning the right engine force to apply in Continuous Mountain Car; some popular approaches for these problems are deep Q-learning for the former and actor-critic methods for the latter.
+
+What about in the case where a single action involves picking both a discrete and a continuous parameter? For example, when choosing the type (discrete), pixel grid location (discrete), and angular orientation (continuous) of a shape from a given set to place on a grid and optimize for some reward.
+Is there a well-established approach for learning a policy to make both types of decisions at once?
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'comparison', 'feedforward-neural-networks']"," Title: Why do we need convolutional neural networks instead of feed-forward neural networks?Body: Why do we need convolutional neural networks instead of feed-forward neural networks?
+
+What is the significance of a CNN? Even a feed-forward neural network will be able to solve the image classification problem, so why is the CNN needed?
+"
+"['reinforcement-learning', 'q-learning', 'deep-rl']"," Title: Do we have two Q-learning update formulas?Body: I have seen two deep Q-learning formulas:
+
+$$Q\left(S_{t}, A_{t}\right) \leftarrow Q\left(S_{t}, A_{t}\right)+\alpha\left[R_{t+1}+\gamma \max _{a} Q\left(S_{t+1}, a\right)-Q\left(S_{t}, A_{t}\right)\right]$$
+
+and this one
+
+$$Q(s, a)=r(s, a)+\gamma \max _{a} Q\left(s^{\prime}, a\right)$$
+
+Which one is correct?
+"
+"['reinforcement-learning', 'off-policy-methods', 'importance-sampling']"," Title: What is the intuition behind importance sampling for off-policy value evaluation?Body: The technique for off-policy value evaluation comes from importance sampling, which states that
+
+$$E_{x \sim q}[f(x)] \approx \frac{1}{n}\sum_{i=1}^n f(x_i)\frac{q(x_i)}{p(x_i)},$$ where $x_i$ is sampled from $p$.
+
+In the application of importance sampling to RL, is the expectation of the function $f$ the value of the trajectories, where the samples $x$ are the trajectories themselves?
+
+Does the distribution $p$ represent the probability of sampling trajectories from the behavior policy, and the distribution $q$ the probability of sampling trajectories from the target policy?
+
+How would the trajectories from distribution $q$ be better than those from $p$? I know from the equation how it is better, but it is hard to understand intuitively why this could be so.
+"
+"['deep-learning', 'reinforcement-learning', 'q-learning']"," Title: Can we increase the speed of training a reinforcement learning algorithm?Body: I am new in reinforcement learning. I started reading the PyTorch's documentation about the cart pole control. Whenever an agent fails, they restart the environment.
+
+When I run the code, the time in the game is the same as time in real life. Can we train models quicker? Can we make the game faster so that model will be training faster?
+"
+"['reinforcement-learning', 'policy-gradients']"," Title: In vanilla policy gradient is the baseline lagging behind the policy?Body: Vanilla policy gradient algorithm (using baseline to reduce variance) acc to here (page 16)
+
+ 1. Initialize policy parameter $\theta$, baseline $b$
+ 2. for iteration=1, 2, . . . do
+ 3.     Collect a set of trajectories by executing the current policy
+ 4.     At each timestep in each trajectory, compute
+ 5.         the return $R_{t}= \sum_{t'=t}^{T-1}\gamma^{t'-t}r_{t'}$ and
+ 6.         the advantage estimate $\hat{A}_{t} = R_{t} - b(s_{t})$
+ 7.     Re-fit the baseline, by minimizing $\lVert b(s_{t}) - R_{t} \rVert^{2}$, summed over all trajectories and timesteps
+ 8.     Update the policy, using a policy gradient estimate $\hat{g}$, which is a sum of terms $\nabla_{\theta}\log\pi(a_{t} \mid s_{t},\theta)\hat{A}_{t}$
+ 9. end for
+
+- At line 6, the advantage estimate is computed by subtracting the baseline from the return
+- At line 7, the baseline is re-fit by minimizing the mean squared error between the state-dependent baseline and the return
+- At line 8, we update the policy using the advantage estimate from line 6
+
+
+So is the baseline (re-fit at line 7) expected to be used in the next iteration, when our policy has already changed?
+
+To compute the advantage, we subtract the state value $V(s_{t})$ from the action value $Q(s_{t},a_{t})$, both under the same policy, so why is the old baseline used here for advantage estimation?
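+
+The ordering I am describing, as a small self-contained sketch (the linear baseline and the dummy data are just placeholders for illustration, not part of the original algorithm):
+
+import numpy as np
+
+rng = np.random.default_rng(0)
+baseline_w = np.zeros(4)                       # state-dependent baseline b(s) = w . s
+
+for iteration in range(3):
+    states  = rng.normal(size=(64, 4))         # stand-in for states from collected trajectories (line 3)
+    returns = states @ np.array([1., 2., 3., 4.]) + rng.normal(size=64)   # stand-in for the returns R_t (line 5)
+
+    advantages = returns - states @ baseline_w                      # line 6: uses the baseline from the previous iteration
+    baseline_w, *_ = np.linalg.lstsq(states, returns, rcond=None)   # line 7: baseline is only re-fit afterwards
+    # line 8 would now update the policy with these advantages (omitted here)
+    print(iteration, np.abs(advantages).mean())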
+"
+"['neural-networks', 'reinforcement-learning', 'policy-gradients', 'pytorch']"," Title: Policy Gradient on Tic-Tac-Toe not workingBody: I wanted to implement the Policy Gradient on Tic-Tac-Toe.
+I tried to apply the code that worked for environments like CartPole-v0 to my Tic-Tac-Toe game. But it is not learning. There are no errors; the results are just bad.
+
+RandomPlayer (""Player X"") vs PolicyAgent (""Player O"")
+
+
+
+So one can see that the Policy Agent is not learning after 500 battles. Each battle consists of 100 games against the random player, so 500 * 100 games in total.
+
+Can someone tell me the problem or the bug in my code? I cannot figure it out. Or what do I have to improve? It would be great.
+
+Here is also a project that does the same thing I want to do, but successfully:
+https://github.com/fcarsten/tic-tac-toe/blob/master/tic_tac_toe/DirectPolicyAgent.py
+I could not figure out what I am doing differently.
+
+Code:
+
+Packages:
+
+import torch
+import torch as T
+import torch.nn as nn
+import torch.nn.functional as F
+import torch.optim as optim
+
+import numpy as np
+import gym
+from gym import wrappers
+
+
+Neural Net:
+
+class PolicyNetwork(nn.Module):
+ def __init__(self, lr, input_dims, fc1_dims, fc2_dims, n_actions):
+ super(PolicyNetwork, self).__init__()
+ self.input_dims = input_dims
+ self.lr = lr
+ self.fc1_dims = fc1_dims
+ self.fc2_dims = fc2_dims
+ self.n_actions = n_actions
+
+ self.fc1 = nn.Linear(self.input_dims, self.fc1_dims)
+ self.fc2 = nn.Linear(self.fc1_dims, self.fc2_dims)
+ self.fc3 = nn.Linear(self.fc2_dims, self.n_actions)
+
+ self.optimizer = optim.Adam(self.parameters(), lr=lr)
+
+ def forward(self, observation):
+ state = T.Tensor(observation)
+ x = F.relu(self.fc1(state))
+ x = F.relu(self.fc2(x))
+ x = self.fc3(x)
+ return x
+
+
+Policy Agent:
+
+class PolicyAgent:
+ def __init__(self, player_name):
+ self.name = player_name
+ self.value = PLAYER[self.name]
+
+ def board_to_input(self, board):
+ input_ = np.array([0] * 27)
+ for i, val in enumerate(board):
+ if val == self.value:
+ input_[i] = 1
+ if val == self.value * -1:
+ input_[i+9] = 1
+ if val == 0:
+ input_[i+18] = 1
+ return np.reshape(input_, (1,-1))
+
+
+ def start(self, learning_rate=0.001, gamma=0.1):
+ self.lr = learning_rate
+ self.gamma = gamma
+ self.all_moves = list(range(0,9))
+ self.policy = PolicyNetwork(self.lr, 27, 243, 91, 9)
+ self.reward_memory = []
+ self.action_memory = []
+
+ def turn(self, board, availableMoves):
+ state = self.board_to_input(board.copy())
+ prob = F.softmax(self.policy.forward(state))
+ action_probs = torch.distributions.categorical.Categorical(prob)
+ action = action_probs.sample()
+
+ while action.item() not in availableMoves:
+ state = self.board_to_input(board.copy())
+ prob = F.softmax(self.policy.forward(state))
+ action_probs = torch.distributions.categorical.Categorical(prob)
+ action = action_probs.sample()
+
+ log_probs = action_probs.log_prob(action)
+ self.action_memory.append(log_probs)
+
+ self.reward_memory.append(0)
+ return action.item()
+
+ def learn(self, result):
+ if result == 0:
+ reward = 0.5
+ elif result == self.value:
+ reward = 1.0
+ else:
+ reward = 0
+
+ self.reward_memory.append(reward)
+ #print(self.reward_memory)
+
+ self.policy.optimizer.zero_grad()
+ #G = np.zeros_like(self.action_memory, dtype=np.float64)
+ G = np.zeros_like(self.reward_memory, dtype=np.float64)
+
+
+ #running_add = reward
+ #for t in reversed(range(0, len(self.action_memory))):
+ # G[t] = running_add
+ # running_add = running_add * self.gamma
+
+ #'''
+ running_add = 0
+ for t in reversed(range(0, len(self.reward_memory))):
+ if self.reward_memory[t] != 0:
+ running_add = 0
+ running_add = running_add * self.gamma + self.reward_memory[t]
+ G[t] = running_add
+ for t in range(len(self.reward_memory)):
+ G_sum = 0
+ discount = 1
+ for k in range(t, len(self.reward_memory)):
+ G_sum += self.reward_memory[k] * discount
+ discount *= self.gamma
+ G[t] = G_sum
+ mean = np.mean(G)
+ std = np.std(G) if np.std(G) > 0 else 1
+ G = (G-mean)/std
+ #'''
+
+ G = T.tensor(G, dtype=T.float)
+
+ loss = 0
+ for g, logprob in zip(G, self.action_memory):
+ loss += -g * logprob
+
+ loss.backward()
+ self.policy.optimizer.step()
+
+ self.reward_memory = []
+ self.action_memory = []
+
+"
+"['reinforcement-learning', 'q-learning', 'convergence']"," Title: Convergence of a delayed policy update Q-learningBody: I thought about an algorithm that twists the standard Q-learning slightly, but I am not sure whether convergence to the optimal Q-value could be guaranteed.
+
+The algorithm starts with an initial policy.
+Within each episode, the algorithm conducts policy evaluation and does NOT update the policy.
+Once the episode is done, the policy is updated using the greedy policy based on the current learnt Q-values.
+The process then repeats. I attached the algorithm as a picture.
+
+Just to emphasize: the policy being updated does not change within an episode. The policy at each state is updated AFTER one episode is done, using the Q-table.
+
+Has anyone seen this kind of Q-learning before? If so, could you please kindly guide me to some resources regarding the convergence? Thank you!
+
+
+"
+"['neural-networks', 'data-preprocessing']"," Title: How to train a neural network with a data set that in which the target is a mix of 0-1 label and numeric real value label?Body: I am running into an issue in which the the target (label collums) of my dataset contain a mixture of binary label (yes/no) and some numeric value label.
+
+
+
+The values of these numeric labels (the resource 1 and resource 2 columns) vary over a large range. Sometimes a value can be around 0.389, but sometimes it can be around 0.389 x 10^-4 or so.
+
+My goal is to predict the binary decision and the amount of resources allocated to a new user who has input feature 1 (numeric) and input feature 2 (numeric).
+
+My initial thought is that the output neuron corresponding to the 0-1 decision would use a logistic (sigmoid) activation function. But for the neurons corresponding to the resources, I am not quite sure.
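+
+A rough sketch of the kind of network structure I have in mind (this is an assumption on my part, not something I have validated): a shared trunk, one sigmoid head for the yes/no decision trained with binary cross-entropy, and one linear head for the (possibly log-scaled) resource amounts trained with MSE.
+
+import torch
+import torch.nn as nn
+
+class TwoHeadNet(nn.Module):
+    def __init__(self):
+        super().__init__()
+        self.trunk = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU())
+        self.decision_head = nn.Linear(32, 1)      # probability of the 0-1 decision
+        self.resource_head = nn.Linear(32, 2)      # two resource amounts (e.g. predicted in log-space)
+
+    def forward(self, x):
+        h = self.trunk(x)
+        return torch.sigmoid(self.decision_head(h)), self.resource_head(h)
+
+net = TwoHeadNet()
+p_decision, resources = net(torch.randn(8, 2))     # 8 users, 2 input features each
+# the total loss would combine nn.BCELoss()(p_decision, y_decision) and nn.MSELoss()(resources, y_resources)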
+
+What would be the appropriate way to tackle such a situation, in terms of network structure or data pre-processing strategy?
+
+Thank you for your enthusiasm !
+"
+"['reinforcement-learning', 'implementation', 'ddpg', 'hyper-parameters']"," Title: What made your DDPG implementation on your environment work?Body: I am working on scheduling problem that has inherent randomness. The dimensions of action and state spaces are 1 and 5 respectively.
+
+I am using DDPG, but it seems extremely unstable, and so far it isn't showing much learning. I've tried to
+
+
+- adjust the learning rate,
+- clip the gradients,
+- change the size of the replay buffer,
+- different neural net architectures, using SGD and Adam,
+- change the $\tau$ for the soft-update.
+
+
+So, I'd like to know what people's experience is with this algorithm, for the environments where it was tested in the paper but also for other environments. What hyperparameter values worked for you? Or what did you do? How cumbersome was the fine-tuning?
+
+I don't think my implementation is incorrect, because I pretty much replicated this, and every other implementation I found did exactly the same.
+
+(Also, I am not sure this is necessarily the best website to post this kind of question, but I decided to give it a shot.)
+"
+"['neural-networks', 'recurrent-neural-networks', 'long-short-term-memory', 'backpropagation']"," Title: How does backpropagation work in LSTMs?Body: After reading a lot of articles (for instance, this one Understanding LSTM Networks), I know that the long short-term memory (LSTM) is an artificial recurrent neural network (RNN) architecture used in the field of deep learning.
+
+How does backpropagation work in the specific case of LSTMs?
+"
+"['neural-networks', 'activation-functions']"," Title: Why not replacing the simple linear functions that neurons compute with more complex functions?Body: In a neural network, a neuron typically computes a linear function $f(x) = w*x$, where $w$ is the weight and $x$ is the input.
+
+Why not replace the linear function with more complex functions, such as $f(x,w,a,b,c) = w*(x + b)^a + c$?
+
+This would introduce much more diversity into neural networks.
+
+Does this have a name? Has this been used?
+"
+"['reference-request', 'agi', 'aixi', 'godel-machine']"," Title: Are there other mathematical frameworks of artificial general intelligence apart from AIXI?Body: AIXI is a mathematical framework for artificial general intelligence developed by Marcus Hutter since the year 2000. It's based on many concepts, such as reinforcement learning, Bayesian statistics, Occam's razor, or Solomonoff induction. The blog post What is AIXI? — An Introduction to General Reinforcement Learning provides an accessible overview of the topic for those of you not familiar with it.
+
+Are there any other mathematical frameworks of artificial general intelligence apart from AIXI?
+
+I am aware of projects such as OpenCog, but that's not really a mathematical framework, but more a cognitive science framework.
+"
+"['agi', 'research', 'academia']"," Title: Are there any conferences dedicated to artificial general intelligence?Body: Similarly to What are the scientific journals dedicated to artificial general intelligence?, are there any conferences dedicated to artificial general intelligence?
+"
+"['reinforcement-learning', 'comparison', 'multi-agent-systems']"," Title: What is the relation between multi-agent learning and reinforcement learning?Body: What is the relation between multi-agent learning and reinforcement learning?
+
+Is one a sub-field of the other? For instance, would it make sense to state that your research interests are multi-agent learning and reinforcement learning, or would that be weird, as one includes most of the topics of the other?
+"
+"['reinforcement-learning', 'definitions', 'markov-decision-process', 'policies']"," Title: Why is the policy not a part of the MDP definition?Body: I'm reading an article on reinforcement learning, and I don't understand why the agent's policy $\pi$ is not part of definition of Markov Decision process(MDP):
+
+
+Bu, Lucian, Robert Babu, and Bart De Schutter. "A comprehensive survey of multiagent reinforcement learning." IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 38.2 (2008): 156-172.
+
+My question is:
+
+Why is the policy not a part of the MDP definition?
+
+"
+"['computer-vision', 'image-recognition', 'object-detection', 'object-recognition', 'algorithm-request']"," Title: Detect object in video and augment another video on top of itBody: I'm trying to detect an object in a video (with slight camera movement), and then augment another video on top of it. What is the simplest approach to do that?
+
+For instance, let's assume I have this simple video of a couch HERE. Now I want to augment the right cushion with a human or a dog. The dog or human is itself a video (let's assume transparency is not an issue).
+
+What's the simplest approach to do that?
+
+
+"
+"['deep-learning', 'reinforcement-learning', 'dqn']"," Title: How would researchers determine the best deep learning model if every run of the code yields different results?Body: There are many factors that cause the results of ML models to be different for every run of the same piece of code. One factor could be different initialization of weights in the neural network.
+
+Since results might be stochastic, how would researchers know what their best-performing model is? I know that a seed can be set to incorporate more determinism into the training. However, couldn't there be other pseudo-random sequences that produce slightly better results?
+"
+"['q-learning', 'value-iteration']"," Title: Is the PyTorch official tutorial really about Q-learning?Body: I read Q-learning algorithm and also I know value iteration (when you update action values). I think the PyTorch example is value iteration rather than Q-learning.
+
+Here is the link:
+https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html
+"
+"['natural-language-processing', 'bert']"," Title: Two questions about the architecture of Google Bert model (in particular about parameters)Body: I'm looking for someone who can help me clarify a few details regarding the architecture of Bert model. Those details are necessary for me to come with a full understanding of Bert model, so your help would be really helpful. Here are the questions:
+
+
+- Does the self-attention layer of Bert model have parameters? Do the embeddings of words change ONLY according to the actual embeddings of other words when the sentence is passed through the self-attention layer?
+- Are the parameters of the embedding layer of the model (the layer which transforms the sequence of indexes passed as input into a sequence of embeddings of size=size of the model) trainable or not?
+
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'deep-rl']"," Title: Handle non-existing states in q-learningBody: I am using Q-learning to solve an engineering problem. The objective is to generate a Q-table associating state to Q-values.
+
+I created a state vector DS = [s1, s2, ..., sN] containing all ""desired"" states. So the Q-table has the form Q-table = [DS, Q-values].
+
+On the other hand, my agent follows a trajectory. Playing action a at state s (which is a point of the trajectory) leads the agent to another state s' (another point of the trajectory). However, I don't have the s' state in the initial desired states vector DS.
+
+One solution is to add new states to the DS vector while the Q-learning algorithm is running, but I do not want to add new states.
+
+Any other ideas on how to handle this problem?
+"
+"['models', 'linear-regression', 'logistic-regression']"," Title: Why is the hypothesis function $h_{\theta}(x)$ equivalent to $E[y | x; \theta]$ in generalised linear models?Body: Reading through the CS229 lecture notes on generalised linear models, I came across the idea that a linear regression problem can be modelled as a Gaussian distribution, which is a form of the exponential family. The notes state that $h_{\theta}(x)$ is equal to $E[y | x; \theta]$. However, how can $h_{\theta}(x)$ be equal to the expectation of $y$ given input $x$ and $\theta$, since the expectation would require a sort of an averaging to take place?
+
+Given x, our goal is to predict the expected value of $T(y)$ given $x$. In most of our examples, we will have $T(y) = y$, so this means we would like the prediction $h(x)$ output by our learned hypothesis h to satisfy $h(x) = E[y|x]$.
+To show that ordinary least squares is a special case of the GLM family of models, consider the setting where the target variable y (also called the response variable in GLM terminology) is continuous, and we model the conditional distribution of y given x as a Gaussian $N(\mu,\sigma^2)$. (Here, $\mu$ may depend on $x$.) So, we let the ExponentialFamily($\eta$) distribution above be the Gaussian distribution. As we saw previously, in the formulation of the Gaussian as an exponential family distribution, we had $\mu = \eta$. So, we have
+$$h_{\theta}(x) = E[y|x; \theta] = \mu = \eta = \theta^Tx.$$
+
+EDIT
+Upon reading other sources, $y_i \sim N(\mu_i, \sigma^2)$, meaning that each individual output has its own normal distribution with mean $\mu_i$, and $h_{\theta}(x_i)$ is set as the mean of the normal distribution for $y_i$. In that case, it makes sense for the hypothesis to be assigned the expectation.
+"
+"['reinforcement-learning', 'markov-decision-process', 'proofs', 'reward-functions']"," Title: How do I convert an MDP with the reward function in the form $R(s,a,s')$ to and an MDP with a reward function in the form $R(s,a)$?Body: The AIMA book has an exercise about showing that an MDP with rewards of the form $r(s, a, s')$ can be converted to an MDP with rewards $r(s, a)$, and to an MDP with rewards $r(s)$ with equivalent optimal policies.
+In the case of converting to $r(s)$ I see the need to include a post-state, as the author's solution suggests. However, my immediate approach to transform from $r(s,a,s')$ to $r(s,a)$ was to simply take the expectation of $r(s,a,s')$ with respect to s' (*). That is:
+$$ r(s,a) = \sum_{s'} r(s,a,s') \cdot p(s'|s,a) $$
+The authors however suggest a pre-state transformation, similar to the post-state one. I believe that the expectation-based method is much more elegant and shows a different kind of reasoning that complements the introduction of artificial states. However, another resource I found also talks about pre-states.
+Is there any flaw in my reasoning that prevents taking the expectation of the reward and allow a much simpler transformation? I would be inclined to say no since the accepted answer here seems to support this. This answer mentions Sutton and Barto's book, by the way, which also seems to be fine with taking the expectation of $r(s, a, s')$.
+This is the kind of existential question that bugs me from time to time and I wanted to get some confirmation.
+(*) Of course, that doesn't work in the $r(s, a)$ to $r(s)$ case, as we do not have a probability distribution over the actions (that would be a policy, in fact, and that's what we are after).
+"
+"['machine-learning', 'forecasting']"," Title: How to make a multivariate forecasting if one of features becomes known for the future with some confidence level, e.g. weather forecast dataBody: Let's assume that we make forecasting of another metric partially based on forecasts of the weather forecast, e.g. of temperature, pressure, then we can potentially obtain those forecasts from one of public APIs and have some information about future values of these features and do more precise prediction of another parameter taken as a label.
+
+What is the approach to use in this case if one or more of the features in multivariate forecasting have some forecasted values for the predicted horizon? It looks that in this case not only values from historical data can be used but also predicted values, though not clear how to organize the model in this case, e.g. it can be multivariate LSTM.
+"
+['tensorflow']," Title: How to use one-hot encoding for multiple columns (multi-class) with varying number of labels in each class?Body: I am a beginner in TensorFlow as well as in AI. I am basically from Pharma background and learning AI from scratch.
+
+I have data with 5038 inputs (Float64) and 826 outputs (categorical, with multiple labels in each column). I have used one-hot encoding, but the neural network tackles only one output at a time.
+
+[1] How do I process all 826 outputs (which give 6689 one-hot outputs) at once in a neural network? Here is the code that I am using.
+[2] I am getting only 31% accuracy. I reach this accuracy already in the second epoch; from the second or third epoch onwards, the accuracy and other parameters stay constant. Is something wrong in my code?
+
+import tensorflow as tf
+
+dataset = df.values
+X = dataset[:,0:5038]/220
+Y_smile = dataset[:,5038 :5864]
+
+from sklearn.preprocessing import OneHotEncoder
+enc = OneHotEncoder(handle_unknown='ignore')
+enc.fit(Y_smile)
+OneHotEncoder(handle_unknown='ignore')
+enc.categories_
+Y = enc.transform(Y_smile).toarray()
+print(Y,Y.shape, Y.dtype)
+
+from sklearn.model_selection import train_test_split
+X_train, X_val_and_test, Y_train, Y_val_and_test = train_test_split(X, Y, test_size=0.3)
+X_val, X_test, Y_val, Y_test = train_test_split(X_val_and_test, Y_val_and_test, test_size=0.5)
+
+import numpy as np
+X_train = np.asarray(X_train).astype(np.float64)
+X_val = np.asarray(X_val).astype(np.float64)
+X_test = np.asarray(X_test).astype(np.float64)
+
+filepath = ""bestmodelweights.hdf5""
+checkpoint = [tf.keras.callbacks.ModelCheckpoint(filepath, monitor='val_accuracy', mode='auto', save_best_only=True, save_weights_only=True, verbose=1),
+ tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=5, verbose =1)]
+
+model = tf.keras.Sequential([
+ tf.keras.layers.Dense(1024, activation='relu', input_shape=(5038,)),
+ tf.keras.layers.Dense(524, activation='relu'),
+ tf.keras.layers.Dense(524, activation='relu'),
+ tf.keras.layers.Dense(1024, activation='relu'),
+ tf.keras.layers.Dense(1024, activation='relu'),
+ tf.keras.layers.Dense(1024, activation='relu'),
+ tf.keras.layers.Dense(1024, activation='relu'),
+ tf.keras.layers.Dense(1024, activation='relu'),
+ tf.keras.layers.Dense(1024, activation='relu'),
+ tf.keras.layers.Dense(1024, activation='relu'),
+ tf.keras.layers.Dense(1024, activation='relu'),
+ tf.keras.layers.Dense(1024, activation='relu'),
+ tf.keras.layers.Dense(1024, activation='relu'),
+ tf.keras.layers.Dense(1024, activation='relu'),
+ tf.keras.layers.Dense(6689, activation= 'softmax')])
+
+model.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.BinaryCrossentropy(from_logits = True), metrics=['accuracy'])
+
+hist = model.fit(X_train, Y_train, epochs= 200, callbacks=[checkpoint],validation_data=(X_val, Y_val))
+
+"
+"['convolutional-neural-networks', 'image-processing', 'image-generation']"," Title: Banding artifacts in CNNBody: I was working on a CNN for HDR image generation from LDR images. I used an encoder-decoder architecture and merged the input with the decoder output. However I'm getting some banding artifacts in the model prediction as shown.
+
+1)Input LDR image
+
+
+
+2) Ground Truth HDR
+
+
+
+3) Predicted output
+
+
+
+Notice the fine bands in the prediction. What might be causing these bands? Also, I have only trained for 20 epochs so far. Is the problem due to inadequate training?
+Here's my model:
+
+ class TestNet2(nn.Module):
+ def __init__(self):
+ super(TestNet2, self).__init__()
+
+ def enclayer(nIn, nOut, k, s, p, d=1):
+ return nn.Sequential(
+ nn.Conv2d(nIn, nOut, k, s, p, d), nn.SELU(inplace=True)
+ )
+ def declayer(nIn, nOut, k, s, p):
+ return nn.Sequential(
+ nn.ConvTranspose2d(nIn, nOut, k, s, p), nn.SELU(inplace=True)
+ )
+
+ self.encoder = nn.Sequential(
+ enclayer(3,64,5,3,1),
+ #nn.MaxPool2d(2, stride=2),
+ enclayer(64,128,5,3,1),
+ #nn.MaxPool2d(2, stride=2),
+ enclayer(128,256,5,3,1),
+ #nn.MaxPool2d(2, stride=2),
+ )
+ self.decoder = nn.Sequential(
+ declayer(256,128,5,3,1),
+ declayer(128,64,5,3,1),
+ declayer(64,3,5,3,1),
+
+
+ )
+
+ def forward(self, x):
+ xc=x
+ x = self.encoder(x)
+ x = self.decoder(x)
+ x = F.interpolate(
+ x, (512, 512), mode='bilinear', align_corners=False
+ )
+ x=x+xc
+ return x
+
+"
+"['machine-learning', 'data-science', 'support-vector-machine', 'accuracy']"," Title: Why is the accuracy of my model very low on a separate dataset from the training and test datasets?Body: I am working on stock price prediction project, I am using the support vector regression (SVR) model for it.
+
+As I am splitting my data into train and test, I am getting high accuracy while predicting test data after fitting the model.
+
+But now, when I try to use other data, which I separated out from the dataset before doing anything else, it gives me very bad results. Can anyone tell me what's happening?
+
+
+
+Looking forward to your response.
+"
+"['reinforcement-learning', 'off-policy-methods', 'importance-sampling']"," Title: Can weighted importance sampling be applied to off-policy evaluation for continuous state space MDPs?Body: Can weighted importance sampling (WIS) and importance sampling (IS) be applied to off-policy evaluation for continuous state spaces MDPs?
+
+Given that I have trajectories of $(s_t,a_t)$ pairs and the behavior policy distribution $\pi_b(a_t | s_t)$ can be approximated with a neural network.
+
+I came across a paper where they say that IS can be used for continuous states, whereas WIS cannot be used with function approximation. I am not sure why WIS cannot be applied to the continuous case whereas IS can. Both of these techniques seem similar.
+"
+"['open-ai', 'implementation']"," Title: Is there a general file type associated with AI projects?Body: This is a general question.
+
+Is there a general file type associated with AI projects?
+
+Photoshop = .psd
+Excel = csv
+Artificial Intelligence = ?
+
+"
+"['neural-networks', 'natural-language-processing', 'tensorflow', 'training', 'accuracy']"," Title: Low accuracy during training for text summarizationBody: I am trying to implement an extractive text summarization model. I am using keras and tensorflow. I have used bert sentence embeddings and the embeddings are fed into an LSTM layer and then to a Dense layer with sigmoid activation function. I have used adam optimizer and binary crossentropy as the loss function. The input to the model are the sentence embeddings.
+
+The training y labels is a 2d-array i.e [array_of_documents[array_of_biniary_labels_foreach_sentence]]
+
+The problem is that during training, I am getting the training accuracy of around 0.22 and loss 0.6.
+
+How can I improve my accuracy for the model?
+"
+"['machine-learning', 'explainable-ai']"," Title: Is it possible to create a fair machine learning system?Body: I started thinking about the fairness of machine learning models recently.
+Wiki page for Fairness_(machine_learning) defines fairness as:
+
+
+ In machine learning, a given algorithm is said to be fair, or to have
+ fairness if its results are independent of some variables we consider
+ to be sensitive and not related to it (f.e.: gender, ethnicity,
+ sexual orientation, etc.).
+
+
+UC Berkley CS 294 in turn defines fairness as:
+
+
+ understanding and mitigating discrimination based on sensitive
+ characteristics, such as, gender, race, religion, physical ability,
+ and sexual orientation
+
+
+Many other resources, like Google in the ML Fairness limit the fairness to these aforementioned categories and no other categories are considered.
+
+But fairness is a much broader concept than simply these few categories mentioned here; you could easily add a few more, like IQ, height, beauty, anything that could have a real impact on your credit score, school application, or job application. Some of these categories may not be common in existing datasets nowadays, but given the exponential growth of data, they will be soon, to the extent that we will have an abundance of data about every individual, with all their physical and mental categories mapped into the datasets.
+
+Then the question would be how to define fairness given all these categories present in the datasets. And will it even be possible to define fairness if all physical and mental dimensions are considered? It seems that, when we do so, all our weights in, say, the neural nets should be exactly the same, i.e., discriminating in no way or form towards or against any physical or mental category of a human being. That means that a machine learning system that is fair across all possible dimensions will have no way of distinguishing one human being from another, which would render these machine learning models useless.
+
+To wrap it up, while it does make perfect sense to remove bias towards or against any individual given categories like gender, ethnicity, sexual orientation, etc., the set is not closed, and with an increasing number of categories being added to this set, we will inevitably arrive at a point where no discrimination (in a statistical sense) would be possible.
+
+And that's why my question is: are fair machine learning models possible?
+Or perhaps the only possible fair machine learning models are those that arbitrarily include some categories but ignore other categories, which, of course, is far from being fair.
+"
+"['reinforcement-learning', 'training', 'deep-rl', 'a3c']"," Title: Why do we also need to normalize the action's values on continuous action spaces?Body: I was reading here tips & tricks for training in DRL and I noticed the following:
+
+
+
+ - always normalize your observation space when you can, i.e., when you know the boundaries
+ - normalize your action space and make it symmetric when continuous (cf potential issue below) A good practice is to rescale your actions to lie in [-1, 1]. This does not limit you as you can easily rescale the action inside the environment
+
+
+
+I am working on a discrete action space but it is quite difficult to normalize my states when I don't actually know the full range for each feature (only an estimation).
+
+How does this affect training? And, more specifically, why do we also need to normalize the action values on continuous action spaces?
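+
+For reference, the rescaling the quote refers to is, as far as I understand, just an affine map from the policy's symmetric range back to the environment's own bounds (the bounds below are made-up example values):
+
+def rescale_action(a, low, high):
+    # map a policy output a in [-1, 1] to the environment's action range [low, high]
+    return low + 0.5 * (a + 1.0) * (high - low)
+
+print(rescale_action(0.0, low=-2.0, high=2.0))   # 0.0, the midpoint of the range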
+"
+"['deep-learning', 'reinforcement-learning', 'deep-rl', 'sample-efficiency', 'sample-complexity']"," Title: Can you find another reason for sample inefficiency of model-free on-policy Deep Reinforcement Learning?Body: The following mindmap gives an overview of multiple reasons for sample inefficiency. The list is definitely not complete. Can you see another reason not mentioned so far?
+
+
+
+Some related links:
+
+
+"
+"['reinforcement-learning', 'policy-gradients', 'model-based-methods']"," Title: Using a model-based method to build an accurate day trading environment modelBody: There are several different angles we can classify Reinforcement Learning methods from. We can distinguish three main aspects :
+
+
+- Value-based and policy-based
+- On-policy and off-policy
+- Model-free and model-based
+
+
+Historically, due to their sample efficiency, model-based methods have been used in the robotics field and other industrial control settings. That happened due to the cost of the hardware and the physical limits on the number of samples that can be obtained from a real robot. Robots with a large number of degrees of freedom are not widely accessible, so RL researchers are more focused on computer games and other environments where samples are relatively cheap. However, the ideas from robotics are infiltrating, so, who knows, maybe the model-based methods will enter the focus quite soon.
+
+As we know, ""model"" means the model of the environment, which could have various forms, for example, providing us with a new state and reward from the current state and action. From what I have seen so far, all the methods (e.g. A3C, DQN, DDPG) put zero effort into predicting, understanding, or simulating the environment. What we are interested in is proper behavior (in terms of the final reward), specified directly (a policy) or indirectly (a value), given the observation. The source of observations and rewards is the environment itself, which in some cases could be very slow and inefficient.
+
+In a model-based approach, we're trying to learn the model of the environment to reduce the ""real environment"" dependency. If we have an accurate environment model, our agent can produce any number of trajectories that it needs, simply by using the model instead of executing the actions in the real world.
+
+Question:
+
+I am interested in a day trading environment. Is it possible to use a model-based approach to build an accurate day trading environment model?
+"
+"['reinforcement-learning', 'reference-request']"," Title: Is there any good source for when the pole actually starts all the way at the bottom, in the cartpole problem?Body: There are a lot of examples of balancing a pole (see image below) using reinforcement learning, but I find that almost all examples start close to the upright position.
+
+Is there any good source (or paper) for when the pole actually starts all the way at the bottom?
+
+
+"
+"['deep-learning', 'q-learning', 'deep-rl', 'dqn']"," Title: How and when should we update the Q-target in deep Q-learning?Body: I have recently watched David silver's course, and started implementing the deep Q-learning algorithm.
+
+I thought I should make a switch between the Q-target and Q-current directly (meaning, every parameter of Q-current goes to Q-target), but I found a repository on GitHub where that guy updates Q-target as follows:
+
+$$Q_{\text{target}} = \tau * Q_{\text{current}} + (1 - \tau)*Q_{\text{target}}$$.
+
+where $\tau$ is some number probably between 0 and 1.
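+
+A small self-contained PyTorch sketch of that soft (Polyak-style) update; the tiny networks and the value of $\tau$ here are just placeholders for illustration:
+
+import torch
+import torch.nn as nn
+
+q_current = nn.Linear(4, 2)
+q_target = nn.Linear(4, 2)
+q_target.load_state_dict(q_current.state_dict())    # start with identical weights
+tau = 0.005
+
+with torch.no_grad():
+    for p_t, p_c in zip(q_target.parameters(), q_current.parameters()):
+        p_t.mul_(1.0 - tau).add_(tau * p_c)          # Q_target <- tau*Q_current + (1 - tau)*Q_target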
+
+Is that update correct, or am I missing something?
+
+I thought that, after some number of iterations (e.g. 2000 iterations), we should update Q-target as: $Q_{\text{target}}=Q_{\text{current}}$.
+"
+"['reinforcement-learning', 'proofs', 'epsilon-greedy-policy', 'policy-improvement-theorem']"," Title: Is this proof of $\epsilon$-greedy policy improvement correct?Body: The following paragraph about $\epsilon$-greedy policies can be found at the end of page 100, under section 5.4, of the book "Reinforcement Learning: An Introduction" by Richard Sutton and Andrew Barto (second edition, 2018).
+
+but with probability $\varepsilon$ they instead select an action at random. That is, all nongreedy actions are given the minimal probability of selection, $\frac{\varepsilon}{|\mathcal{A}(s)|}$, and the remaining bulk of the probability, $1-\varepsilon+\frac{\varepsilon}{|\mathcal{A}(s)|}$, is given to the greedy action. The $\varepsilon$-greedy policies are
+
+So, the non-greedy actions are given the probability $\frac{\varepsilon}{|\mathcal{A}(s)|}$, and the greedy action is given the probability $1-\varepsilon+\frac{\varepsilon}{|\mathcal{A}(s)|}$. All clear up to this point.
+However, I have a doubt in the policy improvement theorem that is mentioned in page 101, under section 5.4. I have enclosed a copy of this proof for your convenience:
+$$
+\begin{aligned}
+q_{\pi}(s, \pi^{\prime}(s)) &=\sum_{a} \pi^{\prime}(a \mid s) q_{\pi}(s, a) \\
+&=\frac{\varepsilon}{|\mathcal{A}(s)|} \sum_{a} q_{\pi}(s, a)+(1-\varepsilon) \max _{a} q_{\pi}(s, a) \\
+& \geq \frac{\varepsilon}{|\mathcal{A}(s)|} \sum_{a} q_{\pi}(s, a)+(1-\varepsilon) \sum_{a} \frac{\pi(a \mid s)-\frac{\varepsilon}{|\mathcal{A}(s)|}}{1-\varepsilon} q_{\pi}(s, a)\\
+&=\frac{\varepsilon}{|\mathcal{A}(s)|} \sum_{a} q_{\pi}(s, a)-\frac{\varepsilon}{|\mathcal{A}(s)|} \sum_{a} q_{\pi}(s, a)+\sum_{a} \pi(a \mid s) q_{\pi}(s, a) \\
+&=v_{\pi}(s) .
+\end{aligned}
+$$
+My question is: shouldn't the greedy action be chosen with a probability of $1-\varepsilon+\frac{\varepsilon}{|\mathcal{A}(s)|}$?
+Otherwise, the weighting factors do not add up to 1, even though they are probability values. With this argument, the proof (with a slight modification) would be:
+$$
+\begin{aligned}
+q_{\pi}(s, \pi^{\prime}(s))
+&=\sum_{a} \pi^{\prime}(a \mid s) q_{\pi}(s, a) \\
+&=\frac{\varepsilon}{|\mathcal{A}(s)|} \sum_{a} q_{\pi}(s, a)+ \left( 1-\varepsilon+\frac{\varepsilon}{|\mathcal{A}(s)|} \right) \max _{a} q_{\pi}(s, a) \\
+& \geq \frac{\varepsilon}{|\mathcal{A}(s)|} \sum_{a} q_{\pi}(s, a)+\left(1-\varepsilon+\frac{\varepsilon}{|\mathcal{A}(s)|} \right) \sum_{a} \frac{\pi(a \mid s)-\frac{\varepsilon}{|\mathcal{A}(s)|}}{1-\varepsilon+\frac{\varepsilon}{|\mathcal{A}(s)|} } q_{\pi}(s, a)\\
+&=\frac{\varepsilon}{|\mathcal{A}(s)|} \sum_{a} q_{\pi}(s, a)-\frac{\varepsilon}{|\mathcal{A}(s)|} \sum_{a} q_{\pi}(s, a)+\sum_{a} \pi(a \mid s) q_{\pi}(s, a) \\
+&=v_{\pi}(s) .
+\end{aligned}
+$$
+Though the end result isn't changed, I just want to know what I am conceptually missing, in order to understand the proof that is originally provided.
+"
+"['classification', 'regression', 'supervised-learning']"," Title: Is there a classification task with multiple attribute regression?Body: I'm trying to look for a task that predicts a discrete label first (classification), and then predicts the multiple continuous attributes of the predicted class. I found some papers about multi-output regression, but it wasn't what I wanted. Perhaps such a task exists in robot control or in video games, but I haven't found it yet. At the same time I want to train within a supervised learning framework, not reinforcement learning.
+"
+"['neural-networks', 'machine-learning', 'recurrent-neural-networks', 'time-series', 'echo-state-network']"," Title: Why do we use a delay when feeding our input data to the echo state network?Body: I'm new to working with neural networks and have recently began implementing neural networks for time series forecasting in some of my work. I've been particularly using Echo State Networks and have been doing some reading to understand how they work. For the most part, things seem pretty straight forward, but I'm confused as to why we use a 'delay' when feeding our input data (the delay concept mentioned in the paper Harnessing Nonlinearity: Predicting Chaotic Systems and Saving Energy in Wireless Communication)?
+
+I'm looking at some source code on Github, and they implement this delay as well (they feed two arrays inputData & targetData, into the network where one is delayed by one element relative to the other). I am noticing that the larger the delay, the worse the fit.
+
+Why is this done?
+My interest is eventually to forecast past sample data.
+"
+"['neural-networks', 'machine-learning']"," Title: How could a NN be trained to output a cyclic (e.g. hue) number?Body: I was thinking about training a neural network to colourize images. The input would be the luminosity/value for each pixel, and the output would be a hue and/or saturation. Training data would be easily obtained just by selecting the luminosity/value channel from a full colour image.
+
+Suppose all channels are scaled to 0.0-1.0, there is a problem with pixels whose hue is nearly 0.0 or nearly 1.0.
+
+
+- The input data may have sharp discontinuities in hue which are not visible to the human eye. This is an unstable, illusory boundary which seems like it would destablilize the training.
+- Also, if the network outputs a value of 1.001 instead of 0.001, then this should NOT be penalised, since they represent essentially the same hue.
+
+
+Possible workarounds might be to preprocess the image to remap e.g. 0.99 to -0.01 if that pixel is near a region dominated by near-0 hues, or similarly to remap e.g. 0.01 to 1.01 if that pixel is near a region dominated by near-1 hues. This has its own problems. Similarly, outputs could be wrapped to the range 0-1 before being scored.
+
+But is there a better way to encode cyclic values such as hue so that they will naturally be continuous?
+
+One solution I thought of would be to treat (hue, saturation) as a (theta, r) polar coordinate and translate this to Cartesian (x, y) and have that be the training target, but I don't know how this change of colour space will affect things (it might be fine, I haven't tried it yet).
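+
+A quick numpy sketch of that idea (my own assumption of how it could work, not something I have tested): encode hue as a point on the unit circle, optionally scaled by saturation, and decode the prediction with atan2, so that hues 0.999 and 0.001 map to nearby targets instead of opposite ends of the range.
+
+import numpy as np
+
+def encode(hue, sat=1.0):                      # hue in [0, 1)
+    angle = 2.0 * np.pi * hue
+    return sat * np.cos(angle), sat * np.sin(angle)
+
+def decode(x, y):
+    hue = np.arctan2(y, x) / (2.0 * np.pi) % 1.0
+    sat = np.hypot(x, y)
+    return hue, sat
+
+print(decode(*encode(0.999)))                  # recovers approximately (0.999, 1.0)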
+
+Are there alternative colour representations which are better suited to machine learning?
+"
+"['math', 'objective-functions', 'optimization', 'linear-algebra']"," Title: Simplifying Log LossBody: I am reading through a paper (https://www.mitpressjournals.org/doi/pdf/10.1162/0891201053630273) where they describe logloss as a ranking function and can be simplified to the margin of the training data $X$. I am not sure what the transformation is in each step and could use a bit of help.
+
+A precursor to this ranking loss is standard logloss which may clarify my understanding as well:
+
+In this loss I only get from step 2 to here:
+
+$$-\sum_{i=1}^n log(\frac{{e^{y_iF(x_i,\overline{a})}}}{1 + e^{yF(x,\overline{a})}})$$
+$$=-\sum_{i=1}^ny_iF(x_i, \overline{a}) - log(1 + e^{yF(x,\overline{a})})$$
+
+
+And here is the full ranking loss I am having trouble on:
+
+
+"
+"['machine-learning', 'deep-learning', 'generative-adversarial-networks', 'discriminator']"," Title: Training Conditional DCGAN with GAN-CLS lossBody: I am trying to implement conditional GAN using GAN-CLS loss as described in paper: https://arxiv.org/abs/1605.05396
+
+So, while training discriminator, I should I have three batches of data:
+
+
+- [Real_Image, Embeddings]
+- [Generated_Image, Embeddings]
+- [Wrong_Image,Embeddings]
+
+
+And, while training generator, I should have one batch of data i.e [Generated_Image, Embeddings].
+
+Is this correct way to train the model?
+"
+"['convolutional-neural-networks', 'optimization']"," Title: Which one is more important in case of different loss optimization algorithms, Speed or the Route?Body: We have different kinds of algorithms to optimize the loss like AdaGrad, SGD + Momentum, etc. Some are more commonly used than the others. In some algorithms, they usually range out before they converge, reach to the steepest slope and find the minima. But some of these algorithms are significantly fast. So my question is that the speed is more of a deciding factor here or the route is important too? Or is it just problem dependent?
+
+
+
+Here is a picture of what I mean by the Route.
+"
+"['reinforcement-learning', 'dqn', 'deep-neural-networks']"," Title: Understanding the role of the target network in this DQN algorithmBody: I've found online this interesting algorithm:
+
+
+From what I understand reading this algorithm, I can't figure out why I should ""perform the opposite action"" and consequently store that second experience, as it is then never used for updating or anything else. Is this algorithm incorrect?
+"
+"['reinforcement-learning', 'q-learning', 'deep-rl']"," Title: How should I decay $\epsilon$ in Q-learning?Body: How should I decay the $\epsilon$ in Q-learning?
+
+Currently, I am decaying epsilon as follows. I initialize $\epsilon$ to be 1, then, after every episode, I multiply it by some $C$ (let it be $0.999$), until it reaches $0.01$. After that, I keep $\epsilon$ at $0.01$ all the time. I think this has a terrible consequence.
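+
+In code, the scheme I describe above amounts to an exponential decay with a floor (a minimal sketch of my current approach):
+
+num_episodes = 5000
+epsilon, decay, epsilon_min = 1.0, 0.999, 0.01
+for episode in range(num_episodes):
+    # ... run one episode with epsilon-greedy action selection here ...
+    epsilon = max(epsilon_min, epsilon * decay)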
+
+So, I need a better $\epsilon$ decay schedule. I haven't found a script or formula for it, so can you suggest one?
+"
+"['reinforcement-learning', 'terminology', 'papers']"," Title: What are finite horizon look-ahead policies in reinforcement learning?Body: I was reading the paper How to Combine Tree-Search Methods in Reinforcement Learning published in AAAI Conference 2019. It starts with the sentence
+
+
+ Finite-horizon lookahead policies are abundantly used in Reinforcement Learning and demonstrate impressive empirical success.
+
+
+What is meant by ""finite horizon look-ahead""?
+"
+"['genetic-algorithms', 'evolutionary-algorithms', 'crossover-operators']"," Title: How to effectively crossover mathematical curves?Body: I'm trying to optimize some reflective properties of curves of the form:
+$a_1x^n+a_2x^{n-1}+a_3x^{n-2} + ... + a_n + b_1y^n+b_2y^{n-1}+b_3y^{n-2} + ... + b_n = 0$
+
+which is basically the curve that you get when you sum two polynomials of same degree in different variables:
+
+$f(x) + g(y) = 0$
+
+Anyways, I was wondering what would be a good way to do crossover on two such curves? I tried averaging the curves and then mutating them, but the problem is that the entire population quickly becomes homogeneous and the fitness starts to drop. Another typical method I tried is taking a cutoff point somewhere on the expansion above and mixing the left and right part from both parents.
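+
+To make the cut-point idea concrete, here is a small numpy sketch over the coefficient vector [a_1..a_n, b_1..b_n] (the degree and coefficient values below are made up for illustration):
+
+import numpy as np
+
+rng = np.random.default_rng(0)
+parent1 = rng.normal(size=8)        # [a_1..a_4, b_1..b_4] for a degree-3 curve
+parent2 = rng.normal(size=8)
+
+cut = rng.integers(1, len(parent1))                      # single cut-point over the whole expansion
+child = np.concatenate([parent1[:cut], parent2[cut:]])
+
+child_avg = 0.5 * (parent1 + parent2)                    # the averaging variant that homogenises the population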
+
+Of course, there are many ways to do a similar process. I could order the above expansion differently and then do a cutoff. I could separate the Xs and Ys and do two separate cutoffs. Etc. The question is: in the context of algebraic curves, which method of generating offspring would be a good option, considering I want to optimize some property of the curve (in this case, I want it to have some reflective properties)?
+"
+"['proofs', 'game-theory', 'minimax']"," Title: How do you prove that minimax algorithm outputs a subgame-perfect Nash equilibrium?Body: At every node, MAX would always move to maximise the minimum payoff while MIN choose to minimise the maximum payoff, hence there is nash equilibrium.
+
+By using backwards induction, at every node, MAX and MIN player would act optimally. Hence, there is subperfect nash equilibrium.
+
+How do I formally prove this?
+"
+"['deep-learning', 'classification', 'tensorflow', 'objective-functions', 'gated-recurrent-unit']"," Title: Incorporating domain knowledge into recurrent networkBody: I am currently trying to solve a classification task with a recurrent artificial neural network (RNN).
+
+Situation
+
+There are up to 350 inputs (X) mapped on one categorical output (y)(13 differnt classes).
+The sequence to predict is deterministic in the sense that only specific state transitions are allowed based on the past. A simplified Abstraction of my problem:
+
+
+- y - Ground Truth: 01020
+- y - Model Prediction: 01200
+- Valid Transitions: 01, 10, 02, 20
+
+
+The predicted transition 12 is consequently not valid.
+
+Question
+
+What would be the best way to optimize a model to make as few invalid transition predictions as possible (ideally none)?
+(temporal shifts of the predictions in comparison to the ground truth are still acceptable)
+
+
+- By integrating the knowledge about the valid transitions into the artificial neural network. Is it even possible to code hard restrictions into an RNN?
+- By a custom loss function which penalizes these invalid transitions (a rough sketch of this idea is shown below)
+- Another approach
+
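+A rough, untested sketch of the second option, using a 0/1 matrix built from the domain knowledge, where invalid[i, j] = 1 when the transition i -> j is not allowed (for brevity, this uses the 3-class toy example from above and assumes that staying in the same class is allowed):
+
+import numpy as np
+import tensorflow as tf
+
+allowed = {(0, 1), (1, 0), (0, 2), (2, 0), (0, 0), (1, 1), (2, 2)}
+invalid = np.ones((3, 3), dtype=np.float32)
+for i, j in allowed:
+    invalid[i, j] = 0.0
+invalid = tf.constant(invalid)
+
+def transition_penalty(y_pred):                  # y_pred: (batch, time, classes) softmax output
+    p_t, p_next = y_pred[:, :-1, :], y_pred[:, 1:, :]
+    mass = tf.einsum('bti,ij,btj->bt', p_t, invalid, p_next)   # probability mass on forbidden i -> j pairs
+    return tf.reduce_mean(mass)
+
+def custom_loss(y_true, y_pred):
+    ce = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
+    return tf.reduce_mean(ce) + 1.0 * transition_penalty(y_pred)    # the penalty weight of 1.0 is a guess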
+
+Current Approach
+
+With a bidirectional recurrent network (one gated recurrent unit layer with 2000 neurons) in a many-to-many fashion, an accuracy of 99.5% on the training set and 97.5% on the test set could be reached (implemented with TensorFlow / Keras 2.2).
+"
+"['neural-networks', 'machine-learning', 'training', 'objective-functions']"," Title: Is there a way of deriving a loss function given the neural network and training data?Body: There is some sort of art to using the right loss function. However, I was wondering if there is a way to derive the loss function if I gave you a neural network model (the weights) as well as the training data.
+
+The point of this exercise is to see what family of loss functions we would get. And how that compares to the loss function that actually gave rise to the model.
+"
+"['reinforcement-learning', 'q-learning', 'reference-request', 'deep-rl']"," Title: Is there any good reference for double deep Q-learning?Body: I am new in reinforcement learning, but I already know deep Q-learning and Q-learning. Now, I want to learn about double deep Q-learning.
+
+Do you know any good references for double deep Q-learning?
+
+I have read some articles, but some of them don't mention what the loss is and how to calculate it, so many articles are not complete. Also, Sutton and Barto (in their book) don't describe that algorithm either.
+
+Please, help me to learn Double Q-learning.
+"
+"['deep-learning', 'convolutional-neural-networks', 'recurrent-neural-networks', 'structured-data']"," Title: Deep learning techniques with time-fixed, time-dependent and imaging dataBody: I have a question about the use of deep learning techniques with time-fixed features and images (setting 1) and time-dependent features (setting 2). (I am pretty new to the deep learning world so please excuse me if it's a basic question.)
+
+Setting 1:
+Imagine having a training dataset composed of
+
+
+- some time-fixed features such as height, weight, and age of an individual at the first medical visit (these features are recorded once and therefore time-fixed, i.e., they do not change in time in the dataset).
+- some medical images for each individual, such as for example a CT scan.
+- a label defining if the patient has or not a specific disease.
+
+
+Setting 2:
+Same as setting 1 but with some features that are repeated over time (time-dependent, longitudinal), such as for example blood pressure recorded twice a day for each individual for several days.
+
+Let say that the goal is to classify if an individual has or not a specific disease given the aforementioned features.
+
+I have seen zillions of papers and blogs talking about convolutional neural networks to classify images and a few million about recurrent neural networks for time-dependent features. However, I am not very aware of what to use in case I have time-fixed, time-dependent, and imaging features altogether.
+
+I am wondering how you would attack this problem.
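+
+For concreteness, here is one possible sketch of the kind of architecture I am imagining for setting 2 (purely illustrative; all shapes and sizes below are made up): a CNN branch for the scan, an LSTM branch for the repeated measurements, a dense branch for the time-fixed features, all concatenated before a final sigmoid for the disease label.
+
+import tensorflow as tf
+from tensorflow.keras import layers
+
+img_in = tf.keras.Input(shape=(64, 64, 1))               # CT slice
+seq_in = tf.keras.Input(shape=(14, 1))                    # e.g. 7 days x 2 blood-pressure readings
+tab_in = tf.keras.Input(shape=(3,))                       # height, weight, age
+
+x_img = layers.GlobalAveragePooling2D()(layers.Conv2D(16, 3, activation='relu')(img_in))
+x_seq = layers.LSTM(16)(seq_in)
+x_tab = layers.Dense(16, activation='relu')(tab_in)
+
+merged = layers.concatenate([x_img, x_seq, x_tab])
+out = layers.Dense(1, activation='sigmoid')(merged)
+
+model = tf.keras.Model([img_in, seq_in, tab_in], out)
+model.compile(optimizer='adam', loss='binary_crossentropy')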
+"
+"['long-short-term-memory', 'feature-selection', 'feature-extraction', 'feature-engineering']"," Title: Visualisation for Features to Predict Timeseries DataBody: I have a course assignment to use an LSTM to predict the movement directions of stock prices. One of the things I am asked to do is provide a visualization to compare the predictive powers of a set of N features (e.g. 1-day return, volatility, moving average, etc.). Let's assume that we use a window of 50 days as input to the LSTM.
+
+The first thing that came to my mind is to use a RadViz plot (check below image taken from https://www.scikit-yb.org/en/latest/api/features/radviz.html).
+
+However, I soon realized this will not work for the features since each sample will have 50 values. So if we have M samples, the shape of the input data will be something like Mx50xN. This, unfortunately, is not something RadViz can deal with (it can handle 2D data).
+
+Given this, I would be grateful if someone can point me to a viable way to visualize the data. Is it even possible when each feature comprises 50 values?
+"
+"['deep-learning', 'generative-adversarial-networks']"," Title: Why is this GAN not converging?Body: This GAN being trained with CelebA dataset doesn't seem to mode collapse, discriminator is not really over confident, and yet the quality is stuck on these rough Picasso-like generator images. Using Leaky-ReLU, strided conv instead of maxpool, and dampened truths helped a little, but still no better than this. Not sure what else to try. training clip Discriminator feed is in top left corner.
+
+
+"
+"['reinforcement-learning', 'markov-decision-process', 'model-based-methods']"," Title: Why is learning $s'$ from $s,a$ a kernel density estimation problem but learning $r$ from $s,a$ is just regression?Body: In David Silver's 8th lecture he talks about model learning and says that learning $r$ from $s,a$ is a regression problem whereas learning $s'$ from $s,a$ is a kernel density estimation. His explanation for the difference is that if we are in a stochastic environment and we are in the tuple $s,a$ then there might be a 30% chance the wind will blow me left, and a 70% chance the wind will blow me right, so we want to estimate these probabilities.
+
+Is the main difference between these two problems, and hence why one is regression and the other is kernel density estimation, because with the reward we are mainly concerned with the expected reward (hence regression) whereas with the state transitioning, we want to be able to simulate this so we need the estimated density?
+"
+"['image-recognition', 'pattern-recognition', 'facial-recognition']"," Title: Does anyone know of a model for comparing the eyes of people in two images to see if they match?Body: There’s a lot of talk of undercover cops intentionally starting violence in otherwise peaceful protests. The evidence, primarily, are images like this.
+
+https://images.app.goo.gl/4n3o2EXwFzMQfsKq6
+
+It looks pretty convincing, but I’d like something more solid. Does anyone know of a model that can detect with a high level of certainty if the “mask” area of two photos represents the same person?
+"
+"['neural-networks', 'math', 'hebbian-learning']"," Title: What is an auto-associator?Body: What is an auto-associator, and how does it work? How can we design an auto-associator for a given pattern? I couldn't find a clear explanation for this anywhere on the internet.
+
+Here's an example of a pattern.
+
+
+"
+"['neural-networks', 'deep-learning']"," Title: What is the intuition behind the Xavier initialization for deep neural networks?Body:
+ The aim of weight initialization is to prevent layer activation outputs from exploding or vanishing during the course of a forward pass through a deep neural network
+
+
+I am really having trouble understanding weight initialization techniques and Xavier initialization for deep neural networks (DNNs).
+
+In simple words (and maybe with an example), what is the intuition behind the Xavier initialization for DNNs? When should we use Xavier's initialization?
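+
+For reference, this is the rule I am looking at, as far as I understand it: a rough NumPy sketch of the Glorot/Xavier uniform variant (the layer sizes below are made-up values, just for illustration):
+
+import numpy as np
+
+def xavier_uniform(fan_in, fan_out, rng=np.random.default_rng(0)):
+    # Glorot/Xavier uniform: scale the weights with fan_in + fan_out so that the
+    # variance of activations and gradients stays roughly constant across layers
+    limit = np.sqrt(6.0 / (fan_in + fan_out))
+    return rng.uniform(-limit, limit, size=(fan_in, fan_out))
+
+W = xavier_uniform(256, 128)   # weights for a 256 -> 128 dense layer
+print(W.std())                 # roughly sqrt(2 / (fan_in + fan_out))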
+"
+"['neural-networks', 'convolutional-neural-networks', 'pooling']"," Title: What is the effect of using pooling layers in CNNs?Body: I know how pooling works, and what effect it has on the input dimensions - but I'm not sure why it's done in the first place. It'd be great if someone could provide some intuition behind it - while explaining the following excerpt from a blog:
+
+A problem with the output feature maps is that they are sensitive to the location of the features in the input. One approach to address this sensitivity is to down sample the feature maps. This has the effect of making the resulting down sampled feature maps more robust to changes in the position of the feature in the image, referred to by the technical phrase “local translation invariance.”
+
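+For context, here is a tiny toy example I put together of what I think this might mean (my own toy code, possibly missing the point): a 2x2 max pool gives the same output when the feature is shifted by one pixel inside the pooling window.
+
+import numpy as np
+
+def max_pool_2x2(x):
+    # non-overlapping 2x2 max pooling on a (H, W) feature map
+    h, w = x.shape
+    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
+
+a = np.zeros((4, 4)); a[1, 1] = 1.0   # feature detected at (1, 1)
+b = np.zeros((4, 4)); b[0, 0] = 1.0   # same feature shifted within the same pooling window
+print(np.array_equal(max_pool_2x2(a), max_pool_2x2(b)))  # True
+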
+What's local translation invariance here?
+"
+"['reinforcement-learning', 'planning']"," Title: Is the distribution of state-action pairs from sample based planning accurate for small experience sets?Body: From the David Silver's lecture 8: Integrating Learning and Planning - based on Sutton and Barto - he talks about using sample-based planning to use our model to take a sample of a state and then use model-free planning, such as Monte Carlo, etc, to run the trajectory and observe the reward. He goes on to say that this effectively gives us infinite data from only a few actual experiences.
+
+However, if we only experience a handful of true state-action-rewards and then start sampling to learn more, then we will surely end up with a skewed result, e.g., if I have 5 experiences but then create 10000 samples (as he says, infinite data). I am aware that, as the experience set grows, the Central Limit Theorem will come into play and the distribution of experience will more accurately represent the true environment's state-action-reward distribution, but, before this happens, is sample-based planning still useful?
+"
+"['machine-learning', 'natural-language-processing', 'transformer', 'bert']"," Title: Why does the BERT NSP head linear layer have two outputs?Body: Here's the code in question.
+
+https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L491
+
+class BertOnlyNSPHead(nn.Module):
+ def __init__(self, config):
+ super().__init__()
+ self.seq_relationship = nn.Linear(config.hidden_size, 2)
+
+ def forward(self, pooled_output):
+ seq_relationship_score = self.seq_relationship(pooled_output)
+ return seq_relationship_score
+
+
+I think it was just ranking how likely one sentence would follow another? Wouldn't it be one score?
+"
+"['recurrent-neural-networks', 'long-short-term-memory']"," Title: How to make a LSTM network to predict sequence only after input sequence is finished?Body: I am learning to use a LSTM model to predict time series data. Specifically, I hope the network should output a sequence (with multiple time steps) only after the input sequence has finished feeding in, as shown in the left figure.
+
+
+
+However, most of the LSTM sequence-to-sequence prediction tutorials I have read seem to follow the right figure (i.e. each time step of the output sequence is generated after each time step of the input sequence). What's more, as far as I understand, the LSTM implementation in PyTorch (and probably Keras) can only return an output sequence corresponding to each time step of the input sequence. It cannot make predictions after the input sequence is over.
+
+I would like to know whether there is any way to make a sequence-to-sequence LSTM network which starts producing output only after the input sequence has finished feeding in. It would be better if someone could show me an example implementation.
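+
+For concreteness, here is a rough sketch of the kind of architecture I have in mind: an encoder-decoder where a second LSTM is unrolled for the prediction horizon only after the input has been consumed (the layer sizes and the choice of seeding the decoder with the last observation are my own assumptions):
+
+import torch
+import torch.nn as nn
+
+class Seq2SeqForecaster(nn.Module):
+    def __init__(self, n_features=1, hidden=64, horizon=10):
+        super().__init__()
+        self.horizon = horizon
+        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
+        self.decoder = nn.LSTM(n_features, hidden, batch_first=True)
+        self.head = nn.Linear(hidden, n_features)
+
+    def forward(self, x):                       # x: (batch, in_steps, n_features)
+        _, (h, c) = self.encoder(x)             # encode the full input sequence first
+        step = x[:, -1:, :]                     # seed the decoder with the last observation
+        outputs = []
+        for _ in range(self.horizon):           # only now start emitting predictions
+            out, (h, c) = self.decoder(step, (h, c))
+            step = self.head(out)               # (batch, 1, n_features)
+            outputs.append(step)
+        return torch.cat(outputs, dim=1)        # (batch, horizon, n_features)
+
+y_hat = Seq2SeqForecaster()(torch.randn(8, 50, 1))
+print(y_hat.shape)                              # torch.Size([8, 10, 1])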
+"
+"['reinforcement-learning', 'q-learning', 'deep-rl', 'convergence']"," Title: If deep Q-learning starts to choose only one action, is this a sign that the algorithm diverged?Body: I'm working on a deep q-learning model in an infinite horizon problem, with a continous state space and 3 possible actions. I'm using a neural network to approximate the action-value function. Sometimes it happens that, after a few steps, the algorithm starts choosing only one between the possible actions (apart from a few steps where I suppose it explores, given the epsilon-greedy policy it follows), leading to bad results in terms of cumulative rewards. Is this a sign that the algorithm diverged?
+"
+"['optimization', 'gradient-descent', 'stochastic-gradient-descent']"," Title: How does SGD escape local minima?Body:
+ SGD is able to jump out of local minima that would otherwise trap BGD
+
+
+I don't really understand the above statement. Could someone please provide a mathematical explanation for why SGD (Stochastic Gradient Descent) is able to escape local minima, while BGD (Batch Gradient Descent) can't?
+
+P.S.
+
+While searching online, I read that it has something to do with ""oscillations"" while taking steps towards the global minima. What's that?
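+
+For concreteness, the two update rules I am comparing are (writing $\ell_i$ for the loss on training example $i$, out of $N$ examples):
+
+$$\theta_{t+1} = \theta_t - \eta \, \frac{1}{N}\sum_{i=1}^{N} \nabla_\theta \ell_i(\theta_t) \quad \text{(BGD)}, \qquad \theta_{t+1} = \theta_t - \eta \, \nabla_\theta \ell_{i_t}(\theta_t) \quad \text{(SGD, with a randomly drawn } i_t\text{)},$$
+
+so, if I understand correctly, the SGD step equals the BGD step plus a zero-mean noise term, which I assume is what the ""oscillations"" refer to.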
+"
+"['reinforcement-learning', 'q-learning', 'deep-rl', 'overfitting']"," Title: How to prevent deep Q-learning algorithms to overfit?Body: I have recently solved the Cartpole problem using double deep Q-learning. When I saw how the agent was doing, it used to go right every time, never left, and it did similar actions all the time.
+
+Did the model overfit the environment? It seems that the agent just memorized the environment.
+
+What are the common techniques to prevent the agent to overfit like that? Is that a common problem?
+"
+"['reinforcement-learning', 'actor-critic-methods']"," Title: Actor-Critic implementation not learningBody: I've implemented a vanilla actor-critic and have run into a wall. My model does not seem to be learning the optimal policy. The red graph below shows its performance in cartpole, where the algorithm occasionally does better than random but for the most part lasts between 10-30 timesteps.
+
+
+
+I am reasonably sure that the critic part of the algorithm is working. Below is a graph of the delta value (r + Q_w(s',a') - Q_w(s,a)), which seems to show that for the most part the predicted future reward is quite similar to the approximate future reward.
+
+
+
+Thus, I am at a loss for what the problem could be. I have an inkling it lies within the actor, but I am not sure. I have double checked the loss function and that seems to be correct to me as well. I would appreciate any advice. Thanks!
+"
+"['neural-networks', 'deep-learning', 'papers', 'variational-autoencoder', 'disentangled-representation']"," Title: What is the main contribution of the paper Disentangling by Factorising?Body: Considering the paper Disentangling by Factorising, in addition to introducing a new model for Disentangled Representation Learning, FactorVAE (see figure), what is the main theoretical contribution provided by the paper?
+
+
+"
+"['machine-learning', 'reinforcement-learning', 'q-learning', 'exploration-exploitation-tradeoff']"," Title: Can tabular Q-learning converge even if it doesn't explore all state-action pairs?Body: My understanding of tabular Q-learning is that it essentially builds a dictionary of state-action pairs, so as to maximize the Markovian (i.e., step-wise, history-agnostic?) reward. This incremental update of the Q-table can be done by a trade-off exploration and exploitation, but the fact remains that one ""walks around"" the table until it converges to optimality.
+
+But what if we haven't ""walked around"" the whole table? Can the algorithm still perform well in those out-of-sample state-action pairs?
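+
+For reference, this is the kind of update I have in mind (a minimal sketch with a hypothetical environment interface; unvisited entries simply stay at their default value of 0):
+
+import random
+from collections import defaultdict
+
+Q = defaultdict(float)          # the dictionary of state-action values
+alpha, gamma, eps = 0.1, 0.99, 0.1
+actions = [0, 1, 2, 3]
+
+def act(state):
+    if random.random() < eps:                                  # explore
+        return random.choice(actions)
+    return max(actions, key=lambda a: Q[(state, a)])           # exploit
+
+def update(state, action, reward, next_state):
+    best_next = max(Q[(next_state, a)] for a in actions)
+    td_target = reward + gamma * best_next
+    Q[(state, action)] += alpha * (td_target - Q[(state, action)])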
+"
+"['reinforcement-learning', 'deep-rl', 'implementation', 'gym']"," Title: What is a RAM state in the gym's breakout-ram environment?Body: I have encountered the gym environment and decided to create AI that plays breakout. Here is the link: https://gym.openai.com/envs/Breakout-ram-v0/.
+
+The documentation says that the state is represented as a RAM state, but what is the RAM in this context? Is it the random access memory? What does the RAM state represent?
+"
+"['recurrent-neural-networks', 'long-short-term-memory', 'time-series']"," Title: How to exclude sections of bad data from time-series data before training an LSTM networkBody: I am using LSTM network for predicting IOT time-series data receiving from un-reliable devices and networks.
+This results in several multiple sections [continuous streak of bad data for several days until the problem is fixed].
+I need to exclude this bad data section before feeding it to model training.
+Since I am using LSTM-RNN network, it requires to do an un-roll data based on the previous records.
+
+How can I properly exclude this bad data?
+I thought of an approach: train the model separately on each batch of good data, and use each subsequent good-data batch for fine-tuning the model.
+Please let me know if this is a good approach, or if there is a better method.
+
+example data:
+""1-01"",266.0
+""1-02"",145.9
+""1-03"",183.1
+""1-04"",0 [bad data]
+""1-05"",0 [bad data]
+""1-06"",0 [bad data]
+""1-07"",0 [bad data]
+""1-08"",224.5
+""1-09"",192.8
+""1-10"",122.9
+""1-11"",0 [bad data]
+""1-12"",0 [bad data]
+""2-01"",194.3
+""2-02"",149.5
+
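+One concrete option I am considering is to split the series into contiguous good-data segments and build the LSTM windows inside each segment separately, so no training window ever straddles a bad-data gap. A rough pandas sketch (the file/column names and the zero-as-bad rule are just assumptions based on the example above):
+
+import pandas as pd
+
+df = pd.read_csv('readings.csv', names=['day', 'value'])
+good = df['value'] != 0                       # bad rows are the zero readings here
+segment_id = (good != good.shift()).cumsum()  # label each contiguous run
+
+segments = [seg['value'].to_numpy()
+            for _, seg in df[good].groupby(segment_id[good])
+            if len(seg) >= 30]                # keep only runs long enough to unroll windows
+
+# each element of segments can then be windowed and fed to the LSTM on its own
+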
+"
+"['knowledge-representation', 'conceptual-dependency']"," Title: How can I draw a conceptual dependency for the statement ""Place all ingredients in a bowl and mix thoroughly""?Body: I stumbled across a question asking to draw a conceptual dependency for the following statement:
+
+
+ Place all ingredients in a bowl and mix thoroughly
+
+
+My attempt so far
+
+
+
+Explanation: Both the sender and recipient are the same, except that the states of their contents are different.
+
+Something feels like it isn't right. I would appreciate if you could correct errors, if any.
+"
+"['machine-learning', 'online-learning', 'algorithmic-bias']"," Title: How do I keep my system (online) learning if I can get ground truth labels only for examples flagged positive?Body: I have a binary classifier (think of it as a content moderation system) that is deployed after having being trained via batch learning.
+
+Once deployed, humans review and check for correctness only items predicted positive by the algorithm.
+
+In other words, once in production if I group predictions of the model on unseen examples in the confusion matrix
+
++-----------+-----------------+
+| | Ground-truth |
+| +-----+-----------+
+| | | Neg | Pos |
++-----------+-----+-----+-----+
+| | Neg | x11 | x12 |
+| Predicted +-----+-----+-----+
+| | Pos | x21 | x22 |
++-----------+-----+-----+-----+
+
+
+
+- I have access to all the ground-truth labels of the elements counted in $x_{21}$, $x_{22}$ (the predicted-positive)
+- I know the sum of $x_{11}$ and $x_{12}$, but not their values
+- I do not have access to the ground-truth labels of the elements predicted-negative.
+
+
+This (suboptimal) setup allows to measure precision $\frac{x_{22}}{x_{21} + x_{22}}$, while recall stays unknown as elements predicted negative are not examined at all (ground-truth labels of negatives can't be assigned due to resource constraints).
+
+Information gathered from users about the (true and false) positive elements can be used to feed a retraining loop... but
+
+
+- are there any ""smart"" learning recipes that are expected to make the algorithm improve its overall performance (say, the F1 score for the positive class) in this setting?
+- what's a meaningful metric to monitor to ensure that the performance of the model is not degrading?* (given the constraint specified here, F1 score is unknown).
+
+
+Thanks for any hint on how to deal with this!
+
+ * One solution could be to continuously monitor the F1 score on a labeled evaluation set, but maybe there's more one can do?
+"
+"['reinforcement-learning', 'definitions', 'rewards', 'sarsa']"," Title: Can the agent wait until the end of the episode to determine the reward in SARSA?Body: From Sutton and Barto's book Reinforcement Learning (Adaptive Computation and Machine Learning series) (p. 99), the following definition for first-visit MC prediction, for estimating $V \sim V_\pi$ is given:
+
+
+
+Is determining the reward for each action a requirement of SARSA? Can the agent wait until the end of the episode to determine the reward?
+
+For example, the reward for the game tic-tac-toe is decided at the end of the episode, when the player wins, loses, or draws the match. The reward is not available at each step $t$.
+
+Does this mean then that depending on the task it is not always possible to determine reward at time step $t$, and the agent must wait until the end of an episode? If the agent does not evaluate reward until the end of an episode, is the algorithm still SARSA?
+"
+"['neural-networks', 'machine-learning', 'applications', 'research', 'htm']"," Title: What are the real applications of hierarchical temporal memory?Body: What are the real applications of hierarchical temporal memory (HTM) in machine learning (ML) these days?
+"
+"['reinforcement-learning', 'reference-request', 'graph-neural-networks', 'graphs']"," Title: How to learn how to select a subgraph via reinforcement learning?Body: I have the following problem.
+
+I am given a graph with a lot (>30000) nodes. Nodes are associated with a low (<10)-dimensional feature vector, and edges are associated with a low (<10)-dimensional feature vector. In addition, all nodes start out having the color white.
+
+At every time step until completion, I want to select a subset of the nodes in the graph and color them blue. Then I receive a reward based on my coloring. I continue until all nodes are colored blue, and the total reward is the sum (maybe with a gamma factor) of my total rewards.
+
+Do you have suggestions of papers to read where the task was choosing an appropriate subgraph from a larger graph?
+
+Just doing a node classification task using a Graph Convolutional Network doesn't seem to do well, I suspect because, given that a good heuristic for reward is connectivity, it would need to learn to choose an optimal neighborhood in the graph and upweight only that neighborhood.
+
+To contextualize, each of the nodes of the graph represents a constraint that will be sent to an incremental SMT solver, and edges represent shared variables or other relationships between the constraints. I have found empirically that giving these constraints incrementally to the SMT solver when in a good order can be faster than just dumping the entire problem into an SMT solver, since the SMT solver doesn't have the best heuristics for this particular SMT problem. However, eventually, I want to add all the constraints, i.e., color the entire graph. The cost is the amount of time the solver takes on each set, with a reward at the end for completing all the constraints.
+"
+['reinforcement-learning']," Title: Calculating the advantage 'gain' of actions in model-free reinforcement learningBody: I have a simple question about model-free reinforcement. In a model I'm writing about, I want to know the value 'gain' we'd get for executing an action, relative to the current state. That is, what will I get if I moved from the current state $s$ taking action $a$.
+
+The measure I want is:
+
+$$G(s, a)=V(s^{\prime})-V(s)$$
+
+where $s'$ is the state that I would transition to if the underlying MDP was deterministic. If the MDP has a stochastic transition function, the model I want is:
+
+$$G(s, a)=\left[\sum_{s' \in S } P(s^{\prime} \mid a, s) V(s^{\prime})\right]-V(s)$$
+
+In a model-free environment, we don't have $P(s' \mid a,s)$.
+
+If we had a Q-function $Q(s,a)$, could we represent $G(s,a)$?
+
+NOTE: This is not the same as an 'advantage function' as first proposed by Baird (Leemon C Baird. Reinforcement learning in continuous time: Advantage updating. In Proceedings of 1994 IEEE International Conference on Neural Networks, pages 448–2453. IEEE, 1994.), which means the advantage of actions relative to the optimal action. What I'm looking for is the gain of actions relative to the current state.
+"
+"['machine-learning', 'audio-processing']"," Title: What is the most compressed audio that I can feed an AI?Body: The problem I currently have is that I want to train an AI to produce music, like music that contains voices etc... However, the problem is that with a WAV file, one second of audio can be up to 48,000 inputs, which is extremely detrimental to the ai's learning process and prevents it from really gaining any knowledge about context.
+I've tried to do the fast Fourier transformation, but the amount of data coming in varies depending on what part of the song I'm training it on, which will not work since I can't know ahead of time what every single time unit of every single song will have!
+And the max number of inputs is still 24,000
+
+Is there any other way of compressing data down in a way that I can give my ai something within the range of a couple hundred inputs for a second or two?
+"
+['reinforcement-learning']," Title: Is there any programming practice website for beginners in Reinforcement LearningBody: I am doing an online course on Reinforcement Learning from university of Alberta.
+It focus too much on theory. I am engineering and I am interested towards applying RL to my applications directly.
+
+My question is: is there any website which has sample programs for beginners? Small sample programs.
+I have seen several websites for other machine learning topics, such as CNNs/RNNs, etc., but the resources for RL are either limited or I couldn't find them.
+"
+"['reinforcement-learning', 'papers', 'recommender-system', 'knowledge-graph', 'knowledge-graph-embeddings']"," Title: What is meant by the rank of the scoring function here?Body: I've been reading the paper Reinforcement Knowledge Graph Reasoning for Explainable Recommendation (by Yikun Xian et al.) lately, and I don't understand a particular section:
+
+Specifically, the scoring function $f((r,e)|u)$ maps any edge $(r,e)$ to a real-valued score conditioned on user $u$. Then, the user-conditional pruned action space of state $s_t$ denoted by $A_t(u)$ is defined as:
+$A_t(u) = \{(r,e) \mid \text{rank}(f((r,e)|u)) \leq \alpha, (r,e) \in A_t\}$
+where $\alpha$ is a predefined integer that upper bounds the size of the action space.
+
+Details about the scoring function can be found in the attached paper.
+What I don't understand is: What does rank mean, here? Is the thing inside of it a matrix?
+It would be great if someone could explain the expression for the user conditional pruned action space in greater detail.
+"
+"['reinforcement-learning', 'q-learning', 'deep-rl', 'dqn', 'papers']"," Title: Are the final states not being updated in this $n$-step Q-Learning algorithm?Body: I am reading this paper and in algorithm 3 they describe an $n$-step Q-Learning algorithm. Below is the pseudo-code.
+
+
+
+From this pseudo-code, it looks as though the final tuples that they would visit don't get added to the memory buffer $M$. They define a sample size $T$, but also say in the paper that an episode terminates when $|S| = b$.
+
+This leaves me two questions:
+
+
+- Have I understood the episode termination correctly? It seems from the pseudocode they are just running an episode for $T$ time steps but also in the paper they have a definition for when an episode terminates, so I'm not sure why they would want to truncate the episode size.
+- As I mentioned, it seems as though the final state $S_T$ that you would be in won't get added to the experience buffer as we only append the $(T-n)$th tuple. Why would you want to exclude the information you get from the final tuples you visit?
+
+"
+"['reinforcement-learning', 'comparison', 'genetic-algorithms', 'optimization', 'evolutionary-algorithms']"," Title: What is the difference between reinforcement learning and evolutionary algorithms?Body: What is the difference between reinforcement learning (RL) and evolutionary algorithms (EA)?
+I am trying to understand the basics of RL, but I do not yet have practical experience with RL. I know slightly more about EAs, but not enough to understand the difference between RL and EA, and that's why I'm asking for their main differences.
+"
+"['reinforcement-learning', 'q-learning', 'deep-rl', 'dqn', 'objective-functions']"," Title: How can the target rely on untrained parameters?Body: I'm trying to understand DQN. I understand where the loss function comes from. I'm just unsure about why the target function works in practice. Given the loss function
+$$ L_i(\theta_i) = [(y_i - Q(s,a;\theta_i))^2] $$
+where
+$$ y_i = r + \gamma * max_{a'}Q(s',a';\theta_{i-1}) $$
+is the target value in the loss function.
+
+From my understanding, experiences are pulled from the replay buffer, then the DQN is used to estimate the future sum of the discounted rewards for the next state (assuming it plays optimally) and adds this onto the current rewards $r$ to create the target value. Then the DQN is used again to estimate $Q$ value for the current state. Then the loss function is just the difference between the target and the estimated $Q$ value for the current state. Afterward, you optimize the loss function.
+
+But, if the parameters of the DQN start off randomly, then surely the target value will be completely wrong since the parameters that define that target function are random. So, if the target function is wrong, then it will minimize the difference between the target value and the estimated value, but it will be learning to predict incorrect values, since the target values are wrong?
+
+I don't understand why the target value works if the parameters of the DQN needed to create that target are completely random.
+
+What obvious mistake am I making here?
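+
+For reference, this is roughly how I understand the target computation in code, with a separate (periodically copied) target network; a rough PyTorch sketch, not the paper's exact implementation:
+
+import torch
+
+def td_targets(batch, q_net, target_net, gamma=0.99):
+    # batch: tensors (states, actions, rewards, next_states, dones) sampled from the replay buffer
+    states, actions, rewards, next_states, dones = batch
+    with torch.no_grad():                                    # targets are treated as constants
+        next_q = target_net(next_states).max(dim=1).values   # max_a' Q(s', a'; theta_old)
+        y = rewards + gamma * (1 - dones) * next_q
+    q = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
+    loss = torch.nn.functional.mse_loss(q, y)                # (y - Q(s, a; theta))^2
+    return loss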
+"
+"['deep-learning', 'recurrent-neural-networks', 'long-short-term-memory', 'transformer', 'attention']"," Title: How to understand the matrices used in the Attention layer?Body: Attention-scoring mechanism seems to be a commonly-used component in various seq2seq models, and I was reading about the original ""Location-based Attention"" in Bahadanau well-known paper at https://arxiv.org/pdf/1506.07503.pdf. (it seems this attention is used in various forms of GNMT and text-to-speech sythesizers like tacotron-2 https://github.com/Rayhane-mamah/Tacotron-2).
+
+Even after repeated readings of this paper and other articles about Attention-mechanism, I'm confused about the dimensions of the matrices used, as the paper doesn't seem to describe it. My understanding is:
+
+
+- If I have decoder hidden dim 1024, that means ${s_{i-1}}$ vector is 1024 length.
+- If I have encoder output dim 512, that means $h_{j}$ vector is 512 length.
+- If total inputs to encoder is 256, then number of $j$ can be from 1 to 256.
+- Since $W s_{i-1}$ is a matrix multiply, it seems $\text{cols}(W)$ should match $\text{rows}(s_{i-1})$, but $\text{rows}(W)$ still remains undefined. The same seems true for the matrices $V, U, w, b$.
+
+
+This is page-3/4 from the paper above that describes Attention-layer:
+
+
+
+I'm unsure how to make sense of this. Am I missing something, or can someone explain this?
+
+What I don't understand is:
+
+
+- What is the dimension of the previous alignment (denoted by $\alpha_{i-1}$)? Shouldn't it be the total number of values of $j$ in $h_{j}$ (which is 256, i.e. the total number of different encoder output states)?
+- What is the dimension of $f_{i,j}$ and of the convolution filter $F$? (The paper says $F$ has shape $k \times r$ but doesn't define $r$ anywhere.) What is $r$ and what does $k \times r$ mean here?
+- How are these unknown dimensions for matrices $'V, U, w, b'$ described above determined in this model?
+
+"
+"['reference-request', 'word2vec', 'hierarchical-softmax']"," Title: What are the applications of hierarchical softmax?Body: Apart from its use in word embeddings (e.g word2vec algorithm), are there any other applications of hierarchical softmax? If yes, can you please give me some reference papers?
+"
+"['reinforcement-learning', 'temporal-difference-methods']"," Title: What is correct update when the some indexes are not available?Body: To update the Q table Q-learning takes the arg max of the Q values - the state, value mappings.
+
+For example, in tic-tac-toe the state XOX OXO -X- contains two available positions, each marked by the - character. In order to evaluate the arg max, should temporal difference calculate the arg max of just the available positions?
+
+For the state XOX OXO -X-, should the arg max be taken at indexes 6 and 8 (assuming zero indexing)? If not, then how should the arg max indexes be updated? The indexes 0, 1, 2, 3, 4, 5, 7 cannot be used, as they have already been taken by X or O, so they should not be evaluated for their value. This also means that indexes which are not available will not have their Q value updated; will this break the Q-learning procedure?
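+
+For concreteness, this is the kind of masked arg max I mean (hypothetical Q-table and board encoding):
+
+Q = {}  # hypothetical Q-table: Q[(state, action)] -> value, defaulting to 0.0
+
+def greedy_action(state):
+    # state is a 9-character string such as 'XOXOXO-X-'; legal moves are the '-' cells
+    legal = [i for i, cell in enumerate(state) if cell == '-']
+    return max(legal, key=lambda a: Q.get((state, a), 0.0))
+
+print(greedy_action('XOXOXO-X-'))  # picks index 6 or 8, whichever has the higher Q value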
+"
+"['philosophy', 'agi', 'artificial-consciousness', 'chinese-room-argument']"," Title: How does one prove comprehension in machines?Body: Say we have a machine and we give it a task to do (vision task, language task, game, etc.), how can one prove that a machine actually know's what's going on/happening in that specific task?
+
+To narrow it down, some examples:
+
+Conversation - How would one prove that a machine actually knows what it's talking about or comprehending what is being said? The Turing test is a good start, but never actually addressed actual comprehension.
+
+Vision: How could someone prove or test that a machine actually knows what it's seeing? Object detection is a start, but I'd say it's very inconclusive that a machine understands at any level what it is actually seeing.
+
+How do we prove comprehension in machines?
+"
+"['q-learning', 'feedforward-neural-networks']"," Title: Help with deep Q learning for 2048 game getting stuckBody: I am having trouble making a reinforcement algorithm than can win the 2048 game.
+
+I have tried with deep Q (which I think is the simplest algorithm that should be able to learn a winning strategy).
+
+My Q function is given by an NN with layers 16 -> 8 -> 4. Weight initialization is XAVIER. Activation function is RELU. Loss function is quadratic loss. Updates are via gradient descent.
+
+To train the NN I used a reward given by :
+
+$$r_t = \frac{1}{1024} \sum_{i=0}^{n}{p^i r_{((t-n)+i)}}$$
+
+Where n is 20 or the amount of iterations since the last update if a game is lost and $p = 1.4$.
+
+There is an epsilon for exploration, set at 100% at the start, and it decreases by 10% until it reaches 1%.
+
+I have tried to optimize the parameters but can't get better results than a ""256"" on the board. And the quadratic loss seems to get stuck at 0.25:
+
+
+Is there something I am missing?
+
+Code:
+
+
+public enum GameAction {
+ UP, DOWN, LEFT, RIGHT
+}
+
+public final class GameEnvironment {
+
+ public final int points;
+ public final boolean lost;
+ public final INDArray boardState;
+
+ public GameEnvironment(int points, boolean lost, int[] boardState) {
+ this.points = points;
+ this.lost = lost;
+ this.boardState = new NDArray(boardState, new int[] {1, 16}, new int[] {16, 1});
+ }
+}
+
+public class SimpleAgent {
+ private static final Random random = new Random(SEED);
+
+ private static final MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
+ .seed(SEED)
+ .weightInit(WeightInit.XAVIER)
+ .updater(new AdaGrad(0.5))
+ .activation(Activation.RELU)
+ .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
+ .weightDecay(0.0001)
+ .list()
+ .layer(new DenseLayer.Builder()
+ .nIn(16).nOut(8)
+ .build())
+ .layer(new OutputLayer.Builder()
+ .nIn(8).nOut(4)
+ .lossFunction(LossFunctions.LossFunction.SQUARED_LOSS)
+ .build())
+ .build();
+ MultiLayerNetwork Qnetwork = new MultiLayerNetwork(conf);
+
+ private GameEnvironment oldState;
+ private GameEnvironment currentState;
+ private INDArray oldQuality;
+
+ private GameAction lastAction;
+
+ public SimpleAgent() {
+ Qnetwork.init();
+ ui();
+ }
+
+ public void setCurrentState(GameEnvironment currentState) {
+ this.currentState = currentState;
+ }
+
+ private final ArrayList<INDArray> input = new ArrayList<>();
+ private final ArrayList<INDArray> output = new ArrayList<>();
+ private final ArrayList<Double> rewards = new ArrayList<>();
+
+ private int epsilon = 100;
+
+ public GameAction act() {
+ if(oldState != null) {
+ double reward = currentState.points - oldState.points;
+
+ if (currentState.lost) {
+ reward = 0;
+ }
+
+ input.add(oldState.boardState);
+ output.add(oldQuality);
+ rewards.add(reward);
+
+ if (currentState.lost || input.size() == 20) {
+ for(int i = 0; i < rewards.size(); i++) {
+ double discount = 1.4;
+ double discountedReward = 0;
+
+ for(int j = i; j < rewards.size(); j++) {
+ discountedReward += rewards.get(j) * Math.pow(discount, j - i);
+ }
+
+ rewards.set(i, lerp(discountedReward, 1024));
+ }
+
+ ArrayList<DataSet> dataSets = new ArrayList<>();
+
+ for(int i = 0; i < input.size(); i++) {
+ INDArray correctOut = output.get(i).putScalar(lastAction.ordinal(), rewards.get(i));
+
+ dataSets.add(new DataSet(input.get(i), correctOut));
+ }
+
+ Qnetwork.fit(DataSet.merge(dataSets));
+
+ input.clear();
+ output.clear();
+ rewards.clear();
+ }
+
+ epsilon = Math.max(1, epsilon - 10);
+ }
+
+ oldState = currentState;
+ oldQuality = Qnetwork.output(currentState.boardState);
+
+ GameAction action;
+
+
+ if(random.nextInt(100) < 100-epsilon) {
+ action = GameAction.values()[oldQuality.argMax(1).getInt()];
+ } else {
+ action = GameAction.values()[new Random().nextInt(GameAction.values().length)];
+ }
+
+ lastAction = action;
+
+ return action;
+ }
+
+ private static double lerp(double x, int maxVal) {
+ return x/maxVal;
+ }
+
+ private void ui() {
+ UIServer uiServer = UIServer.getInstance();
+ StatsStorage statsStorage = new InMemoryStatsStorage();
+ uiServer.attach(statsStorage);
+ Qnetwork.setListeners(new StatsListener(statsStorage));
+ }
+}
+
+"
+['object-recognition']," Title: Video recognition (specifically video, not individual frames)Body: There are libraries for recognizing individual video frames, but I need to recognize an object in motion. I can recognize a person in every single frame, but I need to know if the person is running or waving. I can recognize a tree in every single frame, but I need to find out if the tree is swaying in the wind. I can recognize a wind turbine in every frame, but I need to know if it's spinning right now.
+
+So the question is: do technologies, libraries, concepts, or algorithms exist for recognizing objects over a certain period of time? For example, I have a series of several frame sequences, each of which contains a person, and I need to find out whether the person is walking or waving their hands.
+"
+"['comparison', 'minimax', 'alpha-beta-pruning']"," Title: Should I use minimax or alpha-beta pruning?Body: Should I use minimax or alpha-beta pruning (or both)? Apparently, alpha-beta pruning prunes some parts of the search tree.
+"
+"['reinforcement-learning', 'reference-request', 'resource-request']"," Title: Are there any good tutorials about training RL agent from raw pixels using PyTorch?Body: Is there any good tutorials about training reinforcement learning agent from raw pixels using PyTorch?
+
+I don't understand the official PyTorch tutorial. I want to train the agent on the atari breakout environment. Unfortunately, I failed to train the agent on the RAM version. Now, I am looking for a way to train the agent from raw pixels.
+"
+"['machine-learning', 'python', 'supervised-learning', 'scikit-learn']"," Title: Train a model using a multi-column text-filled excel sheetBody: I have an excel sheet filled with my own personal appreciations of movies I've watched, and I want to use it to train an AI model so that it can predict if I'll like a specific movie or not, based on the ones I've already seen.
+
+My data is formatted as following (just a sample, the spreadsheet is filled with hundreds of movies):
+
+
+
+And I would like to use all the columns to train my model. Because I am going to say if I liked the movie or not, I know it will be Supervised Learning. I already cleaned the data so there's no blank or missing data, but I do not know how to train my model using every column.
+
+If required, I can be more specific on something, just ask and I'll edit the post.
+"
+"['neural-networks', 'weights']"," Title: Is better to spend parameters on weights or bias?Body: If a neural network has a limited number of neuron parameters to find, -let's say only 1000 parameters-, it is generally better to spend the parameters on weights or neuron bias?
+
+For example, if each neuron has 2 weights and one bias, it uses 3 parameters per neuron, so only 333 neurons would be available.
+
+But if each neuron uses no bias parameter, then 500 neurons are available with 1000 parameters.
+
+I'm concerned about overfitting by using too many parameters, so I want to minimize the number of parameters while maximizing the quality of the result.
+"
+"['generative-model', 'conditional-probability', 'restricted-boltzmann-machine', 'deep-belief-network']"," Title: How do I sample conditionally from deep belief networks?Body: Deep belief networks (DBNs) are generative models, where, usually, you sample by thermalising the deepest layer (as it's a restricted Boltzmann machine), and then forward propagating a sample towards the visible layer to get a sample from the learned distribution.
+
+This is less flexible sampling than in a single layer DBN: a restricted Boltzmann machine. There, we can start our sampling chain at any state we want, and get samples ""around"" that state. In particular, we can clamp some visible nodes $\{v_i\}$ and get samples from the conditional probability $𝑝(v_j|\{v_i\})$
+
+Is there a way to do something similar in DBNs? When we interpret the non-RBM layers as RBMs by removing directionality, can we treat it as a deep Boltzmann machine and start sampling at e.g. a training example again?
+"
+"['reinforcement-learning', 'temporal-difference-methods', 'td-lambda']"," Title: How is $\Delta$ updated in true online TD($\lambda$)?Body: In the RL textbook by Sutton & Barto section 7.4, the author talked about the ""True online TD($\lambda$)"". The figure (7.10 in the book) below shows the algorithm.
+
+At the end of each step, $V_{old} \leftarrow V(S')$ and also $S \leftarrow S'$. When we jump to next step, $\Delta \leftarrow V(S') - V(S')$, which is 0. It seems that $\Delta$ is always going to be 0 after step 1. If that is true, it does not make any sense to me. Can you please elaborate on how $\Delta$ is updated?
+
+
+"
+"['neural-networks', 'objective-functions', 'time-series']"," Title: What are some good loss functions used to minimize extreme errors in regression and time series forecasting?Body: I'm working on a time series forecasting task, and, in some specific cases, I don't need perfect accuracy, but the network cannot by any means miss by a lot. So, in detriment of a smaller mean error, I want to have fewer big mistakes.
+
+Any suggestions of loss functions or other methods to solve this issue?
+"
+"['deep-learning', 'generative-adversarial-networks', 'wasserstein-metric', 'wasserstein-gan']"," Title: Under what conditions can one find the optimal critic in WGAN?Body: The Kantorovich-Rubinstein duality for the optimal transport problem implies that the Wasserstein distance between two distributions $\mu_1$ and $\mu_2$ can be computed as (equation 2 in section 3 in the WGAN paper)
+$$W(\mu_1,\mu_2)=\underset{f\in \text{1-Lip.}}{\sup}\left(\mathbb{E}_{x\sim \mu_1}\left[f\left(x\right)\right]-\mathbb{E}_{x \sim \mu_2}\left[f\left(x\right)\right]\right).$$
+Under what conditions can one find the optimal $f$ that achieves the maximum? Is it possible to have an analytical expression for $f$ that achieves the maximum in such scenarios?
+Any help is deeply appreciated.
+"
+"['algorithm', 'search', 'hill-climbing', 'local-search']"," Title: Can we solve an $8 \times 8$ sliding puzzle using hill climbing?Body: Can we solve an $8 \times 8$ sliding puzzle using a random-restart hill climbing technique (steepest-ascent)? If yes, how much computing power will this need? And what is the maximum $n \times n$ that can be solved normally (e.g. with a Google's colab instance)?
+"
+"['machine-learning', 'deep-learning']"," Title: Confusion about the proof that optimizing InfoNCE equals to maximizing mutual informationBody: In the appendix of Representation Learning with Contrastive Predictive Coding, van den Oord et al. prove that optimizing InfoNCE is equivalent to maximize the mutual information between input image $x_t$ and the context latent $c_t$ as follows:
+
+where $x_{t+k}$ is the image at time step $t+k$, $X_{neg}$ is a set of negative samples that do not appear in the sequence $x_t$ belongs to, and $N-1$ is the negative samples used to compute InfoNCE.
+I'm confused about Equation $(8)$. van den Oord et al. stressed that Equation $(8)$ becomes more accurate as $N$ increases, but I cannot see why. Here's my understanding, for $x_j\in X_{neg}$, we have $p(x_j|c_t)\le p(x_j)$ . Therefore, $\sum_{x_j\in X_{neg}}{p(x_j|c_t)\over p(x_j)}\le N-1$ and this does not become accurate as $N$ increases. In fact, I think the gap between the left and right of $\le$ increases as $N$ increases. Do I make any mistake?
+"
+"['reinforcement-learning', 'comparison', 'rewards', 'return']"," Title: Is there any difference between reward and return in reinforcement learning?Body: I am reading Sutton and Barto's book on reinforcement learning. I thought that reward and return were the same things.
+
+However, in Section 5.6 of the book, 3rd line, first paragraph, it is written:
+
+
+ Whereas in Chapter 2 we averaged rewards, in Monte Carlo methods we average returns.
+
+
+What does it mean? Are rewards and returns different things?
+"
+"['deep-learning', 'natural-language-processing', 'transformer', 'attention']"," Title: What are the keys and values of the attention model for the encoder and decoder in the ""Attention Is All You Need"" paper?Body: I have recently encountered the paper on NLP. It is very new to me and I am still unable to see how that works. I have used all the resources over there from the original paper to Youtube videos and the very famous ""Illustrated Transformer"".
+
+Suppose I have a training example of ""I am a student"" and I have the respective French as ""Je suis etudient"".
+
+I want to know how these 3 words are converted to 4 words. What are the query, keys, values?
+
+This is my understanding of the topic so far.
+
+The encoder part is:
+
+
+- Query: a single word embedded in a vector form. such as ""I"" expressed as a vector of length 5 as $[.2, 0.1, 0.4, 0.9, 0.44]$.
+- Keys: the matrix of all the vectors or in simple words, a matrix that has all the words from a sentence in the form of embeddings.
+- Values = Keys
+
+
+For decoder:
+
+
+- Query: the input word in the form of a vector (which is output given by the decoder from the previous pass).
+- Keys = values = outputs from the encoder's layers.
+
+
+BUT there are 2 different attention layers in the decoder, and one of them does not use the encoder's output at all. So, what are the keys and values there? (I think they are just like in the encoder, but built only from what has been generated up to that pass?)
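+
+For reference, my current mental model of the shapes is scaled dot-product attention roughly like this (a small NumPy sketch, sizes made up):
+
+import numpy as np
+
+def attention(Q, K, V):
+    # Q: (n_queries, d_k), K: (n_keys, d_k), V: (n_keys, d_v)
+    scores = Q @ K.T / np.sqrt(K.shape[-1])          # (n_queries, n_keys)
+    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
+    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
+    return weights @ V                               # (n_queries, d_v)
+
+rng = np.random.default_rng(0)
+out = attention(rng.normal(size=(3, 64)), rng.normal(size=(4, 64)), rng.normal(size=(4, 64)))
+print(out.shape)  # (3, 64): e.g. 3 decoder positions attending over the 4 encoder outputs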
+"
+"['machine-learning', 'classification', 'prediction', 'data-preprocessing']"," Title: How can I predict the label given a partial feature vector?Body: Most of the traditional machine learning algorithms need a feature vector of a constant dimension to predict the label.
+
+Which algorithms can be used to predict a class label with a shorter or partial feature vector?
+
+For example, consider a search engine. In search engines, when the user types a few letters, the search engine predicts the context of the query and suggests more queries to user.
+
+Similarly, how can I predict a class label with an incomplete feature vector? I know one way is to pad the sequence, but I want a better solution.
+"
+"['classification', 'probability-theory', 'bayes-error-rate']"," Title: How is the formula for the Bayes error rate with an integral derived?Body: My questions concern a particular formulation of the Bayes error rate from Wikipedia, summarized below.
+
+For a multiclass classifier, the Bayes error rate may be calculated as follows: $$p = 1 - \sum_{C_i \ne C_{\max,x}} \int_{x \in H_i} P(C_i|x)p(x)\,dx$$ where $x$ is an instance, $C_i$ is a class into which an instance is classified, $H_i$ is the area/region that a classifier function $h$ classifies as $C_i$.
+
+We are interested in the probability of misclassifying an instance, so we wish to sum up the probability of each unlikely class label (hence we want to look at $C_i \ne C_{\max, x}$).
+However, the integral is confusing me. We want to integrate an area corresponding to the probability that we choose label $C_i$ given $x$. But we drew $x$ from $H_i$, the region covered/classified by $C_i$, so wouldn't $P(C_i|x) = 1$?
+I think most of my confusion will be resolved if someone can help clarify the intention of the integral.
+Is it to draw random samples from the total space of $h$ (the classifier function), and then sum the probabilities from each classified $C_i \ne C_{\max, x}$? How does $x$ exist in the outer summation before it has been sampled from $H_i$ in the integral?
+"
+"['reinforcement-learning', 'value-functions', 'expectation', 'return', 'bellman-equations']"," Title: Why is $G_{t+1}$ is replaced with $v_*(S_{t+1})$ in the Bellman optimality equation?Body: In equation 3.17 of Sutton and Barto's book:
+
+$$q_*(s, a)=\mathbb{E}[R_{t+1} + \gamma v_*(S_{t+1}) \mid S_t = s, A_t = a]$$
+
+$G_{t+1}$ here have been replaced with $v_*(S_{t+1})$, but no reason has been provided for why this step has been taken.
+
+Can someone provide the reasoning behind why $G_{t+1}$ is equal to $v_*(S_{t+1})$?
+"
+"['deep-learning', 'training', 'recurrent-neural-networks', 'long-short-term-memory', 'sequence-modeling']"," Title: How do LSTM or GRU gates learn to specialize in their desired tasks?Body: While I was studying the equations for the computation inside GRU and LSTM units, I realized that although the different gates have different Weight matrices, their overall structure is the same. They are all dot products of a weight matrix and their inputs, plus bias, followed by a learned gating activation. Now, the difference between computation depends on the weight matrices being different from each other, that is, those weight matrices are specifically for specializing in the particular tasks like forgetting/keeping etc.
+
+But these matrices are all initialized randomly, and it seems that there's no special tricks in the training scheme to make sure these weight matrices are learned in a manner that the associated gates specialize in their desired tasks. They are all random matrices that kept getting updated with gradient descent.
+
+So how does, for example, a forget gate learn to function as a forgetting unit? Same question applies to others as well. Am I missing a part of the training for these networks? Can we ever say that these units learn truly disentangled functions from each other?
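+
+For reference, these are the gate equations I am referring to, in a rough NumPy sketch (one common sign convention for the update gate; all matrices are randomly initialized here, exactly as before any training):
+
+import numpy as np
+
+rng = np.random.default_rng(0)
+d_in, d_h = 8, 16
+Wz, Wr, Wh = (rng.normal(size=(d_h, d_in + d_h)) for _ in range(3))
+bz, br, bh = np.zeros(d_h), np.zeros(d_h), np.zeros(d_h)
+sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
+
+def gru_step(x, h):
+    xh = np.concatenate([x, h])
+    z = sigmoid(Wz @ xh + bz)                                  # update gate
+    r = sigmoid(Wr @ xh + br)                                  # reset gate
+    h_tilde = np.tanh(Wh @ np.concatenate([x, r * h]) + bh)    # candidate state
+    return (1 - z) * h + z * h_tilde                           # same functional form for every gate
+
+h = gru_step(rng.normal(size=d_in), np.zeros(d_h))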
+"
+"['reinforcement-learning', 'rewards', 'multi-armed-bandits', 'discount-factor']"," Title: When discounted MAB is useful?Body: Many of multi-armed bandit(MAB) algorithms are used when the total reward is the sum of all rewards. However, in RL, the discounted reward is mainly used. Why is the discounted reward not prevailing in MAB problem, and in what cases is this type of modeling valid and might be better?
+"
+"['deep-learning', 'q-learning', 'dqn', 'experience-replay']"," Title: In a DQN, can Prioritized Experience Replay actually perform worse than a regular Experience Replay?Body: I've written a Double DQN-based stock trading bot using mainly time series stock data.
+
+I've recently upgraded my Experience Replay(ER) code with a version of Prioritized Experience Replay (PER) similar to the one written by OpenAI. My DQN's reward function is the stock return over 30 days (the length of my test window).
+
+The strange thing is, once the bot has been trained using the same set of time series data and let free to trade on unseen stock data, the version that uses PER actually comes up with worse stock returns than the version using a regular ER.
+
+This is not quite what I'd expected but it's very hard to debug and see what might have gone wrong.
+
+So my question is, will PER always perform better than a regular ER? If not, when/why not?
+"
+"['q-learning', 'deep-rl']"," Title: How to predict Q-values based on stack of framesBody: I decided to train deep Q-learning agent based on getting raw pixels from environment.I have one particular problem:when I input stack of frames, suppose 4 consecutive frames, if action space is 6,then output is 4 by 6 matrix.So which one is real Q-value?I mean, I input batch of frames and it outputs batch of values and question is which is real Q-value out of those batch values?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'training']"," Title: What does it mean to train a model?Body: We hear this many time for different problems
+
+
+ Train a model to solve this problem!
+
+
+What do we really mean by training a model?
+"
+['feature-selection']," Title: How to obtain SHAP valuesBody: I want to obtain SHAP values with kernel SHAP without using python but I don't really understand the algorithm. If I have a kNN classifier, do I have to run the classifier for all the coalitions possible? For $n$ variables, $2^n$ predictions?
+
+Also, how do I obtain the SHAP values after that? Linear regression?
+"
+"['reinforcement-learning', 'rewards', 'sutton-barto', 'expectation', 'transition-model']"," Title: If the current state is $S_t$ and the actions are chosen according to $\pi$, what is the expectation of $R_{t+1}$ in terms of $\pi$ and $p$?Body: I'm trying to solve exercise 3.11 from the book Sutton and Barto's book (2nd edition)
+
+Exercise 3.11 If the current state is $S_t$ , and actions are selected according to a stochastic policy $\pi$, then what is the expectation of $R_{t+1}$ in terms of $\pi$ and the four-argument function $p$ (3.2)?
+
+Here's my attempt.
+For each state $s$, the expected immediate reward when taking action $a$ is given in terms of $p$ by eq 3.5 in the book:
+$$r(s,a) = \sum_{r \in R} r \, \sum_{s'\in S} p(s',r \mid s,a) = E[R_t \mid S_{t-1} = s, A_{t-1} = a] \tag{1}\label{1}$$
+The policy $\pi(a \mid s)$, on the other hand, gives the probability of taking action $a$ given the state $s$.
+Is it possible to express the expectation of the immediate reward over all actions $A$ from the state $s$ using (1) as
+$$E[R_t \mid S_{t-1} = s, A] = \sum_{a \in A} \pi(a \mid s) r(a,s) \tag{2}\label{2}$$
+?
+If this is valid, is this also valid in the next time step
+$$E[R_{t+1} \mid S_{t} = s, A] = \sum_{a \in A} \pi(a \mid s) r(a, s) \tag{3}\label{3}$$
+?
+If (2) and (3) are OK, then
+$$E[R_{t+1} \mid S_{t} = s, A] = \sum_{a \in A} \pi(a \mid s) \sum_{r \in R} r \, \sum_{s'\in S} p(s',r \mid s,a)$$
+"
+"['neural-networks', 'machine-learning', 'training']"," Title: What kind of problems cannot be solved using machine learning techniques?Body: For the problems that can be solved algorithmically.
+
+We have very good formal literature on which problems can be solved in polynomial or exponential time and which cannot (P/NP/NP-hard).
+
+But do we know of some problems in the machine learning paradigm for which no model can be trained? (With/without infinite computation capacity.)
+"
+"['deep-learning', 'reinforcement-learning', 'dqn', 'deep-neural-networks']"," Title: What's the right way of building a deep Q-network?Body: I'm new to RL and to deep q-learning and I have a simple question about the architecture of the neural network to use in an environment with a continous state space a discrete action space.
+
+I thought that the action $a_t$ should have been included as an input of the neural network, together with the state. It also made sense to me because, when you have to compute the argmax or the max w.r.t. $a_t$, it works like a ""standard"" function. Then I've seen some examples of networks that take only $s_t$ as input and have as many outputs as the number of possible actions. I roughly understand the logic behind this (replicate the action-state Q-value pairs), but is it really the correct way? If so, how do you compute the $\arg\max$ or the $\max$? Do I have to associate an action to each output?
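+
+For concreteness, the second design I have seen looks roughly like this (a rough PyTorch sketch with arbitrary layer sizes), where the max/argmax is simply taken over the output dimension:
+
+import torch
+import torch.nn as nn
+
+n_state, n_actions = 4, 3
+q_net = nn.Sequential(nn.Linear(n_state, 64), nn.ReLU(), nn.Linear(64, n_actions))
+
+s = torch.randn(1, n_state)
+q_values = q_net(s)                        # shape (1, n_actions): one Q-value per action
+best_action = q_values.argmax(dim=1)       # index of the greedy action, e.g. 0, 1 or 2
+max_q = q_values.max(dim=1).values         # used in the TD target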
+"
+"['reinforcement-learning', 'ai-design', 'deep-rl']"," Title: How to train a reinforcement learning agent from raw pixels?Body: How would you train a reinforcement learning agent from raw pixels?
+
+For example, if you have 3 stacked images to sense motion, then how would you pass them to neural networks to output Q-learning values?
+
+If you pass that batch output, it would be a batch of values, so from here it is impossible to deduce which ones are the true Q-values for that state.
+
+Currently, I am watching a YouTuber: Machine Learning with Phil, and he did it very differently. On the 13th minute, he defined a network that outputs a batch of values rather than Q-values for 6 states. In short, he outputs a matrix rather than a vector.
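+
+For concreteness, here is a rough sketch of the shapes as I understand them (an arbitrary toy CNN, not his exact network): each row of the output corresponds to one stacked-frame state, and the columns are that state's Q-values.
+
+import torch
+import torch.nn as nn
+
+n_actions = 6
+cnn = nn.Sequential(
+    nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),   # 3 stacked grayscale frames as channels
+    nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
+    nn.Flatten(),
+    nn.Linear(32 * 9 * 9, n_actions),
+)
+
+states = torch.randn(32, 3, 84, 84)   # a batch of 32 states, each a stack of 3 frames
+q = cnn(states)                       # shape (32, 6): row i holds the 6 Q-values of state i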
+"
+"['reinforcement-learning', 'q-learning', 'deep-rl', 'dqn']"," Title: How to take actions at each episode and within each step of the episode in deep Q learning?Body: In deep Q learning, we execute the algorithm for each episode, and for each step within an episode, we take an action and record a reward.
+
+I have a situation where my action is 2-tuple $a=(a_1,a_2)$. Say, in episode $i$, I have to take the first half of an action $a_1$, then for each step of the episode, I have to take the second half of the action $a_2$.
+
+More specifically, say we are in episode $i$ and this episode has $T$ timesteps. First, I have to take $a_1(i)$. (Where $i$ is used to reference episode $i$.) Then, for each $t_i\in\{1,2,\ldots,T\}$, I have to take action $a_2(t_i)$. Once I choose $a_2(t_i)$, I get an observation and a reward for the global action $(a_1(i), a_2(t_i))$.
+
+Is it possible to apply deep Q learning? If so, how? Should I apply the $\epsilon$-greedy twice?
+"
+"['reinforcement-learning', 'alphago-zero']"," Title: Why does AlphaGo Zero select move based on exponentiated visit count?Body: From the AlphaGo Zero paper, AlphaGo Zero uses an exponentiated visit count from the tree search.
+
+Why use visit count instead of the mean action value $Q(s, a)$?
+"
+"['deep-learning', 'convolutional-neural-networks', 'computer-vision']"," Title: Can we force the initial state of a neural network to produce an ""unknown"" class?Body: Has anyone investigated ways to initialize a network so that everything is considered ""unknown"" at the start?
+
+When you consider the ways humans learn, if something doesn't fit a class well enough, it falls into an ""unknown category"".
+
+I would argue we all ultimately do some type of correlation matching internally with a certain threshold existing for recognition.
+
+Deep networks don't currently have this ability, everything falls into a class. What I'm curious about is how might we force a network to initially classify things as ""unknown"" as the default state.
+"
+"['value-functions', 'policies', 'sutton-barto', 'policy-iteration']"," Title: Why is there an inconsistency between my calculations of Policy Iteration and this Sutton & Barto's diagram?Body: In equation 4.9 of Sutton and Barto's book on page 79, we have (for the policy iteration algorithm):
+$$\pi'(s) = arg \max_{a}\sum_{s',r}p(s',r|s,a)[r+\gamma v_{\pi}(s')]$$
+where $\pi$ is the previous policy and $\pi'$ is the new policy. Hence in iterations $k$ it must mean
+$$\pi_{k+1}(s) = arg \max_{a}\sum_{s',r}p(s',r|s,a)[r+\gamma v_{\pi_{k}}(s')]$$
+But in the example given in the same book on page 77 we have:
+
+Now, for the concerned state marked in red -
+
+- $v_{\pi_{1}} = -1$ for all four surrounding states
+- $r = -1$ for all four surrounding states
+- $p(s',r|s,a) = 1$ for all four surrounding states
+- $\pi_{2}(s) = arg \max_{a}[1*[-1+1*-1],1*[-1+1*-1],1*[-1+1*-1],1*[-1+1*-1]]$
+- $\pi _{2}(s) = arg \max_{a}(-2,-2,-2,-2)$
+
+Hence this should give us a criss-cross symbol (a 4-directional arrow) in $\pi_{2}(s)$, but here a left arrow symbol is given.
+What's wrong with my calculations?
+"
+"['reinforcement-learning', 'ai-design', 'q-learning', 'deep-rl']"," Title: How to implement RAM versions of Atari gamesBody: I have coded the breakout RAM version, but, unfortunately, its highest reward was 5. I trained it for about 2 hours and never reached a higher score. The code is huge, so I can't paste here, but, in short, I used double deep Q-learning, and trained it like it was CartPole or lunar-lander environment. In CartPole, the observation was a vector of 4 components. In that case, my double deep Q-learning agent solved the environment, but in the breakout-ram version whose observation was a vector of 128 elements, it was not even close.
+
+Did I miss something?
+"
+"['neural-networks', 'machine-learning', 'generative-adversarial-networks']"," Title: What does it mean when the discriminator's loss gets a constant value while the generator's loss keeps on changing?Body: While training a GAN-based model, every time the discriminator's loss gets a constant value of nearly 0.63 while the generator's loss keeps on changing from 0.5 to 1.5, so I am not able to understand if this thing is happening either due to the generator being successful in fooling the discriminator or some instability in training. I have been stuck in this confusion for so many days.
+"
+"['reinforcement-learning', 'value-functions', 'bellman-equations', 'expectation']"," Title: Why does the state-action value function, defined as an expected value of the reward and state value function, not need to follow a policy?Body: I often see that the state-action value function is expressed as:
+$$q_{\pi}(s,a)=\color{red}{\mathbb{E}_{\pi}}[R_{t+1}+\gamma G_{t+1} | S_t=s, A_t = a] = \color{blue}{\mathbb{E}}[R_{t+1}+\gamma v_{\pi}(s') |S_t = s, A_t =a]$$
+Why does expressing the future return at time $t+1$ as a state value function $v_{\pi}$ change the expected value under the policy into a general expected value?
+"
+"['neural-networks', 'training', 'optimization', 'hyperparameter-optimization', 'performance']"," Title: How many training runs are needed to obtain a credible value for performance?Body: I'm trying to optimize a neural network. For that, I'm changing parameters like the batch size, learning rate, weight initialization, etc.
+
+A neural network is not a deterministic algorithm, so, in each training run, I train the neural network from scratch and I stop it when it's fully converged.
+
+After training is complete, I calculate the performance of the neural network in a test dataset. The problem is, I trained the neural network from scratch 2 times with the same parameters, but the difference in performance was almost 5%, which is a BIG DIFFERENCE.
+
+So, what's the reasonable number of training runs to obtain a credible performance number of a neural network?
+"
+"['neural-networks', 'reinforcement-learning', 'convolutional-neural-networks', 'dqn']"," Title: How to convert sequences of images into state in DQN?Body: I recently read the DQN paper titled: Playing Atari with Deep Reinforcement Learning. My basic and rough understanding of the paper is as follows:
+
+You have two neural networks; one stays frozen for a duration of time steps and is used in the computation of the loss function with the neural network that is updating. The loss function is used to update the neural network using gradient descent.
+
+Experience replay is used, which basically creates a buffer of experiences. This buffer of experiences is randomly sampled and these random samples are used to update the non-frozen neural network.
+
+My question pertains to the DQN algorithm illustrated in the paper: Algorithm 1, more specifically lines 4 and 9 of this algorithm. My understanding, which is also mentioned early on in the paper, is that the states are actually sequences of the game-play frames. I want to know, since the input is given to a CNN, how would we encode these frames to serve as input to the CNN?
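+
+For reference, my understanding of the usual preprocessing is roughly the following (a rough sketch; the exact downsampling/crop in the paper differs slightly): convert each frame to grayscale, downsample it, and stack the last few frames as channels.
+
+import numpy as np
+import cv2
+from collections import deque
+
+def preprocess(frame):
+    # raw Atari frame (210, 160, 3) -> grayscale 84x84 in [0, 1]
+    gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
+    return cv2.resize(gray, (84, 84)).astype(np.float32) / 255.0
+
+frames = deque(maxlen=4)                    # phi(s): the last 4 preprocessed frames
+for _ in range(4):
+    frames.append(preprocess(np.zeros((210, 160, 3), dtype=np.uint8)))
+
+state = np.stack(frames, axis=0)            # shape (4, 84, 84), fed to the CNN as 4 channels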
+
+I also want to know since $s_{1}$ is equal to a set, which can be seen in line 4 of the algorithm, then why is $s_{t+1}$ equal to $s_{t}$, $a_{t}$, $x_{t+1}$?
+"
+"['reinforcement-learning', 'dqn', 'actor-critic-methods', 'reinforce', 'contextual-bandits']"," Title: Can I apply DQN or policy gradient algorithms in the contextual bandit setting?Body: I have a problem which I believe can be described as a contextual bandit.
+
+More specifically, in each round, I observe a context from the environment consisting of five continuous features, and, based on the context, I have to choose one of the ten available actions. The actions do not influence the next context.
+
+Based on the above I have the following questions:
+
+
+- Is this a contextual bandit or an MDP with a discount equal to zero (one step RL)? I have read that, in contextual bandits, we receive a different context for each action and I am a little bit confused.
+- Can I use the DQN algorithm with TD Target only the observed reward instead of the reward plus the predicted value of the next state?
+- Can I use a policy gradient algorithm, like REINFORCE or A2C? If yes, should I use a baseline and what this baseline should be?
+- I have seen in the literature that there are some algorithms for contextual bandits such as LinUCB, LinRel, NeuralBandit, etc. And I am wondering why the DQN, A2C and REINFORCE algorithms, which seem to work well in MDP setting, are not used in contextual bandits, given the fact that this problem can be described as an MDP with a discount equal to zero?
+
+"
+"['reinforcement-learning', 'policy-gradients', 'exploration-exploitation-tradeoff', 'upper-confidence-bound', 'thompson-sampling']"," Title: Should I use exploration strategy in Policy Gradient algorithms?Body: In policy gradient algorithms the output is a stochastic policy - a probability for each action.
+
+I believe that, if I follow the policy (i.e. sample an action from it), I already make use of exploration, because each action has a non-zero probability, so all actions will eventually be explored for a given state.
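+
+To clarify what I mean by ""following the policy"", a minimal sketch (the probabilities are just an example of a policy network's output for one state):
+
+import numpy as np
+
+policy_probs = np.array([0.7, 0.2, 0.1])
+action = np.random.choice(len(policy_probs), p=policy_probs)   # sampling, not argmax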
+
+Is it beneficial or is it common to use extra exploration strategies, like UCB, Thompson sampling, etc. with such algorithms?
+"
+"['reinforcement-learning', 'probability', 'probability-distribution', 'expectation', 'statistics']"," Title: How does $\mathbb{E}$ suddenly change to $\mathbb{E}_{\pi'}$ in this equation?Body: In Sutton-Barto's book on page 63 (81 of the pdf):
+$$\mathbb{E}[R_{t+1} + \gamma v_\pi(S_{t+1}) \mid S_t=s,A_t=\pi'(s)] = \mathbb{E}_{\pi'}[R_{t+1} + \gamma v_\pi(S_{t+1}) \mid S_{t} = s]$$
+
+How does $\mathbb{E}$ suddenly change to $\mathbb{E}_{\pi'}$, and why does the $A_t = \pi'(s)$ term disappear?
+
+Also, in general, in the conditional expectation, which distribution do we compute the expectation with respect to? From what I have seen, in $\mathbb{E}[X \mid Y]$, we always calculate the expected value over distribution $X$.
+"
+"['neural-networks', 'deep-learning', 'batch-normalization']"," Title: In Batch Normalisation, are $\hat{\mu}$, $\hat{\sigma}$ the mean and stdev of the original mini-batch or of the input into the current layer?Body: In Batch Normalisation, are the sample mean and standard deviation we normalise by the mean/sd of the original data put into the network, or of the inputs in the layer we are currently BN'ing over?
+
+For instance, suppose I have a mini-batch size of 2 which contains $\textbf{x}_1, \textbf{x}_2$. Suppose now we are at the $k$th layer and the outputs from the previous layer are $\tilde{\textbf{x}}_1,\tilde{\textbf{x}}_2$. When we perform batch norm at this layer, would we subtract the sample mean of $\textbf{x}_1, \textbf{x}_2$ or of $\tilde{\textbf{x}}_1,\tilde{\textbf{x}}_2$?
+
+My intuition tells me that it must be the mean and sd of $\tilde{\textbf{x}}_1,\tilde{\textbf{x}}_2$, otherwise I don't think the layer's inputs would be normalised to have mean 0 and sd 1.
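+
+To put my intuition in code (a toy sketch, not the real implementation):
+
+import numpy as np
+
+# x_tilde: the inputs arriving at the k-th layer, shape (batch_size, features), batch_size = 2
+x_tilde = np.array([[1.0, 2.0, 3.0],
+                    [3.0, 0.0, 1.0]])
+mu = x_tilde.mean(axis=0)                  # per-feature mean over the mini-batch
+sigma = x_tilde.std(axis=0) + 1e-5         # per-feature std (epsilon for stability)
+x_hat = (x_tilde - mu) / sigma             # zero mean, unit std within this mini-batch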
+"
+"['reinforcement-learning', 'q-learning', 'deep-rl', 'dqn', 'convergence']"," Title: If the minimum Q value is decreasing and the maximum Q value increasing, is this a sign that dueling double DQN is diverging?Body: I'm training a dueling double DQN agent with prioritized replay buffer and notice that the min Q values are decreasing, while the max Q values are increasing.
+
+Is this a sign that it is diverging?
+
+Or should be just be looking at the mean Q value, which has a slight uptrend?
+
+
+
+1 There are 2 different colored lines because the initial training (in orange) was stopped at around 1.3M time steps, and resumed (in blue) from a checkpoint at around 1.1M time steps
+
+2 Plots are from Tensorboard, visualizing data generated by Ray/RLlib
+
+3 Epsilon starts at 1.0 and anneals to 0.02 over 10000 time steps. The sudden increase in magnitude of Q-values appear to come after resuming from checkpoint, but might just be a coincidence.
+
+
+
+After training for more steps...
+
+
+"
+"['machine-learning', 'classification', 'algorithm', 'image-processing', 'time-complexity']"," Title: Is subsection generation $O(n^4)$Body: When I say template matching, I'm referring to finding occurrences of a small image (the template) in a larger image.
+
+The OpenCV library provides the trivial solution, which slides the template over every possible location of the image. While this implementation provides translation invariance, it provides no stretching/scaling invariance. To get both stretch and translation invariance, one would need to iteratively stretch the template (or shrink the main image), running the original template check over the image at each scale. This increases the complexity to $O(S \cdot n^2)$, where $S$ is the number of different resolutions one checks - if one wants to check every possible resolution, the overall complexity is $O(n^4)$. Effectively, you generate $O(n^4)$ subsections and check whether they're equal to the template.
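+
+For reference, the multi-scale variant I have in mind looks roughly like this (using OpenCV; the file names and scale range are placeholders):
+
+import cv2
+import numpy as np
+
+img = cv2.imread('scene.png', cv2.IMREAD_GRAYSCALE)
+template = cv2.imread('template.png', cv2.IMREAD_GRAYSCALE)
+
+best = None
+for scale in np.linspace(0.5, 2.0, num=20):                # S different resolutions
+    resized = cv2.resize(template, None, fx=scale, fy=scale)
+    if resized.shape[0] > img.shape[0] or resized.shape[1] > img.shape[1]:
+        continue
+    res = cv2.matchTemplate(img, resized, cv2.TM_CCOEFF_NORMED)  # the O(n^2) slide
+    _, max_val, _, max_loc = cv2.minMaxLoc(res)
+    if best is None or max_val > best[0]:
+        best = (max_val, max_loc, scale)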
+
+From what I was taught, Image Segmentation Networks do just this - however, instead of using the basic template matching (i.e. checking the pixels match) the generated subsection is put through a classifier network - so this would be more expensive than standard templating.
+
+
+
+My question is, are my calculations correct - for complete subsection generation, is the complexity $O(n^4)$ and is there really no better algorithm for generating these subsections - used by both image detection algorithms?
+"
+"['deep-learning', 'comparison', 'transfer-learning', 'one-shot-learning', 'fine-tuning']"," Title: What is the difference between one-shot learning, transfer learning and fine tuning?Body: Lately, there are lots of posts on one-shot learning. I tried to figure out what it is by reading some articles. To me, it looks like similar to transfer learning, in which we can use pre-trained model weights to create our own model. Fine-tuning also seems a similar concept to me.
+
+Can anyone help me and explain the differences between all three of them?
+"
+"['reinforcement-learning', 'definitions', 'value-functions', 'sutton-barto']"," Title: How do we express $q_\pi(s,a)$ as a function of $p(s',r|s,a)$ and $v_\pi(s)$?Body: The task (exercise 3.13 in the RL book by Sutton and Barto) is to express $q_\pi(s,a)$ as a function of $p(s',r|s,a)$ and $v_\pi(s)$.
+
+$q_\pi(s,a)$ is the action-value function, that states how good it is to be at some state $s$ in the Markov Decision Process (MDP), if at that state, we choose an action $a$, and after that action, the policy $\pi(s,a)$ determines future actions.
+
+Say that we are at some state $s$, and we choose an action $a$. The probability of landing at some other state $s'$ is determined by $p(s',r|s,a)$. Each new state $s'$ then has a state-value function that determines how good is it to be at $s'$ if all future actions are given by the policy $\pi(s',a)$, therefore:
+
+$$q_\pi(s,a) = \sum_{s' \in S} p(s',r|s,a) v_\pi(s')$$
+
+Is this correct?
+"
+"['reinforcement-learning', 'terminology', 'definitions', 'multi-armed-bandits']"," Title: How do I recognise a bandit problem?Body: I'm having difficulty understanding the distinction between a bandit problem and a non-bandit problem.
+
+An example of the bandit problem is an agent playing $n$ slot machines with the goal of discovering which slot machine is the most probable to return a reward. The agent learns to find the best strategy of playing and is allowed to pull the lever of one slot machine per time step. Each slot machine obeys a distinct probability of winning.
+
+In my interpretation of this problem, there is no notion of state. The agent potentially can utilise the slot results to determine a state-action value? For example, if a slot machine pays when three apples are displayed, this is a higher state value than a state value where three apples are not displayed.
+
+Why is there just one state in the formulation of this bandit problem? And is there only one action, namely ""pulling the slot machine lever""? The slot machine action is to pull the lever, which starts the game.
+
+I am taking this a step further now. An RL agent purchases $n$ shares of an asset, and it's not observable whether the purchase will influence the price. The next state is the price of the asset after the purchase of the shares. If $n$ is sufficiently large, then the price will be affected; otherwise, there is a minuscule effect, if any, on the share price. Depending on the number of shares purchased at each time step, it's either a bandit problem or not.
+
+It is not a bandit problem if $n$ is large and the share price is affected? It is a bandit problem if $n$ is small and the share price is not affected?
+
+Does it make sense to have a mix of a bandit and non-bandit states for a given RL problem? If so, then the approach to solving should be to consider the issue in its entirety as not being a bandit problem?
+"
+"['comparison', 'constraint-satisfaction-problems', 'linear-programming']"," Title: What are the differences between constraint satisfaction problems and linear programming?Body: I have taken an algorithms course where we talked about LP significantly, and also many reductions to LPs. As I recall, normal LP is not NP-Hard. Integer LP is NP-Hard. I am currently taking an introduction to AI course, and I was wondering if CSP is the same as LP.
+
+There seems an awful lot of overlap, and I haven't been able to find anything that confirms or denies my suspicions.
+
+If they are not the same (or one cannot be reduced to the other), what are the core differences in their concepts?
+"
+"['deep-learning', 'reinforcement-learning', 'dqn']"," Title: Should the network weights converge when training Deep Q networks?Body: I have two sets of data, one training and one test set. I use the train set to train the deep Q network model variant. I also continuously evaluate the agent's Q values on the test set every 5000 epochs, and I find that the Q values on the test set do not converge, and neither do the policies.
+
+iteration $x$: Q values for the first 5 test data are [15.271439, 13.013742, 14.137051, 13.96463, 11.490129] with policies: [15, 0, 0, 0, 15]
+
+iteration $x+10000$: Q values for the first 5 test data are [15.047309, 15.5233555, 16.786497, 16.100864, 13.066223] with policies: [0, 0, 0, 0, 15]
+
+This means that the weights of the neural network are not converging. Although I can manually test each policy at each iteration and decide which of the policies performs best, I would like to know if correct training of the network would lead to weight convergence?
+
+Training loss plot:
+
+
+You can see that the loss decreases over time however, there are occasional spikes in the loss which does not seem to go away.
+"
+"['machine-learning', 'deep-learning', 'computer-vision']"," Title: Action recognition using video stream dataBody: Recently, I have been working on an action recognition project where my input data comes from a video stream. I have read about concepts like ConvLSTM (convolutional LSTM), etc. I am looking for someone who has already done this kind of work and can share it with me; that would be a really big help.
+"
+"['long-short-term-memory', 'objective-functions', 'weights', 'loss', 'scalability']"," Title: LSTM - MAPE Loss Function gives Better Results when Data is De-Scaled before Loss CalculationBody: I am building an LSTM for predicting a price chart. MAPE resulted in the best loss function compared to MSE and MAE. The MAPE function looks like this:
+
+$$\frac{1}{N}\sum^N_{i=0}\frac{|P_i - A_i|}{A_i}$$
+
+Where $P_i$ is the current predicted value and $A_i$ is the corresponding actual value. In neural networks, it is always advised to scale the data to a small range close to zero, such as [0, 1]. In this case, a scaling range of [0.001, 1] is imperative to avoid a possible division by zero.
+
+Due to the MAPE denominator, the closer the scaling range is to zero, the larger the loss becomes for a given $|P_i - A_i|$. If, on the other hand, the data is de-scaled just before it is fed into the MAPE function, the same $|P_i - A_i|$ gives a smaller MAPE.
+
+Consider a hypothetical example with a batch size of 1, $|P_i - A_i| = 2$ (this is scale independent) and $A_i = 200$, so the scaled $A_i = 0.04$. The MAPE loss for the scaled version would be $\frac{2}{0.04} = 50$, and for the unscaled version $\frac{2}{200} = 0.01$.
+
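+A minimal NumPy sketch of the arithmetic in this example (the prediction values are made up so that $|P_i - A_i| = 2$ in both cases):
+
+import numpy as np
+
+def mape(pred, actual):
+    # MAPE as defined above: mean of |P_i - A_i| / A_i
+    return np.mean(np.abs(pred - actual) / actual)
+
+print(mape(np.array([202.0]), np.array([200.0])))   # unscaled: 2 / 200  = 0.01
+print(mape(np.array([2.04]),  np.array([0.04])))    # scaled A_i, same |P_i - A_i| = 2: 2 / 0.04 = 50
+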
+This will mean that the derivative w.r.t each weight of the scaled version will also be larger, therefore making the weights themselves even smaller. Is this correct?
+
+I am concluding that scaling the data when using MAPE will effectively shrink the weights down more than necessary. Is that a good reason why I am seeing significantly better performance with the de-scaled MAPE calculation?
+
+Note: I am not keeping the same hyperparameters for the scaled and de-scaled MAPE runs; a Bayesian optimization is performed for both. In the de-scaled run a deeper network was preferred, while in the scaled MAPE run more regularisation was preferred.
+
+Some expertise on this would be helpful.
+"
+"['comparison', 'optimization', 'simulated-annealing', 'deterministic-annealing']"," Title: What is the difference between simulated annealing and deterministic annealing?Body: Not sure if this is the right place, but I was wondering if someone could briefly explain to me the differences & similarities between simulated annealing and deterministic annealing?
+
+I know that both methods are used for optimization and both originate from statistical physics with the intuition of reaching a minimum energy (cost) configuration by cooling (i.e. slowly reducing the temperature in the Boltzmann distribution to calculate probabilities for configurations).
+
+Unfortunately, Wikipedia has no article about deterministic annealing and the one about simulated annealing does not mention any comparison.
+
+This resource has a brief comparison section between the two methods, however, I do not understand why the search strategy of DA is
+
+
+ based on the steepest descent algorithm
+
+
+and how
+
+
+ it searches the local minimum deterministically at each temperature.
+
+
+Any clarification appreciated.
+"
+"['neural-networks', 'reference-request', 'resource-request', 'capsule-neural-network']"," Title: Is there a tutorial for beginners on capsule neural networks?Body: I am interested in capsule neural networks. I have already read the paper Dynamic Routing Between Capsules, but it is a little bit difficult to follow. Is there a tutorial for beginners on capsule neural networks?
+"
+"['machine-learning', 'generative-adversarial-networks', 'probability', 'probability-distribution']"," Title: Is the generator distribution in GAN's continuous or discrete?Body: I have some trouble with the probability densities described in the original paper. My question is based on Goodfellow's paper and tutorial, respectively: Generative Adversarial Networks and NIPS 2016 Tutorial: Generative Adversarial Networks.
+
+When Goodfellow et al. talk about probability distributions/densities in their paper, are they talking about discrete or continuous probability distributions? I don't think it's made clear.
+
+In the continuous case, it would imply, for instance, that both $p_{data}$ and $p_g$ must be differentiable since the optimal discriminator (see Prop. 1) is essentially a function of their ratio and is assumed to be differentiable. Also, the existence of a continuous $p_g$ is non-trivial. One sufficient condition would be that $G$ is a diffeomorphism (see normalising flows), but this is rarely the case. So it seems that much stronger assumptions are needed.
+
+In the case that the answer is discrete distributions: the differentiability of $G$ implies continuous outputs of the generator. How can this work together with a discrete distribution of its outputs? Does the answer have something to do with the fact that we can only represent a finite set of numbers with computers anyway?
+"
+"['computer-vision', 'hyperparameter-optimization', 'feature-extraction']"," Title: How to choose a suitable threshold value for the Shi-Tomasi corner detection algorithm?Body: While implementing the Shi-Tomasi corner detection algorithm, I got stuck in deciding a suitable threshold for corner detection.
+In the Shi-Tomasi algorithm, all those points that qualify $\min( \lambda_1, \lambda_2) > \text{threshold}$ are considered as corner points. (where $\lambda_1, \lambda_2$ are eigenvalues).
+My question is: what is a suitable criterion to decide that threshold?
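+
+For reference, OpenCV's Shi-Tomasi implementation exposes this as a relative threshold (qualityLevel, a fraction of the strongest corner response in the image), e.g.:
+
+import cv2
+
+img = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)   # placeholder input image
+# keep corners whose min eigenvalue is at least 1% of the best corner's measure
+corners = cv2.goodFeaturesToTrack(img, maxCorners=100, qualityLevel=0.01, minDistance=10)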
+"
+"['reinforcement-learning', 'policy-gradients', 'rewards']"," Title: Non-differentiable reward function to update a neural networkBody: In Reinforcement Learning, when reward function is not differentiable, a policy gradient algorithm is used to update the weights of a network. In the paper Neural Architecture Search with Reinforcement Learning they use accuracy of one neural network as the reward signal then choose a policy gradient algorithm to update weights of another network.
+
+I cannot wrap my head around the concept of accuracy as a non-differentiable reward function. Do we need to find the function and then check if it is mathematically non-differentiable?
+
+I was wondering if I can use another value, for example silhouette score (in a different scenario) as the reward signal?
+"
+"['reinforcement-learning', 'value-functions', 'notation']"," Title: Why are the value functions sometimes written with capital letters and other times with lower-case letters?Body: Why are the state-value and action-value functions sometimes written in lower-case letters and other times in capitals? For instance, why, in the Q-learning algorithm (page 131 of Sutton and Barto's book, but not only there), are capitals used, $Q(S, A)$, while in the Bellman equation it is $q(s,a)$?
+"
+"['python', 'computer-vision', 'feature-extraction']"," Title: Corner detection algorithm gives very high value for slanted edges?Body: I have tried implementing a basic version of the Shi-Tomasi corner detection algorithm. The algorithm works fine for corners, but I came across a strange issue: the algorithm also gives high values for slanted (tilted) edges.
+
+Here's what i did
+
+
+- Took gray scale image
+- computed dx and dy of the image by convolving it with sobel_x and sobel_y
+- Took a 3x3 window and moved it across the image to compute the sum of the elements in the window.
+- computed sum of the window elements from the dy image and sum of window elements from the dx image and saved it in sum_xx and sum_yy.
+- created a new image (call it result) where the pixel for which the window sum was computed was replaced with min(sum_xx, sum_yy), as the Shi-Tomasi algorithm requires.
+
+
+I expected it to give maximum values at corners, where dx and dy are both high, but I found it giving high values even for tilted edges.
+
+Here are some of the outputs I received:
+
+
+Result:
+
+
+so far so good, corners have high values.
+
+Another Image:
+
+
+Result:
+
+
+Here's where the problem lies: edges have high values, which is not expected by the algorithm. I can't fathom how edges can have high values for both x and y gradients (Sobel being a close approximation of the gradient).
+
+I would like to ask for your help in fixing this issue for edges. I am open to any suggestions and ideas.
+
+Here's my code (if it helps):
+
+
+
+import cv2
+
+def shi_tomasi(image, w_size):
+    ans = image.copy()
+    dy, dx = sy(image), sx(image)
+
+    ofset = int(w_size/2)
+    for y in range(ofset, image.shape[0]-ofset):
+        for x in range(ofset, image.shape[1]-ofset):
+
+            s_y = y - ofset
+            e_y = y + ofset + 1
+
+            s_x = x - ofset
+            e_x = x + ofset + 1
+
+            # windows of Sobel responses around (y, x)
+            w_Ixx = dx[s_y: e_y, s_x: e_x]
+            w_Iyy = dy[s_y: e_y, s_x: e_x]
+
+            sum_xx = w_Ixx.sum()
+            sum_yy = w_Iyy.sum()
+
+            ans[y][x] = min(sum_xx, sum_yy)
+    return ans
+
+def sy(img):
+    t = cv2.Sobel(img, cv2.CV_8U, 0, 1, ksize=3)
+    return t
+
+def sx(img):
+    t = cv2.Sobel(img, cv2.CV_8U, 1, 0, ksize=3)
+    return t
+
+"
+"['reinforcement-learning', 'policy-gradients', 'proofs', 'reward-to-go']"," Title: What is the proof that ""reward-to-go"" reduces variance of policy gradient?Body: I am following the OpenAI's spinning up tutorial Part 3: Intro to Policy Optimization. It is mentioned there that the reward-to-go reduces the variance of the policy gradient. While I understand the intuition behind it, I struggle to find a proof in the literature.
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks', 'computer-vision', 'time-series']"," Title: How to calibrate model's prediction given past images?Body: I want to predict how open is the mouth given a face image. It's a regression problem (0= mouth not open, 1=mouth completely open). And something between 0 and 1 is also allowed. ConvNet works fine for one person. But when I train with many people with hope that it will generalize to an unseen person, the model suffers from not knowing the limit of a person's mouth.
+
+For example, if a new person uses the model to predict, the model doesn't have a clue whether this person has completely opened the mouth or not. Because it's hard to know how much a person can open the mouth from one image. People's mouth openness capability is not the same. Some guys cannot open their mouth that much, but some guys can open the mouth like they can swallow an apple. The only way you can know how much a person can open the mouth is to look at multiple images of their mouth movements, especially when they open the mouth completely.
+
+I want to know how to make the model know the limit of a person's mouth by using the info from past images.
+
+Is there a way for me to use a few unlabeled images of a new person in order to help the model calibrate its prediction? How do I do it?
+
+This should help the model know the min/max of the person's mouth and also knows the intermediate values between 0 and 1. If you run the model continuously on a webcam, I expect the prediction to be smooth (not noisy).
+
+My idea is to encode many images into an embedding that can be used as a calibration vector. The vector will be fed into the model along with the person's image. But I am not sure how to do it.
+Any suggestions/tutorials would be welcomed.
+"
+"['deep-learning', 'recurrent-neural-networks', 'long-short-term-memory']"," Title: Do you have to add a dense layer onto the final layer of an LSTM?Body: If my understanding of an LSTM is correct then the output from each LSTM unit is the hidden state from that layer. For the final layer if I wanted to predict e.g. a scalar real number, would I want to add a dense layer with 1 neuron or is it recommended to have a final LSTM layer where the output has just one hidden unit (i.e. the output dimension of the final hidden state is 1)? If we didn't add the dense layer, then the output from the hidden layer I believe would be between (-1,1), if you use the traditional activations in the LSTM unit.
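+
+To make the first option concrete, this is the variant I have in mind (a rough Keras sketch; timesteps and features are placeholders):
+
+import tensorflow as tf
+
+timesteps, features = 30, 8
+model = tf.keras.Sequential([
+    tf.keras.layers.LSTM(32, input_shape=(timesteps, features)),  # returns the final hidden state
+    tf.keras.layers.Dense(1)                                      # linear head, so the scalar output is unbounded
+])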
+
+Apologies if I've used the wrong terminology; there seems to be some inconsistency with LSTMs between the literature and the definitions in TensorFlow, etc.
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'tensorflow', 'generative-adversarial-networks']"," Title: How GAN generator produce integer RGB colored picture?Body: For traditional neural networks, I know that we can't constraint the output to be strict integers. My question is what technique does GANs use to produce integer outputs, that can be then converted to RGB colored pictures?
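+
+For example, one convention I have seen (I am not sure it is the only one) is to let the generator output floats in [-1, 1] via tanh and only quantise to integers when converting to an image:
+
+import numpy as np
+
+# g_out: generator output with tanh activation, shape (H, W, 3), values in [-1, 1]
+g_out = np.random.uniform(-1.0, 1.0, size=(64, 64, 3)).astype(np.float32)   # stand-in for a real generator output
+img = ((g_out + 1.0) * 127.5).round().clip(0, 255).astype(np.uint8)         # integer RGB picture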
+"
+"['natural-language-processing', 'comparison', 'chat-bots', 'natural-language-understanding']"," Title: When to use NLP, NLG and NLU in conversation agents?Body: I have read some blogs (like 1, 2 or 3) about the differences between the three. I am trying to build an open-domain conversation agent using natural language AI. The agent should be able to carry on casual conversation, like a friend. So, for that, I want to know the importance of NLP, NLG, and NLU, so that I can learn the right part first.
+"
+"['comparison', 'terminology', 'knowledge-representation', 'knowledge-graph', 'knowledge-base']"," Title: What are the differences between a knowledge base and a knowledge graph?Body: During my readings, I have seen many authors using the two terms interchangeably, i.e. as if they refer to the same thing. However, we all know about Google's early use of the term "knowledge graph" to refer to their new way of making use of their knowledge base. Afterward, other companies started claiming to use knowledge graphs.
+What are the technical differences between the two? Concrete examples will be very useful to understand better the nuances.
+"
+"['reinforcement-learning', 'algorithm', 'learning-algorithms']"," Title: Can Reinforcement Learning be used for UAV waypoint control?Body: I want to make a drone that can follow static and dynamic waypoints. I am a total beginner in the drone field, so I can't figure out whether I should use Reinforcement Learning or some other learning method to make the drone follow both static and dynamic waypoints. If RL is the best choice for the task, then how would I go about training the model and uploading it to the flight controller? And if RL is not required, then what should I use in order to achieve this task?
+
+Please let me know how I should begin with this task.
+"
+"['neural-networks', 'convolutional-neural-networks', 'comparison', 'feedforward-neural-networks']"," Title: Can we achieve what a CNN can do with just a normal neural network?Body: When I was learning about neural networks, I saw that a complex neural network can understand the MNIST dataset and a simple convolution network can also understand the same. So I would like to know if we can achieve a CNN's functionality with just using a simple neural network without the convolution layer and if we can then how to convert a CNN into an ANN.
+"
+"['neural-networks', 'machine-learning', 'papers']"," Title: Why can't neural networks be applied to preference learning problems?Body: In section 6.1 of the paper Neural Networks in Economics, the authors say
+
+
+ this leads to the problem, that no risk can be formulated which shall be minimized by a Neural Network learning algorithm.
+
+
+So, why can't neural networks be applied to preference learning problems?
+
+See section 6 of the same paper for a definition of preference learning.
+"
+"['machine-learning', 'reinforcement-learning', 'q-learning', 'dqn', 'experience-replay']"," Title: My Double DQN with Experience Replay produces a no-action decision most of the time. Why?Body: I've written a Double DQN-based stock trading bot using mainly time series stock data. The internal network of the Double DQN is a LSTM which handles the time series data. An Experience Replay buffer is also used. The objective function is cumulative stock return over the test period. My epsilon used for exploration is 0.1 (which I think is already very high).
+
+My trading bot has a very simple action space, trade or no-trade.
+
+-- When it decides to trade, it sends a signal to buy and own a stock for a day. I'd get a positive return if stock price has gone up from today to tomorrow; equally would get a negative return if stock price has gone down.
+
+-- When it decides to not trade, I own no stock and the daily return is 0 because there is no trading.
+Strangely, my algorithm gives a daily 'no trade' signal most of the time when I run the algo through a number of different test periods.
+
+Very often, after giving a 'no trade' signal for many days, the algo would finally give a 'trade' signal but the next day reverse back to giving 'no trade' right away.
+
+My questions:
+
+Why am I getting this phenomenon? Most importantly, what can I do to make the algo not stuck in giving out 'no trade' signal most of the time?
+"
+"['machine-learning', 'convolutional-neural-networks', 'computer-vision', 'image-segmentation', 'fully-convolutional-networks']"," Title: What is a fully convolution network?Body: I was surveying some literature related to Fully Convolutional Networks and came across the following phrase,
+
+
+ A fully convolutional network is achieved by replacing the parameter-rich fully connected layers in standard CNN architectures by convolutional layers with $1 \times 1$ kernels.
+
+
+I have two questions.
+
+
+- What is meant by parameter-rich? Is it called parameter rich because the fully connected layers pass on parameters without any kind of ""spatial"" reduction?
+- Also, how do $1 \times 1$ kernels work? Doesn't a $1 \times 1$ kernel simply mean that one is sliding a single pixel over the image? I am confused about this (see the sketch after this list).
+
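+To illustrate the second point, a small sketch (Keras, names are mine): a $1 \times 1$ convolution acts like a fully connected layer applied independently at every spatial position, so the layer keeps a spatial output and its parameter count does not depend on the input size:
+
+import tensorflow as tf
+
+inputs = tf.keras.Input(shape=(None, None, 512))                      # any spatial size
+outputs = tf.keras.layers.Conv2D(filters=10, kernel_size=1)(inputs)   # 512*10 + 10 = 5130 params
+fcn_head = tf.keras.Model(inputs, outputs)
+fcn_head.summary()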
+"
+"['machine-learning', 'deep-learning', 'natural-language-processing', 'deep-neural-networks', 'bert']"," Title: Can we use a pre-trained Encoder (BERT, XLM) with a Decoder (GPT, Transformer-XL) to build a Chatbot instead of Language Translation?Body: I was wondering if the BERT or T5 models can do the task of generating sentences in English. Most of the models I have mentioned are trained to translate from English to German or French. Is it possible to use the output of BERT as an input to my decoder? My theory is that, since I already have the trained embeddings, I do not need to train the encoder part; I can just feed the encoder outputs of a sentence to the decoder to generate the reply.
+Instead of computing the loss against a translated version, can I compute the loss against the reply to a sentence?
+Can someone point me toward a tutorial where I can use the BERT output for the decoder part? I have conversation data and I want to build a chatbot from it.
+I have already implemented an LSTM-based sequence-to-sequence model, but it is not giving me satisfactory answers.
+After some research, I found two models, T5 and BART, which are based on the same idea.
+If possible, can someone tell me how I can use BART or T5 to make a conversational bot?
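+
+For reference, this is the kind of warm start I have in mind (a rough, untested sketch with the HuggingFace transformers library; the cross-attention weights are newly initialised and would still have to be trained on my conversation data):
+
+from transformers import BertTokenizer, EncoderDecoderModel
+
+tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
+model = EncoderDecoderModel.from_encoder_decoder_pretrained(
+    'bert-base-uncased', 'bert-base-uncased')   # pre-trained BERT encoder + BERT used as decoder
+# training would then pair each input utterance with its reply instead of a translation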
+"
+"['deep-learning', 'convolutional-neural-networks', 'computer-vision', 'alexnet']"," Title: What is the reason for different learned features in upper and lower half in AlexNet?Body: I was reading AlexNet paper and the authors quoted
+
+
+ the kernels on one GPU were ""largely color agnostic,"" whereas the kernels on the other GPU were largely ""color-specific.""
+
+
+The upper GPU operates on the filters in the top half and the lower GPU deals with the lower half. But what is the reason for each of them learning a different set of features, i.e. the top half of the kernels mostly learning edges and the bottom kernels learning color variation? Is there any reason behind it?
+
+
+"
+"['neural-networks', 'reinforcement-learning', 'q-learning', 'dqn', 'deep-rl']"," Title: When we use a neural network to approximate the Q values, is the Q target a single value?Body: I have two questions
+
+
+- When we use our network to approximate our Q values, is the Q target a single value?
+- During backpropagation, when the weights are updated, does it automatically update the Q values, shouldn’t the state be passed in the network again to update it?
+
+"
+"['deep-learning', 'tensorflow', 'speech-recognition', 'speech-synthesis', 'transpose-convolution']"," Title: Studying the speech-generation model and have a question about the confusing nature of model inputs and outputsBody: I am currently studying the speech-generation model known as WaveNet, by Google, using the linked original paper and this implementation.
+I find the model's inputs and outputs very confusing, and some of the layer dimensions don't seem to match what I understood from the WaveNet paper; or am I misunderstanding something?
+
+- What is the input to the WaveNet? Isn't this a mel-spectrum input and not just 1 floating point value for raw audio? E.g. the input kernel layer shows as shaped 1x1x128. Isn't the input to the input_convolution layer the mel-spectrum frames, which are 80 float values * 10,000 max_decoder_steps, so the in_channels for this conv1d layer should be 80 instead of 1?
+
+
+inference/input_convolution/kernel:0 (float32_ref 1x1x128) [128, bytes: 512]
+
+
+- Is there reason for upsampling stride values to be [11, 25], like are the specific numbers 11 and 25 special or relevant in affecting other shapes/dimensions?
+
+inference/ConvTranspose1D_layer_0/kernel:0 (float32_ref 1x11x80x80) [70400, bytes: 281600]
+inference/ConvTranspose1D_layer_1/kernel:0 (float32_ref 1x25x80x80) [160000, bytes: 640000]
+
+
+- Why is the input-channels in residual_block_causal_conv 128 and residual_block_cin_conv 80? What exactly is their inputs? (e.g. is it mel-spectrum or just a raw floating point value?) Is the wavenet-vocoder generating just 1 float value per 1 input mel-spectrum frame of 80 floats?
+
+inference/ResidualConv1DGLU_0/residual_block_causal_conv_ResidualConv1DGLU_0/kernel:0 (float32_ref 3x128x256) [98304, bytes: 393216]
+inference/ResidualConv1DGLU_0/residual_block_cin_conv_ResidualConv1DGLU_0/kernel:0 (float32_ref 1x80x256) [20480, bytes: 81920]
+
+I was able to print the whole WaveNet network using print(tf.trainable_variables()), but the model still seems very confusing.
+"
+"['machine-learning', 'python', 'decision-trees', 'scikit-learn']"," Title: Why isn't my decision tree classifier able to solve the XOR problem properly?Body: I was trying to solve an XOR problem, and the dataset seems like the one in the image.
+
+
+
+I plotted the tree and got this result:
+
+
+
+As I understand, the tree should have depth 2 and four leaves. The first comparison is annoying, because it is close to the right x border (0.887). I've tried other parameterizations, but the same result persists.
+
+I used the code below:
+
+import matplotlib.pyplot as plt
+from sklearn import tree
+from sklearn.tree import DecisionTreeClassifier
+
+# X, y: the XOR dataset shown in the image above
+clf = DecisionTreeClassifier(criterion='gini')
+clf = clf.fit(X, y)
+
+fn=['V1','V2']
+
+fig, axes = plt.subplots(nrows = 1,ncols = 1,figsize = (3,3), dpi=300)
+
+tree.plot_tree(clf, feature_names = fn, class_names=['1', '2'], filled = True);
+
+
+I would be grateful if anyone could help me clarify this issue.
+"
+"['reinforcement-learning', 'temporal-difference-methods', 'value-iteration', 'expectation']"," Title: Problem in understanding equation given for convergence of TD(n) algorithmBody: Given equation 7.3 of Sutton and Barto's book for convergence of TD(n):
+
+
+ $\max_s|\mathbb{E}_\pi[G_{t:t+n}|S_t = s] - v_\pi(s)| \leqslant \gamma^n \max_s|V_{t+n-1}(s) - v_\pi(s)|$
+
+
+$\textbf{PROBLEM 1}$ : Why is this error $|\mathbb{E}_\pi[G_{t:t+n}|S_t = s] - v_\pi(s)|$ compared with the error $|V_{t+n-1}(s) - v_\pi(s)|$.
+
+There can be two other logical comparisons for the convergence of algorithm($TD(n)$):
+
+1) We could compare $|V_{t+n-1}(s) - v_\pi(s)|$ with $|V_{t+n-2}(s) - v_\pi(s)|$, i.e. check that $V_{t+n-1}(s)$ is closer to $v_\pi(s)$ than $V_{t+n-2}(s)$ is.
+
+2) We can also compare $|\mathbb{E}_\pi[G_{t:t+n}|S_t = s] - v_\pi(s)|$ with $|\mathbb{E}_\pi[V_{t+n-1}(S_t)|S_t = s] - v_\pi(s)|$ to show that $\mathbb{E}_\pi[G_{t:t+n}|S_t = s]$ is better than $\mathbb{E}_\pi[V_{t+n-1}(S_t)|S_t = s]$ hence moving $V_{t+n-1}(S_t)$ towards $G_{t:t+n}$(as done in eq 7.2) can lead to convergence.
+
+$\textbf{PROBLEM 2}$: Are the above two methods of comparison for testing convergence correct?
+
+Equations for reference:
+
+Eq 7.1: $G_{t:t+n} = R_{t+1} + \gamma R_{t+2} +......+\gamma^{n-1}R_{t+n} + \gamma^{n}V_{t+n-1}(S_{t+n})$
+
+Eq 7.2: $V_{t+n}(S_t) = V_{t+n-1}(S_t) + \alpha [G_{t:t+n} - V_{t+n-1}(S_t)]$
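+
+For my own reference, a small sketch of what I understand equations 7.1 and 7.2 to compute for a single update (tabular case, names are mine):
+
+def n_step_return(rewards, V, s_t_plus_n, gamma):
+    # rewards = [R_{t+1}, ..., R_{t+n}], V = current value estimates (eq. 7.1)
+    G = sum(gamma**k * r for k, r in enumerate(rewards))
+    return G + gamma**len(rewards) * V[s_t_plus_n]
+
+def n_step_update(V, s_t, G, alpha):
+    V[s_t] += alpha * (G - V[s_t])        # eq. 7.2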
+"
+"['reinforcement-learning', 'temporal-difference-methods', 'importance-sampling', 'return']"," Title: Why does the n-step return being zero result in high variance in off policy n-step TD?Body: In the paragraph given between eq 7.12 and 7.13 in Sutton & Barto's book:
+
+$G_{t:h} = R_{t+1} + \gamma G_{t+1:h}, \quad t < h < T$
+where $G_{h:h} = V_{h-1}(S_h)$. (Recall that this return is used at time h, previously denoted t + n.) Now consider the effect of following a behavior policy $b$ that is not the same as the target policy $\pi$. All of the resulting experience, including the first reward $R_{t+1}$ and the next state $S_{t+1}$ must be weighted by the importance sampling ratio for time $t, \rho_t = \frac{\pi(A_t|S_t)}{b(A_t|S_t)}$ . One might be tempted to simply weight the righthand side of the above equation, but one can do better. Suppose the action at time t would never be selected by $\pi$, so that $\rho_t$ is zero. Then a simple weighting would result in the n-step return being zero, which could result in high variance when it was used as a target.
+
+Why does the n-step return being zero results in high variance?
+Also, why is the experience weighted by $\rho_t$, it should be weighted by $\rho_{t:t+h}$?
+"
+"['generative-adversarial-networks', 'bayesian-deep-learning', 'bayesian-statistics', 'game-gan']"," Title: Will adding memory to a supervised learning system make it into a Bayesian learning system?Body: Seung Wook Kim et al. recently published the GameGAN paper. GameGAN learned and stored the whole Pac-Man game and was able to reproduce it without a game engine. The uniqueness of GameGAN is that it adds memory to its discriminator/generator, which helps it store the game states.
+
+
+
+In Bayesian interpretation, a supervised learning system learns by optimizing weights which maximizes a likelihood function.
+
+$$\hat{\boldsymbol{\theta}} = \mathop{\mathrm{argmax}} _ {\boldsymbol{\theta}} P(X \mid \boldsymbol{\theta})$$
+
+Will adding memory which can store prior information make GameGAN a Bayesian learning system?
+
+Can GameGAN or a similar neural network with memory be considered a Bayesian learning system? If yes, then which of these two equations (or some other) correctly describes such a system (considering the prior as memory)?
+
+
+- $$\mathop{\mathrm{argmax}} \frac{P(X \mid \boldsymbol{\theta})P(\boldsymbol{\theta})}{P(X)}$$
+
+
+or
+
+
+- $$\mathop{\mathrm{argmax}} \frac{P(X_t \mid \boldsymbol{X^{t+1}})P(\boldsymbol{X^{t+1}})}{P(X_t)}$$
+
+
+PS: I understand GANs are unsupervised learning systems, but we can view the discriminator and generator models separately, each trying to find weights that maximize its individual likelihood function.
+"
+"['papers', 'hyperparameter-optimization', 'online-learning', 'bayesian-networks', 'probabilistic-graphical-models']"," Title: Deriving hyperparameter updates in Online Interactive Collaborative FilteringBody: I've been going through ""Online Interactive Collaborative Filtering Using Multi-Armed Bandit with Dependent Arms"" by Wang et al. and am unable to understand how the update equations for the hyperparameters (section 4.3, equation set (23)) were derived. I'd deeply appreciate it if anyone could provide a full or partial derivation of the updates. Any general suggestions regarding how to proceed with the derivation would also be appreciated.
+
+ICTR Graphical Model
+
+
+
+The variables are sampled as below
+
+$$\mathbb{p}_m|\lambda \sim \text{Dirichlet}(\lambda)$$
+
+$$\sigma^2_n|\alpha,\beta \sim \text{Inverse-Gamma}(\alpha,\beta)$$
+
+$$\mathbb{q}_n |\mu_{\mathbb{q}}, \Sigma_{\mathbb{q}}, \sigma_n^2 \sim \mathcal{N}(\mu_{\mathbb{q}}, \sigma_n^2\Sigma_{\mathbb{q}})$$
+
+$$\mathbb{\Phi}_k |\eta \sim \text{Dirichlet}(\eta)$$
+
+$$z_{m,t} | \mathbb{p}_m \sim \text{Multinomial}(\mathbb{p}_m)$$
+
+$$x_{m,t} | \mathbb{\Phi}_k \sim \text{Multinomial}(\mathbb{\Phi}_k) $$
+
+$$y_{m,t} \sim \mathcal{N}(\mathbb{p}_m^T\mathbb{q}_n, \sigma_n^2)$$
+
+And the update equations are below
+
+
+"
+"['reinforcement-learning', 'off-policy-methods', 'expectation', 'importance-sampling', 'conditional-probability']"," Title: How is per-decision importance sampling derived in Sutton & Barto's book?Body: In per-decison importance sampling given in Sutton & Barto's book:
+
+
+ Eq 5.12 $\rho_{t:T-1}R_{t+k} = \frac{\pi(A_{t}|S_{t})}{b(A_{t}|S_{t})}\frac{\pi(A_{t+1}|S_{t+1})}{b(A_{t+1}|S_{t+1})}\frac{\pi(A_{t+2}|S_{t+2})}{b(A_{t+2}|S_{t+2})}......\frac{\pi(A_{T-1}|S_{T-1})}{b(A_{T-1}|S_{T-1})}R_{t+k}$
+
+ Eq 5.13 $\mathbb{E}\left[\frac{\pi(A_{k}|S_{k})}{b(A_{k}|S_{k})}\right] = \displaystyle\sum_ab(a|S_k)\frac{\pi(A_{k}|S_{k})}{b(A_{k}|S_{k})} = \displaystyle\sum_a\pi(a|S_k) = 1$
+
+ Eq.5.14 $\mathbb{E}[\rho_{t:T-1}R_{t+k}] = \mathbb{E}[\rho_{t:t+k-1}R_{t+k}]$
+
+
+As full derivation is not given, how do we arrive at Eq 5.14 from 5.12?
+
+From what I understand:
+
+1) $R_{t+k}$ is only dependent on action taken at $t+k-1$ given state at that time i.e. only dependent on $\frac{\pi(A_{t+k-1}|S_{t+k-1})}{b(A_{t+k-1}|S_{t+k-1})}$
+
+2) $\frac{\pi(A_{k}|S_{k})}{b(A_{k}|S_{k})}$ is independent of $\frac{\pi(A_{k+1}|S_{k+1})}{b(A_{k+1}|S_{k+1})}$ , so $\mathbb{E}\left[\frac{\pi(A_{k}|S_{k})}{b(A_{k}|S_{k})}\frac{\pi(A_{k+1}|S_{k+1})}{b(A_{k+1}|S_{k+1})}\right] = \mathbb{E}\left[\frac{\pi(A_{k}|S_{k})}{b(A_{k}|S_{k})}\right]\mathbb{E}\left[\frac{\pi(A_{k+1}|S_{k+1})}{b(A_{k+1}|S_{k+1})}\right], \forall \, k\in [t,T-2]$
+
+Hence, $\mathbb{E}[\rho_{t:T-1}R_{t+k}]= \mathbb{E}\left[\frac{\pi(A_{t}|S_{t})}{b(A_{t}|S_{t})}\frac{\pi(A_{t+1}|S_{t+1})}{b(A_{t+1}|S_{t+1})}\frac{\pi(A_{t+2}|S_{t+2})}{b(A_{t+2}|S_{t+2})}......\frac{\pi(A_{T-1}|S_{T-1})}{b(A_{T-1}|S_{T-1})}R_{t+k}\right] \\= \mathbb{E}\left[\frac{\pi(A_{t}|S_{t})}{b(A_{t}|S_{t})}\frac{\pi(A_{t+1}|S_{t+1})}{b(A_{t+1}|S_{t+1})}\frac{\pi(A_{t+2}|S_{t+2})}{b(A_{t+2}|S_{t+2})}....\frac{\pi(A_{t+k-2}|S_{t+k-2})}{b(A_{t+k-2}|S_{t+k-2})}\frac{\pi(A_{t+k}|S_{t+k})}{b(A_{t+k}|S_{t+k})}......\frac{\pi(A_{T-1}|S_{T-1})}{b(A_{T-1}|S_{T-1})}\right]\mathbb{E}\left[\frac{\pi(A_{t+k-1}|S_{t+k-1})}{b(A_{t+k-1}|S_{t+k-1})}R_{t+k}\right] \\= \mathbb{E}\left[\frac{\pi(A_{t}|S_{t})}{b(A_{t}|S_{t})}\right]\mathbb{E}\left[\frac{\pi(A_{t+1}|S_{t+1})}{b(A_{t+1}|S_{t+1})}\right]\mathbb{E}\left[\frac{\pi(A_{t+2}|S_{t+2})}{b(A_{t+2}|S_{t+2})}\right]....\mathbb{E}\left[\frac{\pi(A_{t+k-2}|S_{t+k-2})}{b(A_{t+k-2}|S_{t+k-2})}\right]\mathbb{E}\left[\frac{\pi(A_{t+k}|S_{t+k})}{b(A_{t+k}|S_{t+k})}\right]......\mathbb{E}\left[\frac{\pi(A_{T-1}|S_{T-1})}{b(A_{T-1}|S_{T-1})}\right]\mathbb{E}\left[\frac{\pi(A_{t+k-1}|S_{t+k-1})}{b(A_{t+k-1}|S_{t+k-1})}R_{t+k}\right] \\= \mathbb{E}[\frac{\pi_{t+k-1}}{b_{t+k-1}}R_{t+k}]\\=\mathbb{E}[\rho_{t+k-1}R_{t+k}]$
+
+which is not equal to eq 5.14. What's the mistake in the above calculations? Are 1 and 2 correct?
+"
+"['machine-learning', 'deep-learning', 'ai-design', 'tensorflow', 'image-segmentation']"," Title: How to compare SegNet, U-Net and EfficientNet?Body: SegNet and U-Net are created for the segmentation problem, and EfficientNet is created for the classification problem. My task says to train these models on the same dataset and compare the results. Is that possible?
+"
+"['reinforcement-learning', 'markov-decision-process', 'value-functions', 'value-iteration', 'discount-factor']"," Title: What is the value of a state when there is a certain probability that agent will die after each step?Body: We assume infinite horizon and discount factor $\gamma = 1$. At each step, after the agent takes an action and gets its reward, there is a probability $\alpha = 0.2$, that agent will die. The assumed maze looks like this
+
+
+
+Possible actions are go left, right, up, down or stay in a square. The reward has a value 1 for any action done in the square (1,1) and zero for actions done in all the other squares.
+
+With this in mind, what is the value of a square (1,1)?
+
+The correct answer is supposed to be 5, and is calculated as $1/(1\cdot 0.2) = 5$. But why is that? I didn't manage to find any explanation on the net, so I am asking here.
+"
+"['reinforcement-learning', 'q-learning', 'importance-sampling', 'bellman-equations']"," Title: Why we don't use importance sampling in tabular Q-Learning?Body: Why don't we use an importance sampling ratio in Q-Learning, even though Q-Learning is an off-policy method?
+
+Importance sampling is used to calculate expectation of a random variable by using data not drawn from the distribution. Consider taking a Monte Carlo average to calculate $\mathbb{E}[X]$.
+
+Mathematically an expectation is defined as
+$$\mathbb{E}_{x \sim p(x)}[X] = \sum_{x = -\infty}^\infty x p(x)\;;$$
+where $p(x)$ denotes our probability mass function, and we can approximate this by
+$$\mathbb{E}_{x \sim p(x)}[X] \approx \frac{1}{n} \sum_{i=1}^nx_i\;;$$
+where $x_i$ were simulated from $p(x)$.
+
+Now, we can re-write the expectation from earlier as
+
+$$\mathbb{E}_{x \sim p(x)}[X] = \sum_{x = -\infty}^\infty x p(x) = \sum_{x = -\infty}^\infty x \frac{p(x)}{q(x)} q(x) = \mathbb{E}_{x\sim q(x)}\left[ X\frac{p(X)}{q(X)}\right]\;;$$
+and so we can calculate the expectation using Monte Carlo averaging
+$$\mathbb{E}_{x \sim p(x)}[X] \approx \frac{1}{n} \sum_{i=1}^nx_i \frac{p(x)}{q(x)}\;;$$
+where the data $x_i$ are now simulated from $q(x)$.
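+
+As a sanity check on this identity, a small Monte Carlo sketch (the two Gaussians are arbitrary choices of $p$ and $q$):
+
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+def normal_pdf(x, mu, sigma):
+    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
+
+x = rng.normal(loc=0.0, scale=2.0, size=100_000)        # x_i ~ q = N(0, 2)
+w = normal_pdf(x, 3.0, 1.0) / normal_pdf(x, 0.0, 2.0)   # importance ratios p(x)/q(x)
+print(np.mean(x * w))                                   # ~ 3, the mean under p = N(3, 1)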
+
+Typically importance sampling is used in RL when we use off-policy methods, i.e. the policy we use to calculate our actions is different from the policy we want to evaluate. Thus, I wonder why we don't use the importance sampling ratio in Q-learning, even though it is considered to be an off-policy method?
+"
+"['machine-learning', 'social', 'adversarial-ml']"," Title: Can the addition of unnoticeable noise to images be used to create subliminals?Body: I was reading this report: https://www.theverge.com/2017/4/12/15271874/ai-adversarial-images-fooling-attacks-artificial-intelligence
+
+Researchers used noise to trick machine learning algorithms to misidentify or misclassify an image of a fish as a cat. I was wondering if something like that can be used to create subliminals.
+
+What I mean by subliminals: the United Nations has defined subliminal messages as messages perceived without awareness, i.e. unconscious perception, or perception without awareness. You may register a message but be unable to consciously perceive it in the form of text, etc.
+
+All the reports about the noise trick said the noise was so transparent that humans couldn't detect it. This can be changed to make it noticeable unconsciously but unnoticeable at a conscious level so a human can register the subliminal but not be aware of it.
+
+Is it possible to take an output from a hidden layer to construct such a subliminal for humans, where with trial and error one could find the right combination? Could it be possible to come up with a pixel pattern or noise with ML which allows one to impose subliminals?
+"
+['prediction']," Title: Which machine learning approach can be used to predict a univariate value?Body: I have a stream of data coming in like below (random numbers 0-9)
+
+
+ 7, 7, 0, 0, 8, 9, 2, 7, 3, 8, 2, 8, 5, 7, 0, 8, 7, 8, 5, 3, 2, 6, 1, 9, 5, 7, 5, 3, 4, 9, 1, 3, 5, 5, 0, 7, 7, 5, 2, 8, 8, 7, 5, 5, 5, 2, 9, 7, 2, 1, 0, 0, 5, 7, 1, 4, 2, 7, 8, 8, 5, 2, 7, 5, 7, 1, 7, 2, 0, 5, 7, 5, 2, 6, 3, 6, 3, 6, 1, 9, 1, 9, 7, 2, 3, 9, 8, 8, 4, 9, 8, 2, 5, 3, 4, 0, 3, 1, 0, 7, 2, 3, 8, 7, 5, 7, 3, 6, 0, 3, 3, 3, 6, 3, 1, 3, 0, 6, 9, 8, 0, 1, 4, 4, 9, 9, 3, 7, 4, 1, 0, 5, 0, 6, 8, 8, 8, 1, 7, 6
+
+
+The task is to predict the next numbers (at least 3-10).
+
+Which approach would be helpful in getting through this problem?
+"
+"['natural-language-processing', 'ai-design', 'machine-translation']"," Title: How to autocorelate multiple variants of same text into one?Body: I want to improve quality of translations for open-source projects in Ukrainian language. We have multiple translations from different authors. We can also translate messages using machine translations. Sometimes machine translation is even better than human translation.
+
+Given multiple variants of translation of the same original text, I want to create AI which will be able to ""translate"" from Ukrainian to Ukrainian, using these multiple variants in parallel as the source, to produce one variant of higher quality.
+
+So, in general, given multiple similar input sequences, the neural network needs to ""understand"" them, and produce a single output sequence.
+
+$$S_1, S_2, \dots \rightarrow S$$
+
+For a simple example, we may want to train a NN to recognize a sequence of natural numbers: $1,2,3,4, \dots$. We give two sequences to the NN, $23,4,24,6,8$ and $3,65,5,6,23$; the trained NN is then expected to produce $3,4,5,6,7$.
+
+How to modify an existing neural network to achieve that? Is it possible at all?
+"
+"['reinforcement-learning', 'q-learning']"," Title: Why do my rewards fall using tabular Q-learning as I perform more episodes?Body: Using the tutorial from: SentDex - Python Programming I added Q Learning to my script that was previously just picking random actions. His script uses the MountainCar Environment so I had to amend it to the CartPole env I am using. Initially, the rewards seem sporadic but, after a while, they just drop off and oscillate between 0-10. Does anyone know why this is?
+
+import gym
+import numpy as np
+import matplotlib.pyplot as plt
+
+Learning_rate = 0.1
+Discount_rate = 0.95
+episodes = 200
+
+# Exploration settings
+epsilon = 1 # not a constant, qoing to be decayed
+START_EPSILON_DECAYING = 1
+END_EPSILON_DECAYING = episodes//2
+epsilon_decay_value = epsilon/(END_EPSILON_DECAYING - START_EPSILON_DECAYING)
+
+env = gym.make(""CartPole-v0"") #Create the environment. The name of the environments can be found @ https://gym.openai.com/envs/#classic_control
+#Each environment has a number of possible actions. In this case there are two discrete actions, left or right
+
+#Each environment has some integer characteristics of the state.
+#In this case we have 4:
+
+#env = gym.wrappers.Monitor(env, './', force=True)
+
+DISCRETE_OS_SIZE = [20, 20, 20, 20]
+
+discrete_os_win_size = (env.observation_space.high - env.observation_space.low)/ DISCRETE_OS_SIZE
+
+def get_discrete_state(state):
+ discrete_state = (state - env.observation_space.low)/discrete_os_win_size
+ return tuple(discrete_state.astype(np.int))
+
+q_table = np.random.uniform(low = -2, high = 0, size = (20, 20, 20, 20, env.action_space.n))
+
+plt.figure() #Instantiate the plotting environment
+rewards_list = [] #Create an empty list to add the rewards to which we will then plot
+for i in range(episodes):
+ discrete_state = get_discrete_state(env.reset())
+ done = False
+ rewards = 0
+ frames = []
+
+ while not done:
+ #frames.append(env.render(mode = ""rgb_array""))
+
+ if np.random.random() > epsilon:
+ # Get action from Q table
+ action = np.argmax(q_table[discrete_state])
+
+ else:
+ # Get random action
+ action = np.random.randint(0, env.action_space.n)
+
+ new_state, reward, done, info = env.step(action)
+
+ new_discrete_state = get_discrete_state(new_state)
+
+ # If simulation did not end yet after last step - update Q table
+ if not done:
+
+ # Maximum possible Q value in next step (for new state)
+ max_future_q = np.max(q_table[new_discrete_state])
+
+ # Current Q value (for current state and performed action)
+ current_q = q_table[discrete_state, action]
+
+ # And here's our equation for a new Q value for current state and action
+ new_q = (1 - Learning_rate) * current_q + Learning_rate * (reward + Discount_rate * max_future_q)
+
+ # Update Q table with new Q value
+ q_table[discrete_state, action] = new_q
+
+ else:
+ q_table[discrete_state + (action,)] = 0
+
+ discrete_state = new_discrete_state
+
+ rewards += reward
+ rewards_list.append(rewards)
+ #print(""Episode:"", i, ""Rewards:"", rewards)
+ #print(""Observations:"", obs)
+
+ # Decaying is being done every episode if episode number is within decaying range
+ if END_EPSILON_DECAYING >= i >= START_EPSILON_DECAYING:
+ epsilon -= epsilon_decay_value
+
+plt.plot(rewards_list)
+plt.show()
+env.close()
+
+
+
+
+It becomes even more pronounced when I increase episodes to 20,000 so I don't think it's related to not giving the model enough training time.
+
+
+
+If I set START_EPSILON_DECAYING to, say, 200, then the rewards only drop to < 10 after episode 200, which made me think it was the epsilon that was causing the problem. However, if I remove the epsilon/exploration entirely, then the rewards at every episode are worse, as the agent gets stuck picking the argmax value for each state.
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'off-policy-methods', 'value-functions']"," Title: How do I know that the DQN has learnt an appropriate Q function?Body: Is there any sanity check to know whether the Q functions learnt are appropriate in deep Q networks? I know that the Q values for end states should approximate the terminal reward. However, is it normal that Q values for the non-terminal states have higher values than those of the terminal states?
+
+The reason why I want to know whether Q values learnt are appropriate is because I want to apply the doubly robust estimator for off-policy value evaluation. Using doubly robust requires a good Q value estimate to be learnt for each state.
+"
+"['machine-learning', 'deep-learning', 'optimization']"," Title: Can a machine learning approach solve this constrained optimisation problem?Body: I have worked with different classification, regression and clustering approaches for predicting values, etc. I was wondering if there is a machine learning approach for distributing a whole among several recipients based on some features (I do not know if such an approach exists; I just could not find one in my research).
+
+An easy example: let's say we have height and weight data for many children, and we have to distribute a given number of pizza slices amongst them so that skinny children get more pizza than obese ones, because pizza is more beneficial for the skinny than for the obese. So we might have to find the optimum number of slices for each child, out of the total number of slices, so that each child gets the maximum possible nutrients. A more complex version could incorporate more features like age, overall health, blood sugar content, physical activity index, daily calorie consumption, and others.
+
+A similar example might be to find out the optimal value of fuel to be allocated to each vehicle if we have a total of 100 gallons. Features might be distance they have to travel, mpg, driver competency, engine horsepower, etc., so that all of them might travel the maximum distance possible.
+
+So, can we achieve a task like this with machine learning/deep learning approaches? If not, what are the hurdles to achieving this?
+"
+"['reinforcement-learning', 'tensorflow', 'actor-critic-methods', 'hyperparameter-optimization', 'soft-actor-critic']"," Title: Why is my Soft Actor-Critic's policy and value function losses not converging?Body: I'm trying to implement a soft actor-critic algorithm for financial data (stock prices), but I have trouble with losses: no matter what combination of hyper-parameters I enter, they are not converging, and basically it caused bad reward return as well. It sounds like the agent is not learning at all.
+I already tried to tune some hyperparameters (learning rate for each network + number of hidden layers), but I always get similar results.
+The two plots below represent the losses of my policy and one of the value functions during the last episode of training.
+
+
+My question is, would it be related to the data itself (nature of data) or is it something related to the logic of the code?
+"
+"['machine-learning', 'objective-functions', 'homework']"," Title: Why is it useful to track loss while model is being trained?Body: Why is it useful to track loss while the model is being trained?
+
+Options are:
+
+
+- Loss is only useful as a final metric. It should not be evaluated while the model is being trained.
+- Loss dictates how effective the model is.
+- Loss can help understand how much the model is changing per iteration. When it converges, that's an indicator that further training will have little benefit.
+- None of the above
+
+"
+"['deep-learning', 'deepfakes']"," Title: How do deepfakes work and how they might be dangerous?Body:
+ Deepfakes (a portmanteau of ""deep learning"" and ""fake"") are synthetic media in which a person in an existing image or video is replaced with someone else's likeness.
+
+
+Nowadays, much of the news circulating in the media and on social networks is fake/gossip/rumor, which may produce false positives or false negatives (except WikiLeaks).
+
+I know there has been a Deepfake Detection Challenge Kaggle competition with a whopping $1,000,000 in prize money.
+
+I would like to know how deepfakes work and how they might be dangerous?
+"
+"['reinforcement-learning', 'q-learning', 'self-play']"," Title: Generalising performance of Q-learning agent through self-play in a two-player game (MCTS?)Body: I'm using Q-learning (off-policy TD-control as specified in Sutton's book on pg 131) to train an agent to play connect four. My goal is to create a strong player (superhuman performance?) purely by self-play, without training models against other agents obtained externally.
+
+I'm using neural network architectures with some convolutional layers and several fully connected layers. These train surprisingly efficiently against their opponent, either a random player or another agent previously trained through Q-learning. Unfortunately, the resulting models don't generalise well. 5000 episodes seem to be enough to obtain a high (> 90%) win rate against whichever opponent, but even after > 20 000 episodes, I can still beat them rather easily myself.
+
+To solve this, I now train batches of models (~ 10 models per batch), which are then used as a group as the new opponent, i.e.:
+
+
+- I train a batch of models against a completely random agent (let's call them the generation one)
+- Then I train a second generation of agents against this first generation
+- Then I train a third generation against generation two
+- ...
+
+
+So far this helped in creating a slightly stronger/more general connect four model, but the improvement is not as good as I was hoping for. Is it just a matter of training enough models/generations or are there better ways for using Q-learning in combination with self-play?
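+
+For clarity, the generational scheme described above looks roughly like the following sketch (random_player and train_q_agent are only stand-ins for the real opponent and the real Q-learning training loop, not existing code):
+
+import random
+
+def random_player(board):
+    # placeholder opponent: picks a random column out of 7
+    return random.choice(range(7))
+
+def train_q_agent(opponent_pool, episodes):
+    # placeholder for the actual Q-learning training described above
+    def agent(board):
+        return random.choice(range(7))
+    return agent
+
+opponents = [random_player]              # generation 0 trains against random play
+generations = []
+for g in range(3):                       # generation 1, 2, 3, ...
+    new_generation = [train_q_agent(opponents, episodes=5000) for _ in range(10)]
+    generations.append(new_generation)
+    opponents = new_generation           # the next generation trains against this batch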
+
+I know the most successful techniques (e.g. AlphaZero) rely on MCTS, but I'm not sure how to integrate this with Q-learning, nor how MCTS helps to solve the problem of generalisation.
+
+Thanks for your help!
+"
+"['deep-learning', 'comparison', 'recurrent-neural-networks', 'long-short-term-memory']"," Title: What is the difference between LSTM and fully connected LSTM?Body: I'm currently trying to understand the difference between a vanilla LSTM and a fully connected LSTM. In a paper I'm reading, the FC-LSTM gets introduced as
+
+
+ FC-LSTM may be seen as a multivariate version of LSTM where the input, cell output and states are all 1D vectors
+
+
+But is not really expanded further upon. Google also didn't help me much in that regard as I can't seem to find anything under that keyword.
+
+What is the difference between the two? Also, I'm a bit confused by the quote - aren't inputs, outputs, etc. of a vanilla LSTM already 1D vectors?
+"
+"['machine-learning', 'generative-adversarial-networks', 'homework']"," Title: After a GAN is trained, which parts of it are used to generate new outputs from data?Body: After a GAN is trained, which parts of it are used to generate new outputs from data?
+
+Options are:
+
+
+- Neither
+- Discriminator
+- Generator
+- Both Generator and Discriminator
+
+"
+"['genetic-algorithms', 'evolutionary-algorithms', 'crossover-operators']"," Title: Crossover method for gene value containing a set of valuesBody: I have a chromosome where each gene contain s set of values. Like the following:
+
+chromosome = [[A,B,C],[C,B,A],[C,D,],[],[E,F]]
+
+
+- The order of the values within each gene matters ([A,B,C] is different from [A,C,B]).
+- Each value should not appear more than once within a gene ([A,B,B] is not desirable, because B is repeated).
+
+
+In my current two-point crossover method, the units that are crossed over are whole genes, i.e. whole sets of values (e.g. the whole of [A,B,C] is swapped into the other chromosome), as in the sketch below.
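+
+A minimal version of that whole-gene two-point crossover (just to illustrate what I currently do; the cut points are chosen at random):
+
+import random
+
+def two_point_gene_crossover(parent1, parent2):
+    # Swap whole genes (the inner lists) between two random cut points.
+    n = len(parent1)
+    i, j = sorted(random.sample(range(n + 1), 2))
+    child1 = parent1[:i] + parent2[i:j] + parent1[j:]
+    child2 = parent2[:i] + parent1[i:j] + parent2[j:]
+    # Note: the contents of each gene never change, only which chromosome they sit in.
+    return child1, child2
+
+p1 = [['A', 'B', 'C'], ['C', 'B', 'A'], ['C', 'D'], [], ['E', 'F']]
+p2 = [['B', 'A'], ['D'], ['A', 'C'], ['F', 'E'], ['C']]
+print(two_point_gene_crossover(p1, p2))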
+
+Soon, I realized that my population loses variation very quickly, because the values within a gene always remain the same. Hence, my algorithm evolves very slowly and is limited by the variety of gene values present at the initialization stage.
+
+What crossover can I implement to cross values within the set as well?
+
+I am pretty new to genetic algorithms. Any help will be much appreciated. Thank you.
+"
+['reinforcement-learning']," Title: Solution to exercise 3.22 in the RL book by Sutton and BartoBody: The goal is to find an optimal deterministic policy for this MDP:
+
+
+
+There are two possible policies: left (L) and right (R). What is the optimal policy, when different discounts are used:
+
+A $\gamma = 0$
+
+B $\gamma = 0.9$
+
+C $\gamma = 0.5$
+
+A policy is optimal, $\pi_* \ge \pi$, if $v_{\pi_*}(s) \ge v_{\pi}(s), \forall s \in S$; so, to find the optimal policy, the goal is to check which of the two policies results in the larger state-value function for all states in the system, for each of the given discount factors (A, B, C).
+
+The Bellman equation for the state value function is
+
+$v(s) = E_\pi[G_t | S_t= s] = E_\pi[R_{t+1} + \gamma v(S_{t+1}) | S_t = s]$
+
+The suffix $_n$ marks the current iteration, and $_{n+1}$ marks the next iteration. The following is valid if the value function is initialized to $0$ or some random $x \ge 0$.
+
+A) $\gamma = 0$
+
+$v_{L,n+1}(S_0) = 1 + 0 v_{L,n}(S_L) = 1$
+
+$v_{R,n+1}(S_0) = 0 + 0 v_{R,n}(S_R) = 0$
+
+$L$ is optimal in case A.
+
+B) $\gamma = 0.9$
+
+$v_{L,n+1}(S_0) = 1 + 0.9 v_{L,n}(S_L) = 1 + 0.9(0 + 0.9 v_{L,n}(S_0)) = 1 + 0.81v_{L,n}(S_0)$
+
+$v_{R,n+1}(S_0) = 0 + 0.9 v_{R,n}(S_R) = 0 + 0.9(2 + 0.9 v_{R,n}(S_0)) = 1.8 + 0.81v_{R,n}(S_0)$
+
+$R$ is optimal in case B.
+
+C) $\gamma = 0.5$
+
+$v_{L,n+1}(S_0) = 1 + 0.5 v_{L,n}(S_L) = 1 + 0.5(0 + 0.5 v_{L,n}(S_0)) = 1 + 0.25v_{L,n}(S_0)$
+
+$v_{R,n+1}(S_0) = 0 + 0.5 v_{R,n}(S_R) = 0 + 0.5(2 + 0.5 v_{R,n}(S_0)) = 1 + 0.25v_{R,n}(S_0)$
+
+Both $R$ and $L$ are optimal in case C.
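+
+As a numerical sanity check (not part of the exercise, just a quick verification), the fixed points of the recursions above are $v_L(S_0) = 1/(1-\gamma^2)$ and $v_R(S_0) = 2\gamma/(1-\gamma^2)$:
+
+for gamma in (0.0, 0.9, 0.5):
+    v_left = 1.0 / (1.0 - gamma ** 2)             # reward 1 now, 0 on the way back
+    v_right = 2.0 * gamma / (1.0 - gamma ** 2)    # reward 0 now, 2 on the way back
+    print(gamma, round(v_left, 3), round(v_right, 3))
+# gamma = 0.0 -> left better, gamma = 0.9 -> right better, gamma = 0.5 -> equal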
+
+Question: Is this correct?
+"
+"['reinforcement-learning', 'actor-critic-methods', 'reinforce', 'advantage-actor-critic', 'reward-to-go']"," Title: Why is the ""reward to go"" replaced by Q instead of V, when transitioning from PG to actor critic methods?Body: While transitioning from simple policy gradient to the actor-critic algorithm, most sources begin by replacing the ""reward to go"" with the state-action value function (see this slide 5).
+
+I am not able to understand how this is mathematically justified. It seems intuitive to me that the ""reward to go"" when sampled through multiple trajectories should be estimated by the state-value function.
+
+I feel this way since nowhere in the objective function formulation or resulting gradient expression do we tie down the first action after reaching a state. Alternatively, when we sample a bunch of trajectories, these trajectories might include different actions being taken from the state reached in timestep $t$.
+
+So, why isn't the estimation/approximation for the ""reward to go"" the state value function, in which the expectation is also over all the actions that may be taken from that state as well?
+"
+"['natural-language-processing', 'named-entity-recognition']"," Title: What are the main ideas behind NER?Body:
+ Named entity recognition (NER), also known as entity chunking/extraction, is a popular technique used in information extraction to identify and segment the named entities and classify or categorize them under various predefined classes.
+
+
+Briefly, how does NER work? What are the main ideas behind it? And which algorithms are used to perform NER?
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks', 'computer-vision']"," Title: Overcome caveats on using Deep Learning for faster inference on limited performance availabilityBody: I am working in the field of Machine Vision, where accuracy and performance both play a major factor in deciding the approach towards a problem. Traditional rule based approaches work quite well in such cases.
+
+I am gradually migrating towards deep learning, due to its umpteen advantages, where the results seem promising albeit with two huge caveats:
+
+
+- Lack of Training data in this field. To be precise, the lack of erroneous data.
+- Performance issues on inference. Accuracy and speed are required in equal proportion, and cannot be compromised.
+
+
+In industrial settings, Point 1 is a strong factor. I have been dabbling with transfer learning techniques and using pre-trained models to overcome this situation. For simpler applications, such as classification, this works well and gives good results. In other cases, such as detection and localization, I have tried using Mask R-CNN, which gives really good results but poor inference speed, which means it is not production-ready.
+
+The worrying factor in both cases is how slow detection and inference are, compared to traditional vision algorithms. A solution would be to buy machine vision software from companies such as Cognex, HALCON, etc., who sell deep learning bundles. These are quite expensive and are meant to be used out-of-the-box with minimal modifications, which does not suit me currently.
+
+Point 2 is highly necessary on production lines, where each iteration/image may take less than 500 ms for execution.
+
+Deep learning offers a lot of opportunities to get state-of-the-art results with very little data in most situations, but, in general, without inference optimization using tools such as TensorRT, the ""time"" metric does not give good results.
+
+Is there an approach in using open source that can solve both point 1 and point 2? Creating a CNN from scratch is out of the question.
+
+This post is to discuss ideas if possible, I know a concrete solution is not really possible in the scope of this question. I am the only person working on this problem at my company, thus any discussion ideas would be highly appreciated!
+"
+"['reinforcement-learning', 'comparison', 'q-learning', 'sarsa']"," Title: What are the differences between SARSA and Q-learning?Body: From Sutton and Barto's book Reinforcement Learning (Adaptive Computation and Machine Learning series), are the following definitions:
+
+
+
+
+
+To aid my learning of RL and gain an intuition, I'm focusing on the differences between some algorithms. I've selected Sarsa (on-policy TD control, for estimating Q ≈ q*) and Q-learning (off-policy TD control, for estimating π ≈ π*).
+
+For conciseness, I'll refer to Sarsa (on-policy TD control, for estimating Q ≈ q*) and Q-learning (off-policy TD control, for estimating π ≈ π*) simply as Sarsa and Q-learning, respectively.
+
+Are my following assertions correct?
+
+The primary differences are how the Q values are updated.
+
+Sarsa Q value update:
+$Q(S, A) \leftarrow Q(S, A) + \alpha \left[ R + \gamma Q(S', A') - Q(S, A) \right]$
+
+Q-learning Q value update:
+$Q(S, A) \leftarrow Q(S, A) + \alpha \left[ R + \gamma \max_a Q(S', a) - Q(S, A) \right]$
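+
+For reference, the two updates written as code (a minimal tabular sketch; here Q is assumed to be a dict keyed by (state, action) pairs, and actions(s) to return the legal actions in state s - these names are placeholders):
+
+def sarsa_update(Q, s, a, r, s_next, a_next, alpha, gamma):
+    # Uses the action a_next actually selected by the behaviour policy in s_next.
+    td_target = r + gamma * Q[(s_next, a_next)]
+    Q[(s, a)] += alpha * (td_target - Q[(s, a)])
+
+def q_learning_update(Q, s, a, r, s_next, actions, alpha, gamma):
+    # Uses the greedy (max) action value in s_next, regardless of the action played next.
+    td_target = r + gamma * max(Q[(s_next, b)] for b in actions(s_next))
+    Q[(s, a)] += alpha * (td_target - Q[(s, a)])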
+
+Sarsa, in performing the TD update, subtracts the Q value of the current state and action S, A from the reward plus the discounted Q value of the next state and action S', A'.
+Q-learning, on the other hand, uses the discounted maximum action value over actions $a$ in the next state S'. Within the Q-learning episode loop, the value of $a$ is not updated; is an update made to $a$ during Q-learning?
+
+In Sarsa, unlike in Q-learning, the next action is assigned to the current action at the end of each episode step (A ← A'). Q-learning does not make this assignment at the end of each episode step.
+
+Sarsa, unlike Q-learning, does not include the arg max as part of the update to Q value.
+
+Sarsa and Q-learning, in choosing the initial action for each episode, both use a ""policy derived from Q""; as an example, the epsilon-greedy policy is given in the algorithm definition. But could any policy be used here instead of epsilon-greedy?
+Q-learning does not utilise the next state-action pair in performing the TD update; it just utilises the next state and the current action. This is given in the algorithm definition as $Q(S', a)$; what is $a$ in this case?
+"
+"['reinforcement-learning', 'q-learning', 'proofs']"," Title: Proof of Maximization Bias in Q-learning?Body: In the textbook ""Reinforcement Learning: An Introduction"" by Richard Sutton and Andrew Barto, the concept of Maximization Bias is introduced in section 6.7, and how Q-learning ""over-estimates"" action-values is discussed using an example. However, a formal proof of the same is not presented in the textbook, and I couldn't get it anywhere on the internet as well.
+
+After reading the paper on Double Q-learning by Hado van Hasselt (link), I could understand to some extent why Q-learning ""over-estimates"" action values. Here is my (vague, informal) construction of a mathematical proof:
+
+We know that temporal-difference methods (just like Monte Carlo methods) use sample returns, instead of real expected returns, as estimates to find the optimal policy. These sample returns converge to the true expected returns over infinite trials, provided all the state-action pairs are visited. Thus the following notation is used,
+
+$$\mathbb{E}[Q()] \rightarrow q_\pi()$$
+where $Q()$ is calculated from the sample return $G_t$ observed at every time-step. Over infinite trials, this sample return, when averaged, converges to its expected value, which is the true $Q$-value under the policy $\pi$. Thus $Q()$ is really an estimate of the true $Q$-value $q_\pi$.
+
+In section 3 on page 4 of the paper, Hasselt describes how the quantity $\max_a Q(s_{t+1}, a)$ approximates $\mathbb{E}[\max_a Q(s_{t+1}, a)]$ which in turn approximates the quantity $\max_a(\mathbb{E}[Q(s_{t+1},a)])$ in Q-learning. Now, we know that the $\max[]$ function is a convex function (proof). From Jensen's inequality, we have
+$$\phi(\mathbb{E}[X]) \leq \mathbb{E}[\phi(X)]$$ where $X$ is a random variable, and the function $\phi()$ is a convex function. Thus,
+$$\max_a(\mathbb{E}[Q(s_{t+1},a)]) \leq \mathbb{E}[\max_a Q(s_{t+1}, a)]$$
+
+$$\therefore \max_a Q(s_{t+1}, a) \approx \max_a(\mathbb{E}[Q(s_{t+1},a)]) \leq \mathbb{E}[\max_a Q(s_{t+1}, a)]$$
+
+The quantity on the LHS of the above equation appears (along with $R_{t+1}$) as an estimate of the next action-value in the Q-learning update equation:
+$$Q(S_t,A_t) \leftarrow (1-\alpha)Q(S_t, A_t) + \alpha[R_{t+1} + \gamma\max_aQ(S_{t+1}, a)] $$
+
+Lastly, we note that the bias of an estimate $T$ is given by:
+$$b(T) = \mathbb{E}[T] - T$$
+Thus the bias of the estimate $\max_a Q(s_{t+1},a)$ will always be positive:
+$$b(\max_a Q(s_{t+1},a)) = \mathbb{E}[\max_a Q(s_{t+1},a)] - \max_a Q(s_{t+1},a) \geq 0$$
+In the statistics literature, any estimate whose bias is positive is said to be an ""over-estimate"". Thus, the action values are over-estimated by the Q-learning algorithm due to the $\max[\cdot]$ operator, resulting in a maximization bias.
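+
+As a small numerical illustration of the Jensen step above (just a quick simulation, not part of the argument): all true action values are 0, yet the maximum over noisy estimates is positive on average.
+
+import random
+
+random.seed(0)
+n_actions, n_samples, n_trials = 5, 10, 10000
+avg_max = 0.0
+for _ in range(n_trials):
+    # Each Q(s', a) is an average of noisy returns whose true mean is 0.
+    estimates = [sum(random.gauss(0, 1) for _ in range(n_samples)) / n_samples
+                 for _ in range(n_actions)]
+    avg_max += max(estimates) / n_trials
+print(avg_max)   # clearly > 0, while max_a E[Q(s', a)] = 0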
+
+Are the arguments made above valid? I am a student, with no rigorous knowledge of random processes. So, please forgive me if any of the steps above are totally unrelated and don't make sense in a more mathematically rigorous fashion. Please let me know if there is a much better proof than this failed attempt.
+
+Thank you so much for your precious time. Any help/suggestions/corrections are greatly appreciated!
+"
+"['computer-vision', 'feature-extraction', 'features', 'bag-of-features', 'content-based-image-retrieval']"," Title: What are bag-of-features in computer vision?Body: In computer vision, what are bag-of-features (also known as bag-of-visual-words)? How do they work? What can they be used for? How are they related to the bag-of-words model in NLP?
+"
+"['reinforcement-learning', 'multi-armed-bandits', 'thompson-sampling', 'upper-confidence-bound', 'epsilon-greedy-policy']"," Title: Why am I getting better performance with Thompson sampling than with UCB or $\epsilon$-greedy in a multi-armed bandit problem?Body: I ran a test using 3 strategies for multi-armed bandit: UCB, $\epsilon$-greedy, and Thompson sampling.
+
+The results for the rewards I got are as follows:
+
+
+- Thompson sampling had the highest average reward
+- UCB was second
+- $\epsilon$-greedy was third, but relatively close to UCB
+
+
+Can someone explain to me why the results are like this?
+"
+"['machine-learning', 'deep-learning', 'learning-rate']"," Title: Is it harmful to set the learning rate of training a model to be too high if there is some decay function for the learning rate?Body: It is known that if $\alpha$ is set to high, then the cost function of the model may not converge.
+However, would a decaying of the learning rate provide some ""tuning"" of the $\alpha$ value during training ? In the sense that if you set a high learning rate but you also have some form of learning rate decay, then eventually $\alpha$ value would fall within the ""just right"" and ""too low"" range eventually. Is it better to then set an initial learning rate that is more ""flexible"" in the higher ranges rather than a learning rate that is too low ?
+"
+"['reinforcement-learning', 'q-learning', 'optimization']"," Title: How can I model this problem of delivering assets by choosing a route with reinforcement learning?Body: I would like to build a model based on reinforcement learning (RL) for the following scenario
+
+Recommend the best route (of cities listed for a given country) that satisfies the required criteria (museum, beaches, food, etc) for a total budget of $2000.
+
+Based on the recommendation, the user will provide its feedback (as a reward), so the recommendations can be fine-tuned (by reinforcement learning) the next time. I modeled the system this way:
+
+- States = (c,cr), where $c$ is the city and $cr$ is the criteria (history, beach, food, etc)
+
+- Actions = (p) is the price of visiting the city
+
+- Reward: acceptance of the cities selected by end user as a route (1 or 0)
+
+
+The objective is to decide which list of cities together satisfies the given budget.
+Is this MDP model right, and how can I implement it? Maybe the only option is using Monte Carlo methods and linear/dynamic programming. Is there any other way?
+"
+"['deep-learning', 'classification', 'training', 'datasets']"," Title: How to split data into training validation and test set when the number of data in classes varies greatly?Body: I have 5 classes of pictures to classify:
+
+0 -> ~3 200 (~800 initial number before interference and duplication)
+
+1 -> ~9 000 (I reduced from ~90 000)
+
+2 -> ~8 000
+
+3 -> ~3 000
+
+4 -> ~7 200
+
+How to divide the data?
+
+For now, I have divided the data by giving 2,000 images to the test set and 2,000 to the validation set, taking a fixed number of images (400) from each class. I don't have much experience, so I don't know if this is a good division of the data.
+The attached picture shows the results on the test data after about 60 epochs of a CNN with 15 layers.
+
+
+
+The network keeps overfitting, and the results on the validation and test sets do not improve.
+I know that I could definitely improve my model, but I would like to divide the data in a thoughtful and reasonable way. The pictures are spectrograms in RGB format.
+"
+"['natural-language-processing', 'classification', 'bert']"," Title: How to use speaker's information as well as text for fine-tuning BERT?Body: I want to classify my corporate chat messages into a few categories such as question, answer, and report. I used a fine-tuned BERT model, and the result wasn't bad. Now, I started thinking about ways to improve it, and a rough idea came up, but I don't know what to do it exactly.
+
+Currently, I simply put chat text into the model, but don't use the speaker's information (who said the text, the speaker's ID in our DB). The idea is if I can use the speaker's information, the model might better understand the text and classify it better.
+
+The question is, are there any examples or prior researches similar to what I want to achieve? I googled for a few hours, but couldn't find anything useful. (Maybe the keywords weren't good.)
+
+Any advice would be appreciated.
+"
+"['reinforcement-learning', 'terminology', 'definitions', 'academia']"," Title: What is Reinforcement Learning?Body: What is the cleanest, easiest way to explain someone who is a non-STEM work colleague the concept of Reinforcement Learning? What are the main ideas behind Reinforcement Learning?
+"
+"['reinforcement-learning', 'reference-request']"," Title: Can reinforcement learning algorithms be applied on problems involving a very large number of possible actions?Body: There is a question already about applying RL to ""large scale problems"", where large scale refers to the problem of a relatively small number of actions (that could be from a continous space) resulting in a very large number of states.
+
+A good example of this kind of large-scale problem is modeling the motion of a boat as a point on a plane, with an action being a displacement vector $\mathbf{\delta}_b = (\delta_x, \delta_y)$; there are infinitely many states, because the next state is given by the next position of the boat, in a circle surrounding the boat $\mathcal{B}(\mathbf{x}_b, \mathbf{\delta}_{b,max})$, where $\mathbf{x_b}$ is the boat's current position, and $\mathbf{\delta}_{b,max}$ the maximal possible displacement. So here, the displacement as an action (move the boat) is from an infinite space, because it is a 2D vector ($\mathbf{\delta}_b \in \mathbb{R}^2$), and so is the state space $\mathcal{B}$. Still, I just have two action components to apply to the boat in the end: move this much in the x-direction, and that much in the y-direction.
+
+What I mean is something even larger. Considering the example of a boat, is it possible to apply reinforcement learning to a system that has 100 000 such boats, and what would be the methods to look into to accomplish this? I do not mean having 100 000 agents. The agent in this scenario observes 100 000 boats, which are its environment; let's say the agent is distributing them in a current on the sea in such a way that they have the least amount of resistance in the water (the wake of one ship influences the resistance of its downstream neighbors).
+
+From this answer and from what I have read so far, I believe an approximation will be necessary for the displacements in $2D$ space $\mathbf{\delta}(x,y)$ as well as for the states and rewards, because there are so many of them. However, before digging into this, I would like to know if there are some references out there where something like this has already been tried, or if this is simply something where RL cannot be applied.
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'long-short-term-memory', 'gpu']"," Title: How do GPUs faciliate the training of a Deep Learning Architecture?Body: I would love to know in detail, how exactly GPUs help, in technical terms, in training the deep learning models.
+
+To my understanding, GPUs help by performing independent tasks simultaneously to improve speed. For example, in the calculation of the output of a CNN, many of the additions are done simultaneously, which improves the speed.
+
+But what exactly happens in a basic neural network, or in more complex LSTM-type models, with regard to the GPU?
+"
+"['natural-language-processing', 'python', 'spacy']"," Title: How to make spacy lemmatization process fast?Body: I am applying spacy lemmatization on my dataset, but already 20-30 mins passed and the code is still running.
+
+Is there anyway to make it faster? Is there any option to do this process using GPU?
+
+My dataset size is 20k number of rows & 3 columns
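+
+For reference, a minimal example of the batched nlp.pipe pattern (only a sketch; the model name and the choice of components to disable are assumptions, not taken from my current code):
+
+import spacy
+
+nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])   # keep only what lemmatization needs
+texts = ['The cats are running quickly.', 'Better models were trained yesterday.']
+
+lemmas = []
+for doc in nlp.pipe(texts, batch_size=1000):                    # stream the texts in batches
+    lemmas.append(' '.join(token.lemma_ for token in doc))
+print(lemmas)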
+"
+"['reinforcement-learning', 'q-learning', 'exploration-exploitation-tradeoff']"," Title: Why can't we fully exploit the environment after the first episode in Q-learning?Body: During the first episode, it's 100% exploration, because all our Q values are 0. Suppose we have 1000 time steps, and it's terminated by meeting a reward. So, after the first episode, why can't we make it 100% exploitation? Why do we still need exploration?
+"
+['reinforcement-learning']," Title: Additional (Potential) Action for Agent in MazeGrid Environment (Reinforcement Learning)Body: In a classic GridWorld Environment where the possible actions of an agent are (Up, Down, Left, Right), can another potential output of Action be ""x amount of steps"" where the agent takes 2,3,.. steps in the direction (U,D,L,R) that it chooses? If so, how would one go about doing it?
+"
+"['neural-networks', 'convolutional-neural-networks', 'computer-vision', 'definitions', 'image-processing']"," Title: What is a convolutional neural network?Body: Given that this question has not yet been asked on this site, although similar questions have already been asked in the past (e.g. here or here), what is essentially a convolutional neural network (CNN)? Why are they heavily used in image recognition applications of machine learning?
+"
+"['machine-learning', 'deep-learning', 'gradient-descent', 'learning-rate', 'adam']"," Title: What is the equation of the learning rate decay in the Adam optimiser?Body: Adam is known as an algorithm that has an adaptive learning rate for each parameter. I believe this is due to the division by the term $$v_t = \beta_2 \cdot v_{t-1} + (1-\beta_2) \cdot g_t^2 $$ Hence, each weight will get updated differently based on the accumulated squared gradients in their respective dimensions, even though $\alpha$ might be constant. There are other StackOverflow posts that have said that Adam has a built-in learning rate decay. In the original paper also, the authors of adam paper says that the learning rate at time step $t$ decays based on the equation $$\alpha_t = \alpha \cdot \frac{\sqrt{1-\beta_2^t}}{{1-\beta_1^t}}$$
+Is the second equation the learning rate decay that has been built into the Adam algorithm?
+"
+"['reinforcement-learning', 'comparison', 'value-functions', 'sutton-barto']"," Title: How to express $v_\pi(s)$ in terms of $q_\pi(s,a)$?Body:
+This is exercise 3.18 in Sutton and Barto's book.
+The task is to express $v_\pi(s)$ using $q_\pi(s,a)$.
+Looking at the diagram above, the value of $q_\pi(s,a)$ at $s$ for each $a \in A$ we take gives us the value function at $s$ after taking the action $a$ and then following the policy $\pi$.
+This is probably wrong, but if
+$$v_\pi(s) = E_\pi[G_t | S_t = s]$$
+and
+$$q_\pi(s,a) = E_\pi[G_t | S_t = s, A_t = a]$$
+isn't then $v_\pi(s)$ just the expected action value function at $s$ over all actions $a$ that are given by the policy $\pi$, namely
+$$v_\pi(s) = E_{a \sim \pi}[q_\pi(s,a) | S_t = s, A_t = a] = \sum_{a \in A}\pi(a|s) q_\pi(s,a)$$?
+"
+"['reinforcement-learning', 'value-functions', 'bellman-equations']"," Title: Connection between the Bellman equation for the action value function $q_\pi(s,a)$ and expressing $q_\pi(s,a) = q_\pi(s, a,v_\pi(s'))$Body: When deriving the Bellman equation for $q_\pi(s,a)$, we have
+$q_\pi(s,a) = E_\pi[G_t | S_t = s, A_t = a] = E_\pi[R_{t+1} + \gamma G_{t+1} | S_t = s, A_t = a]$ (1)
+This is what is confusing me, at this point, for the Bellman equation for $q_\pi(s,a)$, we write $G_{t+1}$ as an expected value, conditioned on $s'$ and $a'$ of the action value function at $s'$, otherwise, there is no recursion with respect to $q_\pi(s,a)$, and therefore no Bellman equation. Namely,
+$ = \sum_{a \in A} \pi(a |s) \sum_{s' \in S} \sum_{r \in R} p(s',r|s,a)(r + \gamma E_\pi[G_{t+1}|S_{t+1} = s', A_{t+1} = a'])$ (2)
+which introduces the recursion of $q$,
+$ = \sum_{a \in A} \pi(a |s) \sum_{s' \in S} \sum_{r \in R} p(s',r|s,a)(r + \gamma q_\pi(s',a'))$ (3)
+which should be the Bellman equation for $q_\pi(s,a)$, right?
+On the other hand, when connecting $q_\pi(s,a)$ with $v_\pi(s')$, in this answer, I believe this is done
+$q_\pi(s,a) = \sum_{a\in A} \pi(a |s) \sum_{s' \in S}\sum_{r \in R} p(s',r|s,a)(r + \gamma E_{\pi}[G_{t+1} | S_{t+1} = s'])$ (4)
+$q_\pi(s,a) = \sum_{a\in A} \pi(a |s) \sum_{s' \in S}\sum_{r \in R} p(s',r|s,a)(r + \gamma v_\pi(s'))$ (5)
+Is the difference between using the expectation $E_{\pi}[G_{t+1} | S_{t+1} = s', A_{t+1} = a']$ in (3) and the expectation $E_{\pi}[G_{t+1} | S_{t+1} = s']$ in $(4)$ simply the difference in how we choose to express the expected return $G_{t+1}$ at $s'$ in the definition of $q_\pi(s,a)$?
+In $3$, we express the total return at $s'$ using the action value function
+
+leading to the recursion and the Bellman equation, and in $4$, the total return is expressed at $s'$ using the value function
+
+leading to $q_\pi(s,a) = q_\pi(s,a,v_\pi(s'))$?
+"
+"['search', 'a-star', 'ida-star']"," Title: Doesn't the number of explored nodes with IDA* increase linearly?Body: I think I'm misunderstanding the description of IDA* and want to clarify.
+IDA* works as follows (quoting from Wiki):
+
+At each iteration, perform a depth-first search, cutting off a branch when its total cost exceeds a given threshold. This threshold starts at the estimate of the cost at the initial state, and increases for each iteration of the algorithm. At each iteration, the threshold used for the next iteration is the minimum cost of all values that exceeded the current threshold.
+
+Suppose that we have the following tree:
+
+- branching factor = 5
+- all cost are different
+
+Say we have expanded 1000 nodes. We pick the lowest cost of the nodes that we 'touched' but didn't expand. Since all costs are unique, there is now only one more node which satisfies this new cost bound, and so we expand 1001 nodes, and 'touch' 5 new ones. We now pick the smallest of these weights, and starting from the root expand 1002 nodes, and so on and so forth, 1003, 1004...
+I must be doing something wrong here, right? If not, the complexity is $n^2$, where $n$ is the number of nodes with cost smaller than the optimum, compared to $n$ for normal A*.
+Someone pointing out my misunderstanding would be greatly appreciated.
+"
+"['computer-vision', 'terminology', 'algorithm', 'definitions', 'image-processing']"," Title: What are the main algorithms used in computer vision?Body: Nowadays, CV has really achieved great performance in many different areas. However, it is not clear what a CV algorithm is.
+What are some examples of CV algorithms that are commonly used nowadays and have achieved state-of-the-art performance?
+"
+"['comparison', 'gradient-descent', 'stochastic-gradient-descent', 'gradient', 'batch-size']"," Title: What is the relationship between gradient accumulation and batch size?Body: I am currently training some models using gradient accumulation since the model batches do not fit in GPU memory. Since I am using gradient accumulation, I had to tweak the training configuration a bit. There are two parameters that I tweaked: the batch size and the gradient accumulation steps. However, I am not sure about the effects of this modification, so I would like to fully understand what is the relationship between the gradient accumulation steps parameter and the batch size.
+I know that when you accumulate the gradient you are just adding the gradient contributions for some steps before updating the weights. Normally, you would update the weights every time you compute the gradients (traditional approach):
+$$w_{t+1} = w_t - \alpha \cdot \nabla_{w_t}loss$$
+But when accumulating gradients you compute the gradients several times before updating the weights (being $N$ the number of gradient accumulation steps):
+$$w_{t+1} = w_t - \alpha \cdot \sum_{0}^{N-1} \nabla_{w_t}loss$$
+My question is: What is the relationship between the batch size $B$ and the gradient accumulation steps $N$?
+By example: are the following configurations equivalent?
+
+- $B=8, N=1$: No gradient accumulation (accumulating every step), batch size of 8 since it fits in memory.
+- $B=2, N=4$: Gradient accumulation (accumulating every 4 steps), reduced batch size to 2 so it fits in memory.
+
+My intuition is that they are but I am not sure. I am not sure either if I would have to modify the learning rate $\alpha$.
+"
+"['neural-networks', 'training', 'chat-bots', 'generative-model']"," Title: How to combine several chatbots into one?Body: I'm in the middle of a project in which I want to generate a TV series script (characters answering to each other, scene by scene) using SOTA models, and I need some guidance to simplify my architecture.
+My current intuition is as follows: for a given character C1, I have pairs of sentences from the original scripts where C1 answers other characters, for example, C2 (C2->C1). These are used to fine-tune a data-driven chatbot. At inference time, the different chatbots simply answer each other, and, hopefully, the conversation will have some sense.
+This is however unpractical and will be kind of a mess with many characters, especially if I use heavy models.
+Is there an architecture out there that could be used for conversational purposes, which could be trained only once with the whole dataset while separating the different characters?
+I'm open to any ideas!
+"
+"['machine-learning', 'reinforcement-learning', 'deep-learning', 'dqn', 'reward-design']"," Title: Designing a reward function for my reinforcement learning problemBody: I'm working on a project lately and I'm trying to solve a problem with reinforcement learning and I have serious issues with shaping the reward function.
+The problem is designing a device with maximum efficiency. We simulated the problem as follows. There is a 4x4 grid (we defined a 4x4 matrix), and the elements of this matrix can be either 0 or 1 (value 0 means "air" and 1 means a certain material in reality), so there are 2^16 possible configurations for this matrix. Our agent starts from the top-left corner of this matrix and has 5 possible actions: move up, down, left, right and flip (which means flipping a 0 to 1 or vice versa). After a flip action, we get a new configuration, and each configuration has an efficiency (which is calculated from Maxwell's equations in the background).
+Our goal is to find the best configuration so that the efficiency of the device is maximum.
+So far we have tried many reward functions and none of them seemed to work at all! I will mention some of them:
+
+- reward = current_efficiency - previous_efficiency (the efficiency is calculated at each time step)
+
+-
+
+    if current_efficiency > previous_efficiency:
+        reward = current_efficiency
+        previous_efficiency = current_efficiency
+
+-
+
+    diff = current_efficiency - previous_efficiency
+    if diff > 0:
+        reward = 1
+    else:
+        reward = -2
+
+and some other variations. Nothing is working for our problem and the agent doesn't learn at all! So far, we have used different variants of DQN and also the A2C method, with no positive results. We tried different definitions of states as well, but we don't think that is the problem.
+So, can somebody maybe help me with this? It would be a huge help!
+"
+"['machine-learning', 'reinforcement-learning', 'game-ai', 'getting-started', 'gaming']"," Title: What would be the good choice of algorithm to use for character action selection in an RPG, implemented in Python?Body: I have developed an RPG in likeness to the features showcased in the Final Fantasy series; multiple character classes which utilise unique action sets, sequential turn-based combat, front/back row modifiers, item usage and a crisis mechanic which bears similarity to the limit break feature.
+The problem is that the greater portion of my project depends on using some form of machine learning to act, in some manner, as an actor in the game environment. However, I do not know my options in the bare-bones environment of a command-line game; I am more familiar with the use of pixel data and a neural network for action selection on a frame-by-frame basis.
+Could I use reinforcement learning to learn a policy for action selection in a custom environment, or should I apply a machine learning algorithm to character data (see the example outlined below) to determine the best action to use in a particular turn state?
++-------+--------+--------+---------+---------+------------+---------------------+------------+--------------+--------------------+--------------+--------------+
+| Level | CurrHP | CurrMP | AtkStat | MagStat | StatusList | TargetsInEnemyParty | IsInCrisis | TurnNumber | BattleParams |ActionOutput | DamageOutput |
++-------+--------+--------+---------+---------+------------+---------------------+------------+--------------+--------------------+--------------+--------------+
+| 65 | 6500 | 320 | 47 | 56 | 0 | Behemoth |0 | 7 | None |ThiefAttack |4254 |
+| 92 | 8000 | 250 | 65 | 32 | 0 | Behemoth |1 | 4 | None |WarriorLimit |6984 |
++-------+--------+--------+---------+---------+------------+---------------------+------------+--------------+--------------------+--------------+--------------+
+
+I would like to prioritise the ease of implementation of an algorithm over how optimal the potential algorithm could be, I just need a baseline to work towards. Many thanks.
+"
+"['deep-learning', 'image-recognition', 'papers']"," Title: Why should the baseline's prediction be near zero, according to the Integrated Gradients paper?Body: I am trying to understand Intagrated Gradients, but have difficulty in understanding the authors' claim (in section 3, page 3):
+
+For most deep networks, it is possible to choose a baseline such that the prediction at the baseline is near zero ($F(x') \approx 0$). (For image models, the black image baseline indeed satisfies this property.)
+
+They are talking about a function $F : R^n \rightarrow [0, 1]$ (in 2nd paragraph of section 3), and if you consider a deep learning classification model, the final layer would be a softmax layer. Then, I suspect for image models, the prediction at the baseline should be close to $1/k$, where $k$ is the number of categories. For CIFAR10 and MNIST, this would equal to $1/10$, which is not very close to $0$. I have a binary classification model on which I am interested in applying the Integrated Gradients algorithm. Can the baseline output of $0.5$ be a problem?
+Another related question is, why did they choose a black image as the baseline in the first place? The parameters in image classification models (in a convolution layer) are typically initialized around $0$, and the input is also normalized. Therefore, image classification models do not really care about the sign of inputs. I mean we could multiply all the training and test inputs with $-1$, and the model would learn the task equivalently. I guess I can find other neutral images other than a black one. I suppose we could choose a white image as the baseline, or maybe the baseline should be all zero after normalization?
+"
+"['reinforcement-learning', 'deep-rl']"," Title: In Deep Q-learning, are the target update frequency and the batch training frequency related?Body: In a Deep Q-learning algorithm, we perform a batch training every train_freq
and we update the parameters of the target network every target_update_freq
. Are train_freq
and target_update_freq
necessary related, e.g., one should be always greater than the other, or must they be independently optimized depending on the problem?
+EDIT: Changed the name of batch_freq to train_freq.
+"
+"['machine-learning', 'deep-learning', 'computer-vision']"," Title: Can imbalance data create overfitting?Body: I am doing human activity recognition project. I have total of 12 classes. The class distribution look like this:
+
+$\color{red}{If \ you \ watch \ carefully, you \ can \ see \ that \ I \ have \ no \ data \ points \ for \ class \ 11 \ and \ class \ 8.}$ Also, the dataset is highly imbalanced, so I took the minimum number of data points (in this case 2028) for all of the classes. Now my balanced data looks like this:
+
+After doing this, it looks like balanced data. $\color{red}{But \ still, \ I \ think \ it \ is \ not, \ because \ I \ have \ zero \ data \ points \ for \ class \ 11 \ and \ class \ 8.}$ In my opinion, the classes are still imbalanced.
+I am using CNN model to solve this activity project. My model summary is following:
+
+The main problem is, my model starts overfitting heavily when I train it.
+
+Is it due to my imbalanced data (classes 8 and 11 have zero data points) or something else?
+$\textbf{Hyperparameters:}$
+$\textbf{features:}$ X, Y, Z of mobile accelerometer
+$\textbf{frame size:}$ 80
+$\textbf{optimizer:}$ Adam, $\textbf{Learning rate:}$ 0.001
+$\textbf{Loss:}$ Sparse categorical cross-entropy
+"
+"['reinforcement-learning', 'sample-efficiency']"," Title: How to measure sample efficiency of a reinforcement learning algorithm?Body: I want to know if there is any metric to use for measuring sample-efficiency of a reinforcement learning algorithm? From reading research papers, I see claims that proposed models are more sample efficient but how does one reach this conclusion when comparing reinforcement learning algorithms?
+"
+"['machine-learning', 'computer-vision', 'facial-recognition']"," Title: How to determine when the image is steady enough in a video sequence to take photos?Body: How do I calculate the points in a video sequence where the images are steady enough for a photo. For example, I want to take maybe 20 photos for a facial recognition dataset. Instead of asking the subject to hold still 20 times, I take a moving video of him and filter the images for 20 good photos. The problem is that the subject face has to move into different poses and facial expressions and many times the movements blur the quality of the image.
+Thanks in advance.
+"
+"['machine-learning', 'math', 'support-vector-machine', 'mapping-space']"," Title: How to understand mapping function of kernel?Body: For a kernel function, we have two conditions one is that it should be symmetric which is easy to understand intuitively because dot products are symmetric as well and our kernel should also follow this. The other condition is given below
+There exists a map $φ:R^d→H$ called kernel feature map into some high dimensional feature space H such that $∀x,x'$ in $R^d:k(x,x') = <φ(x),φ(x')>$
+I understand that this means that there should exist a feature map that will project the data from low dimension to any high dimension $D$ and kernel function will take the dot product in that space.
+For example, the Euclidean distance is given as
+$d(x,y)=∑_i(x_i−y_i)^2=<x,x>+<y,y>−2<x,y>$
+If I look at this in terms of the second condition, how do we know that no such feature map exists for the Euclidean distance? What exactly are we looking for in feature maps, mathematically?
+"
+"['reinforcement-learning', 'algorithmic-trading']"," Title: Looping over Sarsa algorithm for better Q valuesBody: Let's say an RL trading system places trades based on pricing data.
+Each episode represents 1 hour of trading, and there are 24 hours of data available. The Q table represents for a given state, what is the most action with the highest utility.
+The state is a sequence of prices, and the action is either buy, hold, sell.
+Instead of "Loop for each episode" as per the Sarsa algorithm :
+
+I add an additional outer loop. Now instead of just looping for each episode we have:
+for epoch in range(N):
+    "Loop for each episode"   # the original Sarsa loop over the 24 one-hour episodes
+
+Manually set N or exit out of the loop on convergence.
+Is this the correct approach? Iterating multiple times over the episodes should produce more valuable state-action pairs in the Q table, because epsilon-greedy is not deterministic, and on each pass it may exploit an action to greater reward than it did on other passes.
+"
+"['neural-networks', 'convolutional-neural-networks', 'comparison', 'convolution', 'cross-correlation']"," Title: Do convolutional neural networks perform convolution or cross-correlation?Body: Typically, people say that convolutional neural networks (CNN) perform the convolution operation, hence their name. However, some people have also said that a CNN actually performs the cross-correlation operation rather than the convolution. How is that? Does a CNN perform the convolution or cross-correlation operation? What is the difference between the convolution and cross-correlation operations?
+"
+"['algorithm', 'performance', 'support-vector-machine', 'accuracy', 'sigmoid']"," Title: Which kind of data does sigmoid kernel performance well?Body: While I was playing with some hyperparameters, I came to a wired situation. My dataset is IRIS dataset to be specific. SVM algorithm has some hyperparameters that we can tune, such as Kernels, and C value.
+(All accuracy calculations and SVM are from sklearn package to be specific)
+I made a comparison between kernels and noticed sigmoid kernel was performing way worse in terms of accuracy. It is more than 3 times less accuracy than RBF, Linear, and Polynomial. I do know that kernels are quite data-sensitive and data-specific, but I would like to know "Which types of data is sigmoid kernel good at any example? or is this my fault due to wrong C value for sigmoid kernel?"
+"
+"['proofs', 'logic']"," Title: When to use AND and when to use Implies in first-order logic?Body: I am trying to learn the theory behind first-order logic (FOL) and do some practice runs of converting statements into the form of FOL.
+One issue I keep running into is hesitating on whether to use an AND ($\land$) statement or an IMPLIES ($\rightarrow$) statement.
+I have seen examples such as "Some boys are intelligent" turned into:
+$$
+\exists x \text{boys}(x) \land \text{intelligent}(x)
+$$
+Can I make a general assumption that when I see $x$ is/are $y$, I can use an AND?
+With a statement such as "All movies directed by M. Knight Shamalan have a supernatural character", I feel that that statement can be translated to either:
+$$
+\forall x, \exists y \; \text{directed}(\text{Shamalan}, x) \rightarrow \text{super-natural character}(y)
+$$
+or
+$$
+\forall x, \exists y \; \text{directed}(\text{Shamalan}, x) \land \text{super-natural character}(y)
+$$
+Is there a better way to distinguish between when to use one or the other?
+"
+"['reinforcement-learning', 'reference-request', 'books']"," Title: What introductory books to reinforcement learning do you know, and how do they approach this topic?Body: Currently, I'm only going through these two books
+
+What other introductory books to reinforcement learning do you know, and how do they approach this topic?
+"
+"['neural-networks', 'features']"," Title: Feature scaling strategy for many feature with very large variation between them?Body: I was running into a situation in which my input feature experience a very large variation in term of magnitude.
+Particularly, consider feature 1 belong to group 1 and feature 2 3 4 belong to group 2,
+Like this picture below
+
+I was really worried that in this case feature 1 might dominate features 2, 3, 4 (group 2), because its corresponding values are so large (I was trying to train this dataset on a neural network).
+In this situation, what would be the appropriate scaling strategy?
+Update: I know for sure that the value of feature 1 is an integer that is uniform on the interval [22,42].
+But for features 2, 3, 4 I do not have any insight.
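+
+For reference, a minimal sketch of per-feature standardization, one candidate strategy (the values used for features 2, 3 and 4 are invented purely for illustration, since I have no insight into them):
+
+import numpy as np
+
+rng = np.random.default_rng(0)
+X = np.column_stack([
+    rng.integers(22, 43, size=5),      # feature 1: large-magnitude integers in [22, 42]
+    rng.random(5) * 0.1,               # features 2-4: invented small values
+    rng.random(5) * 0.1,
+    rng.random(5) * 0.1,
+])
+X_std = (X - X.mean(axis=0)) / X.std(axis=0)    # per-column standardization
+print(X_std.mean(axis=0), X_std.std(axis=0))    # roughly zero mean, unit variance per feature
+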
+Thank you for your enthusiasm!
+"
+"['machine-learning', 'reinforcement-learning', 'computational-learning-theory', 'vc-dimension', 'vc-theory']"," Title: Is the VC Dimension meaningful in the context of Reinforcement Learning?Body: Is the VC dimension meaningful for reinforcement learning (RL), as a machine learning (ML) method? How?
+"
+"['machine-learning', 'papers', 'notation', 'explainable-ai']"," Title: What do the notations $\sim$ and $\Delta (A) $ mean in the paper ""Fairness Through Awareness""?Body: In this paper Fairness Through Awareness, the notation $\mathbb{E}_{x \sim V} \mathbb{E}_{a \sim \mu_x} L(x,a)$ is being used (page 5 top line), where $V$ denotes the set of individuals (so I guess set of feature vectors?) and the meaning of the other variables can be found in the paragraph above the mentioned notation. What does the $\sim$ in the expectation stand for?
+Another notation that I do not know is $\Delta (A) $, where $A$ is the set of outcomes, for instance, $A = \{ 0,1\}$. What does it stand for?
+"
+"['machine-learning', 'reference-request', 'applications', 'research']"," Title: Creating 4k HDR video from 720p footageBody: So, my company recently bought a big 4k HDR TV for our reception, where we keep showing some videos that were originally shot/created at 720p resolution. Before this, we had a relatively small HD TV, so not a problem. Because the old videos now look dated, my boss wanted to upscale them and enhance their coloring to avoid shooting or procuring new animated videos.
+This sounded like a fun project, but I know little about AI, and less so about video encoding/decoding. I've started researching and found some references, such as Video Super-Resolution via Bidirectional
+Recurrent Convolutional Networks, so while it seems like I have homework to do, it's clearly "been done before". Would be great to find some code that works on standard formatted videos though.
+What I'm struggling to find, but would need some good basis to answer in the negative, is: What about HDR? I'm not finding the research terms nor any mention on result for improving dynamic range on videos. Is there any research done on that? Though actual HDR is a format, most of the shots and pictures used for our videos were taken on cameras with small color gamut and latitude, thus everything looks "washed" and the new TV really makes this obvious by comparing against demo videos.
+PS:
+Unlike much of the literature I'm finding, I'm not aiming at real-time super resolution, it would be great if it took less than one night to process for a 10 minute video though.
+"
+"['reinforcement-learning', 'deep-rl']"," Title: Two DQNs in two different time scalesBody: I have the following situation. An agent plays a game and wants to maximize the accumulated reward as usual, but it can choose its adversary. There are $n$ adversaries.
+In episode $e$, the agent must first select an adversary. Then for each step $t$ in the episode $e$, it plays the game against the chosen adversary. Every step $t$, it receives a reward following the chosen action in step $t$ (for the chosen adversary). How to maximize the expected rewards using DQN? It is clear that choosing the "wrong" (the strongest) adversary won't be a good choice for the agent. Thus, to maximize the accumulated rewards, the agent must take two actions at two different timescales.
+I started solving it using two DQNs, one to decide the adversary to play against and one to play the game against the chosen adversary. I have two duplicate sets of hyperparameters (batch_size, target_update_freq, etc.), one for each DQN. Have you ever seen two DQNs used like this? Should I train the two DQNs simultaneously?
+The results that I am getting are not that good: the accumulated reward is decreasing, and the loss isn't always decreasing...
+"
+"['machine-learning', 'q-learning', 'dqn']"," Title: How to know if my DQN is optimized?Body: I made a DQN that controls a traffic light. The observation states are the number of vehicles of each lane in the intersection. I trained it for 500 episodes and saved the model every 50th episode. I plotted the reward curve of the model after the training and found out that around the 460th episode, the reward curve become unstable. Does it mean that the optimized DQN model is the 450th model? If not, how do I know if the my DQN is really optimized?
+"
+"['reinforcement-learning', 'pytorch']"," Title: In layman's terms, what is stochastic computation graph?Body: I'm going through the distributions
package on PyTorch's documentation and came across the term stochastic computation graph. In layman's terms, what is it?
+"
+"['generative-adversarial-networks', 'pretrained-models']"," Title: Can StyleGAN be refined without a full training?Body: Can I refine StyleGAN or StyleGAN2 without retraining it for many days, such that its pretrained model is trained to generate only faces similar to a (rather small) set of reference images?
+I would like to avoid creating a large dataset and training for many days to weeks, but use the existing model and just bias it towards a set of images.
+"
+"['reinforcement-learning', 'policies', 'off-policy-methods', 'importance-sampling']"," Title: Does the off-policy evaluation work for non-stationary policies?Body: As the title says, in reinforcement learning, does the off-policy evaluation work for non-stationary policies?
+For example, IS (importance sampling)-based estimators, such as weighted IS or doubly robust, are still unbiased when they are used to evaluate UCB1, which is a non-stationary policy, as it chooses an action based on the history of rewards?
+"
+"['reinforcement-learning', 'q-learning', 'markov-decision-process', 'discount-factor', 'semi-mdp']"," Title: Updating action-value functions in Semi-Markov Decision Process and Reinforcement LearningBody: Suppose that the transition time between two states is a random variable (for example, unknown exponential distribution); and between two arrivals, there is no reward. If $\tau$ (real number not an integer number) shows the time between two arrivals, should I update Q-functions as follows:
+$Q(s,a) = Q(s,a)+\alpha \left(R+\gamma^{\tau} \max_{b \in A}Q(s^{\prime},b)-Q(s,a)\right)$
+And, to compare different algorithms, the total reward ($TR=R_{1}+R_{2}+R_{3}+\dots+R_{T}$) is used.
+What measure should be used in the SMDP setting? I would be thankful if someone can explain the Q-Learning algorithm for the SMDP problem with this setting.
+Moreover, I am wondering when the Q-functions are updated. For example, if a customer enters our website and purchases a product, we want to update the Q-functions. Suppose that the planning horizon (state $S_{0}$) starts at 10:00 am, the first customer enters at 10:02 am, we sell a product and gain $R_1$, and the state becomes $S_1$. The next customer enters at 10:04 am, buys a product, and we gain reward $R_2$ (state $S_{2}$). In this situation, should we wait until 10:02 to update the Q-function for state $S_0$?
+Is the following formula correct?
+$$V(S_0)= R_1 \gamma^2+ \gamma^2V(S_1)$$
+In this case, if I discretize the time horizon to 1-minute intervals, the problem will be a regular MDP problem. Should I update Q-functions when no customer enters in a time interval (reward =0)?
+"
+"['reinforcement-learning', 'policy-gradients', 'actor-critic-methods', 'ddpg']"," Title: In Deep Deterministic Policy Gradient, are all weights of the policy network updated with the same or different value?Body: I'm trying to understand the DDPG algorithm shown at this page. I don't know what should the result of the gradient at step 14 be.
+
+Is it a scalar that I have to use to update all the weights (so all weights are updated with the same value)? Or is it a list with a different values to use for updating for each weight? I'm used to working with loss functions and an $y$ target, but here I don't have them so I'm quite confused.
+"
+"['reinforcement-learning', 'policy-gradients', 'actor-critic-methods', 'off-policy-methods']"," Title: Is this figure a correct representation of off-policy actor-critic methods?Body:
+Does this figure correctly represent the overall general idea about actor-critic methods for on-policy (left) and off-policy (right) case?
+I am a bit confused about the off-policy case (right figure). Why does the right figure represent the off-policy actor-critic methods?
+"
+"['machine-learning', 'reinforcement-learning', 'long-short-term-memory', 'open-ai', 'time-series']"," Title: How do I test an LSTM-based reinforcement learning model using any Atari games in OpenAI gym?Body: I am writing a couple of different reinforcement learning models based on Rainbow DQN or some PG models. All of them internally use an LSTM network because my project is using time series data.
+I wanted to test my models using OpenAI Gym before I add too many domain specific code to the models.
+The problem is that, all of the Atari games seem to fall into the CNN area which I don't use.
+Is it possible to use OpenAI Gym to test any time series data driven RL models/networks?
+If not, is there any good environment that I can use to examine the validity of my models?
+"
+"['machine-learning', 'reinforcement-learning', 'policy-iteration']"," Title: Why care about the value of the action which I'm not gonna take in policy iteration?Body: In this article, there is an explanation (with an example) of how policy iteration works.
+It seems that, if we replace all the probabilities of moves in the example by new probabilities where the best action is taken 100% of the time and the other moves are taken 0% of the time, then the final policy will end up to be (south, south, north) as in the example provided. However, if we are certain of our moves, we can go north and then south in the example to get most of the reward.
+In other words, it seems to be incorrect to calculate the value of a state by summing up rewards for all possible actions out of the state, because, like in the case I described above, or a case where one action gives you a huge penalty, you are 100% gonna avoid it, therefore the value state is unfairly weighted.
+Why care about the value of the action which I'm not gonna take?
+"
+"['machine-learning', 'reinforcement-learning', 'q-learning', 'dqn']"," Title: Is there a way to show convergence of DQN other than by eye observation?Body: I made a DQN model and plot its reward curve. You can see intuitively that the curve already converged since its reward value now just oscillates. How can I show confidence that my DQN already reached its optimal other than by just showing the curve? Are there any way to validate that it is already optimized?
+"
+"['machine-learning', 'reinforcement-learning', 'q-learning', 'dqn']"," Title: How to validate that my DQN hyperparameters are the optimal?Body: My DQN model outputs the best traffic light state in an intersection. I used different values of batch size and learning rate to find the best model. How would I know if I got the optimal hyperparameter values?
+"
+"['neural-networks', 'machine-learning', 'reinforcement-learning', 'deep-learning', 'q-learning']"," Title: What's the best practice for Boltzmann Exploration temperature in RL?Body: I'm currently modeling DQN in Reinforcement Learning. My question is: what are the best practices related to Boltzmann Exploration? My current thoughts are: (1) Let the temperature decay through training and finally stop at 0.01, when the method will always select the best practice, with almost no randomness. (2) Standardize the predicted Q values before feeding into the softmax function.
+Currently, I'm using (2), and the reward is suffering from high variance. I'm wondering whether it has something to do with the exploration method?
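+To make (1) and (2) concrete, here is a minimal numpy sketch (the decay schedule and the 0.01 floor are just the values mentioned above, and the random Q values stand in for the network's estimates):
+import numpy as np
+
+def boltzmann_action(q_values, temperature):
+    q = (q_values - q_values.mean()) / (q_values.std() + 1e-8)  # (2) standardize the Q values
+    logits = q / temperature
+    logits -= logits.max()                                       # numerical stability
+    probs = np.exp(logits) / np.exp(logits).sum()
+    return np.random.choice(len(q_values), p=probs)
+
+temperature, t_min, decay = 1.0, 0.01, 0.999
+for step in range(10000):
+    q_values = np.random.randn(4)                  # stand-in for the network's Q estimates
+    action = boltzmann_action(q_values, temperature)
+    temperature = max(t_min, temperature * decay)  # (1) decay towards the floor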
+"
+"['machine-learning', 'reinforcement-learning', 'image-processing', 'image-segmentation', 'image-generation']"," Title: Is Reinforcement Learning what I need for this image to image translation problem?Body: I have a paired dataset of binary images A and B: A1 paired with B1, A2-B2, etc., with simple shapes (rectangles, squares).
+The external software receives both images A and B and it returns a number that represents the error.
+I need a model that, given images A and B, can modify A into A' by adding or removing squares, so that the error from the software is minimized. I don't have access to the source code of the software so I don't know how it works.
+I tried to make a NN that copies the functionality of the software, and a generative NN to generate the modified image A' but I haven't got good results.
+The software can only receive binary images, so I cannot use an ordinary loss function: since the last layer of my generator is a softmax, if I apply a threshold to binarize its output, I lose track of the gradients, so I cannot apply gradient descent.
+Someone told me that when you cannot calculate the gradient of the loss with respect to the weights, reinforcement learning with policy gradients is a good solution.
+I'm new to this field, so I want to be sure I'm going in the right direction.
+"
+"['machine-learning', 'state-of-the-art']"," Title: Can we give a command to an AI and wait for it to do the job without explicitly telling it how to do it?Body: I am a computer science student. I learned about programming languages recently, but I don't know much about artificial intelligence.
+I want to know, why don't we program something in a way that we could tell the program
+
+Hey! Do this for me!
+
+And then just sit down and wait for the AI to do the job?
+Is this currently possible to do?
+"
+"['deep-learning', 'computer-vision', 'feature-extraction', 'sentiment-analysis', 'data-mining']"," Title: What are some good papers or resources for aspect extraction and opinion modelling from video or audio?Body: I am quite new to deep learning. I just finished the deep learning specialization by Professor Andrew NG and Deep Learning AI. Now, my professor (instructor) has advised me to look into some classic papers for aspect extraction and opinion mining from video. Could anyone suggest me some resources where I can get started? Can anyone suggest some papers I should read? Maybe a course or a book or some links to descriptive sessions. Your help would be appreciated.
+"
+"['reinforcement-learning', 'q-learning', 'deep-rl', 'experience-replay']"," Title: What would happen if we sampled only one tuple from the experience replay?Body: The concept of experience replay is saving our experiences in our replay buffer. We select at random to break the correlation between consecutive samples, right?
+What would happen if we calculate our loss using just one experience instead of a mini-batch of experiences?
+"
+"['convolutional-neural-networks', 'filters', 'weights']"," Title: Does the number of parameters in a convolutional neuronal network increase if the input dimension increases?Body: If I have a convolutional neuronal network, does the input dimension change the number of parameters? And if yes, why? If the sizes and lengths of the filters are still the same, how can the number of parameter in a network increase?
+"
+"['reinforcement-learning', 'optimization', 'time-complexity', 'finite-markov-decision-process', 'interpolation']"," Title: Continuous state and continuous action Markov decision process time complexity estimate: backward induction VS policy gradient method (RL)Body: Model Description: Model based(assume known of the entire model) Markov decision process.
+Time($t$): Finite horizon discrete time with discounting factor
+State($x_t$): Continuous multi-dimensional state
+Action($a_t$): Continuous multi-dimensional action (deterministic action)
+Feasible(possible action) ($A_t$): possible action is a continuous multi-dimensional space, no discretization for the action space.
+Transition kernel($P$): known and have some randomness associated with the current state and next state
+Reward function: known in explicit form, and the terminal reward is known.
+The method I tried to solve the model:
+
+- Discretize the state space and construct multi-dimensional grid for state space, starting from the terminal state, I used backward induction to reconstruct value function from the previous period. By using the Bellman equation, I need to solve an optimization problem, selecting the best action that gives me the best objective function.
+
+$$V_t(x_t) = max_{a_t \in A_t}[R_t(x_t,a_t) + E[\tilde{V}_{t+1}(x_{t+1})|x_t, a_t]]$$
+where $\tilde{V}_{t+1}$ here is an approximation using interpolation method, since the discrete values are calculated from the previous time episode. In other words: $\tilde{V}_{t+1}$ is approximated by some discrete value: $$V_{t+1}(x_0),V_{t+1}(x_1),V_{t+1}(x_2)\cdots V_{t+1}(x_N)$$ where $x_0,x_1,x_2\cdots x_N$ are grid points from discretizing the state space.
+In this way, for every time step t, I have a value at every grid point, and the value function can be approximated with some interpolation method (probably a cubic spline). But here are some of the problems:
+1. What kind of interpolation is suitable for high-dimensional data?
+2. Say we have five dimensions for the state; if I discretize each dimension with 5 grid points, there are 5^5 = 3125 discrete state values for which I need to solve the optimization (the curse of dimensionality).
+3. What kind of optimizer should I use? Since I do not know the shape of the objective function, I do not know whether it is smooth or concave, so I may have to use a robust optimizer, probably an evolutionary one. So eventually I end up with this computational complexity, and the computation takes too long.
+And recently I learned the techniques of policy gradient from OpenAI: https://spinningup.openai.com/en/latest/spinningup/rl_intro3.html
+This method avoids the backward induction and the interpolation-based approximation of the value function. Instead, it obtains an approximate policy by first guessing a functional form for the policy and then taking an approximate gradient of the policy objective using a sampling (simulation) method. Since the model is known, it can sample new trajectories at every iteration and use them to update the policy by stochastic gradient ascent, repeating until it reaches some sort of convergence.
+I am wondering if this type of technique could potentially reduce the computational complexity significantly. Any advice helps, thanks a lot.
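+To illustrate the backward-induction-with-interpolation loop I described, here is a minimal 1-D sketch; the transition, the reward, and the coarse action grid are toy assumptions made only to show the structure and where the cost comes from:
+import numpy as np
+from scipy.interpolate import interp1d
+
+T, gamma = 5, 0.95
+x_grid = np.linspace(-2.0, 2.0, 21)      # discretized 1-D state
+a_grid = np.linspace(-1.0, 1.0, 11)      # coarse action grid for the inner maximization
+
+def reward(x, a):                          # assumed toy reward
+    return -(x ** 2 + 0.1 * a ** 2)
+
+def next_state(x, a):                      # assumed deterministic toy transition
+    return np.clip(0.9 * x + a, -2.0, 2.0)
+
+V_next = np.zeros_like(x_grid)             # terminal value, assumed zero
+for t in reversed(range(T)):
+    V_tilde = interp1d(x_grid, V_next, kind="cubic")   # continuous approximation of V_{t+1}
+    V_now = np.empty_like(x_grid)
+    for i, x in enumerate(x_grid):
+        q = [reward(x, a) + gamma * float(V_tilde(next_state(x, a))) for a in a_grid]
+        V_now[i] = max(q)                   # Bellman backup at every grid point
+    V_next = V_now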
+"
+"['transformer', 'attention', 'softmax']"," Title: Is the self-attention matrix softmax output (layer 1) symmetric?Body: Let's assume that we embedded a vector of length 49 into a matrix using 512-d embeddings. If we then multiply the matrix by its transposed version, we receive a matrix of 49 by 49, which is symmetric. Let's also assume we do not add the positional encoding and we only have only one attention head in the first layer of the transformer architecture.
+What would the result of the softmax on this 49 by 49 matrix look like? Is it still symmetric, or is the softmax correctly applied for each line of the matrix, resulting in a non-symmetric matrix? My guess would be that the matrix should not be symmetric anymore. But I'm unsure about that.
+I ask this to verify if my implementation is wrong or not, and what the output should look like. I have seen so many sophisticated and different implementations of the transformer architecture with different frameworks, that I can't answer this question for myself right now (confusion). I still try to understand the basic building blocks of the transformer architecture.
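+To check this concretely, here is a quick numpy sketch of exactly this setup (the score matrix is symmetric, while softmax applied row-wise, i.e. one normalization per query over the keys, is generally not):
+import numpy as np
+
+def softmax_rows(m):
+    e = np.exp(m - m.max(axis=-1, keepdims=True))
+    return e / e.sum(axis=-1, keepdims=True)
+
+X = np.random.randn(49, 512)              # 49 tokens, 512-d embeddings, no positional encoding
+scores = X @ X.T                          # 49 x 49, symmetric
+attn = softmax_rows(scores)               # one softmax per row (per query)
+print(np.allclose(scores, scores.T))      # True
+print(np.allclose(attn, attn.T))          # generally False: row-wise normalization breaks the symmetry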
+"
+"['machine-learning', 'algorithm', 'support-vector-machine', 'cross-validation']"," Title: Is my 57% sports betting accuracy correct?Body: I have been creating sports betting algorithms for many years using Microsoft access and I am transitioning to the ML world and trying to get a grasp on determining the success of my algorithms. I have exported my algorithms as CSV files dating back to the 2013-14 NBA season and imported them into python via pandas.
+The purpose of importing these CSV files is to determine the future accuracy of these algorithms using ML. Here are the algorithm records based on the Microsoft access query:
+
+- A System: 471-344 (58%) +92.60
+
+- B System: 317-239 (57%) +54.10
+
+- C System: 347-262 (57%) +58.80
+
+
+I have a total of 8,814 records in my database, however, the above systems are based on situational stats, e.g., Team A fits an algorithm if they have better Field Goal %, Played Last Game Home/Away, More Points Per Game, etc...
+
+Here is some of the code that I wrote using Jupyter to determine the accuracy:
+# Imports used below
+from sklearn.model_selection import train_test_split, cross_val_score
+from sklearn.svm import LinearSVC
+from sklearn.feature_selection import RFE
+from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
+
+# Hold out 20% of the games for testing
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
+
+# Linear support vector classifier with L2 regularization
+clf = LinearSVC(C=1.0, penalty="l2", dual=False)
+clf.fit(X_train, y_train)
+pred_clf = clf.predict(X_test)
+
+# 10-fold cross-validation accuracy on the full dataset
+scores = cross_val_score(clf, X, y, cv=10)
+
+# Recursive feature elimination down to 10 features
+rfe_selector = RFE(clf, 10)
+rfe_selector = rfe_selector.fit(X, y)
+rfe_values = rfe_selector.get_support()
+
+train = accuracy_score(y_train, clf.predict(X_train))
+test = accuracy_score(y_test, pred_clf)
+
+print("Train Accuracy:", train)
+print("Test Accuracy:", test)
+print(classification_report(y_test, pred_clf, zero_division=1))
+print(confusion_matrix(y_test, pred_clf))
+print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
+
+
+Here are the results from the code above by system:
+A System:
+
+- Train Accuracy: 0.6211656441717791
+- Test Accuracy: 0.5153374233128835
+- F1 Score: 0.52
+- CONFUSION MATRIX: [[16 50] [29 68]]
+- Accuracy: 0.55 (+/- 0.10)
+
+B System:
+
+- Train Accuracy: 0.6306306306306306
+- Test Accuracy: 0.5178571428571429
+- F1 Score: 0.52
+- CONFUSION MATRIX: [[49 23] [31 9]]
+- Accuracy: 0.55 (+/- 0.08)
+
+C System:
+
+- Train Accuracy: 0.675564681724846
+- Test Accuracy: 0.5409836065573771
+- F1 Score: 0.54
+- CONFUSION MATRIX: [[15 29] [27 51]]
+- Accuracy: 0.57 (+/- 0.16)
+
+
+In order to have a profitable system, the accuracy only needs to be 52.5%. If I base my systems off of the Test Accuracy, only the C System is profitable. However, all are profitable if based on Accuracy (mean & standard deviation).
+My question is, can I rely on my Accuracy (mean & standard deviation) for future games even though my Testing Accuracy is lower than 52.5%?
+If not, any suggestions are greatly appreciated on how I can gauge the future results on these systems.
+"
+"['natural-language-processing', 'comparison', 'models', 'machine-translation', 'language-model']"," Title: What are the main differences between a language model and a machine translation model?Body: What are the main differences between a language model and a machine translation model?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'recurrent-neural-networks', 'generative-adversarial-networks']"," Title: Why does my ""entropy generation"" RNN do so badly?Body: I'm new to relatively RNNs, and I'm trying to train generative and guessing neural networks to produce sequences of real numbers that look random. My architecture looks like this (each "circle" in the output is the adverserial network's guess for the generated circle vertically below it -- having seen only the terms before it):
+
+Note that the adversarial network is rewarded for predicting outputs close to the true values, i.e. the loss function looks like tf.math.reduce_max((sequence - predictions) ** 2) (I have also tried reduce_mean).
+I don't know if there's something obviously wrong with my architecture, but when I try to train this network (and I've added a reasonable number of layers), it doesn't really work very well.
+
+If you look at the result of the last code block, you'll see that my generative neural network produces things like
+
+[0.9907787, 0.9907827, 0.9907827, 0.9907827, 0.9907827, 0.9907827, 0.9907827, 0.9907827, 0.9907827, 0.9907827]
+
+But it could easily improve itself by simply training to jump around more, since you'll observe that the adversarial network also predicts numbers very close to the given number (even when the sequence it is given to predict is one that jumps around a lot!).
+What am I doing wrong?
+"
+"['reference-request', 'ontology']"," Title: Are there any existing ontologies that model engineering data?Body: Are there any existing ontologies available for "engineering" data?
+By "engineering" I mean pertaining to the fields of electrical, mechanical, thermal, etc., engineering.
+"
+"['reinforcement-learning', 'policy-based-methods', 'value-based-methods']"," Title: Are policy-based methods better than value-based methods only for large action spaces?Body: In different books on reinforcement learning, policy-based methods are motivated by their ability to handle large (continuous) action spaces. Is this the only motivation for the policy-based methods? What if the action space is tiny (say, only 9 possible actions), but each action costs a huge amount of resources and there is no model for the MDP, would this also be a good application of policy-based methods?
+"
+"['reinforcement-learning', 'policy-gradients', 'proofs', 'sutton-barto', 'policy-gradient-theorem']"," Title: How exactly is $Pr(s \rightarrow x, k, \pi)$ deduced by ""unrolling"", in the proof of the policy gradient theorem?Body: In the proof of the policy gradient theorem in the RL book of Sutton and Barto (that I shamelessly paste here):
+
+there is the "unrolling" step that is supposed to be immediately clear
+
+With just elementary calculus and re-arranging of terms
+
+Well, it's not. :) Can someone explain this step in more detail?
+How exactly is $Pr(s \rightarrow x, k, \pi)$ deduced by "unrolling"?
+"
+"['reinforcement-learning', 'pomdp']"," Title: How to choose an RL algorithm for a Gridworld that models a much more complex problemBody: I am considering using Reinforcement Learning to do optimal control of a complex process that is controlled by two parameters
+$(n_O, n_I), \quad n_I = 1,2,3,\dots, M_I, n_O = 1,2,3,\dots, M_O$
+In this sense, the state of the system is represented $S_t = (n_{O,t}, n_{I,t})$. It is represented, because there is a relatively complex system, a solution of coupled Partial Differential Equations (PDES), actually in the background.
+Is this problem considered a partially observable Markov Decision Process (POMDP) because there is a whole mess of things behind $S_t = (n_{O,t}, n_{I,t})$?
+The reward function has two parameters
+$r(s) = (n_{lt}, \epsilon_\infty)$
+that are results of the environment (solution of the PDEs).
+In a sense, using $S_t = (n_{O,t}, n_{I,t})$ makes this problem similar to Gridworld, where the goal is to go from $S_0 = (M_O, M_I)$ to a state with smaller $(n_O, n_I)$, given reward $r$, where the reward changes from state to state and episode to episode.
+Available action operations are
+$inc(n) = n + 1$
+$dec(n) = n - 1$
+$id(n) = n$
+where $n$ can be $n_I$ or $n_O$. This means there are $9$ possible actions
+$A=\{(inc(n_O), inc(n_I)),(inc(n_O), dec(n_I)),(inc(n_O), id(n_I)),(dec(n_O), inc(n_I)), \dots\}$
+to be taken, but there is no model for the state transition, and the state transition is extremely costly.
+Intuitively, as solving a kinematic equation for a point in space, solving coupled PDEs from fluid dynamics should have the Markov property (strongly if the flow is laminar, for turbulence, I have no idea). I've also found a handful of papers where a fluid dynamics problem is parameterized and a policy-gradient method is simply applied.
+I was thinking to use REINFORCE as a start, but the fact that $(n_O, n_I)$ does not fully describe the state and questions like this one on POMDP and this one about simulations make me suspicious. Could REINFORCE be used for such a problem, or is there something that prevents this?
+"
+"['neural-networks', 'classification', 'activation-functions']"," Title: Why don't we use trigonometric functions for the output neurons?Body: Why don't we use a trigonometric function, such as $\tan(x)$, where $x$ is an element of the interval $[0,pi/2)$, instead of the sigmoid function for the output neurons (in the case of classification)?
+"
+"['reinforcement-learning', 'dqn']"," Title: How to handle changing goals in a DQN?Body: I created a virtual 2D environment where an agent aims to find a correct pose corresponding to a target image. I implemented a DQN to solve this task. When the goal is fixed, e.g. the aim is to find the pose for position (1,1), the agent is successful. I would now like to train an agent to find the correct pose while the goal pose changes after every episode. My research pointed me to the term "Multi-Objective Deep Reinforcement Learning". As far as I understood, the aim here is to train one or multiple agents to achieve a policy approximation that fits all goals.
+Am I on the right track or how should I deal with different goal states?
+"
+['convolutional-neural-networks']," Title: What is meant by the number of channels of a network?Body: Currently, I am reading Rethinking Model Scaling for Convolutional Neural Networks. The authors are talking about a different way of scaling convolutional neural networks by scaling all dimensions simultaneously and relative to each dimension. I understand the scaling methods regarding the depth of a network (# layers) and the resolution (size of the input image).
+What I was stumbling over is the concept of the network's width (# channels). What is meant by the width or the number of channels of a network? I don't think it is the number of color channels, or is this the case? The number of color channels was the only link I found regarding the terms "ConvNets" and "number of channels".
+"
+"['ai-design', 'genetic-algorithms', 'evolutionary-algorithms']"," Title: What is meant by gene, chromosome, population in genetic algorithm in terms of feature selection?Body: I am trying to understand the genetic algorithm in terms of feature selection and these features are extracted using a machine learning algorithm.
+Let's suppose I have heart-rate data for 3 minutes collected from $50$ subjects. From these 3-minute recordings, I extracted $5$ features: the mean, standard deviation, variance, skewness and kurtosis. So the shape of my feature set is (50, 5).
+I want to know what the gene, chromosome and population are in a genetic algorithm in relation to the above scenario.
+What I understand is that each feature is a gene, the set of all features for one subject (1, 5) is a chromosome, and the whole feature set (50, 5) is the population. But I think this concept is not correct, because in a genetic algorithm we take a random population, yet according to my understanding the complete data is the population, so how is the random data selected?
+Can anyone help me to understand it?
+
+"
+"['convolutional-neural-networks', 'tensorflow', 'keras', 'autoencoders']"," Title: How can I have the same input and output shape in an auto-encoder?Body: I'm building a denoising autoencoder. I want to have the same input and output shape image.
+This is my architecture:
+input_img = Input(shape=(IMG_HEIGHT, IMG_WIDTH, 1))
+
+x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
+x = MaxPooling2D((2, 2), padding='same')(x)
+x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
+encoded = MaxPooling2D((2, 2), padding='same')(x)
+
+
+
+x = Conv2D(32, (3, 3), activation='relu', padding='valid')(encoded)
+x = UpSampling2D((2, 2))(x)
+x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
+x = UpSampling2D((2, 2))(x)
+decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
+
+
+# decodedSize = K.int_shape(decoded)[1:]
+
+# x_size = K.int_shape(input_img)
+# decoded = Reshape(decodedSize, input_shape=decodedSize)(decoded)
+
+
+autoencoder = Model(input_img, decoded)
+autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
+
+My input shape is: 1169x827
+This is Keras output:
+Model: "model_6"
+_________________________________________________________________
+Layer (type) Output Shape Param #
+=================================================================
+input_7 (InputLayer) [(None, 1169, 827, 1)] 0
+_________________________________________________________________
+conv2d_30 (Conv2D) (None, 1169, 827, 32) 320
+_________________________________________________________________
+max_pooling2d_12 (MaxPooling (None, 585, 414, 32) 0
+_________________________________________________________________
+conv2d_31 (Conv2D) (None, 585, 414, 64) 18496
+_________________________________________________________________
+max_pooling2d_13 (MaxPooling (None, 293, 207, 64) 0
+_________________________________________________________________
+conv2d_32 (Conv2D) (None, 291, 205, 32) 18464
+_________________________________________________________________
+up_sampling2d_12 (UpSampling (None, 582, 410, 32) 0
+_________________________________________________________________
+conv2d_33 (Conv2D) (None, 582, 410, 32) 9248
+_________________________________________________________________
+up_sampling2d_13 (UpSampling (None, 1164, 820, 32) 0
+_________________________________________________________________
+conv2d_34 (Conv2D) (None, 1162, 818, 1) 289
+===============================================================
+
+How can I have the same input and output shape?
+"
+"['reinforcement-learning', 'deep-rl', 'monte-carlo-methods', 'temporal-difference-methods', 'bias-variance-tradeoff']"," Title: What is the bias-variance trade-off in reinforcement learning?Body: I am watching DeepMind's video lecture series on reinforcement learning, and when I was watching the video of model-free RL, the instructor said the Monte Carlo methods have less bias than temporal-difference methods. I understood the reasoning behind that, but I wanted to know what one means when they refer to bias-variance tradeoff in RL.
+Is bias-variance trade-off used in the same way as in machine learning or deep learning?
+(I am just a beginner and have just started learning RL, so I apologize if it is a silly question.)
+"
+"['reinforcement-learning', 'dqn', 'policy-gradients', 'epsilon-greedy-policy', 'softmax-policy']"," Title: What happens when you select actions using softmax instead of epsilon greedy in DQN?Body: I understand the two major branches of RL are Q-Learning and Policy Gradient methods.
+From my understanding (correct me if I'm wrong), policy gradient methods have an inherent exploration built-in as it selects actions using a probability distribution.
+On the other hand, DQN explores using the $\epsilon$-greedy policy. Either selecting the best action or a random action.
+What if we use a softmax function to select the next action in DQN? Does that provide better exploration and policy convergence?
+"
+['ontology']," Title: Is there an algorithm to calculate the weights of an ontology tree's inner nodes?Body: I have a tree that represents a hierarchical ontology of computer science topics (such as AI, data mining, IR, etc). Each node is a topic, and its child nodes are its sub-topics. Leaf nodes are weighted based on their occurrence in a given document.
+Is there a well-known algorithm or function to calculate the weight of inner nodes based on the weights of leaf nodes? Or is it totally based on the application to decide mathematical calculation of the weights?
+In my application, the node's weights should be some sort of accumulation of its child nodes weights. Is there a better mathematical formula or function to do that than just summing up weights of child nodes?
+I am not asking about traversal, but rather about weighting function.
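+To make "accumulation of child weights" concrete, here is one minimal sketch in Python; the per-level decay factor is just an assumption (setting it to 1.0 gives the plain sum of descendant leaf weights):
+def node_weight(node, leaf_weights, decay=1.0):
+    # node: {"name": str, "children": [...]}; leaf_weights: {leaf name: weight from the document}
+    if not node.get("children"):
+        return leaf_weights.get(node["name"], 0.0)
+    return decay * sum(node_weight(child, leaf_weights, decay) for child in node["children"])
+
+tree = {"name": "AI", "children": [{"name": "data mining", "children": []},
+                                   {"name": "IR", "children": []}]}
+print(node_weight(tree, {"data mining": 2.0, "IR": 1.0}))   # 3.0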
+"
+"['machine-learning', 'dropout', 'human-inspired', 'neuroscience']"," Title: Is some kind of dropout used in the human brain?Body: I've read that ANNs are based on how the human brain works. Now, I am reading about dropout.
+Is some kind of dropout used in the human brain? Can we say that the ability to forget is some kind of dropout?
+"
+"['search', 'heuristics', 'a-star', 'admissible-heuristic']"," Title: Which heuristics guarantee the optimality of A*?Body: The following is a statement and I am trying to figure out if it's true or false and why.
+
+Given a non-admissible heuristic function, A* will always give a solution if one exists, but there is no guarantee it will be optimal.
+
+I know that a non-admissible function is $h(n) > h^*(n)$ (where $h^*(n)$ is the real cost to the goal), but I do not know if there is a guarantee.
+Which heuristics guarantee the optimality of A*? Is the admissibility of the heuristic always a necessary condition for A* to produce an optimal solution?
+"
+"['reinforcement-learning', 'dqn', 'experience-replay']"," Title: How to handle the final state in experience replay?Body: I'm using the DQN algorithm to train my agent to play a turn-based game. The memory replay buffer stores tuples of experiences $(s, a, r, s')$, where $s$ and $s'$ are consecutive states. At the last turn, the game ends, and the non-zero reward is given to the agent. There are no more observations to be made and there is no next state $s'$ to store in the experience tuple. How should the final states be handled?
+"
+"['convolutional-neural-networks', 'datasets', 'reference-request', 'data-preprocessing', 'performance']"," Title: How is the performance of a CNN trained with monochrome images on image recognition tasks degraded?Body: For CNN image recognition tasks, like object recognition/face recognition/object segmentation/posture recognition, are there experiment results about how much will the performance be degraded with monochrome images?
+The imaginary experiment is like:
+
+- Take the existing frameworks, reduce the channel number in the framework to fit monochrome images
+
+- Transform the existing training data and testing data to monochrome images
+
+- Train the model with monochrome training data.
+
+- Use the model to test the monochrome testing data.
+
+- Compare the result with the original result.
+
+
+"
+"['neural-networks', 'comparison', 'training', 'probability-distribution']"," Title: Intuitively, why can the training of a neural network be formulated as a probability estimation problem?Body: Neural network training problems are oftentimes formulated as probability estimation problems (such as autoregressive models).
+How does one intuitively understand this idea?
+"
+"['convolutional-neural-networks', 'keras', 'autoencoders', 'sample-complexity', 'denoising-autoencoder']"," Title: How much data do we need for making a successful de-noising auto-encoder?Body: Is there a guide how much data do you need for making successful denoising model using autoencoders?
+Or the rule is, the more data, the better it is?
+I tried with small dataset 350 samples, to see what I will get as an output. And I failed. :D
+"
+"['reinforcement-learning', 'q-learning', 'exploration-exploitation-tradeoff']"," Title: Why do we explore after we have an accurate estimate of the value function?Body: Suppose we have a small space state and that, after about 2000 episodes, we've accurately explored the environment and known the accurate $Q$ values. In that case, why do we still leave a small probability for exploration?
+My guess is in the case of a dynamic environment where a bigger reward might pop up in another state. Is my assumption correct?
+"
+"['reinforcement-learning', 'resource-request', 'ddpg']"," Title: Is there a good website where I can learn about Deep Deterministic Policy Gradient?Body: Is there a good website where I can learn about Deep Deterministic Policy Gradient?
+"
+"['machine-learning', 'classification']"," Title: What is the advantage of having a stochastic classification procedure?Body: What is the advantage of having a stochastic/probabilistic classification procedure?
+The classifiers I have encountered so far are as follows. Suppose we have two outcomes $A = \{0,1\}$. Given a feature vector $x$, we have calculated a probability for each outcome and return the outcome $a \in A$ for which the probability is highest.
+Now, I encountered a classification procedure as follows: first, map each $x$ to a probability distribution on $A$ by a mapping $H$. To classify $x$, choose an outcome $a$ according to the distribution $H(x)$.
+Why not use the deterministic classification? Suppose $H(x)$ is 1 with probability $0.75$. Then, the obvious choice for an outcome would be $1$ and not $0$.
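+To make the two procedures concrete (using the probabilities from the example above):
+import numpy as np
+
+probs = np.array([0.25, 0.75])                                 # H(x) over the outcomes {0, 1}
+deterministic_pred = int(np.argmax(probs))                     # always 1
+stochastic_pred = int(np.random.choice(len(probs), p=probs))   # 1 about 75% of the time, 0 otherwise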
+"
+"['evolutionary-algorithms', 'artificial-life']"," Title: Artificial life simulator that is fully embodied and passes open endedness testsBody: Geb is an alife simulation that as far as I know passes all of the tests we have tried to come up with in defining open endedness. However, when you actually run the code, the behavioral complexity certainly increases, but the physical bodies of the creatures never changes (and cannot change), and Geb bodies are the only thing present in the world.
+Tool development, or at least developing new physical capabilities and new “actions” seems to be a crucial part of what makes evolution open ended. I think the problem with Geb is that the evolution and progress all takes place in the networks and network production rules, which are systems outside the physical world. They are external systems that take in data from the world and output actions for the agents. So while this rich complexity and innovation is occurring, it’s not integrated with the agents actions and physical bodies.
+This leads to a simple question: is there an alife system that passes all the same tests Geb does, but is “fully embedded” in its world? In the sense that any mechanism agents use to make actions must be part of the physical world of those agents, and subject to the same rules as the bodies of the agents themselves.
+What I’m saying here is loose, you could come up with plenty of weird edge cases that meet what I ask exactly without meeting my intent. And perhaps being fully embodied isn’t necessary, just being more embodied would be enough. What my intent is is to ask if we have any systems that pass the open endedness tests Geb have passed, but have the innovation occurring in a way that leads to emergent growth of “actions” and emergent growth of “bodies” because to evolve and do better those aspects must be improved as well.
+"
+"['deep-learning', 'natural-language-processing', 'applications', 'sequence-modeling', 'transformer']"," Title: Do transformers have success in other domains different than NLP?Body: Everybody knows how successful transformers have been in NLP. Is there known work on other domains (e.g that also have a sequential natural way of occurring, such as stock price prediction or other problems)?
+"
+"['neural-networks', 'deep-learning', 'activation-functions']"," Title: Why is non-linearity desirable in a neural network?Body: Why is non-linearity desirable in a neural network?
+I couldn't find satisfactory answers to this question on the web. I typically get answers like "real-world problems require non-linear solutions, which are not trivial. So, we use non-linear activation functions for non-linearity".
+"
+"['ai-design', 'geometric-deep-learning', 'graph-neural-networks']"," Title: How to design a graph neural network to predict the forces in truss elements of a space frame?Body: I am trying to create a Graph NN that will be able to predict the forces in truss elements of a space frame.
+The input for the NN will be a graph, where the nodes represent the nodes of the spaceframe. And the output should be the forces in the edges of the supplied graph/frame.
+The problem that I am facing is that for the NN to be beneficial I need to encode a lot of data per node:
+
+- 3 floats for the position of the node,
+- 3 floats for the vector of the force applied to the node,
+- a boolean/int to determine whether a node is a support.
+
+I am not sure how to design my Graph NN to allow for so many parameters per node.
+Maybe I should try a different approach?
+Any help is greatly appreciated!
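+For illustration, here is a small sketch of how those per-node quantities could be packed into a single feature matrix; the numbers and the edge list are made up, and most graph-NN libraries accept an arbitrary feature dimension per node, so seven features per node is not a problem:
+import numpy as np
+
+n_nodes = 4
+positions = np.random.rand(n_nodes, 3)                         # x, y, z of each node
+forces = np.zeros((n_nodes, 3)); forces[2] = [0.0, 0.0, -1.0]  # assumed load on node 2
+is_support = np.array([1.0, 1.0, 0.0, 0.0])                    # support flag encoded as a float
+
+node_features = np.concatenate([positions, forces, is_support[:, None]], axis=1)  # shape (4, 7)
+edge_index = np.array([[0, 1, 1, 2],
+                       [1, 2, 3, 3]])                           # truss members as (source, target) pairs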
+"
+"['reinforcement-learning', 'environment', 'exploration-exploitation-tradeoff']"," Title: How to deal with the addition of a new state to the environment during training?Body: Let's say we have a dynamic environment: a new state gets added after 2000 episodes have been done. So, we leave room for exploration, so that it can discover the new state.
+When it gets to that new state, it has no idea of the Q values, and, since we're past 2000 episodes, our exploration rate is very low. What happens if we try to exploit when all the Q values are 0?
+"
+"['deep-learning', 'convolutional-neural-networks', 'convolution']"," Title: How will the input be preserved as we go deeper in CNN, where dimensions decrease drastically?Body: Our length of feature representation decreases as we go deeper into the CNN, I mean to say that horizontal and vertical lengths decrease while depth(channels) increase. So, how will the input be preserved, since there won't be any data left at the end of the network, where we connect, to say Multi Layer Perceptrons?
+"
+"['reinforcement-learning', 'markov-decision-process', 'pomdp', 'transition-model']"," Title: Does ""transition model"" alone in an MDP imply it's non-deterministic?Body: I am looking at a lecture on POMDP, and the context is that, when the quadcopter can't see the landmarks, it has to use reckoning. And then he mentions the transition model is not deterministic, hence the uncertainty grows.
+Can transition models in MDP be deterministic?
+"
+"['definitions', 'transfer-learning', 'representation-learning', 'self-supervised-learning', 'pretext-tasks']"," Title: Does self-supervised learning require auxiliary tasks?Body: Self-supervised learning algorithms provide labels automatically. But, it is not clear what else is required for an algorithm to fall under the category "self-supervised":
+Some say self-supervised learning algorithms learn on a set of auxiliary tasks [1], also named pretext tasks [2, 3], instead of the task we are interested in. Further examples are autoencoders [4] or word2vec [5]. Here it is sometimes mentioned that the goal is to "expose the inner structure of the data".
+Others do not mention that, implying that some algorithms can be called to be "self-supervised learning algorithms" if they are directly learning the task we are interested in [6, 7].
+Is the "auxiliary tasks" a requirement for a training setup to be called "self-supervised learning" or is it just optional?
+
+Research articles mentioning the auxiliary / pretext task:
+
+- Revisiting Self-Supervised Visual Representation Learning, 2019, mentioned by [3]:
+
+
+The self-supervised learning framework requires only unlabeled data in order to formulate a pretext learning task such as predicting context or image rotation, for which a target objective can be computed without supervision.
+
+
+- Unsupervised Representation Learning by Predicting Image Rotations, ICLR, 2018, mentioned by [2]:
+
+
+a prominent paradigm is the so-called self-supervised learning that defines an annotation free pretext task, using only the visual information present on the images or videos, in order to provide a surrogate supervision signal for feature learning.
+
+
+- Unsupervised Visual Representation Learning by Context Prediction, 2016, mentioned by [2]:
+
+
+This converts an apparently unsupervised problem (finding a good similarity metric between words) into a "self-supervised" one: learning a function from a given word to the words surrounding it. Here the context prediction task is just a "pretext" to force the model to learn a good word embedding, which, in turn, has been shown to be useful in a number of real tasks, such as semantic word similarity.
+
+
+- Scaling and Benchmarking Self-Supervised Visual Representation Learning, 2019:
+
+
+In discriminative self-supervised learning, which is the main focus of this work, a model is trained on an auxiliary or 'pretext' task for which ground-truth is available for free. In most cases, the pretext task involves predicting some hidden portion of the data (for example, predicting color for gray-scale images).
+
+"
+"['comparison', 'supervised-learning', 'self-supervised-learning']"," Title: What is the difference between distant supervision and self-supervision?Body: Weak supervision is supervised learning, with uncertainty in the labeling, e.g. due to automatic labeling or because non-experts labelled the data [1].
+Distant supervision [2, 3] is a type of weak supervision that uses an auxiliary automatic mechanism to produce weak labels / reference output (in contrast to non-expert human labelers).
+According to this answer
+
+Self-supervised learning (or self-supervision) is a supervised learning technique where the training data is automatically labelled.
+
+In the examples of self-supervised learning I have seen so far, the labels were extracted from the input data.
+What is the difference between distant supervision and self-supervision?
+
+
+
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'deep-rl']"," Title: Why do we update the weights of the target network in deep Q learning?Body: I know we keep the target network constant during training to improve stability, but why exactly are we updating the weights of our target network? In particular, if we've already reached convergence, why exactly are we updating the weights of our target network?
+"
+"['reinforcement-learning', 'q-learning', 'exploration-exploitation-tradeoff']"," Title: Why is it not advisable to have a 100 percent exploration rate?Body: During the learning phase, why don't we have a 100% exploration rate, to allow our agent to fully explore our environment and update the Q values, then during testing we bring in exploitation? Does that make more sense than decaying the exploration rate?
+"
+"['neural-networks', 'training', 'algorithm']"," Title: Algorithm to train a neural network against differentiable and non-differentiable databases?Body: Let's say I have two databases, $(\mathbf{x_i}, \mathbf{\hat{p_i}})$ and $(\mathbf{x_j}, \mathbf{\hat{q_j}})$. A neural network with weights $\theta$ can receive an input $\mathbf{x}$ and produce an output $\mathbf{y}$. Mathematically, $\mathbf{y} = f_{NN}(\mathbf{x},\theta)$. To compare the output of my neural network and the database, I need two wrappers, $\mathbf{p}=g(\mathbf{y})$ and $\mathbf{q}=h(\mathbf{y})$.
+The problem is: only $g(\cdot)$ is differentiable while writing $h(\cdot)$ in a differentiable manner would take a huge effort.
+Is there any efficient way to train my neural network to minimize the following loss function?
+$$
+\mathcal{L}(\theta) = \sum_i \left\{g\left[f_{NN}(\mathbf{x_i}, \theta)\right] - \mathbf{\hat{p_i}}\right\}^2 + \sum_j \left\{h\left[f_{NN}(\mathbf{x_j}, \theta)\right] - \mathbf{\hat{q_j}}\right\}^2
+$$
+My thinking
+If I use a gradient-descent-type algorithm, I can only optimize the first part of the loss function while ignoring the second part.
+If I use an evolutionary-type algorithm, I can optimize both parts, but it will take a long time, and I don't make full use of the differentiable property of $g(\cdot)$.
+"
+"['reinforcement-learning', 'q-learning', 'temporal-difference-methods', 'value-functions']"," Title: Why isn't it wise for us to completely erase our old Q value and replace it with the calculated Q value?Body: Why isn't it wise for us to completely erase our old Q value and replace it with the calculated Q value? Why can't we forget the learning rate and temporal difference?
+Here's the update formula (the standard tabular Q-learning update):
+$$Q(s,a) \leftarrow Q(s,a) + \alpha\left[r + \gamma \max_{a'} Q(s',a') - Q(s,a)\right]$$
+"
+"['reinforcement-learning', 'python', 'q-learning', 'game-ai', 'combinatorial-games']"," Title: q learning appears to converge but does not always win against random tic tac toe playerBody: q learning is defined as:
+
+Here is my implementation of q learning of the tic tac toe problem:
+# imports used in this script
+import random
+from datetime import datetime
+import numpy as np
+import matplotlib.pyplot as plt
+
+today = datetime.today()
+model_execution_start_time = str(today.year)+"-"+str(today.month)+"-"+str(today.day)+" "+str(today.hour)+":"+str(today.minute)+":"+str(today.second)
+
+epsilon = .1
+discount = .1
+step_size = .1
+number_episodes = 30000
+
+def epsilon_greedy(epsilon, state, q_table) :
+
+ def get_valid_index(state):
+ i = 0
+ valid_index = []
+ for a in state :
+ if a == '-' :
+ valid_index.append(i)
+ i = i + 1
+ return valid_index
+
+ def get_arg_max_sub(values , indices) :
+ return max(list(zip(np.array(values)[indices],indices)),key=lambda item:item[0])[1]
+
+ if np.random.rand() < epsilon:
+ return random.choice(get_valid_index(state))
+ else :
+ if state not in q_table :
+ q_table[state] = np.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
+ q_row = q_table[state]
+ return get_arg_max_sub(q_row , get_valid_index(state))
+
+def make_move(current_player, current_state , action):
+ if current_player == 'X':
+ return current_state[:action] + 'X' + current_state[action+1:]
+ else :
+ return current_state[:action] + 'O' + current_state[action+1:]
+
+q_table = {}
+max_steps = 9
+
+def get_other_player(p):
+ if p == 'X':
+ return 'O'
+ else :
+ return 'X'
+
+def win_by_diagonal(mark , board):
+ return (board[0] == mark and board[4] == mark and board[8] == mark) or (board[2] == mark and board[4] == mark and board[6] == mark)
+
+def win_by_vertical(mark , board):
+ return (board[0] == mark and board[3] == mark and board[6] == mark) or (board[1] == mark and board[4] == mark and board[7] == mark) or (board[2] == mark and board[5] == mark and board[8]== mark)
+
+def win_by_horizontal(mark , board):
+ return (board[0] == mark and board[1] == mark and board[2] == mark) or (board[3] == mark and board[4] == mark and board[5] == mark) or (board[6] == mark and board[7] == mark and board[8] == mark)
+
+def win(mark , board):
+ return win_by_diagonal(mark, board) or win_by_vertical(mark, board) or win_by_horizontal(mark, board)
+
+def draw(board):
+ return win('X' , list(board)) == False and win('O' , list(board)) == False and (list(board).count('-') == 0)
+
+s = []
+rewards = []
+def get_reward(state):
+ reward = 0
+ if win('X' ,list(state)):
+ reward = 1
+ rewards.append(reward)
+ elif draw(state) :
+ reward = -1
+ rewards.append(reward)
+ else :
+ reward = 0
+ rewards.append(reward)
+
+ return reward
+
+def get_done(state):
+ return win('X' ,list(state)) or win('O' , list(state)) or draw(list(state)) or (state.count('-') == 0)
+
+reward_per_episode = []
+
+reward = []
+def q_learning():
+ for episode in range(0 , number_episodes) :
+ t = 0
+ state = '---------'
+
+ player = 'X'
+ random_player = 'O'
+
+
+ if episode % 1000 == 0:
+ print('in episode:',episode)
+
+ done = False
+ episode_reward = 0
+
+ while t < max_steps:
+
+ t = t + 1
+
+ action = epsilon_greedy(epsilon , state , q_table)
+
+ done = get_done(state)
+
+ if done == True :
+ break
+
+ if state not in q_table :
+ q_table[state] = np.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
+
+ next_state = make_move(player , state , action)
+ reward = get_reward(next_state)
+ episode_reward = episode_reward + reward
+
+ done = get_done(next_state)
+
+ if done == True :
+ q_table[state][action] = q_table[state][action] + (step_size * (reward - q_table[state][action]))
+ break
+
+ next_action = epsilon_greedy(epsilon , next_state , q_table)
+ if next_state not in q_table :
+ q_table[next_state] = np.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
+
+ q_table[state][action] = q_table[state][action] + (step_size * (reward + (discount * np.max(q_table[next_state]) - q_table[state][action])))
+
+ state = next_state
+
+ player = get_other_player(player)
+
+ reward_per_episode.append(episode_reward)
+
+q_learning()
+
+The algorithm player is assigned to 'X' while the other player is 'O':
+ player = 'X'
+ random_player = 'O'
+
+The reward per episode:
+plt.grid()
+plt.plot([sum(i) for i in np.array_split(reward_per_episode, 15)])
+
+renders:
+
+Playing the model against an opponent making random moves:
+## Computer opponent that makes random moves against trained RL computer opponent
+# Random takes move for player marking O position
+# RL agent takes move for player marking X position
+
+def draw(board):
+ return win('X' , list(board)) == False and win('O' , list(board)) == False and (list(board).count('-') == 0)
+
+x_win = []
+o_win = []
+draw_games = []
+number_games = 50000
+
+c = []
+o = []
+
+for ii in range (0 , number_games):
+
+ if ii % 10000 == 0 and ii > 0:
+ print('In game ',ii)
+ print('The number of X game wins' , sum(x_win))
+ print('The number of O game wins' , sum(o_win))
+ print('The number of drawn games' , sum(draw_games))
+
+ available_moves = [0,1,2,3,4,5,6,7,8]
+ current_game_state = '---------'
+
+ computer = ''
+ random_player = ''
+
+ computer = 'X'
+ random_player = 'O'
+
+ def draw(board):
+ return win('X' , list(board)) == False and win('O' , list(board)) == False and (list(board).count('-') == 0)
+
+ number_moves = 0
+
+ for i in range(0 , 5):
+
+ randomer_move = random.choice(available_moves)
+ number_moves = number_moves + 1
+ current_game_state = current_game_state[:randomer_move] + random_player + current_game_state[randomer_move+1:]
+ available_moves.remove(randomer_move)
+
+ if number_moves == 9 :
+ draw_games.append(1)
+ break
+ if win('O' , list(current_game_state)) == True:
+ o_win.append(1)
+ break
+ elif win('X' , list(current_game_state)) == True:
+ x_win.append(1)
+ break
+ elif draw(current_game_state) == True:
+ draw_games.append(1)
+ break
+
+ computer_move_pos = epsilon_greedy(-1, current_game_state, q_table)
+ number_moves = number_moves + 1
+ current_game_state = current_game_state[:computer_move_pos] + computer + current_game_state[computer_move_pos+1:]
+ available_moves.remove(computer_move_pos)
+
+ if number_moves == 9 :
+ draw_games.append(1)
+# print(current_game_state)
+ break
+
+ if win('O' , list(current_game_state)) == True:
+ o_win.append(1)
+ break
+ elif win('X' , list(current_game_state)) == True:
+ x_win.append(1)
+ break
+ elif draw(current_game_state) == True:
+ draw_games.append(1)
+ break
+
+outputs:
+In game 10000
+The number of X game wins 4429
+The number of O game wins 3006
+The number of drawn games 2565
+In game 20000
+The number of X game wins 8862
+The number of O game wins 5974
+The number of drawn games 5164
+In game 30000
+The number of X game wins 13268
+The number of O game wins 8984
+The number of drawn games 7748
+In game 40000
+The number of X game wins 17681
+The number of O game wins 12000
+The number of drawn games 10319
+
+The reward-per-episode graph suggests the algorithm has converged. If the model has converged, shouldn't the number of O game wins be zero?
+"
+"['reinforcement-learning', 'off-policy-methods', 'importance-sampling']"," Title: Why is it the case that off-policy evaluation using importance sampling suffers from high variance?Body: The average return for trajectories, $V^{\pi_e}$(s) is often computed via the importance sampling estimate $$V^{\pi_e}(s) = \frac{1}{n}\sum_{i=1}^n\prod_{t=0}^{H}\frac{\pi_e(a_t | s_t)}{\pi_b(a_t|s_t)}G_i$$ where $G_i$is the reward observed for the $i$th trajectory. Sutton and Barton gives an example whereby the variance could be infinite.
+In general, however, why does this estimator suffer from high variance? Is it because $\pi_e(a_t|s_t)$ is mainly deterministic and, therefore, the importance weight is $0$ for most trajectories, rendering those sample trajectories useless?
+"
+['spiking-neural-networks']," Title: How does Lateral Inhibition Provide Competition among Neurons?Body: I stumbled upon a paper from P.Diehl and M.Cook with the title "Unsupervised learning of digit recognition using spike-timing-dependent plasticity" and I'm trying to understand the logic behind the network connection they made.
+The network is as follows. The inputs (an image of size 28x28) are connected in the usual all-to-all fashion, with positive weights, to an NxN layer of neurons. They are encoded using a Poisson random distribution in which the frequency of spikes of a pixel is set according to the pixel value. The NxN layer is connected one-to-one to an NxN layer of inhibitory neurons. These neurons inhibit all other neurons except the one they are connected to. Thus they are connected all-to-all with one exception.
+According to the paper this provides competition among neurons. I cannot understand how, in this particular connection, competition is provided. How can different neurons inherit different properties? To me it seems that all neurons will inherit the same properties, thus no differences in weights will be made in the training session among all neurons. For example, if the input 5 is passed to the network all weights of all neurons will try to adjust according to 5. Then if input 7 is passed next, all the weights will be updated according to the new number (7). It is expected, though, that some weights will keep the previous adjustment ie that some weights will have the properties of 5 and the others the properties of 7.
+"
+"['reinforcement-learning', 'monte-carlo-methods', 'temporal-difference-methods', 'stationary-policy', 'bootstrapping']"," Title: Why do bootstrapping methods produce nonstationary targets more than non-bootstrapping methods?Body: The following quote is taken from the beginning of the chapter on "Approximate Solution Methods" (p. 198) in "Reinforcement Learning" by Sutton & Barto (2018):
+
+reinforcement learning generally requires function approximation methods able to handle nonstationary target functions (target functions that change over time). In control methods based on GPI (generalized policy iteration) we often seek to learn $q_\pi$ while $\pi$ changes. Even if the policy [pi] remains the same, the target values of training examples are nonstationary if they are generated by bootstrapping methods (DP and TD learning).
+
+Could someone explain why the same is not the case if we use non-bootstrapping methods (such as Monte Carlo that is not allowed infinite rollouts)?
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'objective-functions']"," Title: Should illegal moves be excluded from loss calculation in DQN algorithm?Body: I'm implementing DQN algorithm to train my agent to play a turn-based game. The action space for the game is small, but not all moves are available at all the states. Therefore, when deciding on which action to pick, agent sets Q-values to 0 for all the illegal moves while normalizing the values of the rest.
+During training, when the agent is calculating the loss between policy and target networks, should the illegal actions be ignored (set to 0) so that they don't affect the calculations?
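+For reference, one common pattern (a sketch, not necessarily the right answer) is to mask illegal actions with minus infinity both when acting greedily and when taking the max over the next state, so that they never enter the TD target or the loss:
+import numpy as np
+
+def masked_max_q(q_values, legal_mask):
+    # q_values: (batch, n_actions); legal_mask: 1 for legal moves, 0 for illegal ones
+    masked = np.where(legal_mask == 1, q_values, -np.inf)
+    return masked.max(axis=1)
+
+# targets = rewards + (1 - dones) * gamma * masked_max_q(q_target_next, legal_mask_next)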
+"
+"['reinforcement-learning', 'automated-theorem-proving', 'coq']"," Title: Has reinforcement learning been used to prove mathematical theorems?Body: Coq exists, and there are other similar projects out there. Further, Reinforcement Learning has made splashes in the domain of playing games (a la Deepmind & OpenAI and other less well-known efforts).
+It seems to me that these two domains deserve to be married such that machine learning agents try to solve mathematical theorems. Does anyone know of any efforts in this area?
+I'm a relative novice in both of these domains, but I'm proficient enough at both to take a stab at building a basic theorem solver myself and trying to make a simple agent have a go at solving some basic number theory problems. When I went to look for prior art in the area I was very surprised to find none. I'm coming here as an attempt to broaden my search space.
+"
+"['reinforcement-learning', 'q-learning', 'markov-decision-process', 'sarsa']"," Title: Implementing SARSA for a 2-stage Markov Decision ProcessBody: I am a bit confused as to how exactly I should be implementing SARSA (or Q-learning too) on what is a simple 2-stage Markov Decision Task. The structure of the task is as follows:
+
+Basically, there are three states $\{S_1,S_2,S_3\}$, where $S_1$ is in the first stage, for which the two possible actions are the two yellow airplanes. $S_2$ and $S_3$ are the possible states for the second stage and the feasible actions are the blue and red background pictures, respectively. There is only a reward at the end of the second stage choice. If I call the two first stage actions $\{a_{11},a_{12}\}$ and the four possible second stage actions $\{a_{21},a_{22},a_{23},a_{24}\}$, from left to right, then a sample trial/episode will look like:
+$$S_1, a_{11}, S_2, a_{22},R \quad \text{ or }\quad S_1, a_{11}, S_3, a_{24}, R.$$
+In the paper I am reading, where the figure is from, they used a complicated version of TD$(\lambda)$ in which they maintained two action-value functions $Q_1$ and $Q_2$, one for each stage. On the other hand, I am trying to implement a simple SARSA update for each episode $t$:
+$$Q_{t+1}(s,a)= Q_t(s,a) + \alpha\left(r + \gamma\cdot Q_t(s',a') - Q_t(s,a)\right).$$
+In the first-stage, there is no reward so an actual realization will look like:
+$$Q_{t+1}(S_1, a_{11}) = Q_t(S_1,a_{11})+\alpha\left( \gamma\cdot Q_t(S_3,a_{23}) - Q_t(S_1,a_{11})\right).$$
+I guess my confusion is then how it should look for the second stage of an episode. That is, if we continue the above realization of the task, $S_1, a_{11}, S_3, a_{23}, R$, what should fill in the $?$:
+$$Q_{t+1}(S_3,a_{23}) = Q_t(S_3,a_{23}) + \alpha\left(R +\gamma\cdot Q_t(\cdot,\cdot)-Q_t(S_3,a_{23}) \right)$$
+On one hand, it seems to me that since this is the end of an episode, we assign $0$ to $Q_t(\cdot,\cdot)$. On the other hand, the nature of this task is that the same episode is repeated a total of $T$ (a large number) times, so perhaps we need $Q_t(\cdot,\cdot) = Q_t(S_1,\cdot)$, with the additional action selection in the first stage there.
+I would greatly appreciate it if someone could tell me the right way to go here.
+The link to paper
+"
+"['reinforcement-learning', 'comparison', 'importance-sampling', 'dyna']"," Title: How is trajectory sampling different than normal (importance) sampling in reinforcement learning?Body: I am using Sutton and Barto's book for Reinforcement Learning.
+In Chapter 8, I am having difficulty understanding trajectory sampling.
+I have read the particular section on trajectory sampling (Sec. 8.6) twice (plus a third time partially), but I still do not get how it differs from the normal sampling update, or what its benefits are.
+"
+"['reinforcement-learning', 'python', 'environment', 'gym']"," Title: Should I build an environment from scratch myself or it is not always needed?Body: I am inspired by the paper Neural Architecture Search with Reinforcement Learning to use reinforcement learning for optimizing a child network (learner). My meta-learner (controller or parent network) is an MLP and will take as the reward function a silhouette score. Its output is a vector of real numbers between 0 and 1. These values are k different possibilities for the number of clusters (the goal is to cluster the result of the child network which is an auto-encoder, embedded images are the input to the meta-learner).
+What I am confused about is the environment here and how to implement this network. I was reading this tutorial and the author has used gym library to set the environment.
+Should I build an environment from scratch myself or it is not always needed?
+I appreciate any help, hints, or links to a source that helps me understand RL concepts better. I am new to it and easily get confused.
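+For what it's worth, this is the kind of skeleton I was planning to start from if a custom environment is indeed needed (a sketch assuming the classic gym API; the spaces, numbers, and names are placeholders for my actual problem):
+import gym
+import numpy as np
+from gym import spaces
+
+class ClusteringEnv(gym.Env):
+    def __init__(self, k_max=10, embedding_dim=32):
+        super().__init__()
+        # action = a choice for the number of clusters
+        self.action_space = spaces.Discrete(k_max)
+        # observation = the embedded image produced by the auto-encoder (child network)
+        self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(embedding_dim,), dtype=np.float32)
+
+    def reset(self):
+        self._obs = np.zeros(self.observation_space.shape, dtype=np.float32)  # placeholder embedding
+        return self._obs
+
+    def step(self, action):
+        reward = 0.0  # placeholder: silhouette score of the clustering with `action` clusters
+        done = True   # one-step episode
+        return self._obs, reward, done, {}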
+"
+"['reinforcement-learning', 'reference-request', 'proofs', 'actor-critic-methods', 'reinforce']"," Title: What is the proof that the variance of the gradient estimate in Actor-Critic is smaller than in REINFORCE?Body: The intuition provided when introducing actor-critic algorithms is that the variance of its gradient estimates is smaller than in REINFORCE as, e.g., discussed here. This intuition makes sense for the reasons outlined in the linked lecture.
+Is there a paper/lecture providing a formal proof of that claim for any type of actor-critic algorithm (e.g. the Q Actor-Critic)?
+"
+"['reinforcement-learning', 'rewards', 'off-policy-methods', 'importance-sampling']"," Title: Does importance sampling for off-policy estimation also apply to the case of negative rewards?Body: Importance sampling is a common method for calculating off-policy estimates in RL. I have been reading through some of the original documentation (D.G. Horvitz and D.J. Thompson, Powell, M.J. and Swann, J) and cannot find any restrictions on the reward or value being estimated. However, it seems that there are constraints because the calculation is not what I would expect for RL environments that have negative rewards.
+For example, consider for a given action-state pair ($a_i, s_i$), $\pi_e(a|s) = 0.4$ and $\pi_b(a|s) = 0.6,$ where $\pi_b$ and $\pi_e$ are the behavioral and evaluation policies respectively. Also, assume the reward range is $[-1,0]$, and this action has a reward of $r_{\pi_b}=-0.5$.
+Under the IS definition, the expected reward under $\pi_e$ would be $r_{\pi_e} = \frac{\pi_b(a|s)}{\pi_e(a|s)} r_{\pi_b}$. In this example, $r_{\pi_e}=-0.75$, thus $r_{\pi_e} < r_{\pi_b}$. However, assuming a change of scale of the reward to $[0,1]$, which results in $r_{\pi_b}=0.5$, we get $r_{\pi_e} > r_{\pi_b}$.
+All examples of IS I have seen in the references focus on positive rewards. However, I find myself wondering if this formulation applies to negative rewards too. If this formulation does allow for negative reward structures, I'm not sure how to interpret this result. I'm wondering how changing the scale of the reward could change the order. Is there any documentation on the requirements of the value in IS? Any insight into this would be greatly appreciated!
+"
+"['reinforcement-learning', 'exploration-exploitation-tradeoff']"," Title: Why is 100% exploration bad during the learning stage in reinforcement learning?Body: Why can't we during the first 1000 episodes allow our agent to perform only exploration?
+This will give a better chance of covering the entire space state. Then, after the number of episodes, we can then decide to exploit.
+"
+"['deep-learning', 'backpropagation']"," Title: I need help understanding general back propagation algorithmBody: In section 6.5.6 of the book Deep Learning by Ian et. al. general backpropagation algorithm is described as:
+
+The back-propagation algorithm is very simple. To compute the gradient of some
+scalar z with respect to one of its ancestors x in the graph, we begin by observing
+that the gradient with respect to z is given by dz/dz = 1. We can then compute
+the gradient with respect to each parent of z in the graph by multiplying the current gradient by the Jacobian of the operation that produced z. We continue multiplying by Jacobians, traveling backwards through the graph in this way, until we reach x. For any node that may be reached by going backwards from z through two or more paths, we simply sum the gradients arriving from different paths at that node.
+
+To be specific I don't get this part:
+
+We can then compute the gradient with respect to each parent of z in the graph by multiplying the current gradient by the Jacobian of the operation that produced z.
+
+Can anyone help me understand this with some illustration? Thank you.
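+Here is how far I have gotten on my own: a tiny NumPy sketch of my current understanding, for the chain x -> y = Wx -> z = sum(y^2) (the example is mine, not from the book):
+import numpy as np
+
+W = np.array([[1.0, 2.0], [3.0, 4.0]])
+x = np.array([0.5, -1.0])
+y = W @ x                 # parent of z
+z = np.sum(y ** 2)
+
+grad_z = 1.0              # dz/dz = 1
+# the Jacobian of the operation that produced z (z = sum(y^2)) with respect to y is 2*y
+grad_y = grad_z * 2 * y
+# the Jacobian of y = Wx with respect to x is W, so multiply the current gradient by it
+grad_x = W.T @ grad_y
+print(grad_x)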
+"
+"['reinforcement-learning', 'convolutional-neural-networks', 'dqn']"," Title: Atari Games: Pretrained CNN to accelerate training?Body: DQN for Atari takes considerable training time. For example, the 2015 paper in Nature notes that algorithms are trained for 50 million frames or equivalently around 38 days of game experience in total. One reason is that DQN for image data typically uses a CNN, which is costly to train.
+However, the main purpose of a CNN is to extract the image features. Note that the policy for DQN is represented by a CNN and an output layer equal to the number of discrete actions. Is it possible to use a pretrained DQN to accelerate the training process by fixing the weights of the underlying pretrained CNN, resetting the weights of the output layer, and then running another (possibly different) DQN algorithm to relearn the weights of the output layer? Both DQN algorithms would be run on the same underlying environment.
+"
+"['reinforcement-learning', 'markov-decision-process', 'rewards', 'return', 'discount-factor']"," Title: How do I calculate the return given the discount factor and a sequence of rewards?Body: I know that $G_t = R_{t+1} + G_{t+1}$.
+Suppose $\gamma = 0.9$ and the reward sequence is $R_1 = 2$ followed by an infinite sequence of $7$s. What is the value of $G_0$?
+As it's infinite, how can we deduce the value of $G_0$? I don't see the solution. It's just $G_0 = 2 + 0.9 \cdot G_1$, and we don't know the value of $G_1$, and we don't know $R_2, R_3, R_4, \dots$
+"
+"['reinforcement-learning', 'dqn']"," Title: Should the agent play the game until the end or until the winner is found?Body: I'm using the DQN algorithm to train my agent to play a turn-based game. The winner of the game can be known before the game is over. Once the winning condition is satisfied, it cannot be reverted. For example, the game might last 100 turns, but it's possible to know that one of the players won at move 80, because some winning condition was satisfied. The last 20 moves don't change the outcome of the game. If people were playing this game, they, would play it to the very end, but the agent doesn't have to.
+The agent will be using memory replay to learn from the experience. I wonder, is it helpful for the agent to have the experiences after the winning condition was satisfied for a more complete picture? Or is it better to terminate the game immediately, and why? How would this affect agent's learning?
+"
+"['convolutional-neural-networks', 'dense-layers', 'convolutional-layers']"," Title: When to use convolutional layers as opposed to fully connected layers?Body: I am still new to CNNs, but I would like to check my understanding between when to use convolutional layers versus fully connected layers.
+From what I have read, we can use convolutional layers with filters, rather than fully connected layers, for images, text, and audio. However, with regular tabular data, for example the iris dataset, a convolutional layer would not perform well because of the structure: the columns can be swapped, yet the record or sample itself does not change. For example, we can swap the order of the Petal Length column with Petal Width and the record does not change, whereas in an image or audio file, changing the column items would result in a different image or audio file.
+These convolutional layers are "better" for images and audio because not all the features need to connect to the next layer. For example, we do not need the background of a car image to know it is a car, thus we do not need all the connections and we save computational costs.
+Is this the right way to think about when to use convolutional layers versus fully connected layers?
+"
+"['convolutional-neural-networks', 'dense-layers', 'convolutional-layers', 'feature-detection']"," Title: Can fully connected layers be used for feature detection?Body: I need help in understanding something basic.
+In this video, Andrew Ng says, essentially, that convolutional layers are better than fully connected (FC) layers because they use fewer parameters.
+But I'm having trouble seeing when FC layers would/could ever be used for what convolutional layers are used for, specifically, feature detection.
+I always read that FC layers are used in the final, classification stages of a CNN, but could they ever be used for the feature detection part?
+In other words, is it even possible for a "feature" to be deciphered when the filter size is the same as the entire image?
+If not, it's hard for me to understand Andrew Ng's comparison---there aren't any parameter reduction "savings" if we're not going to use an FC "filter" in place of a CNN layer in the first place.
+A semi-related question: Can multi-layer perceptrons (which I understand to be fully connected neural networks) be used for feature detection? If so, how do image-sized "filters" make any sense?
+"
+"['machine-learning', 'natural-language-processing', 'text-classification', 'text-summarization']"," Title: NLP Identifying important key words in a corpusBody: I am intrigued with the idea of Zettelkasten but unsatisfied with the current implementations. It seems to me that a machine learning and NLP approach could be productive by helpfully identifying “important” keywords on which to links could be created, with learning to help narrow the selection of keywords over time.
+My problem is that it's been 30 years since my AI classes in grad school, and things have moved on. I'm sure I could become an NLP expert with study, but I don't want to. So I'm looking for guidance: what are the right terms to describe identifying keywords in context, ideally with some semantic content, and how would I apply ML, with my training, to improve the keyword identification?
+I'd love references, ideas, and package recommendations. Python is preferred, but not strongly; I write most common (and many uncommon, SNOBOL and COBOL anyone?) languages, so the language isn't all that much of an issue.
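+The only thing I have managed on my own so far is a plain TF-IDF ranking of candidate terms per note (a sketch assuming scikit-learn; whether this counts as proper keyword identification is part of my question):
+from sklearn.feature_extraction.text import TfidfVectorizer
+
+notes = ["first zettel about machine learning and note taking",
+         "second zettel about gardening and note taking"]
+
+vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
+tfidf = vectorizer.fit_transform(notes)
+terms = vectorizer.get_feature_names_out()
+
+for i, note in enumerate(notes):
+    row = tfidf[i].toarray().ravel()
+    top = row.argsort()[::-1][:3]   # three highest-scoring terms for this note
+    print(note[:30], "->", [terms[j] for j in top])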
+"
+"['natural-language-processing', 'named-entity-recognition', 'pos-tagging']"," Title: Is it possible to create a named entity recognition system without using POS tagging in the corpus?Body: Is it possible to create a named entity recognition system without using POS tagging in the corpus?
+"
+"['machine-learning', 'resource-request', 'ethics']"," Title: Studies on interest in results from ML purely due to use of MLBody: There is often interest in the results of machine learning algorithms, specifically because they came from machine learning algorithms -- as opposed to interest in the results in and of itself. It seems similar to the 'from the mouths of babes' trope, where comments made by children are sometimes regarded as containing some special wisdom, due to the honest and innocent nature of their source. Similarly, people seem to think that the impassionate learning of a machine might extract some special insight from a data set.
+(Of course, anyone with such opinions has obviously never met either a machine learning algorithm or a child.)
+Has this effect been discussed or studied anywhere? Does it have a name?
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks', 'comparison', 'pretrained-models']"," Title: How to measure/estimate the energy consumption of CNN models during testing?Body: Does someone know a method to estimate / measure the total energy consumption during the test phase of the well-known CNN models? So with a tool or a power meter...
+MIT already has a tool to estimate the energy consumption, but it only works on AlexNet and GoogLeNet; I need something for more architectures (VGG, MobileNet, ResNet, ...). I also need a metric to evaluate the well-known architectures in terms of energy consumption. So, first, estimate or measure the energy consumption, and then evaluate the results with a good metric.
+With a measuring device, I would measure the power consumption before using the CNN, repeat this experiment a few times and average the results, then do the same thing while using the CNN, and at the end compare the results (a partial attempt at this is sketched after the list below). But I have three problems here:
+1. how can I know that nothing else is running on the PC that also consumes energy while using the CNN?
+2. how can I increase the accuracy of the measurements?
+3. I can't find any power meter that measures the energy consumption over short periods (1 s).
+
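+For the measurement route, the closest I have gotten is polling the GPU power draw while the model runs and integrating it over time (a rough sketch of my own; it assumes an NVIDIA GPU with nvidia-smi on the PATH and only covers the GPU, not the whole PC):
+import subprocess, time
+
+def gpu_power_watts():
+    out = subprocess.check_output(
+        ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader,nounits"])
+    return float(out.decode().strip().splitlines()[0])
+
+samples, interval = [], 0.2          # seconds between samples
+t0 = time.time()
+while time.time() - t0 < 10:         # sample for 10 s while inference runs elsewhere
+    samples.append(gpu_power_watts())
+    time.sleep(interval)
+
+energy_joules = sum(samples) / len(samples) * (time.time() - t0)
+print("approx. GPU energy:", energy_joules, "J")
+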
+That's why I would prefer a tool to estimate the energy consumption; the accuracy of the estimates will not be as good, but I didn't find any other tool.
+Does someone have an idea, papers, or sites that can help me?
+Many thanks in advance for your reply!
+"
+"['convolutional-neural-networks', 'activation-functions', 'convolution']"," Title: Is it possible to apply the associative property of the convolution operation when it is followed by a non-linearity?Body: The associative property of multidimensional discrete convolution says that:
+$$Y=(x \circledast h_1) \circledast h_2=x\circledast(h_1\circledast h_2)$$
+where $h_1$ and $h_2$ are the filters and $x$ is the input.
+I was able to exploit this property in Keras with Conv2D: first, I convolve $h_1$ and $h_2$, then I convolve the result with $x$ (i.e. the rightmost part of the equation above).
+Up to this point, I don't have any problem, and I also understand that convolution is linear.
+The problem is when two Conv2D layers have a non-linear activation function after the convolution. For example, consider the following two operations
+$$Y_1=\text{ReLU}(x \circledast h_1)$$
+$$Y_2=\text{ReLU}(Y_1\circledast h_2)$$
+Is it possible to apply the associative property if the first or both layers have a non-linear activation function (in the case above ReLU, but it could be any activation function)? I don't think so. Any idea, related paper, or some kind of approach?
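+This is the tiny 1-D NumPy check I did to convince myself (my own toy example): full convolution is associative, but inserting a ReLU between the two convolutions breaks the equivalence.
+import numpy as np
+
+relu = lambda v: np.maximum(v, 0)
+x  = np.array([1.0, -2.0, 3.0, 0.5])
+h1 = np.array([1.0, -1.0])
+h2 = np.array([0.5, 2.0])
+
+linear_a = np.convolve(np.convolve(x, h1), h2)   # (x * h1) * h2
+linear_b = np.convolve(x, np.convolve(h1, h2))   # x * (h1 * h2)
+print(np.allclose(linear_a, linear_b))           # True: associativity holds
+
+nonlin_a = np.convolve(relu(np.convolve(x, h1)), h2)   # ReLU between the two convolutions
+nonlin_b = relu(np.convolve(x, np.convolve(h1, h2)))   # merged filters, ReLU applied once
+print(np.allclose(nonlin_a, nonlin_b))                 # False for this example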
+"
+"['research', 'papers', 'academia']"," Title: How can I read any AI paper?Body: I have studied linear algebra, probability, and calculus twice. But I don't understand how can I reach the level that I can read any AI paper and understand mathematical notation in it.
+What is your strategy when you see a mathematical expression that you can't understand?
+For example, in the Wasserstein GAN article, there is a lot of advanced mathematical notation. Also, some papers are written by people who have a master's in mathematics, and those people use advanced mathematics in their papers, but I have a CS background.
+When you come across this kind of problem, what do you do?
+"
+"['reinforcement-learning', 'generative-adversarial-networks', 'generative-model', 'imitation-learning', 'gail']"," Title: Is GAIL applicable if the expert's trajectories are for the same task but are in a different environment?Body: Is the GAIL applicable if the expert's trajectories (sample data) are for the same task but are in a different environment (modified but will not be completely different)?
+My gut feeling is, yes, otherwise we can just simply adopt behavioural cloning. Furthermore, since the expert's trajectories are from a different environment, the dimension/length of state-action pairs will most likely be different. Will those trajectories still be useful for GAIL training?
+"
+"['reinforcement-learning', 'papers']"," Title: How are the coefficients of the Region of Interest being selected?Body: I was reading the following paper: Rl-Ncs: Reinforcement Learning Based Data-Driven Approach For Nonuniform Compressed Sensing, and my question is: how do they decide whether a signal is characterized as a region of interest coefficient or non-region of interest coefficient?
+"
+['generative-adversarial-networks']," Title: Do GANs also learn to map between the distribution from which the random noise is sampled and the true distribution of the data?Body: I am reading about GANs. I understand that GANs learn implicitly the probability distribution that generated the data. However, at the input we give a random noise vector. It seems that we can sample that random noise vector from whatever distribution we want.
+My question is: given that there is ONLY ONE possible distribution that could have generated the data, and that our GAN is trying to approximate that distribution, can I think that the GAN also learns how to map between that distribution that it needs to learn and the distribution from which we sample the random noise vector?
+I am thinking about this, as the random noise vector can be sampled from whatever distribution we want, so it can be different each time we start to train, so it can vary, but the GAN needs to be able to still imitate one unique distribution, so, in a way, it needs to be able to adapt to the distribution from which the noise comes.
+"
+"['reinforcement-learning', 'dqn', 'rewards', 'reward-shaping', 'reward-functions']"," Title: Why does shifting all the rewards have a different impact on the performance of the agent?Body: I am new to reinforcement learning. For my application, I have found out that if my reward function contains some negative and positive values, my model does not give the optimal solution, but the solution is not bad as it still gives positive reward at the end.
+However, if I just shift all readings by subtracting a constant until my reward function is all negative, my model can reach the optimal solution easily.
+Why is this happening?
+I am using DQN for my application.
+I feel that this is also the reason why the gym environment mountaincar-v0 uses $-1$ for each time step and $0.5$ at the goal, but correct me if I am wrong.
+"
+"['neural-networks', 'deep-learning', 'deep-rl', 'terminology']"," Title: A neural network with 2 or more hidden layers is a DNN?Body: I just learned the math behind neural networks so please bear with my ignorance. I wonder if there is a precise definition for DNN.
+Is it true that any neural network with 2 or more hidden layers can be called a DNN, and that, by training an NN with 2 hidden layers using Q-learning, we are technically doing a type of deep reinforcement learning?
+PS: If it is conceptually that simple, why do common people regard deep learning as something done by archmages in ivory towers?
+"
+"['neural-networks', 'deep-learning', 'relu', 'batch-normalization']"," Title: Should batch normalisation be applied before or after ReLU?Body: I know that there has been some discussion about this (e.g. here and here), but I can't seem to find consensus.
+The crucial thing that I haven't seen mentioned in these discussions is that applying batch normalization before ReLU switches off half the activations, on average. This may not be desirable.
+In other words, the effect of batch normalization before ReLU is more than just z-scaling activations.
+On the other hand, applying batch normalization after ReLU may feel unnatural because the activations are necessarily non-negative, i.e. not normally distributed. Then again, there's also no guarantee that the activations are normally distributed before ReLU clipping.
+I currently lean towards a preference for batch normalization after ReLU (which is also based on some empirical results).
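+For concreteness, the two orderings I am comparing look like this in Keras (toy sizes; this is just a sketch of the two variants, not a claim about which is correct):
+from tensorflow.keras import layers, models
+
+def block(x, order):
+    x = layers.Conv2D(32, 3, padding="same")(x)
+    if order == "bn_before_relu":
+        x = layers.BatchNormalization()(x)
+        x = layers.Activation("relu")(x)
+    else:  # "bn_after_relu"
+        x = layers.Activation("relu")(x)
+        x = layers.BatchNormalization()(x)
+    return x
+
+inp = layers.Input(shape=(32, 32, 3))
+model_a = models.Model(inp, block(inp, "bn_before_relu"))
+model_b = models.Model(inp, block(inp, "bn_after_relu"))
+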
+What do you all think? Am I missing something here?
+"
+"['machine-learning', 'reinforcement-learning', 'q-learning', 'sarsa']"," Title: Tic-tac-toe: How would standard SARSA and Q-learning yield different results in the agent's behaviour?Body: I know this is deceptively simple. Tic tac toe is a well studied game for RL.
+Assume your agent is playing against a strong opponent.
+I know you deal in afterstates. I know that in Q-learning the optimal policy should be converged on faster, as Q(S,A) becomes closer to the optimal at each step, while in SARSA the Q function will sometimes not be updated towards the optimal, as it is exploring. If epsilon is fixed, SARSA will converge to the epsilon-greedy policy.
+I came across the question above and I don't know the answer. Is it that SARSA may play more conservatively, opting for more draws rather than getting into board positions where one is likely to either lose or win rather than draw?
+"
+"['reinforcement-learning', 'algorithm', 'dqn', 'ddpg']"," Title: What reinforcement learning algorithm should I use in continuous states?Body: I want to use reinforcement learning in an environment I made. The exact environment doesn't really matter, but it comes down to this: The amount of different states in the environment is infinite e.g. amount of ways you can put 4 cars at an intersection, but the amount of different actions is only 3 e.g. go forward, right or left. The state exists out of five numbers. My question is: what algorithm should I use or at least what kind of algorithm?
+"
+"['objective-functions', 'math', 'support-vector-machine', 'hinge-loss']"," Title: What is the definition of the ""cost"" function in the SVM's objective function?Body: In a course that I am attending, the cost function of a support vector machine is given by
+$$J(\theta)=\sum_{i=1}^{m} y^{(i)} \operatorname{cost}_{1}\left(\theta^{T} x^{(i)}\right)+\left(1-y^{(i)}\right) \operatorname{cost}_{0}\left(\theta^{T} x^{(i)}\right)+\frac{\lambda}{2} \sum_{j=1}^{n} \Theta_{j}^{2}$$
+where $\operatorname{cost}_{1}$ and $\operatorname{cost}_{0}$ look like this (in Magenta):
+
+
+What are the values of the functions $\operatorname{cost}_{1}$ and $\operatorname{cost}_{0}$?
+For example, if using logistic regression, the values of $\operatorname{cost}_{1}$ and $\operatorname{cost}_{0}$ would be $-\log(\operatorname{sigmoid}(z))$ and $-\log(1-\operatorname{sigmoid}(z))$.
+"
+"['decision-trees', 'random-forests', 'feature-engineering']"," Title: Why are decision trees and random forests scale invariant?Body: Feature scaling, in general, is an important stage in the data preprocessing pipeline.
+Decision Tree and Random Forest algorithms, though, are scale-invariant - i.e. they work fine without feature scaling. Why is that?
+"
+"['machine-learning', 'regularization', 'l2-regularization', 'l1-regularization']"," Title: Why does L1 regularization yield sparse features?Body:
+In contrast to L2 regularization, L1 regularization usually yields sparse feature vectors and most feature weights are zero.
+
+What's the reason for the above statement - could someone explain it mathematically, and/or provide some intuition (maybe geometric)?
+"
+"['reinforcement-learning', 'environment', 'policies', 'experience-replay', 'on-policy-methods']"," Title: Do we need multiple parallel environments to train in batches an on-policy algorithm?Body: When using an on-policy method in reinforcement learning, like advantage actor-critic, you shouldn't use old data from an experience buffer, since a new policy requires new data. Does this mean that to apply batching to an on-policy method you have to have multiple parallel environments?
+As an extension of this, if only one environment is available when using on-policy methods, does that mean batching isn't possible? Doesn't that limit the power of such algorithms in certain cases?
+"
+"['natural-language-processing', 'long-short-term-memory', 'sequence-modeling', 'text-classification', 'padding']"," Title: Text classification of non-equal length texts, should I pad left or right?Body: Text classification of equal length texts works without padding, but in reality, practically, texts never have the same length.
+For example, spam filtering on blog article:
+thanks for sharing [3 tokens] --> 0 (Not spam)
+this article is great [4 tokens] --> 0 (Not spam)
+here's <URL> [2 tokens] --> 1 (Spam)
+
+Should I pad the texts on the right:
+thanks for sharing --
+this article is great
+here's URL -- --
+
+Or, pad on the left:
+-- thanks for sharing
+this article is great
+-- -- here's URL
+
+What are the pros and cons of either pad left or right?
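+In case it matters for the answer, this is how I would produce the two variants with Keras's pad_sequences (a sketch; the token ids are made up):
+from tensorflow.keras.preprocessing.sequence import pad_sequences
+
+token_ids = [[4, 12, 7],          # "thanks for sharing"
+             [9, 3, 15, 21],      # "this article is great"
+             [8, 30]]             # "here's <URL>"
+
+padded_right = pad_sequences(token_ids, maxlen=4, padding="post")  # pad on the right
+padded_left  = pad_sequences(token_ids, maxlen=4, padding="pre")   # pad on the left (the default)
+print(padded_right)
+print(padded_left)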
+"
+"['machine-learning', 'training', 'image-processing']"," Title: How to manually collect rectangular training data samples from images?Body: I want to collect training samples from images.
+That can mean different things depending on the context. I think of the simplest case, which should be most commonly required. Because it is so common, there may be a standard tool for it.
+An example would be to have a collection of images of random street scenes and manually collect images of nonoccluded cars from them into separate files.
+What is a common way or tool to do this:
+For a large number of images, select one or more rectangles (of arbitrary size and with edges parallel to the image edges) in the image and save them to separate image files.
+Of course, it can be done with any general image editing program, but in this case, most of the work time would be used for opening new images, closing old images, saving sample images and the most time-consuming part of entering a non-conflicting file name for the individual sample image files.
+For small numbers of samples per input file, this may need about an order of magnitude more time, and also more complex interaction.
+I would prefer a tool running on Linux/Ubuntu.
+If this does not exist, I'd be curious why.
+"
+"['machine-learning', 'classification', 'supervised-learning']"," Title: Why is 'scatter' used instead of variance in LDA?Body: I've been reading about Fisher's Linear Discriminant Analysis lately, and I noticed that the objective function (particularly for two-class classification) to be maximized contains scatter terms instead of variance, in the denominator. Why is that?
+To clarify, the scatter of a sample is just the variance multiplied by the number of data points in the sample.
+Thank you!
+"
+"['deep-learning', 'hyperparameter-optimization']"," Title: 96.91% accuracy on MNIST after 2 hours of training using custom made neural net library. Ways to improve?Body: I wanted to understand back-propagation so I made a basic neural network library. I used momentum, with learning rate = $0.1$, beta = $0.99$, epochs = $200$, batch size = $10$, loss function is cross entropy and model structure is $784$, $64$, $64$, $10$ and all layers use sigmoid. It performed terribly at first, so I initialized all the weights and biases in the range $[10^{-9}, 10^{-8}]$ and it worked. I am quite new to deep learning and I find TensorFlow doesn't seem as friendly to beginners who want to play around with hyper-parameters. How do you find the right hyper-parameters? I trained it on 100 digits (which took 10 minutes), tweaked hyper-parameters, chose the best set and trained the model using that set on the entire data set of $60,000$ images. I also found that halving the epochs and doubling the training set size gave better results. Are there fool proof heuristics to find good hyper-parameters? What is the best set of hyper-parameters (without regularization, dropout, etc) for MNIST digits? Here is the code for those who want to take a look.
+"
+"['intelligent-agent', 'quantum-computing']"," Title: Are there any agents that are based on quantum computing?Body: Assuming the definition of an agent to be:
+
+An entity that perceives its environment, processes the perceived information, and acts on the environment such that some goal is fulfilled.
+
+Are there any agents that are based on quantum processing/computing (i.e. implemented by a network of quantum gates)?
+Is there any work done towards this end? If so, could someone provide references?
+"
+"['neural-networks', 'convolutional-neural-networks', 'image-segmentation', 'u-net']"," Title: Suppress heatmap non-maxima in segmentation with UNetBody: I'm using U-Net for image segmentation.
+The model was trained with images that could contain up to 4 different classes. The train classes are never overlapping.
+The output of the UNet is a heatmap (with float values between 0 and 1) for each of these 4 classes.
+Now, I have 2 problems:
+
+- for a certain class, how do I segment (draw contours) in the original image only for the points where the heatmap has significant values? (In the image below an example: the values in the centre are significant, while the values on the left aren't. If I draw the segmentation of the entire image without any additional operation, both are considered.)
+
+
+
+- downstream of the first point, how do I avoid drawing, in the original image, the contours of two superimposed classes? (Maybe by drawing only the one that has higher values in the corresponding heatmap; see the sketch after this list.)
+
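+This is what I have tried so far for both points (a sketch assuming OpenCV; the 0.5 threshold is an arbitrary choice of mine):
+import cv2
+import numpy as np
+
+# heatmaps: float array of shape (4, H, W) with values in [0, 1], one channel per class
+def contours_for_class(heatmaps, class_idx, thresh=0.5):
+    heat = heatmaps[class_idx]
+    # keep a pixel only where this class is above the threshold AND is the strongest class,
+    # which also resolves overlaps between classes (point 2)
+    mask = (heat > thresh) & (heatmaps.argmax(axis=0) == class_idx)
+    mask = mask.astype(np.uint8) * 255
+    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
+    return contours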
+"
+"['deep-learning', 'terminology', 'papers', 'statistics']"," Title: What does it mean when a model ""statistically outperforms"" another?Body: I was reading this paper where they are stating the following:
+
+We also use the T-Test to test the significance of GMAN in 1 hour ahead prediction compared to Graph WaveNet. The p-value is less than 0.01, which demonstrates that GMAN statistically outperforms Graph WaveNet.
+
+What does "Model A statistically outperforms B" mean in this context? And how should the p-value threshold be selected?
+"
+"['reinforcement-learning', 'recurrent-neural-networks', 'dqn', 'deep-rl', 'embeddings']"," Title: What is the role of embeddings in a deep recurrent Q network?Body: When describing the model architecture for a deep recurrent q network, the authors of the paper Learning to Communicate with Deep Multi-Agent Reinforcement Learning
+
+each agent consists of a recurrent neural network (RNN), unrolled for $T$ time-steps, that maintains an internal state $h$, an input network for producing a task embedding $z$, and an output network for the Q-values and the messages $m$. The input for agent $a$ is defined as a tuple of $\left(o_{t}^{a}, m_{t-1}^{a^{\prime}}, u_{t-1}^{a}, a\right)$.
+
+Can someone explain what the purpose of the embedding layer is in this specific context?
+Implementation can be found here.
+"
+"['reinforcement-learning', 'monte-carlo-methods', 'temporal-difference-methods', 'model-based-methods', 'model-free-methods']"," Title: Into which subcategories can reinforcement learning be divided?Body: In the course of a scientific work, I will discuss the different types of reinforcement learning. However, I have difficulties to find these different types.
+So, into which subcategories can reinforcement learning be divided? For example, the following subdivisions seem to be useful
+
+- Model-free and Model-based
+- Dynamic Programming, Monte Carlo and Temporal Difference
+
+Any others?
+"
+"['reinforcement-learning', 'recurrent-neural-networks', 'dqn', 'notation']"," Title: What does the notation ""for t=T to 1,−1 do"" in terms of time steps, in deep recurrent q network?Body: In looking at an algorithm in the paper Learning to Communicate with Deep Multi-Agent Reinforcement Learning.
+Here is the full algorithm:
+
+What does the notation for t=T to 1,−1 do: refer to in terms of time steps?
+The network structure is a deep recurrent q network.
+Secondly, why do the gradients need to be reset to zero?
+"
+"['deep-learning', 'classification', 'computer-vision', 'datasets', 'categorical-data']"," Title: Is it possible to classify resistors using ResNet50?Body: I want to train ResNet50 model using resistor images like below:
+
+I tried it by collecting data from Google Images, and there were quite few, so accuracy was very low (around 10%). I wonder if it is due to the lack of images, or whether it is really possible to classify these images at all, because, as can be seen, the object to be classified is very small and its value is color-coded. I thought maybe this is not a good idea. I searched on Google but could not find anybody who had tried to do it before. I have also tried data augmentation and changing to other models, but the accuracy was still quite low.
+P.S.: I have also tried changing epoch numbers, optimizers, and all other parameters. So I want to make sure whether it is due to low data or whether it is just a very hard task for a computer vision model to complete.
+And is it rational to crop the image by using a mask before classifying it, to make sure all color codes are bigger and more easily readable by the model?
+"
+"['convolutional-neural-networks', 'computer-vision', 'image-recognition', 'captcha']"," Title: How to prevent image recognition of my dataset with neural networks and make it hard to train them?Body: Suppose I have a private set of images containing some objects.
+How do I
+
+- Make it very hard for the neural networks such as ImageNet to recognize these objects, while allowing humans to do it at the same time?
+
+- Suppose I label these private images - a picture of a cat with a label "cat" - how do I make it hard for the attacker to train his neural network on my labels? Is it possible to somehow fool a neural network so that they couldn't easily train it to recognize it?
+
+
+Like random transforms etc, so that they couldn't use a neural network to recognize these objects, or even train it on my dataset if they had labels.
+"
+"['comparison', 'search', 'hill-climbing', 'best-first-search']"," Title: How does best-first search differ from hill-climbing?Body: How does best-first search differ from hill-climbing?
+"
+"['convolutional-neural-networks', 'image-recognition', 'image-segmentation', 'feature-extraction', 'captcha']"," Title: Is such a captcha AI-resistant?Body: Let's say we have a captcha system that consists of a greyscale picture (of a part of a street or something akin to re-captcha), divided into 9 blocks, with 2 missing pieces.
+You need to choose the appropriate missing pieces from over 15 possibilities to complete the picture.
+The puzzle pieces have their edges processed with glitch treatment as well as they have additional morphs such as heavy jpeg compression, random affine transform, and blurred edges.
+Every challenge picture is unique - pulled from a dataset of over 3 million images.
+Is it possible for the neural network to reliably (above 50%) predict the missing pieces? Sometimes these are taken out of context and require human logic to estimate the correct piece.
+The chance of selecting two answers in correct order is 1/15*1/14.
+"
+['convolutional-neural-networks']," Title: Structure-preserving layer in a network with respect to a transformationBody: I'm reading this paper: https://arxiv.org/pdf/1602.07576.pdf. I'll quote the relevant bits:
+
+Deep neural networks produce a sequence of progressively more abstract representations by mapping the input through a series of parameterized functions. In the current generation of neural networks, the representation spaces are usually endowed with very minimal internal
+structure, such as that of a linear space $\mathbb{R}^n$.
+In this paper we construct representations that have the structure of a linear $G$-space, for some chosen group $G$. This means that each vector in the representation space has
+a pose associated with it, which can be transformed by the elements of some group of transformations $G$. This additional structure allows us to model data more efficiently: A
+filter in a $G$-CNN detects co-occurrences of features that have the preferred relative pose [...]
+A representation space can obtain its structure from other representation spaces to which it is connected. For this to work, the network or layer $\phi$ that maps one representation
+to another should be structure preserving. For $G$-spaces this means that $\phi$ has to be equivariant: $$\phi(T_gx)=T'_g\phi(x)$$That is, transforming an input $x$ by a transformation $g$ (forming $T_gx$) and then passing it through the learned map $\phi$ should give the same result as first mapping $x$ through $\phi$ and then transforming the representation.
+Equivariance can be realized in many ways, and in particular the operators $T$ and $T'$ need not be the same. The only requirement for $T$ and $T'$ is that for any two transformations $g$ and $h$, we have $T(gh) = T (g)T (h)$ (i.e. $T$ is a linear representation of $G$).
+
+I didn't understand the paragraph in bold. A structure preserving map is something that preserves some operation between elements in the underlying set. A simple example: if $f:\mathbb{R}^3\to\mathbb{R}$ such that $(x,y,z)^T\mapsto x+y+z$, then
+$$f(r+s)=f((r_1,r_2,r_3)^T+(s_1,s_2,s_3)^T)=f((r_1+s_1,r_2+s_2,r_3+s_3))
+\\=r_1+s_1+r_2+s_2+r_3+s_3=f(r)+f(s)$$
+where the addition in the far left term is in $\mathbb{R}^3$ and addition in the far right is in $\mathbb{R}$. So the map $f$ preserves the additional structure of addition.
+In the quoted paragraph, $\phi$ is the structure preserving map, but what's the structure being preserved exactly? And why is the operator on the right different from one on the left? i.e. $T'$ on RHS instead of $T$
+"
+"['machine-learning', 'comparison', 'hyper-parameters', 'cross-validation']"," Title: How exactly does nested cross-validation work?Body: I have trouble understanding how nested cross-validation works - I understand the need for two loops (one for selecting the model, and another for training the selected model), but why are they nested?
+From what I understood, we need to select the model before training it, which points toward non-nested loops.
+Could someone please explain what's wrong (or right?) with my line of reasoning, and also explain nested cross-validation in greater detail? A representative example would be great.
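+For reference, the pattern I keep seeing (and not fully understanding) looks like this in scikit-learn; the sketch below is my own reconstruction, not taken from any particular source:
+from sklearn.datasets import load_iris
+from sklearn.model_selection import GridSearchCV, cross_val_score
+from sklearn.svm import SVC
+
+X, y = load_iris(return_X_y=True)
+
+param_grid = {"C": [0.1, 1, 10]}
+# inner loop: hyperparameter selection via cross-validation
+inner = GridSearchCV(SVC(), param_grid, cv=3)
+# outer loop: estimate the performance of the whole selection-plus-fitting procedure
+scores = cross_val_score(inner, X, y, cv=5)
+print(scores.mean())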
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'exploration-exploitation-tradeoff']"," Title: Why do some DQN implementations not require random exploration but instead emulate all actions?Body: I've found online some DQN algorithms that (in a problem with a continuous state space and few actions, let's say 2 or 3), at each time step, compute and store (in the memory used for updating) all the possible actions (so all the possible rewards). For example, on page 5 of the paper Deep Q-trading, they say
+
+This means that we don't need a random exploration to sample an action as in many reinforcement learning tasks; instead we can emulate all the three actions to update the Q-network.
+
+How can this be compatible with the exploration-exploitation dilemma, which states that you have to balance the time steps of exploring with the ones of exploiting?
+"
+"['reinforcement-learning', 'q-learning', 'dqn']"," Title: How does training for DQN work if messing up in the environment in costly?Body: Suppose that we want to train a car to drive in the real world and decide to use Reinforcement Learning (specifically, DQN) for that. I am a bit confused about how training generally works.
+Is it that we are exploring the environment at the same time that we are training the Q network? If so, is there not a way to train the Q network before actually going out into the real world? And then, aren't there millions of possible states in the real world? So, how does RL or I guess the neural network generalize so that it can function during rush hour, empty roads, etc.
+"
+"['generative-adversarial-networks', 'generative-model', 'image-generation', 'transpose-convolution']"," Title: Concrete example of how transposed convolutions are able to *add* features to an imageBody: Say we have a simple gray scale image. If we use a filter which is just the 3x3 identity matrix (or more pointedly the identity matrix but with -1 instead of the 0 entries), it is fairly easy to see how applying this filter with stride length 1 and padding of 1 would produce an image of the same size that represents the presence of north-west diagonals in the input image.
+As I am reading more about the generative networks in GAN paradigms, I am learning that 'transposed convolutions' are used to turn gaussian noise into meaningful images like a human face. However, when I try to look at sources for transposed convolutions, most articles address the upscaling use of these convolutions, rather than their 'generative' properties. Also it is not clear to me that upscaling is even necessary in these applications, since we could start with noise that has the same resolution as our desired output.
+I am asking for an example, article, or paper that can provide me with more understanding as to the feature generation aspect of transposed convolutions. I have found this interesting article that relates the word 'transpose' to the transpose of a matrix. I have a good background in linear algegra, and I understand how the transpose would swap the dimensions of the input/output. This has obvious relation to upscaling/downscaling, but this effect would happen if we replaced the m x n matrix with any other n x m matrix, not specifically just the transpose. Essentially, I'm not sure how actual transpose functor can go from detecting a given feature associated to a convolutional filter, to producing that same feature
+EDIT: I've done some thinking, and it is clear to me now how the transpose matrix will produce an 'input image' that has the features specified by a given feature map. That is, if $M$ is the matrix given by the convolution operation, and $F$ is a feature map, then
+$$M^T F$$
+will produce an image with the corresponding features. It's obviously not a perfect inverse operation, but it works. However, I still don't see how to interpret this transposed matrix as a convolution of its own.
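+To make the EDIT concrete, this is the tiny 1-D example I have been playing with (sizes and numbers are mine):
+import numpy as np
+
+# 'valid' convolution of a length-4 input with a length-3 kernel, written as a 2 x 4 matrix M
+k = np.array([1.0, 0.0, -1.0])
+M = np.array([[k[0], k[1], k[2], 0.0],
+              [0.0,  k[0], k[1], k[2]]])
+
+x = np.array([2.0, 5.0, 3.0, 1.0])
+feature_map = M @ x            # ordinary (forward) convolution: detects the feature
+
+generated = M.T @ feature_map  # transposed convolution: maps a feature map back into input space
+print(feature_map, generated)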
+"
+"['keras', 'long-short-term-memory', 'sequence-modeling', 'hidden-layers', 'network-design']"," Title: Number of LSTM layers needed to learn a certain number of sequencesBody: Theoretically, number of units for a LSTM layer is the number of hidden states or the max length of sequences as per my practice.
+For example, in Keras:
+Lstm1 = LSTM(units=MAX_SEQ_LEN, return_sequences=False);
+
+However, with lots of sequences to train on, should I add more LSTM layers? Increasing MAX_SEQ_LEN is not the way, as it doesn't make the network better, since the extra hidden states aren't useful any more.
+I'm considering increasing the number of LSTM layers, but how many are enough?
+For example, 3 of them:
+Lstm1 = LSTM(units=MAX_SEQ_LEN, return_sequences=True);
+Lstm2 = LSTM(units=MAX_SEQ_LEN, return_sequences=True);
+Lstm3 = LSTM(units=MAX_SEQ_LEN, return_sequences=False);
+
+"
+"['natural-language-processing', 'bert', 'fine-tuning', 'question-answering']"," Title: How to fine tune BERT for question answering?Body: I wish to train two domain-specific models:
+
+- Domain 1: Constitution and related Legal Documents
+- Domain 2: Technical and related documents.
+
+For Domain 1, I've access to a text-corpus with texts from the constitution and no question-context-answer tuples. For Domain 2, I've access to Question-Answer pairs.
+Is it possible to fine-tune a light-weight BERT model for Question-Answering using just the data mentioned above?
+If yes, what are the resources to achieve this task?
+Some examples, from the huggingface/models library would be mrm8488/bert-tiny-5-finetuned-squadv2, sshleifer/tiny-distilbert-base-cased-distilled-squad, /twmkn9/albert-base-v2-squad2.
+"
+"['deep-learning', 'convolutional-neural-networks', 'computer-vision', 'recurrent-neural-networks', 'r-cnn']"," Title: Why are RNNs used in some computer vision problems?Body: I am learning computer vision. When I was going through implementations of various computer vision projects, some OCR problems used GRU or LSTM, while some did not. I understand that RNNs are used only in problems where input data is a sequence, like audio or text.
+So, in kernels of MNIST on kaggle almost no kernel has used RNNs and almost every repository for OCR on IAM dataset on GitHub has used GRU or LSTMs. Intuitively, written text in an image is a sequence, so RNNs were used. But, so is the written text in MNIST data. So, when exactly is it that RNNs(or GRUs or LSTMs) need to be used in computer vision and when don't?
+"
+['regularization']," Title: Where is L2-regularization term appliedBody: I am confused about where exactly the L2 regularization (weight decay) term is added.
+In various resources I have come across, I find two equations where L2 regularization is applied.
+Adding R(W) to the loss function makes sense because it tries to decrease large weights. Also, I have seen equations where we add R(W) to the weight update term, as in the 2nd equation in the 2nd line shown in this image:
+
+In the above image, using the weight update rule that
+W(final) = W(initial) + (alpha) * (Gradient of W),
+
+I obtain a different equation as compared to the other equation which is commonly written in various resources.
+Where exactly is the regularization term added? I previously thought it was only added to the loss function, but that gives me a different weight update equation from what is commonly presented in resources. (Or is my interpretation of the equation wrong?)
+I presume it is also added in weight update equation because while constructing models, we add regularization term.
+model.add(Conv2D(256, (5,5), padding="same", kernel_regularizer=l2(reg)))
+
+Would be grateful for any help.
+"
+"['neural-networks', 'machine-learning', 'recurrent-neural-networks', 'gradient-descent']"," Title: Why the cost/loss starts to increase for some iterations during the training phase?Body: I am trying to build a recurrent neural network from scratch. It's a very simple model. I am trying to train it to predict two words (dogs and gods). While training, the value of cost function starts to increase for some time, after that, the cost starts to decrease again, as can be seen in the figure.
+
+I am using the gradient descend method for optimization. Decreasing the step size/learning rate does not change the behavior. I have checked the code and math, again and again, I don't think there is an error (I could be wrong).
+Why is the cost function not decreasing monotonically? Could there be a reason other than an error in my code/math? If there is an error, do you think that it is just a coincidence that each time the system finally converges to a very small value of error?
+I am a beginner in the field of machine learning, hence, many questions I have asked may seem foolish to you. I am saving the values after every 100 iterations so the figure is actually for 15000 iterations.
+About training:
+I am using one-hot encoding. As the training data has only two samples ("gods" and "dogs"), where each alphabet is represented as d=[1,0,0,0],o=[0,1,0,0],g=[0,0,1,0],s=[0,0,0,1]. The recurrent neural network (RNN) goes back to a maximum of 3 time units, (e.g for dogs, the first input is 'd', then 'o', followed by 'g' and s). So, for the second input, the RNN goes back to 1 input, for the third input the RNN observes both previous inputs and so on. After calculating the gradients for the word "dogs", the values of the gradients are saved and the process is repeated for the word "gods". The gradients calculated for the second input "gods" are summed with the gradients calculated for "dogs" at the end of each epoch/iteration, and then the sum is used to updated all the weights. In each epoch, the inputs remain the same i.e "gods" and "dogs". In mini-batch training, in some epoch the RNN may encounter new inputs, hence, the loss may increase. However, I do not think that what I am doing qualifies as mini-batch training as there are only two inputs, both inputs are used in each epoch, and the sum of calculated gradients is used to update the weights.
+"
+"['neural-networks', 'long-short-term-memory', 'sequence-modeling', 'categorical-data', 'network-design']"," Title: Network design to learn multiple sequences of multiple categoriesBody: For learning a single sequence, LSTM only should suffice.
+However, my situation is different here. I have a list of sequences to learn:
+
+- The sales volumes of 12 months; these are the sequences
+
+And each sequence above belongs to a category.
+I'm trying it out by considering [category,sequence] as a single sequential sample; the loss can be reduced to 1%, but it gives wrong values when inferring on real data.
+The second try is considering [category,sequence] as a sample of 2 inputs:
+
+- X1 = sequence
+- X2 = category
+
+I feed the sequence through LSTM layers to get H, then concatenate it with X2, and feed the pair [H,X2] through some dense layers; the results aren't better.
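+Concretely, my second attempt looks roughly like this (a sketch with made-up sizes and layer names; I use an Embedding for the category here, which is a small variation on concatenating X2 directly):
+from tensorflow.keras import layers, models
+
+NUM_CATEGORIES, SEQ_LEN = 20, 12
+
+seq_in = layers.Input(shape=(SEQ_LEN, 1), name="X1_sequence")
+cat_in = layers.Input(shape=(1,), name="X2_category")
+
+h = layers.LSTM(32)(seq_in)                                   # H from the sequence branch
+c = layers.Flatten()(layers.Embedding(NUM_CATEGORIES, 8)(cat_in))
+merged = layers.Concatenate()([h, c])                         # the pair [H, X2]
+out = layers.Dense(32, activation="relu")(merged)
+out = layers.Dense(1)(out)                                    # e.g. next-month sales volume
+
+model = models.Model([seq_in, cat_in], out)
+model.compile(optimizer="adam", loss="mse")
+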
+Any popular solutions (network shape, network design) for learning this kind of data: sequential data in different categories?
+"
+"['reinforcement-learning', 'monte-carlo-methods', 'off-policy-methods', 'importance-sampling']"," Title: Should the importance sampling ratio be updated at the end of the for loop in the off-policy Monte Carlo control algorithm?Body: I'm studying RL with Sutton and Barto's book. I'd like to ask about the order of execution of a statement in the algorithm below.
+
+Here, $W$ (the importance sampling ratio) is updated at the end of the For loop.
+But, I think that updating should be located after calculating $G$ (the return) and before updating $C(s,a)$ (cumulative of $W$). This seems to be right considering the second picture below, which I found in http://incompleteideas.net/book/first/ebook/node56.html.
+
+Is Sutton and Barto's book wrong? Or are the two algorithms in Sutton and Barto's book and the second picture actually the same, and I am wrong? Is there any difference between the two when implemented? If I am wrong, can you explain the reason?
+"
+"['natural-language-processing', 'papers', 'implementation', 'dialogue-systems']"," Title: What is the score used to visualize attention in this paper?Body: I'm reading this paper Global-Locally Self-Attentive Dialogue State Tracker and follow through the implementation published in GLAD.
+I was wondering if someone can clarify what variable or score is used to calculate the global and local self-attention scores in Figure 4 (the heatmap).
+For me, it is not really clear how to derive these scores. The only score that would match the given dimension would be in the scoring module $p_{utt}=softmax(a_{utt})$. However, I do not see in their implementation that anything is done with this value.
+So, what I did was the following:
+import torch  # H_utt, C_vals, batch, utterance_len and attend() come from the GLAD code base
+
+q_utts = []
+a_utts = []
+for c_val in C_vals:
+    q_utt, a_utt = attend(H_utt, c_val.unsqueeze(0).expand(len(batch), *c_val.size()), lens=utterance_len)
+    q_utts.append(q_utt)
+    a_utts.append(a_utt)
+attention_score = torch.mean(torch.stack(a_utts, dim=1), dim=1)
+
+But the resulting attention score differs very much from what I expect.
+"
+['natural-language-processing']," Title: How can I identify bigrams and trigrams that represent concepts?Body: I have many text documents and I want to identify concepts in these documents in an unsupervised manner. One of my problems is that the concepts can be bigrams, trigrams, or even longer.
+So, for example, out of all the bigrams, how can I identify the ones that are more likely to represent a concept?
+A concept could be "machine learning".
+Are you aware of any standard approaches to solve this problem?
+Edit: The corpus I am working with consists of papers accessed from web of science. That is, they are all in some given domain niche. I want to extract words, bigrams, trigrams... that represent common concepts/buzzwords from these papers. These could be "Automated machine learning", "natural language processing" et cetera. I need to be able to distinguish these from other common n-grams such as "New York", "Barack Obama",...
+I know that I could do this using an NER approach, but this would require hand-labelling. Are you aware of any unsupervised ways to approach this problem? Or even a semi-supervised method with little labelled data?
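+The only unsupervised thing I have tried so far is plain PMI-based collocation scoring (sketch below, assuming NLTK); it surfaces frequent bigrams, but it does not separate concepts like "machine learning" from entities like "New York", which is exactly my problem:
+import nltk  # requires the punkt tokenizer data
+from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder
+
+tokens = nltk.word_tokenize(open("corpus.txt").read().lower())  # corpus.txt is a placeholder
+finder = BigramCollocationFinder.from_words(tokens)
+finder.apply_freq_filter(5)                    # ignore very rare pairs
+measures = BigramAssocMeasures()
+print(finder.nbest(measures.pmi, 20))          # 20 highest-PMI bigrams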
+"
+"['neural-networks', 'recurrent-neural-networks', 'pytorch']"," Title: In PyTorch, why does the sequence length need to be provided as the first dimension of the input tensor for an RNN?Body: I am confused as to why the sequence length is the first dimension of the input tensor for an RNN, while the batch size is the first dimension for any other kind of network (linear, CNN, etc.).
+This makes me think that I haven't fully grasped the concept of RNN batches. Is each independent batch a different sequence? And is the same hidden state across batches? Is the hidden state maintained between timesteps for a given sequence (for vanilla/truncated BPTT)?
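+For concreteness, this is the shape behaviour I am puzzling over (a minimal check; the numbers are arbitrary):
+import torch
+import torch.nn as nn
+
+seq_len, batch, features = 5, 3, 10
+rnn = nn.RNN(input_size=features, hidden_size=20)             # default expects (seq_len, batch, features)
+out, h = rnn(torch.randn(seq_len, batch, features))
+print(out.shape, h.shape)                                     # (5, 3, 20) and (1, 3, 20)
+
+rnn_bf = nn.RNN(input_size=features, hidden_size=20, batch_first=True)
+out_bf, h_bf = rnn_bf(torch.randn(batch, seq_len, features))  # (batch, seq_len, features) instead
+print(out_bf.shape, h_bf.shape)                               # (3, 5, 20) and (1, 3, 20)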
+"
+"['reinforcement-learning', 'comparison', 'rewards', 'supervised-learning']"," Title: How is the reward in reinforcement learning different from the label in supervised learning problems?Body: How is the notion of immediate reward used in the reinforcement learning different from the notion of a label we find in the supervised learning problems?
+"
+"['reinforcement-learning', 'training', 'policy-gradients', 'metric']"," Title: How should we interpret all the different metrics in reinforcement learning?Body: I'm trying to train some deep RL agents using policy gradient methods like AC and PPO. While training, I have a ton of different metrics being monitored.
+I understand that the ultimate goal is to maximize the reward or return per episode.
+But there are a ton of other metrics that I don't understand what they are used for.
+In particular, how should one interpret the mean and standard deviation curves of the policy loss, value, value loss, entropy, and reward/return over time while training?
+What does it mean when these values increase or decrease over time? Given these curves, how would one decide how to tune hyperparameters, see where the training is succeeding and failing, and the like?
+"
+"['recurrent-neural-networks', 'long-short-term-memory', 'transformer', 'attention']"," Title: What is the big fuzz about SHA-RNN versus Transformers?Body: In his paper introducing SHA-RNN (https://arxiv.org/pdf/1911.11423.pdf) Stephen Merity states that neglecting one direction of research (in this case LSTMs) over another (transformers) merily because the SOTA in transformers are due to using more computing power is not the way to go.
+I agree that finding neat tricks in AI/ML is equally (if not more) important than just throwing more computing power at the problem. However I am a little bit confused.
+The main difference (since they both use attention units) between his SHA-RNN and transformers seem to be the fact that SHA-RNN uses LSTMs to "encode the position of words", where transformers do position encoding by using cosine and sine functions.
+My confusion comes from the fact that LSTMs need to be handled sequentially and thus they cannot use this large advantage of GPUs, being able to compute things in parallel, whilst transformers can. Wouldn't this mean that (assuming LSTMs and positional encoding are able to acquire the same results), training using LSTMs would take longer than transformers and thus need more computing power, thus defeating the initial puporse of this paper? Or am I misinterpreting this?
+Basically my question comes down to "Why would an SHA-RNN be less computationally expensive than a transformer?"
+"
+"['machine-learning', 'text-classification', 'metric', 'precision']"," Title: Is it possible that every class has a higher recall than precision for multi-class classification?Body: I am a student learning machine learning recently, and one thing is keep confusing me, I tried multiple sources and failed to find the related answer.
+As following table shows (this is from some paper):
+
+Is it possible that every class has a higher recall than precision for multi-class classification?
+Recall can be higher than precision over some class or overall performance which is common, but is it possible to keep recall greater than precision for every class?
+The total amount of test data is fixed, so, to my understanding, if the recall is greater than the precision for one class, it is a must that the recall must be smaller than the precision for some other classes.
+I tried to make a fake confusion matrix to simulate the result, but I failed. Can someone explain it to me?
+this is a further description:
+Assume we have classified 10 data into 3 classes, and we have a confusion matrix like this,
+(confusion matrix image)
+if we want to keep recall bigger than precision over each class (this case 0,1,2) respectively, we need to keep:
+x1+x2 < x3+x5
+x3+x4 < x1+x6
+x5+x6 < x2+x4
+There is a conflict, because the sum of the left sides equals the sum of the right sides in these inequalities, and sum(x1...x6) = 10 - sum(a,b,c) in this case.
+Hence, I think that getting recall higher than precision on all classes is not feasible, because the total number of classified samples is fixed.
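+To double-check the algebra, I also ran a small brute-force search over random confusion matrices (my own sketch; the class counts and number of trials are arbitrary, and I am not claiming it is exhaustive):
+import numpy as np
+
+rng = np.random.default_rng(0)
+for _ in range(100000):
+    cm = rng.integers(0, 5, size=(3, 3))      # cm[i, j] = samples of true class i predicted as j
+    tp = np.diag(cm)
+    if (tp == 0).any():
+        continue
+    recall = tp / cm.sum(axis=1)
+    precision = tp / cm.sum(axis=0)
+    if (recall > precision).all():
+        print("counter-example:", cm)
+        break
+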
+I don't know whether I am right or wrong; please tell me if I made a mistake.
+"
+['neural-networks']," Title: Does the input layer have bias and are there bias neurons?Body: I have seen two different representations of neural networks when it comes to bias. Consider a "simple" neural network, with just an input layer, a hidden layer and an output layer. To compute the value of a neuron in the hidden layer, the weights and neurons from the input layer are multiplied, shifted by a bias and then activated by the activation function. To compute the values in the output layer, you may choose not to have a bias and have an identity activation function on this layer, so that this last calculation is just "scaling".
+Is it standard to have a "scaling" layer? You could say that there is a bias associated with each neuron, except those in the input layer (and those in the output layer when it is a scaling layer), correct? Although I suppose you could immediately shift any value you're given. Does the input layer have a bias?
+I have seen bias represented as an extra unchanging neuron in each layer (except the last) having value 1, so that the weights associated with the connections from this neuron correspond to the biases of the neurons in the next layer. Is this the standard way of viewing bias? Or is there some other way to interpret what bias is that is more closely described by "a number that is added to the weighted sum before activation"?
+"
+"['machine-learning', 'naive-bayes', 'semi-supervised-learning', 'expectation-maximization']"," Title: How can the expectation-maximization improve the classification?Body: I am learning the expectation-maximization algorithm from the article Semi-Supervised Text Classification Using
+EM. The algorithm is very interesting. However, the algorithm looks like doing a circular inference here.
+
+I don't know whether I am understanding the description correctly; what I perceived is:
+Step 1: train NB classifier on labeled data.
+Repeat
+Step 2 (E-step): use trained NB to add label to unlabeled data.
+Step 3 (M-step): train an NB classifier using the labeled data and the unlabeled data (with tags from step 2) to get a new classifier.
+Until convergence.
+Here is the question:
+In step 2, the labels are assigned by the classifier trained on the labeled data, which is the only source containing knowledge about correct predictions. And step 3 (M-step) is actually updating the classifier on the labels generated in step 2. The whole process relies on the labeled data, so how can the EM classifier improve the classification? Can someone explain it to me?
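+To make sure I am reading the loop correctly, here is a rough sketch of it (a simplification with hard pseudo-labels and scikit-learn's MultinomialNB; the paper actually uses probabilistic/fractional counts):
+import numpy as np
+from sklearn.naive_bayes import MultinomialNB
+
+def em_nb(X_lab, y_lab, X_unlab, max_iters=20, tol=1e-4):
+    # Step 1: train NB on the labeled data only.
+    nb = MultinomialNB().fit(X_lab, y_lab)
+    prev_ll = -np.inf
+    for _ in range(max_iters):
+        # E-step: label the unlabeled documents with the current classifier.
+        pseudo_y = nb.predict(X_unlab)
+        # M-step: retrain on the labeled plus pseudo-labeled data.
+        X_all = np.vstack([X_lab, X_unlab])
+        y_all = np.concatenate([y_lab, pseudo_y])
+        nb = MultinomialNB().fit(X_all, y_all)
+        # Stop when the (approximate) log-likelihood of the unlabeled data stabilizes.
+        ll = nb.predict_log_proba(X_unlab).max(axis=1).sum()
+        if abs(ll - prev_ll) < tol:
+            break
+        prev_ll = ll
+    return nb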
+"
+"['machine-learning', 'deep-learning', 'computer-vision']"," Title: Best ROC threshold for classifier?Body: Suppose I have a neural network $N$ that produces the output probabilities $[0.3, 0.8]$. Normally, I would specify a threshold of 0.5 for the argmax of the prediction, let's say, second arg > 0.5 means that the image is attractive, and if both probabilities are lesser than 0.5 we don't have that good of a prediction.
+My question is, can we plot this threshold on a ROC curve so we can figure out the best value?
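+In case it helps frame the question, this is roughly what I imagine (a sketch with made-up labels and scores, assuming scikit-learn is acceptable):
+import numpy as np
+from sklearn.metrics import roc_curve
+
+# y_true: ground-truth binary labels; y_score: the network's probability for the positive class.
+y_true = np.array([0, 0, 1, 1, 0, 1])
+y_score = np.array([0.20, 0.40, 0.35, 0.80, 0.10, 0.70])
+
+fpr, tpr, thresholds = roc_curve(y_true, y_score)
+# One common heuristic: pick the threshold that maximizes Youden's J = TPR - FPR.
+best_threshold = thresholds[np.argmax(tpr - fpr)]
+print(best_threshold)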
+"
+"['machine-learning', 'optimization', 'gradient-descent']"," Title: If the normal equation works, why do we need gradient descent?Body: Recently, I followed the open course CS229,
+http://cs229.stanford.edu/notes/cs229-notes1.pdf
+The lecturer introduces an alternative approach to gradient descent called the "normal equation", which is as follows:
+$$\theta=\left(X^{T} X\right)^{-1} X^{T} \vec{y}$$
+The normal equation can compute $\theta$ directly.
+If the normal equation works, why do we need gradient descent? What is the trade-off between these two methods?
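+For concreteness, this is how I picture the two approaches side by side (my own toy sketch, not from the course notes):
+import numpy as np
+
+# Toy data: 100 examples, a bias column plus 3 features.
+rng = np.random.default_rng(0)
+X = np.hstack([np.ones((100, 1)), rng.normal(size=(100, 3))])
+y = X @ np.array([1.0, 2.0, -3.0, 0.5]) + 0.01 * rng.normal(size=100)
+
+# Normal equation: one linear solve, roughly cubic in the number of features.
+theta_normal = np.linalg.solve(X.T @ X, X.T @ y)
+
+# Batch gradient descent: many cheap iterations instead of one expensive solve.
+theta_gd = np.zeros(4)
+alpha = 0.05
+for _ in range(2000):
+    theta_gd -= alpha * X.T @ (X @ theta_gd - y) / len(y)
+
+print(theta_normal)
+print(theta_gd)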
+"
+"['deep-learning', 'computer-vision', 'training', 'image-segmentation', 'u-net']"," Title: Is it necessary to label the background when generating the labelled dataset for semantic segmentation?Body: When I label images for semantic segmentation (using u-net, if that matters), is labeling the background (anything I am not interested in) necessary? Will it improve the network's performance?
+"
+"['deep-learning', 'computer-vision', 'applications']"," Title: How is depth perception (e.g. in autonomous driving) addressed without using a Lidar or Radar unit?Body: For practical applications, like autonomous driving, depth perception is needed to make useful decisions.
+How is this normally addressed without using a LIDAR or RADAR unit (but using a camera)?
+"
+"['convolutional-neural-networks', 'object-recognition', 'image-segmentation', 'u-net']"," Title: How to quickly change hand-drawn shapes to symmetrical polished shapes?Body: Given a hand-drawn shape, I'd like to generate the corresponding symmetrical polished shapes such as circle, rectangle, triangle, trapezoid, square, parallelogram, etc.
+A short video demonstration
+Here below we can see a parallelogram, trapezoid, triangle and a circle.
+I was wondering how I could transform them into symmetrical polished shapes.
+
+At first, I tried a simple approach with traditional computer vision algorithms, with OpenCV (no neural networks were involved), by counting the number of corners, but it failed miserably, since there are many edge cases within a user's doodle.
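+Roughly, this is the kind of classical attempt I mean (a simplified sketch; the file name and thresholds are placeholders, and it assumes OpenCV 4's findContours signature):
+import cv2
+
+# Count the corners of the largest contour and map the count to a shape name.
+img = cv2.imread("doodle.png", cv2.IMREAD_GRAYSCALE)
+_, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)
+contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
+cnt = max(contours, key=cv2.contourArea)
+approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
+names = {3: "triangle", 4: "quadrilateral"}
+print(names.get(len(approx), "circle/other"))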
+So, I was thinking of delving into CNNs, specifically U-Net, for segmentation.
+Can somebody please give me some suggestions on how to approach this kind of problem? I'd like to read some relevant articles and code about this subject for getting a better grasp of this kind of problem.
+"
+"['reinforcement-learning', 'multi-armed-bandits']"," Title: Solving multi-armed bandit problems with continuous action spaceBody: My problem has a single state and an infinite amount of actions on a certain interval (0,1). After quite some time of googling I found a few paper about an algorithm called zooming algorithm which can solve problems with a continous action space. However my implementation is bad at exploiting. Therefore I'm thinking about adding an epsilon-greedy kind of behavior.
+Is it reasonable to combine different methods?
+Do you know other approaches to my problem?
+"
+"['neural-networks', 'implementation', 'word-embedding']"," Title: How can I create an embedding layer to convert words to a vector space from scratch?Body: For an upcoming project, I am trying to build a neural network for classifying text from scratch, without the use of libraries. This requires an embedding layer, or a way to convert words to some vector representation. I understand the gist, but I can't find any deep explanations or tutorials that don't start with importing TensorFlow. All I'm really told is that it works by context using a few surrounding words, but I don't understand exactly how.
+Is it much different from a classic network, with weights and biases? How does it figure out the loss?
+If someone could point me towards a guide to how these things work exactly I would be very grateful.
+"
+"['reinforcement-learning', 'monte-carlo-methods', 'temporal-difference-methods', 'bellman-equations', 'dynamic-programming']"," Title: If the transition model is available, why would we use sample-based algorithms?Body: Sample-based algorithms, like Monte Carlo Algorithms and TD-Learning, are often presented as useful since they do not require a transition model.
+Assuming I do have access to a transition model, are there any reasons one might want to use sample-based methods instead of performing a full Bellman update?
+"
+"['reinforcement-learning', 'reward-design', 'reward-functions', 'multi-objective-rl', 'reward-hypothesis']"," Title: Can rewards be decomposed into components?Body: I'm training a robot to walk to a specific $(x, y)$ point using TD3, and, for simplicity, I have something like reward = distance_x + distance_y + standing_up_straight
, and then it adds this reward to the replay buffer. However, I think that it would be more efficient if it can break the reward down by category, so it can figure out "that action gave me a good distance distance_x
, but I still need work on distance_y
and standing_up_straight
".
+Are there any existing algorithms that add rewards this way? Or have these been tested and proven not to be effective?
+"
+"['neural-networks', 'machine-learning']"," Title: Neural Network is not learning a very simple taskBody: I am a complete beginner in the area. I implemented my first neural network following the online book "Neural Networks and Deep Learning" by Micheal Nielsen. It works fine with classifying handwritten digits. Achieving ~9500/10000 accuracy on the test data.
+I am trying to train the network to determine whether $x > 5$ where $x$ is in the interval $[0,10)$, which should be a much simpler task than classifying handwritten digits. However, no learning happens and the accuracy ever the test data stays exactly the same with every epoch. I tried different structures and different learning rates but always the same thing happened. Here is the code I wrote that uses libraries in Nielsen's book:
+import networkCopy
+import numpy as np
+# Creating training data
+x = []
+y = []
+for n in range(1000):
+ to_add = 100*np.random.rand()
+ x.append(np.array([to_add]).reshape(1,1))
+ y.append(np.array([float(to_add > 50)]).reshape(1,1))
+training_data = zip(x, y)
+# Creating test data
+tx = []
+ty = []
+for n in range(1000):
+ to_add = 100*np.random.rand()
+ tx.append(np.array([to_add]).reshape(1,1))
+ ty.append(np.array([float(to_add > 50)]).reshape(1,1))
+test_data = zip(tx, ty)
+
+# Creating and training the network
+net = networkCopy.Network([1, 5, 1]) # [1, 5, 1] contains the number of neurons for each layer
+net.SGD(training_data, 300, 100, 5.0, test_data=test_data)
+# 300 is the number of epochs, 100 is the mini-batch size,
+# 5.0 is the learning rate
+
+The way I generated the data may not be optimal; it is an ad hoc solution to put the data in the proper form for the network. This is my first question, so I apologize for any mistakes in the format of the question.
+"
+"['neural-networks', 'comparison', 'training', 'gradient-descent', 'softmax']"," Title: Isn't it true that using max over a softmax will be much slower because there is not a smooth gradient?Body: Isn't it true that using max over a softmax will be much slower because there is not a smooth gradient?
+Max basically zeros out the gradients of all the non-maximum values. Especially at the beginning of training, this means it is zeroing out potentially useful features simply because of random weight initialization. Wouldn't this drastically slow down the training in the beginning?
+"
+"['neural-networks', 'convolutional-neural-networks', 'data-preprocessing', 'data-augmentation']"," Title: Do I need to rotate the masks, if I also rotate the images and the masks are generated from the input?Body: I am training a neural network that takes an input (H, W, 3)
and has the output of size (H', W', C)
. Now, to augment my dataset, since I only have 45k images, I am using the following in my custom data generator
+def Generator():
+    # Read an image and rotate it by a random angle of up to 20 degrees.
+    img = cv2.imread(trainDir + '\\' + imgpath)
+    img = tf.keras.preprocessing.image.random_rotation(img, 20, row_axis=0, col_axis=1, channel_axis=2)
+
+    # Load the corresponding pre-computed mask (joint heat maps).
+    output_mask = np.load(trainDir + '\\' + maskpath)
+
+    yield (img / 255 - .5, output_mask)
+
+Since I am rotating my input images, and the output masks are generated from information about the input (specifically, heat maps around the joint locations), do I need to rotate the masks as well?
+"
+"['reinforcement-learning', 'transfer-learning']"," Title: How to ""forward"" updated NN model to a transferred model?Body: I've trained a robot to walk in a straight line for as long as it can (using TD3), and now I'm using that pre-trained model for two new models with separate purposes: 1. Walk to a specific point and halt (adding target position to the NN inputs); 2. Walk straight at a specified velocity (adding a target velocity to NN inputs).
+Now let's say I retrain the original model again to walk properly after changing, say, the mass of the robot. How can I approach "forwarding" this update to the two transfer-learned models? The purpose of this is to minimize re-training time for all future models transfer-learned from the original.
+(What strikes me as particularly challenging is the fact that the input layers of the transfer-learned models have additional features, so this may re-wire the majority of the NN, making a "forwarded update" completely incompatible...)
+"
+"['neural-networks', 'reinforcement-learning', 'unsupervised-learning', 'forecasting', 'semi-supervised-learning']"," Title: Should forecasting with neural networks only be treated as a supervised learning (regression) problem?Body: I have recently made a work about the application of neural networks to time series forecasting, and I treated this as a supervised learning (regression) problem. I have come across the suggestion of treating this problem as an unsupervised, semi-supervised, or reinforcement learning problem. The ones that made this suggestion didn't know how to explain this approach and I haven't found any paper of this. So I found myself now trying to figure it out without any success. To my understanding:
+Unsupervised learning problems (clustering and segmentation reduction) and semi-supervised learning problems (semi-supervised clustering and semi-supervised classification) can be used to decompose the time series but not forecast it.
+Reinforcement learning problems (model-based and non-model-based on/off-policy) is to decision taken problems, not to forecast.
+It is possible to treat forecasting time series with neural networks as an unsupervised, semi-supervised, or reinforcement learning problem? How it is done?
+"
+"['neural-networks', 'deep-learning', 'classification', 'history', 'softmax']"," Title: Which paper introduced the term ""softmax""?Body: Nowadays, the softmax function is widely used in deep learning and, specifically, classification with neural networks. However, the origins of this term and function are almost never mentioned anywhere. So, which paper introduced this term?
+"
+"['game-ai', 'monte-carlo-methods']"," Title: Monte Carlo Exploring Starts broke for 2048 game AIBody: I implemented a MCES for 2048 (the game), with a quality function implemented as a neural net of a single layer.
+The starts are created with 6 cells filled with values between 64 and 1024, two cells are 1024 an ther other 8 cells are filled with 0. The game is then progressed until the AI loses or wins and another start is created.
+After 10 wins the max cell created in the start is reduced in half. Thus, after the first 10 wins, the max cell created in the start is 512.
+The issue I am having is that after the first 10 wins, the AI gets stuck, it can run around 3 million steps but doesn't get any more wins.
+How should I create the starts for it to actually learn?
+Code for reward (complete code here):
+ ArrayList<DataSet> dataSets = new ArrayList<>();
+ double gain = 0;
+
+ for(int i = rewards.size()-1; i >= 0; i--) {
+ gain = gamma * gain + rewards.get(i);
+
+ double lerpGain = reward(gain);
+ INDArray correctOut = output.get(i).putScalar(actions.get(i).ordinal(), lerpGain);
+ dataSets.add(new DataSet(input.get(i), correctOut));
+ }
+
+ Qnetwork.fit(DataSet.merge(dataSets));
+
+Code:
+public class SimpleAgent {
+ private static final Random random = new Random(SEED);
+
+ private static final MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
+ .seed(SEED)
+ .weightInit(WeightInit.XAVIER)
+ .updater(new AdaGrad(0.5))
+ .activation(Activation.RELU)
+ .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
+ .weightDecay(0.0001)
+ .list()
+ .layer(new DenseLayer.Builder()
+ .nIn(16).nOut(4)
+ .build())
+ .layer(new OutputLayer.Builder()
+ .nIn(4).nOut(4)
+ .lossFunction(LossFunctions.LossFunction.SQUARED_LOSS)
+ .build())
+ .build();
+
+
+ public SimpleAgent() {
+ Qnetwork.init();
+ ui();
+ }
+
+ private static final double gamma = 0.02;
+
+ private final ArrayList<INDArray> input = new ArrayList<>();
+ private final ArrayList<INDArray> output = new ArrayList<>();
+ private final ArrayList<Double> rewards = new ArrayList<>();
+ private final ArrayList<GameAction> actions = new ArrayList<>();
+
+ private MultiLayerNetwork Qnetwork = new MultiLayerNetwork(conf);
+ private GameEnvironment oldState;
+ private GameEnvironment currentState;
+ private INDArray oldQuality;
+ private double epsilon = 1;
+
+ public void setCurrentState(GameEnvironment currentState) {
+ this.currentState = currentState;
+ }
+
+ public GameAction act() {
+ if(oldState != null) {
+ double reward = currentState.points - oldState.points;
+
+ if (currentState.lost) {
+ reward = 0;
+ }
+
+ input.add(oldState.boardState);
+ output.add(oldQuality);
+ rewards.add(reward);
+
+ epsilon -= (1 - 0.01) / 1000000.;
+ }
+
+ oldState = currentState;
+ oldQuality = Qnetwork.output(currentState.boardState);
+
+ GameAction action;
+
+ if(random.nextDouble() < 1-epsilon) {
+ action = GameAction.values()[oldQuality.argMax(1).getInt()];
+ } else {
+ action = GameAction.values()[new Random().nextInt(GameAction.values().length)];
+ }
+
+ actions.add(action);
+
+ return action;
+ }
+
+ private final int WINS_TO_NORMAL_GAME = 100;
+ private int wonTimes = 0;
+
+ public void setHasWon(boolean won) {
+ if(won) {
+ wonTimes++;
+ }
+ }
+
+ public boolean playNormal() {
+ return wonTimes > WINS_TO_NORMAL_GAME;
+ }
+
+ public boolean shouldRestart() {
+ if (currentState.lost || input.size() == 20) {
+ ArrayList<DataSet> dataSets = new ArrayList<>();
+ double gain = 0;
+
+ for(int i = rewards.size()-1; i >= 0; i--) {
+ gain = gamma * gain + rewards.get(i);
+
+ double lerpGain = reward(gain);
+ INDArray correctOut = output.get(i).putScalar(actions.get(i).ordinal(), lerpGain);
+ dataSets.add(new DataSet(input.get(i), correctOut));
+ }
+
+ Qnetwork.fit(DataSet.merge(dataSets));
+
+ input.clear();
+ output.clear();
+ rewards.clear();
+ actions.clear();
+
+ return true;
+ }
+
+ return false;
+ }
+
+ public Game2048.Tile[] generateState() {
+ double lerped = lerp(wonTimes, WINS_TO_NORMAL_GAME);
+ int filledTiles = 8;
+
+ List<Integer> values = new ArrayList<>(16);
+
+ for (int i = 0; i < 16-filledTiles; i++) {
+ values.add(0);
+ }
+
+ for (int i = 16-filledTiles; i < 14; i++) {
+ values.add((int) (7-7*lerped) + random.nextInt((int) (2- 2*lerped)));
+ }
+
+ values.add((int) ceil(10-10*lerped));
+ values.add((int) ceil(10-10*lerped));
+
+ Collections.shuffle(values);
+
+ return values
+ .stream()
+ .map((value) -> (value == 0? 0: 1 << value))
+ .map(Game2048.Tile::new)
+ .toArray(Game2048.Tile[]::new);
+ }
+
+ private static double reward(double x) {
+ return x/ 2048;
+ }
+
+ private static double lerp(double x, int maxVal) {
+ return x/maxVal;
+ }
+
+ private void ui() {
+ UIServer uiServer = UIServer.getInstance();
+ StatsStorage statsStorage = new InMemoryStatsStorage();
+ uiServer.attach(statsStorage);
+ Qnetwork.setListeners(new StatsListener(statsStorage));
+ }
+}
+
+"
+"['classification', 'training', 'regression']"," Title: Finding the 'ultimate resolution' of an ANNBody: I want to use a neural network to predict the refractive index of a solution. My thinking is, instead of immediately training on many samples, I will first find the 'ultimate resolution' of the network given the experimental apparatus I am using. What I mean is I will make two different solutions which have refractive indices near the middle of the range of which I am interested. Then I will train the network to classify these two solutions based on reflectance measured from the solution. If it works, say with at least 95% accuracy, then I will make two different solutions in which the difference in refractive index is smaller than before. I will repeat this until the ANN classifies, say below 95%.
+Will this method of finding the 'resolution' by classification extrapolate well to regression with many more training examples?
+"
+"['machine-learning', 'transfer-learning', 'batch-normalization']"," Title: How does batch normalisation actually work?Body: I actually went through the Keras' batch normalization tutorial and the description there puzzled me more.
+Here are some facts about batch normalization that I read recently and want a deep explanation on it.
+
+- If you freeze all the layers of a neural network at their randomly initialized weights, except for the batch normalization layers, you can still get 83% accuracy on CIFAR10.
+
+- When setting the trainable attribute of a batch normalization layer to false, it will run in inference mode and will not update its mean and variance statistics (see the small sketch below the list).
+
+
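+A tiny sketch of how I read point 2 (based on my understanding of the Keras docs; the tensors are made up):
+import tensorflow as tf
+
+bn = tf.keras.layers.BatchNormalization()
+x = tf.random.normal((8, 4))
+
+_ = bn(x, training=True)        # training mode: the moving mean/variance get updated
+print(bn.moving_mean.numpy())
+
+bn.trainable = False            # per the docs, the layer now runs in inference mode,
+_ = bn(x, training=True)        # even if training=True is passed,
+print(bn.moving_mean.numpy())   # so the statistics no longer change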
+"
+"['neural-networks', 'deep-learning', 'prediction', 'feature-selection', 'features']"," Title: When is adding a feature useless?Body: I'm building a model, where, from a feature set A, I want to predict a target set C. I need to understand if another feature set B, together with A, can improve my model performances, instead of using only A.
+Now I want to check if I can predict B directly from A, since, in my understanding, this would mean that info on B is already inside A.
+If I get good predictions when testing the model A -> B, is it true then that adding B to A in predicting C is completely useless?
+And furthermore, are there smarter ways to decide if/when a feature is useless?
+"
+"['neural-networks', 'deep-learning', 'python', 'keras', 'r-cnn']"," Title: Inaccurate masks with Mask-RCNN: Stairs effect and sudden stopsBody: I've been using matterport's Mask R-CNN to train on a custom dataset. However, there seem to be some parameters that i failed to correctly define because on practically all of the images, the bottom or top of the object's mask is cut off:
+
+As you can see, the bounding box is fine since it covers the whole blade, but the mask seems to suddenly stop in a horizontal line on the bottom.
+On the other hand, there is a stair-like effect on the masks of larger and curvier objects, such as this one (in addition to the bottom and top cut-offs):
+
+
+- The original images are downscaled to IMAGE_MIN_DIM = IMAGE_MAX_DIM = 1024 using the "square" mode.
+- USE_MINI_MASK is set to true with MINI_MASK_SHAPE = (512, 512) (somehow, if I turn it off, RAM gets filled and training crashes).
+- RPN_ANCHOR_SCALES = (64, 128, 256, 512, 1024), since the objects occupy a large part of the image.
+
+It doesn't feel like the problem comes from the amount of training. These two predictions come from 6 epochs of 7000 steps per epoch (which took around 17 hours), and the problem appears from an early stage and persists across all the epochs.
+I posted the same question on Stack Overflow, and an answer pointed out that this issue is common when using Mask R-CNN. It also suggested looking at PointRend, an extension of Mask R-CNN that addresses this issue.
+Nevertheless, I feel like I could still optimize my model and use the full potential of Mask R-CNN before looking for an alternative.
+Any idea what changes to make?
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'double-dqn']"," Title: Why does adding another network help in double DQN?Body: What is the idea behind double DQN?
+The target in double DQN is computed as follows
+$$
+Y_{t}^{\text {DoubleQ }} \equiv R_{t+1}+\gamma Q\left(S_{t+1}, \underset{a}{\operatorname{argmax}} Q\left(S_{t+1}, a ; \boldsymbol{\theta}_{t}\right) ; \boldsymbol{\theta}_{t}^{\prime}\right),
+$$
+where
+
+- $\boldsymbol{\theta}_{t}^{\prime}$ are the weights of the target network
+- $\boldsymbol{\theta}_{t}$ are the weights of the online value network
+- $\gamma$ is the discount factor
+
+On the other hand, the target in DQN is computed as
+$$Y_{t}^{\mathrm{DQN}} \equiv R_{t+1}+\gamma \max _{a} Q\left(S_{t+1}, a ; \boldsymbol{\theta}_{t}^{-}\right),$$
+where $\boldsymbol{\theta}_{t}^{-}$ are the weights of the target network.
+The target network used for evaluating the action is updated with the weights of the online network, and the value fed into the target is basically the old Q-value of the selected action.
+Any ideas on how or why adding another network based on weights from the first network helps? Any example?
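+To make the difference concrete, this is how I would spell out the two targets in code (a PyTorch sketch; online_net and target_net are hypothetical Q-networks mapping a state batch to Q-values of shape (batch, n_actions), and terminal-state masking is omitted):
+import torch
+
+def dqn_target(reward, next_state, gamma, target_net):
+    with torch.no_grad():
+        # Both action selection and action evaluation use the target network.
+        return reward + gamma * target_net(next_state).max(dim=1).values
+
+def double_dqn_target(reward, next_state, gamma, online_net, target_net):
+    with torch.no_grad():
+        # Action selection with the online network, evaluation with the target network.
+        best_actions = online_net(next_state).argmax(dim=1, keepdim=True)
+        return reward + gamma * target_net(next_state).gather(1, best_actions).squeeze(1)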
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'backpropagation', 'gradient-descent']"," Title: Is my understanding of back-propogation correct?Body: I am trying to learn backpropagation and this is what I know so far.
+To update the weights of the neural network, you have to figure out the partial derivative of the loss function with respect to each of the parameters using the chain rule. List all of these partial derivatives in a column vector and you have the gradient vector of the loss function at your current parameters. Then, by taking the negative of the gradient vector to descend the loss function, multiplying it by the learning rate (step size), and adding it to your original parameter vector, you have your new weights.
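+In code, the update I have in mind looks roughly like this (a toy sketch for a single linear neuron with squared-error loss; the data is made up):
+import numpy as np
+
+x, y = np.array([1.0, 2.0]), 3.0   # one training example
+w, lr = np.zeros(2), 0.1           # parameters and learning rate (step size)
+
+for _ in range(100):
+    y_hat = w @ x
+    grad = (y_hat - y) * x         # dL/dw via the chain rule, with L = 0.5 * (y_hat - y)**2
+    w = w + lr * (-grad)           # step along the negative of the gradient
+print(w)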
+Is my understanding correct? Also, how can this be done in iterations over training examples?
+"
+"['machine-learning', 'python', 'keras']"," Title: Binary mode or Multi-label mode is correct when using binary crossentropy and sigmoid output function on multi-label classificationBody: I would like to ask a question about the relationship of accuracy with the loss function.
+My experiment is a multiclass text classification problem, and I have built a Keras neural network to tackle it. My labels are something like
+array([array([0, 0, 0, 0, 0, 1, 0, 1]), array([0, 1, 1, 0, 0, 0, 0, 1])])
+For the final output layer I use the 'sigmoid' activation function, and for the loss 'binary crossentropy'; however, I am a bit confused about the metric. I am using the F1-score metric because accuracy is not a metric to count on when there are many more negative labels than positive labels. So, since the problem is multi-label classification, shall I use the multi-label mode, like tfa.metrics.F1Score(average="micro")? Is that correct? Or, since I use binary_crossentropy and a sigmoid activation function, should I use the standard binary F1-score, because every label/tag is independent of the others and has a different Bernoulli distribution?
+I would really like to get your input on this. My humble opinion is that I should use the standard binary mode of the F1-score and not the multi-label micro approach, even though my experiment is multi-label text classification.
+My current approach (using the micro F1-score, since my y_train is multi-label):
+model_for_pruning.compile(optimizer='adam',
+ loss='binary_crossentropy',
+ metrics=[tfa.metrics.F1Score(y_train[0].shape[-1], average="micro")])
+
+My alternative approach (based on binary_crossentropy and the sigmoid activation function, despite my having a multi-label y_train):
+model_for_pruning.compile(optimizer='adam',
+ loss='binary_crossentropy',
+ metrics=[tfa.metrics.F1Score(y_train[0].shape[-1], average=None)])
+
+The reason why I use sigmoid and not softmax in the output layer:
+relevant link
+Why Sigmoid and not Softmax in the final dense layer?
+In the final layer of the above architecture, the sigmoid function has been used instead of softmax. The advantage of using sigmoid over Softmax lies in the fact that one synopsis may have many possible genres. Using the Softmax function would imply that the probability of occurrence of one genre depends on the occurrence of other genres. But for this application, we need a function that would give scores for the occurrence of genres, which would be independent of occurrences of any other movie genre.
+relevant link 2
+Binary cross-entropy rather than categorical cross-entropy.
+This may seem counterintuitive for multi-label classification; however, the goal is to treat each output label as an independent Bernoulli distribution and we want to penalize each output node independently.
+Please check my reasoning behind this; I would be happy if you can contradict this explanation. To better explain my experiment: I want to predict movie genres, so a movie can belong to 1 or more genres, e.g. ['Action', 'Comedy', 'Children']. When I use softmax, the probabilities sum to 1, while when I use sigmoid each class's individual probability lies in the range (0,1). Thus, if the predictions are correct, the genres with the highest probabilities are those assigned to the movie. So imagine that my vector of prediction probabilities is something like [0.15, 0.12, 0.54, 0.78, 0.99] (sum > 1), and not something like [0.12, 0.43, 0.11, 0.32, 0.01, 0.01] (sum = 1).
+"
+"['deep-learning', 'computer-vision', 'object-detection', 'papers']"," Title: What is a heatmap in the CornerNet paper?Body: I have been working on understanding how CornerNet works, but I couldn't figure out a few parts about the architecture.
+First, the authors mention that there are 3 distinct parts to be predicted as a heatmap, embedding, and offset.
+Also, in the paper, it is stated that the network was trained on the COCO dataset, which has bounding box and class annotations.
+As far as I understand, since CornerNet is based on detecting the top-left and bottom-right corners, the ground-truth labels for the heatmaps should be composed of the top-left and bottom-right pixel locations of the bounding boxes together with the class score (but I might be wrong). What is the heatmap used for?
+Moreover, for the embedding part, the authors used the pull & push loss at the ground-truth pixel locations to find out which corner pairs belong to which object, but I don't understand how to backpropagate this loss. How do I backpropagate the embedding loss?
+"
+"['reinforcement-learning', 'terminology', 'policy-gradients']"," Title: How can I classify policy gradient methods in RL?Body: In the book of Barto and Sutton, there are 3 methods presented that solve an RL problem: DP, Monte Carlo, and TD. But which category does policy gradient methods (or actor-only methods) classify in? Should I classify them as the 4th method of solving a reinforcement learning problem?
+"
+"['machine-learning', 'deep-learning', 'computer-vision']"," Title: Can I use augmented data in the validation set?Body: I am trying to predict nursing activity using mobile accelerometer data. My dataset is a CSV file containing x, y, z component of acceleration. Each frame contains 20-second data. The dataset is highly imbalance, so I perform data augmentation and balance the data. In the data augmentation technique, I only use scaling and my assumption is, if I scale down or up a signal the activity remains the same. Using this assumption I augmented the data and my validation set not only contain the original signals but also the augmented (scaling) signals. Using this process, I am getting quite a good accuracy that I never being expected using only data augmentation. So, I am thinking that I performed a terrible mistake somewhere. I check the code, everything is right. So now I think, since my validation set has augmented data, that's the reason of getting this high accuracy (maybe the augmented data is really easy to classify).
+"
+"['neural-networks', 'deep-learning', 'feedforward-neural-networks']"," Title: Why isn't the loss of my neural network reduced after 2500 iterations?Body: I have developed a basic feedforward neural network from scratch to classify whether image is of cat or not cat. It works fine, but after 2500 iterations, my cost function is not reducing properly.
+The loss function which I am using is
+$L(\hat{y},y) = -ylog\hat{y}-(1-y)log(1-\hat{y})$
+Can you please point out where I am going wrong the link to the notebook is
+https://www.kaggle.com/sidcodegladiator/catnoncat-nn?
+"
+"['reinforcement-learning', 'keras', 'gradient-descent', 'ddpg', 'pseudocode']"," Title: Why does this Keras implementation of the DDPG algorithm update the critic's network using the gradient but the pseudocode doesn't?Body: I'm trying to understand the DDPG algorithm using Keras
+I found the site and started analyzing the code, I can't understand 2 things.
+The algorithm used to write the code presented on the page
+
+In the algorithm image, updating the critic's network does not require a gradient,
+but the gradient is implemented in the code. Why?
+with tf.GradientTape() as tape:
+ target_actions = target_actor(next_state_batch)
+ y = reward_batch + gamma * target_critic([next_state_batch, target_actions])
+ critic_value = critic_model([state_batch, action_batch])
+ critic_loss = tf.math.reduce_mean(tf.math.square(y - critic_value))
+
+critic_grad = tape.gradient(critic_loss, critic_model.trainable_variables)
+critic_optimizer.apply_gradients(zip(critic_grad, critic_model.trainable_variables))
+
+The second question is: why, in the photo of the algorithm, when calculating the actor's policy gradient, are two gradients multiplied together, while in the code only one gradient is calculated for the critic's network and it is not multiplied by a second gradient?
+with tf.GradientTape() as tape:
+ actions = actor_model(state_batch)
+ critic_value = critic_model([state_batch, actions])
+ # Used `-value` as we want to maximize the value given
+ # by the critic for our actions
+ actor_loss = -tf.math.reduce_mean(critic_value)
+
+actor_grad = tape.gradient(actor_loss, actor_model.trainable_variables)
+actor_optimizer.apply_gradients(zip(actor_grad, actor_model.trainable_variables))
+
+"
+"['reinforcement-learning', 'ai-design', 'q-learning', 'datasets']"," Title: How can I formulate a prediction problem (given labeled data) as an RL problem and solve it with Q-learning?Body: One of my friends sent me a problem he was working on lately, and I couldn't help but I wonder how could it be solved using Q-learning. The statement is as follows:
+
+Given the following datasets, the objective is to find a suitable strategy per customer contract to maximize Gain and minimize Cost according to the characteristics of the customer.
+
+
+train.csv: 5000 independent rows, 33 columns.
+
+
+Columns description:
+
+
+
+- Day (1, 2 or 3): on which day the strategy was applied.
+- 28 variables (A, B, C, ..., Z, AA, BB): characteristics of the individual;
+- Gain: the gain for this individual for the corresponding strategy;
+- Cost: the cost for this individual for the corresponding strategy;
+- Strategy (0, 1 or 2): the strategy applied on this individual;
+- Success: 1 if the strategy succeeded, 0 otherwise.
+- If Success is 1, then the net gain is Gain - Cost, and if Success is 0, consider a standardized cost of 5.
+
+
+
+
+- test.csv: 2 000 independent rows, 31 columns.
+
+
+
+Columns description:
+
+
+
+- Index: 0 to 1999, unique for each row.
+- Day (4): on which day the strategy will be applied.
+- 28 variables (A, B, C, ..., Z, AA, BB): characteristics of the client;
+- Gain: the gain for this individual for the corresponding strategy;
+- Cost: the cost for this individual for the corresponding strategy;
+
+
+From what I understood, the train.csv file is used to build a Q-Learning model, and the test one for generating a strategy and predicting a Success.
+My main question is:
+How do I formulate this problem as an RL problem? How do I define an episode? Since the training data is labeled, this could clearly be a classification problem (predicting the strategy), but I have no idea how to solve it using RL (ideally Q-learning). Any ideas will be helpful.
+"
+"['machine-learning', 'convolutional-neural-networks', 'ai-design', 'natural-language-understanding']"," Title: Can text-independent writer identification be done without multi-sentence training datasets for each writer?Body: I am trying to learn more about text-independent writer identification and was hoping for some advice.
+I have a folder with 100k images, each of them with a different handwritten sentence. All of the images have sentences of different lengths. They range from about 30 to 80 English characters. The file names start at 1.png and go up to 100k.png. That's it, as far as input data. 95% of the sentences are written by different writers. 5% are written by the same writers. Some writers might have written 2 sentences, while others 300+.
+Does anyone know of an identification method that would be able to determine what images were written by the same writer?
+I know that most methods require each writer to have provided a full page of sample writing for training, but, of course, I do not have that.
+"
+"['neural-networks', 'machine-learning', 'deep-learning']"," Title: Why are these same neural network architecture giving different results?Body: I tried the first neural network architecture and the second one, but keeping all other variables constants, I am getting better results with the second architecture. Why are these same neural network architecture giving different results? Or am I making some mistakes?
+First one:
+def __init__(self, state_size, action_size, seed, hidden_advantage=[512, 512],
+ hidden_state_value=[512,512]):
+ super(DuelingQNetwork, self).__init__()
+ self.seed = torch.manual_seed(seed)
+ hidden_layers = [state_size] + hidden_advantage
+ self.adv_network = nn.Sequential(nn.Linear(hidden_layers[0], hidden_layers[1]),
+ nn.ReLU(),
+ nn.Linear(hidden_layers[1], hidden_layers[2]),
+ nn.ReLU(),
+ nn.Linear(hidden_layers[2], action_size))
+
+ hidden_layers = [state_size] + hidden_state_value
+ self.val_network = nn.Sequential(nn.Linear(hidden_layers[0], hidden_layers[1]),
+ nn.ReLU(),
+ nn.Linear(hidden_layers[1], hidden_layers[2]),
+ nn.ReLU(),
+ nn.Linear(hidden_layers[2], 1))
+def forward(self, state):
+ """Build a network that maps state -> action values."""
+ # Perform a feed-forward pass through the networks
+ advantage = self.adv_network(state)
+ value = self.val_network(state)
+ return advantage.sub_(advantage.mean()).add_(value)
+
+Second one:
+def __init__(self, state_size, action_size, seed, hidden_advantage=[512, 512],
+ hidden_state_value=[512,512]):
+ super(DuelingQNetwork, self).__init__()
+ self.seed = torch.manual_seed(seed)
+
+ hidden_layers = [state_size] + hidden_advantage
+ advantage_layers = OrderedDict()
+ for idx, (hl_in, hl_out) in enumerate(zip(hidden_layers[:-1],hidden_layers[1:])):
+ advantage_layers['adv_fc_'+str(idx)] = nn.Linear(hl_in, hl_out)
+ advantage_layers['adv_activation_'+str(idx)] = nn.ReLU()
+
+ advantage_layers['adv_output'] = nn.Linear(hidden_layers[-1], action_size)
+
+ self.network_advantage = nn.Sequential(advantage_layers)
+
+ value_layers = OrderedDict()
+ hidden_layers = [state_size] + hidden_state_value
+
+ # Iterate over the parameters to create the value network
+ for idx, (hl_in, hl_out) in enumerate(zip(hidden_layers[:-1],hidden_layers[1:])):
+ # Add a linear layer
+ value_layers['val_fc_'+str(idx)] = nn.Linear(hl_in, hl_out)
+ # Add an activation function
+ value_layers['val_activation_'+str(idx)] = nn.ReLU()
+
+ # Create the output layer for the value network
+ value_layers['val_output'] = nn.Linear(hidden_layers[-1], 1)
+
+ # Create the value network
+ self.network_value = nn.Sequential(value_layers)
+
+def forward(self, state):
+ """Build a network that maps state -> action values."""
+
+ # Perform a feed-forward pass through the networks
+ advantage = self.network_advantage(state)
+ value = self.network_value(state)
+ return advantage.sub_(advantage.mean()).add_(value)
+
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'quantum-computing', 'forward-pass']"," Title: Could a quantum computer perform vectorized forward propagation in deep networks?Body: Forward propagation in Deep Neural Networks
+In the "Forward Propagation in a Deep Network" video on Coursera, Andrew NG mentions that there's no way to avoid a for loop to loop through the different layers of the network during forward propagation.
+See image showing a deep network with 4 layers, and the requirement of a forloop to compute activations for each layer during forward propagation: https://nimb.ws/CkRVLT
+This makes intuitive sense since each layer's activation depends on the previous layer's output.
+Warning: start of speculation
+My rudimentary understanding of quantum computing is that it somehow "magically" can bypass computing intermediate states -> this is why supposedly quantum computers can break cryptography... or something like that.
+I'm wondering if a quantum computer could perform vectorized forward propagation on an L layer deep neural network.
+"
+"['neural-networks', 'algorithm', 'architecture', 'gpt']"," Title: Is the size of a neural network directly linked with an increase in its inteligence?Body: Just came across this article on GPT-3, and that lead me to the question:
+In order to make a certain kind of neural network architecture smarter all one needs to do is to make it bigger?
+Also, if that is true, how does the importance of computer power relates with the importance of fine-tuning/algorithmic improvement?
+"
+"['reinforcement-learning', 'policies', 'off-policy-methods']"," Title: What is meant by ""generate the data"" in describing the difference between on-policy and off-policy?Body: From the book:
+Sutton, Richard S., Barto, Andrew G. Reinforcement Learning (Adaptive Computation and Machine Learning series) (p. 100). The MIT Press. Kindle Edition.
+the following is stated:
+"On-policy methods attempt to evaluate or improve the policy that is used to make decisions, whereas off-policy methods evaluate or improve a policy different from that used to generate the data."
+Looking at off policy:
+
+and on-policy:
+
+What is meant by "generate the data"? I'm confused as to what 'data' means in this context.
+Does "generate the data" translate to the actions generated by the policy ? or Does "generate the data" translate to the Q data state action mappings?
+"
+"['objective-functions', 'information-theory']"," Title: Loss Function In Units Of Bits?Body: Where can I find a machine learning library that implements loss functions
measuring the Algorithmic Information Theoretic-friendly quantity "bits of information"?
+To illustrate the difference between entropy, in the Shannon information sense of "bits" and the algorithmic information sense of "bits", consider the way these two measures treat a 1 million character string representing $\pi$:
+Shannon entropy "bits" ($6$ for the '.'): $\lceil 1e6*\log_2(10) \rceil+6$
+Algorithmic "bits": The length, in bits, of the shortest program that outputs 1e6 digits of $\pi$ .
+All statistical measures of information, such as KL divergence, are based on Shannon information. By contrast, algorithmic information permits representations that are fully dynamical as in Chomsky type 0, Turing Complete, etc. languages. Since the world in which we live is dynamical, algorithmic models are at least plausibly more valid in many situations than are statistical models. (I recognize that recursive neural nets can be dynamical and that they can be trained with statistical loss functions.)
+For a more authoritative and formal description of these distinctions see the Hutter Prize FAQ questions Why aren't cross-validation or train/test-set used for evaluation? and Why is Compressor Length superior to other Regularizations? For a paper-length exposition on the same see "A Review of Methods for Estimating Algorithmic Complexity: Options, Challenges, and New Directions".
+From what I can see, machine learning makes it difficult to relate loss to algorithmic information. Such an AIT-friendly loss function must, by definition, measure the number of bits required to reconstruct, without loss, the original training dataset.
+Let me explain with examples of what I mean by AIT-friendly loss functions, starting with the baby-step of classification loss (usually measured as cross-entropy):
+Let's say your training set consists of $P$ patterns belonging to $C$ classes. You can then construct a partial AIT loss function providing the length of the corrections to the model's classifications with a $P$-length vector, each element containing a $0$ if the model was correct for that pattern, or the class if not. These elements would each have a bit-length of $\lceil \log_2(C+1) \rceil$, and be prefixed by a variable length integer storing $P$. The more $0$ elements, the more compressible this correction vector until, in the limit, a single run-length code for $P$ $0$'s is stored as the correction, prefixed by $P$ and the length of the binary for the RLE algorithm itself. The bit-length of these, taken together, would comprise this partial loss function.
+This is a reasonable first cut at an AIT-friendly loss function for classification error.
+So now let's go one step further to outputs that are numeric, the typical approach is a summation of a function of individual error measures, such as squaring or taking their absolute value or whatever -- perhaps taking their mean. None of these are in units of bits of information. To provide the correction on the outputs to reproduce the actual training values requires, again, a vector of corrections. This time it would be deltas, the precision of which must be adequate to the original data being losslessly represented, hence requiring some sort of adaptive variable length quantity representation(s). These deltas would likely have a non-uniform distribution so they can be arithmetically encoded. That seems like a reasonable approach to another AIT-friendly loss function.
+But now we get to the "model parameters" and find ourselves in the apparently well-defined but ill-founded notions like "L2 regularization", which are defined in terms of ill-defined "parameters", e.g. "parameter counts" aren't given in bits.
+I'll grant that L2 regularization sounds like it is heading in the right direction by squaring the weights and summing them up, but when one looks at what is actually being done, it is:
+
+- applying additional functions to the sum such as mean
+- asking for a scaling factor to apply
+- applying the regularization on a per-layer basis rather than the entire model
+
+I'm sure I missed some of the many ways L2 regularization fails to be AIT-friendly.
+Finally, there is the model's pseudo-invariance, measured, not simply in terms of its hyperparameters, but in terms of the length of the (compressed archive of the) actual executable binary running on the hardware. I say 'pseudo' because there is nothing that says one cannot vary, say, the number of neurons in a neural network during learning -- nor even change to another learning paradigm than neural networks during learning (in the most general case).
+So that's pretty much the complete loss function down to the Universal Turing Machine iron, but I'd be happy to see just a reference to an existing TensorFlow or another library that tries to do even a partial loss function for AIT-theoretic learning.
+"
+"['neural-networks', 'python', 'keras', 'linear-regression']"," Title: Do correlations matter when building neural networks?Body: I am new to working with neural networks. However, I have built some linear regression models in the past. My question is, is it worth looking for features with a correlation to my target variable as I would normally do in a linear regression or is it better to feed the neural network with all the data I have?
+Assuming that the data I have is all related to my target variable of course. I am working with this dataset and building a neural network regressor for it.
+https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0101EN/labs/data/concrete_data.csv
+Here is a snippet of the data. The target variable is the concrete strength rate given a certain combination of materials for that concrete sample.
+
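+For context, this is the kind of correlation check I would normally do first (a sketch; I am assuming the target column is named "Strength" and that pandas can read the CSV straight from the URL):
+import pandas as pd
+
+url = ("https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/"
+       "CognitiveClass/DL0101EN/labs/data/concrete_data.csv")
+df = pd.read_csv(url)
+
+# Pearson correlation of every column with the target.
+print(df.corr()["Strength"].sort_values(ascending=False))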
+I greatly appreciate any tips and explanations. Excuse me if this is too noob of a question, but unfortunately I did not find any info about it on Google. Thanks again!
+"
+['machine-learning']," Title: How to solve the ""dangerous feedback loops"" in machine learning?Body: From the article Dangerous Feedback Loops in ML
+
+Let’s say our model has leads from Facebook, Google, and Bing. If our first model decides that the probability of conversion is 3%, 5%, and 1% from these given sources, and we have finite amount of callbacks we can make, we will only callback the 5% probability. Now fast forward two months. The second model finds these probabilities are now: 0.5%, 8.5%, and 0%. What happened?
+
+
+Because we started only calling Google leads back, we increased our chances of converting these leads, and likewise, because we stopped calling Facebook and Bing leads, these leads never converted because we never called them. This is an example of a real world feedback loop
+
+How can we solve this problem?
+"
+"['reinforcement-learning', 'optimization', 'policy-gradients']"," Title: How to optimize neural network parameters with REINFORCEBody: I've seen a few mentions in papers that neural network parameters can be found using REINFORCE algorithm. It was mentioned in the context of nondifferentiable operations involving e.g. step function which appears in "hard attention" or weight pruning. Unfortunately, I haven't seen how to really do this. In Markov Decision Process we have, states (S), actions (A) and rewards (R) so what is what in case of neural net? I don't see how can we find parameters or neural net if the gradient is not well defined.
+Any code sample or explanation?
+"
+"['neural-networks', 'machine-learning']"," Title: Should the training data be the same in each epoch?Body: Should the training data be the same in each epoch?
+If the training data is generated on the fly, for example, is there a difference between training 1000 samples with 1 epoch or training 1000 epochs with 1 sample each?
+To elaborate further, samples do not need to be saved or stay in memory if they are never used again. However, if training performs best by training over the same samples repeatedly, then data would have to be stored to be reused in each epoch.
+More samples is generally considered advantageous. Is there a disadvantage to never seeing the same sample twice in training?
+"
+"['neural-networks', 'machine-learning', 'generalization', 'learning-rate']"," Title: Why does learning rate reduce train-test generalization gap?Body: In this blog post: http://www.argmin.net/2016/04/18/bottoming-out/
+Prof Recht shows two plots:
+
+
+He says one of the reasons the plot below has a lower train-test gap is that the model was trained with a lower learning rate (and he also manually drops the learning rate at epoch 120).
+Why would a lower learning rate reduce overfitting?
+"
+"['terminology', 'papers', 'generative-model', 'latent-variable']"," Title: What is meant by degrees of freedom of latent variables?Body:
+...Designing such a likelihood function is typically challenging; however, we observe that features like spectrogram are effective when latent variables have limited degrees of freedom. This motivates us to infer latent variables via methods like Gibbs sampling, where we focus on approximating the conditional probability of a single variable given the others.
+
+Above is an excerpt from a paper I've been reading, and I don't understand what the author means by degrees of freedom of latent variables. Could someone please explain with an example, or add more details?
+
+References
+Shape and Material from Sound (31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA)
+"
+"['reinforcement-learning', 'monte-carlo-methods', 'notation', 'on-policy-methods', 'epsilon-greedy-policy']"," Title: What does the term $|\mathcal{A}(s)|$ mean in the $\epsilon$-greedy policy?Body: I've been looking online for a while for a source that explains these computations but I can't find anywhere what does the $|A(s)|$ mean. I guess $A$ is the action set but I'm not sure about that notation:
+$$\frac{\varepsilon}{|\mathcal{A}(s)|} \sum_{a} Q^{\pi}(s, a)+(1-\varepsilon) \max _{a} Q^{\pi}(s, a)$$
+Here is the source of the formula.
+I also want to clarify that I understand the idea behind the $\epsilon$-greedy approach and the motivation behind the on-policy methods. I just had a problem understanding this notation (and also some other minor things). The author there omitted some stuff, so I feel like there was a continuity jump, which is why I didn't get the notation, etc. I'd be more than glad if I can be pointed towards a better source where this is detailed.
+"
+"['deep-learning', 'convolutional-neural-networks']"," Title: Do the order of the features ie channel matter for a 1d convolutional network?Body: Do the test dataset feature order and inference (real world) feature order have to be the same as the training dataset? For example, if features are in the order (a,c,b,e,d) for the training dataset, does that particular order have to match for the inference and test dataset?
+"
+"['reinforcement-learning', 'q-learning', 'papers', 'convergence']"," Title: What is convergence analysis, and why is it needed in reinforcement learning?Body: While reading a paper about Q-learning in network energy consumption, I came across the section on convergence analysis. Does anyone know what convergence analysis is, and why is convergence analysis needed in reinforcement learning?
+"
+"['reinforcement-learning', 'markov-decision-process', 'function-approximation']"," Title: Correct dimensionality of parameter vector for solving an MRP with linear function approximation?Body: I'm in the process of trying to learn more about RL by shadowing a course offered collaboratively by UCL and DeepMind that has been made available to the public. I'm most of the way through the course, which for auditors consists of a Youtube playlist, copies of the Jupyter notebooks used for homework assigments (thanks to some former students making them public on Github), and reading through Sutton and Barto's wonderful book Reinforcement Learning: An Introduction (2nd edition).
+I've gone a little more than half of the book and corresponding course material at this point, thankfully with the aid of public solutions for the homework assignments and textbook exercises which have allowed me to see which parts of my own work that I've done incorrectly. Unfortunately, I've been unable to find such a resource for the last homework assignment offered and so I'm hoping one of the many capable people here might be able to explain parts of the following question to me.
+
+We are given a simple Markov reward process consisting of two states and with a reward of zero everywhere. When we are in state $s_{0}$, we always transition to $s_{1}$. If we are in state $s_{1}$, there is a probability $p$ (which is set to 0.1 by default) of terminating, after which the next episode starts in $s_{0}$ again. With a probability of $1 - p$, we transition from $s_{1}$ back to itself again. The discount is $\gamma = 1$ on non-terminal steps.
+Instead of a tabular representation, consider a single feature $\phi$, which takes the values $\phi(s_0) = 1$ and $\phi(s_1) = 4$. Now consider using linear function approximation, where we learn a value $\theta$ such that $v_{\theta}(s) = \theta \times \phi(s) \approx v(s)$, where $v(s)$ is the true value of state $s$.
+Suppose $\theta_{0} = 1$, and suppose we update this parameter with TD(0) with a step size of $\alpha = 0.1$. What is the expected value of $\mathbb{E}[ \theta_T ]$ if we step through the MRP until it terminates after the first episode, as a function of $p$? (Note that $T$ is random.)
+
+My real point of confusion surrounds $\theta_{0}$ being given as 1. My understanding was that the dimensionality of the parameter vector should be equal to that of the feature vector, which I've understood as being (1, 4) and thus two-dimensional. I also don't grok the idea of evaluating $\mathbb{E}[ \theta_T ]$ should $\theta$ be a scalar (as an aside I attempted to simply brute-force simulate the first episode using a scalar parameter of 1 and, unless I made errors, found the value of $\theta$ to not depend on $p$ whatsoever). If $\theta$ is two-dimensional, would that be represented as (1, 0), (0, 1), or (1, 1)?
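+For reference, this is roughly the brute-force simulation I attempted with the scalar interpretation (my own sketch, so it may well contain the error):
+import numpy as np
+
+def run_episode(p=0.1, alpha=0.1, theta=1.0, rng=None):
+    # phi(s0) = 1, phi(s1) = 4, reward 0 everywhere, gamma = 1 on non-terminal steps.
+    rng = rng or np.random.default_rng()
+    phi = {0: 1.0, 1: 4.0}
+    s = 0
+    while True:
+        if s == 0:
+            s_next, gamma = 1, 1.0
+        else:
+            terminated = rng.random() < p
+            s_next, gamma = 1, (0.0 if terminated else 1.0)
+        # TD(0) with linear function approximation: v(s) = theta * phi(s), reward = 0.
+        td_error = 0.0 + gamma * theta * phi[s_next] - theta * phi[s]
+        theta += alpha * td_error * phi[s]
+        if s == 1 and gamma == 0.0:
+            return theta
+        s = s_next
+
+print(np.mean([run_episode(rng=np.random.default_rng(i)) for i in range(1000)]))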
+Neither the 1-d nor the 2-d option makes intuitive sense to me, so I hope there's something clear and obvious that someone might be able to point out. For more context, or should someone just be interested in the assignment, here is a link to the Jupyter notebook:
+https://github.com/chandu-97/ADL_RL/blob/master/RL_cw4_questions.ipynb
+"
+"['deep-learning', 'q-learning', 'dqn', 'deep-rl', 'deep-neural-networks']"," Title: Why do we need target network in deep Q learning?Body: I already know deep RL, but to learn it deeply I want to know why do we need 2 networks in deep RL. What does the target network do? I now there is huge mathematics into this, but I want to know deep Q-learning deeply, because I am about to make some changes in the deep Q-learning algorithm (i.e. invent a new one). Can you help me to understand what happens during executing a deep Q-learning algorithm intuitively?
+"
+"['neural-networks', 'convolutional-neural-networks', 'training', 'testing', 'data-augmentation']"," Title: What is the amount of test data needed to evaluate a CNN?Body: I have an image dataset of about 400 images. 70% of these data points were used for training, 15% for validation, and 15% for testing. I am using the 70% to train a CNN-based binary classifier. I augmented the training data to around 8000 images. That makes my test set really small in comparison. Is that ok, and what is considered a decent size of images for a test set?
+"
+"['neural-networks', 'clustering', 'k-means']"," Title: Would it be possible to implement the principals of the K means clustering algorithm in a Neural NetworkBody: During a Machine Learning course which I have done I have learnt about the K means algorithm. Is it possible to use the principals of K means within a neural network?
+"
+"['machine-learning', 'natural-language-processing', 'named-entity-recognition']"," Title: Do we have to use the IOB format on labels in the NER dataset? If so, why?Body: Do we have to use the IOB format on labels in the NER dataset (such as B-PERSON, I-PERSON, etc.) instead of using the usual format (PERSON, ORGANIZATION, etc.)? If so, why? How will it affect the performance of the model?
+"
+"['image-recognition', 'object-detection', 'models', 'optical-character-recognition']"," Title: Detect data in tables of roughly the same structureBody: I would like to train a model that serializes a table of nutrition facts into it's values.
+The tables can vary in form and colour, but always contain the same set of keys (e.g. carbs, fats).
+Examples for these tables can be found here.
+The end goal is to be able to take a picture of such a table and have its values added to a database.
+My initial idea was to train a model on finding subpictures of the individual key/value pairs and then using OCR to find out which value it actually is.
+As I am relatively new to ML, I would love to have some ideas about how one could try to build this, so I can do further research on it.
+Thanks
+"
+"['agi', 'definitions', 'intelligence-testing', 'turing-test']"," Title: What is the Turing test?Body: I'm looking for intuition in simple words but also some simple insights (I don't know if the latter is possible). Can anybody shed some light on the Turing test?
+"
+"['papers', 'notation', 'multi-armed-bandits', 'upper-confidence-bound']"," Title: Why do we use $X_{I_t,t}$ and $v_{I_t}$ to denote the reward received and the at time step $t$ and the distribution of the chosen arm $I_t$?Body: I'm doing some introductory research on classical (stochastic) MABs. However, I'm a little confused about the common notation (e.g. in the popular paper of Auer (2002) or Bubeck and Cesa-Bianchi (2012)).
+As in the latter study, let us consider an MAB with a finite number of arms $i\in\{1,...,K\}$, where an agent chooses at every timestep $t=1,...,n$ an arm $I_t$ which generates a reward $X_{I_t,t}$ according to a distribution $v_{I_t}$.
+In my understanding, each arm has an inherent distribution, which is unknown to the agent. Therefore, I'm wondering why the notation $v_{I_t}$ is used instead of simply using $v_{i}$? Isn't the distribution independent of the time the arm $i$ was chosen?
+Furthermore, I ask myself: Why not simply use $X_i$ instead of $X_{I_t,t}$ (in terms of rewards). Is it because the chosen arm at step $t$ (namely $I_t$) is a random variable and $X$ depends on it? If I am right, why is $t$ used twice in the index (namely $I_t,t$)?
+Shouldn't $X_{I_t}$ be sufficient, since $X_{I_t,m}$ and $X_{I_t,n}$ are drawn from the same distribution?
+"
+"['neural-networks', 'reinforcement-learning', 'deep-learning', 'training', 'q-learning']"," Title: What is the purpose of a Neural Network in Reinforcement Learning when we have a Q-learning update rule?Body: I'm confused as to the purpose of training a neural network (NN) for reinforcement learning (RL) tasks such as Gridworld. In RL tasks, namely q-learning, we have a q-learning update rule, which is designed to take some state and action and compute the value of that state-action pair.
+Performing this process several times will eventually produce a table of states and what action will likely lead to a high reward.
+In RL examples, I've seen them train a neural network to output q-values and a loss function like MSE to compute the loss between the q-learning update rule q-value and the NN's q value.
+So:
+(a) Q-learning update rule-> outputs target Q-values
+(b) NN -> outputs Q values
+MSE to compute the loss between (a) and (b)
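+To make (a) and (b) concrete, this is roughly the training step I have in mind (a minimal sketch; the network shape and names are my own illustration, not from any particular example):
+import torch
+
+q_net = torch.nn.Sequential(torch.nn.Linear(4, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
+optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
+gamma = 0.99
+
+def train_step(state, action, reward, next_state, done):
+    q_value = q_net(state)[action]          # (b) the NN's Q-value for (s, a)
+    with torch.no_grad():                   # (a) the Q-learning update-rule target
+        target = reward + gamma * (1.0 - done) * q_net(next_state).max()
+    loss = torch.nn.functional.mse_loss(q_value, target)
+    optimizer.zero_grad()
+    loss.backward()
+    optimizer.step()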
+So, given we already know what the target Q-value is from a, why do we need to train a NN?
+"
+"['machine-learning', 'decision-trees', 'random-forests']"," Title: Can I apply AdaBoost on a random forest?Body: I know the random forest is a bagging technique. But what if my random forest overfits on a dataset, so I reduce the depth of the decision tree and now it is underfitting. In this scenario, can I take the under-fitted random forest with little depth and try to boost it?
+"
+"['reinforcement-learning', 'q-learning', 'rewards', 'value-functions', 'return']"," Title: Why is the expected return in Reinforcement Learning (RL) computed as a sum of cumulative rewards?Body: Why is the expected return in Reinforcement Learning (RL) computed as a sum of cumulative rewards?
+Would it not make more sense to compute $\mathbb{E}(R \mid s, a)$ (the expected return for taking action $a$ in the given state $s$) as the average of all rewards recorded for being in state $s$ and taking action $a$?
+In many examples, I've seen the value of a state computed as the expected return computed as the cumulative sum of rewards multiplied by a discount factor:
+$V^\pi(s) = \mathbb{E}(R \mid s)$ (the value of state $s$, if we follow policy $\pi$, is equal to the expected return given state $s$)
+So, $V^\pi(s) = \mathbb{E}(r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \dots \mid s) = \mathbb{E}(\sum_k \gamma^k r_{t+k+1} \mid s)$,
+as $R = r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \dots$
+Would it not make more sense to compute the value of a state as the following:
+$V^\pi(s) = \mathbb{E}(r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \dots \mid s)/k = \mathbb{E}(\sum_k \gamma^k r_{t+k+1} \mid s)/k$, where $k$ is the number of elements in the sum, thus giving us the average reward for being in state $s$.
+Reference for cumulative sum example: https://joshgreaves.com/reinforcement-learning/understanding-rl-the-bellman-equations/
+"
+"['learning-rate', 'mean-squared-error']"," Title: How can a learning rate that is too large cause the output of the network (and the error) to go to infinity?Body: It happened to my neural network, when I use a learning rate of <0.2 everything works fine, but when I try something above 0.4 I start getting "nan" errors because the output of my network keeps increasing.
+From what I understand, what happens is that, if I choose a learning rate that is too large, I overshoot the local minimum. But still, I am getting somewhere, and from there I'm moving in the correct direction. At worst, my output should be random; I don't understand what scenario causes my output and error to approach infinity every time I run my NN with a learning rate that is too large (and it's not even that large).
+
+How does the red line go to infinity ever? I kind of understand it could happen if we choose a crazy high learning rate, but if the NN works for 0.2 and doesn't for 0.4, I don't understand that
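+For reference, here is a minimal toy example (not my actual network) of the kind of blow-up I mean, using plain gradient descent on a one-parameter quadratic:
+# gradient descent on f(w) = (a * w)**2 / 2; each step multiplies w by (1 - lr * a**2)
+a = 3.0    # plays the role of large activations / steep curvature
+w = 1.0
+lr = 0.4
+for step in range(10):
+    grad = a * a * w          # derivative of (a * w)**2 / 2
+    w = w - lr * grad
+    print(step, w)            # |1 - 0.4 * 9| = 2.6 > 1, so |w| grows without bound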
+"
+"['convolutional-neural-networks', 'convolution', 'filters', 'convolutional-layers', '1d-convolution']"," Title: Why does the number of channels in the PointNet increase as we go deeper?Body: For example, in PointNet, you see the 1D convolutions with the following channels 64 -> 128 -> 1024
. Why not e.g. 64 -> 1024 -> 1024
or 1024 -> 1024 -> 1024
?
+"
+"['philosophy', 'social']"," Title: How is AI helping humanity?Body:
+There was a lot of Negative news on Artificial Intelligence. Most people were first exposed to the idea of artificial intelligence from Hollywood movies, long before they ever started seeing it in their day-to-day lives. This means that many people misunderstand the technology. When they think about common examples that they’ve seen in movies or television shows, they may not realize that the killer robots they’ve seen were created to sell emotional storylines and drive the entertainment industry, rather than to reflect the actual state of AI technology.
+
+There are a few questions on our SE about how AI impacts/harms humankind. For example, How could artificial intelligence harm us? and Could artificial general intelligence harm humanity?
+However, now, I'm looking for the positive impacts of AI on humans. How could AI help humankind?
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'experience-replay']"," Title: Prioritised Remembering in Experience Replay (Q-Learning)Body: I'm using Experience Replay based on the original Prioritized Experience Replay (PER) paper. In the paper authors show ~ an order of magnitude increase in data efficiency from prioritized sampling. There is space for further improvement, since PER remembers all experiences, regardless of their importance.
+I'd like to extend PER so it remembers selectively based on some metric, which would determine whether the experience is worth remembering or not. The time of sampling and re-adjusting the importance of the experiences increases with the number of samples remembered, so being smart about remembering should at the very least speed-up the replay, and hopefully also show some increase in data efficiency.
+Important design constrains for this remembering metric:
+
+- compatibility with Q-Learning, such as DQN
+- computation time, to speed up the process of learning and not trade off one type of computation for another
+- simplicity
+
+
+My questions:
+
+- What considerations would you make for designing such a metric?
+- Do you know about any articles addressing the prioritized experience memorization for Q-Learning?
+
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'tensorflow', 'keras']"," Title: CIFAR-10 can't get above 10% Accuracy with MobileNet, VGG16 and ResNet on KerasBody: I'm trying to train the most popular Models (mobileNet, VGG16, ResNet...) with the CIFAR10-dataset but the accuracy can't get above 9,9%. I want to do that with the completely model (include_top=True) and without the weights from imagenet.
+I have tried increasing/decreasing dropout and learning rate and I changed the optimizers but I become always the same accuracy.
+with weights='imagenet' and include_top=False I achieve an accuracy of over 90% but I want to train the model without those parameters.
+Is there any solution to solve this? It is possible, that the layers of those Models are not set to be trainable?
+# imports needed for the snippet below
+from tensorflow.keras.preprocessing.image import ImageDataGenerator
+from tensorflow.keras.applications import MobileNet
+from tensorflow.keras.optimizers import SGD, Adam
+
+train_generator = ImageDataGenerator(
+ rotation_range=2,
+ horizontal_flip=True,
+ zoom_range=.1 )
+val_generator = ImageDataGenerator(
+ rotation_range=2,
+ horizontal_flip=True,
+ zoom_range=.1)
+
+train_generator.fit(x_train)
+val_generator.fit(x_val)
+
+model_1 = MobileNet(include_top=True,weights=None,input_shape=(32,32,3),classes=y_train.shape[1])
+
+batch_size= 100
+epochs=50
+
+learn_rate=.001
+
+sgd=SGD(lr=learn_rate,momentum=.9,nesterov=False)
+adam=Adam(lr=learn_rate, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
+
+model_1.compile(optimizer=adam,loss='sparse_categorical_crossentropy',metrics=['accuracy'])
+
+model_1.fit_generator(train_generator.flow(x_train,y_train,batch_size=batch_size),
+ epochs=epochs,
+ steps_per_epoch=x_train.shape[0]//batch_size,
+ validation_data=val_generator.flow(x_val,y_val,batch_size=batch_size),validation_steps=250,
+ verbose=1)
+
+Results of MobileNet:
+ Epoch 1/50
+350/350 [==============================] - 17s 50ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1021
+Epoch 2/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1030
+Epoch 3/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1016
+Epoch 4/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1014
+Epoch 5/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1040
+Epoch 6/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1009
+Epoch 7/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1035
+Epoch 8/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1013
+Epoch 9/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1029
+Epoch 10/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1023
+Epoch 11/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1017
+Epoch 12/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1020
+Epoch 13/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1020
+Epoch 14/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1033
+Epoch 15/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1011
+Epoch 16/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1016
+Epoch 17/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1024
+Epoch 18/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1024
+Epoch 19/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1041
+Epoch 20/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1010
+Epoch 21/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1022
+Epoch 22/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1014
+Epoch 23/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1035
+Epoch 24/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1032
+Epoch 25/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1012
+Epoch 26/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1018
+Epoch 27/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1022
+Epoch 28/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1031
+Epoch 29/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1022
+Epoch 30/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1015
+Epoch 31/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1028
+Epoch 32/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1015
+Epoch 33/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1030
+Epoch 34/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1003
+Epoch 35/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1044
+Epoch 36/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1012
+Epoch 37/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1022
+Epoch 38/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1021
+Epoch 39/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1028
+Epoch 40/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1012
+Epoch 41/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1035
+Epoch 42/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1009
+Epoch 43/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1034
+Epoch 44/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1024
+Epoch 45/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1016
+Epoch 46/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1028
+Epoch 47/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1016
+Epoch 48/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1033
+Epoch 49/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1018
+Epoch 50/50
+350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1023
+
+<tensorflow.python.keras.callbacks.History at 0x7fa30b188e48>
+
+"
+"['deep-learning', 'deep-neural-networks', 'supervised-learning']"," Title: Advantages of training Neural Networks based on analytic success criteriaBody: What is the reason to train a Neural Network to estimate a task's success (i.e. robotic grasp planning) using a simulator that is based on analytic grasp quality metrics?
+Isn't a perfectly trained NN going to essentially output the same probability of task success as the analytic grasp quality metrics that were used to train it? What benefits does this NN have with respect to just directly using said analytic grasp quality metrics to determine whether a certain grasp candidate is good or bad? Analytic metrics are by definition deterministic, so I fail to understand the reason for using them to train a NN that will ultimately output the same result.
+This approach is used in high-caliber works like the Dex-Net2 from Berkeley Automation. I am rather new to the field and the only reason I can think of is computational efficiency in production?
+"
+"['reinforcement-learning', 'policy-gradients', 'alphazero']"," Title: What kind of policy evaluation and policy improvement AlphaGo, AlphaGo Zero and AlphaZero are usingBody: I'm trying to find out what kind of policy improvement and policy evaluation AlphaGo, AlphaGo Zero, and AlphaZero are using. By looking into their respective paper and SI, I can conclude that it is a kind of policy gradient actor-critic approach, where the policy is evaluated by a critic and is improved by and actor. Yet still can't fit it to any of the known policy gradient algorithms.
+"
+"['reinforcement-learning', 'environment', 'gym']"," Title: How does an episode end in OpenAI Gym's ""MountainCar-v0"" environment?Body: I am working on OpenAI's "MountainCar-v0" environment. In this environment, each step that an agent takes returns (among other values) the variable named done
of type boolean. The variable gets a True
value when the episode ends. However, I am not sure how each episode ends. My initial understanding was that an episode should end when the car reaches the flagpost. However, that is not the case.
+What are the states/actions under which the episode terminates in this environment?
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'learning-rate']"," Title: Why would the learning rate curve go backwards?Body: I'm working on recognizing the numbers 3 and 7 using the MNIST data set. I'm using cnn_learner()
function from fastai library.
+When I plotted the learning rate, the curve started going backward after a certain value on X-axis. Can someone please explain what does it signify?
+
+"
+"['neural-networks', 'keras', 'regularization', 'weights']"," Title: When would bias regularisation and activation regularisation be necessary?Body: For Keras on TensorFlow, a layer class constructor comes with these:
+
+- kernel_regularizer=...
+- bias_regularizer=...
+- activity_regularizer=...
+
+For example, Dense layer:
+https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense#arguments_1
+The first one, kernel_regularizer, is easy to understand: it regularises the weights, making them smaller to avoid overfitting on the training data only.
+Is kernel_regularizer enough? When should I use bias_regularizer and activity_regularizer too?
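+For concreteness, this is how I understand the three arguments are attached (the regularizer choices and strengths below are just illustrative values):
+import tensorflow as tf
+
+layer = tf.keras.layers.Dense(
+    units=64,
+    kernel_regularizer=tf.keras.regularizers.l2(1e-4),    # penalises the weights
+    bias_regularizer=tf.keras.regularizers.l2(1e-4),      # penalises the biases
+    activity_regularizer=tf.keras.regularizers.l1(1e-5),  # penalises the layer's outputs
+)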
+"
+"['search', 'heuristics', 'a-star', 'admissible-heuristic', 'consistent-heuristic']"," Title: Is A* with an admissible but inconsistent heuristic optimal?Body: I understand that, in tree search, an admissible heuristic implies that $A*$ is optimal. The intuitive way I think about this is as follows:
+Let $P$ and $Q$ be two costs from any respective nodes $p$ and $q$ to the goal. Assume $P<Q$. Let $P'$ be an estimation of $P$. $P'\le P \Rightarrow P'<Q$. It follows from uniform-cost-search that the path through $p$ must be explored.
+What I don't understand, is why the idea of an admissible heuristic does not apply as well to "graph-search". If a heuristic is admissible but inconsistent, would that imply that $A*$ is not optimal? Could you please provide an example of an admissible heuristic that results in a non-optimal solution?
+"
+"['reinforcement-learning', 'deep-rl', 'q-learning', 'dqn', 'rewards']"," Title: Is there an upper limit to the maximum cumulative reward in a deep reinforcement learning problem?Body: Is there an upper limit to the maximum cumulative reward in a deep reinforcement learning problem?
+For example, you want to train a DQN agent in an environment, and you want to know what the highest possible value you can get from the cumulative reward is, so you can compare this with your agents performance.
+"
+"['reinforcement-learning', 'policy-gradients', 'probability-distribution']"," Title: In continuous action spaces, how is the standard deviation, associated with Gaussian distribution from which actions are sampled, represented?Body: I have a question about implementing policy gradient methods for problems with continuous action spaces.
+Assume that actions are sampled from a diagonal Gaussian distribution with mean vector $\mu$ and standard deviation vector $\sigma$. As far as I understand, we can define a neural network that takes the current state as the input and returns a $\mu$ as its output. According to OpenAI Spinning Up, the standard deviation $\sigma$ can be represented in two different ways:
+
+I don't completely understand the first method. Does it mean that we must set the log standard deviations to fixed numbers? Then how do we choose these numbers?
+"
+"['reinforcement-learning', 'q-learning', 'open-ai', 'gym']"," Title: How can I change observation states' values in OpenAI gym's cartpole environment?Body: I am learning with the OpenAI gym's cart pole environment.
+I want to make the observation states discrete (with a small step size) and, for that purpose, I need to change two of the observations from $[-\infty, \infty]$ to some finite upper and lower limits. (By the way, these states are the cart velocity and the pole velocity at the tip.)
+How can I change these limits in the actual gym's environment?
+Any other suggestions are also welcome.
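+For reference, this is the kind of discretisation I have in mind (the velocity bounds below are my own guesses, not values taken from the environment):
+import numpy as np
+
+lows  = np.array([-4.8, -3.0, -0.418, -3.5])   # cart position, cart velocity, pole angle, pole tip velocity
+highs = np.array([ 4.8,  3.0,  0.418,  3.5])
+n_bins = 20
+
+def discretize(obs):
+    clipped = np.clip(obs, lows, highs)          # clamp the two unbounded observations
+    ratios = (clipped - lows) / (highs - lows)
+    return tuple((ratios * (n_bins - 1)).astype(int))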
+"
+"['neural-networks', 'training', 'distributed-computing']"," Title: Why can't we train neural networks in a peer-to-peer manner?Body: I have recently been exposed to the concept of decentralized applications,
+I know that neural networks require a lot of parallel computing infra for training.
+What are the technical difficulties one may face for training neural networks in a p2p manner?
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'self-supervised-learning', 'automated-machine-learning']"," Title: Is there a way to get landmark features automatically learned by a neural network?Body: Is there a way to get landmark features automatically learned by a neural network without having to manually pre-label them in the images that are being fed into the network?
+"
+"['neural-networks', 'resource-request']"," Title: Is there a place where people can share (or buy) ready made neural networks?Body: Is there a place where people can share (or buy) ready made neural networks instead of creating them themselves? Something like a Wikipedia for DNNs?
+"
+"['machine-learning', 'evolutionary-algorithms', 'unsupervised-learning', 'neat']"," Title: NEAT can't solve XOR completelyBody: I'm currently implementing the NEAT algorithm. But problems occur when testing it with problems which don't have a linear solution(for example xor). My xor only produces 3 correct outputs once at a time:
+1, 0 -> 0.99
+0, 0 -> 0
+1, 1 -> 0
+0, 1 -> 0
+
+My genome class works fine, so I guess that the problem occurs on breeding or that my config is wrong.
+Config
+const size_t population_size = 150;
+const size_t inputs = 3; // 2 inputs + bias
+const size_t outputs = 1;
+double compatibility_threshold = 3;
+double steps = 0.01;
+double perturb_weight = 0.9;
+double mutate_connection = 0.05;
+double mutate_node = 0.03;
+double mutate_weights = 0.8;
+double mutate_disable = 0.1;
+double mutate_enable = 0.2;
+double c_excess = 1;
+double c_disjoint = 1;
+double c_weight = 0.4;
+double crossover_chance = 0.75;
+
+Does anyone have an idea what the problem might be? I proofread my code multiple times, but wasn't able to figure it out.
+Here is the github link to my code(not documented): click
+"
+"['reinforcement-learning', 'open-ai', 'gpt']"," Title: Why is GPT-3 such a game changer?Body: I've been hearing a lot about GPT-3 by OpenAI, and that it's a simple to use API with text in text out and has a big neural network off 175B parameters.
+But how did they achieve this huge number of parameters, and why is it being predicted as one of the greatest innovations?
+"
+"['neural-networks', 'convolutional-neural-networks', 'image-segmentation', 'convolutional-layers', 'pooling']"," Title: How can the FCNN reduce the dimensions of the input from $1048 \times 100$ to $523 \times 100$ with max-pooling?Body: I am trying to implement a paper on Image tempering detection and localization, the paper is Image Manipulation Detection and Localization Based on the Dual-Domain Convolutional Neural Networks, I was able to implement the SCNN, the one surrounded by red dots, I could not quite understand the FCNN, the one that is surrounded with blue dots.
+The problem I am facing is: How the network made features vector from (1048 x 100) to (523 x 100) through max-pooling (instead of 524 x 100), and from (523 x 100) to (260 x 100) and then (260 x 100) to (256, ).
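+For reference, the usual 1-D max-pooling output-size relation I am using to check the numbers (my own sanity check, not something stated in the paper) is
+$$n_{out} = \left\lfloor \frac{n_{in} - k}{s} \right\rfloor + 1,$$
+so a pool of size $k = 2$ with stride $s = 2$ maps $1048 \to 524$, while getting $523$ would need something like $k = 4$, $s = 2$: $\lfloor (1048 - 4)/2 \rfloor + 1 = 523$.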
+
+It appears that the given network diagram might be wrong, but, if it is wrong, how could it be published in IEEE. Please, help me understand how the FCNN is constructed.
+"
+"['reinforcement-learning', 'architecture', 'environment', 'ddpg', 'weights']"," Title: Are there examples of agents that use a more modest number of parameters on Pendulum (or similar environments)?Body: I'm looking at some baseline implementations of RL agents on the Pendulum environment. My guess was to use a relatively small neural net (~100 parameters).
+I'm comparing my solution with some baselines, e.g. the top entry on the Pendulum leaderboard. The models for these solutions are typically huge, i.e. ~120k parameters. What's more, they use very large replay buffers as well, like ~1M transitions. Such model sizes seem warranted for Atari-like environments, but for something as small as the Pendulum, this seems like complete overkill to me.
+Are there examples of agents that use a more modest number of parameters on Pendulum (or similar environments)?
+"
+"['neural-networks', 'machine-learning', 'comparison', 'symbolic-ai', 'deep-blue']"," Title: Why is symbolic AI not so popular as ANN but used by IBM's Deep Blue?Body: Everybody is implementing and using DNN with, for example, TensorFlow or PyTorch.
+I thought IBM's Deep Blue was an ANN-based AI system, but this article says that IBM's Deep Blue was symbolic AI.
+Are there any special features in symbolic AI that explain why it was used (instead of ANN) by IBM's Deep Blue?
+"
+"['convolutional-neural-networks', 'ai-design', 'recurrent-neural-networks', 'deep-neural-networks', 'architecture']"," Title: How can one be sure that a particular neural network architecture would work?Body: Traditionally, when working with tabular data, one can be sure(or at least know) that a model works because the included features could explain a target variable, say "Price of a ticket" good. More features can be then be engineered to explain the target variable even better.
+I have heard people say, that there is no need to hand-engineer features when working with CNNs or RNNs or Deep Neural Networks, provided all the advancements in AI and computation. So, my question is, how would one know, before training, why a particular architecture worked(or would work) when it did or why it didn't when the performance isn't acceptable or very bad. And also that not all of us would have the time to try out all possible architectures, how can one know or at least be sure that something would work for the problem in hand. Or to say, what are the things one needs to follow when designing an architecture to train for a problem, to ensure that an architecture will work?
+"
+"['neural-networks', 'convolutional-neural-networks', 'backpropagation']"," Title: What should I do with the flatten layer during back-propagation?Body: I'm creating a CNN network without other frameworks such as PyTorch, Keras, Tensorflow, and so on.
+During the forward pass, the Flatten layer reshapes the previous layer's activation. I know there are a lot of questions about it, but what should I do with the Flatten layer during back-propagation? Should I compute the derivative of $dA$ and reshape it for the next layer, or just reshape $dA$ of the previous layer?
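+For context, this is the pattern I am currently considering, as a minimal NumPy sketch (my own code, not taken from any framework):
+import numpy as np
+
+class Flatten:
+    def forward(self, A_prev):
+        self.input_shape = A_prev.shape              # e.g. (batch, h, w, c)
+        return A_prev.reshape(A_prev.shape[0], -1)   # (batch, h * w * c)
+
+    def backward(self, dA):
+        return dA.reshape(self.input_shape)          # gradient reshaped back for the previous layer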
+"
+"['deep-learning', 'image-segmentation']"," Title: How do we make our outputs to have the same size as the true mask?Body: When we are doing multi-label segmentation tasks, our y_true
(the mask) will be (w, h, 3)
, but, in our model, at the last layer, we will be getting (w, h, number of classes)
as output.
+How do we make our outputs to have the same size as the true mask so that to apply the loss function, given that, currently, the shapes are not equal? Also, if we are done with applying the loss function and trained the model, how do I make results in the shape of (w, h, 3)
from (w, h, number of classes)
?
+"
+"['reinforcement-learning', 'policy-gradients', 'policies']"," Title: Is it common to have extreme policy's probabilities?Body: I have implemented several policy gradient algorithms (REINFORCE, A2C, and PPO) and am finding that the resultant policy's action probability distributions can be rather extreme. As a note, I have based my implementations on OpenAI's baselines. I've been using NNs as the function approximator followed by a Softmax layer. For example, with Cartpole I end up with action distributions like $[1.0,3e-17]$. I could understand this for a single action, potentially, but sequential trajectories end up having a probability of 1. I have been calculating the trajectory probability by $\prod_i \pi(a_i|s_i)$. Varying the learning rate changes how fast I arrive at this distribution, I have used learning rates of $[1e-6, 0.1]$. It seems to me that a trajectory's probability should never be 1.0 or 0.0 consistently, especially with a stochastic start. This also occurs for environments like LunarLander.
+For the most part, the resulting policies are near-optimal solutions that pass the criteria for solving the environments set by OpenAI. Some random seeds are sub-optimal
+I have been trying to identify a bug in my code, but I'm not sure what bug would be across all 3 algorithms and across environments.
+Is it common to have such extreme policy's probabilities? Is there a common way to handle an update so the policy's probabilities do not end up so extreme? Any insight would be greatly appreciated!
+"
+"['reinforcement-learning', 'monte-carlo-tree-search', 'alphago-zero', 'alphago']"," Title: How AlphaGo Zero is learning from $\pi_t$ when $z_t = -1$?Body: I have questions on the way AlphaGo Zero is trained.
+From original AlphaGo Zero paper, I knew that AlphaGo Zero agent learns a policy, value functions by the gathered data $\{(s_t, \pi_t, z_t)\}$ where $z_t = r_T \in \{-1,1\}$.
+However, the fact that the agent tries to learn a policy distribution when $z_t = -1$ seems to be counter-intuitive (at least to me).
+My assertion is that the agent should not learn the policy distribution from episodes where it loses (i.e., gets $z_t=-1$), since such a policy will guide it to lose.
+I think I may have missed some principles, which led me to that assertion. Or is my assertion actually reasonable?
+"
+"['reinforcement-learning', 'reward-design', 'reward-functions', 'reward-shaping', 'dense-rewards']"," Title: Is a reward given at every step or only given when the RL agent fails or succeeds?Body: In reinforcement learning, an agent can receive a positive reward for correct actions and a negative reward for wrong actions, but does the agent also receive rewards for every other step/action?
+"
+"['machine-learning', 'research', 'papers', 'state-of-the-art', 'ai-development']"," Title: Ways to keep up with the latest developments in Machine Learning and AI?Body: With over 100 papers published in the area of artificial intelligence, machine learning and their subfields every day (source), accounting for ~3% of all publications world wide per year (source) and dozens of annual conferences like NeurIPS, ICML, ICLR, ACL, ... I wonder how you keep up with the current state of the art and latest developments? The field is progressing very fast, models that were considered SOTA not even a decade ago are now (almost) outdated (Attention Is All You Need). A lot of this progress is driven by big tech companies (source), e.g. 12% of all papers accepted at NeurIPS 2019 have at least one author from Google and DeepMind (source).
+My strategy is to read blogs and articles to maintain a general overview and not to miss any important breakthroughs. To be up to date in subfields of my own interest, I read specific papers once in a while. What are your personal strategies? Continuous education is a big keyword here. It's not about understanding every detail and being able to reproduce results, but rather maintaining a bird's eye view, having an idea about the direction of research and knowing whats already possible.
+To name a few of my preferred sources there are the research blogs of the big players: OpenAI, DeepMind, Google AI, FAIR. Further there are very good personal blogs with a more educational character, like the well known one of Christopher Olah, the recently started one of Yoshua Bengio and the one from Jay Alammar. Unfortunately finding personal blogs is hard, it often depends on luck and referrals, also the update frequency is generally lower since these people have (understandably) other important things to do in life as well.
+Therefore I'm always looking for new sources, which I can bookmark and read later, if I like to avoid doing other stuff.
+Can you name any other personal / corporate research blogs or news websites that publish latest advances in ML & AI?
+"
+"['model-based-methods', 'pac-learning']"," Title: What is the expectation of an empirical model in model based RL?Body: In the paper - "Action Elimination and Stopping Conditions for the Multi-Armed Bandit and Reinforcement Learning Problems", on page 1083, on the 6th line from the bottom, the authors define expectation of the empirical model as
+$$\hat{\mathbb{E}}_{s,s',a}[V(s')] = \sum_{s' \in S} \hat{P}^{a}_{s, s'}V(s').$$
+I didn't understand the significance of this quantity since it puts $V(s')$ inside an expectation while assuming the knowledge of $V(s')$ in the definition on the right.
+A clarification in this regard would be appreciated.
+EDIT:
+The paper defines $\hat{P}^{a}_{s, s'}$ as,
+$$\hat{P}^{a}_{s, s'} = \frac{|(s, a, s', t)|}{|(s, a, t)|}.$$
+Where $|(s, a, t)|$ is the number of times state $s$ was visited and action $a$ was taken and $|(s, a, s', t)|$ as the number of times among the $|(s, a, t)|$ times $(s, a)$ was visited when the next state landed in was $s'$ during model learning.
+No explicit definition for $V$ is provided however, $V^{\pi}$ is defined as the usual expected discounted return, using the same definition as Sutton and Barto or other sources.
+"
+"['reinforcement-learning', 'training', 'dqn', 'deep-rl', 'double-dqn']"," Title: How does the target network in double DQNs find the maximum Q value for each action?Body: I understand the fact that the neural network is used to take the states as inputs and it outputs the Q-value for state-action pairs. However, in order to compute this and update its weights, we need to calculate the maximum Q-value for the next state $s'$. In order to get that, in the DDQN case, we input that next state $s'$ in the target network.
+What I'm not clear on is: how do we train this target network itself that will help us train the other NN? What is its cost function?
+"
+"['reinforcement-learning', 'deep-learning']"," Title: Are tabular reinforcement learning methods obsolete (or getting obsolete)?Body: While learning RL, I came across some problems where the Q-matrix that I need to make is very very large. I am not sure if it is ever practical. Then I research and came to this conclusion that using the tabular method is not the only way, in fact, it is a very less powerful tool as compared to other methods such as deep RL methods.
+Am I correct in this understanding that with the increasing complexity of problems, tabular RL methods are getting obsolete?
+"
+"['reinforcement-learning', 'policy-gradients', 'pytorch', 'ddpg']"," Title: Why is the policy loss the mean of $-Q(s, \mu(s))$ in the DDPG algorithm?Body: I am trying to implement the DDPG algorithm based on this paper.
+The part that confuses me is the actor network's update.
+I don't understand why the policy loss is simply the mean of $-Q(s, \mu(s))$, where $Q$ is the critic network and $\mu$ is the policy network.
+How does one arrive at this?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'unsupervised-learning', 'autoencoders']"," Title: What should the output of a neural network that needs to classify in an unsupervised fashion XOR data be?Body: XOR data, without labels:
+[[0,0],[0,1],[1,0],[1,1]]
+
+I'm using this network for auto-classifying XOR data:
+H1 <-- Dense(units=2, activation=relu) #any activation here
+Z <-- Dense(units=2, activation=softmax) #softmax for 2 classes of XOR result
+Out <-- Dense(units=2, activation=sigmoid) #sigmoid to return 2 values in (0,1)
+
+There's a logical problem in the network, that is, Z represents 2 classes,
+however, the 2 classes can't be decoded back to 4 samples of XOR data.
+How can I fix the network above to auto-classify XOR data, in an unsupervised manner?
+"
+"['reinforcement-learning', 'policy-gradients']"," Title: Choosing a policy improvement algorithm for a continuing problem with continuous action and state-spaceBody: I'm trying to decide which policy improvement algorithm to use in the context of my problem. But let me emerge you into the problem
+Problem
+I want to move a set of points in a 3D space. Depending on how the points move, the environment gives a positive or negative reward. Further, the environment does not split up into episodes, so it is a continuing problem. The state space is high-dimensional (a lot of states are possible) and many states can be similar (so state aliasing can appear), also states are continuous. The problem is dense in rewards, so for every transition, there will be a negative or positive reward, depending on the previous state.
+A state is represented as a vector with dimension N (initially it will be something like ~100, but in the future, I want to work with vectors up to 1000).
+In the case of action, it is described by a matrix 3xN, where N is the same as in the case of the state. The first dimension comes from the fact, that action is 3D displacement.
+What I have done so far
+Since actions are continuous, I have narrowed down my search to policy gradient methods. Further, I researched methods that work with continuous state spaces. I found that Deep Deterministic Policy Gradient (DDPG) and Proximal Policy Optimization (PPO) would fit here. Theoretically, they should work, but I'm unsure, and any advice would be gold here.
+Questions
+Would those algorithms be suitable for the problem (PPO or DDPG)?
+Are there other policy improvement algorithms (or a family of policy improvement algorithms) that would work here?
+"
+"['deep-learning', 'applications', 'generative-adversarial-networks', 'deepfakes']"," Title: Can GANs be used to generate something other than images?Body: AFAIK, GANs are used for generating/synthesizing near-perfect human faces (deepfakes), gallery arts, etc., but can GANs be used to generate something other than images?
+"
+"['resource-request', 'geometric-deep-learning', 'graph-neural-networks']"," Title: What is the best resources to learn Graph Convolutional Neural Networks?Body: For the past few days, I am trying to learn graph convolutional networks. I saw some of the lectures on youtube. But I can not able to get any clear concept of how those networks are trained. I have a vague understanding of how to perform convolution, but I can not understand how we train them. I want a solid mathematical understanding of graph convolutional networks. So, can anyone please suggest me how to start learning graph convolutional network from start to expert level?
+"
+['social']," Title: How artificial intelligence will change the future?Body: AI is the emerging field and biggest business opportunity of the next decade. It's already automating manual and repetitive tasks. And in some areas, it can learn faster than humans, if not yet as deeply.
+From the Forbes article
+
+In the AI-enabled future, humans will be able to converse and interact with each other in the native language of choice, not having to worry about miscommunicating intentions.
+
+I would like to know more about how artificial intelligence will change the future?
+"
+"['neural-networks', 'convolutional-neural-networks', 'activation-functions', 'relu', 'residual-networks']"," Title: Can residual neural networks use other activation functions different from ReLU?Body: In many diagrams, as seen below, residual neural networks are only depicted with ReLU activation functions, but can residual NNs also use other activation functions, such as the sigmoid, hyperbolic tangent, etc.?
+
+"
+"['neural-networks', 'machine-learning', 'data-preprocessing', 'normalisation', 'standardisation']"," Title: Is it necessary to standardise the expected outputBody: Normalisation transform data into a range:
+$$X_i = \dfrac{X_i - Min}{Max-Min}$$
+Practically, I found out that the model doesn't generalise well when using normalisation of input data, instead of standardisation (another formula shown below).
+Before training a neural net, data are usually standardised or normalised. Standardising seems good as it makes the model generalise better, while normalisation may make the model not work with values outside the training data range.
+So I'm using standardisation for the input data (X); however, I'm confused about whether I should standardise the expected output values too.
+For a column in input data:
+$$X_i = \dfrac{(X_i - Mean)}{Standard\ Deviation\ of\ the\ Column}$$
+Should I apply this formula to the expected output values (labels) too?
+"
+"['neural-networks', 'training', 'reference-request', 'regularization', 'algorithmic-bias']"," Title: Forcing a neural network to be close to a previous model - Regularization through given modelBody: I'm wondering, has anyone seen any paper where one trains a network but biases it to produce similar outputs to a given model (such as one given from expert opinion or it being a previously trained network).
+Formally, I'm looking for a paper doing the following:
+Let $g:\mathbb{R}^d\rightarrow \mathbb{R}^D$ be a model (not necessarily, but possibly, a neural network) trained on some input/output data pairs $\{(x_n,y_n)\}_{n=1}^N$ and train a neural network $f_{\theta}(\cdot)$ on
+$$
+\underset{\theta}{\operatorname{argmin}}\sum_{n=1}^N \left\|
+f_{\theta}(x_n) - y_n
+\right\| + \lambda \left\|
+f_{\theta}(x_n) - g(x_n)
+\right\|,
+$$
+where $\theta$ represents all the trainable weight and bias parameters of the network $f_{\theta}(\cdot)$.
+So put another way...$f_{\theta}(\cdot)$ is being regularized by the outputs of another model...
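+For concreteness, a minimal sketch of that objective as a training loss (assuming batched 2-D outputs and a frozen reference model $g$):
+import torch
+
+def combined_loss(f_theta, g, x, y, lam):
+    pred = f_theta(x)
+    with torch.no_grad():
+        ref = g(x)                                    # outputs of the reference model g
+    data_term = torch.norm(pred - y, dim=1).sum()     # sum_n ||f_theta(x_n) - y_n||
+    reg_term = torch.norm(pred - ref, dim=1).sum()    # sum_n ||f_theta(x_n) - g(x_n)||
+    return data_term + lam * reg_term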
+"
+"['reinforcement-learning', 'tensorflow', 'python', 'convergence', 'ddpg']"," Title: Why is DDPG not learning and it does not converge?Body: I have used a different setting, but DDPG is not learning and it does not converge. I have used these codes 1,2, and 3 and I used different optimizers, activation functions, and learning rate but there is no improvement.
+ parser.add_argument('--actor-lr', help='actor network learning rate', default=0.001)
+ parser.add_argument('--critic-lr', help='critic network learning rate', default=0.0001)
+ parser.add_argument('--gamma', help='discount factor for critic updates', default=0.95)
+
+ parser.add_argument('--tau', help='soft target update parameter', default=0.001)
+ parser.add_argument('--buffer-size', help='max size of the replay buffer', default=int(1e5))
+ parser.add_argument('--minibatch-size', help='size of minibatch for minibatch-SGD', default=64)
+
+ # run parameters
+ # parser.add_argument('--env', help='choose the gym env- tested on {Pendulum-v0}', default='MountainCarContinuous-v0')
+ parser.add_argument('--random-seed', help='random seed for repeatability', default=1234)
+ parser.add_argument('--max-episodes', help='max num of episodes to do while training', default=200)
+ parser.add_argument('--max-episode-len', help='max length of 1 episode', default=100)
+
+
+I have trained in the same environment with A2C and it converged.
+
+Which parameters should I change to make the DDPG converge? Can anyone help me with this?
+"
+"['machine-learning', 'deep-learning', 'training', 'datasets', 'testing']"," Title: How can I predict the true label for data with incomplete features based on the trained model with data with more features?Body: Suppose I have a model that was trained with a dataset that contains the features (f1, f2, f3, f4, f5, f6)
. However, my test dataset does not contain all features of the training dataset, but only (f1, f2, f3)
. How can I predict the true label of the entries of this test dataset without all features?
+"
+"['geometric-deep-learning', 'graph-neural-networks']"," Title: How Graph Convolutional Neural Networks forward propagate?Body: In the basic variant of GCN we have the following:
+
+Here, we aggregate the information from the adjacent nodes and pass it through a neural network, then transform our own information and add them all together.
+But the main question is: how can we ensure that $W_{k}\left(\sum \frac{h_k}{N(V)}\right)$ will be the same size as $B_{k}h_{v}$, and does $B_{k}$ imply another neural network?
+"
+"['reinforcement-learning', 'python', 'objective-functions', 'pytorch']"," Title: Classification or regression for deep Q learningBody: DQN implemented at https://github.com/PacktPublishing/PyTorch-1.x-Reinforcement-Learning-Cookbook/blob/master/Chapter07/chapter7/dqn.py uses the mean square error loss function for the neural network to learn the state -> action mapping :
+self.criterion=torch.nn.MSELoss()
+
+Could cross-entropy be used instead as the loss function? Cross entropy is typically used for classification, and mean squared error for regression.
+As the actions are discrete (the example utilises the mountain car environment - https://github.com/openai/gym/wiki/MountainCar-v0) and map to [0,1,2] can cross-entropy loss be used instead of mean squared error? Why use regression as the state -> action function approximator for deep Q learning instead of classification?
+Entire DQN src from https://github.com/PacktPublishing/PyTorch-1.x-Reinforcement-Learning-Cookbook/blob/master/Chapter07/chapter7/dqn.py :
+'''
+Source codes for PyTorch 1.0 Reinforcement Learning (Packt Publishing)
+Chapter 7: Deep Q-Networks in Action
+Author: Yuxi (Hayden) Liu
+'''
+
+import gym
+import torch
+
+from torch.autograd import Variable
+import random
+
+
+env = gym.envs.make("MountainCar-v0")
+
+
+
+class DQN():
+ def __init__(self, n_state, n_action, n_hidden=50, lr=0.05):
+ self.criterion = torch.nn.MSELoss()
+ self.model = torch.nn.Sequential(
+ torch.nn.Linear(n_state, n_hidden),
+ torch.nn.ReLU(),
+ torch.nn.Linear(n_hidden, n_action)
+ )
+ self.optimizer = torch.optim.Adam(self.model.parameters(), lr)
+
+
+ def update(self, s, y):
+ """
+ Update the weights of the DQN given a training sample
+ @param s: state
+ @param y: target value
+ """
+ y_pred = self.model(torch.Tensor(s))
+ loss = self.criterion(y_pred, Variable(torch.Tensor(y)))
+ self.optimizer.zero_grad()
+ loss.backward()
+ self.optimizer.step()
+
+
+ def predict(self, s):
+ """
+ Compute the Q values of the state for all actions using the learning model
+ @param s: input state
+ @return: Q values of the state for all actions
+ """
+ with torch.no_grad():
+ return self.model(torch.Tensor(s))
+
+
+
+def gen_epsilon_greedy_policy(estimator, epsilon, n_action):
+ def policy_function(state):
+ if random.random() < epsilon:
+ return random.randint(0, n_action - 1)
+ else:
+ q_values = estimator.predict(state)
+ return torch.argmax(q_values).item()
+ return policy_function
+
+
+def q_learning(env, estimator, n_episode, gamma=1.0, epsilon=0.1, epsilon_decay=.99):
+ """
+ Deep Q-Learning using DQN
+ @param env: Gym environment
+ @param estimator: DQN object
+ @param n_episode: number of episodes
+ @param gamma: the discount factor
+ @param epsilon: parameter for epsilon_greedy
+ @param epsilon_decay: epsilon decreasing factor
+ """
+ for episode in range(n_episode):
+ policy = gen_epsilon_greedy_policy(estimator, epsilon, n_action)
+ state = env.reset()
+ is_done = False
+
+ while not is_done:
+ action = policy(state)
+ next_state, reward, is_done, _ = env.step(action)
+ total_reward_episode[episode] += reward
+
+ modified_reward = next_state[0] + 0.5
+
+ if next_state[0] >= 0.5:
+ modified_reward += 100
+ elif next_state[0] >= 0.25:
+ modified_reward += 20
+ elif next_state[0] >= 0.1:
+ modified_reward += 10
+ elif next_state[0] >= 0:
+ modified_reward += 5
+
+ q_values = estimator.predict(state).tolist()
+
+ if is_done:
+ q_values[action] = modified_reward
+ estimator.update(state, q_values)
+ break
+
+ q_values_next = estimator.predict(next_state)
+
+ q_values[action] = modified_reward + gamma * torch.max(q_values_next).item()
+
+ estimator.update(state, q_values)
+
+ state = next_state
+
+
+ print('Episode: {}, total reward: {}, epsilon: {}'.format(episode, total_reward_episode[episode], epsilon))
+
+ epsilon = max(epsilon * epsilon_decay, 0.01)
+
+n_state = env.observation_space.shape[0]
+n_action = env.action_space.n
+n_hidden = 50
+lr = 0.001
+dqn = DQN(n_state, n_action, n_hidden, lr)
+
+
+n_episode = 1000
+
+total_reward_episode = [0] * n_episode
+
+q_learning(env, dqn, n_episode, gamma=.9, epsilon=.3)
+
+
+
+import matplotlib.pyplot as plt
+plt.plot(total_reward_episode)
+plt.title('Episode reward over time')
+plt.xlabel('Episode')
+plt.ylabel('Total reward')
+plt.show()
+
+"
+"['reinforcement-learning', 'q-learning', 'proofs', 'convergence', 'bellman-equations']"," Title: Is the Bellman equation that uses sampling weighted by the Q values (instead of max) a contraction?Body: It is proved that the Bellman update is a contraction (1).
+Here is the Bellman update that is used for Q-Learning:
+$$Q_{t+1}(s, a) = Q_{t}(s, a) + \alpha*(r(s, a, s') + \gamma \max_{a^*} (Q_{t}(s',
+ a^*)) - Q_t(s,a)) \tag{1} \label{1}$$
+The proof of (\ref{1}) being contraction comes from one of the facts (the relevant one for the question) that max operation is non expansive; that is:
+$$\lvert \max_a f(a)- \max_a g(a) \rvert \leq \max_a \lvert f(a) - g(a) \rvert \tag{2}\label{2}$$
+This is also proved in a lot of places and it is pretty intuitive.
+Consider the following Bellman update:
+$$ Q_{t+1}(s, a) = Q_{t}(s, a) + \alpha*(r(s, a, s') + \gamma SAMPLE_{a^*} (Q_{t}(s', a^*)) - Q_t(s,a)) \tag{3}\label{3}$$
+where $SAMPLE_a(Q(s, a))$ samples an action with respect to the Q values (weighted by their Q values) of each action in that state.
+Is this new Bellman operation still a contraction?
+Is the SAMPLE operation non-expansive? It is, of course, possible to generate samples that will not satisfy equation (\ref{2}). I ask is it non-expansive in expectation?
+My approach is:
+$$\lvert\,\mathbb{E}_{a \sim Q}[f(a)] - \mathbb{E}_{a \sim Q}[g(a)]\, \rvert \leq \,\,\mathbb{E}_{a \sim Q}\lvert\,\,[f(a) - g(a)]\,\,\rvert \tag{4} \label{4} $$
+Equivalently:
+$$\lvert\,\mathbb{E}_{a \sim Q}[f(a) - g(a)] \, \rvert \leq \,\,\mathbb{E}_{a \sim Q}\lvert\,\,[f(a) - g(a)]\,\,\rvert$$
+(\ref{4}) is true since:
+$$\lvert\,\mathbb{E}[X] \, \rvert \leq \,\,\mathbb{E} \,\,\lvert\,\,[X]\,\,\rvert $$
+But, I am not sure if proving (\ref{4}) proves the theorem. Do you think that this is a legit proof that (\ref{3}) is a contraction.
+(If so; this would mean that stochastic policy q learning theoretically converges and we can have stochastic policies with regular q learning; and this is why I am interested.)
+Both intuitive answers and mathematical proofs are welcome.
+"
+"['neural-networks', 'deep-learning', 'activation-functions', 'multilayer-perceptrons', 'hyper-parameters']"," Title: Why does every neuron in hidden layers of a multi-layer perceptron typically have the same activation function?Body: Why does every neuron in a hidden layer of a multi-layer perceptron (MLP) typically have the same activation function as every other neuron in the same or other hidden layers (so I exclude the output layer, which typically has a different activation function) of the MLP? Is this a requirement, are there any advantages, or maybe is it just a rule of thumb?
+"
+"['deep-learning', 'classification', 'image-recognition']"," Title: Which neural network should I use to distinguish between different types of defects?Body: I want to teach a neural network to distinguish between different types of defects. For that, I generated images of fake-defects. The images of the fake-defect types are attached.
+
+
+
+
+
+
+I tried many different network architectures now:
+
+- resnet18
+- squeezenet
+- own architectures: a narrow network with broad layers and high dropout rates.
+
+I have to say that some of these defects have really random shapes, like the type single-dirt or multi-dirt. I imagine that the classification should not be as easy as I thought before, due to the lack of repetitive features within the defects. But I always feel like the network is learning some "weird" features, which do not occur in the test set, and the results are really frustrating. I felt like teaching binary images had way better results, which should IMO be not the case.
+Still, I feel like a neural network should be able to learn to distinguish them.
+Which kind of network architecture would you recommend to classify the images in the attachment?
+"
+"['deep-learning', 'geometric-deep-learning', 'convolution', 'graph-neural-networks']"," Title: Can I think of the graph convolution operation as a regular 2D convolution for images?Body: Kipf et al. described in his paper that we can write graph convolution operation like this:
+$$H_{t+1} = AH_tW_t$$
+where $A$ is the normalized adjacency matrix, $H_t$ is the embedded representation of the nodes and $W_t$ is the weight matrix.
+Now, can I imagine the same formula as first performing 2D convolution with fixed-size kernel over the whole feature space then multiply the result with the adjacency matrix?
+If this is the case, I think I can create a graph convolution operation just using the Conv2D layer then performing simple matrix multiplication with adjacency matrix using PyTorch.
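+For concreteness, here is a minimal sketch of what I have in mind (the sizes are placeholders and I assume $A$ is already normalised); I would like to know whether this is a valid way to look at it:
+import torch
+import torch.nn as nn
+
+N, C_in, C_out = 5, 8, 16                 # placeholder sizes: nodes, input/output features
+A = torch.rand(N, N)                      # stand-in for the normalised adjacency matrix
+H = torch.rand(N, C_in)                   # node embeddings H_t
+
+# Plain formulation: H_{t+1} = A @ H_t @ W_t
+W = nn.Linear(C_in, C_out, bias=False)
+H_next_linear = A @ W(H)
+
+# "Conv2d" formulation: a 1x1 convolution over a (1, C_in, 1, N) tensor is the same
+# per-node linear map, so applying it and then multiplying by A should be equivalent
+conv = nn.Conv2d(C_in, C_out, kernel_size=1, bias=False)
+H_img = H.t().unsqueeze(0).unsqueeze(2)   # shape (1, C_in, 1, N)
+H_next_conv = A @ conv(H_img).squeeze(0).squeeze(1).t()
+
+print(H_next_linear.shape, H_next_conv.shape)   # both (N, C_out)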
+"
+"['neural-networks', 'deep-learning', 'computer-vision', 'terminology', 'papers']"," Title: What is meant by ""arranging the final features of CNN in a grid"" and how to do it?Body: In the paper What You Get Is What You See: A Visual Markup Decompiler, the authors have proposed a method to extract the features from the CNN and then arrange those extracted features in a grid to pass into an RNN encoder. Here's an illustration.
+
+I can easily extract features from either the existing model, like ResNet, VGG, or make a new CNN model easily as they have described in the paper.
+For example, let us suppose, I do this
+features = keras.applications.ResNet()(images_array) # just hypothetical
+
+How can I convert these features into the grid? I am supposed to feed the rearranged grid to an LSTM encoder as:
+keras.layers.LSTM()(grid) # again, hypothetical type
+
+I just want to know what the authors mean by arranging the output in a grid format.
+"
+"['neural-networks', 'deep-learning', 'training', 'hyperparameter-optimization', 'hyper-parameters']"," Title: How are training hyperparameters determined for large models?Body: When training a relatively small DL model, which takes several hours to train, I typically start with some starting points from literature and then use a trial-and-error or grid-search approach to fine-tune the values of the hyper-parameters, in order to prevent overfitting and achieve sufficient performance.
+However, it is not uncommon for large models to have training time measured in days or weeks [1], [2], [3].
+How are hyperparameters determined in such cases?
+"
+['natural-language-processing']," Title: Is there an optimal way to split the text into small parts when working with co-reference resolution?Body: I am working with co-reference resolution in a large text. Is there an optimal way to split the text into small parts? Or the best correct procedure is to use the entire text?
+Just for reference, I am using the library spacy-neuralcoref in Python that is based on Deep Reinforcement Learning for Mention-Ranking Coreference Models by Kevin Clark and Christopher D. Manning, EMNLP 2016.
+Why am I asking about splitting the text?
+I am applying coreference to chapters of books (roughly 30 pages of text). All the examples I have seen show coreference applied to small pieces of text. I applied it to a chapter and found strange results; however, that alone is not a clear justification, since the state of the art in coreference is only about 60%. Am I right?
+I didn't check all the databases that people use to test coreference, but the ones I took a look at (like the MUC 3 and MUC 4 Data Sets), if I understand them well, were composed of collections of a small number of paragraphs.
+A test Example:
+
+TST1-MUC3-0001
+GUATEMALA CITY, 4 FEB 90 (ACAN-EFE) -- [TEXT] THE GUATEMALA ARMY
+DENIED TODAY THAT GUERRILLAS ATTACKED THE "SANTO TOMAS" PRESIDENTIAL
+FARM, LOCATED ON THE PACIFIC SIDE, WHERE PRESIDENT CEREZO HAS BEEN
+STAYING SINCE 2 FEBRUARY.
+A REPORT PUBLISHED BY THE "CERIGUA" NEWS AGENCY -- MOUTHPIECE OF
+THE GUATEMALAN NATIONAL REVOLUTIONARY UNITY (URNG) -- WHOSE MAIN
+OFFICES ARE IN MEXICO, SAYS THAT A GUERRILLA COLUMN ATTACKED THE FARM
+2 DAYS AGO.
+HOWEVER, ARMED FORCES SPOKESMAN COLONEL LUIS ARTURO ISAACS SAID
+THAT THE ATTACK, WHICH RESULTED IN THE DEATH OF A CIVILIAN WHO WAS
+PASSING BY AT THE TIME OF THE SKIRMISH, WAS NOT AGAINST THE FARM, AND
+THAT PRESIDENT CEREZO IS SAFE AND SOUND.
+HE ADDED THAT ON 3 FEBRUARY PRESIDENT CEREZO MET WITH THE
+DIPLOMATIC CORPS ACCREDITED IN GUATEMALA.
+THE GOVERNMENT ALSO ISSUED A COMMUNIQUE DESCRIBING THE REBEL REPORT
+AS "FALSE AND INCORRECT," AND STRESSING THAT THE PRESIDENT WAS NEVER
+IN DANGER.
+COL ISAACS SAID THAT THE GUERRILLAS ATTACKED THE "LA EMINENCIA"
+FARM LOCATED NEAR THE "SANTO TOMAS" FARM, WHERE THEY BURNED THE
+FACILITIES AND STOLE FOOD.
+A MILITARY PATROL CLASHED WITH A REBEL COLUMN AND INFLICTED THREE
+CASUALTIES, WHICH WERE TAKEN AWAY BY THE GUERRILLAS WHO FLED TO THE
+MOUNTAINS, ISAACS NOTED.
+HE ALSO REPORTED THAT GUERRILLAS KILLED A PEASANT IN THE CITY OF
+FLORES, IN THE NORTHERN EL PETEN DEPARTMENT, AND BURNED A TANK TRUCK.
+
+"
+"['convolutional-neural-networks', 'classification', 'image-segmentation', 'representation-learning']"," Title: Extending patch based image classification into image classificationBody: I am trying to classify tampered, pristine images from set of images, in that I have built a network in which I would divide the image into multiple overlapping patches and then classify them into pristine or fake(based on the probability outputs), but now I want extend the same to Image level. That is I want to build some model or some rule over output probabilities of patches of each image to get probability that the image is fake or pristine.
+The ways I am thinking of doing this are:
+
+- Build a shallow network over the patch probabilities. In this case the problem is that all images have different shapes, so the probability vectors have different lengths.
+- Apply an ML classifier (something like logistic regression) to the output probabilities, appending zeros to each probability vector so that every image has a probability vector of the same size as input
+- Generate a mask from the patch predictions and then build a simple classification network over the masks using the original image labels.
+
+I can't really say which of the above three is better or worse, and I don't even know whether all three are feasible (I have kind of hit a roadblock in my thinking).
+Now the questions are: am I thinking in the right direction, which of the ideas I am considering would be better, and why? Is there anything better than what I am thinking of? It would also be helpful if you could suggest some resources.
+"
+"['computer-vision', 'geometric-deep-learning', 'convolution', 'graph-neural-networks']"," Title: Why can we perform graph convolution using the standard 2d convolution with $1 \times \Gamma$ kernels?Body: Recently I was reading this paper Skeleton Based Action RecognitionUsing Spatio Temporal Graph Convolution. In this paper, the authors claim (below equation (\ref{9})) that we can perform graph convolution with the following formula
+$$
+\mathbf{f}_{o u t}=\mathbf{\Lambda}^{-\frac{1}{2}}(\mathbf{A}+\mathbf{I}) \mathbf{\Lambda}^{-\frac{1}{2}} \mathbf{f}_{i n} \mathbf{W} \label{9}\tag{9}
+$$
+using the standard 2d convolution with kernels of shape $1 \times \Gamma$ (where $\Gamma$ is defined under equation 6 of the paper), and then multiplying it with the normalised adjacency matrix
+$$\mathbf{\Lambda}^{-\frac{1}{2}}(\mathbf{A}+\mathbf{I}) \mathbf{\Lambda}^{-\frac{1}{2}}$$
+For the past few days I have been thinking about this claim, but I can't find an answer. Has anyone read this paper and can help me figure it out, please?
+"
+"['reinforcement-learning', 'deep-learning', 'supervised-learning']"," Title: What is a multi channel supervised classifier?Body: I came across a paper that describes its model architecture in the following way.
+
+Our TRIL network is a two-channel network jointly trained to predict the expert’s action given state and the system’s next state transition given state and expert action. The training procedure of TRIL is similar to that of a multi- channel supervised classifier with regularization. Let $\theta_{π_0}$ be the parameters of TRIL and $L_{ce}$ be the cross entropy loss for predicting expert’s action and $L_{mse}$ be the mean squared error loss on predicting next state given current state and the expert’s action
+
+The loss function is given in the following manner
+$$L(\theta_{\pi_0}) = L_{ce}(a, \pi_0(s)) + \lambda L_{mse}(T_{\pi_0}(s,a),s')$$
+
+TRIL is a dual-channel network that shares certain hidden layers and jointly predicts expert action (a) and state transitions (s’)
+
+I am not sure what a dual-channel network means, or what it means for it to jointly predict two outputs. It seems similar to multi-task learning, since there are shared hidden layers and different "task" predictions, but I am not too sure of that either.
+"
+"['neural-networks', 'pattern-recognition']"," Title: What kind of neural network can be trained to recognise patterns?Body: Is there a type of neural network that can be fed patterns to train itself on to complete new patterns that it has not seen before?
+What I'm trying to do is train a neural network to transform an image into another image. The image may be slightly different each time (denoted with different lines in the shapes) but a human would get the idea of how the new images should look. I'd like to make a network that can learn how to learn what comes next and then predict the rest of the sequence from the first part of a new sequence.
+Taking the picture below as an example. The neural network would be fed the patterns in grey and learn how to predict the next ones in the sequence. Then the user would put the blue shapes into the network and hope to get the green ones out.
+Is there a neural network that could perform this kind of pattern completion, given only the small number of examples that start the new pattern plus the other patterns it has seen?
+
+EDIT: Corrected image and added more context
+"
+['reinforcement-learning']," Title: When to apply reward for time series data?Body: Reading the paper 'Reinforcement Learning for FX trading 'at https://stanford.edu/class/msande448/2019/Final_reports/gr2.pdf it states:
+
+While our end goal is to be able to make decisions on a universal time
+scale, in order to apply a reinforcement learning approach to this
+problem with rewards that do not occur at each step, we formulate the
+problem with a series of episodes. In each episode, which we designate
+to be one hour long, the agent will learn the make decisions to
+maximize the reward (return) in that episode, given the time series
+features we have.
+
+This may be a question for the authors, but is it not better in RL to apply rewards at each time step instead of "rewards that do not occur at each step"? If we apply rewards at each time step, then the RL algorithm should achieve better convergence properties as a result of learning at smaller time intervals rather than waiting for "one hour". Why not apply rewards at each time step?
+"
+"['neural-networks', 'generative-adversarial-networks', 'kl-divergence', 'wasserstein-metric', 'wasserstein-gan']"," Title: What is the reason for mode collapse in GAN as opposed to WGAN?Body: In this article I am reading:
+
+$D_{KL}$ gives us infinity when two distributions are disjoint. The value of $D_{JS}$ has a sudden jump, not differentiable at $\theta=0$. Only the Wasserstein metric provides a smooth measure, which is super helpful for a stable learning process using gradient descents.
+
+Why is this important for a stable learning process? I also have the feeling that this is the reason for mode collapse in GANs, but I am not sure.
+The Wasserstein GAN paper also talks about it obviously, but I think I am missing a point. Does it say JS does not provide a usable gradient? What exactly does that mean?
+"
+"['applications', 'speech-recognition', 'siri']"," Title: How do AIs like Siri and Alexa respond to their names being called?Body: AIs like Siri and Alexa respond to their names being called. How does the system recognize the name by ignoring all the other words that have been said before their name? For example, "Hey Siri" would trigger Siri to start listening for commands, but if a user said "hey how are you hey Siri" the system will ignore "hey how are you" but trigger the system to "hey Siri". Is it because their listening function reloads in milliseconds or even nanoseconds, or is there a different way it works?
+"
+"['recurrent-neural-networks', 'open-ai', 'transformer', 'attention', 'gpt']"," Title: What exactly are the ""parameters"" in GPT-3's 175 billion parameters and how are they chosen/generated?Body: When I studied neural networks, parameters were learning rate, batch size etc. But even GPT3's ArXiv paper does not mention anything about what exactly the parameters are, but gives a small hint that they might just be sentences.
+
+Even tutorial sites like this one start talking about the usual parameters, but also say "model_name: This indicates which model we are using. In our case, we are using the GPT-2 model with 345 million parameters or weights". So are the 175 billion "parameters" just neural weights? Why then are they called parameters? GPT3's paper shows that there are only 96 layers, so I'm assuming it's not a very deep network, but extremely fat. Or does it mean that each "parameter" is just a representation of the encoders or decoders?
+
+An excerpt from this website shows tokens:
+
+In this case, there are two additional parameters that can be passed
+to gpt2.generate(): truncate and include_prefix. For example, if each
+short text begins with a <|startoftext|> token and ends with a
+<|endoftext|>, then setting prefix='<|startoftext|>',
+truncate=<|endoftext|>', and include_prefix=False, and length is
+sufficient, then gpt-2-simple will automatically extract the shortform
+texts, even when generating in batches.
+
+So are the parameters various kinds of tokens that are manually created by humans who try to fine-tune the models? Still, 175 billion such fine-tuning parameters is too high for humans to create, so I assume the "parameters" are auto-generated somehow.
+The attention-based paper mentions the query-key-value weight matrices as the "parameters". Even if it is these weights, I'd just like to know what kind of a process generates these parameters, who chooses the parameters and specifies the relevance of words? If it's created automatically, how is it done?
+"
+"['reinforcement-learning', 'markov-decision-process', 'convergence', 'pomdp']"," Title: If the performance of an RL agent in a partially observable environment is ""good"", is this likely only accidental?Body: In my research, I remember to have read that, in case of an environment which can be modeled by partially observable MDP, there are no convergence guarantees (unfortunately, I do not find the paper anymore and I would appreciate if someone can post the link to the reference).
+If the performance of an RL agent in a partially observable environment is "good" (i.e. the agent does pretty well in achieving its goal), is this likely only accidental or due to chance?
+"
+"['natural-language-processing', 'natural-language-understanding', 'metric', 'question-answering']"," Title: How is the F1 score calculated in a question-answering system?Body: I have an NLP model for answer-extraction. So, basically, I have a paragraph and a question as input, and my model extracts the span of the paragraph that corresponds to the answer to the question.
+I need to know how to compute the F1 score for such models. It is the standard metric (along with Exact Match) used in the literature to evaluate question-answering systems.
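+My current guess, based on SQuAD-style evaluation scripts, is a token-level F1 between the predicted span and the gold span, something like the sketch below (text normalisation omitted), but I would like to confirm that this is indeed how it is computed:
+from collections import Counter
+
+def token_f1(prediction: str, gold: str) -> float:
+    # token overlap between the predicted answer span and the gold answer span
+    pred_tokens, gold_tokens = prediction.split(), gold.split()
+    common = Counter(pred_tokens) & Counter(gold_tokens)
+    num_same = sum(common.values())
+    if num_same == 0:
+        return 0.0
+    precision = num_same / len(pred_tokens)
+    recall = num_same / len(gold_tokens)
+    return 2 * precision * recall / (precision + recall)
+
+print(token_f1("the Eiffel Tower", "Eiffel Tower"))   # ~0.8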
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'markov-decision-process', 'environment']"," Title: Reinforcement learning with action consisting of two discrete valuesBody: I'm new to reinforcement learning. I have a problem where an action is composed of an order (rod with a required length) and an item from a warehouse (an existing rod with a certain length, which will be cut to the desired length and the remainder put back to the warehouse).
+I imagine my state as two lists of a defined size, orders and warehouse, and my action as an index into the first list plus an index into the second list. However, I have only worked with environments where it was only possible to pick a single action, and I'm not sure how to deal with two indices, or what the DQN architecture should look like to output such an action.
+Can anyone validate my general idea and help me find a solution? Or maybe just point me to some papers where similar problems are described?
+"
+"['reinforcement-learning', 'training', 'dqn', 'deep-rl', 'time-series']"," Title: How can I build a deep reinforcement learning model that can be trained with multiple time series datasetsBody: I built a DRL model to trade stocks in the financial market but the number of observations is relatively small and I would like to increase it by training the same model with stocks from several different companies. My problem is that I don't know what is the correct way to do this since the price series is a time series. Someone to enlighten me? I have read articles that show that this is possible but none that say how.
+"
+"['neural-networks', 'deep-learning', 'activation-functions']"," Title: What activation functions are currently popular?Body: I am not asking what activation function is better. I want to know what activation functions are more used in research or deployment. Also, are they used in combination? E.g., ReLU, ELUs, etc. I'd appreciate any statistics or insight on this.
+"
+"['machine-learning', 'reinforcement-learning', 'deep-learning']"," Title: Why can we use a network to estimate $Q_\pi(s, a)$ in Actor-Critic Method?Body: According to deep Q learning, we want to learn $Q^*(s,a)$, which is the optimal action-value function. It does make sense because we assume there is only one optimal function so the algorithm will converge supposedly.
+But when it comes to the actor-critic method, we use a critic network (also called a value network) to estimate $Q_\pi(s, a)$. This is what confuses me. Since our policy $\pi$ changes over time, the target $Q_\pi(s, a)$ of the value network will also change. What happens when a network has to learn such a changing function?
+"
+"['reinforcement-learning', 'comparison', 'policy-gradients', 'actor-critic-methods', 'advantage-actor-critic']"," Title: What is the difference between vanilla policy gradient with a baseline as value function and advantage actor-critic?Body: What is the difference between vanilla policy gradient (VPG) with a baseline as value function and advantage actor-critic (A2C)?
+By vanilla policy gradient I am specifically referring to spinning up's explanation of VPG.
+"
+"['machine-learning', 'plotting']"," Title: Plotting loss vs number of updates made and plotting loss vs run timeBody: I wanted to plot a graph to show the effect of increasing the batch size on loss calculated (MNIST dataset). But I am not able to decide if I should show change in loss over training time of the neural network or number of updates made to weights and biases (iterations and epochs basically, but for large differences in batch sizes, I think number of updates made makes more sense?). I am confused about what makes more sense (or neither of them makes sense, idk).
+With Loss vs training time graph, I can show that for any training time, the loss for large batch is more. From what I have read on wiki, with Loss vs number of updates made graph, I can show that change in loss is smoother for larger batches (rate of convergence). But can't the same conclusion be made when plotted against time? (Smooth convergence means lesser spikes right?)
+"
+"['reinforcement-learning', 'experience-replay']"," Title: What is the advantage of using experience replay (as opposed to feeding it sequential data)?Body: Let's suppose that our RL agent needs to play a game with different levels. If we train our RL agent sequentially or with sequential data, our agent will learn how to play level 1, but then it will learn to play level 2 differently, because our agent learns how to play level 2 and forgets how to play level 1, since now our model is fitted using only experiences from level 2.
+How does an experience replay buffer change this? Can you explain this in simple terms?
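+For reference, this is roughly what I understand a replay buffer to be (a minimal sketch; the capacity and batch size are arbitrary):
+import random
+from collections import deque
+
+buffer = deque(maxlen=100_000)            # fixed-size FIFO store of transitions
+
+def store(state, action, reward, next_state, done):
+    buffer.append((state, action, reward, next_state, done))
+
+def sample(batch_size=32):
+    # random mini-batch drawn from experiences collected across *all* levels
+    return random.sample(list(buffer), batch_size)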
+"
+"['reinforcement-learning', 'comparison', 'markov-decision-process', 'pomdp']"," Title: What is the difference between Bayes-adaptive MDP and a Belief-MDP in Reinforcement Learning?Body: I have been reading a few papers in this area recently and I keep coming across these two terms. As far as I'm aware, Belief-MDPs are when you cast a POMDP as a regular MDP with a continuous state space where the state is a belief (distribution) with some unknown parameters.
+Are they not the same thing?
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'experience-replay', 'catastrophic-forgetting']"," Title: Why do DQNs tend to forget?Body: Why do DQNs tend to forget? Is it because when you feed highly correlated samples, your model (function approximation) doesn't give a general solution?
+For example:
+
+- I use level 1 experiences, my model $p$ is fitted to learn how to play that level.
+
+- I go to level 2, my weights are updated and fitted to play level 2 meaning I don't know how to play level 1 again.
+
+
+"
+"['models', 'pytorch', 'word-embedding', 'transformer', 'pretrained-models']"," Title: Is it good practice to save NLP Transformer based pre-trained models into file system in production environmentBody: I have developed a multi label classifier using BERT. I'm leveraging Hugging Face Pytorch implementation for transformers.
+I have saved the pretrained model into a file directory in the dev environment. Now, the application is ready to be moved to the production environment.
+Is it good practice to save the models to the file system in prod?
+Can I serialize the model files and word embeddings into a DB and read them back again?
+"
+"['natural-language-processing', 'tensorflow', 'keras', 'convolution']"," Title: Embedding Layer into Convolution LayerBody: I'm looking to encode PDF documents for deep learning such that an image representation of the PDF refers to word embeddings instead of graphic data
+So I've indexed a relatively small vocabulary (88 words). I've generated images that replace graphic data with word indexed (1=cat, 2=dog, etc) data. Now I'm going to my NN model
+right_input = Input((width, height, 1), name='right_input')
+right = Flatten()(right_input)
+right = Embedding(wordspaceCount, embeddingDepth)(right)
+right = Reshape((width, height, embeddingDepth))(right)
+right = vgg16_model((width, height, embeddingDepth))(right)
+
+Image data is positive-only and embedding outputs negative values though, so I'm wondering if it is necessary to normalize the embedding layer with something like this after the Embedding layer:
+right = Lambda(lambda x: (x + 1.)/2.)(right)
+
+The word indexed image looks like this:
+
+Also, is this a problematic concept generally?
+"
+"['neural-networks', 'deep-learning', 'activation-functions']"," Title: In what situations ELUs should be used instead of RELUs?Body: I always use RELUs actication functions when I need to and I understand limitations of ELUs. So in what situation do I need to consider ELUs over RELUs?
+"
+"['neural-networks', 'papers', 'autoencoders', 'anomaly-detection', 'denoising-autoencoder']"," Title: How can a de-noising auto-encoder act as an anomaly detection model?Body: In some research papers, I have seen that, for training the autoencoders, instead of giving the non-anomalous input images, they add some anomalies to the normal input images, and train the auto-encoders with this anomalous images.
+And, during testing, they take and pass an anomalous image and get the output, take their pixel-wise difference, and, based on a threshold, they detect if it is an anomaly or not.
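+In other words, at test time the decision rule seems to be something like the following sketch (the threshold and the model are placeholders, not from any specific paper):
+import numpy as np
+
+def is_anomaly(autoencoder, image, threshold):
+    # autoencoder: any model mapping an image (numpy array) to its reconstruction
+    reconstruction = autoencoder(image)
+    error = np.mean(np.abs(image - reconstruction))   # average pixel-wise difference
+    return error > threshold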
+If we are adding noise or anomalies to the training set, are we generalizing the model's capability to recreate the original normal input?
+How does it help to detect the anomaly?
+My understanding is that we should train using only normal data, without adding any noise, then feed an anomalous image at test time and compare the reconstruction loss against a threshold.
+"
+"['neural-networks', 'deep-learning', 'objective-functions', 'regularization', 'loss']"," Title: Why L2 loss is more commonly used in Neural Networks than other loss functions?Body: Why L2 loss is more commonly used in Neural Networks than other loss functions?
+What is the reason to L2 being a default choice in Neural Networks?
+"
+"['deep-learning', 'computer-vision', 'clustering']"," Title: Combining clustering and deep learning for computer visionBody: Is there any recent work on combining clustering approaches (k-means, or gaussian mixture or PGM) with deep learning for computer vision?
+In particular I'm interested in if anyone has used the first few layers of a deep learning network as feature extractors in conjunction with clustering algorithms which have been engineered to induce things like translation and rotation invariance while preserving basic object structure?
+Taking the max value out of the output of each feature and fitting them to a gaussian mixture is the easiest approach but I'm interested in seeing other ways you could structure the clustering algorithm. For example I'm interested in seeing how you might learn structure between features that includes position information.
+"
+"['q-learning', 'open-ai', 'gym']"," Title: OpenAI gym's CartPole problem system does not learnBody: My OpenAI CartPole-v0 problem's implementation using basic Q-learning does not learn at all.
+I am a beginner and have implemented my first ever Q-learning from scratch after learning from tutorials.
+Can anyone suggest what is going wrong?
+I have seen through testing that the problem may be that most of the states remain unvisited even after 10,000 runs. Hence, the Q-table remains mostly unchanged at the end of all episodes. Everything else in the implementation seems fine to me, at least. Any tips on where I should start looking?
+The reward is a flat -200 for all the episodes, which suggests that the improvement is NIL/NADA/NONE!
+Some relevant images are given at the end.
+The q-learning part of code is given below:
+env.reset()
+while not done:
+ current_state = current_state_to_string(assign_obs_to_bins(obs, bins))
+
+ if np.random.uniform() < EPSILON:
+ act = env.action_space.sample()
+ best_q_value = return_max_from_dict(q[current_state], action = act)
+ else:
+ act, best_q_value = return_max_from_dict(q[current_state])
+
+ obs, reward, done, _ = env.step(act)
+ q[current_state][act] += LEARNING_RATE * (reward + DISCOUNT_FACTOR * best_q_value - q[current_state][act])
+ cnt+=1
+ total_reward += reward
+
+
+
+"
+"['reinforcement-learning', 'q-learning', 'open-ai', 'gym']"," Title: Most of state-action pairs remain unvisited in the q-tableBody: In building my first Q-learning algorithm for OpenAI gym's CartPole problem, many of my states remain unvisited. I believe it is the reason that my agent does not learn.
+Could someone point out reasons I can look into for why that may happen? I have read and watched tutorials, and I know a lot has already been done for this problem. My goal here is to learn, hence this simple implementation with Q-learning.
+PS. The specific question about my problem is asked here.
+PPS. As an edit, I am inserting my whole code below.
+import numpy as np
+import gym
+import matplotlib.pyplot as plt
+
+env = gym.make('MountainCar-v0')
+
+EPISODES = 5000
+LEARNING_RATE = 0.1
+SHOW_AFTER = 1500
+DISCOUNT_FACTOR = 0.95
+EPSILON = 0.1
+NUMBER_OF_BINS = 10
+OBSERVATION_SPACE = 2
+MAX_STATES = 100 # e.g. 23 means obs one is 2, obs two is 3
+
+'''This function breaks the continuous states into discrete form'''
+def digitize_states():
+ bins = np.zeros((OBSERVATION_SPACE,NUMBER_OF_BINS))
+ bins[0] = np.linspace(-1.2, 0.6, NUMBER_OF_BINS)
+ bins[1] = np.linspace(-.07, 0.07, NUMBER_OF_BINS)
+ return bins
+
+'''This function assigns the observations into discrete bins using the
+ digitize function and the bins that we created using digitize_states()
+'''
+def assign_obs_to_bins(obs, bins):
+ states = np.zeros((OBSERVATION_SPACE))
+ states[0] = np.digitize(obs[0], bins[0])
+ states[1] = np.digitize(obs[1], bins[1])
+ return states
+
+'''This function merely makes the states into strings so that we can
+ later use those strings (i.e. number of states) as the KEYs in our q-table
+ dictionary.
+'''
+def get_all_states_as_strings():
+ states = []
+ for i in range(MAX_STATES):
+ states.append(str(i).zfill(OBSERVATION_SPACE))
+ return states
+
+'''Convert the current state into string so that it can be used as key for dictionary '''
+def current_state_to_string(state):
+ current_state = ''.join(str(int(e)) for e in state)
+ return current_state
+
+'''This function initializes the q-table to zeros'''
+def initialize_q():
+ states = get_all_states_as_strings()
+ q = {}
+ for state in states:
+ q[state] = {}
+ for action in range(env.action_space.n):
+ q[state][action] = 0
+ return q
+
+def initialize_Q():
+ Q = {}
+
+ all_states = get_all_states_as_strings()
+ for state in all_states:
+ Q[state] = {}
+ for action in range(env.action_space.n):
+ Q[state][action] = np.random.uniform(-.5, 0.5, 1)
+ return Q
+
+'''This function returns the maximum Q-value from Q-table'''
+def return_max_from_dict(dict_var, action = None):
+ ''' Arguments
+ # dict_var: Dictionary variable, which represent the q-table.
+
+ # Return
+ # max_key: Best Action
+ # max_val: max q-value for the current state, taking best action
+ '''
+ if(action == None):
+ max_val = float('-Inf')
+ for key, val in dict_var.items():
+ if val > max_val:
+ max_val = val
+ max_key = key
+ return max_key, max_val
+ else:
+ return dict_var[action]
+
+
+'''Main code starts here'''
+
+bins = digitize_states()
+all_states = get_all_states_as_strings()
+
+q = initialize_Q()
+
+Total_reward_matrix = []
+_testing_action_matrix = []
+_testing_state_matrix = []
+_testing_states = []
+_testing_random = 0
+_testing_greedy = 0
+
+for episode in range(EPISODES):
+
+ done = False
+ cnt = 0
+
+ # Reset the observations -> then assign them to bins
+ obs = env.reset()
+
+ if episode%SHOW_AFTER == 0:
+ print(episode)
+
+ total_reward = 0
+
+ while not done:
+ current_state = current_state_to_string(assign_obs_to_bins(obs, bins))
+ _testing_state_matrix.append(int(current_state))
+
+ if np.random.uniform() < EPSILON:
+ act = env.action_space.sample()
+ best_q_value = return_max_from_dict(q[current_state], action = act)
+ _testing_random+=1
+ else:
+ act, best_q_value = return_max_from_dict(q[current_state])
+ _testing_greedy+=1
+
+ obs, reward, done, _ = env.step(act)
+ _testing_action_matrix.append(act)
+
+ q[current_state][act] = (1-LEARNING_RATE)*q[current_state][act] + LEARNING_RATE * (reward + DISCOUNT_FACTOR * best_q_value)
+ cnt+=1
+ total_reward += reward
+
+ if done and cnt > 200:
+ print(f'reached at episode: {episode} in count {cnt}')
+ Total_reward_matrix.append(total_reward)
+ elif done:
+# print('Failed to reach flag in episode ', episode)
+ Total_reward_matrix.append(total_reward)
+
+
+env.close()
+
+"
+"['deep-neural-networks', 'time-series']"," Title: Why does not the deepAR model of Amazon require the time series being stationary, as opposed to ARMA model?Body: As what the title said. Does not deepAR require the time series being stationary?
+"
+"['neural-networks', 'deep-learning', 'comparison', 'definitions']"," Title: What is the difference between artificial neural networks and deep learning?Body: I have read many mixed definitions around these two terms. For example, is it right to say deep learning is any ANN with more than two hidden layers?
+What are formal definitions for these two?
+"
+"['reinforcement-learning', 'markov-decision-process', 'policies', 'path-planning']"," Title: Is a policy in reinforcement learning analogous to a field such as APF?Body: If a policy maps states to actions in reinforcement learning, then for a path planning with obstacles, can't we simply use Artificial Potential Field fields for path planning and model policy mathematically as a field where the obstacles form repulsive field and goal form attractive field?
+So, technically, is a policy simply a field?
+"
+"['ai-design', 'game-ai', 'deep-rl']"," Title: Do I need a large pool of training data to train a bot to play the 'pegging' game in the cribbage card gameBody: The game of cribbage https://en.wikipedia.org/wiki/Cribbage is a two-player card game played over a series of deal with the goal to reach 121 points.
+The game's elements are:
+
+- the discard. There are three hands, one for each player of 4 cards selected from six. The discards go face down to a hand which belongs to the dealer (the crib). On average, because non-dealer is optimising one hand, his hand scores slightly more points, say 8 points vs 7.8 for dealer, while the crib scores around 4 points, because one player aims to toss trash (while not damaging his own hand), while the other aims to toss good cards (while likewise maximising his own hand)
+- pegging. Here the dealer expects to peg on average around 4 points (he plays second, so has an advantage in responding to non-dealer's lead), and non-dealer around 2 points
+- the cut. This is a fifth card which belongs to all three hands. If the cut is a Jack, dealer scores 2 points, for free.
+- the order of play. This is:
+
+- the cut. Averages 2/13 = 0.15 points for dealer (if a Jack is cut)
+- pegs (played one card at a time, scoring for each). Dealer will peg at least 1 point, and non-dealer will peg 0 or more. Averages 4 and 2 points.
+- non-dealer shows his hand. Averages 8 points
+- dealer shows his hand. Averages 8 points.
+- dealer shows his crib. Average 4 points.
+
+
+
+As should be obvious the hands are generally the main source of points, so players will tend to optimise their hands to maximise points there. This problem is OUTSIDE OF THE SCOPE of this post, since there are exhaustive analyses that have solved this problem reasonably well. E.g., https://cliambrown.com/cribbage/
+There are 6C4 = 15 discards possible for each hand, and often the correct play will be unambiguous. For example, if we had the hand A49QKK with mixed suits (no more than 3 of any 1 suit) and we are non-dealer, then the correct hold is obvious - A4KK (this hand has two ways to make 15, A4K and A4K, for 2 points each, and a pair for 2 points, plus it improves with cuts of A, 4, 5, T, J, Q, K)
+After we've held the card we must then 'play the pegging game'. The scope of this post/question is therefore limited to how to play a given 4-card hand that we have already selected using an exhaustive combinatorial analysis, hence I'm assuming that the bots inputs are a given 4 card hand, as well as the three dead cards (discard and cut), and the replies to each play from our opponent.
+The scoring for pegging is:
+
+- each pair scores 2 for the player playing the second card, 6 for 3 of a kind, 12 for 4 of a kind
+- each time the count reaches 15, the player playing the card scores 2
+- each time the count reaches 31, the player reaching scores 2 (it is not permitted to exceed 31, and if you have no card that will keep the count at/below 31, you must call 'go', and if both players call 'go', the last player scores 1 point). After this the cards are turned over, and play continues from zero
+- each time a run of 3 or more cards is visible (between the cards played by both players), then you score 3, 4, 5, 6, 7, or even 8 points according to the length of the run.
+
+Follows discussion about pegging strategy, which I have blockquoted for those who find this tl;dr
+
+The optimum play for a given hand is not necessarily clear-cut. For
+example, with the hand A4KK, then we could lead:
+
+- K - on the basis that if dealer replies with another K (for 2 points), then we reply with the third K (for 6 points), and then most
+likely dealer does not have an Ace, so we score 2 points for the 31 (K
+= 10, A = 1), AND dealer must lead the next round of pegging, which is a considerable disadvantage.
+- not K - because there's a bias to holding 5s, so if dealer was dealt a 5, then he can reply with the 5 scoring 2 points for the 15, to
+which we have no scoring reply. In addition, non-dealer generally has
+a bias to leading from a pair, so if we lead the K, then in general a
+smart dealer says 'he is likely to have a pair of Ks'. So even if
+dealer has a K, he may decline to peg it, especially if he doesn't
+hold the A himself
+- 4 - this is the most popular lead in cribbage because 15 cannot be scored on it. As indicated above there are 16 10 cards (TJQK), as
+opposed to 4 of every other denomination, so it's more likely to
+receive a ten back than any other denomination, and we hold an Ace, so
+we would score 2 points following a ten card reply
+- A - this has the same advantage that dealer cannot reach 15, and again if we get a ten card back we can score 15-2.
+- It might be that the A is better to lead because the 4 is such a popular lead that dealer is more likely to pair it (risking that we
+hold the third 4 for 6 points), in which case we have no good reply.
+If we lead the A, then the dealer is [probably!] less likely to reply
+with an A (which doesn't help us).
+- We might prefer to lead the 4 if we think that the A is more likely to allow us to play the last card before 31, in which case dealer is
+forced to lead first next round, which is a disadvantage.
+
+During pegging we have considerable information:
+
+- which cards we hold and discarded. For example, clearly if we hold 3 4s, and the 4th was cut, then there is no chance that we get one
+played against us.
+- which cards have been played and therefore which cards are likely to remain, based on the hand selection process. For example, if a 4 is
+led then x% of the time that will be from A4 TJQK, y% it will be from
+4456, etc. As more cards are played, then this becomes more obvious.
+For example, if we've seen the 3 cards 5TK, then it's VERY likely that
+the 4th card is another TJQK card, and somewhat likely that it's a 5.
+It could be ANY card because maybe of the six cards the player was
+dealt he ended up with an unconnected card. But we can say that the
+chance of this is low.
+
+In terms of exactly what cards are held, then if we analysed millions
+of games, we could calculate in general knowing one, two or three
+cards what the remaining cards could be.
+Although there are in theory 270,725 4 card hands, in pegging terms
+(ignoring suits), there are only 1820 distinct hands
+(https://docs.google.com/spreadsheets/d/1fxkLBkWC2LA6J06zhku21jcG2ATHqE1RNHlPSDcc4fQ/edit#gid=834958733)
+For any given hand, if we for example held the cards 358TTT and we
+were non-dealer, then we would choose to hold 5TTT and toss 38. We
+would then lead the T. In this spot, clearly of the 1820 possible
+hands, then combinations such as TT xy are no longer possible.
+As another example if we were dealt A778JK, then we'd toss the JK.
+Here we'd likely lead the 7.
+Before we play the 7 it's relatively simple to calculate the odds that
+dealer was dealt two 7s. Since he had six cards, that is
+2/46 * 1/45 * 6C2, which is 1.45%. The chance that he chose to hold those
+two 7s is a different number, and we could theoretically calculate it
+by figuring out for each of the 45C6 (but simplifiable!) combinations,
+which hold he would make. However it won't be too far from this number
+of 1.45%.
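+(As a quick sanity check of that figure, using exact binomial coefficients:)
+from math import comb
+
+# chance that the dealer's six dealt cards contain both remaining 7s,
+# given that we can see our own six cards (46 unknown cards, two of them 7s)
+p = comb(44, 4) / comb(46, 6)   # equivalently 2/46 * 1/45 * comb(6, 2)
+print(round(p, 4))              # ~0.0145, i.e. about 1.45%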
+HOWEVER, once we have played a 7 and dealer replies with a second 7,
+then this number is now very far away from accurate. Firstly because
+it's now a simple conditional probability where one condition is
+already satisfied, so the chance of any of 5 unknown cards being a
+specific card (e.g., the 7 of diamonds) is now almost 1/9. Of course
+dealer has 3 cards, not 5, so again the chances of this are not that
+number, but not too far off, because of those hands where dealer was
+dealt 77, and he holds one seven, then he is seldom going to toss the
+other one.
+In terms of our possible plays, we have at the count of 14:
+
+- play the Ace and score 2 points for the count of 15.
+- play the 7 and bring the count to 21, which is 6 points. However, dealer is fairly likely to have a TJQK (bringing the count to 31,
+scoring 2 points, and making us lead the next deal). In addition,
+given that dealer has a similar analysis process to us, in hands where
+he holds say 78 then he probably replies to the 7 lead with the 8, as
+it doesn't allow us to score 6 points. So given that dealer has
+replied with 7, this increases the chance that he holds the fourth 7
+as well. It's hard to say what this chance is, but for example there
+is an x% chance of 31-2, a y% chance of 28-12, and a z% chance of
+something else. Or we could consider the chance of any given card.
+- probably not play the 8, because at 778 the count is 22, and dealer has many scoring replies: 6 is a run (3 points), 9 is a run AND 31 (5
+points), and 8 is a pair (2 points)
+
+In general it can be seen that there are:
+
+- up to 1820 unique 4-card hands
+- up to 455 3-card hands
+- up to 91 2-card hands
+- up to 13 1-card hands
+
+Clearly we could weight each hand by the chance it has to be held, and
+this would work well BEFORE a card has been played.
+However after 1 or more cards has been played this approach is going
+to be hopelessly naive. For example, let's say we led the 4. If dealer
+is holding a hand like 5JJK then he's definitely not going to reply
+with the 5, because we can play a 3 for 3 points (run), or a 6 for 5
+points (run and count of 15).
+Further, against non beginner-level players, it will be obvious that
+we often hold hands like A4 JK or 23 KK, etc. So if we lead the 3, and
+dealer replies with a ten card (TJQK), then it's likely he's NOT
+holding cards such as 789, which a non-beginner player would likely
+prefer to reply with here. It's possible of course that dealer has the
+same cards. For example, if we hold 23JK, dealer might be holding
+23QQ. In this case, the Q reply might be preferred by dealer, because
+if the play goes 3-Q-2, then dealer can pair our 2, and perhaps dealer
+doesn't like to pair our 3 lead, for fear we have a third 3, while he
+lacks the fourth.
+In addition, while for pegging terms suits don't matter at all, suit
+information is still important, so long as the player has a flush.
+For example if we were dealt 2468h Jd Js, we'd hold the 2468 of
+hearts, because that's a flush worth 4 points. So if, during the play,
+we've seen 468h from our opponent, then the probability distribution
+of possible hands is going to contain a good weight for every heart
+card. Whereas if we've seen 4h 6d 8h, then we know after two cards
+that he does NOT have a flush, so this weights likely hands towards
+hand such as 4568, 4678 and so on, and it's unlikely that the
+remaining card is, say, a K.
+
+In general the goal of a pegging bot could be seen as:
+
+- score most points or
+- concede fewest points or
+- maximise net points
+
+
+Nuance: In early game it's likely max net points is the best approach.
+But if we are at a score like 117-117 (121 is the winning mark) as
+dealer, then it's (let's say) 80% likely that non-dealer has 4 points
+in his hand, which means he wins if we do not peg 4 points. In this
+case non-dealer would try to hold cards that score 4 points (as a
+hand), and if there are multiple options, try to hold cards that
+reduce dealer's chance of pegging. Meanwhile dealer's realistic route
+to victory would be to hold the 4 cards that give him the best chance
+of pegging 4 points (remembering that dealer will on average score
+more pegs, while non-dealer scores his hand first). If the score was
+113-113, then dealer would play differently as he has NO chance of
+pegging 8 points, but there's perhaps a 40% chance that non-dealer
+fails to score 8 points from pegging and his hand. So in this case
+dealer would try to stop non-dealer pegging anything.
+So it seems that an AI would need to take into account the current score
+to decide how to peg.
+
+I have read a couple of papers on cribbage AI, but they have been superficial and tended to concentrate on the discard game, which can be optimised at least for discarding purposes (without considering the relative pegging value), by simply iterating through the possible hands.
+
+Now the question is to what extent this is a problem of machine learning, and to what extent this is an exhaustive analysis? If we ignore the discard problem, and we say that we will input four-card hands to our bot using the output of a process similar to the one here https://cliambrown.com/cribbage/methodology.php then for example:
+
+- we could calculate the exact probability that our opponent holds any given four card hand, by iterating through the 45C6 hands (subject to appropriate simplifications) that our opponent could have, and then producing a weight for each of the (up to) 1820 possible hands, and for the 715 different hands of 4 unique denominations, the further chance for each to be a flush
+- I am not quite clear how computationally expensive this is, but it seems to me that we should be able to calculate the weights in a reasonable amount of time
+
+So we have 4 cards, and we have weighted possibilities for each of 1820 hands.
+Clearly it's not appropriate for us to simply randomly iterate through the hands. I.e. there are four choices for our first card, likewise four for our opponent, then 3 for the next. Roughly there will be 4!*4! choices in this way (roughly, because the issue of the count resetting at 31 means that the card order after the third card is not always the same). But our opponent's reply is not random. If we lead the 5, then he will certainly reply with a TJQK if he has one. He won't possibly reply with a 5.
+So it seems to me that some kind of learning process is appropriate. But I am not really familiar with AI to say how this should work. Do we need a pool of human training data? Or do we allow the bot to play against itself?
+And what role do probability tables have in this process? Is it going to be productive on a hand level to iterate through possible hands that our opponent might have, or is a Monte Carlo process just as good?
+I should note that humans might make sub-optimal play. So for example, if we play PERFECTLY, then if we hold the hand 6699, and the 4 is led, then we SHOULD reply with the 9, rather than risk a 5 on our 6, conceding 5 points. So the play of a 6 on a 4 SHOULD indicate that we hold a hand such as 5566, in which case then we are informed accordingly. But clearly the chance that we hold 6699 and just made a blunder is not zero. So the bot cannot ever be completely certain about our holds.
+Likewise it might be we choose to make 'sub-optimal' plays in order to avoid becoming predictable ourselves. For example, the question 'will our opponent pair our lead' is an important one - if we have a pair, we generally want him to. But sometimes we will hold a hand such as A4 JK, in which case we don't want him to pair our lead. Some players will play more aggressively than others, and against an aggressive player, he might pair our lead every time, and a more defensive player might almost never pair our lead.
+"
+"['machine-learning', 'optimization', 'computational-learning-theory', 'online-learning']"," Title: Does this $\max$ mean that we need to maximize the regret in this regret formula?Body: I found that the regret in Online Machine Learning is stated as:
+$$\operatorname{Regret}_{T}(h)=\sum_{t=1}^{T} l\left(p_{t}, y_{t}\right)-\sum_{t=1}^{T} l\left(h(x), y_{t}\right),$$
+where $p_t$ is the answer of my algorithm to the question $x$ and $y_t$ is the right answer, while $h()$ is one of the hypotheses in the hypothesis space. Intuitively, as denoted in the paper, our objective is to minimize this Regret in order to optimize our algorithm, but in the following formula
+$$
+\operatorname{Regret}_{T}(\mathcal{H})=\max _{h^{\star} \in \mathcal{H}} \operatorname{Regret}_{T}\left(h^{\star}\right)
+$$
+they maximize this value. Am I interpreting the $\max$ wrongly?
+"
+"['neural-networks', 'recurrent-neural-networks', 'reference-request', 'time-series']"," Title: Recommendations or resources for neural network/deep learning for time series application?Body: I know there are quite a few good deep learning books out there, but most explain neural networks and deep learning via application on images. If there are examples/code, they are often done on the MNIST data set.
+I was wondering if there's a book out there that goes in depth into neural networks and is equally well written but explains it on non-image data. I am particularly interested in time series application, although some talk on cross-sectional application would be helpful as well. I am particularly interested in learning about:
+
+- What types of layers/functions/structure are better suited for time-series data
+- Various models for time series and the pros/cons/applications of each (convolutional NNs, LSTMs, etc...)
+- Typical structures of your neural network (depth, sparse connections, etc...) that seem to work well on time series data
+- Special considerations or settings you should have in your neural network when working with time series data
+- Maybe some talk/examples on how time series prediction using traditional models like ARIMA, can be reproduced or done better using neural networks. Or side by side comparison of pros/cons of using one vs the other.
+
+Thanks!
+"
+"['deep-learning', 'natural-language-processing', 'tensorflow', 'training', 'gpt']"," Title: How large should the corpus be to optimally retrain the GPT-2 model?Body: I just started working with the GPT-2 models and want to retrain one on a pretty narrow topic, so I have problems finding training material.
+How large should the corpus be to optimally retrain the GPT-2 model? And what is the bare minimum size? Should it simply be as large as possible or can it flip over and make the model worse in some way?
+I am also not certain how many steps you should let the retraining run. I have been using 6000 steps when testing, and it seems not much happens after that; the loss only moved from 0.2 to 0.18 over the last 1000 steps.
+"
+"['neural-networks', 'training', 'matlab', 'mnist']"," Title: Is there a way to reduce the RMSE error when training a neural network to recognise MNIST digits using ANFIS?Body: I wanted to build a digit recognition neural network using MATLAB ANFIS kit.
+I started by using the MNIST database and I figured out it's almost impossible to classify 784 dimension data using ANFIS. So, I reduced my data dimension from 784 to 13, using an autoencoder in Python. With the new data, I had about 80 percent accuracy in classification using a sequential model. I implemented my data in MATLAB too.
+Since MATLAB treats the problem as a regression problem, I had an RMSE of about 1.5 after 10 epochs of learning, in both grid partitioning and subtractive clustering, and the error curve also seems almost constant during the process.
+Is there any way that I can have less error?
+"
+"['classification', 'terminology', 'game-theory']"," Title: Does Algorithmic Mechanism Design come under the field of AI?Body: I see many papers in AAMAS talk about artificial intelligence and mechanism design simultaneously. I was wondering, for the sake of being pedantic, is mechanism design could be classified under AI.
+"
+"['deep-learning', 'papers', 'transformer', 'space-complexity', 'reformer']"," Title: What is the memory complexity of the memory-efficient attention in Reformer?Body: When I read the paper, Reformer: The Efficient Transformer, I cannot get the same complexity of the memory-efficient method in Table 1 (p. 5), which summarizes time/memory complexity of scaled dot-product, memory efficient, and LSH attention.
+The memory complexity of the memory-efficient method is as follows:
+$$\max \left(b n_{h} l d_{k}, b n_{h} l^{2}\right)$$
+$b$: batch size
+$l$: sequence length
+$n_h$: the number of attention heads
+$d_k$: the dimension of query or key
+To the best of my knowledge, the memory-efficient method will do a loop for each query, therefore, the whole attention matrix will not show up.
+So, shouldn't the memory complexity be $\max(b n_h l d_k, b n_h l)=(b n_h l d_k)$ instead of $\max(b n_h l d_k,b n_h l^2)$?
+"
+"['neural-networks', 'backpropagation', 'architecture', 'dense-layers']"," Title: Why does a neuron in a multi-layer network need several input connections?Body: For example, if I have the following architecture:
+
+
+- Each neuron in the hidden layer has a connection from each one in the
+input layer.
+- 3 x 1 input matrix and a 4 x 3 weight matrix (for backpropagation we have, of course, the transposed version, 3 x 4); see the small example below
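+A tiny NumPy illustration of the shape bookkeeping I mean (the numbers are placeholders):
+import numpy as np
+
+x = np.array([[0.2], [0.5], [0.9]])   # 3 x 1 input vector
+W = np.random.randn(4, 3)             # 4 x 3 weight matrix of the hidden layer
+
+h = W @ x                             # 4 x 1 pre-activations: each hidden neuron
+                                      # sums over all 3 inputs with its own weights
+print(h.shape)                        # (4, 1)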
+
+But I still don't understand the point of a neuron having 3 inputs (in the hidden layer of the example). It seems it would work the same way if I only adjusted one weight of the 3 connections.
+In the current case the information just flows distributed over several "channels", but what is the point?
+With backpropagation, in some cases the weights are simply adjusted proportionally based on the error.
+Or is it just done that way, because then you can better mathematically implement everything (with matrix multiplication and so on)?
+Either my question is stupid or I have an error in my thinking and am assuming wrong ideas. Can someone please help me with the interpretation?
+In TensorFlow Playground, for example, when I cut one of the connections (by setting its weight to 0), the network just compensated by changing the other, still existing, connection a bit more:
+
+"
+['q-learning']," Title: How can I update my Q-table in Python?Body: I want to implement this function on a voice searching application:
+$$
+Q(S, A) \leftarrow Q(S, A)+\alpha\left(R+\gamma Q\left(S^{\prime}, A^{\prime}\right)-Q(S, A)\right)
+$$
+I am also restricted to using an epsilon-greedy policy based on a given Q-function and epsilon. I simply need an $\epsilon$-greedy policy for updating my Q-table.
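+A minimal, tabular NumPy sketch of what I currently have in mind (all names and values are placeholders, not from any particular library):
+import numpy as np
+
+def epsilon_greedy(Q, state, n_actions, epsilon):
+    # pick a random action with probability epsilon, otherwise the greedy one
+    if np.random.uniform() < epsilon:
+        return np.random.randint(n_actions)
+    return int(np.argmax(Q[state]))
+
+def sarsa_update(Q, s, a, r, s_next, a_next, alpha, gamma):
+    # Q(S,A) <- Q(S,A) + alpha * (R + gamma * Q(S',A') - Q(S,A))
+    Q[s, a] += alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])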
+"
+['q-learning']," Title: How can I fetch exploration decay rate of an iterable Q-table in Python?Body: I have done creating the virtual environment, creating the Q-table, initializing the q-parameters, then I made a training module and stored it in a numpy
array. After completion of training, I have updated the q-table and now I get the plots for the explorations But how can I code for rate decay? Here is my sample code for every step of the training module,
+for step in range(max_steps):
+ exploration_rate_threshold = random.uniform(0,1)
+
+ if exploration_rate_threshold > exploration_rate:
+ action = np.argmax(q_table[state,:])
+ else:
+ action = env.action_space.sample()
+
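+Is something like the following exponential decay, applied once per episode, the right idea? (The decay constants here are just placeholder values.)
+import numpy as np
+
+num_episodes = 10000
+max_rate, min_rate, decay_rate = 1.0, 0.01, 0.001   # hypothetical values
+
+for episode in range(num_episodes):
+    # ... run all steps of the episode using the current exploration_rate ...
+    # then decay the exploration rate towards its minimum after the episode
+    exploration_rate = min_rate + (max_rate - min_rate) * np.exp(-decay_rate * episode)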
+"
+"['autoencoders', 'principal-component-analysis']"," Title: Looking for the proper algorithm to compress many lowres images of nearby locationsBody: I have an optimization problem that I'm looking for the right algorithm to solve.
+What I have: A large set of low-res 360 images that were taken on a regular grid within a certain area. Each of these images is quite sparsely sampled, and each 360 image has an accurate XYZ position assigned to its center. There are millions of these small images; clusters of close-by images obviously share a lot of information, while images farther apart can be completely different.
+What I want to do is to compress these small 360 images.
+If two 360 images are close to each other, they can be 'warped' into each other by projecting one onto a sphere of finite distance and then moving that sphere (so a close-by 360 image can be a good approximation of another 360 image when it has been warped that way).
+Based on this idea, I want to compress these small low-res 360 images by replacing each of them with:
+
+- N (N being something like 2-5) indices into an archive of M (M being something like 50-500) different 'prototype' images (of possibly higher resolution than the low-res 360 images), each of which has an XYZ location assigned plus a radius
+- N blend weights
+
+Such that, if I want to reconstruct one of the small, sparsely sampled 360 images, I take the N indices stored for this image, look up the corresponding prototype images from the archive, warp them based on the radius of the archive image and the delta vector between the archive XYZ and the compressed image's XYZ location, and then blend the N prototype images based on the N blend weights (and possibly scale down if the prototype images are higher res)
+I guess this goes into the direction of Eigen Faces, but with Eigen faces each compressed face has a weight stored for each eigen-face, whereas I want that each compressed sphere only has N non-zero weights.
+So my input is:
+a lot of small 360 images plus a XYZ location each
+my output should be:
+
+- an archive of M "prototype" images, each assigned an XYZ location and a projection radius
+- all compressed spheres, with each sphere compressed to N indices and N weights
+
+This seems to be some non-linear least squares problem, but I wonder if someone can point me into the right direction on how to solve this?
+As a completely alternative approach I also looked into spherical harmonics, but with those I only get enough high-frequency details at l=6 which takes 36 coefficients which is too much and also too slow to decompress.
+"
+"['natural-language-processing', 'transformer']"," Title: Using transformer but masking in reverse direction/smart sampling for desired final word?Body: I'm trying to generate rhymes, so it would be very helpful to have a language model where I could input a final word, and have it output a sequence of words that ends with that word.
+I could train my own model and reverse the direction of the mask, but I was wondering if there was any way I could use a pretrained model but apply a different mask to achieve this goal.
+If this isn't possible, what's the best way to sample a forward-predicting model to achieve the highest probability sentence ending with a particular word?
+Thanks!
+"
+"['terminology', 'definitions', 'history', 'academia']"," Title: What is Cognitive Intelligence?Body: Similarly to the question, What is artificial intelligence?
+
+Cognitive Intelligence, as well as being a part of Artificial Intelligence, is an area that mainly covers the technology and tools that allow our apps, websites, and bots to see, hear, speak and understand the needs of the user through natural language
+
+What is the definition of Cognitive Intelligence?
+"
+"['datasets', 'data-preprocessing']"," Title: Multilabel stratified split for images/object detectionBody: I am working on an object detection model and have thought of looking into stratified splits for the dataset.
+Now, since I am doing object detection, I have a variable number of "labels" for every image, because in each image there is a variable number of occurrences of each object I am looking for (car, truck, motorbike, etc.).
+Obviously single-label stratification does not apply.
+From what I understand multi-label stratification is only applicable if there are basically label "features" that we know are always present, which does not seem the case here.
+My question is... is there a way to perform a stratified split in this case, so that each split contains roughly the same number of cars/trucks/bikes/etc.? (Or is it actually going to improve the results at all?)
+"
+"['deep-learning', 'natural-language-processing', 'tensorflow']"," Title: How to train an LSTM with varying length input?Body: I have a dataset where each of the training instances is different in the length and the data is sequential. So, I design an LSTM but I am thinking about how to train the LSTM. In fixed-length data, we just keep all of the input in an array and pass it to the network, but here the case is different. I can not store varying length data in an array and I do not want to use padding to make it fixed length. Now, should I train the LSTM where each training instance are varying in length?
+"
+"['reinforcement-learning', 'q-learning', 'environment', 'testing']"," Title: Strange behavior of Q-learning agent after being trainedBody: I built a simple X*Y grid world environment to learn and then trained my agent over it. All worked fine and the agent learned as well. Let me give some detail about the environment.
+Environment:
+
+- A 4x4 grid world with episode starting at (0,0) and terminal state (3,3)
+- Four actions: Left, Up, Right, Down
+- A reward of -1 for moving from the previous state into a new state, a reward of 0 when reaching the terminal state, and a reward of -2 for bouncing off of the boundary (a rough sketch of this scheme is given below)
+- Epsilon-greedy scheme for action selection.
+
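+A rough sketch of the reward scheme from the bullet points above (my own illustrative reading; the step() signature, the coordinate convention and apply_action() are assumptions, not the actual implementation):
+def step(state, action):
+    # apply_action() is a hypothetical helper returning the proposed next cell for the chosen move.
+    next_state = apply_action(state, action)
+    if not (0 <= next_state[0] <= 3 and 0 <= next_state[1] <= 3):
+        return state, -2, False      # bounced off the boundary of the 4x4 grid
+    if next_state == (3, 3):
+        return next_state, 0, True   # reached the terminal state
+    return next_state, -1, False     # ordinary move into a new state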
+All works fine, and the following are the learning results of the agent.
+
+Later I ran a test run of my TRAINED QL-agent where I used greedy action selection. All I could see in every episode was that my agent starts from (0,0), takes Right to move to (1,0), and then takes Left to move back to (0,0) again, and this goes on and on and on... I checked the Q-table and it makes sense, because the Q-values for these actions justify such behaviour. But this is not what a practical agent should be doing.
+"
+"['reinforcement-learning', 'meta-learning']"," Title: How are mujoco environments used for meta-rl?Body: Afaik, investigating meta reinforcement learning algorithms requires a collection of two or more environments which have similar structure but are still different enough.
+When I read this paper it was unclear to me what the meta-training and meta-testing environments were.
+For example, a graph is given for Ant-Fwd-Bkwd showing its performance against the number of gradient steps. I'm guessing these are the meta-testing performances. So, which environment was it 'meta-trained' on?
+Was it meta-trained on the same Ant-Fwd-Bkwd environment?
+"
+"['natural-language-processing', 'training', 'python', 'natural-language-understanding']"," Title: How to predict the ""word"" based on the meaning in a document?Body: What I mean to say is
+
+- For example, if I give the meaning of Apple from the dictionary as input to the program, it should give output as Apple.
+- Or, if I say 'My day-to-day job involves monitoring and managing the resources', the output should be 'Project management'.
+
+The meaning and the word could be a dictionary or it could be custom. I am looking for ideas and tools to go further on this.
+"
+"['neural-networks', 'comparison', 'definitions']"," Title: What is eager learning and lazy learning?Body: What is the difference between eager learning and lazy learning?
+How does eager learning or lazy learning help me build a neural network system? And how can I use it for any target function?
+"
+"['machine-learning', 'reinforcement-learning', 'keras']"," Title: What is the best way to make a deep reinforcement learning environment with a continuous 2D action space?Body: I understand that the actor-critic method is probably where I want to start because of how it works with continuous action spaces.
+However, the problem I am trying to solve would require the action to be a vector of 11 continuous values. When I go to design my training environment and the architecture of my DRL network, I am not sure how to map a vector of values to the state for the state-action pairs.
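+To make the shapes concrete, here is a minimal sketch of one possible wiring (assuming Keras; state_dim and the layer sizes are illustrative placeholders, not taken from the actual problem):
+from tensorflow import keras
+from tensorflow.keras import layers
+
+state_dim = 8  # illustrative placeholder
+
+# Actor: maps a state to a vector of 11 continuous action values.
+state_in = keras.Input(shape=(state_dim,))
+h = layers.Dense(64, activation='relu')(state_in)
+action_out = layers.Dense(11, activation='tanh')(h)
+actor = keras.Model(state_in, action_out)
+
+# Critic: scores a (state, action) pair by concatenating the two vectors.
+action_in = keras.Input(shape=(11,))
+c = layers.Concatenate()([state_in, action_in])
+c = layers.Dense(64, activation='relu')(c)
+q_out = layers.Dense(1)(c)
+critic = keras.Model([state_in, action_in], q_out)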
+I am trying to use this article as a jumping off point, but I am not sure where to go: https://medium.com/@asteinbach/actor-critic-using-deep-rl-continuous-mountain-car-in-tensorflow-4c1fb2110f7c
+"
+"['clustering', 'dimensionality-reduction']"," Title: How to cluster data points such that the number of clusters is kept minimal and each cluster projects well onto a lower-dimensional subspace?Body: If I want to find a (linear) subspace onto which a data-set projects well, I can simply use PCA. However, often the data can project with much smaller error if I first separate it into a couple of classes and then perform PCA for each class individually. But what If I don't know what kind of classes there might be in my data and into how many classes it would make sense to split the data? What kind of machine learning algorithm can do this well?
+Example:
+
+If I'd just cluster first based on distance in the high-dimensional space, I would arrive at the bad clustering. There are 5 clusters and the green and red clusters don't project very well onto a 2D subspace.
+As a human looking at the data, I see however that if I separate the data as indicated, red and blue will project very well onto a plane each and green will project very well onto a line, so I can run PCA for each group individually.
+How can I automate this clustering based on how well it will project onto as low-dimensional subspaces as possible?
+Something like: minimize $E = \sum_{\text{clusters } c} \left( \sum_{\text{points } x \in c} \| x_{\text{projected}} - x \|^2 \right) \cdot \frac{d_c}{D} + C \cdot k$, where $d_c$ is the dimensionality that cluster $c$ is projected to, $D$ is the original dimensionality, and $k$ is the number of clusters.
+What technique is well suited to do that?
+(edit: while the example shows a 3D space, I'm more interested in doing that in roughly 64-dimensional spaces)
+"
+"['comparison', 'q-learning', 'dqn', 'deep-rl', 'double-dqn']"," Title: What exactly is the advantage of double DQN over DQN?Body: I started looking into the double DQN (DDQN). Apparently, the difference between DDQN and DQN is that in DDQN we use the main value network for action selection and the target network for outputting the Q values.
+However, I don't understand why this would be beneficial, compared to the standard DQN. So, in simple terms, what exactly is the advantage of DDQN over DQN?
+"
+['natural-language-processing']," Title: NLP: What is expected from the output of a perfect coreference system?Body: For instance, consider the following piece of text:
+'The father of Richard is a very nice guy. He was born in a poor family. Because of that, Richard learnt very good values. Richard is also a very nice guy. However, Richard's mother embarrasses the family. She was born rich and she does not know the real value of the money. She did not have to be a hard worker to succeed in life.'
+How should a perfect coreference system work? Is the following the perfect solution?
+Cluster 1:
+'The father of Richard' (first sentence) <-> 'He' (second sentence)
+
+Cluster 2:
+'Richard' (third sentence) <-> 'Richard' (fourth sentence)
+
+Cluster 3:
+'Richard\'s mother' (fifth sentence) <-> She (sixth sentence) <-> she (sixth sentence) <-> She (seventh sentence)
+
+If I use the coreference library of spacy (neuralcoref), I get these clusters:
+Clusters:
+[Richard: [The father of Richard, He, Richard, Richard, Richard], a poor family: [a poor family, the family], Richard's mother: [Richard's mother, She, she, She]]
+
+Note that this output says that "Richard" is the same across sentences, which is true. However, "He" in the second sentence is not related to "Richard", but to his father. Also, "Richard" and "The father of Richard" are together in the same cluster. Furthermore, "poor family" and "family" should not come together. However, this is really difficult, since in this case there is some level of ambiguity.
+I know that this is a very difficult problem. The point is not to criticize this fantastic library. I am just trying to understand what I should expect as a perfect result.
+If I change the text a little:
+'The mother of Richard is a very nice woman. She was born in a poor family. Because of that, Richard learnt very good values. Richard is a very nice guy. However, Richard's father embarrasses the family. He was born rich and he does not know the real value of the money. He did not have to be a hard worker to succeed in life.'
+The clusters are:
+[Richard: [The mother of Richard, She, Richard, Richard, Richard, He, he, He], a poor family: [a poor family, the family]]
+
+In this case, the clusters become stranger, since "She" and "Richard" are in the same cluster. Furthermore, the "He" related to the "father of Richard" belongs to the cluster, but not "Richard's father".
+So, my question is:
+What is the perfect result that I should expect from a "perfect" coreference system?
+"
+"['reinforcement-learning', 'proofs', 'policy-iteration', 'bellman-equations', 'bellman-operators']"," Title: Why are the Bellman operators contractions?Body: In these slides, it is written
+\begin{align}
+\left\|T^{\pi} V-T^{\pi} U\right\|_{\infty} & \leq \gamma\|V-U\|_{\infty} \tag{9} \label{9} \\
+\|T V-T U\|_{\infty} & \leq \gamma\|V-U\|_{\infty} \tag{10} \label{10}
+\end{align}
+where
+
+- $\mathbb{F}$ is the space of functions on domain $\mathbb{S}$.
+- $T^{\pi}: \mathbb{F} \mapsto \mathbb{F}$ is the Bellman
+policy operator
+- $T: \mathbb{F} \mapsto \mathbb{F}$ is the Bellman
+optimality operator
+
+In slide 19, they say that inequality \ref{9} follows from
+\begin{align}
+{\scriptsize
+\left\|
+T^{\pi} V-T^{\pi} U
+\right\|_{\infty}
+=
+\max_{s} \gamma \sum_{s^{\prime}} \operatorname{Pr}
+\left(
+s^{\prime} \mid s, \pi(s)
+\right)
+\left|
+V\left(s^{\prime}\right) - U
+\left(s^{\prime}\right)
+\right| \\
+\leq \gamma \left(\sum_{s^{\prime}} \operatorname{Pr} \left(s^{\prime} \mid s, \pi(s)\right)\right) \max _{s^{\prime}}\left|V\left(s^{\prime}\right)-U\left(s^{\prime}\right)\right| \\
+\leq \gamma\|U-V\|_{\infty}
+}
+\end{align}
+Why is that? Can someone explain to me this derivation?
+They also write that inequality \ref{10} follows from
+\begin{align}
+{\scriptsize
+\|T V-T U\|_{\infty}
+= \max_{s}
+\left|
+\max_{a}
+\left\{
+R(s, a) + \gamma \sum_{s^{\prime}} \operatorname{Pr}
+\left(
+s^{\prime} \mid s, a
+\right) V
+\left(
+s^{\prime}
+\right)
+\right\}
+-\max_{a} \left\{R(s, a)+\gamma \sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, a\right) U\left(s^{\prime}\right)\right\} \right| \\
+\leq \max _{s, a}\left|R(s, a)+\gamma \sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, a\right) V\left(s^{\prime}\right)
+-R(s, a)-\gamma \sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, a\right) U\left(s^{\prime}\right) \right| \\
+=
+\gamma \max _{s, a}\left|\sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, a\right)\left(V\left(s^{\prime}\right)-U\left(s^{\prime}\right)\right)\right| \\
+\leq \gamma\left(\sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, a\right)\right) \max _{s^{\prime}}\left|\left(V\left(s^{\prime}\right)-U\left(s^{\prime}\right)\right)\right| \\
+\leq
+\gamma\|V-U\|_{\infty}
+}
+\end{align}
+Can someone explain to me also this derivation?
+"
+"['deep-learning', 'terminology', 'history']"," Title: Who first coined the term ""deep learning""?Body: AFAIK, deep learning became popular in 2012 with the victory of ImageNet Competition - Large Scale Visual Recognition Challenge 2012 where winners of this contest actually used deep learning techniques for optimizing the solution for object recognition.
+Who first coined the term deep learning? Is there any published research paper that first used that term?
+"
+"['objective-functions', 'optimization', 'logistic-regression', 'neural-networks']"," Title: How could logistic loss be used as loss function for an ANN?Body: Normally, in practice, people use those loss functions with minima, e.g. $L_1$ mean absolute loss, $L_2$ mean squared error, etc. All those come with a minimum to optimize to.
+
+However, there's another loss I'm reading about, the logistic loss, and I don't get why the logistic function can be used as a loss function, given that it has its so-called minimum at infinity, which isn't a normal minimum. Logistic loss function (black curve):
+
+How can an optimizer minimize the logistic loss?
+"
+"['reinforcement-learning', 'dqn', 'deep-rl']"," Title: What happens if our target network overestimates the value?Body: When we use DDQN, we often use the target network in case our online network overestimates a value, but this doesn't make sense to me, because
+
+- What happens if our target network is the one that overestimated a value? Then we'd keep using that overestimated value.
+
+- Why can't we use the target network for both selection and evaluation?
+
+
+"
+"['neural-networks', 'deep-learning', 'image-recognition', 'research']"," Title: Is it a good idea to train a neural network to classify images without base-hypothesis?Body: I'm a relative beginner in deep learning (understand by that, I'm doing my first Kaggle competition right now, and I have loads to learn still) and I was just wondering something.
+Let's say you have pathology/biopsy tissue images from patients dying from a disease A and patients dying from other causes (whatever causes actually but not related to disease A).
+To date, I think we can say that nobody really knows what, at the level of a biopsy, causes disease A.
+My idea, as my group could have actually a lot of these biopsies for both groups, would be to use them to fuel a neural network.
+Why would I do that? Biopsy images are rather complex images, and maybe some fine details are hard for a human being to spot, or maybe it is the sum of several details that actually matters for telling whether disease A kills the patient or not. But again, I don't think anybody could come and say: on those tissue biopsies, the sign(s) for disease A are x, y, z.
+My question then becomes a bit more theoretical: given that you have enough data to give the algorithm a chance to find differences, is it a good idea to train a neural network without actually having any idea of what could differentiate the two groups?
+Do you know examples of such a strategy? How hard is it afterwards - in the case of a rather good accuracy - to understand what makes it so recognisable?
+"
+"['machine-learning', 'deep-learning']"," Title: Is gradient descent scale invariant or not?Body: I know we should scale the input and output (assuming regression task) before we feed it to the neural network. Then the gradient descent will give the better minima much faster. But I have subtle confusion whether gradient descent with feature scale and without feature scale gives the same result or just gradient descent is not scale-invariant.
+"
+"['machine-learning', 'supervised-learning', 'dimensionality']"," Title: Finding whether an input column is missingBody: I am working on a problem similar to this one:(supervised, artificial data)
+import numpy as np
+
+# Five candidate input features, sampled uniformly at random.
+x = np.random.uniform(-100, 100, 10000)
+y = np.random.uniform(-100, 100, 10000)
+z = np.random.uniform(-100, 100, 10000)
+a = np.random.uniform(-100, 100, 10000)
+b = np.random.uniform(-100, 100, 10000)
+
+# Target: a nonlinear function of all five features.
+i = x**2/1000 + 2*x*y/1000 + 4*np.sqrt(abs(z)) + z*a + 4*a + abs(b)**1.5 - 1/((b+1)**2) * a + 10*y
+
+Since I am not creating the data myself, I want to make sure that my customer provided all the relevant input features. Is there a way to find out whether the input is complete and not lacking a feature, say "a"? Obviously, if the input is the same and the output differs, that would be evidence of missing data, but it isn't guaranteed that any two input samples are the same. Another way I thought of would be to use an autoencoder to find the dimension of the dataset (including the output) and hope it is exactly the input dimension, but in my case it is also possible that there are redundant features. Is there any other way to check whether a function is computable from the given inputs?
+"
+"['convolutional-neural-networks', 'image-processing']"," Title: How to choice CNN architecture for stitching imagesBody: I decided to start learning neural networks by creating a bot for the game. One of the intermediate steps is to create a global map from a series of inaccurate overlapping sub-maps. This task can be solved using OpenCV, but this solution will be too limited and sensitive (in the future I intend to complicate the task and work directly with the map image, instead of binary masks).
+I've tried the following options:
+
+- predict the position of a new map area within the global map. (as a probability distribution)
+
+- predict the new state of the global map from the old and new minimap.
+
+
+I've tried a lot of options for formulating the problem and the network architecture, including the idea of conjoined networks, but nothing gave any relevant results.
+Some articles about solving similar problems:
+
+Here is an example of one possible statement of the problem:
+
+"
+"['deep-learning', 'reference-request', 'human-like']"," Title: Are there fundamental learning theories for developing an AI that imitates human behavior?Body: Most, if not all, AI systems do not imitate humans. Some of them out-perform humans. Examples include using AI to play a game, classification problems, auto-driving, and goal-oriented chatbots. Those tasks usually come with an easily and clearly defined value function, which is the objective function for the AI to optimize.
+My question is: how is deep reinforcement learning, or related techniques, to be applied to an AI system that is designed to just imitate humans but not outperform humans?
+Note this is different from a human-like system. Our objective here is to let the AI become a human rather than a superintelligence. For example, if a human consistently makes a mistake in image identification, then the AI system must also make the same mistake. Another example is the classic chatbot to pass the Turing test.
+Is deep reinforcement learning useful in these kinds of tasks?
+I find it is really hard to start with because the value function cannot be easily calculated.
+What is some theory behind this?
+"
+"['deep-learning', 'convolutional-neural-networks', 'training']"," Title: How to handle images that don’t pertain to image classifier at all?Body: I am trying to create a CNN model that classifies if a person is wearing a seatbelt or not to verify they drive safely. I know to get images of people wearing seatbelts and people not wearing seatbelts, but I have a problem.
+What if the person doesn’t submit a picture of them in a car at all? How do I construct the rest of the dataset to determine if that picture is an actual picture of a person wearing a seatbelt?
+Do I insert completely random pictures in a different category? Do I classify images that don’t have a high confidence score as "wrong" images? Or leave it?
+"
+"['genetic-algorithms', 'genetic-operators', 'selection-operators']"," Title: How to avoid running out of solutions in genetic algorithm due to selection?Body: The genetic algorithm consists of 5 phases of which 4 are repeated:
+
+- Initial population (initially)
+- Fitness function
+- Selection
+- Crossover
+- Mutation
+
+In the selection phase, the number of solutions decreases. How do we avoid running out of population members before reaching a suitable solution?
+"
+"['neural-networks', 'comparison', 'decision-trees', 'random-forests', 'gradient-boosting']"," Title: What are some applications where tree models perform better than neural networks?Body: Neural networks are known to be generally better modeling techniques as compared to tree-based models (such as decision trees). Are there any exceptions to this?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'natural-language-processing', 'recurrent-neural-networks']"," Title: How to use text as an input for a neural network - regression problem? How many likes/claps an article will getBody: I am trying to predict the number of likes an article or a post will get using a NN.
+I have a dataframe with ~70,000 rows and 2 columns: "text" (predictor - strings of text) and "likes" (target - continuous int variable). I've been reading on the approaches that are taken in NLP problems, but I feel somewhat lost as to what the input for the NN should look like.
+Here is what I did so far:
+
+- Text cleaning: removing html tags, stop words, punctuation, etc...
+- Lower-casing the text column
+- Tokenization
+- Lemmatization
+- Stemming
+
+I assigned the results to a new column, so now I have "clean_text" column with all the above applied to it. However, I'm not sure how to proceed.
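+(For reference, the kind of cleaning pipeline described above might look roughly like the sketch below; this assumes NLTK and is only an illustration of the steps already listed, the exact libraries and order used may differ, and the NLTK corpora need to be downloaded separately:)
+import re
+import nltk
+from nltk.corpus import stopwords
+from nltk.stem import WordNetLemmatizer, PorterStemmer
+
+stop_words = set(stopwords.words('english'))
+lemmatizer = WordNetLemmatizer()
+stemmer = PorterStemmer()
+
+def clean(text):
+    text = re.sub(r'<[^>]+>', ' ', text)           # strip HTML tags
+    text = re.sub(r'[^a-z\s]', ' ', text.lower())  # lower-case and drop punctuation
+    tokens = [t for t in nltk.word_tokenize(text) if t not in stop_words]
+    return [stemmer.stem(lemmatizer.lemmatize(t)) for t in tokens]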
+In most NLP problems, I have noticed that people use word embeddings, but from what I have understood, it's a method used when attempting to predict the next word in a text. Learning word embeddings creates vectors for words that are similar to each other syntax-wise, and I fail to see how that can be used to derive the weight/impact of each word on the target variable in my case.
+In addition, when I tried to generate a word embedding model using the Gensim library, it resulted in more than 50k words, which I think will make it too difficult or even impossible to onehot encode. Even then, I will have to one hot encode each row and then create a padding for all the rows to be of similar length to feed the NN model, but the length of each row in the new column I created "clean_text" varies significantly, so it will result in very big onehot encoded matrices that are kind of redundant.
+Am I approaching this completely wrong? and what should I do?
+"
+"['reinforcement-learning', 'policy-gradients', 'pytorch']"," Title: Is it possible to have a fixed trajectory size in the vanilla policy gradient algorithm?Body: In the concept of the vanilla policy gradient algorithm, is it possible for our trajectory size to be fixed?
+For example, my environment is the space of embedded images (using a pre-trained encoder to take images in a space with smaller dimensions), the action I am performing is clustering via k-means algorithm and the reward is the silhouette score metric applied on the clustered images.
+I am thinking of using batches of size 100 (the dataset is MNIST and its train set size is 60,000), then taking their mean and considering this as one observation. This observation is fed into the policy network, which gives me its logits, an array of size 20 (20 discrete actions). These actions tell me the number of clusters k for the k-means clustering algorithm. One value of k is sampled, the k-means algorithm is applied to these 100 images, and then the reward is calculated.
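+A rough sketch of that single observation -> action -> reward step (assuming scikit-learn and SciPy; encoder, policy and batch_images are placeholders for the components described above, and mapping the sampled action index to k by adding 2 is my own assumption so that k >= 2):
+import numpy as np
+from scipy.special import softmax
+from sklearn.cluster import KMeans
+from sklearn.metrics import silhouette_score
+
+embedded = encoder(batch_images)        # (100, latent_dim) embeddings from the pre-trained encoder
+observation = embedded.mean(axis=0)     # one observation = the mean of the 100 embeddings
+
+logits = policy(observation)            # 20 logits, one per discrete choice of k
+action = np.random.choice(len(logits), p=softmax(logits))
+k = action + 2                          # illustrative mapping from action index to cluster count
+
+labels = KMeans(n_clusters=k).fit_predict(embedded)
+reward = silhouette_score(embedded, labels)   # silhouette score of the clustering, used as the reward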
+I can set a constant number for the trajectory size, for example 20, and sum the rewards to get R(trajectory). Is this possible in the context of RL and policy gradients, or can the trajectory's size not be fixed? Also, the action that our policy gives us should lead us to the next observation in the environment, but here the images are independent of the policy network's parameters.
+I wonder if I can utilize RL to implement this. I appreciate any hints.
+"
+"['deep-learning', 'deep-rl', 'multi-agent-systems', 'alphazero']"," Title: Can AlphaZero considered as Multi-Agent Deep Reinforcement Learning?Body: Can AlphaZero considered as Multi-Agent Deep Reinforcement Learning?
+I could not find a clear answer on this. I would say yes it is Multi Agent Learning, as there are two Agents playing against each other.
+"
+"['reinforcement-learning', 'self-play']"," Title: What does self-play in reinforcement learning lead to?Body:
+Suppose, instead of playing against a random opponent, the reinforcement learning algorithm described above played against itself, with both sides learning. What do you think would happen in this case? Would it learn a different policy for selecting moves?
+
+Above is an extract from Reinforcement Learning: An Introduction by Andrew Barto and Richard S. Sutton, and I wasn't quite sure about what the answer to the question would be, so thought of posting it here. The algorithm being referred to is the one for playing the game tic-tac-toe.
+In my opinion, if the same algorithm plays both sides, it may end up assisting itself to win every time - and not really learn anything. What do you think?
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'deep-rl', 'convergence']"," Title: How can deep Q-learning converge if the targets may not be correct?Body: In deep Q-learning, $Q(s, a)$ and $Q'(s, a)$ are predicted or estimated by the neural network itself. In supervised learning, the target value is a true unbiased value. However, this isn't the case in reinforcement learning. So, how can we be sure that deep Q-learning converges? How do we know that the target Q values are accurate?
+"
+"['neural-networks', 'training']"," Title: How to input dataset with multi-value properties?Body: I'm trying to learn to use AI, and so I've followed some basic tutorials like training an MLP to predict the price of a car given properties like its age and manufacturer. Now I want to see if I can do it myself, so I thought it'd be fun to predict what score I would give a movie given some data scraped off IMDB.
+I immediately got stuck, because how do you deal with the cast? A single property with multiple values, where a particular actor may impact the final score (or a combination of actors - that's for the neurons to suss out).
+I haven't found a way to do this when googling, but it may just be that I'm unfamiliar with the terminology. Or have I accidentally chosen a really difficult problem?
+Note that I'm completely new to all of this, so if you have suggestions, please try to put it as simply as possible.
+"
+"['agi', 'singularity', 'kolmogorov-complexity']"," Title: Is increasing software complexity the most likely bottleneck to the AI singularity?Body: From Wikipedia:
+
+According to the most popular version of the singularity hypothesis, called intelligence explosion, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.
+
+But what if the complexity of the problem of self-improving the software grows at a faster rate than the AGI intelligence self-improvement?
+From experience we know that problems tend to be harder to solve at every iteration, with diminishing returns. Take as an example the theory of gravitation. Newtonian physics is relatively easy to formulate and covers the majority of high-level gravitational phenomena.
+A more refined picture, like General Relativity, fills a few holes in the theory at the cost of a huge increase in complexity. To describe black holes and primordial cosmology we need a theory of quantum gravity, which appears to require a further step in complexity.
+What "saved" us so far is the economic growth of our civilisation, which allowed more and more scientists to focus on solving the next problems. It's true that the first AGI will have the luxury of being duplicated, being a software-based intelligence, but at the same time it is likely that the first AGI will be extremely compute-intensive. But even assuming that we (or maybe it would be better to say, they) have the hardware to run $10^{2}$ instances, if the complexity of every substantial self-improvement grows say by $10^{d}$x with $d=3$ while the improvement to the intelligence is only $10^{l}$x with $l=1$, the self-improvement cycle will quickly slow down.
+So is increasing software complexity the most likely bottleneck to the AI singularity? And what are likely values for $d$ and $l$?
+"
+"['game-ai', 'markov-decision-process', 'game-theory', 'pomdp', 'tic-tac-toe']"," Title: Why is tic-tac-toe considered a non-deterministic environment?Body: I have been reading about deterministic and stochastic environments, when I came up with an article that states that tic-tac-toe is a non-deterministic environment.
+But why is that?
+An action will lead to a known state of the game and an agent has full knowledge of the board and of its enemies' past moves.
+"
+"['neural-networks', 'deep-learning', 'recurrent-neural-networks', 'time-series', 'feature-engineering']"," Title: What are some solutions for dealing with time series data that are recorded at uneven intervals?Body: Let's say I have a time series data which is a bunch of observations that occur at different time stamps and intervals. For example, my observations come from a camera located at a traffic intersection. It only records when something occurs, like a car passes, a pedestrian crosses, etc... But otherwise doesn't record information.
+I want to produce an LSTM NN (or some other memory-based NN for time-series data), but since my features don't occur at even time intervals, I am not sure how having a memory would help. For example, let's consider two sequences:
+Scenario 1:
+
+- At 1PM, I recorded a car passing.
+- At 1:05 PM, some people cross.
+- At 1:50 PM, some more people cross.
+- At 2PM, another car passes.
+
+Scenario 2:
+
+- At 1 PM a car passes
+- At 2 PM a car passes
+
+In the first scenario, the last car passed 3 observations ago. In the second scenario, the last car passed 1 observation ago. Yet in both scenarios, the last car passed 1 hour ago. I am afraid that any model would treat the last car passing in scenario 1 as 4 time periods ago and the last car passing in scenario 2 as 1 time period ago, even though in reality, the time difference is the same. My hypothesis is that the time difference is a very important feature, probably more so than the intermediate activity between the two cars passing. In other words, knowing that the last car passed 1 hour ago is equally or likely more important than knowing that there were some people crossing in the last hour. With that said, knowing that people crossed is important too, so I can't just remove that feature.
+Another example of my issue can be seen below:
+Scenario 1
+
+- 1PM Car passes
+- 2PM Car passes
+
+Scenario 2
+
+- 1PM Car passes
+- 10PM Car passes
+
+Once again, in my data set, this would be treated as adjacent observations, but in reality, the time gap is vastly different and thus, the two scenarios should be viewed as very dissimilar.
+What are some ways to solve these issues?
+
+- I've thought of just expanding the data set by creating a row for every possible time stamp, but I don't think this is the right choice as it would make my dataset humongous and most rows would have 0s across the board. I have observations that occur in microseconds so it would just become a very sparse dataset.
+- It would be nice to include the time difference as a feature, but I am not sure if there's a way to include a dynamic feature in your data set. For example, in the first scenario, at 1:05 PM, the 1PM observation needs a feature that says it occurred 5 minutes ago. But at 1:50 PM, that feature needs to be changed to say it occurred 50 minutes ago, and then at 2PM, that feature needs to say that it occurred 1 hour ago.
+- Would the solution be to just give the NN the raw data and not worry about it? When building a NN for word prediction, I guess if you give the model enough data, it'll learn the language even if the relevant word happened 10 paragraphs ago... However, I am not sure if there are enough examples of the exact same sequences (even with the amount of data) for it to obtain the predictability I want.
+
+Any ideas on ways to solve this problem while keeping in mind that the goal is to build a NN? Another way to think about it is, the time when a data point occurred relative to when the prediction will be made, in my situation, is a crucial piece of information for prediction.
+"
+"['deep-learning', 'computer-vision', 'graph-theory', 'probabilistic-graphical-models']"," Title: How to make sense of label propagation formula in graph neural networks?Body: In the label propagation algorithm in section 3.2.3, we know the label of some nodes and we want to predict the label for the rest of the nodes whose labels we don't know. The update formula for this is the following:
+$$F(t+1) = \alpha SF(t) + (1-\alpha)Y $$
+where $F(t)$ is the predicted label matrix at timestep $t$, $S$ can be considered as an adjacency matrix, and $Y$ is the label matrix for both the unlabeled and the labeled data. In the case of labeled data, we initialize $Y$ with the ground truth, and for the unlabeled data, we randomly initialize their labels and assign them to $Y$.
+Now, the most problematic part, I think, is the $Y$ matrix. Since I do not know the labels of some nodes, we initialize them with some random values and keep $Y$ constant throughout this iterative process.
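+For reference, a minimal NumPy sketch of the iterative update written above (S, Y and alpha are assumed to be given as arrays/scalars; num_iterations is an illustrative stopping criterion, and this is just the formula transcribed):
+import numpy as np
+
+F = Y.copy()                      # start the iteration from the initial label matrix
+for _ in range(num_iterations):
+    F = alpha * (S @ F) + (1 - alpha) * Y   # F(t+1) = alpha * S F(t) + (1 - alpha) * Y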
+We can calculate the optimal value of F directly using:
+$$F^{*} = (I - \alpha S)^{-1}Y$$
+But my question is: if we keep $Y$ constant (assigning random numbers to unknown nodes as labels), what kind of sense does that make?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'python', 'audio-processing']"," Title: How can I find a specific word in an audio file?Body: I'm trying to train and use a neural network to detect a specific word in an audio file. The input of the neural network is an audio of 2-3 seconds duration, and the neural network must determine whether the input audio (the voice of a person) contains the word "hello" or not.
+I do not know what kind of network to use. I used the SOM network, but I did not get the desired result. My training data contains a large number of voices that contain the word "hello".
+Is there any Python code for this problem?
+"
+"['neural-networks', 'objective-functions']"," Title: Why would the loss increase on a single fixed input?Body: I'm training a neural network on some input data. I know that loss increasing may be related to:
+
+- overfitting, if the loss increases on test data (while still decreases on training data)
+- oscillations near the optimal point, if the learning rate is too big
+
+However, I find that, while for some input data the net makes good predictions, for other data the loss continues to increase, even if I only train on one data point and the learning rate is fairly low. To me, it's quite strange that, when training on only one point, the loss continues to increase rather than decrease; in fact, the only reason I can find for this is a large learning rate.
+Can you think about some other reason?
+"
+"['reinforcement-learning', 'reward-design', 'reward-functions', 'reward-shaping', 'inverse-rl']"," Title: What are some best practices when trying to design a reward function?Body: Generally speaking, is there a best-practice procedure to follow when trying to define a reward function for a reinforcement-learning agent? What common pitfalls are there when defining the reward function, and how should you avoid them? What information from your problem should you take into consideration when going about it?
+Let us presume that our environment is a fully observable MDP.
+"
+"['python', 'time-series', 'hidden-markov-model']"," Title: How to deal with very, very small time-series?Body: I have an ensemble of 231 time series, the largest among them being 14 lines long. The task at hand is to try to predict these time-series. But I'm finding this difficult due to the very small size of the data. Any suggestions about what algorithm to use? I'm thinking about going for a hidden markov model, but I don't know if that's a wise choice.
+"
+"['reinforcement-learning', 'comparison', 'long-short-term-memory', 'actor-critic-methods', 'a3c']"," Title: When past states contain useful information, does A3C perform better than TD3, given that TD3 does not use an LSTM?Body: I am trying to build an AI that needs to have some information about the past states as well. Therefore, LSTMs are suitable for this.
+Now, I want to know that for a problem/game like Breakout, where we require previous states as well, does A3C perform better than TD3, given that TD3 does not have LSTM?
+Or without the LSTM TD3 should perform better than A3C despite the fact that A3C has LSTM in it.
+"
+"['reinforcement-learning', 'policy-gradients', 'policy-gradient-theorem', 'deterministic-pg-theorem', 'theorems']"," Title: Why do the standard and deterministic Policy Gradient Theorems differ in their treatment of the derivatives of $R$ and the conditional probability?Body: I would like to understand the difference between the standard policy gradient theorem and the deterministic policy gradient theorem. These two theorem are quite different, although the only difference is whether the policy function is deterministic or stochastic. I summarized the relevant steps of the theorems below. The policy function is $\pi$ which has parameters $\theta$.
+Standard Policy Gradient
+$$
+\begin{aligned}
+\dfrac{\partial V}{\partial \theta} &= \dfrac{\partial}{\partial \theta} \left[ \sum_a \pi(a|s) Q(a,s) \right] \\
+&= \sum_a \left[ \dfrac{\partial \pi(a|s)}{\partial \theta} Q(a,s) + \pi(a|s) \dfrac{\partial Q(a,s)}{\partial \theta} \right] \\
+&= \sum_a \left[ \dfrac{\partial \pi(a|s)}{\partial \theta} Q(a,s) + \pi(a|s) \dfrac{\partial}{\partial \theta} \left[ R + \sum_{s'} \gamma p(s'|s,a) V(s') \right] \right] \\
+&= \sum_a \left[ \dfrac{\partial \pi(a|s)}{\partial \theta} Q(a,s) + \pi(a|s) \gamma \sum_{s'} p(s'|s,a) \dfrac{\partial V(s') }{\partial \theta} \right]
+\end{aligned}
+$$
+When one now expands next period's value function $V(s')$ again one can eventually reach the final policy gradient:
+$$
+\dfrac{\partial J}{\partial \theta} = \sum_s \rho(s) \sum_a \dfrac{\partial \pi(a|s)}{\partial \theta} Q(s,a)
+$$
+with $\rho$ being the stationary distribution. What I find particularly interesting is that there is no derivative of $R$ with respect to $\theta$ and also not of the probability distribution $p(s'|s,a)$ with respect to $\theta$. The derivation of the deterministic policy gradient theorem is different:
+Deterministic Policy Gradient Theorem
+$$
+\begin{aligned}
+\dfrac{\partial V}{\partial \theta} &= \dfrac{\partial}{\partial \theta} Q(\pi(s),s) \\
+&= \dfrac{\partial}{\partial \theta} \left[ R(s, \pi(s)) + \gamma \sum_{s'} p(s'|s,\pi(s)) V(s') \right] \\
+&= \dfrac{\partial R(s, a)}{\partial a}\dfrac{\partial \pi(s)}{\partial \theta} + \dfrac{\partial}{\partial \theta} \left[\gamma \sum_{s'} p(s'|s,\pi(s)) V(s') \right] \\
+&= \dfrac{\partial R(s, a)}{\partial a}\dfrac{\partial \pi(s)}{\partial \theta} + \gamma \sum_{s'} \left[p(s'|s,\pi(s)) \dfrac{\partial V(s')}{\partial \theta} + \dfrac{\partial \pi(s)}{\partial \theta} \dfrac{\partial p(s'|s,a)}{\partial a} V(s') \right] \\
+&= \dfrac{\partial \pi(s)}{\partial \theta} \dfrac{\partial}{\partial a} \left[ R(s, a) + \gamma \sum_{s'} p(s'|s,a) V(s') \right] + \gamma \sum_{s'} p(s'|s,\pi(s)) \dfrac{\partial V(s')}{\partial \theta} \\
+&= \dfrac{\partial \pi(s)}{\partial \theta} \dfrac{\partial Q(s, a)}{\partial a} + \gamma \sum_{s'} p(s'|s,\pi(s)) \dfrac{\partial V(s')}{\partial \theta} \\
+\end{aligned}
+$$
+Again, one can obtain the final policy gradient by expanding next period's value function. The policy gradient is:
+$$
+\dfrac{\partial J}{\partial \theta} = \sum_s \rho(s) \dfrac{\partial \pi(s)}{\partial \theta} \dfrac{\partial Q(s,a)}{\partial a}
+$$
+In contrast to the standard policy gradient, the equations contain derivatives of the reward function $R$ and the conditional probability $p(s'|s, a)$ with respect to $a$.
+Question
+Why do the two theorems differ in their treatment of the derivatives of $R$ and the conditional probability? Does determinism in the policy function make such a difference for the derivatives?
+"
+"['deep-learning', 'datasets', 'supervised-learning', 'self-supervised-learning', 'representation-learning']"," Title: How to generate labels for self-supervised training?Body: I've been reading a lot lately about self-supervised learning and I didn't understand very well how to generate the desired label for a given image.
+Let's say that I have an image classification task, and I have very little labeled data.
+How can I generate the target label from the other data in the dataset?
+"
+"['prediction', 'regression', 'linear-regression', 'non-linear-regression']"," Title: What ML algorithm should I use that suits this data?Body: What if I have some data, let's say I'm trying to answer if education level and IQ affect earnings, and I want to analyze this data and put in a regression model to predict earnings based on the IQ and education level. My confusion is, what if the data is not linear or polynomial? What if it's a mess but there are still patterns that the linear plane algorithm can't capture? How do I figure out if plotting all of the independent variables will form a line or a polynomial curve like here?
+
+I mean, with one dependent and one independent variable it's easy because you can plot it and see, but in a situation with multiple independent variables... how do I figure out if the relationship is linear or something like this? How do I figure out if I should use a regression model?
+Let's say I want to predict a store's daily revenue based on the day of the week, weather and the number of people arrived in the city. My data would look something like this:
++-----------+---------+----------------+---------+
+| DAY | WEATHER | PEOPLE ARRIVED | REVENUE |
++-----------+---------+----------------+---------+
+| Monday | Sunny | 1115 | $500 |
++-----------+---------+----------------+---------+
+| Tuesday | Cloudy | 808 | $250 |
++-----------+---------+----------------+---------+
+| Wednesday | Sunny | 450 | $300 |
++-----------+---------+----------------+---------+
+
+I'm a bit confused about what ML algorithm I should use in such a scenario. I can represent the days of the week as (Monday - 1, Tuesday - 2, Wednesday - 3, etc.) and the weather as (Sunny - 1, Cloudy - 2, Normal - 3, etc.), but would a regression model work? I'm skeptical, because I'm not sure if there's a linear relationship between the variables, and I'm not sure if a hyperplane can create an accurate representation of what's going on.
+"
+"['machine-learning', 'video-classification', 'games-of-chance']"," Title: Can I use ML to discover via videos the best place to shoot in foosball?Body: I am a programmer, but just now attempting to enter the world of ML. I'm eyeballing a potential project/problem related to foosball.
+Pro foosball is a thing believe it or not and I'm wondering if I can use decades worth of game footage to determine where defensive holes are most likely to be.
+The way shooting works in pro foosball is that you front-pin the ball and walk it back and forth in front of the goal. The defense, meanwhile, is attempting to randomly move two men in front of the goal. Of course, our human brains are not truly random, and this is what I'd like ML to help me understand and exploit.
+Questions like, if I walk the ball left, then step right, historically where is the open hole likely to be?
+If you'd like to better understand the nature of shooting, here is video of pro foosball: https://www.youtube.com/watch?v=uOdnqmwOQhA&t=16s
+So what ML topics should I research and what strategies and tools do you recommend for creating such a model?
+"
+"['convolutional-neural-networks', 'computer-vision', 'object-detection', 'r-cnn', 'selective-search']"," Title: Does the selective search algorithm in object detection learn?Body: I am trying to get a better grasp of how object detection works. I (almost) completely understand the concept behind RPNs. However, I am a little bit confused with the selective search algorithm part. This algorithm does not really learn anything, as far as I understand.
+So, for example, when I have an image containing people (even though my network does not need to classify these), will the selective search still propose these people to my network?
+Of course, my CNN has not learned to classify a human and will output a very low probability for every class it did learn, and thus the human will not get a bounding box.
+Also, in further iterations of the R-CNN model, they proposed using regressors to improve the bounding box.
+Does this mean that this part of the model got the CNN's feature maps and, based on this, learned to output a bounding box (this way, smaller instances of a detected object would get a smaller bounding box)?
+So, in this first iteration, they probably did not need bounding boxes in the training data (since there was no way to learn the size of the bounding boxes and thus no need to find a loss function for this problem)?
+Lastly, I understand that the selective search algorithm is an improvement on the sliding window algorithm. It tries to have a high recall, so having false positives is not bad, as long as we have all the true positives. Again, I do not seem to understand HOW this algorithm knows when it has the object it needs without really learning. Any intuitive explanation or visual (I am primarily a visual learner) of how this algorithm works is greatly appreciated.
+"
+"['machine-learning', 'classification']"," Title: How do I classify whether a document is legal or not given a set of keywords that appear only in legal documents?Body: Let's say that I want to classify whether a document is a legal document or not. I have a list of keywords that will be presented only in legal documents.
+What is the proper way or algorithm to calculate probability based on this list?
+"
+"['gpu', 'gpt']"," Title: How much computing power does it cost to run GPT-3?Body: I know it cost around $4.3 million dollars to train, but how much computing power does it cost to run the finished program? IBM Watson chatbot AI only costs a few cents per chat message to use, OpeenAI Five seemed to run on a single gaming PC setup. So I'm wondering how much computing power does it need to run the finished ai program.
+"
+"['deep-learning', 'tensorflow', 'keras', 'recurrent-neural-networks', 'accuracy']"," Title: Keras model accuracy not improving beyond thresholdBody: I am currently working on a public project for the National Weather Model. We are experimenting with using a recurrent neural network to replace the output of a quadratic formula that is in use. The aim of the experiment is to get a speedup in the computation by using a neural network to essentially mimic the output of the quadratic formula. We have achieved an accuracy of about +-.02 but would like to see that improve to +-.001 or so in order to make the outputs indiscernible from a usage standpoint. Despite changing or increasing the training data size, validation data size, number of layers, size of layers, optimizer, batch size, epoch number, normalizations, etc. we cannot seem to move past this level of accuracy. We have changed and tested every standard metric we can find on how to improve the model, but nothing improves the accuracy beyond that threshold.
+The main question we have is whether or not Keras is rounding at some point between each layer or has some limiting factor on the backend limiting the model's significant figures in the output. The training data resolution should allow for a finer level of accuracy, but as stated before, any changes made the model cannot improve past what has been achieved. Any insight on what is holding the model back would be greatly appreciated and could help with applying this method elsewhere. The Github has a readme file explaining what is occurring in each file and how to run the model as this is still a work in progress. I would be happy to dive deeper into any aspect of the model as well.
+https://github.com/NOAA-OWP/t-route/tree/testing/src/lookup_routing
+"
+"['neural-networks', 'python', 'keras', 'recurrent-neural-networks']"," Title: Is there a neural network that accepts both the current input and previous output?Body: I am quite new to neural networks. I am trying to implement in Python a neural network having only one hidden layer with $N$ neurons and $1$ output layer.
+The point is that I am analyzing time series and would like to use the output layer as the input of the next unit: by feeding the network with the input at time $t-1$ I obtain the output $O_{t-1}$ and, in the next step, I would like to use both the input at time $t$ and $O_{t-1}$, introducing a sort of auto-regression. I read that recurrent neural network are suitable to address this issue.
+Anyway I cannot imagine how to implement a network in Keras that involves multilayer recurrence: all the references I found are linked to using the output of a layer as input of the same layer in the next step. Instead, I would like to include the output of the last layer (the output layer) in the inputs of the first hidden layer.
+"
+"['reinforcement-learning', 'markov-decision-process', 'policies', 'optimal-policy', 'optimality']"," Title: Why is the optimal policy for an infinite horizon MDP deterministic?Body: Could someone please help me gain some intuition as to why the optimal policy for a Markov Decision Process in the infinite horizon case (agent acts forever) is deterministic?
+"
+"['object-detection', 'object-recognition', 'data-preprocessing']"," Title: Can I resize my images after labeling them?Body: Is it okay if I label my images with their original size and then resize them, or should I first resize them and then label them?
+I mean do I need to recalibrate my labels if I resized my images?
+"
+"['deep-learning', 'backpropagation', 'activation-functions', 'learning-rate', 'vanishing-gradient-problem']"," Title: Would a different learning rate for every neuron and layer mitigate or solve the vanishing gradient problem?Body: I'm interested in using the sigmoid (or tanh) activation function instead of RELU. I'm aware of RELU advantages on faster computation and no vanishing gradient problem. But about vanishing gradient, the main problem is about the backpropagation algorithm going to zero quickly if using sigmoid or tanh. So I would like to try to compensate this effect that affects deep layers with a variable learning rate
for every layer, increasing the coefficient every time you go a layer deeper to compensate the vanishing gradient.
+I have read about adaptive learning rate, but it seems to refer to a technique to change the learning rate on every epoch, I'm looking for a different learning rate for every layer, into any epoch.
+
+- Based on your experience, do you think that is a good effort to try?
+
+- Do you know some libraries I can use that already let you define the learning rate as a function and not a constant?
+
+- If such a function exists, would it be better to define a simple function lr = (a*n)*0.001, where n is the layer number and a is a multiplier based on experience, or will we need the inverse of the activation function to compensate enough for the vanishing gradient? (One concrete illustration of per-layer learning rates follows below.)
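+For what it's worth, one concrete way to express per-layer learning rates (only an illustration; the question is framework-agnostic and this sketch assumes PyTorch, with model and its layers as placeholders) is to give each layer its own parameter group:
+import torch
+
+base_lr, a = 0.001, 2.0   # illustrative base rate and per-layer multiplier
+layers = [model.layer1, model.layer2, model.layer3]   # placeholder layer list
+
+optimizer = torch.optim.SGD(
+    [{'params': layer.parameters(), 'lr': base_lr * a * (n + 1)}   # deeper layers get a larger rate
+     for n, layer in enumerate(layers)],
+    lr=base_lr,
+)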
+
+
+"
+"['machine-learning', 'learning-rate']"," Title: Is it a good idea to change the learning rate at each training step as a function of the loss?Body: Is it a good idea to change the learning rate at each training step as a function of the loss? i.e. for points with high loss value, put a high learning rate and for low loss value a low learning rate (using a tailored function)?
+I know that the update of the parameters is done via $\gamma \nabla L$, where $\nabla L$ is the gradient and $\gamma$ the learning rate, and that points with high loss should correspond to a high gradient. Hence the dependency of the update of the parameters on the value of the loss should be already contained, although in a more indirect way. Is doing what I propose dangerous and/or useless?
+"
+"['deep-learning', 'computer-vision', 'deep-neural-networks', 'image-processing', 'image-generation']"," Title: What is the state-of-the-art algorithm for neural style transfer?Body: I've read the paper A Neural Algorithm of Artistic Style by Gatys et. al. and I find the application of neural style transfer very fun.
+I also read that Exploring the structure of a real-time, arbitrary neural artistic stylization network by Ghiasi et al. is a more modern approach to NST.
+My question is whether the above paper by Ghiasi et al. is still the state-of-the-art method in NST, or whether newer algorithms perform even more efficiently.
+I should clarify that my goal is to deploy some NST algorithm on a web page as a fun project to apply some deep learning and learn about backend-frontend interactions.
+"
+"['generative-adversarial-networks', 'overfitting']"," Title: How to overfit GANs with a single imageBody: When designing CNN for image recogition a commonly used sainty check to see if a model is working/designed fine is to see if we are able to overfit the model with a very small subset of images.
+I am trying out GANs. While designing GAN I took a dataset with just one image(full black image). I used the DCGAN implementation in pytorch websitecode link.
+I tried training the model with this just one black image and even after training for 100s-1000 epochs. I am not able to overfit the model ie generate a black (or something close). All what is generated are random noise image as below
+However the model does work well for celeba dataset(the one used in the tutorial). Which means the model is good. Can anybody help me why overfitting is very difficult/impossible when using a single image.
+"
+"['reinforcement-learning', 'rewards', 'reward-functions', 'multi-objective-rl']"," Title: Why is the reward in reinforcement learning always a scalar?Body: I'm reading Reinforcement Learning by Sutton & Barto, and in section 3.2 they state that the reward in a Markov decision process is always a scalar real number. At the same time, I've heard about the problem of assigning credit to an action for a reward. Wouldn't a vector reward make it easier for an agent to understand the effect of an action? Specifically, a vector in which different components represent different aspects of the reward. For example, an agent driving a car may have one reward component for driving smoothly and one for staying in the lane (and these are independent of each other).
+"
+"['ai-design', 'deep-rl', 'ddpg', 'td3']"," Title: Which is the best RL algo for continuous states but discrete action spaces problemBody: I am trying to train an AI with an environment where the states are continuous but the actions are discrete, that means I can not apply DDPG or TD3.
+Can someone please help to let know what should be the best algorithm for discrete action spaces and is there any version of DDPG or TD3 which can be applied to discrete action spaces on partially observable MDPs.
+"
+"['reinforcement-learning', 'monte-carlo-methods', 'model-free-methods']"," Title: Why are state-values alone not sufficient in determining a policy (without a model)?Body:
+"If a model is not available, then it is particularly useful to estimate action values (the
+values of state-action pairs) rather than state values. With a model, state values alone are
+sufficient to determine a policy; one simply looks ahead one step and chooses whichever
+action leads to the best combination of reward and next state, as we did in the chapter on
+DP. Without a model, however, state values alone are not sufficient. One must explicitly
+estimate the value of each action in order for the values to be useful in suggesting a policy."
+
+The above extract is from Sutton and Barto's Reinforcement Learning, Section 5.2 - part of the chapter on Monte Carlo Methods.
+Could someone please explain in some more detail, as to why it is necessary to determine the value of each action (i.e. state-values alone are not sufficient) for suggesting a policy in a model-free setting?
+
+P.S.
+From what I know, state-values basically refer to the expected return one gets when starting from a state (we know that we'll reach a terminal state, since we're dealing with Monte Carlo methods which, at least in the book, look at only episodic MDPs). That being said, why is it not possible to suggest a policy solely on the basis of state-values; why do we need state-action values? I'm a little confused, it'd really help if someone could clear it up.
+"
+"['reinforcement-learning', 'monte-carlo-methods', 'temporal-difference-methods', 'markov-property', 'dynamic-programming']"," Title: Why does TD Learning require Markovian domains?Body: One of my friends and I were discussing the differences between Dynamic Programming, Monte-Carlo, and Temporal Difference (TD) Learning as policy evaluation methods - and we agreed on the fact that Dynamic Programming requires the Markov assumption while Monte-Carlo policy evaluation does not.
+However, he also pointed out that Temporal Difference Learning cannot handle non-Markovian domains, i.e. it depends on the Markov assumption. Why is it so?
+The way I understand it, the TD learning update is, in essence, the same as the Monte-Carlo update, except for the fact that the return instead of being calculated using the entire trajectory, is bootstrapped from the previous estimate of the value function, i.e. we can update the value as soon as we encounter a $(s,a,r,s')$ tuple, we don't have to wait for the episode (if finite) to terminate.
+Where is the Markov assumption being used here, i.e the future is independent of the past given the present?
+"
+"['reinforcement-learning', 'deep-rl', 'reward-design', 'reward-functions']"," Title: How to combine two differently equally important signals into the reward function, that have different scales?Body: I have two signals that I want to use to model my reward.
+The first one is the CPU TIME: running mean from this diagram:
+
+The second one is the MAX RESIDUAL from this diagram:
+
+Since they are both equally important, I can weight them together like this:
+$r = w_\rho \rho + w_\tau \tau$
+where $r$ is the reward function, $\tau$ is the CPU TIME: running mean, and $\rho$ is the MAX RESIDUAL. The problem is, how to set the weights $w_\tau,w_\rho$ to make the contributions equally important if $\rho$ and $\tau$ are on very different scales?
+RL algorithms learn policies based on increases/decreases of the reward, and if one signal takes values that are much smaller than the other's, it will influence the reward less whenever the weighting is done naively.
+On the other hand, if the algorithm converges, they must be on different scales, as I want the CPU time to go ideally to $0$, and residuals $\rho$ to be minimized as well.
+Modeling the reward function is a crucial RL step, because it decides what the algorithm will in fact optimize. How are examples like these handled? Are there any best practices for this? Also, what happens when there are $n$ such signals, that have to be combined with "equally important" weighting into a reward function?
+It is possible to base the weights $w$ on the current signal values to define the reward, but then the reward contributions won't account for $\max(\rho), \min(\rho)$ and $\max(\tau), \min(\tau)$ over time.
+So, how do you do feature scaling for reward signals?
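+To make the question concrete, here is the kind of per-signal running normalization I have been considering (a minimal sketch of my own; the class and the weights are made up, not something from a library):
+import math
+
+class RunningNorm:
+    """Running mean/std of one scalar signal (Welford's algorithm)."""
+    def __init__(self, eps=1e-8):
+        self.n, self.mean, self.m2, self.eps = 0, 0.0, 0.0, eps
+
+    def update(self, x):
+        self.n += 1
+        delta = x - self.mean
+        self.mean += delta / self.n
+        self.m2 += delta * (x - self.mean)
+
+    def normalize(self, x):
+        std = math.sqrt(self.m2 / max(self.n - 1, 1))
+        return (x - self.mean) / (std + self.eps)
+
+tau_norm, rho_norm = RunningNorm(), RunningNorm()
+
+def reward(tau, rho, w_tau=0.5, w_rho=0.5):
+    tau_norm.update(tau)
+    rho_norm.update(rho)
+    # Both signals are brought to roughly zero mean / unit variance first,
+    # so the weights express relative importance rather than raw scale.
+    return -(w_tau * tau_norm.normalize(tau) + w_rho * rho_norm.normalize(rho))
+Is something along these lines a reasonable practice, or does it distort the reward in ways that hurt learning?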
+"
+"['convolutional-neural-networks', 'time-complexity', 'computational-complexity', 'space-complexity', 'forward-pass']"," Title: What is the computational complexity of the forward pass of a convolutional neural network?Body: How do I determine the computational complexity (big-O notation) of the forward pass of a convolutional neural network?
+Let's assume for simplicity that we use zero-padding such that the input size and the output size are the same.
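+In case it helps frame the question, here is how I would naively count the multiply-accumulates for a stack of conv layers (my own sketch, assuming square kernels/feature maps and that the cost is dominated by the convolutions):
+def conv_forward_macs(layers):
+    """Each layer is (in_channels, out_channels, kernel_size, output_size)."""
+    return sum(c_in * (k ** 2) * c_out * (m ** 2)
+               for (c_in, c_out, k, m) in layers)
+
+# e.g. 3 -> 64 -> 64 channels, 3x3 kernels, 32x32 outputs (zero-padded, stride 1)
+print(conv_forward_macs([(3, 64, 3, 32), (64, 64, 3, 32)]))
+Is the big-O expression essentially this sum, or am I missing other dominant terms?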
+"
+['monte-carlo-tree-search']," Title: How to run a Monte Carlo Tree Search MCTS for stochastic environment?Body: For MCTS, there is an expansion phase where we make a move and list all the next states. But this is complicated by the fact that, for some games, after making the move there is a stochastic change to the environment. Consider the game 2048: after I make a move, a random tile is generated, so the state of the world after my next move is a mix of possibilities!
+How does MCTS work in a stochastic environment? I am having trouble understanding how to keep track of the expansion: do I expand all stochastic possibilities and weight the return by their chance of happening?
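+To clarify what I mean by "weighting the return by their chance of happening", here is a tiny sketch (my own notation, not from any MCTS library):
+def chance_node_value(outcomes):
+    """outcomes: list of (probability, estimated_return) pairs, one per random tile placement."""
+    return sum(p * v for p, v in outcomes)
+
+# e.g. in 2048: after my move, a 2-tile appears with prob 0.9 and a 4-tile with prob 0.1
+print(chance_node_value([(0.9, 120.0), (0.1, 80.0)]))
+Is this the right idea, and if so, how does the tree bookkeeping work during expansion and backpropagation?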
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'deep-rl']"," Title: How does DQN convergence work in reinforcement learningBody: In supervised learning we have an unbiased target value, but in reinforcement learning this isn’t the case
+The network predicts its own target value, now how exactly does it converge if the network predicts its target value
+Can someone explain this to me ??
+
+"
+"['neural-networks', 'natural-language-processing', 'python', 'pytorch']"," Title: Get Neural Network to predict a tag/class on a certain word using the surrounding words as context [PyTorch]?Body: I am somewhat a novice at the topic of Neural Netoworks and PyTorch.
+I am trying to create a model that takes a word (that I have modified very slightly) and a 'window' of context around it and predicts one of 5 tags (the tags relate to what sort of action I should perform on that word to get its correct form).
+For example, here's what I would call a window of size 7 and its tag (what it means isn't too important, it's just the 'target'):
+ Sentence Label
+here is a sentence for my network N
+
+sentence is the word that I want the network to predict the label for, but the 3 words on either side provide contextual meaning. My problem is, how would I get a network to know I want it to predict for that central word but not outright ignore the others? I am familiar with more normal NLP tasks such as NMT and character-level classification.
+I have already gotten my dataset 'padded' out so they're all of equal size.
+Any help is appreciated
+"
+"['reinforcement-learning', 'deep-rl', 'reference-request', 'resource-request']"," Title: What are some (deep) reinforcement learning books for beginners?Body: What are some books on reinforcement learning (RL) and deep RL for beginners?
+I'm looking for something as friendly as the Head First series, something that breaks down every single thing.
+"
+"['neural-networks', 'deep-learning', 'training', 'recurrent-neural-networks']"," Title: Understanding graphs of the mean square error: relationships between val loss and train lossBody: I am currently working with some models aimed at predicting time series (89 days for training, 22 for testing), including a CNN LSTM and a convLSTM.
+When training these models, I had the following scenario:
+
+
+In the first case, it is possible to see the val loss moving more sharply away from the train loss. In the second case, it seems to me that this also happens, but in a much smoother way.
+What do these graphs mean? What causes these situations to occur? If they are problematic situations, is it possible to correct them? If so, how?
+"
+"['math', 'proofs', 'automated-theorem-proving']"," Title: Can a computer make a proof by induction?Body: Can a computer solve the following problem, i.e. make a proof by induction? And why?
+
+Prove by induction that $$\sum_{k=1}^nk^3=\left(\frac{n(n+1)}{2}\right)^2, \, \, \, \forall n\in\mathbb N .$$
+
+I'm doing a Ph.D. in pure maths. I love coding when I want to have some fun, but I've never gotten very far in this field. I mention my background because, if someone wants to explain this in more abstract language, there's a chance that I will understand it.
+"
+"['neural-networks', 'genetic-algorithms']"," Title: experiences on using genetic algorithms as a way to improve neural networks?Body: I wonder if there is research, patents, or libraries using Genetic algorithms (GA) to improve Neural Networks. I don't find anything in the subject. For example:
+
+- use GA to find better parameters in a NN. So the chromosome will be [learning rate, activation function, layers number, layers size, dropout factor] and the fit function minimize computational cost to reach NN 95% accuracy.
+- use GA to mix your NN input data and generate new data to adjust.
+- use GA to mix several small NN, different types, and find the perfect mix for better predictions.
+
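+For the first idea, this is the kind of loop I have in mind, just as a rough sketch: the search space and the fitness function are stand-ins I made up (in reality the fitness would train the NN and measure the cost of reaching 95% accuracy).
+import random
+
+SPACE = {
+    "learning_rate": [1e-4, 1e-3, 1e-2],
+    "activation": ["relu", "tanh", "sigmoid"],
+    "n_layers": [1, 2, 3, 4],
+    "layer_size": [16, 32, 64, 128],
+    "dropout": [0.0, 0.2, 0.5],
+}
+
+def random_chromosome():
+    return {k: random.choice(v) for k, v in SPACE.items()}
+
+def fitness(c):
+    # Placeholder: stands in for "train the NN and return a score to maximize".
+    return -abs(c["learning_rate"] - 1e-3) - 0.01 * c["n_layers"]
+
+def crossover(a, b):
+    return {k: random.choice([a[k], b[k]]) for k in SPACE}
+
+def mutate(c, rate=0.1):
+    return {k: (random.choice(SPACE[k]) if random.random() < rate else v) for k, v in c.items()}
+
+def evolve(pop_size=20, generations=10):
+    population = [random_chromosome() for _ in range(pop_size)]
+    for _ in range(generations):
+        population.sort(key=fitness, reverse=True)
+        parents = population[: pop_size // 2]
+        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
+                    for _ in range(pop_size - len(parents))]
+        population = parents + children
+    return max(population, key=fitness)
+
+print(evolve())
+Is there prior work or an existing library that already does something like this, and does it actually pay off in practice?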
+"
+"['reinforcement-learning', 'q-learning', 'convergence', 'sarsa']"," Title: When do SARSA and Q-Learning converge to optimal Q values?Body: Here's another interesting multiple-choice question that puzzles me a bit.
+
+In tabular MDPs, if using a decision policy that visits all states an infinite number of times, and in each state, randomly selects an action, then:
+
+- Q-learning will converge to the optimal Q-values
+- SARSA will converge to the optimal Q-values
+- Q-learning is learning off-policy
+- SARSA is learning off-policy
+
+
+My thoughts, and question: Since the actions are being sampled randomly from the action space, learning definitely seems to be off-policy (correct me if I'm wrong, please!). So that rules 3. and 4. as incorrect. Coming to the first two options, I'm not quite sure whether Q-learning and/or SARSA would converge in this case. All that I'm able to understand from the question is that the agent explores more than it exploits, since it visits all states (an infinite number of times) and also takes random actions (and not the best action!). How can this piece of information help me deduce if either process converges to the optimal Q-values or not?
+
+Source: Slide 2/55
+"
+"['reinforcement-learning', 'python', 'policy-gradients', 'ddpg', 'gym']"," Title: DDPG doesn't converge for MountainCarContinuous-v0 gym environmentBody: I am trying to implement Deep Deterministic policy gradient algorithm by referring to the paper Continuous Control using Deep Reinforcement Learning on the MountainCarContinuous-v0 gym environment. I am using 2 hidden Linear layers of size 32 for both the actor and the critic networks with ReLU activations and a Tanh activation for the output layer of the actor network. However, for some reason, algorithm doesn't seem to converge for some reason. I tried tuning the hyperparameters to no success.
+
+import copy
+import random
+from collections import deque, namedtuple
+
+import matplotlib.pyplot as plt
+import torch
+import torch.nn as nn
+import torch.optim as optim
+
+"""
+Hyperparameters:
+
+actor_layer_sizes
+critic_layer_sizes
+max_buffer_size
+polyak_constant
+max_time_steps
+max_episodes
+actor_lr
+critic_lr
+GAMMA
+update_after
+batch_size
+"""
+
+device = torch.device("cpu")
+dtype = torch.double
+
+Transition = namedtuple(
+ "Transition", ("state", "action", "reward", "next_state", "done")
+)
+
+
+class agent:
+ def __init__(
+ self,
+ env,
+ actor_layer_sizes=[32, 32],
+ critic_layer_sizes=[32, 32],
+ max_buffer_size=2500,
+ ):
+ self.env = env
+ (
+ self.actor,
+ self.critic,
+ self.target_actor,
+ self.target_critic,
+ ) = self.make_models(actor_layer_sizes, critic_layer_sizes)
+ self.replay_buffer = deque(maxlen=max_buffer_size)
+ self.max_buffer_size = max_buffer_size
+
+ def make_models(self, actor_layer_sizes, critic_layer_sizes):
+ actor = (
+ nn.Sequential(
+ nn.Linear(
+ self.env.observation_space.shape[0],
+ actor_layer_sizes[0],
+ ),
+ nn.ReLU(),
+ nn.Linear(actor_layer_sizes[0], actor_layer_sizes[1]),
+ nn.ReLU(),
+ nn.Linear(
+ actor_layer_sizes[1], self.env.action_space.shape[0]
+ ), nn.Tanh()
+ )
+ .to(device)
+ .to(dtype)
+ )
+
+ critic = (
+ nn.Sequential(
+ nn.Linear(
+ self.env.observation_space.shape[0]
+ + self.env.action_space.shape[0],
+ critic_layer_sizes[0],
+ ),
+ nn.ReLU(),
+ nn.Linear(critic_layer_sizes[0], critic_layer_sizes[1]),
+ nn.ReLU(),
+ nn.Linear(critic_layer_sizes[1], 1),
+ )
+ .to(device)
+ .to(dtype)
+ )
+
+ target_actor = copy.deepcopy(actor) # Create a target actor network
+
+ target_critic = copy.deepcopy(critic) # Create a target critic network
+
+ return actor, critic, target_actor, target_critic
+
+ def select_action(self, state, noise_factor): # Selects an action in exploratory manner
+ with torch.no_grad():
+ noisy_action = self.actor(state) + noise_factor * torch.randn(size = self.env.action_space.shape, device=device, dtype=dtype)
+ action = torch.clamp(noisy_action, self.env.action_space.low[0], self.env.action_space.high[0])
+
+ return action
+
+ def store_transition(self, state, action, reward, next_state, done): # Stores the transition to the replay buffer with a default maximum capacity of 2500
+ if len(self.replay_buffer) < self.max_buffer_size:
+ self.replay_buffer.append(
+ Transition(state, action, reward, next_state, done)
+ )
+ else:
+ self.replay_buffer.popleft()
+ self.replay_buffer.append(
+ Transition(state, action, reward, next_state, done)
+ )
+
+ def sample_batch(self, batch_size=128): # Samples a random batch of transitions for training
+ return Transition(
+ *[torch.cat(i) for i in [*zip(*random.sample(self.replay_buffer, min(len(self.replay_buffer), batch_size)))]]
+ )
+
+
+ def train(
+ self,
+ GAMMA=0.99,
+ actor_lr=0.001,
+ critic_lr=0.001,
+ polyak_constant=0.99,
+ max_time_steps=5000,
+ max_episodes=200,
+ update_after=1,
+ batch_size=128,
+ noise_factor=0.2,
+ ):
+
+ self.train_rewards_list = []
+ actor_optimizer = optim.Adam(self.actor.parameters(), lr=actor_lr)
+ critic_optimizer = optim.Adam(
+ self.critic.parameters(), lr=critic_lr
+ )
+ print("Starting Training:\n")
+ for e in range(max_episodes):
+ state = self.env.reset()
+ state = torch.tensor(state, device=device, dtype=dtype).unsqueeze(0)
+ episode_reward = 0
+ for t in range(max_time_steps):
+ #self.env.render()
+ action = self.select_action(state, noise_factor)
+ next_state, reward, done, _ = self.env.step(action[0]) # Sample a transition
+ episode_reward += reward
+
+ next_state = torch.tensor(next_state, device=device, dtype=dtype).unsqueeze(0)
+ reward = torch.tensor(
+ [reward], device=device, dtype=dtype
+ ).unsqueeze(0)
+ done = torch.tensor(
+ [done], device=device, dtype=dtype
+ ).unsqueeze(0)
+
+ self.store_transition(
+ state, action, reward, next_state, done
+ ) # Store the transition in the replay buffer
+
+ state = next_state
+
+ sample_batch = self.sample_batch(128)
+
+ with torch.no_grad(): # Determine the target for the critic to train on
+ target = sample_batch.reward + (1 - sample_batch.done) * GAMMA * self.target_critic(torch.cat((sample_batch.next_state, self.target_actor(sample_batch.next_state)), dim=1))
+
+ # Train the critic on the sampled batch
+ critic_loss = nn.MSELoss()(
+ target,
+ self.critic(
+ torch.cat(
+ (sample_batch.state, sample_batch.action), dim=1
+ )
+ ),
+ )
+
+ critic_optimizer.zero_grad()
+ critic_loss.backward()
+ critic_optimizer.step()
+
+ actor_loss = -1 * torch.mean(
+ self.critic(torch.cat((sample_batch.state, self.actor(sample_batch.state)), dim=1))
+ )
+
+ #Train the actor
+ actor_optimizer.zero_grad()
+ actor_loss.backward()
+ actor_optimizer.step()
+
+
+ #if (((t + 1) % update_after) == 0):
+ for actor_param, target_actor_param in zip(self.actor.parameters(), self.target_actor.parameters()):
+ target_actor_param.data = polyak_constant * actor_param.data + (1 - polyak_constant) * target_actor_param.data
+
+ for critic_param, target_critic_param in zip(self.critic.parameters(), self.target_critic.parameters()):
+ target_critic_param.data = polyak_constant * critic_param.data + (1 - polyak_constant) * target_critic_param.data
+
+ if done:
+ print(
+ "Completed episode {}/{}".format(
+ e + 1, max_episodes
+ )
+ )
+ break
+
+ self.train_rewards_list.append(episode_reward)
+
+ self.env.close()
+ print(self.train_rewards_list)
+
+ def plot(self, plot_type):
+ if (plot_type == "train"):
+ plt.plot(self.train_rewards_list)
+ plt.show()
+ elif (plot_type == "test"):
+ plt.plot(self.test_rewards_list)
+ plt.show()
+ else:
+ print("\nInvalid plot type")
+
+
+import gym
+
+env = gym.make("MountainCarContinuous-v0")
+
+myagent = agent(env)
+myagent.train(max_episodes=150)
+myagent.plot("train")
+
+The figure below shows the plot for episode reward vs episode number:
+
+"
+"['reinforcement-learning', 'learning-algorithms', 'hierarchical-rl']"," Title: Alternatives to Hierarchical RL for centralized control tasks?Body: Consider a problem where the agent must learn to control a hierarchy of agents acting against another such agent in a competitive environment. The agents on each team need to learn cooperate in order to compete with the other agents.
+A hierarchical RL algorithm would seem to be ideal for such a problem, learning a policy that includes sub-policies for sub-agents. But are there are other types of algorithms that could be used for this kind of task, perhaps ones that are involved centralized cooperation but aren't considered hierarchical RL?
+"
+['reinforcement-learning']," Title: Where can I find short videos of examples of RL being used?Body: I would like to add a short ~1-3 minute video to a presentation, to demonstrate how Reinforcement Learning is used to solve problems. I am thinking something like a short gif of an agent playing an Atari game, but for my audience it would probably be better to have something more manufacturing/industry based.
+Does anyone know any good sources where I could find some stuff like this?
+"
+"['neural-networks', 'python', 'feedforward-neural-networks', 'forward-pass']"," Title: What is the Preferred Mathematical Representation for a Forward Pass in a Neural Network?Body: I know this may be a question of semantics but I always see different articles explain forward pass slightly different. e.g. Sometimes they represent a forward pass to a hidden layer in a standard neural network as np.dot(x, W)
and sometimes I see it as np.dot(W.T, x)
and sometimes np.dot(W, x)
.
+Take this image for example. They represent the input data as a matrix of [NxD] and the weight data as [DxH], where H is the number of neurons in the hidden layer. This seems the most natural, since input data will often be in tabular format with rows as samples and columns as features.
+
+Now an example from the CS231n course notes. They talk about this below example and cite the code used to compute it as:
+f = lambda x: 1.0/(1.0 + np.exp(-x)) # activation function (use sigmoid)
+x = np.random.randn(3, 1) # random input vector of three numbers (3x1)
+h1 = f(np.dot(W1, x) + b1) # calculate first hidden layer activations (4x1)
+h2 = f(np.dot(W2, h1) + b2) # calculate second hidden layer activations (4x1)
+out = np.dot(W3, h2) + b3 # output neuron (1x1)
+
+Where W is [4x3] and x is [3x1]. I would expect the weight matrix to have dimensions equal to [n_features, n_hidden_neurons], but in this example it just seems like they transposed it naturally before it was used.
+
+I guess I am just confused about the general nomenclature for how data should be shaped and used consistently when computing neural network forward passes. Sometimes I see a transpose, sometimes I don't. Is there a standard, preferred way to represent data in accordance with diagrams like these? This question may be silly, but I just wanted to discuss it a bit. Thank you.
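+To show concretely what I mean by the two conventions, here is a tiny numpy check (the shapes are just ones I made up) confirming that the two layouts are transposes of each other:
+import numpy as np
+
+rng = np.random.default_rng(0)
+X = rng.standard_normal((5, 3))   # N=5 samples, D=3 features (rows are samples)
+W = rng.standard_normal((3, 4))   # D=3 inputs, H=4 hidden neurons
+
+h_rows = np.dot(X, W)             # convention 1: samples as rows   -> shape (5, 4)
+h_cols = np.dot(W.T, X.T)         # convention 2: samples as columns -> shape (4, 5)
+
+print(np.allclose(h_rows, h_cols.T))   # True: both layouts carry the same numbers
+So the math is clearly equivalent; my question is really about which layout is the conventional, preferred one.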
+"
+"['classification', 'computer-vision', 'audio-processing']"," Title: Is there a problem for ""Sound Source Identification in Video Footage""?Body: I've been considering starting a project for some time on sound source identification.
+To be more specific, my goal is to be able to identify the "sources" for sound in videos. Moving parts clanging, lips speaking, hands clapping etc. I'd like to think a model trained to be able to do this might be helpful for:
+
+- Identifying who is saying what in a crowd
+- Discovering noise sources caught on video (ex. a carpenter's saw as he is talking to someone)
+- Extending this to design a model for reading lips, to discern speech in silent video.
+
+
+(Taken from https://www.youtube.com/watch?v=8ch_H8pt9M8)
+You might think of this task like grounding in NLP, except for sound/speech instead. I'm sure this has been done before, and I'd like to conduct a literature review. So, is there a name for this kind of sound-source identification?
+I've tried Googling "Sound Source Identification", but it only returns Speech Classification results (Is this sound a car or a truck etc.)
+"
+"['natural-language-processing', 'recurrent-neural-networks', 'long-short-term-memory', 'transformer']"," Title: How can Transformers handle arbitrary length input?Body: The transformer, introduced in the paper Attention Is All You Need, is a popular new neural network architecture that is commonly viewed as an alternative to recurrent neural networks, like LSTMs and GRUs.
+However, having gone through the paper, as well as several online explanations, I still have trouble wrapping my head around how they work. How can a non-recurrent structure be able to deal with inputs of arbitrary length?
+"
+"['reinforcement-learning', 'agi', 'rewards', 'aixi', 'reward-hacking']"," Title: How can we prevent AGI from doing drugs?Body: I recently read some introductions to AI alignment, AIXI and decision theory things.
+As far as I understood, one of the main problems in AI alignment is how to define a utility function well, not causing something like the paperclip apocalypse.
+Then a question comes to my mind: whatever the utility function is, we need a computer to compute the utility and reward, so there seems to be no way to prevent an AGI from seeking out and manipulating the utility function so that it always gives the maximum reward.
+This is just like how we humans know that we can give ourselves happiness in chemical ways, and some people actually do so.
+Is there any way to prevent this from happening? Not just physically protecting the utility calculator from the AGI (how can we be sure that works forever?), but preventing the AGI from thinking of it?
+"
+"['deep-learning', 'computer-vision', 'object-detection', 'yolo']"," Title: How does YOLO handle non-class objects?Body: I have been reading more about computer vision and I'm bothered by YOLO and similar deep learning architectures.
+The thing I am confused about is how non-class image sections are dealt with. In particular, it's not clear to me at all why YOLO doesn't consider every part of an image a possible class.
+What actually sets the cutoff for detection and then classification?
+"
+"['reinforcement-learning', 'q-learning', 'rewards', 'stationary-policy']"," Title: Should I use the discounted average reward as objective in a finite-horizon problem?Body: I am new to reinforcement learning, but, for a finite horizon application problem, I am considering using the average reward instead of the sum of rewards as the objective. Specifically, there are a total of $T$ maximally possible time steps (e.g., the usage rate of an app in each time-step), in each time-step, the reward may be 0 or 1. The goal is to maximize the daily average usage rate.
+The episode length ($T$) is at most 10; $T$ is the maximum time window over which the product can observe a user's behavior in the chosen data. There is an indicator value in the data indicating whether an episode terminates. Since I learn from logged data, this is offline learning, so in each episode $T$ is given by the data. As long as an episode doesn't terminate, there is a reward of $\{0, 1\}$ at each time-step.
+I heard if I use an average reward for the finite horizon, the optimal policy is no longer a stationary policy, and optimal $Q$ function depends on time. I am wondering why this is the case.
+I see that, normally, the objective is defined as maximizing
+$$\sum_t^T \gamma^t r_t$$
+And I am considering two types of average reward definition.
+
+- $1/T(\sum^T_{t=0}\gamma^t r_t)$, where $T$ varies in each episode.
+
+- $1/(T-t)\sum^T_{i=t-1}\gamma^i r_i$
+
+
+"
+['transfer-learning']," Title: What is layer freezing in transfer learning?Body: Transfer learning consists of taking features learned on one problem and leveraging them on a new, similar problem.
+In transfer learning, we take layers from a previously trained model and freeze them.
+Why is this layer freezing required and what are the effects of layer freezing?
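+For reference, this is the kind of freezing I mean, as a minimal PyTorch sketch (the model choice and the new head are just an example I picked, not part of any particular recipe):
+import torch.nn as nn
+from torchvision import models
+
+model = models.resnet18(pretrained=True)   # a previously trained model
+
+# Freeze every pretrained layer: these weights receive no gradient updates.
+for param in model.parameters():
+    param.requires_grad = False
+
+# Replace the final layer for the new task; only this layer will be trained.
+model.fc = nn.Linear(model.fc.in_features, 10)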
+"
+"['reinforcement-learning', 'proofs', 'temporal-difference-methods', 'off-policy-methods', 'importance-sampling']"," Title: How can I derive n-step off-policy temporal difference formula?Body: I was reading the book "Reinforcement Learning: An Introduction" by Sutton and Barto. In section 7.3, they write the formula for n-step off-policy TD as
+$$V(S_t) = V(S_{t-1}) + \alpha \rho_{t:t+n-1}[G_{t:t+n} - V(S_{t-1})],$$
+where $V(S_{t})$ is state value function of the state $S$ at time $t$ and $ G_{t:t+n} \doteq \sum_{i=t}^{t+n-1}\gamma^{i-t}R_{i+1} + \gamma^n V(S_{t+n})$ and $\rho_{t:t+n-1}$ is the importance sampling ratio.
+I tried to prove this equation for $n = 1$ using the incremental update of the value function. Now I end up with this formula:
+$$V(S_t) = \frac{1}{t} \sum_{j=1}^{t} \rho_{j}G_{j} $$
+$$V(S_t)= \frac{1}{t}(\rho_{t}G_{t} + \sum_{j=1}^{t-1}\rho_{j}G_{j}) $$
+$$V(S_t) = \frac{1}{t}(\rho_t G_t + (t-1)V(S_{t-1}))$$
+$$V(S_t)=V(S_{t-1}) + \frac{1}{t}(\rho_{t}G_{t} - V(S_{t-1}))$$
+I know I'm wrong because this does not match with the above equation. But can anyone please show me where I am wrong?
+"
+"['training', 'deep-rl', 'reward-functions', 'algorithmic-trading']"," Title: Given the daily stock prices of the last 3 years, how should I sample the training data for episodic RL?Body: I am playing around with a stock trading agent trained via (deep) reinforcement learning, including memory replay. The agent is trained for 1000 episodes, where each episode consists of 180 timesteps (e.g. daily stock prices).
+My question is concerning the sampling of episodes for training.
+Assuming I've got daily stock prices going back 3 years, that's about 750 trading days/prices.
+How should I sample this data set to get enough episodes for training?
+With an episode length of 180 and an episode count of 1000, I'd need 180k "days" to choose from, if I wouldn't want any duplication.
+Do I even need to sample 1000 non-overlapping windows from my dataset or can I sample my episodes using a sliding window approach? Could I even just randomly sample the dataset for episodes? For example, calculate a random date and build the episode from the 180 days following that random starting date?
+The reward for each action is calculated as follows, where p are the prices and t is the current timestep of the episode.
+
+- CASH: 0
+- BUY: p(t+1) - p(t) - fee
+- HOLD: p(t+1) - p(t)
+
+"
+"['convolutional-neural-networks', 'time-complexity', 'u-net', 'computational-complexity', 'pooling']"," Title: What is the time complexity of the upsampling stage of the U-net?Body: I am trying to determine the complexity of the neural network we use. The neural network is a U-net generator with an input shape of NxN (not an image but image-like data) and output of the same shape. There is 7x downsampling and 7x upsampling. Downsampling is a simple convolutional layer, where I have no problem to determine complexity as stated here:
+$$
+O\left(\sum_{l=1}^{d} n_{l-1} \cdot s_{l}^{2} \cdot n_{l} \cdot m_{l}^{2}\right)
+$$
+However, I cannot find what the big-O complexity is for the upsampling stage, where an UpSampling2D layer is used before the convolution.
+Any idea what the time complexity of the upsampling convolutional layer is, or where I might find this information? Thanks in advance!
+"
+"['deep-learning', 'terminology', 'computational-learning-theory']"," Title: What is the representational capacity of a learning algorithm?Body: The definition I see for representational capacity is "the family of functions the learning algorithm can choose from when varying the parameters in order to reduce a training objective." (Goodfellow's Deep learning book).
+However, to me, this seems to be the same as the definition of the hypothesis space. Is the key difference the "in order to reduce a training objective" part, in that some functions may not be chosen when reducing a training objective? Or are these identical definitions?
+"
+"['neural-networks', 'recurrent-neural-networks', 'text-classification', 'sentiment-analysis', 'multi-label-classification']"," Title: How do RNN's for sentiment classification deal with different sentence lengths?Body: I have been doing a course which teaches you about Deep Neural Networks, during one of the exercises I was made to make an RNN for sentiment classification which I did, but I did not understand how an RNN is able to deal with sentences of different lengths while conducting sentiment classification.
+"
+"['neural-networks', 'python', 'gradient-descent']"," Title: Implementing Gradient Descent Algorithm in Python, bit confused regarding equationsBody: I'm following the guide as outlined at this link: http://neuralnetworksanddeeplearning.com/chap2.html
+For the purposes of this question, I've written a basic network with 2 hidden layers, one with 2 neurons and one with one neuron. For a very basic task, the network will learn how to compute an OR logic gate, so the training data will be:
+X = [[0, 0], [0, 1], [1, 0], [1, 1]]
+Y = [0, 1, 1, 1]
+
+And the diagram:
+
+For this example, the weights and biases are:
+w = [[0.3, 0.4], [0.1]]
+b = [[1, 1], [1]]
+
+The feedforward part was pretty easy to implement so I don't think I need to post that here. The tutorial I've been following summarises calculating the errors and the gradient descent algorithm with the following equations:
+For each training example $x$, compute the output error $\delta^{x, L}$ where $L =$ Final layer (Layer 1 in this case). $\delta^{x, L} = \nabla_aC_x \circ \sigma'(z^{x, L})$ where $\nabla_aC_x$ is the differential of the cost function (basic MSE) with respect to the Layer 1 activation output, and $\sigma'(z^{x, L})$ is the derivative of the sigmoid function of the Layer 1 output i.e. $\sigma(z^{x, L})(1-\sigma(z^{x, L}))$.
+That's all good so far and I can calculate that quite straightforwardly. Now for $l = L-1, L-2, ...$, the error for each previous layer can be calculated as
+$\delta^{x, l} = ((w^{l+1})^T \delta^{x, l+1}) \circ \sigma'(z^{x, l})$
+Which again, is pretty straight forward to implement.
+Finally, to update the weights (and bias), the equations are for $l = L, L-1, ...$:
+$w^l \rightarrow w^l - \frac{\eta}{m}\sum_x\delta^{x,l}(a^{x, l-1})^T$
+$b^l \rightarrow b^l - \frac{\eta}{m}\sum_x\delta^{x,l}$
+What I don't understand is how this works with vectors of different numbers of elements (I think the lack of vector notation here confuses me).
+For example, Layer 1 has one neuron, so $\delta^{x, 1}$ will be a scalar value since it only outputs one value. However, $a^{x, 0}$ is a vector with two elements since layer 0 has two neurons. Which means that $\delta^{x, l}(a^{x, l-1})^T$ will be a vector even if I sum over all training samples $x$. What am I supposed to do here? Am I just supposed to sum the components of the vector as well?
+Hopefully my question makes sense; I feel I'm very close to implementing this entirely and I'm just stuck here.
+Thank you
+[edit] Okay, so I realised that I've been misrepresenting the weights of the neurons and have corrected for that.
+weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]
+
+Which has the output
+[array([[0.27660583, 1.00106314],
+        [0.34017727, 0.74990392]]),
+ array([[ 1.095244  , -0.22719165]])]
+
+Which means that layer0 has a weight matrix with shape 2x2 representing the 2 weights on neuron01 and the 2 weights on neuron02.
+My understanding then is that $\delta^{x,l}$ has the same shape as the weights array because each weight gets updated independently. That's also fine.
+But the bias term (according to the link I sourced) has 1 term for each neuron, which means layer 0 will have two bias terms (b00 and b01) and layer 1 has one bias term (b10).
+However, to calculate the update for the bias terms, you sum the deltas over x i.e $\sum_x \delta^{x, l}$; if delta has the size of the weight matrix, then there are too many terms to update the bias terms. What have I missed here?
+Many thanks
+"
+"['reinforcement-learning', 'dqn', 'deep-neural-networks', 'open-ai', 'gym']"," Title: My Deep Q-Learning Network does not learn for OpenAI gym's cartpole problemBody: I am implementing OpenAI gym's cartpole problem using Deep Q-Learning (DQN). I followed tutorials (video and otherwise) and learned all about it. I implemented a code for myself and I thought it should work, but the agent is not learning. I will really really really appreciate if someone can pinpoint where I am doing wrong.
+Note that I have a target neuaral network and a policy network already there. The code is as below.
+import numpy as np
+import gym
+import random
+from keras.optimizers import Adam
+from keras.models import Sequential
+from keras.layers import Dense
+from collections import deque
+
+env = gym.make('CartPole-v0')
+
+EPISODES = 2000
+BATCH_SIZE = 32
+DISCOUNT = 0.95
+UPDATE_TARGET_EVERY = 5
+STATE_SIZE = env.observation_space.shape[0]
+ACTION_SIZE = env.action_space.n
+SHOW_EVERY = 50
+
+class DQNAgents:
+
+ def __init__(self, state_size, action_size):
+ self.state_size = state_size
+ self.action_size = action_size
+ self.replay_memory = deque(maxlen = 2000)
+ self.gamma = 0.95
+ self.epsilon = 1
+ self.epsilon_decay = 0.995
+ self.epsilon_min = 0.01
+ self.model = self._build_model()
+ self.target_model = self.model
+
+ self.target_update_counter = 0
+ print('Initialize the agent')
+
+ def _build_model(self):
+ model = Sequential()
+ model.add(Dense(20, input_dim = self.state_size, activation = 'relu'))
+ model.add(Dense(10, activation = 'relu'))
+ model.add(Dense(self.action_size, activation = 'linear'))
+ model.compile(loss = 'mse', optimizer = Adam(lr = 0.001))
+
+ return model
+
+ def update_replay_memory(self, current_state, action, reward, next_state, done):
+ self.replay_memory.append((current_state, action, reward, next_state, done))
+
+ def train(self, terminal_state):
+
+ # Sample from replay memory
+ minibatch = random.sample(self.replay_memory, BATCH_SIZE)
+
+ #Picks the current states from the randomly selected minibatch
+ current_states = np.array([t[0] for t in minibatch])
+ current_qs_list= self.model.predict(current_states) #gives the Q value for the policy network
+ new_state = np.array([t[3] for t in minibatch])
+ future_qs_list = self.target_model.predict(new_state)
+
+ X = []
+ Y = []
+
+ # This loop will run 32 times (actually minibatch times)
+ for index, (current_state, action, reward, next_state, done) in enumerate(minibatch):
+
+ if not done:
+ new_q = reward + DISCOUNT * np.max(future_qs_list)
+ else:
+ new_q = reward
+
+ # Update Q value for given state
+ current_qs = current_qs_list[index]
+ current_qs[action] = new_q
+
+ X.append(current_state)
+ Y.append(current_qs)
+
+ # Fitting the weights, i.e. reducing the loss using gradient descent
+ self.model.fit(np.array(X), np.array(Y), batch_size = BATCH_SIZE, verbose = 0, shuffle = False)
+
+ # Update target network counter every episode
+ if terminal_state:
+ self.target_update_counter += 1
+
+ # If counter reaches set value, update target network with weights of main network
+ if self.target_update_counter > UPDATE_TARGET_EVERY:
+ self.target_model.set_weights(self.model.get_weights())
+ self.target_update_counter = 0
+
+ def get_qs(self, state):
+ return self.model.predict(np.array(state).reshape(-1, *state.shape))[0]
+
+
+''' We start here'''
+
+agent = DQNAgents(STATE_SIZE, ACTION_SIZE)
+
+for e in range(EPISODES):
+
+ done = False
+ current_state = env.reset()
+ time = 0
+ total_reward = 0
+ while not done:
+ if np.random.random() > agent.epsilon:
+ action = np.argmax(agent.get_qs(current_state))
+ else:
+ action = env.action_space.sample()
+
+ next_state, reward, done, _ = env.step(action)
+
+ agent.update_replay_memory(current_state, action, reward, next_state, done)
+
+ if len(agent.replay_memory) < BATCH_SIZE:
+ pass
+ else:
+ agent.train(done)
+
+ time+=1
+ current_state = next_state
+ total_reward += reward
+
+ print(f'episode : {e}, steps {time}, epsilon : {agent.epsilon}')
+
+ if agent.epsilon > agent.epsilon_min:
+ agent.epsilon *= agent.epsilon_decay
+
+Results for first 40ish iterations are below (look for the number of steps, they should be increasing and should reach a maximum of 199)
+episode : 0, steps 14, epsilon : 1
+episode : 1, steps 13, epsilon : 0.995
+episode : 2, steps 17, epsilon : 0.990025
+episode : 3, steps 12, epsilon : 0.985074875
+episode : 4, steps 29, epsilon : 0.9801495006250001
+episode : 5, steps 14, epsilon : 0.9752487531218751
+episode : 6, steps 11, epsilon : 0.9703725093562657
+episode : 7, steps 13, epsilon : 0.9655206468094844
+episode : 8, steps 11, epsilon : 0.960693043575437
+episode : 9, steps 14, epsilon : 0.9558895783575597
+episode : 10, steps 39, epsilon : 0.9511101304657719
+episode : 11, steps 14, epsilon : 0.946354579813443
+episode : 12, steps 19, epsilon : 0.9416228069143757
+episode : 13, steps 16, epsilon : 0.9369146928798039
+episode : 14, steps 14, epsilon : 0.9322301194154049
+episode : 15, steps 18, epsilon : 0.9275689688183278
+episode : 16, steps 31, epsilon : 0.9229311239742362
+episode : 17, steps 14, epsilon : 0.918316468354365
+episode : 18, steps 21, epsilon : 0.9137248860125932
+episode : 19, steps 9, epsilon : 0.9091562615825302
+episode : 20, steps 26, epsilon : 0.9046104802746175
+episode : 21, steps 20, epsilon : 0.9000874278732445
+episode : 22, steps 53, epsilon : 0.8955869907338783
+episode : 23, steps 24, epsilon : 0.8911090557802088
+episode : 24, steps 14, epsilon : 0.8866535105013078
+episode : 25, steps 40, epsilon : 0.8822202429488013
+episode : 26, steps 10, epsilon : 0.8778091417340573
+episode : 27, steps 60, epsilon : 0.8734200960253871
+episode : 28, steps 17, epsilon : 0.8690529955452602
+episode : 29, steps 11, epsilon : 0.8647077305675338
+episode : 30, steps 42, epsilon : 0.8603841919146962
+episode : 31, steps 16, epsilon : 0.8560822709551227
+episode : 32, steps 12, epsilon : 0.851801859600347
+episode : 33, steps 12, epsilon : 0.8475428503023453
+episode : 34, steps 10, epsilon : 0.8433051360508336
+episode : 35, steps 30, epsilon : 0.8390886103705794
+episode : 36, steps 21, epsilon : 0.8348931673187264
+episode : 37, steps 24, epsilon : 0.8307187014821328
+episode : 38, steps 33, epsilon : 0.8265651079747222
+episode : 39, steps 32, epsilon : 0.8224322824348486
+episode : 40, steps 15, epsilon : 0.8183201210226743
+episode : 41, steps 20, epsilon : 0.8142285204175609
+episode : 42, steps 37, epsilon : 0.810157377815473
+episode : 43, steps 11, epsilon : 0.8061065909263957
+episode : 44, steps 30, epsilon : 0.8020760579717637
+episode : 45, steps 11, epsilon : 0.798065677681905
+episode : 46, steps 34, epsilon : 0.7940753492934954
+episode : 47, steps 12, epsilon : 0.7901049725470279
+episode : 48, steps 26, epsilon : 0.7861544476842928
+episode : 49, steps 19, epsilon : 0.7822236754458713
+episode : 50, steps 20, epsilon : 0.778312557068642
+
+"
+"['search', 'ai-field', 'depth-first-search']"," Title: Why is depth-first search an artificial intelligence algorithm?Body: I'm new to the artificial intelligence field. In our first chapters, there is one topic called "problem-solving by searching". After searching for it on the internet, I found the depth-first search algorithm. The algorithm is easy to understand, but no one explains why this algorithm is included in the artificial intelligence study.
+Where do we use it? What makes it an artificial intelligence algorithm? Is every search algorithm an AI algorithm?
+"
+"['neural-networks', 'machine-learning', 'reinforcement-learning', 'q-learning', 'dqn']"," Title: When using experience replay in reinforcement learning, which state is used for training?Body: I'm slightly confused about the experience replay process. I understand why we use batch processing in reinforcement learning, and from my understanding, a batch of states is input into the neural network model.
+Suppose there are 2 valid moves in the action space (UP or DOWN)
+Suppose the batch size is 5, and the 5 states are this:
+$$[s_1, s_2, s_3, s_4, s_5]$$
+We put this batch into the neural network model and output Q values. Then we put $[s_1', s_2', s_3', s_4', s_5']$ into a target network.
+What I'm confused about is this:
+Each state in $[s_1, s_2, s_3, s_4, s_5]$ is different.
+Are we computing Q values for UP and DOWN for ALL 5 states after they go through the neural network?
+For example, $$[Q_{s_1}(\text{UP}), Q_{s_1}(\text{DOWN})],
+\\ [Q_{s_2} (\text{UP}), Q_{s_2}(\text{DOWN})], \\
+[Q_{s_3}(\text{UP}), Q_{s_3}(\text{DOWN})], \\
+[Q_{s_4}(\text{UP}), Q_{s_4}(\text{DOWN})], \\
+[Q_{s_5}(\text{UP}), Q_{s_5}(\text{DOWN})]$$
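+In code terms, this is the step I'm asking about; a rough, self-contained sketch with stand-in networks and made-up sizes, just to show the shapes I have in mind:
+import torch
+import torch.nn as nn
+
+state_dim, n_actions, batch_size = 4, 2, 5         # made-up sizes
+policy_net = nn.Linear(state_dim, n_actions)        # stand-in for the Q-network
+target_net = nn.Linear(state_dim, n_actions)        # stand-in for the target network
+
+states      = torch.randn(batch_size, state_dim)    # [s_1, ..., s_5]
+next_states = torch.randn(batch_size, state_dim)    # [s_1', ..., s_5']
+
+q_values      = policy_net(states)       # shape (5, 2): Q(s_i, UP) and Q(s_i, DOWN) for every state
+next_q_values = target_net(next_states)  # shape (5, 2): the same, but from the target network
+print(q_values.shape, next_q_values.shape)
+Is this shape-wise picture correct, i.e. do we really get Q values for both actions for all 5 states?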
+"
+"['convolutional-neural-networks', 'filters', 'convolutional-layers', 'stride']"," Title: Do all filters of the same convolutional layer need to have the same dimensions and stride?Body: In Convolutional Neural Networks, do all filters of the same convolutional layer need to have the same dimensions and stride?
+If they don't, then it would seem that the channels produced by the different filters would have different sizes. Or is there some way to get around that?
+"
+"['deep-learning', 'graphs', 'precision']"," Title: What is Precision@K for link prediction in graph embedding meaning?Body: I am trying to re-implement the SDNE algorithm for graph embedding by PyTorch.
+I get stuck at some issues about evaluation metric Precision@K.
+
+precision@k is a metric which gives equal weight to the returned instance. It is defined as follows
+$$precision@k(i) = \frac{\left| \, \{ j \, | \, i, j \in V, index(j) \le k, \Delta_i(j) = 1 \} \, \right|}{k}$$
+where $V$ is the vertex set, $index(j)$ is the ranked index of the $j$-th vertex and $\Delta_i(j) = 1$ indicates that $v_i$ and $v_j$ have a link.
+
+I don't understand what "ranked index of the $j$-th vertex" means.
+Besides, I am also confused about the MAP metric in section 4.3. I don't understand how to calculate it.
+
+Mean Average Precision (MAP) is a metric with good discrimination and stability. Compared with precision@k, it is
+more concerned with the performance of the returned items ranked ahead. It is calculated as follows:
+$$AP(i) = \frac{\sum_j precision@j(i) \cdot \Delta_i(j)}{\left| \{ \Delta_i(j) = 1 \} \right|}$$
+$$MAP = \frac{\sum_{i \in Q} AP(i)}{|Q|}$$
+where $Q$ is the query set.
+
+If anyone is familiar with these metrics, could you help me to explain them?
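+To show where I'm stuck, here is my current attempt at computing precision@k for one vertex, with made-up numbers (this interpretation may be exactly what's wrong):
+import numpy as np
+
+def precision_at_k(scores_i, adjacency_i, k):
+    """scores_i: predicted link scores from vertex i to every vertex.
+    adjacency_i: 1 where a true edge between i and j exists, else 0."""
+    ranked = np.argsort(-scores_i)    # vertex indices sorted by predicted score, best first
+    top_k = ranked[:k]
+    return adjacency_i[top_k].sum() / k
+
+scores = np.array([0.9, 0.1, 0.7, 0.3])    # made-up predictions from vertex i
+truth  = np.array([1,   0,   0,   1  ])    # made-up ground-truth links
+print(precision_at_k(scores, truth, k=2))  # top-2 ranked are vertices 0 and 2 -> 0.5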
+"
+"['machine-learning', 'reinforcement-learning', 'deep-learning', 'dqn', 'deep-rl']"," Title: Why are Target Networks used in Deep Q-Learning as opposed to the Expected Value equation?Body: I understand we use a target network because it helps resolve issues regarding stability, however, that's not what I'm here to ask.
+What I would like to understand is why a target network is used as a measure of ground truth as opposed to the expectation equation.
+To clarify, here is what I mean. This is the process used for DQN:
+
+- In DQN, we begin with a state $S$
+- We then pass this state through a neural network which outputs Q values for each action in the action space
+- A policy e.g. epsilon-greedy is used to take an action
+- This subsequently produces the next state $S_{t+1}$
+- $S_{t+1}$ is then passed through a target neural network to produce target Q values
+- These target Q values are then injected into the Bellman equation which ultimately produces a target Q value via the Q-learning update rule equation
+- MSE is used on 6 and 2 to compute the loss
+- This is then back-propagated to update the parameters for the neural network in 2
+- The target neural network has its parameters updated every X epochs to match the parameters in 2
+
+Why do we use a target neural network to output Q values instead of using statistics. Statistics seems like a more accurate way to represent this. By statistics, I mean this:
+Q values are the expected return, given the state and action under policy π.
+$Q(S_{t+1},a) = V^\pi(S_{t+1}) = \mathbb{E}(r_{t+1}+ \gamma r_{t+2}+ \gamma^2 r_{t+3} + \dots \mid S_{t+1}) = \mathbb{E}(\sum_k \gamma^k r_{t+k+1}\mid S_{t+1})$
+We can then take the above and inject it into the Bellman equation to update our target Q value:
+$Q(S_{t},a_t) + \alpha \left(r_t+\gamma \max_a Q(S_{t+1},a)-Q(S_{t},a_t)\right)$
+So, why don't we set the target to the sum of diminishing returns? Surely a target network is very inaccurate, especially since the parameters in the first few epochs for the target network are completely random.
+"
+"['neural-networks', 'machine-learning', 'reinforcement-learning', 'deep-learning', 'dqn']"," Title: In DQN, when do the parameters in the Neural Network update based on the reward received?Body: I'm aware that we back-propagate after computing the loss between:
+The Neural Network Q values and the Target Network Q values
+However, all this is doing is updating the parameters of the Neural Network to produce an output that matches the Target Q values as closely as possible.
+Suppose one epoch is run and the reward is +10; surely we need to update the parameters using this too, in order to tell the network to push up the probability of these actions given these parameters.
+How does the algorithm know +10 is good? Suppose the reward range is -10 for loss and +10 for win.
+"
+"['reinforcement-learning', 'q-learning', 'policy-gradients', 'markov-decision-process', 'multi-armed-bandits']"," Title: Customized food for persons based on their profile using Reinforcement learningBody: I am newbie to Reinforcement Learning, this is my idea - Agent(food provider) has to select a food based on the environment(based on the user profile). Here the reward will be given to the agent based on the user's feedback. This is for single person, what if I wanted to make it for multiple persons. I wanted the system to learn on its own and create a policy such that it can identify certain group of people based on their profile and what type of food will be suitable for them.
+
+- Is this possible to implement with reinforcement learning?
+- If so, what type of problem is this, and what type of solution can I use to solve it?
+
+"
+"['reinforcement-learning', 'comparison', 'reward-functions', 'sparse-rewards', 'dense-rewards']"," Title: What are the pros and cons of sparse and dense rewards in reinforcement learning?Body: From what I understand, if the rewards are sparse the agent will have to explore more to get rewards and learn the optimal policy, whereas if the rewards are dense in time, the agent is quickly guided towards its learning goal.
+Are the above thoughts correct, and are there any other pros and cons of the two contrasting settings? On a side-note, I feel that the inability to specify rewards that are dense in time is what makes imitation learning useful.
+"
+"['reinforcement-learning', 'papers', 'imitation-learning']"," Title: What is the surrogate loss function in imitation learning, and how is it different from the true cost?Body: I've been reading A Reduction of Imitation Learning and Structured Prediction
+to No-Regret Online Learning lately, and I can't understand what they mean by the surrogate loss function.
+Some relevant notation from the paper -
+
+- $d_\pi$ = average distribution of states if we follow policy $\pi$ for $T$ timesteps
+- $C(s,a)$ = the expected immediate cost of performing action a in state s for the task we are considering (assume $C$ is bounded in [0,1]
+- $C_\pi(s) = \mathbb{E}_{a\sim\pi(s)}[C(s,a)]$ is the expected immediate cost of $π$ in $s$.
+- $J(π) = T\mathbb{E}_{s\sim d_\pi}[C_\pi(s)]$ is the total cost of executing policy $\pi$ for $T$ timesteps
+
+
+In imitation learning, we may not necessarily know or observe true costs $C(s,a)$ for the particular task. Instead, we observe expert demonstrations and seek to bound $J(π)$
+for any cost function $C$ based on how well $π$ mimics the expert’s policy $π^{*}$. Denote $l$ the observed surrogate loss function we minimize instead of $C$. For instance, $l(s,π)$ may be the expected 0-1 loss of $π$ with respect to $π^{*}$ in state $s$, or a squared/hinge loss of $π$ with respect to $π^{*}$ in $s$. Importantly, in many instances, $C$ and $l$ may be the same function – for instance, if we are interested in optimizing the learner’s ability to predict the actions chosen by an expert.
+
+I don't understand how exactly the surrogate loss is different from the true costs, and what are the possible cases in which both are the same. It'd be great if someone could throw some light on this. Thank you!
+"
+"['reinforcement-learning', 'apprenticeship-learning', 'inverse-rl', 'imitation-learning']"," Title: What does the number of required expert demonstrations in Imitation Learning depend on?Body: I just read the following points about the number of required expert demonstrations in imitation learning, and I'd like some clarifications. For the purpose of context, I'll be using a linear reward function throughout this post (i.e. the reward can be expressed as a weighted sum of the components of a state's feature vector)
+
+The number of expert demonstrations required scales with the number of features in the reward function.
+
+I don't think this is obvious at all - why is it true? Intuitively, I think that as the number of features rises, the complexity of the problem does too, so we may need more data to make a better estimate of the expert's reward function. Is there more to it?
+
+The number of expert demonstration required does not depend on -
+
+- Complexity of the expert’s optimal policy $\pi^{*}$
+- Size of the state space
+
+
+I don't see how the complexity of the expert's optimal policy plays a role here - which is probably why it doesn't affect the number of expert demonstrations we need; but how do we quantify the complexity of a policy in the first place?
+Also, I think that the number of expert demonstrations should depend on the size of the state space. For example, if the train and test distributions don't match, we can't do behavioral cloning without falling into problems, in which case we use the DAGGER algorithm to repeatedly query the expert and make better decisions (take better actions). I feel that a larger state space means that we'll have to query the expert more frequently, i.e. to figure out the expert's optimal action in several states.
+I'd love to know everyone's thoughts on this - the dependence of the number of expert demonstrations on the above, and if any, other factors. Thank you!
+
+Source: Slide 20/75
+"
+"['reinforcement-learning', 'comparison', 'value-iteration', 'policy-iteration']"," Title: Why are policy iteration and value iteration studied as separate algorithms?Body: In Sutton and Barto's book about reinforcement learning, policy iteration and value iterations are presented as separate/different algorithms.
+This is very confusing because policy iteration includes an update/change of value and value iteration includes a change in policy. They are the same thing, as also shown in the Generalized Policy Iteration method.
+Why, then, are they (i.e. policy and value iteration) considered, in many papers as well, to be two separate update methods for reaching an optimal policy?
+"
+"['neural-networks', 'machine-learning', 'reinforcement-learning', 'meta-learning']"," Title: Finding the optimal policy from a set of fixed policies in reinforcement learningBody: This is an open-ended question.Suppose I have a reinforcement learning task that is being solved using many different fixed policies, one of which is optimal. The goal of the agent is not to figure out what the optimal policy is but rather which policy (from a set of predefined fixed policies) is the optimal one.
+Are there any algorithms/methods that handle this?
+I was wondering if meta learning is the right area to look into?
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'deep-rl', 'double-dqn']"," Title: How to compute the target for double Q-learning update step?Body: I've already read the original paper about double DQN but I do not find a clear and practical explanation of how the target $y$ is computed, so here's how I interpreted the method (let's say I have 3 possible actions (1,2,3)):
+
+- For each experience $e_{j}=(s_{j},a_{j},r_{j},s_{j+1})$ of the mini-batch (consider an experience where $a_{j}=2$) I compute the output through the main network in the state $s_{j+1}$, so I obtain 3 values.
+
+- I look at which of the three is the highest, so $a^*=\arg\max_{a}Q(s_{j+1},a)$; let's say $a^*=1$.
+
+- I use the target network to compute the value at $a^*=1$, so $Q_{target}(s_{j+1},1)$.
+
+- I use the value at point 3 to substitute the value in the target vector associated with the known action $a_{j}=2$, so: $Q_{target}(s_{j+1},2)\leftarrow r_{j}+\gamma Q_{target}(s_{j+1},1)$, while $Q_{target}(s_{j+1},1)$ and $Q_{target}(s_{j+1},3)$, which complete the target vector $y$, remain the same.
+
+
+Is there anything wrong?
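+To make my interpretation of steps 1-3 concrete, here is a minimal numpy sketch for a single experience (the numbers and variable names are made up by me):
+import numpy as np
+
+gamma = 0.99
+r_j = 1.0                                    # reward from the experience e_j
+q_main_next   = np.array([0.5, 0.2, 0.1])    # main network output at s_{j+1}, one value per action
+q_target_next = np.array([0.4, 0.3, 0.2])    # target network output at s_{j+1}
+
+a_star = np.argmax(q_main_next)              # step 2: action selected by the main network
+y_j = r_j + gamma * q_target_next[a_star]    # step 3: evaluated by the target network
+print(a_star, y_j)
+This value y_j is what I then place in the target vector at the position of the known action (step 4).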
+"
+"['reinforcement-learning', 'value-iteration', 'policy-evaluation', 'pseudocode', 'policy-improvement']"," Title: Is value iteration stopped after one update of each state?Body: In section 4.4 Value Iteration, the authors write
+
+One important special case is when policy evaluation is stopped after just one sweep (one update of each state). This algorithm is called value iteration.
+
+After that, they provide the following pseudo-code
+
+It is clear from the code that updates of each state occur until $\Delta$ is sufficiently small. Not one update of each state as the authors write in the text. Where is the mistake?
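+To make sure I'm reading the boxed pseudo-code correctly, here is my own transcription of it as a small Python loop (the toy MDP at the bottom is just to make it runnable):
+import numpy as np
+
+def value_iteration(P, R, gamma=0.9, theta=1e-6):
+    """P[s][a] = list of (prob, next_state); R[s][a] = expected reward."""
+    V = np.zeros(len(P))
+    while True:                              # keeps sweeping until the values barely change...
+        delta = 0.0
+        for s in range(len(P)):
+            v_old = V[s]
+            V[s] = max(R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
+                       for a in range(len(P[s])))
+            delta = max(delta, abs(v_old - V[s]))
+        if delta < theta:                    # ...which is many updates of each state, not one
+            return V
+
+# Toy 2-state MDP: in state 0, action 1 gives reward 1 and moves to state 1.
+P = [[[(1.0, 0)], [(1.0, 1)]],
+     [[(1.0, 1)], [(1.0, 0)]]]
+R = [[0.0, 1.0],
+     [0.0, 0.0]]
+print(value_iteration(P, R))
+Is this transcription faithful, and if so, how does it square with the "one sweep" statement in the text?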
+"
+"['machine-learning', 'recommender-system']"," Title: What is the most appropriate ML algorithm for creating recommendationsBody: I am trying to find the best algorithm to create a list of recommendations for a user based on the interests of all other users.
+Say I have a list of samples:
+$samples = [
+ ['hot dog', 'big mac', 'whopper'],
+ ['hot dog', 'big mac'],
+ ['hot dog', 'whopper'],
+ ['big mac', 'dave single'],
+ ['whopper', 'mcnuggets', 'mcchicken'],
+ ['mcchicken', 'original chicken sandwich'],
+ ['mcchicken', 'mcrib']
+];
+
+And we will say each array in the sample list is unique user's food preferences.
+Let's say now I have a user with this food preference:
+['hot dog', 'mcchicken']
+
+I want to be able to recommend to this user other foods that other users have in their preferences.
+So in the simplest terms, it should return:
+['whopper', 'big mac', 'original chicken sandwich', 'mcrib', 'mcnuggets']
+
+Obviously I will also introduce other variables such as how each user rates each item in their preference list and also the percentage of users that need to have that item in order to use their other food items as recommendations.
+But I would like to find the best algorithm to start working on it.
+At first I thought Apriori might be my best guess, but I wasn't having luck once I introduced multiple items.
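+To make the desired output concrete, here is a tiny co-occurrence baseline I sketched in Python (this is only to illustrate the behaviour I'm after, not necessarily the algorithm I should use):
+from collections import Counter
+
+samples = [
+    ['hot dog', 'big mac', 'whopper'],
+    ['hot dog', 'big mac'],
+    ['hot dog', 'whopper'],
+    ['big mac', 'dave single'],
+    ['whopper', 'mcnuggets', 'mcchicken'],
+    ['mcchicken', 'original chicken sandwich'],
+    ['mcchicken', 'mcrib'],
+]
+
+def recommend(preferences, samples):
+    counts = Counter()
+    for basket in samples:
+        if any(item in basket for item in preferences):        # the user's tastes overlap with this basket
+            counts.update(i for i in basket if i not in preferences)
+    return [item for item, _ in counts.most_common()]
+
+print(recommend(['hot dog', 'mcchicken'], samples))
+What I'm looking for is the algorithm/model that does this properly once ratings and support thresholds are added.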
+"
+"['machine-learning', 'computer-vision', 'online-learning', 'incremental-learning']"," Title: Is there any real-time computer vision system that can learn to detect new objects of new classes?Body: Suppose you have a ground plane and can use a stereo vision system to detect things that are possibly separate objects.
+Suppose also your robot or agent can attempt to pick up and move these objects around in real-time.
+Is there any current system in computer vision that allows new objects to be learned in real-time?
+"
+"['deep-learning', 'applications']"," Title: What is the scope of real-world deep learning applications in 2020?Body: 2015 was a milestone year for AI--"deep learning" was validated in a very public way with AlphaGo. However, at the time, the question was raised: "What else is deep learning good for?"
+5 years later, I want to gauge:
+
+- How is deep learning applied to real world problems in 2020? What real world applications is it currently used for?
+
+"
+"['recurrent-neural-networks', 'time-series']"," Title: Time Series Forecasting - Recurrent Neural Networks (tensorflow)Body: I am attempting to forecast a time series using tensorflow with the following code:
+# Imports assumed by the snippet (sklearn + tf.keras); n_features is the number of columns in X
+from sklearn.preprocessing import MinMaxScaler
+from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator
+from tensorflow.keras.models import Sequential
+from tensorflow.keras.layers import LSTM, Dense
+
+X = mytimeseries              # my time series, shape (n_samples, n_features)
+n_features = X.shape[1]
+scaler = MinMaxScaler()
+scaled = scaler.fit_transform(X)
+
+length = len(X)-1
+generator = TimeseriesGenerator(scaled,scaled,
+ length=length,batch_size=1)
+
+model = Sequential()
+model.add(LSTM(units=100,activation='relu',input_shape=(length,n_features)))
+model.add(Dense(units=100))
+model.add(Dense(units=1))
+
+model.fit(generator,epochs=20)
+
+Then I just run a loop to forecast, but it's giving me nothing more than a straight line after a few points, as observed below.
+Obviously there is a trend for the data to go down, and I would expect to see that.
+Is this because my architecture is not sophisticated enough / not the right one to pick up on the general decline of the known data? Have I inappropriately chosen any parameters?
+I have tried increasing the number of neurons in the dense layer, units in the LSTM cell, etc. At the moment, the thing that seems to affect the resulting curve the most is changing the length parameter in my code above. But all this does is make the predictions more sinusoidal.
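+For reference, the forecasting loop I mentioned is the usual recursive scheme, roughly like this (a sketch reusing model, scaler, scaled, length and n_features from the code above):
+import numpy as np
+
+n_forecast = 50
+current_window = scaled[-length:].reshape(1, length, n_features)
+forecast = []
+for _ in range(n_forecast):
+    # Predict one step ahead, then slide the window to include the prediction.
+    next_scaled = model.predict(current_window)[0]
+    forecast.append(next_scaled)
+    current_window = np.append(current_window[:, 1:, :], [[next_scaled]], axis=1)
+forecast = scaler.inverse_transform(np.array(forecast))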
+
+Thanks for your help!
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'convolutional-neural-networks', 'filters']"," Title: How can I implement 2D CNN filter with channelwise-bound kernel weights?Body: I would like to bind kernel parameters through channels/feature-maps for each filter. In a conv2d operation, each filter consists of HxWxC parameters I would like to have filters that have HxW parameters, but the same (HxWxC) form.
+The scenario I have is that I have 4 gray pictures of bulb samples (yielding similar images from each side), which I overlay as channels, but a possible failure that needs to be detected might only appear on one side (a bulb has 4 images and a single classification). The rotation of the object when the picture is taken is arbitrary. Now I solve this by shuffling the channels at training, but it would be more efficient if I could just bind the kernel parameters. Pytorch and Tensorflow solutions are both welcome.
+"
+"['reinforcement-learning', 'policy-gradients', 'reinforce']"," Title: Why does REINFORCE work at all?Body: Here's a screenshot of the popular policy-gradient algorithm from Sutton and Barto's book -
+
+I understand the mathematical derivation of the update rule - but I'm not able to build intuition as to why this algorithm should work in the first place. What really bothers me is that we start off with an incorrect policy (i.e. we don't know the parameters $\theta$ yet), and we use this policy to generate episodes and do consequent updates.
+Why should REINFORCE work at all? After all, the episode it uses for the gradient update is generated using the policy that is parametrized by parameters $\theta$ which are yet to be updated (the episode isn't generated using the optimal policy - there's no way we can do that).
+I hope that my concern is clear and I request y'all to provide some intuition as to why this works! I suspect that, somehow, even though we are sampling an episode from the wrong policy, we get closer to the right one after each update (monotonic improvement). Alternatively, we could be going closer to the optimal policy (optimal set of parameters $\theta$) on average.
+So, what's really going on here?
+"
+"['neural-networks', 'ai-design']"," Title: Given two neural networks that compute two functions $f(x)$ and $g(x)$, how can I create a neural network that computes $f(x)g(x)$?Body: I have two functions $f(x)$ and $g(x)$, and each of them can be computed with a neural network $\phi_f$ and $\phi_g$.
+My question is, how can I write a neural net for $f(x)g(x)$?
+So, for example, if $g(x)$ is constant and equal to $c$ and $\phi_f = ((A_1,b_1),...(A_L,b_L))$, then $\phi_{fg} = ((A_1,b_1),...,(cA_L,cb_L))$.
+Actually, I need to show it for $f(x)=x$ and $g(x)=x^2$, if this makes things easier.
+"
+"['machine-learning', 'explainable-ai', 'black-box']"," Title: Black Box Explanations: Using LIME and SHAP in pythonBody: Recently, I came across the paper Robust and Stable Black Box Explanations, which discusses a nice framework for global model-agnostic explanations.
+I was thinking to recreate the experiments performed in the paper, but, unfortunately, the authors haven't provided the code. The summary of the experiments are:
+
+- use LIME, SHAP and MUSE as baseline models, and compute fidelity score on test data. (All the 3 datasets are used for classification problems)
+
+- since LIME and SHAP give local explanations, for a particular data point, the idea is to use K points from the training dataset, and create K explanations using LIME. LIME is supposed to return a local linear explanation. Now, for a new test data point, we find the nearest point among the K points used earlier and use the corresponding explanation to classify this new point.
+
+- measure the performance using the fidelity score (% of points for which $E(x) = B(x)$), where $E(x)$ is the prediction given by the explanation for the point and $B(x)$ is the classification of the point by the black box.
+
+
+Now, the issue is, I am using LIME and SHAP packages in Python to achieve the results on baseline models.
+However, I am not sure how I'll get a linear explanation for a point (one from the set K), and use it to classify a new test point in the neighborhood.
+Every tutorial on YouTube and Medium discusses visualizing the explanation for a given point, but none talks about how to get the linear model itself and use it for newer points.
+"
+"['hyperparameter-optimization', 'restricted-boltzmann-machine']"," Title: Best/quickest approach for tuning the hyperparameters of a restricted boltzmann machineBody: I have an RBM model which takes extremely long to train and evaluate because of the large number of free parameters and the large amount of input data. What would be the most efficient way of tuning its hyperparameters (batch size, number of hidden units, learning rate, momentum and weight decay)?
+"
+"['reinforcement-learning', 'comparison', 'q-learning', 'a-star']"," Title: What are the differences between Q-Learning and A*?Body: Q-learning seems to be related to A*. I am wondering if there are (and what are) the differences between them.
+"
+"['reinforcement-learning', 'comparison', 'markov-decision-process', 'multi-armed-bandits', 'contextual-bandits']"," Title: Can you convert a MDP problem to a Contextual Multi-Arm Bandits problem?Body: I'm trying to get a better understanding of Multi-Arm Bandits, Contextual Multi-Arm Bandits and Markov Decision Process.
+Basically, Multi-Arm Bandits is a special case of Contextual Multi-Arm Bandits where there is no state(features/context). And Contextual Multi-Arm Bandits is a special case of Markov Decision Process, where there is only one state (features, but no transitions).
+However, since MDP has Markov property, I wonder if every MDP problem can also be converted into a Contextual Multi-Arm Bandits problem, if we simply treat each state as a different input context (features)?
+"
+"['neural-networks', 'recurrent-neural-networks', 'backpropagation']"," Title: Can the normal equation be used to optimise the RNN's weights?Body: I have made an RNN from scratch in Tensorflow.js. In order to update my weights (without needing to calculate the derivatives), I thought of using the normal equation to find the optimal values for my RNN's weights. Would you recommend this approach and if not why?
+"
+"['reinforcement-learning', 'actor-critic-methods', 'off-policy-methods']"," Title: Learning only using off-policy samplesBody: When training policies, is there a reason we need on-policy samples? For expensive simulations, it makes sense to try and reuse samples. Say we're interested in hyperparameter tuning. Can we collect a bunch of episodes using randomly sampled actions (or maybe by following an old policy) one time, and train multiple policies using this set of samples to find the most effective hyperparameters? Every time we train a new policy, does it make sense to replay all the episodes generated by the previous policy? I'm mostly interested in actor-critic methods.
+"
+"['reinforcement-learning', 'keras', 'ddpg', 'control-problem']"," Title: Using DDPG for control in multi-dimensional continuous action space?Body: I am relatively new to reinforcement learning, and I am trying to implement a reinforcement learning algorithm that can do continuous control in a custom environment. The state of the environment is composed of 5 sequential observations of 11 continuous values that represent cells (making the observation space 55 continuous values total), and the action is 11 continuous values representing multipliers of those cell totals.
+After preliminary research, I decided to use Deep Deterministic Policy Gradient (DDPG) as my control algorithm because of its ability to deal with both continuous states and actions. However, most of the examples, including the one that I am basing my implementation off of, have only a single continuously valued action as the output. I have tried to naively change the agent network from outputting a single value to outputting a vector of values, but the agent does not improve at all, and the set of outputs seems to split into two groups near either the maximum value or the minimum value (I believe the tanh activation on the output has something to do with it) with the values in those groups changing in unison.
+I have two questions about my problems.
+
+- First, is it even possible to use DDPG for multi-dimensional continuous action spaces? My research leads me to believe it is, but I have not found any code examples to learn from and many of the papers I have read are near the limit of my understanding in this area.
+
+- Second, why might my actor network be outputting values clustered near its max/min values, and why would the values in either cluster all be the same?
+
+
+Again, I am fairly new to reinforcement learning, so any advice or recommendations would be greatly appreciated, thanks.
+"
+"['recurrent-neural-networks', 'data-preprocessing', 'time-series', 'feature-extraction', 'feature-engineering']"," Title: How to deal with Unix timestamps features of sequences, which will be classified with RNNs?Body: I want to use RNN for classifying whole sequences of events, generated by website visitors. Each event has some categorical properties and a Unix timestamp:
+sequence1 = [{'timestamp': 1597501183, 'some_field': 'A'}, {'timestamp': 1597681183, 'some_field': 'B'}]
+sequence2 = [{'timestamp': 1596298782, 'some_field': 'B'}]
+sequence3 = [{'timestamp': 1596644362, 'some_field': 'A'}, {'timestamp': 1596647951, 'some_field': 'C'}]
+
+Unfortunately, they can't be treated as classic time series, because they're of variable length and irregular, so timestamps contain essential information and cannot be ignored. While categorical features can be one-hot encoded or made into embeddings, I'm not sure what to do with the timestamps. It doesn't look like a good idea to use them raw. I've come up with two options so far:
+
+- Subtract the minimum timestamp from every timestamp in the sequence, so that all sequences start at 0. But in this case the numbers can still be high, because the sequences run over a month.
+- Use offsets from previous event instead of absolute timestamps.
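+A minimal sketch of both options, using the sequences above (the exact scaling afterwards is still open):
+def to_relative(seq):
+    # Option 1: shift so every sequence starts at 0 (seconds since the first event).
+    t0 = seq[0]['timestamp']
+    return [e['timestamp'] - t0 for e in seq]
+
+def to_deltas(seq):
+    # Option 2: offset from the previous event (0 for the first one).
+    ts = [e['timestamp'] for e in seq]
+    return [t - prev for prev, t in zip([ts[0]] + ts[:-1], ts)]
+
+print(to_relative(sequence3))  # [0, 3589]
+print(to_deltas(sequence3))    # [0, 3589]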
+
+I'm wondering if there are common ways to deal with this? I haven't found much on this subject.
+"
+"['reinforcement-learning', 'deep-learning', 'dqn', 'hyperparameter-optimization', 'hyper-parameters']"," Title: How should I choose the target's update frequency in DQN?Body: I have been dealing with a problem that I'm trying to solve with DQN. A general question that I have is regarding the target's update frequency. How should it change? Depending on what factor do we increase or decrease this hyperparameter?
+"
+"['computer-vision', 'terminology', 'object-detection', 'linear-algebra']"," Title: What do we mean by 'principal angle between subspaces'?Body: I came across the term 'principal angle between subspaces' as a tool for comparing objects in images. All material that I found on the internet seems to deal with this idea in a highly mathematical way and I couldn't understand the real physical meaning behind the term.
+I have some knowledge of linear algebra. Any help to understand the physical significance of this term and its application in object recognition would be appreciated.
+"
+"['intelligent-agent', 'environment']"," Title: Need some reviews in PEAS descriptionsBody: Here is the Question:
+Describe the PEAS descriptions for the following agents:
+a) A grocery store scanner that digitally scans a fruit or vegetable and
+identifies it.
+b) A GPS system for an automobile. Assume that the destination has been
+preprogrammed and that there is no ongoing interaction with the driver.
+However, the agent might need to update the route if the driver misses a turn.
+c) A credit card fraud detection agent that monitors an individual’s transactions
+and reports suspicious activity.
+d) A voice activated mobile-phone assistant
+For each of the agents described above, categorize it with respect to the six dimensions
+of task environments as described on pages 41-45 (Section 2.3.2 of AIMA). Be sure
+that your choices accurately reflect the way you have specified your environment,
+especially the sensors and actuators. Give a short justification for each property
+Here is what I think the answers to the above questions might be. Can you correct me if I answered anything wrong?
+
+"
+"['neural-networks', 'accuracy', 'loss']"," Title: Why does loss and accuracy for a multi label classification ann does not change overtime?Body: I have run into a strange behavior of my multi label classification ANN
+from tensorflow.keras.models import Sequential
+from tensorflow.keras.layers import Dense, Dropout, BatchNormalization
+
+model = Sequential()
+model.add(Dense(6, input_shape=(input_size,), activation='elu'))
+#model.add(BatchNormalization(axis=-1))
+model.add(Dropout(0.2))
+#model.add(BatchNormalization(axis=-1))
+model.add(Dense(6, activation='elu'))
+model.add(Dropout(0.2))
+#model.add(BatchNormalization(axis=-1))
+model.add(Dense(6, activation='elu'))
+model.add(Dropout(0.2))
+
+# model.add(keras.layers.BatchNormalization(axis=-1))
+model.add(Dense(6, activation='sigmoid'))
+model.compile(loss='binary_crossentropy',
+ optimizer='nadam',
+ metrics=['accuracy'])
+history = model.fit(X_train, Y_train,batch_size=64 ,epochs=300,
+ validation_data = (X_test, Y_test), verbose=2)
+
+The result is quite strange; I have a feeling that my model cannot improve any more. Why do the loss and the accuracy not change over time?
+P.S. For clarification, I have 6 outputs and the value of each output is 0 or 1, that is:
+output 1: can be 0 or 1
+output 2: can be 0 or 1
+output 3: can be 0 or 1
+output 4: can be 0 or 1
+output 5: can be 0 or 1
+output 6: can be 0 or 1
+
+"
+"['reinforcement-learning', 'ddpg', 'hindsight-experience-replay', 'state-spaces', 'pybullet']"," Title: What do the state features of KukaGymEnv represent?Body: I trying to use DDPG augmented with Hindsight Experience Replay (HER) on pybullet's KukaGymEnv.
+To formulate the feature vector for the goal state, I need to know what the features of the state of the environment represent. To be precise, a typical state vector of KukaGymEnv is an object of the numpy.ndarray class with a shape of (9,).
+What do each of these 8 elements represent, and how can I formulate the goal state vector for this environment? I tried going through the source code of the KukaGymEnv, but was unable to understand anything useful.
+"
+"['reinforcement-learning', 'proofs', 'policies', 'deterministic-policy']"," Title: Do we assume the policy to be deterministic when proving the optimality?Body: In reinforcement learning, when we talk about the principle of optimality, do we assume the policy to be deterministic?
+"
+"['deep-learning', 'autoencoders', 'hyperparameter-optimization', 'hidden-layers', 'hyper-parameters']"," Title: How to determine the number of hidden layers and units of a deep auto-encoder?Body: I am using a deep autoencoder for my problem. However, the way I choose the number of hidden layers and hidden units in a hidden layer is still based on my feeling.
+The size of the model, that is, the number of hidden layers and the number of units per layer, should be neither too large nor too small, so that the model can capture useful features from the dataset.
+So, how do I choose a size for the deep autoencoder model that is good enough?
+"
+"['natural-language-processing', 'python', 'text-summarization']"," Title: What are the best techniques to perform text simplification?Body: I'm evaluating the state of the art techniques to translate legal text to simple text, what are the best approaches for a non-English language (Portuguese)?
+"
+"['machine-learning', 'comparison', 'transfer-learning', 'domain-adaptation']"," Title: What's the difference between domain randomization and domain adaptation?Body: In my understanding, domain randomization is one method of diversifying the dataset to achieve a better shot at domain adaptation. Am I wrong?
+"
+['neural-networks']," Title: How are neural networks built in practice?Body: I am curious to know how neural networks are built in practice.
+Are they hand coded using weight matrices, activation functions etc OR are there ways to build the NN by mentioning the number of layers, number of neurons in each layer, activation to be used, etc as parameters?
+Similar question on training, once built is there a ‘fit’ method or does the training need to be hand coded?
+Any reference for understanding these basics will be of great help.
+"
+"['convolutional-neural-networks', 'convolution', 'alphago', 'filters', 'convolutional-layers']"," Title: What does ""convolve k filters"" mean in the AlphaGo paper?Body: On page 27 of the DeepMind AlphaGo paper appears the following sentence:
+
+The first hidden layer zero pads the input into a $23 \times 23$ image, then convolves $k$ filters of kernel size $5 \times 5$ with stride $1$ with the input image and applies a rectifier nonlinearity.
+
+What does "convolves $k$ filters" mean here?
+Does it mean the following:
+
+The first hidden layer is a convolutional layer with $k$ groups of $(19 \times 19)$ neurons, where there is a kernel of $(5 \times 5 \times numChannels + 1)$ parameters (input weights plus a bias term) used by all the neurons of each group. $numChannels$ is 48 (the number of feature planes in the input image stack).
+All $(19 \times 19 \times k)$ neurons' outputs are available to the second hidden layer (which happens to be another convolutional layer, but could in principle be fully connected).
+
+?
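+As an illustration of the shapes this reading implies, here is a small Keras sketch (my own check, not AlphaGo's code; the 48 input planes and k = 192 are taken from the paper):
+import tensorflow as tf
+
+inputs = tf.keras.Input(shape=(19, 19, 48))            # 48 feature planes
+x = tf.keras.layers.ZeroPadding2D(padding=2)(inputs)   # 19x19 -> 23x23
+x = tf.keras.layers.Conv2D(filters=192, kernel_size=5, strides=1, activation='relu')(x)
+print(x.shape)  # (None, 19, 19, 192): one 19x19 plane per filter, each filter having 5*5*48 + 1 parameters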
+"
+"['deep-learning', 'computer-vision', 'tensorflow', 'data-preprocessing']"," Title: How to take the optimal batch_size for training a model?Body: I have an image dataset, which is composed of 113695 images for training and 28424 images for validation. Now, when I use ImageDataGenerator
+and flow_from_dataframe, it has the parameter batch_size.
+How can I choose the correct number for batch_size, given that both dataset sizes cannot be divided by the same number? Should I drop four images from the validation data to make the batch_size 5? Or is there another way?
+"
+"['reinforcement-learning', 'dqn', 'environment']"," Title: How should I compute the target for updating in a DQN at the terminal state if I have pseudo-episodes?Body: I'm training a DQN in a real environment where I do not have a natural terminal state, so I've built the episode in an artificial way (i.e. it starts in a random condition and after T steps it ends). My question is about the terminal state: should I consider it when I have to compute $y$ (so using only the reward) or not?
+"
+"['deep-learning', 'generative-adversarial-networks']"," Title: Does a better discriminator in GANs mean better sample generation by the generator?Body: Since the discriminator defines how the generator is updated, then building a discriminator with a higher number of parameters/more layers should lead to a better quality of generated samples. So, assuming that it won't lead to overwhelming the generator (discriminator loss toward 0) or mode collapse, when engineering a GAN, I should build a discriminator as good as possible?
+"
+"['actor-critic-methods', 'exploration-exploitation-tradeoff']"," Title: How to modify the Actor-Critic policy gradient algorithm to perform Safe exploration in Reinforcement LearningBody: I am trying to implement safe exploration technique in [Ref.1]. I am using Soft Actor-Critic algorithm to teach an agent to introduce a bias between 0 and 1 to a specific state of interest in my environment.
+I would like to ask for your help in order to modify the critic update equation (which is originally based on the return or the RL cost function) from this:
+
+which is based on the return functions:
+
+
+to be based on the following cost function to make the RL objective risk-sensitive and avoid large-valued actions (bias values) at the start of the agent's learning,
+
+How can I include the second part of the cost function (in which the variance of the reward is evaluated) in the update equation?
+[Ref.1] Heger, M., Consideration of risk in reinforcement learning. 11th International Machine Learning Conference (1994)
+"
+"['game-ai', 'chess', 'go']"," Title: Were AI strategies identified at go or starcraft games and how?Body: When an AI is trained to play an opposing game, such as chess or go, it can become very strong.
+I have read in an article (non-scientific) the claim that AI strategies were identified by scientists while an AI was bound to play go games, as well as starcraft games. However it did not tell what these strategies actually were, how they were identified, nor did it explain the configuration in which AI played (AI vs AI? AI vs human?)
+Can someone explain it to me? I am familiar with go, not with starcraft, so an explanation about go is appreciated.
+I also note that the chess game is not mentioned. Is there any specific feature for chess that makes them inappropriate for strategies? Or is it the behavior of an AI in the chess game that does not allow to identify strategy?
+I understand there are plenty of definitions for strategy, and the article did not give one. So let's focus on the following meaning: strategy is a group of principles that tell which areas are important to fight for and which are not. A strategy gives long-term rewards, as opposed to tactics, which give short-term rewards obtained thanks to calculation on a specific issue. With this definition, the game of go stands as a strategic game with a few well-known tactical situations, such as line versus line.
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'supervised-learning']"," Title: How can we teach a neural net to make arbitrary data associations?Body: Let's say I have pairs of keys and values of the form $(x_1, y_1), \dots, (x_N, y_N)$. Then I give a neural net a key and a value, $(x_i, y_i)$. For example, $x_i$ could be $4$ and $y_i$ could be $3$, but this does not have to be the case.
+Is there a way to teach the neural net to output the $y_i$ variable every time it receives the corresponding $x_i$?
+By the way, how do our brains perform this function?
+"
+"['neural-networks', 'convolutional-neural-networks', 'interpolation']"," Title: What's the nearest neighbor algorithm used for upsampling?Body:
+Additionally, by default, the UpSampling2D layer will use a nearest neighbor algorithm to fill in the new rows and columns. This has the effect of simply doubling rows and columns, as described and is specified by the ‘interpolation‘ argument set to ‘nearest‘. Alternately, a bilinear interpolation method can be used which draws upon multiple surrounding points. This can be specified via setting the ‘interpolation‘ argument to ‘bilinear‘.
+
+How exactly does the nearest neighbor algorithm mentioned above work? Also, what does interpolation mean in this context (nearest and bilinear)?
+Source: Section on Upsampling2D layer
+"
+"['machine-learning', 'recommender-system', 'linear-algebra', 'singular-value-decomposition']"," Title: Human intuition behind SVD in case of recommendation systemBody: This does not answer my question. I struggled very hard to understand the SVD from a linear-algebra point of view. But in some cases I failed to connect the dots. So, I started to see all the application of SVD. Like movie recommendation system, Google page ranking system, etc.
+Now in the case of movie recommendation system, what I had as a mental picture is...
+The SVD is a technique that falls under collaborative filtering. And what the SVD does is factor a big data matrix into two smaller matrix. And as an input to the SVD we give an incomplete data matrix. And SVD gives us a probable complete data matrix. Here, in the case of a movie recommendation system we try to predict ratings of users. Incomplete input data matrix means some users didn't give ratings to certain movies. So the SVD will help to predict users' ratings. I still don't know how the SVD breaks down a large matrix to smaller pieces. I don't how the SVD determines the dimensions of the smaller matrices.
+It would be helpful if anyone could judge my understanding. And I will very much appreciate any resources which can help me to understand the SVD from scratch to its application to Netflix recommendation systems. Also for the Google Page ranking system or for other applications.
+I am looking forward to seeing an explanation more from human-intuition level and from a linear-algebra point of view. Because I am interested in using this algorithm in my research, I need to understand as soon as possible: how does the SVD work deep down from the core?
+"
+"['machine-learning', 'computer-vision', 'objective-functions', 'papers']"," Title: How to calculate the attention loss in the paper ""Tell Me Where to Look: Guided Attention Inference Network""?Body: I have been reading the research paper Tell Me Where to Look: Guided Attention Inference Network.
+In this paper, they calculate the attention loss, but I didn't understand how to calculate it. Do we have to calculate it like outcome[c]? If so, then why do the arrows from the middle FC and the last FC connect with each other?
+Here is the image:
+
+"
+"['machine-learning', 'activation-functions', 'autoencoders', 'hyperparameter-optimization', 'hyper-parameters']"," Title: What is the best activation function for the embedding layer in a deep auto-encoder?Body: I am designing a deep autoencoder for graph embedding (exactly node embedding) following this paper SDNE. In the original paper, they used the sigmoid activation for all hidden layers in the autoencoder model, even for the embedding layer.
+However, I think the embedding layer should use the tanh activation and the reconstruction layer should use the ReLU activation, because the embedding is then in the range $[-1, 1]$ and the reconstruction layer is in the range $[0, x]$, which generates better results due to a larger range for the representation and suits a directed graph. The range $[0,1]$ from the sigmoid, instead, would lead to a loss of embedding information.
+So, what is the best activation function for deep autoencoders to capture good information about the structure of the graph?
+"
+"['convolutional-neural-networks', 'terminology', 'tensorflow', 'keras', '2d-convolution']"," Title: Why is the convolution layer called Conv2D?Body: When I build a convolution layer for image processing, the filter parameters should have 3 dimensions, (filter_length, filter_width, color_depth)
is that correct?
+Why is this convolution layer called Conv2D
? Where does the 2
come from?
+"
+"['natural-language-processing', 'tensorflow', 'recurrent-neural-networks', 'long-short-term-memory', 'word-embedding']"," Title: How is dropout applied to the embedding layer's output?Body: model = tf.keras.Sequential([
+ tf.keras.layers.Embedding(1000, 16, input_length=20),
+ tf.keras.layers.Dropout(0.2), # <- How does the dropout work?
+ tf.keras.layers.Conv1D(64, 5, activation='relu'),
+ tf.keras.layers.MaxPooling1D(pool_size=4),
+ tf.keras.layers.LSTM(64),
+ tf.keras.layers.Dense(1, activation='sigmoid')
+])
+
+I can understand when dropout is applied between Dense layers, where it randomly drops units and prevents the former layer's neurons from updating parameters. I don't understand how dropout works after an embedding layer.
+Let's say the output shape of the Embedding layer is (batch_size, 20, 16), or simply (20, 16) if we ignore the batch size. How is dropout applied to the embedding layer's output?
+Randomly dropout rows or columns?
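+A small experiment makes the behaviour visible (a sketch; as far as I can tell, plain Dropout masks individual entries of the (20, 16) output, while SpatialDropout1D would drop whole embedding dimensions):
+import tensorflow as tf
+
+emb = tf.keras.layers.Embedding(1000, 16)
+drop = tf.keras.layers.Dropout(0.2)
+
+tokens = tf.constant([[1, 2, 3, 4, 5] * 4])   # shape (1, 20)
+out = drop(emb(tokens), training=True)        # shape (1, 20, 16)
+print(out.shape)
+# Entries of the 20x16 matrix are zeroed independently,
+# not entire rows (timesteps) or entire columns (embedding dimensions).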
+"
+"['algorithm', 'optimization', 'search']"," Title: Which method of tree searching should be used for this board game?Body: Suppose the following properties of a board game:
+
+- High branching factor in the beginning of the game (~500) which slowly tends towards 0 at the end of the game
+
+- Evaluation of any given board state isn't hard to create and can be quite accurate
+
+
+And that we want to create an AI to play such board game.
+
+What method of tree searching should be applied for the AI?
+
+Considering the absurd branching factor (at least for most of the game), the Monte Carlo method of search is appealing. The problem is that, from what I've seen, Monte Carlo search methods are usually used on games with both a high branching factor and no easy evaluation function. However, that is not the case for this board game, as previously stated.
+I'm simply curious how this property of evaluation should influence my decision. For example: Should I replace simulations and playouts with an evaluation function? At that point, would alpha-beta pruning minimax work better? Is there some hybrid which would be optimal?
+"
+"['deep-learning', 'objective-functions', 'generative-adversarial-networks', 'expectation']"," Title: Why is the mean used to compute the expectation in the GAN loss?Body: From Goodfellow et al. (2014), we have the adversarial loss:
+$$ \min_G \, \max_D V (D, G) = \mathbb{E}_{x \sim p_{data}(x)} \, [\log \, D(x)] + \, \mathbb{E}_{z \sim p_z(z)} \, [\log \, (1 - D(G(z)))] \, \text{.} \quad$$
+In practice, the expectation is computed as a mean over the minibatch. For example, the discriminator loss is:
+$$
+\nabla_{\theta_{d}} \frac{1}{m} \sum_{i=1}^{m}\left[\log D\left(\boldsymbol{x}^{(i)}\right)+\log \left(1-D\left(G\left(\boldsymbol{z}^{(i)}\right)\right)\right)\right]
+$$
+My question is: why is the mean used to compute the expectation? Does this imply that $p_{data}$ is uniformly distributed, since every sample must be drawn from $p_{data}$ with equal probability?
+The expectation, expressed as an integral, is:
+$$
+\begin{aligned}
+V(G, D) &=\int_{\boldsymbol{x}} p_{\text {data }}(\boldsymbol{x}) \log (D(\boldsymbol{x})) d x+\int_{\boldsymbol{z}} p_{\boldsymbol{z}}(\boldsymbol{z}) \log (1-D(g(\boldsymbol{z}))) d z \\
+&=\int_{\boldsymbol{x}} p_{\text {data }}(\boldsymbol{x}) \log (D(\boldsymbol{x}))+p_{g}(\boldsymbol{x}) \log (1-D(\boldsymbol{x})) d x
+\end{aligned}
+$$
+So, how do we go from an integral involving a continuous distribution to summing over discrete probabilities, and further, that all those probabilities are the same?
+The best I could find from other StackExchange posts is that the mean is just an approximation, but I'd really like a more rigorous explanation.
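+For what it's worth, the approximation I am referring to is the plain Monte Carlo estimator: if $x^{(1)}, \dots, x^{(m)}$ are i.i.d. samples drawn from $p_{data}$, then
+$$\mathbb{E}_{x \sim p_{data}}[f(x)] = \int p_{data}(x) f(x) \, dx \approx \frac{1}{m} \sum_{i=1}^{m} f(x^{(i)}),$$
+and I would like to understand why this is justified here.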
+This question isn't exclusive to GANs, but is applicable to any loss function that is expressed mathematically as an expectation over some sampled distribution, which is not implemented directly via the integral form.
+(All equations are from the Goodfellow paper.)
+"
+"['natural-language-processing', 'data-preprocessing', 'word-embedding']"," Title: When to convert data to word embeddings in NLPBody: When training a network using word embeddings, it is standard to add an embedding layer to first convert the input vector to the embeddings.
+However, assuming the embeddings are pre-trained and frozen, there is another option. We could simply preprocess the training data prior to giving it to the model so that it is already converted to the embeddings. This will speed up training, since this conversion need only be performed once, as opposed to on the fly for each epoch.
+Thus, the second option seems better. But the first choice seems more common. Assuming the embeddings are pre-trained and frozen, is there a reason I might choose the first option over the second?
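+For concreteness, the two options I mean look roughly like this (a sketch with dummy data; embedding_matrix stands in for the pre-trained vectors):
+import numpy as np
+import tensorflow as tf
+
+vocab_size, emb_dim = 1000, 50
+embedding_matrix = np.random.rand(vocab_size, emb_dim).astype('float32')  # stand-in for pre-trained vectors
+X_train_ids = np.random.randint(0, vocab_size, size=(32, 20))             # token ids, shape (samples, seq_len)
+
+# Option 1: frozen embedding layer inside the model; the lookup runs on the fly every epoch.
+emb_layer = tf.keras.layers.Embedding(vocab_size, emb_dim,
+                                      weights=[embedding_matrix], trainable=False)
+
+# Option 2: convert the ids to vectors once, before training.
+X_train_vectors = embedding_matrix[X_train_ids]   # shape (32, 20, 50)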
+"
+"['neural-networks', 'applications']"," Title: Are neural networks really used apart from specific hi-tech organisations?Body: This is a generic question. Still posting it to get insights from experts in the field.
+I am interested in knowing if Neural Networks are used in general apart from specific hi-tech organizations.
+If so, which type of NN is used in which industry and for what purpose?
+"
+"['deep-learning', 'computer-vision']"," Title: How to train the images of various sizes?Body: I am practicing with an image dataset which is having different dimensions.
+If I simply crop and pad them to 1024x1024 (the smallest original image width is around 300, the largest is around 2400, and the widths and heights of the images are not the same), I am not getting good val_accuracy. It's just giving 49% accuracy.
+How to do image processing to these images because the brightness of the images is also changing. My task is to classify them into 5 classes.
+"
+['machine-learning']," Title: What Classification Algorithm Do I need to Use to Solve this Problem?Body: I am trying to solve the following problem it is to classify the the red points
+and green points in image 1 into two cases. The cluster of green or
+red points can be anywhere and there can be any number of green or red
+clusters; different coloured clusters do not mix or bleed into each other;
+at least one green and one red cluster always exists.
+An example of points to classify is given in image 1.
+So I guess there are two ways to do this
+
+- Classify them with boundaries as shown in image 2, with some algorithm, then use some post-processing step to link the separate classes that have the same colour.
+
+- Use some algorithm to directly find the classification boundaries as shown in image 3.
+
+
+So my question is: what algorithm or algorithms can I use for 1) and 2)?
+It seems it can be solved using the MLE algorithm in some way.
+
+
+
+"
+"['reinforcement-learning', 'deep-rl', 'policy-gradients', 'papers', 'ddpg']"," Title: How does the Ornstein-Uhlenbeck process work, and how it is used in DDPG?Body: In section 3 of the paper Continuous control with deep reinforcement learning, the authors write
+
+As detailed in the supplementary materials we used an Ornstein-Uhlenbeck process (Uhlenbeck & Ornstein, 1930) to generate temporally correlated exploration for exploration efficiency in physical control problems with inertia (similar use of autocorrelated noise was introduced in (Wawrzynski, 2015)).
+
+In section 7, they write
+
+For the exploration noise process we used temporally correlated noise in order to explore well in physical environments that have momentum. We used an Ornstein-Uhlenbeck process (Uhlenbeck & Ornstein, 1930) with θ = 0.15 and σ = 0.2. The Ornstein-Uhlenbeck process models the velocity of a Brownian particle with friction, which results in temporally correlated values centered around 0.
+
+In a few words, what is the Ornstein-Uhlenbeck process? How does it work? How exactly is it used in DDPG?
+I want to implement the Deep Deterministic Policy Gradient algorithm, and, in the initial actions, noise has to be added. However, I cannot understand how this Ornstein-Uhlenbeck process works. I have searched the internet, but I have not understood the information that I found.
+"
+"['neural-networks', 'ai-design', 'recurrent-neural-networks', 'evolutionary-algorithms', 'neuroevolution']"," Title: Choosing an AI method to recreate a given binary 2D imageBody: If the title wan not very clear, I want a method to take an input image like this,
+[[0, 0, 0, 0],
+ [1, 1, 1, 0],
+ [1, 1, 1, 0],
+ [0, 1, 1, 0]]
+
+and output the 2D coordinates of the 1
s of the image (So that I can recreate the image)
+The application is a robot creating the image using some kind of building blocks, placing one block after the other
+I want the output to be sequential because I need to reconstruct the input image pixel by pixel and there are some conditions on the image construction order (e.g. You cannot place a 1
somewhere when it is surrounded by 1
s)
+The image can change and the number of 1
s in the image too.
+
+- What is an appropriate AI method to apply in this case?
+- How should I feed the image to the network? (Will flattening it to 2D affect my need of an output order?)
+- Should I get the output coordinates one by one or as an ordered 2xN matrix?
+- If one by one, should I feed the same image for each output or the image without the
1
s already filled?
+
+
+I have tried to apply "NeuroEvolution of Augmenting Topologies" for this using neat-python but was unsuccessful. I am currently looking at RNNs but I am not sure if it is the best choice either.
+"
+"['neural-networks', 'overfitting', 'bias-variance-tradeoff']"," Title: Why are large models necessary when we have a limited number of training examples?Body: In Goodfellow et al. book Deep Learning chapter 12.1.4 they write
+
+These large models learn some function $f(x)$, but do so using many more parameters than are necessary for the task. Their size is necessary only due to the limited number of training examples.
+
+I am not able to understand this. Large models are expressive, but if you train them on few examples they should also overfit.
+So, what do the authors mean by saying large models are necessary precisely because of the limited number of training examples?
+This seems to go against the spirit of using more bias when training data is limited.
+"
+"['reinforcement-learning', 'deep-learning', 'q-learning', 'dqn', 'deep-neural-networks']"," Title: How is weighted average computed in Deep Q networksBody: I was going through the Sutton book and they said the update formula for Q learning comes from the weighted average of the returns
+I.e
+New estimate= old estimate +alpha*[returns- old estimate]
+So by the law of large numbers this will converge to the optimal true q value
+Now when we go to Deep Q networks,how exactly is the weighted average computed, all they simply did was try to reduce the error between the target and the estimate, and keep in mind this isn’t the true target, it’s just an unbiased estimate,since it’s an unbiased estimate how is the weighted average computed , which is the expectation?
+Can someone help me out here??
+Thanks in advance
+"
+"['neural-networks', 'terminology', 'objective-functions']"," Title: Is the error function known or unknown?Body: What is the error function? Is it the same as the cost function?
+Is the error function known or unknown?
+When I get the outcome of a neural net I compare it with the target value. The difference between both is called the error. When I get mutiple error values e.g. when I pass a batch through the NN I will get as many error value as the size of my batch. Is the error function the plot of the points? If yes, to me the error function would be unknown. I would only know some point on the graph of the error function.
+"
+"['reinforcement-learning', 'deep-learning', 'dqn', 'reference-request', 'discount-factor']"," Title: Combine DQN with the Average Reward settingBody: I have to deal with a non-episodic task, where there is addittionally a continuous state space and more specifically in each time step there is always a new state that has never been seen before. I want to use DQN algorithm. As it is referred in Sutton's book (Chapter 10), the average reward setting, that is the undiscounted setting with differential function, should be preferred for non-episodic tasks with function approximation.
+(a) Are there any reference papers that use DQN with the average reward setting?
+(b) Why should the classic discounted setting (with no average reward) fail in such tasks, comparing to the average reward setting, taking into account that the highest reward that my agent can gain in a time step is 1.0 and thus the max $G_t = \frac{1}{1-γ}$ and not infinite ?
+"
+"['machine-learning', 'reinforcement-learning', 'deep-learning', 'dqn', 'deep-rl']"," Title: How is exponential moving average computed in deep Q networks?Body: In normal Q-learning, the update rule is an implementation of the exponential moving average, which then converges to the optimal true Q values. However, looking at DQN, how exactly is the exponential moving average implemented in deep networks?
+"
+"['machine-learning', 'training', 'models', 'testing']"," Title: How can I be sure that the final model, trained on all data, is correct?Body: The 'by the book' method of delivering final machine learning models is to include all data in the final training (including validation and test sets). To check robustness of my model I use randomly chosen population for training and validation sets with each training (no set random seed). The results on validation and then test sets are pretty satisfactory for my case however they are always different each time, precision spans between 0.7 and 0.9. This is due to fact that each time different data points fall to set with which model is trained.
+My question is: how do I know that final training will also generate good model and how to estimate its precision when I do not have anymore unseen data?
+"
+"['terminology', 'variational-autoencoder', 'notation', 'random-variable', 'bayesian-statistics']"," Title: What does the notation $\mathcal{N}(z; \mu, \sigma)$ stand for in statistics?Body: I know that the notation $\mathcal{N}(\mu, \sigma)$ stands for a normal distribution.
+But I'm reading the book "An Introduction to Variational Autoencoders" and in it, there is this notation:
+$$\mathcal{N}(z; 0, I)$$
+What does it mean?
+picture of the book:
+
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'implementation']"," Title: What are the variables that need to be saved and loaded, so that a DQN model starts where it left off?Body: TensorFlow allows users to save the weights and the model architecture, however, that will be insufficient unless the values of certain other variables are also stored. For instance, in DQN, if $\epsilon$ is not stored the model will start exploring from scratch and a new model will have to be trained.
+What are the variables that need to be saved and loaded, so that a DQN model starts where it left off? Some pseudocode will be highly appreciated!
+Here is my current model with code
+## Slightly modified from the following repository - https://github.com/gsurma/cartpole
+
+from __future__ import absolute_import, division, print_function, unicode_literals
+
+import os
+import random
+import gym
+import numpy as np
+import tensorflow as tf
+
+from collections import deque
+from tensorflow.models import Sequential
+from tensorflow.layers import Dense
+from tensorflow.optimizers import Adam
+
+
+ENV_NAME = "CartPole-v1"
+
+GAMMA = 0.95
+LEARNING_RATE = 0.001
+
+MEMORY_SIZE = 1000000
+BATCH_SIZE = 20
+
+EXPLORATION_MAX = 1.0
+EXPLORATION_MIN = 0.01
+EXPLORATION_DECAY = 0.995
+
+checkpoint_path = "training_1/cp.ckpt"
+
+
+class DQNSolver:
+
+ def __init__(self, observation_space, action_space):
+ self.exploration_rate = EXPLORATION_MAX
+
+ self.action_space = action_space
+ self.memory = deque(maxlen=MEMORY_SIZE)
+
+ self.model = Sequential()
+ self.model.add(Dense(24, input_shape=(observation_space,), activation="relu"))
+ self.model.add(Dense(24, activation="relu"))
+ self.model.add(Dense(self.action_space, activation="linear"))
+ self.model.compile(loss="mse", optimizer=Adam(lr=LEARNING_RATE))
+
+ def remember(self, state, action, reward, next_state, done):
+ self.memory.append((state, action, reward, next_state, done))
+
+ def act(self, state):
+ if np.random.rand() < self.exploration_rate:
+ return random.randrange(self.action_space)
+ q_values = self.model.predict(state)
+ return np.argmax(q_values[0])
+
+ def experience_replay(self):
+ if len(self.memory) < BATCH_SIZE:
+ return
+ batch = random.sample(self.memory, BATCH_SIZE)
+ for state, action, reward, state_next, terminal in batch:
+ q_update = reward
+ if not terminal:
+ q_update = (reward + GAMMA * np.amax(self.model.predict(state_next)[0]))
+ q_values = self.model.predict(state)
+ q_values[0][action] = q_update
+ self.model.fit(state, q_values, verbose=0)
+ self.exploration_rate *= EXPLORATION_DECAY
+ self.exploration_rate = max(EXPLORATION_MIN, self.exploration_rate)
+
+
+def cartpole():
+ env = gym.make(ENV_NAME)
+ #score_logger = ScoreLogger(ENV_NAME)
+ observation_space = env.observation_space.shape[0]
+ action_space = env.action_space.n
+ dqn_solver = DQNSolver(observation_space, action_space)
+ checkpoint = tf.train.get_checkpoint_state(os.getcwd()+"/saved_networks")
+ print('checkpoint:', checkpoint)
+ if checkpoint and checkpoint.model_checkpoint_path:
+ dqn_solver.model = keras.models.load_model('cartpole.h5')
+ dqn_solver.model = model.load_weights('cartpole_weights.h5')
+
+ run = 0
+ i = 0
+ while i<5:
+ i = i + 1
+ #total = 0
+ run += 1
+ state = env.reset()
+ state = np.reshape(state, [1, observation_space])
+ step = 0
+ while True:
+ step += 1
+ #env.render()
+ action = dqn_solver.act(state)
+ state_next, reward, terminal, info = env.step(action)
+ #total += reward
+ reward = reward if not terminal else -reward
+ state_next = np.reshape(state_next, [1, observation_space])
+ dqn_solver.remember(state, action, reward, state_next, terminal)
+ state = state_next
+ dqn_solver.model.save('cartpole.h5')
+ dqn_solver.model.save_weights('cartpole_weights.h5')
+ if terminal:
+ print("Run: " + str(run) + ", exploration: " + str(dqn_solver.exploration_rate) + ", score: " + str(step))
+ #score_logger.add_score(step, run)
+ break
+ dqn_solver.experience_replay()
+
+
+if __name__ == "__main__":
+ cartpole()
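+As a rough illustration of the kind of saving/loading I have in mind (only a sketch building on the script above; the helper names are mine, not from the linked repository), something like this could persist the exploration rate and the replay memory next to the weights:
+def save_agent(solver, prefix='cartpole'):
+    import pickle
+    solver.model.save(prefix + '.h5')
+    with open(prefix + '_state.pkl', 'wb') as f:
+        pickle.dump({'exploration_rate': solver.exploration_rate,
+                     'memory': list(solver.memory)}, f)
+
+def load_agent(solver, prefix='cartpole'):
+    import pickle
+    solver.model = tf.keras.models.load_model(prefix + '.h5')
+    with open(prefix + '_state.pkl', 'rb') as f:
+        state = pickle.load(f)
+    solver.exploration_rate = state['exploration_rate']
+    solver.memory = deque(state['memory'], maxlen=MEMORY_SIZE)
+But I am not sure this is the complete list of what a DQN needs in order to truly resume training.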
+
+"
+"['natural-language-processing', 'word-embedding', 'natural-language-understanding']"," Title: How homographs is an NLP task can be treated?Body:
+A homograph - is a word that shares the same written form as another word but has a different meaning.
+
+They can be even different parts of speech. For example:
+
+- close(verb) - close(adverb)
+- lead(verb) - lead(noun)
+- wind(noun) - wind(verb)
+
+And there is rather a big list https://en.wikipedia.org/wiki/List_of_English_homographs.
+As far as I understand, after processing the text data in any conventional way, lemmatization, building an embedding, these words, despite having different meaning, and appearing in different contexts, would be absolutely the same for the algorithm, and in the end we would get some averaged context between two or more meainings of the word. And this embedding would be meaningless.
+How is this problem treated or these words are regarded to be too rare to have a significant impact on the quality of resulting embedding?
+I would appreciate comments and references to the papers or sources
+"
+"['neural-networks', 'reinforcement-learning', 'dqn', 'deep-rl']"," Title: Why scaling reward drastically affects performance?Body: I have devised an gridworld-like environment where a RL agent is tasked to cover all the blank squares by passing through them. Possible actions are up, down, left, right. The reward scheme is the following: +1 for covering a blank cell, and -1 per step. So, if the cell was colored after a step, the summed reward is (+1) + (-1) = 0, otherwise it is (0) + (-1) = -1. The environment is a tensor whose layers encode the positions to be covered and the position of the agent.
+Under this reward scheme, DQN fails to find a solution (implementation: stable_baselines3). However, when the rewards are reduced by a factor of 10 to +0.1/-0.1, then the algorithm learns an optimal path.
+I wonder why that happens. I have tried reducing learning rate and gradient clipping (by norm) for the first case to see whether it will improve the learning, but it does not.
+The activation function used is ReLU
+"
+"['natural-language-processing', 'comparison', 'transformer', 'bert']"," Title: How is BERT different from the original transformer architecture?Body: As far as I can tell, BERT is a type of Transformer architecture. What I do not understand is:
+
+- How is Bert different from the original transformer architecture?
+
+- What tasks are better suited for BERT, and what tasks are better suited for the original architecture?
+
+
+"
+"['computer-vision', 'image-recognition', 'object-recognition']"," Title: Why isn't medical imaging improving faster with AI?Body: Researcher here. I just read this piece about medical imaging ai with object recognition and it left me wondering why there are still 100,000+ deaths a year in the US due to misdiagnosis - anyone out there working on these problems? Vinod Khosla famously said that he'd rather get surgery from AI than from a human - so where are we at with that?
+"
+"['computer-vision', 'image-processing', 'gpu', 'edge-detection', 'canny-edge-detector']"," Title: How can traditional edge detection algorithms be implemented on a GPU?Body: How can edge detection algorithms, which are not based on deep learning, such as the canny edge detector, be implemented on a GPU? For example, how are non-edge pixels removed from an image once it detects all the edges?
+The reason why I am asking this question is that when writing data to memory the GPU cores can't see what memory locations the other cores are writing to, so I am interested in knowing how traditional edge detectors can be implemented in on GPU.
+"
+"['machine-learning', 'comparison', 'terminology', 'online-learning', 'active-learning']"," Title: What is the difference between active learning and online learning?Body: The definitions for these two appear to be very similar, and frankly, I've been only using the term "active learning" the past couple of years. What is the actual difference between the two? Is one a subset of the other?
+"
+"['convolutional-neural-networks', 'models', 'audio-processing']"," Title: How to combine specific CNN models that work better at slightly different tasks?Body: I'm not sure how to describe this in the most accurate way but I'll give it a shot.
+I've developed a Inception-Resnet V2 model for detecting audio signals via spectrogram. It does a pretty good job but is not exactly the way I'd like it to be.
+Some details: I use 5 sets of data to evaluate my model during training. They are all similar but slightly different. Once I get to a certain threshold of F1 Scores for each training set I stop training. My overall threshold is pretty hard to get to. Every time training develops a model that produces a "best yet" of one of these data sets I save the model.
+What I've noticed is that, during training, some round will produce a high F1 Score for one particular set while the other sets languish as mediocre. Then, several dozen rounds later, another data set will peak while the others are mediocre. Overall the entire model gets better but there are always some models that work better for some data sets.
+What I would like to know is, given I might have 5 different models that each work better for a particular subset of data, is there a way that I can combine these models (either as a whole or better yet their particular layers) to produce a single model that works the best for all my data validation subsets?
+Thank you. Mecho
+"
+"['dqn', 'atari-games', 'hierarchical-rl']"," Title: How to train a hierarchical DQN to play the Montezuma's Revenge game?Body: Would anybody share the experience on how to train a hierarchical DQN to play the Montezuma's Revenge game? How should I design the reward function? How should I balance the anneal rate of the two-level?
+I've been trying to train an agent to solve this game. The agent is with 6 lives. So, every time when the agent fetches the key and loses his life for the instability or power of the sub-goal network, the agent restart at the original location and simply go through the door, thus gets a huge reward. With an $\epsilon$-greedy rate 0.1, the agent is possible to choose the subgoal key network and fetch the key, so the agent always chooses the door as the subgoal.
+Would anyone show me how to train this agent in the setting of one life?
+"
+"['intelligence-testing', 'turing-test']"," Title: Based on the Turing test, what would be the criteria for an agent to be considered smart?Body: Based on the Turing test, what would be the criteria for an agent to be considered smart?
+"
+"['reinforcement-learning', 'tensorflow', 'actor-critic-methods', 'environment', 'multi-agent-systems']"," Title: How to handle a changing in the Reinforcement Learning environment where there is increasing or decreasing in number of agents?Body: I'm working in A2C and I have an environment where there is increasing or decreasing in the number of agents. The action space in the environment will not change but the state will change when new agents join or leave the game.
+I have tried
+encoder-decoder model with attention but the problem is that the state and the model will change when the number of agents is changing.
+I also tried this way where they use LSTM to get the Q value for the agent but I got this message
+Cannot interpret feed_dict key as Tensor: Tensor Tensor("state:0", shape=(137,), dtype=float32) is not an element of this graph.
+
+or error like this because of changing of state size
+ValueError: Cannot feed value of shape (245,) for Tensor 'state:0', which has shape '(161,)'
+
+(1) Are there any reference papers that deal with such a problem?
+(2) What is the best way to deal with the new agents that join or leave the game?
+(3) How to deal with the changing of state space?
+"
+"['natural-language-processing', 'long-short-term-memory', 'pytorch', 'yolo']"," Title: Feeding YOLOv4 image data into LSTM layer?Body: How would one extract the feature vector from a given input image using YOLOv4 and pass that data into an LSTM to generate captions for the image?
+I am trying to make an image captioning software in PyTorch using YOLO as the base object classifier and an LSTM as the caption generator.
+Can anyone help me figure out what part of the code I would need to call and how I would achieve this?
+Any help is much appreciated.
+"
+"['reinforcement-learning', 'deep-rl', 'data-preprocessing', 'atari-games']"," Title: How does the RL agent understand motion if it gets only one image as input?Body: Basic deep reinforcement learning methods use as input an image for the current state, do some convolutions on that image, apply some reinforcement learning algorithm, and it is solved.
+Let us take the game Breakout or Pong as an example. What I do not understand is: how does the agent understand when an object is moving towards it or away from it? I believe that the action it chooses must be different in these two scenarios, and, from a single image as input, there is no notion of motion.
+"
+"['reinforcement-learning', 'policies', 'value-iteration', 'policy-iteration', 'bellman-equations']"," Title: Why doesn't value iteration use $\pi(a \mid s)$ while policy evaluation does?Body: I was looking at the Bellman equation, and I noticed a difference between the equations used in policy evaluation and value iteration.
+In policy evaluation, there was the presence of $\pi(a \mid s)$, which indicates the probability of choosing action $a$ given $s$, under policy $\pi$. But this probability seemed to be omitted in the value iteration formula. What might be the reason? Maybe an omission?
+"
+"['q-learning', 'dqn', 'deep-rl']"," Title: Is there a logical method of deducing an optimal batch size when training a Deep Q-learning agent with experience replay?Body: I am training an RL agent using Deep-Q learning with experience replay. At each frame, I am currently sampling 32 random transitions from a queue which stores a maximum of 20000 and training as described in the Atari with Deep RL paper. All is working fine, but I was wondering whether there is any logical way to select the proper batch size for training, or if simply using a grid search is best. At the moment, I’m simply using 32, for its small enough that I can render the gameplay throughout training at a stunning rate of 0.5fps. However, I’m wondering how much of an effect batch size has, and if there is any criteria we could generalize across all Deep Q-learning tasks.
+"
+"['convolutional-neural-networks', 'convolutional-layers', 'stride']"," Title: Is the stride applied both in the horizontal and vertical directions in convolutional neural networks?Body: In the convolutional layer for CNNs, when you specify the stride of a filter, typical notes show some examples of this but only for the horizontal panning. Is this same stride applied for the vertical direction too when you're done with the current row?
+In other words, say our input volume is 7x7, and we apply a stride of 1 for a 3x3 filter. Is the output volume 5x5? (which would mean you applied the stride in both the horizontal and vertical panning).
+Is it possible to apply a different stride for each direction?
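+To make the 7x7 example concrete, here is a small sketch of what I mean (assuming a Keras-style Conv2D layer; the numbers are just the ones from my example above):
+import tensorflow as tf
+
+x = tf.zeros((1, 7, 7, 1))   # one 7x7 single-channel input
+conv = tf.keras.layers.Conv2D(filters=1, kernel_size=3, strides=1, padding='valid')
+print(conv(x).shape)         # I expect (1, 5, 5, 1) if the stride is applied in both directions
+                             # (and I am wondering whether something like strides=(1, 2) would give a different stride per direction)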
+"
+"['neural-networks', 'deep-learning', 'dimensionality-reduction']"," Title: How classification neural nets are different from simple dimension reduction + clustering?Body: I know the training of neural nets involves some sort of dimension manipulation to separate classes of different features.
+If there is no variation in the features, then neither neural nets nor simple dimension reduction methods (e.g. PCA, LDA) + clustering are going to distinguish the different classes.
+In that sense, I would like to know the true power of neural nets:
+How are classification neural nets different from simple dimension reduction + clustering?
+Or, to rephrase the question:
+What value do neural nets add to solving classification problems, in terms of their algorithmic architecture, compared with simple dimension reduction + clustering?
+"
+"['optimization', 'simulated-annealing', 'meta-heuristics']"," Title: Why does Simulated Annealing not take worse solution if the energy difference becomes higher?Body: In Simulated Annealing, a worse solution is accepted with this probability:
+$$p=e^{-\frac{E(y)-E(x)}{kT}}.$$
+If that understanding is correct: Why is this probability function used? This means that, the bigger the energy difference, the smaller the probability of accepting the new solution. I would say the bigger the difference the more we want to escape a local minimum. I plotted that function in Matlab in two dimensions:
+
+"
+"['machine-learning', 'math', 'decision-trees']"," Title: Mathematical calculation behind decision tree classifier with continuous variablesBody: I am working on a binary classification problem having continuous variables (Gene expression Values). My goal is to classify the samples as case
or control
using gene expression values (from Gene-A
, Gene-B
and Gene-C
) using decision tree classifier. I am using the entropy
criteria for node splitting and is implementing the algorithm in python. The classifier is easily able to differentiate the samples.
+Below is the sample data,
+sample training set with labels
+Gene-A Gene-B Gene-C Sample
+ 1 0 38 Case
+ 0 7 374 Case
+ 1 6 572 Case
+ 0 2 538 Control
+ 33 5 860 Control
+
+sample testing set labels
+Gene-A Gene-B Gene-C Sample
+ 1 6 394 Case
+ 13 4 777 Control
+
+I have gone through a lot of resources and have learned how to mathematically calculate Gini impurity, entropy and information gain.
+What I am not able to comprehend is how the actual training and testing work. It would be really helpful if someone could show the calculation for training and testing with my sample datasets, or point me to an online resource.
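+For what it's worth, this is roughly how I am fitting and testing the tree (a minimal sketch using scikit-learn's DecisionTreeClassifier with the entropy criterion; my actual implementation may differ):
+from sklearn.tree import DecisionTreeClassifier, export_text
+
+# the sample data from above
+X_train = [[1, 0, 38], [0, 7, 374], [1, 6, 572], [0, 2, 538], [33, 5, 860]]
+y_train = ['Case', 'Case', 'Case', 'Control', 'Control']
+X_test = [[1, 6, 394], [13, 4, 777]]
+
+clf = DecisionTreeClassifier(criterion='entropy')
+clf.fit(X_train, y_train)
+print(export_text(clf, feature_names=['Gene-A', 'Gene-B', 'Gene-C']))  # prints the chosen split thresholds
+print(clf.predict(X_test))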
+"
+"['papers', 'bayesian-networks']"," Title: How are the ""Link Strength true"", ""Link Strength blind"" and ""Mutual Information"" calculated in this report on Bayesian networks?Body: I'm trying to understand how to calculate the strength of every arc in a Bayesian Network.
+I came across this report Measuring Connection Strengths and Link Strengths in Discrete Bayesian Networks, but I got lost in the calculation.
+In particular, how are the values of Link Strength true, Link Strength blind, and Mutual Information computed in Table 1?
+
+"
+"['natural-language-processing', 'reference-request', 'prediction', 'game-theory', 'information-theory']"," Title: Compressing text using AI by sending only prediction rank of next wordBody: Is there any effort made to compress text (and maybe other media) using prediction of next word and thus sending only the order number of the word/token which will be predicted on the client side
+i.e
+Server text: This is an example of a long text example, custom word flerfom inserted to confuse, that may appear on somewhere
+Compressed Text transmitted : This [choice no 3] [choice no 4] [choice no 1] [choice no 6] [choice no 1] [choice no 3] [choice no 1], custom word flerfom [choice no 4] inserted [choice no 4] confuse [choice no 5] [choice no 4] [choice no 6] [choice no 5] on somewhere
+
+(Note: of course [choice no 3] will be shortened to [3] to save bytes, and maybe we can do much better in some cases by sending the first letter of the word.)
+
+Of course, this means that the client-side neural network has to be static or only updated in a predictable fashion, so the server knows for sure that the client network's predictions will follow the given choice ranks. I tried an example with https://demo.allennlp.org/next-token-lm, but the prediction is not that good. Maybe GPT-3 can do better, but it's too heavy for use on a normal PC / mobile device.
+In more detail, the process is:
+
+- Deploy the same model on both sides.
+- Predict the next word after the starting word.
+- Keep a prediction limit, say 100.
+- For any word which has more than 2 characters, we do the prediction.
+- If the current word is within the top 100 predictions of the model, we can essentially replace it with a number between 0-99 (inclusive), so we are replacing, say, a 5-character word with a 2-character number.
+- If the word is not in the top 100, we send the word as it is.
+
+The better the model predicts, the better the compression.
+And under no scenario should it work worse than the existing method. A minimal sketch of the scheme I have in mind is below.
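+Here is that sketch (the predict_top_k function is hypothetical and stands in for whatever shared language model both sides run; everything else is only there to illustrate the rank-replacement idea):
+def encode(words, predict_top_k, k=100):
+    # replace a word by its rank in the model's top-k predictions when possible
+    out, context = [], []
+    for word in words:
+        ranked = predict_top_k(context, k)
+        if len(word) > 2 and word in ranked:
+            out.append(ranked.index(word))   # send the rank (a small integer) instead of the word
+        else:
+            out.append(word)                 # fall back to sending the word itself
+        context.append(word)
+    return out
+
+def decode(tokens, predict_top_k, k=100):
+    # the client runs the same model, so ranks map back to the same words
+    words, context = [], []
+    for tok in tokens:
+        word = predict_top_k(context, k)[tok] if isinstance(tok, int) else tok
+        words.append(word)
+        context.append(word)
+    return words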
+"
+"['neural-networks', 'keras', 'hidden-layers', 'dense-layers', 'batch-learning']"," Title: Why does the output shape of a Dense layer contain a batch size?Body: I understand that the batch size is the number of examples you pass into the neural network (NN). If the batch size is 10, it means you feed the NN 10 examples at once.
+Assuming I have an NN with a single Dense layer: this Dense layer of 20 units has an input shape of (10, 3). This means that I am feeding the NN 10 examples at once, with every example being represented by 3 values. This Dense layer will have an output shape of (10, 20).
+I understand that the 20 in the 2nd dimension comes from the number of units in the Dense layer. However, what does the 10 (batch size) in the first dimension mean? Does this mean that the NN learns 10 separate sets of weights (with each set of weights corresponding to one example, and one set of weights being a matrix of 60 values: 3 features x 20 units)?
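+Here is a minimal sketch (assuming tf.keras) of the setup I am describing, in case it helps clarify the question:
+import tensorflow as tf
+
+layer = tf.keras.layers.Dense(20)
+x = tf.zeros((10, 3))                      # a batch of 10 examples with 3 features each
+y = layer(x)
+print(y.shape)                             # (10, 20)
+print([w.shape for w in layer.weights])    # this is the part I am unsure about: how does the 10 relate to the weights?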
+"
+"['deep-learning', 'recurrent-neural-networks', 'long-short-term-memory', 'deep-neural-networks']"," Title: What type of model should I fit to increase accuracy?Body: Currently, I'm working on 6-axis IMU(Inertial Measurment Unit) dataset. This dataset contain 6 axis IMU data of 7 different drivers. The Imu sensor attached on vehicle. The drivers drives same path. So, the dataset include 6 feature columns and a label column.
+I tried multiple neural network models.The sensor data is a sequential data so I tried LSTM(Long Short Term Memory) & classical fully-connected layers.
+Some of my architecture(in keras framework):
+
+Layer (type) Output Shape Param #
+
+lstm_4 (LSTM) (None, 1, 128) 69120
+_________________________________________________________________
+lstm_5 (LSTM) (None, 1, 64) 49408
+_________________________________________________________________
+lstm_6 (LSTM) (None, 1, 32) 12416
+_________________________________________________________________
+dense_8 (Dense) (None, 1, 64) 2112
+_________________________________________________________________
+dropout_2 (Dropout) (None, 1, 64) 0
+_________________________________________________________________
+dense_9 (Dense) (None, 1, 7) 455
+
+
+2nd Architecture:
+
+=================================================================
+dense_10 (Dense) (None, 32) 224
+_________________________________________________________________
+dense_11 (Dense) (None, 64) 2112
+_________________________________________________________________
+dense_12 (Dense) (None, 128) 8320
+_________________________________________________________________
+dense_13 (Dense) (None, 256) 33024
+_________________________________________________________________
+dropout_3 (Dropout) (None, 256) 0
+_________________________________________________________________
+dense_14 (Dense) (None, 512) 131584
+_________________________________________________________________
+dense_15 (Dense) (None, 256) 131328
+_________________________________________________________________
+dense_16 (Dense) (None, 128) 32896
+_________________________________________________________________
+dense_17 (Dense) (None, 64) 8256
+_________________________________________________________________
+dropout_4 (Dropout) (None, 64) 0
+_________________________________________________________________
+dense_18 (Dense) (None, 128) 8320
+_________________________________________________________________
+dense_19 (Dense) (None, 7) 903
+
+The best accuracy among my models was 70%, which is not good. What style of layers should I use to handle this data? Or, which type of model would increase accuracy?
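+For reference, the first architecture corresponds roughly to the following Keras code (I am assuming an input of 1 timestep with 6 features, which matches the parameter counts above; the activations, dropout rate, loss and optimizer are my own assumptions):
+from tensorflow.keras import Sequential
+from tensorflow.keras.layers import LSTM, Dense, Dropout
+
+model = Sequential([
+    LSTM(128, return_sequences=True, input_shape=(1, 6)),
+    LSTM(64, return_sequences=True),
+    LSTM(32, return_sequences=True),
+    Dense(64, activation='relu'),
+    Dropout(0.2),
+    Dense(7, activation='softmax'),  # one class per driver
+])
+model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])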
+"
+"['reinforcement-learning', 'reference-request', 'model-based-methods', 'imagination']"," Title: Have agents that ""dream"" been explored in Reinforcement Learning?Body: I was reading this article about the question "Why do we dream?" in which the author discusses dreams as a form of rehearsal for future threats, and presents it as an evolutive advantage. My question is whether this idea has been explored in the context of RL.
+For example, in a competition between AIs on a shooter game, one could design an agent that, besides the behavior it has learned in a "normal" training, looks for moments in which it is out of danger, and then uses its computation time in the game to produce simulations that would further optimize its behavior. As the agent still needs to be somewhat aware of its environment, it could alternate between processing the environment and this kind of simulation. Note that this "in-game" simulation has an advantage with respect to the "pre-game" simulations used for training: the agent in the game experiences the behavior of the other agents, which could not have been predicted beforehand, and then simulates on top of these experiences, e.g. by slightly modifying them.
+For more experienced folks, does this idea make sense? Has something similar been explored?
+I have absolutely no experience in the field, so I apologize if this question is poorly worded, dumb or obvious. I would appreciate suggestions on how to improve it if this is the case.
+"
+"['reinforcement-learning', 'keras', 'objective-functions', 'policy-gradients', 'proximal-policy-optimization']"," Title: Generation of 'new log probabilities' in continuous action space PPOBody: I have a conceptual question for you all that hopefully I can convey clearly. I am building an RL agent in Keras using continuous PPO to control a laser attached to a pan/tilt turret for target tracking. My question is how the new policy gets updated. My current implementation is as follows
+
+- Make observation (distance from laser to target in pan and tilt)
+- Pass observation to actor network which outputs a mean (std for now is fixed)
+- I sample from a gaussian with the mean output from step 2
+- Apply the command and observe the reward (1/L2 distance to target)
+- collect N steps of experience, compute advantage and old log probabilities,
+- train actor and critic
+
+My question is this. I have my old log probabilities (probabilities of the actions taken given the means generated by the actor network), but I don't understand how the new probabilities are generated. At the onset of the very first minibatch, my new policy is identical to my old policy, as they are the same neural net. Given that in the model.fit function I am passing the same set of observations to generate 'y_pred' values, and I am passing in the actual actions taken as my 'y_true' values, the new policy should generate the exact same log probabilities as my old one. The only (slight) variation that makes the network update is from the entropy bonus, but my ratio
+np.exp(new_log_probs - old_log_probs) is nearly identically 1 because the policies are the same.
+Should I be using a pair of networks similar to DDQN so there are some initial differences in the policies between the one used to generate the data and the one used for training?
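+For concreteness, here is a minimal sketch (with hypothetical names, assuming a diagonal Gaussian policy) of how I compute the old log probabilities, and what I assume the 'new' ones would be, which is exactly the part I am unsure about:
+import numpy as np
+import tensorflow as tf
+
+def gaussian_log_prob(mean, std, action):
+    # log-density of the sampled action under a diagonal Gaussian
+    return -0.5 * tf.reduce_sum(
+        ((action - mean) / std) ** 2 + 2.0 * tf.math.log(std) + np.log(2.0 * np.pi), axis=-1)
+
+# when collecting the rollout:
+#   old_log_probs = gaussian_log_prob(mean_old, std, actions)
+# inside the training loop:
+#   mean_new = actor(states)                               # recomputed with the current actor weights
+#   new_log_probs = gaussian_log_prob(mean_new, std, actions)
+#   ratio = tf.exp(new_log_probs - old_log_probs)          # this is the quantity that stays ~1 for me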
+"
+"['reinforcement-learning', 'dqn', 'probability-distribution']"," Title: How to calculate v min and v max for C51 DQNBody: Background: In C51 DQNs you must specify a v-min/max to be used during training. The way this is generally done is you take the max score possible for the game and set that to v-max, then v-min is just negative v-max. For a game like Pong deciding the v-min/max is simple because the max score possible is 20, therefore, v_min=-20
and v_max=20
.
+Question: In a game like Space Invaders, there is no max score, so how would I calculate the v-min/max for a C51 DQN?
+"
+"['reinforcement-learning', 'policy-gradients', 'policies', 'value-functions', 'applications']"," Title: What are some other real-life examples of simple policies but complex value functions?Body: Hado van Hasselt, a researcher at DeepMind, mentioned in one of his videos (from 7:20 to 8:20) on Youtube (about policy gradient methods) that there are cases when the policy is very simple compared to the value function - and it makes more sense to learn the policy directly rather than first learning the value function and then doing control. He gives a very simple example at minute 7:20.
+What are some other real-life examples (even just one example) of simple policies but complex value functions?
+By real-life example I mean an example that is not as simple as a robot in a grid world, but some relatively complex real-world situations (say, autonomous driving).
+"
+"['reinforcement-learning', 'game-theory', 'optimal-policy']"," Title: What's the optimal policy in the rock-paper-scissors game?Body: A deterministic policy in the rock-paper-scissors game can be easily exploited by the opponent - by doing just the right sequence of moves to defeat the agent. More often than not, I've heard that a random policy is the optimal policy in this case - but the argument seems a little informal.
+Could someone please expound on this, possibly adding more mathematical details and intuition? I guess the case I'm referring to is that of a game between two RL agents, but I'd be happy to learn about other cases too. Thanks!
+EDIT: When would a random policy be optimal in this case?
+"
+"['reinforcement-learning', 'policy-gradients', 'proofs']"," Title: Why does (not) the distribution of states depend on the policy parameters that induce it?Body: I came across the following proof of what's commonly referred to as the log-derivative trick in policy-gradient algorithms, and I have a question -
+
+While transitioning from the first line to the second, the gradient with respect to policy parameters $\theta$ was pushed into the summation. What bothers me is how it skipped over $\mu (s)$, the distribution of states - which (the way I understand it), is induced by the policy $\pi_\theta$ itself! Why then does it not depend on $\theta$?
+Let me know what's going wrong! Thank you!
+"
+"['reinforcement-learning', 'deep-learning', 'q-learning', 'experience-replay']"," Title: Why is sampling non-uniformly from the replay memory an issue? (Prioritized experience replay)Body: I can't seem to understand why we need importance sampling in prioritized experience replay (PER). The authors of the paper write on page 5:
+
+The estimation of the expected value with stochastic updates relies on those updates corresponding to the same distribution as its expectation. Prioritized replay introduces bias because it changes this distribution in an uncontrolled fashion, and therefore changes the solution that the estimates will converge to (even if the policy and state distribution are fixed).
+
+My understanding of this statement is that sampling non-uniformly from the replay memory is an issue.
+So, my question is: Since we are working 1-step off-policy, why is it an issue? I thought that in an off-policy setting we don't care how transitions are sampled (at least in the 1-step case).
+The one possibility for an issue that came to my mind is that in the particular case of PER, we are sampling transitions according to the errors and rewards, which does seem a little fishy.
+A somewhat related question was asked here, but I don't think it answers my question.
+"
+"['reinforcement-learning', 'q-learning', 'reward-functions', 'sparse-rewards', 'combinatorial-optimization']"," Title: How to apply Q-learning when rewards is only available at the last state?Body: I have a scheduling problem in which there are $n$ slots and $m$ clients. I am trying to solve the problem using Q-learning so I have made the following state-action model.
+A state $s_t$ is given by the current slot $t=1,2,\ldots,n$ and an action $a_t$ at slot $t$ is given by one client, $a_t\in\{1,2,\ldots,m\}$. In my situation, I do not have any reward associated with a state-action pair $(s_t,a_t)$ until the terminal state which is the last slot. In other words, for all $s_t\in\{1,2,\ldots,n-1\}$, the reward is $0$ and for $s_t=n$ I can compute the reward given $(a_1,a_2,\ldots,a_n)$.
+In this situation, the Q table, $Q(s_t,a_t)$, will contain only zeros except for the last row in which it will contain the updated reward.
+Can I still apply Q-learning in this situation? Why do I need a Q table if I only use the last row?
+"
+"['game-ai', 'reference-request']"," Title: Examples of single player games that use modern ML techniques in the AI?Body: Are there any examples of single player games that use modern ML technique in its games? By this I mean AI that plays with or against the human player, and not just play the game by itself (like Atari).
+"Modern ML techniques" is a vague term, but for example, Neural Networks, Reinforcement Learning, or probabilistic methods. Basically anything that goes above and beyond traditional search methods that most games use nowadays.
+Ideally, the AI would be:
+
+- widely available (i.e. not like the OpenAI Five, which was only available for a limited amount of time and requires a high amount of computational power)
+- human level (not overpowered)
+
+Ideally, the game would be:
+
+- symmetrical (the AI has the same agent capabilities as the player, though answers similar to The Director would be very interesting as well)
+- "complex environment" (more complex than, say, a board game, but a CIV5 game might work)
+
+But any answer would be appreciated, as some of the criteria above are quite vague.
+Edit: the ideal cases listed above are not meant to discourage other answers, nor are they intended to be strictly inclusionary (i.e., any game would need to satisfy all of the above requirements).
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'deep-rl', 'exploration-exploitation-tradeoff']"," Title: What is the optimal exploration-exploitation trade-off in Q*bert?Body: I am training an RL agent with Deep Q-learning + Experience Replay on the Q*bert Atari environment. After 400,000 frames, my agent appears to have learned strategic information about the game, but none about the environment. It has learned that a good immediate strategy is to simply jump down both diagonals and fall of the board, thus completing a large portion of the first level. However, it remains to understand neither the boundaries of the board to prevent jumping off, nor anything about avoiding enemies. I’m asking this here, instead of Stack Overflow because it is a more general question with less of a need in terms of programming understanding. Simply, I am asking whether or not this is a matter of a pore exploration policy (which I presume). If you agree, what should be a better exploration policy for Q*bert that would facilitate my agent’s learning experience?
+As per the request of a comment:
+
+Could you add what your current exploration approach is, and what options you are using for your Deep Q Learning implementation (e.g. replay size, batch size, NN architecture, steps per target network copy, or if you are using a different update mechanism for the target network). Also if you are using any other approach different to the classic DQN paper such as in state representation.
+
+Here are my parameters:
+
+- Exploration policy: epsilon = min(1.0, 1000 / (frames + 1))
+- Replay Memory = 20,000 frames
+- Batch size = 32 transitions
+- NN architecture: Conv2D(64, 3, 2), Dropout(0.2), Dense(32, relu), Dense(32, relu), Dense(num_actions, linear)
+- Steps per target network copy: 100
+
+"
+"['reinforcement-learning', 'deep-rl', 'sarsa']"," Title: How are we calculating the average reward ($r(\pi)$) if the policy changes over time?Body: In the average reward setting, the quality of a policy is defined as:
+$$ r(\pi) = \lim_{h\to\infty}\frac{1}{h} \sum_{j=1}^{h}E[R_j] $$
+When we reach the steady state distribution, we can write the above equation as follows:
+$$ r(\pi) = \lim_{t\to\infty}E[R_t | A \sim \pi] $$
+We can use the incremental update method to estimate $r(\pi)$:
+$$ \bar R_t = \bar R_{t-1} + \beta (R_t - \bar R_{t-1}), $$
+where $ \bar R_{t-1}$ is the estimate of the average reward $r(\pi)$ at time step $t-1$.
+We use this incremental update rule in the SARSA algorithm:
+
+Now, in the above algorithm, we can see that the policy will change over time. But, to calculate $r(\pi)$, the agent should follow the policy $\pi$ for a long period of time. Then how are we using $r(\pi)$ if the policy changes over time?
+"
+"['neural-networks', 'robotics', 'state-of-the-art', 'humanoid-robots']"," Title: Is it feasible using today's technology to use an AI training algorithm to custom teach a robot to do common household cores?Body: Like making a bed, washing dishes, taking out the garbage, etc., by training it on the video of specific individuals doing those cores in their own unique environments?
+I have researched what machine learning is capable of doing at this point in time, and it seems this may be now feasible when done on a customer-specific basis and enable by an A.I. enhanced, full articulated, robot along the lines of an enhanced InMoov. https://en.wikipedia.org/wiki/InMoov
+If it's feasible, what are the AI algorithms I should be considering to train my robot to do these tasks? Isn't deep learning the most promising of these selections: https://www.ubuntupit.com/machine-learning-algorithms-for-both-newbies-and-professionals/?
+"
+"['machine-learning', 'linear-regression']"," Title: Effect of adding an Independent Variable in Multiple Linear RegressionBody: I am new in machine learning and learning linear regression concept. Please help with answers to below queries.
+I want to understand effect on existing independent variable(X1) if I add a new independent variable(X2) in my model.
+This new variable is highly correlated with dependent variable(Y)
+
+- Will it have any effect on beta coefficient of X1?
+- Will relationship between X1 and Y become insignificant?
+- Can adjusted R-square value decrease?
+
+"
+"['machine-learning', 'reinforcement-learning', 'terminology', 'monte-carlo-methods', 'temporal-difference-methods']"," Title: Why is the target called ""target"" in Monte Carlo and TD learning if it is not the true target?Body: I was going through Sutton's book and, using sample-based learning for estimating the expectations, we have this formula
+$$
+\text{new estimate} = \text{old estimate} + \alpha(\text{target} - \text{old estimate})
+$$
+What I don't quite understand is why it's called the target, because since it's the sample, it’s not the actual target value, so why are we moving towards a wrong value?
+"
+"['reinforcement-learning', 'rewards', 'reward-design']"," Title: How do I design the rewards and penalties for an agent whose goal it is to explore a mapBody: I am trying to train an agent to explore an unknown two-dimensional map while avoiding circular obstacles (with varying radii). The agent has control over its steering angle and its speed. The steering angle and speed are normalized in a $[-1, 1]$ range, where the sign encodes direction (i.e. a speed of $-1$ means that it is going backwards at the maximum units/second).
+I am familiar with similar problems where the agent must navigate to a waypoint, and in which case the reward is the successful arrival to the target position. But, in my case, I can't really reward the agent for that, since there is no direct 'goal'.
+What I have tried
+The agent is penalised when it hits an obstacle; however, I am not sure how to motivate the agent to move. Initially, I was thinking of having the agent always move forward, meaning that it only has control over the steering angle. But, I want the ability for the agent to control its speed and be able to reverse (since I'm trying to model a car).
+What I have tried is to reward the agent for moving and to penalise it for remaining stationary. At every timestep, the agent is rewarded ${1}/{t_\text{max}}$ if the absolute value of the speed is above some epsilon, or penalised by that same amount otherwise. But, as expected, this doesn't work. Rather than motivating the agent to move, it simply causes it to jitter back and forth. This makes sense, since 'technically' the most optimal strategy, if you want to avoid obstacles, is to remain stationary. If the agent can't do that, then the next best thing is to make small adjustments in position.
+So my question: how can I add in an exploration incentive to my agent? I am using proximal policy optimization (PPO).
+"
+"['reinforcement-learning', 'multi-armed-bandits', 'epsilon-greedy-policy']"," Title: Understanding GLIE conditions for epsilon greedy approachBody: I was going through this course on reinforcement learning (the course has two lecture videos and corresponding slides) and I had a doubt. On slide 18 of this pdf, it states following condition for an algorithm to have regret sublinear in T (T being number of pulls of multi arm bandit).
+
+C2 - Greedy in the Limit: Let exploit(T) denote the number of pulls that are greedy w.r.t. the empirical mean up to horizon $T$. For sub-linear regret, we need
+$$\lim_{T\rightarrow\infty}\frac{\mathbb{E}[exploit(T)]}{T}=1 $$
+
+Here, $exploit(T)$ denotes the total number of "exploit" rounds performed in the first $T$ pulls. Given that expectation is defined as "the weighted sum of the outcome values, where the weights correspond to the probabilities of realizing that value",
+(Q1) how exactly, mathematically, do we define $\mathbb{E}[exploit(T)]$?
+In the second video (at 24:44), the instructor has said that $\mathbb{E}[exploit(T)]$ is the number of exploit steps.
+(Q2) Then how does it equal a "weighted sum of outcome values"?
+(Note that the instructor assumes that pulling an arm may give a reward, which corresponds to an outcome value of 1, or may give no reward, which corresponds to an outcome value of 0.)
+Also, in slide 27, for GLIE-ifying the $\epsilon_T$-first strategy, he selects $\epsilon_T=\frac{1}{\sqrt{T}}$. Then, the instructor counts $\sqrt{T}$ exploratory pulls and $T-\sqrt{T}$ exploitative pulls. Then, to show that this satisfies condition C2, the instructor states $$\mathbb{E}[exploit(T)]\geq \frac{T-\sqrt{T}}{T}.$$
+Here, $\frac{T-\sqrt{T}}{T}$ is the fraction of exploitative pulls.
+(Q3) So, by the above equation, does the instructor mean that the number of exploitative pulls is greater than or equal to the fraction of exploitative pulls?
+(Q4) How can we put the 2nd equation into the first equation and still prove that the limit in the first equation holds, that is, how is the following the case:
+$$\lim_{T\rightarrow\infty}\frac{\frac{T-\sqrt{T}}{T}}{T}=1$$
+I guess I am missing some basic concept of expectation here.
+"
+"['reinforcement-learning', 'deep-learning', 'deep-rl', 'rewards', 'reward-shaping']"," Title: How can I fix jerky movement in a continuous action spaceBody: I am training an agent to do object avoidance. The agent has control over its steering angle and its speed. The steering angle and speed are normalized in a $[−1,1]$ range, where the sign encodes direction (i.e. a speed of −1 means that it is going backwards at the maximum units/second).
+My reward function penalises the agent for colliding with an obstacle and rewards it for moving away from its starting position. At a time $t$, the reward, $R_t$, is defined as
+$$
+R_t=
+\begin{cases}
+r_{\text{collision}},&\text{if collides,}\\
+\lambda_d\left(\|\mathbf{p}^{x,y}_t-\mathbf{p}_0^{x,y}\|_2-\|\mathbf{p}_{t-1}^{x,y}-\mathbf{p}_0^{x,y}\|_2 \right),&\text{otherwise,}
+\end{cases}
+$$
+where $\lambda_d$ is a scaling factor and $\mathbf{p}_t$ gives the pose of the agent at a time $t$. The idea being that we should reward the agent for moving away from the inital position (and in a sense 'exploring' the map—I'm not sure if this is a good way of incentivizing exploration but I digress).
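+In code, the non-collision branch of my reward is essentially this (a small numpy sketch, where p_t, p_prev and p_0 are the current, previous and initial x-y positions, and lambda_d is the scaling factor):
+import numpy as np
+
+def step_reward(p_t, p_prev, p_0, lambda_d=1.0):
+    # reward the increase in distance from the starting position
+    return lambda_d * (np.linalg.norm(p_t - p_0) - np.linalg.norm(p_prev - p_0))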
+My environment is an unknown two-dimensional map that contains circular obstacles (with varying radii). And the agent is equipped with a sensor that measures the distance to nearby obstacles (similar to a 2D LiDAR sensor). The figure below shows the environment along with the agent.
+
+Since I'm trying to model a car, I want the agent to be able to go forward and reverse; however, when training, the agent's movement is very jerky. It quickly switches between going forward (positive speed) and reversing (negative speed). This is what I'm talking about.
+One idea I had was to penalise the agent when it reverses. While that did significantly reduce the jittery behaviour, it also caused the agent to collide into obstacles on purpose. In fact, over time, the average episode length decreased. I think this is the agent's response to the reverse penalties. Negative rewards incentivize the agent to reach a terminal point as fast as possible. In our case, the only terminal point is obstacle collision.
+So then I tried rewarding the agent for going forward instead of penalising it for reversing, but that did not seem to do much. Evidently, I don't think trying to correct the jerky behaviour directly through rewards is the proper approach. But I'm also not sure how I can do it any other way. Maybe I just need to rethink what my reward signal wants the agent to achieve?
+How can I rework the reward function to have the agent move around the map, covering as much distance as possible, while also maintaining smooth movement?
+"
+"['reinforcement-learning', 'pytorch', 'a3c']"," Title: is it ok to take random actions while training a3c as in below codeBody: i am trying to train an A3C algorithm but I am getting same output in the multinomial function.
+can I train the A3C with random actions as in below code.
+can someone expert comment.
+while count<max_timesteps-1:
+ value, action_values, (hx, cx) = model((Variable(state.unsqueeze(0)), (hx, cx)))
+ prob = F.softmax(action_values,dim = -1)
+ log_prob = F.log_softmax(action_values, dim=-1)
+ print(log_prob.shape)
+ print("log_prob: ",log_prob)
+ entropy = -(log_prob * prob).sum(1, keepdim=True)
+ entropies.append(entropy)
+ actn = np.random.randn(3)
+ action = actn.argmax()
+ log_prob = log_prob[0,action]
+ # print("log_prob ",log_prob)
+ # print("action ",action)
+ state, reward, done = env.step(action)
+ done = (done or count == max_timesteps-2)
+ reward = max(min(reward, 1), -1)
+
+"
+"['neural-networks', 'machine-learning']"," Title: Are there neural networks where nodes are randomly selected from among a set of nodes (in random orders and a random number of times)?Body: I am trying to make a classifier.
+I am new to AI (even if I know the definitions and such a bit), and I also have no idea of how to implement it properly by myself, even if I know a bit of Python coding (in fact, I am fifteen years old! 🙄🙄), but my passion for this has made me ask this (probably silly) question.
+Are there neural networks where nodes are randomly selected from among a set of nodes (in random orders and a random number of times)? I know this is from ML (or maybe deep learning, I suppose), but I have no idea how to recognize such a thing from the presently available algorithms. It will be great if you all could help me, because I am preparing to release an API for programming a model which I call the 'Insane Mind' on GitHub, and I want some help to know if my effort was fruitless.
+And for reference, here's the code :
+from math import *
+from random import *
+
+class MachineError(Exception):
+ '''standard exception in the API'''
+ def __init__(self, stmt):
+ self.stmt = stmt
+def sig(x):
+ '''Sigmoid function'''
+ return (exp(x) + 1)/exp(x)
+
+class Graviton:
+ def __init__(self, weight, marker):
+ '''Basic unit in 'Insane Mind' algorithm
+ -------------------------------------
+ Graviton simply refers to a node in the algorithm.
+ I call it graviton because of the fact that it applies a weight
+ on the input to transform it, besides using the logistic function '''
+ self.weight = weight # Weight factor of the graviton
+ self.marker = marker # Marker to help in sorting
+ self.input = 0 # Input to the graviton
+ self.output = 0 # Output of the graviton
+ self.derivative = 0 # Derivative of the output
+
+ def process(self, input_to_machine):
+ '''processes the input (a bit of this is copied from the backprop algorithm'''
+ self.input = input_to_machine
+ self.output = (sig(self.weight * self.input) - 1)/(self.marker + 1)
+ self.derivative = (sig(self.input * self.weight) - 1) * self.input *self.output * (1- self.output)
+ return self.output
+
+ def get_derivative_at_input(self):
+ '''returns the derivative of the output'''
+ return self.derivative
+
+ def correct_self(self, learning_rate, error):
+ '''edits the weight'''
+ self.weight += -1 * error * learning_rate * self.get_derivative_at_input() * self.weight
+
+class Insane_Mind:
+
+ def __init__(self, number_of_nodes):
+ '''initialiser for Insane_Mind class.
+ arguments : number_of_nodes : the number of nodes you want in the model'''
+ self.system = [Graviton(random(),i) for i in range(number_of_nodes)] # the actual system
+ self.system_size = number_of_nodes # number of nodes , or 'system size'
+
+ def output_sys(self, input_to_sys):
+ '''system output'''
+ self.output = input_to_sys
+ for i in range(self.system_size):
+ self.output = self.system[randint(0,self.system_size - 1 )].process(self.output)
+ return self.output
+
+ def train(self, learning_rate, wanted):
+ '''trains the system'''
+ self.cloned = [] # an array to keep the sorted elements during the sorting process below
+ order = [] # the array to make out the order of arranging the nodes
+ temp = {} # a temporary dictionary to pick the nodes from
+ for graviton in self.system:
+ temp.update({str(graviton.derivative): graviton.marker})
+ order = sorted(temp)
+ i = 0
+ error = wanted - self.output
+ for value in order:
+ self.cloned.append(self.system[temp[value]])
+ self.cloned[i].correct_self(learning_rate, error)
+ error *= self.cloned[i].derivative
+ i += 1
+ self.system = self.cloned
+
+Sorry for not using that MachineError exception anywhere in my code (I will use it when I am able to deploy this API).
+To tell more about this algorithm, it gives randomized outputs (as if guessing). The number of guesses varies from 1 (for a system with one node), to 2 (for two nodes), and so on, up to an infinite number of guesses for an infinite number of nodes.
+Also, I want to try to find out how useful it can be (whether this is something that has never been discovered, whether it is something that can find a good place in the world of ML or deep learning) and where it can be used.
+Thanks in advance.
+Criticisms (with a clear reason) are also accepted.
+"
+"['neural-networks', 'machine-learning', 'comparison', 'function-approximation']"," Title: Why are neural networks preferred to other classification functions optimized by gradient decentBody: Consider a neural network, e.g. as presented by Nielsen here. Abstractly, we just construct some function $f: \mathbb{R}^n \to [0,1]^m$ for some $n,m \in \mathbb{N}$ (i.e. the dimensions of the input and output space) that depends on a large set of parameters, $p_j$. We then just define the cost function $C$ and calculate $\nabla_p C$ and just map $p \to p - \epsilon \nabla_p C$ repeatedly.
+The question is why do we choose $f$ to be what it is in standard neural networks, e.g. a bunch of linear combinations and sigmoids? One answer is that there a theorem saying any suitably nice function can be approximated using neural networks. But the same is true of other types of functions $f$. The Stone-Weierstrass theorem gives that we could use polynomials in $n$ variables: $$f(x) = c^0_0 + (c^1_1 x_1 + c^1_2 x_2 + \cdots + c^1_n x_n) + (c^2_{11}x_1 x_1 + c^2_{12} x_1x_2 + \cdots + c^2_{1n} x_1 x_2 + c^2_{21} x_2x_1 + c^2_{22} x_2x_2 + \cdots) + \cdots,$$
+and still have a nice approximation theorem. Here the gradient would be even easier to calculate. Why not use polynomials?
+"
+"['neural-networks', 'transformer', 'attention']"," Title: What is the weight matrix in self-attention?Body: I've been looking into self-attention lately, and in the articles that I've been seeing, they all talk about "weights" in attention. My understanding is that the weights in self-attention are not the same as the weights in a neural network.
+From this article, http://peterbloem.nl/blog/transformers, in the additional tricks section, it mentions,
+The query is the dot product of the query weight matrix and the word vector, i.e., q = W(q)x; the key is the dot product of the key weight matrix and the word vector, k = W(k)x; and similarly for the value it is v = W(v)x.
+So my question is, where do the weight matrices come from?
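+In code terms, I read q = W(q)x, k = W(k)x, v = W(v)x as something like the following (a PyTorch sketch of my understanding; the embedding size is arbitrary):
+import torch
+import torch.nn as nn
+
+embed_dim = 512
+to_q = nn.Linear(embed_dim, embed_dim, bias=False)   # W(q)
+to_k = nn.Linear(embed_dim, embed_dim, bias=False)   # W(k)
+to_v = nn.Linear(embed_dim, embed_dim, bias=False)   # W(v)
+
+x = torch.randn(embed_dim)                           # a word vector
+q, k, v = to_q(x), to_k(x), to_v(x)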
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks', 'computer-vision', 'r-cnn']"," Title: How is the data labelled in order to train a region proposal network?Body: I don't get how the training of the RPN works. From the forward propagation, I have $W \times H \times k$ outputs from the RPN.
+How is the training data labeled such that I can use the loss function and update the weights through backpropagation? Is the training data labeled in the same shape as the output, given that there are $W \times H \times k$ anchor boxes, so that we can use the loss function directly, or what?
+"
+"['deep-learning', 'tensorflow', 'training', 'memory']"," Title: How to calculate the GPU memory need to run a deep learning network?Body: In general, how do I calculate the GPU memory need to run a deep learning network?
+I'm asking this question because my training for some network configuration is getting out of memory.
+If the TensorFlow only store the memory necessary to the tunable parameters, and if I have around 8 million, I supposed the RAM required will be:
+RAM = 8.000.000 * (8 (float64)) / 1.000.000 (scaling to MB)
+RAM = 64 MB, right?
+The TensorFlow requires more memory to store the image at each layer?
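+To make my back-of-the-envelope numbers explicit, here is my own arithmetic (the activation example assumes float32 and the batch of 8 that appears in the error message further below):
+params = 7_697_921
+param_mem_mb = params * 8 / 1e6               # ~62 MB if the weights were float64, ~31 MB for float32
+
+# one activation tensor like the one in the OOM message: shape (8, 64, 256, 256), float32
+act_mem_mb = 8 * 64 * 256 * 256 * 4 / 1e6     # ~134 MB for a single layer's output
+print(param_mem_mb, act_mem_mb)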
+By the way, these are my GPU Specifications:
+
+- Nvidia GeForce 1050 4GB
+
+Network topology
+
+- Unet
+- Input Shape (256,256,4)
+
+Model: "functional_1"
+__________________________________________________________________________________________________
+Layer (type) Output Shape Param # Connected to
+==================================================================================================
+input_1 (InputLayer) [(None, 256, 256, 4) 0
+__________________________________________________________________________________________________
+conv2d (Conv2D) (None, 256, 256, 64) 2368 input_1[0][0]
+__________________________________________________________________________________________________
+dropout (Dropout) (None, 256, 256, 64) 0 conv2d[0][0]
+__________________________________________________________________________________________________
+conv2d_1 (Conv2D) (None, 256, 256, 64) 36928 dropout[0][0]
+__________________________________________________________________________________________________
+max_pooling2d (MaxPooling2D) (None, 128, 128, 64) 0 conv2d_1[0][0]
+__________________________________________________________________________________________________
+conv2d_2 (Conv2D) (None, 128, 128, 128 73856 max_pooling2d[0][0]
+__________________________________________________________________________________________________
+dropout_1 (Dropout) (None, 128, 128, 128 0 conv2d_2[0][0]
+__________________________________________________________________________________________________
+conv2d_3 (Conv2D) (None, 128, 128, 128 147584 dropout_1[0][0]
+__________________________________________________________________________________________________
+max_pooling2d_1 (MaxPooling2D) (None, 64, 64, 128) 0 conv2d_3[0][0]
+__________________________________________________________________________________________________
+conv2d_4 (Conv2D) (None, 64, 64, 256) 295168 max_pooling2d_1[0][0]
+__________________________________________________________________________________________________
+dropout_2 (Dropout) (None, 64, 64, 256) 0 conv2d_4[0][0]
+__________________________________________________________________________________________________
+conv2d_5 (Conv2D) (None, 64, 64, 256) 590080 dropout_2[0][0]
+__________________________________________________________________________________________________
+max_pooling2d_2 (MaxPooling2D) (None, 32, 32, 256) 0 conv2d_5[0][0]
+__________________________________________________________________________________________________
+conv2d_6 (Conv2D) (None, 32, 32, 512) 1180160 max_pooling2d_2[0][0]
+__________________________________________________________________________________________________
+dropout_3 (Dropout) (None, 32, 32, 512) 0 conv2d_6[0][0]
+__________________________________________________________________________________________________
+conv2d_7 (Conv2D) (None, 32, 32, 512) 2359808 dropout_3[0][0]
+__________________________________________________________________________________________________
+conv2d_transpose (Conv2DTranspo (None, 64, 64, 256) 524544 conv2d_7[0][0]
+__________________________________________________________________________________________________
+concatenate (Concatenate) (None, 64, 64, 512) 0 conv2d_transpose[0][0]
+ conv2d_5[0][0]
+__________________________________________________________________________________________________
+conv2d_8 (Conv2D) (None, 64, 64, 256) 1179904 concatenate[0][0]
+__________________________________________________________________________________________________
+dropout_4 (Dropout) (None, 64, 64, 256) 0 conv2d_8[0][0]
+__________________________________________________________________________________________________
+conv2d_9 (Conv2D) (None, 64, 64, 256) 590080 dropout_4[0][0]
+__________________________________________________________________________________________________
+conv2d_transpose_1 (Conv2DTrans (None, 128, 128, 128 131200 conv2d_9[0][0]
+__________________________________________________________________________________________________
+concatenate_1 (Concatenate) (None, 128, 128, 256 0 conv2d_transpose_1[0][0]
+ conv2d_3[0][0]
+__________________________________________________________________________________________________
+conv2d_10 (Conv2D) (None, 128, 128, 128 295040 concatenate_1[0][0]
+__________________________________________________________________________________________________
+dropout_5 (Dropout) (None, 128, 128, 128 0 conv2d_10[0][0]
+__________________________________________________________________________________________________
+conv2d_11 (Conv2D) (None, 128, 128, 128 147584 dropout_5[0][0]
+__________________________________________________________________________________________________
+conv2d_transpose_2 (Conv2DTrans (None, 256, 256, 64) 32832 conv2d_11[0][0]
+__________________________________________________________________________________________________
+concatenate_2 (Concatenate) (None, 256, 256, 128 0 conv2d_transpose_2[0][0]
+ conv2d_1[0][0]
+__________________________________________________________________________________________________
+conv2d_12 (Conv2D) (None, 256, 256, 64) 73792 concatenate_2[0][0]
+__________________________________________________________________________________________________
+dropout_6 (Dropout) (None, 256, 256, 64) 0 conv2d_12[0][0]
+__________________________________________________________________________________________________
+conv2d_13 (Conv2D) (None, 256, 256, 64) 36928 dropout_6[0][0]
+__________________________________________________________________________________________________
+conv2d_14 (Conv2D) (None, 256, 256, 1) 65 conv2d_13[0][0]
+==================================================================================================
+Total params: 7,697,921
+Trainable params: 7,697,921
+Non-trainable params: 0
+
+This is the error given.
+---------------------------------------------------------------------------
+ResourceExhaustedError Traceback (most recent call last)
+<ipython-input-17-d4852b86b8c1> in <module>
+ 23 # Train the model, doing validation at the end of each epoch.
+ 24 epochs = 30
+---> 25 result_model = model.fit(train_gen, epochs=epochs, validation_data=val_gen, callbacks=callbacks)
+
+~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\keras\engine\training.py in _method_wrapper(self, *args, **kwargs)
+ 106 def _method_wrapper(self, *args, **kwargs):
+ 107 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
+--> 108 return method(self, *args, **kwargs)
+ 109
+ 110 # Running inside `run_distribute_coordinator` already.
+
+~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
+ 1096 batch_size=batch_size):
+ 1097 callbacks.on_train_batch_begin(step)
+-> 1098 tmp_logs = train_function(iterator)
+ 1099 if data_handler.should_sync:
+ 1100 context.async_wait()
+
+~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\eager\def_function.py in __call__(self, *args, **kwds)
+ 778 else:
+ 779 compiler = "nonXla"
+--> 780 result = self._call(*args, **kwds)
+ 781
+ 782 new_tracing_count = self._get_tracing_count()
+
+~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\eager\def_function.py in _call(self, *args, **kwds)
+ 838 # Lifting succeeded, so variables are initialized and we can run the
+ 839 # stateless function.
+--> 840 return self._stateless_fn(*args, **kwds)
+ 841 else:
+ 842 canon_args, canon_kwds = \
+
+~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\eager\function.py in __call__(self, *args, **kwargs)
+ 2827 with self._lock:
+ 2828 graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
+-> 2829 return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access
+ 2830
+ 2831 @property
+
+~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\eager\function.py in _filtered_call(self, args, kwargs, cancellation_manager)
+ 1846 resource_variable_ops.BaseResourceVariable))],
+ 1847 captured_inputs=self.captured_inputs,
+-> 1848 cancellation_manager=cancellation_manager)
+ 1849
+ 1850 def _call_flat(self, args, captured_inputs, cancellation_manager=None):
+
+~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\eager\function.py in _call_flat(self, args, captured_inputs, cancellation_manager)
+ 1922 # No tape is watching; skip to running the function.
+ 1923 return self._build_call_outputs(self._inference_function.call(
+-> 1924 ctx, args, cancellation_manager=cancellation_manager))
+ 1925 forward_backward = self._select_forward_and_backward_functions(
+ 1926 args,
+
+~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\eager\function.py in call(self, ctx, args, cancellation_manager)
+ 548 inputs=args,
+ 549 attrs=attrs,
+--> 550 ctx=ctx)
+ 551 else:
+ 552 outputs = execute.execute_with_cancellation(
+
+~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\eager\execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
+ 58 ctx.ensure_initialized()
+ 59 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
+---> 60 inputs, attrs, num_outputs)
+ 61 except core._NotOkStatusException as e:
+ 62 if name is not None:
+
+ResourceExhaustedError: OOM when allocating tensor with shape[8,64,256,256] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
+ [[node gradient_tape/functional_1/conv2d_14/Conv2D/Conv2DBackpropInput (defined at <ipython-input-17-d4852b86b8c1>:25) ]]
+Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
+ [Op:__inference_train_function_17207]
+
+Function call stack:
+train_function
+
+Is there any type of mistake in the network definition? How could I improve the network to solve this problem?
+"
+"['neural-networks', 'computer-vision', 'papers', 'attention']"," Title: How do non-local neural networks relate to attention and self-attention?Body: I've been reading non-local neural networks as explained in the original paper. My understanding is that they solve the restrained reception of local filters. I see how they are different from convolutions and fully connected networks.
+How do they relate to attention (specifically self-attention)? How do they integrate this attention?
+"
+"['neural-networks', 'reinforcement-learning', 'deep-neural-networks', 'papers']"," Title: How to understand this NN architecture?Body: I was reading a paper Multi-Agent Reinforcement Learning for Adaptive
+User Association in Dynamic mmWave Networks and I was stuck understanding the deep neural network architecture that was used. The authors gave it in Fig. 3 (on top of page 6) and they state the following (on page 9):
+
+This architecture comprises 2 multi-layers perceptron (MLP) of 32 hidden units, one RNN layer (a long short memory term - LSTM) layer with 64 memory cells followed by another 2 MLPs of 32 hidden units. The network then branches off in two MLPs of 16 hidden units to construct the duelling network.
+
+According to Fig. 3, there is one MLP, one RNN and one MLP. So why did the authors say 2 MLPs?
+Assuming it is 2 MLPs, does this mean we have 2 hidden layers of 32 neurons each? So, at the end we will have:
+one input layer - one hidden layer with 32 neurons - another hidden layer with 32 neurons - one RNN layer with 64 cells - one hidden layer with 32 neurons - another hidden layer with 32 neurons - one hidden layer with 16 neurons - another hidden layer with 16 neurons - one output layer.
+"
+"['natural-language-processing', 'classification', 'datasets']"," Title: How can I classify houses given a dataset of houses with descriptions?Body: I have a dataset with a number of houses, for each house, I have a description. For example "The house is luxuriously renovated" or "The house is nicely renovated". My aim is to identify for each house whether it is luxuriously, well or poorly renovated. I am new to NLP so any tips on how to approach this problem would be much appreciated.
+"
+"['neural-networks', 'reference-request', 'applications', 'social']"," Title: What are examples of problems where neural networks have achieved human-level or higher performance?Body: What are examples of problems where neural networks have been used and have achieved human-level or higher performance?
+Each answer can contain one or more examples. Please, provide links to research papers or reliable articles that validate your claims.
+"
+"['neural-networks', 'machine-learning', 'text-classification']"," Title: Can I use one-hot vectors for text classification?Body: For an upcoming project I'm trying to write a text classifier for the IMDb sentiment analysis dataset. This needs to vectorize words using an embedding layer and then reduce the dimensions of the output with global average pooling. This is proving however to be very difficult for my low experience level, and I am struggling to wrap my head around the dimensionality involved, bearing in mind I must avoid libraries such as tensorflow that would make it very basic exercise. I am hoping that I could make it easier by encoding each word in the reviews as a one-hot vector, and passing it through a few regular dense layers. Would this work and yield decent results?
+"
+"['q-learning', 'dqn', 'deep-rl', 'curse-of-dimensionality']"," Title: Is it feasible to train a DQN with thousands of input ports?Body: I designed a DQN architecture for some problem. The problem has a parameter $m$ as the number of clients. In my situation, $m$ is large, $m\in\{100,200,\ldots,1000\}$. For this situation, the number of input ports of the DQN is some few thousand, $\{1000, 2000, \ldots, 10000\}$. For some fixed $m$, I would like to see the performance of deep Q learning on the performance. So I have to train the DQN for every change that occurs on $m$ and this should handle thousands of inputs ports for each training. Is this situation familiar in DQN and if not how to solve this issue?
+"
+"['computer-vision', 'generative-adversarial-networks', 'autoencoders']"," Title: Why don't we use auto-encoders instead of GANs?Body: I have watched Stanford's lectures about artificial intelligence, I currently have one question: why don't we use autoencoders instead of GANs?
+Basically, what GAN does is it receives a random vector and generates a new sample from it. So, if we train autoencoders, for example, on cats vs dogs dataset, and then cut off the decoder part and then input random noise vector, wouldn't it do the same job?
+"
+"['neural-networks', 'comparison', 'logistic-regression']"," Title: Given the same features, do logistic regression and neural networks produce the same output?Body: I have a binary classification problem. I have variables (features) var1, var2, var3, ..., var14.
+Using these variables (aka features) in a logistic regression, I get their weights.
+If I use the same set of variables in a neural network:
+
+- Should I get a different output?
+or
+
+- Should I get the same output?
+
+
+I plotted a ROC curve, and the two curves overlay each other. I am not sure if I am missing something here.
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'deep-neural-networks']"," Title: Why does using a higher representation space lead to performance increase on the training data but not on the test data?Body: I read the following from a book:
+
+You can intuitively understand the dimensionality of your representation space as “how much freedom you’re allowing the model to have when learning internal representations.” Having more units (a higher-dimensional representation space) allows your model to learn more-complex representations, but it makes the model more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).
+
+Why does using a higher representation space lead to performance increase on the training data but not on the test data?
+Surely the representations/patterns learnt from the training data will also be found in the test data?
+"
+"['reinforcement-learning', 'deep-rl', 'rewards', 'ddpg', 'hindsight-experience-replay']"," Title: Why would DDPG with Hindsight Experience Replay not converge?Body: I am trying to train a DDPG agent augmented with Hindsight Experience Replay (HER) to solve the KukaGymEnv environment. The actor and critic are simple neural networks with two hidden layers (as in the HER paper).
+More precisely, the hyper-parameters I am using are
+
+- The actor's hidden layer's sizes: [256, 128] (using ReLU activations and a tanh activation after the last layer)
+- Critic's hidden layer's sizes: [256, 128] (Using ReLU activations)
+- Maximum Replay buffer size: 50000
+- Actor learning rate: 0.000005 (Adam Optimizer)
+- Critic learning rate: 0.00005 (Adam Optimizer)
+- Discount rate: 0.99
+- Polyak constant : 0.001
+- The transitions are sampled in batches of 32 from the replay buffer for training
+- Update rate: 1 (target networks are updated after each time step)
+- The action selection is stochastic with the noise being sampled from a normal distribution of mean 0 and standard deviation of 0.7
+
+I trained the agent for 25 episodes with a maximum of 700 time-steps each and got the following reward plot:
+
+The reward shoots up to a very high value of about 8000 for the second episode and steeply falls to -2000 in the very next time step, never to rise again. What could be the reason for this behavior and how can I get it to converge?
+PS: One difference I observed between training this agent and training a simple DDPG agent is that, for simple DDPG, the episode would usually terminate at around 450 time-steps, thus never reaching the maximum specified timesteps. However, here, no episode terminated before the specified maximum of 700 time steps. This might have something to do with the performance.
+"
+"['machine-learning', 'overfitting', 'cross-validation', 'r', 'early-stopping']"," Title: How to avoid over-fitting using early stopping when using R cross validation package caretBody: I have a data set with 36 rows and 9 columns. I am trying to make a model to predict the 9th column
+I have tried modeling the data using a range of models using caret to perform cross-validation and hyper parameter tuning: 'lm', random forrest (ranger) and GLMnet, with range of different folds and hyper-parameter tuning, but the modeling has not been very successful.
+Next I have tried to use some of the neural-network models. I tried the 'monmlp'. During hyper parameter tuning I could see that the RMSE drops to a level when using ~ 6 hidden units. The problem I observe using this model is
+
+- Prediction is almost equal to data
+- When doing a "manual" cross validation by removing a single datapoint and using the trained model to predict, it has no predictive power
+
+I have tried a range of different numbers of hidden units, but I think the problem is that the model is overfitted despite using caret's cross-validation feature.
+There are two points on which I would appreciate feedback:
+
+- Is there a way to prevent overfitting by choosing an optimal number of training iterations (optimal out-of-sample RMSE)? Can this be done using caret or some other package?
+- Am I using the right model?
+
+I am relatively inexperienced with ML, and choosing a good model is tough: when you look at the available packages, it is overwhelming:
+https://topepo.github.io/caret/train-models-by-tag.html
+"
+"['math', 'optimization', 'linear-algebra', 'principal-component-analysis']"," Title: How does PCA work when we reduce the original space to 2 or higher-dimensional space?Body: How does PCA work when we reduce the original space to a 2 or higher-dimensional space? I understand the case when we reduce the dimensionality to $1$, but not this case.
+$$\begin{array}{ll} \text{maximize} & \mathrm{Tr}\left( \mathbf{w}^T\mathbf{X}\mathbf{X}^T\mathbf{w} \right)\\ \text{subject to} & \mathbf{w}^T\mathbf{w} = 1\end{array}$$
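+For concreteness, here is a small NumPy sketch of what I believe the generalisation looks like, keeping the top $k$ eigenvectors of $\mathbf{X}\mathbf{X}^T$ instead of only the top one (this is my assumption about the multi-dimensional case, which is exactly what I would like confirmed):
+
+import numpy as np
+
+rng = np.random.default_rng(0)
+X = rng.normal(size=(5, 100))          # 5 features, 100 observations (columns are observations)
+X = X - X.mean(axis=1, keepdims=True)  # centre the data
+
+# eigendecomposition of the 5 x 5 scatter matrix X X^T
+eigvals, eigvecs = np.linalg.eigh(X @ X.T)
+order = np.argsort(eigvals)[::-1]
+
+k = 2
+W = eigvecs[:, order[:k]]   # (5, k) with orthonormal columns, so W^T W = I_k
+Z = W.T @ X                 # (k, 100): data projected onto the top-k subspace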
+"
+"['natural-language-processing', 'google-translate']"," Title: What makes Google Translate fail on the Latin language?Body: As it is discussed here, and I saw it on other Latin language forums too, everybody complains about how Google Translate fails to translate the Latin language. From my personal experience, it is not that much bad on other languages, including romance languages.
+So, what makes Google Translate fail so much to translate the Latin language? Is it about its syntax and grammar or lack of data?
+"
+"['neural-networks', 'computer-vision', 'generative-adversarial-networks']"," Title: Is there any network/paper used to analyse music scores?Body: As I am curious on music theory I would like to know that If is there any such network that analyse like labeling chords, or doing a roman numeral analysis.
+Like an example below:
+
+Source
+It does not seem to be a difficult task.
+Some other examples are given here [external link].
+I am also curious whether this is a task that AI can accomplish at all.
+"
+"['neural-networks', 'classification']"," Title: Is my ""Insane Mind"" design for a classifier novel or effective?Body: This question is in relation to a previous doubt of mine :
+Are there neural networks where nodes are randomly selected from among a set of nodes (in random orders and a random number of times)?
+I have made a bit of progress from there, refurbished my code, and got things ready.
+What I intend to make is 'Insane Mind', a model which forms random linear neural networks from a set of nodes at random times ( I made out the 'linear neural network' part from a bit of Google searches).
+The basic process involved is :
+
+
+- The system forms nodes with random weights. These nodes also apply the sigmoid function (the logistic function: $f(x) = \frac{1}{1 + e^{-x}}$), and I termed these 'Gravitons' (because of the usage of the word 'weights' in them - sorry if my terminology seems ambiguous...😅)
+- The input enters the system via one of the gravitons.
+- The node processes it and either passes the output to the next node or to itself .
+- Step 3 is repeated as many times as there are gravitons in the system.
+- The output of the final graviton is given as the output of the whole system.
+
+
+One thing I'm sure of is that this model can transform an input vector into an output vector.
+I am not sure whether this is ambiguous or similar to a previously discovered model. Plus, I'd like to know if it will be effective in any situation (I believe it will be of help in classification problems).
+
+Note : I made this out of my imagination , which means this may be useless one way or the other, but still it seemed to work.
+
+Here's the training algorithm I made for this model :
+
+
+- In my Python implementation of this model, I had added a provision in the 'Graviton' class to store the derivative of the output of the graviton. Using this, the gravitons are ordered in the increasing order of the derivatives of their outputs.
+- The first graviton is taken, and its weight is modified by the error in the output.
+- The error is modified by the product of the graviton's output derivative and its weight after editing.
+- Steps 2 through 3 are done for the other gravitons as well. The final error (given by the
error
variable ) will be the product of the derivatives, the edited weights and the error in the output.
+- The set of gravitons thus formed is the next set subjected to this training.
+
+
+For extra reference, here's the code:
+
+- Insane_Mind.py :
+
+from math import *
+from random import *
+
+class MachineError(Exception):
+ '''standard exception in the API'''
+ def __init__(self, stmt):
+ self.stmt = stmt
+
+def sig(x):
+ '''Sigmoid function'''
+ try :
+ return exp(x)/(exp(x) + 1)
+ except OverflowError:
+ if x > 0 :
+ return 1
+ elif x < 0:
+            return 0  # saturate to 0 instead of silently returning None
+
+class Graviton:
+ def __init__(self, weight, marker):
+ '''Basic unit in 'Insane Mind' algorithm'''
+ self.weight = weight
+ self.marker = marker + 1
+ self.input = 0
+ self.output = 0
+ self.derivative = 0
+
+ def process(self, input_to_machine):
+ '''processes the input'''
+ self.input = input_to_machine
+ self.output = sig(self.weight * self.input)
+ self.derivative = self.input * self.output * (1- self.output)
+ return self.output
+
+ def get_derivative_at_input(self):
+ '''returns the derivative of the output'''
+ return self.derivative
+
+ def correct_self(self, learning_rate, error):
+ '''edits the weight'''
+ self.weight += -1 * error * learning_rate * self.get_derivative_at_input() * self.weight
+
+class Insane_Mind_Base:
+ '''Insane_Mind base class - this is what we're gonna use to build the actual machine'''
+ def __init__(self, number_of_nodes):
+ '''initialiser for Insane_Mind_Base class.
+ arguments : number_of_nodes : the number of nodes you want'''
+ self.system = [Graviton(random(),i) for i in range(number_of_nodes)] # the actual system
+ self.system_size = number_of_nodes # number of nodes , or 'system size'
+
+ def output_sys(self, input_to_sys):
+ '''system output'''
+ self.output = input_to_sys
+ for i in range(self.system_size):
+ self.output = self.system[randint(0,self.system_size - 1 )].process(self.output)
+ return self.output
+
+ def train(self, learning_rate, wanted):
+ '''trains the system'''
+ self.cloned = []
+ order = []
+ temp = {}
+ for graviton in self.system:
+ temp.update({str(graviton.derivative): self.system.index(graviton)})
+        order = sorted(temp, key=float)  # sort numerically by derivative, not lexicographically
+ i = 0
+ error = wanted - self.output
+ for value in order:
+ self.cloned.append(self.system[temp[value]])
+ self.cloned[i].correct_self(learning_rate, error)
+ error *= self.cloned[i].derivative * self.cloned[i].weight
+ i += 1
+ self.system = self.cloned
+
+ def details(self):
+ '''gets the weights of each graviton'''
+ for graviton in self.system:
+ print("Node : {0}, weight : {1}".format(graviton.marker , graviton.weight))
+
+class Insane_Mind:
+
+    '''Actual Insane_Mind class'''
+ def __init__(self, number_of_gravitons):
+ '''initialiser'''
+ self.model = Insane_Mind_Base(number_of_gravitons)
+ self.size = number_of_gravitons
+
+ def get(self, input):
+ '''processes the input'''
+ return self.model.output_sys(input)
+
+ def train_model(self, lrate, inputs, outputs, epoch):
+ '''train the model'''
+ if len(inputs) != len(outputs):
+ raise MachineError("Unequal sizes for training input and output vectors")
+ epoch = str(epoch)
+ if epoch.lower() == 'sys_size':
+ epoch = int(self.model.system_size)
+ else:
+ epoch = int(epoch)
+ for k in range(epoch):
+ for j in range(len(inputs)):
+ val = self.model.output_sys(inputs[j])
+ self.model.train(1/val if str(lrate).lower() == 'output' else lrate, outputs[j])
+
+ def details(self):
+ '''details of the machine'''
+ self.model.details()
+
+
+
+- Insane_Mind_Test.py :
+
+from Insane_Mind import *
+from statistics import *
+
+input_data = [3,4,3,5,4,4,3,6,5,4] # list of forces using which the coin is tossed
+output_data = [1,0,0,1,1,0,0,0,1,1] # head or tails in binary form (0 = tail (= not head), 1 = head)
+wanteds = output_data.copy()
+model = Insane_Mind(2) # Insane Mind model
+print("Before Training:")
+print("----------------")
+model.details() # fetches you weights of the model
+
+def normalize(x):
+ cloned = x.copy()
+ meanx = mean(x)
+ stdevx = stdev(x)
+ for i in range(len(x)):
+ cloned[i] = (cloned[i] - meanx)/stdevx
+ return cloned
+
+def random_catch(range_of_catches, sample_length):
+ # sample data generator. I named it random catch as part of using it in testing whether my model
+ # ' catches the correct guess'. :)
+ return [randint(range_of_catches[0], range_of_catches[1]) for i in range(sample_length)]
+
+input_data = normalize(input_data)
+output_data = normalize(output_data)
+
+model.train_model('output', input_data, output_data, 'sys_size')
+# the argument 'output' for the argument 'lrate' (learning rate) was to specify that the learning rate at # each step is the inverse of the output, and the use of 'sys_size' for the number of times to be trained
+# is used to tell the machine that the required number of epochs is equal to the size of the system or
+# the number of nodes in it.
+
+print("After Training:")
+print("----------------")
+model.details() # fetches you weights of the model
+
+predictions = [model.get(i) for i in input_data]
+
+threshold = mean(predictions)
+predictions = [1 if i >= threshold else 0 for i in predictions]
+
+print("Predicted : {0}".format(predictions))
+print("Actual:{0}".format(wanteds))
+mse_array = [(wanteds[j] - predictions[j])**2 for j in range(len(input_data))]
+print("Mean squared error:{0}".format(mean(mse_array)))
+
+accuracy = 0
+for i in range(len(predictions)):
+ if predictions[i] == wanteds[i]:
+ accuracy += 1
+
+print("Accuracy:{0}({1} out of {2} predictions correct)".format(accuracy/len(wanteds), accuracy, len(predictions)))
+
+print("______________________________________________")
+
+print("Random catch test")
+print("-----------------")
+
+times = int(input("No. of tests required : "))
+catches = int(input("No. of catches per test"))
+mse = {}
+for m in range(times):
+ wanted = random_catch([0,1] , catches)
+ forces = random_catch([1,10], catches)
+ predictions = [model.get(k) for k in forces]
+ threshold = mean(predictions)
+ predictions = [1 if value >= threshold else 0 for value in predictions]
+ mse_array = [(wanted[j] - predictions[j])**2 for j in range(len(predictions))]
+ print("Mean squared error:{0}".format(mean(mse_array)))
+ mse.update({(m + 1):mean(mse_array)})
+ accuracy = 0
+ for i in range(len(predictions)):
+ if predictions[i] == wanted[i]:
+ accuracy += 1
+ print("Accuracy:{0}({1} out of {2} predictions correct)".format(accuracy/len(wanteds), accuracy, len(predictions)))
+
+
+I tried running 'Insane_Mind_Test.py', and the results I got are :
+
+The formula I used for MSE is (please correct me if I am wrong):
+$$ MSE = \frac{\sum_{i = 1}^n (x_i - x'_i)^2}{n}$$
+where,
+$$ x_i = \text{Intended output}$$
+$$ x'_i = \text{Output predicted}$$
+$$ n = \text{Number of outputs}$$
+My main intention was to make a guess system.
+
+Note : Here, I had to think differently. I decided to classify the forces as those yielding a head and those that yield a tail (unlike what I say in the comments in the program).
+
+Thanks for all help in advance.
+Edit: Here's the training data :
+Forces Head(1) or not head(0)[rather call it tail]
+_______ ______________________
+3 1
+4 0
+3 0
+5 1
+4 1
+4 0
+3 0
+6 0
+5 1
+4 1
+
+"
+"['classification', 'autoencoders', 'time-series', 'feature-extraction', 'signal-processing']"," Title: Which type of feature extractor do you suggest to classify sensor data?Body: I have IMU (Inertial Measurment Unit- 6 axis) sensor data. The sensor attached on a car and 7 different drivers wipe on same path. I want to extract features and classify drivers. Which type of feature extractor do you guys suggest? I am planning to use PCA and Autoencoders but what do you think about classical signal properties to classify drivers?
+"
+"['reinforcement-learning', 'markov-decision-process', 'dynamic-programming', 'semi-mdp']"," Title: Bellman optimality equation in semi Markov decision processBody: I wrote a Python program for a simple inventory control problem where decision epochs are equally divided (every morning) and there is no lead time for orders (the time between submitting an order until receiving the order). I use the Bellman equations, and solve them by policy iteration (Dynamic Programming).
+Now I want to consider lead time (between 1 and 3 days, with equal probabilities). As I understand it, the problem is then a semi-Markov decision process, since the sojourn time in each state must be taken into account. I am confused about the Bellman equations in this scenario, because we don't know exactly when the order will be received: is it necessary to discount the reward for day two or three?
+"
+"['computer-vision', 'backpropagation', 'image-processing', 'edge-detection', 'image-recognition']"," Title: How to normalise image input to backpropogation algorithm?Body: I am implementing a simple backpropagation neural network for classifying images. One set of images are cars another set of images are buildings (houses). So far I have used Sobel Edge detector after converting the images from black and white. I need a way to remove the offset (in other words normalise the input) of where the car or where the house is in the image.
+Will taking the discrete Fourier cosine transform remove the offset? (so the input to the neural network will be the coefficients of the discrete cosine Fourier transform). To be clear, when I mean offset I mean a pair of values (across the number of pixels, and vertically the number of pixels) determining where the car or the building is in the 2D image from the origin.
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks']"," Title: What is the dimension of my output of the form (2n + 1, 2n + 1, #filters) after a MaxPooling layerBody: I'm trying to white board the different mechanisms behind a convolutional neural network.
+I have on question regarding the dimension of my volume after using a max pooling layer.
+Let's suppose I have a (21,21,#filtres) volume's dimension. If Max Pooling divide by 2 the height and width of my volume, what will be the dimension after the Max Pooling layer ?
+If odd numbers are a problem when using max pooling layer, How do I fix it ?
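+A small sketch of the check I mean (assuming Keras-style pooling with its default settings):
+
+import tensorflow as tf
+
+x = tf.zeros((1, 21, 21, 8))  # batch of one (21, 21, 8) volume
+
+# default padding='valid' drops the odd row/column: floor(21 / 2) = 10
+print(tf.keras.layers.MaxPooling2D(pool_size=2)(x).shape)                  # (1, 10, 10, 8)
+
+# padding='same' pads so that the output is ceil(21 / 2) = 11
+print(tf.keras.layers.MaxPooling2D(pool_size=2, padding="same")(x).shape)  # (1, 11, 11, 8)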
+Thank you !
+"
+"['neural-networks', 'machine-learning', 'applications', 'transfer-learning', 'domain-adaptation']"," Title: Why is domain adaptation and generative modelling for knowledge graphs still not applied widely in enterprise data? What are the challenges?Body: I see that domain adaptation and transfer learning has been widely adopted in image classification and semantic segmentation analysis. But it's still lacking in providing solutions to enterprise data, for example, solving problems related to business processes?
+I want to know what characteristics of the data determine the applicability or non-applicability with respect to generating models for prediction where multiple domains are involved within an enterprise information database?
+"
+"['neural-networks', 'machine-learning']"," Title: Is there a way to make my neural network discard inputs with bad results from learning?Body: What I want to achieve is this: If my desired outputs are [1, 2, 3, 4] I would rather have my network produce this output:
+[0.99, 2.01, 999, 4.01]
+than say this:
+[0.94, 1.88, 3.12, 4.1]
+So I'd rather have a few very accurate outputs and the rest completely off, than have them all be decent but no more than that. My question is, is there a known way to do this? If not, would it make sense to remove the inputs that produce poor outputs, and redo the learning phase?
+"
+"['neural-networks', 'reinforcement-learning', 'convolutional-neural-networks', 'recurrent-neural-networks', 'feedforward-neural-networks']"," Title: What's a good neural network for this problem?Body: I am very new to the field of AI so please bear with me.
+Say there is a dice with three sides, -1,0 and 1, and I want to predict which side it lands on (so only one output is needed I guess). The input variables are numerous but not that many, maybe 7-10.
+These input variables are certain formulae that involve calculations to do with wind, time, angle, momentum, etc., and each formula returns which side it thinks the dice will likely land on. Let's say that intuitively, by looking at these variables, I can make a very good guess at which side the dice lands on. If, for example, 6 out of 7 input variables say it is likely that the dice will land on 1, but the 7th input suggests that it will land on 0, I would guess it lands on 1. As a human, I'm essentially consulting these inputs as a kind of "brains trust", and I act as a judge to make the final decision based on that brains trust. Of course, in that example, my logic as a judge was simply majority rules, but what if some other, more complicated, non-linear method of judging were needed?
+I essentially want my neural network to take this role as a judge. I have read that feedforward nns have limitations regarding control flow and loops, so I'm not sure if that structure will be appropriate. I'm not sure if recurrent nn will be appropriate either as I don't care what the previous inputs were.
+Thanks
+"
+"['machine-learning', 'computer-vision', 'image-recognition']"," Title: Can we identify only the objects in specific parts of an image with computer vision?Body: I am studying computer vision for the past 3 months. I have come across the object identification problem, where given an image, CV would identify various parts in the image.
+If I give an image and a rectangle's coordinates, can CV identify the names of the parts within that rectangle? For example, can I train a model to identify the parts in the image below (mountain and river, in this case)? The model should not identify other parts, like flowers, sky, etc., as they fall outside the rectangle.
+I tried searching but could not find similar problems. Can anyone give me a direction to solve this problem?
+
+"
+"['neural-networks', 'machine-learning', 'reinforcement-learning', 'deep-learning', 'q-learning']"," Title: Handling a Large Discrete Action Space in Deep Q LearningBody: I am attempting to solve a timetabling problem using deep Q learning. It could be thought of as a resource allocation problem to obtain some certificate of 'optimality'. However, how to define and access the action space is alluding me. Any help, thoughts, or direction towards the literature would be appreciated. Thanks!
+The problem is entirely deterministic, the pair of the current state and action is isomorphic to the resulting state. The Q network is therefore being set up to approximate a Q value (a scalar) for the resulting state, i.e. for the current state and proposed action.
+I have so far assumed that the action space should be randomly sampled during training to generate some approximation of the Q table. This seems highly inefficient.
+I am open to reinterpretations of the action space. The problem involves a set of n individuals and at any given state a maximum of b can be 'active' and, of the remaining 'inactive' individuals, f can be made 'active' by an action. An action will need to involve making some reallocation to active individuals made up of those who are already active and the other f available people.
+To give you a sense over the numbers that I will ultimately use, $n=17, b=7$, and $f$ will hover somewhere around 7-10 (but depends on the allocations). At first this sounds tractable, but a (very) rough approximation of the cardinality of the set of actions is 17 choose 7 = 19448.
+Does anyone know a more efficient way to encode this action space? If not, is there a more sensible way to sample it (as is my current plan) than uniformly extracting actions from the space? Also when sampling the space is it valid to enforce some cap on the number of samples drawn (say 500). Please feel free to ask for further clarification.
+"
+"['gpt', 'natural-language-understanding']"," Title: Can in principle GPT language models learn physics?Body: Does anyone know of research involving the GPT models to learn not only regular texts, but also learn from physics books with the equations written in latex format?
+My intuition is that the model might learn the rules relating equations and deductions, as they can learn statistically what correlates with what. I understand that the results can also be a little nonsensical, like the sometimes surreal paragraphs written by these models.
+Have there been any attempts to do this?
+"
+['natural-language-processing']," Title: What are some programming related topics that can be solved using NLP?Body: I've been working on the Punctuation Restoration Problem for my Master's Thesis, however, me being primarily a programmer at heart, I wish I could use some of my NLP skills to solve issues related to programming in general.
+I know Microsoft does lots of research in NLP and I think after they acquired Github, they have an immense dataset to work with for any problems related to programming they want to tackle. Most recently I think they did a great job on their new python suggestion extension on VSCode.
+So, could you suggest to me some issues you think are interesting research topics? This is something that I would like to work with, but I have no idea where to start yet.
+"
+"['neural-networks', 'gpt', 'efficiency', 'computational-complexity', 'benchmarks']"," Title: What is the efficiency of trained neural networks?Body: Training neural networks takes a while. My question is, how efficient is a neural network that is completely trained (assuming it's not a model that is constantly learning)?
+I understand that this is a vague and simply difficult question to answer, so let me be more specific: Imagine we have a trained Deep Neural Net, and even to be more specific it's a GPT-3 model.
+Now, we put the whole thing on a Raspberry Pi. No internet access. The whole process takes place locally.
+
+- Will it run at all? Will it have enough RAM?
+
+- Now let's say we give it some text to analyze. Then we ask it a question. Will it take milliseconds to answer? Or is it going to be in the seconds? Minutes?
+
+
+What I'm trying to understand, once a model is trained is it fairly performant because it's just essentially a bunch of very simple function calls on top of each other, or is it very heavy to execute? (perhaps due to the sheer number of these simple function calls)
+Please correct any misunderstanding about how the whole process works if you spot any. Thank you.
+"
+"['bert', 'sentiment-analysis']"," Title: Bert for Sentiment Analysis - Connecting final output back to the inputBody: I have not found a lot of information on this, but I am wondering if there is a standard way to apply the outputs of a Bert model being used for sentiment analysis, and connect them back to the initial tokenized string of words, to gain an understanding of which words impacted the outcome of the sentiment most.
+For example, the string "this coffee tastes bad" outputs a negative sentiment. Is it possible to analyze the output of the hidden layers to then tie those results back to each token to gain an understanding of which words in the sentence had the most influence on the negative sentiment?
+The chart below is a result of my attempt to explore this; however, I am not sure it makes sense, and I do not think I am interpreting it correctly. I am basically taking the outputs of the last hidden layer, which in this case has shape (1, 7, 768) ([CLS] + 5 word tokens + [SEP]), looping through each token, summing up its 768 values, and computing the average. The resulting totals are plotted in the graph below.
+
+Any thoughts on whether there is any meaning to this, or whether I am way off in my approach, would be appreciated. It might be my misunderstanding of the actual output values themselves.
+Hopefully this is enough to give someone the idea of what I am trying to do and how each word can be connected to the positive or negative associations that contributed to the final classification.
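+This is approximately how I extract the per-token vectors (a sketch; the model name and the Hugging Face transformers calls are the standard ones I assume here, not necessarily my exact code):
+
+import torch
+from transformers import AutoTokenizer, AutoModel
+
+tok = AutoTokenizer.from_pretrained("bert-base-uncased")
+model = AutoModel.from_pretrained("bert-base-uncased")
+
+enc = tok("this coffee tastes bad", return_tensors="pt")
+with torch.no_grad():
+    out = model(**enc)
+
+hidden = out.last_hidden_state[0]   # (seq_len, 768) token vectors from the last hidden layer
+scores = hidden.mean(dim=-1)        # average of the 768 values per token
+for token, s in zip(tok.convert_ids_to_tokens(enc["input_ids"][0].tolist()), scores):
+    print(token, float(s))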
+"
+"['reinforcement-learning', 'value-functions', 'sutton-barto', 'value-iteration', 'numpy']"," Title: Value Iteration failing to converge to optimal value function in Sutton-Barto's Gambler problemBody: In Example 4.3:Gambler's Problem of Sutton and Barto's book whose code is given here.
+In this code, the value function array is initialized as np.zeros(states), where the states are in $[0,100]$, and the optimal value function returned after solving with value iteration is the same as the one given in the book. However, if we only change the initialization of the value function in the code, say to np.ones(states), then the returned optimal value function changes too. This means that the value iteration algorithm converges in both cases, but to different value functions; yet two different optimal value functions are impossible in an MDP. So why is value iteration not converging to the optimal value function?
+PS: If we change the initialization of the value function array to -1*np.random.rand(states), then the converged value function also contains negative numbers, which should be impossible since all rewards are >= 0; hence value iteration fails to converge to the optimal value function.
+"
+"['neural-networks', 'deep-neural-networks', 'research', 'resource-request']"," Title: What are the mathematical prerequisites needed to understand research papers on neural networks?Body: I know we have developed some mathematical tools to understand deep neural networks, gradient descent for optimization, and basic calculus. Recently, I encountered arxiv paper that describes higher mathematics for neural networks, such as functional analysis. For example, I remember universal approximation theorem was proved with the Hann-Banach theorem, but I lost the link of that article, so I need to find similar papers or articles to develop my understanding of neural networks mathematically (like with functional analysis, in short, I need to learn more advanced math for research), can you suggest some books or arxiv papers or articles or any other source that describes mathematics for deep neural networks?
+"
+"['reinforcement-learning', 'python', 'resource-request']"," Title: What are some programming-oriented resources for reinforcement learning?Body: I have been reading: Reinforcement Learning: An Introduction by Sutton and Barto. I admit it's a good read for learning RL whereas it's more theoretical with detailed algorithms.
+Now, I want something more programming oriented resource(s) maybe a course, book, etc. I have been exploring Kaggle, Open-source RL projects.
+I need this to learn and grasp a deeper understanding of RL from the perspective of a developer i.e optimized way of writing code, explanation about using the latest RL libraries, cloud services, etc.
+"
+"['reinforcement-learning', 'value-iteration', 'policy-iteration', 'policy-evaluation', 'policy-improvement']"," Title: Why do we need to go back to policy evaluation after policy improvement if the policy is not stable?Body:
+Above is the algorithm for Policy Iteration from Sutton's RL book. So, step 2 actually looks like value iteration, and then, at step 3 (policy improvement), if the policy isn't stable it goes back to step 2.
+I don't really understand this: it seems like, if you do step 2 to within a small $\Delta$, then your estimate of the value function should be pretty close to optimal for each state.
+So, why would you need to visit it again after policy improvement?
+It seems like policy improvement only improves the policy function, but that doesn't affect the value function, so I'm not sure why you'd need to go back to step 2 if the policy isn't stable.
+"
+"['convolutional-neural-networks', 'time-complexity']"," Title: Why does CNN forward pass take longer compared to MLP forward pass?Body: Let's take a 32 x 32 x 3 NumPy array and convolve with 10 filters of size 2 x 2 x 3 with stride 2 to produce feature maps of volume 16 x 16 x 10. The total number of operations - 16 * 16 * 10 * 2 * 2 * 2 * 3 = 61440 operations. Now, let's take an input array of length 3072 (flattening the 32 * 32 * 3 array) and dot it with a weight matrix of size 500 x 3072. The total number of operations - 500 * 3072 * 2 = 3072000 operations. The convolution takes 4-5 times longer than np.dot(w, x)
even though number of operations is less.
+Here's my code for the convolution operation:
+for i in range(16):
+ for j in range(16):
+ for k in range(10):
+ v[i, j, k] = np.sum(x[2 * i:2 * i + 2, 2 * j:2 * j + 2] * kernels[k])
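+
+For reference, this is roughly the timing comparison I am making (a sketch with random placeholder arrays, not my actual data):
+
+import timeit
+import numpy as np
+
+x = np.random.rand(32, 32, 3)
+kernels = np.random.rand(10, 2, 2, 3)
+w = np.random.rand(500, 3072)
+v = np.empty((16, 16, 10))
+
+def conv():
+    # the looped correlation from above
+    for i in range(16):
+        for j in range(16):
+            for k in range(10):
+                v[i, j, k] = np.sum(x[2 * i:2 * i + 2, 2 * j:2 * j + 2] * kernels[k])
+
+def dense():
+    # the fully-connected product
+    return np.dot(w, x.reshape(-1))
+
+print(timeit.timeit(conv, number=100))
+print(timeit.timeit(dense, number=100))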
+
+Is np.dot(w, x)
optimized or something? Or are my calculations wrong? Sorry if this is a silly question...
+"
+"['neural-networks', 'convolutional-neural-networks', 'reference-request']"," Title: Are there neural networks with 3-dimensional topologies?Body: The topologies (or architectures) of the neural networks that I have seen so far are only 2-dimensional. So, are there neural networks whose topology is 3-dimensional (i.e. they have a width, height, and depth)?
+"
+"['reinforcement-learning', 'monte-carlo-tree-search', 'alphazero', 'alphago-zero']"," Title: In Alpha(Go)Zero, why is the policy extracted from MCTS better than the network one?Body: I've read through the Alpha(Go)Zero paper and there is only one thing I don't understand.
+The paper on page 1 states:
+
+The MCTS search outputs probabilities π of playing each move. These search probabilities usually select much stronger moves than the raw move probabilities p of the neural network fθ(s);
+
+My question: Why is this the case? Why is $\pi$ usually better than $p$? I think I can imagine why it's the case but I'm looking for more insight.
+what $\pi$ and $p$ are:
+Say we are in state $s_1$. We have a network that takes the state and produces $p_1$ (probabilities for actions) and $v_1$ (a value for the state).
+We then run MCTS from this state and extract a policy $\pi(a|s_1) = \frac{N(s_1,a)^{1/\tau}}{\sum_b N(s_1,b)^{1/\tau}}$.
+The paper is saying that $\pi(-|s_1)$ is usually better than $p_1$.
+"
+"['neural-networks', 'reinforcement-learning', 'papers', 'activation-functions', 'inverse-rl']"," Title: Can entire neural networks be composed of only activation functions?Body: Inverse Reinforcement Learning based on GAIL and GAN-Guided Cost Learning(GAN-GCL), uses a discriminator to classify between expert demos and policy generated samples.
+Adversarial iRL, build upon GAN-GCL, has its discriminator $D_{\theta, \phi}$ as a function of a state-only reward approximator $f_{\theta, \phi}$.
+$$
+D_{\theta, \phi}\left(s, a, s^{\prime}\right)=\frac{\exp \left\{f_{\theta, \phi}\left(s, a, s^{\prime}\right)\right\}}{\exp \left\{f_{\theta, \phi}\left(s, a, s^{\prime}\right)\right\}+\pi(a \mid s)},
+$$
+where $f_{\theta,\phi}$ is expressed as:
+$$f_{\theta,\phi} = g_{\theta}(s) + \gamma h_{\phi}(s') - h_{\phi}(s).$$
+The optimal $g^*(s)$ tries to recover the optimal reward function $r^*(s)$, while $h(s)$ tries to recover the optimal value function $V^*(s)$, which makes $f_{\theta,\phi}$ interpretable as the advantage.
+My question comes from the network architecture used for $h(s)$ in the original paper.
+
+... we use a 2-layer ReLU network for the shaping term h. For the policy, we use a two-layer (32 units) ReLU gaussian policy.
+
+What is meant by the quoted text in bold? My interpretation of that text (shown below) doesn't seem viable:
+h = nn.Sequential([nn.ReLu(), nn.ReLu()])
+
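+For comparison, my best guess at what a conventional "2-layer ReLU network" for the scalar shaping term $h(s)$ would look like in PyTorch is the following (the 32-unit width is borrowed from the policy description and is only an assumption):
+
+import torch.nn as nn
+
+state_dim = 4  # illustrative state dimension
+h = nn.Sequential(
+    nn.Linear(state_dim, 32), nn.ReLU(),
+    nn.Linear(32, 32), nn.ReLU(),
+    nn.Linear(32, 1),
+)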
+"
+"['natural-language-processing', 'reference-request', 'natural-language-understanding', 'text-summarization']"," Title: How could facts be distinguished from opinions?Body: As a software engineer, I am searching for an existing solution or, if none exists, willing to create one that will be able to process texts (e.g. news from online media) to extract/paraphrase dry facts from them, leaving all opinions, analysis, speculations, humor, etc., behind.
+If no such solution exists, what would be a good way to start creating it (considering that I have zero experience in AI/machine learning)?
+It would be no problem to manually create a set of examples (pairs of original news + dry facts extracted), but is that basically what it takes? I doubt so.
+(This knowledge domain is already huge, so which parts of it need to be learned first and foremost to figure out how to achieve the goal?)
+"
+"['recurrent-neural-networks', 'time-series', 'feedforward-neural-networks', 'sequence-modeling', 'word2vec']"," Title: What's the difference between RNNs and Feed Forward Neural Networks if a fixed size vector can preserve sequential information?Body: I was watching a Youtube video in which the problem of trying to predict the last word in a sentence was posed. The sentence was "I took my cat for a" and the last word was "walk". The lecturer in this video stated that whilst sentences (the sequence) can be of varying lengths, if we take a really large fixed window we can model the whole sentence. In essence she said that we can convert any sentence into a fixed size vector and still preserve the order of the sentence (sequence). I was then wondering why do we need RNNs if we can just use FFNNs? Also does a fixed size vector really preserve sequential order information?
+Thank You for any help!
+"
+"['reinforcement-learning', 'reference-request', 'game-ai', 'agi', 'intelligent-agent']"," Title: Use of virtual worlds (e.g. Second Life) for training Artificial General Intelligence agents?Body: There is emerging effort for Third Wave Artificial Intelligence (Artificial General Intelligence) (http://hlc.doc.ic.ac.uk/3AI_HLC_2019.html and https://www.darpa.mil/work-with-us/ai-next-campaign) and it covers the open-ended life-long machine learning as well. Currently machine learning agents are being run on quite immutable and simple games like Atari and Go. But what about the efforts to build and run machine learning adaptable agents (or even teams of them) in the virtual worlds (like Second Life) which are complex, expanding and in which the interaction with human representatives happens? Are there efforts to do that?
+I have found some articles from 2005-2009, but Google gives no recent literature on queries like Reinforcement Learning Second Life
etc.
+So - maybe there are some efforts to do this, but I can not just Google it.
+My question is - are there references for machine learning agents for virtual worlds and if not - what are the obstacles for trying to build them? There are little risks or costs for building them for virtual worlds?
+https://meta-guide.com/embodiment/secondlife-npc-artificial-intelligence is some bibliography and it is lacking recent research, for example.
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'convolutional-neural-networks', 'facial-recognition']"," Title: How to identify if 2 faces contain the same person?Body: I have got numerous frames and I've detected all the faces in all the frames using Retinaface. However I need to track the faces of people over frames.
+For this purpose, I assumed I could try finding the landmarks from the face using libraries like dlib
and maybe compare these landmarks to check if they are infact the face of the same person.
+I would like to know if there are other methods or some useful resources I could refer for the same. Thanks a lot in advance.
+"
+"['machine-learning', 'math', 'papers', 'data-preprocessing', 'notation']"," Title: Do the rows of the design matrix refer to the observations or predictors?Body: I attempt to understand the formulation of dictionary learning for this paper:
+
+- Depression Detection via Harvesting Social Media: A Multimodal Dictionary Learning Solution
+- Multimodal Task-Driven Dictionary Learning for Image Classification
+
+Both papers used the exact formulation in two different domains.
+Based on my understanding, in common machine learning practice, we build our matrices from vectors with rows as observations and columns as predictors.
+Given a matrix, $A$:
+\begin{array}{lcccccc}
+& p_1 & p_2 & p_3 & p_4 & p_5 & \text { label } \\
+o_1 & 1 & 2 & 3 & 4 & 1 & 1 \\
+o_2 & 2 & 3 & 4 & 5 & 2 & 1 \\
+o_3 & 3 & 4 & 5 & 6 & 2 & 0 \\
+o_4 & 4 & 5 & 6 & 7 & 3 & 0
+\end{array}
+So, using math notation and excluding the label, I can define this matrix as $A = [o_1, o_2, o_3, o_4] \in \mathbb{R}^{4 \times 5}$, with $A = [(1, 2, 3, 4, 1), (2, 3, 4, 5, 2), (3, 4, 5, 6, 2), (4, 5, 6, 7, 3)]$, and in numpy:
+import numpy as np
+
+A = np.array([[1, 2, 3, 4, 1],
+ [2, 3, 4, 5, 2],
+ [3, 4, 5, 6, 2],
+ [4, 5, 6, 7, 3]])
+
+A.shape
+# (4, 5)
+
+Am I right?
+"
+"['convolutional-neural-networks', 'python', 'image-processing']"," Title: Should image augmentation be applied before or after image resizing?Body: For the purposes of training a Convolutional Neural Network Classifier, should image augmentation be done before or after resizing the training images?
+To reduce file size and speed up training time, developers often resize training images to a set height and width using something like PIL (Python Imaging Library).
+If the images are augmented (to increase training set size), should it be done before or after resizing the members of the set?
+For simplicity sake, it would probably be faster to augment the images after resizing, but I am wondering if any useful data is lost in this process. I assume it may depend on the method used to resize the images (cropping, scaling technique, etc.)
+"
+"['neural-networks', 'deep-learning', 'gradient-descent', 'incremental-learning', 'batch-learning']"," Title: Is batch learning with gradient descent equivalent to ""rehearsal"" in incremental learning?Body: I am learning about incremental learning and read that rehearsal learning is retraining with old data. In essence, isn't this the exact same thing as batch learning (with stochastic gradient descent)? You train a model by passing in batches of data and redo this with a set number of epochs.
+If I'm understanding rehearsal learning correctly, you do the exact same thing but with "new" data. Thus, the only difference is inconsistencies in the epoch number across data batches.
+"
+"['expert-systems', 'prolog', 'production-systems']"," Title: How do I write production systems?Body: I understand that I can draw a state-space graph for any problem. However, here is the problem: I can't really figure out how to make production systems.
+I am solving the FWGC (Farmer, Wolf, Goat, Cabbage) River Crossing Puzzle using a state-space search. So, my tasks are that:
+
+- Represent the state-space graph (which I know how to do)
+
+- Write production systems.
+
+
+My questions: How do I write production systems?
+The thing that confused me was the production-system example in Rich's book (about the water jug problem), where all the possible states are imagined and the next state is written for each of them.
+Here in the FWGC problem, I see some problems while writing the production system.
+For instance, for a given state, there are multiple possible next states, i.e. a farmer can take Goat, Cabbage, Wolf, or go alone to the other side (assuming that all states are safe, just for the sake of simplicity).
+So, how would I represent the same state going to multiple next states in production systems?
+What I have tried:
+Then, I googled a PDF:
+https://www.cs.unm.edu/~luger/ai-final2/CH4_Depth-.%20Breadth-,%20and%20Best-first%20Search.pdf
+
+Is that what I call the production system for this case?
+But, here are my reasons why it should not be called a production system:
+
+- There might be other possible states as well.
+
+- It is showing only 1 solution.
+
+
+So, how do I actually learn to create production rules? (I know how to make the state-space representation quite well, as I have read Nilsson's book, which was gold in this matter.) And what would the production rules be in this case?
+"
+"['convolutional-neural-networks', 'hyper-parameters', 'filters', 'convolutional-layers']"," Title: What is the intuition behind the number of filters/channels for each convolutional layer?Body: After having chosen the number of layers for a convolutional neural network, we must also choose the number of filters/channels for each convolutional layer.
+The intuition behind the filter's spatial dimension is the number of pixels in the image that must be considered to perform the recognition/detection task.
+However, I still can't find the intuition behind the number of filters. The numbers 128 and 256 are often used in the literature, but why?
+"
+"['neural-networks', 'machine-learning', 'feature-extraction']"," Title: Neural Network for locating shifting resonant frequenciesBody: I have multiple FFT's taken from a sample at different pressures, through different analysis I can see that the resonant frequencies are shifting in the spectrum for each FFT at a different pressure.
+Using conventional peak tracking has been difficult as the peaks increase/decrease in magnitude within the FFT as well as shifting in the spectrum.
+Is it possible for a neural network to 'detect'/'pick out' these frequency values?
+Any help or guidance is appreciated :)
+Thanks!
+"
+"['machine-learning', 'data-preprocessing', 'data-augmentation', 'mask-rcnn']"," Title: How much should we augment our training data?Body: I am wondering how much I should extend my training set with data augmentation. Is there somewhere a pre-defined number I can go with?
+Suppose I have 10000 images, can I go as far as 10x or 20x times, to get 100000 and 200000, respectively, images? I am wondering how will this impact model training. I am using a mask R-CNN.
+"
+"['neural-networks', 'deep-learning']"," Title: Is there any way where you can train a Neural Network with only one data point in the dataset?Body: I was working on a project involving the search for biosignatures (signs of life) on exoplanets and the probability of that planet harboring life. In this case, we know that Earth is the only planet confirmed to have life on it. So the parameters of atmospheric conditions, radius, temperature, distance from the star for planets confirmed to have life is one (Earth).
+Is there any way to use NNs to predict the probability of an exoplanet harboring life if we have the data of all these parameters for that planet?
+"
+"['convolutional-neural-networks', 'terminology', 'papers', 'complexity-theory']"," Title: Are mult-adds and FLOPs equivalent?Body: I am comparing different CNN architectures for edge implementation. Some papers describing architectures refer to mult-adds, like the MobileNet V1 paper, where it is claimed that this net has 569M mult-adds, and others refer to floating-point operations (FLOPs), like the CondenseNet paper claims 274M FLOPs.
+Are these comparable? Is 1 multiply-add equivalent to 2 floating-point operations? Any direction will be greatly appreciated.
+"
+"['machine-learning', 'ai-design', 'recommender-system', 'jaccard-similarity']"," Title: How can I build a recommendation system that takes into account some constraints or the context?Body: I am building a recommendation system that recommends relevant articles to the user. I am doing this using simple similarity-based techniques (with the Jaccard similarity) using as features the page title, the tags, and the article content.
+Now my problem is I have different "adult articles" and some are articles that expire (for example, an article about a movie in Jan 2019 would not be relevant in Dec 2019).
+I want to keep these adult articles separate, as a person who is reading about history does not want to be led to an adult article, and I also do not want to recommend articles that have expired or would not be relevant at the present moment.
+Should I just improve the quality of my features or tags? Or is there any other way to achieve this?
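+For reference, by the Jaccard similarity above I mean this kind of set-overlap scoring over, e.g., the tags (a minimal sketch with made-up field values):
+
+def jaccard(a, b):
+    a, b = set(a), set(b)
+    return len(a & b) / len(a | b) if a | b else 0.0
+
+article = {"tags": {"history", "rome", "empire"}}
+candidate = {"tags": {"history", "rome", "caesar"}}
+print(jaccard(article["tags"], candidate["tags"]))  # 0.5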
+"
+"['machine-learning', 'terminology', 'applications']"," Title: What is a ""learned emulator""?Body: In this article, the term "learned emulator" is used.
+
+Recently, scientists have started creating "learned emulators" using
+AI neural network approaches, but have not yet fully explored the
+advantages and potential pitfalls of these surrogates.
+
+What is a "learned emulator"? I believe it is related to neural networks. Where can I read more?
+"
+"['linear-regression', 'logistic-regression']"," Title: Hyper-plane in logistic regression vs linear regression for same number of featuresBody: Geometric interpretation of Logistic Regression and Linear regression is considered here.
+I was going through logistic regression and linear regression. In the optimization equation of both, the following term is used: $$W^{T} \cdot X$$
+$W$ is a vector which holds the weights of the hyperplane.
+I realized the following about the dimensions of the fitted hyperplane and want to confirm it.
+Let,
+d = Number of features for both Logistic Regression and Linear Regression.
+Logistic Regression case:
+Fitted hyper-plane is d-dimensional.
+Linear Regression case:
+Fitted hyper-plane is (d + 1) dimensions.
+Example
+d = 2
+feature 1 : weight,
+feature 2 : height
+Logistic Regression:
+Its a 2 class classification.
+y : {obsess, normal)
+Linear Regression:
+y: blood pressure (real value)
+Here,
+
+- Logistic Regression will fit a 2-D line.
+- Linear Regression will fit a 3-D plane.
+
+Please confirm if this understanding is correct and same happens even in higher dimensions.
+"
+"['neural-networks', 'recurrent-neural-networks', 'backpropagation', 'deep-neural-networks', 'gradient-descent']"," Title: What are the rules behind vector product in gradient?Body: Let's suppose we have calculated the gradient and it came out to be $f(WX)(1-f(W X))X$, where $f()$ is the sigmoid function, $W$ of order $2\times2$ is the weight matrix, and $X$ is an input vector of order $2\times 1$. For ease let $f(WX)(1-f(W X))=\Bigg[
+\begin{array}{c}
+0.3 \\
+0.8 \\
+\end{array}\Bigg]$ and $X=\Bigg[
+\begin{array}{c}
+1 \\
+0 \\
+\end{array}\Bigg]$. When we multiply these vectors we will multiply them as $f(WX)(1-f(W X))\times X^T$ i.e $\Bigg[
+\begin{array}{c}
+0.3 \\
+0.8 \\
+\end{array}\Bigg]\times[1 \quad0]$. I do this because I know that we need this gradient to update a $2\times 2$ weight matrix, hence, the gradient should have size $2\times 2$. But, I don't know the law/rule behind this, if I was just given the values and had no knowledge that we need the solution to update the weight matrix, then, I might have done something like $[0.3 \quad 0.8]\times\Bigg[
+\begin{array}{c}
+1 \\
+0 \\
+\end{array}\Bigg]$ which will return a scalar. For a long chain of such operations (multiple derivatives when applying the chain rule, resulting in many vectors), how do we know whether the multiplication of two vectors should return a scalar, a vector, or a matrix (i.e. an inner product or an outer product)?
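+Numerically, the two readings I am contrasting are just the following (a small NumPy sketch of the example above):
+
+import numpy as np
+
+g = np.array([0.3, 0.8])   # f(WX)(1 - f(WX)) as a column vector
+x = np.array([1.0, 0.0])   # the input X
+
+grad_W = np.outer(g, x)    # outer product: a (2, 2) matrix, the same shape as W
+inner = g @ x              # inner (dot) product: the scalar 0.3
+
+print(grad_W)
+print(inner)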
+"
+"['machine-learning', 'reinforcement-learning']"," Title: How to implement RL policies learned on a finite horizon?Body: I am modelling a ride-hailing system where passenger requests continuously arrive into the system. An RL model is developed to learn how to match those requests with drivers efficiently.
+Basically, the system can run infinitely as long as there are requests arriving (infinite horizon reality). However, in order for the RL training to conduct, the episode length should be restricted to some finite duration, say $[0,T]$ (finite horizon training).
+
+My question is how to apply the learned policy, trained on the finite horizon $[0,T]$, to the real system with infinite horizon $[0,\infty)$?
+
+I expect there would be a conflict of objectives. The value function near $T$ is partially cut off in a finite horizon and would become an underestimate and affect policy performance in an infinite horizon implementation. To this end, I doubt the applicability of the learned policy.
+"
+"['deep-learning', 'python', 'vgg']"," Title: Strategy to input and get large images in VGG neural networksBody: I'm using a transfert-style based deep learning approach that use VGG (neural network). The latter works well with images of small size (512x512pixels), however it provides distorted results when input images are large (size > 1500px). The author of the approach suggested to divide the input large image to portions and perform style-transfert to portion1 and then to portion2 and finally concatenate the two portions to have a final large result image, because VGG was made for small images... The problem with this method is that the resulting image will have some inconsistent regions at the level of areas where the portions were "glued". How can I correct these areas ? Is there an alternative approach to this dividing method ?
+thanks
+"
+"['natural-language-processing', 'machine-translation']"," Title: Can we use NLP to understand/parse/compile programming code?Body: I wonder if we can use Natural Language Processing (NLP) to process programming code:
+Given a piece of code, can we
+
+- Translate it to human language to understand what it does? The input could be a function definition(normally lack of documentation) in Python and the output could be the documentation for that function.
+- Compile or translate it to another programming language? Compile Python code to C or machine code, or translate C code to Python code?
+
+"
+"['neural-networks', 'convolutional-neural-networks', 'python', 'convolution']"," Title: How could I convolve a 4D image and a 4D filter with stride?Body: I want to create a CNN in Python, specifically, only with NumPy, if possible. For optimizing the time of convolution (actually correlation) in the network, I wanna try to use FFT-based convolution. The data that needs to be convoluted (correlated) is a 4D image tensor with shape [batch_size, width, height, channels]
and 4D filter tensor [filter_width, filter_height, in_channel, out_channel]
. I read a lot of articles about FFT-based convolution, but they aren't doing it in my way. Thus, I need your help.
+How could I fft-convolve a 4D image and a 4D filter with stride?
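+For reference, the single-channel 2-D building block that I do understand is the FFT-based valid correlation below (a sketch using SciPy; extending it to the 4-D batched, strided, multi-channel case is what I am asking about):
+
+import numpy as np
+from scipy.signal import fftconvolve
+
+x = np.random.rand(32, 32)   # one channel of one image
+k = np.random.rand(2, 2)     # one channel of one filter
+
+# correlation = convolution with a flipped kernel; 'valid' keeps only fully-overlapping positions
+corr = fftconvolve(x, k[::-1, ::-1], mode="valid")   # shape (31, 31)
+
+# a stride of 2 can then be taken by slicing the dense result
+strided = corr[::2, ::2]                             # shape (16, 16)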
+"
+"['machine-learning', 'reference-request', 'computational-learning-theory', 'books']"," Title: What are other examples of theoretical machine learning books?Body: I am looking for a book about machine learning that would suit my physics background. I am more or less familiar with classical and complex analysis, theory of probability, сcalculus of variations, matrix algebra, etc. However, I have not studied topology, measure theory, group theory, and other more advanced topics. I try to find a book that is written neither for beginners, nor for mathematicians.
+Recently, I have read the great book "Statistical inference" written by Casella and Berger. They write in the introduction that "The purpose of this book is to build theoretical statistics (as different from mathematical statistics) from the first principles of probability theory". So, I am looking for some "theoretical books" about machine learning.
+There are many online courses and brilliant books out there that focus on the practical side of applying machine learning models and using the appropriate libraries. It seems to me that there are no problems with them, but I would like to find a book on theory.
+By now I have skimmed through the following books
+
+- Pattern Recognition And Machine Learning
+It looks very nice. The only point of concern is that the book was published in 2006. So, I am not sure about the relevance of the chapters considering neural nets, since this field is developing rather fast.
+
+- The elements of statistical learning
+This book also seems very good. It covers most of the topics as well as the first book. However, I am feeling that its style is different and I do not know which book will suit me better.
+
+- Artificial Intelligence. A Modern Approach
+This one covers more recent topics, such as natural language processing. As far as I understand, it represents the view of a computer scientist on machine learning.
+
+- Machine Learning A Probabilistic Perspective
+Maybe it has a slight bias towards probability theory, which is stated in the title. However, the book looks fascinating as well.
+
+
+I think that the first or the second book should suit me, but I do not know what decision to make.
+I am sure that I have overlooked some books.
+Are there some other ML books that focus on theory?
+"
+"['transformer', 'bert', 'transfer-learning', 'pretrained-models', 'fine-tuning']"," Title: BERT: After pretraining 880000 step, why fine-tune not work?Body: I am using pretraining code from https://github.com/NVIDIA/DeepLearningExamples
+Pretrain parameters:
+ 15:47:02,534: INFO tensorflow 140678508230464 init_checkpoint: bertbase3layer-extract-from-google
+ 15:47:02,534: INFO tensorflow 140678508230464 optimizer_type: lamb
+ 15:47:02,534: INFO tensorflow 140678508230464 max_seq_length: 64
+ 15:47:02,534: INFO tensorflow 140678508230464 max_predictions_per_seq: 5
+ 15:47:02,534: INFO tensorflow 140678508230464 do_train: True
+ 15:47:02,535: INFO tensorflow 140678508230464 do_eval: False
+ 15:47:02,535: INFO tensorflow 140678508230464 train_batch_size: 32
+ 15:47:02,535: INFO tensorflow 140678508230464 eval_batch_size: 8
+ 15:47:02,535: INFO tensorflow 140678508230464 learning_rate: 5e-05
+ 15:47:02,535: INFO tensorflow 140678508230464 num_train_steps: 10000000
+ 15:47:02,535: INFO tensorflow 140678508230464 num_warmup_steps: 10000
+ 15:47:02,535: INFO tensorflow 140678508230464 save_checkpoints_steps: 1000
+ 15:47:02,535: INFO tensorflow 140678508230464 display_loss_steps: 10
+ 15:47:02,535: INFO tensorflow 140678508230464 iterations_per_loop: 1000
+ 15:47:02,535: INFO tensorflow 140678508230464 max_eval_steps: 100
+ 15:47:02,535: INFO tensorflow 140678508230464 num_accumulation_steps: 1
+ 15:47:02,535: INFO tensorflow 140678508230464 allreduce_post_accumulation: False
+ 15:47:02,535: INFO tensorflow 140678508230464 verbose_logging: False
+ 15:47:02,535: INFO tensorflow 140678508230464 horovod: True
+ 15:47:02,536: INFO tensorflow 140678508230464 report_loss: True
+ 15:47:02,536: INFO tensorflow 140678508230464 manual_fp16: False
+ 15:47:02,536: INFO tensorflow 140678508230464 amp: False
+ 15:47:02,536: INFO tensorflow 140678508230464 use_xla: True
+ 15:47:02,536: INFO tensorflow 140678508230464 init_loss_scale: 4294967296
+ 15:47:02,536: INFO tensorflow 140678508230464 ?: False
+ 15:47:02,536: INFO tensorflow 140678508230464 help: False
+ 15:47:02,536: INFO tensorflow 140678508230464 helpshort: False
+ 15:47:02,536: INFO tensorflow 140678508230464 helpfull: False
+ 15:47:02,536: INFO tensorflow 140678508230464 helpxml: False
+ 15:47:02,536: INFO tensorflow 140678508230464 **************************
+
+Pretrain loss: (I remove nsp_loss)
+{'throughput_train': 1196.9646684552622, 'mlm_loss': 0.9837073683738708, 'nsp_loss': 0.0, 'total_loss': 0.9837073683738708, 'avg_loss_step': 1.200513333082199, 'learning_rate': '0.00038143058'}
+{'throughput_train': 1230.5063662500734, 'mlm_loss': 1.3001925945281982, 'nsp_loss': 0.0, 'total_loss': 1.3001925945281982, 'avg_loss_step': 1.299936044216156, 'learning_rate': '0.00038143038'}
+{'throughput_train': 1236.4348949169155, 'mlm_loss': 1.473339319229126, 'nsp_loss': 0.0, 'total_loss': 1.473339319229126, 'avg_loss_step': 1.2444063007831574, 'learning_rate': '0.00038143017'}
+{'throughput_train': 1221.2668264552692, 'mlm_loss': 0.9924975633621216, 'nsp_loss': 0.0, 'total_loss': 0.9924975633621216, 'avg_loss_step': 1.1603020071983337, 'learning_rate': '0.00038142994'}
+
+Fine-tune code:
+self.train_op = tf.train.AdamOptimizer(0.00001).minimize(self.loss, global_step=self.global_step)
+
+Fine-tune accuracy: (restore from my ckpt pretrained from https://github.com/NVIDIA/DeepLearningExamples)
+epoch 1:
+training step 895429, loss 4.98, acc 0.079
+dev loss 4.853, acc 0.092
+
+epoch 2:
+training step 895429, loss 4.97, acc 0.080
+dev loss 4.823, acc 0.092
+
+epoch 3:
+training step 895429, loss 4.96, acc 0.081
+dev loss 4.849, acc 0.092
+
+epoch 4:
+training step 895429, loss 4.95, acc 0.082
+dev loss 4.843, acc 0.092
+
+Without restoring the pretrained ckpt:
+epoch 1:
+training step 10429, loss 2.48, acc 0.606
+dev loss 1.604, acc 0.8036
+
+Restoring Google's BERT-Base pretrained ckpt, or restoring from a ckpt pretrained with https://github.com/guotong1988/BERT-GPU:
+epoch 1:
+training loss 1.89, acc 0.761
+dev loss 1.351, acc 0.869
+
+"
+"['neural-networks', 'reinforcement-learning']"," Title: What are some suitable positive functions as activations of neural networks?Body: I am working on a deep Q-learning project. My project is different than normal deep Q-learning. The rewards of my neural network must be positive because I need their values to importance sample actions. I know that I can't use ReLU as the activation function of my neural network. So the only suitable functions which I know are sigmoid, softmax and exponential function. I tried working with sigmoid and softmax but they generate wrong results and the loss function diverges.
+There are two terminal states in my model. Their rewards are 1 and 0. All other states don't have any immediate rewards.
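+To make the setting concrete, this is the kind of output head I mean, with the three candidate activations I listed (a PyTorch sketch; the layer and action sizes are made up):
+import torch
+import torch.nn as nn
+
+hidden = nn.Linear(64, 32)
+out = nn.Linear(32, 4)                      # one value per action
+
+x = torch.randn(1, 64)
+h = torch.relu(hidden(x))
+logits = out(h)
+
+q_sigmoid = torch.sigmoid(logits)           # strictly in (0, 1)
+q_softmax = torch.softmax(logits, dim=-1)   # positive and sums to 1 over actions
+q_exp = torch.exp(logits)                   # strictly positive, unbounded above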
+"
+"['machine-learning', 'ai-design', 'prediction', 'ai-development']"," Title: How to train a model to predict the number of people at a certain bus stop before they cumulate in large numbers?Body: Each person probably uses an app that tracks his/her position periodically and sends it to our servers. What I want is to use these data to train a model to predict the rush hours of each bus-stop on the map, so we can send extra buses to handle the predicted cumulation before it happens.
+I have no experience in AI or machine learning. So, which model should I use to do this?
+"
+"['machine-learning', 'deep-learning', 'prediction', 'pattern-recognition', 'architecture']"," Title: How to use a NN for seq2seq tasks?Body: I am trying to make a NN(probably with dense layers) to map a specific input to a specific output (or basically sequence2sequence). I want the model to learn the relation between the sequences and predict the output of any other input I give it.
+I have 2 files - one with the inputs and another with all the corresponding outputs and would probably use a bunch of Dense Layers with word embeddings to vectorize it into higher dimensions. However, I cannot find any good resources out there for that.
+Does anyone know how to accomplish such an NN? Which architectures are best for pattern matching? Examples, links, and other resources would be very welcome. I was considering using RNNs, but found them not very good at pattern-matching tasks, so I had ditched them. I would still consider them if someone can provide a plausible explanation...
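+To show the kind of model I had in mind with dense layers and word embeddings, here is a minimal Keras sketch (the vocabulary size, sequence lengths and layer sizes are placeholders, not my real data):
+from tensorflow import keras
+from tensorflow.keras import layers
+
+vocab_size, in_len, out_len = 5000, 20, 20   # placeholders
+
+model = keras.Sequential([
+    layers.Embedding(vocab_size, 64, input_length=in_len),   # word embeddings
+    layers.Flatten(),
+    layers.Dense(256, activation='relu'),
+    layers.Dense(out_len * vocab_size),
+    layers.Reshape((out_len, vocab_size)),
+    layers.Softmax(axis=-1),                 # one distribution over the vocabulary per output position
+])
+model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
+I am not sure whether this dense-only setup can really capture the input-output relation, which is why I am asking about better-suited architectures.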
+"
+"['neural-networks', 'deep-learning', 'zero-shot-learning']"," Title: Zero shot learning available labels in testing setBody: As we all know, zero shot learning involves a model predicting classes that it has not seen. But we are given all the attributes each class might have.
+Is it fair to assume that we are "aware" of all the class labels a dataset might have ? (Including the test set)
+"
+['research']," Title: As an AI researcher, what subjects do you find yourself referring to most often?Body: This is a bit of a weird question.
+I am hoping to create an online reference since I have some downtime. I know some about statistics but very little about computer science. As a result, the reference guide I am hoping to create will be very statistics oriented - even though I wish that it could be a reference for someone who wants to start from scratch and work their way to AI.
+While I would love to be involved with AI, from what I have read about ML and AI, it seems that AI does not involve much statistics (a lot of statistical theory is built on normality assumptions and analytical math, and ML seems to bypass that by not requiring strong assumptions or analytical results); CS seems to be more relevant.
+And so my question is, since my guide will mostly cover statistics, how relevant would it be for someone who wants to get into AI? If it's not relevant, then I guess I'll just make my guide for someone who wants to get into stats/data science, as opposed to someone who wants to be an AI researcher.
+I guess another way to phrase my question is, as an AI researcher, when you "google" stuff, wikipedia things, or go to your notes, what subjects are you looking at and what exactly are you googling? Are you getting a refresher on how to code back propagation? Or are you getting a refresher on the pros and cons of L1 vs. L2? Do you ever look at how to implement a boosting tree or NN using a pre-existing package?
+Basically, I know that what I can provide will be relevant to HS/college stats and data science students. But what I really want to do is create something useful for aspiring/current AI researchers. The former is realistic, the latter is a dream. I want to see if my dream is realistic.
+Thanks!
+"
+"['datasets', 'genetic-algorithms', 'genetic-operators']"," Title: Can we use genetic algorithms to evolve datasets?Body: Genetic algorithms are used to solve many optimization tasks.
+If I have a dataset, can I evolve it with a genetic algorithm to create an evolved version of the same dataset?
+We could consider each feature of the initial dataset as a chromosome (or individual), which is then combined with other chromosomes (features) to find more features. Is this possible? Has this been done?
+I would like to add an example to the details so that it is easier to understand.
+Example: In practice, cyber-security attacks evolve over time as attackers find new ways to breach a system. The main drawback of an intrusion-detection model is that it needs to be retrained every time the attacks evolve. So I was hoping a genetic algorithm could be used on the present benchmark datasets (like NSL-KDD) to come up with a futuristic type of dataset, maybe after X generations.
+I could then check whether a model is able to classify that generated dataset as well.
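+To make the idea concrete, this is the kind of loop I am imagining, with records as individuals (plain NumPy; the fitness function is a placeholder, since defining a sensible one is part of my question):
+import numpy as np
+
+rng = np.random.default_rng(0)
+population = rng.random((100, 41))      # 100 candidate records with 41 NSL-KDD-like numeric features
+
+def fitness(record):
+    # Placeholder: in my case this would score how "attack-like" yet novel a record is.
+    return -abs(record.sum() - 20.0)
+
+for generation in range(50):
+    scores = np.array([fitness(r) for r in population])
+    parents = population[np.argsort(scores)[-20:]]           # keep the 20 fittest records
+    children = []
+    while len(children) < len(population):
+        a, b = parents[rng.integers(20, size=2)]
+        cut = rng.integers(1, population.shape[1])           # one-point crossover over the features
+        child = np.concatenate([a[:cut], b[cut:]])
+        child += rng.normal(0.0, 0.01, size=child.shape)     # small mutation
+        children.append(child)
+    population = np.array(children)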
+"
+"['neural-networks', 'deep-learning', 'tensorflow', 'python', 'long-short-term-memory']"," Title: Extract features with CNN and pass as sequence to RNNBody: I read an article about captioning videos and I want to use solution number 4 (extract features with a CNN, pass the sequence to a separate RNN) in my own project.
+But to me, it seems really strange that in this method we use the Inception model without any retraining or anything like that. Every project has different requirements, and even if you use a pretrained model instead of your own, you should do some training.
+And I wonder how to do this. For example, I created a project where I use a network with CNN layers and then LSTM and Dense layers, and in every epoch there is a feed-forward pass and backpropagation through the whole network, all layers. But what if you have a CNN network to extract features and an LSTM network that takes sequences as inputs? How do you train the CNN network if there is no defined output? This network should only extract features, but the network doesn't know which features. So the question is: how do I train the CNN to extract relevant features and then pass these features to the LSTM?
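+To illustrate what I mean by training them together, here is a minimal Keras sketch where the CNN is trained end-to-end with the LSTM (the layer sizes, sequence length and number of classes are made up):
+from tensorflow import keras
+from tensorflow.keras import layers
+
+seq_len, height, width, n_classes = 10, 64, 64, 5    # placeholders
+
+frames = layers.Input(shape=(seq_len, height, width, 3))
+# The same small CNN is applied to every frame; because it sits in the same graph as the LSTM,
+# it receives gradients from the final loss and learns whichever features help the task.
+cnn = keras.Sequential([
+    layers.Input(shape=(height, width, 3)),
+    layers.Conv2D(32, 3, activation='relu'),
+    layers.MaxPooling2D(),
+    layers.Conv2D(64, 3, activation='relu'),
+    layers.GlobalAveragePooling2D(),
+])
+features = layers.TimeDistributed(cnn)(frames)       # shape: (batch, seq_len, 64)
+x = layers.LSTM(128)(features)
+outputs = layers.Dense(n_classes, activation='softmax')(x)
+
+model = keras.Model(frames, outputs)
+model.compile(optimizer='adam', loss='categorical_crossentropy')
+What I don't understand is whether this end-to-end setup is the right way, or whether the CNN should be trained separately somehow.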
+"
+"['reinforcement-learning', 'deep-rl', 'pytorch', 'actor-critic-methods', 'a3c']"," Title: Why would the reward of A3C with LSTM suddenly drop off after many episodes?Body: I am training an A3C with stacked LSTM.
+During initial training, my model was giving a decent positive reward. However, after many episodes, its reward just drops to zero and stays there for a long time. Is it because of the LSTM?
+Is it normal?
+Should I expect it to work after the training is over or just terminate the training and increase the density of my network?
+"
+"['machine-learning', 'prediction', 'data-preprocessing', 'feature-engineering']"," Title: How to perform prediction when some features have missing values?Body: Sorry if this is too noob question, I'm just a beginner.
+I have a data set with companies' info. There are 2 kinds of features: financial (revenue and so on) and general info (like the number of employees and date of registration)
+I have to predict the probability of default, and the data has gaps: about half of the companies have no financial data at all, but the general features are 100% filled.
+What is the best practice for such a situation?
+Will be great if you can give some example links to read.
+"
+['language-model']," Title: Fundamentally, what is a perfect language model?Body: Suppose that we want to generate a sentence made of words according to language $L$:
+$$
+W_1 W_2 \ldots W_n
+$$
+Question: What is the perfect language model?
+I ask about perfect because I want to know the concept fundamentally at its fullest extent. I am not interested in knowing heuristics or shortcuts that reduce the complexity of its implementation.
+
+1. My thoughts so far
+1.1. Sequential
+One possible way to think about it is moving from left to right. So, 1st, we try to find out value of $W_1$. To do so, we choose the specific word $w$ from the space of words $\mathcal{W}$ that's used by the language $L$. Basically:
+$$
+w_1 = \underset{w \in \mathcal{W}}{\text{arg max }} \Pr(W_1 = w)
+$$
+Then, we move forward to find the value of the next word $W_2$ as follows
+$$
+w_2 = \underset{w \in \mathcal{W}}{\text{arg max }} \Pr(W_2 = w | W_1 = w_1)
+$$
+Likewise for $W_3, \ldots, W_n$:
+$$
+w_3 = \underset{w \in \mathcal{W}}{\text{arg max }} \Pr(W_3 = w | W_1 = w_1, W_2=w_2)
+$$
+$$
+\vdots
+$$
+$$
+w_n = \underset{w \in \mathcal{W}}{\text{arg max }} \Pr(W_n = w | W_1 = w_1, W_2=w_2, \ldots W_{n-1}=w_{n-1})
+$$
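+(In code, this sequential picture is just a greedy left-to-right loop; a toy sketch, assuming some oracle prob(word, history) for the conditional probabilities were available:)
+def greedy_sentence(vocab, prob, n):
+    # prob(w, history) is an assumed oracle for Pr(W_k = w | W_1..W_{k-1} = history); I do not have it.
+    history = []
+    for _ in range(n):
+        history.append(max(vocab, key=lambda w: prob(w, tuple(history))))
+    return history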
+But is this really perfect? I personally doubt it. I think that, while language is usually read and written in a given direction (e.g. left to right), it is not always done so, and in many cases language is read/written in a funny order, as we often do. E.g. even when I wrote this question, I jumped back and forth, then went to edit it (as I'm doing now). So I clearly didn't write it from left to right! Similarly, you, the reader, won't really read it in a single pass from left to right, will you? You will probably read it in some funny order and go back and forth for a while until you reach an understanding. So I personally really doubt that the sequential formalism is perfect.
+1.2. Joint
+Here we find all the $n$ words jointly. Of course ridiculously expensive computationally (if implemented), but our goal here is to only know what is the problem at its fullest.
+Basically, we get the $n$ words as follows:
+$$
+(w_1, w_2, \ldots, w_n) = \underset{(w_1,w_2,\ldots,w_n) \in \mathcal{W}^n}{\text{arg max }} \Pr(W_1 = w_1, W_2=w_2, \ldots W_n=w_n)
+$$
+This is a perfect representation of a language model in my opinion, because its answer is guaranteed to be correct. But there is this annoying aspect, which is that its candidate-word space is needlessly large!
+E.g. this formalism is basically saying that the following is a candidate word sequence: $(., Hello, world, !)$, even though we know that in (say) English a sentence cannot start with a dot ".".
+1.3. Joint but slightly smarter
+This is very similar to 1.2 Joint, except that it deletes the single bag of all words $\mathcal{W}$, and instead introduces several bags $\mathcal{W}_1, \mathcal{W}_2, \ldots, \mathcal{W}_n$, which work as follows:
+
+- $\mathcal{W}_1$ is a bag that contains words that can only appear as 1st words.
+- $\mathcal{W}_2$ is a bag that contains words that can only appear as 2nd words.
+- $\vdots$
+- $\mathcal{W}_n$ is a bag that contains words that can only appear as $n$th words.
+
+This way, we will avoid the stupid candidates that 1.2. Joint evaluated by following this:
+$$
+(w_1, w_2, \ldots, w_n) = \underset{w_1 \in \mathcal{W}_1, w_2 \in \mathcal{W}_2, \ldots, w_n \in \mathcal{W}_n}{\text{arg max }} \Pr(W_1 = w_1, W_2=w_2, \ldots W_n=w_n)
+$$
+This will also guarantee being a perfect representation of a language model, yet its candidate space is smaller than the one in 1.2. Joint.
+1.4. Joint but fully smart
+Here is where I'm stuck!
+Question rephrased (in case it helps): Is there any formalism that gives the perfect correctness of 1.2. and 1.3., while also being fully smart in that its candidate space is the smallest?
+"
+['game-ai']," Title: How much can/should the non-player character know about the game's world?Body: I'm about to write a non-player character (NPC). I wonder how much the AI should know about the game's world. So, my question isn't about the amount of training data the AI has to collect. I'm interested in how much the AI is allowed to know about what's going on in the game's world.
+For example, can (shall) it have knowledge about the build queue of the player?
+To provide more details: while a human plays a game against another human, not all information of what the opponent is doing is available (e.g. the queue of the units your opponent is building). This could give you an advantage (so that you can prepare for a rush, when he's building many cheap units). Theoretically, an NPC could access and make use of that knowledge and, in addition, spare resources for scouting/spying/exploring.
+But is this the way of constructing an NPC AI? Or should this data also be restricted? I have never done anything like this before.
+I don't know where else to ask or what more information I could provide. So, if something in my question is unclear or unfit, please let me know exactly what.
+"
+"['natural-language-processing', 'long-short-term-memory', 'attention', 'language-model']"," Title: Appropriate metric and approach for natural language generation for small sentencesBody: I am trying to create a language generation model to generate very short sentences/words, like a rapper name generator. The sentences in my dataset are anywhere between 1 word and 15 words (3-155 characters). So far, I have tried LSTM's with 1-3 layers and inputs as subwords and characters. The results so far are not that great, I am getting ~0.5 crossentropy loss and ~50% accuracy.
+My inputs are like a sliding window with pre-padding, e.g. (for a batch) Inputs = [[0,0,0,1], [0,0,1,2], ..., [n-4,...,n-1]] and outputs = [[0,0,1,2], ..., [n-3,n-2,n-1,n]], where 0 is padding, 1 is the start token and n is the end token. Outputs are one-hot encoded.
+The model is an embedding layer and a few LSTM and dropout layers, followed by a time-distributed dense layer and then a dense layer.
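+Concretely, the model looks roughly like this (a sketch; the exact sizes differ in my runs):
+from tensorflow import keras
+from tensorflow.keras import layers
+
+vocab_size, window = 1000, 4    # placeholders matching the 4-token sliding windows above
+
+model = keras.Sequential([
+    layers.Embedding(vocab_size, 64, input_length=window),
+    layers.LSTM(128, return_sequences=True),
+    layers.Dropout(0.3),
+    layers.LSTM(128, return_sequences=True),
+    layers.Dropout(0.3),
+    layers.TimeDistributed(layers.Dense(128, activation='relu')),
+    layers.Dense(vocab_size, activation='softmax'),
+])
+model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])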
+My doubt is: is accuracy the right metric? I am using it because, at the end, I am making a classification for 4 output values. Another doubt is: would a transformer be suitable for this, since I want to generate small sentences (which are nouns), and models like GPT/BERT are more suited to capturing dependencies in long sentences?
+"
+"['machine-learning', 'reinforcement-learning']"," Title: What is the ""Hello World"" problem of Reinforcement Learning?Body: As we all know, "Hello World" is usually the first program that any programmer learns/implements in any language/framework.
+As Aurélien Géron mentioned in his book that MNIST is often called the Hello World of Machine Learning, is there any "Hello World" problem of Reinforcement Learning?
+A few candidates that I could think of are multi armed bandits problem and Cart Pole Env.
+"
+"['deep-learning', 'convolutional-neural-networks', 'image-recognition', 'incremental-learning', 'online-learning']"," Title: Is continuous learning possible with a deep convolutional neural network, without changing its topology?Body: In general, is continuous learning possible with a deep convolutional neural network, without changing its topology?
+In my case, I want to use a convolutional neural network as a classifier of heartbeat types. The ECG signal is split, and a color image is created using feature extraction. These photos (the inputs) are fed into a deep CNN, but they must be labeled by someone first.
+Are there ways to implement continuous learning in a deep neural network for image recognition? Does such an implementation make sense if the labels have to be specially prepared in advance?
+"
+"['constraint-satisfaction-problems', 'homework']"," Title: What's wrong with my answer to this constraint satisfaction problem, which needs to be solved the AC-3 algorithm?Body: I was watching the video Constraint Satisfaction: the AC-3 algorithm, and I tried to solve this question:
+
+Given the variables A, B, C and D, each with domain {1, 2, 3, 4}, and the restrictions A > B, B = C and D ≠ A, use the AC-3 algorithm.
+
+But the teacher told me that my answer below is wrong!
+He gave me a tip: Domain D will not be changed!
+Below is my answer, step by step. If someone can help me find the error, I'd appreciate it!
+To solve this exercise, it is first necessary to organize the data in order to separate what is the domain, agenda and arc.
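+To make my hand-worked steps easier to check, this is my understanding of the data organization and of the revise step I apply at each agenda item (a small Python sketch; my mistake may well be hiding here):
+# My domains and (directed) constraints, as I set them up on paper.
+domains = {'A': {1, 2, 3, 4}, 'B': {1, 2, 3, 4}, 'C': {1, 2, 3, 4}, 'D': {1, 2, 3, 4}}
+constraints = {
+    ('A', 'B'): lambda a, b: a > b,
+    ('B', 'A'): lambda b, a: b < a,
+    ('B', 'C'): lambda b, c: b == c,
+    ('C', 'B'): lambda c, b: c == b,
+    ('D', 'A'): lambda d, a: d != a,
+    ('A', 'D'): lambda a, d: a != d,
+}
+
+def revise(x, y):
+    # Remove every value of x's domain that has no supporting value in y's domain.
+    removed = {vx for vx in domains[x] if not any(constraints[(x, y)](vx, vy) for vy in domains[y])}
+    domains[x] -= removed
+    return bool(removed)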
+
+Soon after, we will analyze the first item on the agenda “A> B” with domain A, in order to eliminate unnecessary elements from the domain.
+
+Analyze domain B with the agenda item “B <A”
+
+Analyze domain B with the agenda item “B = C” and add the constraint “A> B”
+
+Analyze domain D with the agenda item “D ≠ A” and add the constraint “B <A”
+
+Analyze domain A with the agenda item “A ≠ D”
+
+Analyze domain A with the agenda item “A> B”
+
+Analyze domain B with the agenda item “B = C”
+
+Analyze domain B with the agenda item “B <A”
+
+Result
+
+"
+"['objective-functions', 'papers', 'generative-adversarial-networks', 'attn-gan']"," Title: What is the purpose of the DAMSM loss for the generators in AttnGAN?Body: I am confused about the training part in AttnGan.
+If you look at page 3, there are two types of losses for the generator network: one involving the Deep Attentional Multimodal Similarity Model (DAMSM) loss $(L_{DAMSM})$, and the others for the individual generators $(L_{G_i})$ for $i = 1, 2, 3$.
+My doubt is: if each generator has its own loss function that is useful in training, what is the purpose of using $L_G$, i.e., the objective with the DAMSM loss? Is my assumption wrong?
+"
+"['convolutional-neural-networks', 'tensorflow', 'keras', 'variational-autoencoder', 'filters']"," Title: How to construct input dependent convolutional filter?Body: I am constructing a convolutional variational autoencoder for images, starting out with mnist digits. Typically I would specify convolutional layers in the following way:
+from tensorflow import keras
+from tensorflow.keras import layers
+
+input_img = layers.Input(shape=(28,28,1))
+conv1 = keras.layers.Conv2D(32, (3,3), strides=2, padding='same', activation='relu')(input_img)
+conv2 = keras.layers.Conv2D(64, (3,3), strides=2, padding='same', activation='relu')(conv1)
+...
+
+However, I would also like to construct a convolutional filter/kernel that is fixed BUT dependent on some content related to the input, which we can call an auxiliary label. This could be a class label or some other piece of relevant information corresponding to the input. For example, for MNIST I can use the class label as auxiliary information and map the digit to a (3,3) kernel and essentially generate a distinct kernel for each digit. This specific filter/kernel is not learned through the network so it is fixed, but it is class dependent. This filter will then be concatenated with the traditional convolutional filters shown above.
+input_img = layers.Input(shape=(28,28,1))
+conv1 = keras.layers.Conv2D(32, (3,3), strides=2, padding='same', activation='relu')(input_img)
+
+# TODO: add a filter/kernel that is fixed (not learned by model) but is class label specific
+# Not sure how to implement this?
+# auxiliary_conv = keras.layers.Conv2D(1, (3,3), strides=2, padding='same', activation='relu')(input_img)
+
+I know there are kernel initializers to specify initial weights https://keras.io/api/layers/initializers/, but I'm not sure if this is relevant and if so, how to make this work with a class specific initialization.
+In summary, I want a portion of the model's weights to be input content dependent so that some of the trained model's weights vary based on the auxiliary information such as class label, instead of being completely fixed regardless of the input. Is this even possible to achieve in Keras/Tensorflow? I would appreciate any suggestions or examples to get started with implementation.
+"
+"['neural-networks', 'datasets', 'math']"," Title: Could the neural network automatically calculate and get different one-to-many quantities relative to their parent quantity?Body: Let's say I have a primary dataset that its secondary dataset is hundreds to match and group like an one-to-many relationship.
+I'm new to the world of AI. My problem is that many child groups contain the same elements, or even different combinations that result in the parent data; in this case, rather than avoiding duplication, I want to detect those duplications and in some way add up the data.
+This is an example of what secondary data can look like and what I want to get from grouping it.
+Parent data
+ ID FIELD1 FIELD2 FIELD3 FIELD4 FIELD5
+ 90148001 BLABLA 40 0 35896.89479 35896.89479
+
+Child data
+ ID FIELD1 FIELD2 FIELD3 FIELD4 FIELD5
+* 90148001 BLABLA 1 1770 1769.572665 1769.572665
+* 90148001 DESCRIPTION2 1 13146 13146.45284 13146.45284
+* 90148001 BLABLA 1 2176 2176.435074 2176.435074
+* 90148001 BLABLA 1 2306 2305.716285 2305.716285
+* 90148001 BLABLA 1 2531 2531.271196 2531.271196
+* 90148001 BLABLA 1 1147 1146.803622 1146.803622
+* 90148001 BLABLA 1 1991 1990.613246 1990.613246
+* 90148001 BLABLA 1 3641 3641.394446 3641.394446
+* 90148001 BLABLA 1 2471 2470.8253 2470.8253
+* 90148001 BLABLA 1 2247 2246.984815 2246.984815
+* 90148001 BLABLA 1 2471 2470.8253 2470.8253
+
+Would a neural network be able to process, aggregate, and group those quantities?
+"
+['unsupervised-learning']," Title: What is the “Hello World” problem of Unsupervised Learning?Body: As a followup to this question, I'm interested in what the typical "Hello World" problem (first easy example problem) is for unsupervised learning.
+A quick Google search didn't find any obvious answers for me.
+"
+"['reinforcement-learning', 'math', 'papers', 'hindsight-experience-replay']"," Title: What does $r : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ mean in the article Hindsight Experience Replay, section 2.1?Body: Taken from section 2.1 in the article:
+
+We consider the standard reinforcement learning formalism consisting of an agent interacting with an environment. To simplify the exposition we assume that the environment is fully observable. An environment is described by a set of states $S$, a set of actions $A$, a distribution of initial states $p(s_0)$, a reward function $r : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$, transition probabilities $p(s_{t+1} \mid s_t, a_t)$, and a discount factor $\gamma \in [0, 1]$.*
+
+How should one interpret the maths behind it?
+"
+"['reinforcement-learning', 'control-problem', 'model-based-methods', 'linear-programming']"," Title: Are linear approximators better suited to some tasks compared to complex neural net functions?Body: Model based RL attempts to learn a function $f(s_{t+1}|s_t, a_t)$ representing the environment transitions, otherwise known as a model of the system. I see linear functions are still being used in model-based RL such as in robotic manipulation to learn system dynamics, and can work effectively well. (Here, I mean in learning the model, not as an optimization method for the controller selecting the best actions).
+In model-based RL, are there situations where a learning a linear model such as using a Lyapunov function would be better suited than using a neural network, or are the examples of problems framed to use linear models when addressing them using model-based RL?
+"
+"['python', 'game-ai', 'combinatorial-games']"," Title: Are there any python implementations of GGP games or how to use game logic written in GDL in python?Body: I'm writing a General Game Playing (GGP) AI in python. I'd like to test it on some GGP games. So are there any python implementations of GGP games?
+I found in http://games.ggp.org/base games written in Game Description Language (GDL). How to use them in python if it is possible to do so?
+"
+"['natural-language-processing', 'sequence-modeling', 'text-classification']"," Title: NLP Bible verse division problem: Whats the best model/method?Body: I'm working on a project compiling various versions of the Bible into a dataset. For the most part versions separate verses discreetly. In some versions, however, verses are combined. Instead of verse 16, the marker will say 16-18. I wonder if, given I have a lot of other versions that separate them discretely, I can train an NLP model (I have about 30 versions that could act as a training set which would constitute to separate those combined verses into discrete verses. I'm fairly new at deep learning, having done a few toy projects. I wonder how to think about this problem? What kind of problem is it? I think it might be similar to auto-punctuation problems and it seems the options there are seq2seq and classifier. This makes more sense to me as a classification problem, but maybe my inexperience is what drives me that direction. Can people suggest ways to think about this problem and resources I might use?
+In answer to questions in the comment, I am dealing only with text, not images. An example might be like this:
+Genesis 2, New Revised Standard Version:
+
+5 when no plant of the field was yet in the earth and no herb of the field had yet sprung up—for the Lord God had not caused it to rain upon the earth, and there was no one to till the ground; 6 but a stream would rise from the earth, and water the whole face of the ground— 7 then the Lord God formed man from the dust of the ground, and breathed into his nostrils the breath of life; and the man became a living being.
+
+Genesis 2, The message version:
+
+5-7 At the time God made Earth and Heaven, before any grasses or shrubs had sprouted from the ground—God hadn’t yet sent rain on Earth, nor was there anyone around to work the ground (the whole Earth was watered by underground springs)—God formed Man out of dirt from the ground and blew into his nostrils the breath of life. The Man came alive—a living soul!
+
+The goal then would be to divide the message version into discrete verses in the way that the NRSV is. Certainly, a part of the guide would be that a verse always ends in some kind of punctuation, though while necessary it is not sufficient to assign a distinct verse.
+"
+"['deep-learning', 'object-detection']"," Title: How do I label images for deep learning classification?Body: I have roughly 30,000 images of two categories, which are 'crops' and 'weeds.' An example of what I have can be found below:
+
+The goal will use my training images to detect weeds among crops, given an orthomosaic GIS image of a given field. I guess you could say that I'm trying to detect certain objects in the field.
+As I'm new to deep learning, how would one go about generating training labels for this task? Can I just label the entire photo as a 'weed' using some type of text file, or do I actually have to draw bounding boxes (around weeds) on each image that will be used for training? If so, is there an easier way than going through all 30,000 of my images?
+I'm very new to this, so any specific details would really help a lot!
+"
+"['neural-networks', 'deep-learning', 'classification', 'datasets', 'data-preprocessing']"," Title: Can a neural network be trained on a dataset containing only values for true output for a classification problem?Body: I am using a dataset from Google which contains 1,27,000 data points on simulated concentrations of the atmosphere of exoplanets which can sustain life. So, the output label of all these data points is 1 i.e, probability of life existing there is 1. If I train my neural network on this data, and test it on data points with concentrations other than these, can I expect to get probability values at the output? Asking because the model knows no false labelled value.
+"
+"['machine-learning', 'training', 'datasets', 'models', 'weights']"," Title: What are some examples of functions that machine learning models compute?Body: My simple understanding of AI is that it is based on a mathematical model of a problem. If I understood correctly, the model is a polynomial equation and its weights are calculated by training the model with data sets.
+I am interested to see a few example polynomial equations (trained models) which are used in certain problem areas. I tried to search it, but so far could not find any simple answers.
+Can anyone list a few examples here?
+"
+"['reinforcement-learning', 'value-functions', 'sutton-barto', 'expectation', 'return']"," Title: What is wrong with equation 7.3 in Sutton & Barto's book?Body: Equation 7.3 of Sutton Barto book:
+$$\text{Equation: } \max_s \left|\mathbb{E}_\pi[G_{t:t+n} \mid S_t = s] - v_\pi(s)\right| \le \gamma^n \max_s \left|V_{t+n-1}(s) - v_\pi(s)\right| $$
+$$\text{where } G_{t:t+n} = R_{t+1} + \gamma R_{t+2} + \dots + \gamma^{n-1} R_{t+n} + \gamma^n V_{t+n-1}(S_{t+n})$$
+Here $V_{t+n-1}(S_{t+n})$ is the estimate of $v_\pi(S_{t+n})$.
+But the left-hand side of the above equation should be zero since, for any state $s$, $G_{t:t+n}$ is an unbiased estimate of $v_\pi(s)$, hence $\mathbb{E}_\pi[G_{t:t+n} \mid S_t = s] = v_\pi(s)$.
+"
+"['convolutional-neural-networks', 'math', 'research', 'image-processing', 'statistics']"," Title: Research paths/areas for improving the performance of CNNs when faced with limited dataBody: I've been reading through the research literature for image processing, computer vision, and convolutional neural networks. For image classification and object recognition, I know that convolutional neural networks deliver state-of-the-art performance when large amounts of data are available. Furthermore, I know that Hinton et al. created "capsule networks" to try and overcome some of the fundamental limitations of CNN architecture (such as them not being rotationally invariant). However, my understanding is that capsule networks have been a failure (so far), and most people expect them to go nowhere. And CNNs have progressively been improved in various ways (Bayesian optimisation for hyper parameter tuning, new convolution kernels, etc.). It seems to me that, at the moment, and for the foreseeable future, CNNs are the best architecture available for image-related stuff.
+But, as I said, CNNs, like other Deep Learning architectures, require large amounts of data. So my question is as follows:
+What are the research areas/topics for improving CNNs in the sense of making them work more effectively (that is, have greater performance) with less data (working with small datasets)?
+I know that there is various research looking at approaches to increasing data (such as data augmentation, generative networks, etc.), but I am primarily interested in fundamental modifications to CNNs themselves, rather than purely focusing on changes to the data itself.
+And to expand upon my question, using my above definition of "performance", I am interested in these two categories:
+
+- "Computational methods" for increasing CNN performance. This would be the non-mathematical stuff that I've read about, such as just increasing the number of layers and making the CNN deeper/wider (and I think another one had to do with just making the size of the convolution kernel smaller, so that it looks at smaller pieces of the image at any one time, or something like that?).
+
+- "Mathematical methods" for increasing CNN performance. This would be the cutting-edge mathematical/statistical stuff that I've read about: things like algorithms (such as Bayesian optimization); I've come across a lot of geometric stuff; and I guess the cutting-edge convolution kernels created by the image processing people would also fall under this category.
+
+
+Obviously, this "list" is not exhaustive, and it's probably incorrect; I'm a novice to this research, so I'm trying to find my way around.
+I am interested in studying both of the above categories, but I will primarily be working from the mathematical/statistical side. And I want to work on research that is still practical and can be put to use in industry for improved performance (even if it might still be "advanced"/complex for most people in industry), not the highly theoretical stuff.
+Related (but unanswered): Are there any good research papers on image identification with limited data?
+"
+"['machine-learning', 'deep-learning', 'agi', 'research', 'programming-languages']"," Title: How differentiable programming and programming language supporting it will potentially help the development towards AGI?Body: After the state of the art Deep Learning techniques/algorithms being implemented in low-level languages like Objective-C, C++, etc to high-level languages like Python, JS, etc. and with the help of huge libraries like Tensorflow, Pytorch, Scikit-Learn, etc.
+Now, Swift: Google's bet on differentiable programming, they are making Swift differential programming ready see this manifesto and they are building TensorFlow from the ground up in swift S4TF.
+So, how differentiable programming and programming language supporting it will potentially help the development towards AGI?
+"
+"['natural-language-processing', 'reference-request', 'autoencoders', 'transformer', 'bert']"," Title: Are there transformer-based architectures that can produce fixed-length vector encodings given arbitrary-length text documents?Body: BERT encodes a piece of text such that each token (usually words) in the input text map to a vector in the encoding of the text. However, this makes the length of the encoding vary as a function of the input length of the text, which makes it more cumbersome to use as input to downstream neural networks that take only fixed-size inputs.
+Are there any transformer-based neural network architectures that can encode a piece of text into a fixed-size feature vector more suitable for downstream tasks?
+Edit: To illustrate my question, I’m wondering whether there is some framework that allows the input to be either a sentence, a paragraph, an article, or a book, and produce an output encoding on the same, fixed-sized format for all of them.
+"
+"['philosophy', 'agi', 'gpt']"," Title: Is GPT-3 an early example of strong AI in a narrow setting?Body: In GPT-2, the large achievement was being able to generate coherent text over a long-form while maintaining context. This was very impressive but for GPT-2 to do new language tasks, it had to be explicitly fine-tuned for the new task.
+In GPT-3 (From my understanding), this is no longer the case. It can perform a larger array of language tasks from translation, open domain conversation, summarization, etc., with only a few examples. No explicit fine-tuning is needed.
+The actual theory behind GPT-3 is fairly simple, which would not suggest any level of ability other than what would be found in common narrow intelligence systems.
+However, looking past the media hype and the news coverage, GPT-3 is not explicitly programmed to "know" how to do these wider arrays of tasks. In fact, with limited examples, it can perform many language tasks quite well and "learn on the fly" so to speak. To me, this does seem to align fairly well with what most people would consider strong AI, but in a narrow context, which is language tasks.
+Thoughts? Is GPT-3 an early example of strong AI but in a narrower context?
+"
+"['agi', 'economics']"," Title: What is the current artificial general intelligence technology valuation?Body: That is, if AGI were an existing technology, how much would it be valued to?
+Obviously it would depend on its efficiency, if it requires more than all the existing hardware to run it, it would be impossible to market.
+This question is more about getting a general picture of the economy surrounding this technology.
+Assuming a specific definition of AGI and that we implemented that AGI, what is its potential economical value?
+Current investments in this research field are also useful data.
+"
+"['deep-learning', 'backpropagation', 'objective-functions', 'logistic-regression']"," Title: Back propagation approach to logistic regression: why is cost diverging but accuracy increasing?Body: Background I have tried to fit a logistic regression model - written using a forward / back propagation approach (as part of Andrew Ng's deep learning course) - to a very non-linear data set (see picture below). Of course, it totally fails; in Andrew Ng's course, the failure of logistic regression to fit to this motivates developing a neural net - which works quite nicely. But my question concerns what my logistic model is doing and why.
+
+The problem My logistic regression model's cost increases, even after massively reducing the learning rate. But at the same time my accuracy (slowly) increases. I simply cannot see why.
+To confuse matters even more - if I resort to a negative learning rate (essentially trying to force the calibration to higher cost values) the cost then decreases for a time until the accuracy hits 50%. After this point, the cost then inexorably increases - but the accuracy stays equal to 50%. The solution so found is to set all points to either red or blue (a reasonable fit given logistic regression simply cannot work on this data).
+My questions and thoughts on answers I have reproduced the Python code below - hopefully it's clear. My questions are:
+
+- Is there a mistake in the model that explains why negative learning rates seem to work better?
+- On the topic of why the cost increases even as accuracy asymptotes to 50%: is the issue that once the model has discovered the "all points equal to either red or blue" solution the parameters "w" and "b" just get larger and larger (in absolute terms) - driving all of the predictions closer to 1 (or conversely if it predicts all points are 0)?
+
+To explain this second question a bit more: imagine red points are defined by y = 1. Suppose parameters w, b are chosen such that the probability for every point equals 0.9. Then the model predicts all points are red - which is correct for half the points. The model can then improve half the predictions by driving w and b up (so that sigmoid ( w*x + b) --> 1). But of course, this makes half the predictions (the blue points) more and more wrong - which causes the cost function for those points - log(1 - prob) - to diverge. I don't truly see why gradient descent would do this but it's all I can think of for the peculiar behaviour of the algorithm.
+Hope this all makes sense. Hit me up if not.
+import numpy as np
+import matplotlib.pyplot as plt
+
+
+# function to create a flower-like arrangement of 1s and 0s
+def load_planar_dataset():
+ np.random.seed(1)
+ m = 400 # number of examples
+ N = int(m/2) # number of points per class
+ D = 2 # dimensionality / i.e. work in 2d plane - so X is a set of (x,y) coordinate points
+ X = np.zeros((m,D)) # data matrix where each row is a single example
+ Y = np.zeros((m,1), dtype='uint8') # labels vector (0 for red, 1 for blue)
+ a = 4 # maximum ray of the flower
+
+ for j in range(2):
+ ix = range(N*j,N*(j+1))
+ t = np.linspace(j*3.12,(j+1)*3.12,N) + np.random.randn(N)*0.2 # theta / random element mixes up some of the petals so you get mostly blue with some red petals and vice-versa
+ r = a*np.sin(4*t) + np.random.randn(N)*0.2 # radius / again random element alters shape of flower slightly
+ X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
+ Y[ix] = j
+
+ X = X.T # transpose so columns = training example as per standard in lectures
+ Y = Y.T
+
+ return X, Y
+
+# function to plot the above data plus a modelled decision boundary - works by applying model to grid of points and colouring accordingly
+def plot_decision_boundary(model, X, y):
+ # Set min and max values and give it some padding
+ x_min, x_max = X[0, :].min() - 1, X[0, :].max() + 1
+ y_min, y_max = X[1, :].min() - 1, X[1, :].max() + 1
+ h = 0.01
+ # Generate a grid of points with distance h between them
+ xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
+ # Predict the function value for the whole grid
+ Z = model(np.c_[xx.ravel(), yy.ravel()])
+ Z = Z.reshape(xx.shape)
+ # Plot the contour and training examples
+ plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
+ plt.ylabel('x2')
+ plt.xlabel('x1')
+    plt.scatter(X[0, :], X[1, :], c=y.ravel(), cmap=plt.cm.Spectral)
+
+
+# sigmoid function as per sandard linear regression
+def sigmoid(z):
+ """
+ Compute the sigmoid of z
+
+ Arguments:
+ z -- A scalar or numpy array of any size.
+
+ Return:
+ s -- sigmoid(z)
+ """
+
+ s = 1. / (1. + np.exp(-z))
+
+ return s
+
+
+#
+def propagate(w, b, X, Y):
+ """
+ Implement the cost function and its gradient for the propagation explained above
+
+ Arguments:
+ w -- weights, a numpy array of size (num_px * num_px * 3, 1)
+ b -- bias, a scalar
+ X -- data of size (num_px * num_px * 3, number of examples)
+ Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
+
+ Return:
+ cost -- negative log-likelihood cost for logistic regression
+ dw -- gradient of the loss with respect to w, thus same shape as w
+ db -- gradient of the loss with respect to b, thus same shape as b
+ """
+
+ m = X.shape[1];
+
+ # forward prop
+ Z = np.dot(w.T, X) + b;
+ A = sigmoid(Z); # activiation = the prediction of the model
+
+
+ # compute cost
+ cost = - 1. / m * np.sum( (Y * np.log(A) + (1. - Y) * np.log(1. - A) ) )
+
+ #back prop for gradient descent
+
+ da = - Y / A + (1. - Y) / (1. - A)
+ dz = da * A * (1. - A) # = - Y (1-A) + (1. - Y) A = A - Y
+ dw = 1. / m * np.dot( X, dz.T )
+ db = 1. / m * np.sum(dz)
+
+ grads = {"dw": dw,
+ "db": db}
+
+ return grads, cost
+
+
+def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
+ """
+ This function optimizes w and b by running a gradient descent algorithm
+
+ Arguments:
+ w -- weights, a numpy array of size (num_px * num_px * 3, 1)
+ b -- bias, a scalar
+ X -- data of shape (num_px * num_px * 3, number of examples)
+ Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
+ num_iterations -- number of iterations of the optimization loop
+ learning_rate -- learning rate of the gradient descent update rule
+ print_cost -- True to print the loss every 100 steps
+
+ Returns:
+ params -- dictionary containing the weights w and bias b
+ grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
+ costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
+
+ """
+ costs = []
+
+ for i in range(num_iterations):
+
+ # cost /gradient calculation
+ grads, cost = propagate(w, b, X, Y)
+
+ #retrieve derivatives
+ dw = grads["dw"]
+ db = grads["db"]
+
+ # update values according to gradient descent algorithm
+ w = w - learning_rate * dw
+ b = b - learning_rate * db
+
+ # record the costs
+ if i % 100 == 0:
+ costs.append(cost)
+
+ # Print the cost every 100 training iterations
+ if print_cost:
+ print("Cost after iteration %i: %f" %(i, cost))
+
+
+
+ params = { "w": w,
+ "b": b}
+
+ grads = { "dw": dw,
+ "db": db}
+
+ return params, grads, costs
+
+
+def predict(w, b, X):
+ '''
+ Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
+
+ Arguments:
+ w -- weights, a numpy array of size (num_px * num_px * 3, 1)
+ b -- bias, a scalar
+ X -- data of size (num_px * num_px * 3, number of examples)
+
+ Returns:
+ Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
+ '''
+ Z = np.dot(w.T, X) + b
+ A = sigmoid(Z)
+
+
+ Y_prediction = (A >= 0.5).astype(int)
+
+ return Y_prediction
+
+
+
+
+np.random.seed(1) # set a seed so that the results are consistent
+
+X, Y = load_planar_dataset()
+
+# Visualize the data:
+
+plt.scatter(X[0, :], X[1, :], c=Y.ravel(), s=40, cmap=plt.cm.Spectral); # s = size of points; cmap are nicer colours
+plt.show()
+
+shape_X = X.shape
+shape_Y = Y.shape
+m = shape_Y[1] # training set size
+n = shape_X[0] # number of features (2)
+
+
+# initialise parameters
+w = np.random.rand(n, 1)
+b = 0
+
+# print accuracy of initial parameters by comparing prediction to
+print("train accuracy: {} %".format(100 - np.mean(np.abs(predict(w, b, X) - Y)) * 100))
+
+
+# fit model and print out costs every 100 iterations of the forward / back prop
+parameters, grads, costs = optimize(w, b, X, Y, num_iterations = 10000, learning_rate = 0.000005, print_cost = True)
+
+
+# return the prediction
+Y_prediction = predict(parameters["w"], parameters["b"], X)
+
+# print accuracy of fitted model
+print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction - Y)) * 100))
+
+
+# print parameters for interest
+print( parameters["w"] , parameters["b"] )
+
+# plot decision boundary
+plot_decision_boundary(lambda x: predict(parameters["w"], parameters["b"], x.T), X, Y)
+plt.show()
+
+
+"
+"['reference-request', 'agi', 'research', 'economics']"," Title: How much is currently invested in artificial general intelligence research and development?Body: How much is currently invested in artificial general intelligence research and development worldwide?
+Feel free to add company or VC names, but this is not the point. The point is to get an idea of the economics around artificial general intelligence.
+"
+"['reinforcement-learning', 'backpropagation', 'policy-gradients', 'reinforce', 'cross-entropy']"," Title: Which loss function should I use in REINFORCE, and what are the labels?Body: I understand that this is the update for the parameters of a policy in REINFORCE:
+$$
+\Delta \theta_{t}=\alpha \nabla_{\theta} \log \pi_{\theta}\left(a_{t} \mid s_{t}\right) v_{t},
+$$
+where $v_t$ is usually the discounted future reward and $\pi_{\theta}\left(a_{t} \mid s_{t}\right)$ is the probability of taking the action that the agent took at time $t$. (Tell me if something is wrong here.)
+However, I don't understand how to implement this with a neural network.
+Let's say that probs = policy.feedforward(state)
returns the probabilities of taking each action, like [0.6, 0.4]
. action = choose_action_from(probs)
will return the index of the probability chosen. For example, if it chose 0.6, the action would be 0.
+When it is time to update the parameters of the policy network, what should we do? Should we do something like the following?
+gradient = policy.backpropagate(total_discounted_reward * log(probs[action]))
+policy.weights += gradient
+
+And I only backpropagate this through one output neuron?
+Which loss function should I use in this case? What would the labels be?
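+For reference, this is roughly what I imagine the update would look like in PyTorch, based on my understanding above; part of my question is whether this loss is even the right one (the network sizes and the return value are placeholders):
+import torch
+import torch.nn as nn
+
+policy = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2), nn.Softmax(dim=-1))
+optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
+
+state = torch.randn(4)
+probs = policy(state)                        # e.g. tensor([0.6, 0.4])
+action = torch.multinomial(probs, 1).item()  # index of the sampled action
+G = 1.0                                      # placeholder for the discounted return v_t
+
+loss = -torch.log(probs[action]) * G         # a scalar whose gradient matches the update rule above
+optimizer.zero_grad()
+loss.backward()
+optimizer.step()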
+If you need more explanation, I have this question on SO.
+"
+"['deep-learning', 'training']"," Title: Does it make sense to train images (for object detection algorithms) with cameras that will not be used to collect future data?Body: I am training an algorithm to identify weeds within crops using the YOLOv5 algorithm. This algorithm will be used in the future to identify weeds in images collected by unmanned aircraft (drones) after making an orthomosaic images. Using the open-source LabelImg software, I am labeling images for object detection that were collected with both UAV and hand-held digital cameras. Using both platforms, I collected many images of weeds that will need to be identified.
+My question is this: Does it make sense to collect training samples from the hand-held digital camera, since it will be of much higher resolution than the UAV imagery (and thus not used for future imagery collections after the model is trained)? My initial thought is that it would be best to only use the UAV imagery, since it will be the most similar to what will be collected in the future. However, I do not want to throw out the hand-held digital imagery if it could help in the image classification process.
+"
+"['neural-networks', 'machine-learning', 'recurrent-neural-networks', 'reference-request', 'models']"," Title: Literature on computational modelling involving neuronal ensembliesBody: Straying from the current trends in deep learning, there is an, arguably, interesting idea of neuronal ensembles possibly providing an alternative to the current "layered feature detectors" framework for neural network construction by being considered a basic computational unit instead of one feature detecting neuron. This idea certainly has at least some presence in neuroscience circles, but I found it hard to find studies, attempting to obtain a working computational ensemble based model, which would try to solve any of the existing computer vision/NLP tasks or anything of the sort. This may be just due to me looking in the wrong places, but in any case, I would appreciate any references to papers, exploring building neural network architectures with neuronal ensemblies involvement.
+Just to be clear, I would be interested in any papers on computational modelling of ensemblies even if they are not trying to solve any particular ML task, but it would be better, if the topic of the research is more closely aligned with computer science instead of neurobiology even if CS connection is of a more exotic kind; for example, paper trying to see, whether you can store different concepts and their relations in the ensemble based network is more desirable than paper, trying to accurately model individual neuron and synaptic plasticity dynamics and see, that ensembles emerge, if you scale the system. But again, I would be glad to get references to research in both of these example topics and many more.
+"
+"['convolutional-neural-networks', 'image-processing', 'data-preprocessing', 'data-augmentation', 'image-recognition']"," Title: Should I remove the text overlaying some images in the dataset before training the CNN?Body: If I am attempting to train a CNN on some image data to perform image classification, but some of the images have pieces of text overlaying them (for the purpose of description to humans), then is it better for the CNN to remove the text? And if so, then how do I remove the text? Furthermore, is it a good idea to use both the images with text overlaying them and the images with the removed text for training, since it might act as a form of data augmentation?
+"
+"['genetic-algorithms', 'simulated-annealing', 'meta-heuristics', 'stopping-conditions', 'numerical-algorithms']"," Title: What are most commons methods to measure improvement rate in a meta-heuristic?Body: When I run a meta-heuristics, like a Genetic Algorithm or a Simulated Annealing, I want to have a termination criterion that stops the algorithms when there is not any significant fitness improvement.
+What are good methods for that?
+I tried something like
+$$improvement=\frac{fit(Solution_{new})}{fit(Solution_{old})}$$
+and
+$$improvement={fit(Solution_{new})}-{fit(Solution_{old})}$$
+Neither option seems to be good, because as the old solutions get better, newer solutions, even if they are good, don't improve much compared to the old ones.
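+For reference, this is the kind of stagnation check I had in mind when trying the formulas above (a sketch; the window size and threshold are arbitrary choices of mine):
+def should_stop(history, window=50, eps=1e-4):
+    # history: best fitness value recorded at each iteration (assuming maximization).
+    if len(history) < window + 1:
+        return False
+    return history[-1] - history[-1 - window] < eps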
+"
+"['deep-learning', 'objective-functions', 'image-processing', 'autoencoders', 'regularization']"," Title: Enforcing sparsity constraints that make use of spatial contiguityBody: I have a deep learning network that outputs grayscale image reconstructions. In addition to good reconstruction performance (measured through mean squared error or some other measure like psnr), I want to encourage these outputs to be sparse through a regularization term in the loss function.
+One way to do this is to add an L1 regularization term that penalizes the sum of the absolute values of the pixel intensities. While this is a good start, is there any penalization that takes adjacency and spatial contiguity into account? It doesn't have to be a commonly used constraint/regularization term; even potential concepts or papers that go in this direction would be extremely helpful. In natural images, sparse pixels tend to form regions or patches as opposed to being dispersed or scattered. Are there ways to encourage regions of contiguous pixels to be sparse, as opposed to individual pixels?
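+For concreteness, the baseline L1 term I am referring to looks like this (a PyTorch sketch; the weighting lam is arbitrary):
+import torch
+
+def reconstruction_loss(output, target, lam=1e-3):
+    mse = torch.mean((output - target) ** 2)
+    l1 = torch.mean(torch.abs(output))   # treats every pixel independently; ignores adjacency
+    return mse + lam * l1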
+"
+"['reinforcement-learning', 'python']"," Title: What framework for a project with a custom environment?Body: I'm planning an RL project and I have to decide which RL framework do I use if any at all. The project has a highly custom environment, and testing different algorithms will be required to obtain optimal results. Furthermore, it will use a custom neural network, not implemented in the popular TensorFlow/PyTorch ML frameworks. Therefore, the framework should allow for customization with regard to approximation function (1) and the environment (2). The problem is that to my current knowledge, most of the framework allows only to work with a built-in environment. Does anybody know a framework that meets the two conditions (1) and (2)? Or anybody knows a review that contains information about framework in the context of those conditions?
+"
+"['computer-vision', 'image-recognition', 'image-processing']"," Title: Would it be possible to use AI to measure pupil dilation diameters and fluctuation, on video films on a regular webcam?Body: I've been researching the topic of Cognitive Load Measurement through pupil dilation measurement. All solutions to pupil dilation measurement require some kind of special hardware setup. I was wondering if it would be possible to use AI on a regular webcam record and do those measurements later. If yes, I'd love some pointers to resources of what I need to know to be able to implement it.
+"
+"['machine-learning', 'data-preprocessing', 'supervised-learning']"," Title: Multiple Inertia sensors system based for gestures recognitionBody: I am a newbie to Machine Learning field as I am engaging to a personal project that I am trying to use the 6 degree of freedom Inertial Measurement Units(IMUs) measuring the Acceleration acting on 3 axes(x-y-z) and the Angular velocity around the same 3 axis(x-y-z). One sensor generates a set of 6 raw variables of: Acc_x, Acc_y, Acc_z, Gyro_x, Gyro_y, Gyro_z.
+Initially, I have 2 of those sensors, which were attached to the arm (one to the part above the elbow and one to the part below the elbow); together they spit out a dataset of 12 raw variables that represent a specific movement of the arm, which I save as a CSV file. This is the point where I really get overwhelmed by the huge amount of data: I don't know how to process this kind of data and extract the features to differentiate the gestures.
+My dataset of the first movement I recorded looks like this:
+
+I denoted 1 for the first sensor above the elbow and 2 for the sensor below the elbow.
+Looking forward to hearing the opinions from the experts and seniors on this.
+Thank you in advance.
+Let me know if my question is inappropriate or lacks information, as it is my first time.
+"
+"['deep-learning', 'datasets', 'objective-functions']"," Title: Which loss function to choose for imbalanced datasets?Body: For imbalanced datasets (either in the context of computer vision or NLP), from what I learned, it is good to use a weighted log loss. However, in competitions, the people who are in top positions are not using weighted loss functions, but treating the classification problem as a regression problem, and using MSE as the loss function. I want to know which one should I use for imbalanced datasets? Or maybe should I combine both?
+The weighted loss I am talking about is:
+import numpy as np
+import tensorflow as tf
+from tensorflow.keras import backend as K
+from tqdm import tqdm
+
+neg_weights=[]
+pos_weights=[]
+for i in tqdm(range(5)):##range(num_classes)
+ neg_weights.append(np.sum(y_train[:,i],axis=0)/y_train.shape[0])
+ pos_weights.append(np.sum(1-y_train[:,i],axis=0)/y_train.shape[0])
+def customloss(y_true,y_pred):
+ y_true=tf.cast(y_true,dtype=y_pred.dtype)
+ loss=0.0
+ loss_pos=0.0
+ loss_neg=0.0
+ for i in range(5):
+ loss_pos+=-1*(K.mean(pos_weights[i]*y_true[:,i]*K.log(y_pred[:,i]+1e-8)))
+ loss_neg+=-1*(K.mean(neg_weights[i]*(1-y_true[:,i])*K.log(1-y_pred[:,i]+1e-8)))
+ loss=loss_pos+loss_neg
+ return loss
+
+the competition I was talking about is https://www.kaggle.com/c/aptos2019-blindness-detection/discussion/109594
+"
+"['neural-networks', 'backpropagation', 'convergence']"," Title: How much can an inclusion of the number of iterations have on the training of an MLP?Body: My doubt is like this :
+
+Suppose we have an MLP. In an MLP, as per the backprop algorithm (back-propagation algorithm), the correction applied to each weight is :
+
+$$ \Delta w_{ij} = -\eta\frac{\partial E}{\partial w_{ij}}$$
+($\eta$ = learning rate, $E$ = error in the output, $w_{ij}$ = $i^{\text{th}}$ neuron in the $j^{\text{th}}$ row or layer)
+
+Now, if we put an extra factor in the correction as:
+
+$$ \Delta w_{ij} = -k\eta \frac{\partial E}{\partial w_{ij}}$$ ($k$ denotes the number of iterations at the time of correction)
+
+how much will that factor affect the learning of the network? Will it affect the convergence of the network such that it takes longer to fit the data?
+
+NB : I am only asking this as a doubt. I haven't tried any ML projects recently, so this is not related to anything I am doing.
+
+"
+"['neural-networks', 'reinforcement-learning', 'monte-carlo-tree-search', 'alphago-zero', 'alphago']"," Title: What is the search depth of AlphaGo and AlphaGo Zero?Body: I cannot find reliable sources but someone says it is 40 moves and someone else says it is 50+ moves. I read their papers and they use value function (NN) and policy function to trim the tree, so more layers can be searched while spending less time searching less different positions.
+My question is, is the search depth a fixed preset parameter? If so, approximately how much is it back to 2016 (AlphaGo) and 2018 (AlphaGo Zero)?
+"
+"['machine-learning', 'reinforcement-learning', 'linear-regression', 'multi-armed-bandits', 'contextual-bandits']"," Title: Is there a UCB type algorithm for linear stochastic bandit with lasso regression?Body: Why is there no upper confidence bound algorithm for linear stochastic bandits that uses lasso regression in the case that the regression parameters are sparse in the features?
+In particular, I don't understand what is hard about lasso regression that makes it hard to be used in a UCB type algorithm whereas there is a lot of work on ridge regression based UCB algorithms see e.g. Yadkori et al.
+I looked up some works, e.g. Bastani and Bayati, Kim and Paik, but none of them uses a UCB-type algorithm; instead, they propose forced or probabilistic sampling to satisfy the compatibility condition (see Lemma EC.6. of Bastani and Bayati).
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'pytorch', 'convolutional-layers']"," Title: DQN not learning and step not stepping towards targetBody: I am trying to create a simple Deep Q-Network with 2d convolutional layers.
+I can't figure out what I am doing wrong, and the only thing I can see that doesn't seem right is that, when I get the model prediction for a state after the optimizer step, it doesn't seem to get closer to the target.
+I am using pixels from pong in OpenAI's gym with single-channel 90x90 images, a batch size of 32, and replay memory.
+As an example, if I try with a batch size of 1 and run self(states) again right after the optimizer step, the output is as follows:
+current_q_values -> -0.16351485 0.29163417 0.11192469 -0.08969332 0.11081569 0.37215832
+q_target -> -0.16351485 0.5336551 0.11192469 -0.08969332 0.11081569 0.37215832
+self(states) -> -0.8427617 0.6415581 0.44988257 -0.43897176 0.8693738 0.40007943
+
+Does this look like what would be expected for a single step?
+The network with loss and optimizer:
+ self.in_layer = Conv2d(channels, 32, 8)
+ self.hidden_conv_1 = Conv2d(32, 64, 4)
+ self.hidden_conv_2 = Conv2d(64, 128, 3)
+ self.hidden_fc1 = Linear(128 * 78 * 78, 64)
+ self.hidden_fc2 = Linear(64, 32)
+ self.output = Linear(32, action_space)
+
+ self.loss = torch.nn.MSELoss()
+ self.optimizer = torch.optim.Adam(
+ self.parameters(), lr=learning_rate) # lr is 0.001
+
+def forward(self, state):
+ in_out = fn.relu(self.in_layer(state))
+ in_out = fn.relu(self.hidden_conv_1(in_out))
+ in_out = fn.relu(self.hidden_conv_2(in_out))
+ in_out = in_out.view(-1, 128 * 78 * 78)
+ in_out = fn.relu(self.hidden_fc1(in_out))
+ in_out = fn.relu(self.hidden_fc2(in_out))
+ return self.output(in_out)
+
+Then the learning block:
+ self.optimizer.zero_grad()
+
+ sample = self.sample(self.batch_size)
+ states = torch.stack([i[0] for i in sample])
+ actions = torch.tensor([i[1] for i in sample], device=device)
+ rewards = torch.tensor([i[2] for i in sample], dtype=torch.float32, device=device)
+ next_states = torch.stack([i[3] for i in sample])
+ dones = torch.tensor([i[4] for i in sample], dtype=torch.uint8, device=device)
+
+ current_q_vals = self(states)
+ next_q_vals = self(next_states)
+ q_target = current_q_vals.clone()
+ q_target[torch.arange(states.size()[0]), actions] = rewards + (self.gamma * next_q_vals.max(dim=1)[0]) * (~dones).float()
+
+ loss = fn.smooth_l1_loss(current_q_vals, q_target)
+ loss.backward()
+
+ self.optimizer.step()
+
+"
+"['neural-networks', 'relu']"," Title: Is it possible to have a negative output using only ReLU activation functions, but not in the final layer?Body: I know that if you use an ReLU activation function at a node in the neural network, the output of that node will be non-negative. I am wondering if it is possible to have a negative output in the final layer, provided that you do not use any activation functions in the final layer, and all the activation functions in the previous hidden layers are ReLU?
+"
+"['reinforcement-learning', 'definitions', 'papers', 'reward-functions']"," Title: What are proxy reward functions?Body: The understanding I have is that they somehow adjust the objective to make it easier to meet, without changing the reward function.
+
+... the observed proxy reward function is the approximate solution to a reward design problem
+
+(source: Inverse Reward Design)
+But I have trouble getting how they fit the overall reward objective and got confused by some examples of them.
+I had the idea of them being small reward functions (as in the case of solving for sparse rewards) eventually leading to the main goal. But the statement below, from this post, made me question that.
+
+Typical examples of proxy reward functions include “partial credit” for behaviors that look promising; artificially high discount rates and careful reward shaping;...
+
+
+- What are they, and how would one go about identifying and integrating proxy rewards in an RL problem?
+
+- In the examples above, how would high discount rates form a proxy reward?
+
+
+I'm also curious about how they are used as a source of multiple rewards.
+"
+"['reinforcement-learning', 'reference-request', 'variational-autoencoder']"," Title: How many types of variational auto-encoders are there?Body: I have been studying about auto-encoders and variational auto-encoders. I would like to know how many variants of VAEs are there today.
+If there are many variants, can they be used for feature extraction for complex reinforcement learning tasks like self-driving cars?
+"
+"['convolutional-neural-networks', 'transfer-learning']"," Title: Can we apply transfer learning between any two different CNN architectures?Body: There are many types of CNN architectures: LeNet, AlexNet, VGG, GoogLeNet, ResNet, etc. Can we apply transfer learning between any two different CNN architectures? For instance, can we apply transfer learning from AlexNet to GoogLeNet, etc.? Or even just from a "conventional" CNN to one of these other architectures, or the other way around? Is this possible in general?
+EDIT: My understanding is that all machine learning models have the ability to perform transfer learning. If this is true, then I guess the question is, as I said, whether we can transfer between two different CNN architectures – for instance, what was learned by a conventional CNN to a different CNN architecture.
+"
+"['reinforcement-learning', 'terminology', 'supervised-learning', 'hyperparameter-optimization', 'exploration-exploitation-tradeoff']"," Title: What is the meaning of ""exploration"" in reinforcement and supervised learning?Body: While exploration is an integral part of reinforcement learning (RL), it does not pertain to supervised learning (SL) since the latter is already provided with the data set from the start.
+That said, can't hyperparameter optimization (HO) in SL be considered as exploration? The more I think about this the more I'm confused as to what exploration really means. If it means exploring the environment in RL and exploring the model configurations via HO in SL, isn't its end goal "mathematically" identical in both cases?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'natural-language-processing']"," Title: What are some good models to use for spelling corrections?Body: I used OCR to extract text from an image, but there are some spelling mistakes in it :
+The text is as follows :
+'gaRBOMATED WATER\n\nSFMEETENED CARBONATED 6\nBSREDERTS: CARBONATED WATER,\nSUGAR. ACIOITY REGULATOR (338),\n\nCFFENE. CONTAINS PERMITTED NATURAL\nCOLOUR (1506) AMD ADDED FLAVOURS QUcTURAL,\nSATIRE: OENTICAL AND ARTIFICIAL PLIVOUREE\n\nCOLA\nl 1187.3 PIRANGUT, TAL. MULSHI,\nGBST. PUME 612111, MAHARASHTRA.\nHELPLINE: 1800- 180-2653\ntet indishetptine@cocs-cola.com\nAUTHORITY OF THE COCA-COLA\n‘COCA-COLA PLAZA, ATLANTA, GA 36313, USA\nme DATE OF MANUFACTURE. BATCH NO. &\nLP CNL. OF ae TAXES}:\nSE BOTTOM OF CAN.\n\nTST Fone Sor MOTHS FROM\nWe, RE WHEN STORED ft.\n\nY PLACE.\nChe coca conn\nnee\n\n| BRA License uo:\n‘ eS wo:\n\n \n\x0c'
+
+I would like to know if there are some NLP models/libraries that I can use to correct spelling mistakes (like correcting gaRBOMATED to CARBONATED).
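+To illustrate the kind of correction I mean, here is a naive baseline I sketched (not a real solution), using Python's standard-library difflib against a small vocabulary; the vocabulary list here is just a made-up example:
+import difflib
+
+# Hypothetical domain vocabulary; in practice this would be a much larger word list.
+vocab = ["carbonated", "water", "sweetened", "ingredients", "sugar", "caffeine", "natural", "colour", "flavours"]
+
+def naive_correct(word, cutoff=0.7):
+    """Return the closest vocabulary word, or the word itself if nothing is close enough."""
+    matches = difflib.get_close_matches(word.lower(), vocab, n=1, cutoff=cutoff)
+    return matches[0] if matches else word
+
+print(naive_correct("gaRBOMATED"))  # -> "carbonated"
+print(naive_correct("SFMEETENED"))  # -> "sweetened", if the similarity passes the cutoff
+
+I am hoping for something smarter than this, e.g. a model that also uses the surrounding context.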
+"
+"['machine-learning', 'natural-language-processing']"," Title: Estimating an $n$-Gram model using on bigramsBody: One of the main arguments against $n$-gram models is that, as $n$ increases, there is no way to compute $P(w_n|w_1,\cdots,w_{n-1})$ from training data (since the chance of visiting $w_n,...,w_1$ is practically zero).
+Wondering why we cannot estimate $P(w_n|w_1,\cdots,w_{n-1})$ using the following:
+Let $P_i(u|v)$ be the probability of having sequences where word $u$ comes exactly $i$ words after word $v$ (This is easy to compute).
+Then we can estimate $P(w_n|w_1,\cdots,w_{n-1})$ as a function of $P_i(u|v)$. I could not find any reference to such an approach in the literature. The most similar approach is the smoothing/backoff methods.
+Is there any reason why no-one used this approach? Or if one can share some previous work about this approach.
+P.S.1. The disadvantage of this approach, compared with the standard $n$-gram model, is its running time.
+P.S.2. We could use a bucketing idea: instead of computing/storing/using $P_i$ for every $i$, we can compute/store/use $PB_{i}=P_{2^i}$. Then $P_i(u|v) \approx PB_{\log i}(u|v)$.
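+P.S.3. To make the proposal concrete, here is a minimal sketch (my own illustration, with a toy corpus) of how the $P_i(u|v)$ counts could be collected:
+from collections import defaultdict
+
+corpus = "the cat sat on the mat the cat ate".split()  # toy corpus, just for illustration
+max_gap = 3
+
+# counts[i][(v, u)] = number of times word u appears exactly i positions after word v
+counts = {i: defaultdict(int) for i in range(1, max_gap + 1)}
+context_totals = {i: defaultdict(int) for i in range(1, max_gap + 1)}
+
+for t, v in enumerate(corpus):
+    for i in range(1, max_gap + 1):
+        if t + i < len(corpus):
+            counts[i][(v, corpus[t + i])] += 1
+            context_totals[i][v] += 1
+
+def P(i, u, v):
+    """Estimate of P_i(u | v): probability that u occurs exactly i words after v."""
+    return counts[i][(v, u)] / context_totals[i][v] if context_totals[i][v] else 0.0
+
+print(P(2, "sat", "the"))  # how often "sat" appears exactly two words after "the"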
+"
+"['neural-networks', 'machine-learning', 'gradient-descent', 'weights']"," Title: What is the goal of weight initialization in neural networks?Body: This is a simple question. I know the weights in a neural network can be initialized in many different ways like: random uniform distribution, normal distribution, and Xavier initialization. But what is the weight initialization trying to achieve?
+Is it trying to allow the gradients to be large so it can quickly converge? Is it trying to make sure there is no symmetry in the gradients? Is it trying to make the outputs as random as possible to learn more from the loss function? Is it only trying to prevent exploding and vanishing gradients? Is it more about speed or finding a global maximum? What would the perfect weights (without being learned parameters) for a problem achieve? What makes them perfect? What are the properties in an initialization that makes the network learn faster?
+"
+"['machine-learning', 'overfitting']"," Title: How much overfitting is acceptable?Body: I have a deep learning configuration in which I obtain good results on the validation set but even better results in the training set. From my understanding this means that there is overfitting to some extent. What does this mean in practice? Does it mean that my model is not good and that I should not use it? If I decrease the gap between the validation and training accuracy (decreasing the overfitting) but at the same time decrease the validation accuracy, which of the two models is better?
+Below are some images to illustrate the two situations outlined previously:
+
+
+"
+"['training', 'autoencoders', 'time-complexity']"," Title: What is the time complexity for training a single-hidden layer auto-encoder?Body: What is the time complexity for training a single-hidden layer auto-encoder, for 1 epoch?
+You can assume that there are $n$ training examples, $m$ features, and $k$ neurons in the hidden layer, and that we use gradient descent and back-propagation to train the auto-encoder.
+"
+"['convolution', 'attention']"," Title: Is it possible to express attention as a Fourier convolution?Body: Convolutions can be expressed as a matrix-multiplication (see e.g. this post) and as an element-wise multiplication using the Fourier domain (https://en.wikipedia.org/wiki/Convolution_theorem).
+Attention utilizes matrix multiplications, and is as such $O(n^2)$. So, my question is, is it possible to exploit the Fourier domain for attention mechanisms by turning the matrix multiplication of attention into a large convolution between the query and the key matrices?
+"
+"['reinforcement-learning', 'environment']"," Title: What are the strategies for computationally heavy environments or long-time waiting environments?Body: I have an environment that is computationally heavy (takes several seconds to get a reward and next state). This limits reinforcement capability, due to poor sampling of the problem. There is any strategy that could be used to address the problem (e.g. If I can use the environment in parallel, then I could use a multi-agent approach)
+"
+"['convolutional-neural-networks', 'natural-language-processing', 'python', 'pytorch', 'transformer']"," Title: How to implement or avoid masking for transformer?Body: When it comes to using Transformers for image captioning is there any reason to use masking?
+I currently have a resnet101 encoder and am trying to use the features as the input for a transformer model in order to generate a caption for the image. Is there any need to use masking, and what would I mask if I did need to?
+Any help would be much appreciated
+Thanks in advance.
+"
+"['deep-learning', 'filters', 'vgg']"," Title: Does replacing 3x3 filters with 3x1 and 1x3 filters improve the performance?Body: Recently I have come up with a VGG16 model for my binary classification task. I have relatively simple signal images
+
+Therefore (maybe?) other deeper models like resnet18 and Inceptionv3 were not as good. As known, VGG uses 3x3 filters for convolving the images to make feature maps. I have tried several hyper-parameters to get a desired performance. However, there are still some things I need to do. I was thinking of replacing the 3x3 conv filters with 3x1 followed by 1x3 filters to reduce the compute. I think it will definitely do so considering the multiplications (9 operations for 3x3 and 6 for 3x1 followed by 1x3).
+Then I came to think: If I replace all the 3x3 filters with separable filters, will I get any performance improvement?
+What are the benefits of replacing 3x3 filters with separable ones?
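+For concreteness, here is a minimal PyTorch sketch of the replacement I mean (the channel count of 64 is just a placeholder I picked for illustration):
+import torch.nn as nn
+
+# Standard block: a single 3x3 convolution (9 multiply-accumulates per output position per channel pair)
+standard = nn.Conv2d(64, 64, kernel_size=3, padding=1)
+
+# Factorised block: 3x1 followed by 1x3 (3 + 3 = 6 multiply-accumulates per output position per channel pair)
+factorized = nn.Sequential(
+    nn.Conv2d(64, 64, kernel_size=(3, 1), padding=(1, 0)),
+    nn.Conv2d(64, 64, kernel_size=(1, 3), padding=(0, 1)),
+)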
+Thanks
+"
+"['neural-networks', 'machine-learning', 'time-series', 'data-preprocessing', 'normalisation']"," Title: What would be a typical pre-processing and data normalization pipeline for time series data (for non-linear models such as neural networks)?Body: I've started to work on time series. I was wondering what would be the best data normalizing and pre-processing technique for non-linear models, specifically, neural networks.
+One I can think of is min-max normalization
+$$z = \frac{x - min(x)}{max(x) - min(x)}$$
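+For example, a minimal NumPy sketch of applying this per feature, using only training-set statistics so the same scaling can be reused on new data (the random arrays are just placeholders):
+import numpy as np
+
+X_train = np.random.randn(1000, 3)              # placeholder time-series features
+X_new = np.random.randn(100, 3)
+
+x_min = X_train.min(axis=0)
+x_max = X_train.max(axis=0)
+
+Z_train = (X_train - x_min) / (x_max - x_min)
+Z_new = (X_new - x_min) / (x_max - x_min)       # note: new data can fall outside [0, 1]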
+"
+"['reinforcement-learning', 'math', 'statistical-ai', 'multi-armed-bandits']"," Title: Mapping given probabilities to empirical probabilitiesBody: Consider following problem statement:
+
+You are given $n$ actions. You can perform any of them. Each action gives you success with some probability. The challenge is to perform a given finite number of actions to get the maximum number of successes.
+
+Here, we can perform actions and slowly decide upon possible probabilities of each action for success. I have no doubts in this problem.
+Now consider the following variant of the problem:
+
+You are given $n$ actions. You can perform any of them. Each action gives you success with some probability. Also, you are given a set of $n$ probabilities, but you are not told which probability is associated with which action. The challenge is to utilise this additional information to perform a given finite number of actions to get the maximum number of successes.
+
+My doubt in this problem is: how can we map probabilities to actions? I can perform a sufficient number of actions to gather empirical probabilities and then try to associate the given probabilities with the actions having the closest empirical probabilities. But is there an algorithm for such a problem in the literature?
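+Here is a sketch of the naive mapping I have in mind (pure illustration; the number of trials and the rank-matching rule are my own arbitrary choices):
+import numpy as np
+
+given_probs = np.array([0.2, 0.5, 0.8])        # the probabilities we are told, in unknown order
+n_actions, n_trials = 3, 100                    # exploration trials per action
+
+def pull(action):
+    true_probs = np.array([0.8, 0.2, 0.5])      # hidden from the agent, only for this simulation
+    return np.random.rand() < true_probs[action]
+
+emp_probs = np.array([np.mean([pull(a) for _ in range(n_trials)]) for a in range(n_actions)])
+
+# Greedy matching: sort both lists and pair them up by rank
+mapping = dict(zip(np.argsort(emp_probs), np.argsort(given_probs)))
+print(emp_probs, mapping)                       # action index -> index of the matched given probability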
+"
+['deep-learning']," Title: Deep learning based physics engineBody: Rigid body simulation is a well-known field with well-established methods. It's still fairly computationally expensive to simulate things.
+I am interested in approaches to training deep learning networks to predict rigid body dynamics and interactions to reduce the computational load associated with simulations.
+Has this been done before and what approaches have been used?
+"
+"['neural-networks', 'classification', 'activation-functions', 'time-series']"," Title: Do we need non-linear activation function in neural networks whose task isn't classification?Body: While researching why we need non linear activation functions, all the explanations revolve around neural network being able to separate values that aren't linearly separable.
+So I wonder, if we have a neural network whose task is something else, say predicting an output value of a time series, is it still important to have an activation function that is non-linear?
+"
+"['natural-language-processing', 'recurrent-neural-networks', 'text-summarization', 'natural-language-understanding']"," Title: Is it possible to classify the subject of a conversation?Body: I would like to classify the subject of a conversation. I could classify each messages of the conversation, but I will loose some imformation because of related messages.
+I also need to do it gradually and not at the end of the conversation.
+I searched near recurrent neural network and connectionist classification but I'm not sure it answer really well my issue.
+"
+"['machine-learning', 'deep-learning', 'classification', 'computer-vision', 'regression']"," Title: How do you make a regression model from a binary labeled dataset?Body: Suppose I have a dataset with hand images. Hand completely opened is labeled as 0 and hand completely closed (fist) are labeled as 1. I also have a bunch of unlabeled images of hands which, if properly labeled would have values between 0 and 1. Because they are not completely opened and not completely closed.
+
+
+Extra info I have is the ordering between all the pairs of unlabeled images. For example, given images A and B, I can tell you which image should be predicted with a higher value, but I cannot tell you what exactly the value is. The unlabeled dataset is collected by recording a video of a closing hand, from completely opened to completely closed.
+What are some machine learning techniques that I can use to give the label or predict the values of hands that are not completely closed and not completely opened? I expect it to be based on an ordering or ranking system. If it doesn't even require the ordering label (A > B?), then that would be a very smart algorithm.
+I want the values between 0 and 1. If there's code for it, that would be a plus. Thank you.
+"
+"['neural-networks', 'deep-learning', 'artificial-neuron', 'hyper-parameters']"," Title: Why one unit in the layers of neural network is not enough?Body: In a deep connected network, when every unit gets all the input features(X) so it has one parameter for every feature and every unit tweaks its parameters for loss optimization. What if we use only one unit and that one unit will have all the parameters which it can tweak for loss optimization. Is there a reason or benefit of using multiple units in every layer except the output layer?
+"
+"['machine-learning', 'definitions']"," Title: What's the threshold to call something 'machine learning'?Body: For example, if I use some iterative solvers to find a solution to a non-linear least squares problem, is that already considered machine learning?
+"
+"['genetic-algorithms', 'evolutionary-algorithms', 'trap-functions']"," Title: What are trap functions in genetic algorithms?Body: What are trap functions in genetic algorithms? Suppose you ran a GA with a trap function and examined the population midway through the run. Can someone explain what you would expect the population to look like?
+"
+"['neural-networks', 'machine-learning', 'reinforcement-learning', 'deep-learning']"," Title: Is this ML task possible?Body: What I want to do is from an Internet challenge to transform any given image into the Polish flag using the available filters and crop tool on the iPhone camera app. Here's an example.
+There aren't nearly enough of these videos to train a neural network using a labeled dataset, and (while I haven't ruled it out) I don't think automatically inserting a polish flag into an image then adding random filters to it to create my own dataset would work out.
+My thinking is that I would feed a neural network the image and it would output a value for each filter & cropping coordinates. Then, I could easily calculate the loss by comparing the resulting picture to a picture of the polish flag. The obvious problem here is that you don't know how each of the neurons in the last layer affects the loss so you can't perform back propagation.
+Is my best bet to mathematically calculate the loss (by this I mean as opposed to using high level libraries, which would be difficult but I'm sure it's possible) so I can find the partial derivative of each last layer neuron with respect to the loss function and then backpropagate? Would this even work? Are there any alternatives that you recommend?
+"
+"['backpropagation', 'gradient-descent', 'linear-regression']"," Title: How parameter adjustment works in Gradient Descent?Body: I am trying to comprehend how the Gradient Descent works.
+I understand we have a cost function which is defined in terms of the following parameters,
+$J(𝑤_{1},𝑤_{2},.... , w_{n}, b)$
+the derivative would tell us which direction to adjust the parameters.
+i.e. $\dfrac{\partial J(w_{1},w_{2},\dots,w_{n}, b)}{\partial w_{1}}$ is the rate of change of the cost w.r.t. $w_{1}$.
+The lecture kept saying this is very valuable, as we are asking the question: how should I change $w$ to improve the cost?
+But then the lecturer presented $w_{1}$, $w_{2}$, ... as scalar values. How can we differentiate a scalar value?
+I am fundamentally missing what is happening.
+Can anyone please guide me to any blog post or book that I should read to understand this better?
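+To make my confusion concrete, here is a tiny single-parameter example I put together (my own toy numbers): the cost is a function of $w$, we differentiate that function with respect to $w$, and then plug in the current scalar value of $w$ to get the update.
+# Cost: J(w) = (w - 3)^2, so dJ/dw = 2 * (w - 3)
+w = 0.0          # current scalar value of the parameter
+eta = 0.1        # learning rate
+
+for step in range(5):
+    grad = 2 * (w - 3)      # derivative of the cost, evaluated at the current w
+    w = w - eta * grad      # gradient descent update
+    print(step, w, (w - 3) ** 2)
+Is this the right way to think about it?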
+"
+['autoencoders']," Title: How to quantify the amount of information lost by the decoder NN in an AE?Body: Is there a way to quantify the amount of information lost in the lossy part of an autoencoder where the original input is compressed to a representation with less degrees of freedom?
+I was thinking maybe to use somehow the mutual information either in the image or frequency domain.
+$$
+\mathrm{I}(X ; Y)=\sum_{y \in \mathcal{Y}} \sum_{x \in \mathcal{X}} p_{(X, Y)}(x, y) \log \left(\frac{p_{(X, Y)}(x, y)}{p_{X}(x) p_{Y}(y)}\right)
+$$
+where $p_{(X,Y)}$ is the joint probability density function, which is deduced somehow empirically from a set of $N$ inputs and outputs of the network.
+Maybe it's not even an interesting question since the loss function evaluates exactly that?
+
+"
+"['machine-learning', 'optimization', 'gradient-descent', 'learning-rate', 'stochastic-gradient-descent']"," Title: Why is the learning rate generally beneath 1?Body: In all examples I've ever seen, the learning rate of an optimisation method is always less than $1$. However, I've never found an explanation as to why this is. In addition to that, there are some cases where having a learning rate bigger than 1 is beneficial, such as in the case of super-convergence.
+Why is the learning rate generally less than 1? Specifically, when performing an update on a parameter, why is the gradient generally multiplied by a factor less than 1 (absolutely)?
+"
+"['neural-networks', 'gradient-descent', 'thought-vectors', 'normalisation']"," Title: Flatten image using Neural network and matrix transposeBody: I have read a lecture note of Prof. Andrew Ng. There was something about data normalization like how can we flatten an image of (64x64x3) into a (64x64x3)*x1 vector. After that there is pictorial representation of flatten
+
+As per the picture, the height, width, and depth of the picture are 64, 64, and 3. I think nx is a row vector which is then transposed to a column vector. If there are 3 pictures, I think nx contains {64,64,3,64,64,3,64,64,3}. Am I right?
+To use a 64x64x3 image as an input to our neuron, we need to flatten the image into a (64x64x3)x1 vector. And to make Wᵀx + b output a single value z, we need W to be a (64x64x3)x1 vector: (dimension of input)x(dimension of output), and b to be a single value. With N number of images, we can make a matrix X of shape (64x64x3)xN. WᵀX + b outputs Z of shape 1xN containing z’s for every single sample, and by passing Z through a sigmoid function we get final ŷ of shape 1xN that contains predictions for every single sample. We do not have to explicitly create a b of 1xN with the same value copied N times, thanks to Python broadcasting.
+As per my understanding, Wᵀ = nx and x= nxᵀ.
+Is it Wᵀ= [64,64,3,64,64,3,64,64,3] and x = [64,64,3,64,64,3,64,64,3]ᵀ?
+In that case their product will be a symmetric matrix.
+Is there any significance to a symmetric matrix here?
+I got confused while flattening the image. If anyone has any idea, please share it with me.
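+For reference, here is a minimal NumPy sketch of the flattening I am describing, assuming 3 images of shape 64x64x3 (the random data is just a placeholder):
+import numpy as np
+
+N = 3
+images = np.random.rand(N, 64, 64, 3)           # N images of shape 64x64x3
+X = images.reshape(N, -1).T                     # each image flattened into a column; X has shape (64*64*3) x N
+
+W = np.random.rand(64 * 64 * 3, 1)              # weights of shape (64*64*3) x 1
+b = 0.0
+Z = W.T @ X + b                                 # shape 1 x N, one z per image
+y_hat = 1 / (1 + np.exp(-Z))                    # sigmoid, predictions for all N samples
+print(X.shape, Z.shape, y_hat.shape)            # (12288, 3) (1, 3) (1, 3)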
+Thank you in advance.
+"
+"['machine-learning', 'fuzzy-logic']"," Title: Can someone explain and help to understand this fuzzy diagram?Body: Could someone help me to understand in detail each step of this fuzzy diagram, because I am lost?
+
+"
+"['reinforcement-learning', 'q-learning', 'dqn']"," Title: How does one know that a problem is ""model-free"" in reinforcement learning?Body: Consider this slide from a Stanford lecture on reinforcement learning. It states that a model is
+
+the agent's representation of how the world changes in response to the agent's action.
+
+I've been experimenting with Q-learning for simple problems such as OpenAI's FrozenLake and Mountain Car, which both are amenable to the Q-learning framework (the latter upon discretization). I consider the topologies of the lake and the mountain to be the "worlds" (aka. environments) in the two cases, respectively.
+Q-learning is said to be "model-free". Given the two examples above, is it because neither the lake's topology nor that of the mountain are changed by the actions taken?
+"
+"['neural-networks', 'deep-learning', 'online-learning', 'incremental-learning']"," Title: How does a neural network that has been trained keep learning while in a real world scenarioBody: Say I trained a Neural Network (not RNN or CNN) to classify a particular data set.
+So I train using a specific data set & then I test using another and get an accuracy of 95% which is good enough.
+I then deploy this model in a production level environment where it will then be processing real world data.
+My question is, will this trained NN be constantly learning even in a production scenario? I can't figure out how it will because say it processes a dataset such as this:
+[ [1,2,3] ]
and gets an output of [ 0, 0.999, 0 ]
+In a training scenario it will compare the predicted output to the actual output and back propagate but in a real world scenario it will not know the actual value.
+So how does a trained model learn in a real world scenario?
+I am still very much a beginner in this field and I am not sure if the technology used is going to affect the answer to this question, but I am hoping to use Eclipse Deeplearning4J to create a NN. That being said the answer does not need to be restricted to this technology in particular as I am hoping more for the theory behind it and how it works.
+"
+"['classification', 'probability', 'probability-distribution', 'maximum-likelihood']"," Title: Estimating $\sigma_i$ according to maximum likelihood methodBody: Let be a Bayesian multivariate normal distribution classifier with distinct covariance matrices for each class and isotropic, i.e. with equal values over the entire diagonal and zero otherwise, $\mathbf{\Sigma}_i=\sigma_i^2\mathbf{I},~\forall i$.
+How can I compute the equation for estimating the parameter $\sigma_{i}$ by the maximum likelihood method? Here $\sigma_{i,j}$ is the covariance between $x_i$ and $x_j$. So $\sigma_i$ is just the variance of $x_i$.
+Attempt:
+Suppose $\mathcal{X}_i = \{x^t_i\}^N_{t=1}$ i.i.d, $x_i^t$ is in the class $C_i$ and $x_i^t \sim \mathcal{N}(\mu, \sigma^2)$.
+Do I have to find the log-likelihood under $p(x) = \frac{1}{\sqrt{2 \pi} \sigma} \exp \left[ -\frac{(x-\mu)^2}{2 \sigma^2}\right]$, find the derivative and put it equal to $0$ to find the maximum?
+EDIT
+Suppose my data points are $m$-dimensional, and I have $K$ classes.
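+My own attempt at the next step (please correct me if this is wrong): assuming the isotropic $m$-dimensional Gaussian for class $C_i$ with $N_i$ samples, maximizing the log-likelihood and setting the derivative with respect to $\sigma_i^2$ to zero should give
+$$\hat{\sigma}_i^2 = \frac{1}{m N_i} \sum_{t \,:\, x^t \in C_i} \lVert x^t - \hat{\mu}_i \rVert^2,$$
+where $\hat{\mu}_i$ is the sample mean of class $C_i$.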
+"
+"['computer-vision', 'math', 'object-detection', 'information-theory']"," Title: Formal definition of the Object Detection problemBody: For many problems in computer science, there is a formal, mathematical problem defition.
+Something like: Given ..., the problem is to ...
+How can the Object Detection problem (i.e. detecting objects on an image) be formally defined?
+Given a set of pixels, the task is to decide
+
+- which pixels belong to an object at all,
+- which pixels belong to the same object.
+
+How can this be put into a formula?
+"
+"['neural-networks', 'activation-functions']"," Title: Why is sine activation function not used frequently since we know from fourier transforms that sine functions can combine to fit any function?Body: Pretty much the title.
+I'm no expert but from what I know, if you add up enough sine functions with proper amplitudes and frequencies you can get any function you want as a result. With that knowledge, wouldn't it make sense to have neuron's activation function be a sine function?
+"
+"['computer-vision', 'image-processing', 'robotics']"," Title: What is a wavefront algorithm?Body: I am designing and researching algorithms which I call of a wavefront nature. It is image analsyis agorithms when every pixel may change many times during the processing. I have heard this name before, but it seems it is not used widely. Are there other "wavefront algorithms"?
+"
+"['reinforcement-learning', 'markov-decision-process']"," Title: Relation between a value function of an MDP and a value function of the corresponding latent MDPBody: In paper "DeepMDP: Learning Continuous Latent Space Models for Representation Learning", Gelada et al. state in the beginning of section 2.4
+
+The degree to which a value function of $\bar{\mathcal M}$, $\bar{V}^{\bar\pi}$ approximates the value function $V^\bar\pi$ of ${\mathcal M}$ will depend on the Lipschitz norm of $\bar V^\bar\pi$ .
+
+where $\mathcal M$ is the Markov Decision Process(MDP) defined in the original state space $\mathcal S$ and $\bar{\mathcal M}$ is the MDP defined in the corresponding latent space $\bar{\mathcal S}$. $\bar\pi$ is a policy defined on the latent space, which can be applied to $\mathcal M$ by first mapping $s\in \mathcal S$ to $\bar{\mathcal S}$.
+My question is how they draw the connection between "The degree to which a value function of $\bar{\mathcal M}$, $\bar{V}^{\bar\pi}$ approximates the value function $V^\bar\pi$ of ${\mathcal M}$" and " the Lipschitz norm of $\bar V^\bar\pi$"?
+"
+"['machine-learning', 'algorithm', 'clustering', 'k-means']"," Title: How does Hartigan & Wong algorithm compare to Lloyd's and Macqueen's algorithm in K-means clustering?Body: As far I know, this is how the latter two algorithms work...
+Lloyd's algorithm
+
+- Choose the number of clusters.
+- Choose a distance metric (typically squared euclidean).
+- Randomly assign each observation to a cluster and compute the cluster centroids.
+- Iterate below until convergence (i.e. until cluster centroids stop changing):
+
+
+- Assign each observation point to the cluster whose centroid is closest.
+- Update cluster centroids only after a complete pass through all observations.
+
+Macqueen's Algorithm
+
+- Choose the number of clusters.
+- Choose a distance metric (typically squared euclidean).
+- Randomly assign each observation to a cluster and compute the cluster centroids.
+- Perform a complete pass of below (i.e. go through all observations):
+
+
+- Assign an observation to a cluster whose centroid is closest.
+- Immediately update the centroids for the two affected clusters (i.e. for the cluster that lost an observation and for the cluster that gained it).
+
+
+- Update centroids after a complete pass.
+
+How does the Hartigan & Wong algorithm compare to these two above? I read this paper in an effort to understand, but it's still not clear to me. The first three steps are the same as in Lloyd's and Macqueen's algorithms (as described above), but then what does the algorithm do? Does it update the centroids as often as Macqueen's algorithm does, or as often as Lloyd's algorithm does? At what point does it take into consideration the within-cluster sum of squares, and how does it fit into the algorithm?
+I'm generally confused when it comes to this algorithm and would very much appreciate a step-wise explanation as to what's going on.
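+For reference, here is a minimal NumPy sketch of Lloyd's algorithm as I described it above (random data, k=3, and centroid initialization from random points are just placeholders), so it is clear what I mean by updating centroids only after a complete pass:
+import numpy as np
+
+X = np.random.rand(200, 2)                               # placeholder data
+k = 3
+centroids = X[np.random.choice(len(X), k, replace=False)]
+
+for _ in range(100):
+    # Assign every observation to the nearest centroid (squared Euclidean distance)
+    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
+    labels = dists.argmin(axis=1)
+    # Update all centroids only after the complete pass over the observations
+    new_centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
+                              for j in range(k)])
+    if np.allclose(new_centroids, centroids):
+        break
+    centroids = new_centroids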
+"
+"['neural-networks', 'machine-learning', 'convolutional-neural-networks', 'classification']"," Title: Comparing a large/general CNN to a smaller more specialized one?Body: I am still somewhat a novice in the ML world, but I had a strange idea about CNNs and wanted to ask if this would be a valid way to check the robustness of a general CNN that classifies certain images.
+Let's say that I make a CNN that takes in many different images of sports players performing a certain action (basketball shot, football kick, freestyle in swimming, flip in gymnastics, etc). Firstly, would it be possible for such a CNN to distinguish between such varied images and classify them accurately? And if so, can it be a good idea to compare this "larger" CNN to multiple "smaller" more specialized ones that take in images from one particular sport?
+In other words, I want to know that if I have a "larger" CNN that gives me an output like "football being kicked", is there a way to then double-check that output with a smaller CNN that only focuses on football moves? In essence, could we create a system where once you obtain an output from a general CNN, it automatically classifies the same image through a more specialized CNN, and then if the results are of similar accuracy, you know for sure that CNN works?
+Kind of like having a smaller CNN as a "ground-truth" for the bigger one? In my head it kind of goes like this:
+# Illustrative pseudocode of the idea (not working code):
+large_label, large_conf = 'football_kick', 0.9556      # output of the large, general CNN
+
+small_nets = {
+    'football': small_net_for_football,
+    'swimming': small_net_for_swimming,
+    'baseball': small_net_for_baseball,
+    # and so on....
+}
+
+# Route the same image to the specialised net for the predicted sport
+sport = large_label.split('_')[0]                        # e.g. 'football'
+small_label, small_conf = small_nets[sport].predict(image)
+# e.g. small_label, small_conf = 'football_kick', 0.9732
+
+robustness_check = abs(large_conf - small_conf)
+print(f'Your system is consistent within a range of {robustness_check:.2%}')
+# -> 'Your system is consistent within a range of 1.76%'
+
+
+I hope this makes sense, and that this question does not cause any of you to cringe. Would appreciate any feedback on this!
+"
+"['reinforcement-learning', 'definitions', 'rewards', 'multi-armed-bandits', 'regret']"," Title: Why is regret so defined in MABs?Body: Consider a multi-armed bandit(MAB). There are $k$ arms, with reward distributions $R_i$ where $1 \leq i \leq k$. Let $\mu_i$ denote the mean of the $i^{th}$ distribution.
+If we run the multi-armed bandit experiment for $T$ rounds, the "pseudo regret" is defined as $$\text{Regret}_T = \sum_{t=1}^T \mu^* - \mu_{it},$$ where $\mu^*$ denotes the highest mean among all the $k$ distributions.
+Why is regret defined like this? From what I understand, at time-step $t$, the actual reward received is $r_t \sim R_{it} $ and not $\mu_{it}$ - so shouldn't that be a part of the expression for regret instead?
+"
+"['machine-learning', 'linear-regression', 'bias-variance-tradeoff', 'underfitting', 'inductive-bias']"," Title: Is there a connection between the bias term in a linear regression model and the bias that can lead to under-fitting?Body: Here is a linear regression model
+$$y = mx + b,$$
+where $b$ is known as $y$-intercept, but also known as the bias [1], $m$ is the slope, and $x$ is the feature vector.
+As I understood, in machine learning, there is also the bias that can cause the model to underfit.
+So, is there a connection between the bias term $b$ in a linear regression model and the bias that can lead to under-fitting in machine learning?
+"
+"['neural-networks', 'generative-model', 'legal']"," Title: Is it legal to license and sell the output of a neural network that was trained on data that you don't own the license to?Body: Is it legal to license and sell the output of a neural network that was trained on data that you don't own the license to? For example, suppose you trained WaveNet on a collection of popular music. Could you then sell the audio that the WaveNet produces? There are copyright restrictions on using samples to produce music, but the output of a generative neural network might not include any exact replicas from the training data, so it's not clear to me whether those laws apply.
+"
+"['machine-learning', 'comparison', 'definitions', 'models']"," Title: What is the difference between parametric and non-parametric models?Body: A model can be classified as parametric or non-parametric. How are models classified as parametric and non-parametric models? What is the difference between the two approaches?
+"
+"['machine-learning', 'gradient-boosting', 'boosting']"," Title: How do weak learners become strong in boosting?Body: Boosting refers to a family of algorithms which converts weak learners to strong learners. How does it happen?
+"
+['boltzmann-machine']," Title: How are Energy Based models really connected to Statistical Mechanics?Body: From statistical mechanics, the Boltzmann distribution over a system's energy states arises from the assumption that many replicas of the system are exchanging energy with each other. The distribution of these replicas in each energy level is the maximum entropy distribution subject to the constraint that their total energy is fixed, and that any one assignment of energy levels to each replica, a "microstate", satisfying this constraint, is equally probable.
+From machine learning, the so-called Energy-based model defines a Hamiltonian (energy function) to its various configurations, and uses Boltzmann's distribution to convert an "energy" to a probability over these configurations. Thus, an EBM can model a probability distribution over some data domain.
+Is there some viewpoint by which one can interpret the EBM as a "system" exchanging energy with many other replicas of that system? What semantic interpretation of EBMs connects them to the Boltzmann distribution's assumptions?
+"
+"['natural-language-processing', 'models', 'natural-language-understanding', 'text-generation']"," Title: Is there a complement to GPT/2/3 that can be trained using supervised learning methods?Body: This is a bit of a soft question, not sure if it's on topic, please let me know how I can improve it if it doesn't meet the criteria for the site.
+GPT models are unsupervised in nature and are (from my understanding) given a prompt and then they either answer the question or continue the sentence/paragraph. They also seem to be the most advanced models for producing natural language, capable of giving outputs with correct syntax and (to my eye at least) indistinguishable from something written by a human (sometimes at least!).
+However if I have a problem where I have an input (could be anything, but lets call it an image or video) and a description of the image or video as the output I could in theory train a model with convolutional filters to identify the object and describe the image (assuming any test data is within the bounds of the training data). However when I've seen models like this in the past the language is either quite simple or 'feels' like it's been produced by a machine.
+Is there a way to either train a GPT model as a supervised learning model with inputs (of some non language type) and outputs (of sentences/paragraphs); or a similar type of machine learning model that can be used for this task?
+A few notes:
+I have seen the deep learning image captioning methods - these are what I mention above. I'm more looking for something that can take an input-output pair where the output is text and the input is any form.
+"
+"['tensorflow', 'deep-rl', 'actor-critic-methods', 'proximal-policy-optimization']"," Title: How to design an observation(state) space for a simple `Rock-Paper-Scissor` game?Body: For weeks I've been working with this toy game of Rock-Paper-Scissor
. I want to use a PPO
agent learn to beat a computer opponent whose logic is defined as the code bellow.
+For short, this computer opponent, named abbey
, uses a strategy that tracks all two consecutive plays of the agent, and gives the opposite play of the most likely guess of the agent's next play, according to the agent's last play.
+I design the agent(using gym env) to have an internal state, keeping track of all counts of its two consecutive plays, in a 3x3 matrix. And then I normalized each row of the matrix to be an observation of the agent, representing the probabilities of the second play given the previous one. So the agent will get the same knowledge as what abbey
knows.
+Then I copied an PPO
network algorithm from some RL book, which works well with CartPole
. Then I did some minor changes which are commented in the code bellow.
+But the algorithm does not converge even a little, and abbey
always wins the agent about 60% of the time from first run to the last.
+I doubt the state and observation space I designed is the reason why it does not converge. All I get is that the agent maybe should find something from the histories of its own successes and fails, and find its way out.
+Can you give me some advice for the designing of a state space?
+Thank you very much.
+### define a Rock-Paper-Scissor opponent
+
+abbey_state = []
+play_order=[{
+ "RR": 0,
+ "RP": 0,
+ "RS": 0,
+ "PR": 0,
+ "PP": 0,
+ "PS": 0,
+ "SR": 0,
+ "SP": 0,
+ "SS": 0,
+ }]
+def abbey(prev_opponent_play,
+ re_init=False):
+ if not prev_opponent_play:
+ prev_opponent_play = 'R'
+ global abbey_state, play_order
+ if re_init:
+ abbey_state = []
+ play_order=[{
+ "RR": 0,
+ "RP": 0,
+ "RS": 0,
+ "PR": 0,
+ "PP": 0,
+ "PS": 0,
+ "SR": 0,
+ "SP": 0,
+ "SS": 0,
+ }]
+ abbey_state.append(prev_opponent_play)
+ last_two = "".join(abbey_state[-2:])
+ if len(last_two) == 2:
+ play_order[0][last_two] += 1
+ potential_plays = [
+ prev_opponent_play + "R",
+ prev_opponent_play + "P",
+ prev_opponent_play + "S",
+ ]
+ sub_order = {
+ k: play_order[0][k]
+ for k in potential_plays if k in play_order[0]
+ }
+ prediction = max(sub_order, key=sub_order.get)[-1:]
+ ideal_response = {'P': 'S', 'R': 'P', 'S': 'R'}
+ return ideal_response[prediction]
+
+
+### define the gym env
+import gym
+from gym import spaces
+from collections import defaultdict
+import numpy as np
+
+ACTIONS = ["R", "P", "S"]
+games = 1000
+
+class RockPaperScissorsEnv(gym.Env):
+ metadata = {'render.modes': ['human']}
+
+ def __init__(self):
+ super(RockPaperScissorsEnv, self).__init__()
+ self.action_space = spaces.Discrete(3)
+ self.observation_space = spaces.Box(low=0.0, high=1.0,
+ shape=(3,3), dtype=float)
+ self.reset()
+
+ def step(self, actions):
+ assert actions == 0 or actions == 1 or actions == 2
+ opponent_play = self.opponent_play()
+
+ self.prev_plays[self.prev_actions * 3 + actions] += 1
+ reward = self.calc_reward(actions, opponent_play)
+ terminal = False
+
+ self.calc_state(self.timestep, opponent_play, actions)
+ self.prev_actions = actions
+ self.prev_opponent_play = opponent_play
+ self.timestep += 1
+ return self.get_ob(), reward, terminal, None
+
+ def reset(self):
+ self.opponent = abbey
+ self.timestep = 0
+ self.prev_opponent_play = 0
+ self.prev_actions = 0
+ self.prev_plays = defaultdict(int)
+ self.init_state = np.zeros((3,3), dtype=int)
+ # the internal state
+ self.state = np.copy(self.init_state)
+ self.results = {"win": 0, "lose": 0, "tie": 0}
+ return self.get_ob()
+
+ def render(self, mode='human'):
+ pass
+
+ def close (self):
+ pass
+
+ def calc_reward(self, actions, play):
+ if self.timestep % games == games - 1:
+ pass
+ if actions == play:
+ self.results['tie'] += 1
+ return 0
+ elif actions == 0 and play == 1:
+ self.results['lose'] += 1
+ return -0.3
+ elif actions == 1 and play == 2:
+ self.results['lose'] += 1
+ return -0.3
+ elif actions == 2 and play == 0:
+ self.results['lose'] += 1
+ return -0.3
+ elif (actions == 1 and play == 0) or (actions == 2 and play == 1) or (actions == 0 and play == 2):
+ self.results['win'] += 1
+ return 0.3
+ else:
+ raise NotImplementedError('calc_reward something get wrong')
+
+ def opponent_play(self):
+ re_init = (self.timestep == 0)
+ opp_play = self.opponent(ACTIONS[self.prev_actions], re_init=re_init)
+ return ACTIONS.index(opp_play)
+
+ def calc_state(self, timestep, opponent_play, actions):
+ self.state[self.prev_actions][actions] += 1
+
+ def get_ob(self):
+ '''return observations'''
+ state0 = self.state[0]
+ sum0 = state0.sum()
+ state1 = self.state[1]
+ sum1 = state1.sum()
+ state2 = self.state[2]
+ sum2 = state2.sum()
+ init = np.ones(3, dtype=float) / 3.0
+ ob = np.array([
+ state0 / sum0 if sum0 else init,
+ state1 / sum1 if sum1 else init,
+ state2 / sum2 if sum2 else init,
+ ])
+ # print(ob)
+ return ob
+
+
+
+### Learning Algo copied from some book
+
+import matplotlib
+from matplotlib import pyplot as plt
+matplotlib.rcParams['font.size'] = 18
+matplotlib.rcParams['figure.titlesize'] = 18
+matplotlib.rcParams['figure.figsize'] = [9, 7]
+matplotlib.rcParams['axes.unicode_minus']=False
+
+plt.figure()
+
+import gym,os
+import numpy as np
+import tensorflow as tf
+from tensorflow import keras
+from tensorflow.keras import layers,optimizers,losses
+from collections import namedtuple
+env = RockPaperScissorsEnv()
+env.seed(2222)
+tf.random.set_seed(2222)
+np.random.seed(2222)
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
+assert tf.__version__.startswith('2.')
+
+
+
+gamma = 0.98
+epsilon = 0.2
+batch_size = 32
+
+Transition = namedtuple('Transition', ['state', 'action', 'a_log_prob', 'reward', 'next_state'])
+
+class Actor(keras.Model):
+ def __init__(self):
+ super(Actor, self).__init__()
+ self.fc1 = layers.Dense(18, kernel_initializer='he_normal') # I changed 100 to 18
+ self.fc2 = layers.Dense(3, kernel_initializer='he_normal') # I changed 4 to 3
+
+ def call(self, inputs):
+ x = tf.nn.relu(self.fc1(inputs))
+ x = self.fc2(x)
+ x = tf.nn.softmax(x, axis=1)
+ return x
+
+class Critic(keras.Model):
+ def __init__(self):
+ super(Critic, self).__init__()
+ self.fc1 = layers.Dense(18, kernel_initializer='he_normal') # I changed 100 to 18
+ self.fc2 = layers.Dense(1, kernel_initializer='he_normal')
+
+ def call(self, inputs):
+ x = tf.nn.relu(self.fc1(inputs))
+ x = self.fc2(x)
+ return x
+
+
+
+
+class PPO():
+ def __init__(self):
+ super(PPO, self).__init__()
+ self.actor = Actor()
+ self.critic = Critic()
+ self.buffer = []
+ self.actor_optimizer = optimizers.Adam(1e-3)
+ self.critic_optimizer = optimizers.Adam(3e-3)
+
+ def select_action(self, s):
+ s = tf.constant(s, dtype=tf.float32)
+ # s = tf.expand_dims(s, 0) # I removed this line, otherwise we will get a (1,3,3) tensor and later we will get an error
+ prob = self.actor(s)
+ a = tf.random.categorical(tf.math.log(prob), 1)[0]
+ a = int(a)
+ return a, float(prob[0][a])
+
+ def get_value(self, s):
+ s = tf.constant(s, dtype=tf.float32)
+ s = tf.expand_dims(s, axis=0)
+ v = self.critic(s)[0]
+ return float(v)
+
+ def store_transition(self, transition):
+ self.buffer.append(transition)
+
+ def optimize(self):
+ state = tf.constant([t.state for t in self.buffer], dtype=tf.float32)
+ action = tf.constant([t.action for t in self.buffer], dtype=tf.int32)
+ action = tf.reshape(action,[-1,1])
+ reward = [t.reward for t in self.buffer]
+ old_action_log_prob = tf.constant([t.a_log_prob for t in self.buffer], dtype=tf.float32)
+ old_action_log_prob = tf.reshape(old_action_log_prob, [-1,1])
+
+ R = 0
+ Rs = []
+ for r in reward[::-1]:
+ R = r + gamma * R
+ Rs.insert(0, R)
+ Rs = tf.constant(Rs, dtype=tf.float32)
+
+ for _ in range(round(10*len(self.buffer)/batch_size)):
+
+ index = np.random.choice(np.arange(len(self.buffer)), batch_size, replace=False)
+
+ with tf.GradientTape() as tape1, tf.GradientTape() as tape2:
+
+ v_target = tf.expand_dims(tf.gather(Rs, index, axis=0), axis=1)
+
+ v = self.critic(tf.gather(state, index, axis=0))
+ delta = v_target - v
+ advantage = tf.stop_gradient(delta)
+ a = tf.gather(action, index, axis=0)
+ pi = self.actor(tf.gather(state, index, axis=0))
+ indices = tf.expand_dims(tf.range(a.shape[0]), axis=1)
+ indices = tf.concat([indices, a], axis=1)
+ pi_a = tf.gather_nd(pi, indices)
+ pi_a = tf.expand_dims(pi_a, axis=1)
+ # Importance Sampling
+ ratio = (pi_a / tf.gather(old_action_log_prob, index, axis=0))
+ surr1 = ratio * advantage
+ surr2 = tf.clip_by_value(ratio, 1 - epsilon, 1 + epsilon) * advantage
+ policy_loss = -tf.reduce_mean(tf.minimum(surr1, surr2))
+ value_loss = losses.MSE(v_target, v)
+ grads = tape1.gradient(policy_loss, self.actor.trainable_variables)
+ self.actor_optimizer.apply_gradients(zip(grads, self.actor.trainable_variables))
+ grads = tape2.gradient(value_loss, self.critic.trainable_variables)
+ self.critic_optimizer.apply_gradients(zip(grads, self.critic.trainable_variables))
+
+ self.buffer = []
+
+
+def main():
+ agent = PPO()
+ returns = []
+ total = 0
+ for i_epoch in range(500):
+ state = env.reset()
+ for t in range(games):
+ action, action_prob = agent.select_action(state)
+ if t == 999:
+ print(action, action_prob)
+ next_state, reward, done, _ = env.step(action)
+ # print(next_state, reward, done, action)
+ trans = Transition(state, action, action_prob, reward, next_state)
+ agent.store_transition(trans)
+ state = next_state
+ total += reward
+ if done:
+ if len(agent.buffer) >= batch_size:
+ agent.optimize()
+ break
+ print(env.results)
+
+ if i_epoch % 20 == 0:
+ returns.append(total/20)
+ total = 0
+ print(i_epoch, returns[-1])
+
+ print(np.array(returns))
+ plt.figure()
+ plt.plot(np.arange(len(returns))*20, np.array(returns))
+ plt.plot(np.arange(len(returns))*20, np.array(returns), 's')
+ plt.xlabel('epochs')
+ plt.ylabel('total return')
+ plt.savefig('ppo-tf.svg')
+
+
+if __name__ == '__main__':
+ main()
+ print("end")
+
+
+
+"
+"['reinforcement-learning', 'eligibility-traces']"," Title: Why weighting by lambda that sums to 1 ensures convergence in eligibility trace?Body: In Sutton and Barto's Book in chapter 12, they state that if weights sum to 1, then an equation's updates have "guaranteed convergence properties". Actually why it ensures convergence?
+There is a full citation from the mentioned fragment in Richard S. Sutton and Andrew G. Barto. Second Edition:
+
+Now we note that a valid update can be done not just toward any n-step return, but toward any average of n-step returns for different ns. For example, an update can be done toward a target that is half of a two-step return and half of a four-step return: $\frac{1}{2}G_{t:t+2} + \frac{1}{2}G_{t:t+4}$. Any set of n-step returns can be averaged in this way, even an infinite set, as long as the weights on the component returns are positive and sum to 1. The composite return possesses an error reduction property similar to that of individual n-step returns (7.3) and thus can be used to construct updates with guaranteed convergence properties.
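+My own attempt at a reasoning step (please check whether this is what the book means): if each $n$-step return satisfies the error reduction property, then for non-negative weights $w_n$ with $\sum_n w_n = 1$, the triangle inequality should give
+$$\Big| \mathbb{E}_\pi\Big[\sum_n w_n G_{t:t+n} \,\Big|\, S_t = s\Big] - v_\pi(s) \Big| \le \sum_n w_n \Big| \mathbb{E}_\pi[G_{t:t+n} \mid S_t = s] - v_\pi(s) \Big| \le \sum_n w_n \gamma^n \max_{s'} \big| V(s') - v_\pi(s') \big|,$$
+so the composite target still shrinks the worst-case error by at least a factor $\gamma$ (since $\gamma^n \le \gamma$ for $n \ge 1$), which is how I read "guaranteed convergence properties". Is that the right intuition?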
+
+"
+"['natural-language-processing', 'reference-request', 'state-of-the-art', 'naive-bayes']"," Title: Are there any examples of state-of-the-art NLP applications that are still n-gram based and use Naive Bayes?Body: As far as I can tell, most NLP tasks today use word embeddings and recurrent networks or transformers.
+Are there any examples of state-of-the-art NLP applications that are still n-gram based and use Naive Bayes?
+"
+"['reinforcement-learning', 'python', 'environment']"," Title: How to let the agent choose how to populate a state space matrix in RL (using python)Body: I have an agent (drone) that has to allocate subchannels for different types of User Equipment.
+I have represented the subchannel allocation with a 2-dimensional binary matrix that is initialized to all zeros, as there are no requests at the beginning of the episode.
+When the agent chooses an action, it has to choose which subchannels to allocate to which UEs, hence populating the matrix with 1s.
+I have no idea how to do it.
+"
+"['image-segmentation', 'performance']"," Title: What is human-level performance for semantic segmentation?Body: I see so many papers claim to have an algorithm that beats 'human-level performance' for semantic segmentation tasks, but I can't find any papers reporting on what the human-level performance actually is. An analysis of the similarity between segmentations drawn by multiple different human experts would be good. Could someone point me towards a paper that reports on something like that?
+"
+"['deep-learning', 'natural-language-processing', 'backpropagation', 'transformer', 'attention']"," Title: How are weight matrices in attention learned?Body: I have been looking into transformers lately and have been reading tons of tutorials. All of them address the intuition behind attention, which I understand. However, they treat learning the weight matrices (for query, key, and value) as it is the most trivial thing.
+So, how are these weight matrices learned? Is the error just backpropagated, and the weights are updated accordingly?
+"
+"['tensorflow', 'object-detection']"," Title: Should I prefer cropped images or realistic images for object detection?Body: I am new to the field of AI but due to the high level of abstraction that comes with services such as Google VisionAI I got motivated to write an application that detects symbols in photos based on tensorflow.js and a custom model trained in Google Vision AI.
+My App is about identifying symbols in photos, very similar to traffic signs or logo detection. Now I wonder if
+
+- I should train the model based on real, distorted and complex photos that contain those symbols and lots of background noise
+- if it was enough to train the model based on cropped, clean symbols
+- A hybrid of both
+
+I started with option a and it works fine, however it was a lot of work to create the training dataset. Does the model need the distorted background to work?
+"
+"['reinforcement-learning', 'deep-rl', 'monte-carlo-tree-search', 'chess', 'state-spaces']"," Title: To solve chess with deep RL and MCTS, how should I represent the input (the state) to a neural network?Body: I'm wanting to build a NN that can create a policy for each possible state. I want to combine this with MCTS to eliminate randomness so when expansion occurs, I can get the probability of the move to winning.
+I am confident (I believe) in how to code the neural network, but the input shape is the hardest part here. I am firstly wanting to try with 2 player chess and then expand to 3 player chess.
+What is the best vector/matrix to use for the input in a chess game? How should the input be fed into a neural network to output the most promising move from the position? In addition, what format should it look like (i.e [001111000], etc.)?
+"
+['autoencoders']," Title: Are Autoencoders for noise-reduction only suited to deal with salt-and-pepper kind of noise?Body: I'm currently looking at NN to deal with noisy data. I like the Autoencoder approach https://medium.com/@aliaksei.mikhailiuk/unsupervised-learning-for-data-interpolation-e259cf5dc957 because it seems to be adaptive and does not require to be trained on specific training data.
+However, as it is described in this article, it seems to rely on having noise-free samples in the input data that are true to the ground truth, so I wonder if an autoencoder could also work in the case of white or blue noise instead of salt-and-pepper noise?
+"
+"['natural-language-processing', 'pytorch', 'transformer', 'text-generation']"," Title: Transformer Language Model generating meaningless textBody: I currently learning on Transformers, so check my understanding I tried implementing a small transformer-based language model and compare it to RNN based language model. Here's the code for transformer. I'm using PyTorch inbuilt layer for Transformer Encoder
+class TransformerLM_1(nn.Module):
+
+ def __init__(self, head, vocab_size, embedding_size, dropout = 0.1, device = 'cpu',
+ pad_idx = 0, start_idx = 1, end_idx = 2, unk_idx = 3):
+
+ super(TransformerLM_1, self).__init__()
+
+ self.head = head
+ self.embedding_size = embedding_size
+ self.vocab_size = vocab_size
+ self.device = device
+ self.embed = WordEmbedding(self.vocab_size, self.embedding_size, pad_idx)
+ self.postional_encoding = PostionalEncoding(embedding_size, device)
+ self.decoder = nn.TransformerEncoderLayer(self.embedding_size, self.head)
+ self.out_linear = nn.Linear(self.embedding_size, vocab_size)
+ self.dropout = dropout
+ self.pad_idx = pad_idx
+ self.start_idx = start_idx
+ self.end_idx = end_idx
+ self.unk_idx = unk_idx
+ self.device = device
+
+
+ def make_src_mask(self, src_sz):
+ mask = (torch.triu(torch.ones(src_sz, src_sz)) == 1).transpose(0, 1)
+ mask = mask.float().masked_fill(mask == 0, 10e-20).masked_fill(mask == 1, float(0.0))
+ mask = mask.to(self.device)
+ return mask
+
+ def forward(self, x):
+ dec_in = x.clone()[:, :-1]
+ src_mask = self.make_src_mask(dec_in.size()[1])
+ src = self.embed(dec_in)
+ src = self.postional_encoding(src)
+ src = src.transpose(0,1)
+ transformer_out = self.decoder(src, src_mask)
+ out = self.out_linear(transformer_out)
+ return out
+
+I'm using teacher forcing to make it converge faster. From the results I saw, the text generated by the RNN model is better than the transformer's.
+Here is sample generated text with the expected
+Expected: you had to have been blind not to see the scenario there for what it was and is and will continue to be for months and even years a part of south carolina that has sustained a blow that the red cross expects will cost that organization alone some $ n million <eos>
+Predicted: some <unk> been the been <unk> not be $ the total has was the may has <unk> the that that be to the <unk> the
+
+Expected: citicorp and chase are attempting to put together a new lower bid <eos>
+Predicted: a are <unk> carries n't to the together with <unk> jersey than
+
+Expected: it ' s amazing the amount of money that goes up their nose out to the dog track or to the tables in las vegas mr . katz says <eos>
+Predicted: <unk> ' s <unk> comeback money of the in mr to their <unk> and of <unk> <unk> or or <unk> the money
+
+Expected: moreover while asian and middle eastern investors <unk> gold and help <unk> its price silver does n't have the same <unk> dealers say <eos>
+Predicted: the production the routes <unk> of its
+
+Expected: a board of control spokesman said the board had not seen the claim and declined to comment <eos>
+Predicted: the board said declined of said
+
+Expected: property capital trust said it dropped its plan to liquidate because it was n't able to realize the value it had expected <eos>
+Predicted: the claims markets said its was n <unk> to sell insolvent of was n't disclosed to sell its plan
+
+Expected: similarly honda motor co . ' s sales are so brisk that workers <unk> they have n't had a saturday off in years despite the government ' s encouragement of more leisure activity <eos>
+Predicted: the honda ' credit . s s <unk>
+
+Expected: we expect a big market in the future so in the long term it will be profitable <eos>
+Predicted: it can it <unk> board
+
+Expected: u . k . composite or <unk> insurers which some equity analysts said might be heavily hit by the earthquake disaster helped support the london market by showing only narrow losses in early trading <eos>
+Predicted: the . s . s trading sell said which <unk> traders market said the be able in the the earthquake
+
+Expected: this will require us to define and <unk> what is necessary or appropriate care <eos>
+Predicted: <unk> is be the $ <unk> <unk> <unk> <unk> is the to <unk> and or
+
+As you can see, the transformer fails to grasp grammar compared to the RNN. Is there anything wrong with my understanding?
+EDIT
+This is one example that caught my eye
+Expected: also the big board met with angry stock specialists <eos>
+Predicted: also met specialists board met the stock big with after
+
+Most of the predicted words are from the expected sentence, but in a different order. I have read that transformers are permutation-invariant, which is the reason why we include positional encoding with the word embedding.
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'rewards']"," Title: Why do my rewards reduce after extensive training using D3QN?Body: I am running a drone simulator for collision avoidance using a slight variant of D3QN. The training is usually costly (runs for at least a week) and I have observed that reward function gradually increases during training and then drastically drops. In the simulator, this corresponds to the drone exhibiting cool collision avoidance after a few thousand episodes. However, after training for more iterations it starts taking counterintuitive actions such as simply crashing into a wall (I have checked to ensure that there is no exploration at play over here).
+Does this have to do with overfitting? I am unable to understand why my rewards are falling this way.
+"
+"['search', 'monte-carlo-tree-search', 'minimax']"," Title: How does the MCTS tree look like?Body: I have come across the Monte Carlo tree search (MCTS) algorithm, but I can't find what the tree should look like. For example, does it still represent a minimax process, i.e. player 1 from the root has its child nodes as probabilities of moves, then from those child nodes, the next move is player 2, etc.? Is this how the tree looks like, so when we backpropagate we update only the winning player nodes?
+"
+"['training', 'optimization', 'gradient-descent', 'papers', 'generative-adversarial-networks']"," Title: Why scaling down the parameter many times during training will help the learning speed be the same for all weights in Progressive GAN?Body: The title is one of the special things in Progressive GAN, a paper of the NVIDIA team. By using this method, they introduced that
+
+Our approach ensures that the dynamic range, and thus the learning speed, is the same for all weights.
+
+In detail, they initialize all learnable parameters from the normal distribution $N(0,1)$. Then, during training, at each forward pass, they scale the result with the per-layer normalization constant from He's initializer.
+I reproduce the code from the pytorch GAN zoo GitHub repo:
+def forward(self, x, equalized):
+    # generate the He constant, which depends on the size of tensor W
+    # (this assumes `import math` and `from numpy import prod` at module level)
+    size = self.module.weight.size()
+    fan_in = prod(size[1:])
+    self.weight = math.sqrt(2.0 / fan_in)
+ '''
+ A module example:
+
+ import torch.nn as nn
+ module = nn.Conv2d(nChannelsPrevious, nChannels, kernelSize, padding=padding, bias=bias)
+ '''
+ x = self.module(x)
+
+ if equalized:
+ x *= self.weight
+ return x
+
+At first, I thought the He constant would be $c = \frac{\sqrt{2}}{\sqrt{n_l}}$, as in He's paper. Normally $n_l > 2$, so dividing by $c$ scales $w_l$ up, which increases the gradient in backpropagation, following the formula in ProGAN's paper, $\hat{w}_i=\frac{w_i}{c}$ $\rightarrow$ preventing vanishing gradients.
+However, the code shows that $\hat{w}_i=w_i \cdot c$.
+In summary, I can't understand why scaling down the parameters many times during training helps keep the learning speed stable.
+I asked this question in several communities, e.g. StackOverflow, Mathematics, Data Science, and still haven't had an answer.
+Please help me explain it, thank you!
+"
+"['convolutional-neural-networks', 'math', 'image-processing', 'data-preprocessing', 'convolution']"," Title: How to mathematically describe the convolution operation (with a Gaussian kernel)?Body: I have to build a model where I pre-process the data with a Gaussian kernel. The data are an $n\times n$ matrix (i.e one channel), but not an image, thus I can't refer to this matrix as an image and to its elements as pixels. The Gaussian kernel is built by the following function (more i.e. here)
+$$
+g(x,y,\sigma) = \frac{1}{2\pi\sigma^2} \, e^{-\frac{x^2+y^2}{2\sigma^2}}.
+$$
+This kernel is moved over the matrix element by element, performing a convolution. In my case, most of the elements are zero; the matrix is sparse.
+How can I describe/understand the process of convolving the original data with a Gaussian kernel?
+I have been looking for some articles, but I am unable to find any mathematical explanations, only explanations in words or pseudo-code.
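+For reference, this is roughly what the pre-processing step does in code; what I am after is the mathematical description of exactly this operation (a sketch with an arbitrary kernel size and sigma):
+import numpy as np
+from scipy.signal import convolve2d
+
+def gaussian_kernel(size=5, sigma=1.0):
+    ax = np.arange(size) - size // 2
+    xx, yy = np.meshgrid(ax, ax)
+    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
+    return g / g.sum()  # normalise so the weights sum to 1
+
+data = np.zeros((9, 9))
+data[4, 4] = 1.0  # a sparse example matrix
+smoothed = convolve2d(data, gaussian_kernel(), mode='same')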
+"
+"['convolutional-neural-networks', 'image-segmentation', 'u-net', 'semi-supervised-learning']"," Title: Model output segmentation maps which are not fullBody: I created a VGG based U-Net in order to perform image segmentation task on yeast cells images obtained by a microscope.
+There are a couple of problems with the data:
+
+- There is inhomogeneity in the number of yeast cells across images: one image can have hundreds of yeast cells while others have fewer than one hundred.
+- The GT segmentation map is also incomplete, and some of the cells are not labeled.
+
+All in all, given the above problems, the model is able to learn to some extent. My problem is that the predicted segmentation maps seem incomplete.
+For example:
+
+
+My loss function contains BCE. I was wondering if there is a way to force the model to create 'fuller' segmentation maps, something like using random fields of some sort, or maybe enhancing my loss function to overcome the above-mentioned problems.
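+For example, one thing I considered is adding a soft Dice term to the BCE, roughly like this (a PyTorch-style sketch of what I mean, not something I have validated):
+import torch
+import torch.nn.functional as F
+
+def bce_dice_loss(logits, target, eps=1e-6):
+    bce = F.binary_cross_entropy_with_logits(logits, target)
+    probs = torch.sigmoid(logits)
+    intersection = (probs * target).sum()
+    dice = (2 * intersection + eps) / (probs.sum() + target.sum() + eps)
+    return bce + (1 - dice)  # penalises predictions that cover too little of the mask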
+I wish to stay in the domain of simple architectures rather than using more sophisticated ones such as RCNN.
+Would appreciate any suggestions
+"
+"['machine-learning', 'naive-bayes', 'bias-variance-tradeoff', 'ensemble-learning', 'boosting']"," Title: Why don't ensembling, bagging and boosting help to improve accuracy of Naive bayes classifier?Body:
+You might think to apply some classifier combination techniques like ensembling, bagging and boosting but these methods would not help. Actually, “ensembling, boosting, bagging” won’t help since their purpose is to reduce variance. Naive Bayes has no variance to minimize.
+
+The above paragraph is mentioned in this article.
+
+- How can they say the purpose of these methods is to reduce variance?
+- Is it true that naive Bayes has no variance?
+
+Thanks in advance
+"
+"['reinforcement-learning', 'papers', 'experience-replay']"," Title: Why is it necessary to divide the priority range according to the batch size in Prioritized Experience Replay?Body: According to DeepMinds's paper Prioritized Experience Replay (2016), specifically Appendix B.2.1 "Proportional prioritization" (p. 13), one should equally divide the priority range $[0, p_\text{total}]$ into $k$ ranges, where $k$ is the size of the batch, and sample a random variable within these sub-ranges. This random variable is then used to sample an experience from the sum-tree according to its priority (probability).
+Why do we need to do that? Why not simply sample $k$ random variables in $[0, p_\text{total}]$ and retrieve $k$ experiences from the sum-tree, without dividing the priority range into $k$ different sub-ranges? Isn't this the same?
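+To make the two alternatives explicit, this is how I understand them (a sketch; both lists would then be mapped to experiences through the sum-tree):
+import random
+
+def stratified_samples(p_total, k):
+    # what the paper describes: one sample per equal-width segment
+    segment = p_total / k
+    return [random.uniform(i * segment, (i + 1) * segment) for i in range(k)]
+
+def uniform_samples(p_total, k):
+    # the alternative I am asking about: k samples over the whole range
+    return [random.uniform(0, p_total) for _ in range(k)]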
+"
+"['machine-learning', 'terminology']"," Title: What is meant by ""ground truth"" in the context AI?Body: What does "ground truth" mean in the context of AI especially in the context of machine learning?
+I am a little confused because I have read that the ground truth is the same as a label in supervised learning, and I think that's not quite right. I thought that ground truth refers to a model (or maybe the nature) of a problem. I always considered it something philosophical (which is also what the term 'ground truth' implies), because in ML we often don't build a descriptive model of the problem (as in classical mechanics) but rather some sort of simulator that behaves as if it were a descriptive model. That's what we sometimes call a black box.
+What is the correct understanding?
+"
+"['genetic-algorithms', 'evolutionary-algorithms']"," Title: Which 6-bit string would represent an optimal solution for trap-3 in the Linkage Learning Genetic Algorithm?Body: I am struggling to learn certain Evolutionary algorithm concepts and also relations between each of them. I am going through the Linkage Learning Genetic Algorithm (LLGA) right now and came across this question:
+Which 6-bit string of LLGA would represent an optimal solution for trap-3?
+Can anyone give me an answer or explain it?
+"
+"['neural-networks', 'convolutional-neural-networks', 'training', 'image-recognition', 'image-segmentation']"," Title: Training a CNN for semantic segmentation of large 4600x4600px imagesBody: I am trying to implement a CNN (U-Net) for semantic segmentation of similar large grayscale ~4600x4600px medical images. The area I want to segment is the empty space (gap) between a round object in the middle of the picture and an outer object, which is ring-shaped, and contains the round object. This gap is "thin" and is only a small proportion of the whole image.
+In my problem, having a small gap is good, since then the two objects have a good connection to each other.
+My questions:
+
+- Is it possible to feed such large images to a CNN?
+Downscaling the images seems like a bad idea, since the gap is thin and most of the relevant information would be lost. From what I've seen, CNNs are usually trained on much smaller images.
+
+- Since the problem is symmetric in some sense, is it a good idea to split the image into 4 (or more) smaller images (see the tiling sketch below the list)?
+
+- Are CNNs able to detect such small regions in such a huge image? From what I've seen in the literature, mostly larger objects are segmented such as organs etc.
+
+
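+To make the splitting idea concrete, this is the kind of overlapping tiling I have in mind (a rough sketch, assuming the image is a numpy array; tile size and overlap are arbitrary, and border handling is omitted):
+import numpy as np
+
+def extract_tiles(image, tile=1152, overlap=128):
+    # split a large 2D image into overlapping square tiles
+    step = tile - overlap
+    tiles = []
+    for y in range(0, image.shape[0] - tile + 1, step):
+        for x in range(0, image.shape[1] - tile + 1, step):
+            tiles.append(image[y:y + tile, x:x + tile])
+    return tiles
+
+tiles = extract_tiles(np.zeros((4600, 4600), dtype=np.float32))
+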
+I would appreciate some ideas and help. It is my first post on the site so hopefully I didn't make any mistakes.
+Cheers
+"
+"['math', 'intelligent-agent', 'norvig-russell']"," Title: Why would the lookup table (of a table-driven artificial agent) need to store data at pixel precision?Body: While reading the book AI A modern approach, 4th ed, I came across the section of "Agent program" with following text:
+
+It is instructive to consider why the table-driven approach to agent
+construction is doomed to failure. Let $P$ be the set of possible
+percepts and let $T$ be the lifetime of the agent (the total number of
+percepts it will receive).
+The lookup table will contain $\sum_{t=1}^{T} |P|^{t}$ entries.
+Consider the automated taxi: the visual input from a single camera
+(eight cameras is typical) comes in at the rate of roughly 70 mb per
+sec. (30 frames per sec, 1080 X 720 pixels, with 24 bits of color
+information).
+This gives a lookup table with over $10^{600,000,000,000}$ entries for an
+hour's driving.
+
+Could someone please explain how the lookup-table number is derived (or what point of the author's I am missing)? If I multiply all of the numbers, $30 × 1080 × 720 × 24 × 8 × 3600$, I get $1.6124314e+13$, which seems to come close, but I can't see what the reason would be to build a table (even a theoretical one) in such a way, something which is obviously intractable.
+edit:
+My core question is this:
+Assuming $10^{600,000,000,000}$ is derived from $30 × 1080 × 720 × 24 × 8 × 3600$, what is the purpose of storing data in the lookup table at pixel precision? Wouldn't storing a higher level of detail be enough to solve this kind of problem (i.e., autonomous driving)? Coming from standard software database systems, I am missing that point. Thanks
+"
+"['machine-learning', 'training', 'learning-rate']"," Title: Is stable learning preferable to jumps in accuracy/lossBody: A stable/smooth learning validation curve often seems to keep improving over more epochs than an unstable learning curve. My intuition is that dropping the learning rate and increasing the patience of a model that produces a stable learning curve could lead to better validation fit.
+The counter argument is that jumps in the curve could mean that the model has just learned something significant, but they often jump back down or tail off after that.
+Is one better than the other? Is it possible to take aspects of both to improve learning?
+"
+['neural-networks']," Title: Why neural networks tend to be trained to recognize multiple things instead of just one?Body: I was watching this series: https://www.youtube.com/watch?v=aircAruvnKk
+The series demonstrates neural networks by building a simple number recognizing network.
+It got me thinking: why do neural networks try to recognize multiple labels instead of just one? In the above example, the network tries to recognize the digits from 0 to 9. What is the benefit of trying to recognize so many things simultaneously? Wouldn't it be easier to reason about if there were 10 different neural networks, each specializing in recognizing only one digit at a time?
+"
+"['neural-networks', 'reinforcement-learning', 'policy-gradients', 'deterministic-policy']"," Title: What is the loss for policy gradients with continuous actions?Body: I know with policy gradients used in an environment with a discrete action space are updated with $$
+\Delta \theta_{t}=\alpha \nabla_{\theta} \log \pi_{\theta}\left(a_{t} \mid s_{t}\right) v_{t}
+$$
+where $v_t$ could be many things that represent how good the action was. And I know that this can be calculated by performing cross entropy loss with the target being what the network would have outputted if it were completely confident in its action (zeros with the index of the action chosen being one). But I don’t understand how to apply that to policy gradients that output the mean and variance of a Gaussian distribution for a continuous action space. What is the loss for these types of policy gradients?
+I tried keeping the variance constant and updating the output with mean squared error loss and the target being the action it took. I thought this would end up pushing the mean towards actions with greater total rewards but it got nowhere in OpenAI’s Pendulum environment.
+It would also be very helpful if it was described in a way with a loss function and a target, like how policy gradients with discrete action spaces can be updated with cross entropy loss. That is how I understand it best but it is okay if that is not possible.
+Edit: for @Philipp. The way I understand it is that the loss function is the same with a continuous action space and the only thing that changes is the distribution that we get the log-probs from. In PyTorch we can use a Normal distribution for continuous action space and Categorical for discrete action space. The answer from David Ireland goes into the math but in PyTorch, that looks like log_prob = distribution.log_prob(action_taken)
for any type of distribution. It makes sense that for bad actions we would want to decrease the probability of taking the action. Below is working code for both types of action spaces to compare them. The continuous action space code should be correct but the agent will not learn because it is harder to learn the right actions with a continuous action space and our simple method isn't enough. Look into more advanced methods like PPO and DDPG.
+import torch
+import torch.nn as nn
+import torch.optim as optim
+from torch.distributions.categorical import Categorical #discrete distribution
+import numpy as np
+import gym
+import math
+import matplotlib.pyplot as plt
+
+class Agent(nn.Module):
+ def __init__(self,lr):
+ super(Agent,self).__init__()
+ self.fc1 = nn.Linear(4,64)
+ self.fc2 = nn.Linear(64,32)
+ self.fc3 = nn.Linear(32,2) #neural network with layers 4,64,32,2
+
+ self.optimizer = optim.Adam(self.parameters(),lr=lr)
+
+ def forward(self,x):
+        x = torch.relu(self.fc1(x)) #relu for hidden layers, sigmoid for the output
+ x = torch.relu(self.fc2(x))
+ x = torch.sigmoid(self.fc3(x))
+ return x
+
+env = gym.make('CartPole-v0')
+agent = Agent(0.001) #hyperparameters
+DISCOUNT = 0.99
+total = []
+
+for e in range(500):
+ log_probs, rewards = [], []
+ done = False
+ state = env.reset()
+ while not done:
+ #mu = agent.forward(torch.from_numpy(state).float())
+ #distribution = Normal(mu, SIGMA)
+ distribution = Categorical(agent.forward(torch.from_numpy(state).float()))
+ action = distribution.sample()
+ log_probs.append(distribution.log_prob(action))
+ state, reward, done, info = env.step(action.item())
+ rewards.append(reward)
+
+ total.append(sum(rewards))
+
+ cumulative = 0
+ d_rewards = np.zeros(len(rewards))
+ for t in reversed(range(len(rewards))): #get discounted rewards
+ cumulative = cumulative * DISCOUNT + rewards[t]
+ d_rewards[t] = cumulative
+ d_rewards -= np.mean(d_rewards) #normalize
+ d_rewards /= np.std(d_rewards)
+
+ loss = 0
+ for t in range(len(rewards)):
+ loss += -log_probs[t] * d_rewards[t] #loss is - log prob * total reward
+
+ agent.optimizer.zero_grad()
+ loss.backward() #update
+ agent.optimizer.step()
+
+ if e%10==0:
+ print(e,sum(rewards))
+ plt.plot(total,color='blue') #plot
+ plt.pause(0.0001)
+
+
+def run(i): #to visualize performance
+ for _ in range(i):
+ done = False
+ state = env.reset()
+ while not done:
+ env.render()
+ distribution = Categorical(agent.forward(torch.from_numpy(state).float()))
+ action = distribution.sample()
+ state,reward,done,info = env.step(action.item())
+ env.close()
+
+Above is the discrete action space code for CartPole and below is the continuous action space code for Pendulum. Sigma (the standard deviation) is constant here, but making it a learned output is easy: just make the final layer have two neurons and make sure sigma is not negative. Again, the Pendulum code won't work, because most environments with continuous action spaces are too complicated for such a simple method; making it work would probably require a lot of hyperparameter tuning.
+import torch
+import torch.nn as nn
+import torch.optim as optim
+from torch.distributions.normal import Normal #continuous distribution
+import numpy as np
+import gym
+import math
+import matplotlib.pyplot as plt
+import keyboard
+
+class Agent(nn.Module):
+ def __init__(self,lr):
+ super(Agent,self).__init__()
+ self.fc1 = nn.Linear(3,64)
+ self.fc2 = nn.Linear(64,32)
+ self.fc3 = nn.Linear(32,1) #neural network with layers 3,64,32,1
+
+ self.optimizer = optim.Adam(self.parameters(),lr=lr)
+
+ def forward(self,x):
+ x = torch.relu(self.fc1(x)) #relu and tanh for output
+ x = torch.relu(self.fc2(x))
+ x = torch.tanh(self.fc3(x)) * 2
+ return x
+
+env = gym.make('Pendulum-v0')
+agent = Agent(0.01) #hyperparameters
+SIGMA = 0.2
+DISCOUNT = 0.99
+total = []
+
+for e in range(1000):
+ log_probs, rewards = [], []
+ done = False
+ state = env.reset()
+ while not done:
+ mu = agent.forward(torch.from_numpy(state).float())
+ distribution = Normal(mu, SIGMA)
+ action = distribution.sample().clamp(-2.0,2.0)
+ log_probs.append(distribution.log_prob(action))
+ state, reward, done, info = env.step([action.item()])
+ #reward = abs(state[1])
+ rewards.append(reward)
+
+ total.append(sum(rewards))
+
+ cumulative = 0
+ d_rewards = np.zeros(len(rewards))
+ for t in reversed(range(len(rewards))): #get discounted rewards
+ cumulative = cumulative * DISCOUNT + rewards[t]
+ d_rewards[t] = cumulative
+ d_rewards -= np.mean(d_rewards) #normalize
+ d_rewards /= np.std(d_rewards)
+
+ loss = 0
+ for t in range(len(rewards)):
+ loss += -log_probs[t] * d_rewards[t] #loss is - log prob * total reward
+
+ agent.optimizer.zero_grad()
+ loss.backward() #update
+ agent.optimizer.step()
+
+ if e%10==0:
+ print(e,sum(rewards))
+ plt.plot(total,color='blue') #plot
+ plt.pause(0.0001)
+ if keyboard.is_pressed("space"): #holding space exits training
+ raise Exception("Exited")
+
+
+def run(i): #to visualize performance
+ for _ in range(i):
+ done = False
+ state = env.reset()
+ while not done:
+ env.render()
+ distribution = Normal(agent.forward(torch.from_numpy(state).float()), SIGMA)
+ action = distribution.sample()
+ state,reward,done,info = env.step([action.item()])
+ env.close()
+
+David Ireland also wrote this on a different question I had:
+
+The algorithm doesn't change in this situation. Say your NN outputs the mean parameter of the Gaussian, then $\log\pi(a_t|s_t)$ is just the log of the normal density evaluated at the action you took where the mean parameter in the density is the output of your NN. You are then able to backpropagate through this to update the weights of your network.
+
+"
+"['python', 'pytorch', 'regression']"," Title: How to use validation dataset in my logistic regression model?Body: I am new to machine learning and recently I joined a course where I was given a logistic regression assignment in which I had to split 20% of the training dataset for the validation dataset and then use the validation dataset to capture the minimum possible loss and then use the test dataset to find the accuracy of the model.
+Below is my code for implementing logistic regression
+import torch  # needed for the tensor operations below
+
+class LogReg(LinReg):  # LinReg (the base linear-regression class) is not shown here
+ def __init__(self, n_dim, bias=True):
+ if bias:
+ n_dim = n_dim + 1
+ super(LogReg, self).__init__(n_dim)
+ self.bias = bias
+
+ def __call__(self, x):
+ return x.mm(self.theta).sigmoid()
+
+ def compute_loss(self, x, y, lambda_reg):
+ # The function has a generic implementation, and can also work for the neural nets!
+ predictions = self(x)
+ loss = -(y * torch.log(predictions) + (1-y) * torch.log(1 - predictions)).mean()
+ regularizer = self.theta.transpose(0, 1).mm(self.theta)
+ return loss + regularizer.mul(lambda_reg)
+ @staticmethod
+ def add_bias(x):
+ ones = torch.ones((x.size(0), 1), dtype=torch.float32)
+ x_hat = torch.cat((ones, x), dim=-1)
+ return x_hat
+
+ def fit(self, x, y, num_iter=10, mb_size=32, lr=1e-1, lambda_reg=1e-2, reset=True):
+ N = x.size(0)
+ losses = []
+ x_hat = x
+ # Adding a bias term if needed
+ if self.bias:
+ x_hat = self.add_bias(x)
+ if reset:
+ self.reset() # Very important if you want to call fit multiple times
+ num_batches = x.size(0) // mb_size
+ # The outer loop goes over `epochs`
+ # The inner loop goes over the whole training data
+ for it in range(num_iter):
+ loss_per_epoch = 0
+ for batch_it in range(num_batches):
+ # has been implemented for the linear model
+ self.zero_grad()
+
+ ind = torch.randint(0, N, (mb_size, 1)).squeeze()
+ x_mb, y_mb = x_hat[ind, :], y[ind, :]
+
+ loss = self.compute_loss(x_mb, y_mb, lambda_reg)
+
+ loss.backward()
+ self.theta.data = self.theta.data - lr*self.grad().data
+ loss_per_epoch += loss.item()
+
+ loss_per_epoch /= num_batches
+ losses.append(loss_per_epoch)
+
+ return losses
+
+How should I use the validation set at the epoch level to find the best loss?
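+Here is roughly what I was thinking of doing, wrapping the existing fit (a sketch; x_val/y_val are my held-out 20%):
+import torch
+
+def fit_with_validation(model, x, y, x_val, y_val, num_iter=10, lambda_reg=1e-2, **kw):
+    best_loss, best_theta = float('inf'), None
+    for it in range(num_iter):
+        # run one epoch at a time so the validation loss can be checked in between
+        model.fit(x, y, num_iter=1, reset=(it == 0), lambda_reg=lambda_reg, **kw)
+        with torch.no_grad():
+            val_loss = model.compute_loss(model.add_bias(x_val), y_val, lambda_reg).item()
+        if val_loss < best_loss:
+            best_loss, best_theta = val_loss, model.theta.clone()
+    if best_theta is not None:
+        model.theta.data = best_theta.data  # keep the parameters with the lowest validation loss
+    return best_loss
+
+Is this the right idea, or should the validation check happen somewhere else?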
+"
+"['reinforcement-learning', 'q-learning', 'value-functions', 'exploration-exploitation-tradeoff', 'epsilon-greedy-policy']"," Title: Can we stop training as soon as epsilon is small?Body: I'm new to reinforcement learning.
+As is common in RL, an $\epsilon$-greedy policy is used for behaviour/exploration. So, at the beginning of training, $\epsilon$ is high, and therefore a lot of random actions are chosen. With time, $\epsilon$ decreases and we choose the best action more often.
+
+- I was wondering, e.g. in Q-Learning, if $\epsilon$ is small, e.g. 0.1 or 0.01, do the Q-values really still change? Do they just change their direction, i.e. the best action remains the best action but the Q-values diverge further, or do the values really change again so that the best action always changes for a given state?
+
+- If the Q-values really do still change strongly, is it because of the remaining random actions, which we still have at $\epsilon>0$ or would it still change at $\epsilon=0$?
+
+
+"
+"['reinforcement-learning', 'comparison', 'policy-gradients', 'policy-based-methods', 'value-based-methods']"," Title: What are the disadvantages of actor-only methods with respect to value-based ones?Body: While the advantages of actor-only algorithms, the ones that search directly the policy without the use of the value function, are clear (possibility of having a continuous action space, a stochastic policy, etc.), I can't figure out the disadvantages, if not general statements about less stability or bigger variance with respect to critic-only methods (with which I refer to methods that are based on the value function).
+"
+"['convolutional-neural-networks', 'tensorflow', 'keras', 'image-recognition', 'binary-classification']"," Title: Image Classification for watermarks with poor resultsBody: Just starting learning things about tensorflow and NN.
+As an exercise I decided to create a dataset of images, watermarked and not, in order to binary classify these. First of all, the dataset ( you can see it here ) was created artificially by me applying some random watermarks.
+First doubt, in the dataset I don't have both images one watermarked and one not, would be better to have?
+Second, frustrating: model stand on 0.5 accuracy, so it just produce random output :(
+Model I tried is this:
+model = tf.keras.Sequential([
+ tf.keras.layers.Conv2D(16,(1,1), activation='relu', input_shape=(150, 150, 3)),
+ tf.keras.layers.MaxPool2D(2,2),
+ tf.keras.layers.Conv2D(32,(3,3), activation='relu'),
+ tf.keras.layers.MaxPool2D(2,2),
+ tf.keras.layers.Conv2D(64,(3,3), activation='relu'),
+ tf.keras.layers.MaxPool2D(2,2),
+ tf.keras.layers.Flatten(),
+ tf.keras.layers.Dense(128, activation='elu'),
+ tf.keras.layers.Dense(64, activation='elu'),
+ tf.keras.layers.Dense(32, activation='relu'),
+ tf.keras.layers.Dense(1,activation="sigmoid")
+
+and then compiled as this:
+model.compile(optimizer='adam',
+ loss='binary_crossentropy',
+ metrics = ['accuracy'])
+
+Here below the fit:
+history = model.fit(train_data,
+ validation_data=valid_data,
+ steps_per_epoch=100,
+ epochs=15,
+ validation_steps=50,
+ verbose=2)
+
+As for any other details, code is here.
+I already checked for technical issues: I'm pretty sure the images are fed in properly, the train/validation split is 80/20, and there are about 12K images for training. However, the accuracy bounces up and down around 0.5 while fitting. How can I improve?
+"
+"['neural-networks', 'sample-complexity']"," Title: Is it better to split sequences into overlapping or non-overlapping training samples?Body: I have $N$ (time) sequences of data with length $2048$. Each of these sequences correseponds to a different target output. However, I know that only a small part of the sequence is needed to actually predict this target output, say a sub-sequence of length $128$.
+I could split up each of the sequences into $16$ partitions of $128$, so that I end up with $16N$ training smaples. However, I could drastically increase the number of training samples if I use a sliding window instead: there are $2048-128 = 1920$ unique sub-sequences of length $128$ that preserve the time series. That means I could in fact generate $1920N$ unique training samples, even though most of the input is overlapping.
+I could also use a larger increment between individual "windows", which would reduce the number of sub-sequences but it could remove any autocorrelation between them.
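+For concreteness, this is roughly how I generate the sub-sequences (a sketch, assuming each sequence is a 1-D numpy array):
+import numpy as np
+
+def sliding_windows(seq, window=128, step=1):
+    # step=1 gives the maximally overlapping case, step=128 the 16 non-overlapping partitions
+    starts = range(0, len(seq) - window + 1, step)
+    return np.stack([seq[s:s + window] for s in starts])
+
+windows = sliding_windows(np.arange(2048), window=128, step=1)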
+Is it better to split my data into $16N$ non-overlapping sub-sequences or $1920N$ partially overlapping sub-sequences?
+"
+"['machine-learning', 'computational-learning-theory', 'bias-variance-tradeoff', 'approximation-error']"," Title: What's the difference between estimation and approximation error?Body: I'm unable to find online, or understand from context - the difference between estimation error and approximation error in the context of machine learning (and, specifically, reinforcement learning).
+Could someone please explain with the help of examples and/or references?
+"
+"['machine-learning', 'deep-learning', 'natural-language-processing', 'speech-recognition']"," Title: speech comment detection by deep speech mozilla for data setBody: I want to create a system so that when a human being says a word or command through a microphone, such as "shut down", the system can execute that command "shut down".
+I used the DeepSpeech algorithm on a Persian-language database; it takes audio data through a microphone and returns text. The problem I have now is what to do from this point on. I should explain that my system has to work offline and that the number of commands I have is limited.
+"
+"['categorical-data', 'sigmoid', 'multi-label-classification', 'softmax']"," Title: Should I use additional empty category in some categorical problems?Body: I try to create autonomous car using keyboard data so this is a multi class classification problem. I have keys W,A,S and D. So I have four categories. My model should decide what key should be pressed based on the screenshot (or some other data, I have some ideas). I have some API that I can use to capture keyboard data and screen (while gathering data) and also to simulate keyboard events (in autonomous mode when the car is driven by neural network).
+Should I create another category called for example "NOKEY"? I will use sigmoid function on each output neuron (instead of using softmax on the all neurons) to have probabilities from 0 to one for each category. But I could have very low probabilities for each neuron. And it can mean either that no key should be pressed or that the network doesn't know what to do. So maybe I should just create additional "artificial" category?
+What is the standard way to deal with such situations?
+"
+"['machine-learning', 'deep-learning', 'image-recognition', 'object-detection']"," Title: Machine Learning Techniques for Objects Location/Orientation in ImagesBody: what Machine Learning tool can understand in which location and orientation a picture was taken from?
+That is from pictures of similar objects, say for example pictures of car interiors.
+So given a side vent picture it will show me all the pictures with a side vent with similar views (if a picture shows a vent from the cockpit it will not show me pictures of vents taken from outside the car with open door).
+If the problem is too complicated for just one tool, could you address me to which particular field of Artificial Intelligence should I research in?
+this goes close to what I am looking for:
+https://ai.googleblog.com/2020/03/real-time-3d-object-detection-on-mobile.html
+But I thought to ask if there are more specific appropriate examples.
+"
+"['natural-language-processing', 'computer-vision', 'bert', 'transfer-learning', 'fine-tuning']"," Title: Why aren't the BERT layers frozen during fine-tuning tasks?Body: During transfer learning in computer vision, I've seen that the layers of the base model are frozen if the images aren't too different from the model on which the base model is trained on.
+However, on the NLP side, I see that the layers of the BERT model aren't ever frozen. What is the reason for this?
+"
+"['reinforcement-learning', 'tensorflow', 'actor-critic-methods', 'loss']"," Title: Variance of the Gaussian policy is not decreasing while training the agent using Soft Actor-Critic methodBody: I've written my own version of SAC(v2) for a problem with continuous action space. While training, the losses for the value network and both q functions steadily decrease down to 0.02-0.03. The loss for my actor/agent is negative and decreases to about -0.25 (I've read that it doesn't matter whether it is negative or not, but I'm not 100% sure). Despite that, the output variance from the Gaussian policy is way too high (make all the outcomes nearly uniformly likely) and is not decreasing during training.
+Does anyone know what can be a cause of that?
+My implementation is mostly based on https://github.com/keiohta/tf2rl/blob/master/tf2rl/algos/sac.py, but I resigned from computing td_errors.
+Here is the code (in case you need it).
+import tensorflow as tf
+from tensorflow.keras.layers import *
+from src.anfis.anfis_layers import *
+from src.model.sac_layer import *
+from src.anfis.anfis_model import AnfisGD
+
+hidden_activation = 'elu'
+output_activation = 'linear'
+
+
+class NetworkModel:
+ def __init__(self, training):
+
+ self.parameters_count = 2
+ self.results_count = 1
+ self.parameters_sets_count = [3, 4]
+ self.parameters_sets_total_count = sum(self.parameters_sets_count)
+
+ self.models = {}
+ self._initialise_layers() # initialises self.models[]
+
+ self.training = training
+
+ self.train()
+
+ def _initialise_layers(self):
+ # ------------
+ # LAYERS & DEBUG
+ # ------------
+
+ f_states = Input(shape=(self.parameters_count,))
+ f_actions = Input(shape=(self.results_count,))
+
+ # = tf.keras.layers.Dense(10)# AnfisGD(self.parameters_sets_count)
+ #f_anfis = model_anfis(densanf)#model_anfis(f_states)
+ f_policy_1 = tf.keras.layers.Dense(5, activation=hidden_activation)(f_states)
+ f_policy_2 = tf.keras.layers.Dense(5, activation=hidden_activation)(f_policy_1)
+ f_policy_musig = tf.keras.layers.Dense(2, activation=output_activation)(f_policy_2)
+ f_policy = GaussianLayer()(f_policy_musig)
+
+ #self.models["anfis"] = tf.keras.Model(inputs=f_states, outputs=f_anfis)
+ #self.models["forward"] = tf.keras.Model(inputs=f_states, outputs=model_anfis.anfis_forward(f_states))
+
+ self.models["actor"] = tf.keras.Model(inputs=f_states, outputs=f_policy)
+
+ self.models["critic-q-1"] = generate_q_network([f_states, f_actions])
+ self.models["critic-q-2"] = generate_q_network([f_states, f_actions])
+
+ self.models["critic-v"] = generate_value_network(f_states)
+ self.models["critic-v-t"] = generate_value_network(f_states)
+
+ # self.models["anfis"].compile(
+ # loss=tf.losses.mean_absolute_error,
+ # optimizer=tf.keras.optimizers.SGD(
+ # clipnorm=0.5,
+ # learning_rate=1e-3),
+ # metrics=[tf.keras.metrics.RootMeanSquaredError()]
+ # )
+ # self.models["forward"].compile(
+ # loss=tf.losses.mean_absolute_error,
+ # optimizer=tf.keras.optimizers.SGD(
+ # clipnorm=0.5,
+ # learning_rate=1e-3),
+ # metrics=[tf.keras.metrics.RootMeanSquaredError()]
+ # )
+ self.models["actor"].compile(
+ loss=tf.losses.mean_squared_error,
+ optimizer=tf.keras.optimizers.Adam(
+ learning_rate=1e-3),
+ metrics=[tf.keras.metrics.RootMeanSquaredError()]
+ )
+
+ def act(self, din):
+ data_input = tf.convert_to_tensor([din], dtype='float64')
+ data_output = self.models["actor"](data_input)[0]
+ return data_output.numpy()[0]
+
+ def train(self):
+ self.training.train(self, hybrid=False)
+
+
+def mean(y_true, y_pred): #ignore y_pred
+ return tf.reduce_mean(y_true)
+
+
+def generate_value_network(inputs):
+ # SAC Critic Value (Estimating rewards of being in state s)
+ f_critic_v1 = tf.keras.layers.Dense(5, activation=hidden_activation)(inputs)
+ f_critic_v2 = tf.keras.layers.Dense(5, activation=hidden_activation)(f_critic_v1)
+ f_critic_v = tf.keras.layers.Dense(1, activation=output_activation)(f_critic_v2)
+ m_value = tf.keras.Model(inputs=inputs, outputs=f_critic_v)
+ m_value.compile(
+ loss=tf.losses.mean_squared_error,
+ optimizer=tf.keras.optimizers.Adam(
+ learning_rate=1e-3),
+ metrics=[tf.keras.metrics.RootMeanSquaredError()]
+ )
+ return m_value
+
+
+def generate_q_network(inputs):
+ # SAC Critic Q (Estimating rewards of taking action a while in state s)
+ f_critic_q_concatenate = tf.keras.layers.Concatenate()(inputs)
+ f_critic_q1 = tf.keras.layers.Dense(5, activation=hidden_activation)(f_critic_q_concatenate)
+ f_critic_q2 = tf.keras.layers.Dense(5, activation=hidden_activation)(f_critic_q1)
+ f_critic_q = tf.keras.layers.Dense(1, activation=output_activation)(f_critic_q2)
+
+ m_q = tf.keras.Model(inputs=inputs, outputs=f_critic_q)
+ m_q.compile(
+ loss=tf.losses.mean_squared_error,
+ optimizer=tf.keras.optimizers.Adam(
+ learning_rate=1e-3),
+ metrics=[tf.keras.metrics.RootMeanSquaredError()]
+ )
+ return m_q;
+
+
+from src.model.training import Training
+import numpy as np
+import tensorflow as tf
+from src.constructs.experience_holder import ExperienceHolder
+
+
+class SACTraining(Training):
+
+ def __init__(self, environment):
+ super().__init__()
+ self.environment = environment
+ self.models = None
+ self.parameters_sets_count = None
+ self.parameters_sets_total_count = 0
+ self.parameters_count = 0
+
+ self.gamma = 0.99
+ self.alpha = 1.0
+ self.beta = 0.003
+ self.tau = 0.01
+
+ self.experience = ExperienceHolder(capacity=10000, cells=5) # state, action, reward, state', done
+
+ def train(self, simulation_model, **kwargs):
+ self.models = simulation_model.models
+ self.parameters_count = simulation_model.parameters_count
+ self.parameters_sets_count = simulation_model.parameters_sets_count
+ self.parameters_sets_total_count = simulation_model.parameters_sets_total_count
+
+ self.train_sac(
+ self.models,
+ epochs=300, max_steps=200, experience_batch=128, simulation=self.environment)
+
+ def train_sac(self, models, epochs, max_steps, experience_batch, simulation):
+
+ # deterministic random
+ np.random.seed(0)
+
+ history = []
+ epoch_steps = 128
+ simulation.reset()
+ update_net(models['critic-v'], models['critic-v-t'], 1.0)
+
+ for i in range(epochs):
+ print("epoch: ", i)
+ episode_reward = 0
+ reset = False
+ j = 0
+ while not(j > epoch_steps and reset):
+ j += 1
+ reset = False
+ # ---------------------------
+ # Observe state s and select action according to current policy
+ # ---------------------------
+
+ # Get simulation state
+ state_raw = simulation.get_normalised()
+ # state_unwound = [[i for t in state for i in t]]
+
+ state = [state_raw[0]] # TODO
+ state_tf = tf.convert_to_tensor(state)
+
+ # Get actions distribution from current model
+ # and their approx value from critic
+ actions_tf, _, _ = models['actor'](state_tf)
+ actions = list(actions_tf.numpy()[0])
+
+ # ---------------------------
+ # Execute action in the environment
+ # ---------------------------
+ reward, done = simulation.step_nominalised(actions)
+ episode_reward += reward
+
+ # ---------------------------
+ # Observe next state
+ # ---------------------------
+
+ state_l_raw = simulation.get_normalised()
+ state_l = [state_l_raw[0]] # TODO
+
+ # ---------------------------
+ # Store information in replay buffer
+ # ---------------------------
+
+ self.experience.save((state, actions, reward, state_l, 1 if not done else 0))
+
+ if done or simulation.step_counter > max_steps:
+ simulation.reset()
+ reset = True
+
+ # ---------------------------
+ # Updating network
+ # ---------------------------
+ if self.experience.size() > 500: # update_counter_limit:
+ exp = self.experience.replay(min(experience_batch, int(self.experience.size() * 0.8)))
+ states_tf = tf.convert_to_tensor(exp[0], dtype='float64')
+ actions_tf = tf.convert_to_tensor(exp[1], dtype='float64')
+ rewards_tf = tf.convert_to_tensor(exp[2], dtype='float64')
+ states_l_tf = tf.convert_to_tensor(exp[3], dtype='float64')
+ not_dones_tf = tf.convert_to_tensor(exp[4], dtype='float64')
+
+ with tf.GradientTape(watch_accessed_variables=True, persistent=True) as tape:
+
+ q_1_current = models['critic-q-1']([states_tf, actions_tf])
+ q_2_current = models['critic-q-2']([states_tf, actions_tf])
+ v_l_current = models['critic-v-t'](states_l_tf)
+
+ q_target = tf.stop_gradient(rewards_tf + not_dones_tf * self.gamma * v_l_current)
+ q_1_loss = tf.reduce_mean((q_target - q_1_current) ** 2)
+ q_2_loss = tf.reduce_mean((q_target - q_2_current) ** 2)
+
+ v_current = models['critic-v'](states_tf)
+ actions, policy_loss, sigma = models['actor'](states_tf)
+ q_1_policy = models['critic-q-1']([states_tf, actions_tf])
+ q_2_policy = models['critic-q-2']([states_tf, actions_tf])
+ q_min_policy = tf.minimum(q_1_policy, q_2_policy)
+
+ v_target = tf.stop_gradient(q_min_policy - self.alpha * policy_loss)
+ v_loss = tf.reduce_mean((v_target - v_current)**2)
+
+ a_loss = tf.reduce_mean(self.alpha * policy_loss - q_min_policy)
+
+ backward(tape, models['critic-q-1'], q_1_loss)
+ backward(tape, models['critic-q-2'], q_2_loss)
+ backward(tape, models['critic-v'], v_loss)
+ update_net(models['critic-v'], models['critic-v-t'], self.tau)
+
+ backward(tape, models['actor'], a_loss)
+
+ del tape
+
+ print('Loss:\n\tvalue: {}\n\tq1 : {}\n\tq2 : {}\n\tactor (ascent): {}'.format(
+ tf.reduce_mean(v_loss),
+ tf.reduce_mean(q_1_loss),
+ tf.reduce_mean(q_2_loss),
+ tf.reduce_mean(a_loss) #Gradient ascent
+
+ ))
+ print('Episode Reward: {}'.format(episode_reward))
+ print('Batch sigma: {}'.format(tf.reduce_mean(sigma)))
+
+
+def update_net(model, target, tau):
+ len_vars = len(model.trainable_variables)
+ for i in range(len_vars):
+ target.trainable_variables[i] = tau * model.trainable_variables[i] + (1.0 - tau) * target.trainable_variables[i]
+
+
+def backward(tape, model, loss):
+ grads = tape.gradient(loss, model.trainable_variables)
+ model.optimizer.apply_gradients(
+ zip(grads, model.trainable_variables))
+
+
+from tensorflow.keras import Model
+import tensorflow as tf
+import tensorflow_probability as tfp
+
+
+class GaussianLayer(Model):
+ def __init__(self, **kwargs):
+ super(GaussianLayer, self).__init__(**kwargs)
+
+ def call(self, inputs, **kwargs):
+ mu, log_sig = tf.split(inputs, num_or_size_splits=2, axis=1)
+
+ log_sig_clip = tf.clip_by_value(log_sig, -20, 2)
+ sig = tf.exp(log_sig_clip)
+
+ distribution = tfp.distributions.Normal(mu, sig)
+ output = distribution.sample()
+ actions = tf.tanh(output)
+
+ return actions, \
+ distribution.log_prob(output) - \
+ tf.reduce_sum(tf.math.log(1 - actions ** 2 + 1e-12), axis=1, keepdims=True), \
+ tf.stop_gradient(tf.keras.backend.abs(actions - tf.tanh(mu)))
+
+"
+"['datasets', 'objective-functions', 'linear-algebra']"," Title: How to find distance between 2 points when dimensions are all of different nature?Body: I have a dataset with four features:
+
+- the x coordinate
+- the y coordinate
+- the velocity magnitude
+- angle
+
+Now, I want to measure the distance between two points in the dataset, taking into account the fact that the angle dimension is toroidal, and the difference in nature of the dimensions (two of them are spatial coordinates, one of them is a velocity magnitude, and the other an angle).
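+To make the toroidal point concrete, this is the wrap-around difference I have in mind for the angle dimension (a sketch, angles in radians):
+import numpy as np
+
+def angle_diff(a, b):
+    # smallest absolute difference between two angles, accounting for wrap-around
+    d = np.abs(a - b) % (2 * np.pi)
+    return np.minimum(d, 2 * np.pi - d)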
+What kind of distance function would suit this need?
+If I have to go for an $L^p$ norm, can I determine which value of $p$ would be apt by some means?
+Also, if you are aware, please, let me know how such problems have been solved in various applications.
+"
+"['natural-language-processing', 'transformer', 'attention']"," Title: What is the purpose of Decoder mask (triangular mask) in Transformer?Body: I'm trying to implement transformer model using this tutorial. In the decoder block of the Transformer model, a mask is passed to "pad and mask future tokens in the input received by the decoder". This mask is added to attention weights.
+import tensorflow as tf
+
+def create_look_ahead_mask(size):
+ mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
+ return mask
+
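+For reference, my understanding is that for size=4 this produces the following matrix, where the 1s mark the future positions that get masked out (assuming I am reading the ops correctly):
+print(create_look_ahead_mask(4))
+# [[0. 1. 1. 1.]
+#  [0. 0. 1. 1.]
+#  [0. 0. 0. 1.]
+#  [0. 0. 0. 0.]]
+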
+Now my question is: how is this step (adding the mask to the attention weights) equivalent to revealing the words to the model one by one? I simply can't grasp the intuition of its role. Most tutorials won't even mention this step, as if it were very obvious. Please help me understand. Thanks.
+"
+"['machine-learning', 'unsupervised-learning', 'self-organizing-map']"," Title: How does dimensionality reduction occur in Self organizing Map (SOM)?Body: We have n dimension input for SOM and the output 2-D clusters. How does it happen?
+"
+"['object-recognition', 'transfer-learning', 'one-shot-learning', 'deep-face', 'open-face']"," Title: Is it ok to perform transfer learning with a base model for face recognition to perform one-shot learning for object classification?Body: I am trying to create a model that is using a one-shot learning approach for a classification task. We do this because we do not have a lot of data and it also seems like a good way to learn this approach (it is going to be a university project). The task would be to classify objects, probably from drone/satellite image (of course zoomed one).
+My question is, do you think it would be ok to use a model for face recognition, such as DeepFace or OpenFace, and, using transfer learning, retrain it on my classes?
+"
+"['natural-language-processing', 'recurrent-neural-networks', 'long-short-term-memory', 'transformer', 'attention']"," Title: Any comparison between transformer and RNN+Attention on the same dataset?Body: I am wondering what is believed to be the reason for superiority of transformer?
+I see that some people believe it is because of the attention mechanism used, which lets it capture much longer dependencies. However, as far as I know, you can also use attention with RNN architectures, as in the famous paper in which attention was introduced (here).
+I am wondering whether the only reason for the superiority of transformers is that they can be highly parallelized and trained on much more data.
+Is there any experiment comparing transformers and RNN+attention trained on the exact same amount of data?
+"
+"['comparison', 'logic', 'norvig-russell']"," Title: What is the difference between derivation and entailment?Body: In section 7.3 of the book Artificial Intelligence: A Modern Approach (3rd edition), it's written
+
+An inference algorithm that derives only entailed sentences is called sound or truth-preserving.
+
+
+The property of completeness is also desirable: an inference algorithm is complete if it can derive any sentence that is entailed.
+
+However, this does not make much sense to me. I'd like someone to kindly elaborate on this.
+"
+"['natural-language-processing', 'reference-request', 'regression']"," Title: What is the best algorithm to solve the regression problem of predicting the number of languages a Wikipedia article can be translated to?Body: I'm doing a student project where I construct a model predicting the number of languages that a given Wikipedia article is translated into (for example, the article TOYOTA is translated into 93 languages). I've tried extracting basic info (article length, number of links, etc.) to create a simple regression model, but can't get the $R^2$ value above $0.25$ or so.
+What's the most appropriate NLP algorithm for regression problems? Almost all examples I find online are classification problems. FYI I'm aware of the basics of NLP preprocessing (tokenization, lemmatization, bag of words, etc).
+"
+"['automated-theorem-proving', 'automated-reasoning']"," Title: Why is automated theorem proving so hard?Body: The problem of automated theorem proving (ATP) seems to be very similar to playing board games (e.g. chess, go, etc.): it can also be naturally stated as a problem of a decision tree traversal. However, there is a dramatic difference in progress on those 2 tasks: board games are successfully being solved by reinforcement learning techniques nowadays (see AlphaGo and AlphaZero), but ATP is still nowhere near to automatically proving even freshman-level theorems. What does make ATP so hard compared to board games playing?
+"
+"['ai-design', 'game-ai']"," Title: How to find the optimal pokemon teamBody: Pokemon is a game where 2 players each select 6 Pokemon (a team) at the beginning of the game without knowing the other player's team. Every Pokemon has one or two types. Every type is either weak, neutral or strong against every other type. This means that every 2 Pokemon matchup will either have a winner or be a tie. This also means that any team can be ranked against any other team based on the number of winning matchups they have.
+I want to write a program that can find the optimal Pokemon team out of a set of 70 provided Pokemon.
+A team is considered optimal if it has the greatest number of winning matchups against any other team. Basically, I want to calculate which team will have the most amount of favorable matchups if you were to battle it against every other possible team.
+What algorithm would be best for doing this? It is not feasible to compute matchups for every possible team. Can I do some sort of A* search with enough pruning to make it computationally feasible?
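+For concreteness, this is roughly how I score one team against another (a sketch; matchup(a, b) is a hypothetical precomputed function returning 1 if a beats b, 0 for a tie and -1 otherwise):
+from itertools import combinations
+
+def team_score(team_a, team_b, matchup):
+    # number of winning 1-v-1 matchups team_a has against team_b
+    return sum(1 for a in team_a for b in team_b if matchup(a, b) == 1)
+
+# the brute force I want to avoid: all C(70, 6) ~ 131 million candidate teams
+# for team in combinations(range(70), 6): ...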
+"
+"['neural-networks', 'deep-learning', 'capsule-neural-network']"," Title: What is the status of the capsule networks?Body: What is the status of the capsule networks?
+I got an impression that capsule networks turned out not to be so useful in applications more complicated than the MNIST (at least according to this reddit discussion).
+Is this really the case? Or can they be a promising research direction (and if so, is there any specific application for which they seem the most promising)?
+"
+"['bayesian-probability', 'expectation-maximization', 'bayesian-neural-networks']"," Title: Why is the E step in expectation maximisation algorithm called so?Body: The E step on the EM algorithm asks us to set the value of the variational lower bound to be equal to the posterior probability of the latent variable, given the data points and parameters. Clearly we are not taking any expectations here, then why is it called the Expectation step? Am I missing something here?
+"
+"['reinforcement-learning', 'environment', 'function-approximation', 'sutton-barto']"," Title: Why do all states appear identical under the function approximation in the Short Corridor task?Body: This is the Short Corridor problem taken from the Sutton & Barto book. Here it's written:
+
+The problem is difficult because all the states appear identical under the function approximation
+
+But this doesn't make much sense as we can always choose states as 0,1,2 and corresponding feature vectors as
+x(S=0, right) = [1 0 0 0 0 0]
+x(S=0, left)  = [0 1 0 0 0 0]
+x(S=1, right) = [0 0 1 0 0 0]
+x(S=1, left)  = [0 0 0 1 0 0]
+x(S=2, right) = [0 0 0 0 1 0]
+x(S=2, left)  = [0 0 0 0 0 1]
+So why is it written that all the states appear identical under the function approximation?
+
+"
+"['optimization', 'prediction']"," Title: Root finding in Deep Equilibrium ModelsBody: In the Deep Equilibrium Model the neural network can be seen as "infinitely deep". Training learns a nonlinear function as usual. But there is no forward propagation of input data through layers. Instead, a root finding problem is solved when data comes in.
+My question is: what is actually the function whose roots are searched for? I'm struggling to see what would be unknown when the data is available and the parameters have been found in training.
+"
+"['natural-language-processing', 'python', 'named-entity-recognition']"," Title: Determining if an entity in free text is 'present' or 'absent'; what is this called in NLP?Body: I'm processing a semi-structured scientific document and trying to extract some specific concepts. I've actually made quite good progress without machine-learning so far, but I got to a block of true free text and I'm wondering whether a very narrow sense NLP/learning algorithm can help.
+Specifically, there are concepts I know to be important that are discussed in this section, but I'll need some NLP to get the 'sentiment'. I thought this might be 'entity sentiment' analysis; however, I'm not trying to capture the writer's emotion about a concept. It's literally whether the writer of the text thinks the entity is present or absent, or is uncertain about it.
+Simple example. "First, there are horns. And second, the sheer size of this enormous fossil record argues for an herbivore or omnivore. The jaws are large, but this is not a carnivore."
+And say my entities are horns (presence or absence), and type of dinosaur (herbivore, omnivore, carnivore). Desired output:
+Horns (present)
+Carnivore (absent)
+Herbivore (possible/present) -- fine if it thinks 'present'
+Omnivore (possible/present) -- fine if it thinks 'present'
+
+What is the class of NLP analysis that takes an explicit input entity (or list of entities) and tries to assess based on context whether that entity is present or absent according to the writer of the input text? It's actually fine if this isn't a learning algorithm (maybe better). Bonus if you have suggestions for python packages that could be used in this narrow sense. I've looked casually through NLTK and spacy packages but they're both vast, wasn't obvious which class of model or functions I'd need to solve this problem.
+"
+"['reinforcement-learning', 'comparison', 'probability-distribution', 'kl-divergence', 'total-variational-distance']"," Title: When should one prefer using Total Variational Divergence over KL divergence in RLBody: In RL, both the KL divergence (DKL) and Total variational divergence (DTV) are used to measure the distance between two policies. I'm most familiar with using DKL as an early stopping metric during policy updates to ensure the new policy doesn't deviate much from the old policy.
+I've seen DTV mostly being used in papers giving approaches to safe RL when placing safety constraints on action distributions. Such as in Constrained Policy Optimization and Lyapunov Approach to safe RL.
+I've also seen that they are related by Pinsker's inequality:
+$$
+D_{TV} \le \sqrt{0.5\, D_{KL}}
+$$
+When you compute the $D_{KL}$ between two policies, what does that tell you about them, and how is it different from what a $D_{TV}$ between the same two policies tells you?
+Based on that, are there any specific instances to prefer one over the other?
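+As a concrete illustration of what the two quantities measure, here is a small sketch comparing them for two made-up discrete action distributions, using the standard definitions $D_{KL}(P\|Q)=\sum_i p_i \log(p_i/q_i)$ and $D_{TV}(P,Q)=\frac{1}{2}\sum_i |p_i - q_i|$:
+import numpy as np
+
+p = np.array([0.7, 0.2, 0.1])   # old policy's action distribution (illustrative)
+q = np.array([0.5, 0.3, 0.2])   # new policy's action distribution (illustrative)
+
+# KL divergence: expectation (under p) of the log-likelihood ratio; asymmetric,
+# and it blows up when q puts near-zero mass where p does not.
+d_kl = np.sum(p * np.log(p / q))
+
+# Total variation: half the L1 distance; symmetric, bounded in [0, 1], and equal
+# to the largest difference in probability the two distributions assign to any event.
+d_tv = 0.5 * np.sum(np.abs(p - q))
+
+print(f"D_KL = {d_kl:.4f}, D_TV = {d_tv:.4f}")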
+"
+"['reinforcement-learning', 'rewards', 'function-approximation', 'sutton-barto']"," Title: How do we derive the expression for average reward setting in continuing tasks?Body: In the average reward setting we have:
+$$r(\pi)\doteq \lim_{h\rightarrow\infty}\frac{1}{h}\sum_{t=1}^{h}\mathbb{E}[R_{t}|S_0,A_{0:t-1}\sim\pi]$$
+$$r(\pi)\doteq \lim_{t\rightarrow\infty}\mathbb{E}[R_{t}|S_0,A_{0:t-1}\sim\pi]$$
+How is the second equation derived from the first?
+"
+"['reinforcement-learning', 'gradient-descent', 'function-approximation', 'sutton-barto']"," Title: Why is the fraction of time spent in state $s$, $\mu(s)$, not in the update rule of the parameters?Body: I am reading "Reinforcement Learning: An Introduction (2nd edition)" authored by Sutton and Barto. In Section 9, On-policy prediction with approximation, it first gives the mean squared value error objective function in (9.1):
+$\bar{VE}(\boldsymbol{w}) = \sum_{s \in S} \mu(s)[v_{\pi}(s) - \hat{v}(s,\boldsymbol{w})]^2$. (9.1)
+$\boldsymbol{w}$ is a vector of the parameterized function $\hat{v}(s,\boldsymbol{w})$ that approximates the value function $v_{\pi}(s)$. $\mu(s)$ is the fraction of time spent in $s$, which measures the "importance" of state $s$ in $\bar{VE}(\boldsymbol{w})$.
+In (9.4), it states an update rule of $\boldsymbol{w}$ by gradient descent:
+$\boldsymbol{w}_{t+1} = \boldsymbol{w}_t -\frac{1}{2}\alpha \nabla[v_{\pi}(S_t) - \hat{v}(S_t,\boldsymbol{w}_t)]^2$. (9.4)
+I have two questions regarding (9.4).
+
+- Why $\mu(s)$ is not in (9.4)?
+- Why is it the "minus" instead of "+" in (9.4)? In other words, why is it $\boldsymbol{w} -\frac{1}{2}\alpha \nabla[v_{\pi}(S_t) - \hat{v}(S_t,\boldsymbol{w})]^2$ instead of $\boldsymbol{w} +\frac{1}{2}\alpha \nabla[v_{\pi}(S_t) - \hat{v}(S_t,\boldsymbol{w})]^2$?
+
+"
+"['computer-vision', 'training', 'object-detection', 'supervised-learning', 'semi-supervised-learning']"," Title: Is training a CNN object detector on an image containing multiple targets that are not all annotated will teach it to miss targets?Body: I want to train a convolutional neural network for object detection (say YOLO) to detect faces. Consider this image:
+
+In this training image, there are many people, but only 2 of them are annotated. Will having this kind of image (where the target instances are not all annotated) train the network to ignore positives?
+If yes, are there any techniques to solve the issue apart from annotating all the data (I don't have enough resources to do that)?
+"
+"['machine-learning', 'performance', 'ensemble-learning']"," Title: Would performance of atomic models matter in ensemble methods?Body: Suppose I have two fitted ensemble models $F_1 := (f_1, f_2, f_3, \cdots f_n)$ and $G_1 := (g_1, g_2, g_3, \cdots g_n)$.
+And they were using the same ensemble methods (boosting or bagging).
+And I am using some measurement of model performance $M: f_i \to \mathbb{R}^+$, the higher the better.
+And I know beforehand that $M(f_i) \gt M(g_i), \forall i \in [1,n]$; can I conclude $M(F_1) \gt M(G_1)$?
+"
+"['papers', 'resource-request', 'implementation']"," Title: What are some alternatives to ""Papers with Code""?Body: There are lots of research papers available that are worth reading. We can read papers easily, but the associated code (not necessarily the official one developed by the authors of the paper) is often not available.
+Papers with Code (and the associated Github repo) already lists many research papers and often there is a link to the associated Github repo with the code, but sometimes the code is missing. So, are there alternatives to Papers with Code (for such cases)?
+"
+"['machine-learning', 'deep-learning', 'deep-neural-networks', 'explainable-ai', 'black-box']"," Title: What exactly is an interpretable machine learning model?Body: From this page in Interpretable-ml book and this article on Analytics Vidhya, it means to know what has happened inside an ML model to arrive at the result/prediction/conclusion.
+In linear regression, new data will be multiplied with weights and bias will be added to make a prediction.
+And in boosted tree models, it is possible to plot all the decision trees that result in a prediction.
+And in feed-forward neural networks, we will have weights and biases just like linear regression and we just multiply weights and add bias at each layer, limiting values to some extent using some kind of activation function at every layer, arriving finally at prediction.
+In CNNs, it is possible to see what happens to the input after having passed through a CNN block and what features are extracted after pooling (ref: what does a CNN see?).
+As I stated above, one can easily know what happens inside an ML model to arrive at a prediction or conclusion, so I am unclear as to what makes them un-interpretable. What exactly makes an algorithm or its results un-interpretable, and why are these called black-box models? Or am I missing something?
+"
+"['classification', 'objective-functions', 'loss', 'cross-entropy']"," Title: Loss function for better class separability in multi class classificationBody: So I am trying to enforce better separability in my deep learning model and was wondering what I can use besides cross entropy loss to do that? Could maybe using logarithm with different basis in cross entropy (i.e. using lower basis of logarithm than $e$ to gain steeper losses on small values, or bigger basis of logarithm to enforce plateaued losses). What would you suggest on doing?
+"
+"['neural-networks', 'reinforcement-learning', 'training', 'intelligent-agent', 'multi-agent-systems']"," Title: How to train the NN of simple agents given a reward system?Body: I'm not an expert in AI or NN, I gathered most of the information I have from the internet, and I'm looking for advice and guidance.
+I'm trying to design a NN that is going to be used by all the agents of my simulation (each agent will have its own matrix of weights). This is what I plan to have:
+
+- The NN will have 1 input layer and 1 output layer (no hidden layers).
+- The number of inputs will always be greater than the number of outputs.
+- The outputs represent the probability of an action being taken by the agent (the output node with the highest value will identify the action that will be taken), which means there are as many output nodes as there are actions.
+
+When an agent takes an action it receives a reward: a number that represents how well the agent performed. This happens "online" that is, the agent is trained on the fly.
+What I would like to know is how to best train the NN: that is, how to update the weights of my matrix to maximise the rewards in the long term.
+From the research I have done, it seems this is close to the concept of Reinforcement Learning, but even if it is, it's not clear to me how to apply it to such a simple NN shape.
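+For reference, here is a minimal sketch of the kind of update typically used in this setting: a REINFORCE-style policy gradient on a single-layer softmax policy. The sizes, learning rate, and the random observations/rewards in the loop are illustrative assumptions, not part of my actual simulation:
+import numpy as np
+
+n_inputs, n_actions = 8, 3              # illustrative sizes
+W = np.zeros((n_actions, n_inputs))     # one weight matrix per agent
+lr = 0.01
+
+def softmax(z):
+    e = np.exp(z - z.max())
+    return e / e.sum()
+
+def act(x):
+    """Sample an action from the softmax over the single layer's outputs."""
+    probs = softmax(W @ x)
+    return np.random.choice(n_actions, p=probs), probs
+
+def update(x, action, reward, probs):
+    """REINFORCE: push up the log-probability of the taken action, scaled by the reward."""
+    global W
+    grad_logp = -np.outer(probs, x)      # d log pi / d W, term shared by all rows
+    grad_logp[action] += x               # plus the extra term for the chosen action's row
+    W += lr * reward * grad_logp
+
+# illustrative online loop with stand-in observations and rewards
+for _ in range(100):
+    x = np.random.rand(n_inputs)
+    a, probs = act(x)
+    r = np.random.randn()
+    update(x, a, r, probs)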
+"
+"['feedforward-neural-networks', 'residual-networks', 'binary-classification']"," Title: How to use residual learning applied to fully connected networks?Body: Is there any reason why skip connections would not provide the same benefits to fully connected layers as it does for convolutional?
+I've read the ResNet paper and it says that the applications should extend to "non-vision" problems, so I decided to give it a try for a tabular data project I'm working on.
+Try 1: My first try was to only use skip connections when the input to a block matched the output size of the block (the block has depth - 1 layers with in_dim nodes plus a layer with out_dim nodes):
+class ResBlock(nn.Module):
+ def __init__(self, depth, in_dim, out_dim, act='relu', act_first=True):
+ super().__init__()
+ self.residual = nn.Identity()
+ self.block = block(depth, in_dim, out_dim, act)
+ self.ada_pool = nn.AdaptiveAvgPool1d(out_dim)
+ self.activate = get_act(act)
+ self.apply_shortcut = (in_dim == out_dim)
+
+    def forward(self, x):
+        if self.apply_shortcut:
+            residual = self.residual(x)
+            x = self.block(x)
+            return self.activate(x + residual)
+        return self.activate(self.block(x))
+
+The accompanying loss curve:
+
+Try 2: I thought to myself "Great, it's doing something!", so then I decided to reset and go for 30 epochs from scratch. I don't have the image saved, but this training only made it 5 epochs and then the training and validation loss curves exploded by several orders of magnitude:
+
+Try 3: Next, I decided to try to implement the paper's idea of reducing the input size to match the output when they don't match: y = F(x, {Wi}) + Mx. I chose average pooling in place of the matrix M to accomplish this, and my loss curve became:
+
+The only difference in my code is that I added average pooling so I could use shortcut connections when input and output sizes are different:
+class ResBlock(nn.Module):
+ def __init__(self, depth, in_dim, out_dim, act='relu', act_first=True):
+ super().__init__()
+ self.residual = nn.Identity()
+ self.block = block(depth, in_dim, out_dim, act)
+ # squeeze/pad input to output size
+ self.ada_pool = nn.AdaptiveAvgPool1d(out_dim)
+ self.activate = get_act(act)
+ self.apply_pool = (in_dim != out_dim)
+
+ def forward(self, x):
+ # if in and out dims are different apply the padding/squeezing:
+ if self.apply_pool:
+ residual = self.ada_pool(self.residual(x).unsqueeze(0)).squeeze(0)
+ else: residual = self.residual(x)
+
+ x = self.block(x)
+ return self.activate(x + residual)
+
+Is there a conceptual error in my application of residual learning? A bug in my code? Or is residual learning just not applicable to this kind of data/network?
+"
+"['neural-networks', 'regression', 'hidden-layers', 'non-linear-regression', 'universal-approximation-theorems']"," Title: When are multiple hidden layers necessary?Body: I know that my question probably seems like being asked many times, but Ill try
+to be more speciffic:
+Limitations to my question:
+
+- I am NOT asking about convolutional neural networks, so please, try not to mention this as an example or as an answer as long as it is possible. (maybe only in question number 3)
+
+- My question is NOT about classification using neural networks
+
+- I am asking about a "simple" neural network designed to solve the regression type of problem. Let's say it has 2 inputs and 1 output.
+
+
+Preamble:
+As far as I understand from the universal approximation theorem, in such a case, even if the model is nonlinear, a single hidden layer can fit a nonlinear model arbitrarily well, as shown here:
+http://neuralnetworksanddeeplearning.com/chap4.html.
+Question 1
+In this specific case, is there any added value in using extra layers?
+(maybe the model will be more precise, or faster training?)
+Question 2
+Suppose the answer to question 1 is that there is no added value. In such a case, will added value appear if I enlarge the number of inputs from the two described above to some larger number?
+Question 3
+Suppose the answer to question 2 is that there is no added value. I am still trying to pinpoint the situation where it STARTS making sense to add more layers AND where it makes NO sense at all to use only one layer.
+"
+"['research', 'academia', 'dimensionality-reduction', 'data-visualization']"," Title: How do AI researchers imagine higher dimensions?Body: We can visualize single, two, and three dimensions using websites or imagination.
+In the context of AI and, in particular, machine learning, AI researchers often have to deal with multi-dimensional random vectors.
+Suppose if we consider a dataset of human faces, each image is a vector in higher dimensional space and needs to understand measures on them.
+How do they imagine them?
+I can only imagine with 3D and then approximating with higher dimensions. Is there any way to visualize in higher dimensions for research?
+"
+"['convolutional-neural-networks', 'transformer', 'ensemble-learning', 'random-forests', 'gradient-boosting']"," Title: When do the ensemble methods beat neural networks?Body: In many applications and domains, computer vision, natural language processing, image segmentation, and many other tasks, neural networks (with a certain architecture) are considered to be by far the most powerful machine learning models.
+Nevertheless, algorithms, based on different approaches, such as ensemble models, like random forests and gradient boosting, are not completely abandoned, and actively developed and maintained by some people.
+Do I understand correctly that neural networks, despite being very flexible universal approximators, are not the optimal models for certain kinds of tasks, regardless of the choice of architecture?
+For the tasks in computer vision, the core feature, which makes CNNs superior, is the translational invariance and the encoded ability to capture the proximity properties of an image or some sequential data. And the more recent transformer models have the ability to choose which of the neighboring data properties is more important for its output.
+But let's say I have a dataset, without a certain structure and patterns, some number of numerical columns, a lot of categorical columns, and in the feature space (for classification task) the classes are separated by some nonlinear hypersurface, would the ensemble models be the optimal choice in terms of performance and computational time?
+In this case, I do not see a way to exploit CNNs or attention-based neural networks. The only thing that comes to my head, in this case, is the ordinary MLP. It seems that, on the one hand, it would take significantly more time to train the weights than the trees from the ensemble. On the other hand, both kinds of models work without putting prior knowledge to data and assumptions on its structure. So, given enough amount of time, it should give a comparable quality.
+Or can there be some reasoning that neural network is sometimes bound to give rather a poor quality?
+"
+"['machine-learning', 'deep-learning', 'natural-language-processing', 'comparison', 'object-detection']"," Title: What is the difference between text-based image retrieval and natural language object retrieval?Body: I'm working on creating a model that locates the object in the scene (2D image or 3D scene) using a natural language query. I came across this paper on natural language object retrieval, which mentions that this task is different from text-based image retrieval, in the sense that natural language object retrieval requires an understanding of objects in the image, spatial configurations, etc. I am not able to see the difference between these two approaches. Could you please explain it with an example?
+"
+"['reinforcement-learning', 'markov-decision-process', 'model-based-methods']"," Title: Is the state transition matrix known to the agents in a Markov decision processes?Body: The question is more or less in the title.
+A Markov decision process consists of a state space, a set of actions, the transition probabilities and the reward function. If I now take an agent's point of view, does this agent "know" the transition probabilities, or is the only thing that he knows the state he ended up in and the reward he received when he took an action?
+"
+"['backpropagation', 'gradient-descent', 'time-complexity', 'gated-recurrent-unit', 'bptt']"," Title: What is the time complexity for training a gated recurrent unit (GRU) neural network using back-propagation through time?Body: Let us assume we have a GRU network containing $H$ layers to process a training dataset with $K$ tuples, $I$ features, and $H_i$ nodes in each layer.
+I have a pretty basic idea of how the complexity of algorithms is calculated. However, with multiple factors affecting the performance of a GRU network (the number of layers, the amount of training data, which needs to be large, the number of units in each layer, the number of epochs, possibly regularization techniques, and training with back-propagation through time), I am confused. I have found an intriguing answer about neural network complexity here: What is the time complexity for training a neural network using back-propagation?, but that was not enough to clear my doubt.
+So, what is the time complexity of training a GRU network using back-propagation through time?
+"
+"['reinforcement-learning', 'python', 'robotics']"," Title: Difficulty in agent's learning with increasing dimensions of continuous actionsBody: I have been working on some RL project, where the policy is controlling the robot using its joint angles.Throughout the project I have noticed some phenomenon, which caught my attention. I have decided to create a very simplified script to investigate the problem. There it goes:
+The environment
+There is a robot with two rotational joints, so 2 degrees of freedom. This means its continuous action space (joint rotation angle) has a dimensionality of 2. Let's denote this action vector by a. I vary the maximum joint rotation angle per step from 11 to 1 degrees and make sure that the environment is allowed to do a reasonable number of steps before the episode is forced to terminate on time-out.
+Our goal is to move the robot by getting its current joint configuration c closer to the goal joint angle configuration g (also a two-dimensional input vector).
+Hence, the reward I have chosen is e^(-L2_distance(c, g)).
+The smaller the L2_distance, the exponentially higher the reward, so I am sure that the robot is properly incentivised to reach the goal quickly.
+Reward function (y-axis: reward, x-axis: L2 distance):
+
+So the pseudocode for every step goes like:
+
+- move the joints by predicted joint angle delta
+
+- collect the reward
+
+- if time-out or joint deviates too much into some unrealistic configuration: terminate.
+
+
+Very simple environment, not to have too many moving parts in our problem.
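+Purely to make the setup above concrete, here is a minimal gym-style sketch of the step/reward logic I describe (the joint limits, step budget, and per-step rotation are placeholder values):
+import numpy as np
+
+class JointReachEnv:
+    def __init__(self, n_joints=2, max_step_deg=11.0, max_steps=200):
+        self.n = n_joints
+        self.max_step = np.deg2rad(max_step_deg)   # max joint rotation per step
+        self.max_steps = max_steps
+        self.limit = np.pi                         # placeholder joint limit
+
+    def reset(self):
+        self.c = np.zeros(self.n)                         # current joint angles
+        self.g = np.random.uniform(-1.0, 1.0, self.n)     # goal joint angles
+        self.t = 0
+        return np.concatenate([self.c, self.g])
+
+    def step(self, action):
+        # action in [-1, 1]^n, scaled to the allowed per-step rotation
+        self.c = self.c + np.clip(action, -1, 1) * self.max_step
+        self.t += 1
+        dist = np.linalg.norm(self.c - self.g)
+        reward = np.exp(-dist)                     # e^(-L2_distance(c, g))
+        done = (self.t >= self.max_steps) or bool(np.any(np.abs(self.c) > self.limit))
+        return np.concatenate([self.c, self.g]), reward, done, {}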
+RL algorithm
+I use Catalyst framework to train my agent in the actor-critic setting using TD3 algorithm. By using a tested framework, which I am quite familiar with, I am quite sure that there are no implementational bugs.
+The policy is goal-driven so the actor consumes the concatenated current and goal joint configuration a= policy([c,g])
+The big question
+When the robot has only two degrees of freedom, the training quickly converges and the robot learns to solve the task with high accuracy (final L2 distance smaller than 0.01).
+Performance of the converged 2D agent. y-axis: joint angle value, x-axis: no of episodes. Crosses denote the desired goal state of the robot.:
+
+However, if the problem gets more complicated (when I increase the joint dimensions to 4D or 6D), the robot initially learns to approach the target, but it never "fine-tunes" its movement. Some joints tend to oscillate around the end-point, and some of them tend to overshoot.
+I have been experimenting with different ideas: making the network wider and deeper, and changing the action step. I have not tried optimizer scheduling yet. No matter how many samples the agent receives or how long it trains, it never learns to approach targets with the required degree of accuracy (L2_distance smaller than 0.05).
+Performance of the converged 4D agent. y-axis: joint angle value, x-axis: no of episodes. Crosses denote the desired goal state of the robot.:
+
+Training curve for 2D agent (red) and 4D agent (orange). 2D agent quickly minimises the L2 distance to something smaller than 0.05, while the 4D agent struggles to go below 0.1.:
+
+Literature research
+I have looked into papers which describe motion planning in joint space using TD3 algorithm.
+There are not many differences from my approach:
+Link 1
+Link 2
+Their problem is much more difficult because the policy also needs to learn a model of the obstacles in joint space, not only the notion of the goal. The only thing that is special about them is that they use quite wide and shallow networks.
+I am really interested in what you guys would advise me to do so that the robot can reach high accuracy in higher joint configuration dimensions. What am I missing here?!
+Thanks for any help in that matter!
+"
+"['reinforcement-learning', 'comparison', 'definitions', 'reward-to-go']"," Title: What is the return-to-go in reinforcement learning?Body: In reinforcement learning, the return is defined as some function of the rewards. For example, you can have the discounted return, where you multiply the rewards received at later time steps by increasingly smaller numbers, so that the rewards closer to the current time step have a higher weight. You can also have $n$-step returns or $\lambda$-returns.
+Recently, I have come across the concept of return-to-go in a few research papers, such as Prioritized Experience Replay (appendix A. Prioritization Variants, p. 12) or Being Optimistic to Be Conservative: Quickly Learning a CVaR Policy (section Theoretical Analysis, p. 3).
+What exactly is the return-to-go? How is it mathematically defined? In which situations do we need to care about it? The name suggests that this is the return starting from a certain time step $t$, but wouldn't this be the same thing as the return (which is defined starting from a certain time step $t$ and often denoted as $G_t$ for that same reason)?
+There is also the concept of reward-to-go. For example, the reward-to-go is analyzed in the paper Learning the Variance of the Reward-To-Go, which states that the expected reward-to-go is the value function, which seems to be consistent with this explanation of the reward-to-go, where the reward-to-go is defined as
+$$\hat{R}_{t} \doteq \sum_{t^{\prime}=t}^{T} R\left(s_{t^{\prime}}, a_{t^{\prime}}, s_{t^{\prime}+1}\right)$$
+We also had a few questions that involve the reward-to-go: for example, this or this. How is the return-to-go related to the reward-to-go? Are they the same thing? For example, in this paper, the return-to-go seems to be used as a synonym for reward-to-go (as used in this article), i.e. they call $R(t)$ the "return to-go" (e.g. on page 2), which should be the return starting from time step $t$, which should actually be the reward-to-go.
+"
+"['activation-functions', 'hyper-parameters', 'weights', 'pretrained-models']"," Title: Which hyperparameters in neural network are accesible to users adjustmentBody: I am new to Neural Networks and my questions are still very basic.
+I know that most neural network frameworks allow and even ask the user to choose hyper-parameters like:
+
+- number of hidden layers
+- number of neurons in each layer
+- number of inputs and outputs
+- batch size and number of epochs, and some stuff related to back-propagation and gradient descent
+
+But as I keep reading and watching YouTube, I understand that there are other important "mini-parameters", such as:
+
+- activation functions type
+
+- activation functions fine-tuning (for example shift and slope of sigmoid)
+
+
+- whether there is an activation function in the output
+
+- range of weights (are they from zero to one or from -1 to 1 or -100 to +100 or any other range)
+
+- are the weights normally distributed or are they just random
+
+
+etc...
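+For example, this small PyTorch sketch shows the kind of control I am asking about (the specific values are arbitrary and only illustrative):
+import torch
+import torch.nn as nn
+
+# Illustrative only: a tiny network where several of these "mini-parameters"
+# are chosen explicitly rather than left to the framework's defaults.
+model = nn.Sequential(
+    nn.Linear(4, 8),
+    nn.Tanh(),          # activation function type chosen by the user
+    nn.Linear(8, 1),    # no activation on the output (pure linear output)
+)
+
+# Weight range / distribution chosen explicitly instead of the default init:
+for layer in model:
+    if isinstance(layer, nn.Linear):
+        nn.init.uniform_(layer.weight, -1.0, 1.0)   # e.g. weights in [-1, 1]
+        nn.init.zeros_(layer.bias)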
+Actually the question is:
+Part a:
+Do I understand right that most neural network frameworks do not allow you to change those "mini-parameters", as long as you are using "ready-made" solutions?
+In other words, if I want to have access to those "mini-parameters", do I need to program the whole neural network by myself, or are there "semi-finished products"?
+Part b:(edited)
+For someone who uses neural networks as an everyday routine tool to solve problems (like a data scientist), how commonly do such people deal with fine-tuning the things which I refer to as "mini-parameters"? Or are those parameters usually adjusted by the neural network developers who create frameworks like pytorch, tensorflow, etc.?
+Thank you very much
+"
+"['genetic-algorithms', 'optimization', 'novelty-search']"," Title: Measuring novel configuration of pointsBody: I am trying to implement Novelty search; I understand why it can work better than the standard Genetic Algorithm based solution which just rewards according to the objective.
+I am working on a problem which requires generating a fixed number of points in a 2d box centered at the origin.
+In this problem, how can I identify which is a novel configuration of points?
+Note: I have thought of one way of doing this: we call the mean of one configuration of points the mean of all points in that configuration (let's say this tuple is $(m_x, m_y)$); we store the means of all configurations generated till now, and for a new configuration its novelty can be defined as the distance of the mean of this new configuration from $(m_x, m_y)$.
+But I think it will not work greatly as some very different configuration of points can also have the same mean.
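+To make the note above concrete, here is a small sketch of one possible reading of that mean-based novelty score (keeping every configuration's mean and scoring a new configuration by its distance to the nearest stored mean; the configuration shape and the archive handling are illustrative assumptions):
+import numpy as np
+
+archive_means = []   # per-configuration means seen so far
+
+def novelty(config):
+    """config: array of shape (n_points, 2).
+    Novelty = distance of its mean to the closest mean already in the archive."""
+    m = config.mean(axis=0)
+    if archive_means:
+        score = min(float(np.linalg.norm(m - prev)) for prev in archive_means)
+    else:
+        score = float("inf")   # the first configuration is maximally novel
+    archive_means.append(m)
+    return score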
+"
+"['machine-learning', 'deep-learning', 'training', 'recurrent-neural-networks', 'backpropagation']"," Title: How does back-propagation through time work for optimizing the weights of a bidirectional RNN?Body: I am aware that back-propagation through time is used for training the recurrent neural network. But I am not able to understand how this happens for the bi-directional versions of the recurrent neural networks?
+So, I was hoping someone could help me with:
+
+- Understanding with an example the training of bi-directional recurrent neural networks using back-propagation through time? (I tried following the original paper https://ieeexplore.ieee.org/document/650093, but it was kind of confusing for me when they perform the backward pass for training)
+
+"
+"['neural-networks', 'gradient-descent', 'cross-entropy', 'mean-squared-error', 'softmax']"," Title: What is the advantage of using cross entropy loss & softmax?Body: I am trying to do the standard MNIST dataset image recognition test with a standard feed forward NN, but my network failed pretty badly. Now I have debugged it quite a lot and found & fixed some errors, but I had a few more ideas. For one, I am using the sigmoid activation function and MSE as an error function, but the internet suggests that I should rather use softmax for the output layer, and cross entropy loss as an error function. Now I get that softmax is a nice activation function for this task, because you can treat the output as a propability vector. But, while being a nice thing to have, that's more of a convinience thing, isn't it? Easier to visualize?
+But when I looked at what the derivative of softmax & CEL combined is (my plan was to compute that in one step and then treat the activation function of the last layer as linear, so as not to apply the softmax derivative again), I found:
+$\frac{\partial E}{\partial i} = t - o$
+(With $i$ being the input of the last layer, $t$ the one hot target vector and $o$ the prediction vector).
+That is the same as the MSE derivative. So what benefits does softmax + CEL actually have when propagating, if the gradients produced by them are exactly the same?
+"
+"['reinforcement-learning', 'q-learning', 'dqn']"," Title: Reinforcement learning simple problem: agent not learning, wrong actionBody: I am pretty new to RL and I am trying to code a simple RL task with pytorch.
+The goal/task is the following:
+The initial state is $t_0$ and the agent takes an action $\Delta_t$: $t_0 +\Delta_t = t_1$.
+If $t_1$ equals 450 or 475, then it gets a reward; otherwise, it does not get a reward.
+I am training the agent with the DQN algorithm on a NN (with 2 linear layers: the first layer has n_in=1 and n_out=128, and the second layer has n_in=128 and n_out=5):
+
+observation space($t_i$) is 700 --> $t_i \in [0,700[$
+action space ($\Delta_t$) is 5 --> ($\Delta_t \in [-50,-25,0,25,50]$)
+
+
+epsilon_start=0.9    # e-greedy threshold start value
+epsilon_end=0.01     # e-greedy threshold end value
+epsilon_decay=200    # e-greedy threshold decay
+learning_rate=0.001  # NN optimizer learning rate
+batch_size=64        # Q-learning batch size
+
+Unfortunately, it does not seem to converge to the values $t_i = 450$ or $475$. It doesn't seem to care about getting a reward.
+How can I improve my code so that the agent learns what I am trying to teach it?
+I put my code below in case the explanations were not clear enough:
+
+import numpy as np
+import gym
+from gym import spaces
+
+class RL_env(gym.Env):
+ metadata = {'render.modes': ['human']}
+
+
+ def __init__(self):
+ super(RL_env, self).__init__()
+
+ n_actions_delta = 1 #delta_t
+ self.action_space = spaces.Discrete(5)
+
+ n_observations = 1 #time
+
+ self.observation_space = spaces.Discrete(700)
+
+ #initial time
+ self.time = 0
+
+ self.done = 0
+ self.reward = 0
+
+ def reset(self):
+ self.reward = 0
+ self.done = False
+ return self.reward
+
+ def step(self,delta_t):
+ print('self time',self.time)
+ d_t = np.arange(-50,70,25)
+
+ self.time = (self.time + d_t[delta_t])%700
+ print('delta time',d_t[delta_t],'-->','self time',self.time)
+
+
+
+ if self.time == 475 or self.time == 450:
+ self.reward = 1
+
+
+ else:
+ self.reward += 0
+
+
+ info = {}
+ print('total reward',self.reward)
+ print('\n')
+ return self.time,self.reward, self.done, info
+
+
+
+
+ def render(self, mode='human', close=False):
+ print()
+
+import gym
+import numpy as np
+import matplotlib.pyplot as plt
+import torch
+import torch.nn as nn
+import torch.optim as optim
+import torch.nn.functional as F
+from torch.autograd import Variable
+from torch.distributions import Categorical
+dtype = torch.float
+device = torch.device("cpu")
+import random
+import math
+import sys
+if not sys.warnoptions:#igrnore warnings
+ import warnings
+ warnings.simplefilter("ignore")
+
+#hyper parameters
+epsilon_start=0.9
+#e-greedy threshold start value
+epsilon_end=0.01#e-greedy threshold end value
+epsilon_decay=200#e-greedy threshold decay
+learning_rate=0.001# NN optimizer learning rate
+batch_size=64#Q-learning batch size
+
+env = RL_env()
+
+
+#use replay memory (-> to stabilize and improve our algorithm)for training: store transitions observed by agent,
+#then reuse this data later
+#sample from it randomly (batch built by transitions are decorrelated)
+class ReplayMemory:#allowes the agent to learn from earlier memories (speed up learning and break undesirable temporal correlations)
+ def __init__(self, capacity):
+ self.capacity = capacity
+ self.memory = []
+ def push(self, transition):#saves transition
+ self.memory.append(transition)
+ if len(self.memory)>self.capacity:#if length of memory arra is larger than capacity (fixed)
+ del self.memory[0]#remove 0th element
+
+ def sample(self, batch_number):#samples randomly a transition to build batch
+ return random.sample(self.memory, batch_number)
+
+ def __len__(self):
+ return len(self.memory)
+
+#Dqn NN (we want to maximize the discounted, cumulative reward)
+#idea of Q-learning: we want to approximate with NN maximal Q-function (gives max return of action in given state)
+#training update rule: use the fact that every Q-function for some policy obeys the Bellman equation
+#difference between the two sides of the equality is known as the temporal difference error (we want to min -> Huber loss)
+#calculate over batch of transitions sampled from the replay memory
+class DqnNet(nn.Module):
+ def __init__(self):
+ super(DqnNet, self).__init__()
+
+ state_space = 1
+ action_space = env.action_space.n
+ num_hid = 128
+ self.fc1 = nn.Linear(state_space, num_hid)
+ self.fc2 = nn.Linear(num_hid, action_space)
+ self.gamma=0.5 #Q-learning discount factor (ensures that reward sum converges,
+ #makes actions from far future less important)
+ def forward(self, x):
+ x = F.relu(self.fc1(x))
+ x = F.sigmoid(self.fc2(x))
+ return x
+
+#select action accordingly to epsilon greedy policy
+#sometimes we use model for choosing action, other times sample uniformly
+#probability of choosing a random action will start at epsilon_start and will decay (epsilon_decay) exponentially
+#towards epsilon_end
+steps_done=0
+def predict_action(state):
+ global steps_done
+ sample=random.random()#random number
+ eps_threshold=epsilon_end+(epsilon_start-epsilon_end)*math.exp(-1.*steps_done/epsilon_decay)
+ steps_done += 1
+ if sample>eps_threshold:
+ x = eps_threshold,model(Variable(state,).type(torch.FloatTensor)).data.max(0)[1].view(1, 1)
+ return x#chose action from model
+
+ else:
+ x = eps_threshold,torch.tensor([[random.randrange(env.action_space.n)]])
+ return x#choose random action uniformly
+
+#wtih the update_policy function we perform a single step of the optimization
+#first sample a batch, concatenate all the tensors into a single one, compute Q-value and max Q-value,
+#and combine them into loss
+def update_policy():
+ if len(memory)<batch_size:#we want to sample a batch of size 64
+ return
+ transitions = memory.sample(batch_size)#take random transition batch from experience replay memory
+ batch_state, batch_action, batch_next_state, batch_reward = zip(*transitions)#convert batch-array of Transitions
+ #to Transition of batch-arrays
+ #-->zip(*) takes iterables as arguments and return iterator
+
+ batch_state = Variable(torch.cat(batch_state))#concatenate given sequence tensors in the given dimension
+ batch_state = batch_state.resize(batch_size,1)
+ batch_action = Variable(torch.cat(batch_action))
+ batch_next_state = Variable(torch.cat(batch_next_state))
+ batch_next_state = batch_next_state.reshape(batch_size,1)
+ batch_reward = Variable(torch.cat(batch_reward))
+
+ #print('model batch state',model(Variable(batch_state[0])))
+ current_q_values = model(batch_state).gather(1, batch_action)#current Q-values estimated for all actions,
+ #compute Q, then select the columns of actions taken,
+ #these are the actions which would've been taken
+ #for each batch state according to policy_net
+ max_next_q_values = model(batch_next_state).detach().max(1)[0]#predicted Q-values for non-final-next-states
+ #(-> gives max Q)
+ expected_q_values = batch_reward + (model.gamma * max_next_q_values)
+
+ #loss is measured from error between current and newly expected Q values (Huber Loss)
+ loss = F.smooth_l1_loss(current_q_values, expected_q_values)
+
+ # backpropagation of loss to NN --> optimize model
+ optimizer.zero_grad()
+ loss.backward()
+ optimizer.step()
+ return loss, np.sum(expected_q_values.numpy())
+
+
+def train(episodes):
+ scores = []
+ Losses = []
+ Bellman = []
+ Epsilon = []
+ Times = []
+ Deltas = []
+
+
+
+ for episode in range(episodes):
+ state=env.reset()#reset environment
+ print('\n')
+ print('episode',episode)
+
+ epsilon_action = predict_action(torch.FloatTensor([state]))
+
+ action = epsilon_action[1] #after each time step predict action
+
+ next_state, reward, done,info = env.step(action.item())#step through environment using chosen action
+
+ epsilon = epsilon_action[0]
+ Epsilon.append(epsilon)
+ print(reward,'reward')
+
+ state=next_state
+ Times.append(state)
+ scores.append(reward)
+
+
+ memory.push((torch.FloatTensor([state]),action,torch.FloatTensor([next_state]),
+ torch.FloatTensor([reward])))#action is already a tensor
+ up = update_policy()#update_policy()#update policy
+
+ if up != None:
+ Losses.append(Variable(up[0]))
+ print('loss',Variable(up[0]))
+ Bellman.append(up[1])
+
+ #calculate score to determine when the environment has been solved
+ mean_score=np.mean(scores[-50:])#mean of score of last 50 episodes
+ #every 50th episode print score
+ if episode%50 == 0:
+ print('Episode {}\tScore: {}\tAverage score(last 50 episodes): {:.2f}'.format(episode,scores[-50:],mean_score))
+
+
+ #print('Losses',Losses)
+ Losses = torch.stack(Losses).numpy()
+ #print('Losses',Losses)
+ plt.plot(np.arange(len(Losses)),Losses)
+ plt.xlabel('Training iterations')
+ plt.ylabel('Loss')
+ plt.show()
+
+ Bellman = np.array(Bellman)
+ #print('Bellman',Bellman,'\n')
+ plt.plot(np.arange(len(Bellman)),Bellman)
+ plt.xlabel('Training iterations')
+ plt.ylabel('Bellman target')
+ plt.show()
+
+ #print('scores',scores)
+ plt.plot(np.arange(len(scores)),scores)
+ plt.xlabel('Training iterations')
+ plt.ylabel('Reward')
+ plt.show()
+
+ #print('epsilon',Epsilon)
+ plt.plot(np.arange(len(Epsilon)),Epsilon)
+ plt.xlabel('Training iterations')
+ plt.ylabel('Epsilon')
+ plt.show()
+
+ print('Times',Times[-25:])
+ print('Deltas',Deltas[-25:])
+
+ Times = np.array(Times)
+ print('Times',Times)
+ #plt.figure(figsize=(31,20))
+ plt.figure(figsize=(9,7))
+ plt.plot(np.arange(len(Times)),(np.array(Times)))
+ plt.xlabel('Training iterations')
+ plt.ylabel('t')
+ plt.show()
+
+ Times_1 = np.array(Times[-300:])
+ print('t',Times)
+ plt.figure(figsize=(9,7))
+ plt.plot(np.arange(len(Times_1)),(np.array(Times_1)))
+ plt.xlabel('Last 300 Training iterations')
+ plt.ylabel('t')
+ plt.ylim(0,1000)
+ plt.show()
+
+model = DqnNet()#policy
+memory = ReplayMemory(20000)
+optimizer = optim.Adam(model.parameters(), lr=learning_rate)
+
+train(10000)
+
+"
+"['neural-networks', 'reinforcement-learning', 'game-ai', 'chess', 'alphazero']"," Title: What happens when an opponent a neural network is playing with does not obey the rules of the game (i.e. cheats)?Body: For example, if AlphaZero plays with an opponent who has a right to move chess figures any way she wants, or make more than 1 move in a turn? Will a neural network adapt to that, as it adapted to an absurd move made by Lee Sedol in 2015?
+"
+"['convolutional-neural-networks', 'classification', 'computer-vision', 'image-recognition', 'object-recognition']"," Title: How can I train a CNN to detect when a person is smoking outside of shop given images from a video camera?Body: My friend is working at a pizza shop. He takes cigarette breaks in an area that is covered by the public webcam of our town.
+I now want to train a convolutional neural network to be able to detect when he is smoking.
+Can somebody point me in the right direction as to which tools/tutorials I should look at for this classification task? I have already saved 18 hours' worth of pictures, one per minute. He is in 28 of these images; I will probably save a few more, maybe 2-3 days' worth. But I don't really know how to start this.
+"
+"['neural-networks', 'backpropagation']"," Title: Computation of initial adjoint for NODEBody: I'm reading the paper Neural Ordinary Differential Equations and I have a simple question about adjoint method. When we train NODE, it uses a blackbox ODESolver to compute gradients through model parameters, hidden states, and time. It uses another quantity $\mathbf{a}(t) = \partial L / \partial \mathbf{z}(t)$ called adjoint, which also satisfies another ODE. As I understand, the authors build a single ODE that computes all the gradients $\partial L / \partial \mathbf{z}(t_{0})$ and $\partial L / \partial \theta$ by solving that single ODE. However, I can't understand how do we know the value $\partial L / \partial \mathbf{z}(t_1)$ which corresponds to the initial condition for the ODE corresponds to the adjoint. I'm using this tutorial as a reference, and it defines custom forward and backward methods for solving ODE. However, for the backward computation (especially ODEAdjoint
class in the tutorial) we need to pass $\partial L / \partial \mathbf{z}$ for backpropagation, and this enables us to compute $\partial L / \partial \mathbf{z}(t_i)$ from $\partial L / \partial \mathbf{z}(t_{i+1})$, but we still need to know the adjoint value $\partial L / \partial \mathbf{z}(t_N)$. I do not understand well about how pytorch's autograd
package works, and this seems to be a barrier to understand this. Could anyone explain how it operates, and where $\partial L / \partial \mathbf{z}(t_1)$ (or $\partial L / \partial \mathbf{z}(t_N)$ if this is more comfortable) comes from? Thanks in advance.
+
+Here's my guess for the initial adjoint from simple example. Let $d\mathbf{z}/dt = Az$ be a 2-dim linear ODE with given $A \in \mathbb{R}^{2\times 2}$. If we use Euler's method as a ODE solver, then the estimate for $z(t_1)$ is explicitly given as $$\hat{\mathbf{z}}(t_1) = \mathrm{ODESolve}(\mathbf{z}(t_0), f, t_0, t_1, \theta))= \left(I + \frac{t_1 - t_0}{N}A\right)^{N} \mathbf{z}(t_0) $$ where $N$ is the number of steps for Euler's method (so that $h = (t_1 - t_0) /N$ is the step size). If we use MSE loss for training, then the loss will be
+$$
+L(\mathbf{z}(t_1)) = \Bigl|\Bigl| \mathbf{z}_1 - \left(I + \frac{t_1 - t_0}{N}A\right)^N\mathbf{z}(t_0)\Bigr|\Bigr|_2^2
+$$
+where $\mathbf{z}_1$ is the true value at time $t_1$, which is $\mathbf{z}_1 = e^{A(t_1 - t_0)}\mathbf{z}(t_0)$. Since adjoint $\mathbf{a}(t) = \partial L / \partial \mathbf{z}(t)$ satisfies $$\frac{d\mathbf{a}(t)}{dt} = -\mathbf{a}(t)^{T} \frac{\partial f(\mathbf{z}(t), t, \theta)}{\partial \mathbf{z}} = \mathbf{0},$$
+$\mathbf{a}(t)$ is constant and we get $\mathbf{a}(t_0) = \mathbf{a}(t_1)$. So we do not need to use augmented ODE for computing $\mathbf{a}(t)$. However, I still don't know what $\mathbf{a}(t_1) = \partial L / \partial \mathbf{z}(t_1)$ should be. If my understanding is correct, since $L = ||\mathbf{z}_1 - \mathbf{z}(t_1)||^{2}_{2}$, it seems that the answer might be
+$$
+\frac{\partial L}{\partial \mathbf{z}(t_1)} = 2(\mathbf{z}(t_1) - \mathbf{z}_1).
+$$
+However, this doesn't seem to be true: if it is, and if we have multiple datapoints at $t_1, t_2, \dots, t_N$, then the loss is
+$$
+L = \frac{1}{N} \sum_{i=1}^{N}||\mathbf{z}_i -\mathbf{z}(t_i)||_{2}^{2}
+$$
+and we may have
+$$
+\frac{\partial L}{\partial \mathbf{z}(t_i)} = \frac{2}{N} (\mathbf{z}(t_i) - \mathbf{z}_i),
+$$
+which means that we don't need to solve ODE associated to $\mathbf{a}(t)$.
+"
+"['monte-carlo-tree-search', 'convergence', 'game-theory']"," Title: Is Monte Carlo tree search guaranteed to converge to the optimal solution in two player zero-sum stochastic games?Body: I'm aware that convergence proofs for Monte Carlo tree search exist in the case of deterministic zero sum games and Markov decision processes.
+I have come across research which applies MCTS to zero-sum stochastic games, however I was unable to find proof that such an approach is guaranteed to converge to the optimal solution.
+If anyone is able to provide references or an explanation explaining why or why not MCTS is guaranteed to converge to the optimal solution in this setting I would appreciate it a lot.
+"
+"['convolutional-neural-networks', 'computer-vision', 'keras', 'filters', 'convolutional-layers']"," Title: What is the need for so many filters in a CNN?Body: Consider the following coding line related to CNNS
+Conv2D(64, (3,3), strides=(2, 2), padding='same')
+
+It is a convolution layer with a filter size of $3 \times 3$ and a stride of $2\times 2$.
+I am confused about the need for $64$ filters.
+Are they all doing the same task? Obviously not (one filter would be enough in that case).
+Then how does each filter differ? Is it in how it moves over the input matrix? Or is it in the values contained in the filter itself? Or does it differ in both movement and content?
+I am finding difficulty in visualizing it.
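+For reference, here is a small Keras sketch (assuming a 3-channel input of arbitrary size) showing how the 64 filters show up in the layer's weights and output:
+import tensorflow as tf
+
+layer = tf.keras.layers.Conv2D(64, (3, 3), strides=(2, 2), padding='same')
+x = tf.random.normal((1, 32, 32, 3))   # dummy 3-channel input
+y = layer(x)
+
+print(layer.kernel.shape)   # (3, 3, 3, 64): one 3x3x3 kernel of learned weights per filter
+print(y.shape)              # (1, 16, 16, 64): one feature map per filter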
+"
+"['optimization', 'papers', 'meta-heuristics', 'moth-flame-optimization']"," Title: How are the lower and upper bound values of the moths determined in the Moth-Flame Optimization algorithm?Body: I am currently implementing the Moth-Flame Optimization (MFO) Algorithm, based on the paper: Moth-Flame Optimization Algorithm: A Novel Nature-inspired Heuristic Paradigm.
+To calculate the values of the moths, it uses two arrays of values, which contain the upper and lower bounds for each variable. However, as far as I can see, it mentions nothing about what these values are. Quoting from the paper:
+
+As can be seen, there are two other arrays called $ub$ and $lb$. These matrixes define the upper and lower bounds of the variables as follows:
+$ub = [ub_1, ub_2, ub_3, \dots,ub_{n-1}, ub_n]$
+where $ub_i$ indicates the upper bound of the $i$-th variable.
+$lb = [lb_1, lb_2, lb_3, \dots, lb_{n-1}, lb_n]$
+where $lb_i$ indicates the lower bound of the $i$-th variable
+
+After that, it says nothing more about this matter
+So, if anyone has any idea of how these bound values are determined, please tell me!
+"
+"['neural-networks', 'backpropagation', 'linear-regression']"," Title: Linear output layer back propagationBody: So I'm stack to something that it's probably very easy but I can't get my head around it. I'm building a Neural Network that will consist of many layers with non-linear activation functions (probably ReLUs) and the last output layer will be linear because we are trying to catch a specific number and not a probability. I've done the forward propagation calculations but I'm stuck at the back propagation ones.
+Let's say that I'm gonna use the cross entropy loss function: (we will implement the MSE as well) $-(y \log (a)+(1-y) \log (1-a))$. (I understand that this is not a good option for a regression problem)
+So we can easily find the dJ/dA $d A=\frac{\partial J}{\partial A}=-\left(\frac{y}{A}-\frac{1-y}{1-A}\right)$ of the last layer and we can start going backwards finding the $\frac{\partial J}{\partial Z^{[L]}}$ which we can calculate from the equation: $d Z^{[L]}=d A*g^{\prime}\left(Z^{[L]}\right)$ The problem lies at the second part of this equation where the derivative of g is.
+What will be the outcome, since we have a linear activation function whose derivative is equal to 1? (Activation function: f(x) = x, f'(x) = 1)
+Will it be an identity matrix with the shape of Z[L] or a matrix full of ones with the same shape again? I'm asking about the term $g^{\prime}\left(Z^{[L]}\right)$.
+Many thanks.
+"
+"['classification', 'training', 'autoencoders']"," Title: Role of autoencoder in Hierarchical Extreme Learning MachineBody: I want to build HELM neural network that consists of autoencoder (AE) and one class classification (OC).
+HELM with AE and OC have following shape:
+
+That is, the hidden layer output of the AE is the input of the OC.
+Training of HELM consists of training the AE and the OC separately. In order to train each neural network in HELM, random weights and biases between the input and hidden layers are generated first, and then, based on them and an activation function (for example, sigmoid), only the weights between the hidden and output layers are trained. But then what's the point of training the weights between the hidden and output layers in the AE, since only the output of its hidden layer is provided as input to the one-class classifier? And what is the point of using an AE in HELM if the weights between the input and hidden layers of the AE are basically random?
+The following paper (page 6):
+https://arxiv.org/pdf/1810.05550.pdf
+confirms that the output of the hidden layer of the AE is the input for the OC, but, on the contrary, in Algorithms 2 and 3 (pages 6, 7) it is shown that the input of the OC is the AE input vector multiplied by the matrix of weights between the hidden and output layers, which sounds weird to me.
+"
+"['deep-learning', 'restricted-boltzmann-machine']"," Title: How Restricted Boltzman Machine (RBM) generates hand-written digit?Body: I am reading RBMs from this paper. In Fig1 they show an example of generating hand-written digit using RBMs. This is the figure they are showing:
+
+In the learning step, we first sample $h$ from $h \sim P(h|v)$ and then reconstruct $v$ from $v \sim P(v|h)$. Now, in the generating step, they are sampling $v$ from $v \sim P(v|h)$. My question is: in the generating step, if we do not sample $h$ from $h \sim P(h|v)$, how can we get $P(v|h)$?
+"
+"['neural-networks', 'convolutional-neural-networks', 'datasets', 'math']"," Title: Is it possible to know the distance objects are from camera based on only knowing one object's height?Body: I am doing a project where I have to know distance a particular object is from camera. In the photo I only know one of the object's height, but I don't know how far away that object is and I don't know how tall are other objects. Is it possible to write a code or do some geometry to know other objects distances from camera using only the height of one object? For example I have an image where 5 meters away there is a box which is 1 meter high, I wanna know the distance to human who is 12 meters away, or to know a distance to a dog who is 7 meters away. Maybe you guys know any datasets or models which deal with the same problem as I am facing. Any help will be appreciated.
+"
+"['neural-networks', 'machine-learning', 'convolutional-neural-networks', 'computer-vision']"," Title: Computer vision - Can you put more weight on a specific part of the object?Body: Let's say I'm looking for any item that has a certain shape (outline) in a photo. but I can further classify it only according to particular features, that most of them are expected to be shown only in a smaller area of the the object itself.
+How may I give more weight, in the model, to that particular area, in order to avoid wrong classification issues?
+What is the flow, and are there specific tools that should be used for that purpose?
+Example:
+I want to detect all triangles in the image, and try to classify them like this:
+If a triangle has 3 lines in its corner, it's type A; if only two lines, it's type B.
+So the triangle's outline makes up 100% of the object, but we can see that the area where the red lines are present is only about 10% of the object area. How can I give more weight to that area and tell the model to carefully look for the details there, so it doesn't confuse A with B or vice versa just because the other 90% of the shape is similar?
+And, of course, I want the certainty level to be as close as possible to 100% for both A and B, and to be clearly distinguished from the other option.
+So my goal is to get this output:
+Purple Triangle ==> Type A, certainty, 99%. Type B: certainty: 50%
+Green Triangle ==> Type B, certainty, 99%. Type A: certainty: 50%
+
+"
+"['neural-networks', 'reinforcement-learning', 'tensorflow', 'keras', 'objective-functions']"," Title: Evaluate model multiple times in loss function? Is this reinforcement learning?Body: I am interested in models that exhibit behavior. My goal is a model that survives indefinitely on a two dimensional resource landscape. One dimension represents the location (0 to 1) and the second says if there is a resource available at that location (-1 = resource one, 0 = no resource, 1 = resource two).
+The landscape looks like this:
+location = [0, 0.2, 0.4, 0.6, 0.8, 1]
+resource = [-1, 0, 0, 0, 0, 1] (I added spaces so the elements line up)
+
+My model represents an organism deciding if it will move or rest on the landscape at each time step. The organism has reserves of each resource. The organism fills its reserve of a resource if it rests on the resource and loses 1 unit of both resources at each time step. I am considering neural networks to represent my organisms. The input would be 4 values; The location on the landscape, the resource value at that location, and the reserve levels of resource one and two. The output would be 3 values; move right, rest, move left. The highest value decides what happens. To survive indefinitely the model will have to bounce between the ends of the landscape, briefly resting on the resource. Model evaluation would go like this: start the model in the middle of the landscape with full resource reserves. Allow time to pass until one of the resource reserves is depleted (the organism dies).
+My question is this: Can my loss function be evaluating the model until it dies? 1/survival time could be the loss value to be minimized by gradient descent. Is this a reinforcement learning problem (I don't think so..?) Thanks!!
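+For reference, here is a minimal sketch of the evaluation loop described above (pure NumPy; the reserve capacity and starting values are placeholder assumptions):
+import numpy as np
+
+location = np.array([0, 0.2, 0.4, 0.6, 0.8, 1.0])
+resource = np.array([-1, 0, 0, 0, 0, 1])
+
+def evaluate(policy, max_reserve=10, max_steps=10000):
+    """policy(inputs) -> scores for [move right, rest, move left]; returns survival time."""
+    pos = len(location) // 2                      # start in the middle of the landscape
+    reserves = np.array([max_reserve, max_reserve], dtype=float)  # full reserves
+    for t in range(max_steps):
+        inputs = np.array([location[pos], resource[pos], reserves[0], reserves[1]])
+        choice = int(np.argmax(policy(inputs)))   # 0: move right, 1: rest, 2: move left
+        if choice == 0:
+            pos = min(pos + 1, len(location) - 1)
+        elif choice == 2:
+            pos = max(pos - 1, 0)
+        else:                                     # rest: refill the matching reserve
+            if resource[pos] == -1:
+                reserves[0] = max_reserve
+            elif resource[pos] == 1:
+                reserves[1] = max_reserve
+        reserves -= 1                             # both reserves decay every time step
+        if (reserves <= 0).any():
+            return t + 1                          # a reserve is depleted: the organism dies
+    return max_steps
+
+# the loss to be minimized, as suggested: 1 / survival time
+loss = 1.0 / evaluate(lambda x: np.random.rand(3))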
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'hyper-parameters', 'weights']"," Title: Should the range and initial values of weights and biases be adjusted to fit input and output data?Body: As a routine (in typical everyday tasks) of a data scientist, should they usually decide about weights and biases range and initial values as a function of which data they are planning to insert as an input, and which type of data they expect to get in the output? Or we usually do not deal with such fine-tuning, and let the algorithm to do it?
+One could answer that normalizing inputs solves the problem and no need to fit weights and biases, but I guess they depend also on expected output.
+To summarize:
+
+- is it common to deal with weights and biases in everyday tasks or in most of the cases existing algorithms do it well?
+
+- what are the rules of thumb for how to decide about range and initial values of weights and biases?
+
+
+"
+"['reinforcement-learning', 'q-learning', 'convergence', 'temporal-difference-methods', 'sarsa']"," Title: How to determine if Q-learning has converged in practice?Body: I am using Q-learning and SARSA to solve a problem. The agent learns to go from the start to the goal without falling in the holes.
+At each state, I can choose the action corresponding to the maximum Q value in that state (the greedy action that the agent would take), and these actions connect some states together. I think that would show me a path from start to goal, which would mean the result has converged.
+But some others think that, as long as the agent learns how to reach the goal, the result has converged. Sometimes the success rate is very high, but we cannot extract such a path from the Q table. I don't know which of these means the agent is fully trained, and what the converged result actually means.
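+For reference, here is a small sketch of what I mean by "getting the path from the Q table" (the transition function and the state/goal representation are placeholders):
+import numpy as np
+
+def greedy_path(Q, transition, start, goal, max_len=100):
+    """Follow argmax-Q actions from start; returns the visited states,
+    stopping at the goal, at a repeated state, or after max_len steps."""
+    path, state = [start], start
+    for _ in range(max_len):
+        action = int(np.argmax(Q[state]))
+        state = transition(state, action)   # the environment's deterministic step
+        if state in path:                   # a loop means there is no clean path to the goal
+            break
+        path.append(state)
+        if state == goal:
+            break
+    return path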
+"
+"['convolutional-neural-networks', 'convolution']"," Title: When should we use separable convolution?Body: I was reading the "Deep Learning with Python" by François Chollet. He mentioned separable convolution as following
+
+This is equivalent to separating the learning of spatial features and
+the learning of channel-wise features, which makes a lot of sense if
+you assume that spatial locations in the input are highly correlated,
+but different channels are fairly independent.
+
+But I could not understand what he meant by "correlated spatial locations". Can someone explain what he means, or the purpose of separable convolutions (apart from the performance-related part)?
+Edit: Separable convolution means that first depthwise convolution is applied then pointwise convolution is applied.
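+To make the edit above concrete, here is a small Keras sketch (with arbitrary input sizes) of the depthwise-then-pointwise decomposition that a separable convolution performs:
+import tensorflow as tf
+
+x = tf.random.normal((1, 32, 32, 16))   # dummy input with 16 channels
+
+# One fused layer...
+sep = tf.keras.layers.SeparableConv2D(32, (3, 3), padding='same')
+
+# ...has the same structure as these two steps applied in sequence:
+depthwise = tf.keras.layers.DepthwiseConv2D((3, 3), padding='same')  # per-channel spatial filtering
+pointwise = tf.keras.layers.Conv2D(32, (1, 1))                       # 1x1 convolution mixing channels
+
+y1 = sep(x)
+y2 = pointwise(depthwise(x))
+print(y1.shape, y2.shape)   # both (1, 32, 32, 32); the weights differ since these are separate layers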
+"
+"['reinforcement-learning', 'definitions', 'meta-learning']"," Title: What exactly does meta-learning in reinforcement learning setting mean?Body: We can use DDPG to train agents to stack objects. And stacking objects can be viewed as first grasping followed by pick and place. In this context, how does meta-reinforcement learning fit? Does it mean I can use grasp, pick and place as training tasks and generalize to assembling objects?
+"
+['generative-adversarial-networks']," Title: GAN for specific face attribute modificationBody: A recent paper "MagGAN High Resolution Face Attribute Editing with Mask Guided GAN" published this month (October 2020) describe how an approach has been developed to deal with specific face attribute editing.
+The thing is that this paper and related work (StarGAN, CycleGAN, AttGAN, STGAN, ...) seem to tackle the process of adding/editing a face attribute (e.g. adding a hat or mustache), but not really modifying a facial feature itself (like the eyes, nose, lips, ...).
+Is there any way we can make a model that can edit, for example, the nose type/size, or are there any related works on this already published?
+"
+"['natural-language-processing', 'word-embedding', 'text-classification', 'word2vec', 'benchmarks']"," Title: Bechmark models for Text Classification / Sentiment ClassificationBody: I am currently working on a novel application in NLP where I try to classify empathic and non-empathic texts. I would like to compare the performance of my model to some benchmark models. As I am working with models based on Word2Vec embeddings, the benchmark models should also be based on Word2Vec, however I am looking for some relatively easy, quick to implement models.
+Do you have any suggestions?
+"
+"['terminology', 'facial-recognition', 'affective-computing', 'emotion-recognition']"," Title: What is the formal terminology for emotion recognition AI?Body: I'm researching the use of emotion recognition in Intelligent Tutoring Systems and trying to more effectively find and formally reference materials. My question is whether this is the most formal terminology (i.e. "emotion recognition"), because I've also seen "affect recognition" and "affective computing". Maybe it's a matter of taste, but I know sometimes the market terminology is different from the engineering terminology and I'd like to be more in tune with the engineers.
+Maybe there is a leading classification system of related technologies (e.g. facial recognition, sentiment analysis, etc.)?
+I'm seeing the "affective-computing" tag now, but not sure if these tags reflect a formal classification system in the field of AI.
+"
+"['computer-vision', 'state-of-the-art', 'conditional-random-field']"," Title: Are Markov Random Fields and Conditional Random Fields still used in computer vision?Body: Back before deep learning, there were a lot of different attempts at computer vision. Some involved Conditional Random Fields and Markov Random Fields, which were both computationally difficult and hard to understand/implement.
+Are these areas still being developed in the computer vision domain? What was the end result of this line of study? I haven't seen any papers on this topic be cited in top-performing benchmarks, so I assume nobody cares about them anymore, but I wanted to ask.
+"
+"['reinforcement-learning', 'training', 'ai-milestones']"," Title: Training while predicting on datasetBody: I have been trying to figure out whether if I train a model and then while predicting is it possible to train images too just like humans
+Somehow converting valid images to the dataset by asking us when an object is shown
+Like the Google Photos somehow they ask us if they predicted a face correctly and then reinforces on it
+"
+"['recurrent-neural-networks', 'long-short-term-memory', 'backpropagation', 'vanishing-gradient-problem', 'bptt']"," Title: In LSTMs, how does the additive property enables better balancing of gradient values during backpropagation?Body: There are two sources that I'm using to to try and understand why LSTMs reduce the likelihood of the vanishing gradient problem associated with RNNs.
+Both of these sources mention the reason LSTMs are able to reduce the likelihood of the vanishing gradient problem is because
+
+- The gradient contains the forget gate's vector of activations
+- The addition of four gradient values helps balance gradient values
+
+I understand (1), but I don't understand what (2) means.
+
+Any insight would greatly be appreciated!
+"
+"['neural-networks', 'machine-learning', 'comparison', 'non-linear-regression', 'curve-fitting']"," Title: What is the difference between neural networks and other ways of curve fitting?Body: For simplicity, let's assume we want to solve a regression problem, where we have one independent variable and one dependent variable, which we want to predict. Let's also assume that there is a nonlinear relationship between the independent and dependent variables.
+No matter the way we do it, we just need to build a proper curved line based on existing observations, such that the prediction is the best.
+I know we can solve this problem with neural networks, but I also know other ways to create such curves. For example:
+
+- splines
+
+- kriging
+
+- lowess
+
+- Something I think would also work (I do not know if it has a standard name): fitting a curve using a series of Fourier sine waves, and so on (see the sketch after this list)
+
+
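+To make the comparison concrete, here is a small sketch (my own illustration) fitting the same noisy 1-D data with a cubic smoothing spline and with a tiny neural network:
+import numpy as np
+from scipy.interpolate import UnivariateSpline
+from sklearn.neural_network import MLPRegressor
+
+x = np.linspace(0, 10, 200)
+y = np.sin(x) + 0.1 * np.random.randn(200)           # noisy nonlinear data
+
+spline = UnivariateSpline(x, y, s=1.0)                # classical curve fitting
+nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000).fit(x[:, None], y)
+
+y_spline = spline(x)                                  # both produce a curved fit
+y_nn = nn.predict(x[:, None])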
+My questions are:
+
+- Is it true that neural networks are just one of the ways to fit a non-linear curve to the data?
+
+- What are the advantages and disadvantages of choosing a neural network over other approaches? (maybe it becomes better when I have many independent variables, and another little guess: maybe the neural network is better in omitting the effect of linear dependent input variables?)
+
+
+"
+"['training', 'generative-adversarial-networks', 'generative-model', 'mnist']"," Title: What is the right way to train a generator in a GAN?Body: I am not fully understanding how to train a GAN's generator. I have a few questions below, but let me first describe what I am doing.
+I am using the MNIST dataset.
+
+- I generate a batch of random images (the faked ones) with the generator.
+
+- I train the discriminator with the set composed of faked images and real MNIST images.
+
+- After the training phase, the discriminator modifies the weights in the direction of recognizing fake (probability 0) from real (probability 1) ones.
+
+- At this point, I have to consider the combined model of generator and discriminator (keeping the discriminator untrainable) and give the generator's fake images as input with the label of 1 (as if they were real ones); see the sketch after this list.
+
+
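+In pseudo-Keras code, the loop I am following looks roughly like this (a sketch; generator, discriminator, the frozen combined model, batch_size, latent_dim, num_steps and the MNIST batch helper are assumed to already exist):
+import numpy as np
+
+for step in range(num_steps):
+    noise = np.random.normal(size=(batch_size, latent_dim))
+    fake_images = generator.predict(noise)
+    real_images = sample_mnist_batch(batch_size)            # hypothetical helper
+
+    # 1) train the discriminator: real -> 1, fake -> 0
+    discriminator.train_on_batch(real_images, np.ones((batch_size, 1)))
+    discriminator.train_on_batch(fake_images, np.zeros((batch_size, 1)))
+
+    # 2) train the generator through the combined model (discriminator not trainable),
+    #    labelling the generated images as 1 ("real")
+    combined.train_on_batch(noise, np.ones((batch_size, 1)))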
+My questions are:
+Why do I have to label these fake images as real, and which fake images are these? The ones generated in the first round by the generator itself, or only the ones classified as fake by the discriminator? (In the latter case they could be either real images classified wrongly or fake images classified correctly.) Finally, what does the generator do with these fake images?
+"
+"['deep-learning', 'objective-functions', 'generative-adversarial-networks', 'generative-model', 'discriminator']"," Title: How to define loss function for Discriminator in GANs?Body: To train the discriminator network in GANs we set the label for the true samples as $1$ and $0$ for fake ones. Then we use binary cross-entropy loss for training.
+Since we set the label $1$ for true samples that means $p_{data}(x) = 1$ and now binary cross-entropy loss is:
+$$L_1 = \sum_{i=1}^{N} P_{data}(x_i)log(D(x)) + (1-P_{data}(x_i))log(1-D(x))$$
+$$L_1 = \sum_{i=1}^{N} P_{data}(x_i)log(D(x))$$
+$$L_1 = E_{x \sim P_{data}(x)}[log(D(x))]$$
+For the second part, since we set the label $0$ for fake samples that means $p_{z}(z) = 0$ and now binary cross-entropy loss is:
+$$L_2 = \sum_{i=1}^{N} P_{z}(z_i)log(D_{G}(z)) + (1-P_{z}(z_i))log(1-D_{G}(z))$$
+$$L_2 = \sum_{i=1}^{N} 1-P_{z}(z_i)log(1-D_{G}(z))$$
+$$L_2 = E_{z \sim \bar{P_{z}(z)}}[log(1-D_{G}(z))]$$
+Now we combine those two losses and get:
+$$L_D = E_{x \sim P_{data}(x)}[log(D(x))] + E_{z \sim \bar{P_{z}(z)}}[log(1-D_{G}(z))]$$
+When I was reading about GANs I saw that the loss function for discriminator is defined as:
+$$L_D = E_{x \sim P_{data}(x)}[log(D(x))] + E_{z \sim P_{z}(z)}[log(1-D_{G}(z))]$$
+Should it not be $E_{z \sim \bar{P_{z}(z)}}$ instead of $E_{z \sim P_{z}(z)}$?
+"
+"['machine-learning', 'training', 'autoencoders', 'support-vector-machine', 'feature-selection']"," Title: Why does the training time of SVMs dramatically decrease after applying dimensionality reduction to the features?Body: Training an SVM with an RBF kernel model with c = 5.5
and gamma = 1.06
, for a 5-class classification problem on the NSL-KDD train data-set with 122 features using one vs rest strategy takes $2162$ seconds. Also, considering binary classification (c = 10
, gamma = 4
), it takes $520.56$ seconds.
+After dimensionality reduction, from 122 to 30, using a sparse auto-encoder, the training time falls dramatically, from $2162$ to $240$ and $520$ to $170$, while using the same hyperparameters for the RBF-kernel.
+What is the reason for that? Is it not true that using a kernel neutralizes the effect of high dimensionality?
+"
+"['reinforcement-learning', 'q-learning', 'value-functions', 'notation', 'random-variable']"," Title: In the definition of the state-action value function, what is the random variable we take the expectation of?Body: I know that
+$$\mathbb{E}[g(X) \mid A] = \sum\limits_{x} g(x) p_{X \mid A}(x)$$
+for any random variable $X$.
+Now, consider the following expression.
+$$\mathbb{E}_{\pi} \left[ \sum \limits_{k=0}^{\infty} \gamma^{k}r_{t+k+1} \mid s_t = s, a_t = a \right]$$
+It is used for the calculation of Q values.
+I can understand the following
+
+- $A$ is $\{s_t = s, a_t = a\}$, i.e., the agent has performed action $a$ in state $s$ at time step $t$, and
+
+- $g(X)$ is $\sum\limits_{k=0}^{\infty} \gamma^{k}r_{t+k+1}$ i.e., return (long run reward).
+
+
+What I didn't understand is what is $X$ here. i.e., what is the random variable on which we are calculating long-run rewards?
+My guess is that it is the policy function, i.e., that we are averaging long-run rewards over all possible policy functions. Is that true?
+"
+"['neural-networks', 'reinforcement-learning', 'deep-learning', 'game-ai']"," Title: How to define Agar.io state and action space?Body: I am trying to implement an AI bot for my Agar.io clone using deep neural network.
+However, I am struggling with the state and action space of the AI bot.
+Because the bot can take real numbers for position and velocity, can I say the state space is continuous?
+For the action space, I am thinking of something like (velocityX, velocityY, "split in half", "eject mass").
+What should be the number of input nodes in the input layer of my neural network? And what are those inputs (observations, rewards)?
+As the number of players and AI bots is changing, how can I train a network with a changing number of input nodes?
+For the outputs, how can I get a continuous action output like velocity?
+As a reference, you can learn about the game rules from this short youtube video:
+20 Rules and Game Mechanism of Agar (How to Play Agar.io)
+"
+"['reinforcement-learning', 'python', 'q-learning']"," Title: How to create a Q-Learning agent when we have a matrix as an action space?Body: I have a 2-dimentional matrix as an action space, the rows being a resource to be allocated, and the columns are the users that we will allocate the resources to. (I built my own RL environment)
+The possible actions are 'Zero' or 'One'. One if the resource was allocated to the user, Zero if not.
+I have a constraint related to the resource allocation, which states that each resource can be allocated to one user only, and the resource should only be allocated to users who have requested a resource to be allocated to them, and that would be the state space which is another matrix.
+A penalty would be applied if the agent violates the constraints and the episode would end and the reward would equal the penalty. Otherwise, the reward would equal the sum of all the users that were satisfied with the allocation.
+I am struggling with the implementation. The agent starts by exploring, then little by little it starts exploiting. When it gets to be more exploitative, I've noticed that the action matrix's values are all set to 'One', and the penalty always has the same value from episode to episode.
+"
+"['neural-networks', 'activation-functions', 'function-approximation', 'universal-approximation-theorems']"," Title: Smallest possible network to approximate the $sin$ functionBody: The main goal is: Find the smallest possible neural network to approximate the $sin$ function.
+Moreover, I want to find a qualitative reason why this network is the smallest possible network.
+I have created 8000 random $x$ values with corresponding target values $sin(x)$. The network which I am currently considering consists of 1 input neuron, 3 neurons in two hidden layers, and 1 output neuron:
+Network architecture:
+
+The neural network can be written as the function
+$$y = sig(w_3 \cdot sig(w_1 \cdot x) + w_4 \cdot sig(w_2 \cdot x)),$$
+where $\text{sig}$ is the sigmoid activation function.
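+For reference, this is roughly how I set up the experiment (a sketch of my own; I train with mean squared error, use Dense layers without biases to match the formula above, and swap 'tanh' for 'sigmoid' depending on the case):
+import numpy as np
+import tensorflow as tf
+
+x = np.random.uniform(-2 * np.pi, 2 * np.pi, size=(8000, 1))   # assumed input range
+y = np.sin(x)
+
+model = tf.keras.Sequential([
+    tf.keras.layers.Dense(2, activation='tanh', use_bias=False, input_shape=(1,)),  # w_1, w_2
+    tf.keras.layers.Dense(1, activation='tanh', use_bias=False),                    # w_3, w_4
+])
+model.compile(optimizer='adam', loss='mse')
+model.fit(x, y, epochs=200, verbose=0)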
+$tanh$ activation function:
+When I use $tanh$ as an activation function, the network is able to hit the 2 extrema of the $sin$ function:
+
+Sigmoid activation function:
+However, when I use the sigmoid activation function $\text{sig}$, only the first extremum is hit. The network output is not a periodic function but converges:
+
+My questions are now:
+
+- Why does one get a better approximation with the $tanh$ activation function? What is a qualitative argument for that?
+- Why does one need at least 3 hidden neurons? What is the reason that the approximation with $tanh$ does not work anymore, if one uses only 2 hidden neurons?
+
+I really appreciate all your ideas on this problem!
+"
+"['deep-learning', 'reference-request', 'autoencoders', 'latent-variable']"," Title: Is it possible to have a variable-length latent vector in an autoencoder?Body: I'm trying to have a simple autoencoder but with variable latent length (the network can produce variable latent lengths with respect to the complexity of the input), but I've not seen any related work to get idea from. Have you seen any related work? Do you have any idea to do so?
+Actually, I want to use this autoencoder for transmitting the data over a noisy channel, so having a variable-length may help.
+"
+"['natural-language-processing', 'math', 'word2vec', 'cbow', 'skip-gram']"," Title: Is my interpretation of the mathematics of the CBOW and Skip-Gram models correct?Body: I am a mathematics student who is learning NLP, so I have paid a high amount of attention on the mathematics used in the subject, but my interpretations may or may not be right sometimes. Please correct me if any of them are incorrect or do not make sense.
+I have learned CBOW and Skip-Gram models.
+I think I have understood the CBOW model, and here is my interpretation: First, we fix a number of neighbors of the unknown center word which we would like to predict; let the number be $m$. We then input the original characteristic vectors (vectors of zeros and ones only) of those $2m$ context words. By multiplying those vectors by a matrix, we obtain $2m$ new vectors. Next, we take the average of those $2m$ vectors and this is our hidden layer, namely $v$. We finally multiply $v$ with another matrix, and that is the "empirical" result.
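+To make sure I am reading this correctly, here is a small numpy sketch of my understanding of the CBOW forward pass (the dimensions and names are my own):
+import numpy as np
+
+V, d, m = 10, 4, 2                   # vocabulary size, hidden size, window size
+W_in = np.random.randn(V, d)         # the first matrix
+W_out = np.random.randn(d, V)        # the second matrix
+
+context_ids = [1, 3, 5, 7]           # indices of the 2m context words
+onehots = np.eye(V)[context_ids]     # (2m, V) characteristic vectors
+
+v = (onehots @ W_in).mean(axis=0)    # hidden layer: average of the 2m projected vectors
+scores = v @ W_out                   # the "empirical" result (before softmax)
+probs = np.exp(scores) / np.exp(scores).sum()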
+I tried to follow the same logic for Skip-Gram, but I have been stuck. I understand that Skip-Gram is kind of a "reversal" of CBOW, but the specific steps have given me a hard time. So, in Skip-Gram, we only have a center word, and based upon that we are trying to predict $2m$ context words. By similar steps, we obtain a hidden layer, which is again a vector. The final process also involves multiplication with a matrix, but I don't know how we can get $2m$ new vectors based upon one, unless we have $2m$ different matrices?
+"
+"['reinforcement-learning', 'papers', 'generative-adversarial-networks', 'reinforce', 'seq2seq']"," Title: SeqGAN - Policy gradient objective function interpretationBody: Could someone clear my doubt on the loss function used in SeqGAN paper . The paper uses policy gradient method to train the generator which is a recurrent neural network here.
+
+- Have I interpreted the terms correctly?
+- What are we summing over? The entire vocabulary of words?
+
+Loss function - my interpretation:
+
+"
+['transfer-learning']," Title: Transfer Learning of Numerical DataBody: It seems like transfer learning is only applicable to neural networks. Is this a correct assumption?
+While I was looking for examples of Transfer Learning, most seemed to be based on image data, audio data, or text data. I was not able to find an example of training a neural network on numerical data.
+I want to use transfer learning in this particular scenario: I have a lot of numerical data from an old environment, with binary classification labels, and utilizing this, want to train on a new environment to do the same binary classification.
+The dataset would look something like this
+Is this possible? What would the model look like?
+"
+"['neural-networks', 'feedforward-neural-networks', 'hidden-layers']"," Title: Can the hidden layer prior to the ouput layer have less hidden units than the output layer?Body: I attended an introductory class about neural network and I had a question regarding how to choose the number of hidden units per hidden layer.
+I remember the Professor saying that there is no rule for choosing the number of hidden units, and that having many of them, along with many hidden layers, can cause the network to overfit the data and fail to generalize.
+However, I still have a question. Assume that we have a network with an input layer of n input nodes, a first hidden layer of 4 hidden units, a second hidden layer of X hidden units, and an output layer of 5 units. If I follow the Professor's statement, it would mean that I am allowed to have X = 3 or X = 4 in layer 2.
+Is that actually allowed? Won't we have some sort of information gain passing from 4 (or 3) nodes to 5? The example is illustrated below.
+
+"
+"['neural-networks', 'reference-request', 'weights', 'weights-initialization']"," Title: How efficient is SCAWI weight initialization method?Body: I'm currently in the middle of a project (for my thesis) constructing a deep neural network. Since I'm still in the research part, I'm trying to find various ways and techniques to initialize weights. Obviously, every way will be evaluated and we will choose the one that fits best with our data set and our desired outcome.
+I'm all familiar with the Xavier initialization, the classic random one, the He initialization, and zeros. Searching through papers I came across the SCAWI one (Statistically Controlled Activation Weight Initialization). If you have used this approach, how efficient is it?
+(Also, do you know any good sources to find more of these?)
+"
+"['deep-learning', 'recurrent-neural-networks', 'long-short-term-memory', 'computational-complexity', 'gated-recurrent-unit']"," Title: What is the computational complexity in terms of Big-O notation of a Gated Recurrent Unit Neural network?Body: I have been digging up of articles across the internet in context of computational complexity of GRU. Interestingly, I came across this article, http://cse.iitkgp.ac.in/~psraja/FNNs%20,RNNs%20,LSTM%20and%20BLSTM.pdf, where it takes the following notations:
+Let I be the number of inputs, K be the number of outputs and H be the number of cells in the hidden layer
+It then goes on to explain that the computational complexity of FNNs, RNNs, BRNNs, LSTMs and BLSTMs is O(W), i.e., proportional to the total number of edges in the network,
+where
+
+- For FNN: $W = IH + HK$ ( I get this part as, for fully connected networks, we have connections from each input to each node in hidden and subsequently for hidden to output nodes)
+
+- For RNN: $W = IH + H^2 + HK$ (The formula is pretty much the same as for the FNN, but where does this $H^2$ come into the picture?)
+
+- For LSTM: $W = 4IH + 4H^2 + 3H + HK$ (It becomes more difficult for the LSTM: where do the 4's and the 3 come into the equation?)
+
+
+Continuing with these notations, can I get a similar notation for GRU as well? This can be very helpful for understanding.
+"
+"['neural-networks', 'reinforcement-learning', 'activation-functions', 'policies', 'state-of-the-art']"," Title: Dynamically adapting activation functionBody: I am training a network through reinforcement learning. The policy network learns rotations, but depending on the actual input (state), the output of the network should be restricted to be in certain bounds otherwise it mostly fails to reach these bounds. I am using tanh
as last activation function. So, I wonder if there could be a way to modify this last activation function s.th. it can adaptively change bounds depending on input? Or would this have a negative impact in learning?
+I would also be open for papers or publications tackling these kind of problems. Thank you for your help!
+"
+"['convolutional-neural-networks', 'activation-functions', 'relu', 'vanishing-gradient-problem', 'adam']"," Title: How to decide if gradients are vanishing?Body: I am trying to debug a convolutional neural network. I am seeing gradients close to zero.
+How can I decide whether these gradients are vanishing or not? Is there some threshold to decide on vanishing gradient by looking at the values?
+I am getting values on the order of $0.0001$ (4 decimal places) and, in some cases, on the order of $0.00001$ (5 decimal places).
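+For context, this is roughly how I am inspecting the gradients (a sketch assuming TensorFlow; model, loss_fn, x_batch and y_batch are assumed to exist):
+import tensorflow as tf
+
+with tf.GradientTape() as tape:
+    loss = loss_fn(y_batch, model(x_batch, training=True))
+grads = tape.gradient(loss, model.trainable_variables)
+
+for var, g in zip(model.trainable_variables, grads):
+    print(var.name, float(tf.norm(g)))   # per-layer gradient norms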
+
+The CNN seems not to be learning, since the histogram of the weights is also quite similar across all epochs.
+I am using the ReLU activation function and Adam optimizer. What could be the reason for the vanishing gradient in the case of the ReLU activation function?
+If it is possible, please, point me to some resources that might be helpful.
+"
+"['reinforcement-learning', 'reference-request']"," Title: Which reinforcement learning approach to use when there are 2 collaborative agents?Body: Suppose we are training an environment with 2 collaborative agents with Reinforcement Learning. We define the following example: There is a midfielder and a striker. The midfielder's reward depends on how many goals are scored, which however depends on the attacker's performance. And the striker's performance depends on how good the midfielder is at making his passes.
+For this type of problem, what do you recommend studying?
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'reward-design', 'reward-functions']"," Title: How should I define the reward function to solve the Wumpus game with deep Q-learning?Body: I'm writing a DQN agent for the Wumpus game.
+Is the reward function to train the Q-networks (target network and policy) the same as the score of the game, i.e. +1000 for picking up gold, -1000 for falling in pits and dying from the wumpus, -1 each move?
+This is naturally cumulative, in that the score changes after each action taken by the agent. Alternatively, is it just +1 for a win, -1 for a loss, and 0 in all other situations?
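+In code, the two schemes I am considering look roughly like this (a sketch; the event/outcome names are my own):
+def score_based_reward(event):
+    # mirrors the game score after every action
+    if event == 'gold':
+        return 1000
+    if event in ('pit', 'wumpus'):
+        return -1000
+    return -1            # every other move
+
+def sparse_reward(outcome):
+    # alternative: only the terminal result matters
+    return {'win': 1, 'loss': -1}.get(outcome, 0)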
+"
+"['neural-networks', 'machine-learning', 'autoencoders', 'multi-label-classification']"," Title: how to handle highly imbalanced multilabel classification?Body: I am working on a multilabel classification in which I am having 206 labels. When I saw the percentage of the number of 1's in each label they are way less than 0.1% for each label. The maximum percentage of ones in labels is 0.034%.
+Below is the distribution of percentage of one's in each labels
+
+If I simply build a single multilabel classification model, the score it gives may be high, but it is heavily biased towards zeros, so it never gives a high probability of a label being one. If instead I build a different model for each label, I can treat each one as an imbalanced binary problem and apply the SMOTE algorithm to each model. However, I doubt whether SMOTE can produce a good amount of data to balance the classes, given how imbalanced my data is. Now my question is whether I should give autoencoders a try, which I heard are good at fraud detection when the data has a percentage of ones less than 1% or so. Will they perform better in my case? If they can work well, then I will study autoencoders.
+"
+"['neural-networks', 'reinforcement-learning', 'dqn']"," Title: Improving DQN with fluctuationsBody: Hello :) I'm pretty new to this community, so let me know if I posted anything incorrectly and I'll try to change it.
+I'm working on a project whose aim is to create a self-driving agent in CARLA. I built a neural network based on Xception and use a decaying ε-greedy policy. The other parameters are:
+
+EPISODES: 100
+GAMMA: 0.3
+EPSILON_DECAY: 0.9
+MIN_EPSILON: 0.001
+BATCH: 16
+
+Due to limited computer resources I chose 100 or 300 epochs to train the model, but the results show a lot of fluctuation:
+
+
+
+EPISODES: 100
+GAMMA: 0.7
+EPSILON_DECAY: 0.9
+MIN_EPSILON: 0.001
+BATCH: 16
+
+
+Can anyone suggest how I can improve my results? Or is it only an issue of the small number of epochs?
+"
+"['neural-networks', 'machine-learning', 'recurrent-neural-networks', 'long-short-term-memory']"," Title: How do LSTMs work if the following two matrices are not able to be multiplied?Body:
+In the above diagram, the shape of some of the matrices can be seen in the yellow highlight. For instance:
+The hidden state at timestep t-1 ($h_{t-1}$) has shape $(na, m)$
+The input data at timestep t ($x_{t}$) has shape $(nx, m)$
+$Z_{t}$ has shape $(na+nx, m)$ since the hidden state and input data are concatenated in LSTMs.
+$W_{c}$ has shape $(na, na+nx)$
+$W_{c}$ • $Z_{t}$ has shape $(na, m)$ = $i_{t}$
+$W_{i}$ • $Z_{t}$ has shape $(na, m)$ = $ĉ_{t}$
+When working through the network to the point of $i_{t}$ and $ĉ_{t}$, how can these two be combined with a dot product when the multiplication is not of the form (m x n)(n x p) as per the matrix multiplication definition?
+
+"
+"['convolutional-neural-networks', 'training', 'convergence']"," Title: How to have closer validation loss and training loss in training a CNNBody: I am using an AlexNet architecture as my Convolutional Neural Network.
+I use a learning rate of 0.00007 and a batch size of 128.
+I have 20000 samples, with 10% for testing, 40% for validation, and 50% for training.
+I used 100 epochs to train my network and here are my results for Loss and Accuracy.
+I would like to ask how I can get the validation and training losses closer together in these plots.
+At first, I guessed the number of epochs was not enough, but I tried more epochs and my results didn't change.
+Can I say my training process is complete with this distance between train and validation loss?
+Is there any way to have closer loss plots?
+
+
+"
+"['math', 'robotics']"," Title: How do I find the data-point with respect to a given frame?Body: I've been reading this paper that formulates invariant task-parametrized HSMMs. In section 3.1 (Model Learning), the task parameters are represented in $F$ coordinate systems defined by $\{A_j,b_j\}_{j=1}^F$, where $A_j$ denotes the rotation of the frame as an orientation matrix and $b_j$ represents the origin of the frame. Each datapoint $\xi_t$ is observed from the viewpoint of $F$ different experts/frames, with $\xi_t^{(j)} = A_j^{-1}(\xi_t - b_j)$ denoting the datapoint w.r.t. frame $j$.
+How is $\xi_t^{(j)} = A_j^{-1}(\xi_t - b_j)$ derived? I understand that we must subtract $b_j$, but I'm not sure if I should pre-multiply by $A_j$ or $A_j^{-1}$, so it'd be great if someone could help me understand this better. Since $A_j$ is an orientation matrix, I'd guess that it's orthogonal, and so $A_j^{-1} = A_j^T$ - and it may just be a matter of convention (i.e. depending on how $A_j$ is defined). The details aren't clear from the paper though, and I'd appreciate any help!
+"
+"['robotics', 'hidden-markov-model', 'imitation-learning']"," Title: How do multiple coordinate systems help in capturing invariant features?Body: I've been reading this paper that formulates invariant task-parametrized HSMMs. The task parameters are represented in $F$ coordinate systems defined by $\{A_j,b_j\}_{j=1}^F$, where $A_j$ denotes the rotation of the frame as an orientation matrix and $b_j$ represents the origin of the frame. Each datapoint $\xi_t$ is observed from the viewpoint of $F$ different experts/frames, with $\xi_t^{(j)} = A_j^{-1}(\xi_t - b_j)$ denoting the datapoint w.r.t. frame $j$. I quote from the abstract:
+
+"Generalizing manipulation skills to new situations requires extracting invariant patterns from demonstrations. For example, the robot needs to understand the demonstrations at a higher level while being invariant to the appearance of the objects, geometric aspects of objects such as its position, size, orientation and viewpoint of the observer in the demonstrations."
+
+
+"The algorithm takes as input the demonstrations with respect to different coordinate systems describing virtual landmarks or objects of interest with a task-parameterized formulation, and adapt the segments according to the environmental changes in a systematic manner."
+
+Though it makes some intuitive sense, I'm not fully convinced why working with multiple coordinate systems would help us capture invariant patterns in demonstrations, and leave aside the scene-specific details. That is the goal, right? On a very high level, I see that having access to more "viewpoints" may help the robot understand the environment better, and neglect viewpoint-specific biases to focus on invariant patterns across different frames. However, this is very handwavy - and I'd love to know specific details about why using multiple viewpoints is a good idea in this case.
+Thanks!
+"
+"['reinforcement-learning', 'graph-theory']"," Title: How can I evaluate a reinforcement learning algorithm over an entire problem space?Body: I am working on implementing an RL agent and I want to demonstrate its effectiveness over a bounded problem space. The setting is essentially a queueing network and so it can be represented as a graph. I want to consider the agent's performance over all graphs up to order $n$ and with average degree from $0$ (edgeless) to $n-1$ (fully connected).
+I have looked into generating random graphs using the Erdős–Rényi model, for example. My thought is that I could show the average performance of my agent for different settings of number of nodes and edge probability (under this particular graph generation model).
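+A sketch of what I have in mind, assuming networkx is available (evaluate_agent, max_n and num_trials are placeholders of mine):
+import networkx as nx
+import numpy as np
+
+results = {}
+for n in range(2, max_n + 1):                 # graph order up to max_n
+    for p in np.linspace(0.0, 1.0, 11):       # edge probability: edgeless to fully connected
+        scores = []
+        for seed in range(num_trials):
+            g = nx.erdos_renyi_graph(n, p, seed=seed)
+            scores.append(evaluate_agent(g))  # hypothetical evaluation of my agent
+        results[(n, p)] = np.mean(scores)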
+Are there any established techniques that are along the lines of this approach?
+"
+"['neural-networks', 'backpropagation', 'feedforward-neural-networks', 'multilayer-perceptrons']"," Title: How is the error calculated with multiple output neurons in the neural network?Body: Machine Learning books generally explains that the error calculated for a given sample $i$ is:
+$e_i = y_i - \hat{y_i}$
+Where $\hat{y}$ is the target output and $y$ is the actual output given by the network. So, a loss function $L$ is calculated:
+$L = \frac{1}{2N}\sum^{N}_{i=1}(e_i)^2$
+The above scenario is explained for a binary classification/regression problem. Now, let's assume a MLP network with $m$ neurons in the output layer for a multiclass classification problem (generally one neuron per class).
+What changes in the equations above? Since we now have multiple outputs, should both $e_i$ and $y_i$ now be vectors?
+"
+"['neural-networks', 'deep-learning', 'backpropagation']"," Title: XOR problem with bipolar representationBody: I am taking a course in Machine Learning and the Professor introduced us to the XOR problem.
+I understand the XOR problem is not linearly separable and we need to employ a neural network for this problem.
+However, he mentioned that XOR works better with a bipolar representation (-1, +1), which I have not really understood.
+I am wondering why the bipolar representation would be better than the binary representation. What's the rationale for saying so?
+"
+"['reinforcement-learning', 'deep-rl', 'monte-carlo-tree-search', 'alphago-zero']"," Title: Why is tree search/planning used in reinforcement learning?Body: In AlphaGo Zero, MCTS is used along with policy networks. Some sources say MCTS (or planning in general) increases the sample efficiency.
+Assuming the transition model is known and the computational cost of interacting through planning is the same as interacting with the environment, I do not see the difference between playing many games versus playing a single game but planning at each step.
+Furthermore, given a problem with a known transition model, how do we know combining learning and tree search will likely be better than pure learning?
+"
+"['deep-learning', 'recurrent-neural-networks', 'gradient-descent', 'vanishing-gradient-problem']"," Title: How does vanish gradient restrict RNN to not work for long range dependencies?Body: I am really trying to understand deep learning models like RNN, LSTMs etc. I have gone through many tutorials of RNN and have learned that RNN cannot work for long Range dependencies, like:
+Consider trying to predict the last word in the text “I grew up in France… I speak fluent French.” Recent information suggests that the next word is probably the name of a language, but if we want to narrow down which language, we need the context of France, from further back. It’s entirely possible for the gap between the relevant information and the point where it is needed to become very large. Unfortunately, as that gap grows, RNNs become unable to learn to connect the information.
+This is said to come from the vanishing gradient problem. However, I could not understand how the vanishing gradient creates an issue that prevents RNNs from working with long-range dependencies.
+As far as I know, the vanishing gradient usually appears when we have many hidden layers and the gradient produced for the first layers is too small, which affects the training process. However, everyone connects the long-range dependency issue with the vanishing gradient; technically, what is the relationship between RNNs (long-range dependencies) and the vanishing gradient?
+I am really sorry if it is a weird question
+"
+"['deep-learning', 'tensorflow', 'keras', 'feedforward-neural-networks']"," Title: How to tell a neural network that: ""your i-th input is special""Body: Assume that I have a fully connected network that takes in a vector containing 1025
elements. First 1024
elements are related to the input image of size 32 x 32 x 1
, and the last element in the vector (1025-th element
) is a control bit that I call it special input.
+When this bit is zero
, the network should predict if there is a cat
in the image or not, and when this bit is one
, it should predict if there is a dog
in the image or not.
+So how can I tell the network that your 1025-th element
should be special to you and you should pay more attention to it?
+Note that it's just an example and the real problem is more complex than this. So please don't bypass the goal of this question by using tricks special to this example. Any idea is appreciated.
+"
+"['computer-vision', 'image-recognition', 'reference-request', 'object-detection']"," Title: Are there any known models/techniques to determine whether a person in a store is a customer or a store representative?Body: Are there any known models/techniques to determine whether a person in a store is a customer or a store representative?
+For example, customer representatives can wear uniforms and then one possible way to identify customer representatives is by the color of their uniform, texture, etc. On the other hand, a customer can also wear the same color clothes as that of a customer representative. Likewise, a customer representative could be wearing "normal clothes." So the main problems that may occur could be:
+
+- A customer becomes misclassified as a customer representative.
+- A customer representative becomes misclassified as a customer.
+
+So using clothing as the only proxy to classify people as customers or customer representatives seems to be flaky. Any other known ideas?
+"
+"['clustering', 'dimensionality-reduction', 'statistics']"," Title: What is meant by subspace clustering in MFA?Body:
+The basic idea of MFA is to perform subspace clustering by assuming the covariance structure for each component of the form, $\Sigma_i = \Lambda_i \Lambda_i^T + \Psi_i$, where $\Lambda_i \in \mathbb{R}^{D\times d}$, is the factor loadings matrix with $d < D$ for parsimonious representation of the data, and $Ψ_i$ is the diagonal noise matrix. Note that the mixture of probabilistic principal component analysis (MPPCA) model is a special case of MFA with
+the distribution of the errors assumed to be isotropic with $Ψ_i = Iσ_i^2$.
+
+What is meant by subspace clustering here, and how does $\Sigma_i = \Lambda_i \Lambda_i^T + \Psi_i$ accomplish the same? I understand that this is a dimensionality reduction technique since $\text{rank}(\Lambda_i) \leq d < D$. It'd be great if someone could help me understand more, and/or suggest resources I could look into for learning about this as an absolute beginner.
+From what I understand, $x = \Lambda z + u$ is one factor-analyzer (right?), i.e. the generative model in maximum likelihood factor analysis. This paper goes on to define a mixture of factor-analyzers indexed by $\omega_j$, where $j = 1,...,m$. The generative model now obeys the distribution $$P(x) = \sum_{i=1}^m \int P(x|z,\omega_j)P(z|\omega_j)P(\omega_j)dz$$ where, $P(z|\omega_j) = P(z) = \mathcal{N}(0,I)$. How does this help/achieve the desired objective? Why take the sum from $1$ to $m$? Where is subspace clustering happening, and what's happening on a high-level when we are using this mixture of factor-analyzers?
+"
+"['neural-networks', 'classification']"," Title: What is the best neural network model to classify an x(t) signal according two classes?Body: I am a beginner in AI methods. I have a collection of x(t) data, where x are some signal amplitudes and t is a time. My testing data are divided into two classes, say those from good and bad experimental samples. I need to classify the signals from unknown samples as good or bad according to their similarity to these two classes. What kind of a neural network is the best in this case? Could you recommend me some example in the literature where such a problem is considered?
+"
+"['reinforcement-learning', 'python', 'q-learning']"," Title: Q-learning agent stuck at taking same actionsBody: I have created my own RL environment where I have a 2-dimensional matrix as a state space, the rows represent the users that are asking for a service, and 3 columns representing 3 types of users; so if a user U0 is of type 1 is asking for a service, then the first row would be (0, 1, 0) (first column is type 0, second is type 1...).
+The state space values are randomly generated each episode.
+I also have an action space, representing which resources were allocated to which users.
+The action space is a 2-dimensional matrix, the rows being the resources that the agent has, and the columns represent the users. So, suppose we have 5 users and 6 resources, if user 1 was allocated resource 2, then the 3rd line would be like this: ('Z': a value zero was chosen, 'O': a value one was chosen)
+(Z, O, Z, Z, Z)
+The possible actions are a list of tuples, the length of the list is equal to the number of users + 1, and the length of each tuple is equal to the number of users.
+Each tuple has one column set to 'O', and the rest to 'Z'. (Each resource can be allocated to one user only). So the number of the tuples that have one column = 'O', is equal to the number of users, and then there is one tuple that has all columns set to 'Z', which means that the resource was not allocated to any users.
+Now, when the agent chooses the action, for the first resource it picks an action from the full list of possible action, then for the second resource, the action previously chosen is removed from the possible actions, so it chooses from the actions left, and so on and so forth; and that's because each user can be allocated one resource only. The action tuple with all 'Z' can always be chosen.
+When the agent allocates a resource to a user that didn't request a service, a penalty is given (varies with the number of users that didn't ask for a service but were allocated a resource), otherwise, a reward is given (also varies depending on the number of users that were satisfied).
+The problem is that the agent always tends to pick the same actions, and those actions are the tuple with all 'Z' for all the users. I tried to play with the initial q_values; q_values is a dictionary whose keys combine two things: the state, as a tuple representing each possible row of the state space, i.e. (0, 0, 0), (1, 0, 0), (0, 1, 0) and (0, 0, 1), combined with each action from the possible actions list.
+I also tried different learning_rate values, different penalties and rewards etc. But it always does the same thing.
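+For clarity, this is roughly how my Q-table and action selection look (a simplified sketch; the names are mine):
+import random
+
+q_values = {}   # keyed by (state_row, action_tuple), e.g. ((0, 1, 0), ('Z', 'O', 'Z', 'Z', 'Z'))
+
+def choose_action(state_row, available_actions, epsilon):
+    if random.random() < epsilon:
+        return random.choice(available_actions)
+    # greedy: the available action with the highest Q-value for this state row
+    return max(available_actions, key=lambda a: q_values.get((state_row, a), 0.0))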
+
+
+"
+"['machine-learning', 'hidden-markov-model', 'kalman-filter']"," Title: Is there an equivalent model to the Hidden Markov Model for continuous hidden variables?Body: I understand that Hidden Markov Models are used to learn about hidden variables $z_i$ with the help of observable variables $\xi_i$. On Wikipedia, I read that while the $\xi_i$'s can be continuous (say Gaussian), the $z_i$'s are discrete. Is this necessary, and why? Are there ways in which I could extend this to continuous domains?
+"
+"['neural-networks', 'convolutional-neural-networks', 'recurrent-neural-networks', 'long-short-term-memory', 'multilayer-perceptrons']"," Title: Is there a common way to build a neural network that seeks to extract spatial and temporal information simultaneously?Body: Is there a common way to build a neural network that seeks to extract spatial and temporal information simultaneously? Is there an agreed up protocol on how to extract this information?
+What combination of layers works: convolution + LSTM? What would be the alternatives?
+"
+"['deep-neural-networks', 'gradient-boosting']"," Title: Has ""deep vs. wide"" been resolved?Body: All else being equal, including total neuron count, I give the following definitions:
+
+- wide is a parallel ensemble, where good chunks of the neurons have the same inputs because the inputs are shared and they have different outputs.
+- deep is a series ensemble, where for the most part neurons have as input the output of other neurons and few inputs are shared.
+
+For CART ensembles the parallel (wide) ensemble is a random forest while the series (deep) ensemble is a gradient boosted machine. For several years the GBM was the "winningest" on kaggle.
+Is there a parallel of that applied to Neural networks? Is there some reasonable measure that indicates whether deep outperforms wide when it comes to neural networks? If I had the same count of weights to throw at a tough problem, all else being equal should they be applied more strongly in parallel or in series?
+"
+"['alphago-zero', 'alphago', 'deterministic-policy', 'stochastic-policy']"," Title: Did Alphago zero actually beat Alphago 100 games to 0?Body: tl;dr
+Did AlphaGo Zero and AlphaGo play 100 repetitions of the same sequence of boards, or were there 100 different games?
+Background:
+Alphago was the first superhuman go player, but it had human tuning and training.
+AlphaGo zero learned to be more superhuman than superhuman. Its supremacy was shown by how it beat AlphaGo perfectly in 100 games.
+My understanding of AlphaGo and AlphaGo Zero is that they are deterministic, not stochastic.
+If they are deterministic, then given a board position they will always make the same move.
+The way that mathematicians count the possible games in chess is to account for different board positions. As I understand it, and I could be wrong, if they have the exact same sequence of board positions then it does not count as a different game.
+If they make the same sequence of moves 100 times, then they did not play 100 different games, but played one game for 100 repetitions.
+Question:
+So, using the mathematical definition, did AlphaGo and AlphaGo Zero play only one game for 100 iterations or did they play 100 different games?
+References:
+
+"
+"['neural-networks', 'genetic-algorithms', 'neuroevolution', 'fitness-functions', 'artificial-life']"," Title: Is it possible to perform neuroevolution without a fitness function?Body: My question is about neuroevolution (genetic algorithm + neural network): I want to create artificial life by evolving agents. But instead of relying on a fitness function, I would like to have the agents reproduce with some mutation applied to the genes of their offspring and have some agents die through natural selection. Achieve evolution in this manner is my goal.
+Is this feasible? And has there been some prior work on this? Also, is it somehow possible to incorporate NEAT into this scheme?
+So far, I've implemented most of the basics in amethyst (a parallel game engine written in Rust), but I'm worried that the learning will happen very slowly. Should I approach this problem differently?
+"
+"['deep-learning', 'classification', 'ai-design', 'object-detection', 'scene-classification']"," Title: How can I determine whether a video's frame is realistic (was recorded by a camera) or contains computer-generated graphics?Body: Given a video, I'm trying to classify whether it is a graphical (computer-generated) or realistic scene. For instance, if it contains computer-generated graphics, credit, moving bugs, blue screen, etc. it will be computer-generated graphics, and if it is a realistic scene captured by camera, it will be a realistic scene.
+How can we achieve that with AI? Do we have any working solutions available?
+Some examples of graphical scenes:
+
+
+
+"
+"['reinforcement-learning', 'policies', 'sutton-barto', 'multi-armed-bandits', 'upper-confidence-bound']"," Title: Why do we have two similar action selection strategies for UCB1?Body: In the literature, there are at least two action selection strategies associated with the UCB1's action selection strategy/policy. For example, in the paper Algorithms for the multi-armed bandit problem (2000/2014), at time step $t$, an action is selected using the following formula
+$$
+a^*(t) \doteq \arg \max _{i=1 \ldots k}\left(\hat{\mu}_{i}+\sqrt{\frac{2 \ln t}{n_{i}}}\right) \tag{1}\label{1},
+$$
+where
+
+- $\hat{\mu}_{i}$ is an estimate of the expected return for arm $i$
+- $n_i$ is the number of times the action $i$ is selected
+- $k$ is the number of arms/actions
+
+On the other hand, Sutton & Barto (2nd edition of the book) provide a slightly different formula (equation 2.10)
+$$
+a^*(t) \doteq \arg \max _{i=1 \ldots k}\left(\hat{\mu}_{i}+c\sqrt{\frac{\ln t}{n_{i}}}\right) \tag{2}\label{2},
+$$
+where $c > 0$ is a hyper-parameter that controls the amount of exploration (as explained in the book or here).
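+To make the comparison concrete, here is a small sketch (my own) of the selection rule with the exploration constant made explicit, so that $c=\sqrt{2}$ recovers formula (1):
+import numpy as np
+
+def ucb_action(mu_hat, counts, t, c=np.sqrt(2)):
+    # mu_hat: estimated mean return per arm; counts: pulls per arm (each arm pulled at least once); t: time step
+    return int(np.argmax(mu_hat + c * np.sqrt(np.log(t) / counts)))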
+Why do we have these two formulas? I suppose that both are "upper confidence bounds" (and, in both cases, they are constants, though one is a hyper-parameter), but why (and when) would we use one over the other? They are not equivalent because $c$ only needs to be greater than $0$, i.e. it can be arbitrarily large (although, in the mentioned book, the authors use $c=2$ in one experiment/figure). If $c = \sqrt{2}$, then they are the same.
+The answer to my question can probably be found in the original paper that introduced UCB1 (which actually defines the UCB1 as in \ref{1}), or in a paper that derives the bound, in the sense that the bound probably depends on some probability of error, but I have not fully read it yet, so, if you know the answer, feel free to derive both bounds and relate the two formulas.
+"
+"['deep-learning', 'terminology', 'optimizers']"," Title: What do we mean by ""infrequent features""?Body: I am reading this blog post: https://ruder.io/optimizing-gradient-descent/index.html. In the section about AdaGrad, it says:
+
+It adapts the learning rate to the parameters, performing smaller updates (i.e. low learning rates) for parameters associated with frequently occurring features, and larger updates (i.e. high learning rates) for parameters associated with infrequent features.
+
+But I am not sure about the meaning of infrequent features: is it that the value of a given feature changes rarely?
+"
+"['machine-learning', 'classification', 'ai-design', 'regression', 'algorithm-request']"," Title: How to find a parameter combination for a black box using AI?Body: I am working on a project where I encountered a component which takes 96 arguments (all integer values) and outputs 12 float values.
+I would like to find a useful combination of these 96 values to receive the output that I want while avoiding random guessing, so the desired behavior would be that I provide the outcome and receive the 96 inputs to use them in my component.
+Unfortunately, I am not that experienced in this field. When I think about how I could implement this, my first thought was a kind of classification task, since I could build a dataset, but the problem here is that I need integer values.
+A second guess was regression, but would that be possible with y being an output vector?
+Are there some other approaches that could fit my use case?
+"
+"['reinforcement-learning', 'comparison', 'actor-critic-methods', 'policy-based-methods', 'value-based-methods']"," Title: Is reinforcement learning only about determining the value function?Body: I started reading some reinforcement learning literature, and it seems to me that all approaches to solving reinforcement learning problems are about finding the value function (state-value function or action-state value function).
+Are there any algorithms or methods that do not try to calculate the value function but try to solve a reinforcement learning problem differently?
+My question arose because I was not convinced that there is no better approach than finding the value functions. I am aware that given the value function we can define an optimal policy, but are there not other ways to find such an optimal policy?
+Also, is the reason why I don't encounter any non-value-based methods that they are just less successful?
+"
+"['machine-learning', 'comparison', 'cross-entropy', 'softmax', 'categorical-crossentropy']"," Title: Why are there two versions of softmax cross entropy? Which one to use in what situation?Body: I have seen 2 forms of softmax cross-entropy loss and are confused by the two. Which one is the right one?
+For example in this Quora answer, there are 2 answers:
+
+- $L(\mathbf{w})=\frac{1}{N} \sum_{n=1}^{N} H\left(p_{n}, q_{n}\right)=-\frac{1}{N} \sum_{n=1}^{N}\left[y_{n} \log \hat{y}_{n}+\left(1-y_{n}\right) \log \left(1-\hat{y}_{n}\right)\right]$
+
+- $\mathrm{L}(y, \hat{y})=-\Sigma y(i) \log \hat{y}(i)$, which is only the first part of the version one.
+
+
+"
+"['reinforcement-learning', 'open-ai', 'implementation', 'advantage-actor-critic']"," Title: What is the difference between step_model and train_model in the OpenAI implementation of the A2C algorithm?Body: I'm struggling a little with understanding the OpenAI implementation of A2C in the baselines
(version 2.9.0) package. From my understanding, one step_model
acts in different parallel environments and gathers experiences (calculates the gradients, I think), and sends them to the train_model
that trains with them. After this, the step_model
gets updated from the train_model
.
+What I am unsure about is if both step_model
and train_model
are actor-critic models or if step_model
is actor and train_model
is a critic (or vice versa). Does the step_model
use the advantage function or is it just the train_model
?
+"
+"['natural-language-processing', 'computational-linguistics']"," Title: Are FSA and FSTs used in NLP nowadays?Body: Finite state automata and transducers are computational models that were widely used decades before in natural language processing for morphological parsing and other nlp tasks. I wonder if these computational models are still used in NLP nowadays for significant purposes. If these models are in use, can you give me some examples ?
+"
+"['deep-learning', 'computer-vision', 'signal-processing']"," Title: Deep Learning based image restoration using multiple framesBody: Suppose we have a sequence of still images each of which has been contaminated by some particles(ex, dust/sand/smoke) making the images very poor in certain areas.
+What architecture would be best to teach image regeneration using multiple frames? The simplest technique is to simply find a way to detect what parts of the image are contaminated and uncontaminated and pull uncontaminated sections from each frame.
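+A rough numpy sketch of that naive mask-and-merge idea (assuming a per-frame contamination mask is already available):
+import numpy as np
+
+def merge_frames(frames, masks):
+    # frames: (T, H, W, C) float array; masks: (T, H, W) bool, True = clean pixel
+    weights = masks[..., None].astype(float)
+    acc = (frames * weights).sum(axis=0)
+    counts = np.clip(weights.sum(axis=0), 1e-6, None)
+    return acc / counts          # per-pixel average of the clean observations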
+"
+"['restricted-boltzmann-machine', 'probabilistic-graphical-models']"," Title: Purpose of the hidden variables in a Restricted Boltzmann MachineBody: From the part titled Introducing Latent Variables under subsection 2.2 in this tutorial:
+
+Introducing Latent Variables. Suppose we want to model an $m$-dimensional
+unknown probability distribution $q$ (e.g., each component of a sample corresponds to one of m pixels of an image). Typically, not all variables $\mathbf{X} = (X_v)_{v \in V}$ in an MRF need to correspond to some observed component, and the number of nodes is larger than $m$. We split $\mathbf{X}$ into visible (or observed) variables $\mathbf{V} = (V_1,...,V_m)$ corresponding to the components of the observations and latent (or hidden) variables $\mathbf{H} = (H_1,...,H_n)$ given by the remaining $n = |\mathbf{V}| − m$ variables. Using latent variables allows to describe complex distributions over the visible variables by means of simple (conditional) distributions. In this case the Gibbs distribution of an MRF describes the joint probability distribution of $(\mathbf{V},\mathbf{H})$ and one is usually interested in the marginal distribution of $\mathbf{V}$ which is given by:
+$$p(\mathbf{v}) = \sum_{\mathbf{h}} p(\mathbf{v},\mathbf{h}) = \frac{1}{Z} \sum_{\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h})}$$
+where $Z = \sum_{\mathbf{v},\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h})}$. While the visible variables correspond to the components of an observation, the latent variables introduce dependencies between the visible variables (e.g., between pixels of an input image).
+
+I have a question about this part:
+
+While the visible variables correspond to the components of an observation, the latent variables introduce dependencies between the visible variables (e.g., between pixels of an input image).
+
+Given a set of nodes $\mathbf{X}$ in a Markov Random Field $G$, the joint distribution of all the nodes is given by:
+$$p(\mathbf{X}) = \frac{1}{Z} \prod_{c \in C} \phi(c)$$
+Where $Z$ is the partition function and $C$ is the set of cliques in $G$. To ensure that the joint distribution is positive, the following factors can be used:
+$$\phi(c) = e^{-E(c)}$$
+Such that:
+$$p(\mathbf{X}) = \frac{1}{Z} e^{-\sum_{c \in C} E(c)}$$
+Where $E$ is the energy function.
+I am not sure why there is a need to introduce hidden variables and express $p(\mathbf{v})$ as a marginalization of $p(\mathbf{v},\mathbf{h})$ over $\mathbf{h}$. Why can't $p(\mathbf{v})$ be expressed as:
+$$p(\mathbf{v}) = \frac{1}{Z} e^{-\sum_{v \in \mathbf{v}} E(v)}$$
+directly? I think it may be because the factors only encode dependencies between variables in cliques, and so may not be able to encode dependencies between variables that are in two separate cliques. The purpose of the hidden variables are then to encode these "long-range" dependencies between visible variables not in cliques. However, I am not sure about this reasoning.
+Any help would be greatly appreciated.
+By the way, I am aware of this question, but I think the answer is not specific enough.
+"
+"['machine-learning', 'classification', 'support-vector-machine', 'binary-classification']"," Title: Support Vector Machine Convert optimisation problem from argmax to argminBody: I'm new to the AI Stackexchange and wasn't certain if this should go here or to Maths instead but thought the context with ML may be useful to understand my problem. I hope posting this question here could help another student learning about Support Vector Machines some day.
+I'm currently learning about Support Vector Machines at university and came across a weird step I could not understand. We were talking about basic SVMs and formulated the optimisation problem $\max_{w,b} \{ \frac{1}{||w||} \min_n(y^{(n)}f(x^{(n)}))\}$ which we then simplified down to $\max_{w,b} \{ \frac{1}{||w||}\}$ by introducing $\kappa$ as a scaling factor for $w$ and $b$ according to the margin of the SVM. Now our lecturer converted it without explanation into a quadratic optimisation problem, $\min_{w,b}\{\frac{1}{2} ||w||^2\}$, which I could not explain myself. I hope someone with context can help me understand how this is possible and what math or trick is behind this approach?
+
+Notation information:
+
+- $w$ - weight matrix
+- $b$ - bias (sometimes denoted $w_0$ I believe?)
+- $x^{(n)}$ - Independent variable (vector)
+- $y^{(n)}$ - Dependent variable (scalar classifying the input in a binary classifcation as $y=1$ or $y=-1$)
+
+Thank you very much!
+"
+"['optimization', 'gradient-descent', 'perceptron']"," Title: Why is the perceptron criterion function differentiable?Body: I'm reading chapter one of the book called Neural Networks and Deep Learning from Aggarwal.
+In section 1.2.1.1 of the book, I'm learning about the perceptron. One thing that book says is, if we use the sign function for the following loss function: $\sum_{i=0}^{N}[y_i - \text{sign}(W * X_i)]^2$, that loss function will NOT be differentiable. Therefore, the book suggests us to use, instead of the sign function in the loss function, the perceptron criterion which will be defined as:
+$$ L_i = \max(-y_i(W * X_i), 0) $$
+The question is: Why is the perceptron criterion function differentiable? Won't we face a discontinuity at zero? Is there anything that I'm missing here?
+"
+"['machine-learning', 'terminology', 'datasets', 'vectors']"," Title: What is the reason for taking tuples as vectors rather than points?Body: Across the literature of artificial intelligence, especially machine learning, it is normal to treat the tuples of datasets as vectors.
+Although there is a convention to treat them as data points. Treating them as vectors is also considerable.
+It is easy to understand the tuples of datasets as points over space. But what is the purpose of treating them as vectors?
+"
+"['neural-networks', 'keras', 'genetic-algorithms', 'hyperparameter-optimization', 'principal-component-analysis']"," Title: Unable to meet desired mean squared errorBody: I wish to get MSE < 0.5 on test data (https://easyupload.io/zr7xf3) which is 20% of given data chosen randomly. But I am reaching 0.73 using both plain Ridge Regression as well as a neural network with about 6 layers with some elementary regularization, dropout and choice of other parameters. Overfitting also occurs.
+Please suggest improvements. I believe Bayesian optimization or a genetic algorithm for the hyperparameters is required.
+I did no feature selection (as the top 4 features showed no improvement) and no exploration of non-linear methods.
+My solutions -
+Ridge - Alpha = 0.002 (Grid searched)
+Neural Network efforts =
+from keras.models import Sequential
+from keras.layers import Dense, Dropout, BatchNormalization
+from keras import regularizers
+from keras.optimizers import SGD
+from keras.callbacks import ReduceLROnPlateau, EarlyStopping
+
+reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,
+                              patience=10, min_lr=0.001)
+es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=10, restore_best_weights=True)
+
+model_b = Sequential()
+
+# Input layer
+model_b.add(Dense(2048, kernel_initializer='he_uniform', input_dim=X.shape[1], activation='relu', kernel_regularizer=regularizers.l2(l2=1e-6)))
+model_b.add(BatchNormalization(beta_regularizer=regularizers.l2(0.00001)))
+
+# The hidden layers
+model_b.add(Dense(1024, kernel_initializer='lecun_normal', activation='selu', kernel_regularizer=regularizers.l2(l2=1e-6)))
+model_b.add(BatchNormalization(beta_regularizer=regularizers.l2(0.00001)))
+model_b.add(Dense(1024, kernel_initializer='lecun_normal', activation='selu', kernel_regularizer=regularizers.l2(l2=1e-6)))
+model_b.add(BatchNormalization(beta_regularizer=regularizers.l2(0.00001)))
+model_b.add(Dropout(0.5))
+model_b.add(Dense(512, kernel_initializer='normal', activation='relu'))
+model_b.add(Dense(512, kernel_initializer='normal', activation='relu'))
+model_b.add(Dense(256, kernel_initializer='normal', activation='relu'))
+
+# The output layer
+model_b.add(Dense(1, kernel_initializer='normal', activation='linear'))
+
+optimizer = SGD(lr=0.0001)
+model_b.compile(loss='mean_squared_error', optimizer=optimizer)
+
+model_b.fit(X_train, y_train, batch_size=70,
+            epochs=256,
+            validation_data=(X_test, y_test), callbacks=[es])
+
+predb = model_b.predict(X_test)
+
+If anyone has free time, please answer.
+Best
+"
+"['machine-learning', 'deep-learning', 'information-theory']"," Title: Applications of Information Theory in Machine LearningBody: How is information theory applied to machine learning, and in particular to deep learning, in practice? I'm more interested in concepts that yielded concrete innovations in ML, rather than theoretical constructions.
+Note that I'm aware that basic concepts such as entropy are used for training decision trees, and so on. I'm looking for applications which use slightly more advanced concepts from information theory, whatever they are.
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'deep-rl', 'pomdp']"," Title: How does one stack multiple observations in the input layer of a convolutional neural network?Body: The paper, Deep Recurrent Q-Learning for Partially Observable MDPs, talks about stacking multiple observations in the input of a convolutional neural network.
+How does this exactly work? Do the convolutional filters loop over each observation (image)?
+(I know this isn't the right group to request this, but I'll highly appreciate if someone could also suggest a framework that helps with this.)
+"
+"['reference-request', 'performance', 'metric', 'multi-label-classification']"," Title: How do you measure multi-label classification accuracy?Body: Multi-label assignment is the task in machine learning to assign to each input value a set of categories from a fixed vocabulary where the categories need not be statistically independent, so precluding building a set of independent classifiers each classifying the inputs as belong to each of the categories or not.
+Machine learning also needs a measure by which the model may be evaluated. So this is the question how do we evaluate a multi-label classifier?
+We can’t use the normal recall, accuracy and F measures since they require a binary is it correct or not measure of each categorisation. Without such a measure we have no obvious means to evaluate models nor to measure concept drift.
+"
+"['deep-learning', 'weights-initialization']"," Title: What is the justification for Kaiming He initialization?Body: I've been trying to understand where the formulas for Xavier and Kaiming He initialization come from. My understanding is that these initialization schemes come from a desire to keep the gradients stable during back-propagation (avoiding vanishing/exploding gradients).
+I think I can understand the justification for Xavier initialization, and I'll sketch it below. For He initialization, what the original paper actually shows is that that initialization scheme keeps the pre-activation values (the weighted sums) stable throughout the network. Most sources I've found explaining Kaiming He initialization seem to just take it as "obvious" that stable pre-activation values will somehow lead to stable gradients, and don't even mention the apparent mismatch between what the math shows and what we're actually trying to accomplish.
+The justification for Xavier initialization (introduced here) is as follows, as I understand it:
+
+- As an approximation, pretend the activation functions don't exist and we have a linear network. The actual paper says we're assuming the network starts out in the "linear regime", which for the sigmoid activations they're interested in would mean we're assuming the pre-activations at every layer will be close to zero. I don't see how this could be justified, so I prefer to just say we're disregarding the activation functions entirely, but in any case that's not what I'm confused about here.
+
+- Zoom in on one edge in the network. It looks like $x\to_{w} y$, connecting the input or activation value $x$ to the activation value $y$, with the weight $w$. When we do gradient descent we consider $\frac{\partial C}{\partial w}$, and we have:
+$$\frac{\partial C}{\partial w}=x\frac{\partial C}{\partial y}$$
+So if we want to avoid unstable $\frac{\partial C}{\partial w}$-s, a sufficient (not necessary, but that's fine) condition is to keep both those factors stable - the activations and the gradients with respect to activations. So we try to do that.
+
+- To measure the "size" of an activation, let's look at its mean and variance (where the randomness comes from the random weights). If we use zero-mean random weights all i.i.d. on each layer, then we can show that all of the activation values in our network are zero-mean, too. So controlling the size comes down to controlling the variance (big variance means it tends to have large absolute value and vice versa). Since the gradients with respect to activations are calculated by basically running the neural network backwards, we can show that they're all zero-mean too, so controlling their size comes down to controlling their variance as well.
+
+- We can show that all the activations on a given layer are identically distributed, and ditto for the gradients with respect to activations on a given layer. If $v_n$ is the variance of the activations on layer $n$, and if $v'_n$ is the variance of the gradients, we have
+$$v_{n+1}=v_n k_n \sigma^2$$
+$$v'_n=v_{n+1} k_{n+1} \sigma^2$$
+
+
+where $k_i$ is the number of neurons on the $i$-th layer, and $\sigma^2$ is the variance of the weights between the $n$-th and $n+1$-th layers. So to keep either of the growth factors from being too crazy, we would want $\sigma^2$ to be equal to both $1/k_n$ and $1/k_{n+1}$. We can compromise by setting it equal to the harmonic mean or the geometric mean or something like that.
+
+- This stops the activations from exploding out of control, and stops the gradients with respect to activations from exploding out of control, which by step (2) stops the gradients with respect to the weights (which at the end of the day are the only things we really care about) from growing out of control.
+
+However, when I look at the paper on He initialiation, it seems like almost every step in this logic breaks down. First of all, the math, if I understand correctly, shows that He initialization can control the pre-activations, not the activations. Therefore, the logic from step (2) above that this tells us something about the gradients with respect to the weights fails. Second of all, the activation values in a ReLU network like the authors are considering are not zero-mean, as they point out themselves, but this means that even the reasoning as to why we should care about the variances, from step (3), fails. The variance is only relevant for Xavier initialization because in that setting the mean is always zero, so the variance is a reasonable proxy for "bigness".
+So while I can see how the authors show that He initialization controls the variances of the pre-activations in a ReLU network, for me the entire reason as to why we should care about doing this has fallen apart.
+"
+"['genetic-algorithms', 'c++']"," Title: Genetic algorithm stuck and cannot find an optimal solutionBody: I'm working on SLAP (storage location assignment problem) using genetic algorithm implemented manually in the C++ programming language. The problem is fairly simple, we do have N
products, which we want to allocate to M
warehouse location slots (N
might and might not be equal to M
).
+Let's begin with the encoding of the chromosomes. The chromosome length is equal to number of products (i.e. each product is one gene). Each product has one integer value (allele value), representing the location its allocated to.
+Let me show you on simple example.
+Products Average picking rate Location slots Location number
+Prod1 0.4 Location 1, slot 1 (1) // 3rd best
+Prod2 0.3 Location 1, slot 2 (2) // 4th best
+Prod3 0.2 Location 2, slot 1 (3) // The best
+Prod4 0.1 Location 2, slot 2 (4) // 2nd best
+
+We aim for optimal allocation of products (Prod1-4) to location slots (1-4). The better the allocation is, the faster we can process all the products in customer orders. Now let's say the Location 2 is closer to the warehouse entrance/exit, so its more attractive, and the lower the location slot number is, the faster we can pick product out of the location slot. So the optimal allocation should be:
+Product Location number
+Prod1 3
+Prod2 4
+Prod3 1
+Prod4 2
+
+And expressed as the chromosome:
++---+---+---+---+
+| 3 | 4 | 1 | 2 |
++---+---+---+---+
+
+This allocation will lead to the best warehouse performance. Now let me show you my crossover operator (based on TSP crossover https://www.permutationcity.co.uk/projects/mutants/tsp.html):
+void crossoverOrdered(std::vector<int32_t>& lhsInd, std::vector<int32_t>& rhsInd)
+{
+ int32_t a, b;
+ int32_t pos = 0;
+ int32_t placeholder = -1;
+ int32_t placeholderCount = 0;
+
+ std::vector<int32_t> o1, o1_missing, o1_replacements;
+ std::vector<int32_t> o2, o2_missing, o2_replacements;
+
+ while(true)
+ {
+ do
+ {
+ a = randomFromInterval(pos, constants::numberDimensions);
+ b = randomFromInterval(pos, constants::numberDimensions);
+ }
+ while(a == b);
+
+ if(a > b) std::swap(a, b);
+
+ // Insert from first parent
+ for(int32_t i = pos; i < a; ++i)
+ {
+ o1.push_back(lhsInd.at(i));
+ o2.push_back(rhsInd.at(i));
+ }
+
+ // Insert placeholders
+ for(int32_t i = a; i < b; ++i)
+ {
+ ++placeholderCount;
+ o1.push_back(placeholder);
+ o2.push_back(placeholder);
+ }
+
+ if(b >= constants::numberDimensions - 1)
+ {
+ for(int32_t i = b; i < constants::numberDimensions; ++i)
+ {
+ o1.push_back(lhsInd.at(i));
+ o2.push_back(rhsInd.at(i));
+ }
+
+ break;
+ }
+ else
+ {
+ pos = b;
+ }
+ }
+
+ // Find missing elements
+ for(int32_t i = 0; i < constants::problemMax; ++i)
+ {
+ if(std::find(o1.begin(), o1.end(), i) == o1.end()) o1_missing.push_back(i);
+ if(std::find(o2.begin(), o2.end(), i) == o2.end()) o2_missing.push_back(i);
+ }
+
+ // Filter missing elements and leave only those which are in the second parent (keep the order)
+ for(int32_t i = 0; i < static_cast<int32_t>(rhsInd.size()); i++)
+ {
+ if(std::find(o1_missing.begin(), o1_missing.end(), rhsInd.at(i)) != o1_missing.end()) o1_replacements.push_back(rhsInd.at(i));
+ }
+
+ // Filter missing elements and leave only those which are in the second parent (keep the order)
+ for(int32_t i = 0; i < static_cast<int32_t>(lhsInd.size()); i++)
+ {
+ if(std::find(o2_missing.begin(), o2_missing.end(), lhsInd.at(i)) != o2_missing.end()) o2_replacements.push_back(lhsInd.at(i));
+ }
+
+ // Replace placeholders in offspring 1
+ for(int32_t i = 0; i < placeholderCount; ++i)
+ {
+ auto it = std::find(o1.begin(), o1.end(), placeholder);
+ *it = o1_replacements.at(i);
+ }
+
+ // Replace placeholders in offspring 2
+ for(int32_t i = 0; i < placeholderCount; ++i)
+ {
+ auto it = std::find(o2.begin(), o2.end(), placeholder);
+ *it = o2_replacements.at(i);
+ }
+
+ // Assign new offsprings
+ lhsInd.assign(o1.begin(), o1.end());
+ rhsInd.assign(o2.begin(), o2.end());
+}
+
+My mutation operator(s):
+void mutateOrdered(std::vector<int32_t>& ind)
+{
+ int32_t a, b;
+
+ do
+ {
+ a = randomFromInterval(0, constants::numberDimensions);
+ b = randomFromInterval(0, constants::numberDimensions);
+ }
+ while(a == b);
+
+ std::rotate(ind.begin() + a, ind.begin() + b, ind.begin() + b + 1);
+}
+
+void mutateInverse(std::vector<int32_t>& ind)
+{
+ int32_t a, b;
+
+ do
+ {
+ a = randomFromInterval(0, constants::numberDimensions);
+ b = randomFromInterval(0, constants::numberDimensions);
+ }
+ while(a == b);
+
+ if(a > b) std::swap(a, b);
+
+ std::reverse(ind.begin() + a, ind.begin() + b);
+}
+
+I tried to use roulette, truncate, tournament and rank selection algorithms, but each with similar results.
+This is my configuration:
+populationSize = 20
+selectionSize = 5
+eliteSize = 1
+probabilityCrossover = 0.6
+probabilityMutateIndividual = 0.4
+probabilityMutateGene = 0.2
+
+My fitness function is fairly simple, since it's a real number returned by a simulation program which simulates the picking of orders on the allocation we gave it. Unfortunately, I cannot provide this program as it's confidential. It's just a real number representing how good the current allocation is; the better the allocation, the lower the number (i.e. it's a minimization problem).
+The problem
+This genetic algorithm can find better solutions than just a random allocation. The problem is, it gets "stuck" after, let's say, a few thousand generations and fails to improve further, even though better solutions exist, and it can go even 20k generations with the exact same elite chromosome (no improvement at all). I tried to increase the crossover/mutation probability and the population size, but none of it worked. Thanks for any help.
+"
+"['neural-networks', 'deep-learning', 'deep-neural-networks', 'architecture', 'hidden-layers']"," Title: Are there deep neural networks that have inputs connected with deeper hidden layers?Body: Are there any architectures of deep neural networks that connect input neurons not only with the first hidden layer but also with deeper ones (red lines on the picture)?
+
+If so could you give some names or links to research papers?
+"
+"['speech-recognition', 'hidden-markov-model']"," Title: Determining observation and state spaces for viterbi algorithm in a simple word recognition system using HMMBody: The system I'm trying to implement is a microcontroller with a connected microphone which have to recognise single words. the feature extraction is done using MFCC (and is working).
+
+- the system has to recognise [predefined, up to 20] single words, each one up to 1 second in length
+- input audio is sampled with a frequency of 10 kHz and 8-bit resolution
+- the window is 256 samples wide (25.6 ms), Hann windowed, with a 15 ms step (overlapping windows)
+- the total number of MFCC features representing each window is about 18
+
+I've done the above things, and tested the outputs for accuracy and computation speed, so there is not much concern about the computations. Now I have to implement an HMM for word recognition. I've read about the HMM and I think these parameters need to be addressed:
+
+- the hidden states are the "actual" pieces of the word, 25.6 ms long, represented as 18 MFCC features, and they count up to a maximum of 64 sets in a single word (because the maximum length of an input word is 1 sec and each window step is (25.6 - 10) ms)
+- I should use the Viterbi algorithm to find out the most probable word spoken until the current state. So, if the user is saying "STOP", Viterbi can suggest it (with proper learning of course) when the user has spoken "STO..". So it's some kind of prediction too.
+- I have to determine the other HMM parameters like the emission and transition. the wikipedia page for Viterbi which has written the algorithm, shows the input/output as:
+
+[figure: the inputs and outputs of the Viterbi algorithm, as listed on the Wikipedia page]
+
+from the above:
+
+- what is the observation space? The user may say anything, so it seems unbounded to me
+- the state space is obviously the set containing all the possible MFCC feature sets used in the learned word set. How do I learn or hardcode that?
+
+thanks for reading this long question patiently.
+"
+"['neural-networks', 'activation-functions', 'neat', 'neuroevolution']"," Title: Evolved networks fail to solve XORBody: My implementation of NEAT consistently fails to solve XOR completely. The species converge on different sub-optimal networks which map all input examples but one correctly (most commonly (1,1,0)). Do you have any ideas as to why that is?
+Some information which might be relevant:
+
+- I use a plain logistic activation function in each non-input node 1/(1 + exp(-x)).
+- Some of the weights seem to grow quite large in magnitude after a large number of epochs.
+- I use the sum squared error as the fitness function.
+- Anything over 0.5 is considered a 1 (for comparing the output with the expected)
+
+Here is one example of an evolved network. Node 0 is a bias node, the other red node is the output, the green are inputs and the blue "hidden". Disregard the labels on the connections.
+
+EDIT: following the XOR suggestions on the NEAT users page of steepening the gain of the sigmoid function, a network that solved XOR was found for the first time after ca 50 epochs. But it still fails most of the time. Here is the network which successfully solved XOR:
+
+"
+"['convolutional-neural-networks', 'tensorflow', 'autoencoders']"," Title: Why do we add additional axis in CNN autoencoder while denoising?Body: I am currently learning about autoencoders and I follow https://www.tensorflow.org/tutorials/generative/autoencoder
+When denoising images, the authors of the tutorial add an additional axis to the data, and I cannot find any explanation why... I would appreciate any answer or suggestion :)
+x_train = x_train[..., tf.newaxis]
+x_test = x_test[..., tf.newaxis]
+
+Then the encoder is built from the following layers:
+ self.encoder = tf.keras.Sequential([
+ layers.Input(shape=(28, 28, 1)),
+ layers.Conv2D(16, (3,3), activation='relu', padding='same', strides=2),
+ layers.Conv2D(8, (3,3), activation='relu', padding='same', strides=2)])
+
+ self.decoder = tf.keras.Sequential([
+ layers.Conv2DTranspose(8, kernel_size=3, strides=2, activation='relu', padding='same'),
+ layers.Conv2DTranspose(16, kernel_size=3, strides=2, activation='relu', padding='same'),
+ layers.Conv2D(1, kernel_size=(3,3), activation='sigmoid', padding='same')])
+
+"
+"['object-detection', 'image-segmentation', 'u-net', 'mask-rcnn']"," Title: Getting bounding box/boundaries from segmentations in UNet Nuclei SegmentationBody: From my understanding, in a tissue where nuclei are present and need to be detected, we need to predict bounding boxes (either rectangular/circular or in the shape of the nucleus, i.e. as in instance segmentation). However, a lot of research papers start with semantic segmentation. Again, what I understood is semantic segmentation won't give the location, bounding box or count of nuclei. It will just tell that some stuff is probably nuclei and rest is probably background.
+So, what is the bridge that I am missing when trying to detect nuclei from semantic segmentation? I have personally done semantic segmentation, but I can't seem to count/predict bounding boxes because I can't understand how to do that (for example, if semantic segmentation gave a probable region for nuclei which is actually a mixture of 3 overlapping nuclei). Semantic segmentation (in the example) just stops right there.
+
+- Thresholding algorithm like Watershed might not work in some cases as demonstrated in [Nuclei Detection][1] at 23:30 onwards.
+- Edge detection between segmented nuclei and background would not separate overlapping nuclei.
+- Finding local maxima and putting a dot there might give rise to false positives.
+- Finding IoU but what if the output of segmentation is not a region of classification (1s and 0s) but a continuous probability map from values between 0 to 1.
+- Isn't finding contours and getting bounding boxes from masks using opencv a parametric method? What I mean is, it being an image processing technique, there are chances it will work for some images and won't work for some.
+
+"
+"['machine-learning', 'training', 'math', 'restricted-boltzmann-machine', 'probabilistic-graphical-models']"," Title: How do I derive the gradient of the log-likelihood of an RBM?Body: In a Restricted Boltzmann Machine (RBM), the likelihood function is:
+$$p(\mathbf{v};\mathbf{\theta}) = \frac{1}{Z} \sum_{\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}$$
+Where $E$ is the energy function and $Z$ is the partition function:
+$$Z = \sum_{\mathbf{v},\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}$$
+The log-likelihood function is therefore:
+$$ln(p(\mathbf{v};\mathbf{\theta})) = ln\left(\sum_{\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}\right) - ln\left(\sum_{\mathbf{v},\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}\right)$$
+Since the log-likelihood function cannot be computed, its gradient is used instead with gradient descent to find the optimal parameters $\mathbf{\theta}$:
+$$\frac{\partial ln(p(\mathbf{v};\mathbf{\theta}))}{\partial \mathbf{\theta}} = -\frac{1}{\sum_{\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}} \sum_{\mathbf{h}} \left[\frac{\partial E(\mathbf{v},\mathbf{h};\mathbf{\theta})}{\partial \mathbf{\theta}} \cdot e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}\right] + \frac{1}{\sum_{\mathbf{v},\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}} \sum_{\mathbf{v},\mathbf{h}} \left[\frac{\partial E(\mathbf{v},\mathbf{h};\mathbf{\theta})}{\partial \mathbf{\theta}} \cdot e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}\right]$$
+Since:
+$$p(\mathbf{h}|\mathbf{v}) = \frac{p(\mathbf{v},\mathbf{h})}{p(\mathbf{v})} = \frac{\frac{1}{Z} e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}}{\frac{1}{Z} \sum_{\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}} = \frac{e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}}{\sum_{\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}}$$
+Then:
+$$\frac{\partial ln(p(\mathbf{v};\mathbf{\theta}))}{\partial \mathbf{\theta}} = -\sum_{\mathbf{h}} \left[\frac{\partial E(\mathbf{v},\mathbf{h};\mathbf{\theta})}{\partial \mathbf{\theta}} \cdot p(\mathbf{h}|\mathbf{v}) \right] + \frac{1}{\sum_{\mathbf{v},\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}} \sum_{\mathbf{v},\mathbf{h}} \left[\frac{\partial E(\mathbf{v},\mathbf{h};\mathbf{\theta})}{\partial \mathbf{\theta}} \cdot e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}\right]$$
+Also, since:
+$$ \frac{e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}}{Z} = \frac{e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}}{\sum_{\mathbf{v},\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}} = p(\mathbf{v},\mathbf{h})$$
+Then:
+$$\begin{align} \frac{\partial ln(p(\mathbf{v};\mathbf{\theta}))}{\partial \mathbf{\theta}} &= -\sum_{\mathbf{h}} \left[\frac{\partial E(\mathbf{v},\mathbf{h};\mathbf{\theta})}{\partial \mathbf{\theta}} \cdot p(\mathbf{h}|\mathbf{v}) \right] + \sum_{\mathbf{v},\mathbf{h}} \left[\frac{\partial E(\mathbf{v},\mathbf{h};\mathbf{\theta})}{\partial \mathbf{\theta}} \cdot p(\mathbf{v},\mathbf{h})\right] \\ &= -\mathbb{E}_{p(\mathbf{h}|\mathbf{v})}\left[\frac{\partial E(\mathbf{v},\mathbf{h};\mathbf{\theta})}{\partial \mathbf{\theta}} \right] + \mathbb{E}_{p(\mathbf{v},\mathbf{h})}\left[\frac{\partial E(\mathbf{v},\mathbf{h};\mathbf{\theta})}{\partial \mathbf{\theta}} \right] \end{align}$$
+Since both of these are expectations, they can be approximated using Monte Carlo integration:
+$$ \frac{\partial ln(p(\mathbf{v};\mathbf{\theta}))}{\partial \mathbf{\theta}} \approx -\frac{1}{N} \sum_{i = 1}^{N} \left[\frac{\partial E(\mathbf{v},\mathbf{h}_i;\mathbf{\theta})}{\partial \mathbf{\theta}} \right] + \frac{1}{M} \sum_{j=1}^{M} \left[\frac{\partial E(\mathbf{v}_j,\mathbf{h}_j;\mathbf{\theta})}{\partial \mathbf{\theta}} \right] $$
+The first term can be computed because it is easy to sample from $p(\mathbf{h}|\mathbf{v})$. However, it is difficult to sample from $p(\mathbf{v},\mathbf{h})$ directly, but since it is easy to sample from $p(\mathbf{v}|\mathbf{h})$, then Gibbs sampling is used to sample from both $p(\mathbf{h}|\mathbf{v})$ and $p(\mathbf{v}|\mathbf{h})$ to approximate a sample from $p(\mathbf{v},\mathbf{h})$.
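+For concreteness, here is a minimal sketch (my own illustration, not from a textbook) of how I picture this sampling-based estimate of the gradient w.r.t. the weight matrix $W$, for a binary RBM with the standard energy $E(\mathbf{v},\mathbf{h}) = -\mathbf{v}^\top W \mathbf{h} - \mathbf{b}_v^\top \mathbf{v} - \mathbf{b}_h^\top \mathbf{h}$, using $k$ Gibbs steps (CD-$k$):
+import numpy as np
+
+def sigmoid(x):
+    return 1.0 / (1.0 + np.exp(-x))
+
+def weight_gradient(v_data, W, b_v, b_h, k=1, rng=np.random.default_rng()):
+    # positive phase: expectation of h given the observed v (easy to compute)
+    h_prob = sigmoid(v_data @ W + b_h)
+    positive = v_data.T @ h_prob
+    # negative phase: k steps of Gibbs sampling to approximate a sample from p(v, h)
+    v = v_data.copy()
+    for _ in range(k):
+        h = (rng.random(h_prob.shape) < sigmoid(v @ W + b_h)).astype(float)
+        v = (rng.random(v.shape) < sigmoid(h @ W.T + b_v)).astype(float)
+    h_prob_neg = sigmoid(v @ W + b_h)
+    negative = v.T @ h_prob_neg
+    # Monte Carlo estimate of the log-likelihood gradient w.r.t. W (batch average)
+    return (positive - negative) / v_data.shape[0]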
+My questions are:
+
+- Is my understanding and math correct so far?
+- In the expression for the gradient of the log-likelihood, can expectations be interchanged with partial derivatives such that:
+
+$$\begin{align} \frac{\partial ln(p(\mathbf{v};\mathbf{\theta}))}{\partial \mathbf{\theta}} &= -\mathbb{E}_{p(\mathbf{h}|\mathbf{v})}\left[\frac{\partial E(\mathbf{v},\mathbf{h};\mathbf{\theta})}{\partial \mathbf{\theta}} \right] + \mathbb{E}_{p(\mathbf{v},\mathbf{h})}\left[\frac{\partial E(\mathbf{v},\mathbf{h};\mathbf{\theta})}{\partial \mathbf{\theta}} \right] \\ &= - \frac{\partial}{\partial \mathbf{\theta}} \mathbb{E}_{p(\mathbf{h}|\mathbf{v})}\left[E(\mathbf{v},\mathbf{h};\mathbf{\theta}) \right] + \frac{\partial}{\partial \mathbf{\theta}} \mathbb{E}_{p(\mathbf{v},\mathbf{h})}\left[E(\mathbf{v},\mathbf{h};\mathbf{\theta}) \right] \\ &= \frac{\partial}{\partial \mathbf{\theta}} \left(\mathbb{E}_{p(\mathbf{v},\mathbf{h})}\left[E(\mathbf{v},\mathbf{h};\mathbf{\theta}) \right] - \mathbb{E}_{p(\mathbf{h}|\mathbf{v})}\left[E(\mathbf{v},\mathbf{h};\mathbf{\theta}) \right] \right) \\ &\approx \frac{\partial}{\partial \mathbf{\theta}} \left(\frac{1}{M} \sum_{j=1}^{M} \left[E(\mathbf{v}_j,\mathbf{h}_j;\mathbf{\theta}) \right] - \frac{1}{N} \sum_{i = 1}^{N} \left[E(\mathbf{v},\mathbf{h}_i;\mathbf{\theta}) \right] \right) \end{align}$$
+
+- After approximating the gradient of the log-likelihood, the update rule for the parameter vector $\mathbf{\theta}$ is:
+
+$$\mathbf{\theta}_{t+1} = \mathbf{\theta}_{t} + \epsilon \frac{\partial ln(p(\mathbf{v};\mathbf{\theta}))}{\partial \mathbf{\theta}}$$
+Where $\epsilon$ is the learning rate. Is this update rule correct?
+"
+"['machine-learning', 'convolutional-neural-networks', 'training', 'batch-size']"," Title: Are there any rules for choosing batch size?Body: I am training a CNN with a batch size of 128, but I have some fluctuations in the validation loss, which are greater than one. I want to increase my batch size to 150 or 200, but, in the code examples I have come across, the batch size is always something like 32, 64, 128, or 256. Is it a rule? Can I use other values for it?
+"
+"['machine-learning', 'k-nearest-neighbors']"," Title: How to manually draw a $k$-NN decision boundary with $k=1$ given the dataset and labels?Body: How to manually draw a $k$-NN decision boundary with $k=1$ knowing the dataset
+
+the labels are
+
+and the euclidean distance between two points is defined as
+
+"
+"['reinforcement-learning', 'reference-request', 'gym', 'state-of-the-art']"," Title: What are the state-of-the-art results in OpenAI's gym environments?Body: What are the state-of-the-art results in OpenAI's gym environments? Is there a link to a paper/article that describes them and how these SOTA results were calculated?
+"
+"['computer-vision', 'tensorflow', 'python', 'keras', 'opencv']"," Title: Distinguishing between handwritten compound fraction and subtractionBody: I am working in a project named "Handwritten Math Evaluation". So what basically happens in this is that there are 11 classes of (0 - 9) and (+, -) each containing 50 clean handwritten digits in them. Then I trained a CNN model for it with 80 % of data used in training and 20 % using in testing of model which results in an accuracy of 98.83 %. Here is the code for the architecture of CNN model:
+import pandas as pd
+import numpy as np
+import pickle
+np.random.seed(1212)
+import keras
+from keras.models import Model
+from keras.layers import *
+from keras import optimizers
+from keras.layers import Input, Dense
+from keras.models import Sequential
+from keras.layers import Dense
+from keras.layers import Dropout
+from keras.layers import Flatten
+from keras.layers.convolutional import Conv2D
+from keras.layers.convolutional import MaxPooling2D
+from keras.utils import np_utils
+from keras import backend as K
+from keras.utils.np_utils import to_categorical
+from keras.models import model_from_json
+import matplotlib.pyplot as plt
+model = Sequential()
+model.add(Conv2D(30, (5, 5), input_shape =(28,28,1), activation ='relu'))
+model.add(MaxPooling2D(pool_size =(2, 2)))
+model.add(Conv2D(15, (3, 3), activation ='relu'))
+model.add(MaxPooling2D(pool_size =(2, 2)))
+model.add(Dropout(0.2))
+model.add(Flatten())
+model.add(Dense(128, activation ='relu'))
+model.add(Dense(50, activation ='relu'))
+model.add(Dense(12, activation ='softmax'))
+# Compile model
+model.compile(loss ='categorical_crossentropy',
+ optimizer ='adam', metrics =['accuracy'])
+model.fit(X_train, y_train, epochs=1000)
+
+Now each image in dataset is preprocessed as follows:
+import cv2
+# read the image and convert it to grayscale
+im = cv2.imread(path)
+im_gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
+# inverted binary threshold, so the symbol is white on a black background
+ret, im_th = cv2.threshold(im_gray, 90, 255, cv2.THRESH_BINARY_INV)
+# find the symbol's contour and take its bounding box
+ctrs, hier = cv2.findContours(im_th.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
+rects = [cv2.boundingRect(ctr) for ctr in ctrs]
+rect = rects[0]
+# crop the bounding box and resize it to the 28x28 network input size
+im_crop = im_th[rect[1]:rect[1]+rect[3], rect[0]:rect[0]+rect[2]]
+im_resize = cv2.resize(im_crop, (28, 28))
+im_resize = np.array(im_resize)
+im_resize = im_resize.reshape(28, 28)
+
+I have made an evaluation function which solves simple expressions like 7+8:
+def evaluate(im):
+ s = ''
+ data = []
+ im_gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
+ ret, im_th = cv2.threshold(im_gray, 90, 255, cv2.THRESH_BINARY_INV)
+ ctrs, hier = cv2.findContours(im_th.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
+ sorted_ctrs = sorted(ctrs, key=lambda ctr: cv2.boundingRect(ctr)[0])
+ boundingBoxes = [cv2.boundingRect(c) for c in ctrs]
+ look_up = ['0','1','2','3','4','5','6','7','8','9','+','-']
+ i=0
+ for c in ctrs:
+ rect = boundingBoxes[i]
+ im_crop = im_th[rect[1]:rect[1]+rect[3], rect[0]:rect[0]+rect[2]]
+ im_resize = cv2.resize(im_crop,(28,28))
+ im_resize = np.array(im_resize)
+ im_resize = im_resize.reshape(28,28,1)
+ data.append(im_resize)
+ i+=1
+ data = np.array(data)
+ predictions = model.predict(data)
+ i=0
+ while i<len(boundingBoxes):
+ rect = boundingBoxes[i]
+ print(rect[2],rect[3])
+ print(predictions[i])
+ s += look_up[predictions[i].argmax()]
+ i+=1
+ return s
+
+I need help extending this to compound fractions, but the problem is that the vinculum / is identical to the subtraction sign - when resized to (28, 28). So I need help in distinguishing between them.
+This is my first question, so please let me know if any details are left.
+"
+"['classification', 'computer-vision', 'object-detection', 'image-processing', 'data-preprocessing']"," Title: Why do we resize images before using them for object detection?Body: In object detection, we can resize images by keeping the ratio the same as the original image, which is often known as "letterbox" resize.
+My questions are
+
+- Why do we need to resize images? If we resize images to have all the same dimensions, given that some original images are too long vertically or horizontally, we will lose a lot of features in those images.
+
+- If the "letterbox" method is better than "normal resize" (i.e. without keeping the aspect ratio, e.g. the result of the application of OpenCV's
resize
function with the parameter interpolation
set to cv2.INTER_AREA
), why don't people apply it in the classification task?
+
+
+"
+"['training', 'objective-functions', 'image-segmentation', 'metric', 'u-net']"," Title: Could the data augmentation lead to the model learning features which corresponds to data augmented data and not to the real data?Body: I am trying to train a Unet network with Synthetic data to do binary segmentation due to the fact that is is not easy to collect real data.
+And there is something in the training process that I do not understand.
+I have a gap in the IoU metrics between the training and the validation (despite having really similar data).
+My training Iou is around 95 % and my validation is around 70 %.
+And the dice loss is around 0.007.
+The IoU is calculated on the inverted mask used for the loss.
+So I do not understand why there is this gap, whereas the validation images have been created from the same background dataset and the same object dataset, with objects randomly placed on the backgrounds (+ randomly rotated and rescaled). The only difference is the aggressive data augmentation used for the training dataset.
+In my opinion, it is not overfitting, since the loss value and behaviour are very similar for train and val. Moreover, it seems very unlikely to me that the model overfits with the same backgrounds and objects, or at least the model should have very good IoU for both train and val if it were overfitting.
+So could the data augmentation lead to the model learning features which correspond to the augmented data (even if the loss is similar) and not to the real data, explaining the gap in IoU between train and val?
+"
+"['neural-networks', 'ai-design', 'object-detection', 'object-recognition', 'yolo']"," Title: Get object's orientation or angle after object detectionBody: I'm trying to get a detected car's orientation when object detection is applied. For instance, when we apply object detection on a car and get a bounding box, is there any ways or methods to calculate where the heading is or the orientation or direction of the car (just 2D plane is fine)?
+Any thoughts or ideas would be helpful.
+
+"
+"['reinforcement-learning', 'dqn', 'exploration-exploitation-tradeoff', 'epsilon-greedy-policy']"," Title: Should the exploration rate be updated at the end of the episode or at every step?Body: My agent uses an $\epsilon$-greedy strategy to learn. The exploration rate (i.e. $\epsilon$) decays throughout the training. I've seen examples where people update $\epsilon$ every time an action is taken, while others update it at the end of the episode. If updated at every action, $\epsilon$ is more continuous. Does it matter? Is there a standard? Is one better than another?
+"
+"['deep-learning', 'deep-neural-networks', 'data-preprocessing', 'geometric-deep-learning', 'binary-classification']"," Title: Should binary feature be in one or two columns in deep neural networks?Body: Let's assume I have a simple feedforward neural network whose input contains binary 0/1 features and output is also binary two classes.
+Is it better, worse, or maybe totally indifferent, for every such binary feature to be in just one column or maybe it would be better to split one feature into two columns in a way that the second column will have the opposite value, like that:
+feature_x (one column scenario)
+
+[0]
+
+[1]
+
+[0]
+
+
+feature_x (two columns scenario)
+
+[0, 1]
+
+[1, 0]
+
+[0, 1]
+
+I know this might seem a bit weird and probably it is not necessary, but I have a feeling like there might be a difference for a network especially for its inner workings and how neurons in the next layers see such data. Has anyone ever researched that aspect?
+"
+"['neural-networks', 'computer-vision', 'object-detection', 'object-recognition']"," Title: Find object's location in an area using computer visionBody: I'm trying to see how to detect the location of a soccer ball in the field using the live camera. What are some ways to achieve this?
+1- Assuming we have a fixed camera with a wide shot, how do we find the ball's location on the actual field?
+2- A camera is zooming into the ball, but we know the location of the camera and maybe its turning angle. Can we estimate the ball's location on the field using this info? Or maybe we need additional info? Can we do it with two cameras as reference points?
+Any thoughts would be helpful.
+
+
+"
+"['natural-language-processing', 'bert', 'language-model']"," Title: Is there a way to provide multiple masks to BERT in MLM task?Body: I'm facing a situation where I've to fetch probabilities from BERT MLM for multiple words in a single sentence.
+Original : "Mountain Dew is an energetic drink"
+Masked : "[MASK] is an energetic drink"
+
+But the BERT MLM task doesn't consider two tokens at a time for the [MASK]. I strongly think that there should be some sort of workaround that I'm unable to find, other than fine-tuning.
+"
+['python']," Title: How can I predict an anomaly based on FFT of multiple signals?Body: I have 999 signals, each with separate day timestamp, each T=10s long, sampled with fs=25kHz. This gives N=250,000 samples in total.
+My task was to obtain the averaged magnitude spectrum for each signal. For example, for k=100, the signal is divided into k equal fragments, each 0.1 s and 2500 samples long. Then the FFT is computed on all fragments and the mean value is calculated for each spectral component (the mean for each frequency from DC to the Nyquist frequency).
+The averaged spectrum for each signal for k=100 contains 1251 values and 1251 frequency points (0-fs/2).
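+For context, a minimal sketch of how I compute such an averaged spectrum with plain numpy (my own code, simplified; variable names are just for the example):
+import numpy as np
+
+fs = 25_000        # sampling frequency [Hz]
+k = 100            # number of fragments per 10 s signal
+seg_len = 2500     # samples per fragment (0.1 s)
+
+def averaged_spectrum(signal):
+    # split the 250,000-sample signal into k equal fragments
+    segments = signal.reshape(k, seg_len)
+    # magnitude spectrum of each fragment: bins from DC to fs/2 -> 1251 values
+    mags = np.abs(np.fft.rfft(segments, axis=1))
+    # average over the fragments, one value per frequency bin
+    return mags.mean(axis=0)
+
+freqs = np.fft.rfftfreq(seg_len, d=1/fs)   # the 1251 frequency points, 0 .. fs/2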
+My question is, how can I prepare the training dataset for multiple machine learning models based on that data, so I can predict the threshold time before the failure of the machine occurs?
+Do I treat each spectral component (frequency) as a separate feature? Or is there a different approach?
+"
+"['deep-learning', 'convolutional-neural-networks', 'classification', 'object-recognition', 'one-shot-learning']"," Title: Single-Shot Learning for Object Re-IdentificationBody: I am looking for a way to re-identify/classify/recognize x real life objects (x < 50) with a camera. Each object should be presented to the AI only once for learning and there's always only one of these objects in the query image. New objects should be addable to the list of "known" objects. The objects are not necessarily part of ImageNet nor do I have a training dataset with various instances of these objects.
+Example:
+
+In the beginning I have no "known" objects. Now I present a
+smartphone, a teddy bear and a pair of scissors to the system. It
+should learn to re-identify these three objects if presented in the
+future. The objects will be the exact same objects, i.e. not a different phone, but definitely in a different viewing angle, lighting etc.
+
+My understanding is that I would have to place each object in an embedding space and do a simple nearest neighbor lookup in that space for the queries. Maybe just use a trained ResNet, cut off the classification and simply use the output vector for each object? Not sure what the best way would be.
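+To make that idea concrete, here is a minimal sketch of what I have in mind (assuming a Keras ResNet50 pretrained on ImageNet; the cosine-similarity lookup and the known_images variable are my own illustrative choices):
+import numpy as np
+from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
+
+# Pretrained backbone with the classification head cut off; global average
+# pooling turns each 224x224 image into a 2048-dimensional embedding vector.
+backbone = ResNet50(weights='imagenet', include_top=False, pooling='avg')
+
+def embed(images):
+    # images: array of shape (N, 224, 224, 3)
+    x = preprocess_input(np.asarray(images, dtype=np.float32))
+    return backbone.predict(x)
+
+# "Learning" a new object = storing one embedding per reference photo.
+known_embeddings = embed(known_images)   # known_images is a hypothetical (x, 224, 224, 3) array
+
+# Re-identification = nearest neighbour in the embedding space (cosine similarity).
+def identify(query_image):
+    q = embed(query_image[None])[0]
+    sims = known_embeddings @ q / (np.linalg.norm(known_embeddings, axis=1) * np.linalg.norm(q) + 1e-9)
+    return int(np.argmax(sims))          # index of the most similar known object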
+Any advice or hint to the right direction would be highly appreciated.
+"
+"['reinforcement-learning', 'self-play', 'muzero']"," Title: How does MuZero learn to play well for both sides of a two-player game?Body: I'm coding my own version of MuZero. However, I don't understand how it supposed to learn to play well for both players in a two-player game.
+Take Go for example. If I use a single MCTS to generate an entire game (to be used in the training stage), couldn't MuZero learn to play badly for black in order to become good at predicting a win for white? What is forcing it to play well at every turn?
+"
+"['natural-language-processing', 'applications', 'natural-language-understanding', 'natural-language-generation']"," Title: Are there any meaningful books entirely written by an artificial intelligence?Body: Are there any meaningful books entirely written by an artificial intelligence? I mean something with meaning, unlike random words or empty books.
+Something that can be characterised as fiction literature.
+If yes, then I think it is also interesting to know if any of those books is available for sale. Is there a specific name for such books? Like "robot books"?
+"
+"['neural-networks', 'artificial-neuron', 'hidden-layers']"," Title: If neurons performed the operation of an entire layer, would that make the neural network more effective?Body: (I have a very primitive understanding of neural networks, so please forgive the lack of technicality here.)
+I am used to seeing a neuron in a neural network as something that-
+
+- Takes the inputs and multiplies them by their weights,
+- then sums them up,
+- and after that it applies the activation function to the sum.
+
+Now, what if it was "smarter"? Say, a single neuron could do the function of an entire layer in a network, could that make the network more effective? This comes from an article I was reading at Quanta, where the author says:
+
+Later, Mel and several colleagues looked more closely at how the cell might be managing multiple inputs within its individual dendrites. What they found surprised them: The dendrites generated local spikes, had their own nonlinear input-output curves and had their own activation thresholds, distinct from those of the neuron as a whole. The dendrites themselves could act as AND gates, or as a host of other computing devices.
+
+
+...realised that this meant that they could conceive of a single neuron as a two-layer network. The dendrites would serve as nonlinear computing subunits, collecting inputs and spitting out intermediate outputs. Those signals would then get combined in the cell body, which would determine how the neuron as a whole would respond.
+
+My thoughts: I know that Backpropagation is used to "teach" the network in the normal case, and the fact that neurons are simply activation buttons is somehow related to that. So, if neurons were to be more complicated, it would reduce efficiency. However, I am not sure of this: why would complex individual components make the network less effective?
+"
+"['reinforcement-learning', 'dqn', 'deep-rl']"," Title: Why not use the target network in DQN as the predictor after trainingBody: Target network in DQN is known to make the network more stable, and the loss is like "how good I'm now compared to using the target". What I don't understand is, if the target network is the stable one, why do we keep using/saving the first model as the predictor instead of the target?
+I see in the code everywhere:
+
+- Model
+- Target model
+- Train model
+- Copy to target
+- Get loss between them
+
+At the end, the model is saved and used for prediction and not the target.
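+For concreteness, this is the loop I am describing, as a minimal Keras-style sketch (my own illustration; online_model, target_model, replay_buffer, gamma, sync_every and total_steps are hypothetical placeholders, not from a specific implementation):
+import numpy as np
+
+for step in range(total_steps):
+    s, a, r, s2, done = replay_buffer.sample()
+    # targets are computed with the (frozen) target network ...
+    target_q = r + (1 - done) * gamma * np.max(target_model.predict(s2), axis=1)
+    q = online_model.predict(s)
+    q[np.arange(len(a)), a] = target_q
+    # ... but only the online model is trained on the resulting loss
+    online_model.fit(s, q, verbose=0)
+    if step % sync_every == 0:
+        target_model.set_weights(online_model.get_weights())   # copy to target
+# afterwards, online_model (not target_model) is what gets saved and used to act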
+"
+"['game-ai', 'monte-carlo-tree-search', 'algorithm-request', 'minimax', 'heuristics']"," Title: How can I improve the performance of my approach to solving a 1-player version of the card game ""The Game"" by Steffen Benndorf?Body: I would like to create an AI for the 1 player version of the card game called "The Game" by Steffen Benndorf (rules here: https://nsv.de/wp-content/uploads/2018/05/the-game-english.pdf).
+The game works with four rows of cards. Two rows are in ascending order (numbers 1–99), and two rows are in descending order (numbers 100–2). The goal is to lay as many cards as possible, all 98 if possible, in four rows of cards. The player can have a maximum of 8 cards in his hand and has to play at least 2 cards before drawing again. He can only play a greater value on an ascending row and a smaller value on a descending row with one single exception that lets him play in the reverse order: whenever the value of the number card is exactly 10 higher or lower.
+I already implemented a very simple hard-coded AI that just picks the card with the smallest difference and prioritizes a +10/-10 play when possible. With some optimizations, I can get the AI to score 20 points (the number of cards left) on average, which is decent (less than 10 points is an excellent score), but I'm stuck there and I would like to go further.
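+For reference, here is a minimal sketch of the kind of greedy heuristic I currently use (a simplified illustration, not my actual implementation):
+# rows: list of (top_card, direction) pairs, direction = +1 (ascending) or -1 (descending)
+def best_move(hand, rows):
+    best = None                                      # (cost, card, row_index)
+    for card in hand:
+        for i, (top, direction) in enumerate(rows):
+            if card == top - 10 * direction:         # the "exactly 10 backwards" rule
+                cost = -10                           # always prefer this play
+            elif (card - top) * direction > 0:       # ordinary legal play
+                cost = abs(card - top)               # smallest jump is best
+            else:
+                continue                             # illegal on this row
+            if best is None or cost < best[0]:
+                best = (cost, card, i)
+    return best                                      # None = no legal move
+
+# example: two ascending rows starting at 1, two descending rows starting at 100
+print(best_move([45, 52, 90], [(1, +1), (1, +1), (100, -1), (100, -1)]))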
+As there is randomness because of the draw pile, I was wondering if it was possible to implement a robust and not hard-coded AI to play this game.
+Currently, my AI is playing piecemeal with a very simple heuristic. I do not see how to improve this heuristic, so I am wondering if it is possible to improve the performance by having a view over several turns for example. But I don't see how to simulate the next rounds since they will depend on the cards drawn.
+"
+"['monte-carlo-tree-search', 'muzero']"," Title: How to choose the first action in a Monte Carlo Tree Search?Body: I'm working on reimplementing the MuZero paper. In the description of the MCTS (page 12), they indicate that a new node with associated state $s$ is to be initialized with $Q(s,a) = 0$, $N(s,a) = 0$ and $P(s,a) = p_a$. From this, I understand that the root node with state $s_0$ will have edges with zero visits each, zero value and policy evaluated on $s_0$ by the prediction network.
+So far so good. Then they explain how actions are selected, according to the equation (also on page 12):
+
+But for the very first action (from the root node) this will give a vector of zeros as argument to the argmax: $Q(s_0,a) = 0$ and $\sum_bN(s_0,b)=0$, so even though $P(s_0,a)$ is not zero, it will be multiplied by a zero weight.
+Surely there is a mistake somewhere? Or is it that the very first action is uniformly random?
+"
+"['reinforcement-learning', 'deep-learning', 'dqn', 'reinforce', 'catastrophic-forgetting']"," Title: If REINFORCE agent suddenly drops, how do I verify if it's due to catastrophic forgetting?Body: I am using the default implementations of REINFORCE, DQN and c51 available from the tf.agents repo (links). As you can see, DQN manages to improve performance while REINFORCE seems to suffer from catastrophic forgetting. OTOH, c51 is not able to learn much and performs like a random policy throughout.
+The environment looks like this -
+
+- action = [66, 1]
+- states = [20, 1]
+- max possible state value = 20
+- steps per episode = 20
+- Hidden Layer dimension = (128, 128)
+- learning rate = 0.001 (constant throughout)
+- Epsilon (exploration factor) = 0.2 with decay of 0.05 every 4000 episodes
+- Discount factor = 0.9
+- relay memory size = 10,000
+
+Every episode runs for 20 steps and the rewards are collected for every step.
+
+the actual episode value in the plot is the x-axis value multiplied by 50
+What could be the possible reasons for such a performance of c51 and DQN? And based on the state space, are my hyperparameters correct or some of them need more tuning? I will increase the replay memory size but other than that to check for catastrophic forgetting, I am not sure how to diagnose other issues.
+"
+"['convolutional-neural-networks', 'computer-vision']"," Title: Generating data from a High-Res. RGB image for a CNNBody: Say I want to build a detection model that detects the existence of X or NO X.
+The only piece of information I have, though, is a high res. RGB image, say 100k (width) x ~1000 pixels (height).
+Let's also assume I cannot browse the internet to grab more data. I am stuck with this High resolution image. Can I somehow "slice" this image into multiple images and use said images as input data for my CNN?
+How would I do so?
+"
+"['reinforcement-learning', 'proofs', 'reward-shaping', 'reward-functions']"," Title: Why does a negative reward for every step really encourage the agent to reach the goal as quickly as possible?Body: If we shift the rewards by any constant (which is a type of reward shaping), the optimal state-action value function (and so optimal policy) does not change. The proof of this fact can be found here.
+If that's the case, then why does a negative reward for every step encourage the agent to quickly reach the goal (which is a specific type of behavior/policy), given that such a reward function has the same optimal policy as the shifted reward function where all rewards are positive (or non-negative)?
+More precisely, let $s^*$ be the goal state, then consider the following reward function
+$$
+r_1(s, a)=
+\begin{cases}
+-1, & \text{ if } s \neq s^*\\
+0, & \text{ otherwise}
+\end{cases}
+$$
+This reward function $r_1$ is supposed to encourage the agent to reach $s^*$ as quickly as possible, so as to avoid being penalized.
+Let us now define a second reward function as follows
+\begin{align}
+r_2(s, a)
+&\triangleq r_1(s, a) + 1\\
+&=
+\begin{cases}
+0, & \text{ if } s \neq s^*\\
+1, & \text{ otherwise}
+\end{cases}
+\end{align}
+This reward function has the same optimal policy as $r_1$, but does not incentivize the agent to reach $s^*$ as quickly as possible, given that the agent does not get penalized for every step. So, in theory, $r_1$ and $r_2$ lead to the same behavior. If that's the case, then why do people say that $r_1$ encourage the agents to reach $s^*$ as quickly as possible? Is there a proof that shows that $r_1$ encourages a different type of behaviour than $r_2$ (and how is that even possible given what I have just said)?
+"
+"['machine-learning', 'problem-solving', 'gradient-boosting', 'xor-problem']"," Title: Can XGBoost solve XOR problem?Body: I've read that decision trees are able to solve XOR operation so I conclude that XGBoost algorithm can solve it as well.
+But my tests on the datasets (datasets that should be highly "xor-ish") do not produce good results, so I wanted to ask whether XGBoost is able to solve this type of problem at all, or maybe I should use a different algorithm like ANN?
+EDIT: I found a similar question with a negative answer here.
+Could someone please confirm that XGBoost cannot perform XOR operation due to "greedy approach" and whether this can maybe be changed in parameters?
+"
+"['machine-learning', 'math', 'research']"," Title: Can any area of math come into play in Machine Learning Research?Body: As I read online following areas in mathematics come into play in ML research
+
+- Linear Algebra
+- Calculus
+- Differential Equations
+- Probability
+- Statistics
+- Discrete Mathematics
+- Optimization
+- Analytic Geometry
+- Topology
+- Numerical and Real Analysis
+
+Can/are any other areas of math used in ML research? If so, what other areas? E.g. number theory.
+"
+"['machine-learning', 'training', 'datasets', 'stochastic-gradient-descent', 'testing']"," Title: Should we also shuffle the test dataset when training with SGD?Body: When training machine learning models (e.g. neural networks) with stochastic gradient descent, it is common practice to (uniformly) shuffle the training data into batches/sets of different samples from different classes. Should we also shuffle the test dataset?
+"
+"['machine-learning', 'python', 'backpropagation', 'implementation', 'softmax']"," Title: How am I supposed to code equation 4.57 from the book ""Machine Learning: An Algorithmic Perspective""?Body: Consider the equation 4.57 (p. 108) from section 4.6 of the Book Machine Learning: An Algorithmic Perspective, where the derivative of the softmax function is explained
+$$\delta_o(\kappa) = (y_\kappa - t_\kappa)y_\kappa(\delta_{\kappa K} - y_K),$$
+which is derived from equation 4.55 (p. 107)
+$$y_{\kappa}(1 - y_{\kappa}),$$
+which is to compute the diagonal of the Jacobian, and equation 4.56 (p. 107)
+$$-y_{\kappa}y_K$$
+In the book, it is not explained how they go from 4.55 and 4.56 to 4.57, it is just given, but I cannot follow how it is derived.
+Moreover, in equation 4.57, the Kronecker delta function is used, but how would one handle the case $i=j$? Must we have some for loop? Does having an $i$ and $j$ imply we need a nested for loop?
+Also, I have tried to just compute the derivative of the softmax according to the $i=j$ case only, and my model was faster (since we're not computing the Jacobian) and accurate, but this assumes the error function is logarithmic, whereas I would like to code the general case.
+"
+"['machine-learning', 'ai-design', 'python', 'tensorflow', 'keras']"," Title: How to design my Neural Network for Game AIBody: For my school project, I have to develop an agent to play my game.
+
+The base I have is a 'GameManager' which calls 2 AIs, each taking a random move.
+To make my AI perform, I decided to make a deep RL algorithm.
+Here is how I've designed my solution.
+1st: the board is an 8x8 board, making 112 possible lines to draw.
+2nd: on each decision, my agent has to choose 1 line among the remaining ones.
+3rd: each decision the agent takes is one among the 112 possible.
+I read some code on the internet; the most relevant for me was a 'CartPole' example, which is a cart we have to slide to prevent a mass from falling.
+I made an architecture which is this one:
+a game is simulated:
+the board is clean, making all 112 possibilities available.
+Our agent is queried by the gameManager to make a move, passing it the actual state of the game
+(the state shape is a 112*1 vector of Boolean values; 1 means a line can be drawn, 0 means there is already a line at this position)
+(the action shape is a 112*1 vector of Boolean values; all values are set to 'False' except the line we want to draw)
+So, our agent returns its move decision.
+Each time our agent performs a move, I store the initial state, the action we take, the reward we get for performing the action, the state we reach, and a Boolean to know whether the game is done or not.
+The rewards I choose are:
++1 if our action make us close a box,
+-1 if our action make other close a box,
++10 if our action make us win the game,
+-10 if our action make us loose the game
+The point is, it's my first deep learning project and I'm not sure about the mechanism I'm using.
+When I launch a simulation, the neural network runs, but the moves it makes don't seem to get better and better.
+Here is the code I've written:
+Here is the gameManager code:
+while True:
+hasMadeABox = False
+gameIsEnd = False
+rewardFCB = 0
+doneFCB = False
+
+
+if GRAPHIC_MODE:
+ for event in pygame.event.get():
+ if event.type == pygame.QUIT:
+ pygame.quit()
+ sys.exit()
+ disp_board()
+
+if playerTurns=="1":
+ stateFCB = possibilitiesToBoolList(possible_moves[0])
+
+boolArrayPossibleMoves = possibilitiesToBoolList(possible_moves[0])
+
+theAction = players[playerTurns].play(boxes, possible_moves[0], boolArrayPossibleMoves, False)
+#print(possible_moves[0])
+#print(theAction)
+if playerTurns =="1":
+ actionFCB = theAction
+
+if playerTurns=="1":
+ is_box = move(True, theAction)
+
+elif playerTurns=="2":
+ is_box = move(False, theAction)
+
+if is_box:
+ if playerTurns =="1":
+ #rewardFCB = 1
+ rewardFCB = 1
+ pass
+ else:
+ rewardFCB = -1
+ hasMadeABox = True
+
+if check_complete():
+ gameIsEnd = True
+ rewardFCB += 10 if score[0]>score[1] else -10 #does loosing is a reward null or negativ ?
+ queueOfLastGame.pop(0)
+
+ #Scotch pour affichage winrate
+ isWin = 1 if score[0]>score[1] else -1
+ queueOfLastGame.append(isWin)
+ if queueOfLastGame.count(-1)+queueOfLastGame.count(1) > 0:
+ print(queueOfLastGame.count(1)/(queueOfLastGame.count(-1)+queueOfLastGame.count(1)) * 100 , " % Winrate")
+
+ doneFCB = True
+
+
+if playerTurns=="1" and hasMadeABox:
+ #si c'est notre IA vient de faire un carré
+ #on connait directement l'état qui succede
+ nextStateFCB = possibilitiesToBoolList(possible_moves[0])
+
+if playerTurns=="2":
+ nextStateFCB = possibilitiesToBoolList(possible_moves[0])
+
+
+if nextStateFCB is not None:
+ bufferSARS.append([stateFCB, actionFCB, rewardFCB, nextStateFCB, doneFCB])
+ #ai_player_1.remember(stateFCB, actionFCB, rewardFCB, nextStateFCB, doneFCB)
+ rewardFCB = 0
+ nextStateFCB = None
+
+if gameIsEnd:
+ flushBufferSARS()
+ reset()
+ continue
+
+#switch user to play if game is not end
+if not hasMadeABox:
+ playerTurns="1" if playerTurns == "2" else "2"
+
+And here's my code about the Agent:
+class Agent:
+def __init__(self, name, possibleActions, stateSize, actionSize, isHuman=False, alpha=0.001, alphaDecay=0.01, batchSize=2048, learningRate=0.1, epsilon= 0.9, gamma = 0.996, hasToTrain=True):
+
+ self._memory = deque(maxlen=100000)
+ self._actualEpisode=1
+ self._episodes=7000
+ self._name=name
+ self._possibleAction=possibleActions
+ self._isHuman=isHuman
+ self._epsilon=epsilon
+ self._epsilonDecay = 0.99
+ self._epsilonMin = 0.05
+ self._gamma=gamma
+ self._stateSize=stateSize
+ self._actionSize=actionSize
+ self._alpha=alpha
+ self._alphaDecay=alphaDecay
+ self._hasToTrain=hasToTrain
+ self._batchSize=batchSize
+
+ self._totalIllegalMove = 0
+ self._totalLegalMove = 0
+
+ self._path = "./modelWeightSave/"
+
+ self._model = self._buildModel()
+
+
+def save_model(self):
+ self._model.save(self._path)
+
+def getName(self):
+ return self._name
+
+def _buildModel(self):
+ model = Sequential()
+
+ model.add(Dense(128, input_dim=self._stateSize, activation='relu'))
+ model.add(Dense(256, kernel_initializer='normal', activation='relu'))
+ model.add(Dense(256, kernel_initializer='normal', activation='relu'))
+ model.add(Dense(256, kernel_initializer='normal', activation='relu'))
+ model.add(Dense(128, kernel_initializer='normal', activation='relu'))
+ model.add(Dense(self._actionSize, kernel_initializer='normal', activation='relu'))
+
+ model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=self._alpha), metrics=['accuracy'])
+ #if os.path.isfile(self._path):
+ #model.load_weights(self._path)
+ return model
+
+def act(self, UNUSED_state, stateAsBool):
+ playableIndexes = []
+ for i in range(len(stateAsBool[0])):
+ if stateAsBool[0][i] == 1:
+ playableIndexes.append(i)
+ indexForRand = playableIndexes[random.randint(0, len(playableIndexes) - 1)]
+
+ if np.random.random() <= self._epsilon:
+ action= [0]*self._actionSize
+ action[indexForRand]=1
+
+ else:
+ arrayState = np.array(stateAsBool)
+
+ action = self._model.predict(arrayState)
+ # Set the index of the max expected value to 1; we play this line.
+ tmp=[0]*self._actionSize
+ tmp[np.argmax(action)] = 1
+ action = tmp
+
+ isLegalMove = True
+ if sum(action) != 1:
+ isLegalMove = False
+ for i in range(len(action)):
+ if action[i] == 1:
+ if stateAsBool[0][i] == 0:
+ isLegalMove = False
+ break
+
+ if isLegalMove:
+ pass
+ #print("Legal move")
+ else:
+ #print("Illegal move")
+ # The AI tried to play an already-drawn line, so we choose a random line among the remaining ones
+ self._totalIllegalMove+=1
+ action = [0] * self._actionSize
+ action[indexForRand] = 1
+
+ #print("My AI took action : ",action)
+ return action
+
+def remember(self, state, action, reward, nextState, done):
+ self._memory.append((state.copy(), action, reward, nextState, done))
+
+ self._actualEpisode+=1
+ if self._actualEpisode > self._episodes:
+ self._actualEpisode = 0
+ self.replay(self._batchSize)
+
+def replay(self, batchSize):
+ x_batch, y_batch = [], []
+ minibatch = random.sample(self._memory, min(len(self._memory), self._batchSize))
+ for state, action, reward, next_state, done in minibatch:
+ actionIndex = np.argmax(action)
+ y_target = self._model.predict(state)
+ y_target[0][actionIndex] = reward if done else reward + self._gamma * np.max(self._model.predict(next_state)[0])
+ x_batch.append(state[0])
+ y_batch.append(y_target[0])
+ self._model.fit(np.array(x_batch), np.array(y_batch),epochs=10, batch_size=len(x_batch), verbose=1)
+
+ if self._epsilon > self._epsilonMin:
+ self._epsilon *= self._epsilonDecay
+
+ self.save_model()
+
+def play(self, board, state, statesAsBool, player):
+ actionTaken= self.act(state, statesAsBool)
+ return actionTaken
+
+def callBackOnPreviousMove(self, state, action, reward, nextState, done):
+ self.remember(state, action, reward, nextState, done)
+
+Example of the output I get during the fit method:
+Epoch 1/10
+1/1 [==============================] - 0s 0s/step - loss: 109.9612 - accuracy: 0.8867
+Epoch 2/10
+1/1 [==============================] - 0s 998us/step - loss: 109.9467 - accuracy: 0.8867
+Epoch 3/10
+1/1 [==============================] - 0s 0s/step - loss: 109.9456 - accuracy: 0.8867
+Epoch 4/10
+1/1 [==============================] - 0s 0s/step - loss: 109.9332 - accuracy: 0.8867
+Epoch 5/10
+1/1 [==============================] - 0s 998us/step - loss: 109.9339 - accuracy: 0.8867
+Epoch 6/10
+1/1 [==============================] - 0s 0s/step - loss: 109.9337 - accuracy: 0.8867
+Epoch 7/10
+1/1 [==============================] - 0s 997us/step - loss: 109.9305 - accuracy: 0.8867
+Epoch 8/10
+1/1 [==============================] - 0s 0s/step - loss: 109.9314 - accuracy: 0.8867
+Epoch 9/10
+1/1 [==============================] - 0s 0s/step - loss: 109.9306 - accuracy: 0.8867
+Epoch 10/10
+1/1 [==============================] - 0s 0s/step - loss: 109.9301 - accuracy: 0.8867
+
+My questions are:
+
+- Is my architecture good
+(inputs = [0,0,1,1,0,0,1,0.....,1,0] (112x1 shape) to represent the state, and
+output = [0,0,0,0,0,0,0,0,0,1,0,0,0...0,0,0,0] (112x1 shape with only one '1') )
+to represent an action ?
+
+- How do I choose a good architecture for the neural network model (self._model)? (I only know the basics of neural networks, so I don't really know all the activation functions, how to design the hidden layers, how to choose a loss...)
+
+- To train my NN, is it good to call the 'fit' function with (state, action) as parameters to make it learn?
+
+- Is there something really important I'm forgetting in my design to make it work?
+
+
+"
+"['neural-networks', 'objective-functions', 'mean-squared-error']"," Title: What is the definition of a loss function in the context of neural networks?Body: I have read what the loss function is, but I am not sure if I have understood it. For each neuron in the output layer, the loss function is usually equal to the square of the difference value of the neuron and the result we want. Is that correct?
+"
+"['reinforcement-learning', 'open-ai', 'gym', 'a3c']"," Title: How do I create a custom gym environment based on an image?Body: I am trying to create my own gym environment for the A3C algorithm (one implementation is here). The custom environment is a simple login form for any site. I want to create an environment from an image. The idea is to take a screenshot of the web page and create an environment from this screenshot for the A3C algorithm. I know the doc and protocol for creating a custom environment. But I don't understand how to create an environment, exactly, based on a screenshot.
+If I do so
+self.observation_space = gym.spaces.Box(low=0, high=255, shape=(128, 128, 3), dtype=np.uint8)
+
+I get a new pic.
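+For context, a minimal skeleton of the kind of environment I have in mind might look like this (the screenshot grabbing and the action set below are placeholders, not code from an existing project):
+import numpy as np
+import gym
+from gym import spaces
+
+class LoginFormEnv(gym.Env):
+    """Toy sketch: observations are RGB screenshots of a web page."""
+    def __init__(self):
+        super().__init__()
+        self.observation_space = spaces.Box(low=0, high=255, shape=(128, 128, 3), dtype=np.uint8)
+        self.action_space = spaces.Discrete(4)  # e.g. click / type actions (placeholder)
+
+    def _get_obs(self):
+        # Placeholder: here one would grab and resize a screenshot of the page.
+        return np.zeros((128, 128, 3), dtype=np.uint8)
+
+    def reset(self):
+        return self._get_obs()
+
+    def step(self, action):
+        reward, done, info = 0.0, False, {}
+        return self._get_obs(), reward, done, info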
+Here's the algorithm that I am trying to implement (page 39 of the master's thesis Deep Reinforcement Learning in Automated User Interface Testing by Juha Eskonen).
+
+"
+"['neural-networks', 'overfitting', 'loss', 'accuracy', 'epochs']"," Title: Why does the accuracy drop while the loss decrease, as the number of epochs increases?Body: I've been trying to find the optimal number of epochs that I should train my neural network (that I just implemented) for.
+The visualizations below show the neural network being run with a variable number of epochs. It is quite obvious that the accuracy increases with the number of epochs. However, at 75 epochs, we see a dip before the accuracy continues to rise. What is the cause of this?
+
+"
+"['transformer', 'word-embedding', 'bert']"," Title: Is there a pretrained (NLP) transformer that uses subword n-gram embeddings for tokenization like fasttext?Body: I know that several tokenization methods that are used for tranformer models like WordPiece for Bert and BPE for Roberta and others. What I was wondering if there is also a transformer which uses a method for tokenization similarly to the embeddings that are used in the fasttext library, so based on the summations of embeddings for the n-grams the words are made of.
+To me it seems weird that this way of creating word(piece) embeddings that can function as the input of a transformer isn't used in these new transformer architectures. Is there a reason why this is not tried yet? Or is this question just an result of my inability to find the right papers/repo's.
+"
+"['deep-learning', 'image-segmentation']"," Title: Is Webpage Semantic Segmentation possible nowadays?Body: I'm trying to do some research about semantic segmentation for webpages, in particular e-commerce webpages. I found some articles which provide some solutions based on very old dataset and those solutions in my opinion can't be effective for modern websites, in particular e-commerce. I would like to semantically infer the images bounding box, text, price etc..
+Another problem is related with the size of webpage screenshot which are huge, I resized to 1024x512, but I think that I can't resize the image more otherwise I loose quality.
+I built a very complex neural network in order to semantically infer text, images and background, (not classification but just segmentation), and the results are not so bad, but they are far from my expectations which seems strange to me, as we have many DNN able to do semantic segmentation of road, building, car etc for example. One problem is for sure the lack of a dataset with detailed labels. I didn't find any dataset that can satisfy my requests.
+QUESTION: Any idea to help the network learn better the structure of a webpage just with a screenshot?
+My DNN essentially is built as an auto-encoder architecture based on Segnet, with some modifications, skip connections, unpooling etc, I think that it is a good network.
+references:
+https://clgiles.ist.psu.edu/pubs/CVPR2017-connets.pdf
+https://link.springer.com/chapter/10.1007/978-981-13-0020-2_33
+"
+"['philosophy', 'evolutionary-algorithms']"," Title: Isn't evolutionary theory the essence of intelligence after all?Body: The theory of evolution seems to be intelligent as it creates life
+The mechanism of evolutionary theory consists of mutation, recombination, and natural selection like a genetic algorithm.
+Isn't this evolutionary mechanism itself the same as the essence of human intelligence?
+"
+"['deep-learning', 'generative-adversarial-networks', 'loss', 'wasserstein-metric', 'wasserstein-gan']"," Title: WGAN-GP Loss formalizationBody: I have to write the formalization of the loss function of my network, built following the WGAN-GP model. The discriminator takes 3 consecutive images as input (such as 3 consecutive frames of a video) and must evaluate if the intermediate image is a possible image between the first and the third.
+
+I thought something like this, but is it correct to identify x1, x2 and x3 coming from Pr even if they are 3 consecutive images? Only the first is chosen randomly, the others are simply the next two.
+EDIT:
+
+EDIT 2:
+I replaced Pr with p_r(x1, x3) and p_r(x1, x2, x3) to reinforce the fact that x2 and x3 are taken after x1, so they depend on the choice of x1. Is it more correct this way?
+
+"
+"['classification', 'anomaly-detection']"," Title: How to classify anomalies between two sound datasets?Body: I have two sound datasets and each one has 80% normal and 20% anomalous data points. The first one is a rock song and the second one is a mellow indie song. I use half of the normal data as a baseline in each dataset. I identify anomalies using isolation forest in each dataset and found 25 anomalies in the first rock song dataset and 12 in the mellow threshold. Now my question is how can I classify an anomaly as a rock song specific one? Do you think building a simple linear regression classifier should work?
+"
+"['objective-functions', 'gradient-descent', 'variational-autoencoder', 'kl-divergence']"," Title: What is the impact of scaling the KL divergence and reconstruction loss in the VAE objective function?Body: Variational autoencoders have two components in their loss function. The first component is the reconstruction loss, which for image data, is the pixel-wise difference between the input image and output image. The second component is the Kullback–Leibler divergence which is introduced in order to make image encodings in the latent space more 'smooth'. Here is the loss function:
+\begin{align}
+\text { loss }
+&=
+\|x-\hat{x}\|^{2}+\operatorname{KL}\left[N\left(\mu_{x}, \sigma_{x}\right), \mathrm{N}(0,1)\right] \\
+&=
+\|x-\mathrm{d}(z)\|^{2}+\operatorname{KL}\left[N\left(\mu_{x}, \sigma_{x}\right), \mathrm{N}(0,1)\right]
+\end{align}
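+As a concrete sketch (not the exact training code used for the experiments below), the two components and an optional weight on the KL term can be written as:
+import torch
+
+def vae_loss(x, x_hat, mu, log_var, kl_weight=1.0):
+    # Pixel-wise reconstruction term plus the KL divergence between
+    # N(mu, sigma) and N(0, 1); names and shapes here are assumptions.
+    recon = torch.sum((x - x_hat) ** 2)
+    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
+    return recon + kl_weight * kl  # kl_weight = 1.0 below, 0.1 in the second scenario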
+I am running some experiments on a dataset of famous artworks using Variational Autoencoders. My question concerns scaling the two components of the loss function in order to manipulate the training procedure to achieve better results.
+I present two scenarios. The first scenario does not scale the loss components.
+
+Here you can see the two components of the loss function. Observe that the order of magnitude of the Kullback–Leibler divergence is significantly smaller than that of the reconstruction loss. Also observe that 'my famous' paintings have become unrecognisable. The image shows the reconstructions of the input data.
+
+In the second scenario I have scaled the KL term with 0.1. Now we can see that the reconstructions are looking much better.
+
+
+Question
+
+- Is it mathematically sound to train the network by scaling the components of the loss function? Or am I effectively excluding the KL term in the optimisation?
+
+- How to understand this in terms of gradient descent?
+
+- Is it fair to say that we are telling the model "we care more about the image reconstructions than 'smoothing' the latent space"?
+
+
+I am confident that my network design (convolutional layers, latent vector size) have the capacity to learn parameters to create proper reconstructions as a Convolutional Autoencoder with the same parameters is able to reconstruct perfectly.
+Here is a similar question.
+Image Reference:
+https://towardsdatascience.com/understanding-variational-autoencoders-vaes-f70510919f73
+"
+"['deep-learning', 'architecture']"," Title: What exactly are deep learning primitives?Body: I came across the concept of "deep learning primitives" from the Nvidia talk Jetson AGX Xavier New Era Autonomous Machines (on slide 44).
+There doesn't seem to be a lot of articles in the community on this concept. I was able to find one definition from here, where it defined deep learning primitives as the "fundamental building blocks of deep networks" like fully connected layers, convolutions layers, etc.
+I was curious to find out if a self-attention layer is a primitive, I came across this OpenDNN issue and one person explained that self-attention layers can be built by other primitives like inner product, concat, etc.
+So my question is what exactly are primitives in deep learning? What makes a convolution layer a primitive and a self-attention layer not a primitive?
+"
+"['neural-networks', 'convolutional-neural-networks', 'training', 'object-detection']"," Title: Will changing the dimension reduction size of a neural network (i.e. SSD ResNet-50) change the overall outcome and accuracy of the model?Body: I am training a convolutional neural network to detect objects (weeds amongst crops, in my case) using TensorFlow. The original dimensions of the raw training photos are 4000 x 3000 pixels, which must be resized to become workable. The idea here is to label objects in the training images (using Label-Img), train the model, and use it to detect weeds in certain situations.
+According to TensorFlow 2 Detection Model Zoo, there are algorithms designed for different speeds, which involves initially resizing the images to a specified dimension. Although this is not a coding question, here is an example of SSD ResNet-50, which initially resizes the input images to 1024 x 1024 pixels:
+model {
+ ssd {
+ num_classes: 1
+ image_resizer {
+ fixed_shape_resizer {
+ height: 1024
+ width: 1024
+ }
+ }
+ feature_extractor {
+ type: "ssd_resnet50_v1_fpn_keras"
+ depth_multiplier: 1.0
+ min_depth: 16
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ weight: 0.00039999998989515007
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ mean: 0.0
+ stddev: 0.029999999329447746
+ }
+ }
+ activation: RELU_6
+ batch_norm {
+ decay: 0.996999979019165
+ scale: true
+ epsilon: 0.0010000000474974513
+ }
+ }
+ override_base_feature_extractor_hyperparams: true
+ fpn {
+ min_level: 3
+ max_level: 7
+ }
+ }
+ box_coder {
+ faster_rcnn_box_coder {
+ y_scale: 10.0
+ x_scale: 10.0
+ height_scale: 5.0
+ width_scale: 5.0
+ }
+ }
+ matcher {
+ argmax_matcher {
+ matched_threshold: 0.5
+ unmatched_threshold: 0.5
+ ignore_thresholds: false
+ negatives_lower_than_unmatched: true
+ force_match_for_each_row: true
+ use_matmul_gather: true
+ }
+ }
+ similarity_calculator {
+ iou_similarity {
+ }
+ }
+ box_predictor {
+ weight_shared_convolutional_box_predictor {
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ weight: 0.00039999998989515007
+ }
+ }
+ initializer {
+ random_normal_initializer {
+ mean: 0.0
+ stddev: 0.009999999776482582
+ }
+ }
+ activation: RELU_6
+ batch_norm {
+ decay: 0.996999979019165
+ scale: true
+ epsilon: 0.0010000000474974513
+ }
+ }
+ depth: 256
+ num_layers_before_predictor: 4
+ kernel_size: 3
+ class_prediction_bias_init: -4.599999904632568
+ }
+ }
+ anchor_generator {
+ multiscale_anchor_generator {
+ min_level: 3
+ max_level: 7
+ anchor_scale: 4.0
+ aspect_ratios: 1.0
+ aspect_ratios: 2.0
+ aspect_ratios: 0.5
+ scales_per_octave: 2
+ }
+ }
+ post_processing {
+ batch_non_max_suppression {
+ score_threshold: 9.99999993922529e-09
+ iou_threshold: 0.6000000238418579
+ max_detections_per_class: 100
+ max_total_detections: 100
+ use_static_shapes: false
+ }
+ score_converter: SIGMOID
+ }
+ normalize_loss_by_num_matches: true
+ loss {
+ localization_loss {
+ weighted_smooth_l1 {
+ }
+ }
+ classification_loss {
+ weighted_sigmoid_focal {
+ gamma: 2.0
+ alpha: 0.25
+ }
+ }
+ classification_weight: 1.0
+ localization_weight: 1.0
+ }
+ encode_background_as_zeros: true
+ normalize_loc_loss_by_codesize: true
+ inplace_batchnorm_update: true
+ freeze_batchnorm: false
+ }
+}
+train_config {
+ batch_size: 64
+ data_augmentation_options {
+ random_horizontal_flip {
+ }
+ }
+ data_augmentation_options {
+ random_crop_image {
+ min_object_covered: 0.0
+ min_aspect_ratio: 0.75
+ max_aspect_ratio: 3.0
+ min_area: 0.75
+ max_area: 1.0
+ overlap_thresh: 0.0
+ }
+ }
+ sync_replicas: true
+ optimizer {
+ momentum_optimizer {
+ learning_rate {
+ cosine_decay_learning_rate {
+ learning_rate_base: 0.03999999910593033
+ total_steps: 100000
+ warmup_learning_rate: 0.013333000242710114
+ warmup_steps: 2000
+ }
+ }
+ momentum_optimizer_value: 0.8999999761581421
+ }
+ use_moving_average: false
+ }
+ fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED"
+ num_steps: 100000
+ startup_delay_steps: 0.0
+ replicas_to_aggregate: 8
+ max_number_of_boxes: 100
+ unpad_groundtruth_tensors: false
+ fine_tune_checkpoint_type: "classification"
+ use_bfloat16: true
+ fine_tune_checkpoint_version: V2
+}
+train_input_reader {
+ label_map_path: "PATH_TO_BE_CONFIGURED"
+ tf_record_input_reader {
+ input_path: "PATH_TO_BE_CONFIGURED"
+ }
+}
+eval_config {
+ metrics_set: "coco_detection_metrics"
+ use_moving_averages: false
+}
+eval_input_reader {
+ label_map_path: "PATH_TO_BE_CONFIGURED"
+ shuffle: false
+ num_epochs: 1
+ tf_record_input_reader {
+ input_path: "PATH_TO_BE_CONFIGURED"
+ }
+}
+
+Because I will be labeling many pictures in the future, I need to decide on a dimension to resize my original ones to (literature review says 1100 x 1100 has been used in previous projects).
+If I were to change the image resizer in the code above to 1100 x 1100, for example, would that have any effect on model accuracy/training loss? Would it even run? I'm fairly new to this, so any insights on this would be greatly appreciated!
+Note: I am using a NVIDIA GPU, so that helps speed the process quite a bit. Google Colab also can be used.
+"
+"['deep-learning', 'convolutional-neural-networks', 'object-detection', 'image-processing']"," Title: How to verify classification model trained on classification dataset on a detection dataset for classification purpose?Body: I am working on a problem that involves two tasks - detection and classification. There is no single dataset for both tasks. I am training two models, separate on detection dataset and another on classification dataset. I use the images from the detection dataset as input and get classification predictions on top of detected bounding boxes.
+Dataset description :
+
+- Classification - Image of the single object (E.g. Car) in the center with a classification label.
+- Detection - Image with multiple objects (E.g. 4 Cars) with bounding box annotations.
+
+Task - Detect objects(e.g. cars) from detection datasets and classify them into various categories.
+How do I verify whether the classification model trained on the classification dataset works on images from the detection dataset (in terms of classification accuracy)?
+I cannot manually label the images from the detection dataset with individual class labels (this would need expert domain knowledge).
+How do I verify my classification model?
+Is there any technique to do this, like domain transfer or some weakly-supervised method?
+"
+"['deep-learning', 'convolutional-neural-networks', 'computer-vision', 'objective-functions', 'face-recognition']"," Title: What is a ""center loss""?Body: I have seen that a center loss is beneficial in computer vision, especially in face recognition. I have tried to understand this concept from the following material
+
+- A Discriminative Feature Learning Approach for Deep Face Recognition
+- https://www.slideshare.net/JisungDavidKim/center-loss-for-face-recognition
+
+However, I could not understand the concept clearly. If someone can explain with the help of an example, that would be appreciated.
+"
+"['machine-learning', 'definitions', 'ai-field', 'case-based-reasoning', 'instance-based-learning']"," Title: Is case-based reasoning a machine learning technique?Body: A few years ago when I was in university, I had implemented (for my final year project) an Itinerary Planning System, which incorporates an AI technique called "case-based reasoning".
+Is case-based reasoning a machine learning technique or an AI technique (that is not machine learning)?
+"
+"['recurrent-neural-networks', 'hyperparameter-optimization', 'word-embedding', 'hyper-parameters', 'sentiment-analysis']"," Title: Why is an embedding of dimension 400 enough to represent 70000 words?Body: I am learning PyTorch on Udacity. In lesson 8, section 11: Training the Model, the instructor writes:
+
+Then I have my embedding and hidden dimension. The embedding dimension is just a smaller representation of my vocabulary of 70k words and I think any value between like 200 and 500 or so would work, here. I've chosen 400. Similarly, for our hidden dimension, I think 256 hidden features should be enough to distinguish between positive and negative reviews.
+
+There are more than 70000 different words. How could those 70000+ unique words be represented by embeddings of just 400 dimensions? What does an embedding look like? Is it a number?
+Moreover, why would 256 hidden features be enough?
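+For reference, such an embedding layer is essentially a lookup table of shape (vocabulary size, embedding dimension); here is a minimal PyTorch sketch with the numbers from the lesson (the token ids are made up):
+import torch
+import torch.nn as nn
+
+embedding = nn.Embedding(num_embeddings=70000, embedding_dim=400)
+word_ids = torch.tensor([[12, 7, 431]])  # hypothetical token ids for three words
+vectors = embedding(word_ids)
+print(vectors.shape)  # torch.Size([1, 3, 400]): each word maps to a 400-dim vector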
+"
+"['deep-learning', 'convolutional-neural-networks', 'classification', 'multi-label-classification', 'image-recognition']"," Title: CNN to detect presence/absense of label on images with mixed labelsBody: Here's my problem: I work with medical image classification, and currently I have 3 classes:
+
+- class A: images with lesion 1 only; and images with lesion 1 and N other lesions
+- class B: images with 2 other lesions (no lesion 1)
+- class C: images with no lesion
+
+The goal is to classify into "lesion 1", "other lesion", "no lesion". I'd like to know some approach/method/paper/clue for this classification. I think the presence of other lesions on both class A and B is confusing the model (the validation accuracy and f1-score are very low).
+Thanks in advance.
+"
+"['convolutional-neural-networks', 'convolution', 'linear-algebra', 'principal-component-analysis', 'convolutional-layers']"," Title: Is the 3d convolution associative given that it can be represented as matrix multiplication?Body: I'm trying to understand if a 3D convolution of the sort performed in a convolutional layer of a CNN is associative. Specifically, is the following true:
+$$
+X \otimes(W \cdot Q)=(X \otimes W) \cdot Q,
+$$
+where
+
+- $\otimes$ is a convolution,
+- $X$ is a 3D input to a convolution layer,
+- $W$ is a 4D weights matrix reshaped into 2 dimensions,
+- and $Q$ is a PCA transformation matrix.
+
+To elaborate: say I take my 512 convolutional filters of shape ($3 \times 3 \times 512$), flatten across these three dimensions to give a ($4096 \times 512$) matrix $W$, and perform PCA on that matrix, reducing it to say dimensions of ($4096 \times 400$), before reshaping back into ($400$) 3d filters and performing convolution.
+Is this the same as when I convolve $X$ with $W$, and then perform PCA on that output using the same transformation matrix as before?
+I know that matrix multiplication is associative i.e. $A(BC)=(AB)C$, and I have found that convolution operations can be rewritten as matrix multiplication.
+So my question is, if I rewrite the convolution as matrix multiplication, is it associative with respect to the PCA transformation (another matrix multiplication)?
+For example, does $X' \cdot (W' \cdot Q) = (X' \cdot W') \cdot Q$, where $X'$ and $W'$ represent the matrices necessary to compute the convolution in matrix multiplication form?
+To try and figure it out, I looked to see how convolutions could be represented as matrix multiplications, since I know matrix multiplications are associative. I've seen a few posts/sites explaining how 2D convolutions can be rewritten as matrix multiplication using Toeplitz matrices (e.g. in this Github repository or this AI SE post), however, I'm having trouble expanding on it for my question.
+I've also coded out simple convolutions with a $W$ matrix of $4 \times 3$, an $X$ matrix of $4 \times 2$, and using sklearn
's PCA to reduce $W$ to $4 \times 2$. If I do this both ways, the output is not the same, leading me to think this kind of associativity does not exist. But how can I explain this with linear algebra?
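+To make the matrix-multiplication version of the question concrete, here is a tiny check with random matrices and a fixed projection matrix $Q$ (this uses plain matrix products only, not the sklearn PCA experiment above):
+import numpy as np
+
+np.random.seed(0)
+X = np.random.randn(4, 6)  # stands in for the flattened input patches
+W = np.random.randn(6, 5)  # stands in for the flattened filters
+Q = np.random.randn(5, 3)  # a fixed projection matrix
+
+left = X @ (W @ Q)
+right = (X @ W) @ Q
+print(np.allclose(left, right))  # True: plain matrix products are associative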
+Can anyone explain whether this is or is not the case, with a linear algebra explanation?
+"
+"['reinforcement-learning', 'sarsa']"," Title: Intuitively, how does it make sense to take an action $A'$ when the environment already ended?Body: The update equation for SARSA is $Q(S,A) = R + \gamma Q(S',A')$. Consider this: I take an action $A$ that leads to the terminal state. Now my $S'$ would be one of the terminal states. So...
+
+- Intuitively, how does it make sense to take an action $A'$ when the environment already ended? Or is this something you just do anyway?
+
+- Once a terminal state-action pair is reached, you update the previous state-action pair and then start the game loop all over again. But this means that the terminal state-action pair ($Q(S',A')$ in my example) is never updated. So, if your initial estimate of $Q(S',A')$ was wrong, you would never be able to fix it which would be very problematic. (And you can't set all the terminal values to zero because you are using function approximators)
+
+
+So, how do I resolve these issues?
+"
+"['deep-learning', 'training', 'object-detection']"," Title: When training deep learning models for object detection in images, do you need a large number of images, or a large number of training samples?Body: I am training a deep learning model for object detection. The consensus is that the more images that you have, the better the results will be. All the tutorials that I have seen say that more images are key.
+I am labeling objects in my images with Label-Img, which provides the algorithm with specific training samples on the images. For my images, I am using photos with dimensions of 1100 x 1100 pixels. In my case, I could generate anywhere between 50-100 high-quality training samples per image. For example:
+
+In cases such as this where large numbers of training samples can be generated from a single image, do you really need several hundred images? Or can you lessen the number of images because of the number of training samples?
+"
+"['graphs', 'clustering', 'linear-algebra', 'graph-theory', 'spectral-analysis']"," Title: What exactly is the eigenspace of a graph (in spectral clustering)?Body: When we find the eigenvectors of a graph (say in the context of spectral clustering), what exactly is the vector space involved here? Of what vector space (or eigenspace) are we finding the eigenvalues of?
+"
+"['reinforcement-learning', 'environment']"," Title: How can reinforcement learning be applied when the goal location or environment is unknown?Body: I am studying RL. I was thinking whether a new state value or the observation is provided by the environment before the agent actually implements the action.
+Take the maze problem as an example. Each state consists of the information of all available cells, provided by the environment. But what if the environment is unknown? For example, there is a maze with an unknown destination cell. The agent needs to find the destination cell. The state is 1 or 0, meaning the destination has been reached or not. But the environment, which is the maze, can only provide the state at cell $i$ (0 or 1) once the agent actually reaches cell $i$.
+Can this still be solved by RL? I am confused about the environment setup.
+"
+"['reinforcement-learning', 'alphazero', 'alphago-zero', 'alphago', 'muzero']"," Title: Is which sense was AlphaGo ""just given a rule book""?Body: I was told that AlphaGo (or some related program) was not explicitly taught even the rules of Go -- if it was "just given the rulebook", what does this mean? Literally, a book written in English to read?
+"
+"['reinforcement-learning', 'dynamic-programming']"," Title: Is it possible to retrieve the optimal policy from the state value function?Body: One can easily retrieve the optimal policy from the action value function but how about obtaining it from the state value function?
+"
+"['neural-networks', 'deep-neural-networks', 'activation-functions', 'performance', 'relu']"," Title: How is the performance of a model affected by adding a ReLU to fully connected layers?Body: How significant is adding a ReLU to fully connected (FC) layers? Is it necessary, or how is the performance of a model affected by adding ReLU to FC layers?
+"
+"['convolutional-neural-networks', 'backpropagation', 'convolution', 'filters']"," Title: In CNNs, why do we sum the filter derivatives w.r.t the loss function to get the final gradient?Body: In a Convolutional Neural Network, unlike the fully connected layers, the same filter is used multiple times on the input while convolving - so during backpropagation, we get multiple derivatives for the filter parameters w.r.t the loss function. My question is, why do we sum all the derivatives to get the final gradient? Because, we don't sum the output of the convolution during forward pass. So, isn't it more sensible to average them? What is the intuition behind this?
+PS: although I said CNN, what I'm actually doing is correlation for simplicity of learning.
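+For concreteness, here is a tiny 1D correlation sketch (made-up numbers) showing how the filter gradient accumulates one contribution per position the filter was applied at:
+import numpy as np
+
+x = np.array([1.0, 2.0, -1.0, 0.5])  # input
+w = np.array([0.3, -0.2])            # filter, reused at every position
+y = np.array([np.dot(x[i:i + 2], w) for i in range(3)])  # forward correlation
+
+dy = np.array([0.1, -0.4, 0.2])  # assumed upstream gradient dL/dy
+
+dw = np.zeros_like(w)
+for i in range(3):
+    dw += dy[i] * x[i:i + 2]  # the same w was used at each position, so the contributions add up
+print(dw)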
+"
+"['search', 'branching-factors', 'homework', 'space-complexity', 'iddfs']"," Title: What is the space complexity of iterative deepening search?Body: When using iterative deepening, is the space complexity, $O(d)$, where $b$ is the branching factor and $d$ the length of the optimal path (assuming that there is indeed one)?
+"
+"['search', 'breadth-first-search', 'branching-factors', 'space-complexity']"," Title: What is the space complexity of breadth-first search?Body: When using the breadth-first search algorithm, is the space complexity $O(b^d)$, where $b$ is the branching factor and $d$ the length of the optimal path (assuming that there is indeed one)?
+"
+"['neural-networks', 'machine-learning', 'activation-functions', 'universal-approximation-theorems']"," Title: Can most of the basic machine learning models be easily represented as simple neural network architectures?Body: I am currently studying the textbook Neural Networks and Deep Learning by Charu C. Aggarwal. In chapter 1.2.1 Single Computational Layer: The Perceptron, the author says the following:
+
+Different choices of activation functions can be used to simulate different types of models used in machine learning, like least-squares regression with numeric targets, the support vector machine, or a logistic regression classifier. Most of the basic machine learning models can be easily represented as simple neural network architectures.
+
+I remember reading something about it being mathematically proven that neural networks can approximate any function, and therefore any machine learning method, or something along these lines. Am I remembering this correctly? Would someone please clarify my thoughts?
+"
+"['reinforcement-learning', 'dqn', 'policies', 'off-policy-methods', 'epsilon-greedy-policy']"," Title: Off-policy full-random training in easy-to-explore environmentBody: Let say we are in an environment where a random agent can easily explore all the states of an environment (for example: tic-tac-toe).
+In those environments, using off-policy algorithm, is it a good practice to train using exclusively random actions, instead or epsilon-greedy, Boltzmann or whatever ?
+For my mind, it seems logical, but I have never heard about it before.
+"
+"['search', 'branching-factors', 'breadth-first-search', 'space-complexity', 'bidirectional-search']"," Title: What is the space complexity of bidirectional search?Body: Is the space complexity of the bidirectional search, where the breadth-first search is used for both the forward and backward search, $O(b^{d/2})$, where $b$ is the branching factor and $d$ the length of the optimal path (assuming that there is indeed one)?
+"
+"['comparison', 'optimization', 'exploration-exploitation-tradeoff', 'meta-heuristics', 'moth-flame-optimization']"," Title: What is the difference between exploitation and exploration in the context of optimization?Body: In the paper Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm (2015, published in Knowledge-Based Systems)
+
+The test functions are divided to three groups: unimodal, multi-modal, and composite. The unimodal functions ($F1 - F7$) are suitable for benchmarking the exploitation of algorithms since they have one global optimum and no local optima. In contrary, multi-modal functions ($F8 - F13$) have a massive number of local optima and are helpful to examine exploration and local optima avoidance of algorithms
+
+I imagine that exploration means searching for something in unknown regions away from a starting point, whereas exploitation would search more around the starting (or current) point.
+Is it more or less that? What else differentiates the two concepts?
+"
+"['neural-networks', 'deep-learning', 'papers', 'generative-adversarial-networks', 'stochastic-gradient-descent']"," Title: In the MINE paper, why is $\hat{G}_B$ biased, and how does the exponential moving average reduce the bias?Body: While reading the Mutual Information Neural Estimation (MINE) paper [1] I came across section 3.2 Correcting the bias from the stochastic gradients. The proposed method requires the computation of the gradient
+$$\hat{G}_B = \mathbb{E}_B[\nabla_{\theta}T_{\theta}] - \frac{\mathbb{E_B}[\nabla_{\theta}T_{\theta}e^{T_{\theta}}]}{\mathbb{E}_B[e^{T_{\theta}}]},$$
+where $\mathbb{E}_B$ denotes the expectation operation w.r.t. a minibatch $B$, and $T_{\theta}$ is a neural network parameterized by $\theta$. The authors claim that this gradient estimate is biased and that the bias can be reduced by simply applying an exponential moving average filter.
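+For reference, the moving-average correction referred to above amounts to replacing the per-minibatch estimate of the denominator $\mathbb{E}_B[e^{T_{\theta}}]$ with a running estimate, roughly like this (the smoothing rate is made up):
+ema = None
+alpha = 0.01  # made-up smoothing rate
+
+def update_ema(batch_mean_exp_t):
+    # batch_mean_exp_t is the current minibatch estimate of E_B[exp(T_theta)].
+    global ema
+    ema = batch_mean_exp_t if ema is None else (1 - alpha) * ema + alpha * batch_mean_exp_t
+    return ema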
+Can someone give me a hint to understand these two points:
+
+- Why is $\hat{G}_B$ biased, and
+- How does the exponential moving average reduce the bias?
+
+"
+"['search', 'proofs', 'uniform-cost-search', 'bidirectional-search', 'optimality']"," Title: If uniform cost search is used for bidirectional search, is it guaranteed the solution is optimal?Body: If uniform cost search is used for both the forward and backward search in bidirectional search, is it guaranteed the solution is optimal?
+"
+"['reinforcement-learning', 'multi-armed-bandits', 'thompson-sampling']"," Title: Multi-armed bandits: reducing stochastic multi-armed bandits to bernoulli banditsBody: Agrawal and Goyal (http://proceedings.mlr.press/v23/agrawal12/agrawal12.pdf page 3) discussed how we can extend Thompson sampling for bernoulli bandits to Thompson sampling for stochastic bandits in general by simply Bernoulli sampling with the received reward $r_t \in [0,1]$.
+My question is whether such an extension from Bernoulli bandits to general stochastic bandits holds in general and not only for Thompson sampling. E.g., can I prove properties such as lower bounds on regret for Bernoulli bandits and always transfer these results to general stochastic bandits?
+"
+"['neural-networks', 'machine-learning', 'reinforcement-learning', 'deep-learning', 'classification']"," Title: Where can I find pre-trained agents able to play games with multiple stages like exploration, dialog, combat?Body: My goal is to create an ML model to be able to classify different game stages, e.g., dialog with a non-player character, exploration, combat with enemy, in-game menu etc.
+In order to do that, I am looking for an agent pre-trained on such a game. I am intending to develop a model using this pre-trained agent to produce a data set (frames-labels) and finally I will use that data set to train a model to classify those different stages.
+I could only find a pre-trained model for Doom; however, it is not very appropriate for my case because it does not have different game stages (it is merely based on running & shooting).
+Training my own reinforcement learning agent is a whole other workload in terms of both the time and the GPU resources such a game needs.
+Any idea could help me a lot. Thanks!
+"
+"['machine-learning', 'function-approximation']"," Title: Can models get 100% accuracy on solved games?Body: I had a question today that I feel it must have an answer already, so I'm shopping around.
+If we ask a model to learn the binary OR function, we get perfect accuracy with every model (as far as I know).
+If we ask a model to learn the XOR function we get perfect accuracy with some models and an approximation with others (e.g. perceptrons).
+This is due to the way perceptrons are designed -- it's a surface the algorithm can't learn. But again, with a multi-layered neural network, we can get 100% accuracy.
+So can we perfectly learn a solved game as well?
+Tic-tac-toe is a solved game; an optimal move exists for both players in every state of the game. So in theory our model could learn tic-tac-toe as well as it could a logic function, right?
+"
+"['search', 'uniform-cost-search', 'completeness']"," Title: Why is the completeness of UCS guaranteed only if the cost of every step exceeds some small positive constant?Body: I was reading Artificial Intelligence: A Modern Approach 3rd Edition, and I have reached to the UCS algorithm.
+I was reading the proof that UCS is complete.
+The book states that:
+
+Completeness is guaranteed provided the cost of every step exceeds some small positive constant $\epsilon .$
+
+And that's because UCS will be stuck if there is a path with an infinite
+sequence of zero-cost actions.
+Why must the step cost exceed $\epsilon$? Isn't it enough for it to be greater than zero?
+"
+"['natural-language-processing', 'recurrent-neural-networks', 'machine-translation', 'google-translate', 'seq2seq']"," Title: How is Google Translate able to convert texts of different lengths?Body: According to my experience with Tensorflow and many other frameworks, neural networks have to have a fixed shape for any output, but how does Google translate convert texts of different lengths?
+"
+"['neural-networks', 'deep-learning', 'training', 'backpropagation', 'gradient-descent']"," Title: Why should the weight updates be proportional to input?Body: I'm reading the book Grokking Deep Learning. Regarding weight updates during training, it has the following code and explanation:
+direction_and_amount = (pred - goal_pred) * input
+weight = weight - direction_and_amount
+
+It explains the motivation behind multiplying the prediction difference with input using three cases: scaling, negative reversal and stopping.
+
+What are scaling, negative reversal, and stopping? These three attributes have the combined effect of translating the pure error into the absolute amount you want to change weight. They do so by addressing three major edge cases where the pure error isn’t sufficient to make a good modification to weight.
+
+These three cases are:
+
+- Negative input,
+- zero input and
+- the value of input (scaling).
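+Plugging small, made-up numbers into direction_and_amount shows the three effects:
+pred, goal_pred = 0.9, 0.5  # pure error = 0.4
+
+for input_value in (2.0, -2.0, 0.0):  # scaling, negative reversal, stopping
+    direction_and_amount = (pred - goal_pred) * input_value
+    print(input_value, direction_and_amount)  # 0.8, -0.8, 0.0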
+
+Negative and zero cases are very obvious. However, I didn't understand scaling. Regarding scaling, there's the following explanation:
+
+Scaling is the third effect on the pure error caused by multiplying it by input. Logically, if the input is big, your weight update should also be big. This is more of a side effect, because it often goes out of control. Later, you’ll use alpha to address when that happens.
+
+But I didn't understand it. Considering the linear regression problem, why should the weight update be big if the input is big?
+"
+"['overfitting', 'variational-autoencoder', 'restricted-boltzmann-machine']"," Title: How can I reconstruct sparse one-hot encodings using an RBM?Body: I am currently working with a categorical-binary RBM, where there are 50 categorical visible units and 25 binary hidden units. The categorical visible units are expressed in one-hot encoding format, such that if there is 5 categories, then the visible units are expressed as a $50 \times 5$ array, where each row is the one-hot encoding of a category from 1 to 5.
+Ideally, the RBM should be able to reconstruct the visible units. However, since the visible units are in one-hot encoding, then the visible units array contains a lot of zeros. This means the RBM quickly learns to guess all zeros for the entire array to minimize the reconstruction loss. How can I force the RBM to not do this and to instead guess 1's where the category occurs and 0's otherwise?
+Note that I would still have this problem with a regular autoencoder.
+"
+"['deep-learning', 'convolutional-neural-networks', 'autoencoders', 'image-segmentation']"," Title: How can I improve the performance on unseen data for semantic segmentation using an auto-encoder?Body: I am using simple autoencoders for the task of semantic segmentation on the VOC2012 dataset. I am currently using a simple autoencoder based model. It is trained on adam optimizer with cross-entropy loss on 21 classes 0 - 20. You can find the code here: https://github.com/parthv21/VOC-Semantic-Segmentation
+My Architecture:
+ self.encoder = nn.Sequential(
+ nn.Conv2d(3, 64, 3, stride=2, padding=1),
+ nn.LeakyReLU(),
+ nn.Conv2d(64, 128, 3, stride=2, padding=1),
+ nn.LeakyReLU(),
+ nn.Conv2d(128, 256, 3, stride=2, padding=1),
+ nn.LeakyReLU(),
+ nn.Conv2d(256, 512, 3, stride=2, padding=1),
+ )
+
+ self.decorder = nn.Sequential(
+ nn.ConvTranspose2d(512, 256, 3, stride=2, padding=1, output_padding=1),
+ nn.LeakyReLU(),
+ nn.ConvTranspose2d(256, 128, 3, stride=2, padding=1, output_padding=1),
+ nn.LeakyReLU(),
+ nn.ConvTranspose2d(128, 64, 3, stride=2, padding=1, output_padding=1),
+ nn.LeakyReLU(),
+ nn.ConvTranspose2d(64, 21, 3, stride=2, padding=1, output_padding=1),
+ )
+
+After 200 iterations I am getting the following output
+Training Data
+
+Validation Data
+
+Is a more complex architecture the only way I can fix this problem? Or can I fix this with a different loss function like dice or more regularization? The same issue happened after training for 100 iterations. So the model is not generalizing for some reason.
+Edit
+I also tried adding class weights to CrossEntropy such that w_label = 1 - frequency(label).
+The idea was that label 0 (the background), which is more common, would contribute less to the loss, and the other, rarer labels would contribute more to the loss. But that did not help:
+
+Another thing I tried was ignoring label 0 for background in the loss. But that created horrible results even for training data:
+
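+For reference, the two variants described above were set up roughly like this (the class weights shown are placeholders, not the exact values used):
+import torch
+import torch.nn as nn
+
+# Variant 1: down-weight the frequent background class.
+class_weights = torch.tensor([0.05] + [1.0] * 20)
+criterion_weighted = nn.CrossEntropyLoss(weight=class_weights)
+
+# Variant 2: drop the background label from the loss entirely.
+criterion_ignore_bg = nn.CrossEntropyLoss(ignore_index=0)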
+"
+"['superintelligence', 'feature-selection', 'feature-engineering']"," Title: Is automated feature engineering a path to general AI?Body: I recently came across the featuretools
package, which facilitates automated feature engineering. Here's an explanation of the package:
+https://towardsdatascience.com/automated-feature-engineering-in-python-99baf11cc219
+
+Automated feature engineering aims to help the data scientist by
+automatically creating many candidate features out of a dataset from
+which the best can be selected and used for training.
+
+I only have limited experience with ML/AI techniques, but general AI is something that I'd been thinking about for a while before exploring existing ML techniques. One idea that kept popping up was the idea of analyzing not just raw data for patterns but derivatives of data, not unlike what featuretools
can do. Here's an example:
+
+It's not especially difficult to see the figure above as two squares, one that is entirely green and one with a blue/green horizontal gradient. This is true despite the fact that the gradient square is not any one color and its edge is the same color as the green square (i.e., there is no hard boundary).
+However, let's say that we calculate the difference between each pixel and the pixel to its immediate left. Ignoring for a moment that RGB is 3 separate values, let's call the difference between each pixel column in the gradient square X
. The original figure is then transformed into this, essentially two homogenous blocks of values. We could take it one step further to identify a hard boundary (applying a similar left-to-right transformation again) between the two squares.
+
+Once a transformation is performed, there should be some way to assess the significance of the transformation output. This is a simple and clean example where there are two blocks of homogenous values (i.e., the output is clearly not random). If it's true that our minds use any kind of similar transformation process, the number of transformations that we perform would likely be practically countless, even in brief instances of perception.
+Ultimately, this transformation process could facilitate finding the existence of order in data. Within this framework, perhaps "intelligence" could be defined simply as the ability to detect order, which could require applying many transformations in a row, a wide variety of types of transformations, an ability to apply transformations with a high probability of finding something significant, an ability to assess significance, etc.
+Just curious if anyone has thoughts on this, if there are similar ideas out there beyond simple automated feature engineering, etc.
+"
+"['reinforcement-learning', 'policy-gradients', 'value-functions', 'off-policy-methods', 'soft-actor-critic']"," Title: In Soft Actor Critic, why is the action sampled from current policy instead of replay buffer on value function update?Body: While reading the original paper of Soft Actor Critic, I came across on page number 5, under equation (5) and (6)
+$$
+J_{V}(\psi)=\mathbb{E}_{\mathbf{s}_{t} \sim \mathcal{D}}\left[\frac{1}{2}\left(V_{\psi}\left(\mathbf{s}_{t}\right)-\mathbb{E}_{\mathbf{a}_{t} \sim \pi_{\phi}}\left[Q_{\theta}\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right)-\log \pi_{\phi}\left(\mathbf{a}_{t} \mid \mathbf{s}_{t}\right)\right]\right)^{2}\right]
+\tag{5}\label{5}
+$$
+$$
+\hat{\nabla}_{\psi} J_{V}(\psi)=\nabla_{\psi} V_{\psi}\left(\mathbf{s}_{t}\right)\left(V_{\psi}\left(\mathbf{s}_{t}\right)-Q_{\theta}\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right)+\log \pi_{\phi}\left(\mathbf{a}_{t} \mid \mathbf{s}_{t}\right)\right)
+\tag{6}\label{6}
+$$
+The following quote:
+
+where the actions are sampled according to the current policy, instead of the replay buffer
+
+This appears in the context of deriving the (estimated) gradient of the squared residual error of the value function (Equation 5 in the paper).
+I'm having a hard time understanding why they use an action sampled from the current policy instead of one from the replay buffer. My intuition tells me that this is because SAC is an off-policy reinforcement learning algorithm, and Q-learning uses $\max Q$ in its one-step Q-value update (to keep it off-policy), but why would sampling one action from the current policy still keep it off-policy?
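+To make the two sampling sources concrete, Equation (5) is typically computed along these lines (a sketch; all tensors below are placeholders with made-up shapes standing in for the buffer, policy, Q-network and V-network):
+import torch
+
+states = torch.randn(32, 8)  # batch of s_t drawn from the replay buffer D
+dist = torch.distributions.Normal(torch.zeros(32, 2), torch.ones(32, 2))  # stands in for pi_phi(.|s_t)
+actions = dist.sample()                          # a_t re-sampled from the current policy
+log_probs = dist.log_prob(actions).sum(-1)       # log pi_phi(a_t | s_t)
+q_values = torch.randn(32)                       # placeholder for Q_theta(s_t, a_t)
+v_values = torch.randn(32, requires_grad=True)   # placeholder for V_psi(s_t)
+target = q_values - log_probs                    # Q - log pi
+value_loss = 0.5 * ((v_values - target.detach()) ** 2).mean()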
+I first asked a friend of mine (researcher in RL) and the answer I got was
+
+"If the action is sampled with the current policy given any state the update is on-policy."
+
+I've checked OpenAI's Spinning Up explanation of SAC, but it only makes it clearer which action is sampled from the current policy and which one comes from the replay buffer; it does not specify why.
+Does this have anything to do with the stochastic policy? Or the entropy term in the update equation?
+So I'm still quite confused. Link/references to explanation are also appreciated!
+"
+"['convolutional-neural-networks', 'classification', 'image-recognition']"," Title: What are ways to learn a classifier for labelling a series of images rather than individual images?Body: ... and how do I reword my question in the title?
+I have a dataset where each "instance" has a "series" of multiple photos taken from different angles. I need to classify each instance as a 0 or a 1.
+A little over half of the images in each series probably do not contain the information required for a classification. Only some of the images are taken from an angle where the relevant clue is visible.
+For training I have many such series and they are labelled at a series level, but not at an image level.
+My current approach is to use a standard architecture like ResNet. I pass each image through the CNN then I combine the features by averaging, then put that through a sigmoid activated layer. I'm concerned that the network won't be able to learn because the "clue" is so buried among everything else.
+Questions:
+
+- Is there a better/standard way to do this? Would going RNN help? What if the images are not really in a meaningful sequence?
+
+- If my way is good, is arithmetic averaging the right way to combine the features?
+
+- Would it be worth spending the time to label each image as "has positive clue"/"does not have positive clue"? Should I add a "not possible to tell"? What if it is possible to tell but it's just humans that can't tell?
+
+
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'double-dqn']"," Title: Why do we minimise the loss between the target Q values and 'local' Q values?Body: I have a question regarding the loss function of target networks and current (online) networks. I understand the action value function. What I am unsure about is why we seek to minimise the loss between the qVal
for the next state in our target network and the current state in the local network. The Nature paper by Mnih et al. is well explained, however, I am not getting from it the purpose of the above. Here is my training portion from a script I am running:
+for state, action, reward, next_state, done in minibatch:
+ target_qVal = self.model.predict(state)
+
+ # print(target_qVal)
+
+ if done:
+ target_qVal[0][action] = reward #done
+ else:
+ # predicted q value for next state from target model
+ pred = self.target_model.predict(next_state)[0]
+ target_qVal[0][action] = reward + self.gamma * np.amax(pred)
+
+ # indentation position?
+ self.model.fit(np.array(state),
+ np.array(target_qVal),
+ batch_size=batch_size,
+ verbose=0,
+ shuffle=False,
+ epochs=1)
+
+I understand that the expected return is the immediate reward plus the cumulative sum of discounted rewards looking into the future $s'$ (correct me if I'm wrong in my understanding) when following a given policy.
+My fundamental misunderstanding is the loss equation:
+$$L = \left[r + \gamma \max_{a'} Q(s',a'; \theta') - Q(s,a; \theta)\right]^2,$$
+where $\theta'$ and $\theta$ are the weights of the target and online neural networks, respectively.
+Why do we aim to minimize the Q value of the next state in the target model and the Q value of the current state in the online model?
+A bonus question would be, in order to collect $Q(s,a)$ values for dimensionality reduction (as in Mnih et al t-sne plot), would I simply collect the target_qVal[0]
values during training and feed them into a list after each step to accumulate the Q values over time?
+"
+"['objective-functions', 'generative-adversarial-networks']"," Title: Explain the difference in graphical patterns between discriminator fake loss and generator loss in GANBody: In GAN (generative adversarial networks), let us take "binary cross-entropy" as the loss function for discriminator $$(overall \; loss = -\sum log(D(x_i)) -\sum log(1-D(G(z_i))) $$
+$$ where \; x_i = real \; image \; pixel \; matrix$$
+$$ and \; z_i = a \; vector \; from \; latent \; space$$.
+Let us define discriminator real loss and fake loss:
+$$ d_{fake \; loss} = -\sum log(1-D(G(Z)))$$
+$$ d_{real \; loss} = -\sum log(D(x))$$
+where $d_{fake \; loss}$ denotes the discriminator loss against fake images,
+and $d_{real \; loss}$ denotes the discriminator loss against real images.
+Generator Loss :
+$$ g_{loss} = -\sum log(D(G(z_i)))$$
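+As a small numeric illustration of how these three quantities are computed (the discriminator outputs below are made up):
+import numpy as np
+
+d_real = np.array([0.9, 0.8])  # D(x) on a tiny batch of real images
+d_fake = np.array([0.3, 0.1])  # D(G(z)) on a tiny batch of fake images
+
+d_real_loss = -np.sum(np.log(d_real))        # -sum log D(x)
+d_fake_loss = -np.sum(np.log(1.0 - d_fake))  # -sum log(1 - D(G(z)))
+g_loss = -np.sum(np.log(d_fake))             # -sum log D(G(z))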
+Since the functions are similar, we should expect some similarity in the graphical patterns (i.e. since none of the functions is inherently oscillatory, I expect that if one comes out oscillatory, the other one should as well). But if you refer to chapter 10 of the book "Generative Adversarial Networks with Python" by Jason Brownlee, we find some differences. The following are the graphs published in the book:
+
+
+Can anyone explain the difference in the plots between discriminator fake loss and generator loss (mathematically)?
+"
+"['probability', 'fuzzy-logic']"," Title: How to calculate probability from fuzzy membership grade?Body: Suppose we have the fuzzy membership grade for a person $x$ with a set $S = \text{set of tall people}$ be $0.9$, i.e. $\mu_S(x)=0.9$.
+Does this mean that the probability of person $x$ being tall is $0.9$?
+"
+"['convolutional-neural-networks', 'dqn', 'double-dqn']"," Title: DQN rgb input channels problem using pytorchBody: I've been trying to learn about CNN's and reinforcement learning and I found this project to play with: https://github.com/adityajn105/flappy-bird-deep-q-learning
+I've been trying to change the code to work with RGB input instead of grayscale. The pre-processing part is fine, but I'm having a problem with state and next_state, I guess because they are deques: when the deque is filled, the shape is (4, H, W), because a frame is appended 4 times (4 frames). The problem I'm having is that when I append RGB frames to the deque, it becomes something like (4, H, W, 3). I tried some things that came to mind and that I googled and read about online, but I still had problems with the dimensions. What should be done so that it works with RGB instead of grayscale?
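+To illustrate the shape issue (hypothetical 84x84 frames):
+import numpy as np
+
+gray_frames = [np.zeros((84, 84)) for _ in range(4)]
+rgb_frames = [np.zeros((84, 84, 3)) for _ in range(4)]
+
+print(np.stack(gray_frames).shape)  # (4, 84, 84)   -> usable as 4 input channels
+print(np.stack(rgb_frames).shape)   # (4, 84, 84, 3) -> extra colour dimension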
+"
+"['bias-variance-tradeoff', 'learning-curve']"," Title: Bias-variance tradeoff and learning curves for non-deep learning modelsBody: I am following a course on machine learning and am confused about the bias-variance trade-off relationship
+to learning curves in classification.
+I am seeing some conflicting information online on this.
+The scikit-learn learning curve looks like the top 2 curves in the learning-curve figure from the scikit-learn documentation (source: scikit-learn.org).
+What I don't understand is: how do we read bias from this? If we look at this image
+where each blue dot is a model. I think the bias would be the green curve being high. But high bias
+indicates underfitting, right? So shouldn't the red curve be high then too?
+
+High variance would be the gap between green and red, is this correct?
+My question is how do the red and green curves relate to underfitting and overfitting,
+and how do learning curves fit with the figure with the concentric circles? Is bias purely related to the red curve, or is a model with a low validation score and high train score also a high bias model?
+"
+"['implementation', 'variational-autoencoder', 'cross-entropy', 'evidence-lower-bound', 'categorical-crossentropy']"," Title: How does the implementation of the VAE's objective function equate to ELBO?Body: For a lot of VAE implementations I've seen in code, it's not really obvious to me how it equates to ELBO.
+$$L(X)=H(Q)-H(Q:P(X,Z))=\sum_Z Q(Z)\log P(Z,X)-\sum_Z Q(Z)\log Q(Z)$$
+The above is the definition of the ELBO, where $X$ is some input, $Z$ is a latent variable, and $H()$ is the entropy. $Q()$ is a distribution used to approximate the distribution $P()$; in the above case both $P()$ and $Q()$ are discrete distributions, hence the sums.
+A lot of the time, VAEs are built for reconstructing discrete data types, say an image where each pixel can be black or white, i.e. $0$ or $1$. The main steps of a VAE that I've seen in code are as follows:
+
+- $\text{Encoder}(Y) \rightarrow Z_\mu, Z_{\sigma}$
+- $\text{Reparameterization Trick}(Z_\mu, Z_\sigma) \rightarrow Z$
+- $\text{Decoder}(Z) \rightarrow \hat{Y}$
+- $L(Y)= \text{CrossEntropy}(\hat{Y}, Y) - 0.5*(1+Z_{\sigma}-Z_{\mu}^2-exp(Z_\sigma))$
+
+where
+
+- $Z$ represents the latent embedding of the auto-encoder
+- $Z_\mu$ and $Z_\sigma$ represent the mean and standard deviation for sampling for $Z$ from a Gaussian distribution.
+- $Y$ represents the binary image trying to be reconstructed
+- $\hat{Y}$ represents its reconstruction from the VAE.
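+Concretely, step 4 above is usually implemented along these lines (a minimal TensorFlow-style sketch; it assumes $Y$ is flattened to shape (batch, pixels) and that $Z_\sigma$ is really the log-variance, which is why it gets exponentiated):
+import tensorflow as tf
+
+def vae_loss(y_true, y_pred, z_mean, z_log_var):
+    # reconstruction term: per-pixel binary cross-entropy, summed over pixels
+    eps = 1e-7
+    recon = -tf.reduce_sum(
+        y_true * tf.math.log(y_pred + eps)
+        + (1.0 - y_true) * tf.math.log(1.0 - y_pred + eps), axis=-1)
+    # closed-form KL divergence between N(z_mean, exp(z_log_var)) and N(0, 1)
+    kl = -0.5 * tf.reduce_sum(
+        1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1)
+    return tf.reduce_mean(recon + kl)
+The question is essentially why recon + kl, written this way, is the same quantity as the (negative) ELBO above.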
+
+As we can see from the ELBO, it's the entropy of the latent distribution being learned, $Q()$, which is a Gaussian, minus the cross-entropy between the latent distribution being learned, $Q()$, and the joint distribution $P(Z, X)$.
+The main points that confuse me are
+
+- how $\text{CrossEntropy}(\hat{Y}, Y)$ equates to the CE of the distribution for generating latents and its Gaussian approximation, and
+- how $(0.5*(1+Z_{\sigma}-Z_{\mu}^2-exp(Z_\sigma)))$ equates to the entropy
+
+Is it just assumed that the CE of $Y$ with $\hat{Y}$ also leads to the CE of the latent distribution with its approximation, because they're part of $\hat{Y}$'s generation? It still seems a bit off, because you're getting the cross-entropy of $Y$ with its reconstruction, not with the Gaussian distribution for learning the latents $Z$.
+Note: $Z_\sigma$ is usually not softplused to be strictly positive as required by a Gaussian distribution, so I think that's what $exp(Z_\sigma)$ is for.
+"
+"['deep-learning', 'python', 'keras', 'deep-rl', 'deep-neural-networks']"," Title: DQN layers when state space and action space are multi dimensionalBody: I have built my own RL environment, where a state is composed of two elements: the agent's position and a matrix of 0s and 1s (1 if a user has requested a service from the agent, 0 otherwise); an action is composed of 3 elements: the movement the agent chooses (up, down, left or right), a matrix of 0s and 1s (1 if a resource has been allocated to a user, 0 otherwise), and a vector representing the allocation of another type of resource (the vector contains the values allocated to the users).
+I am currently trying to build a deep Q-learning agent. I am a bit confused, however, as to what model (e.g. Sequential), what type of layers (e.g. Dense layers), how many layers, and what activation function I should use, and what the state and action sizes are (taking this code as a reference: cartpole dqn agent).
+I also do not know what my inputs and outputs should be.
+The examples I have come across are rather simple and I don't know how to approach setting it all up for my agent.
+"
+"['comparison', 'terminology', 'genetic-algorithms', 'evolutionary-algorithms', 'hyperparameter-optimization']"," Title: What is the difference between sensitivity analysis and parameter tuning?Body: I tried different values of genetic algorithm operators:
+
+- many crossover rates from 20% to 80%
+- many crossover rates from 1% to 20%
+- varying the population size
+
+The study of different parameter values is called quantitative parameter tuning or sensitivity analysis. What is the difference between the two terms?
+"
+"['neural-networks', 'convolutional-neural-networks', 'computational-learning-theory']"," Title: Given a dataset and a neural network, is there some heuristic or theorem to determine whether this neural network has enough capacity?Body: What is the consensus regarding NN "capacity" or expressive power? I remember reading somewhere that expressive power grows exponentially with depth, but I cannot seem to find that exact paper.
+If I have some dataset and some neural network, is there some heuristic, theorem, or result that may help me to decide whether this particular neural network I've chosen has enough expressive power (or capacity) to learn that data?
+Something similar to VC-dimension and VC-inequality, but regarding more complex models such as NNs.
+I suspect that there is no simple answer, but, generally, what would be the answer to this question?
+Overfitting on some subset might be a start, but that doesn't really tell me how the model behaves when there's more data, it only tells me that it's not fundamentally broken and can learn something.
+I know it's a complex matter, but I'll be grateful for any help, be it some practical stuff, as well as some references, papers, etc. Of course, I googled some papers, but if you have something particularly interesting, please share.
+"
+"['machine-learning', 'classification', 'homework', 'primality-test']"," Title: Which ML approach could determine that a number greater than 5 is not prime, knowing that a number is not prime if it ends with an even digit or 5?Body: I have started studying ML just a short while ago, so that my questions will be very elementary. That being so, if they are not welcome, just tell me and I'll stop asking them.
+I gave myself a homework project which is to make an ML algorithm be able to learn that, if the last digit of a number $n$ is $0, 2, 4, 5, 6$ or $8$, then it cannot be a prime, provided $n > 5$. Note that, if a number $n$ ends with $0, 2, 4, 6, 8$, then it is even, so it is divisible by $2$, hence not prime. Similarly, numbers ending in $5$ are divisible by $5$, so they cannot be prime.
+Which ML approach should I choose to solve this problem? I know that I don't need ML to solve this problem, but I am just trying to understand which ML approach I could use to solve this problem.
+So far, I have only learned about two ML approaches, namely linear regression (LR) and $k$-nearest neighbors (KNN), but they both seem inappropriate in this case, since LR seems to be a good choice for finding numerical relations between input and output data and KNN seems to be good at finding clusters, and "primality" has neither of these characteristics.
+"
+"['reinforcement-learning', 'deep-rl', 'papers', 'muzero']"," Title: How is MuZero's second binary plane for chess defined?Body: From the MuZero paper (Appendix E, page 13):
+
+In chess, 8 planes are used to encode the action. The first one-hot plane encodes which position the piece was moved from. The next two planes encode which position the piece was moved to: a one-hot plane to encode the target position, if on the board, and a second binary plane to indicate whether the target was valid (on the board) or not. This is necessary because for simplicity our policy action space enumerates a superset of all possible actions, not all of which are legal, and we use the same action space for policy prediction and to encode the dynamics function input. The remaining five binary planes are used to indicate the type of promotion, if any (queen, knight, bishop, rook, none).
+
+Is the second binary plane all zeros or all ones? Or, something else?
+How is it known if the move is off the board? For my game, I know if it is a legal move on the board, but do not know if the move is off the board.
+"
+"['neural-networks', 'training', 'weights-initialization']"," Title: Are there any downsides of using a fixed seed for a neural network's weight initialization?Body: For example, if we set the random seed to be 0, will we run into any problems? For example, maybe for seed 0, we can only reach a certain training error, but other seeds will converge to a much lower error
+I'm specifically concerned about supervised learning on point cloud data, but curious about whether it matters in general whenever you use a neural network.
+"
+"['neural-networks', 'convolutional-neural-networks', 'implementation', 'convolution', 'pooling']"," Title: Is average pooling equivalent to a strided convolution with a specific constant kernel?Body: It seems to me that average pooling can be replaced by a strided convolution with a constant kernel. For instance, a 3x3 pooling would be equivalent to a strided convolution (of stride $3$) with a $3 \times 3$ matrix of constants, with each entry being $\frac{1}{9}$.
+However, I haven't found any mention of this fact online (perhaps it's too trivial an observation). Why, then, are explicit pooling layers needed if they can be realized by convolutions?
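+A quick numerical check of this claim (a minimal PyTorch sketch for a single-channel input; for a multi-channel input one would need a grouped/depthwise convolution so that channels are not mixed):
+import torch
+import torch.nn.functional as F
+
+x = torch.randn(1, 1, 9, 9)                     # one single-channel 9x9 input
+
+avg = F.avg_pool2d(x, kernel_size=3, stride=3)  # 3x3 average pooling, stride 3
+
+kernel = torch.full((1, 1, 3, 3), 1.0 / 9.0)    # constant 3x3 kernel, each entry 1/9
+conv = F.conv2d(x, kernel, stride=3)            # strided convolution with that kernel
+
+print(torch.allclose(avg, conv, atol=1e-6))     # True: both give the same output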
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'alphago', 'imitation-learning']"," Title: Initialising DQN with weights from imitation learning rather than policy gradient networkBody: In AlphaGo, the authors initialised a policy gradient network with weights trained from imitation learning. I believe this gives it a very good starting policy for the policy gradient network. the imitation network was trained on labelled data of (state, expert action) to output a softmax policy denoting the probability of actions for each state.
+In my case, I would like to use weights learnt from the imitation network as initial starting weights for a DQN. Initial rewards from running the policy are high but start to decrease (for a while) as the DQN trains and later increases again.
+This could suggest that initialising the weights from the imitation network has little effect, since the DQN training initially seems to undo the benefit of that initialisation.
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'experience-replay']"," Title: What are the implications of storing the alternative situation (that could have been experienced) in the replay buffer?Body: Consider an environment where there are 2 outcomes (e.g. dead and alive) and a discrete set of actions. For example, a game where the agent has 2 guns $A$ and $B$ to shoot a monster (the monster dies only if the correct gun is used).
+Let's say we store the experience $e_1 = (s,a_1, r_1, s'_1)$ in the replay buffer $D$, where
+
+- $s$: we have the monster to kill
+- $a_1$: choose and use gun $A$
+- $r_1$: $-1$ (the wrong gun)
+- $s'_1$: monster is alive
+
+But we also save the alternative situation $e_2 = (s, a_2, r_2, s'_2)$ in the replay buffer $D$, where
+
+- $s$: the same state (we have the monster)
+- $a_2$: choose and use gun $B$
+- $r_2$: $(-1) * r = 1$
+- $s'_2$: the monster is dead
+
+I can't find any discussion of this technique, and I don't know what to look for.
+"
+"['linear-algebra', 'gaussian-process']"," Title: How to interpret the variance calculation in a Guassian processBody: I answered another question here about the mean prediction of a GP, but I have a hard time coming up with an intuitive explanation of the variance prediction of a GP.
+Thew specific equation that I am speaking of is equation 2.26 in the Gaussian process book...
+$$
+\mathbb{V}[f_*] = k(x_*, x_*) - k_{x*}^TK_{xx}^{-1}k_{x*}
+$$
+I have a number of questions about this...
+
+- if $k(x_*, x_*)$ is the result of the kernel function with a single point $x_*$, then won't this value always be 1 (assuming an RBF kernel) since the kernel will give 1 for a covariance with itself ($k(x, x) =\exp\{-\frac{1}{2}|| x - x ||^2\}$)
+
+- If the kernel value $k(x_*, x_*)$ is indeed one for any single arbitrary point, then how can I interpret the last multiplication on the RHS? $K_{xx}^{-1}k_{x*}$ is the solution to $Ax = b$, which is the vector which $K_{xx}$ projects into $k_{x*}$, but then my intuition breaks down and I cannot explain anymore.
+
+- If the kernel value $k(x_*, x_*)$ is indeed one for any single arbitrary point, then can we view the whole term as the prior variance being reduced by some sort of similarity between the test point and the training points?
+
+- Is it ever possible for this variance to be greater than 1? Or is the prior variance of 1 seen as the maximum, which can only be reduced by observing more data?
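+
+To make the question concrete, here is a minimal numpy sketch of equation 2.26 (noise-free case, RBF kernel with unit signal variance, so $k(x_*, x_*) = 1$):
+import numpy as np
+
+def rbf(a, b):
+    # squared-exponential kernel with unit signal variance and unit lengthscale
+    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
+    return np.exp(-0.5 * d2)
+
+X = np.random.rand(5, 1)                 # 5 training inputs
+x_star = np.random.rand(1, 1)            # 1 test input
+
+K = rbf(X, X) + 1e-8 * np.eye(len(X))    # K_xx with a small jitter for stability
+k_star = rbf(X, x_star)                  # k_{x*}, shape (5, 1)
+k_ss = rbf(x_star, x_star)               # k(x_*, x_*) = 1 here
+
+var = k_ss - k_star.T @ np.linalg.solve(K, k_star)
+print(k_ss.item(), var.item())           # prior variance 1 minus the data-dependent reduction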
+
+
+"
+"['convolutional-neural-networks', 'python', 'tensorflow', 'dqn', 'multilayer-perceptrons']"," Title: Keras DQN Model with Multiple Inputs and Multiple OutputsBody: I am trying to create a DQN agent where I have 2 inputs: the agent's position and a matrix of 0s and 1s.
+The output is composed of the agent's new chosen position, a matrix of 0s and 1s (different from the input matrix), and a vector of values.
+The first input is fed to an MLP network, the second input (matrix) is fed to a convolutional layer, then their outputs are fed to a FC network, or at least that's the idea.
+This is my attempt so far, having this tutorial as a reference.
+Here is the code:
+First, create the MLP network
+def create_mlp(self, arr, regress=False): # for the position input
+ # define MLP network
+ print("Array", arr)
+ model = Sequential()
+ model.add(Dense(env.rows * env.cols, input_shape=(len(arr)//2, len(arr)), activation="relu"))
+ model.add(Dense((env.rows * env.cols)//2, activation="relu"))
+
+ # check to see if the regression node should be added
+ if regress:
+ model.add(Dense(1, activation="linear"))
+
+ # return our model
+ return model
+
+Then, the CNN
+def create_cnn(self, width, height, depth=1, regress=False): # for the matrix
+ # initialize the input shape and channel dimension
+ inputShape = (height, width, depth)
+ output_nodes = 6e2
+
+ # define the model input
+ inputs = Input(shape=inputShape)
+
+ # if this is the first CONV layer then set the input
+ # appropriately
+ x = inputs
+
+ input_layer = Input(shape=(width, height, depth))
+ conv1 = Conv2D(100, 3, padding="same", activation="relu", input_shape=inputShape) (input_layer)
+ pool1 = MaxPooling2D(pool_size=(2,2), padding="same")(conv1)
+ flat = Flatten()(pool1)
+ hidden1 = Dense(200, activation='softmax')(flat) #relu
+
+ batchnorm1 = BatchNormalization()(hidden1)
+ output_layer = Dense(output_nodes, activation="softmax")(batchnorm1)
+ output_layer2 = Dense(output_nodes, activation="relu")(output_layer)
+ output_reshape = Reshape((int(output_nodes), 1))(output_layer2)
+ model = Model(inputs=input_layer, outputs=output_reshape)
+
+ # return the CNN
+ return model
+
+Then, concatenate the two
+def _build_model(self):
+ # create the MLP and CNN models
+ mlp = self.create_mlp(env.stateSpacePos)
+ cnn = self.create_cnn(3, len(env.UEs))
+
+ # create the input to our final set of layers as the *output* of both
+ # the MLP and CNN
+ combinedInput = concatenate([mlp.output, cnn.output])
+
+ # our final FC layer head will have two dense layers, the final one
+ # being our regression head
+ x = Dense(len(env.stateSpacePos), activation="relu")(combinedInput)
+ x = Dense(1, activation="linear")(x)
+
+ # our final model will accept categorical/numerical data on the MLP
+ # input and images on the CNN input, outputting a single value
+ model = Model(inputs=[mlp.input, cnn.input], outputs=x)
+
+ opt = Adam(lr=self.learning_rate, decay=self.epsilon_decay)
+ model.compile(loss="mean_absolute_percentage_error", optimizer=opt)
+
+ print(model.summary())
+
+ return model
+
+I have an error:
+A `Concatenate` layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 32, 50), (None, 600, 1)]
+
+The line of code that gives the error is:
+combinedInput = concatenate([mlp.output, cnn.output])
+
+This is the MLP summary
+
+And this is the CNN summary
+
+I'm a beginner at this, and I'm not sure where my mistakes are; the code obviously does not work, but I do not know how to correct it.
+"
+"['neural-networks', 'machine-learning', 'convolutional-neural-networks', 'inference']"," Title: Difference between Neural Compute Stick 2 and Google Coral USB for edge computingBody: I am trying understand machine learning inferece, and i would like to know what exactly is the difference between Google Coral USB and Movidius Intel Neural Compute Stick 2. From what i could gather the google coral USB speeds up the frame rate, but that doesn't look clear to me. My questions are:
+What exactly is the benefit of each of them, and in what units? Is it frame rate? Prediction speed? Are both vision processing units? And lastly, do I need to keep my neural network on a single-board computer for training, or can I have it in the cloud?
+"
+"['reinforcement-learning', 'dqn', 'reinforce', 'continuous-action-spaces', 'discrete-action-spaces']"," Title: What adapts an algorithm to continuous or to discrete action spaces?Body: Some RL algorithms can only be used for environments with continuous action spaces (e.g TD3, SAC), while others only for discrete action spaces (DQN), and some for both
+REINFORCE and other policy gradient variants have the choice of using a categorical policy for discrete actions and a Gaussian policy for continuous action spaces, which explains how they can support both. Is that interpretation completely correct?
+For algorithms that learn a Q function or a Q function and a policy, what places this restriction for their use on either discrete or continuous action spaces environments?
+In the same regard, if an algorithm suited for discrete action spaces is to be adapted to handle continuous action spaces, or vice versa, what does such an adaptation involve?
+"
+"['neural-networks', 'applications', 'self-replicating-machines']"," Title: What are the use-cases of self-replicating neural networks?Body: I started researching the subject of self-replication in neural networks, and unexpectedly I saw that there is not much research on this subject. I should mention I am new in the field of NNs.
+This idea seems to be very appealing, but now I am having problems coming up with an actual use case. The paper Neural Network Quine from 2018 seems to be one of the main ones addressing this topic.
+So, what are the use-cases of self-replicating neural networks? Why isn't this subject more thoroughly researched?
+"
+"['reinforcement-learning', 'dqn', 'exploration-strategies']"," Title: In DQN, is it possible to make some actions more likely?Body: In a general DQN framework, if I have an idea of some actions being better than some other actions, is it possible to make the agent select the better actions more often?
+"
+"['object-detection', 'yolo']"," Title: How is the shape of the anchor boxes predefined in YOLO algorithm?Body: I am not sure if I really understand how anchor boxes are defined. As far as I understand, in YOLO algorithm you define a set of "good" shapes (anchor boxes) that may contain the object you are trying to detect for each cell.
+However, I don't really understand how you predefine the shape of these anchor boxes. As far as I have seen, there are examples in which the algorithm outputs bx, by, bh and bw values for each anchor box. Are you actually giving the algorithm the "freedom" to define these four values, or is a fixed ratio between bh and bw somehow defined for each of the anchor boxes? And how is this ratio reflected in the output?
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'off-policy-methods', 'sarsa']"," Title: Is Deep SARSA learning a feasible approach?Body: I noticed that SARSA has been rarely used in the deep RL setting. Usually, the training for DQN is done off-policy. I think one of the major reasons for this is due to greater sample efficiency in training due to the reuse of experiences in training off-policy. For SARSA, I would think that at every time step of an update, a stochastic gradient update of that sample would have to be performed and then the sample would have to be thrown away.
+The approach, while it might take longer to train, might still allow the agent to do relatively well. Would a deep SARSA implementation perform as well as DQN in terms of final performance (since SARSA would definitely take longer to train)?
+"
+"['reinforcement-learning', 'q-learning', 'policy-evaluation']"," Title: Can we use Q-learning update for policy evaluation (not control)?Body: For policy evaluation purposes, can we use the Q-learning algorithm even though, technically, it is meant for control?
+Maybe like this:
+
+- Have the policy to be evaluated as the behaviour policy.
+- Update the Q value conventionally (i.e. updating $Q(s,a)$ using the action $a'$ giving highest $Q(s',a')$ value)
+- The final $Q(s,a)$ values will reflect the values for the policy being evaluated.
+
+Am I missing something here, given that I have not seen Q-learning being used anywhere for evaluation purposes?
+"
+"['reinforcement-learning', 'multi-armed-bandits', 'upper-confidence-bound', 'thompson-sampling', 'exploration-strategies']"," Title: Why aren't exploration techniques, such as UCB or Thompson sampling, used in full RL problems?Body: Why aren't exploration techniques, such as UCB or Thompson sampling, typically used in bandit problems, used in full RL problems?
+Monte Carlo Tree Search may use the above-mentioned methods in its selection step, but why do value-based and policy gradient methods not use these techniques?
+"
+"['neural-networks', 'reinforcement-learning', 'environment']"," Title: What is the appropriate way of passing a list of integers that represents the environment to a neural network's dense layer?Body: I'm training an RL agent using the DQN algorithm to do a specific task. The environment is represented by a list of $10$ integer numbers from $0$ to $20$. An example would be $[5, 15, 8, 8, 0, \dots]$.
+Is it okay to pass the list as floats to the dense layer, or would that impede the learning process? What is the right way to go about passing integer numbers to the neural network?
+"
+"['deep-learning', 'object-detection', 'object-recognition', 'yolo', 'data-augmentation']"," Title: If random rotations are included in the data augmentation process, how are the new bounding boxes calculated?Body: When studying bounding box-based detectors, it's not clear to me if data augmentation includes adding random rotations.
+If random rotations are added, how is the new bounding box calculated?
+"
+"['reinforcement-learning', 'reward-design', 'reward-functions', 'zero-sum-games']"," Title: How can I discourage the RL agent from drawing in a zero-sum game?Body: My agent receives $1, 0, -1$ rewards for winning, drawing, and losing the game, respectively. What would be the consequences of setting reward to $-1$ for draws? Would that encourage the agent to win more or will it have no effect at all? Is it appropriate to do so?
+"
+"['reinforcement-learning', 'reward-design', 'reward-functions', 'bayesian-statistics', 'thompson-sampling']"," Title: Thompson sampling with Bernoulli prior and non-binary reward updateBody: I am solving a problem for which I have to select the best possible servers (level 1) to hit for a given data. These servers (level 1) in turn hit some other servers (level 2) to complete the request. The level 1 servers have the same set of level 2 servers integrated with them. For a particular request, I am getting success or failure as a response.
+For this, I am using Thompson Sampling with a Bernoulli prior. On success, I am considering the reward as 1 and, for failure, it is 0. But in the case of failure, I am receiving errors as well. For some errors, it is evident that the error is due to some issue at the server (level 1) end, and hence a reward of 0 makes sense, but some errors result from request data errors or issues at the level 2 servers. For these kinds of errors, we can't penalize the level 1 servers with reward 0, nor can we reward them with value 1.
+Currently, I am using 0.5 as a reward for such cases.
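+For context, the update I am currently doing looks roughly like this (a simplified sketch with one Beta posterior per level-1 server; the 0.5 case is the one I am unsure about):
+import numpy as np
+
+n_servers = 3
+alpha = np.ones(n_servers)   # Beta prior: pseudo-counts of successes
+beta = np.ones(n_servers)    # Beta prior: pseudo-counts of failures
+
+def choose_server():
+    # Thompson sampling: sample a success probability per server, pick the best
+    samples = np.random.beta(alpha, beta)
+    return int(np.argmax(samples))
+
+def update(server, reward):
+    # reward: 1.0 for success, 0.0 for a clear level-1 failure,
+    # 0.5 for the ambiguous errors (bad request data / level-2 issues)
+    alpha[server] += reward
+    beta[server] += 1.0 - reward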
+Exploring over the Internet, I couldn't find any method/algorithm to calculate the reward for such cases in a proper (informed) way.
+What could be the possible way to calculate reward in such cases?
+"
+"['machine-learning', 'pattern-recognition']"," Title: Find repeating patterns in sequence dataBody: I have database of sequential events for multiple animals.
+The events are represented by integers so it looks something like:
+Animal A: [1,6,4,2,5,7,8]
+Animal B: [1,6,5,4,1,6,7]
+Animal C: [5,4,2,1,6,4,3]
+
+I can see manually that, before each event 6, event 1 happens first.
+And event 4 happens quickly after a 1,6 combination.
+But these are easy to spot in such a small dataset, the real lists are 10000+ events per animal.
+Is there a way to use an algorithm or machine learning to search for these kinds of patterns?
+"
+"['object-detection', 'yolo']"," Title: Object detection noise filteringBody: In my project, I am detecting only one class, which is "airplane", using yolov5. However, at some frames, the neural network labels some of the buildings as airplanes, which obviously are not. This noise happens like 1 frame among 60 frames. How should I treat this issue? Which algorithms can be applied to filter out?
+"
+"['convolutional-neural-networks', 'activation-functions', 'relu', 'leaky-relu', 'elu']"," Title: What are the benefits of using ELU over other activation functions in CNNs?Body: I have come up with some examples of CNNs (segmentation CNNs) that use ELU (exponential linear unit) as an activation function.
+What are the benefits of this activation function over others, such as RELU or leaky RELU?
+"
+"['minimax', 'game-theory', 'zero-sum-games']"," Title: Optimal mixed strategy in two player zero sum gamesBody: I am currently studying game theory based on Peter Norvig's 3rd edition introduction to artificial intelligence book. In chapter 17.5, the two player zero sum game can be solved by using the $\textbf{minimax}$ theorem
+$$\max_x \, \min_y \, x^TAy = \min_y \, \max_x \, x^TAy = v$$
+where $x$ is the probability distribution of actions by the max player (in the left equation) and the min player (in the right equation).
+Regarding the minimax theorem, I have 2 questions.
+
+- Do both the min and the max players have the same probability distribution over actions? The book by Peter Norvig demonstrated that in the game of $\textbf{Morra}$, both the min and max player had $[\frac{7}{12}:one, \frac{5}{12}:two]$ for the probability distributions.
+
+- Also, regarding the minimax game tree, is the difference between the minimax game tree and the zero-sum game the fact that, for the minimax game tree, the opponent can react to the first player's move, whereas for the zero-sum game defined in Section 17.5 both players are unaware of each other's move?
+
+
+"
+"['neural-networks', 'terminology', 'transformer', 'attention']"," Title: Why are ""Transformers"" called this way?Body: What is the reason behind the name "Transformers", for Multi Head Self-Attention-based neural networks from Attention is All You Need?
+I have been googling this question for a long time, and nowhere can I find an explanation.
+"
+"['reinforcement-learning', 'off-policy-methods', 'value-functions', 'importance-sampling', 'return']"," Title: When learning off-policy with multi-step returns, why do we use the current behaviour policy in importance sampling?Body: When learning off-policy with multi-step returns, we want to update the value of $Q(s_1, a_1)$ using rewards from the trajectory $\tau = (s_1, a_1, r_1, s_2, a_2, r_2, ..., s_n, a_n, r_n, s_n+1)$. We want to learn the target policy $\pi$ while behaving according to policy $\mu$. Therefore, for each transition $(s_t, a_t, r_t, s_{t+1})$, we apply the importance ratio $\frac{\pi(a_t | s_t)}{\mu(a_t | s_t)}$.
+My question is: if we are training at every step, the behavior policy may change at each step and therefore the transitions of the trajectory $\tau$ are not obtained from the current behavior policy, but from $n$ behavior policies. Why do we use the current behavior policy in the importance sampling? Should each transition use the probability of the behavior policy of the timestep at which that transition was collected? For example by storing the likelihood $\mu_t(a_t | s_t)$ along with the transition?
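+To make the proposal concrete, this is the kind of bookkeeping I have in mind (a minimal sketch; pi and mu_now are assumed to be callables that return action probabilities):
+from collections import namedtuple
+
+# each stored transition also keeps the behaviour probability at collection time
+Transition = namedtuple('Transition', ['s', 'a', 'r', 's_next', 'mu_a'])
+
+def importance_ratio(pi, transition, use_stored_mu=True, mu_now=None):
+    pi_a = pi(transition.s, transition.a)          # target policy probability now
+    if use_stored_mu:
+        # use mu as it was when the transition was collected
+        return pi_a / transition.mu_a
+    # alternative: use the current behaviour policy's probability
+    return pi_a / mu_now(transition.s, transition.a)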
+"
+"['machine-learning', 'computer-vision', 'object-detection', 'image-processing', 'google-cloud']"," Title: How to compute dominant colors in an image?Body: I was trying Google Cloud's Vision API, and how the dominant colors part shows. I uploaded a sample image, and here is the results for the dominant colors. I realized it doesn't simply count pixel colors and cluster them. The background has many gray pixels which are not included.
+How does it compute the dominant colors? How can I do something similar?
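+For reference, the naive approach I had in mind (simply clustering all pixels, e.g. with k-means) would be something like the sketch below, assuming scikit-learn; it clearly would not exclude the gray background the way the Vision API seems to:
+import numpy as np
+from sklearn.cluster import KMeans
+
+def naive_dominant_colors(image, k=5):
+    # image: (H, W, 3) uint8 array; cluster all pixels in RGB space
+    pixels = image.reshape(-1, 3).astype(np.float32)
+    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
+    counts = np.bincount(km.labels_, minlength=k)
+    order = np.argsort(counts)[::-1]
+    # cluster centres sorted by pixel count = candidate dominant colors + fractions
+    return km.cluster_centers_[order].astype(np.uint8), counts[order] / counts.sum()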
+
+"
+['generative-adversarial-networks']," Title: GAN model predictions before training is predictableBody: I have a dataset of 3000 8x8 images, and I would like to train a GAN for an image generation purpose.
+I am planning to start with a simple GAN model and see if it overfits. Before training, I try to do a comparison of the discriminator model prediction using real image input against the whole GAN model prediction using random seed input. My thought process is that since this model is not trained yet, the output for real images and fake images by the discriminator should not be predictable.
+However, the discriminator model prediction using real image input always returns a value very close to 1.0, and the whole GAN model prediction using random seed input always returns a value near 0.5 with a small deviation. I suspect that during training, the model would simply pull the 0.5 value near 0.0 and would never actually learn from the dataset.
+I tried increasing the training parameters and using different initializers, but the output is still the same.
+Ruling out the possibility of a bad dataset, what could be the reason for this situation?
+This is some sneak peek of the generator and discriminator model building: https://pastebin.com/ehMDP7k6
+"
+"['neural-networks', 'reinforcement-learning', 'policy-gradients']"," Title: Understanding neural network achitectures in policy gradient reinforcement learning for continuous state and action spaceBody: I am trying to train a neural network using reinforcement learning / policy gradient methods. The states, i.e. the inputs, as well as the actions I am trying to sample are vectors with each element being a real number like in this question: https://math.stackexchange.com/questions/3179912/policy-gradient-reinforcement-learning-for-continuous-state-and-action-space
+The answer that was given there already helped me a lot. Also, I have been trying to understand Chapter 13: Policy Gradient Methods from "Reinforcement Learning: An Introduction" by Sutton et al. and in particular Section 13.7: Policy Parametrization for Continous Actions.
+My current level of understanding is that I can use the weights in the network to calculate the mean(s) and the standard deviation/covariance matrix. I can then use them to define a multivariate Gaussian distribution and randomly sample an action from there.
+For now, I have one main question: in the book it says that I have to split the weights, i.e. the policy's parameter vector, into two parts: $\theta = [\theta_{\mu}, \theta_{\sigma}]$. I can then use each part together with a feature vector to calculate the means and the covariance matrix. However, I was wondering how this is usually done. Do I train two separate networks? I am not sure what an architecture for this would look like. Also, I am not sure what the output nodes would be in this case. Do they have any meaning like in supervised learning?
+So far, I have only found papers that talk about this issue rather theoretically like it is presented in the book. I would be very happy to understand how this is actually implemented. Thank you very much!
+"
+"['computer-vision', 'training', 'keras', 'generative-adversarial-networks']"," Title: Why do we need to provide false labels to the discriminator on purpose to train GANs?Body: This is the tutorial that I used to learn about GANs. In this tutorial, it taught us to intentionally provide false labels to "fool" the discriminator, but does it make the discriminator actually inaccurate? I don't quite understand his explanation, can anyone help me?
+"
+"['deep-learning', 'face-recognition']"," Title: In a face database containing multiple images per subject, how do we determine the face image which is most suited for face recognition?Body: Let us imagine a face database with several subjects, each subject having multiple face images. How do we determine which is the best face suitable for face recognition purposes?
+"
+"['comparison', 'search', 'uniform-cost-search', 'dijkstras-algorithm', 'shortest-path-problem']"," Title: What is the difference between the uniform-cost search and Dijkstra's algorithm?Body: Every computer science student (including myself, when I was doing my bachelor's in CS) probably encountered the famous single-source shortest path Dijkstra's algorithm (DA). If you also took an introductory course on artificial intelligence (as I did a few years ago, during my bachelor's), you should have also encountered some search algorithms, in particular, the uniform-cost search (UCS).
+A few articles on the web (such as the Wikipedia article on DA) say that DA (or a variant of it) is equivalent to the UCS. The famous Norvig and Russell's book Artificial Intelligence: A Modern Approach (3rd edition) even states
+
+The two-point shortest-path algorithm of Dijkstra (1959) is the origin of uniform-cost search. These works also introduced the idea of explored and frontier sets (closed and open lists).
+
+How exactly is DA equivalent to UCS?
+"
+"['geometric-deep-learning', 'graphs', 'graph-neural-networks']"," Title: How does a GCN handle new input graphs?Body: Quick questions to see whether I understand GCNs correctly.
+Is it correct that, if I have trained a GCN, it can take arbitrary graphs as input, assuming the feature size is the same?
+I can't seem to find explicit literature on this.
+"
+['neural-networks']," Title: How to extract the main text from a formatted text file?Body: My idea is to model and train a neural network that receives a text version of a PDF file as the input and gives the content text as output.
+Take the scenario:
+
+- One prints a PDF file to a text file (the text file does not have images, but has the main text, headings, page numbers, some other footer text, and so on, and keeps the same number of columns - two for instance - of text);
+
+- This text file is submitted to a tool that strips everything that is not the main content of the text in one single text column (one text stream), keeping the section titles, paragraphs, and the text in a readable form (does not mix columns);
+
+- The tool generates a new version of the original text file containing only the main text portion, ready to be used for other purposes where the stripped parts would be considered noise.
+
+
+How do I model this problem in a way that a neural network can handle?
+Update 1
+Here are some clarifications on the problem.
+PDF file
+The picture below shows two pages of a pdf version of a scientific paper. This is just to set the context, the PDF file is not the input for this problem, it is just to understand where the actual input data comes from.
+
+The color boxes show some parts of interest for this discussion. Red boxes are headers and footers. We are not interested in them. Blue and green boxes are content text blocks. Different colors were used to emphasize the text is organized in columns and that is part of the problem. Those blue and green boxes are what we actually want.
+Text file
+If I use the ""save as text file"" feature of my free PDF reader, I get a text file similar to the image below.
+
+The text file is continuous, but I put the equivalent of the first two pages of the PDF file side-by-side just to make things easier to compare. We can see the very same colored boxes. In terms of words, those boxes contain the same text as in the PDF version.
+Understanding the problem
+When we read a paper, we are usually not very interested in footers or headers. The main content is what we actually read and that will provide us with the knowledge we are looking for. In this case, the text is inside blue and green boxes.
+So, what we want here is to generate a new version of the input (text) file organized as one single text stream (one column, if you will), with the text laid out in a form someone can read, which means alternating the blue and the green boxes.
+However, if the original PDF has no footers, it should work in the same way, providing the main text content. If the text comes in three or four columns, the final product must be a text in good condition to be read without losing any information.
+Any pictures will be simply stripped off the text version of the paper and we are fine with that.
+"
+"['neural-networks', 'machine-learning', 'linear-regression']"," Title: Which machine learning technique can I use to match one set of data points to another?Body: I have two measuring devices. Both measure the same thing. One is accurate, the other is not, but does correlate with a non-fixed offset, some outliers, and some noise.
+I won't always be using the accurate device. The nonfixed offset makes things difficult, but I'm certain there is sufficient similarity to make a link using a machine learning (or AI) technique and to convert one set of numbers to a good approximation of the other.
+One is a footbed power meter and gives power in Watts every second. The other is a crank-based power meter, also outputting Watts at 1Hz. The footbed power is much less than the crank (which I know to be accurate), but it does track the increases and decreases in power, just with more noise and, as I say, a non-fixed offset (and by non-fixed I mean, at low power the offset is different to that at high power, I don't mean it isn't consistent, it is consistent). Both measure cadence which may be a useful metric to help find a pattern.
+I will be collecting sets of data from both and hoped to plug the footbed data in as a column of values with the crank data as another column representing the truth, so after training, the model would be able to transform the footbed data to an approximation of the crank data.
+Anyway, I'm completely lost as to how to begin. I've tried searching, but, clearly, I'm using the wrong keywords. Does anyone have any pointers, please?
+"
+"['tensorflow', 'keras', 'objective-functions', 'image-segmentation']"," Title: Loss function decays linearly in segmentation MRI fasciaBody: I am working on a segmentation of MRI images of the thigh. I am trying to segment the fascia, there is a slight imbalance between the background and the mask. I have about 1400 images from 30 patients for training and 200 for validation. I am working with keras. The loss function is combination of weighted cross entropy and dice loss (smooth factor of dice loss = 2)
+def combo_loss(y_true, y_pred, alpha=0.6, beta=0.6):  # beta before 0.4
+    return alpha * tf.nn.weighted_cross_entropy_with_logits(y_true, y_pred, pos_weight=beta) + (1 - alpha) * dice_coefficient_loss(y_true, y_pred)
+
+When I use a value of alpha greater than 0.5 (the weighted cross-entropy term), the loss rapidly decreases during the first epoch. Afterwards it slowly decreases in a linear manner.
+
+Why is this happening? What would be a reasonable approach to choosing the values of alpha and beta?
+"
+"['reinforcement-learning', 'markov-decision-process', 'sarsa', 'markov-chain', 'markov-property']"," Title: Can $Q$-learning or SARSA be thought of a Markov Chain?Body: I might just be overthinking a very simple question but nonetheless the following has been bugging me a lot.
+Given an MDP with non-trivial state and action sets, we can implement the SARSA algorithm to find the optimal policy or the optimal state-action-value function $Q^*(s,a)$ by the following iteration:
+$$Q(s_t,a_t)\leftarrow Q(s_t,a_t) + \alpha(r_t + \gamma Q(s_{t+1}, a_{t+1}) - Q(s_t,a_t)).$$
+Assuming each state-action pair is visited infinitely often, fix one such pair $(s,a)$ and denote the time sequence of visiting the said pair as $t_1 < t_2 < t_3 < \dots t_n\dots.$ Also, let $Q_{t_n}(s,a) = X_n$ for ease of notation and consider the sequence of random variables:
+$$X_0, X_1, \dots X_n,\dots $$
+Can $\{X_n\}_{n\geq 0}$ be thought of as a discrete-time Markov chain on $\mathbb{R}$? My intuition says no, because the recurrence equation will look like:
+$$X_{n+1} = (1-\alpha)X_n + \alpha(r_{t_n} +\gamma Q_{t_n}(s',a'))$$
+and that last term $Q_{t_n}(s',a')$ will be dependent on the path even if we condition on $X_n = x.$
+However, I am not quite able to rigorously write an answer in either direction. I would greatly appreciate it if someone could resolve this issue in either direction.
+"
+"['search', 'ida-star', 'stopping-conditions']"," Title: When does IDA* consider the goal has been found?Body: I was reading about IDA* and I found this link explaining IDA* and providing an animation for it.
+Here is a picture of the solution.
+
+I know what the cutoff condition is (it depends on f): the search behaves like DFS, expanding a node if its f value is less than or equal to the cutoff, and, like iterative deepening, it proceeds in iterations.
+My question is:
+In the animation, when the threshold is 7 and after expanding the parent of the goal (14), they state that a solution has been found. So, if we find the goal after expanding a node whose value is <= the cutoff value, can we consider it the solution without testing the goal's own f value against the threshold? For example, what if there were another level where there is a goal that can be found with value 13 (less than 14), like in the following picture:
+
+When the threshold is 7, the node with value 11 will not be expanded, so we will never reach the goal with value 13.
+So, what is the correct solution?
+"
+"['deep-learning', 'recurrent-neural-networks', 'long-short-term-memory', 'language-model']"," Title: How is input defined for a biaxial lstm network for generating music?Body: I am reading Composing Music With Recurrent Neural Networks by Daniel D. Johnson. But I am really confused about the input passed to this network. If we pass notes of music along the time axis, then what is passed along the note axis?
+
+The author says:
+
+If we make a stack of identical recurrent neural networks, one for each output note, and give each one a local neighborhood (for example, one octave above and below) around the note as its input, then we have a system that is invariant in both time and notes: the network can work with relative inputs in both directions.
+
+This might mean that the inputs passed to the network along the note axis are fixed representations of notes in the vocabulary. But I am not sure.
+I am also having a hard time understanding the input passed to this network as the author explains a few paragraphs below. (Position, Pitchclass, Previous Vicinity, Previous Context, Beat).
+Also, at some point, the author talks about an RNN along the note axis. But in the architecture, there only seems to be an RNN along the time axis. I would really appreciate it if anyone could give me some more information to understand how this biaxial network is set up. This article by Deep Dark Learning was a little helpful, but I am still not fully sure what is going on here.
+"
+"['convolutional-neural-networks', 'training', 'object-recognition', 'categorical-crossentropy', 'alexnet']"," Title: Can the (sparse) categorical cross-entropy be greater than one?Body: I am using AlexNet CNN to classify my dataset which contains 10 classes and 1000 data for each class, with 60-30-10, splits for train, validation, and test. I used different batch sizes, learning rates, activation functions, and initializers. I'm using the sparse categorical cross-entropy loss function.
+However, while training, my loss value is greater than one (almost equal to 1.2) in the first epoch, but by epoch 5 it comes down to around 0.8.
+Is it normal? If not, how can I solve this?
+"
+"['reinforcement-learning', 'proofs', 'policies', 'optimal-policy', 'optimality']"," Title: Given two optimal policies, is an affine combination of them also optimal?Body: If there are two different optimal policies $\pi_1, \pi_2$ in a reinforcement learning task, will the linear combination (or affine combination) of the two policies $\alpha \pi_1 + \beta \pi_2, \alpha + \beta = 1$ also be an optimal policy?
+Here I give a simple demo:
+In a task, there are three states $s_0, s_1, s_2$, where $s_1, s_2$ are both terminal states. The action space contains two actions $a_1, a_2$.
+An agent starts from $s_0$. It can choose $a_1$, then it will arrive at $s_1$ and receive a reward of $+1$. In $s_0$, it can also choose $a_2$, then it will arrive at $s_2$ and receive a reward of $+1$.
+In this simple demo task, we can first derive two different optimal policies $\pi_1$, $\pi_2$, where $\pi_1(a_1|s_0) = 1$, $\pi_2(a_2 | s_0) = 1$. The combination of $\pi_1$ and $\pi_2$ is
+$\pi: \pi(a_1|s_0) = \alpha, \pi(a_2|s_0) = \beta$. $\pi$ is an optimal policy too, because any policy in this task is an optimal policy.
+"
+"['neural-networks', 'reference-request', 'genetic-algorithms', 'genetic-operators']"," Title: Is there some known pattern for selecting a batch of candidates for the next generation?Body: I'm a beginner with a classic "racing car" sandbox and a homemade simple neural network.
+My pattern:
+
+- Copy the "top car" (without mutation) to the next generation
+
+- If there are some cars still running (because simulation reached the 30s win condition), then copy a mutated version of them for the next generation.
+
+- Fill the rest of the pool with mutation of the "top car".
+
+
+But this is just some dumb intuitive pattern I made up on the fly while playing with my code. Perhaps I should copy the cars that are still running as-is instead of mutating them. Or, perhaps, use some selection method I don't know about.
+A new random track is generated at each new generation. A "top car" may be good on one track and crash immediately on the next track. I just feel that basing everything on the top car is wrong because of the track randomness.
+Is there some known pattern for selecting a batch of candidates? (paper, google-fu keyword, interesting blog, etc.)
+I don't know what to search for. I don't even know the name of my network or any vocabulary related to AI.
+"
+"['reinforcement-learning', 'deep-rl', 'feature-selection', 'state-spaces', 'observation-spaces']"," Title: Does the order in which the features are concatenated to create the state (or observation) matter?Body: I'm experimenting with an RL agent that interacts with the following environment. The learning algorithm is double DQN. The neural network represents the function from state to action. It's build with Keras sequential model and has two dense layers. The observation in the environment consists of the following features
+
+- the agent's position in an N-dimensional grid,
+- metrics that represent the hazards (temperatures, toxicity, radiation, etc.) of adjacent cells, and
+- some parameters that represent the agent's current characteristics (health, mood, etc.).
+
+There are patterns to the distribution of hazards and the agent's goal is to learn to navigate safely through space.
+I am concatenating these features, in the aforementioned order, into a tensor, which is fed into the double DQN.
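+For reference, the way I build the observation is roughly this (a simplified sketch with made-up numbers):
+import numpy as np
+
+position = np.array([3, 7])          # the agent's position in the grid
+hazards = np.random.rand(3, 3)       # hazard metrics of the adjacent cells
+stats = np.array([0.8, 0.1])         # the agent's characteristics (health, mood, ...)
+
+# features concatenated in a fixed order into one flat observation vector
+observation = np.concatenate(
+    [position.ravel(), hazards.ravel(), stats.ravel()]).astype(np.float32)
+print(observation.shape)             # this is the input size of the first dense layer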
+Does the order in which the features are concatenated to create the state (or observation) matter? Is it possible to group the features in some way to increase the learning speed? If I mix up the features randomly, would that have any effect or it doesn't matter to the agent?
+"
+"['neural-networks', 'overfitting', 'multilayer-perceptrons', 'generalization']"," Title: Why don't neural networks project the data into higher dimensions first, then reduce the size of each layer thereafter?Body: Background
+From my understanding (and following along with this blog post), (deep) neural networks apply transformations to the data such that the data's representation to the next layer (or classification layer) becomes more separate. As such, we can then apply a simple classifier(s) to the representation to chop up the regions where the different classes exist (as shown by this blog post).
+If this is true and say we have some noisy data where the classes are not easily separable, would it make sense to push the input to a higher dimension, so we can more easily separate it later in the network?
+For example, I have some tabular data that is a bit noisy, say it has 50 dimensions (input size of 50). To me, it seems logical to project the data to a higher dimension, such that it makes it easier for the classifier to separate. In essence, I would project the data to say 60 dimensions (layer out dim = 60), so the network can represent the data with more dimensions, allowing us to linearly separate it. (I find this similar to how SVMs can classify the data by pushing it to a higher dimension).
+Question
+Why, if the above is correct, do we not see many neural network architectures projecting the data into higher dimensions first then reducing the size of each layer thereafter?
+I learned that if we have more hidden nodes than input nodes, the network will memorize rather than generalize.
+"
+"['search', 'a-star', 'breadth-first-search', 'graph-search', 'tree-search']"," Title: Why do we use the tree-search version of breadth-first search or A*?Body: In Artificial Intelligence A Modern Approach, search algorithms are divided into tree-search version and graph-search version, where the graph-search version keeps an extra explored set to avoid expanding nodes repeatedly.
+However, in breadth-first search or A* search, I think we still need to keep the expanded nodes in the memory so that we can track the path from the root to the goal. (The node structure contains its parent node which can be used to extract the solution path, but only if we have those nodes kept in the memory)
+So, if I'm right, why do we need the tree-search version in BFS and A*, given that the expanded nodes still need to be stored? Why not just use the graph-search version then?
+If I'm wrong, so how do we track the solution path given that the expanded nodes have been discarded?
+"
+"['object-detection', 'yolo', 'pretrained-models', 'fine-tuning']"," Title: Is it possible to improve the average precision of YOLO trained on Open Images Dataset by fine-tuning it with COCO?Body: I consider pre-training a YOLOv5 with Google Open Images Object Detection dataset. The dataset includes general domain categories with ~15 M box samples. After the pre-training is done, I will fine-tune the model on MS COCO dataset.
+I would like to do it if I can improve AP by ~7%. Do you think that this is possible, and is it a reasonable expectation? Unfortunately, I could not find any report of someone who has tried an Open Images pre-trained object detector with MS COCO training.
+"
+"['variational-autoencoder', 'batch-size', 'batch-learning']"," Title: Why would a VAE train much better with batch sizes closer to 1 over batch size of 100+?Body: I've been training a VAE to reconstruct human names and when I train it on a batch size of 100+ after about 5 hours of training it tends to just output the same thing regardless of the input and I'm using teacher forcing as well. When I use a lower batch size for example 1 it super overfitted and a batch size of 16 tended to give a much better generalization. Is there something about VAEs that would make this happen? Or is it just my specific problem?
+"
+"['reinforcement-learning', 'monte-carlo-methods', 'value-functions', 'model-based-methods', 'model-free-methods']"," Title: Why does Monte Carlo policy evaluation relies on action-value function rather than state-value function?Body: Here is David Silver's lecture on that. Look at 9:30 to 10:30.
+He says that, since it is model-free learning, the environment's dynamics are unknown, so the action-value function $Q$ is used.
+
+- But then state-values are already calculated (via first-visit or every-visit). So, why aren't these values used?
+
+- Secondly, even if we were to use $Q$, we have $Q^{\pi}(s,a) = R(s) + \gamma \sum_{s'}P(s'|s,a)V^{\pi}(s')$, so we still need to know the transition model, which is unknown.
+
+
+What am I missing here?
+"
+"['neural-networks', 'reinforcement-learning', 'dqn', 'applications']"," Title: What kind of problems is DQN algorithm good and bad for?Body: I know this is a general question, but I'm just looking for intuition. What are the characteristics of problems (in terms of state-space, action-space, environment, or anything else you can think of) that are well solvable with the family of DQN algorithms? What kind of problems are not well fit for DQNs?
+"
+"['neural-networks', 'machine-learning', 'reinforcement-learning', 'gradient-descent', 'graphs']"," Title: How to use unmodified input in neural network?Body: My question may be a bit hard to explain...
+My neural network learns a categorical distribution, which serves as an index. This index will look up the value (= action_mean) in Input 2.
+From this action_mean, I create a normal distribution where the network has to learn to adjust the standard deviation. The output of the network is a sample of this normal distribution.
+Since the value of action_mean is taken directly from the input, the gradient can't be computed (it gives Nones), because the output of the net is not fully connected to the input.
+Would there be a way to link my action_mean with the input value, without changing the input values themselves? To describe my problem, I attached a simplified computational graph as TensorBoard shows it.
+I would be very thankful for any help!
+
+"
+"['transfer-learning', 'r-cnn', 'self-supervised-learning', 'instance-segmentation', 'pretext-tasks']"," Title: Is it possible to pre-train a CNN in a self-supervised way so that it can later be used to solve an instance segmentation task?Body: I would like to use self-supervised learning (SSL) to learn features from images (the dataset consists of similar images with small differences), then use the resulting trained model to bootstrap an instance segmentation task.
+I am thinking about using Faster R-CNN, Mask R-CNN, or ResNet for the instance segmentation task, which is pre-trained in an SSL way by solving a pretext task, with the aim that this will lead to higher accuracy and also teach the CNNs with fewer examples during the downstream task.
+Is it possible to use SSL to pre-train e.g. a faster R-CNN on a pretext task (for example, rotation), then use this pre-trained model for instance segmentation with the aim to get better accuracy?
+"
+"['neural-networks', 'activation-functions', 'sigmoid']"," Title: How to use sigmoid as transfer function when input is not (0,1) range in ANN?Body: I am building my first ANN from scratch. I know that I need a transfer function and I want to use the sigmoid function as my teacher recommended that. That function can be between 0 and 1, but my input values for the network are between -5 and 20. Someone told me that I need to scale the function so that it is in the range of -5 and 20 instead of 0 and 1. Is this true? Why?
+"
+"['neural-networks', 'deep-neural-networks', 'explainable-ai', 'grad-cam']"," Title: In GradCAM, why is activation strength considered an indicator of relevant regions?Body: In the GradCAM paper section 3 they implicitly propose that two things are needed to understand which areas of an input image contribute most to the output class (in a multi-label classification problem). That is:
+
+- $A^k$ the final feature maps
+- $\alpha_k^c$ the average pooled partial derivatives of the output class scores $y^c$ with respect to the final feature maps $A^k$.
+
+The second point is clear to me. The stronger the derivative, the more important the $k$th channel of the final feature maps is.
+The first point is not, because the implicit assumption is that non-zero activations have more significance than activations close to zero. I know it's tempting to take that as a given, but for me it's not so obvious. After all, neurons have biases, and a bias can arbitrarily shift the reference point, and hence what 0 means. We can easily transform two neurons [0, 1] to [1, 0] with a linear transformation.
+So why should it matter which regions of the final feature maps are strongly activated?
+
+EDIT
+To address a comment further down, this table explains why I'm thinking about magnitude rather than sign of the activations.
+
+It comes from thinking about the possible variations of
+$$
+L_{Grad-CAM}^c = ReLU\bigl( \sum_k \alpha_k^c A^k \bigr)
+$$
+"
+"['reinforcement-learning', 'reinforce', 'td-lambda']"," Title: Why is TD(0) not converging to the optimal policy?Body: I am trying to implement the basic RL algorithms to learn on this 10x10 GridWorld (from REINFORCEJS by Kaparthy).
+Currently I am stuck at TD(0). No matter how many episodes I run, when I update the policy after all episodes are done according to the value function, I don't get the optimal value function which I obtain when I toggle TD learning on the grid from the link I provided above.
+The only way I am getting the optimal policy is when I am updating the policy in each iteration and then following the updated policy when calculating the TD target. But according to the algorithm from Sutton and Barto a given policy (which is fixed over all episodes - see line 1 below) should be evaluated.
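+For reference, my implementation follows this sketch (a simplified version: walls and the terminal-state handling described in the notes below are omitted; env and policy are placeholders):
+import numpy as np
+
+def td0_evaluation(env, policy, episodes=1000, alpha=0.1, gamma=0.9):
+    # tabular TD(0) evaluation of a fixed policy
+    V = np.zeros(env.num_states)
+    for _ in range(episodes):
+        s = env.reset()
+        done = False
+        while not done:
+            a = policy(s)                          # fixed random policy (n, w, s, e)
+            s_next, r, done = env.step(a)
+            target = r if done else r + gamma * V[s_next]
+            V[s] += alpha * (target - V[s])
+            s = s_next
+    return V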
+
+Using alpha=0.1 and gamma=0.9, after 1000 episodes my TD(0) algo finds the following value function:
+[[-0.04 -0.03 -0.03 -0.05 -0.08 -0.11 -0.09 -0.06 -0.05 -0.07]
+ [-0.08 -0.04 -0.04 -0.06 -0.1 -0.23 -0.11 -0.06 -0.07 -0.11]
+ [-0.13 -inf -inf -inf -inf -0.58 -inf -inf -inf -0.25]
+ [-0.24 -0.52 -1.23 -2.6 -inf -1.4 -1.28 -1.12 -0.95 -0.62]
+ [-0.28 -0.49 -0.87 -1.28 -inf -2.14 -2.63 -1.65 -1.38 -1.04]
+ [-0.27 -0.42 -0.64 -0.94 -inf 0.97 -1.67 -2.01 -2.79 -1.62]
+ [-0.26 -0.36 -0.69 -0.93 -inf -1.17 -1.72 -1.92 -2.75 -1.82]
+ [-0.25 -0.38 -0.67 -2.27 -inf -2.62 -2.74 -1.55 -1.31 -1.14]
+ [-0.23 -0.31 -0.66 -1.2 -0.98 -1.24 -1.48 -1.02 -0.7 -0.7 ]
+ [-0.2 -0.29 -0.43 -0.62 -0.64 -0.77 -0.87 -0.67 -0.54 -0.48]]
+
+where -inf are walls in the grid. If I update the policy according to that value function, I get the following policy:
+[['e ' 'e ' 'w ' 'w ' 'w ' 'w ' 'e ' 'e ' 's ' 'w ']
+ ['e ' 'n ' 'n ' 'w ' 'w ' 'e ' 'e ' 'e ' 'n ' 'w ']
+ ['n ' 'XXXX' 'XXXX' 'XXXX' 'XXXX' 'n ' 'XXXX' 'XXXX' 'XXXX' 'n ']
+ ['n ' 'w ' 'w ' 's ' 'XXXX' 'n ' 'e ' 'e ' 'e ' 'n ']
+ ['n ' 'w ' 'w ' 'w ' 'XXXX' 's ' 'n ' 'n ' 'n ' 'n ']
+ ['s ' 'w ' 'w ' 'w ' 'XXXX' 's ' 'w ' 'n ' 'n ' 'n ']
+ ['s ' 'w ' 'w ' 'w ' 'XXXX' 'n ' 'w ' 's ' 's ' 's ']
+ ['s ' 'w ' 'w ' 'w ' 'XXXX' 'n ' 'e ' 's ' 's ' 's ']
+ ['s ' 'w ' 'w ' 's ' 's ' 's ' 's ' 'e ' 's ' 's ']
+ ['n ' 'w ' 'w ' 'w ' 'w ' 'w ' 'e ' 'e ' 'e ' 'w ']]
+
+where (n, w, s, e) = (north, west, south, east). According to the result from Andrey Kaparthy's simulation (from here), the final policy should look like this:
+
+Notes:
+
+- I did not use any exploration
+- when the agent ends up in the final state [5, 5] I used the value of the starting state [0, 0] as the value of its successor state V(S_{t+1}). The episode is then finished and the agent starts again in the starting state.
+- In every state the agent takes a random action from north, west, south or east. If it runs into a wall, the value of the next state is just the value of the state the agent is currently in, and it stays in its state and takes a random action again.
+
+I have been scratching my head on this for a while now, but I don't understand what I am missing.
+
+- The value function has to converge, meaning my policy should be the same as on the website (picture 2)?
+- Only the value of my final state is positive, while in the website simulation the whole optimal trajectory has positive values. I know that this is because on the website they update the policy in every step. But shouldn't it also work without updating it iteratively, the way I did it?
+- Since I am taking a random action (from n, w, s, e) in every step of every episode, a state like [6, 5] or [6, 6] (the one below the terminal state) cannot really take advantage of the positivity of the terminal state, since it is surrounded by more negative-reward states than this one positive-reward state. This is why after so many iterations the values become negative.
+
+I appreciate any help. Thanks in advance.
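+For reference, this is the tabular TD(0) evaluation update I am trying to follow, as a minimal Python sketch (the function name, the env.step signature returning (next_state, reward, done), and the fixed policy pi are my own assumptions for illustration):
+def td0_episode(env, V, pi, alpha=0.1, gamma=0.9):
+    s = env.reset()
+    done = False
+    while not done:
+        a = pi(s)                        # fixed policy, evaluated over all episodes
+        s_next, r, done = env.step(a)
+        target = r + (0.0 if done else gamma * V[s_next])
+        V[s] += alpha * (target - V[s])  # TD(0) update towards the bootstrapped target
+        s = s_next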
+"
+"['neural-networks', 'machine-learning', 'unsupervised-learning', 'autoencoders', 'supervised-learning']"," Title: Which models can I use for supervised learning with images?Body: I have to do a project that detects fabric surface errors and I will use machine learning methods to deal with it. I have a dataset that includes around six thousand fabric surface images with the size 256x256. This dataset is labeled, one thousand of it was labeled as NOK that means fabric surface with error, and the rest was labeled as OK which means fabric surface without an error.
+I read a lot of papers about fabric surface error detection with machine learning methods, and I saw that "autoencoders" are used to do it. But as I saw that the autoencoders are used in unsupervised learning models without labels. I need to do it with supervised learning models. Is there any model that can I use for fabric surface error detection with images in the supervised learning? Can be autoencoders used for it or is there any better model to do it?
+"
+"['reinforcement-learning', 'dqn', 'hyperparameter-optimization', 'performance', 'c51']"," Title: How should I change the hyper-parameters of the C51 algorithm, in order to obtain higher reward?Body: I have a scenario where, in an ideal situation, the greedy approach is the best, but when non-idealities are introduced which can be learned, DQN starts doing better. So, after checking what DQN achieved, I tried C51 using the standard implementation from tf.agents
(link). A very nice description is given here. But, as shown in the image, C51 does extremely bad.
+
+As you can see, C51 stays at the same level throughout. When learning, the loss right from the first iteration is around 10e-3 and goes on to 10e-5, which definitely impacts the change in the weights. But I am not sure how this can be solved.
+The scenario is
+
+- 1 episode consists of 10 steps and the episode only ends after the 10th step, the episode never ends earlier.
+
+- states at each step are integer values and can take values between 0 and 1. In the image, states are of shape 20*1.
+
+- actions have the shape 20*1
+
+- learning rate = 10e-3
+
+- exploration factor $\epsilon$ starts out at 0.2 and decays down to 0.01
+
+
+C51 has 3 additional parameters, which help it to learn the distribution of q-values
+num_atoms = 51 # u/param {type:"integer"}
+min_q_value = -20 # u/param {type:"integer"}
+max_q_value = 20 # u/param {type:"integer"}
+
+num_atoms is the number of support points (atoms) that the learned distribution will have, and min_q_value and max_q_value are the endpoints of the q-value distribution. I set num_atoms to 51 (the first paper and other implementations keep it at 51, hence the name C51), and the min and max are set as the minimum and maximum possible rewards.
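+As I understand it, the support of the learned return distribution is just a fixed grid of atoms between these two endpoints; here is a minimal sketch of what I assume is built internally (my own illustration, not tf.agents code):
+import numpy as np
+
+num_atoms, min_q_value, max_q_value = 51, -20, 20
+support = np.linspace(min_q_value, max_q_value, num_atoms)  # the 51 fixed return values
+# the network outputs a probability for each atom; the Q-value is the expectation
+# q_value = np.sum(probabilities * support)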
+So, if anyone could help me with fine-tuning the parameters for C51 to work, I would be very grateful.
+"
+"['reinforcement-learning', 'deep-learning', 'natural-language-processing', 'prediction']"," Title: Predict next event based on previous events and discrete reward valuesBody: Suppose, I have several sequences that include a series of text (the length of sequence can be varied). Also, I have some related reward value. however, the value is not continuous like the text. It has many missing values. Here is an example of the dataset.
+Sequence 1 Sequence 2 .............. Sequence n
+------------ ---------- -------------
+Action Reward Action Reward Action Reward
+ A C D
+ B 5 A B 6
+ C A 7 A
+ C 6 B 10 D
+ A C A
+ B 2 A B
+ .. ... ...
+ ... ..... .....
+ D 5 C 4 D
+
+Now I want to predict the next action based on the reward value. The idea is that I want to predict the actions that lead to more reward. Previously, I used only the action data to predict the next action using LSTM and GRU. However, how could I use the reward value in this prediction? I was thinking whether reinforcement learning (an MDP formulation) could solve the problem. However, as the rewards are discrete, I am not sure if RL could do that. Also, is it possible to solve this problem with inverse RL? I have some knowledge of deep learning, but I am new to reinforcement learning. If anyone could give me some suggestions or provide links to useful papers regarding this problem, it would help me a lot.
+"
+"['reinforcement-learning', 'dqn']"," Title: DQN fails to learn useful policy for the Taxi environment (Dietterich 200)Body: I'm building an agent to solve the Taxi environment. I've seen this problem solved with Q-Learning algorithms but my DQN consistently fails to learn anything. The environment has a discrete observation space, I one-hot encode the state before feeding it to the DQN. I also went ahead to implement Hindsight Experience Replay to help the learning process but the DQN still doesn't learn anything. What can I do to fix this?
+I've heard that DQN doesn't excel at environments that require planning to succeed; if that's the case, which algorithms would work well for this environment?
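+For context, this is roughly how I one-hot encode the discrete Taxi state before it goes into the network (a minimal sketch with made-up names, assuming the standard 500-state Taxi-v3 observation space):
+import numpy as np
+
+n_states = 500                                          # Taxi-v3 has 500 discrete states
+def encode(state):
+    return np.eye(n_states, dtype=np.float32)[state]    # one-hot vector of length 500
+
+obs = encode(42)                                        # e.g. state index 42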
+EDIT
+When I posted this question, my DQN was learning from only 2-step transitions. Since this environment can go on for several timesteps without any positive reward, I updated the agent to use transitions of 200 steps. Since I'm using Hindsight Experience Replay, my agent is sure to receive rewards within 200 timesteps even if it didn't meet the goal. I tried this and my agent still hasn't improved; it continually performs worse than the random-agent baseline. I checked the contents of the buffer, and I observed transitions that do lead to several rewards because their goals have been modified during HER, and yet the DQN agent doesn't learn anything.
+Also, I'm using TensorFlow's tf_agents for my implementation. Here's a link to the code. I repurposed this example.
+I hope this helps
+"
+"['papers', 'supervised-learning', 'notation', 'expectation', 'mean-squared-error']"," Title: What is the meaning of these equations in Noise2Noise paper?Body: I am trying to understand what is meant by following equations in the Noise2Noise paper by Nvidia.
+
+What is meant by the equation in this image? What is $\mathbb{E}_y\{y\}$? And how should I try to visualize these equations?
+"
+"['objective-functions', 'support-vector-machine', 'perceptron', 'binary-classification']"," Title: Why doesn't the set $\{ -2, +2 \}$ in $E(X) = (y − \text{sign}\{\overline{W} \cdot \overline{X} \}) \in \{ −2, +2 \}$ include $0$?Body: I am currently studying the textbook Neural Networks and Deep Learning by Charu C. Aggarwal. Chapter 1.2.1.2 Relationship with Support Vector Machines says the following:
+
+The perceptron criterion is a shifted version of the hinge-loss used in support vector machines (see Chapter 2). The hinge loss looks even more similar to the zero-one loss criterion of Equation 1.7, and is defined as follows:
+$$L_i^{svm} = \max\{ 1 - y_i(\overline{W} \cdot \overline{X}_i), 0 \} \tag{1.9}$$
+Note that the perceptron does not keep the constant term of $1$ on the right-hand side of Equation 1.7, whereas the hinge loss keeps this constant within the maximization function. This change does not affect the algebraic expression for the gradient, but it does change which points are lossless and should not cause an update. The relationship between the
+perceptron criterion and the hinge loss is shown in Figure 1.6. This similarity becomes
+particularly evident when the perceptron updates of Equation 1.6 are rewritten as follows:
+$$\overline{W} \Leftarrow \overline{W} + \alpha \sum_{(\overline{X}, y) \in S^+} y \overline{X} \tag{1.10}$$
+Here, $S^+$ is defined as the set of all misclassified training points $\overline{X} \in S$ that satisfy the condition $y(\overline{W} \cdot \overline{X}) < 0$. This update seems to look somewhat different from the perceptron, because the perceptron uses the error $E(\overline{X})$ for the update, which is replaced with $y$ in the update above. A key point is that the (integer) error value $E(X) = (y − \text{sign}\{\overline{W} \cdot \overline{X} \}) \in \{ −2, +2 \}$ can never be $0$ for misclassified points in $S^+$. Therefore, we have $E(\overline{X}) = 2y$ for misclassified points, and $E(X)$ can be replaced with $y$ in the updates after absorbing the factor of $2$ within the learning rate.
+
+Equation 1.7 is as follows:
+
+$$L_i^{(0/1)} = \dfrac{1}{2} (y_i - \text{sign}\{ \overline{W} \cdot \overline{X_i} \})^2 = 1 - y_i \cdot \text{sign} \{ \overline{W} \cdot \overline{X_i} \} \tag{1.7}$$
+
+And figure 1.6 is as follows:
+
+
+
+It is said that we are dealing with the case of binary classification, where $y \in \{ -1, +1 \}$. But the author claims that $E(X) = (y − \text{sign}\{\overline{W} \cdot \overline{X} \}) \in \{ −2, +2 \}$, which doesn't include the case of $0$. So shouldn't the $\{ -2, +2 \}$ in $E(X) = (y − \text{sign}\{\overline{W} \cdot \overline{X} \}) \in \{ −2, +2 \}$ be $\{ -2, 0, +2 \}$?
+"
+"['objective-functions', 'support-vector-machine', 'perceptron', 'binary-classification', 'hinge-loss']"," Title: How should we interpret this figure that relates the perceptron criterion and the hinge loss?Body: I am currently studying the textbook Neural Networks and Deep Learning by Charu C. Aggarwal. Chapter 1.2.1.2 Relationship with Support Vector Machines says the following:
+
+The perceptron criterion is a shifted version of the hinge-loss used in support vector machines (see Chapter 2). The hinge loss looks even more similar to the zero-one loss criterion of Equation 1.7, and is defined as follows:
+$$L_i^{svm} = \max\{ 1 - y_i(\overline{W} \cdot \overline{X}_i), 0 \} \tag{1.9}$$
+Note that the perceptron does not keep the constant term of $1$ on the right-hand side of Equation 1.7, whereas the hinge loss keeps this constant within the maximization function. This change does not affect the algebraic expression for the gradient, but it does change which points are lossless and should not cause an update. The relationship between the
+perceptron criterion and the hinge loss is shown in Figure 1.6. This similarity becomes particularly evident when the perceptron updates of Equation 1.6 are rewritten as follows:
+$$\overline{W} \Leftarrow \overline{W} + \alpha \sum_{(\overline{X}, y) \in S^+} y \overline{X} \tag{1.10}$$
+Here, $S^+$ is defined as the set of all misclassified training points $\overline{X} \in S$ that satisfy the condition $y(\overline{W} \cdot \overline{X}) < 0$. This update seems to look somewhat different from the perceptron, because the perceptron uses the error $E(\overline{X})$ for the update, which is replaced with $y$ in the update above. A key point is that the (integer) error value $E(X) = (y − \text{sign}\{\overline{W} \cdot \overline{X} \}) \in \{ −2, +2 \}$ can never be $0$ for misclassified points in $S^+$. Therefore, we have $E(\overline{X}) = 2y$ for misclassified points, and $E(X)$ can be replaced with $y$ in the updates after absorbing the factor of $2$ within the learning rate.
+
+Equation 1.6 is as follows:
+
+$$\overline{W} \Leftarrow \overline{W} + \alpha \sum_{\overline{X} \in S} E(\overline{X})\overline{X}, \tag{1.6}$$
+where $S$ is a randomly chosen subset of training points, $\overline{X} = [x_1, \dots, x_d]$ is a data instance (vector of $d$ feature variables), $\overline{W} = [w_1, \dots, w_d]$ are the weights, $\alpha$ is the learning rate, and $E(\overline{X}) = (y - \hat{y})$ is an error value, where $\hat{y} = \text{sign}\{ \overline{W} \cdot \overline{X} \}$ is the prediction and $y$ is the observed value of the binary class variable.
+
+Equation 1.7 is as follows:
+
+$$L_i^{(0/1)} = \dfrac{1}{2} (y_i - \text{sign}\{ \overline{W} \cdot \overline{X_i} \})^2 = 1 - y_i \cdot \text{sign} \{ \overline{W} \cdot \overline{X_i} \} \tag{1.7}$$
+
+And figure 1.6 is as follows:
+
+
+
+Figure 1.6 looks unclear to me. What is figure 1.6 showing, and how is it relevant to the point that the author is trying to make?
+"
+"['neural-networks', 'generative-model', 'information-theory', 'minimum-description-length', 'entropy']"," Title: How does NN follows law of energy conservation?Body: Communication requires energy, and using energy requires communication. According to Shannon, the entropy value of a piece of information provides an absolute limit on the shortest possible average length of a message without losing information as it is transmitted. (https://towardsdatascience.com/entropy-the-pillar-of-both-thermodynamics-and-information-theory-138d6e4872fa)
+I don't know whether Neural Network actually deals with information flow or not. This information flow is taken from the idea of entropy. Since I haven't found any paper or ideas based on the law of energy for neural networks. The law of energy states that energy can neither be created nor destroyed. If it is creating information (energy) (e.g. in the case of a generative model), then some information may be lost while updating weights. How is Neural Network ensuring this energy conservation?
+"
+"['neural-networks', 'reinforcement-learning', 'deep-learning', 'deep-rl']"," Title: Why are shallow networks so prevalent in RL?Body: In deep learning, using more layers in a neural network adds the capacity to capture more features. In most RL papers, their experiments use a 2 layer neural network. Learning to Reset, Constrained Policy Optimization, Model-based RL with stability guarantees just to name a few - these are papers I personally remember but there are definitely many others.
+I came across this question whose answers generally agree that yes, RL using a shallow network is considered deep RL, but the reason for preference of shallow networks was not part of the question.
+In MetaMimic (2018), the authors trained the largest neural net RL algorithm at the time (a residual net with 20 convolution layers) for one-shot imitation learning. The paper demonstrates that larger networks as policy approximators generalize better and represent many behaviours.
+So, why are shallow 2 layer networks so widely used?
+"
+"['natural-language-processing', 'text-generation', 'natural-language-generation']"," Title: Making generated texts from ""data-to-text"" more variableBody: I am diving in data-to-text generation for long articles (> 1000 words). After creating a template and fill it with data I am currently going down on paragraph level and adding different paragraphs, which are randomly selected and put together. I also added on a word level different outputs for date, time and number formats.
+The challenge I see is, that when creating large amounts of such generated texts they become boring to read as the uniqueness for the reader goes down.
+Furthermore, I also think it's easy to detect that such texts have been autogenerated. However, I still have to validate this hypotheses.
+I was wondering if there is an even better method to bring in variability in such a text?
+Can you suggest any methods, papers, resources or share your experience within this field.
+I highly appreciate your replies!
+"
+"['reinforcement-learning', 'policy-gradients', 'hyperparameter-optimization', 'proximal-policy-optimization', 'trust-region-policy-optimization']"," Title: Why does PPO lead to a worse performance than TRPO in the same task?Body: I am training an agent with an Actor-Critic network and update it with TRPO so far. Now, I tried out PPO and the results are drastically different and bad. I only changed from TRPO to PPO, the rest of the environment and rewards are the same. PPO is just a more efficient method compared to TRPO and has proven to be a state-of-the-art method in RL. So, why shouldn't it work? I just thought to ask if someone knows roughly how to transform configuration parameters from TRPO to PPO.
+Here some more details about my configurations.
+TRPO
+
+- Actor loss: $-\log(\pi) * A$ where $A$ are advantages
+- Critic Loss: MSE(predicted_values, discounted return)
+- Desired KL Divergence for Actor and Critic: 0.005
+- Conjugate gradient iterations: 20
+- Residual tolerance in conjugate gradient: 1e-10
+- Damping coefficient for Fisher Product: 1e-3
+
+
+PPO
+
+- Actor and Critic optimizer and learning rate: Adam with 0.0001
+- Actor loss (see the code sketch right after this list): the negative of the minimum of either:
+
+- $\frac{\pi}{\pi_{old}} * A$
+- $clamp(\frac{\pi}{\pi_{old}}, 1-0.1, 1+0.1) * A$
+
+
+- Critic loss: MSE(predicted_values, discounted_rewards)
+- Optimization iterations: 10
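+To make the actor loss above concrete, this is roughly how I compute it (a minimal PyTorch-style sketch with my own variable names; logp_new/logp_old are the log-probabilities of the taken actions under the current and old policy, adv the advantages):
+import torch
+
+# toy stand-ins for one batch
+logp_new = torch.randn(64, requires_grad=True)
+logp_old = torch.randn(64)
+adv = torch.randn(64)
+
+ratio = torch.exp(logp_new - logp_old)                    # pi / pi_old
+unclipped = ratio * adv
+clipped = torch.clamp(ratio, 1 - 0.1, 1 + 0.1) * adv
+actor_loss = -torch.min(unclipped, clipped).mean()        # negative of the element-wise minimum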
+
+
+
+The rest of my problem set-up is absolutely the same, but somehow I get completely different results while training, as you can see in the plots above. I also tried changing the learning rates and optimization iterations, gradient clipping, optimizing with mini-batches, and using $-\log(\pi) * A$ as the loss for PPO, but none of it helped. Taking the importance-sampling objective $\frac{\pi}{\pi_{old}} * A$ as the loss for TRPO gives the same results there.
+Can someone please help me understand where the problem could be? Or which parameters I would need to change in PPO?
+"
+"['machine-learning', 'generative-adversarial-networks', 'generative-model']"," Title: Generating fake faces containing specific features with GANsBody: I'm trying to understand how DeepFakes are generated and so far I understood that they're mostly generated through the usage of GANs and autoencoders.
+The autoencoders part is understandable, but what I cannot understand is how to generate faces with GANs that match destination face.
+GANs consist of a generator and a discriminator. The generator gets noise input, which is randomly sampled from a normal distribution, and feedback from the discriminator. The discriminator is taught what the real data looks like and just classifies whether the data fed to it is real or fake. Depending on the answer, one of them (generator/discriminator) updates its model: if the discriminator guesses right, the generator gets updated; if not, then the discriminator is the one that updates its model.
+So after the training part is over, we can feed the generator more noise to achieve more fake data. In DeepFake videos, we normally try to swap the destination face with the input face. My problem with that is that the destination face has specific features for example it has closed eyes, smiles, rotates its head. If we feed the generator noise, how can we control the process to achieve similar facial features that are in the destination face?
+I've found papers about GANs that can control some of the features of generated faces (StyleGANs). Although I'm not sure how would it be possible to extract "special features" of destination face and generate them with StyleGANs.
+I will be extremely grateful for any help in understanding the concept of DeepFake with GANs.
+Thanks a lot.
+"
+"['neural-networks', 'overfitting', 'convergence', 'gradient-boosting', 'validation-loss']"," Title: React on train-validation curve after treningBody: I have a regression task that I tray to solve with AI.
+I have around 6M rows with about 30 columns. (originally there was 100, but I reduce it with drop feature importance)
+I understand basic principle: Look if model overfit or underfit - according change the parameters. In theory.
+I would ask for help with two graphs:
+
+- whether I understand correctly what is going on
+- how you would attack the situation
+
+1. Graph
+
+
+- I use LightGBM
+- learning_rate = 1
+- max_depth = 3
+- num_leaves = 2**15,
+- number of iterations = 4000
+
+If I understand correctly, this model is underfitting. The validation and training losses are falling, but not by much...
+BUT: the number of iterations is already very large and setting a higher number is not OK. The learning rate is 1 (as high as it gets). Only max_depth is low, but if it is higher (I tried 30) the graph looks the same, just with worse values.
+So, what can I do so that the model does not underfit?
+
+2. Graph
+
+
+
+- I use Neural Nets
+- epochs=200,
+- batch_size=64
+
+The model:
+from tensorflow.keras.layers import Input, Dense
+
+i = Input(shape=(100,), name='input')
+x = Dense(128)(i)   # first hidden layer
+x = Dense(64)(x)    # second hidden layer
+o = Dense(1, activation='relu', name='output')(x)
+
+Here I am not sure. This doesn't really look like underfitting, but more like the model hasn't converged.
+Is this right?
+So, should I create a more complex model (more neurons or more layers)?
+And how many epochs do I need to see this behaviour? In the beginning I used only 10 epochs, for faster development, and I thought that the model was overfitting. Only when I used more epochs did I see that I was wrong.
+How would you start to "debug" this neural net? What would be the plan of attack?
+"
+"['ai-design', 'logic', 'sudoku', 'conjunctive-normal-form']"," Title: How to translate sudoku XV boards in CNF format?Body: I'm trying to implement the logic for a Sudoku XV puzzle, that it's essentially a standard sudoku with the addition of X and V markers between some pairs of squares. X markers in adjacent pairs requires that the sum of the two values is 10. Similarly, the V marks requires that the sum of the values is equal to 5.
+(Assume that $$ S_{xyz} $$ stands for [digit][row][column])
+I've written the following CNF formulae that handle the logic of a standard Sudoku puzzle:
+There is at least one number in each entry:
+$$ \bigwedge_{x=1}^{9}\bigwedge_{y=1}^{9}\bigvee_{z=1}^{9}S_{xyz} $$
+Each number appears at most once in each row:
+$$ \bigwedge_{y=1}^{9}\bigwedge_{z=1}^{9}\bigwedge_{x=1}^{8}\bigwedge_{i=x+1}^{9}(\lnot S_{xyz}\vee\lnot S_{iyz}) $$
+Each number appears at most once in each column:
+$$ \bigwedge_{x=1}^{9}\bigwedge_{z=1}^{9}\bigwedge_{y=1}^{8}\bigwedge_{i=y+1}^{9}(\lnot S_{xyz}\vee\lnot S_{xiz}) $$
+Each number appears at most once in each 3x3 sub-grid:
+$$
+\bigwedge_{z=1}^{9}\bigwedge_{i=0}^{2}\bigwedge_{j=0}^{2}\bigwedge_{x=1}^{2}\bigwedge_{y=1}^{3}\bigwedge_{k=x+1}^{3}\bigwedge_{l=1,\ y \neq l}^{3}(\lnot S_{(3i+x)(3j+y)z}\vee\lnot S_{(3i+k)(3j+l)z})
+$$
+Unfortunately, I'm stuck, and I don't really know how I can express the logic for X and V markers, and most importantly how to invalidate squares that contain neither an X nor a V marker that have digits summing to 5 or 10.
+"
+"['reinforcement-learning', 'monte-carlo-methods', 'temporal-difference-methods', 'value-functions', 'return']"," Title: Is the expected value we sample in TD-learning action-value Q or state-value V?Body: Both MC and TD are model-free and they both follow a sample trajectory (in the case of TD, the trajectory is cut-short) to estimate the return (we basically are sampling Q values). Other than that, the underlying structure of both algorithms is exactly the same. However, from the blogs and texts I read, the equations are expressed in terms of V and NOT Q. Why is that?
+"
+"['neural-networks', 'activation-functions']"," Title: Has the logistic map ever been used as an activation function?Body: I find the logistic map absolutely fascinating. Both in itself (because I love fractal) and because it is observed in nature (see: https://www.youtube.com/watch?v=ovJcsL7vyrk).
+I'm wondering if anyone tried it as an activation function in some way or another with any kind of success.
+I like it because it has some kind of "I'm not sure what to do" behaviour above ~3.0, and the lower the confidence the more chaotic the response is. It gives the possibility to explore some other solution to escape a local optimum (not sure I use this word correctly). And below 3 it's still a nice and smooth activation function, like, e.g., a tanh.
+E.g.: the reward I got isn't the reward I expected, and the higher the difference the more I'll explore other solutions. But it's still gradual, from 1 choice, to 2 choices, 4, 8, 16, ... until it becomes chaotic (giving the possibility to experiment with some pseudo-random choice). And below this threshold it still acts as a usable "good old" activation function.
+Another good side is that it's GPU-friendly and doesn't need many iterations for this application, since a little bit of uncertainty (even below the threshold) isn't undesirable. See: https://upload.wikimedia.org/wikipedia/commons/6/63/Logistic_Map_Animation.gif
+Edit: so, OK, I tested it on my extremely naive racetrack (feedforward, no feedback, no error, no fitness, only genetic selection for the cars that didn't crash). It does work, for sure. I don't see any advantage in practice, but with such a naive NN, there isn't much I can tell.
+My implementation :
+import random
+
+def logi(r):
+    x = .6  # the initial population doesn't matter, so I took .6
+    for _ in range(random.randrange(10, 40)):  # iterate the map a random number of times
+        x = r * x * (1 - x)  # logistic map update
+    return x
+
+The activation takes 8% of my laptop's CPU (while it was invisible on my radar with leaky ReLU).
+
+"
+"['neural-networks', 'deep-learning', 'long-short-term-memory', 'papers', 'google']"," Title: Does this diagram represent several LSTMs, or one through several timesteps?Body: I'm trying to read this paper describing Google's LSTM architecture for machine translation. It features this diagram on page 4:
+
+I'm interested in the encoder block, on the left. Apparently, the pink and green cells are LSTMs. However, I can't tell if the x-axis is space or time. That is, are the LSTM cells on a given row all the same cell, with time flowing forward from left to right? The diagram on the next page in the paper seems to suggest that.
+"
+"['autonomous-vehicles', 'control-theory']"," Title: How do self-driving cars perform lane changes?Body: I am a bit stuck trying to understand how a lane change is performed from an operational point of view.
+Let's assume a self-driving car uses an occupancy grid map for local planning, this map may even have the detected lane boundaries. It's following a slow car and decides to overtake, but how does it know where the centre of the adjacent lane is? Does it use a separate map, or is there a separate data structure that is used to keep the lane information which informs the car where the centre of the adjacent lane is?
+Alternatively, does the car just decide to start drifting off to the side until it picks up the lane boundaries and then centres itself?
+"
+"['reinforcement-learning', 'value-functions', 'bellman-equations']"," Title: How are afterstate value functions mathematically defined?Body: In this answer, afterstate value functions are mentioned, and that temporal-difference (TD) and Monte Carlo (MC) methods can also use these value functions. Mathematically, how are these value functions defined? Yes, they are a function of the next state, but what's the Bellman equation here? Is it simply defined as $v(s') = \mathbb{E}\left[ R_t \mid S_t = s, A_t = a, S_{t+1} = s' \right]$? If yes, how can we define it in terms of the state, $v(s)$, and state-action, $q(s, a)$, value functions, or as a Bellman (recursive) equation?
+Sutton & Barto's book (2nd edition) informally describe afterstate value functions in section 6.8, but they don't provide a formal definition (i.e. Bellman equation in terms of reward or other value functions), so that's why I am asking this question.
+"
+"['reinforcement-learning', 'combinatorial-games', 'hierarchical-rl']"," Title: Hierarchical reinforcement learning for combinatorial complexityBody: I want to try a hierarchical reinforcement learning (HRL) approach to hard logical problems with combinatorial complexity, i.e. games like chess or Rubik's cube. The majority of HRL papers I have found so far focus either on training a control policy or they tackle quite simple games.
+By HRL I mean all methods that (among others):
+
+- split hard and complex problem into a series of simpler ones
+- create desired intermediate goals (or spaces of such goals)
+- somehow think in terms of 'what to achieve' rather than 'how to achieve'
+
+Do you know any examples of solving logically hard problems with HRL or maybe just any promising approaches to such problems?
+"
+"['comparison', 'simulated-annealing', 'meta-heuristics', 'local-search', 'stochastic-hill-climbing']"," Title: What is the difference between Stochastic Hill Climbing and Simulated Annealing?Body: I am reading about local search: hill climbing, and its types, and simulated annealing
+One of the hill climbing versions is "stochastic hill climbing", which has the following definition:
+
+Stochastic hill climbing does not examine for all its neighbor before moving. Rather, this search algorithm selects one neighbor node at random and decides whether to choose it as a current state or examine another state
+
+Some sources mentioned that it can be used to avoid local optima.
+Then I was reading about simulated annealing and its definition:
+
+At every iteration, a random move is chosen. If it improves the situation then the move is accepted, otherwise it is accepted with some probability less than 1
+
+So, what is the main difference between the two approaches? Does stochastic hill climbing choose only a random (uphill) successor? If it chooses only uphill successors, then how does it avoid local optima?
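+For reference, this is my current understanding of the simulated-annealing acceptance rule quoted above, as a minimal Python sketch (the exponential acceptance probability is the standard formulation I have seen elsewhere, not something stated in the quoted definition):
+import math, random
+
+def accept(delta, temperature):
+    # delta = value(candidate) - value(current); positive means the move improves things
+    if delta > 0:
+        return True                                          # always accept an improving move
+    return random.random() < math.exp(delta / temperature)   # accept a worse move with probability < 1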
+"
+"['deep-learning', 'convolutional-neural-networks', 'image-segmentation', 'u-net', 'binary-classification']"," Title: Semantic segmentation failing in small instance detectionBody: I performed semantic segmentation with U-net. My dataset consists of grayscale images of defects. After training the dataset for I got an metric accuracy of only 0.3 - 0.4 IOU. Eventhough it is merely low it performs well enough to identify instances that are huge means the prediction performs well enough in places where there is a standard intensity change(color change) and they are bigger instances. There are many other instances where there is no color change and it occoupies only few pixels in image(smaller instances) and the prediction rate is almost 0 on these instances.
+I also tried Resdiual connection in the downsampling part of U-net likewise in ResNet. But still its the same and for smaller instances I used dilated convolution blocks in between the skip connections for encoder and decoder of U-net based on some papers. But still I cannot have a higher accuracy in my network and prediction rate for smaller instances are really poor. Although I use only 350 images for training with Data Augmentations. My image size is also 256,256.
+Is there any other method I can try to increase the accuracy and prediction rate for smaller instances?
+Any suggestion would be helpful.
+"
+"['reinforcement-learning', 'multi-armed-bandits']"," Title: Multi-armed bandit problem without getting rewardsBody: In a 2-armed-bandit problem, an agent has an opportunity to see n reward for each action. Now the agent should choose actions m times and maximize the expected reward in these m decisions. but it cant see the reward of them. what is the best approach for this?
+"
+"['comparison', 'transformer', 'gpt', 'positional-encoding']"," Title: What is the difference between the positional encoding techniques of the Transformer and GPT?Body: I know the original Transformer and the GPT (1-3) use two slightly different positional encoding techniques.
+More specifically, in GPT they say positional encoding is learned. What does that mean? OpenAI's papers don't go into detail very much.
+How do they really differ, mathematically speaking?
+"
+"['reinforcement-learning', 'environment', 'reward-design', 'reward-functions']"," Title: If the reward function of an environment depends on some initial conditions, should I create a separate environment for each condition?Body: I would like some guidance on how to design an Environment for a Reinforcement Learning agent where the stopping conditions and rewards for the environment change based on an initial set of input parameters.
+For example, let's say that a system generated alert triggers the instantiation of the RL environment, whereby the RL agent is launched to make decisions in the environment, based on the alert. The alert has two priorities "HIGH" and "LOW", when the priority is "HIGH" the stopping condition is a reward of "100" and when the priority is "LOW", the stopping condition is a reward of "1000".
+In this scenario, is it preferable to create two separate environments based on the priority (input parameter) of the alert? Or is this a common requirement that should be designed into the environment/agent? If so, how? Note that I have simplified the scenario, so there could be multiple conditions (e.g., alert, system type, etc), but I am just trying to find a basic solution for the general case.
+"
+"['algorithm', 'minimax', 'game-theory', 'alpha-beta-pruning', 'optimality']"," Title: Minimax algorithm with only partial visibilityBody: I'm trying to implement the minimax algorithm with alpha beta pruning on a game that works like this:
+
+- Player 1 plays (x1, y1).
+
+- Player 2 can only see the x-value (x1) that Player 1 played (and not the y-value y1). Player 2 plays (x2, y2).
+
+- An action event happens, which may change the heuristic of the current game state.
+
+- Player 2 plays (x3, y3).
+
+- Player 1 can only see the x-value (x3) that Player 2 played (and not the y-value y3). Player 1 plays (x4, y4).
+
+- Action event. The game continues with alternating starting players for a maximum depth of 10.
+
+
+To do so, I have been treating each turn as you regularly would with the minimax algorithm, with each player making moves given the set of moves already played, including the possibly hidden move from the turn before. However, I've noticed that my algorithm will return moves for Player 2 that assume that Player 1 plays a certain way when, in a "real" game, it may be the case that Player 1 plays something else (and vice versa). For example, if Player 2 could guarantee a win on a given turn under all circumstances (when Player 1 plays first for that series), it might not play optimally when it assumes Player 1 will not play its maximum-strength move.
+I believe it is doing this precisely because it assumes that all moves are visible (a fully visible game state). And indeed, if that were the case, the sequence of moves it returns would be optimal.
+How can I remedy this?
+I do not believe that a probability-based algorithm (e.g. Expectiminimax) is necessary, since the game is entirely deterministic. The partial visibility part is making things difficult, though.
+Something tells me that changing the turn order in my algorithm might be a solution to this problem, since the action event is the only time the game heuristic is changed.
+"
+"['machine-learning', 'classification', 'probability-distribution', 'selection-bias']"," Title: Is this referring to the true underlying distribution, or the distribution of our sample?Body: I am currently studying the paper Learning and Evaluating Classifiers under Sample Selection Bias by Bianca Zadrozny. In the introduction, the author says the following:
+
+One of the most common assumptions in the design of learning algorithms is that the training data consist of examples drawn independently from the same underlying distribution as the examples about which the model is expected to make predictions. In many real-world applications, however, this assumption is violated because we do not have complete control over the data gathering process.
+For example, suppose we are using a learning method to induce a model that predicts the side-effects of a treatment for a given patient. Because the treatment is not given randomly to individuals in the general population, the available examples are not a random sample from the population. Similarly, suppose we are learning a model to predict the presence/absence of an animal species given the characteristics of a geographical location. Since data gathering is easier in certain regions than others, we would expect to have more data about certain regions than others.
+In both cases, even though the available examples are not a random sample from the true underlying distribution of examples, we would like to learn a predictor from the examples that is as accurate as possible for this distribution. Furthermore, we would like to be able to estimate its accuracy for the whole population using the available data.
+
+It's this part that I am confused about:
+
+In both cases, even though the available examples are not a random sample from the true underlying distribution of examples, we would like to learn a predictor from the examples that is as accurate as possible for this distribution.
+
+What exactly is "this distribution"? Is it referring to the true underlying distribution, or the distribution of our sample (which, as was said, is not necessarily a "good" reflection of the underlying distribution, since it is not a random sample)?
+"
+"['genetic-algorithms', 'hyperparameter-optimization', 'crossover-operators', 'mutation-operators', 'genetic-operators']"," Title: What is the impact of changing the crossover and mutation rates?Body: What is the impact of using a:
+
+- low crossover rate
+
+- high crossover rate
+
+- low mutation rate
+
+- high mutation rate
+
+
+"
+"['reinforcement-learning', 'environment', 'policies']"," Title: How to use and update a shared/global policy between Reinforcement Learning AgentsBody: I would be grateful for some guidance on a RL problem I am trying to solve where multiple RL agents use a common/global policy at the initial state of an episode in the RL Environment, and then update this common/shared policy once the episode is completed.
+Below is an example of the problem scenario:
+
+- An alert triggers a RL agent to execute a "episode" in the Environment
+- Multiple alerts (e.g., episodes) can occur at the same time, or, one alert may still be being processed (e.g., the episode has not finished) before another alert is triggered (e.g., another episode begins).
+
+Below are the conditions of the Environment and desired behaviour of the RL Agent:
+
+- Multiple episodes can run at once (e.g., another episode starts before another finishes).
+- For each episode a "instance" of the RL agent uses the latest version of a common policy.
+- After each episode the RL agent updates the common policy.
+- Common policy updates are "queued" using versioning in code to prevent race conditions.
+
+Q: How can multiple RL agents in this case use a common policy at the beginning of an episode and then update that common policy after completing it? All I have found are discussions related to Q-learning, where agents can update a shared Q-table, or later update a "global" Q-table, without any examples of how this can be achieved, and without discussion of whether there are also methods for other approaches, such as TD methods, rather than only Q-learning.
+Q: Does this sound like a traditional multi-agent scenario, at least conceptually? If so, how might one go about implementing this? Any examples would be really helpful.
+Any help on this is greatly appreciated!
+EDIT:
+Since doing more investigation I have found this reference on Mathworks:
+Link, which is similar to the above problem, but not exact.
+"
+"['neural-networks', 'regression', 'function-approximation']"," Title: Neural network architecture with inputs and outputs being an unkown function eachBody: I am trying to set up a neural network architecture that is able to learn the points of one function (blue curves) from the points of an other one (red curves). I think that it could be somehow similar to the problem of learning a functional like it was described in this question here. I don't know at all what this (let's call it) functional looks like, I just see the 'blue' response of it to the 'red' input.
+
+The inputs of the network would be the (e.g. 100) points of a red curve and the outputs would probably be the (e.g. 50) points of the blue curve. This is where my problem begins. I tried to implement a simple dense network with two hidden layers and around 200-300 neurons each. Obviously it didn't learn much.
+I have the feeling that I somehow need to tell the network that the points next to each other (e.g. input points $x_0$ and $x_1$) are correlated and that the function they belong to is differentiable. For the inputs this could be achieved by using convolutional layers, I suppose. But I don't really know how to specify that the output nodes are correlated with each other as well.
+At the beginning I had high hopes in the approach using Generalized Regression Neural Networks as presented here, where a lowpass filter is implemented with NNs. However, as I understood, only the filter coefficients are predicted. As I don't know anything about the general structure of my functional, this will not help me here ...
+Do you have any other suggestions for NN architectures that could be helpful for this problem? Any hint is appreciated, thank you!
+"
+"['natural-language-processing', 'recurrent-neural-networks', 'long-short-term-memory', 'word-embedding', 'gated-recurrent-unit']"," Title: Can One-Hot Vectors be used as Inputs for Recurrent Neural Networks?Body: When using an RNN to encode a sentence, one normally takes each word, passes it through an embedding layer, and then uses the dense embedding as the input into the RNN.
+Let's say that, instead of using dense embeddings, I used a one-hot representation for each word and fed that sequence into the RNN. My question is which of these two outcomes is correct:
+
+- Due to the way in which an RNN combines inputs, since these vectors are all orthogonal, absolutely nothing can be combined, and the entire setup does not make sense.
+
+- The setup does make sense and it will still work, but not be as effective as using a dense embedding.
+
+
+I know I could run an experiment and see what happens, but this is fundamentally a theoretical question, and I would appreciate if someone could clarify so that I have a better understanding of how RNNs combine inputs. I suspect that the answer to this question would be the same regardless of whether we are discussing a vanilla RNN or an LSTM or GRU, but if that is not the case, please explain why.
+Thank you.
+"
+"['reinforcement-learning', 'q-learning', 'hyperparameter-optimization', 'hyper-parameters', 'epsilon-greedy-policy']"," Title: What should the value of epsilon be in the Q-learning?Body: I am trying to understand Reinforcement Learning and already explored different Youtube videos, blog posts, and Wikipedia articles.
+What I don't understand is the impact of $\epsilon$. What value should it take? $0.5$, $0.6$, or $0.7$?
+What does it mean when $\epsilon = 0$ and $\epsilon = 1$? If $\epsilon = 1$, does it mean that the agent explores randomly? If this intuition is right, then it will not learn anything - right? On the other hand, if I set $\epsilon = 0$, does this imply that the agent doesn't explore?
+For a typical problem, what is the recommended value for this parameter?
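+For reference, this is how I currently picture $\epsilon$-greedy action selection, as a minimal Python sketch (my own illustration): with probability $\epsilon$ the agent picks a random action (explores), otherwise it picks the greedy action (exploits).
+import random
+import numpy as np
+
+def epsilon_greedy(Q, state, n_actions, epsilon):
+    if random.random() < epsilon:
+        return random.randrange(n_actions)   # explore: random action
+    return int(np.argmax(Q[state]))          # exploit: current best action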
+"
+"['reinforcement-learning', 'comparison', 'q-learning', 'off-policy-methods', 'on-policy-methods']"," Title: Why does off-policy learning outperform on-policy learning?Body: I am self-studying about Reinforcement Learning using different online resources. I now have a basic understanding of how RL works.
+I saw this in a book:
+
+Q-learning is an off-policy learner. An off-policy learner learns the value of an optimal policy independently of the agent’s actions, as long as it explores enough.
+
+
+An on-policy learner learns the value of the policy being carried out by the agent, including the exploration steps.
+
+However, I am not quite understanding the difference. Secondly, I came across the claim that an off-policy learner works better than an on-policy learner. I don't understand why that would be, i.e. why off-policy would be better than on-policy.
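+To make the distinction concrete for myself, this is how I understand the two kinds of update targets in code (a minimal sketch with toy values of my own; Q is a tabular action-value array, a_next the action the behaviour policy actually takes in the next state):
+import numpy as np
+
+Q = np.zeros((10, 4))                    # toy tabular action values: 10 states, 4 actions
+s_next, a_next, r, gamma = 3, 2, -1.0, 0.9
+
+# Off-policy (Q-learning): bootstrap from the greedy action in s_next
+target_q_learning = r + gamma * np.max(Q[s_next])
+
+# On-policy (SARSA): bootstrap from the action the current (exploring) policy actually takes
+target_sarsa = r + gamma * Q[s_next, a_next]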
+"
+"['computer-vision', 'image-recognition', 'object-detection', 'image-processing', 'face-recognition']"," Title: Face detection and replacement in photosBody: I have 2 photos, and my goal is to detect the face in one and place it on the face of the person in the other photo- basically face detection and replacement. It's not deep fakes. It's more of a computer vision-based approach for smoothing the edges and stitching.
+How can we achieve that? What are the approaches and techniques to do that? Some tutorials or code or github repos would be very helpful.
+
+"
+"['deep-learning', 'computer-vision', 'object-detection', 'anomaly-detection']"," Title: Object Detection as a means of Anomaly DetectionBody: Is it possible to train an Object Detector (e.g. SSD), to detect when something is not in the image. Imagine an assembly line that transports some objects. Each object needs to have 5 screws. If the Object Detector detects 4 screws, we know that one is missing, hence there is an anomaly.
+Actually this is an Anomaly Detection task where there is something else than a screw (e.g. a hole), but unsupervised anomaly detectors are hard to train and not as stable as object detectors.
+Is my assumption correct, that even though it is not really an object detection task, one can use such methods?
+"
+"['python', 'pattern-recognition', 'sequence-modeling', 'data-mining']"," Title: Mining repeated subsequences in a given sequenceBody: Given an alphabet $I=\left\{i_1,i_2,\dots,i_n\right\}$ and a sequence $S=[e_1,e_2,\dots,e_m]$, where items $e_j \in I$, I am interested in finding every single pattern (subsequence of $S$) that appears in $S$ more than $N$ times, and that has a length $L$ between two limits: $m<L<M$. In case of overlapping between patterns, only the longest pattern should be considered.
+For example, given $N=3$, $m=3$ and $M=6$, an alphabet $I=\left\{A,B,C,D,E,F\right\}$, and the following sequence $S$ (the asterisks simply mark the positions of the patterns in this example):
+$$
+S=[A,A,*F,F,A,A*,B,D,E,F,E,D,*F,F,A,A*,F,C,*C,A,B,D,D*,C,C,*C,A,B,D,D*,C,A,C,B,E,A,B,C,*F,F,A,A*,E,A,B,C,A,D,E,*F,F,A,A*,B,C,D,A,E,A,B,*C,A,B,D,D*,]
+$$
+The sought algorithm should be able to return the following patterns:
+$$
+[C,A,B,D,D], [F,F,A,A]
+$$
+together with their respective positions within $S$.
+An available Python implementation would be very desirable (and a parallel implementation, even more so).
+I was reading about BIDE algorithms, but I think this is not the correct approach here. Any ideas? Thanks in advance!
+"
+"['deep-learning', 'comparison', 'deep-neural-networks', 'optimization', 'regularization']"," Title: What are the conceptual differences between regularisation and optimisation in deep neural nets?Body: I'm trying to wrap my mind around the concepts of regularisation and optimisation in neural nets, especially around their differences.
+In my current understanding, regularisation is intended to tackle overfitting whereas optimisation is about convergence.
+However, even though regularisation adds terms to the loss function, both approaches seem to do most of their things during the update phase, i.e. they work directly on how weights are updated.
+If both concepts are focused on updating weights,
+
+- what are the conceptual differences, or why aren't both L2 and Adam, for example, called either optimisers or regularisers?
+
+- Can/should I use them together?
+
+
+"
+"['deep-learning', 'computer-vision', 'keras', 'bayesian-deep-learning', 'uncertainty-quantification']"," Title: Why is my Keras prediction always close to 100% for one image class?Body: I am using Keras (on top of TF 2.3) to train an image classifier. In some cases I have more than two classes, but often there are just two classes (either "good" or "bad"). I am using the tensorflow.keras.applications.VGG16
class as base model with a custom classifier on top, like this:
+input_layer = layers.Input(shape=(self.image_size, self.image_size, 3), name="model_input")
+base_model = VGG16(weights="imagenet", include_top=False, input_tensor=input_layer)
+model_head = base_model.output
+model_head = layers.AveragePooling2D(pool_size=(4, 4))(model_head)
+model_head = layers.Flatten()(model_head)
+model_head = layers.Dense(256, activation="relu")(model_head)
+model_head = layers.Dropout(0.5)(model_head)
+model_head = layers.Dense(len(self.image_classes), activation="softmax")(model_head)
+
+As you can see in the last (output) layer I am using a softmax activation function. Then I compile the whole model with the categorical_crossentropy loss function and train with one-hot-encoded image data (labels).
+All in all, the model performs quite well; I am happy with the results, and I achieve over 99% test and validation accuracy with our data set. There is one thing I don't understand, though:
+When I call predict() on the Keras model and look at the prediction results, these are always either 0 or 1 (or at least very, very close to that, like 0.000001 and 0.999999). So my classifier seems to be quite sure whether an image belongs to either class "good" or "bad" (for example, if I am using only two classes). I was under the assumption, however, that usually these predictions are not that clear, more in terms of "the model thinks with a probability of 80% that this image belongs to class A" - but, as said, in my case it's always 100% sure.
+Any ideas why this might be the case?
+"
+"['machine-learning', 'data-preprocessing', 'feature-extraction', 'mnist']"," Title: Statistical method for selecting features for classificationBody: I'm working on a classifier for the famous MNIST handwritten data set.
+I want to create a few features on my own, and I want to be able to estimate which feature might perform better before actually training the classifier. Let's say that I create a feature which calculates the ratio of ink used between two halves of a digit. Note that by ink I mean how much white is used (which ranges from 0-255 per pixel).
+
+
+For example, I would calculate the ratio between the total amount of white in the left and right halves (separated by the red line). I could also do the same with the top and bottom halves, or separate the digits diagonally. With this I can calculate the mean and standard deviation.
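+For concreteness, this is roughly how I would compute the left/right ink-ratio feature described above for a 28x28 MNIST image (a minimal NumPy sketch with my own function name):
+import numpy as np
+
+def ink_ratio_left_right(img):
+    # img: 2D array of pixel intensities (0-255); returns left-half ink / right-half ink
+    h, w = img.shape
+    left = img[:, : w // 2].sum()
+    right = img[:, w // 2 :].sum()
+    return left / (right + 1e-8)   # small epsilon to avoid division by zero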
+
+I imagine that the left / right ratio might give some differences for other numbers. But the ratios might all be closer to the average.
+Is there some method for estimating which feature might perform better compared to others? I.e., is there a method which gives a numerical value for how "separable" a data set is?
+"
+"['reinforcement-learning', 'q-learning', 'deep-rl', 'importance-sampling']"," Title: How to compute the Retrace target for multi-step off-policy Reinforcement Learning?Body: I am implementing the A3C algorithm and I want to add off-policy training using Retrace but I am having some trouble understanding how to compute the retrace target. Retrace is used in combination with A3C for example in the Reactor.
+I often see the retrace update written as
+\begin{equation}
+\Delta Q(s, a) = \sum_{t' = t}^{T} \gamma^{t'-t}\left(\prod_{j=t+1}^{t'}c_j\right) \delta_{t'}
+\end{equation}
+with $\delta_{t'} = r(s_{t'}, a_{t'}) + \gamma \mathbb{E}[Q(s_{t'+1}, a_{t'+1})] - Q(s_{t'}, a_{t'})$ and $c_j$ being the Retrace factors $c_j = \lambda \min(c, \frac{\pi(a_j|s_j)}{b(a_j|s_j)})$.
+Now, when employing neural networks to approximate $Q_{\theta}(s, a)$ it is often easier to define a loss
+\begin{equation}
+\mathcal{L}_{\theta} = \left(G_t - Q(s, a)\right)^2
+\end{equation}
+and let the backward function and the optimizer do the update. How can I write the Retrace target $G_t$ to use in such a setup?
+Is it correct to write it as follows?
+\begin{equation}
+G_t = \sum_{t'=t}^T \gamma^{t'-t} \left(\prod_{j=t+1}^{t'}c_j\right) (r_{t'} + \gamma Q(s_{t'+1}, a_{t'+1}) - Q(s_{t'}, a_{t'}))
+\end{equation}
+and then compute $\mathcal{L}$ as above, take the gradient $\nabla\mathcal{L}_{\theta}$ and perform the update step $Q(s_t, a_t) = Q(s_t, a_t) + \alpha \nabla\mathcal{L}_{\theta}$ ?
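+For reference, this is one way I could compute such a target with a backward recursion, as a minimal NumPy sketch (the function name, array layout, and the treatment of the terminal step are my own assumptions; the target would be treated as a constant when plugged into the squared loss):
+import numpy as np
+
+def retrace_targets(q, exp_q_next, rewards, c, gamma=0.99):
+    # q[t]          : Q(s_t, a_t) for the action actually taken (treated as constants here)
+    # exp_q_next[t] : E_{a ~ pi}[Q(s_{t+1}, a)], with 0 at a terminal step
+    # rewards[t]    : r(s_t, a_t)
+    # c[t]          : truncated importance weight lambda * min(c_bar, pi(a_t|s_t) / b(a_t|s_t))
+    T = len(rewards)
+    delta = rewards + gamma * exp_q_next - q          # one-step TD errors
+    G = np.zeros(T)
+    acc = 0.0
+    for t in reversed(range(T)):
+        next_c = c[t + 1] if t + 1 < T else 0.0       # no correction beyond the last step
+        acc = delta[t] + gamma * next_c * acc         # backward accumulation of the correction sum
+        G[t] = q[t] + acc                             # Retrace target G_t
+    return G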
+"
+"['reinforcement-learning', 'q-learning', 'state-spaces']"," Title: What constitutes a large space state (in Q-learning)?Body: I know this might be specific to different problems, but does anyone know if there is any rule of thumb or references on what constitutes a large state space?
+I know that, according to multiple papers, tabular Q-learning is not suitable for problems with large state spaces, but I can't find any indication of what would constitute a large state space.
+I've been working on a problem where my state space is about 30x30, and updating my Q-learning with the tabular method runs and works well. Would 100x100 start to become too big, or 400x400?
+"
+"['datasets', 'math', 'definitions']"," Title: Is it okay to think of any dataset in artificial intelligence as a mathematical set?Body: A dataset is a collection of data points. It is known that the data points in the dataset can repeat. And the repetition does matter for building AI models.
+So, why does the word dataset contain the word set? Does it have any relation with the mathematical set, where order and repetition do not matter?
+"
+"['reinforcement-learning', 'q-learning', 'state-spaces']"," Title: Do I need to know in advance all possible number of states in Q-Learning?Body: In Q-learning, is it mandatory to know all possible states that can the agent may end up in?
+I have a network with 4 source nodes, 3 sink nodes, and 4 main links. The initial state is the network status where the sink nodes have their resources at their maximum. In a random manner, I generate service requests from the source nodes to the sink nodes. These service requests are generated at random timesteps, which means that, from state to state, the network status may stay the same.
+When a service request is launched, the resources from the sink nodes change, and the network status changes.
+The aim of the agent is to balance the network by associating each service request to a sink node along with the path.
+I know that in an MDP you are supposed to have a finite set of states. My question is whether that finite set of states is supposed to be all possible states that can happen, or just a number that you consider enough to optimize the Q-table.
+"
+"['autoencoders', 'variational-autoencoder', 'dimensionality-reduction']"," Title: Compressing Parameters of an Response SystemBody: I have an input-output system, which is fully determined by 256 parameters, of which I know a significant amount are of less importance to the input-output pattern.
+The data I have consists of some (64k in total) input-parameter-output matches.
+My goal is to compress these 256 parameters to a smaller scale (like 32) using an encoder of some kind while being able to preserve the response pattern.
+But I can't seem to find a proper network for this particular problem, because I'm not trying to fit these parameters (they all have a mean of one and a variance of 1/4), but rather their influence on the output, so traditional data-specific operations will not work in this case.
+"
+"['deep-learning', 'convolutional-neural-networks', 'comparison', 'geometric-deep-learning', 'graph-neural-networks']"," Title: How can we derive a Convolution Neural Network from a more generic Graph Neural Network?Body: Convolution Neural Network (CNNs) operate over strict grid-like structures ($M \times N \times C$ images), whereas Graph Neural Networks (GNNs) can operate over all-flexible graphs, with an undefined number of neighbors and edges.
+On the face of it, GNNs appear to be neural architectures that can subsume CNNs. Are GNNs really generalized architectures that can compute arbitrary functions over arbitrary graph structures?
+An obvious follow-up: how can we derive a CNN out of a GNN?
+Since non-spectral GNNs are based on message passing that employs permutation-invariant functions, is it possible to derive a CNN from a base GNN architecture?
+"
+"['neural-networks', 'echo-state-network', 'reservoir-computing']"," Title: What do echo state networks give us over a generic RNN resevoir?Body: Slightly generalizing the definition in Jaeger 2001, let's define a reservoir system to be any system of the form
+$$h_{t}=f(h_{t-1}, x_t)$$
+$$y_t=g(Wh_t)$$
+where $f$ and $g$ are fixed and $W$ is a learnable weight matrix. The idea is that we feed a sequence of inputs $x_t$ into the system, which has some fixed initial state $h_0$, and thereby generate the sequence of outputs $y_t$. Since $f$ is fixed (for example, a randomly generated RNN), we can then attempt to learn $W$ in some way in order to get the system to have the behavior that we want.
+Now we add the echo state condition: the system has the echo state condition iff for any left-infinite sequence $...x_{-3}, x_{-2}, x_{-1}, x_0$, there is only one sequence of states $h_t$ consistent with this input sequence.
+Seen from this perspective, any training procedure that could be applied to an echo state system could be applied to a generic reservoir system. So what do we get out of the echo state condition? Is there some reason to think echo state systems will generalize better, or be more quickly trainable? Jaeger does not seem to attempt to argue in this direction, he just describes how to train an ESN, but as I've said, nothing about these training methods seems to require the echo state property.
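+To make the setup concrete, here is a minimal NumPy sketch of the kind of reservoir system I mean (a random $\tanh$ RNN as $f$, identity $g$, and only the readout $W$ fit by ridge regression); the sizes and the spectral-radius rescaling are illustrative choices on my part, not part of the definition:
+import numpy as np
+
+rng = np.random.default_rng(0)
+n_res, n_in, T = 100, 1, 500
+W_in = rng.normal(size=(n_res, n_in))
+W_res = rng.normal(size=(n_res, n_res))
+W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))  # rescaling often used to aim for the echo state property
+
+x = rng.normal(size=(T, n_in))            # input sequence
+y = np.roll(x[:, 0], 1)                   # toy target: the previous input
+
+h = np.zeros(n_res)
+H = np.zeros((T, n_res))
+for t in range(T):
+    h = np.tanh(W_res @ h + W_in @ x[t])  # fixed f
+    H[t] = h
+
+# learn only the linear readout W by ridge regression
+ridge = 1e-6
+W_out = np.linalg.solve(H.T @ H + ridge * np.eye(n_res), H.T @ y)
+y_pred = H @ W_out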
+"
+"['machine-learning', 'deep-learning', 'regularization', 'capsule-neural-network']"," Title: How significant is the decoder part of the capsule network?Body: Capsule Networks use an encoder-decoder structure, where the encoder part consists of the capsule layers (PrimiaryCaps and DigitCaps) and is also the part of the capsule network which performs the actual classification. On the other hand, the decoder attempts to reconstruct the original image from the output of the correct DigitCap. The Decoder in the Capsule Network is used as the regularizer of the network as it helps the network learn better features.
+I can see how the decoder is helpful for datasets such as MNIST, where all image classes have clear differences and the input size of the image is quite small. However, if the input has large dimensions and the differences between image classes are quite small, I see the decoder network as overkill, as it will find it hard to reconstruct images for different classes.
+In my case, my dataset consists of 3D MRI images of patients which have Alzheimer's Disease and those who do not. I am down-sampling the images and producing 8 3D patches which will be used as input to the network. The patches still have high dimensions considering that these are 3D, and there are not many clear differences between patches of the two image classes.
+My questions here are:
+
+- How significant is the decoder part of the capsule network? CNNs that perform image classification, usually do not have a decoder part. Why does the capsule network rely on the decoder to learn better features?
+
+- Are there any alternatives to the decoder within the capsule network, acting as a regularizer? Can the decoder be ignored completely?
+
+
+"
+"['game-ai', 'minimax', 'game-theory', 'alpha-beta-pruning', 'checkers']"," Title: What is a good way of identifying volatile positions for a checkers game?Body: I am implementing an AI for a mobile checkers game, and have used alpha-beta pruning with Minimax.
+Now I have the problem of the horizon effect, and need to do quiescence search to avoid it.
+Any advice on what makes a position volatile for a checkers game?
+I want to consider as volatile the positions where the player can capture a piece, and also where any of the player's pieces can be captured by the opponent, and continue searching to a further depth in those cases.
+Anything else?
+"
+"['python', 'tensorflow', 'keras', 'autoencoders', 'bottlenecks']"," Title: Autoencoder: predictions missing for nodes in the bottleneck layerBody: I'm using tf.Keras to build a deep-fully connected autoencoder. My input dataset is a dataframe with shape (19947,), and the purpose of the autoencoder is to predict normalized gene expression values. They are continuous values that range from [0,~620000].
+I tried different architectures and I'm using relu activation for all layers. To optimize I'm using adam with mae loss.
+The problem I have is the network trains successfully (although the train loss is still terrible) but when I'm predicting I notice that although the predictions do make sense for some nodes, there are always a certain number of nodes that only output 0. I've tried changing the number of nodes of my bottleneck layer (output) and it always happens even when I decrease the output number.
+Any ideas on what I'm doing wrong?
+tf.Keras code:
+from tensorflow import keras
+
+input_layer = keras.Input(shape=(19947,))
+simple_encoder = keras.models.Sequential([
+ input_layer,
+ keras.layers.Dense(512, activation='relu'),
+ keras.layers.Dense(128, activation='relu'),
+ keras.layers.Dense(16, activation='relu')
+])
+simple_decoder = keras.models.Sequential([
+ keras.layers.Dense(128, activation='relu'),
+ keras.layers.Dense(512, activation='relu'),
+ keras.layers.Dense(19947, activation='relu')
+])
+simple_ae = keras.models.Sequential([simple_encoder, simple_decoder])
+simple_ae.compile(optimizer='adam', loss='mae')
+simple_ae.fit(X_train, X_train,
+ epochs=1000,
+ validation_data=(X_valid, X_valid),
+ callbacks=[early_stopping])
+
+Output of encoder.predict with 16 nodes on the bottleneck layer. 7 nodes predict only 0's and 8 nodes predict "correctly"
+
+"
+"['reinforcement-learning', 'q-learning']"," Title: Why is the policy implied by Q-learning deterministic, when it always chooses the action with highest probability?Body: Q-learning uses the maximizing value at each step, which implies that there is a probability distribution and it happens to choose the one with the highest probability. There is no direct mapping between a particular state to ONLY a particular action but a bunch of actions with varying probabilities. I don't understand.
+"
+"['neural-networks', 'machine-learning', 'convolutional-neural-networks', 'classification', 'object-recognition']"," Title: Image classification - Need method to classify ""unknown"" objects as ""trash"" (3D objects)Body: We have an image classifier that was built using CNN with faster R-CNN and Yolov5.
+It is designed to run on 3D objects.
+All of those objects have a similar "features" structure, but the actual features of each object class are somewhat different from one another. Therefore, we strive to detect the classes based on those differences in features.
+In theory there are thousands of different classes, but for now we have trained the model to detect 4 types of classes, by training it on datasets that include many images from different angles for each of those 4 classes (1,000 images each).
+The main problem we face is that whenever the model runs on an "unknown" object, it may still classify it as one of our 4 classes, and sometimes it will do it with a high probability score (0.95), which undermines the credibility of our model's results.
+We think it might be because we are using softmax, which seems to force the model to assign an unknown object to one of the 4 classes.
+We want to know what would be the best way to overcome this issue.
+We tried adding a new, fifth "trash" class, with 1,000 images of "other" objects that do not belong to our four classes, but it significantly reduced the confidence level for our test images, so we are not sure whether this is progress at all.
+"
+"['convolutional-neural-networks', 'brain', 'neuroscience', 'neocognitron']"," Title: Are convolutional neural networks inspired by the human brain?Body: The Deep Learning book by Goodfellow et al. states
+
+Convolutional networks stand out as an example of neuroscientific principles influencing deep learning.
+
+Are convolutional neural networks (CNNs) really inspired by the human brain?
+If so, how? In particular, in what structures within the brain do CNN-like neuron groupings occur?
+"
+"['machine-learning', 'papers', 'objective-functions', 'math']"," Title: How did the variance and double summation of the covariance come to the L2 minimization equation?Body: I am trying to understand the last two lines of this math notation (from this paper).
+
+How did the Var and double summation of the Cov come to the equation?
+I understood the first two lines as something like $(a-b)^2 = a^2 - 2ab + b^2$.
+"
+"['reinforcement-learning', 'papers', 'trust-region-policy-optimization', 'action-spaces']"," Title: Why does each component of the tuple that represents an action have a categorical distribution in the TRPO paper?Body: I was going through the TRPO paper, and there was a line under Appendix D "Approximating Factored Policies with Neural Networks" in the last paragraph which I am unable to understand
+
+The action consists of a tuple $(a_1, a_2..... , a_K)$ of integers $a_k\in\{1, 2,......,N_k\} $ and each of these components is assumed to have a categorical distribution.
+
+I can't seem to get how each component has a categorical distribution. I think it should be the tuple that has a categorical distribution.
+I think I am getting something wrong.
+"
+"['reinforcement-learning', 'definitions', 'markov-decision-process', 'return']"," Title: Why is it useful to define the return as the sum of the rewards from time $t$ onward rather than up to $t$?Body: Why is it useful to define the return as the sum of the rewards from time $t$ onward rather than up to $t$?
+The return for an MDP is usually defined as
+$$G_t=R_{t+1}+R_{t+2}+ \dots +R_T$$
+Why is this defined as the return? Is there anything useful about this?
+It seems like it's more useful to define the return as $$G_t=R_0+ \dots+R_t,$$ because your "return", so to speak, is the "profit from investment" so it seems like your return will be your accumulated reward from taking actions up to that point.
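+For a tiny worked example (my own toy numbers): with rewards $R_1=1$, $R_2=2$, $R_3=3$ and $T=3$, the forward-looking definition gives $G_0 = 1+2+3 = 6$, $G_1 = 2+3 = 5$ and $G_2 = 3$, i.e. $G_t$ at each step is the reward still to come rather than the reward already collected.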
+"
+"['reinforcement-learning', 'hyper-parameters', 'return', 'discount-factor']"," Title: For episodic tasks with an absorbing state, why can't we both have $\gamma=1$ and $T= \infty$ in the definition of the return?Body: For episodic tasks with an absorbing state, why can't $\gamma=1$ and $T= \infty$?
+In Sutton and Barto's book, they say that, for episodic tasks with absorbing states that become an infinite sequence, the return is defined by:
+$$G_t=\sum_{k=t+1}^{T}\gamma^{k-t-1}R_k$$
+This allows the return to be the same whether the sum is over the first $T$ rewards, where $T$ is the time of termination, or over the full infinite sequence, with either $T=\infty$ or $\gamma=1$ (but not both).
+Why can't we have both? I don't see why both cannot be set to those values. It seems like, if you have an absorbing state, the rewards from the terminal state onward will just be 0 and not be affected by $\gamma$ or $T$.
+Here's the full section of the book on page 57 in the 2nd edition
+
+I think the reasoning behind this also leads to why for policy evaluation where
+$$v_\pi(s)=\sum_a\pi(a|s)\sum_{s',r}p(s',r|s,a)[r+\gamma v_\pi(s')]$$
+"Has an existence and uniqueness guarantee only if $\gamma < 1$ or termination is guaranteed under $\pi$"(page 74). This part I'm also a bit confused by, but seems related.
+"
+"['neural-networks', 'deep-learning', 'feedforward-neural-networks', 'weights', 'weights-initialization']"," Title: Why should variance(output) equal variance(input) in Xavier Initialisation?Body: In a lot of explanations online for Xavier Initialization, I see the following:
+
+With each passing layer, we want the variance to remain the same. This helps us keep the signal from exploding to a high value or vanishing to zero. In other words, we need to initialize the weights in such a way that the variance remains the same for x and y. This initialization process is known as Xavier initialization.
+
+Source https://prateekvjoshi.com/2016/03/29/understanding-xavier-initialization-in-deep-neural-networks/
+However, the intuition behind why var(output) should equal var(inputs) is never explained. Does anyone know why intuitively var(output) should equal var(inputs)?
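+A small NumPy experiment (my own sketch, with arbitrary layer sizes) shows the effect the quote is describing: propagating a unit-variance input through a stack of linear layers blows the variance up unless the weights are scaled by $1/\sqrt{n_{in}}$.
+import numpy as np
+
+rng = np.random.default_rng(0)
+n = 512
+x = rng.normal(size=(1000, n))                    # unit-variance input
+for scale, name in [(1.0, "unscaled"), (1.0 / np.sqrt(n), "1/sqrt(n_in)")]:
+    h = x.copy()
+    for _ in range(10):                           # 10 linear layers, no nonlinearity
+        h = h @ (rng.normal(size=(n, n)) * scale)
+    print(name, "variance after 10 layers:", h.var())
+# the unscaled weights multiply the variance by roughly n per layer,
+# while the 1/sqrt(n_in) scaling keeps it near 1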
+"
+"['neural-networks', 'machine-learning', 'reinforcement-learning']"," Title: Looking for a good approach for building an automated director for a racing game spectator modeBody: I'm building a tool that should assist a director to broadcast a racing game. I want this tool to suggest the human director which car to focus on and with which camera (among the available ones). I can access quite a lot of data about the current race so I can extrapolate some parameters(like car positions, how many cars near to each other there are, how close they are, last time the camera was switched etc) to be used in the decision making process. I would like the AI to learn from the human director in order to suggest him according to his "direction style".
+My idea is to split the problem into 2 sub-problems: the first is the choice of the car to focus on, and the second is the choice of the camera to use (or cameras, since it is fairly common to switch cameras while following the same car). My plan was to use some sort of Q-learning, rewarding the AI whenever one of the generated suggestions is chosen by the director, but I guess it would be really difficult to define a set of states, and moreover it would probably take ages before it started to give some useful suggestions.
+Are there some other good approaches I could consider? I'm also thinking about using a neural network so maybe the learning process would be faster.
+"
+"['tensorflow', 'keras', 'actor-critic-methods']"," Title: Advantage Actor Critic model implementation with TensorflowjsBody: I am trying to implement an Actor Critic method that controls an RC car. For this I have implemented a simulated environment and actor critic tensorflowjs models.
+My intention is to train a model to navigate an environment without colliding with various obstacles.
+For this I have the following:
+State(continuous):
+
+- the sensors distance(left, middle, right): [0..1,0..1,0..1]
+
+Action(discrete):
+
+- 4 possible actions(move forward, move back, turn left, turn right)
+
+Reward(cumulative):
+
+- moving forward is encouraged
+- being close to an obstacle is penalized
+- colliding with an obstacle is penalized
+
+The structure of the models:
+buildActor() {
+ const model = tf.sequential();
+ model.add(tf.layers.inputLayer({inputShape: [this.stateSize],}));
+
+ model.add(tf.layers.dense({
+ units: parseInt(this.config.hiddenUnits),
+ activation: 'relu',
+ kernelInitializer: 'glorotUniform',
+ }));
+
+ model.add(tf.layers.dense({
+ units: parseInt(this.config.hiddenUnits/2),
+ activation: 'relu',
+ kernelInitializer: 'glorotUniform',
+ }));
+
+ model.add(tf.layers.dense({
+ units: this.actionSize,
+ activation: 'softmax',
+ kernelInitializer: 'glorotUniform',
+ }));
+
+ this.compile(model, this.actorLearningRate);
+
+ return model;
+ }
+
+buildCritic() {
+ const model = tf.sequential();
+
+ model.add(tf.layers.inputLayer({inputShape: [this.stateSize],}));
+
+ model.add(tf.layers.dense({
+ units: parseInt(this.config.hiddenUnits),
+ activation: 'relu',
+ kernelInitializer: 'glorotUniform',
+ }));
+
+ model.add(tf.layers.dense({
+ units: parseInt(this.config.hiddenUnits/2),
+ activation: 'relu',
+ kernelInitializer: 'glorotUniform',
+ }));
+
+ model.add(tf.layers.dense({
+ units: this.valueSize,
+ activation: 'linear',
+ kernelInitializer: 'glorotUniform',
+ }));
+
+ this.compile(model, this.criticLearningRate);
+
+ return model;
+ }
+
+The models are compiled with an adam optimized and huber loss:
+compile(model, learningRate) {
+ model.compile({
+ optimizer: tf.train.adam(learningRate),
+ loss: tf.losses.huberLoss,
+ });
+ }
+
+Training:
+trainModel(state, action, reward, nextState) {
+ let advantages = new Array(this.actionSize).fill(0);
+
+ let normalizedState = normalizer.normalizeFeatures(state);
+ let tfState = tf.tensor2d(normalizedState, [1, state.length]);
+ let normalizedNextState = normalizer.normalizeFeatures(nextState);
+ let tfNextState = tf.tensor2d(normalizedNextState, [1, nextState.length]);
+
+ let predictedCurrentStateValue = this.critic.predict(tfState).dataSync();
+ let predictedNextStateValue = this.critic.predict(tfNextState).dataSync();
+
+ let target = reward + this.discountFactor * predictedNextStateValue;
+ let advantage = target - predictedCurrentStateValue;
+ advantages[action] = advantage;
+ // console.log(normalizedState, normalizedNextState, action, target, advantages);
+
+ this.actor.fit(tfState, tf.tensor([advantages]), {
+ epochs: 1,
+ }).then(info => {
+ this.latestActorLoss = info.history.loss[0];
+ this.actorLosses.push(this.latestActorLoss);
+ }
+ );
+
+ this.critic.fit(tfState, tf.tensor([target]), {
+ epochs: 1,
+ }).then(info => {
+ this.latestCriticLoss = info.history.loss[0];
+ this.criticLosses.push(this.latestCriticLoss);
+ }
+ );
+
+ this.advantages.push(advantage);
+ pushToEvolutionChart(this.epoch, this.latestActorLoss, this.latestCriticLoss, advantage);
+ this.epoch++;
+ }
+
+You can give the simulation a spin at https://sergiuionescu.github.io/esp32-auto-car/sym/sym.html .
+I found that some behaviors are being picked up - the model learns to prioritize moving forward after a few episodes, but then hits the wall and it reprioritizes spinning - but seems to completely 'forget' that moving forward was ever prioritized.
+I've been trying to follow https://keras.io/examples/rl/actor_critic_cartpole/ to a certain extent, but have not found an equivalent of the way back-propagation is handled there - GradientTape.
+Is it possible to perform training similar to the Keras example in Tensorflowjs?
+The theory I've gone through on Actor-Critic mentions that the Critic should estimate the reward yet to be obtained over the rest of the episode, but I am training the critic with:
+reward + this.discountFactor * predictedNextStateValue where reward is the cumulative reward up to the current step.
+Should i keep track of a maximum total reward in previous episodes and subtract my reward from that instead?
+When I am training the actor, I am generating a zero-filled advantages tensor:
+let advantages = new Array(this.actionSize).fill(0);
+let target = reward + this.discountFactor * predictedNextStateValue;
+let advantage = target - predictedCurrentStateValue;
+advantages[action] = advantage;
+
+All actions other than the one taken will receive a 0 advantage. Could this discourage previous actions that were proven beneficial?
+Should I average out the advantages per state and action?
+Thanks for having the patience to go through all of this.
+"
+"['deep-learning', 'objective-functions', 'papers', 'activation-functions']"," Title: What is the intuition behind equations 10, 11 and 12 of the paper ""Noise2Noise: Learning Image Restoration without Clean Data""?Body: Can anyone help me understand these functions described in the paper Noise2Noise: Learning Image Restoration without Clean Data
+I have read portion A.4 in the appendix, but I need a more detailed and easier to understand explanation, especially of where the signum or sign function (sgn) comes from, along with other explanations.
+(Equations (10), (11) and (12) of the paper.)
+
+"
+"['machine-learning', 'data-preprocessing', 'cross-validation']"," Title: How to fill NaNs in Cross-Validation?Body: I have been searching this but did not find the answer, so sorry if this is a duplicated question.
+I was working with cross-validation, where some doubts came to my mind, and I am not sure which is the correct answer.
+Let's say I have a mixed dataset, with numerical and categorical features. I want to perform a K-Fold Cross-Validation with it, with a K=10. Some of these numerical features are missing, so I decided that I will replace those NaNs with the average of that feature.
+My steps are the following ones:
+
+- Read the entire dataset
+- Perform One Hot Encoding to categorical features.
+- Divide my data into different folds. Let's say that I will use 90% for training, 10% for validating.
+- For every different combination of folds, I replace the missing values of the training and validating sets separately. This means that, on one hand, I use the average computed from the training part to fill its missing values, and on the other hand the average computed from the validating part to fill its missing values.
+- Normalize the data of the training and validating sets between [0, 1] separately, as I did before.
+- Train the correspondant model.
+
+So let's put a simple example of a dataset of 20 rows with N columns. Once I do steps 1 and 2, in the first iteration I will select the 18 first rows as a training set, and the last two rows as validating set.
+I fill the missing values of the 9 first rows with the average of those 18 rows. Then the same for the 2 last rows.
+Then, again, normalize in the same way, separately. And do this for every combination of folds.
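+A sketch of the procedure I have in mind, using scikit-learn (KFold, SimpleImputer and MinMaxScaler are just the tools I would reach for; the point is only that each part is imputed and normalized separately, as described above):
+import numpy as np
+from sklearn.model_selection import KFold
+from sklearn.impute import SimpleImputer
+from sklearn.preprocessing import MinMaxScaler
+
+rng = np.random.default_rng(0)
+X = rng.normal(size=(20, 3))
+X[rng.random(X.shape) < 0.1] = np.nan                 # sprinkle some missing values
+
+for train_idx, val_idx in KFold(n_splits=10).split(X):
+    X_train, X_val = X[train_idx], X[val_idx]
+    # fill NaNs and normalize each part separately
+    X_train = MinMaxScaler().fit_transform(SimpleImputer(strategy='mean').fit_transform(X_train))
+    X_val = MinMaxScaler().fit_transform(SimpleImputer(strategy='mean').fit_transform(X_val))
+    # ... train the model on X_train and evaluate on X_val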
+I am doing it like this because otherwise, from my understanding, you are training your model with biased data. You should not have access to the validation data, thus you should not be able to compute the average with those numbers. Hence I am using only the numbers of the training part. If I do the average with the entire dataset, this will make my model overfit.
+I am not so sure about the normalization step, as I do not really think this will have the same impact. But here I do not really know...
+Is this approach correct? Or should I do the average and normalization with the entire dataset? Why?
+"
+"['reinforcement-learning', 'monte-carlo-methods', 'sutton-barto']"," Title: Suppose every-visit MC was used instead of first-visit MC on blackjack. Would you expect the results to be different?Body: This is a question from page 94 of Sutton and Barto's RL book 2020.
+I read in someone's compiled GitHub answers to this book's exercises their answer was: "No because each state in an episode of blackjack is unique."
+I think my answer is more yes, but I'm thinking in terms of casino blackjack, where they have multiple decks shuffled together and add in the dropped cards back into the deck every X games in order to prevent card counting and 1 game can be seen as an episode. I think in this case that first-visit MC and every-visit MC would have drastically different results, given that, at the start of the new episode, the state of the deck, which is only partially observed, will change the value of taking an action given a state (because I believe the cards left in the deck affect the value of an action, but the deck is not totally observable).
+If this is blackjack, where the discarded cards are added back in and shuffled every episode, I'll agree that it shouldn't make a difference.
+Are there any flaws in my conjecture?
+"
+"['reinforcement-learning', 'q-learning', 'state-spaces', 'observation-spaces']"," Title: What happens when the agent faces a state that never before encountered?Body: I have a network with nodes and links, each of them with a certain amount of resources (that can take discrete values) at the initial state. At random time steps, a service is generated, and, based on the agent's action, the network status changes, reducing some of those nodes and links resources.
+The number of all possible states that the network can have is too large to calculate, especially since there is the random factor when generating the services.
+Let's say that I set the state space large enough (for example, 5000), and I use Q-Learning for 1000 episodes. Afterwards, when I test the agent ($\max Q(s,a)$), what could happen if the agent faces a state that it did not encounter during the training phase?
+"
+"['neural-networks', 'convolutional-neural-networks', 'computer-vision', 'image-recognition', 'image-processing']"," Title: Training a model to identify certain differences between images?Body: Newbie to CV here so sorry of this is basic. Here's the deal, I have a program that I run many times. and each run I produce a screenshot. I need to compare screenshots from N-1 and N runs and make sure they aren't different in any dramatic way. Of course there are some minor changes like logos and pictures getting updated, etc.
+SO far I've used something as simple as absdiff from opencv to highlight the difference regions and then use some sort of threshold to determine whether something passes or not. But I want to make it slightly intelligent but I'm not 100% sure how to proceed. Google hasn't yielded ghe best answers.
+Essentially, I want to train the model on many different pairs of images and have the output be binary, yes or no depending on whether it should pass or not. In theory, I should be able to plug in 2 images and based on previous training, it should be able to tell me whether there is significant difference or not. What are some ways I might approach this, particularly with regards to what kinds of models to use.
+The requirements here might seem amorphous but that's kinda the nature of the problem. the differences could be, in theory, anything. I am hoping that there will be patterns between different images and that a model would pick up on that. Things like the name of a document is 045 instead of 056 or a logo is slightly updated.
+"
+"['reinforcement-learning', 'epsilon-greedy-policy']"," Title: Should my agent be taking varying number of steps?Body: My environment is set up so that my self-driving agent can take maximum of 400 steps (which is the end goal) before it resets with a completion reward. Despite attaining the end goal during the $\epsilon$-greedy stage, it still kills/crashes itself in subsequent episodes.
+I would like to know if this common in RL (D3QN) scenarios.
+A graph showing episodes vs steps has been placed below.
+
+As one can see, the agent reaches 400 steps in episode 1000. But, in the subsequent episode, it falls down below 50 steps.
+"
+"['reinforcement-learning', 'monte-carlo-tree-search', 'alphazero', 'alphago-zero', 'alphago']"," Title: AlphaGo Zero: does $Q(s_t, a)$ dominate $U(s_t, a)$ in difficult game states?Body: AlphaGo Zero
+AlphaGo Zero uses a Monte-Carlo Tree Search where the selection phase is governed by $\operatorname*{argmax}\limits_a\left( Q(s_t, a) + U(s_t, a) \right)$, where:
+
+- the exploitation parameter is $Q(s_t, a) = \displaystyle \frac{\displaystyle \sum_{v_i \in (s_t, a)} v_i}{N(s_t, a)}$ (i.e. the mean of the values $v_i$ of all simulations that passes through edge $(s_t, a)$)
+- the exploration parameter is $U(s_t, a) = c_{puct} P(s_t,a) \frac{\sqrt{\sum_b N(s_t, b)}}{1 + N(s_t, a)}$ (i.e. the prior probability $P(s_t, a)$, weighted by the constant $c_{puct}$, the number of simulations that passes through $(s_t, a)$, as well as the number of simulations that passes through $s_t$).
+
+The prior probability $P(s_t, a)$ and simulation value $v_i$ are both outputted by the deep neural network $f_{\theta}(s_t)$:
+
+This neural network takes as an input the raw board representation s of the position and its history, and outputs both move probabilities and a value, (p, v) = fθ(s). The vector of move probabilities p represents the probability of selecting each move a (including pass), pa = Pr(a| s). The value v is a scalar evaluation, estimating the probability of the current player winning from position s.
+
+My confusion
+My confusion is that $P(s_t, a)$ and $v_i$ are probabilities normalized to different distributions, resulting in $v_i$ being about 80x larger than $P(s_t,a)$ on average.
+The neural network outputs $(p, v)$, where $p$ is a probability vector given $s_t$, normalized over all possible actions in that turn. $p_a = P(s_t, a)$ is the probability of choosing action $a$ given state $s_t$. A game of Go has about 250 moves per turn, so on average each move has probability $\frac{1}{250}$, i.e. $\mathbb{E}\left[ P(s_t, a) \right] = \frac{1}{250}$
+On the other hand, $v$ is the probability of winning given state $s_t$, normalized over all possible end-game conditions (win/tie/lose). For simplicity's sake, let us assume $\mathbb{E} \left[ v_i \right] \ge \frac{1}{3}$, as would be the case if the game were played randomly and each outcome were equally likely.
+This means that the expected value of $v_i$ is at least 80x larger than the expected value of $P(s_t, a)$. The consequence of this is that $Q(s_t, a)$ is at least 80x larger than $U(s_t, a)$ on average.
+If the above is true, then the selection stage will be dominated by the $Q(s_t, a)$ term, so AlphaGo Zero should tend to avoid edges with no simulations in them (edges where $Q(s_t, a) = 0$) unless all existing $Q(s_t, a)$ terms are extremely small ($< \frac{1}{250}$), or the MCTS has so much simulations in them that the $\frac{\sqrt{\sum_b N(s_t, b)}}{1 + N(s_t, a)}$ term in $U(s_t, a)$ evens out the magnitudes of the two terms. The latter is not likely to happen since I believe AlphaGo Zero only uses $1,600$ simluations per move, so $\sqrt{\sum_b N(s_t, b)}$ caps out at $40$.
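+To illustrate the magnitudes, this is the back-of-the-envelope computation behind the claim above (all numbers, including $c_{puct} = 1$, are my own illustrative assumptions):
+import math
+
+c_puct = 1.0                      # illustrative value
+P = 1.0 / 250                     # average prior over ~250 legal moves
+Q = 1.0 / 3                       # assumed average simulation value
+for n_parent in (1, 100, 1600):   # number of simulations through the parent node
+    U_unvisited = c_puct * P * math.sqrt(n_parent) / (1 + 0)
+    print(n_parent, Q, U_unvisited, Q / U_unvisited)
+# with these numbers Q is roughly 83x, 8x and 2x larger than U for an unvisited edge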
+Selecting only viable moves
+Ideally, MCTS shouldn't select every possible move to explore. It should only select viable moves given state $s_t$, and ignore all the bad moves. Let $m_t$ be the number of viable moves for state $s_t$, and let $P(s_t, a)$ = 0 for all moves $a$ that are not viable. Also, let's assume the MCTS never selects a move that is not viable.
+Then the issue in the previous section is partly alleviated, because now $\mathbb{E} \left[ P(s_t, a) \right] = \frac{1}{m_t}$. As a result, $Q(s_t, a)$ should only be $\frac{m_t}{3}$ times larger than $U(s_t, a)$ on average. Assuming $m_t \le 6$, there shouldn't be too much of an issue.
+However, this means that AlphaGo Zero works ideally only when the number of viable moves is small. In a game state $s_t$ where there are many viable moves ($>30$) (e.g. a difficult turn with many possible choices), the selection phase of the MCTS will deteriorate as described in the previous section.
+Questions
+I guess my questions are:
+
+- Is my understanding correct, or have I made mistake(s) somewhere?
+- Does $Q(s_t, a)$ usually dominate $U(s_t, a)$ by this much in practice when the game state has many viable moves? Is the selection phase usually dominated by $Q(s_t, a)$ during these game states?
+- Does the fact that $Q(s_t, a)$ and $U(s_t, a)$ being in such different orders of magnitude (when the game state has many viable moves) affect the quality of the MCTS algorithm, or is MCTS robust to this effect and still produces high quality policies?
+- How common is it for a game state to have many viable moves (>30) in Go?
+
+"
+"['neural-networks', 'deep-learning', 'reference-request', 'feature-selection', 'features']"," Title: What is the impact of the number of features on the prediction power of a neural network?Body: What is the impact of the number of features on the prediction power of an ANN model (in general)? Does an increase in the number of features mean a more powerful prediction model (for approximation purpose)?
+I'm asking these questions because I am wondering if there is any benefit in using two variables (rather than one) to predict one output.
+If there is a scientific paper that answers my question, I would appreciate a reference.
+"
+"['computer-vision', 'image-recognition', 'transfer-learning', 'self-supervised-learning', 'pretext-tasks']"," Title: Is it possible to use self-supervised learning on different images for the pretext and downstream tasks?Body: I have just come across the idea of self-supervised learning. It seems that it is possible to get higher accuracies on downstream tasks when the network is trained on pretext tasks.
+Suppose that I want to do image classification on my own set of images. I have limited data on these images and maybe I can use self-supervised learning to achieve better accuracies on these limited data.
+Let's say that I try to train a neural network on a pretext task of predicting the patch position relative to the center patch on different images that are readily available in quantity, such as cats, dogs, etc.
+If I initialise the weights of my neural network from the pretext task, and then do image classification on my own images, which are vastly different from the images used in the pretext task, would self-supervised learning still work, given that the images for the pretext and downstream tasks are different?
+TLDR: Must the images used in the pretext task and the downstream tasks be the same?
+"
+"['neural-networks', 'reinforcement-learning', 'actor-critic-methods']"," Title: What difference does it make whether Actor and Critic share the same network or not?Body: I'm learning about Actor-Critic reinforcement learning algorithms. One source I encountered mentioned that Actor and Critic can either share one network (but use different output layers) or they can use two completely separate networks. In this video he mentions that using two separate networks works for simpler problems, such as Mountain Car. However, more complex problems like Lunar Lander works better with a shared network. Why is that? Could you explain what difference that choosing one design over another would that make?
+"
+"['reinforcement-learning', 'q-learning', 'convergence', 'learning-rate', 'dyna']"," Title: If $\alpha$ decreases over time, why is Q-learning guaranteed to converge?Body: Q-Learning is guaranteed to converge if $\alpha$ decreases over time.
+On page 161 of the RL book by Sutton and Barto, 2nd edition, section 8.1, they write that Dyna-Q is guaranteed to converge if each action-state pair is selected an infinite number of times and if $\alpha$ decreases appropriately over time.
+It seems that it would be better if $\alpha$ increased over time: it is the learning rate applied to the TD error ($R+\gamma\max_aQ(S',a)-Q(S,A)$), and, initially, the Q-values start off incredibly inaccurate because they are initialized arbitrarily, and only over time converge to the true values. So wouldn't you want to weight the updates more as time increases, rather than less?
+Why is this a convergence criterion?
+"
+"['deep-learning', 'tensorflow', 'keras', 'deep-rl', 'deep-neural-networks']"," Title: DQN Agent with a 2D matrix as input in KerasBody: I have a Reinforcement Learning environment where the state is a 2D matrix with 0s and 1s (only one column with the value of 1 in each row).
+Example:
+(
+ (0, 1, 0),
+ (0, 0, 1),
+ (1, 0, 0),
+ (0, 0, 0),
+ (0, 1, 0)
+)
+
+The action the agent must take is: for each row in the input, choose one resource out of the 12 resources the agent has if there is a column with the value of 1 in that row, else choose no resource if the row has 0s only (for example, row[3] wouldn't have any resources chosen for it by the agent). The rows correspond to the users the agent must allocate resources to.
+In the step() method of the RL environment, the agent receives a reward or a penalty depending on the action. If the reward is positive, the agent updates the state matrix, putting a 0 instead of 1 in the rows corresponding to the users that were allocated resources, which should be the next state. If the reward is negative, the episode ends, the environment resets, and a new state is received by the agent.
+My understanding is that, in a deep learning approach, the DQN agent would receive a 2D matrix of 0s and 1s as input to its neural network (the state matrix), and output a vector with the chosen resources for each row of the input.
+The network must choose a resource out of 12 resources for each row if that row has a 1 in it, and no resource is chosen if there is no column with the value of 1 in that row of the input. In other words, the network must choose an element out of 12 and output a vector with the chosen elements, depending on the input matrix.
+Is there a way to do this using Deep Q-Learning and neural networks ?
+"
+"['neural-networks', 'deep-learning', 'deepmind', 'quantum-computing', 'alpha-fold']"," Title: Is AlphaFold just making a good estimate of the protein structure?Body: In the news, DeepMind's AlphaFold is said to have solved the protein folding problem using neural networks, but isn't this a problem only optimised quantum computers can solve?
+To my limited understanding, the issue is that there are too many variables (atomic forces) to consider when simulating how an amino acid chain would fold, in which case only a quantum computer can be used to simulate it.
+Is the neural network just making a very good estimate, or is it simulating the actual protein structure?
+"
+"['neural-networks', 'tensorflow', 'hidden-layers', 'algorithmic-bias', 'dense-layers']"," Title: What's the purpose of layers without biases?Body: I noticed that the TensorFlow library includes a use_bias
parameter for the Dense
layer, which is set to True
by default, but allows you to disable it. At first glance, it seems unfavorable to turn off the biases, as this may negatively affect data fitting and prediction.
+What is the purpose of layers without biases?
+"
+"['machine-learning', 'alpha-fold']"," Title: Can AlphaFold predict proteins with metals well?Body: There are certain proteins that contain metal components, known as metalloproteins. Commonly, the metal is at the active site which needs the most prediction precision. Typically, there is only one (or a few) metals in a protein, which contains far more other atoms. So, the structural data that we could be used to train AlphaFold will contain far less information about the metal elements. Not to mention most proteins don't have metals at all (it is estimated that only 1/2-1/4 of all proteins contain metals [1]).
+Given that maybe there is not enough structural data about protein local structure around metal atoms (e.g. Fe, Zn, Mg, etc.), then AlphaFold cannot predict local structure around metals well. Is that right?
+I also think that the more complex electron shell of metal also makes the data less useful, since its bounding pattern is more flexible than carbon, etc.
+"
+"['natural-language-processing', 'transformer', 'bert', 'attention']"," Title: Transformers: how does the decoder final layer output the desired token?Body: In the paper Attention Is All You Need, this section confuses me:
+
+In our model, we share the same weight matrix between the two embedding layers [in the encoding section] and the pre-softmax linear transformation [output of the decoding section]
+
+Shouldn't the weights be different, and not the same? Here is my understanding:
+For simplicity, let us use the English-to-French translation task where we have $n^e$ number of English words in our dictionary and $n^f$ number of French words.
+
+- In the encoding layer, the input tokens are $1$ x $n^e$ one-hot vectors, and are embedded with a $n^e$ x $d^{model}$ learned embedding matrix.
+
+- In the output of the decoding layer, the final step is a linear transformation with weight matrix $d^{model}$ x $n^f$, and then applying softmax to get the probability of each french word, and choosing the french word with the highest probability.
+
+
+How is it that the $n^e$ x $d^{model}$ input embedding matrix shares the same weights as the $d^{model}$ x $n^f$ decoding output linear matrix? To me, it seems more natural for both of these matrices to be learned independently of each other from the training data. Or am I misinterpreting the paper?
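+For reference, here is how I picture the sharing mechanically, as a minimal NumPy sketch under the assumption of a single shared vocabulary of size $n$ (which is exactly the part that confuses me, since I have been assuming separate English and French vocabularies):
+import numpy as np
+
+rng = np.random.default_rng(0)
+n_vocab, d_model = 1000, 16                   # illustrative sizes
+E = rng.normal(size=(n_vocab, d_model))       # the single shared weight matrix
+
+embedding = E[42]                             # input embedding lookup for token id 42
+
+decoder_output = rng.normal(size=(d_model,))
+logits = decoder_output @ E.T                 # pre-softmax projection reuses E, transposed
+probs = np.exp(logits - logits.max())
+probs /= probs.sum()                          # softmax over the vocabulary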
+"
+"['natural-language-processing', 'transformer', 'bert', 'attention']"," Title: Transformers: how to get the output (keys and values) of the encoder?Body: I was reading the paper Attention Is All You Need.
+It seems like the last step of the encoder is a LayerNorm(relu(WX + B) + X), i.e. an add + normalization. This should result in a $n$ x $d^{model}$ matrix, where $n$ is the length of the input to the encoder.
+How do we convert this $n$ x $d^{model}$ matrix into the keys $K$ and values $V$ that are fed into the decoder's encoder-decoder attention step?
+Note that, if $h$ is the number of attention heads in the model, the dimensions of $K$ and $V$ should both be $n$ x $\frac{d^{model}}{h}$. For $h=8$, this means we need a $n$ x $\frac{d^{model}}{4}$ matrix.
+Do we simply add an extra linear layer that learns a $d^{model}$ x $\frac{d^{model}}{4}$ weight matrix?
+Or do we use the output of the final Add & Norm layer, and simply use the first $\frac{d^{model}}{4}$ columns of the matrix and discard the rest?
+"
+"['deep-learning', 'keras', 'dqn', 'deep-rl', 'deep-neural-networks']"," Title: How to build a DQN agent with state and action being arrays?Body: I have a Reinforcement-Learning environment where the state is an array of 0s and 1s with length equals to the number of users the agent must satisfy (11 users).
+The agent must choose one of 12 resources for the 11 users according to the state array. If state[0] == 1
, that means that user0 needs a resource, so the agent must choose a resource out of the 12 resources it has. So, the action array's first element would be, for example: action[0] = 10
, which means that resource 10 was allocated to user0.
+If the next user (user1) is asking for a resource as well, then the number of resources to choose from is 12 - 1
, in other words, because resource10 was already allocated to user0, it cannot be allocated to another user.
+If state[X] == 0
, it means that userX is not asking for a resource, therefore it must not be allocated any resource.
+An example of a state array:
+[1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0]
+
+An example of an action array according to the state array example: (resource count starts at 0 | -1 indicates no resource was allocated)
+[10, 2, -1, -1, -1, 3, 11, 5, -1, -1, -1]
+
+I'm new to Reinforcement Learning and Deep Learning, and I have no idea how to translate that into a neural network.
+"
+"['transformer', 'attention']"," Title: Is the Decoder mask (triangular mask) applied only in the first decoder block, or to all blocks in Decoder?Body: The Decoder mask, also called "look-ahead mask", is applied in the Decoder side to prevent it from attending future tokens. Something like this:
+[0, 1, 1, 1, 1]
+[0, 0, 1, 1, 1]
+[0, 0, 0, 1, 1]
+[0, 0, 0, 0, 1]
+[0, 0, 0, 0, 0]
+
+But is this mask applied only in the first Decoder block? Or to all its blocks?
+"
+"['natural-language-processing', 'machine-translation']"," Title: Finding or creating a dataset for Neural Text SimplificationBody: I'm currently starting a research project focused on NLP.
+One of the steps involved in this project will be the development of a text simplification system, probably using a neural encoder-decoder architecture.
+For most Text Simplification research available, the most commonly used dataset is one derived from pairing Wikipedia entries in both English and Simplified English. My problem arises from the fact that the focus of my research is not on the English Language, but rather in Portuguese, specifically Portugal Portuguese.
+There exists no Simple Portuguese Wikipedia, and it seems that there exists no publicly available text simplification dataset in Portugal Portuguese at all. Because of this, I'm curious whether there is any way of tackling this problem. Maybe having a dataset simply of complex Portuguese and simple Portuguese, but with no pairings, although I'm not quite sure how that could be formulated to train a NN.
+So my question is whether there are any text simplification datasets in Portugal (or maybe Brazilian, as a last resort) Portuguese and, if not, what would be the optimal way to build such a dataset.
+Thank you.
+"
+"['terminology', 'papers', 'generative-adversarial-networks', 'variational-autoencoder', 'loss']"," Title: What is the ""contradictory loss"" in the ""Old Photo Restoration via Deep Latent Space Translation"" paper?Body: In page 4 of the paper Old Photo Restoration via Deep Latent Space
+Translation, it says the encoder $E_{R,X}$ of $VAE_1$ tries to fool the discriminator with a contradictory loss to ensure that $R$ and $X$ are mapped to the same space. What do they mean by "contradictory loss"?
+"
+"['neural-networks', 'natural-language-processing', 'objective-functions', 'transformer', 'attention']"," Title: What is the cost function of a transformer?Body: The paper Attention Is All You Need describes the transformer architecture that has an encoder and a decoder.
+However, I wasn't clear on what the cost function to minimize is for such an architecture.
+Consider a translation task, for example, where, given an English sentence $x_{english} = [x_0, x_1, x_2, \dots, x_m]$, the transformer decodes the sentence into a French sentence $x_{french}' = [x_0', x_1', \dots, x_n']$. Let's say the true label is $y_{french} = [y_0, y_1, \dots, y_p]$.
+What is the objective function of the transformer? Is it the MSE between $x_{french}'$ and $y_{french}$? And does it have any weight regularization terms?
+"
+"['neural-networks', 'natural-language-processing', 'gradient-descent', 'transformer', 'attention']"," Title: What is the gradient of an attention unit?Body: The paper Attention Is All You Need describes the Transformer architecture, which describes attention as a function of the queries $Q = x W^Q$, keys $K = x W^K$, and values $V = x W^V$:
+$$\text{Attention}(Q, K, V) = \text{softmax}\left( \frac{QK^T}{\sqrt{d_k}} \right) V = \text{softmax}\left( \frac{x W^Q (W^K)^T x^T}{\sqrt{d_k}} \right) x W^V$$
+In the Transformer, there are 3 different flavors of attention:
+
+- Self-attention in the Encoder, where the queries, keys, and values all come from the input to the Encoder.
+- Encoder-Decoder attention in the Decoder, where the queries come from the input to the Decoder, and the keys and values come from the output of the Encoder
+- Masked self-attention in the Decoder, where the queries, keys and values all come from the input to the Decoder, and, for each token, the $\text{softmax}\left( \frac{QK^T}{\sqrt{d_k}} \right)$ operation is masked out (zero'd out) for all tokens to the right of that token (to prevent look-ahead, which is cheating during training).
+
+What is the gradient (i.e. the partial derivatives of the loss function w.r.t. $x$, $W^Q$, $W^K$, $W^V$, and any bias term(s)) of each of these attention units? I am having a difficult time wrapping my head around deriving a gradient equation because I'm not sure how the softmax function interacts with the partial derivatives, and also, for the Encoder-Decoder attention in the Decoder, I'm not clear how to incorporate the encoder output into the equation.
+"
+"['neural-networks', 'deep-learning', 'deep-neural-networks', 'regression', 'feature-extraction']"," Title: How should I use deep learning to find the rotation of an object from its 2D image?Body: I have 6600 images and I am supposed to know the rotation of the object in each image. So, given an image, I want to regress to a single value.
+My attempt: I use Resnet-18 to extract a feature vector of length 1000 from an image. This is then passed to three fully-connected layers: fc(1000, 512) -> fc(512, 64) -> fc(64, 1)
+The problem I am facing right now is that my training loss and validation loss immediately go down after the first 5 epochs and then they barely change. But my training and validation accuracy fluctuates wildly throughout.
+I understand that I am experiencing over-fitting and I have done the following to deal with it:
+
+- data augmentation (Gaussian noise and color jittering)
+- L1 regularization
+- dropout
+
+So far, nothing seems to be changing the results much. The next thing I haven't tried is reducing the size of my neural net. Will that help? If so, how should I reduce the size?
+"
+"['deep-learning', 'convolutional-neural-networks', 'long-short-term-memory', 'time-series']"," Title: Time series prediction using LSTM and CNN-LSTM: which is better?Body: I am working on LSTM and CNN to solve the time series prediction problem.
+I have seen some tutorial examples of time series prediction using CNN-LSTM. But I don't know if it is better than what I predicted using LSTM.
+Could using LSTM and CNN together be better than predicting using LSTM alone?
+"
+"['comparison', 'autonomous-vehicles']"," Title: What is the difference between object tracking and trajectory prediction?Body: In autonomous driving, we know that the behaviour prediction module is concerned with understanding how the agents in the environment will behave.
+Similarly, in the perception module, the tracking algorithms are responsible for getting an estimate of the object's state over time.
+"
+"['keras', 'recurrent-neural-networks', 'data-visualization']"," Title: How to graphically represent a RNN architecture implemented in Keras?Body: I'm trying to create a simple blogpost on RNNs, that should give a better insight into how they work in Keras. Let's say:
+model = keras.models.Sequential()
+model.add(keras.layers.SimpleRNN(5, return_sequences=True, input_shape=[None, 1]))
+model.add(keras.layers.SimpleRNN(5, return_sequences=True))
+model.add(keras.layers.Dense(1))
+
+I came up with the following visualization (this is only a sketch), which I'm quite unsure about:
+
+The RNN architecture is composed of the 3 layers represented in the picture.
+Question: is this correct? Is the input "flowing" through each layer neuron to neuron, or only through the layers, like in the picture below? Is there anything else that is not correct - any other visualizations to look into?
+
+Update: my assumptions are based on my understanding of what I saw in Geron's book. The recurrent neurons are connected, see: https://pasteboard.co/JDXTFVw.png ... he then proceeds to talk about connections between different layers, see: https://pasteboard.co/JDXTXcz.png - did I misunderstand him, or is it just a peculiarity of the Keras framework?
+"
+"['machine-learning', 'reference-request', 'adversarial-ml', 'healthcare', 'taxonomy']"," Title: Is there a taxonomy of adversarial attacks?Body: I am a medical doctor working on methodological aspects of health-oriented ML. Reproducibility, replicability, generalisability are critical in this area. Among many questions, some are raised by adversarial attacks (AA).
+My question is to be considered from a literature review point of view: suppose I want to check an algorithm from an AA point of view:
+
+- is there a systematic methodological approach to be used, relating the format of the data, the type of model, and AAs? Conceptually, is there a taxonomy of AAs? If so, practically, are some AAs considered gold standards?
+
+"
+"['machine-learning', 'deep-learning', 'tensorflow', 'keras', 'transformer']"," Title: Why does the loss stops reducing after a point in this Transformer Model?Body: Context
+I was making a Transformer Model to convert English Sentences to German Sentences. But the loss stops reducing after some time.
+Code
+import string
+import re
+from tensorflow.keras.preprocessing.text import Tokenizer
+from tensorflow.keras.preprocessing.sequence import pad_sequences
+from tensorflow.keras.models import Model
+from tensorflow.keras.layers import Embedding, LSTM, RepeatVector, Dense, Dropout, BatchNormalization, TimeDistributed, AdditiveAttention, Input, Concatenate, Flatten
+from tensorflow.keras.layers import Activation, LayerNormalization, GRU, GlobalAveragePooling1D, Attention
+from tensorflow.keras.optimizers import Adam
+from tensorflow.nn import tanh, softmax
+import time
+from tensorflow.keras.losses import SparseCategoricalCrossentropy, CategoricalCrossentropy
+from numpy import array
+from tensorflow.keras.utils import plot_model
+from sklearn.utils import shuffle
+import time
+import tensorflow as tf
+from numpy import array
+import numpy as np
+from tensorflow.keras.models import load_model
+from tensorflow.keras.datasets.imdb import load_data
+
+def load_data(filename):
+ file = open(filename, 'r')
+ text = file.read()
+ file.close()
+ return text
+
+def to_lines(text):
+ return text.split('\n')
+
+def clean_data(pair):
+ pair = 'start_seq_ ' + pair + ' end_seq_'
+
+ re_print = re.compile('[^%s]' % re.escape(string.printable))
+ table = str.maketrans('', '', string.punctuation)
+ tokens = [token.translate(table) for token in pair.split()]
+ tokens = [token.lower() for token in tokens]
+ tokens = [re_print.sub('', token) for token in tokens]
+ tokens = [token for token in tokens if token.isalpha()]
+ return tokens
+
+lines = to_lines(load_data('/content/drive/My Drive/spa.txt'))
+
+english_pair = []
+german_pair = []
+language = []
+for line in lines:
+ if line != '':
+ pairs = line.split('\t')
+ english_pair.append(clean_data(pairs[0]))
+ german_pair.append(clean_data(pairs[1]))
+
+ language.append(clean_data(pairs[0]))
+ language.append(clean_data(pairs[1]))
+
+english_pair = array(english_pair)
+german_pair = array(german_pair)
+language = array(language)
+
+def create_tokenizer(data):
+ tokenizer = Tokenizer()
+ tokenizer.fit_on_texts(data)
+ return tokenizer
+
+def max_len(lines):
+ length = []
+ for line in lines:
+ length.append(len(line))
+ return max(length)
+
+tokenizer = create_tokenizer(language)
+
+vocab_size = len(tokenizer.word_index) + 1
+
+max_len = max_len(language)
+
+def create_sequences(sequences, max_len):
+ sequences = tokenizer.texts_to_sequences(sequences)
+ sequences = pad_sequences(sequences, maxlen=max_len, padding='post')
+ return sequences
+
+X1 = create_sequences(english_pair, max_len)
+X2 = create_sequences(german_pair, max_len)
+Y = create_sequences(german_pair, max_len)
+
+
+X1, X2, Y = shuffle(X1, X2, Y)
+
+training_samples = int(X1.shape[0] * 1.0)
+
+train_x1, train_x2, train_y = X1[:training_samples], X2[:training_samples], Y[:training_samples]
+test_x1, test_x2, test_y = X1[training_samples:], X2[training_samples:], Y[training_samples:]
+
+train_x2 = train_x2[:, :-1]
+test_x2 = test_x2[:, :-1]
+train_y = train_y[:, 1:].reshape(-1, max_len-1)
+test_y = test_y[:, 1:].reshape(-1, max_len-1)
+
+train_x2 = pad_sequences(train_x2, maxlen=max_len, padding='post')
+test_x2 = pad_sequences(test_x2, maxlen=max_len, padding='post')
+
+train_y = pad_sequences(train_y, maxlen=max_len, padding='post')
+test_y = pad_sequences(test_y, maxlen=max_len, padding='post')
+
+All the code above just prepares the data, so if you want you can skip that part.
+The code after this starts implementing the Transformer model.
+class EncoderBlock(tf.keras.layers.Layer):
+ def __init__(self, mid_ffn_dim, embed_dim, num_heads, max_len, batch_size):
+ super(EncoderBlock, self).__init__()
+ # Variables
+ self.batch_size = batch_size
+ self.max_len = max_len
+ self.mid_ffn_dim = mid_ffn_dim
+ self.embed_dim = embed_dim
+ self.num_heads = num_heads
+ self.attention_vector_len = self.embed_dim // self.num_heads
+ if self.embed_dim % self.num_heads != 0:
+ raise ValueError('I am Batman!')
+
+ # Trainable Layers
+ self.mid_ffn = Dense(self.mid_ffn_dim, activation='relu')
+ self.final_ffn = Dense(self.embed_dim)
+
+ self.layer_norm1 = LayerNormalization(epsilon=1e-6)
+ self.layer_norm2 = LayerNormalization(epsilon=1e-6)
+
+ self.combine_heads = Dense(self.embed_dim)
+
+ self.query_dense = Dense(self.embed_dim)
+ self.key_dense = Dense(self.embed_dim)
+ self.value_dense = Dense(self.embed_dim)
+
+ def separate_heads(self, x):
+ x = tf.reshape(x, (-1, self.max_len, self.num_heads, self.attention_vector_len))
+ return tf.transpose(x, perm=[0, 2, 1, 3])
+
+ def compute_self_attention(self, query, key, value):
+ score = tf.matmul(query, key, transpose_b=True)
+ dim_key = tf.cast(tf.shape(key)[-1], tf.float32)
+ scaled_score = score / tf.math.sqrt(dim_key)
+ weights = tf.nn.softmax(scaled_score, axis=-1)
+ output = tf.matmul(weights, value)
+ return output
+
+ def self_attention_layer(self, x):
+ query = self.query_dense(x)
+ key = self.key_dense(x)
+ value = self.value_dense(x)
+
+ query_heads = self.separate_heads(query)
+ key_heads = self.separate_heads(key)
+ value_heads = self.separate_heads(value)
+
+ attention = self.compute_self_attention(query_heads, key_heads, value_heads)
+
+ attention = tf.transpose(attention, perm=[0, 2, 1, 3])
+ attention = tf.reshape(attention, (-1, self.max_len, self.embed_dim))
+
+ output = self.combine_heads(attention)
+ return output
+
+ def get_output(self, x):
+ attn_output = self.self_attention_layer(x)
+ out1 = self.layer_norm1(x + attn_output)
+
+ ffn_output = self.final_ffn(self.mid_ffn(out1))
+
+ encoder_output = self.layer_norm2(out1 + ffn_output)
+ return encoder_output
+
+class DecoderBlock(tf.keras.layers.Layer):
+ def __init__(self, mid_ffn_dim, embed_dim, num_heads, max_len, batch_size):
+ super(DecoderBlock, self).__init__()
+ # Variables
+ self.batch_size = batch_size
+ self.max_len = max_len
+ self.mid_ffn_dim = mid_ffn_dim
+ self.embed_dim = embed_dim
+ self.num_heads = num_heads
+ self.attention_vector_len = self.embed_dim // self.num_heads
+ if self.embed_dim % self.num_heads != 0:
+ raise ValueError('I am Batman!')
+
+ # Trainable Layers
+
+ self.query_dense1 = Dense(self.embed_dim, name='query_dense1')
+ self.key_dense1 = Dense(self.embed_dim, name='key_dense1')
+ self.value_dense1 = Dense(self.embed_dim, name='value_dense1')
+
+ self.mid_ffn = Dense(self.mid_ffn_dim, activation='relu', name='dec_mid_ffn')
+ self.final_ffn = Dense(self.embed_dim, name='dec_final_ffn')
+
+ self.layer_norm1 = LayerNormalization(epsilon=1e-6)
+ self.layer_norm2 = LayerNormalization(epsilon=1e-6)
+ self.layer_norm3 = LayerNormalization(epsilon=1e-6)
+
+ self.combine_heads = Dense(self.embed_dim, name='dec_combine_heads')
+
+ self.query_dense2 = Dense(self.embed_dim, name='query_dense2')
+ self.key_dense2 = Dense(self.embed_dim, name='key_dense2')
+ self.value_dense2 = Dense(self.embed_dim, name='value_dense2')
+
+ def separate_heads(self, x):
+ x = tf.reshape(x, (-1, self.max_len, self.num_heads, self.attention_vector_len))
+ return tf.transpose(x, perm=[0, 2, 1, 3])
+
+ def compute_self_attention(self, query, key, value):
+ score = tf.matmul(query, key, transpose_b=True)
+ dim_key = tf.cast(tf.shape(key)[-1], tf.float32)
+ scaled_score = score / tf.math.sqrt(dim_key)
+ weights = tf.nn.softmax(scaled_score, axis=-1)
+ output = tf.matmul(weights, value)
+ return output
+
+ def masking(self, x):
+ b = []
+ for batch in range(x.shape[0]):
+ bat = []
+ for head in range(x.shape[1]):
+ headd = []
+ for word in range(x.shape[2]):
+ current_word = []
+ for represented_in in range(x.shape[3]):
+ if represented_in > word:
+ current_word.append(np.NINF)
+ else:
+ current_word.append(0)
+ headd.append(current_word)
+ bat.append(headd)
+ b.append(bat)
+ return b
+
+ def compute_masked_self_attention(self, query, key, value):
+ score = tf.matmul(query, key, transpose_b=True)
+ score = score + self.masking(score)
+ score = tf.convert_to_tensor(score)
+
+ dim_key = tf.cast(tf.shape(key)[-1], tf.float32)
+ scaled_score = score / tf.math.sqrt(dim_key)
+ weights = tf.nn.softmax(scaled_score, axis=-1)
+ output = tf.matmul(weights, value)
+ return output
+
+ def masked_self_attention_layer(self, x):
+ query = self.query_dense1(x)
+ key = self.key_dense1(x)
+ value = self.value_dense1(x)
+
+ query_heads = self.separate_heads(query)
+ key_heads = self.separate_heads(key)
+ value_heads = self.separate_heads(value)
+
+ attention = self.compute_masked_self_attention(query_heads, key_heads, value_heads)
+
+ attention = tf.transpose(attention, perm=[0, 2, 1, 3])
+ attention = tf.reshape(attention, (-1, self.max_len, self.embed_dim))
+
+ output = self.combine_heads(attention)
+ return output
+
+ def second_attention_layer(self, x, encoder_output):
+ query = self.query_dense2(x)
+ key = self.key_dense2(encoder_output)
+ value = self.value_dense2(encoder_output)
+
+ query_heads = self.separate_heads(query)
+ key_heads = self.separate_heads(key)
+ value_heads = self.separate_heads(value)
+
+ attention = self.compute_self_attention(query_heads, key_heads, value_heads)
+
+ attention = tf.transpose(attention, perm=[0, 2, 1, 3])
+ attention = tf.reshape(attention, (-1, self.max_len, self.embed_dim))
+
+ output = self.combine_heads(attention)
+ return output
+
+ def get_output(self, x, encoder_output):
+ masked_attn_output = self.masked_self_attention_layer(x)
+ out1 = self.layer_norm1(x + masked_attn_output)
+
+ mutli_head_attn_output = self.second_attention_layer(out1, encoder_output)
+ out2 = self.layer_norm2(out1 + mutli_head_attn_output)
+
+ ffn_output = self.final_ffn(self.mid_ffn(out2))
+ decoder_output = self.layer_norm3(out2 + ffn_output)
+ return decoder_output
+
+embed_dim = 512
+mid_ffn_dim = 1024
+
+num_heads = 8
+max_len = max_len
+batch_size = 32
+
+encoder_block1 = EncoderBlock(mid_ffn_dim, embed_dim, num_heads, max_len, batch_size)
+encoder_block2 = EncoderBlock(mid_ffn_dim, embed_dim, num_heads, max_len, batch_size)
+encoder_block3 = EncoderBlock(mid_ffn_dim, embed_dim, num_heads, max_len, batch_size)
+
+decoder_block1 = DecoderBlock(mid_ffn_dim, embed_dim, num_heads, max_len, batch_size)
+decoder_block2 = DecoderBlock(mid_ffn_dim, embed_dim, num_heads, max_len, batch_size)
+decoder_block3 = DecoderBlock(mid_ffn_dim, embed_dim, num_heads, max_len, batch_size)
+
+# Define Loss and Optimizer
+loss_object = SparseCategoricalCrossentropy()
+optimizer = Adam()
+
+embedding = Embedding(vocab_size, embed_dim, name='embedding')
+position_embedding = Embedding(vocab_size, embed_dim)
+
+final_transformer_layer = Dense(vocab_size, activation='softmax')
+
+def positional_embedding(x):
+ positions = tf.range(start=0, limit=max_len, delta=1)
+ positions = position_embedding(positions)
+ return x + positions
+
+def train_step(english_sent, german_sent, german_trgt):
+ with tf.GradientTape() as tape:
+ english_embedded = embedding(english_sent)
+ german_embedded = embedding(german_sent)
+
+ english_positioned = positional_embedding(english_embedded)
+ german_positioned = positional_embedding(german_embedded)
+
+ # Encoders
+ encoder_output = encoder_block1.get_output(english_positioned)
+ encoder_output = encoder_block2.get_output(encoder_output)
+ encoder_output = encoder_block3.get_output(encoder_output)
+
+ # Decoders
+ decoder_output = decoder_block1.get_output(german_positioned, encoder_output)
+ decoder_output = decoder_block2.get_output(decoder_output, encoder_output)
+ decoder_output = decoder_block3.get_output(decoder_output, encoder_output)
+
+ # Final Output
+ transformer_output = final_transformer_layer(decoder_output)
+
+ # Compute Loss
+ loss = loss_object(german_trgt, transformer_output)
+
+ variables = embedding.trainable_variables + position_embedding.trainable_variables + encoder_block1.trainable_variables + encoder_block2.trainable_variables
+ variables += encoder_block3.trainable_variables + decoder_block1.trainable_variables + decoder_block2.trainable_variables + decoder_block3.trainable_variables
+ variables += final_transformer_layer.trainable_variables
+
+ gradients = tape.gradient(loss, variables)
+ optimizer.apply_gradients(zip(gradients, variables))
+
+ return float(loss)
+
+def train(epochs=10):
+ batch_per_epoch = int(train_x1.shape[0] / batch_size)
+ for epoch in range(epochs):
+ for i in range(batch_per_epoch):
+ english_sent_x = train_x1[i*batch_size : (i*batch_size)+batch_size].reshape(batch_size, max_len)
+ german_sent_x = train_x2[i*batch_size : (i*batch_size)+batch_size].reshape(batch_size, max_len)
+ german_sent_y = train_y[i*batch_size : (i*batch_size)+batch_size].reshape(batch_size, max_len, 1)
+
+ loss = train_step(english_sent_x, german_sent_x, german_sent_y)
+
+ print('Epoch ', epoch, 'Batch ', i, '/', batch_per_epoch, 'Loss ', loss)
+
+train()
+
+
+And the code is done! But the loss stops decreasing at a value of around 1.2 after some time. Why is this happening?
+Maybe Important
+I tried debugging the model by passing random integers as input, and the model still performed the same way it did when I gave it real sentences as input.
+When I tried training the model with just 1 training sample, the loss stopped decreasing at around 0.2. When I trained it with 2 training samples, the result was approximately the same as with 1 training sample.
+When I stopped shuffling the dataset, the loss went down to around 0.7 and then again stopped decreasing.
+I tried simplifying the model by removing some encoder and decoder blocks but the results were approximately the same. I even tried making the model more complex but the results were again approximately the same.
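+For reference, here is a vectorized sketch of the causal mask that the masking method in DecoderBlock builds with Python loops (the re-implementation and its names are mine; I assume the attention scores have shape (batch, heads, max_len, max_len), and I use -1e9 instead of np.NINF, which is a common substitute in TensorFlow code):
+import tensorflow as tf
+
+def causal_mask(seq_len):
+    # 1s strictly above the diagonal (future positions), 0s on and below it
+    upper = 1.0 - tf.linalg.band_part(tf.ones((seq_len, seq_len)), -1, 0)
+    # future positions get a very large negative value so softmax gives them ~0 weight
+    return upper * -1e9  # shape (seq_len, seq_len); broadcasts over (batch, heads, ...)
+
+# usage inside compute_masked_self_attention, instead of score + self.masking(score):
+# score = score + causal_mask(self.max_len)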
+"
+"['deep-learning', 'computer-vision', 'reference-request']"," Title: How to calculate the distance between the camera and an object using Computer Vision?Body: I want to create a Deep Learning model that measures the distance between the camera and certain objects in an image. Is it possible? Please, let me know some resources related to this task.
+"
+"['classification', 'reference-request', 'autoencoders', 'supervised-learning', 'transfer-learning']"," Title: Literature on the advantages of using an auto-encoder for classificationBody: Given a supervised problem with X, y input pairs, one can do two things for obtaining the function f that maps X with y with Neural Networks (and in general in machine learning):
+
+- Deploy directly a supervised learning algorithm that maps X to y
+
+- Deploy a (variational) auto-encoder for learning useful features, and then use these features to train the supervised learning algorithm
+
+
+I would like to be pointed to some papers/blogs that explain which technique is better and when, or that conduct empirical benchmarking experiments.
+"
+"['neural-networks', 'python', 'recurrent-neural-networks', 'long-short-term-memory']"," Title: Understanding LSTM through exampleBody: I want to code up one time step in a LSTM. My focus is on understanding the functioning of the forget gate layer, input gate layer, candidate values, present and future cell states.
+Let's assume that my hidden state at t-1 and xt are the following. For simplicity, let's assume that the weight matrices are identity matrices, and all biases are zero.
+htminus1 = np.array( [0, 0.5, 0.1, 0.2, 0.6] )
+xt = np.array( [-0.1, 0.3, 0.1, -0.25, 0.1] )
+
+I understand that the forget gate is the sigmoid of htminus1 and xt. So, is it the following?
+ft = 1 / ( 1 + np.exp( -( htminus1 + xt ) ) )
+
+>> ft = array([0.47502081, 0.68997448, 0.549834 , 0.4875026 , 0.66818777])
+
+I am referring to this link to implement one iteration of a one-block LSTM. The link says that ft should be 0 or 1. Am I missing something here?
+How do I get the forget gate layer as per the schema given in the picture below? An example would be illustrative for me.
+
+Along the same lines, how do I get the input gate layer it and the vector of new candidate values $\tilde{C}_t$ as per the following picture?
+
+Finally, how do I get the new hidden state ht as per the scheme given in the following picture?
+A simple example would be helpful for my understanding. Thanks in advance.
+
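+For reference, this is how I would write out one full step under these assumptions (identity weight matrices, zero biases, and a previous cell state that I simply assume to be all zeros); I am not sure this is correct, which is exactly what I would like to check:
+import numpy as np
+
+def sigmoid(z):
+    return 1.0 / (1.0 + np.exp(-z))
+
+htminus1 = np.array([0, 0.5, 0.1, 0.2, 0.6])
+xt = np.array([-0.1, 0.3, 0.1, -0.25, 0.1])
+ctminus1 = np.zeros(5)  # assumption: the previous cell state is all zeros
+
+# with identity weight matrices and zero biases, W_h @ h + W_x @ x reduces to h + x
+ft = sigmoid(htminus1 + xt)        # forget gate
+it = sigmoid(htminus1 + xt)        # input gate
+c_tilde = np.tanh(htminus1 + xt)   # candidate values
+ct = ft * ctminus1 + it * c_tilde  # new cell state
+ot = sigmoid(htminus1 + xt)        # output gate
+ht = ot * np.tanh(ct)              # new hidden state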
+"
+"['monte-carlo-tree-search', 'alphazero', 'alphago', 'deepmind', 'continuous-action-spaces']"," Title: What would be the AlphaGo's performance in continuous action space?Body: During my research for Google DeepMind's Go-playing program Alpha Go and its successor Alpha Go Zero, I discovered that the system uses a clever pipeline and an interplay of blocks of both policy and value networks to play the game of Go in such a way, that it is able to outperform even the best players in the world. This is in particular remarkable, because the game of Go was considered to be unsolvable a few years ago. This success gained international attention and it was labeled as a breakthrough in the community of AI. It is also not a secret that the research team behind AlphaGo and AlphaGo Zero used lots of computation power to create such a sophisticated system.
+But, since each board configuration is considered as a distinct state, where algorithms can be applied really well, and considering in particular AlphaGo Zero, which uses no prior knowledge and can figure out how to play the game of Go from scratch, my question is the following:
+Is there any way to state (theoretically) how the performance of AlphaGo would be in continuous action spaces (e.g. self-driving cars)?
+"
+"['reinforcement-learning', 'long-short-term-memory', 'model-based-methods']"," Title: Using an LSTM for model-based RL in a POMDPBody: I am trying to set up an experiment where an agent is exploring an n x n gridworld environment, of which the agent can see some fraction at any given time step. I'd like the agent to build up some internal model of this gridworld.
+Now the environment is time-varying, so I figured it would be useful to try using an LSTM so the agent can learn potentially useful information about how the environment changes. However, since the agent can only see some of the environment, each observation that could be used to train this model would be incomplete (i.e. the problem is partially-observable from this perspective). Thus I imagine that training such a network would be difficult since there would be large gaps in the data - for example, it may make an observation at position [0, 0] at t = 0, and then not make another observation there until say t = 100.
+My question is twofold
+
+- Is there a canonical way of working around partial observability in LSTMs? Either direct advice or pointing to useful papers would both be appreciated.
+- Can an LSTM account for gaps in time between observations?
+
+Thanks!
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'objective-functions', 'bellman-equations']"," Title: How is the DQN loss derived from (or theoretically motivated by) the Bellman equation, and how is it related to the Q-learning update?Body: I'm doing a project on Reinforcement Learning. I programmed an agent that uses DDQN. There are a lot of tutorials on that, so the code implementation was not that hard.
+However, I have problems understanding how one should come up with this kind of algorithm starting from the Bellman equation, and I can't find a good, understandable explanation addressing this derivation/path of reasoning.
+So, my questions are:
+
+- How is the loss to train the DQN derived from (or theoretically motivated by) the Bellman equation?
+- How is it related to the usual Q-learning update?
+
+According to my current notes, the Bellman equation looks like this
+$$Q_{\pi} (s,a) = \sum_{s'} P_{ss'}^a (r_{s,a} + \gamma \sum_{a'} \pi(a'|s') Q_{\pi} (s',a')) \label{1}\tag{1} $$
+which, to my understanding, is a recursive expression that says:
+The value of the state-action pair $(s,a)$ is equal to the sum over all possible next states $s'$ of the probability $P_{ss'}^a$ of getting to that state after taking action $a$ (which means the environment acts on the agent), times the following quantity: the reward the agent got from taking action $a$ in state $s$, plus the discounted sum over the possible actions $a'$ of the probability of taking $a'$ times the value of the state-action pair $(s',a')$.
+The Q-Learning iteration (intermediate step) is often denoted as:
+$$Q^{new}(s,a) \leftarrow Q(s,a) + \alpha (r + \gamma \max_{a'} Q(s',a') - Q(s,a)) \label{2}\tag{2}$$
+which means that the new state-action value is the old Q value plus the learning rate, $\alpha$, times the temporal difference, $(r + \gamma \max_{a'} Q(s',a') - Q(s,a))$, which consists of the actual reward the agent received plus a discount factor times the Q function of the new state-action pair minus the old Q function.
+The Bellman equation can be converted into an update rule because an algorithm that uses that update rule converges, as this answer states.
+In the case of (D)DQN, $Q(s,a)$ is estimated by our NN that leads to an action $a$ and we receive $r$ and $s'$.
+Then we feed $s$ as well as $s'$ into our NN (with Double DQN we feed them into different NNs). The $\max_{a'} Q(s',a')$ is performed on the output of our target network. This q-value is then multiplied with $\gamma$, and $r$ is added to the product. Then this sum replaces the q-value from the other NN. Since this basic NN outputted $Q(s,a)$ but should have outputted $r + \gamma \max_{a'} Q(s',a')$, we train the basic NN to change the weights so that its output moves closer to this temporal-difference target.
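+To make the procedure I just described concrete, here is a minimal sketch of the loss computation as I understand it (PyTorch, with stand-in networks and a fake minibatch; this reflects my reading, not necessarily the canonical implementation):
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+n_states, n_actions, gamma = 4, 2, 0.99
+q_net = nn.Linear(n_states, n_actions)       # stand-in for the online Q-network
+target_net = nn.Linear(n_states, n_actions)  # stand-in for the target Q-network
+
+# a fake minibatch of transitions (s, a, r, s', done)
+s = torch.randn(8, n_states)
+a = torch.randint(0, n_actions, (8,))
+r = torch.randn(8)
+s_next = torch.randn(8, n_states)
+done = torch.zeros(8)
+
+q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a) from the online network
+with torch.no_grad():
+    max_q_next = target_net(s_next).max(dim=1).values  # max over a' of Q_target(s', a')
+    target = r + gamma * (1 - done) * max_q_next       # TD target
+loss = F.mse_loss(q_sa, target)                        # regress Q(s, a) onto the target
+loss.backward()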
+"
+"['machine-learning', 'python', 'autonomous-vehicles']"," Title: Algorithms for training a two motor powered rc car without steering servoBody: Previously, I have build a donkey car where the steering of the two front wheels was done using a motor servo. This project was a success and the car was able to drive autonomously after training was done.
+source Donkey Car 2
+Now:
+I have this Rc Car kit that has two motors on the back, powering two wheels and a trolley wheel in front.
+The steering is supposed to be done by playing around with the two back motors.
+My question is:
+Is there any method to modify the Donkey Car code, so I can train the model?
+Considering that Donkey Car uses the angle of the servo to train the model, whereas now I just have the information of the two back wheels and no servo steering the vehicle.
+I am not sure if there is an approach that is specific to this setup.
+"
+"['machine-learning', 'reinforcement-learning', 'models', 'unsupervised-learning', 'supervised-learning']"," Title: Aside from specific training sets, what distinguishes the capabilities of different AI implementations?Body: (Disclaimer: I don't know much about ML/AI, besides some basic ideas behind it all.)
+It seems like ML/AI models can often be boiled down to statistics, where certain levers (weights) get fine-tuned based on the specific input of a large set of training data.
+Clearly, ML/AI models don't only distinguish themselves in their training data alone, otherwise there would not be so many improvements happening in the field all the time. My question therefore is: What does distinguish different models of the same category?
+If I have an AI that completes real-life pictures that have some missing parts, and an AI that completes a painting with missing parts, what key concepts separates the two?
+If I have an AI detecting text in an image, and an AI detecting... trees in an image, what key concepts separates the two?
+In other words, what is stopping me from "taking" an existing implementation of a certain AI category, and just feeding it my specific training set + rewards (i.e. judgement criteria for good vs bad output), in order to solve a specific task?
+In yet again other words, if I wanted to use ML/AI to build a new model for a specific task, what concepts and topics would I need to pay extra attention to? (I guess you could say I'm trying to reverse engineer the learning process of the field here. I don't have the time to properly teach myself and become an "expert", but find it all very interesting and would still like to use some of the wonderful things people have done.)
+"
+"['neural-networks', 'generative-adversarial-networks', 'hyper-parameters', 'deepfakes']"," Title: How does noise input size affect fake image generation with GANs?Body: In Generative Adversarial Networks, the Generator takes noise vector as input and feeds it forward to create an image. The noise vector consists of random numbers sampled from the normal distribution. In several examples that I've encountered, the noise vector had 100 numbers (implementation 1, implementation 2). Is there a reason this number is used? How does noise size affect the generation image?
+"
+"['neural-networks', 'convolutional-neural-networks', 'python', 'pytorch', 'pretrained-models']"," Title: How to use a conv2d layer after a flatten?Body: I am not familiar with Deep learning and Pytorch. And I want to know how to deal, in general with such a situation. So, I was wondering if I used a pretrained model (EfficientNet for example) if I want to change the _fc attribute and use conv2d in it, how can I recover a 2D structure? Because the pretrained model flattens it just before _fc.
+For example, the pretrained model outputs a flattened feature vector of 1280 elements; what I did is the following:
+self.efficient_net._fc = nn.Sequential(
+ nn.Linear(1280, 1225),
+ nn.Unflatten(dim=1, unflattened_size=(1, 35, 35)),
+ nn.Conv2d(1, 35, kernel_size=1),
+ ...,
+ )
+
+I didn't have a specific height and width to recover in the 2D structure, so I assumed that h = w = some size and used a linear layer whose output size is equal to the square of that size (in the example above, 35² = 1225). I am not sure if the unflatten is the correct way to do this. Then I added the conv2d. My code works, but it doesn't give good results, which probably means that the 2D structure I recovered does not capture any meaningful information.
+Can anyone enlighten me with general knowledge about how things are done in my situation, or give me some comments? Thank you!
+"
+"['natural-language-processing', 'transformer', 'language-model', 'pos-tagging']"," Title: When we translate a text from one language to another, how does the frequency of various POS tags change?Body: When we translate a text from one language to another, how does the frequency of various POS tags change?
+So, let's say we have a text in English, with 10% nouns, 20% adjectives, 15% adverbs, 25% verbs, etc., which we now translate to German, French, or Hindi. Can we say that in these other languages the POS tag frequency will remain the same as earlier?
+"
+"['reference-request', 'applications', 'transformer', 'sequence-modeling', 'audio-processing']"," Title: Can we use transformers for audio classification tasks?Body: Since transformers are good at processing sequential data, can we also use them for audio classification problems (same as RNNs)?
+"
+"['reference-request', 'supervised-learning']"," Title: Is there any known approach to generate sets of objects?Body: I am looking for some known approach, or some previous work, on the following problem:
+Let $\Sigma$ be an alphabet of symbols and $\Sigma^*$ be the set of all the strings that you can compose from this alphabet. Furthermore, let $f:\Sigma^*\rightarrow2^{\Sigma^*}$ be a function that assigns a certain set of $\Sigma$-strings to each $\Sigma$-string. Suppose you have a dataset $\mathcal{D}\subseteq\Sigma^*\times2^{\Sigma^*}$ of input-output pairs.
+With this data, the goal is to learn a function $f^\prime:\Sigma^*\rightarrow2^{\Sigma^*}$ that, given a string $\sigma\in\Sigma^*$, gives any superset of $f(\sigma)$, i.e. $f^\prime(\sigma)\supseteq f(\sigma)$. Of course, returning the set of all strings is not a good solution, so $f^\prime(\sigma)$ should not be much larger than $f(\sigma)$ (to give a rough idea, if $|f(\sigma)|=10$, then $|f^\prime(\sigma)|=100$ would still be ok, but $|f^\prime(\sigma)|=10000$ wouldn't). To give an intuitive reason behind this, I already have an algorithm which, given a $\sigma$ and a set $S\supseteq f(\sigma)$, returns $f(\sigma)$. However, this algorithm has an extremely high time-complexity (growing with $|S|$), and I want to use this machine learning approach to narrow down the search.
+I would like to use any Machine Learning approach (from Evolutionary Computing to Deep Learning) to solve this problem.
+So far my only idea would be to use an encoder-decoder architecture. I construct character embeddings for all symbols in $\Sigma$, and then through some neural architecture (I was thinking about an LSTM) I aggregate them to obtain a string representation. Given this, the decoder generates in sequence all elements of the corresponding set (in a similar, but inverse, fashion).
+This is clearly not optimal, because sets lack any meaningful order, and this approach is order-dependent (by nature of LSTMs and decoders in general). Of course I could always sort all sets, but this still imposes a structure to my problem that is not there, and I feel like this could make it harder to solve.
+So, in sum, my question is: Is there any known approach to the problem of generating sets of objects from a given input in the literature? If not, how could I improve my approach?
+"
+"['convolutional-neural-networks', 'computer-vision', 'attention', 'filters']"," Title: What is the difference between Attention Gate and CNN filters?Body: Attention models/gates are used to focus/pay attention to the important regions. According to this paper, the authors describe that a model with Attention Gate (AG) can be trained from scratch. Then the AGs automatically learn to focus on the target.
+What I am having trouble understanding is that, in the context of computer vision, doesn't a filter from the convolutional layers learn the region of interest?
+The authors say that adding Attention Gate reduces complexity when compared with multi-stage CNNs. But the job a trained AG would do is the same as that of a filter in a convolutional layer that would lead to the correct output, right?
+"
+"['reinforcement-learning', 'monte-carlo-tree-search', 'exploration-exploitation-tradeoff', 'upper-confidence-bound']"," Title: Why is the ideal exploration parameter in the UCT algorithm $\sqrt{2}$?Body: From Wikipedia, in the Monte-Carlo Tree Search algorithm, you should choose the node that maximizes the value:
+$$\frac{w_{i}}{n_{i}} + c \sqrt{\frac{\ln N_{i}}{n_{i}}},$$
+where
+
+- ${w_{i}}$ stands for the number of wins for the node considered after the $i$-th move,
+
+- ${n_{i}}$ stands for the number of simulations for the node considered after the $i$-th move,
+
+- $N_{i}$ stands for the total number of simulations after the $i$-th move run by the parent node of the one considered
+
+- $c$ is the exploration parameter, theoretically equal to $\sqrt{2}$; in practice usually chosen empirically.
+
+
+Here (and I've seen in other places as well) it claims that the theoretical ideal value for $c$ is $\sqrt{2}$. Where does this value come from?
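+Just to make sure I am reading the formula correctly, this is how I would compute the score for a child node in code (the function and variable names are mine):
+import math
+
+def uct_score(wins, visits, parent_visits, c=math.sqrt(2)):
+    # exploitation term (average win rate) plus exploration term
+    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)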
+(Note: I did post this same question on cross-validated before I knew about this (more relevant) site)
+"
+"['neural-networks', 'backpropagation', 'optimization', 'numerical-algorithms']"," Title: Why is second-order backpropagation useful?Body: Raul Rojas's book on Neural Networks dedicates section 8.4.3 to explaining how to do second-order backpropagation, that is, computing the Hessian of the error function with respect to two weights at a time.
+What problems are easier to solve using this approach rather than first-order backpropagation?
+"
+"['deep-learning', 'papers', 'optimization', 'learning-rate', 'adam']"," Title: Why is Adam trapped in bad/suspicious local optima after the first few updates?Body: In the paper On the Variance of the Adaptive Learning Rate and Beyond, in section 2, the authors write
+
+To further analyze this phenomenon, we visualize the histogram of the absolute value of gradients on a log scale in Figure 2. We observe that, without applying warmup, the gradient distribution is distorted to have a mass center in relatively small values within 10 updates. Such gradient distortion means that the vanilla Adam is trapped in bad/suspicious local optima after the first few updates.
+
+Here is figure 2 from the paper.
+
+Can someone explain this part?
+
+Such gradient distortion means that the vanilla Adam is trapped in bad/suspicious local optima after the first few updates.
+
+Why is this true?
+"
+"['deep-learning', 'research']"," Title: What is the process of inventing deep neural network models? How researchers deal with long training times?Body: After reading this topic on GitHub how long time it takes to train YOLOV3 on coco dataset I was wondering how researchers deal with long training times while inventing new architectures.
+I imagine that to evaluate the model you need to train it first. How do they make tweaks in their architectures, e.g. tweaking layers, adding pooling, dropout, etc., if training can take a few days? Is it pure art and it is designed roughly or is it a more deliberate process?
+What are the steps of engineering new architecture using deep neural networks?
+"
+"['machine-learning', 'audio-processing']"," Title: Can Machine Learning be used to synthesize engine sounds?Body: I'm working on a project to equip model locomotives with sound boards. I'm in the process of designing the board at the moment, and the idea is to allow users to load their own sound files onto an SD card plugged into the board.
+Conventionally, model locomotive sounds are collected from high-fidelity microphones placed on and around the real engine in question. The engine is started up then put through idle and all of the different notches, as well as dynamic braking, horn and bell sounds, etc. This practice is very expensive because you have to find a willing (usually small) railroad or museum, pay for travel expenses, and diesel fuel ain't exactly cheap at the volumes these engines go through. Secondly, newer engines are hard to record because railroads aren't exactly in the business of letting hobbyists tape microphones all over their money making machines. As such, the main cost for a sound board comes not from the circuit's BOM cost, but from the effort required to get sounds from locomotives.
+What there's plenty of are YouTube videos of amateur rail enthusiasts taking videos of locomotive sightings at close(ish) proximity, including startups and shutdowns. My question is - is there a way to take a bunch of different audio recordings of the same engine, remove noise and the doppler effect, and from that create a profile that can be used to simulate what the engine might sound like at different throttle notches? Is machine learning the right tool for this?
+"
+"['neural-networks', 'convolutional-neural-networks', 'convolution', 'filters']"," Title: Is there anything that ensures that convolutional filters don't end up the same?Body: I trained a simple model to recognize handwritten numbers from the mnist dataset. Here it is:
+model = Sequential([
+ Conv2D(filters=1, kernel_size=(3,1), padding='valid', strides=1, input_shape=(28, 28, 1)),
+ Flatten(),
+ Dense(10, activation='softmax')])
+
+I experimented with varying the number of filters for the convolutional layer, while keeping other parameters constant (learning rate=0.0001, number of episodes=2000, training batch size=512). I used 1, 2, 4, 8, and 16 filters, and the model accuracy was 92-93% for each of them.
+From my understanding, during the training the filters may learn to recognize various types of edges in the image (e.g., vertical, horizontal, round). This experiment made me wonder whether any of the filters end up being duplicates -- having the same or similar weights. Is there anything that prevents them from that?
+"
+"['reinforcement-learning', 'value-functions', 'value-iteration']"," Title: Are the relative magnitudes of the learned and optimal state value function the same?Body: I have been reading recently about value and policy iteration. I tried to code the algorithms to understand them better and in the process I discovered something and I am not sure why is the case (or if my code is doing the right thing)
+If I compute the expected value until convergence I get the following grid:
+
+If I compute the optimal value with value iteration I will get the following.
+
+As you can see, the values are different but their relative magnitude is similar, i.e. the 3rd column in the last row has the greatest value in both computations.
+I believe this makes sense, as the expected value will tend to accumulate values in "promising" cells. But I assume that the first computation won't help much, because it does not tell us what the optimal policy is, whereas the second does.
+Is my understanding correct?
+"
+"['reinforcement-learning', 'implementation', 'proximal-policy-optimization', 'continuous-action-spaces']"," Title: How are continuous actions sampled (or generated) from the policy network in PPO?Body: I am trying to understand and reproduce the Proximal Policy Optimization (PPO) algorithm in detail. One thing that I find missing in the paper introducing the algorithm is how exactly actions $a_t$ are generated given the policy network $\pi_\theta(a_t|s_t)$.
+From the source code, I saw that discrete actions get sampled from some probability distribution (which I assume to be discrete in this case) parameterized by the output probabilities generated by $\pi_\theta$ given the state $s_t$.
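+In code, my reading of the discrete case is roughly the following sketch (the variable names are mine, and I am assuming a categorical distribution over the probabilities output by the policy network):
+import torch
+from torch.distributions import Categorical
+
+# probs: output of the policy network pi_theta(. | s_t), e.g. after a softmax layer
+probs = torch.tensor([0.1, 0.7, 0.2])
+dist = Categorical(probs=probs)
+a_t = dist.sample()            # sampled discrete action
+log_prob = dist.log_prob(a_t)  # log pi_theta(a_t | s_t), needed for the PPO objective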
+However, what I don't understand is how continuous actions are sampled/generated from the policy network. Are they also sampled from a (probably continuous) distribution? In that case, which type of distribution is used and which parameters are predicted by the policy network to parameterize said distribution?
+Also, is there any official literature that I could cite which introduces the method by which PPO generates its action outputs?
+"
+"['reinforcement-learning', 'dqn', 'implementation', 'double-dqn']"," Title: Would it make sense to share the layers (except the last one) of the neural networks in Double DQN?Body: Context: Double Q-learning was introduced to prevent the maximization bias from q-learning. Instead of learning a single Q-network, we can learn two (or in general $K > 1$) and our Q-estimate would be the min across all these Q-networks.
+Question:
+Does it make sense to share the layers of these Q-networks (except the last layer)?
+So, instead of having 2 networks of size [64, 64, 2] (with ~8.5K parameters in total) we can have one network of size [64, 64, 4] (with ~4.3K params).
+I couldn't see much of a downside to this, but all the implementations I've seen keep two completely different networks.
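+Concretely, the shared variant I have in mind would look something like the following sketch (PyTorch, with hypothetical obs_dim and n_actions; only the final heads differ between the two Q-estimates):
+import torch
+import torch.nn as nn
+
+class SharedDoubleQ(nn.Module):
+    def __init__(self, obs_dim=4, n_actions=2):
+        super().__init__()
+        # shared trunk, analogous to the [64, 64] part of both networks
+        self.trunk = nn.Sequential(
+            nn.Linear(obs_dim, 64), nn.ReLU(),
+            nn.Linear(64, 64), nn.ReLU(),
+        )
+        # two separate output heads, analogous to the two final layers
+        self.head1 = nn.Linear(64, n_actions)
+        self.head2 = nn.Linear(64, n_actions)
+
+    def forward(self, obs):
+        features = self.trunk(obs)
+        return self.head1(features), self.head2(features)
+
+q1, q2 = SharedDoubleQ()(torch.randn(1, 4))
+q_estimate = torch.min(q1, q2)  # min over the two heads, as described above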
+"
+"['neural-networks', 'deep-learning', 'datasets', 'data-preprocessing', 'imbalanced-datasets']"," Title: How robust are deep networks to class imbalance?Body: Before deep learning, I worked with machine learning problems where the data had a large class imbalance (30:1 or worse ratios). At that time, all the classifiers struggled, even after under-sampling the represented classes and creating synthetic examples of the underrepresented classes -- except Random Forest, which was a bit more robust than the others, but still not great.
+What are guidelines for class distribution when it comes to deep learning (CNNs, ResNets, transformers, etc)? Must the representation of each class be 1:1? Or maybe it's "good enough" as long as it is under some ratio like 2:1? Or is deep learning completely immune to class imbalance as long as we have enough training data?
+Furthermore, as a general guideline, should each class have a certain minimum number of training examples (maybe some multiple of the number of weights of the network)?
+"
+"['neural-networks', 'machine-learning', 'prediction']"," Title: How to deal with predictions for data outside the range of the training dataset in neural networks?Body: I’ve set up a neural network model to experiment with predicting foreign exchange rates based on various economic data. The model learned fine and the test data is OK ($R^2 = 0.88$).
+But I can't figure out how to input data for scenarios where new data is outside the range of the datasets used to train the model. For example, if US debt is increased (again), then it will be outside the data range used for the training datasets, so, when normalised using the same parameters as the training dataset (0-1 scale), it will be greater than 1, so the model rejects it.
+Everything I've read says to normalise the new data using the training data parameters (understandably), but I can't find anything that explains how to use the model to make predictions where new data is outside the range of the training datasets.
+In parallel, I've used a regression model, but I'm fairly new to neural networks and would like to find a way of using these for this kind of prediction model. Any help gratefully received.
+"
+"['reinforcement-learning', 'deep-rl', 'reference-request', 'proximal-policy-optimization']"," Title: What are the differences between Proximal Policy Optimization versions PPO1 and PPO2?Body: When Proximal Policy Optimization (PPO) was released, it was accompanied by a paper describing it.
+Later, the authors at OpenAI introduced a second version of PPO, called PPO2 (whereas the original version is now commonly referred to as PPO1). Unfortunately, the several changes made between PPO1 and PPO2 are pretty much undocumented (as stated over here).
+Someone associated with OpenAI's baselines Deep Reinforcement Learning repository commented that the main advancement of PPO2 (compared to PPO1) was the use of a more advanced parallelism strategy, leading to improved performance. Unfortunately, the person omitted naming further changes made.
+Now, I was wondering if anyone is aware of a (reliable) source of information or (preferably) even some published literature that lists all the numerous differences between PPO1 and PPO2.
+"
+"['reinforcement-learning', 'action-spaces', 'discrete-action-spaces']"," Title: How to implement RL model with increasing dimensions of state space and action space?Body: I've read in this discussion that "reinforcement learning is a way of finding the value function of a Markov Decision Process".
+I want to implement an RL model whose state space and action space dimensions would increase as the MDP progresses. But I don't know how to define it in terms of, e.g., Q-learning or some similar method.
+Precisely, I want to create a model, that would generate boolean circuits. At each step, it could perform four different actions:
+
+- apply $AND$ gate on two wires,
+- apply $OR$ gate on two wires,
+- apply $NOT$ gate on one wire,
+- add new wire.
+
+Each of the first three actions could be performed on any currently available wires (targets). Also, the number of wires will change over time. It might increase if we perform the fourth action, or decrease after, e.g., the application of an $AND$ gate (which takes two wires as input and outputs just one).
+"
+"['machine-learning', 'recommender-system', 'k-nearest-neighbors']"," Title: what is the correct approach for KNN in item based recommendation system?Body: if I make an application for movies and each user in the system can rate the movies. And I want to make a recommendation system to recommend movies to active user based on his rating for other movies. using item based collaborative filtering using KNN.
+When we find the similarities between the movies and pick the top-k items, which approach is correct?
+1- Calculate the similarities between all movies and then take the top k for every movie the user rated highly (the dataset is a matrix representing the rating values for each item from each user).
+2- Apply KNN to each movie that the user likes, one after the other: for every movie the user rated highly, find the similarity between that movie and the films the user has not rated yet, take the top-k similar (not yet rated) films, and then show them to the user (the dataset each time we apply KNN is a matrix containing the ratings for the rated item and all other items the user has not rated yet).
+"
+"['machine-learning', 'k-means', 'bias-variance-tradeoff', 'spectral-clustering', 'gaussian-mixture-models']"," Title: Why does k-means have more bias than spectral clustering and GMM?Body: I ran into a 2019-Entrance Exam question as follows:
+
+The answer given is (4), but some searching on Google suggested that maybe (1) and (2) are equivalent to (4). Why would k-means be the algorithm with the highest bias? (Can you please also provide references to valid material to study more?)
+"
+"['neural-networks', 'natural-language-processing', 'papers', 'transformer', 'attention']"," Title: What is different in each head of a multi-head attention mechanism?Body: I have a difficult time understanding the "multi-head" notion in the original transformer paper. What makes the learning in each head unique? Why doesn't the neural network learn the same set of parameters for each attention head? Is it because we break query, key and value vectors into smaller dimensions and feed each portion to a different head?
+"
+"['convolutional-neural-networks', 'image-recognition', 'optical-character-recognition']"," Title: Can a convolutional neural network classify text document images?Body: I know convolutional neural networks are commonly used for image recognition, but I was wondering if they would be able to distinguish between predominantly text-based documents vs something like objects. For example, if you trained using images of the first page of invoices matched to a vendor name, could you get a CNN to predict the vendor based on an image? If not, is there a different AI technique better suited that is purely image-based, or would it require OCR and leveraging the text in the invoice?
+Update: based on a comment, my question may not be clear. I'm not trying to see if the CNN can differentiate between a document (a mostly text-based image) and a photo. I want to know whether, based on a gif/jpeg/png of a document (no OCR performed), a CNN would be able to classify the documents, which basically could be used as a means of identifying the vendor.
+"
+"['deep-learning', 'comparison', 'optimization', 'stochastic-gradient-descent', 'momentum']"," Title: How are these equations of SGD with momentum equivalent?Body: I know this question may be so silly, but I can not prove it.
+In Stanford slide (page 17), they define the formula of SGD with momentum like this:
+$$
+v_{t}=\rho v_{t-1}+\nabla f(x_{t-1})
+\\
+x_{t}=x_{t-1}-\alpha v_{t},
+$$
+where:
+
+- $v_{t}$ is the momentum value
+- $\rho$ is a friction coefficient, say equal to 0.9
+- $\nabla f(x_{t-1})$ is the gradient of the objective function at iteration $t-1$
+- $x_t$ are the parameters
+- $\alpha$ is the learning rate
+
+However, in this paper and many other documents, they define the equation like this:
+$$
+v_{t}=\rho v_{t-1}+\alpha \nabla f(x_{t-1})
+\\
+x_{t}=x_{t-1}- v_{t},
+$$
+where $\rho$ and $\alpha$ still have the same value as in the previous formula.
+I think it should be
+$$v_{t}=\alpha \rho v_{t-1}+\alpha \nabla f(x_{t-1})$$
+if we want to multiply the learning rate inside the equation.
+In some other document (this) or normal form of momentum, they define like this:
+$$
+v_{t}= \rho v_{t-1}+ (1- \rho) \nabla f(x_{t-1})
+\\
+x_{t}=x_{t-1}-\alpha v_{t}
+$$
+I cannot understand how one can prove that those equations are equivalent. Can someone help me?
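+For reference, the substitution I have been trying to reason about (assuming the learning rate $\alpha$ is constant across iterations) is the following, though I am not sure whether it is a valid way to compare the first two forms:
+$$\tilde{v}_{t} := \alpha v_{t} \implies \tilde{v}_{t} = \alpha \rho v_{t-1} + \alpha \nabla f(x_{t-1}) = \rho \tilde{v}_{t-1} + \alpha \nabla f(x_{t-1}), \qquad x_{t} = x_{t-1} - \alpha v_{t} = x_{t-1} - \tilde{v}_{t}$$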
+"
+['r']," Title: Why does my neural network to predict $x$ given $\sin(x)$ not generalize?Body: I made a simple feedforward neural network (FFNN) to predict $x$ from $\sin(x)$. It failed. Does it mean the model has overfitted? Why doesn't it work?
+
+library(neuralnet)  # neuralnet() below comes from this package
+set.seed(1234567890)
+Var3 <- runif(500, 0, 20)
+mydata3 <- data.frame(Sin=sin(Var3),Var=Var3)
+set.seed(1234567890)
+winit <- runif(5500, -1, 1)
+#hidUnit <- c(9,1)
+set.seed(1234567890)
+nn3 <-neuralnet(formula = Var~Sin,data = mydata3,
+ hidden =c(4,2,1),startweights =winit,
+ learningrate = 0.01,act.fct = "tanh")
+
+plot(mydata3, cex=2,main='Predicting x from Sin(x)',
+ pch = 21,bg="darkgrey",
+ ylab="X",xlab="Sin(X)")
+points(mydata3[,1],predict(nn3,mydata3), col="darkred",
+ cex=1,pch=21,bg="red")
+
+legend("bottomleft", legend=c("true","predicted"), pch=c(21,21),
+ col = c("darkgrey","red"),cex = 0.65,bty = "n")
+
+"
+"['neural-networks', 'convolutional-neural-networks', 'recurrent-neural-networks', 'backpropagation', 'vanishing-gradient-problem']"," Title: How do I infer exploding or vanishing gradients in Keras?Body: It may already be obvious that I am just a practitioner and just a beginner to Deep Learning. I am still figuring out lots of "WHY"s and "HOW"s of DL.
+So, for example, if I train a feed-forward neural network, or an image classifier with CNNs, or just an OCR problem with GRUs, using something like Keras, and it performs very poorly or takes more time to train than it should be, it may be because of the gradients getting vanished or exploding, or some other problem.
+But, if it is due to the gradients getting very small or very big during the training, how do I figure that out? Doing what will I able to infer that something has happened due to the gradient values?
+And what are the precautions I should take to avoid it from the beginning (since training DL models with accelerated computing costs money) and if it has happened, how do I fix it?
+
+This question may sound like a duplicate of How to decide if gradients are vanishing?, but it is actually not, since that question focuses on CNNs, while I am asking about problems with gradients in all kinds of deep learning algorithms.
+"
+"['convolutional-neural-networks', 'tensorflow', 'transfer-learning', 'batch-normalization', 'fine-tuning']"," Title: Why shouldn't batch normalisation layers be learnable during fine-tuning?Body: I have been reading this TensorFlow tutorial on transfer learning, where they unfroze the whole model and then they say:
+
+When you unfreeze a model that contains BatchNormalization layers in order to do fine-tuning, you should keep the BatchNormalization layers in inference mode by passing training=False when calling the base model. Otherwise the updates applied to the non-trainable weights will suddenly destroy what the model has learned.
+
+My question is: why? The model's weights are adapting to the new data, so why do we keep the old mean and variance, which was calculated on ImageNet? This is very confusing.
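+For context, the pattern the tutorial is referring to looks roughly like this (my own minimal sketch; the choice of MobileNetV2 and the shapes are arbitrary):
+from tensorflow import keras
+
+base_model = keras.applications.MobileNetV2(input_shape=(160, 160, 3), include_top=False, weights="imagenet")
+base_model.trainable = True  # unfreeze all layers for fine-tuning
+
+inputs = keras.Input(shape=(160, 160, 3))
+# training=False keeps the BatchNormalization layers in inference mode,
+# so their moving mean/variance are not updated while fine-tuning
+x = base_model(inputs, training=False)
+x = keras.layers.GlobalAveragePooling2D()(x)
+outputs = keras.layers.Dense(1)(x)
+model = keras.Model(inputs, outputs)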
+"
+"['comparison', 'search', 'a-star', 'heuristic-functions', 'evaluation-functions']"," Title: What is the difference between the heuristic function and the evaluation function in A*?Body: I am reading college notes on state search space. The notes (which are not publicly available) say:
+
+
+- To do state-search space, the strategy involves two parts: defining a heuristic function, and identifying an evaluation function.
+
+- The heuristic is a smart search of the available space. The evaluation function may be well-defined (e.g. the solution solves a problem and receives a score) or may itself be the heuristic (e.g. if chess says pick A or B as the next move and picks A, the evaluation function is the heuristic).
+
+- Understand the difference between the heuristic search algorithm and the heuristic evaluation function.
+
+
+
+I'm trying step 3 (to understand). Can I check, using the A* search as an example, that the:
+Heuristic function: estimated cost from the current node to the goal, i.e. it's a heuristic that's calculating the simplest way to get to the goal (in A*; $h(n)$), so the heuristic function is calculating $h(n)$ for a series of options and picking the best one.
+Evaluation function: $f(n) = g(n) + h(n)$.
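+To make my reading concrete, here is a small sketch of how I picture $g(n)$, $h(n)$ and $f(n)$ fitting together in A* (the function names and structure are my own):
+import heapq
+import itertools
+
+def a_star(start, goal, neighbors, cost, h):
+    # h(n): heuristic function, the estimated cost from n to the goal
+    # f(n) = g(n) + h(n): evaluation function used to order the frontier
+    counter = itertools.count()  # tie-breaker so nodes never get compared directly
+    frontier = [(h(start), next(counter), start)]
+    g = {start: 0}               # g(n): cost of the best known path from start to n
+    while frontier:
+        _, _, node = heapq.heappop(frontier)
+        if node == goal:
+            return g[node]
+        for nxt in neighbors(node):
+            new_g = g[node] + cost(node, nxt)
+            if nxt not in g or new_g < g[nxt]:
+                g[nxt] = new_g
+                heapq.heappush(frontier, (new_g + h(nxt), next(counter), nxt))
+    return None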
+"
+"['machine-learning', 'classification', 'decision-trees', 'features']"," Title: What does the depth of a decision tree depend on?Body: In these notes, we have the following statement
+
+The depth of a learned decision tree can be larger than the number of training examples used to create the tree
+
+This statement is false, according to the same notes, where it is written
+
+False: Each split of the tree must correspond to at least one training example, therefore, if there are $n$ training examples, a path in the tree can have length at most $n$
+Note: There is a pathological situation in which the depth of a learned decision tree can be larger than number of training examples $n$ - if the number of features is larger than $n$ and there exist training examples which have same feature values but different labels.
+
+I had written on my notes that the depth of a decision tree only depends on the number of features of the training set and not on the number of training samples. So, what does the depth of the decision tree depend on?
+"
+"['reinforcement-learning', 'reference-request', 'policies', 'kl-divergence', 'wasserstein-metric']"," Title: Are there some notions of distance between two policies?Body: I want to determine some distance between two policies $\pi_1 (a \mid s)$ and $\pi_2 (a \mid s)$, i.e. something like $\vert \vert \pi_1 (a \mid s) - \pi_2(a \mid s) \vert \vert$, where $\pi_i (a\mid s)$ is the vector $(\pi_i (a_1 \mid s), \dots, \pi_i(a_n \mid s))$. I am looking for a sensible notion for such a distance.
+Are there some standard norms/metrics used in the literature for determining a distance between policies?
+"
+"['deep-learning', 'natural-language-processing', 'transformer', 'time-series', 'attention']"," Title: How to handle long sequences with transformers?Body: I have a time series sequence with 10 million steps. In step $t$, I have a 400 dimensional feature vector $X_t$ and a scalar value $y_t$ which I want to predict during inference time and I know during the train time. I want to use a transformer model. I have 2 questions:
+
+- If I want to embed the 400 dimensional input feature vector into another space before feeding into the transformer, what are the pros and cons of using let's say 1024 and 64 for the embedding space dimension? Should I use a dimension more than 400 or less?
+- When doing position embedding, I cannot use a maximum position length of 10 million as that blows up the memory. What is the best strategy here if I want to use maximum position length of 512? Should I chunk the 10 million steps into blocks of size 512 and feed each block separately into the transformer? If so, how can I connect the subsequent blocks to take full advantage of parallelization while keeping the original chronological structure of the sequence data?
+
+"
+"['machine-learning', 'terminology', 'papers', 'naive-bayes', 'selection-bias']"," Title: $\frac{P(x_1 \mid y, s = 1) \dots P(x_n \mid y, s = 1) P(y \mid s = 1)}{P(x \mid s = 1)}$ indicates that naive Bayes learners are global learners?Body: I am currently studying the paper Learning and Evaluating Classifiers under Sample Selection Bias by Bianca Zadrozny. In section 3. Learning under sample selection bias, the author says the following:
+
+We can separate classifier learners into two categories:
+
+- local: the output of the learner depends asymptotically only on $P(y \mid x)$
+- global: the output of the learner depends asymptotically both on $P(x)$ and on $P(y \mid x)$.
+
+The term "asymptotically" refers to the behavior of the learner as the number of training examples grows. The names "local" and "global" were chosen because $P(x)$ is a global distribution over the entire input space, while $P(y \mid x)$ refers to many local distributions, one for each value of $x$. Local learners are not affected by sample selection bias because, by definition $P(y \mid x, s = 1) = P(y \mid x)$ while global learners are affected because the bias changes $P(x)$.
+
+Then, in section 3.1.1. Naive Bayes, the author says the following:
+
+In practical Bayesian learning, we often make the assumption that the features are independent given the label $y$, that is, we assume that
+$$P(x_1, x_2, \dots, x_n \mid y) = P(x_1 \mid y) P(x_2 \mid y) \dots P(x_n \mid y).$$
+This is the so-called naive Bayes assumption.
+With naive Bayes, unfortunately, the estimates of $P(y \mid x)$ obtained from the biased sample are incorrect. The posterior probability $P(y \mid x)$ is estimated as
+$$\dfrac{P(x_1 \mid y, s = 1) \dots P(x_n \mid y, s = 1) P(y \mid s = 1)}{P(x \mid s = 1)} ,$$
+which is different (even asymptotically) from the estimate of $P(y \mid x)$ obtained with naive Bayes without sample selection bias. We cannot simplify this further because there are no independence relationships between each $x_i$, $y$, and $s$. Therefore, naive Bayes learners are global learners.
+
+Since it is said that, for global learners, the output of the learner depends asymptotically both on $P(x)$ and on $P(y \mid x)$, what is it about $\dfrac{P(x_1 \mid y, s = 1) \dots P(x_n \mid y, s = 1) P(y \mid s = 1)}{P(x \mid s = 1)}$ that indicates that naive Bayes learners are global learners?
+
+EDIT: To be clear, if we take the example given for the local learner case (section 3.1. Bayesian classifiers), then it is evident:
+
+Bayesian classifiers compute posterior probabilities $P(y \mid x)$ using Bayes' rule:
+$$P(y \mid x) = \dfrac{P(x \mid y)P(y)}{P(x)}$$
+where $P(x \mid y)$, $P(y)$ and $P(x)$ are estimated from the training data. An example $x$ is classified by choosing the label $y$ with the highest posterior $P(y \mid x)$.
+We can easily show that bayesian classifiers are not affected by sample selection bias. By using the biased sample as training data, we are effectively estimating $P(x \mid y, s = 1)$, $P(x \mid s = 1)$ and $P(y \mid s = 1)$ instead of estimating $P(x \mid y)$, $P(y)$ and $P(x)$. However, when we substitute these estimates into the equation above and apply Bayes' rule again, we see that we still obtain the desired posterior probability $P(y \mid x)$:
+$$\dfrac{P(x \mid y, s = 1) P(y \mid s = 1)}{P(x \mid s = 1)} = P(y \mid x, s = 1) = P(y \mid x)$$
+since we are assuming that $y$ and $s$ are independent given $x$. Note that even though the estimates of $P(x \mid y, s = 1)$, $P(x \mid s = 1)$ and $P(y \mid s = 1)$ are different from the estimates of $P(x \mid y)$, $P(x)$ and $P(y)$, the differences cancel out. Therefore, bayesian learners are local learners.
+
+Note that we get $P(y \mid x)$. However, in the global case, it is not clear how we get $P(x)$ and $P(y \mid x)$ (as is required for global leaners) from $\dfrac{P(x_1 \mid y, s = 1) \dots P(x_n \mid y, s = 1) P(y \mid s = 1)}{P(x \mid s = 1)}$.
+"
+"['reinforcement-learning', 'time-series', 'delayed-rewards']"," Title: How to deal with the time delay in reinforcement learning?Body: I have a question regarding the time delay in reinforcement learning (RL).
+In the RL, one has state, reward and action. It is usually assumed that (as far as I understand it) when the action is executed on the system, the state changes immediately and that the new state can then be analysed (influencing the reward) to determine the next action. However, what if there is a time delay in this process. For example, when some action is executed at time $t_1$, we can only get its effect on the system at $t_2$ (You can imagine a flow: the actuator is in the upstream region and the sensor is in the downstream region, so that there will be a time delay between the action and the state). How do we deal with this time delay in RL?
+"
+"['convolutional-neural-networks', 'training', 'overfitting', 'loss']"," Title: Why do the training and validation loss curves diverge?Body: I was training a CNN model on TensorFlow. After a while I came back and saw this loss curve:
+
+The green curve is the training loss and the gray one is the validation loss. I know that before epoch 394 the model is heavily overfitted, but I have no idea what happened after that.
+Also, here are the accuracy curves, if it helps:
+
+I'm using categorical cross-entropy and this is the model I am using:
+
+and here is a link to the PhysioNet challenge I am working on: https://physionet.org/content/challenge-2017/1.0.0/
+"
+"['reinforcement-learning', 'logistic-regression']"," Title: How to frame this problem using RL?Body: How should this problem be framed in the domain of RL for preventing users from exceeding their bank account balance and being overdrawn?
+For example, a user has 1000 in an account and proceeds to withdraw 300, 400, and 500, making the user overdrawn by 200 ((300 + 400 + 500) - 1000).
+Treating this as a supervised learning problem, I could use logistic regression. The input features are the transaction amounts; for a training instance they would be 300, 400, 500, and the output label indicates whether the account is overdrawn or not, with corresponding values of 1 and 0 respectively. For simplicity, we will assume the number of transactions is always 3.
+For RL, a state could be represented as a series of transactions, but how should the reward be assigned?
+Update:
+Here is my RL implementation of the problem:
+import torch
+from collections import defaultdict
+gamma = .1
+alpha = 0.1
+epsilon = 0.1
+n_episode = 2000
+overdraft_limit = 1000
+
+length_episode = [0] * n_episode
+total_reward_episode = [0] * n_episode
+
+episode_states = [[700,100,200,290,500] , [400,100,200,300,500] , [212, 500,100,100,200,500]]
+
+def gen_epsilon_greedy_policy(n_action, epsilon):
+ def policy_function(state, Q):
+ probs = torch.ones(n_action) * epsilon / n_action
+ best_action = torch.argmax(Q[state]).item()
+ probs[best_action] += 1.0 - epsilon
+ action = torch.multinomial(probs, 1).item()
+ return action
+ return policy_function
+
+def is_overdrawn(currentTotal):
+ return currentTotal >= overdraft_limit
+
+# Actions are overdrawn or not, 0 - means it is not overdrawn, 1 - means that it will be overdrawn
+def get_reward(action, currentTotal):
+ if action == 0 and is_overdrawn(currentTotal):
+ return -1
+ elif action == 0 and not is_overdrawn(currentTotal):
+ return 1
+ if action == 1 and is_overdrawn(currentTotal):
+ return 1
+ elif action == 1 and not is_overdrawn(currentTotal):
+ return -1
+ else :
+ raise Exception("Action not found")
+
+def q_learning(gamma, n_episode, alpha,n_action):
+ """
+ Obtain the optimal policy with off-policy Q-learning method
+ @param gamma: discount factor
+ @param n_episode: number of episodes
+ @return: the optimal Q-function, and the optimal policy
+ """
+ Q = defaultdict(lambda: torch.zeros(n_action))
+ for ee in episode_states :
+ for episode in range(n_episode):
+ state = ee[0]
+ index = 0
+ currentTotal = 0
+ while index < len(ee)-1 :
+ currentTotal = currentTotal + state
+ next_state = ee[index+1]
+ action = epsilon_greedy_policy(state, Q)
+# print(action)
+ reward = get_reward(action, currentTotal)
+ td_delta = reward + gamma * torch.max(Q[next_state]) - Q[state][action]
+ Q[state][action] += alpha * td_delta
+
+ state = next_state
+ index = index + 1
+
+ length_episode[episode] += 1
+ total_reward_episode[episode] += reward
+
+ policy = {}
+ for state, actions in Q.items():
+ policy[state] = torch.argmax(actions).item()
+ return Q, policy
+
+epsilon_greedy_policy = gen_epsilon_greedy_policy(2, epsilon)
+
+optimal_Q, optimal_policy = q_learning(gamma, n_episode, alpha, 2)
+
+print('The optimal policy:\n', optimal_policy)
+print('The optimal Q:\n', optimal_Q)
+
+This code prints:
+The optimal policy:
+ {700: 0, 100: 0, 200: 1, 290: 1, 500: 0, 400: 0, 300: 1, 212: 0}
+The optimal Q:
+ defaultdict(<function q_learning.<locals>.<lambda> at 0x7f9371b0a3b0>, {700: tensor([ 1.1110, -0.8890]), 100: tensor([ 1.1111, -0.8889]), 200: tensor([-0.8889, 1.1111]), 290: tensor([-0.9998, 1.0000]), 500: tensor([ 1.1111, -0.8889]), 400: tensor([ 1.1110, -0.8890]), 300: tensor([-1.0000, 1.0000]), 212: tensor([ 1.1111, -0.8888])})
+
+The optimal policy tells us that if 700 is added to the balance, then the customer will not overdraw (0), and if 200 is added to the balance, then the customer will overdraw (1). What avenues can I explore to improve upon this method? It is quite basic, but I'm unsure what approach I should take to improve the solution.
+For example, this solution just looks at the most recent additions to the balance to determine if the customer is overdrawn. Is this a case of adding new features to the training data?
+I'm just requesting a critique on this solution so I can improve it. How can I improve the representation of the state?
+"
+"['reinforcement-learning', 'comparison', 'value-functions', 'bellman-equations']"," Title: Are the state-action values and the state value function equivalent for a given policy?Body: Are the state-action values and the state value function equivalent for a given policy? I would assume so as the value function is defined as $V(s)=\sum_a \pi(a|s)Q_{\pi}(s,a)$. If we are operating a greedy policy and hence acting optimally, doesn't this mean that in fact the policy is deterministic and then $\pi(a|s)$ is $1$ for the optimal action and $0$ for all others? Would this then lead to an equivalence between the two?
+Here is my work to formulate some form of proof, where I start with the idea that a policy $\pi^*$ is defined to be better than the current policy $\pi$ if, for all states, $Q_{\pi}(s,\pi^*(s)) \geq V_{\pi}(s)$:
+I iteratively apply the optimal policy to each time step until I eventually get to a fully optimal time step of rewards
+$$V_{\pi}(s) \leq Q_{\pi}(s,\pi^*(s))$$
+$$= \mathbb{E}_{\pi^*}[R_{t+1}+\gamma V_{\pi}(S_{t+1}) \mid S_t=s]$$
+$$\leq \mathbb{E}_{\pi^*}[R_{t+1}+\gamma Q_{\pi}(S_{t+1},\pi^*(S_{t+1})) \mid S_t=s]$$
+$$\leq \mathbb{E}_{\pi^*}[R_{t+1}+\gamma R_{t+2}+\gamma^2 Q_{\pi}(S_{t+2},\pi^*(S_{t+2})) \mid S_t=s]$$
+$$\leq \mathbb{E}_{\pi^*}[R_{t+1}+\gamma R_{t+2}+\dots \mid S_t=s]$$
+$$= V_{\pi^*}(s)$$
+I would say that our final two lines are in fact inequalities, and for me this makes intuitive sense in that if we are always taking a deterministic greedy action our value function and Q function are the same. As detailed here, for a given policy and state we have that $V(s)=\sum_a \pi(a|s)Q_{\pi}(s,a)$ and if the policy is optimal and hence greedy then $\pi(a|s)$ is deterministic.
+"
+"['neural-networks', 'objective-functions', 'activation-functions', 'regression']"," Title: Which NN would you choose to estimate a continuous function $f:\mathbb R^2 \rightarrow \mathbb R$?Body: Suppose we want to estimate a continuous function $f:\mathbb R^2 \rightarrow \mathbb R$ based on a sample using a NN (around 1000 examples). This function is not bounded. Which architecture would you choose ? How many layers/neurons ? Which activation functions ? Which loss function ?
+Intuitively, I would go with one hidden layer, 2 neurons, the $L^2$ loss, and maybe the bent identity for the output and a sigmoid in the hidden layer?
+What are the advantages of doing something "fancier" than that?
+Would you also have chosen to use a NN for this job or would you have considered a regression SVM for example or something else (knowing that precision is the goal)?
+"
+"['convolutional-neural-networks', 'generative-adversarial-networks']"," Title: Can I start with perfect discriminator in GAN?Body: In many implementations/tutorials of GANs that I've seen so far (e.g. this), the generator and discriminator start with no prior knowledge. They continuously improve their performance with training. This makes me wonder — is it possible to use a pre-trained discriminator? I have two motivations for doing so:
+
+- Eliminating the overhead of training the discriminator
+- Being able to use already existing cool models
+
+Would the generator be able to learn just the same, or is it dependent on the fact that they start from scratch?
+"
+"['reinforcement-learning', 'markov-decision-process', 'monte-carlo-methods', 'model-based-methods']"," Title: When we have multiple traces, do we average over traces or the total number of times we have visited that state?Body: I am confused about the workings of the first- and every-visit MC.
+My first question is, when we have multiple traces, do we average over traces or the total number of times we have visited that state?
+So, in the example:
+$$\tau_1 = \text{House} +3, \text{House} +2, \text{School} -4, \text{House} +4, \text{School} -3, \text{Holidays}$$
+$$\tau_2 = \text{House} -2, \text{House} +3, \text{School} -3, \text{Holidays},$$
+where we have states of either House, Holidays, or School, with the numerical values being the immediate rewards.
+For every-visit MC to find the state value of HOUSE, with $\gamma$=1, my intuition would be to create a return list, R, that looks like the following
+$$R_1=[3+2−4+4−3, 2−4+4−3, 4−3]= [2, −1, 1]$$
+$$R_2=[−2+3−3, 3−3]=[−2, 0]$$
+$$R_1+R_2=[2,−1, 1,−2, 0]$$
+which, when averaged over the 5 visits, gives 0, the correct answer, but could you confirm whether the methodology is correct?
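+For clarity, here is a minimal sketch of the every-visit computation I described (assuming $\gamma = 1$ and writing each trace as (state, immediate reward) pairs):
+# every-visit Monte Carlo returns for one state, gamma = 1
+trace_1 = [("House", 3), ("House", 2), ("School", -4), ("House", 4), ("School", -3)]
+trace_2 = [("House", -2), ("House", 3), ("School", -3)]
+
+def every_visit_returns(trace, target_state):
+    """Return from every visit of target_state: sum of rewards from that step onwards."""
+    returns = []
+    for i, (state, _) in enumerate(trace):
+        if state == target_state:
+            returns.append(sum(r for _, r in trace[i:]))
+    return returns
+
+all_returns = every_visit_returns(trace_1, "House") + every_visit_returns(trace_2, "House")
+print(all_returns)                           # [2, -1, 1, -2, 0]
+print(sum(all_returns) / len(all_returns))   # 0.0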
+However, another approach would be to compute the average returns for each trace. Which is correct?
+"
+"['reinforcement-learning', 'monte-carlo-methods']"," Title: Is my pseudocode titled ""Monte Carlo Exploring Starts (with model)"" correct?Body: Reinforcement Learning: An Introduction second edition, Richard S. Sutton and Andrew G. Barto:
+
+
+- We made two unlikely assumptions above in order to easily obtain this guarantee of
+convergence for the Monte Carlo method. ... For now we focus on the assumption that policy evaluation operates on an infinite
+number of episodes. This assumption is relatively easy to remove. In fact, the same issue
+arises even in classical DP methods such as iterative policy evaluation, which also converge
+only asymptotically to the true value function.
+
+
+
+
+- There is a second approach to avoiding the infinite number of episodes nominally
+required for policy evaluation, in which we give up trying to complete policy evaluation
+before returning to policy improvement. On each evaluation step we move the value
+function toward $q_{\pi_k}$, but we do not expect to actually get close except over many steps.
+We used this idea when we first introduced the idea of GPI in Section 4.6. One extreme
+form of the idea is value iteration, in which only one iteration of iterative policy evaluation
+is performed between each step of policy improvement. The in-place version of value
+iteration is even more extreme; there we alternate between improvement and evaluation
+steps for single states.
+
+
+The original pseudocode:
+
+Monte Carlo ES (Exploring Starts), for estimating $\pi \approx \pi_{*}$
+Initialize:
+$\quad$ $\pi(s) \in \mathcal{A}(s)$ (arbitrarily), for all $s \in \mathcal{S}$
+$\quad$ $Q(s, a) \in \mathbb{R}$ (arbitrarily), for all $s \in \mathcal{S}, a \in \mathcal{A}(s)$
+$\quad$ $Returns(s, a) \leftarrow$ empty list, for all $s \in \mathcal{S}, a \in \mathcal{A}(s)$
+Loop forever (for each episode):
+$\quad$ Choose $S_{0} \in \mathcal{S}, A_{0} \in \mathcal{A}\left(S_{0}\right)$ randomly such that all pairs have probability $> 0$
+$\quad$ Generate an episode from $S_{0}, A_{0},$ following $\pi: S_{0}, A_{0}, R_{1}, \ldots, S_{T-1}, A_{T-1}, R_{T}$
+$\quad$ $G \leftarrow 0$
+$\quad$ Loop for each step of episode, $t=T-1, T-2, \ldots, 0$
+$\quad\quad$ $G \leftarrow \gamma G+R_{t+1}$
+$\quad\quad$ Unless the pair $S_{t}, A_{t}$ appears in $S_{0}, A_{0}, S_{1}, A_{1} \ldots, S_{t-1}, A_{t-1}:$
+$\quad\quad\quad$ Append $G$ to $Returns\left(S_{t}, A_{t}\right)$
+$\quad\quad\quad$ $Q\left(S_{t}, A_{t}\right) \leftarrow \text{average}\left(Returns\left(S_{t}, A_{t}\right)\right)$
+$\quad\quad\quad$ $\pi\left(S_{t}\right) \leftarrow \arg \max _{a} Q\left(S_{t}, a\right)$
+
+I want to make the same algorithm but with a model. The book states:
+
+
+- With a model, state values alone are sufficient to determine a policy; one simply looks ahead one step and chooses whichever action leads to the best combination of reward and next state, as we did in the chapter on DP.
+
+
+So based on the 1st quote I must use "stars exploration" and "one evaluation — one improvement" ideas (as well as in model-free version) to make the algorithm converge.
+My version of the pseudocode:
+
+Monte Carlo ES (Exploring Starts), for estimating $\pi \approx \pi_{*}$ (with model)
+Initialize:
+$\quad$ $\pi(s) \in \mathcal{A}(s)$ (arbitrarily), for all $s \in \mathcal{S}$
+$\quad$ $V(s) \in \mathbb{R}$ (arbitrarily), for all $s \in \mathcal{S}$
+$\quad$ $Returns(s) \leftarrow$ empty list, for all $s \in \mathcal{S}$
+Loop forever (for each episode):
+$\quad$ Choose $S_{0} \in \mathcal{S}, A_{0} \in \mathcal{A}\left(S_{0}\right)$ randomly such that all pairs have probability $> 0$
+$\quad$ Generate an episode from $S_{0}, A_{0},$ following $\pi: S_{0}, A_{0}, R_{1}, \ldots, S_{T-1}, A_{T-1}, R_{T}$
+$\quad$ $G \leftarrow 0$
+$\quad$ Loop for each step of episode, $t=T-1, T-2, \ldots, 1$:
+$\quad\quad$ $G \leftarrow \gamma G+R_{t+1}$
+$\quad\quad$ Unless $S_{t}$ appears in $S_{0}, S_{1}, \ldots, S_{t-1}:$
+$\quad\quad\quad$ Append $G$ to $Returns \left(S_{t}\right)$
+$\quad\quad\quad$ $V\left(S_{t}\right)\leftarrow\text{average}\left(Returns\left(S_{t}\right)\right)$
+$\quad\quad\quad$ $\pi\left(S_{t-1}\right) \leftarrow \operatorname{argmax}_{a} \sum_{s^{\prime}, r} p\left(s^{\prime}, r \mid S_{t-1}, a\right)\left[\gamma V\left(s^{\prime}\right)+r\right]$
+
+— Here I update the policy at $S_{t-1}$ because, in the step before, we updated $V(S_{t})$, and changes to $V(S_{t})$ don't affect $\pi(S_{t})$ but do affect $\pi(S_{t-1})$, since $S_{t}$ is among the successor states $s'$ of $S_{t-1}$.
+"
+"['probability-distribution', 'kl-divergence', 'wasserstein-metric', 'total-variational-distance']"," Title: Why is KL divergence used so often in Machine Learning?Body: The KL Divergence is quite easy to compute in closed form for simple distributions -such as Gaussians- but has some not-very-nice properties. For example, it is not symmetrical (thus it is not a metric) and it does not respect the triangular inequality.
+What is the reason it is used so often in ML? Aren't there other statistical distances that can be used instead?
+"
+"['deep-learning', 'convolutional-neural-networks', 'training', 'datasets', 'data-augmentation']"," Title: Is there any rule of thumb to determine the amount of data needed to train a CNNBody: I am training an AlexNet Convolutional Neural Network to classify images in a dataset. I want to know if there is any general rule for using data augmentation in training a neural network. How can I make sure about the amount of data, and how can I know if I need more data?
+"
+"['neural-networks', 'deep-learning', 'datasets', 'math', 'supervised-learning']"," Title: Can we use ML to do anything else other than predicting (in the case of mathematical problems)?Body: (The math problem here just serves as an example, my question is on this type of problems in general).
+Given two Schur polynomials, $s_\mu$, $s_\nu$, we know that we can decompose their product into a linear combination of other Schur polynomials.
+$$s_\mu s_\nu = \sum_\lambda c_{\mu,\nu}^\lambda s_\lambda$$
+and we call $c_{\mu,\nu}^\lambda$ the LR coefficient (always a non-negative integer).
+Hence, a natural supervised learning problem is to predict whether the LR coefficient is of a certain value or not given the tuple $<\mu, \nu, \lambda>$. This is not difficult.
+My question is: can we either use ML/RL to do anything else other than predicting (in this situation) or extract anything from the prediction result? In other words, a statement like "oh, I am 98% confident that this LR coefficient is 0" does not imply anything mathematically interesting?
+"
+"['tensorflow', 'data-science', 'u-net', 'accuracy']"," Title: Why do I get higher average dice accuracy for less dataBody: I am working on image segmentation of MRI thigh images with deep learning (Unet). I noticed that I get a higher average dice accuracy over my predicted masks if I have less samples in the test data set.
+I am calculating it in tensorflow as
+from tensorflow.keras import backend as K
+
+def dice_coefficient(y_true, y_pred, smooth=0.00001):
+    # flatten both masks and compute 2*|intersection| / (|y_true| + |y_pred|)
+    y_true_f = K.flatten(y_true)
+    y_pred_f = K.flatten(y_pred)
+    intersection = K.sum(y_true_f * y_pred_f)
+    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
+
+The difference is 0.003 if I have 4x more samples.
+I am calculating the dice coefficient over each 2D MRI slice.
+Why could this be?
+This figure shows how the accuracy decreases with the fraction of samples. I start with 0.1 of the data until the whole data set. The splitting of the data was random
+"
+"['reinforcement-learning', 'deep-rl', 'alphazero', 'chess', 'notation']"," Title: In AlphaZero, do we need to store the data of terminal states?Body: I have a question about the training data used during the update/back-propagation step of the neural network in AlphaZero.
+From the paper:
+
+The data for each time-step $t$ is stored as ($s_t, \pi_t, z_t$) where $z_t = \pm r_T$ is the game winner from the perspective of the current player at step $t$. In parallel (Figure 1b), new network parameters $\Theta_i$ are trained from data ($s,\pi, z$) sampled uniformly among all time-steps of the last iteration(s) of self-play
+
+Regarding the policy at time $t$ ($\pi_t$), I understood this as the probability distribution of taking some action that is proportional to the visit count to each child node, i.e. during MCTS, given some parent node (state) at time $t$, if some child node (subsequent state) $a$ is visited $N_a$ times and all children nodes are visited $\sum_b N_b$ times, then the probability of $a$ (and its corresponding move) being sampled is $\frac{N_a}{\sum_b N_b}$, and this parametrizes the distribution $\pi_t$. Is this correct? If this is the case, then for some terminal state $T$, we can't parametrize a distribution because we have no children nodes (states) to visit. Does that mean we don't add ($s_T, \pi_T, z_T$) to the training data?
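+To make sure I am describing the same thing, here is a minimal sketch of the interpretation I have in mind (the visit counts are made up for illustration, and I am ignoring any temperature/exponent on the counts):
+import numpy as np
+
+# hypothetical visit counts N(s, a) of the root's children after MCTS
+visit_counts = np.array([90.0, 5.0, 4.0, 1.0])
+
+pi_t = visit_counts / visit_counts.sum()      # what I think pi_t is
+action = np.random.choice(len(pi_t), p=pi_t)  # sampling a move from it
+print(pi_t, action)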
+Also, a followup question regarding the loss function:
+
+$l = (z-v)^2 - \pi^T log\textbf{p} + c||\Theta||^2$
+
+I'm confused about this $\pi^T$ notation. My best guess is that this is a vector of actions sampled from all policies in the $N$ X $(s_t, \pi_t, z_t)$ minibatch, but I'm not sure. (PS the $T$ used in $\pi^T$ is different from the $T$ used to denote a terminal state if you look at the paper. Sorry for the confusion, I don't know how to write two different looking T's)
+"
+"['deep-learning', 'transformer', 'attention', 'weights', 'efficiency']"," Title: In the multi-head attention mechanism of the transformer, why do we need both $W_i^Q$ and ${W_i^K}^T$?Body: In the Attention is all you need paper, on the 4th page, we have equation 1, which describes the self-attention mechanism of the transformer architecture
+$$
+\text { Attention }(Q, K, V)=\operatorname{softmax}\left(\frac{Q K^{T}}{\sqrt{d_{k}}}\right) V
+$$
+Everything is fine up to here.
+Then they introduce the multi-head attention, which is described by the following equation.
+$$
+\begin{aligned}
+\text { MultiHead }(Q, K, V) &=\text { Concat}\left(\text {head}_{1}, \ldots, \text {head}_{\mathrm{h}}\right) W^{O} \\
+\text { where head}_{\mathrm{i}} &=\text {Attention}\left(Q W_{i}^{Q}, K W_{i}^{K}, V W_{i}^{V}\right)
+\end{aligned}
+$$
+Once the multi-head attention is motivated at the end of page 4, they state that for a single head (the $i$th head), the query $Q$ and key $K$ inputs are first linearly projected by $W_i^Q$ and $W_i^K$, then dot product is calculated, let's say $Q_i^p = Q W_i^Q$ and $K_i^p = K W_i^K$.
+Therefore, the dot product of the projected query and key becomes the following from simple linear algebra.
+$$Q_i^p {K_i^p}^\intercal = Q W_i^Q {W_i^K}^T K^T = Q W_i K^T,$$
+where
+$$W_i = W_i^Q {W_i^K}^T$$
+Here, $W$ is the outer product of query projection by the key projection matrix. However, it is a matrix with shape $d_{model} \times d_{model}$. Why did the authors not define only a $W_i$ instead of $W_i^Q$ and $W_i^K$ pair which have $2 \times d_{model} \times d_{k}$ elements? In deep learning applications, I think it would be very inefficient.
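+For reference, these are the raw parameter counts of the two options I am comparing (assuming the paper's values $d_{model} = 512$, $h = 8$, $d_k = d_{model}/h = 64$):
+d_model, h = 512, 8
+d_k = d_model // h
+
+per_head_pair = 2 * d_model * d_k   # W_i^Q and W_i^K kept separate
+per_head_full = d_model * d_model   # a single W_i = W_i^Q (W_i^K)^T
+
+print(per_head_pair, per_head_full)           # 65536 vs 262144 parameters per head
+print(h * per_head_pair, h * per_head_full)   # 524288 vs 2097152 for all heads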
+Is there something that I am missing, like these 2 matrices $W_i^Q$ and $W_i^K$ should be separate because of this and that?
+"
+"['reinforcement-learning', 'multi-armed-bandits', 'epsilon-greedy-policy', 'upper-confidence-bound']"," Title: Multi Armed Bandits with large number of armsBody: I'm dealing with a (stochastic) Multi Armed Bandit (MAB) with a large number of arms.
+Consider a pizza machine that produces a pizza depending on an input $i$ (equivalent to an arm). The (finite) set of arms $K$ is given by $K=X_1\times X_2 \times X_3\times X_4$ where $X_j$ denote the set of possible amounts of ingredient $j$.
+e.g.
+$X_1=\{$ small, medium, large $\}$ (amount of cheese) or
+$X_2=\{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10\}$ (slices of salami)
+Thus, running the pizza machine with input $i$ is equivalent to pulling arm $i\in K$. Due to the different permutations, the number of arms $|K|$ is very large (between 100,000 and 1,000,000). Depending on the pulled arm $i$, the machine generates a pizza (associated with a reward that indicates how delicious the pizza is).
+However, the machine's rewards are stochastic. Pulling an arm $i$ generates a reward according to an unknown (arm-specific) distribution $P_i$, with all rewards drawn from $P_i$ being i.i.d. In addition, it is possible to normalize all rewards to the interval [0,1].
+The above problem corresponds to the standard stochastic MAB problem, but is characterized by the large number of arms. In the case of the pizza machine, several days of computation time are available to determine the best pizza, so the number of iterations is allowed to be large as well.
+In my investigation of MAB algorithms addressing a large number of arms, I came across studies that could handle up to a few thousand arms.
+Are there algorithms in the MAB domain that specifically deal with large problem instances (e.g. with $|K|>100,000$)?
+"
+"['reinforcement-learning', 'deep-learning', 'deep-rl', 'chess', 'alphazero']"," Title: Stack of Planes as the Action Space Representation for AlphaZero (Chess)Body: I have a question regarding the action space of the policy network used in AlphaZero.
+From the paper:
+
+We represent the policy π(a|s) by a 8 × 8 × 73 stack of planes encoding a probability distribution over 4,672 possible moves. Each of the 8 × 8 positions identifies the square from which to “pick up” a piece. The first 56 planes encode possible ‘queen moves’ for any piece: a number of squares [1..7] in which the piece will be moved, along one of eight relative compass directions {N,NE,E,SE,S,SW,W,NW}......The policy in Go is represented identically to AlphaGo Zero (29), using a flat distribution over 19 × 19 + 1 moves representing possible stone placements and the pass move. We also tried using a flat distribution over moves for chess and shogi; the final result was almost identical although training was slightly slower.
+
+I don't understand why a stack of planes is used for the action space here. I'm also not entirely sure I understand how this representation is used. My guess is that for Chess, the 8x8 plane represents the board, and each square has a probability assigned to it of picking up a piece on that square (let's assume that all illegal moves haven't been masked yet, so all squares have probability mass on them). From there, we choose from possible 'Queen' type moves or 'Knight' type moves, which total to 73 different types of moves. Is this interpretation correct? How would one go from this representation to sampling a legal move (i.e. how is this used to parametrize a distribution I can actually sample moves from?)
+During MCTS when expanding a leaf node, we get $p_a$, the probability of taking some action $a$ from the policy head, so I would also need to be able to go from this 'planes' representation to the probability of taking a specific action.
+The paper also mentions trying out 'flat distributions', which I'm not entirely sure what this means either.
+"
+"['machine-learning', 'support-vector-machine', 'logistic-regression']"," Title: Does the image is logistic regression or SVM, and why?Body: Does the image is logistic regression or SVM, and why?
+
+"
+"['transformer', 'attention', 'weights']"," Title: In attention models with multiple layers, are weight matrices shared across layers?Body: In articles that describe neural architectures with multiple attention layers of the same form, are the weight matrices usually the same across the layers? Consider as an example, "Attention is all you need". The authors stack several layers of multi-head self-attention in which each layer has the same number of heads. Each head $i$ involves a trainable weight matrix $W_{i}^{Q}$. There is no subscript, superscript, or any other indication that this matrix is different for each layer. My questions is this: are there separate $W_{i}^{Q}$ for layers $1,2,3,...$ or is this a single matrix shared throughout layers?
+My intuition is that the authors of the paper wanted to cut down on notation, and that the matrices are different in different layers. But I want to be sure I understand this, since I see the same kind of thing in many other papers as well.
+"
+"['proofs', 'perceptron', 'xor-problem']"," Title: Is there a proof to explain why XOR cannot be linearly separable?Body: Can someone explain to me with a proof or example why you can't linearly separate XOR (and therefore need a neural network, the context I'm looking at it in)?
+I understand why it's not linearly separable if you draw it graphically (e.g. here), but I can't seem to find a formal proof somewhere, I wanted to try and understand it with either an equation or example written down. I'm wondering if one exists (I guess it has to do with contradictions?), but I can't seem to find it? I have seen this, but it's more a reason than a proof.
+"
+"['deep-learning', 'optimization', 'gradient-descent', 'optimizers']"," Title: In the update rule of RMSprop, do we divide by a matrix?Body: I've been trying to understand RMSprop for a long time, but there's something that keeps eluding me.
+Here is a screenshot from this video by Andrew Ng.
+
+From the element-wise comment, from what I understand, $dW$ and $db$ are matrices, so that must mean that $S_{dW}$ is a matrix (or tensor) as well.
+So, in the update rule, do they divide a matrix by another matrix? From what I saw on google, no such action exists.
+"
+"['deep-learning', 'computer-vision']"," Title: How to normalize for perceptual loss when training neural net from scratch?Body: Let's say we are training a new neural network from scratch. I calculate the mean and standard deviation of my dataset (assume I am training a fully convolutional neural net and my dataset is images) and I standardise each channel of all images based on that mean and standard deviation. My output will be another image.
+I want to use for example VGG for perceptual loss (VGG's weights will be frozen). Perceptual loss is when you input your prediction to a pretrained network to extract features from it. Then you do the same for the ground truth and the L2 distance between the features from ground truth and features from prediction is called perceptual loss.
+As far as I know, I am supposed to standardise my data based on the mean and standard deviation VGG was trained with (since I am using VGG for inference essentially), which is different than the mean and standard deviation of my dataset. What is the correct way to do this? Should I undo the standardization of my dataset by multiplying standard deviation and adding the original mean to the output of my network, and then restandardise using VGG's statistics to calculate the loss? Or should I continue without restandardising?
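+To make the first option explicit, this is the channel-wise transformation I mean (the dataset statistics below are made-up placeholders; the ImageNet mean/std are the values VGG is commonly normalised with):
+import torch
+
+# channel-wise statistics, shaped (1, 3, 1, 1) so they broadcast over (N, 3, H, W)
+my_mean = torch.tensor([0.30, 0.32, 0.29]).view(1, 3, 1, 1)      # my dataset (placeholder values)
+my_std  = torch.tensor([0.18, 0.17, 0.19]).view(1, 3, 1, 1)
+vgg_mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)  # ImageNet statistics
+vgg_std  = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)
+
+def to_vgg_space(x):
+    # undo my dataset's standardisation, then re-standardise with VGG's statistics
+    x = x * my_std + my_mean
+    return (x - vgg_mean) / vgg_std
+
+prediction = torch.rand(2, 3, 64, 64)    # placeholder network output
+vgg_input = to_vgg_space(prediction)     # what would be fed to the frozen VGG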
+"
+"['reinforcement-learning', 'keras', 'open-ai', 'gym', 'ddpg']"," Title: How to deal with KerasRL DDPG algorithm getting stuck in a local optima?Body: I am using KerasRL DDPG to try to learn a policy on my own custom environment, but the agent is stuck in a local optima although I am adding the OrnsteinUhlenbeck randomization process. I used the exact same DDPG to solve Pendulum-v0 and it works, but my environment is a more complex with a continuous space/action space.
+How do you deal with the local optima problem in reinforcement learning? Is it just an exploitation issue?
+More details:
+My state space is not pixels, it is numerical, in fact it's a metro line simulator and the state space is the velocity, the position of each train on the line and the number of passengers at each station. I need to control the different trains so I am not trying to control only one train but all the operational trains and each one can have different actions such as speed or not, stay longer on the next station or not etc.
+1/ I am using the same ANN architecture for the actor and critic: 3 FC layers with (512, 256, 256) hidden units.
+2/ Adam optimizer for both the actor and critic with a small lr=1e-7 and clipnorm=1.
+3/ nb_steps_warmup_critic=1000, nb_steps_warmup_actor=1000
+4/ SequentialMemory(limit=1000000, window_length=1)
+5/ The environment is a simulator of a metro line with a continuous state and action space
+"
+"['deep-learning', 'natural-language-processing', 'transformer', 'attention', 'performance']"," Title: Can the attention mechanism improve the performance in the case of short sequences?Body: I am aware that the attention mechanism can be used to deal with long sequences, where problems related to gradient vanishing and, more generally, representing effectively the whole sequence arise.
+However, I was wondering if attention, applied either to seq2seq RNN/GRU/LSTM or via Transformers, can contribute to improving the overall performance (as well as giving some sort of interpretability through the attention weights?) in the case of relatively short sequences (let's say around 20-30 elements each).
+"
+"['neural-networks', 'training', 'weights', 'weights-initialization']"," Title: Why is there a Uniform and Normal version of He / Xavier initialization in DL libraries?Body: Two of the most popular initialization schemes for neural network weights today are Xavier and He. Both methods propose random weight initialization with a variance dependent on the number of input and output units. Xavier proposes
+$$W \sim \mathcal{U}\Bigg[-\frac{\sqrt{6}}{\sqrt{n_{in}+n_{out}}},\frac{\sqrt{6}}{\sqrt{n_{in}+n_{out}}}\Bigg]$$
+for networks with $\text{tanh}$ activation function and He proposes
+$$W \sim \mathcal{N}\left(0,\sqrt{2/n_{in}}\right)$$
+for $\text{ReLU}$ activation. Both initialization schemes are implemented in the most commonly used deep learning libraries for python, PyTorch and TensorFlow.
+However, for both versions we have a normal and uniform version. Now the main argument of both papers is about the variance of the information at initialization time (which is dependent on the non-linearity) and that it should stay constant across all layers when back-propagating. I see how one can simply adjust the bounds $[-a,a]$ of a uniform variable in such a way that the random variable has the desired standard deviation and vice versa ($\sigma = a/\sqrt{3}$), but I'm not sure why we need a normal and a uniform version for both schemes? Wouldn't it be just enough to have only normal or only uniform? Or uniform Xavier and normal He as proposed in their papers?
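+As a quick sanity check of the $\sigma = a/\sqrt{3}$ conversion mentioned above (NumPy, large sample):
+import numpy as np
+
+sigma = 0.05                    # target standard deviation
+a = sigma * np.sqrt(3)          # bound of the matching uniform distribution U[-a, a]
+
+samples = np.random.uniform(-a, a, size=1_000_000)
+print(samples.std(), sigma)     # the two values should be very close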
+I can imagine uniform distributions are easier to sample from a computational point of view, but since we do the initialization operation only once at the beginning, the computational cost is negligible compared to that from training. Further uniform variables are bounded, so there are no long tail observations as one would expect in a normal. I suppose that's why both libraries have truncated normal initializations.
+Are there any theoretical, computational or empirical justifications for when to use a normal over a uniform, or a uniform over a normal weight initialization regardless of the final weight variance?
+"
+"['deep-learning', 'hyperparameter-optimization', 'bayesian-optimization', 'grid-search', 'random-search']"," Title: Bayesian hyperparameter optimization, is it worth it?Body: In the Deep Learning book by Goodfellow et al., section 11.4.5 (p. 438), the following claims can be found:
+
+Currently, we cannot unambiguously recommend Bayesian hyperparameter
+optimization as an established tool for achieving better deep learning results or for obtaining those results with less effort. Bayesian hyperparameter optimization sometimes performs comparably to human experts, sometimes better, but fails catastrophically on other problems. It may be worth trying to see if it works on a particular problem but is not yet sufficiently mature or reliable
+
+Personally, I never used Bayesian hyperparameter optimization. I prefer the simplicity of grid search and random search.
+As a first approximation, I'm considering easy AI tasks, such as multi-class classification problems with DNNs and CNNs.
+In which cases should I take it into consideration, is it worth it?
+"
+"['deep-learning', 'deep-neural-networks', 'residual-networks']"," Title: Regression For Elliptical Curve Public Key Generation Possible?Body: As part of a learning more about deep learning, I have been experimenting with writing ResNets with Dense layers to do different types of regression.
+I was interested in trying a harder problem and have been working on a network that, given a private key, could perform point multiplication along ECC curve to obtain a public key.
+I have tried training on a dataset of generated keypairs, but am seeing the test loss values bounce around like crazy with train loss values eventually decreasing after many epochs due to what I assume is overfitting.
+Is this public key generation problem even solvable with a deep learning architecture? If so, am I doing something wrong with my current approach?
+"
+"['machine-learning', 'datasets']"," Title: Why can't we combine both training and validation data, given that both types of data are used for developing the model?Body: Sorry if I sound confused. I read that data to be fed to a machine are divided into training, validation and test data. Both training and validation data are used for developing the model. Test data is used only for testing the model and no tuning of the model is done using test data.
+Why is there a need to separate out training and validation data since both sets of data are for developing/tuning the model? Why not keep things simple and combine both data sets into a single one?
+"
+"['reinforcement-learning', 'probability', 'inference', 'probabilistic-graphical-models']"," Title: In RL as probabilistic inference, why do we take a probability to be $\exp(r(s_t, a_t))$?Body: In section 2 the paper Reinforcement Learning and Control as Probabilistic Inference: Tutorial and Review the author is discussing formulating the RL problem as a probabilistic graphical model. They introduce a binary optimality variable $\mathcal{O}_t$ which denotes whether time step $t$ was optimal (1 if so, 0 otherwise). They then define the probability that this random variable equals 1 to be
+$$\mathbb{P}(\mathcal{O}_t = 1 | s_t, a_t) = \exp(r(s_t, a_t)) \; .$$
+My question is why do they do this? In the paper they make no assumptions about the value of the rewards (e.g. bounding it to be non-positive) so in theory the rewards can take any value and thus the RHS can be larger than 1. This is obviously invalid for a probability. It would make sense if there was some normalising constant, or if the author said that the probability is proportional to this, but they don't.
+I have searched online and nobody seems to have asked this question which makes me feel like I am missing something quite obvious so I would appreciate if somebody could clear this up for me please.
+"
+"['deep-learning', 'convolutional-neural-networks', 'comparison', 'terminology', 'architecture']"," Title: What's the difference between architectures and backbones?Body: In the paper "ForestNet: Classifying Drivers of Deforestation in Indonesia using Deep Learning on Satellite Imagery", the authors talk about using:
+
+- Feature Pyramid Networks (as the architecture)
+- EfficientNet-B2 (as the backbone)
+
+
+Performance Measures on the Validation Set. The RF model that only inputs data from the visible Landsat 8 bands achieved the lowest
+performance on the validation set, but the incorporation of auxiliary
+predictors substantially improved its performance. All of the CNN
+models outperformed the RF models. The best performing model, which we
+call ForestNet, used an FPN architecture with an EfficientNet-B2
+backbone. The use of SDA provided large performance gains on the
+validation set, and land cover pre-training and incorporating
+auxiliary predictors each led to additional performance improvements.
+
+What's the difference between architectures and backbones? I can't find much online. Specifically, what are their respective purposes? From a high-level perspective, what would integrating the two look like?
+"
+"['algorithm', 'optimization']"," Title: What is the best algorithm for optimizing profit, rather than making predictions?Body: I am new to machine learning, so I am not sure which algorithms to look at for my business problem. Most of what I am seeing in tools like KNIME are geared toward making a prediction/classification, and focusing on the accuracy of that single prediction/classification.
+Instead, in general terms, I want to optimize toward maximum profit of a business process/strategy, rather than simply trying to choose the "best" transaction from within a set of possible transactions, which is quite different. The latter approach will simply give the best "transaction success percentage", without regard for overall profit of the strategy in the aggregate.
+This is how the business problem is structured: Each Opportunity is a type of business strategy "game" between Entities. Each Entity is unaware, unaffected, and uninterested in conditions or events outside of the Opportunity, such as Observers. Each Opportunity is an independent event with no affect on other concurrent or future Opportunities, and with no effect on decisions unrelated to the Opportunity itself. Each Opportunity will have one and only one Awarded Entity, which is the Entity that "wins" the business process.
+Observers, however, may create a Market for each Opportunity. Within such an independent, ephemeral Market, the Observers may bid among themselves as to which Entity will be the Awarded Entity for the Opportunity. Each Bid is a fixed-size transaction. A Bid is associated with only one Entity within the Opportunity. Thus, a Bid is a type of vote on the outcome of the Opportunity. There is no limit to the number of Bids that an Observer may place into the Market, but each Observer may only place Bids on a single Entity within the Market. Thus, the total amount of Bids on an individual Entity within the Opportunity represent the confidence level, within that Market, of the prediction.
+At the resolution of the Opportunity, one Entity will be the Awarded Entity for the Opportunity. This determination is made based on factors outside of the control of the Observers. The Observers have no influence over the Opportunity nor the Entities within it. When the Awarded Entity is determined, the total value of Bids placed on that Entity are refunded to the Observers that placed them. Additionally, the value of all Bids placed on other Entities are shared among the Observers who bid correctly on the Awarded Entity. Each Observer that placed a correct Bid on the Awarded Entity is entitled to a fraction of the remaining Market value, in equal proportion to the number of fixed Bids placed. In other words, the Market is a zero sum scenario. Bids may be placed at any time during the duration of the Opportunity, from when its Market is created, up until a deadline just shortly before its resolution. The total number of outstanding Bids on each Entity is another data point that is available in real time, and which fluctuates during the duration of the Opportunity, based on total Bids placed and the ratios between Bids on each Entity participating in the Opportunity.
+To support the Observers' evaluation and prediction of Awarded Entity within the scope of Opportunities, there are thousands of data points available, as well as extensive history and analytics regarding each Entity involved. Each Observer will employ their own unique strategy to predict Opportunity outcomes. The objective of this algorithm is to optimize a prediction strategy that does not optimize for "percentage of correct predictions", but rather "maximum gain". Rather than be "most correct most often", the model should strive to use the data to create advantages for maximum gain in the aggregate, rather than strive to be the most correct. An Observer is rewarded not for being correct most often, but for recognizing inefficiencies in the Bids within the Market.
+I am considering hand-coding a genetic algorithm for this, so that I can write a custom fitness function that computes overall profitability of the strategy, and run the generations to optimize profit instead of individual selection accuracy. However, I'd rather use an out-of-the-box algorithm supported by a tool like KNIME if possible.
+Thank you!
+"
+['applications']," Title: Are there applications of Grenander's pattern theory in pattern recognition or for implementing algorithms?Body: I came across Grenander's work "Probabilities on Algebraic Structures" recently, and I found that much of Grenander's work focused on what he called "Pattern Theory." He's written many texts on the matter, and, from what I've seen, they seem like an attempt to unify some mathematical underpinnings of pattern representation. However, I'm not sure what this really means in practice, nor how it relates to results we already have in learning theory. The mathematical aspect of the work is really quite intriguing, but I am skeptical as to its practicality.
+Are there any applications of Grenander's pattern theory? Either for getting a better theoretical understanding of certain methods of pattern recognition or for directly implementing algorithms?
+Some links to what I'm referring to:
+
+"
+"['reinforcement-learning', 'ai-design', 'action-spaces']"," Title: How should I define the action space for a card game like Magic: The Gathering?Body: I'm trying to learn about reinforcement learning techniques. I have little background in machine learning from university, but never more than using a CNN on the MNIST database.
+My first project was to use reinforcement learning on tic-tac-toe and that went well.
+In the process, I thought about creating an AI that can play a card game like Magic: The Gathering, Yu-gi-oh, etc. However, I need to think of a way to define an action space. Not only are there thousands of combinations of cards possible in a single deck, but we also have to worry about the various types of decks the machine is playing and playing against.
+Although I know this is probably way too advanced for a beginner, I find attempting a project like this, challenging and stimulating. So, I looked into several different approaches for defining an action space. But I don't think this example falls into a continuous action space, or one in which I could remove actions when they are not relevant.
+I found this post on this stack exchange that seems to be asking the same question. However, the answer I found didn't seem to solve any of my problems.
+Wouldn't defining the action space as another level of game states just mask the exact same problem?
+My main question boils down to:
+Is there an easy/preferred way to make an action space for a game as complex as Magic? Or, is there another technique (other than RL) that I have yet to see that is better used here?
+"
+"['deep-learning', 'proofs', 'mean-squared-error']"," Title: How do I prove that the MSE is zero when all predictions are equal to the corresponding labels?Body: In the back-propogation algorithm, the error term is:
+$$
+E=\frac{1}{2}\sum_k(\hat{y}_k - y_k)^2,
+$$
+where $\hat{y}_k$ is the $k$-th output of the network and $y_k$ is the corresponding correct label (we work out the error by calculating predicted minus observed, squaring the result, summing over $k$, and dividing by 2).
+How do you prove that if this answer is $0$ (i.e., if $E=0$), then $\hat{y}_k=y_k$ for all $k$?
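+My rough attempt, which I am not sure is rigorous enough, is the one-line argument that a sum of non-negative terms can only be zero if every term is zero:
+$$E = \frac{1}{2}\sum_k(\hat{y}_k - y_k)^2 = 0 \;\Rightarrow\; (\hat{y}_k - y_k)^2 = 0 \text{ for all } k \;\Rightarrow\; \hat{y}_k = y_k \text{ for all } k,$$
+since each summand satisfies $(\hat{y}_k - y_k)^2 \geq 0$. Is this the right direction, and how would I state it more formally?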
+"
+"['natural-language-processing', 'reference-request', 'language-model', 'text-generation']"," Title: Are there any good alternatives to an LSTM language model for text generation?Body: I have a trained LSTM language model and want to use it to generate text. The standard approach for this seems to be:
+
+- Apply softmax function
+- Take a weighted random choice to determine the next word
+
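+For concreteness, a minimal sketch of those two steps as I currently implement them (assuming NumPy and that logits is the final-layer output for the next word):
+import numpy as np
+
+def sample_next_word(logits):
+    # softmax over the vocabulary, then a weighted random choice
+    exp = np.exp(logits - np.max(logits))   # subtract the max for numerical stability
+    probs = exp / exp.sum()
+    return np.random.choice(len(probs), p=probs)
+
+next_word_id = sample_next_word(np.array([2.0, 1.0, 0.1, -1.0]))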
+This is working reasonably well for me, but it would be nice to play around with other options. Are there any good alternatives to this?
+"
+"['neural-networks', 'reinforcement-learning', 'q-learning', 'dqn', 'deep-rl']"," Title: How to build a Neural Network to approximate the Q-function?Body: I am learning reinforcement learning with Q-learning using online resources, like blog posts, youtube videos, and books. At this point, I have learned the underpinning concepts of reinforcement learning and how to update the q values using a lookup table.
+Now, I want to create a neural network to replace the lookup table and approximate the Q-function, but I am not sure how to design the neural network. What would be the architecture for my neural network? What are the inputs and outputs?
+Here are the two options I can think of.
+
+- The input of the neural network is $(s_i, a_i)$ and the output is $Q(s_i,a_i)$
+
+- The input is $(s_i)$ and the output is a vector $[Q(s_i,a_1), Q(s_i,a_2), \dots, Q(s_i,a_N)]$
+
+
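+To make the second option concrete, here is a minimal PyTorch sketch of what I imagine (the layer sizes are arbitrary placeholders):
+import torch
+import torch.nn as nn
+
+class QNetwork(nn.Module):
+    # option 2: input is the state, output is one Q-value per action
+    def __init__(self, state_dim, n_actions, hidden=64):
+        super().__init__()
+        self.net = nn.Sequential(
+            nn.Linear(state_dim, hidden),
+            nn.ReLU(),
+            nn.Linear(hidden, n_actions),   # [Q(s, a_1), ..., Q(s, a_N)]
+        )
+
+    def forward(self, state):
+        return self.net(state)
+
+q = QNetwork(state_dim=4, n_actions=2)
+q_values = q(torch.rand(1, 4))           # shape (1, 2): one Q-value per action
+greedy_action = q_values.argmax(dim=1)   # pick the action with the highest Q-value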
+Is there any other alternative architecture?
+Also, how to reason about which model would be logically better?
+"
+"['reinforcement-learning', 'function-approximation', 'feature-selection', 'tile-coding', 'coarse-coding']"," Title: How does uniform offset tiling work with function approximation?Body: I get the fundamental idea of how tilings work, but, in Barton and Sutton's book, Reinforcement Learning: An Introduction (2nd edition), a diagram, on page 219 (figure 9.11), showing the variations of uniform offset tiling has confused me.
+
+I don't understand why all 8 of these figures are instances of uniformly offset tilings. I thought uniformly offset meant ALL tilings have to be offset an equal amount from each other which is only the case for the bottom left figure. Is my understanding wrong?
+"
+"['machine-learning', 'similarity', 'bag-of-features', 'siamese-neural-network']"," Title: On learning to rank tasks. Could it be that the input of the Siamese network is a vector, or should it be exclusively raw text?Body: I'm developing a method to document and query representation as concept vectors (bag-of-concepts). I want to train a machine learning model on ranking (learning to rank a task). So I have document vector V1 and query vector V2. How should I use these two numerical vectors in learning the way to rank a task? What are the possible scenarios?
+Do I calculate relevance (similarity) with cosine similarity and then feed the result as a single feature into a neural network? Is it correct to apply the Hadamard product to produce a single vector representing the features of a document-query pair, and then train a neural network with it? Can the two vectors (document and query vector) be fed into a Siamese network in order to evaluate the relevance? Someone told me this is not possible because such a network only takes raw text as input and extracts the features itself. Hence, it would be useless to feed it a vector generated by my vectorization method.
+"
+"['classification', 'papers', 'logistic-regression', 'selection-bias']"," Title: What is meant by ""the number of examples is reduced"", and why is this the case?Body: I am currently studying the paper Learning and Evaluating Classifiers under Sample Selection Bias by Bianca Zadrozny. In section 3.2. Logistic Regression, the author says the following:
+
+3.2. Logistic regression
+In logistic regression, we use maximum likelihood to find the parameter vector $\beta$ of the following model:
+$$P(y = 1 \mid x) = \dfrac{1}{1 + \exp(\beta_0 + \beta_1 x_1 + \dots + \beta_n x_n)}$$
+With sample selection bias we will instead fit:
+$$P(y = 1 \mid x, s = 1) = \dfrac{1}{1 + \exp(\beta_0 + \beta_1 x_1 + \dots + \beta_n x_n)}$$
+However, because we are assuming that $y$ is independent of $s$ given $x$ we have that $P(y = 1 \mid x, s = 1) = P(y = 1 \mid x)$. Thus, logistic regression is not affected by sample selection bias, except for the fact that the number of examples is reduced. Asymptotically, as long as $P(s = 1 \mid x)$ is greater than zero for all $x$, the results on a selected sample approach the results on a random sample. In fact, this is true for any learner that models $P(y \mid x)$ directly. These are all local learners.
+
+This part is unclear to me:
+
+However, because we are assuming that $y$ is independent of $s$ given $x$ we have that $P(y = 1 \mid x, s = 1) = P(y = 1 \mid x)$. Thus, logistic regression is not affected by sample selection bias, except for the fact that the number of examples is reduced.
+
+What is meant by "the number of examples is reduced", and why is this the case?
+"
+"['machine-learning', 'features', 'labels']"," Title: Is it possible to flip the features and labels after training a model?Body: The goal of this program is to predict a game outcome given a game-reference-id
, which is a serial number like so:
+
+id,totalGreen,totalBlue,totalRed,totalYellow,sumNumberOnGreen,sumNumberOnBlue,sumNumberOnRed,sumNumberOnYellow,gameReferenceId,createdAt,updatedAt
+1,1,3,2,0,33,27,41,0,1963886,2020-08-07 20:27:49,2020-08-07 20:27:49
+2,1,4,1,0,36,110,31,0,1963887,2020-08-07 20:28:37,2020-08-07 20:28:37
+3,1,3,2,0,6,33,83,0,1963888,2020-08-07 20:29:27,2020-08-07 20:29:27
+4,2,2,2,0,45,58,44,0,1963889,2020-08-07 20:30:17,2020-08-07 20:30:17
+5,0,2,4,0,0,55,82,0,1963890,2020-08-07 20:31:07,2020-08-07 20:31:07
+6,2,4,0,0,36,116,0,0,1963891,2020-08-07 20:31:57,2020-08-07 20:31:57
+7,3,2,1,0,93,16,40,0,1963892,2020-08-07 20:32:47,2020-08-07 20:32:47
+
+Here's the link for a full training dataset.
+After training the model, it becomes difficult to use the model to predict the game output, since the game-reference-id is the only independent column, while the others are random.
+Is there a way to flip the features with the labels during prediction?
+"
+"['neural-networks', 'training', 'gradient-descent', 'ai-security', 'training-datasets']"," Title: During neural network training, can gradients leak sensitive information in case training data fed is encrypted (homomorphic)?Body: Some algorithms in the literature allow recovering the input data used to train a neural network. This is done using the gradients (updates) of weights, such as in Deep Leakage from Gradients (2019) by Ligeng Zhu et al.
+In case the neural network is trained using encrypted (homomorphic) input data, what could be the output of the above algorithm? Will the algorithm recover the data in clear or encrypted (as it was fed encrypted)?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'keras']"," Title: Can I constrain my neurons in a neural network in according to the orders of the input?Body: I'm working with data that is ranked. So the inputs are 1,2,3 etc. This means the smaller numbers (ranks) are preferred to the larger ones. Hence the order is important. I want to estimate a number using regression; however, with the constraint that the order of the numbers must be monotonic non-linear.
+Imagine the following input table:
+1, 2
+2, 3
+3, 1
+
+For instance, if the output is 1000 for each input, then the estimation could be:
+1 * (800) + 2 * (100) = 1000
+2 * (300) + 3 * (60) = 780
+3 * (150) + 1 * (400) = 850
+
+Evidently, the estimated Xs are monotonic non-linear decreasing.
+For the first column: 1 -> 800, 2 -> 600 (2x300), 3 -> 450 (3x150)
+For the second column: 1 -> 400, 2 -> 200 (2x100), 3 -> 180 (3x60)
+So here's the question. Can I ensure my model (neural network) enforces the given constraint? I am using Keras.
+"
+"['neural-networks', 'terminology', 'backpropagation']"," Title: What is asymmetric relaxation backpropagation?Body: In Chapter 8, section 8.5.2, Raul Rojas describes how the weights for a layer of a neural network can be calculated using a pseudoinverse of the sigmoid function in the nodes, he explains this is an example of symmetric relaxation.
+But the chapter doesn't explain what asymmetric relaxation would be or how it is done.
+So, what is asymmetric relaxation and how would it be done in a simple neural network using a sigmoid function in its nodes?
+"
+"['neural-networks', 'imbalanced-datasets', 'selection-bias', 'training-datasets', 'test-datasets']"," Title: How do I select the (number of) negative cases, if I'm given a set of positive cases?Body: We were given a list of labeled data (around 100) of known positive cases, i.e. people that have a certain disease, i.e. all these people are labeled with the same class (disease). We also have a much larger amount of data that we can label as negative cases (all patients that are not on the known positive patient's list).
+I know who the positives are, but how do I select negative cases to create a labeled dataset of both positives and negatives, on which to first train a neural network, and then test it?
+This is a common problem in the medical field, where doctors have lists of patients that are positive, but, in our case, we were not given a specific list of negative cases.
+I argued for picking a number that represents the true prevalence of the disease (around 1-2%). However, I was told that this isn't necessary and to do a 50:50 split of positives to negatives. It seems that doing it this way will not generalize outside our test and train datasets.
+What would you do in this case?
+"
+"['neural-networks', 'training', 'performance', 'weights']"," Title: Is the performance of a neural network, which was trained with encrypted data and weights, affected if the weights are decrypted?Body: Suppose that a neural network is trained with encrypted (for example, with homomorphic encryption and, more precisely, with the Paillier partial scheme) data. Moreover, suppose that it is also trained with encrypted weights.
+If the neural network's weights are decrypted, is the performance of the neural network theoretically preserved or affected?
+"
+"['deep-learning', 'generative-adversarial-networks', 'variational-autoencoder', 'u-net']"," Title: Hairstyle Virtual Try OnBody: I want to help people with cancer who are under chemotherapy, and generally people who have lost their hair to Virtually Try-On Toupees/Wigs on their head.
+VTO must support both the frontal and side positions of the head.
+At first, I thought I could use traditional deep-learning to find landmarks, then place the hairstyle on the head.
+But the results were unrealistic and inaccurate.
+2D Placing of Hair:
+Some Hairstyles and Portraits fit well together:
+
+But some don't:
+
+They need manual modification:
+
+Sometimes the original hair needs to be erased:
+
+GANs promise more natural results with less manual engineering
+changing hairstyle using GAN:
+
+changing hairstyle using GAN:
+
+So, I decided to use GANs for this Task.
+Is it a good choice or is there an alternate solution?
+"
+"['terminology', 'transformer', 'bert', 'natural-language-understanding']"," Title: What is MNLI-(m/mm)?Body: I came across the term MNLI-(m/mm) in Table 1 of the paper BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. I know what MNLI stands for, i.e. Multi-Genre Natural Language Inference, but I'm just unsure about the -(m/mm) part.
+I tried to find some information about this in the paper GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding, but this explained only the basic Multi-Genre Language Inference concept. I assume that the m/mm part was introduced later, but this doesn't make any sense because the BERT paper appeared earlier.
+It would be nice if someone knows this or has a paper that explains this.
+"
+"['reinforcement-learning', 'definitions', 'value-functions', 'bellman-equations']"," Title: What is the Bellman Equation actually telling?Body: What does the Bellman equation actually say? And are there many flavours of that?
+I get a little confused when I look for the Bellman equation, because I feel like people are telling slightly different things about what it is. And I think the Bellman Equation is just basic philosophy and you can do whatever you want with that.
+The interpretations that I have seen so far:
+Let's consider this grid world.
++--------------+
+| S6 | S7 | S8 |
++----+----+----+
+| S3 | S4 | S5 |
++----+----+----+
+| S0 | S1 | S2 |
++----+----+----+
+
+
+- Rewards: S1:10; S3:10
+- Starting Point: S0
+- Horizon: 2
+- Actions: Up, Down, Left, Right (If an action is not valid because there is no space, you remain in your position)
+
+The V-Function/Value:
+It tells you how good is it to be in a certain state.
+With a horizon of 2, one can reach:
+S0==>S3 (Up) (R 5)
+S0==>S0 (Down) (R 0)
+S0==>S1 (Right)(R10)
+S0==>S0 (Left) (R 0)
+
+From that onwards
+S0==>S3 (Up) (R 5)
+S0==>S0 (Down) (R 0)
+S0==>S1 (Right)(R10)
+S0==>S0 (Left) (R 0)
+
+S1==>S4 (Up) (R 0)
+S1==>S1 (Down) (R10)
+S1==>S2 (Right)(R 0)
+S1==>S0 (Left) (R 0)
+
+S3==>S6 (Up) (R 0)
+S3==>S0 (Down) (R 0)
+S3==>S3 (Right)(R 5)
+S3==>S2 (Left) (R10)
+
+Considering no discount, this would mean that it is R=45 good to be in S0, because these are the options. Of course, you can't grab every reward, because you have to decide. Do I need to consider the best next state yet, because this would obviously reduce my expected total reward, but as I can only make two steps it would tell me what is really possible. Not what the overall Reward R(s) in that range is.
+The Q-Function/Value
+This function takes a state and an action, but I am not sure whether that means I now have a reward function that also depends on my action, $R(s,a)$. In the previous example I just have to land on a state (it doesn't really matter how I get there), but this time I get a reward when I choose a certain action.
+Otherwise the procedure seems the same: I do not pick only the best action and follow that next state; I consider every next step and, from each of those, every second step.
+Optimization V-function or Q-function
+This works the same as V-Function or Q-Function, but it just considers the next best award. Some sort of greedy approach:
+First step:
+S0==>S3 (Up) (R 5) [x]
+S0==>S0 (Down) (R 0) [x]
+S0==>S1 (Right)(R10)
+S0==>S0 (Left) (R 0) [x]
+
+Second Step:
+S1==>S4 (Up) (R 0) [x]
+S1==>S1 (Down) (R10)
+S1==>S2 (Right)(R 0) [x]
+S1==>S0 (Left) (R 0)
+
+So, this would say that is the best I can do in two steps. I know that there is a problem, because when I just follow a greedy approach I risk that I won't get the best result, if I would have had a reward of 1000 on S2 later.
+But still, I just want to know if I have a correct understanding. I know there might be many flavours and interpretations, but at least I want to know what the correct names of these approaches are.
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'natural-language-processing', 'sequence-modeling']"," Title: Building a resume recommendation for a job post?Body: There are few challenges I am facing when building a resume recommendation for a particular job positing.
+Let's say we convert the resume into an n-dimensional vector and the job description into an n-dimensional vector as well; in order to see how similar they are, we can use any similarity metric like cosine.
+Now, for me the biggest problem with such an approach is that it is not able to give more importance to the required job title. Sometimes, for a cloud engineer position, I am getting a java developer resume recommended in the top 10 just because some of the skills/keywords overlap between the two, so their embeddings become similar.
+I want to give more weight to the job title as well. What are some possible things I can do to make my recommendations consider, or put a bit more emphasis on, the job title?
+Note: a simple job-title lookup in the resume will fail, because people write job titles in multiple ways.
+(java engineer (or) java developer) | (cloud engineer or aws engineer) etc.,
+How can I overcome this issue?
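+To make the idea concrete, here is a rough sketch of the kind of weighted scoring I have in mind (the embed function stands for whatever text encoder is already in use, and the 0.7/0.3 weights are made-up placeholders, not something I have validated):
+import numpy as np
+
+def cosine(u, v):
+    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
+
+def score(resume, job, embed, w_title=0.7, w_body=0.3):
+    # separate similarity for the title field and for the full text,
+    # then combine them with a higher weight on the title
+    title_sim = cosine(embed(resume["title"]), embed(job["title"]))
+    body_sim = cosine(embed(resume["text"]), embed(job["text"]))
+    return w_title * title_sim + w_body * body_sim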
+"
+"['reinforcement-learning', 'deep-rl', 'policy-gradients', 'markov-property']"," Title: Policy gradient: Does it use the Markov property?Body: To derive the policy gradient, we start by writing the equation for the probability of a certain trajectory (e.g. see spinningup tutorial):
+$$
+\begin{align}
+P_\theta(\tau) &= P_\theta(s_0, a_0, s_1, a_1, \dots, s_T, a_T) \\
+& = p(s_0) \prod_{i=0}^T \pi_\theta(a_i | s_i) p(s_{i+1} | s_i, a_i)
+\end{align}
+$$
+The expression is based on the chain rule for probability. My understanding is that the application of the chain rule should give us this expression:
+$$
+p(s_0)\prod_{i=0}^T \pi_\theta(a_i|s_i, a_{i-1}, s_{i-1}, a_{i-2}, \dots, s_0, a_0) p(s_{i+1} | s_i, a_i, s_{i-1}, a_{i-1}, \dots, a_0, s_0)
+$$
+Then the Markov property should be applicable, producing the desired equality. This should only depend on the latest state-action pair.
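+To spell out the step I believe is involved (this is my own reading, not something stated in the tutorial), the reduction from the full chain-rule expression would use these two conditional-independence assumptions:
+$$
+\pi_\theta(a_i \mid s_i, a_{i-1}, s_{i-1}, \dots, s_0, a_0) = \pi_\theta(a_i \mid s_i),
+\qquad
+p(s_{i+1} \mid s_i, a_i, s_{i-1}, a_{i-1}, \dots, s_0, a_0) = p(s_{i+1} \mid s_i, a_i).
+$$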
+Here are my questions:
+
+- Is this true?
+
+- I watched this lecture about policy gradients, and at this time during the lecture, Sergey says that: "at no point did we use the Markov property when we derived the policy gradient", which left me confused. I assumed that the initial step of calculating the trajectory probability was using the Markov property.
+
+
+"
+['graph-neural-networks']," Title: Spectral Networks and Deep Locally Connected Networks on GraphsBody: I’m reading the paper Spectral Networks and Deep Locally Connected Networks on Graphs and I’m having a hard time understanding the notation shown in the picture below (the scribbles are mine):
+
+Specifically, I don’t understand the notation for the matrix F. Why does it include an i and a j?
+"
+"['deep-learning', 'reference-request', 'generative-adversarial-networks', 'transformer', 'attention']"," Title: Recent deep learning textbook (i.e. covering at least GANs, LSTM and transformers and attention)Body: I am searching for an academic (i.e. with maths formulae) textbook which covers (at least) the following:
+
+- GAN
+- LSTM and transformers (e.g. seq2seq)
+- Attention mechanism
+
+The closest match I got is Deep Learning (2016, MIT Press) but it only deals with part of the above subjects.
+"
+"['neural-networks', 'regularization', 'weights', 'l2-regularization', 'l1-regularization']"," Title: When is using weight regularization bad?Body: Regularization of weights (e.g. L1 or L2) keeps them small and standardized, which can help reduce data overfitting. From this article, regularization sounds favorable in many cases, but is it always encouraged? Are there scenarios in which it should be avoided?
+"
+"['neural-networks', 'deep-learning']"," Title: Interpretation of Inner Product in a two-tower modelBody: I have seen at quite a few places the use of two-tower architecture. This(Fig 6) is one of the examples. Each tower computes embedding of a concept which is orthogonal to the concepts in the rest of the towers. Also, I have seen that the next step on getting an n-dimensional embedding from the towers an inner product is taken to measure Cosine similarity.
+I would like to know how to interpret this inner product. The two n-dimensional embeddings represent different concepts. What is the meaning of similarity here?
+"
+"['deep-learning', 'convolutional-neural-networks', 'reference-request', 'video-classification', 'action-recognition']"," Title: How can I do video classification while taking into account the temporal dependencies of the frames?Body: I need to solve a video classification problem. While looking for solutions, I only found solutions that transform this problem into a series of simpler image classification tasks. However, this method has a downside: we ignore the temporal relationship between the frames.
+So, how can I perform video classification, with a CNN, by taking into account the temporal relationships between frames?
+"
+"['machine-learning', 'deep-rl', 'alphazero']"," Title: Is there a training data capacity limit for AlphaZero (Chess)?Body: In AlphaZero, we collect ($s_t, \pi_t, z_t$) tuples from self-play, where $s_t$ is the board state, $\pi_t$ is the policy, and $z_t$ is the reward from winning/losing the game. In other DeepRL off-policy algorithms (I'm assuming here that AlphaZero is off-policy (?)) like DQN, we maintain a memory buffer (say, 1 million samples) and overwrite the buffer with newer samples if it's at capacity. Do we do the same for AlphaZero? Or do we continually add new samples without overwriting older ones? The latter option sounds very memory heavy, but I haven't read anywhere that older samples are overwritten.
+"
+"['reinforcement-learning', 'policies', 'model-based-methods', 'model-free-methods', 'value-based-methods']"," Title: What kind of reinforcement learning method does AlphaGo Deepmind use to beat the best human Go player?Body: In reinforcement learning, there are model-based versus model-free methods. Within model-based ones, there are policy-based and value-based methods.
+The DeepMind AlphaGo RL model has beaten the best human Go player. What kind of reinforcement learning method does it use? Why is this particular approach appropriate for the game of Go?
+"
+"['convolutional-neural-networks', 'objective-functions', 'accuracy', 'l2-regularization', 'l1-regularization']"," Title: What is the effect of too harsh regularization?Body: While training a CNN model, I used an l1_l2
regularization (i.e. I applied both $L_1$ and $L_2$ regularization) on the final layers. While training, I saw the training and validation losses are dropping very nicely, but the accuracies aren't changing at all! Is that due to the high regularization rate?
+"
+"['reinforcement-learning', 'q-learning', 'double-q-learning']"," Title: Is there any toy example that can exemplify the performance of double Q-learning?Body: I recently tried to reproduce the results of double Q-learning. However, the results are not satisfying. I have also tried to compare double Q learning with Q-learning in Taxi-v3, FrozenLake without slippery, Roulette-v0, etc. But Q-learning outperforms double Q-learning in all of these environments.
+I am not sure whether there is something wrong with my implementation, as many materials about double Q-learning actually focus on double DQN. While checking this, I also wonder: is there any toy example that can exemplify the performance of double Q-learning?
+"
+"['reinforcement-learning', 'training', 'dqn', 'deep-rl', 'double-dqn']"," Title: DDQN Agent in Othello (Reversi) game struggle to learnBody: This is my first question on this forum and I would like to welcome everyone.
+I am trying to implement a DDQN agent playing the Othello (Reversi) game. I have tried multiple things, but the agent, which seems to be properly initialized, does not learn against a random opponent. Actually, the score is about 50-60% won games out of nearly 500. Generally, if it reaches some score after the first 20-50 episodes, it stays at the same level. I have doubts about the process of learning and how to decide when the agent is trained. The current flow is as follows:
+
+- Initialize game state.
+- With an epsilon-greedy policy, choose the action to take from the actions currently available in the game state.
+- Let the opponent make its move.
+- Get the reward as the number of flipped pieces that remain after the opponent's move.
+- Save the observation to the replay buffer.
+- If the number of elements in the replay buffer is equal to or higher than the batch size, do a training step.
+
+What I do not know is how to decide when to stop the training. Previously, this agent trained against a MinMax algorithm learned how to win 100% of games, because MinMax played exactly the same way every time. I would like the agent to generalize the game. Right now I save the network weights after a game is won, but I think it does not matter. I can't see this agent find some policy and improve over time. The whole code for the environment, agent and training loop can be found here: https://github.com/MikolajMichalski/RL_othello_mgr I would appreciate any help. I would like to understand how RL works :)
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks']"," Title: Appropriate convolutional neural network architecture when the input consists of two distinct signalsBody: I have a dataset consisting of a set of samples. Each sample consists of two distinct desctized signals S1(t), S2(t). Both signals are synchronous; however, they show different aspects of a phenomena.
+I want to train a Convolutional Neural Network, but I don't know which architecture is appropriate for this kind of data.
+I can consider two channels for input, each corresponding to one of the signals. But, I don't think convolving two signals can produce appropriate features.
+I believe the best way is to process each signal separately in the first layers, then join them in the classification layers in the final step. How can I achieve this? What architecture should I use?
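+To make the question concrete, here is a rough sketch of the kind of two-branch model I am imagining (all layer sizes are arbitrary placeholders, and I have not tested this):
+import tensorflow as tf
+
+def two_branch_model(length):
+    in1 = tf.keras.layers.Input(shape=(length, 1))   # signal S1
+    in2 = tf.keras.layers.Input(shape=(length, 1))   # signal S2
+
+    def branch(x):
+        # each signal gets its own 1D convolutional feature extractor
+        x = tf.keras.layers.Conv1D(16, 5, activation='relu')(x)
+        x = tf.keras.layers.MaxPooling1D(2)(x)
+        x = tf.keras.layers.Conv1D(32, 5, activation='relu')(x)
+        return tf.keras.layers.GlobalAveragePooling1D()(x)
+
+    # join the two branches only in the classification layers
+    merged = tf.keras.layers.concatenate([branch(in1), branch(in2)])
+    hidden = tf.keras.layers.Dense(32, activation='relu')(merged)
+    out = tf.keras.layers.Dense(1, activation='sigmoid')(hidden)
+    return tf.keras.Model(inputs=[in1, in2], outputs=out)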
+"
+"['machine-learning', 'perceptron']"," Title: What is the equation to update the weights in the perceptron algorithm?Body: I'm trying to understand the solution to question 4 of this midterm paper.
+The question and solution is as follows:
+
+I thought that the process for updating weights was:
+error = target - guess
+new_weight = current_weight + (error)(input)
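+As a sketch in code, this is the rule I have in mind (my own illustration with a step-function guess, not taken from the paper):
+def perceptron_update(weights, x, target, lr=1.0):
+    # guess 1 if the weighted sum is positive, otherwise 0
+    guess = 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0
+    error = target - guess
+    return [w + lr * error * xi for w, xi in zip(weights, x)]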
+
+I do not understand for example, for number 2 below, how that sum is determined. For example, I want to understand whether to update the weight or not. The calculation is:
+x1(w1) + x2(w2)
+(10)(1) + (10)(1) = 20
+20 > 0, therefore update.
+
+But the equation to obtain the same answer in the solution is:
+1(10 + 10) = 20
+20 > 0, therefore update.
+
+I understand that these two equations are essentially the same, but written differently. But for example, in step 5, what do the elements in g5 mean. What do the -8, -16 and -2 represent?
+p.s. I know in a previous (now deleted) post of mine, I asked a question related to the use of LaTeX instead for maths equations. If someone can show me a simple way to convert these equations online, I'm more than happy to use it. However, I'm unfamiliar with this software, so I need some sort of converter.
+"
+"['neural-networks', 'machine-learning', 'applications', 'binary-classification']"," Title: Can you use machine learning for data with binary outcomes?Body: I am totally new to artificial intelligence and neural networks and have a broad question that I hope is appropriate to ask here.
+I am an ecologist working in animal movement and I want to use AI to apply to my field. This will be one of the few times this has been attempted so there is not much literature to help me here.
+My dataset is binary. In short, I have the presence (1) and absence (0) of animal locations that are associated with a series of covariates (~20 environmental conditions such as temperature, etc.). I have ~1 million rows of data to train the model on with a ratio of 1:100 (presence:absence).
+Once trained, I would like a model that can predict if an animal will be in a location (or give a probability) based on new covariates (environmental conditions).
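+To make concrete the kind of model I am hoping for, here is a minimal sketch (the file and column names are made up, and I have not validated any of this):
+import pandas as pd
+from sklearn.linear_model import LogisticRegression
+from sklearn.model_selection import train_test_split
+
+df = pd.read_csv("animal_locations.csv")        # hypothetical file with my data
+X = df.drop(columns=["presence"])               # the ~20 environmental covariates
+y = df["presence"]                              # 1 = present, 0 = absent
+
+X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
+# class_weight="balanced" compensates for the 1:100 presence:absence ratio
+model = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_train, y_train)
+probability_of_presence = model.predict_proba(X_test)[:, 1]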
+Is this sort of thing possible using AI?
+(If so, where should I be looking for resources? I write in R, should I learn Python?)
+"
+"['machine-learning', 'object-detection', 'ensemble-learning', 'boosting']"," Title: What ensemble methods are used in the state-of-the-art models?Body: What ensemble methods are used in the state-of-the-art models?
+When I surveyed the state-of-the-art methods of classification and detection, e.g. on ImageNet, COCO, etc., I noticed that there are few or even no references to the use of ensemble methods like bagging or boosting.
+Is it a bad idea to use them?
+However, I observed that many use ensemble in Kaggle competitions.
+What makes it so different between the two groups of researchers?
+"
+"['reinforcement-learning', 'markov-decision-process', 'pomdp']"," Title: Understanding example for Improved Policy Iteration for POMDPsBody: I was going through this paper by Hansen. This paper proposes policy improvement by first converting set of $\alpha$ vectors into finite state controller and then comparing them to obtain improved policy. The whole algorithm is summmarised as follows:
+
+Section 4 of this paper explains the algorithm with an example. I am unable to follow the example, specifically how it forms those states, what the numbers inside each state are, and how exactly they are calculated:
+
+"
+"['neural-networks', 'convolutional-neural-networks', 'comparison', 'graph-neural-networks']"," Title: Are Graph Neural Networks generalizations of Convolutional Neural Networks?Body: In lecture 4 of this course, the instructor argues that GNNs are generalizations of CNNs, and that one can recover CNNs from GNNs.
+He presents the following diagram (on the right) and mentions that it represents both a CNN and a GNN. In particular, he mentions that if we particularize the graph shift operator (i.e the matrix S, which in the case of a GNN could represent the adjacency matrix or the Laplacian) to represent a directed line graph, then we obtain a time convolutional filter (which I hadn't heard of before watching this, but now I know that all it does is shift the graph signal in the direction of the arrows at each time step).
+
+That part I understand. What I don’t understand is how we can obtain 2D CNNs (the ones that we would for example apply to images) from GNNs. I was wondering if someone could explain.
+EDIT: I found part of the answer here. However, it seems that the image convolution as defined there is a bit different from what I'm used to. It seems like the convolution considers only pixels to the left and above of the "current" pixel, whereas I'm used to convolutions considering pixels to the left, right, above, and below.
+"
+"['reinforcement-learning', 'reference-request', 'value-functions', 'bellman-equations', 'bellman-operators']"," Title: How to derive matrix form of the Bellman operators?Body: Reading the Retrace paper (Safe and efficient off-policy reinforcement learning) I saw they often use a matrix form of the Bellman operators, for example as in the picture below. How do we derive those forms? Could you point me to some reference in which the matter is explained?
+I am familiar with the tabular RL framework, but I'm having trouble understanding the steps from operators to this matrix form. For example, why does $Q^{\pi} = (I -\gamma P^{\pi})^{-1}r$? I know that for the value $V$ we can write
+\begin{align}
+V = R + \gamma P^{\pi} V \\
+V - \gamma P^{\pi} V = R \\
+(I - \gamma P^{\pi}) V = R \\
+V = (I - \gamma P^{\pi})^{-1} R
+\end{align}
+but this seems slightly different.
+Picture from Safe and efficient off-policy reinforcement learning:
+
+"
+"['game-ai', 'gpt', 'dialogue-systems', 'integration']"," Title: Is it possible to integrate the GPT-3 by OpenAPI inside Unity3D or any game-engine?Body: My company has full access to beta testing for GPT-3. We wanted to try it for some games or game mechanics within Unity3D. Is it possible to use it for dialogues or with unity scripts?
+The OpenAI documentation does not say anything about this possibility, so I'm not sure.
+"
+"['genetic-algorithms', 'artificial-life']"," Title: Is it possible that the fittest individuals in an Artificial Life population may be successful by not actively pursuing the rules of the environment?Body: I'm trying to understand Artificial Life (e.g. here for a simple background) in Computational Evolution.
+I understand that in this set of methods, you set up a dynamic environment (e.g. the ecology of the environment) and then you set a series of rules; e.g.:
+
+- You need energy to reproduce.
+- You intake energy from food sources.
+- For nourishment, you can eat plants, animals, or steal food.
+- You must stay alive until you reproduce.
+- Every action consumes energy.
+- When you have no energy left, you die.
+
+I think I need a set of rules that govern the survival of an artificial life. You run the environment and see what persists (there's a set of rules instead of a fitness score), and the individuals that survive are said to be successful.
+I can imagine a scenario where a successful organism in this environment consumes a lot of food, reproduces, but possibly runs out of energy and dies. I'm wondering if there's ever a situation where an organism does very little (or nothing) and is still successful. I'm not sure if this question makes sense; please let me know if clarification is needed. Given the specified environment, I want to know if the most active organism will always be the most successful. The most active organism would be the one that obtains the most food/energy/reproduces the most. Or is it possible to not be the most active organism and still be successful?
+"
+"['machine-learning', 'classification', 'ai-design', 'prediction', 'time-series']"," Title: How to do early classification of time series event with small dataset?Body: I would like to build a real-time binary classifier that can predict an event of interest that is occurring as soon as it starts. These are electromyographic signals, and the event classification should be able to classify the event as early as possible. This is because the next stage of the algorithm has to make a decision before the end of the event.
+I don't know what kind of algorithm/approach I should use here. I suppose RNN with LSTM cells should do the job, but the dataset is quite small as physiological signals are not easy to gather.
+I have seen many algorithms that windowed the signals (from the training set) and labeled each window as an event of interest if at least part of the event is contained in the window. Each window is then fed to a machine learning algorithm. Then, the prediction uses a sliding window in real-time. But this approach doesn't take into account the temporal aspect of the event as there is no link between each window seen by the ML algorithm.
+Do you have any tips or resources I could use to solve the problem?
+"
+['image-generation']," Title: Searching for 3D cad AI synthesis project (New CAD file form two similar CAD model)Body: I saw this post: Google AI generates images of 3D models with realistic lighting and reflections.
+
+Is there any project to test this capability for combining two 3D CAD files of a similar model and getting a new combined model as shown (Google AI ...)? I would like to use software such as Blender, SolidWorks or some project via Google Colab (testable without the need to install any software).
+"
+"['papers', 'value-functions', 'policies', 'pomdp', 'policy-iteration']"," Title: How to obtain the policy in the form of a finite-state controller from the value function vectors over the belief space of the POMDP?Body: I was reading this paper by Hansen.
+It says the following:
+
+A correspondence between vectors and one-step policy choices plays an important role in this interpretation of a policy. Each vector in $\mathcal{V}'$ corresponds to the choice of an action, and for each possible observation, choice of a vector in $\mathcal{V}$. Among all possible one-step policy choices, the vectors in $\mathcal{V}'$ correspond to those that optimize the value of some belief state. To describe this correspondence between vectors and one-step policy choices, we introduce the following notation. For each vector $\mathcal{v}_i$ in $\mathcal{V}'$, let $a(i)$ denote the choice of action and, for each possible observation $z$, let $l(i,z)$ denote the index of the successor vector in $\mathcal{V}$. Given this correspondence between vectors and one-step policy choices, Kaelbling et al. (1996) point out that an optimal policy for a finite-horizon POMDP can be represented by an acyclic finite-state controller in which each machine state corresponds to a vector in a nonstationary value function.
+
+
+I am unable to guess how the left-side finite-state controller is formed from the right side belief space diagram. Does the above text provide enough explanation for the conversion? If yes, I am not really able to fully get it. Can someone please explain?
+"
+"['classification', 'papers', 'optimization', 'support-vector-machine', 'selection-bias']"," Title: How does the support vector machine constraint imply that sample selection bias will not systematically affect the output of the optimisation?Body: I am currently studying the paper Learning and Evaluating Classifiers under Sample Selection Bias by Bianca Zadrozny. In section 3.4. Support vector machines, the author says the following:
+
+3.4. Support vector machines
+In its basic form, the support vector machine (SVM) algorithm (Joachims, 2000a) learns the parameters $a$ and $b$ describing a linear decision rule
+$$h(x) = \text{sign}(a \cdot x + b),$$
+whose sign determines the label of an example, so that the smallest distance between each training example and the decision boundary, i.e. the margin, is maximized. Given a sample of examples $(x_i, y_i)$, where $y_i \in \{ -1, 1 \}$, it accomplishes margin maximization by solving the following optimization problem:
+$$\text{minimize:} \ V(a, b) = \dfrac{1}{2} a \cdot a \\ \text{subject to:} \ \forall i : \ y_i[a \cdot x_i + b] \ge 1$$
+The constraint requires that all examples in the training set are classified correctly. Thus, sample selection bias will not systematically affect the output of this optimization, assuming that the selection probability $P(s = 1 \mid x)$ is greater than zero for all $x$.
+
+How does the constraint that all examples in the training set are classified correctly imply that sample selection bias will not systematically affect the output of the optimisation? Furthermore, why is it necessary to assume that the selection probability is greater than zero for all $x$? These are not clear to me.
+"
+"['neural-networks', 'classification', 'training']"," Title: The last target name is missed in the test setBody: I am training a neural network with a dataset that has 51 classes and 6766 data in it. I used 80% for the training set, 10% for validation, and 10% for the test. After training I got confusion matrix and I find out the last class is missed in the test set. So, I used data augmentation and used 27064 data and 80-10-10 splits again, but the last class name is missed again. I changed the size of the test split but the problem was not solved, and in every trial that I made, only the last class name is missed. How can I solve this?
+
+EDIT: my dataset consists of images. In the original dataset, I have 104 samples from the last class; after augmenting the entire dataset, the last class has 416 samples.
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'objective-functions', 'gradient-descent']"," Title: Loss randomly changing, incorrect output (even for low loss) when trying to overfit on a single set of input and outputBody: I am trying to make a neural network framework from scratch in C++ just for fun, and to test my backpropagation, I thought it would be an easy way to test the functionality if I give it one input - a randomized size 10 vector, and one output: a size 5 vector containing all 1s, and train it a bunch of times to see if the loss will decrease. Essentially trying to make it overfit
+The problem is that for each run that I do, the loss either shoots up and goes to nan, or reduces a lot, going to 0.000452084 or other similar small values. However even in the low end of things, my output (which should be close to all 1s, as the "ground truth") is something like:
+0.000263654
+1e-07
+8.55893e-05
+1e-07
+0.999651
+
+The only close value close to 1 being the last element.
+My network consists of the input layer 10 neurons, one 10 neuron dense layer with RELU activation, and another 5 neuron dense layer for output, with SoftMax activation. I am using categorical cross entropy as my loss function, and I am normalizing my gradient by dividing it by the norm of my gradient if it is over 1.0. I initialize my weights to be random values between -0.1 and 0.1
+To calculate the gradient of the loss function, I use -groundTruth/predictedOutput. To calculate the other derivatives, I dot the derivative of that layer with the gradient of the previous layer with respect to its activation function.
+Before this problem I was having exploding gradients, which the gradient scaling fixed, however it was very weird that that would even happen on a very small network like this, which could be related to the problem I am currently having. Is the implementation not correct or am I missing something very obvious?
+Any ideas about this weird behavior, and where I should look first? I am not sure how to show a minimal reproducible example, as that would require me to paste the whole codebase, but I am happy to show pieces of code with explanation. Any advice welcomed!!
+"
+"['neural-networks', 'reinforcement-learning', 'policy-gradients', 'multi-agent-systems']"," Title: Designing Policy-Network for Deep-RL with Large, Variable Action SpaceBody: I am attempting a project involving training an agent to play a game using deep reinforcement learning.
+This project has a few features that complicate the design of the neural network:
+
+- The action space per state is very large (can be over 1000 per state)
+- The set of actions available in each state varies wildly between states, both in size and in the actions available.
+- The total action-space (the union of each state's action-space) is way too large to enumerate.
+- The action space is discrete, not continuous.
+- The game is adversarial, multi-agent.
+
+Most RL neural networks I've seen involve the input of a state, and an output of a constant action size, where each element of the output is either an action's q-score or probability. But since my game's action space is non-constant per state, I believe this design will not work for this game.
+I have seen an alpha-go style network, which outputs a probability for all actions ever possible, and then zeros out actions not possible in the given state and re-normalizes the probabilities. However, since the total action-space in my game is way too large to enumerate, I don't believe this solution will work either.
+I have seen several network designs for large, discrete action spaces, including:
+
+- design a network to input a state-action pair and output a single score value, and train it via a value-based algorithm (such as q-learning). To select an action given a state, pass in every possible state-action pair to get each action's score, then select the action with the highest score.
+- (Wolpertinger architecture) have a network output a continuous embedding, which is then mapped to a discrete action, and train it via deterministic policy gradient.
+- divide actions into a sequence of simpler actions, and train an RNN to output a sequence of these simpler actions. Train this network via a value-based algorithm (such as q-learning).
+
+However, all of these solutions are designed for either value-based or deterministic policy gradient algorithms; none of them output probabilities over the action space. This seems to be an issue, since at least a very large portion of the multi-agent deep-RL algorithms I've seen involve a network that outputs a probability over the action-space. Therefore, I don't want to limit myself to value-based and deterministic-policy algorithms.
+How can I design a neural network that outputs a probability over the action space for my game?
+If that is not possible, what would be some good alternative solutions to this problem?
+"
+"['convolutional-neural-networks', 'pytorch', 'hyper-parameters', 'filters', 'transpose-convolution']"," Title: How will the filter size affect the transpose convolution operation?Body: After a series of convolutions, I am up-sampling a compressed representation, I was curious what is the methodology I should follow to choose an optimum kernel size for up-sampling.
+
+- How will the filter (or kernel) size affect the transpose convolution operation (e.g. when using ConvTranspose2d)? Will a larger kernel help upsample with better detail, or a small-sized kernel? And how would padding fit in this scenario? (See the small sketch right after this list.)
+
+- At what rate should the depth (channels, i.e. the number of filters) decrease while upsampling, e.g. from (D, 24, 24) to (D/2 or D/4, 48, 48)?
+For example: if the input to the transposed convolution is (C, H, W) = (64, 8, 8), how would the output quality differ between outputs of shape (32, 16, 16) and (16, 16, 16)?
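+For reference, here is a tiny sketch I used to reason about how the kernel size interacts with the output shape of a transposed convolution (the numbers are arbitrary):
+import torch
+import torch.nn as nn
+
+x = torch.randn(1, 64, 8, 8)   # (N, C, H, W) input
+# output size = (H - 1) * stride - 2 * padding + kernel_size + output_padding
+up_small = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)              # 8x8 -> 16x16
+up_large = nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1)   # also 8x8 -> 16x16
+print(up_small(x).shape, up_large(x).shape)   # both torch.Size([1, 32, 16, 16])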
+
+
+"
+['reinforcement-learning']," Title: RL: Encoding action conditioned on previous actionBody: I have a card game where on a player's turn, the player sequentially draws two cards. Each card may be drawn from another player's discard stack (face up), or from the deck (face down).
+Thinking how to encode this into an action space, I could naively assume the two draws are independent. The action space would simply be a binary vector of 2 * (1 + (number_of_players - 1)), which I could post-filter to limit for empty draw piles (and can't draw from own pile).
+However, when playing the game myself, I noticed that it's sometimes advantageous to draw the initial card from the deck, then select the draw pile for the second card based on the value of the first one drawn. But how would this be encoded into an action space? Would it be better to think of these as two separate actions, even though they are part of the same "turn"?
+"
+"['tensorflow', 'keras', 'image-segmentation', 'u-net']"," Title: Validation accuracy higher than training accurarcyBody: I implemented the unet in TensorFlow for the segmentation of MRI images of the thigh. I noticed I always get a higher validation accuracy by a small gap, independently of the initial split. One example:
+
+So I researched when this could be possible:
+
+- When we have an "easy" validation set. I trained it with different initial splits; all of them showed a higher validation accuracy.
+- Regularization and augmentation may reduce the training accuracy. I removed the augmentation and dropout regularization and still observed the same gap, the only difference was that it took more epochs to reach convergence.
+- The last thing I found was that in Keras the training accuracy and loss are averaged over the iterations of the corresponding epoch, while the validation accuracy and loss are calculated from the model at the end of the epoch, which might make the training loss higher and the accuracy lower.
+
+So I thought that if I train and validate on the same set, then I should get the same curve but shifted by one epoch. So I trained only on 2 batches and validated on the same 2 batches (without dropout or augmentation). I still think there is something else happening because they don't look quite the same and at least at the end when the weights are not changing anymore, the training and validation accuracy should be the same (but still the validation accuracy is higher by a small gap). This is the plot:
+
+
+Is there anything else that could be increasing the loss values? This is the model I am using:
+import tensorflow as tf                       # imports added so the snippet is self-contained
+from tensorflow.keras.optimizers import Adam
+
+def unet_no_dropout(pretrained_weights=None, input_size=(512, 512, 1)):
+    inputs = tf.keras.layers.Input(input_size)
+    conv1 = tf.keras.layers.Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(inputs)
+    conv1 = tf.keras.layers.Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv1)
+    pool1 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv1)
+    conv2 = tf.keras.layers.Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool1)
+    conv2 = tf.keras.layers.Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv2)
+    pool2 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv2)
+    conv3 = tf.keras.layers.Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool2)
+    conv3 = tf.keras.layers.Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv3)
+    pool3 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv3)
+    conv4 = tf.keras.layers.Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool3)
+    conv4 = tf.keras.layers.Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv4)
+    #drop4 = tf.keras.layers.Dropout(0.5)(conv4)
+    pool4 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv4)
+
+    conv5 = tf.keras.layers.Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool4)
+    conv5 = tf.keras.layers.Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv5)
+    #drop5 = tf.keras.layers.Dropout(0.5)(conv5)
+
+    up6 = tf.keras.layers.Conv2D(512, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
+        tf.keras.layers.UpSampling2D(size=(2, 2))(conv5))
+    merge6 = tf.keras.layers.concatenate([conv4, up6], axis=3)
+    #merge6 = tf.keras.layers.concatenate([conv4, up6], axis=3)
+    conv6 = tf.keras.layers.Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge6)
+    conv6 = tf.keras.layers.Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv6)
+
+    up7 = tf.keras.layers.Conv2D(256, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
+        tf.keras.layers.UpSampling2D(size=(2, 2))(conv6))
+    merge7 = tf.keras.layers.concatenate([conv3, up7], axis=3)
+    conv7 = tf.keras.layers.Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge7)
+    conv7 = tf.keras.layers.Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv7)
+
+    up8 = tf.keras.layers.Conv2D(128, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
+        tf.keras.layers.UpSampling2D(size=(2, 2))(conv7))
+    merge8 = tf.keras.layers.concatenate([conv2, up8], axis=3)
+    conv8 = tf.keras.layers.Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge8)
+    conv8 = tf.keras.layers.Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv8)
+
+    up9 = tf.keras.layers.Conv2D(64, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
+        tf.keras.layers.UpSampling2D(size=(2, 2))(conv8))
+    merge9 = tf.keras.layers.concatenate([conv1, up9], axis=3)
+    conv9 = tf.keras.layers.Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge9)
+    conv9 = tf.keras.layers.Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
+    conv9 = tf.keras.layers.Conv2D(2, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
+    conv10 = tf.keras.layers.Conv2D(1, 1, activation='sigmoid')(conv9)
+
+    model = tf.keras.Model(inputs=inputs, outputs=conv10)
+
+    model.compile(optimizer = Adam(lr = 2e-4), loss = 'binary_crossentropy', metrics = [tf.keras.metrics.Accuracy()])
+    #model.compile(optimizer=tf.keras.optimizers.Adam(2e-4), loss=combo_loss(alpha=0.2, beta=0.4), metrics=[dice_accuracy])
+    #model.compile(optimizer=RMSprop(lr=0.00001), loss=combo_loss, metrics=[dice_accuracy])
+
+    if (pretrained_weights):
+        model.load_weights(pretrained_weights)
+
+    return model
+
+and this is how I save the model:
+model_checkpoint = tf.keras.callbacks.ModelCheckpoint('unet_ThighOuterSurfaceval.hdf5',monitor='val_loss', verbose=1, save_best_only=True)
+model_checkpoint2 = tf.keras.callbacks.ModelCheckpoint('unet_ThighOuterSurface.hdf5', monitor='loss', verbose=1, save_best_only=True)
+
+model = unet_no_dropout()
+history = model.fit(genaug, validation_data=genval, validation_steps=len(genval), steps_per_epoch=len(genaug), epochs=80, callbacks=[model_checkpoint, model_checkpoint2])
+
+"
+"['neural-networks', 'deep-learning', 'terminology', 'architecture', 'yolo']"," Title: What is a unified neural network model?Body: In many articles (for example, in the YOLO paper, this paper or this one), I see the term "unified" being used. I was wondering what the meaning of "unified" in this case is.
+"
+"['papers', 'generative-adversarial-networks', 'implementation', 'wasserstein-gan']"," Title: Wasserstein GAN: Implemention of Critic Loss Correct?Body: The WGAN paper concretely proposes Algorithm 1 (cf. page 8). Now, they also state what their loss for the critic and the generator is.
+When implementing the critic loss (so lines 5 and 6 of Algorithm 1), they maximize the parameters $w$ (instead of minimizing them as one would normally do) by writing $w \leftarrow w + \alpha \cdot \text{RMSProp}\left(w, g_w \right)$. Their loss seems to be $$\frac{1}{m}\sum_{i = 1}^{m}f_{w}\left(x^{\left(i\right)} \right) - \frac{1}{m}\sum_{i = 1}^{m} f_{w}\left( g_{\theta}\left( z^{\left( i\right)}\right)\right).\quad \quad (1)$$
+The function $f$ is the critic, i.e. a neural network, and the way this loss is implemented in PyTorch in this youtube video (cf. minutes 11:00 to 12:26) is as follows:
+critic_real = critic(real_images)
+critic_fake = critic(generator(noise))
+loss_critic = -(torch.mean(critic_real) - torch.mean(critic_fake))
+My question is: In my own experiments with the CelebA dataset, I found that the critic loss is negative, and that the quality of the images is better if the negative critic loss is higher instead of lower, so $-0.75$ for the critic loss resulted in better generated images than a critic loss of $-1.26$, for example.
+Is there an error in the implementation in the youtube video of Eq. (1) and Algorithm 1 of the WGAN paper maybe? In my opinion, the implementation in the video is correct, but I am still confused then on why I get better images when the loss is higher ...
+Cheers!
+"
+"['terminology', 'evolutionary-algorithms', 'genetic-programming', 'grammatical-evolution', 'codon']"," Title: What is a ""codon"" in grammatical evolution?Body: The term codon is used in the context of grammatical evolution (GE), sometimes, without being explicitly defined. For example, it is used in this paper, which introduces and describes PonyGE 2, a Python library for GE, but it's not clearly defined. So, what is a codon?
+"
+"['reinforcement-learning', 'reference-request', 'time-series', 'model-based-methods']"," Title: Model-based RL for time series dataBody: I have time-series data. When I take an action, it impacts the next state, because my action directly determines the next state, but it is not known what the impact is.
+To be concrete: I have $X(t)$ and $a(t-1)$, where $X(t)$ is n-dimensional time-series data and $a(t)$ is 1-dimensional time-series data. At time $t$, they together represent the observation/state space. Also, at time $t$, the agent makes a decision about $a(t)$. This decision (action) $a(t)$ directly defines the next state space $X(t+1)$ and rewards, by some function $f$ & $g$, $f(a(t), X(t)) = X(t+1)$ and $g(a(t), X(t)) = R(t+1)$.
+I have to estimate this impact, i.e. where I will end up (what will be the next state). I decided to use a model-based RL algorithm, because, from my knowledge, model-based RL does exactly this.
+Can you advise me on a good paper and Github code, to implement this project?
+As far as I have noticed, there is not much existing work on model-based RL.
+"
+"['convolutional-neural-networks', 'convolution', 'yolo', 'filters', 'convolutional-layers']"," Title: Do filters have as many layers as the depth of the input in CNNs?Body: Firstly as an example here is the architecture of YOLOv2
+
+I am trying to understand the depth of the output of a convolutional layer. For example, the first convolutional layer has the shape 3x3x32. So there are 32 filters with shape 3x3, but each filter has 3 layers, and these 3 layers convolve over the 3 layers of the input. At the end, the values of the 3 layers are summed up to generate 1 layer. For 32 filters, we get an output with 32 layers.
+If we look at the next layer, there are 64 filters of size 3x3, and each filter should have 32 layers, because the input has 32 layers. Is this inference true? If it is not, how does it work?
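+To check my understanding, here is a small sketch of the corresponding weight shapes in PyTorch (this is just an illustration, not the actual YOLOv2 code):
+import torch.nn as nn
+
+conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3)
+conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3)
+print(conv1.weight.shape)   # torch.Size([32, 3, 3, 3])  -> 32 filters, each 3x3 with 3 channels
+print(conv2.weight.shape)   # torch.Size([64, 32, 3, 3]) -> 64 filters, each 3x3 with 32 channels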
+"
+"['monte-carlo-tree-search', 'monte-carlo-methods', 'tree-search']"," Title: Unclear definition of a ""leaf"" and diverging UTC values in the Monte Carlo Tree SearchBody: I have two questions regarding the Selection and Expansion steps in the Monte Carlo Tree Search Algorithm. In order to state the questions, I recall the algorithm that I believe is the one most commonly associated with the MCTS. It is described as a repeated iteration of the following four steps:
+
+- Selection: Start from root R. Choose a leaf node L by iterating some choice rule that determines which child node to choose at each branching, UCT being a prominent choice.
+- Expansion: Create one or more offspring nodes, unless L is terminal. Choose one of them, say C.
+- Simulation: Play the game from C, randomly or according to some heuristic.
+- Backpropagation: Update rewards and number of simulations for each node on the branch R-->C.
+
+When implementing this algorithm myself, I was unclear about the following interpretation of steps 1 and 2:
+Q1. When expanding the choices at the leaf node L, do I expand all, a few or just one child? If I expand all, then the tree grows exponentially large on each MCTS step, I suspect. When I expand one or a few, then either the selection step itself becomes problematic or the term leaf does. The first problem arises, because after the expansion step the node L is no longer a leaf and can never be chosen again during the selection step and in turn all the children that were not expanded will never be probed. If, however, the node L keeps being a leaf node, contrary to graph-theoretic nomenclature, then during the selection step one would need to check at each node, whether there are non-expanded child-nodes. According to which algorithm should one then choose whether to continue down the tree or expand at this non-leaf "leaf" some more yet unexpanded children?
+Q2. Related to the first question, but slightly more in the direction of the exploitation-exploration part of the selection, I am puzzled about the UCT selection step, which again raises issues for each of the above-mentioned expansion methods: in case a few or all child nodes are created during expansion at the leaf, one is faced with the problem that some of those nodes will not be simulated in that MCTS step and subsequently will have a diverging UCT value $w_i/n_i + c \sqrt{\frac{\ln{N_i}}{n_i}}\to \infty$, since $n_i\to 0$. On the other hand, in case only one child is created, we face the issue that no UCT value can be assigned to the "unborn" children along the way. In other words, one cannot use UCT to decide whether to choose a child node according to UCT at each branching or to expand the tree at that node (since all nodes within the tree may have some unexpanded child nodes).
+"
+"['game-ai', 'minimax', 'exploration-strategies', 'heuristic-functions', 'board-games']"," Title: Strategy for playing a board game with Minimax algorithmBody: I want to build a player for the following game:
+You have a board where position 1 is your player, position 2 is the rival player, -1 is a blocked cell, and some positive value is a bonus. You can move up, down, left, or right. Also, each bonus has a timer until it disappears (number of steps). Furthermore, each move has a timeout limit. At the end of the game, when at least one of the players is stuck, we check the scores and announce the winner.
+Board example:
+ -1 -1 0 0 0 -1 -1 -1
+ -1 0 -1 -1 -1 0 0 340
+ -1 -1 0 0 0 -1 0 0
+ -1 0 0 -1 1 -1 0 -1
+ -1 0 0 -1 -1 0 0 0
+ 0 0 -1 -1 -1 0 2 -1
+ 0 -1 0 0 -1 0 0 600
+ -1 -1 0 0 -1 -1 -1 -1
+ 0 -1 0 0 0 0 -1 -1
+
+I'm using the MiniMax algorithm with a time limit to play the game. If we reach a terminal state, we return $\infty$ for a win of our player, $-\infty$ for a win of the rival, and $0$ for a tie. If we reach a specific depth, we calculate the heuristic value. If we hit the timeout somewhere in MiniMax, then we return the last calculated direction. I'm trying to figure out a good strategy to win this game, or get to a tie if no win is possible.
+What heuristic function would you define?
+What I thought - four factors:
+
+- $f_A$ - The number of steps possible from each direction from the current position.
+- $f_B$ - The analytical distance from the center.
+- $f_C=\max_{b\in Bonus}\frac{X * I}{Y}$ - where $X$ is value of the bonus, $I$ is $1$ if we can get to the bonus, before it disappears (otherwise $0$) and $Y$ is the distance between the bonus and the player.
+- $f_D$ - The distance between the players.
+The final formula:
+$$
+f(s)=0.5\cdot(9-f_A(s))+0.2\cdot f_C(s)-0.2\cdot f_D(s)-0.1\cdot f_B(s)
+$$
+
+I'm not sure whether it will be a good strategy for this game or not. How would you define the heuristic function? It should also be quick to calculate, because the game has a timeout for each move.
+In other words, what will give us the best indication that our player is going to win/lose/tie?
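+For clarity, this is roughly how I would implement the final formula (assuming the factor functions $f_A,\dots,f_D$ above are implemented elsewhere and precomputed cheaply):
+def heuristic(state, f_a, f_b, f_c, f_d):
+    # weights taken directly from the formula above
+    return (0.5 * (9 - f_a(state))
+            + 0.2 * f_c(state)
+            - 0.2 * f_d(state)
+            - 0.1 * f_b(state))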
+"
+"['neural-networks', 'feedforward-neural-networks', 'computational-complexity', 'space-complexity']"," Title: What is the space complexity for training a neural network using back-propagation?Body: Suppose that a simple feedforward neural network (FFNN) contains $n$ hidden layers, $m$ training examples, $x$ features, and $n_l$ nodes in each layer. What is the space complexity to train this FFNN using back-propagation?
+I know how to find the space complexity of algorithms. I found an answer here, but here it is said that the space complexity depends on the number of units, but I think it must also depend on the input size.
+Can someone help me in finding its worst-case space complexity?
+"
+"['generative-adversarial-networks', 'generative-model']"," Title: Why is it easier to construct adversarial examples relative to training neural networks?Body: I was having looking at this lecture by Ian Goodfellow and my doubt is around 18:00 timestamp where he explains generation of adversarial examples using FGSM.
+He mentions that there is a linear relationship between the input to the model and the output, as the activation functions are piece-wise linear with a small number of pieces. I'm not very clear on what he means by input and output. Is he referring to the inputs and outputs of a single layer, or to the input image and the final output?
+He states that the relation between the parameters (weights) of a model and the output are non-linear which is what makes it difficult to train a neural network, thus it is much easier to find an adversarial example.
+Could someone explain what is linear in what, and how linearity helps in adversarial example construction?
+EDIT: As per my understanding, the FGSM method relies on the linearity of the loss function with respect to the input image. It constructs an adversarial example by perturbing the input in the direction of the gradient of the loss function w.r.t. the image. I am not able to understand why this works.
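+For reference, this is the kind of FGSM step I mean (a minimal sketch, assuming a classifier model, an input x, a label y and a small eps):
+import torch
+import torch.nn.functional as F
+
+x.requires_grad_(True)
+loss = F.cross_entropy(model(x), y)
+loss.backward()
+x_adv = (x + eps * x.grad.sign()).detach()   # perturb in the direction that increases the loss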
+"
+['games-of-chance']," Title: What progress has been made in computerized bridge play?Body: Computer programs have been produced for games such as Chess, Go, Poker, StarCraft 2, Dota. The best ones, Deep Blue and AlphaGo , AlphaZero, Pluribus,... are now considered better than the best human players. More to the point, the computers' game results have been influencing human play.
+Apparently, computers are not yet better than human players in Bridge. There can be computer simulations of various hands and hypothetical opposing hands. But what progress have computers made in playing human players in tournaments? Have any new theories of bidding or play evolved as a result of computer-human interaction in Bridge?
+
+This question was asked at Board & Card Games Q&A; however, I think it might get a better answer here.
+"
+"['gaming', 'state-of-the-art', 'computational-complexity', 'imperfect-information', 'games-of-chance']"," Title: Why multiplayer, imperfect information, trick-taking card games are hard for AI?Body: AI reached a super-human level in many complex games such as Chess, Go ,Texas hold’em Poker, Dota2 and StarCarft2. However it still did not reach this level in trick-taking card games.
+Why is there no super-human AI playing imperfect-information, multi-player, trick-taking card games such as Spades, Whist, Hearts, Euchre and Bridge?
+In particular, what are the obstacles for making a super-human AI in those games?
+
+I think these are the reasons that make Spades hard for AI to master:
+
+- Imperfect information games pose two distinct problems: move selection and inference.
+
+- The size of the game tree isn't small, however larger games have been mastered.
+I. History size: $14!^4 = 5.7\cdot10^{43}$
+II. There are $\frac{52!}{13!^4}= 5.4\cdot10^{28}$ possible initial states.
+III. Each initial information set can be completed into a full state in $\frac{39!}{13!^3}=8.45\cdot10^{16} $ ways
+
+- Evaluation only at terminal states.
+
+- Multiplayer games:
+I. harder to prune - search algorithms are less effective
+II. opponent modeling is hard
+III. Goal choosing - several goals are available, and one needs to change goals during rounds according to the revealed information.
+
+- The agent needs to coordinate with a partner: conventions, signals.
+
+
+"
+"['transformer', 'named-entity-recognition', 'inference', 'reformer']"," Title: How can I use this Reformer to extract entities from a new sentence?Body: I have been looking at the NER example with Trax in this notebook. However, the notebook only gives an example for training the model. I can't find any examples of how to use this model to extract entities from a new string of text.
+I've tried the following:
+
+- Instantiate the model in 'predict' mode. When trying this I get the same error reported in https://github.com/google/trax/issues/556: AssertionError: In call to configurable 'SelfAttention' (<class 'trax.layers.research.efficient_attention.SelfAttention'>)
+- Instantiate the model in 'eval' mode and then running model(sentence) as I would with other models. In this case the instantiation works, but I get the following error when running the model: TypeError: Serial.forward input must be a tuple or list; instead got <class 'numpy.ndarray'>. Presumably this is because in 'eval' mode the model needs 2 entries passed in rather than one sentence.
+
+How can I use this Reformer to extract entities from a new sentence?
+"
+"['deep-rl', 'monte-carlo-tree-search', 'alphazero']"," Title: Are inputs into AlphaZero the same during the evaluate step in MCTS and during test time?Body: From the AlphaZero paper:
+
+The input to the neural network is an N × N × (M T + L) image stack that represents state using a concatenation of T sets of M planes of size N × N . Each set of planes represents the board position at a time-step t − T + 1, ..., t, and is set to zero for time-steps less than 1
+
+From the original AlphaGo Zero paper:
+
+Expand and evaluate (Figure 2b). The leaf node $s_L$ is added to a queue for neural network evaluation, $(d_i(p), v) = f_\Theta(d_i(s_L))$, where $d_i$ is a dihedral reflection or rotation selected uniformly at random from i∈[1..8]
+
+Ignoring the dihedral reflection, the formula in the original paper $f_\Theta(s_L)$ implies that only the board corresponding to $s_L$ is passed to the neural network for evaluation when expanding a node in MCTS, not including the 7 boards from the 7 previous time steps. Is this correct?
+"
+"['deep-learning', 'generative-adversarial-networks', 'generative-model']"," Title: Optimum Discriminator for label smoothed GANBody: I was reading the paper called Improved Techniques for Training GANs. And, in the one-sided label smoothing part, they said that optimum discriminator with label smoothing is
+$$ D^*(x)=\frac{\alpha \cdot p_{data}(x) + \beta \cdot p_{model}(x)}{p_{data}(x) + p_{model}(x)}$$
+I could not understand where this comes from. How do we get this result?
+Note: By the way, I know how to find the optimal discriminator in the vanilla GAN, i.e.
+$$ D^*(x) = \frac{p_{r}(x)}{p_{r}(x) + p_g(x)} $$
+"
+"['deep-rl', 'monte-carlo-tree-search', 'alphazero']"," Title: How does AlphaZero's MCTS work when starting from the root node?Body: From the AlphaGo Zero paper, during MCTS, statistics for each new node are initialized as such:
+
+${N(s_L, a) = 0, W (s_L, a) = 0, Q(s_L, a) = 0, P (s_L, a) = p_a}$.
+
+The PUCT algorithm for selecting the best child node is $a_t = argmax(Q(s,a) + U(s,a))$, where $U(s,a) = c_{puct} P(s,a) \frac{\sqrt{\sum_b N(s,b)}}{1 + N(s, a)}$.
+If we start from scratch with a tree that only contains the root node and no children have been visited yet, then this should evaluate to 0 for all actions $a$ that we can take from the root node. Do we then simply uniformly sample an action to take?
+Also, during the expand() step when we add an unvisited node $s_L$ to the tree, this node's children will also have not been visited, and we run into the same problem where PUCT will return 0 for all actions. Do we do the same uniform sampling here as well?
+"
+"['machine-learning', 'comparison', 'math', 'definitions', 'models']"," Title: What is the fundamental difference between an ML model and a function?Body: A model can be roughly defined as any design that is able to solve an ML task. Examples of models are the neural network, decision tree, Markov network, etc.
+A function can be defined as a set of ordered pairs representing a mapping from a domain to a co-domain/range, where each element of the domain is mapped to exactly one element of the co-domain.
+What is the fundamental difference between them in formal terms?
+"
+"['machine-learning', 'generative-adversarial-networks', 'regularization', 'r1-regularization']"," Title: Can someone explain R1 regularization function in simple terms?Body: I'm trying to understand the R1 regularization function, both the abstract concept and every symbol in the formula.
+According to the article, the definition of R1 is:
+
+It penalizes the discriminator from deviating from the Nash Equilibrium via penalizing the gradient on real data alone: when the generator distribution produces the true data distribution and the discriminator is equal to 0 on the data manifold, the gradient penalty ensures that the discriminator cannot create a non-zero gradient orthogonal to the data manifold without suffering a loss in the GAN game.
+$R_1(\psi ) = \frac{\gamma}{2}E_{pD(x)}\left [ \left \| \bigtriangledown D_{\psi}(x) \right \|^2 \right ]$
+
+I have a basic understanding of how GANs and back-propagation work. I understand the idea of punishing the discriminator when it deviates from the Nash equilibrium. The rest of it gets murky, even if it might be basic math. For example, I'm not sure why it matters if the gradient is orthogonal to the data.
+On the equation part, it's even more unclear. The discriminator input is always an image, so I assume $x$ is an image. Then what are $\psi$ and $\gamma$?
+(I understand this is somewhat of a basic question, but seems there are no blogs about it for us simple non-researchers, math challenged people who fail to understand the original article )
+"
+"['neural-networks', 'attention', 'seq2seq']"," Title: What's the difference between content-based attention and dot-product attention?Body: I'm following this blog post which enumerates the various types of attention.
+It mentions content-based attention where the alignment scoring function for the $j$th encoder hidden state with respect to the $i$th context vector is the cosine distance:
+$$
+e_{ij} = \frac{\mathbf{h}^{enc}_{j}\cdot\mathbf{h}^{dec}_{i}}{||\mathbf{h}^{enc}_{j}||\cdot||\mathbf{h}^{dec}_{i}||}
+$$
+It also mentions dot-product attention:
+$$
+e_{ij} = \mathbf{h}^{enc}_{j}\cdot\mathbf{h}^{dec}_{i}
+$$
+To me, it seems like these are only different by a factor. If we fix $i$ such that we are focusing on only one time step in the decoder, then that factor is only dependent on $j$. Specifically, it's $1/\lVert\mathbf{h}^{enc}_{j}\rVert$ (times the constant $1/\lVert\mathbf{h}^{dec}_{i}\rVert$).
+So we could state: "the only adjustment content-based attention makes to dot-product attention, is that it scales each alignment score inversely with the norm of the corresponding encoder hidden state before softmax is applied."
+What's the motivation behind making such a minor adjustment? What are the consequences?
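+To make the comparison concrete, here is a toy side-by-side computation of the two scoring functions (the sizes and values are made up):
+import numpy as np
+
+rng = np.random.default_rng(0)
+h_enc = rng.normal(size=(5, 8))   # 5 encoder hidden states
+h_dec = rng.normal(size=8)        # one decoder hidden state (i fixed)
+
+def softmax(x):
+    e = np.exp(x - x.max())
+    return e / e.sum()
+
+dot_scores = h_enc @ h_dec
+cos_scores = dot_scores / (np.linalg.norm(h_enc, axis=1) * np.linalg.norm(h_dec))
+print(softmax(dot_scores))
+print(softmax(cos_scores))   # generally different weights, since each score is rescaled by its own norm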
+
+Follow up question:
+What's more, in Attention is All you Need they introduce the scaled dot product, where they divide by a constant factor (the square root of the size of the encoder hidden vector) to avoid vanishing gradients in the softmax. Any reason they don't just use cosine distance?
+"
+"['machine-learning', 'weights', 'multi-label-classification', 'imbalanced-datasets']"," Title: What does ""adding class weights for an imbalanced dataset"" mean in the case of multi-label classification?Body: Suppose I have the following toy data set:
+
+Each instance has multiple labels at a time.
+You can see I have 2 instances for Label2, but only one instance for each of the other labels. It means that we have a class imbalance issue.
+I read about adding class weights for an imbalanced dataset. However, I could not understand how it actually works and why it is beneficial.
+Can anyone explain this method generally, as well as according to my given toy data set?
+In addition to that, how do we handle these missing labels (nan)?
+"
+"['convolutional-neural-networks', 'gradient-descent']"," Title: Is there anything that ensures that convolutional filters end up different from one another?Body: I found this question very interesting, and this is a follow up on it.
+Presumably, we'd want all the filters to converge towards some complementary set, where each filter fills as large a niche as possible (in terms of extracting useful information from the previous layer), without overlapping with another filter.
+A quick thought experiment tells me (please correct me if I'm wrong) that if two filters are identical down to maximum precision, then without adding in any other form of stochastic differentiation between them, their weights will be updated in the same way at each step of gradient descent during training. Thus, it would be a very bad idea to initialise all filters in the same way prior to training, as they would all be updated in exactly the same way (see footnote 1).
+On the other hand, a quick thought experiment isn't enough to tell me what would happen to two filters that are almost identical, as we continue to train the network. Is there some mechanism causing them to then diverge away from one another, thereby filling their own "complementary niches" in the layer? My intuition tells me that there must be, otherwise using many filters just wouldn't work. But during back-propagation, each filter is downstream, and so they don't have any way of communicating with one another. At the risk of anthropomorphising the network, I might ask "How do the two filters collude with one another to benefit the network as a whole?"
+
+Footnotes:
+
+- Why do I think this? Because the expression for the partial derivative of the cost with respect to the $k$th filter's weights, $\partial C/\partial W^k$, will be identical for all $k$. From the perspective of back-propagation, all paths through the filters look exactly the same. (A small check of this is sketched below.)
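+To convince myself of footnote 1, I ran something like this small PyTorch check (the loss here is just a placeholder):
+import torch
+import torch.nn as nn
+
+torch.manual_seed(0)
+conv = nn.Conv2d(1, 2, kernel_size=3, bias=False)
+with torch.no_grad():
+    conv.weight[1].copy_(conv.weight[0])   # make filter 1 an exact copy of filter 0
+
+x = torch.randn(4, 1, 8, 8)
+loss = conv(x).pow(2).mean()               # placeholder loss that treats both channels symmetrically
+loss.backward()
+print(torch.allclose(conv.weight.grad[0], conv.weight.grad[1]))   # True: both filters receive identical gradients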
+
+"
+"['reinforcement-learning', 'dqn', 'markov-decision-process']"," Title: How can I model a problem as an MDP if the agent does not follow the successive order of states?Body: In my problem, the agent does not follow the successive order of states, but selects with $\epsilon$-greedy the best pair (state, action) from a priority queue. More specifically, when my agent goes to a state $s$ and opens its available actions $\{ a_i \}$, then it estimates each $(s,a)$ pair (regression with DQN) and stores it into the queue. In order for my agent to change to state $s'$, it picks the best pair from the queue instead of following one of the available actions $\{ a_i \}$ of $s$. I note that a state has a partially-different action set from the others.
+However, in this way, how can I model my MDP if my agent does not follow the successive order of states?
+More specifically, I have a focused crawler that has an input of a few seeds URLs. I want to output as many as possible relevant URLs with the seeds. I model the RL framework as follows.
+
+- State: the webpage,
+- Actions: the outlink URLs of the state webpage,
+- Reward: from external source I know if the webpage content is relevant.
+
+The problem is that, while crawling, if the agent keeps going forward by following the successive state transitions, it can fall into crawling traps or local optima. That is the reason why a priority queue plays such an important role in crawling. The crawling agent no longer follows the successive order of state transitions. Each state-action pair is added to the priority queue with its estimated action value. At each step, it selects the most promising state-action pair among all pairs in the queue. I note that each URL action can be estimated taking into account the state-webpage from which it was extracted.
+"
+"['machine-learning', 'terminology', 'definitions', 'embeddings']"," Title: In the machine learning literature, what does it mean to say that something is ""embedded"" in some space?Body: In the machine learning literature, I often see it said that something is "embedded" in some space. For instance, that something is "embedded" in feature space, or that our data are "embedded" in dot product space, etc. However, I've never actually seen an explanation of what this is supposed to mean. So what does it actually mean to say that something is "embedded" in some space?
+"
+"['deep-learning', 'recurrent-neural-networks']"," Title: Training speed in GPU vs CPU for LSTMBody: I was experimenting seq2seq model which was the bi-LSTM encoder/decoder with attention. When I compared the training times on GPU vs CPU while varying the batch size, I got
+
+
+- CPU on the small batch size (32) is fastest. It's even ~1.5 times faster than the GPU using the same batch size.
+- When I increase the batch size (up to 2000), the GPU becomes faster than the CPU due to the parallelization.
+
+(2) looks reasonable to me. However, I am a bit perplexed by observation (1). The average sequence length is around 15~20. Even with a batch size of 32, I expected the GPU to be faster, but it turned out not to be. I used PyTorch's LSTM.
+Does it look normal? In RNN-style seq2seq, could CPU be faster than GPU?
+"
+"['neural-networks', 'convolutional-neural-networks']"," Title: Output volume proof for convolutional neural networkBody: As I've been dabbling into the sliding window concept, I stumbled on a question that asked me to find the number of windows needed on a 1D image of $W$ size, knowing the window size $K$ and the stride $S$.
+As much as I tried, I couldn't find a formula by myself (the closest I got was this one: $N=\frac{W + x(K-S)}{K}$, where $x$ was the number of overlapping rectangle zones, which seemed to be $x=N-1$, but the recurrence wasn't what I was looking for and it could be wrong, as I was reasoning through induction).
+I finally found the right formula on the Internet (this one: $N=\frac{W-K+2P}{S}+1$, with $P$ the padding, though my problem didn't need one), but I can't find the proof of it.
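+As a quick sanity check of that formula with my own small numbers: for $W=7$, $K=3$, $S=2$, $P=0$, the windows cover positions $1\text{-}3$, $3\text{-}5$ and $5\text{-}7$, and indeed $N=\frac{7-3+2\cdot 0}{2}+1=3$.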
+Is there any place where I could find the proof ?
+"
+"['reinforcement-learning', 'rewards', 'multi-agent-systems']"," Title: Is there multi-agent reinforcement learning model in which (some of the) reward is given by other agent and not by the external environment?Body: The traditional setting of multiagent reinforcement learning (MARL) is the mode in which there is set of agents and external environment. And the reward is given to each agent - individually or collectively - by the external environment.
+My question is: is there a MARL model in which the reward is given by one agent to another agent, meaning that one agent incurs a cost and the other agent earns revenue (or maybe even a profit)?
+Effectively, that means distributed supervision: only some agents face the environment with the real reward/supervision, and this supervision is then more or less effectively propagated to other agents that learn/do their own specialized tasks, which are parts of the collective task executed/solved distributively in MARL.
+"
+"['weights', 'logistic-regression']"," Title: How do we interpret the images of weights in logistic regressionBody: The following images are
+a) The weights of a logistic regression model trained on MNIST.
+b) The sign of the weights of a logistic regression
+How do these images represent the weights?
+Would be grateful for any help.
+Source of the research paper
+
+"
+"['computer-vision', 'image-segmentation']"," Title: How does general image background removal AI work?Body: I'm well aware of the inner workings of CNN models for object detection, and although I've not worked on a semantic segmentation problem I can imagine how it works.
+With these types of models, we need to say "segment out the humans", or "segment out the X". But what about when I say something like "segment out the subject of this photo, whatever it happens to be". For example, see this service: https://removal.ai/
+Without too much imagination I might guess that they apply a multiclass segmentation model and just show any foreground pixels, no matter what class they belong to. So we'd hope that the subject is in one of the classes that the model was trained for, and that there are no other class instances in the image that shouldn't be captured. But is there a more general way?
+"
+"['pytorch', 'transformer']"," Title: Transformers: How to use the target mask properly?Body: I try to apply Transformers to an unusual use case - predict the next user session based on the previous one. A user session is described by a list of events per second, e.g. whether the user watches a particular video, clicks a specific button, etc. Typical sessions are around 20-30 seconds, I pad them to 45 seconds. Here's a visual example of 2 subsequent sessions:
+
+x axis is time in seconds, y axis is the list of events (black line divides the 2 sessions). I extend the vocabulary with 2 additional tokens - start and end of a session (<sos> and <eos>), where <sos> is a one-hot vector at the very beginning and <eos> - a similar vector at the end of the session (which makes this long red line).
+Now I use these extended vectors of events as embeddings and want to train a Transformer model to predict the next events in the current session based on previous events in this (target) session and all events in the previous (source) session. So it's pretty much like seq2seq autoregressive models, but in a somewhat unusual setting.
+Here's the problem. When I train a Transformer using the built-in PyTorch components and square subsequent mask for the target, my generated (during training) output is too good to be true:
+
+Although there's some noise, many event vectors in the output are modeled exactly as in the target. After checking that the train-val-test split is correct, my best guess is that the model cheats by attending to the same time step in the target, which the mask should have prevented. The mask is (5x5 version for brevity):
+[[0., -inf, -inf, -inf, -inf],
+ [0., -inf, -inf, -inf, -inf],
+ [0., 0., -inf, -inf, -inf],
+ [0., 0., 0., -inf, -inf],
+ [0., 0., 0., 0., -inf]]
+
+Note that since I use <sos> in both - source and target - mask[i, i] is set to -inf (except for mask[0, 0] for numerical reasons), so the output timestamp i should not attend to the target timestamp i.
+The code for the model's forward method:
+def forward(self, src, tgt):
+ memory = self.encoder(src)
+ out = self.decoder(tgt, memory, self.tgt_mask.type_as(tgt))
+ out = torch.sigmoid(out)
+ return out
+
+I also tried to avoid the target mask altogether and set it to all -inf (again, except for the first column for numerical stability), but the result is always the same.
+Am I using the mask the wrong way? If the mask looks fine, what other reasons could lead to such a "perfect" result?
+
+After shifting the target to the right as suggested in the accepted answer I get the following result:
+
+Which is much more realistic. One suspicious thing is that out[t] now resembles tgt[t - 1], but it can be explained by the fact that the user state tends to be ""sticky"", e.g. if a user watches a video at t - 1, most likely he will watch it at t as well.
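+For reference, the shifted-target setup that produced the plot above looks roughly like this (a minimal sketch; the variable names are mine, not from the PyTorch tutorial):
+import torch
+
+def causal_mask(size):
+    # standard square subsequent mask: 0 on and below the diagonal, -inf above it
+    return torch.triu(torch.full((size, size), float('-inf')), diagonal=1)
+
+# tgt has shape (seq_len, batch, features) and includes <sos> at position 0 and <eos> at the end
+# decoder_input = tgt[:-1]                                   # <sos> ... second-to-last step
+# decoder_target = tgt[1:]                                   # first step ... <eos>
+# out = model(src, decoder_input, causal_mask(decoder_input.size(0)))
+# loss = criterion(out, decoder_target)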
+"
+"['convolutional-neural-networks', 'image-recognition']"," Title: Considerations when doing image classification where the object is not the subjectBody: I've come across two types of image classification tasks
+
+- cat/dog classification: the whole picture is either a cat or a dog. Simple.
+- 'this image contains a cat' classification: there's a whole chaotic scene, and the image may contain a cat nestled in there somewhere.
+
+Type 2 seems to be way more prevalent in real-life applications. Here are just some examples:
+
+- Determining the sex of an insect. Maybe the male and female look pretty much the same, but the male has a small bump in some location that takes up a tiny part of the image.
+- Determining the presence of an animal call in an audio spectrogram.
+- Finding defects in road surfaces.
+
+My question is: for task type 2, what are some key modifications we'd make to the normal approach of fine-tuning an ImageNet-pretrained architecture like ResNet? Shouldn't the architecture be modified somehow to be better fitted to task type 2?
+Before someone mentions using object detection algorithms I'd like to add the rule that we only have global image labels, not bounding box annotation or co-ordinates of any sort.
+"
+"['machine-learning', 'reinforcement-learning', 'intelligent-agent', 'checkers', 'credit-assignment-problem']"," Title: learning rate and credit assignment problem in checkersBody: I have implemented an AI agent to play checkers based on the design written in the first chapter of
+Machine Learning, Tom Mitchell, McGraw Hill, 1997.
+We train the agent by letting it play against itself.
+I wrote the prediction to get how good a board is for white, so when the white plays he must choose the next board with the maximum value, and when black plays it must choose the next board with the minimum value.
+Also, I let the agent explore other states by making it choose a random board among the valid next boards, I let that probability to be equal to $0.1$.
+The final boards will have training values:
+
+- 100 if this final board is a win for white.
+- -100 if this final board is a loss for white.
+- 0 if this final board is a draw.
+
+The intermediate boards will have a training value equal to the prediction of the next board where it is white's turn.
+The model is based on a linear combination of some features (see the book for full description).
+I start by initializing the parameters of the model to random values.
+When I train the agent, it always loses against itself or draws in a stupid way, but the error converges to zero.
+I was thinking that maybe we should make the learning rate smaller (like 1e-5), and when I do that, the agent learns in a better way.
+I think this happens because of the credit assignment problem: a good move may appear in a losing game and therefore be considered a losing move, so white will never choose it when it plays; but when we make the learning rate very small, the presence of a good move in a losing game changes its value only by a very small amount, and that good move should appear more often in winning games, so its value converges to the right value.
+Is my reasoning correct? And if not, what is happening?
+"
+"['minimax', 'alpha-beta-pruning']"," Title: Does the alpha/beta value of parent nodes change if the alpha beta value of the child node changes?Body: I want to do alpha-beta pruning on this tree:
+
+- Consider nodes J and K. K is the max. Therefore, node D has an alpha value of 20, node B has a beta value of 20.
+
+- Move to Node E. Pass the beta value of 20 to node E. Node L has an alpha value of 30, therefore, at this point 30 (alpha) > 20 (beta) and we can prune the E to M branch.
+
+- Now is my question. My original beta value at node B was 20, and the alpha value passed up to node A was 20. Then, in step 2, I changed the alpha value to be 30. Do I then change the beta value at node B to be 30, and the alpha value at node A to be 30 (and therefore pass 30 as the alpha value to node C)? Or do I keep the original value of 20 at nodes B, A and C? (The exact pseudocode I have been following is included below, just for reference.)
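+For reference, the version of alpha-beta I have been following is the standard one below (leaves are plain numbers here); I add it only to fix what I mean by alpha and beta being passed down the tree:
+def alphabeta(node, alpha, beta, maximizing):
+    # node is either a number (a leaf value) or a list of child nodes
+    if not isinstance(node, list):
+        return node
+    if maximizing:
+        best = float('-inf')
+        for child in node:
+            best = max(best, alphabeta(child, alpha, beta, False))
+            alpha = max(alpha, best)   # alpha is a local argument: tightening it here only
+            if alpha >= beta:          # affects the remaining siblings, not the ancestors
+                break
+        return best
+    best = float('inf')
+    for child in node:
+        best = min(best, alphabeta(child, alpha, beta, True))
+        beta = min(beta, best)
+        if alpha >= beta:
+            break
+    return best
+
+print(alphabeta([[10, 20], [30, 5]], float('-inf'), float('inf'), True))   # prints 10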
+
+
+
+"
+"['deep-learning', 'objective-functions', 'generative-adversarial-networks', 'generative-model', 'cross-entropy']"," Title: Where is the mistake in my derivation of the GAN loss function?Body: I was pondering on the loss function of GAN, and the following thing turned out
+\begin{aligned}
+ L(D, G)
+& = \mathbb{E}_{x \sim p_{r}(x)} [\log D(x)] + \mathbb{E}_{x \sim p_g(x)} [\log(1 - D(x))] \\
+& = \int_x \bigg( p_{r}(x) \log(D(x)) + p_g (x) \log(1 - D(x)) \bigg) dx \\
+& =-\left[CE(p_r(x), D(x))+CE(p_g(x), 1-D(x)) \right] \\
+\end{aligned}
+Where CE stands for cross-entropy. Then, by using law of large numbers:
+\begin{aligned}
+L(D, G)
+& = \mathbb{E}_{x \sim p_{r}(x)} [\log D(x)] + \mathbb{E}_{x \sim p_g(x)} [\log(1 - D(x))] \\
+& =\lim_{m\to \infty}\frac{1}{m}\sum_{i=1}^{m}\left[1\cdot \log(D(x^{(i)}))+1\cdot \log(1-D(x^{(i)}))\right]\\
+& =- \lim_{m \to \infty} \frac{1}{m}\sum_{i=1}^{m} \left[CE(1, D(x))+CE(0, D(x))\right]
+\end{aligned}
+As you can see, I got a very strange result. This should be wrong intuitively, because in the last equation the first part is for real samples and the second is for generated samples. However, I am curious: where are the mistakes?
+(Please explain with math).
+"
+"['machine-learning', 'natural-language-processing', 'algorithm']"," Title: How to design a NLP algorithm to find a food item in menu card list?Body: I am new to NLP and AI in general. I am just expecting springboard information so that I can skip all the introduction to NLP websites. I have just started studying NLP and want to know how to go about solving this problem. I am creating a chatbot that will take voice input from customers ordering food at restaurants. The customer input I am expecting as;
+
+I want to order Chicken Biryani
+
+
+Can I have a Veg Pizza, please
+
+
+Coca-cola etc
+
+I want to write an algorithm that can separate the name of the food item from the user input and compare it with the list of food items in my menu card and come up with the right item.
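+To make the kind of matching I have in mind concrete, here is the rough shape I am imagining (just a sketch using Python's standard difflib; the menu list is made up):
+import difflib
+
+MENU = ['Chicken Biryani', 'Veg Pizza', 'Coca-Cola', 'Paneer Tikka']   # made-up menu
+
+def find_menu_item(utterance, cutoff=0.6):
+    words = utterance.lower().replace(',', '').split()
+    # every contiguous span of words in the utterance, e.g. 'veg pizza', 'a veg pizza', ...
+    spans = [' '.join(words[i:j]) for i in range(len(words)) for j in range(i + 1, len(words) + 1)]
+    best_item, best_score = None, 0.0
+    for item in MENU:
+        for span in spans:
+            score = difflib.SequenceMatcher(None, span, item.lower()).ratio()
+            if score > best_score:
+                best_item, best_score = item, score
+    return best_item if best_score >= cutoff else None
+
+print(find_menu_item('Can I have a Veg Pizza, please'))   # -> 'Veg Pizza'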
+I am new to NLP and am studying it online for this particular project. I can do the required coding; I just need help with the overall algorithm or a rough flow chart. It will save me a tremendous amount of time. Thanks.
+"
+"['probability-distribution', 'maximum-likelihood']"," Title: What is emperical distribution in MLE?Body: I was reading the book Deep Learning by Ian Goodfellow. I had a doubt in the Maximum likelihood estimation section (Pg 131). I understand till the Eq 5.58 which describes what is being maximized in the problem.
+$$
+ \theta_{\text{ML}} = \text{argmax}_{\theta} \sum_{i=1}^{m} \log(p_{\text{model}}(x^{(i)};\theta))
+$$
+However, the next equation, 5.59, restates this as:
+$$
+\theta_{\text{ML}} = \text{argmax}_{\theta} E_{x \sim \hat{p}_{\text{data}}}(\log(p_{\text{model}}(x;\theta))
+$$
+where $$\hat{p}_{\text{data}}$$ is described as the empirical distribution defined by the training data.
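+My current reading (please correct me if this interpretation is wrong) is that the empirical distribution simply puts probability mass $1/m$ on each training point,
+$$\hat{p}_{\text{data}}(x) = \frac{1}{m}\sum_{i=1}^{m} \mathbf{1}\{x = x^{(i)}\},$$
+so that the expectation in Eq. 5.59 is just the sample average $\frac{1}{m}\sum_{i=1}^{m}\log p_{\text{model}}(x^{(i)};\theta)$, i.e. Eq. 5.58 divided by $m$ (which does not change the argmax).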
+Could someone explain what is meant by this empirical distribution? It seems to be different from the distribution parametrized by $\theta$, which is denoted by $p_{\text{model}}$.
+"
+"['neural-networks', 'backpropagation']"," Title: Backpropogation rule for the output layer of a multi-layer network - What does the rule do in ambiguous cases?Body: This is the back-propogation rule for the output layer of a multi-layer network:
+$$W_{jk} := W_{jk} - C \dfrac{\partial E}{\partial W_{jk}}$$
+What does this rule do in the more ambiguous cases such as:
+(1) The output of a hidden node is near the middle of a sigmoid curve?
+(2) The graph of error with respect to weight is near a maximum or minimum?
+"
+"['algorithm', 'genetic-algorithms']"," Title: Are there any disadvantages to using a variable population size in genetic algorithms?Body: When implementing a genetic algorithm, I understand the basic idea is to have an initial population of a certain size. Then, we pick two individuals from a population, construct two new individuals (using mutation and crossover), repeat this process X number of times and the replace the old population with the new population, based on selecting the fittest.
+In this method, the population size remains fixed. In reality in evolution, populations undergo fluctuations in population sizes (e.g. population bottlenecks, and new speciations).
+I understand the disadvantages of variable populations sizes from a biological view are, for example, a bottleneck will reduce the population to minimal levels, so not much evolution will occur. Are there disadvantages to using variable population sizes in genetic algorithms, from a programming perspective? I was thinking the numbers per population could follow a distribution of some sort so they don't just randomly fluctuate erratically, but maybe this does not make sense to do.
+"
+"['algorithm', 'genetic-algorithms']"," Title: How to detect that the fitness landscape of a genetic algorithm is changing over time?Body: I understand that in each generation of a genetic algorithm, that generation must re-prove it's fitness (and then the fittest of that population is taken for the next population).
+In this case, I guess it's a presumption that if you take the fittest of each generation, and use them to form the basis of the next generation, that your population as a whole is getting fitter with time.
+But algorithmically, how can I detect this? If there's no end goal known, then I can't measure the error/distance from goal? So how can you tell how much each generation is becoming fitter by?
+"
+"['natural-language-processing', 'computer-vision', 'time-series']"," Title: Time series analysis using computer vision principlesBody: I'm just starting to explore topics within computer vision and curious if there are any concepts in that area that could be applied to segmenting multivariate time series with the goal of grouping individual data points similar to how a human might do the same. I know that there are a number of time series segmentation methods, but in-depth explanations of multivariate methods are more scarce and it seems like somewhat of an underdeveloped topic overall. Since segmentation is such a fundamental part of CV and is inherently multidimensional, I'm wondering if concepts there can be modified to apply to time series.
+Specifically, I'd like to be able to segment a time series and reformulate a prediction problem as something closer to a language processing problem. The process would look something like this:
+
+- Segment a multivariate time series into near-homogenous segments of variable length. Some degree of preprocessing might be required but I can worry about that separately.
+- Encode the properties of each segment based on summary statistics (e.g., mean, variance, derivative values, etc.) such that the segments fall into discrete buckets.
+- Each bucket will represent a "word" and the goal of the model will be to predict the next word given a series of words, i.e., the next segment given a series of segments.
+
+In a few days of reading about CV, it seems like there's a ton to learn. If there are traditional time series segmentation techniques that are more suitable, that would be of interest, but I'd still be curious about a CV approach since that approach likely better aligns with how a person might look at a graph to identify segments.
+"
+"['reinforcement-learning', 'monte-carlo-methods', 'return']"," Title: When updating the state-action value in the Monte Carlo method, is the return the same for each state-action pair?Body: Referring to this post, in the following formula to update the state-action value
+$$ Q(s,a) = Q(s,a) + \alpha (G − Q(s,a)),$$
+is the value of $G$ (the return) the same for every state-action $(s,a)$ pair?
+I am a little confused about this point, so I will thank any clarification.
+"
+"['neural-networks', 'convolutional-neural-networks', 'generative-adversarial-networks', 'dc-gan']"," Title: How is the latent vector transforming to a feature map in DCGAN (Generator structure)?Body: I'm working on the code trying to generate new images using DCGAN model. The structure of my code is from the PyTorch tutorial here.
+I'm a bit confused trying to find and understand how the latent vector is transforming to the feature map from the Generator part (this line of code is what I'm interested in) :
+nn.ConvTranspose2d( nz, ngf * 8, 4, 1, 0, bias=False)
+
+It means the latent vector (nz) of shape 100x1 is transformed into 512 matrices of size 4x4 (ngf=64). How does this happen? As a result, I can't even work out for myself how the length of the latent vector influences the generated image.
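+A quick way I found to at least see the shapes involved (just a sanity-check snippet, not from the tutorial):
+import torch
+import torch.nn as nn
+
+z = torch.randn(16, 100, 1, 1)                                # latent vectors viewed as 1x1 'images' with 100 channels
+layer = nn.ConvTranspose2d(100, 64 * 8, 4, 1, 0, bias=False)
+print(layer(z).shape)                                         # torch.Size([16, 512, 4, 4])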
+P.S. The left part of the Generator structure is clear.
+The only idea that I got is :
+
+- E.g. there is a latent vector of size 100 as input (100 random values).
+- We interact each value of the input latent vector with a 4x4 kernel.
+- In this way we get 100 different 4x4 matrices (each matrix is for one value from latent vector).
+- Then we summarize all these 100 matrices and get one final matrix - one feature map.
+- We get necessary number of feature maps taking different kernels.
+
+Is this right? Or does it happen in another way?
+"
+"['machine-learning', 'deep-learning', 'breadth-first-search', 'depth-first-search']"," Title: A way to leverage machine learning to reduce DFS/BFS search time on a tree graph?Body: I'm not very knowledgeable in this field but I'm wondering if any research or information already exists for the following situation:
+I have some data that may or may not look similar to each other. Each represents a node that represents a vector of size 128.
+
+And they are inserted into a tree graph according to similarity.
+
+and a new node is created with an edge connecting the most similar vertex node found in the entire tree graph.
+Except I'm wasting a lot of time searching through the entire graph to insert each new node when I could narrow down my search according to previous information. Imagine a trunk node saying "Oh, I saw a node like you before, it went down this branch. And don't bother going down that other branch." I could reduce the cost of searching the entire tree structure if there was a clever way to remember if a similar node went down a certain path.
+I've thought about some ways to use caching or creating a look-up table but these are very memory intensive methods and will become slower the longer the program runs on. I have some other ideas I am playing around with but I was hoping someone could point me in the right direction before I started trying out weird ideas.
+Edit: added a better (more realistic) graph picture
+"
+"['machine-learning', 'perceptron']"," Title: What are the general inequalities needed for the logic gate perceptrons?Body: I'm trying to understand how the logic gates (e.g. AND, OR, NOT, NAND) can be built into single-layer perceptrons.
+I understand specific examples of weights and thresholds for the gates, but I'm stuck on understanding how to translate these to general inequalities for these gates.
+Is my reasoning correct in the table below, or are there cases where these general inequalities do not make sense for this problem? Am I missing other logic gates that can be done in a similar fashion? (e.g., I know XOR cannot)?
+In the table below, a perceptron has two input nodes, and one output node. W1 and W2 are the weights (real values) on those input nodes. T is the threshold, above which, the perceptron will fire. I have come up with example values that would work for each logic gate (e.g., for the AND gate, a perceptron with two input weights, W1 = 1 and W2 = 1, and a threshold = 2, will fire, and I'm trying to understand more generally, what is the equation needed for each gate).
+
+
+
+
+Gate |
+Example W1, W2 |
+Threshold |
+General inequalities |
+
+
+
+
+AND |
+1,1 |
+2 |
+W1 + W2 >= t, where W1, W2 > 0 |
+
+
+OR |
+1,1 |
+1 |
+W1 > t or W2 > t |
+
+
+NOT |
+-1 |
+-0.49 |
+W1 > 2(t) |
+
+
+NAND |
+-2,-2 |
+3 |
+W1 + W2 <= t |
+
+
+
+
"
+"['neural-networks', 'multilayer-perceptrons']"," Title: How to draw a 3-dimensonal shape's neural networkBody: I am reading an exam question about NN (that I cannot publish, for copyright reasons). The question says: 'Construct a rectangle in 2D space. Define the lines, and then define the weights and threshold that will only fire for points inside the rectangle.'
+I understand that this is an example of a rectangle drawn as a NN (i.e. this NN will fire, if the point is in the rectangle, where the rectangle is defined by the lines X = 4; X = 1, Y = 2, Y = 5).
+
+In this diagram, since it's a rectangle, the equations of the line in this example are x = 4, x =1, y=2, y=5, so I left the other weights out (as they equal to 0).
+I'm now wondering how this could be translated to a 3D structure. For example, if a 3D shape was defined by the points:
+(0,0,0), (0,1,0), (0,0,1), (0,1,1), (1,0,0), (1,1,0), (1,0,1), (1,1,1)
+I wanted to draw a hyperplane that separates the corner point (1,1,1) from the other points in this cube. Can this 3D shape be drawn similarly to below (maybe it would be easier to understand, if there were other numbers except 1 and 0 in the co-ordinates)?
+Would I draw this with 3 nodes in the input layer, still one node in the output layer, I just don't understand what the hidden layer should look like? Would it have 24 nodes? One for each surface of the cube, with relevant X and Y values?
+"
+"['neural-networks', 'gradient-descent', 'calculus']"," Title: What is the derivative of a specific output with respect to a specific weight?Body: If I have a neural network, and say the 6th output node of the neural network is:
+$$x_6 = w_{16}y_1 + w_{26}y_2 + w_{36}y_3$$
+What does that make the derivative of:
+$$\frac{\partial x_6}{\partial w_{26}}$$
+I guess that it's how is $x_6$ changing with respect to $w_{26}$, so, therefore, is it equal to $y_2$ (since the output, $y_2$, will change depending on the weight added to the input)?
+"
+"['backpropagation', 'objective-functions', 'gradient-descent', 'linear-algebra', 'softmax']"," Title: Why is the derivative of the softmax layer shaped differently than the derivative of other neurons?Body: If the derivative is supposed to give the rate of change of a function at that point, then why is the derivative of the softmax layer (a vector) the Jacobian matrix, which has a different shape than the output/softmax vector? Why is the shape of the softmax vector's derivative (the Jacobian) different than the shape of the derivative of the other activation functions, such as the ReLU and sigmoid?
+"
+"['neural-networks', 'computational-learning-theory', 'multilayer-perceptrons', 'function-approximation', 'universal-approximation-theorems']"," Title: What is the number of neurons required to approximate a polynomial of degree n?Body: I learned about the universal approximation theorem from this guide. It states that a network even with a single hidden layer can approximate any function within some bound, given a sufficient number of neurons. Or mathematically, ${|g(x)−f(x)|< \epsilon}$, where ${g(x)}$ is the approximation, ${f(x)}$ is the target function and is $\epsilon$ is an arbitrary bound.
+A polynomial of degree $n$ has at maximum $n-1$ turning points (where the derivative of the polynomial changes sign). With each new turning point, the approximation seems to become more complex.
+I'm not necessarily looking for a formula, but I'd like to get a general idea on how to figure out the sufficient number of neurons is for a reasonable approximation of a polynomial with a single layer of the neural network (you may consider "reasonable" to be $\epsilon = 0.0001$). To ask in other words, how would adding one more neuron affect the model's ability to express a polynomial?
+"
+"['machine-learning', 'function-approximation', 'perceptron']"," Title: What is the smallest upper bound for a number of functions in a range that are computable by a perceptron?Body: I'm reading this book chapter, and I'm looking at the questions on the last page. Can someone explain question 2 on the last page to me, or show me an example of a solution so I can understand it?
+The question is:
+
+Consider a simple perceptron with $n$ bipolar inputs and threshold $\theta = 0$. Restrict each of the weights to have the value $−1$ or $1$. Give the smallest upper bound you can find for the number of functions from $\{−1, 1 \}^n$ to $\{−1, 1\}$ which are computable by this perceptron. Prove that the upper bound is sharp, i.e., that all functions are different.
+
+What I understand:
+
+- A perceptron is a very simple network with $n$ input nodes, a weight assigned to each input nodes, which are then summed to be above/not above a threshold ($\theta$).
+
+- In this example, there are $n$ input nodes, and the value of each input node is either $−1$ or $1$. And we want to map them to outputs of either $−1$ or $1$.
+
+
+What I'm confused about:
+Is it asking how many different ways can you map input values of $\{−1, 1\}$ to $\{−1, 1\}$ output?
+For example, is the answer, where each tuple in this list is input1, input2 and label, as described above:
+$$[(1,1,1), (1,1,-1), (-1,1,-1), (-1,1,1), (1,-1,1), (1,-1,-1), (-1,-1,-1)]$$
+"
+"['neural-networks', 'convolutional-neural-networks', 'natural-language-processing', 'reference-request', 'embeddings']"," Title: Is there any research work that shows that we should explicitly mark the word boundaries for 1D CNNs?Body: I'm doing character embedding for NLP tasks using one-dimensional convolutional neural networks (see Chiu and Nichols (2016) for the motivation). I haven't found any empirical evidence of whether or not marking the word boundaries makes a difference. As an example, a 1-D CNN with kernel size 2 would take "the"
as input and use {"th", "he"}
in its filters. But if I explicitly marked the boundaries it would give me {"t", "th", "he", "h"}
.
+Is there a go-to paper or project that definitively answers this question?
+"
+"['reinforcement-learning', 'state-spaces']"," Title: Does it make sense to include constant states into reinforcement learning formulation?Body: Does it make sense to incorporate constant states in the Markov Decision Process and employ a reinforcement learning algorithm to solve it?
+For instance, for applications of personalization, I would like to include users' attributes, like age and gender, as states, along with other changing states. Does it make sense to do so? I have a sequential problem, so I assume the contextual bandit does not fit here.
+"
+"['deep-learning', 'computer-vision', 'image-recognition', 'image-segmentation', 'domain-adaptation']"," Title: Training a classifier on different datasets with different image conditions for different labels causes the model to infer using the backgroundBody: I have an interesting problem related to training the model on two different datasets for the target feature on images taken on different conditions, which might affect the model's ability to generalize.
+To explain I will give examples of images from the two different datasets.
+Dataset 1 sample image:
+
+Dataset 2 sample image:
+
+As you see the images are captured in two completely different conditions. I am afraid that the model will infer from the background information that it shouldn't use to predict the plant diseases, what makes the problem worse is that some plant diseases only exist in one dataset and not the other, if all the diseases are contained in both datasets then I wouldn't think there would be a problem.
+I am assuming I need a way to unify the background by somehow detecting the leaf pixels in the images and unifying the background in a way that makes the model focuses on the important features.
+I've tried some segmentation methods but the methods I tried don't always give desirable results for all the images.
+What is the recommended approach here? All help is appreciated
+Further explanation of the problem.
+Ok so I will explain one more thing, my model on the two datasets works fine when training and validating, It got a 94% accuracy.
+The problem is, even though the model performs well on the datasets I have, I am afraid that when I use the model on real-life conditions (say someone capturing an image with their phone) the model will be heavily biased towards predicting labels in the second dataset (the one with actual background) since the background is similar and it somehow associated the background with the inference process.
+I have tried downloading a leaf image of a label that is contained on the first dataset ( the one with the white background), where the image had a real-life background, the model as expected failed to predict the correct label and predicted a label contained in the second dataset, I am assuming it was due to the background. I have tried this experiment multiple times, and the model consistently failed in similar scenarios
+I used some Interpretability techniques as well to visualize the important pixels and it seems like the model is using the background for inference, but I am not an expert in interpreting these graphs so I am not 100% sure.
+
+
+
+
+"
+"['machine-learning', 'reinforcement-learning', 'training', 'datasets', 'checkers']"," Title: Is training on single game each time appropriate for an agent to learn to play checkersBody: I was facing a problem I mentioned in a previous question but after a while, I realize that maybe the problem in the dataset not in the learning rate.
+I build the dataset from white positions only i.e the boards when it's white's turn.
+Each data set consists of one game.
+First, I tried to let the agent play a game and then learn from it immediately, but that did not work: the agent converges to one style of playing (it only loses or wins against itself, or draws in a stupid way).
+Second, I tried to let the agent play about 1000 games against itself and then train on each game separately, but that also does not work.
+Note: the first and second approach describes one iteration of the learning process, I let the agent repeat them so in total it trained on about 50000 games.
+Is my approach wrong? Or must my dataset be built in another way? Maybe train the agent on several games at once?
+My project is here if someone needs to take a closer look at it: CheckerBot
+"
+"['neural-networks', 'neat', 'neuroevolution', 'fitness-functions', 'forward-pass']"," Title: How can I perform the forward pass in a neural network evolved with NEAT, given that some connections may not exist or there may be loopy connections?Body: I have a problem that arose as part of a NEAT (Neuro Evolution Through Augmenting Topologies) implementation that I am writing. I am wanting it to produce topologies or graphs that describe neural networks, similar to the one below.
+
+Here, nodes 0 and 1 are inputs, and 4 is the output node; the rest of the nodes are hidden nodes. Each of these nodes can have some activation function defined for it (it is not necessary that all the hidden nodes have the same activation function).
+Now, I want to perform the forward pass of this neural network with some data, and, based on how well it performed in that task, I assign it with a fitness value, which is used as part of the NEAT evolutionary algorithm to move towards better architectures and weights.
+So, as part of the evolution process, I can have connections that can cause internal loops in the hidden layers and there is the possibility that a skip connection is made. Because of this, I feel the regular matrix-based forward pass (of fully connected MLPs) will not work in order to perform the forward pass of these evolved neural networks, and hence I want to know if an algorithm exists that can solve this problem.
+In short, I want this neural network to just take the inputs and provide me outputs - no training involved at all, so I'm not interested in the back-propagation part now.
+The only way to solve this that I see is to use something along the lines of a job queue (the queue would consist of the nodes that need processing, in order). I feel this is extremely inefficient, and I cannot give this simulation method a proper stopping condition, or even decide when to take the output from the neural network graph and consider it valid. (The closest sketch I have is included below.)
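+For context, the closest thing to a workable approach I have sketched so far is to propagate activations for a fixed number of steps, so loops and skip connections simply reuse the previous step's values (all the names below are my own placeholders, not from any NEAT library):
+import math
+from types import SimpleNamespace
+
+def forward(genome, inputs, steps=10):
+    # genome.connections: iterable of (src, dst, weight, enabled) tuples
+    # genome.input_nodes / genome.output_nodes / genome.all_nodes: lists of node ids
+    values = {n: 0.0 for n in genome.all_nodes}
+    for n, x in zip(genome.input_nodes, inputs):
+        values[n] = x
+    for _ in range(steps):
+        incoming = {n: 0.0 for n in genome.all_nodes}
+        for src, dst, w, enabled in genome.connections:
+            if enabled:
+                incoming[dst] += values[src] * w
+        for n in genome.all_nodes:
+            if n not in genome.input_nodes:
+                values[n] = math.tanh(incoming[n])   # assuming tanh everywhere, just for the sketch
+    return [values[n] for n in genome.output_nodes]
+
+g = SimpleNamespace(all_nodes=[0, 1, 2, 4], input_nodes=[0, 1], output_nodes=[4],
+                    connections=[(0, 2, 0.5, True), (1, 2, -0.5, True), (2, 4, 1.0, True), (4, 2, 0.1, True)])
+print(forward(g, [1.0, 0.0]))   # note the loopy connection 4 -> 2 is handled by the repeated passes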
+Can anybody at least point me in the right direction?
+"
+"['machine-learning', 'deep-learning', 'papers', 'optimization', 'gradient-descent']"," Title: How to derive compact convex set K and its diameter D to program Accelegrad algorithm in practice?Body: Given the original paper (https://arxiv.org/pdf/1809.02864.pdf), I would like to implement the Accelegrad algorithm for which I report the pseudocode of the paper:
+
+In the pseudocode, the authors refer to a compact convex set $K$ of diameter $D$. The question is whether I can know such elements. I think that they are theoretical conditions to satisfy some theorems. The problem is that the diameter $D$ is used in the learning rate and also the convex set $K$ is used to perform the projection of the gradient descent. How can I proceed?
+"
+"['reinforcement-learning', 'deep-learning', 'tensorflow', 'q-learning', 'open-ai']"," Title: Simple DQN too slow to trainBody: I have been trying to solve the OpenAI lunar lander game with a DQN taken from this paper
+https://arxiv.org/pdf/2006.04938v2.pdf
+The issue is that it takes 12 hours to train 50 episodes so something must be wrong.
+import os
+import random
+import gym
+import numpy as np
+from collections import deque
+import tensorflow as tf
+from tensorflow.keras.models import Sequential
+from tensorflow.keras.layers import Dense
+from tensorflow.keras.optimizers import Adam
+from tensorflow.keras import Model
+
+ENV_NAME = "LunarLander-v2"
+
+DISCOUNT_FACTOR = 0.9
+LEARNING_RATE = 0.001
+
+MEMORY_SIZE = 2000
+TRAIN_START = 1000
+BATCH_SIZE = 24
+
+EXPLORATION_MAX = 1.0
+EXPLORATION_MIN = 0.01
+EXPLORATION_DECAY = 0.99
+
+class MyModel(Model):
+ def __init__(self, input_size, output_size):
+ super(MyModel, self).__init__()
+ self.d1 = Dense(128, input_shape=(input_size,), activation="relu")
+ self.d2 = Dense(128, activation="relu")
+ self.d3 = Dense(output_size, activation="linear")
+
+ def call(self, x):
+ x = self.d1(x)
+ x = self.d2(x)
+ return self.d3(x)
+
+class DQNSolver():
+
+ def __init__(self, observation_space, action_space):
+ self.exploration_rate = EXPLORATION_MAX
+
+ self.action_space = action_space
+ self.memory = deque(maxlen=MEMORY_SIZE)
+
+ self.model = MyModel(observation_space,action_space)
+ self.model.compile(loss="mse", optimizer=Adam(lr=LEARNING_RATE))
+
+ def remember(self, state, action, reward, next_state, done):
+ self.memory.append((state, action, reward, next_state, done))
+
+ def act(self, state):
+ if np.random.rand() < self.exploration_rate:
+ return random.randrange(self.action_space)
+ q_values = self.model.predict(state)
+ return np.argmax(q_values[0])
+
+ def experience_replay(self):
+ if len(self.memory) < BATCH_SIZE:
+ return
+ batch = random.sample(self.memory, BATCH_SIZE)
+ state_batch, q_values_batch = [], []
+ for state, action, reward, state_next, terminal in batch:
+ # q-value prediction for a given state
+ q_values_cs = self.model.predict(state)
+ # target q-value
+ max_q_value_ns = np.amax(self.model.predict(state_next)[0])
+ # correction on the Q value for the action used
+ if terminal:
+ q_values_cs[0][action] = reward
+ else:
+ q_values_cs[0][action] = reward + DISCOUNT_FACTOR * max_q_value_ns
+ state_batch.append(state[0])
+ q_values_batch.append(q_values_cs[0])
+ # train the Q network
+ self.model.fit(np.array(state_batch),
+ np.array(q_values_batch),
+ batch_size = BATCH_SIZE,
+ epochs = 1, verbose = 0)
+ self.exploration_rate *= EXPLORATION_DECAY
+ self.exploration_rate = max(EXPLORATION_MIN, self.exploration_rate)
+
+def lunar_lander():
+ env = gym.make(ENV_NAME)
+ observation_space = env.observation_space.shape[0]
+ action_space = env.action_space.n
+ dqn_solver = DQNSolver(observation_space, action_space)
+ episode = 0
+ print("Running")
+ while True:
+ episode += 1
+ state = env.reset()
+ state = np.reshape(state, [1, observation_space])
+ scores = []
+ score = 0
+ while True:
+ action = dqn_solver.act(state)
+ state_next, reward, terminal, _ = env.step(action)
+ state_next = np.reshape(state_next, [1, observation_space])
+ dqn_solver.remember(state, action, reward, state_next, terminal)
+ dqn_solver.experience_replay()
+ state = state_next
+ score += reward
+ if terminal:
+ print("Episode: " + str(episode) + ", exploration: " + str(dqn_solver.exploration_rate) + ", score: " + str(score))
+ scores.append(score)
+ break
+ if np.mean(scores[-min(100, len(scores)):]) >= 195:
+ print("Problem is solved in {} episodes.".format(episode))
+ break
+    env.close()
+if __name__ == "__main__":
+ lunar_lander()
+
+Here are the logs
+root@b11438e3d3e8:~# /usr/bin/python3 /root/test.py
+2021-01-03 13:42:38.055593: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
+2021-01-03 13:42:39.338231: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
+2021-01-03 13:42:39.368192: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
+2021-01-03 13:42:39.368693: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
+pciBusID: 0000:01:00.0 name: GeForce GTX 1080 computeCapability: 6.1
+coreClock: 1.8095GHz coreCount: 20 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 298.32GiB/s
+2021-01-03 13:42:39.368729: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
+2021-01-03 13:42:39.370269: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
+2021-01-03 13:42:39.371430: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
+2021-01-03 13:42:39.371704: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
+2021-01-03 13:42:39.373318: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
+2021-01-03 13:42:39.374243: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
+2021-01-03 13:42:39.377939: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
+2021-01-03 13:42:39.378118: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
+2021-01-03 13:42:39.378702: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
+2021-01-03 13:42:39.379127: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
+2021-01-03 13:42:39.386525: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3411185000 Hz
+2021-01-03 13:42:39.386867: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4fb44c0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
+2021-01-03 13:42:39.386891: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
+2021-01-03 13:42:39.498097: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
+2021-01-03 13:42:39.498786: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4fdf030 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
+2021-01-03 13:42:39.498814: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1080, Compute Capability 6.1
+2021-01-03 13:42:39.498987: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
+2021-01-03 13:42:39.499416: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
+pciBusID: 0000:01:00.0 name: GeForce GTX 1080 computeCapability: 6.1
+coreClock: 1.8095GHz coreCount: 20 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 298.32GiB/s
+2021-01-03 13:42:39.499448: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
+2021-01-03 13:42:39.499483: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
+2021-01-03 13:42:39.499504: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
+2021-01-03 13:42:39.499523: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
+2021-01-03 13:42:39.499543: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
+2021-01-03 13:42:39.499562: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
+2021-01-03 13:42:39.499581: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
+2021-01-03 13:42:39.499643: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
+2021-01-03 13:42:39.500113: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
+2021-01-03 13:42:39.500730: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
+2021-01-03 13:42:39.500772: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
+2021-01-03 13:42:39.915228: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
+2021-01-03 13:42:39.915298: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0
+2021-01-03 13:42:39.915322: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N
+2021-01-03 13:42:39.915568: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
+2021-01-03 13:42:39.916104: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
+2021-01-03 13:42:39.916555: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6668 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
+Running
+2021-01-03 13:42:40.267699: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
+
+This is the GPU stats
++-----------------------------------------------------------------------------+
+| NVIDIA-SMI 450.66 Driver Version: 450.66 CUDA Version: 11.0 |
+|-------------------------------+----------------------+----------------------+
+| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
+| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
+| | | MIG M. |
+|===============================+======================+======================|
+| 0 GeForce GTX 1080 Off | 00000000:01:00.0 On | N/A |
+| 0% 53C P2 46W / 198W | 7718MiB / 8111MiB | 0% Default |
+| | | N/A |
++-------------------------------+----------------------+----------------------+
+
+As you can see, TensorFlow does not compute on the GPU but reserves the memory so I'm assuming it's because the inputs of the neural networks are too small and it uses the CPU instead.
+To make sure the GPU was installed properly, I ran a sample from their documentation and it uses the GPU.
+Is it an issue with the algorithm or the code? Is there a way to utilize the GPU in this case?
+Thanks!
+"
+"['optical-character-recognition', 'metric', 'handwritten-characters']"," Title: What are the disadvantages to using a distance metric in character recognition predictionBody: I am reading this paper, that is discussing the use of distance metrics for character recognition predicton.
+I can see the advantages of using distance metrics in predictions like character recognition: you can model a set of known characters' pixel vectors, then measure a new unseen vector against this model, get the distance, and, if the distance is low, predict that this unseen vector belongs to a particular class.
+I'm wondering if there are any disadvantages to using distance metrics as the cost function in character recognition? For example, I was thinking maybe the distance calculation is slow for large images (you would have to calculate the distance between each item in two long image vectors)?
+"
+"['reinforcement-learning', 'monte-carlo-methods', 'off-policy-methods', 'importance-sampling']"," Title: Why do we need importance sampling?Body: I was studying the off-policy policy improvement method. Then I encountered importance sampling. I completely understood the mathematics behind the calculation, but I am wondering what is the practical example of importance sampling.
+For instance, in a video, it is said that we need to calculate the expected value of a biased dice, here $g(x)$, in terms of the expected value of fair dice, $f(x)$. Here is a screenshot of the video.
+
+Why do we need that, when we have the probability distribution of the biased dice?
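+To make the dice example concrete, this is the kind of toy simulation I have in mind (the biased probabilities are made up), where samples come only from the fair die but the estimate targets the biased one:
+import numpy as np
+
+rng = np.random.default_rng(0)
+faces = np.arange(1, 7)
+f = np.full(6, 1 / 6)                         # fair die: the distribution we can sample from
+g = np.array([0.1, 0.1, 0.1, 0.1, 0.2, 0.4])  # biased die: the distribution we care about
+
+x = rng.choice(faces, size=100_000, p=f)      # samples drawn only from the fair die
+weights = g[x - 1] / f[x - 1]                 # importance weights g(x) / f(x)
+print(np.mean(weights * x))                   # importance-sampling estimate of E_g[X]
+print(np.dot(g, faces))                       # exact E_g[X] for comparison (= 4.4)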
+"
+"['deep-learning', 'computer-vision', 'object-detection', 'non-max-suppression', 'single-shot-multibox-detector']"," Title: How are nested bounding boxes handled in object detection (and in particular in the case of the SSD)?Body: The basic approach to non-maximum-suppression makes sense, but I am kind of confused about how you handle nested bounding boxes.
+Suppose you have two predicted boxes, with one completely enclosing another. What happens under this circumstance (in particular, in the case of the Single Shot MultiBox Detector)? Which bounding box do you select?
+"
+['terminology']," Title: Confusion between function learned and the underlying distributionBody: Let us assume that I am working on a dataset of black and white dog images.
+Each image is of size $28 \times 28$.
+Now, I can say that I have a sample space $S$ of all possible images. And $p_{data}$ is the probability distribution for dog images. It is easy to understand that all other images get a probability value of zero. And it is obvious that $n(S)= 2^{28 \times 28}$.
+Now, I am going to design a generative model that sample from $S$ using $p_{data}$ rather than random sampling.
+My generative model is a neural network that takes random noise (say, of length 100) and generates an image of size $28 \times 28$. My network is learning a function $f$, which is totally different from the function $p_{data}$. This is because $f$ is a map from $R^{100}$ to $S$, while $p_{data}$ is a map from $S$ to $[0,1]$.
+In the literature, I often read the phrases that our generative model learned $p_{data}$ or our goal is to get $p_{data}$, etc., but in fact, they are trying to learn $f$, which just obeys $p_{data}$ while giving its output.
+Am I going wrong anywhere, or is the usage in the literature somewhat loose?
+"
+"['convolutional-neural-networks', 'computer-vision', 'comparison', 'action-recognition']"," Title: What are the pros and cons of 3D CNN and 2D CNN combined with optical flow for action recognition?Body: For action recognition or similar tasks, one can either use 3D CNN or combine 2D CNN with optical flow. See this paper for details.
+Can someone tell the pros/cons of each, in terms of accuracy, cost such as computation and memory requirement, etc.? In other words, is the computation overhead of 3D CNN justified by its accuracy improvement? Under what scenarios would one prefer one over another?
+3D CNNs are also used for volumetric data, such as MRI images. Can 2D CNN + optical flow be used here?
+I understand 2D CNNs and 3D CNNs, but I do not know about optical flow (my background is not computer-vision).
+"
+"['reinforcement-learning', 'q-learning', 'reference-request', 'research']"," Title: Research into social behavior in Prisoner's DilemmaBody: I've been working on research into reproducing social behavior using multi-agent reinforcement learning. My focus has been on a GridWorld-style game, but I was thinking that maybe a simpler Prisoner's Dilemma game could be a better approach. I tried to find existing research papers in this direction, but couldn't find any, so I'd like to describe what I'm looking for in case anyone here knows of such research.
+I'm looking for research into scenarios where multiple RL agents are playing Iterated Prisoner's Dilemma with each other, and social behaviors emerge. Let me specify what I mean by "social behaviors." Most research I've seen into RL/IPD (example) focuses on how to achieve the ideal strategy, and how to get there the fastest, and what common archetypes of strategies emerge. That is all nice and well, but not what I'm interested in.
+An agent executing a Tit-for-Tat strategy is giving positive reinforcement to the other player for "good" behavior, and negative reinforcement for "bad" behavior. That is why it wins. My key point here is that this carrot-and-stick method is done individually rather than in groups. I want to see it evolve within a group.
+I want to see an entire group of agents evolve to punish and reward other players according to how they behaved with the group. I believe that fascinating group dynamics could be observed in that scenario.
+I programmed such a scenario a decade ago, but by writing an algorithm manually, not using deep RL. I want to do it using deep RL, but first I want to know whether there are existing attempts.
+Does anyone know whether such research exists?
+"
+"['reinforcement-learning', 'incremental-learning', 'active-learning']"," Title: What are good techniques for continuous learning in production?Body: I was wondering which AI techniques and architectures are used in environments that need predictions to continually improve by the feedback of the user.
+So let's take some kind of recommendation system, but not for a fixed set of $n$ products; rather, for a problem with a much larger space. It's initially trained, but it should keep improving through the feedback and corrections applied by the user. The system should continue to improve its outcomes on-the-fly in production, with each interaction.
+Obviously, (deep) RL seems to fit this problem, but can you really deploy this learning process to production? Is it really capable of improving results on-the-fly?
+Are there any other techniques or architectures that can be used for that?
+I'm looking for different approaches in general, in order to be able to compare them and find the right one for problems of that kind. Of course, there always is the option to retrain the whole network, but I was wondering whether there are some online, on-the-fly techniques that can be used to adjust the network?
+"
+"['neural-networks', 'backpropagation', 'calculus']"," Title: For the generalised delta rule in back-propogation, do you subtract the target from the obtained output, or vice versa?Body: When I look up the generalised delta rule equation for back-propogation, I am seeing two conflicting equations.
+For example, here (slide 20), given $o$ (the output, defined in slide 18), $z$ (the activated output) and a target $t$, defined in slide 17, then:
+$\frac{\partial E}{\partial Z} = o(1-o)(o-t)$
+When I look for the same equation elsewhere, e.g. here, slide 14, it says, given $o$ the output and $y$ the label, then (using the slightly different notation $\beta_k$):
+$\beta_k = o_k(1-o_k)(y_k-o_k)$
+I can see here that these two equations are almost the same, but not quite. One subtracts the output from the target, and one subtracts the target from the output.
+The reason I'm asking is that I'm trying to do questions 29 and 30 of this paper, and they use the second equation ($\beta_k$), but my college notes (which I can't copy and paste due to copyright) define the equation according to the first form, $\frac{\partial E}{\partial Z}$. I'm wondering which way is correct: do you subtract the target from the obtained output, or vice versa?
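+For concreteness, here is a small numeric sketch I put together (my own, not taken from either set of slides) of a single sigmoid output unit, showing how each sign convention pairs with the corresponding update rule; all the numbers are made up:
+
+    import numpy as np
+
+    def sigmoid(z):
+        return 1 / (1 + np.exp(-z))
+
+    w = np.array([0.5, -0.3])    # weights into the output unit
+    x = np.array([1.0, 2.0])     # inputs / hidden activations
+    t, lr = 1.0, 0.1             # target and learning rate
+    o = sigmoid(w @ x)           # obtained output
+
+    dE_dz = o * (1 - o) * (o - t)    # first form: derivative of E = (o - t)^2 / 2 w.r.t. z
+    beta = o * (1 - o) * (t - o)     # second form: the beta term
+
+    w_descent = w - lr * dE_dz * x   # gradient descent subtracts the gradient
+    w_delta = w + lr * beta * x      # delta rule adds the beta increment
+    print(np.allclose(w_descent, w_delta))   # True: the same weight change either way
+
+Both conventions seem to give the same weight change when paired with the matching update rule, so I suspect the difference is just where the minus sign is kept, but I would like confirmation.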
+"
+"['deep-learning', 'reference-request', 'objective-functions', 'categorical-data']"," Title: Correct way to work with both categorical and continuous features togetherBody: I have a time series with both continuous and categorical features, and I want to do a prediction task.
+I will elaborate:
+The data is composed of 100 Hz samples of some voltages, a bit like an ECG signal, and of some categorical features such as "green", "na", and so on.
+In total, the number of features can reach 300, of which most are continuous.
+The prediction should take in a chunk of frames and predict a categorical variable for this chunk of frames.
+
+I want to create a deep learning model that can handle both categorical and continuous features.
+The best I can think of is two separate losses, like MSE and cross-entropy, with a hyperparameter to tune between them, somewhat like a regularization weight.
+The best I could find on this subject was this, with an answer from 2015.
+I wonder if something better has been invented since then, or maybe someone here knows of a better approach.
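+To make the two-losses idea above concrete, this is roughly what I have in mind (a PyTorch sketch; names, shapes and the existence of a continuous target are all made up, and lambda_w is the trade-off hyperparameter I mentioned):
+
+    import torch
+    import torch.nn as nn
+
+    ce = nn.CrossEntropyLoss()    # for the categorical part
+    mse = nn.MSELoss()            # for the continuous part
+    lambda_w = 0.5                # trade-off hyperparameter, tuned like a regularization weight
+
+    def combined_loss(cat_logits, cat_target, cont_pred, cont_target):
+        # weighted sum of the two losses
+        return ce(cat_logits, cat_target) + lambda_w * mse(cont_pred, cont_target)
+
+    # dummy batch: 4 samples, 3 classes, 2 continuous outputs
+    cat_logits = torch.randn(4, 3, requires_grad=True)
+    cont_pred = torch.randn(4, 2, requires_grad=True)
+    loss = combined_loss(cat_logits, torch.tensor([0, 2, 1, 0]), cont_pred, torch.randn(4, 2))
+    loss.backward()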
+"
+"['ai-design', 'computational-complexity', 'embedded-design']"," Title: How to know if a real-time classifier is achivable in a low-power emdedded system?Body: Say I have an Machine/Deep learning algorithm I developed on a desktop pc to achieve a real-time classification of time series events from a sensor. Once the algorithm is trained and performs good, I want to implement it on an low power embbeded system, with the same sensor, to classify events in real-time:
+
+- How can I know if the low-power embedded system is fast enough to allow real-time classification with the given algorithm (knowing this in advance would avoid implementing and trying multiple architectures)?
+- Machine/deep learning algorithms are usually developed in Python. Are there easy ways to transfer the code from Python to a more embeddable language?
+
+"
+"['reinforcement-learning', 'policy-gradients', 'actor-critic-methods', 'reinforce']"," Title: Why do we use big batch/epoch size in policy gradient methods (vpg specifically)?Body: I am re-implementing vpg and using Spinning Up as reference implementation. I noticed that the default epoch size is 4000. I also see cues in papers that big batch size is quite standard.
+My implementation doesn't batch experience together; it just applies the update after every episode. It turns out my implementation is more sample-efficient than the reference implementation on simple problems (like CartPole or LunarLander), even though I haven't added the critic yet! Of course, this could be due to a number of reasons, for example, I've only done a parameter search on my implementation.
+But it would make sense anyway: a bigger batch size is generally considered better only because the GPU is faster at processing many samples in parallel. Is this the reason here? It would make sense, but it is surprising to me, as I thought sample efficiency is considered more important than computational efficiency in RL.
+"
+"['deep-learning', 'convolutional-neural-networks', 'backpropagation', 'facial-recognition', 'siamese-neural-network']"," Title: How do gradients are flown back into the Siamese network when branching is done?Body: I am curious about the working of a Siamese network. So, let us suppose I am using a triplet loss for my network and I have instantiated single CNN 3 times and there are 3 inputs to the network. So, during a forward pass, each of the networks will give me an embedding for each image, and I can get the distance and calculate the loss and compare it with the output, so that my model is ready to propagate the gradients to update the weights.
+The Question: How do these weights get updated during the back propagation? Just because we are using 3 inputs and 3 branches of the same network and we are passing the inputs one by one (I suppose), how do the gradients are updated? Are these series? Like the one branch will update, then the second and then the third. But won't it be a problem because each branch would try to update based on its output? If in parallel, then which branch is responsible for the gradients update? I mean to say that I am unable to get the idea how weights are updated in Siamese network. Can someone please explain in simpler terms?
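+To clarify the setup I am describing, here is a minimal sketch of it as I understand it (PyTorch, with a made-up tiny encoder): a single module produces all three embeddings, and one backward pass is run on the combined triplet loss.
+
+    import torch
+    import torch.nn as nn
+
+    encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64))  # made-up tiny embedding network
+    triplet = nn.TripletMarginLoss(margin=1.0)
+    opt = torch.optim.SGD(encoder.parameters(), lr=1e-2)
+
+    anchor, positive, negative = (torch.randn(8, 1, 28, 28) for _ in range(3))  # dummy batch
+    # the three 'branches' below all reuse the same weights
+    loss = triplet(encoder(anchor), encoder(positive), encoder(negative))
+    opt.zero_grad()
+    loss.backward()
+    opt.step()
+
+My confusion is about what happens to the shared weights during that single backward/step in this kind of setup.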
+"
+"['neural-networks', 'papers', 'feedforward-neural-networks', 'linear-algebra', 'federated-learning']"," Title: How to express a fully connected neural network succintly using linear algebra?Body: I'm currently reading the paper Federated Learning with Matched Averaging (2020), where the authors claim:
+
+A basic fully connected (FC) NN can be formulated as: $\hat{y} = \sigma(xW_1)W_2$ [...]
+Expanding the preceding expression
+$\hat{y} = \sum_{i=1}^{L} W_{2, i \cdot } \sigma(\langle x, W_{1,\cdot i} \rangle)$, where $i\cdot$ and $\cdot i$ denote the ith row and column correspondingly and $L$ is the number of hidden units.
+
+I'm having a hard time wrapping my head around how it can be boiled down to this. Is this rigorous? Specifically, what is meant by the ith row and column? Is this formula for only one layer or does it work with multiple layers?
+Any clarification would be helpful.
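+For what it is worth, here is a tiny NumPy check I wrote of the expansion (with made-up dimensions), which at least confirms that the two expressions agree numerically for a single hidden layer:
+
+    import numpy as np
+
+    rng = np.random.default_rng(0)
+    d, L, k = 5, 7, 3                       # input dim, hidden units, output dim (made up)
+    x = rng.normal(size=d)                  # a single input
+    W1 = rng.normal(size=(d, L))
+    W2 = rng.normal(size=(L, k))
+    sigma = lambda z: 1 / (1 + np.exp(-z))
+
+    y_matrix = sigma(x @ W1) @ W2           # compact form
+    y_sum = sum(sigma(x @ W1[:, i]) * W2[i, :] for i in range(L))  # i-th column of W1, i-th row of W2
+    print(np.allclose(y_matrix, y_sum))     # True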
+"
+"['natural-language-processing', 'spacy', 'named-entity-recognition']"," Title: Should we use a pre-trained model or a blank model for custom entity training of NER in spacy?Body: Further to my last question, I am training a custom entity of FOODITEM to be recognized by Spacy's Name Entity Recognition engine. I am following tutorials online, following is the advise given in most of the tutorials;
+
+Load the model or create an empty model
+
+
+We can create an empty model and train it with our annotated dataset or we can use the existing spacy model and re-train with our annotated data.
+
+But none of the tutorials explain how or why to choose between the two options. Also, I don't understand how the choice will affect my final output or the training of the model.
+How do I make the choice between a pre-trained model or a blank model? What are the factors to consider?
+"
+"['convolutional-neural-networks', 'image-segmentation', 'convolution', 'u-net', 'convolutional-layers']"," Title: What is the use of the regular convolutional layer in expansion path of U-Net?Body: I was going through the paper on U-Net. U-net consists of a contracting path followed by an expanding path. Both the paths use a regular convolutional layer. I understand the use of convolutional layers in the contracting path, but I can't figure out the use of convolutional layers in the expansive path. Note that I'm not asking about the transpose convolutions, but the regular convolutions in the expansive path.
+
+"
+"['neural-networks', 'tensorflow', 'python', 'research', 'pytorch']"," Title: Can TensorFlow, PyTorch, and other mainstream ML frameworks be used for research-grade work in AI?Body: Many authors of research papers in AI (e.g. arXiv) write their neural networks from the ground-up, using low-level languages like C++ to implement their theories. Can existing open source frameworks also be used for this purpose, or are their implementations too limited?
+Can, for example, TensorFlow be used to craft an original network architecture that shows improvements on existing benchmarks? Can original mathematical work be coded in a high-level framework like TensorFlow such that original research on network architectures/approaches can be demonstrated in a paper?
+A quick search reveals many papers using C++ in their implementation:
+
+"
+"['machine-learning', 'terminology', 'definitions', 'support-vector-machine']"," Title: What are support values in a support vector machine?Body: I started reading up on SVM and very little is defined of what are support values. I reckon it's they are denoted as $\alpha$ in most formulations.
+"
+"['machine-learning', 'metric', 'scikit-learn', 'roc-auc', 'multiclass-classification']"," Title: When computing the ROC-AUC score for multi-class classification problems, when should we use One-vs-Rest and One-vs-One?Body: The sklearn's documentation of the method roc_auc_score
states that the parameter multi_class
can take the value 'OvR'
(which stands for One-vs-Rest) or 'OvO'
(which stands for One-vs-One). These values are only applicable for multi-class classification problems.
+Does anyone know in what particular cases we would use OvR as opposed to OvO? In the general academic literature, is there a preference given to one?
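+For context, this is the kind of call I am referring to (toy numbers; each row of the score matrix is the per-class probability estimate and must sum to 1):
+
+    import numpy as np
+    from sklearn.metrics import roc_auc_score
+
+    y_true = np.array([0, 1, 2, 2, 1, 0])         # 3-class toy labels
+    y_score = np.array([[0.7, 0.2, 0.1],
+                        [0.2, 0.6, 0.2],
+                        [0.1, 0.3, 0.6],
+                        [0.2, 0.2, 0.6],
+                        [0.3, 0.5, 0.2],
+                        [0.6, 0.3, 0.1]])
+
+    print(roc_auc_score(y_true, y_score, multi_class='ovr'))   # One-vs-Rest
+    print(roc_auc_score(y_true, y_score, multi_class='ovo'))   # One-vs-One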
+"
+"['convolutional-neural-networks', 'training', 'image-segmentation', 'matlab', 'u-net']"," Title: Semantic segmentation CNN outputs all zeroesBody: I'm using MATLAB 2019, Linux, and UNet (a CNN specifically designed for semantic segmentation). I'm training the network to classify all pixels in an image as either cell or background to get segmentations of cells in microscopic images. My problem is the network is classifying every single pixel as background, and seems to just be outputting all zeroes. The validation accuracy improves a little at the very start of the training but than plateaus at around 60% for the majority of the training time. The network doesn't seem to be training very well and I have no idea why.
+Can anyone give me some hints about what I should look into more closely? I just don't even know where to start with debugging this.
+Here's my code:
+ % Set datapath
+ datapath = '/scratch/qbi/uqhrile1/ethans_lab_data';
+
+ % Get training and testing datasets
+ images_dataset = imageDatastore(strcat(datapath,'/bounding_box_cropped_resized_rgb'));
+ load(strcat(datapath,'/gTruth.mat'));
+ labels = pixelLabelDatastore(gTruth);
+ [imdsTrain, imdsVal, imdsTest, pxdsTrain, pxdsVal, pxdsTest] = partitionCamVidData(images_dataset,labels);
+
+ % Weight segmentation class importance by the number of pixels in each class
+ pixel_count = countEachLabel(labels); % count number of each type of pixel
+ frequency = pixel_count.PixelCount ./ pixel_count.ImagePixelCount; % calculate pixel type frequencies
+ class_weights = mean(frequency) ./ frequency; % create class weights that balance the loss function so that more common pixel types won't be preferred
+
+ % Specify the input image size.
+ imageSize = [512 512 3];
+
+ % Specify the number of classes.
+ numClasses = 2;
+
+ % Create network
+ lgraph = unetLayers(imageSize,numClasses);
+
+ % Replace the network's classification layer with a pixel classification
+ % layer that uses class weights to balance the loss function
+ pxLayer = pixelClassificationLayer('Name','labels','Classes',pixel_count.Name,'ClassWeights',class_weights);
+ lgraph = replaceLayer(lgraph,"Segmentation-Layer",pxLayer);
+
+ %% TRAIN THE NEURAL NETWORK
+
+ % Define validation dataset-with-labels
+ validation_dataset_with_labels = pixelLabelImageDatastore(imdsVal,pxdsVal);
+
+ % Training hyper-parameters: edit these settings to fine-tune the network
+ options = trainingOptions('adam', 'LearnRateSchedule','piecewise', 'LearnRateDropPeriod',10, 'LearnRateDropFactor',0.3, 'InitialLearnRate',1e-3, 'L2Regularization',0.005, 'ValidationData',validation_dataset_with_labels, 'ValidationFrequency',10, 'MaxEpochs',3, 'MiniBatchSize',1, 'Shuffle','every-epoch');
+
+ % Set up data augmentation to enhance training dataset
+ aug_imgs = {};
+ numberOfImages = length(imdsTrain.Files);
+ for k = 1 : numberOfImages
+ % Apply cutout augmentation
+ img = readimage(imdsTrain,k);
+ cutout_img = random_cutout(img);imwrite(cutout_img,strcat('/scratch/qbi/uqhrile1/ethans_lab_data/augmented_dataset/img_',int2str(k),'.tiff'));
+ end
+ aug_imdsTrain = imageDatastore('/scratch/qbi/uqhrile1/ethans_lab_data/augmented_dataset');
+ % Add other augmentations
+ augmenter = imageDataAugmenter('RandXReflection',true, 'RandXTranslation',[-10 10],'RandYTranslation',[-10 10]);
+ % Combine augmented data with training data
+ augmented_training_dataset = pixelLabelImageDatastore(aug_imdsTrain, pxdsTrain, 'DataAugmentation',augmenter);
+
+ % Train the network
+ [cell_segmentation_nn, info] = trainNetwork(augmented_training_dataset,lgraph,options);
+
+ save cell_segmentation_nn
+
+"
+"['reinforcement-learning', 'datasets', 'environment', 'sarsa', 'on-policy-methods']"," Title: How should I generate datasets for a SARSA agent when the environment is not simple?Body: I am currently working on my master's thesis and going to apply Deep-SARSA as my DRL algorithm. The problem is that there is no datasets available and I guess that I should generate them somehow. Datasets generation seems a common feature in this specific subject as stated in [1]
+
+When a dataset is not available, learning is performed through experience.
+
+I am wondering how to generate datasets when the environment is not as simple as a tic-tac-toe or a maze problem and what the experience means.
+PS: The environment consists of 15 mobile users and 3 edge servers, each of which covers a number of mobile users. At the beginning of each timestep, each mobile user might generate a computationally heavy task, and it can either process the task itself or request its associated edge server to do the processing. If the associated edge server is not capable of processing it, for some reason, it requests a nearby edge server to lend it a hand. The optimization objective (reward) is to reduce time and energy consumption (multi-objective optimization). Each server has a DRL agent that makes offloading decisions.
+I'd really appreciate your suggestions and help.
+"
+"['convolutional-neural-networks', 'computer-vision', 'fully-convolutional-networks']"," Title: In the DeepView paper, do they use the same FCN for all depth slices AND all views?Body: I'm trying to replicate a paper from Google on view synthesis/lightfields from 2019: DeepView: View Synthesis with Learned Gradient Descent and this is the PDF.
+Basically the input to the neural network comes from a set of cameras which number is variable, and the output is a stack of images which number is also variable. For that they use both a Fully Convolutional Network and Learned Gradient Descent.
+I don't know if I am understanding this correctly: (in each LGD iteration) They use the same network for all depth slices AND all views. Is this correct?
+This is the LGD network, not much important to the question but it helps you understand the setup. You can see at least 3 LGD iterations. Part b) is just the calculation they do in the "green gradient boxes" on part a).
+
+This is the inside of the CNNs. On each LGD iteration they use basically the same architecture, but the weights are different per iteration.
+
+For me the confusing part is that they represent each view as a different network, but they don't represent each depth slice as a different network. As you can see in the next image they do say that they use the same parameters for all depth slices, and that the order of the views doesn't matter so it must be that they're also reusing the parameters for all views, right? So if I understand correctly, this is a matter of reusing the same model for all depths and all views. BTW note that the maxpool kind of operation is over each view.
+Also I have a question on the practicalities of the implementation. I'll be implementing this with normal 2D convolution layers, so if I want them to run independent of the views and depth slices, I guess I could concatenate views and depth slices in the "batch" dimension? I mean, before the maximum k operation, and then reuse the output.
+This is what they say:
+
+Thanks
+"
+"['python', 'image-processing', 'data-preprocessing', 'optical-character-recognition']"," Title: In OCR, how should I deal with the warped text on the sides of oval objects?Body: Consider an image that contains one can (or bottle, or any similar oval object), which has texts all over it. In the image below, I have many bottles, but you can assume that each image only contains one such object.
+
+As we can see, in each can, the text can flow from left to right, and any OCR system may miss the text on the left and right sides of the can, as they are not aligned with the camera angle.
+So, is there any solution/s for this, like preprocessing in a certain way, so that we can read the text or make this round object into a straight one? (If there is any Python program that can solve this problem, could you please share it with me?)
+"
+"['comparison', 'applications', 'generative-adversarial-networks', 'variational-autoencoder', 'image-generation']"," Title: What are the fundamental differences between VAE and GAN for image generation?Body: Starting from my own understanding, and scoped to the purpose of image generation, I'm well aware of the major architectural differences:
+
+- A GAN's generator samples from a relatively low dimensional random variable and produces an image. Then the discriminator takes that image and predicts whether the image belongs to a target distribution or not. Once trained, I can generate a variety of images just by sampling the initial random variable and forwarding through the generator.
+
+- A VAE's encoder takes an image from a target distribution and compresses it into a low dimensional latent space. Then the decoder's job is to take that latent space representation and reproduce the original image. Once the network is trained, I can generate latent space representations of various images, and interpolate between these before forwarding through the decoder which produces new images.
+
+
+What I'm more interested is the consequences of said architectural differences. Why would I choose one approach over the other? And why? (for example, if GANs typically produce better quality images, any ideas why that is so? is it true in all cases or just some?)
+"
+"['reinforcement-learning', 'pomdp', 'state-spaces', 'conditional-probability']"," Title: How to update the observation probabilities in a POMDP?Body: How can I update the observation probability for a POMDP (or HMM), in order to have a more accurate prediction model?
+The POMDP relies on observation probabilities that match an observation to a state. This poses an issue as the probabilities are not exactly known. However, the idea is to make them more accurate over time. The simplest idea would be to count the appeared observation as well as the states and use Naive Bayes estimators.
+For example, $P(s' \mid a,s)$ is the probability that a subsequent state $s'$ is reached, given that the action $a$ and the previous state $s$ are known: In that simple case I can just count and then apply e.g. Naive Bayes estimators.
+But, if I have an observation probability $P(z \mid s')$ (where $z$ is the observation) depending on a state, it's not as trivial to just count up the observation and the states, as I can not say that a state really was reached (Maybe I made an observation, but I was in a different state than wanted). I can just make an observation and hope I was in a certain state. But I can not say if e.g. I was in $s_1$ or maybe $s_2$. I think the update of the observation probability is only possible in the late aftermath.
+So, what are good approaches to estimate my state?
+"
+"['deep-learning', 'comparison', 'image-recognition', 'transfer-learning', 'few-shot-learning']"," Title: How is few-shot learning different from transfer learning?Body: To my understanding, transfer learning helps to incorporate data from other related datasets and achieve the task with less labelled data (maybe in 100s of images per category).
+Few-shot learning seems to do the same, with maybe 5-20 images per category. Is that the only difference?
+In both cases, we initially train the neural network with a large dataset, then fine-tune it with our custom datasets.
+So, how is few-shot learning different from transfer learning?
+"
+"['natural-language-processing', 'bert', 'text-generation']"," Title: T5 or BERT for sentence correction/generation task?Body: I have sentences with some grammatical errors , with no punctuations and digits written in words... something like below:
+
+As you can observe, a proper noun, winston, isn't capitalized in the Sample column. 'People' is spelled wrong, and there is no punctuation in the Sample column. The date in the first row isn't in the right format. I have millions of rows like this and want to train a model to learn punctuation and corrections. Can a single BERT or T5 handle this task, or is the only option to train one model for each task?
+Thanks in advance
+"
+['image-generation']," Title: Can you give me a piece of advice on a network structure that would be suitable for my task?Body: I have 2 small images. They are basically the same, but differ in rotation and size. I need to estimate the parameters of an affine transform that makes them similar. What network structure would be suitable for this task? For example, structures based on convolutional networks did badly, because the pictures are too small.
+"
+"['objective-functions', 'papers', 'image-segmentation', 'u-net', 'cross-entropy']"," Title: Have I understood the loss function from the original U-Net paper correctly?Body: In the original U-Net paper, it is written
+
+The energy function is computed by a pixel-wise soft-max over the final
+feature map combined with the cross entropy loss function.
+...
+$$
+E=\sum_{\mathbf{x} \in \Omega} w(\mathbf{x}) \log \left(p_{\ell(\mathbf{x})}(\mathbf{x})\right) \tag{1}\label{1}
+$$
+
+where $w(\mathbf{x})$ is a weight map (I'm not interested in that part right now), and $p_{k}(\mathbf{x})$ is
+$$
+p_{k}(\mathbf{x})=\exp \left(a_{k}(\mathbf{x})\right) /\left(\sum_{k^{\prime}=1}^{K} \exp \left(a_{k^{\prime}}(\mathbf{x})\right)\right)
+$$
+The pixel-wise softmax with $a_{k}(\mathbf{x})$ being the activation in feature channel $k$ at pixel position $\mathbf{x}$ and $K$ the number of classes. Then $\ell(\mathbf{x})$ from $p_{\ell(\mathbf{x})}$ is the true label of each pixel, i.e. if the pixel at position $\mathbf{x}$ is part of class $1$, then $p_{\ell(\mathbf{x})}$ is equal to $p_1(\mathbf{x})$.
+As far as I understand, $-E$ should be the cross-entropy function. Right? I've already done the math for the binary case (ignoring $w(\mathbf{x})$) and the two seemed to be equal.
+"
+"['reinforcement-learning', 'deep-rl', 'ddpg', 'mapping-space']"," Title: How does DDPG algorithm know about my action mapping function?Body: I am using DDPG to solve a RL problem. The action space is given by the Cartesian product $[0,20]^4\times[0,6]^4$. The actor
is implemented as a deep neural network with an output dimension equals to $8$ with tanh
activation.
+So, given a state s
, an action is given by a = actor(s)
where a
contains real numbers in [-1,1]
. Next, I map this action a
into a valid action valid_a
that belongs to the action space $[0,20]^4\times[0,6]^4$. Than, I use valid_a
to calculate the reward.
+My question is: how does the DDPG algorithm know about this mapping that I am doing? In what part of the DDPG algorithm should I specify this mapping? Should I provide a bijective mapping to guarantee that the DDPG algorithm can tell good actions from bad ones?
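+For concreteness, the kind of mapping I am describing is an element-wise affine rescaling of the tanh output, along these lines (a simplified NumPy sketch, not my exact code):
+
+    import numpy as np
+
+    low = np.zeros(8)                           # lower bounds of the action space
+    high = np.array([20.0] * 4 + [6.0] * 4)     # upper bounds: [0,20]^4 x [0,6]^4
+
+    def to_valid_action(a):
+        # a is the actor output in [-1, 1]^8; rescale each coordinate into [low, high]
+        return low + (a + 1.0) * 0.5 * (high - low)
+
+    a = np.tanh(np.random.randn(8))             # stand-in for actor(s)
+    valid_a = to_valid_action(a)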
+"
+"['neural-networks', 'reinforcement-learning', 'policy-gradients']"," Title: $\nabla \log \pi$ with respect to some parameters constantly being zeroBody: I am new to reinforcement learning. May I ask a simple (and maybe a bit silly) question here? I am trying to use the "one-step actor-critic" method to train a robot on a gridworld. Let's focus on the actor as there is nothing puzzling me for the critic.
+I used a feedforward ANN with one hidden layer to parameterize the action preference function (i.e. the $h$ function). The ANN has one bias node in the input layer to connect to all the hidden nodes. Therefore, there are three sets of weights associated with the $h$ function -- the weights connecting the inputs to the hidden nodes (let's call it $W1$ matrix), the weights connecting the hidden nodes to the outputs (let's call it $W2$ matrix), and the weights connecting the bias node to the hidden nodes (let's call it $c$ vector).
+I used the exponential soft-max as the policy function (i.e. the $\pi$ function). That is,
+$$\pi(a|s,W1,W2,c) = \displaystyle\frac{e^{h(a,s,W1,W2,c)}}{\sum_be^{h(b,s,W1,W2,c)}}.$$
+The inputs to the ANN are the state/feature vector, and the outputs of the ANN are the action preference values (i.e. the $h$ values). With these action preference values, the $\pi$ function can compute the probabilities for each action to be chosen.
+It is easy to derive that
+$$\nabla_{W1/W2/c} \log \pi(a|s,W1,W2,c) = \nabla_{W1/W2/c} h(a,s,W1,W2,c)-\sum_b \big[\nabla_{W1/W2/c}h(b,s,W1,W2,c)\pi(b|s,W1,W2,c)\big]$$
+In the above, $/$ means "or".
+My puzzle is, I found that $\nabla_{W2} h(\cdot,s,W1,W2,c) \equiv \sigma$, where $\sigma$ is the vector of the sigmoid activation values. That implies, $\nabla_{W2}$ is independent of actions!? Consequently, $\nabla_{W2} \log \pi(\cdot|s,W1,W2,c) \equiv 0$, which implies that $W2$ will not be updated at all...
+Where did I go wrong?
+(Actually, the above puzzle of mine extends to any policy gradient method as long as $\nabla \log \pi$ is involved and a feedforward ANN is used to approximate the action preferences.)
+"
+"['machine-learning', 'natural-language-processing', 'objective-functions', 'spacy', 'named-entity-recognition']"," Title: How to understand 'losses' in Spacy's custom NER training engine?Body: From the tid-bits, I understand of neural networks (NN), the Loss function is the difference between predicted output and expected output of the NN. I am following this tutorial, the losses are included at line #81 in the nlp.update()
function.
+I am getting losses in the range 300-100. How to interpret them? What should be the ideal output of this losses variable? I went through Spacy's documentation, but nothing much is written there about losses. Also, please let me know the links to relevant theories to understand this in general.
+"
+"['prediction', 'autonomous-vehicles', 'forecasting', 'kalman-filter']"," Title: Which is the best algorithm to predict the trajectory of a vehicle using lat/lon data?Body: I'm using Kalman Filter approaches and I've just implemented the extended Kalman filter (EKF) with my object 2D trajectory. However, I have a mess of alternative approaches that may fit better like Unscented Kalman Filter (UFK), particle filters, adaptive filtering, etc.
+How can I choose the most suitable algorithm for my case? In addition, are there algorithms that can predict more than one step ahead?
+"
+"['machine-learning', 'comparison', 'probability-distribution', 'maximum-likelihood']"," Title: How can a probability density value be used for the likelihood calculation?Body: Consider our parametric model $p_\theta$ for an underlying probabilistic distribution $p_{data}$.
+Now, the likelihood of an observation $x$ is generally defined as $L(\theta|x) = p_{\theta}(x)$.
+The purpose of the likelihood is to quantify how good the parameters are. How can the probability density at a given observation, $p_{\theta}(x)$, measure how good the parameters $\theta$ are?
+Is there any relation between the goodness of parameters and the probability density value of an observation?
+"
+"['machine-learning', 'classification', 'objective-functions', 'support-vector-machine', 'homework']"," Title: If the training data are linearly separable, which of the following $L(w)$ has less optimum answer for $w$, when $y = w^Tx$?Body: I'm studying machine learning and I came into a challenging question.
+
+The answer is 2, but, based on my ML notes, all of them are true. Where am I going wrong?
+"
+"['reinforcement-learning', 'td-lambda', 'bootstrapping', 'lambda-return', 'lambda-return-algorithm']"," Title: How does bootstrapping work with the offline $\lambda$-return algorithm?Body: In Barton and Sutton's book, Reinforcement Learning: An Introduction (2nd edition), an expression, on page 289 (equation 12.2), introduced the form of the $\lambda$-return defined as follows
+$$G_t^{\lambda} = (1-\lambda)\sum_{n=1}^{\infty} \lambda^{n-1}G_{t:t+n} \label{12.2}\tag{12.2}$$
+with the truncated return defined as
+$$ G_{t:t+n} \doteq R_{t+1} +\gamma R_{t+2} + \ldots + \gamma^{n-1} R_{t+n} + \gamma^{n}\hat{v}(S_{t+n}, \mathbf{w}_{t+n-1}) \label{12.1}\tag{12.1}$$
+However, slightly later in the text, page 290 (equation 12.4), the update algorithm for the offline $\lambda$-return algorithm is defined as
+$$
+\mathbf{w}_{t+1} \doteq \mathbf{w}_{t}+\alpha\left[G_{t}^{\lambda}-\hat{v}\left(S_{t}, \mathbf{w}_{t}\right)\right] \nabla \hat{v}\left(S_{t}, \mathbf{w}_{t}\right), \quad t=0, \ldots, T-1 \label{12.4}\tag{12.4}
+$$
+My question is: how do we bootstrap the truncated returns in the update algorithm?
+The way the truncated return is currently defined can not plausibly be used, since we would not have access to $\mathbf{w}_{t+n-1}$, as we are in the process of finding $\mathbf{w}_{t+1}$. I suspect $\mathbf{w}_{t}$ is used for bootstrapping in all returns, but that would alter the definition of the truncated return which I just wanted to clarify.
+
+And as a follow-up question: What weights are used for bootstrapping in the online $\lambda$-return algorithm described on page 298?
+I assume it's either $\mathbf{w}_{t-1}^{h}$ or $\mathbf{w}_{h-1}^{h-1}$, it's briefly mentioned that the online $\lambda$-return algorithm performs slightly better than the offline one at the end of the episode which leads me to believe the latter is used otherwise the two algorithms would be identical.
+Any insight into either question would be great.
+"
+"['deep-learning', 'natural-language-processing', 'recurrent-neural-networks', 'reference-request']"," Title: Is there a reference that describes Recurrent Neural Networks for NLP tasks?Body: I would like some references of works that try to understand the functioning of any kind of RNN in natural language processing tasks. They can be any work that tries to explain the functioning of the model by studying the structure of the model itself. I have the feeling that it is very common for researchers to use models, but there is still little theory about how they work in solving natural language processing tasks.
+"
+"['machine-learning', 'classification', 'support-vector-machine', 'homework', 'kernel-trick']"," Title: Why is the margin attained with $\Phi=\left[2 x, 2 x^{2}\right]^{T}$ greater than the margin attained with $\Phi=\left[x, x^{2}\right]^{T}$?Body: I am trying to understand the solution to part 4 of problem 3 from the midterm exam 6.867 Machine learning: Mid-term exam (October 15, 2003).
+For reproducibility, here is problem 3.
+
+We consider here linear and non-linear support vector machines (SVM) of the form:
+$$
+\begin{aligned}
+\min w_{1}^{2} / 2 \quad & \text{subject to } y_{i}\left(w_{1} x_{i}+w_{0}\right)-1 \geq 0, \quad i=1, \ldots, n, \text{ or } \\
+\min \mathbf{w}^{T} \mathbf{w} / 2 \quad & \text{subject to } y_{i}\left(\mathbf{w}^{T} \Phi_{i}+w_{0}\right)-1 \geq 0, \quad i=1, \ldots, n
+\end{aligned}
+$$
+where $\Phi_{i}$ is a feature vector constructed from the corresponding real-valued input $x_{i}$. We wish to compare the simple linear SVM classifier $\left(w_{1} x+w_{0}\right)$ and the non-linear classifier
+$\left(\mathbf{w}^{T} \Phi+w_{0}\right)$, where $\Phi=\left[x, x^{2}\right]^{T}$.
+
+Here is part 4 of the same problem.
+
+In general, is the margin we would attain using scaled feature vectors
+$\Phi=\left[2 x, 2 x^{2}\right]^{T}$
+
+- greater
+- equal
+- smaller
+- any of the above
+
+
+The correct answer is the first (greater). Why is that the case?
+"
+"['reinforcement-learning', 'proximal-policy-optimization', 'reinforce', 'continuous-action-spaces', 'exploration-strategies']"," Title: Are actions deterministic during testing in continuous action space PPO?Body: In a continuous action space (for instance, in PPO, TRPO, REINFORCE, etc.), during training, an action is sampled from the random distribution with $\mu$ and $\sigma$. This results in an inherent exploration. However, during testing, when we no longer need to explore but exploit, the action should be deterministic, i.e. just $\mu$, right?
+"
+"['reinforcement-learning', 'dqn', 'action-spaces', 'constrained-optimization']"," Title: How to use DQN when the action space can be different at different time steps?Body: I would like to employ DQN to solve a constrained MDP problem. The problem has constraints on action space. At different time steps till the end, the available actions are different. It has different possibilities as below.
+
+- 0, 1, 2, 3, 4
+- 0, 2, 3, 4
+- 0, 3, 4
+- 0, 4
+
+Does this mean I need to learn 4 different Q networks for these possibilities? Also, correct me if I am wrong, but it looks like if I specify an action size of 3, then it is automatically assumed that the actions are 0, 1, 2, whereas, in my case, they should be 0, 3, 4. How should I implement this?
+"
+"['reinforcement-learning', 'markov-decision-process', 'proofs', 'value-functions', 'policies']"," Title: Does there necessarily exist ""dominated actions"" in a MDP?Body: In a Markov Decision Process, is it possible that there exists no "dominated action"?
+I define a dominated action the following way:
+we say that $(s,a)$ is a dominated action, if $\forall \pi, a \notin \text{argmax}\ q^{\pi}(s,.)$, where $\pi$ are policies.
+For now, I am only considering the cases where all q-values are distinct and therefore the max is always unique.
+I also only consider the case of deterministic policies (mappings from state space to action space).
+We can consider MDP in which each state has at least 2 actions available to get rid of the corner cases where there is only one possible policy.
+I am struggling to find a counter-example or a proof.
+"
+"['convolutional-neural-networks', 'implementation', 'convolutional-layers', 'pooling']"," Title: How do you pass the image from one convolutional layer to another in a CNN?Body: I am currently trying to write a CNN from scratch, but I don't understand how to feed the information from a max-pooling layer to the next convolutional layer. Specifically, I don't know what to do with the 6 filtered and pooled images from the first convolutional and max-pooling layers. How do I feed those images into the next convolutional layer?
+"
+"['reinforcement-learning', 'papers', 'reinforce', 'notation']"," Title: What does the parameter $y$ stand for in function $g(y,\mu,\sigma)$ related to REINFORCE algorithm?Body: I am wondering what the parameter $y$ in the function $g(y,\mu,\sigma)=\frac{1}{(2\pi)^{1/2}\sigma}e^{-(y-\mu)^{2/2\sigma^2}}$ stands for in Section 6 (page 14) of the paper introducing the REINFORCE family of algorithms.
+Drawing an analogy to Equation 4 of the same paper, I would guess that it refers to the outcome (i.e. sample) of sampling from a probability distribution parameterized by the parameters $\mu$ and $\sigma$. However, I am not sure whether that is correct or not.
+"
+"['image-processing', 'image-generation']"," Title: What algorithm would you advise me to use for my task?Body: I have an image and a mask. I want the image to be the same, but rotated, scaled and positioned like mask. What can I use?
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'reference-request', 'double-dqn']"," Title: Can DQN outperform DoubleDQN?Body: I found a similar post about this issue, but unfortunately I did not find a proper answer. Are there any references where DQN is better than DoubleDQN, that is DoubleDQN does not improve DQN ?
+"
+"['natural-language-processing', 'sentiment-analysis', 'stemming', 'lemmatization']"," Title: How can I find words in a string that are related to a given word, then associate a sentiment to that found word?Body: I came up with an NLP-related problem where I have a list of words and a string. My goal is to find any word in the list of words that is related to the given string.
+Here is an example.
+Suppose a word from the list is healthy. If the string has any of the following words: healthy, healthier, healthiest, not healthy, more healthy, zero healthy, etc., it will be extracted from the string.
+Also, I want to judge whether the extracted word/s is/are bearing positive/negative sentiment.
+Let me further explain what I mean by using the previous example.
+Our word was healthy. So, for instance, if the word found in the string was healthier, then we can say it is bearing positive sentiment with respect to the word healthy. If we find the word not healthy, it is negative with respect to the word healthy.
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'double-dqn', 'double-q-learning']"," Title: Why does regular Q-learning (and DQN) overestimate the Q values?Body: The motivation for the introduction of double DQN (and double Q-learning) is that the regular Q-learning (or DQN) can overestimate the Q value, but is there a brief explanation as to why it is overestimated?
+"
+"['neural-networks', 'classification', 'tensorflow', 'time-series']"," Title: Multivariate time-series classification with many variablesBody: I am attempting to use time-series classification algorithms for fraud detection applications. I have came across several works in the literature that propose novel techniques for multivariate time-series classification, however, most of these approaches treat each feature as an individual signal.
+Now, my processing of my data transforms a transactions dataset into a tensor; 1 dimension where each observation is an account, 1 dimension where each element is a transaction and 1 dimension for the transaction attributes. The transactions dataset has a large number of features, many of which are one-hot encoded categorical variables. Therefore, I am not really sure that multivariate time-series classification algorithms such as CNN or LSTM will work in this case, since it will treat every one-hot encoded feature as a signal on its own.
+What would be an alternative approach in this case? Would applying PCA on the data to capture the most significant features help instead of the ordinary features?
+"
+"['reinforcement-learning', 'terminology', 'reference-request', 'reward-functions', 'markov-property']"," Title: Is my reward function non-Markovian?Body: I am working on an RL problem where the time when the agent obtains the reward for taking action $a$ in time step $t$ is stochastic. In fact, there is no immediate reward for taking action $a$ in time step $t$, and, for example, the agent may obtain the reward in time step $t+k$ (where $k>1$). I was wondering if this kind of reward function is categorized as a non-Markovian reward function, and which RL method works better (to approximate/find the optimum policy) in this environment?
+PS: It differs from the sparse reward problems. In my problem, there is a non-zero reward associated with every action taken. However, the agent does not receive any reward immediately. In fact, once the agent takes an action, like $a$, at a time step like $t$, the agent does not have any control over when he/she will receive the reward associated with that action. The time when the reward is received is stochastic.
+"
+"['machine-learning', 'feature-selection', 'swarm-intelligence', 'sigmoid', 'binary-flower-pollination-algorithm']"," Title: In the Binary Flower Pollination Algorithm (using the sigmoid function), is it possible that no feature is selected?Body: I'm trying to use the Binary Flower Pollination Algorithm (BFPA) for feature selection. In the BFPA, the sigmoid function is used to compute a binary vector that represents whether a feature is selected or not. Here are the relevant equations from the paper (page 4).
+$$
+S\left(x_{i}^{j}(t)\right)=\frac{1}{1+e^{-x_{i}^{j}(t)}} \tag{4}\label{4}
+$$
+\begin{equation}
+x_{i}^{j}(t)=\left\{\begin{array}{ll}
+1 & \text { if } S\left(x_{i}^{j}(t)\right)>\sigma \\
+0 & \text { otherwise }
+\end{array}\right.
+\tag{5}\label{5}
+\end{equation}
+In my case, I noticed that my algorithm sometimes returns a zero vector (i.e. all elements are zeros, such as $[0,0,0,0,0,0,0,0,0]$), which means that no feature is selected (a feature would be selected when its entry is $1$), and this makes the fitness function return an error.
+Is it expected that the sigmoid-based rule can return a result like that?
+"
+"['neural-networks', 'feedforward-neural-networks', 'universal-approximation-theorems']"," Title: Is it really possible to create the ""Perfect Cylinder"" used in Universal Approximation Theorem for 1-hidden layer Neural Network?Body: There are proofs for the universal approximation theorem with just 1 hidden layer.
+The proof goes like this:
+
+- Create a "bump" function using 2 neurons.
+
+- Create (infinitely) many of these step functions with different angles in order to create a tower-like shape.
+
+- Decrease the step/radius to a very small value in order to approximate a cylinder. This is the step I'm not convinced by.
+
+- Using these cylinders, one can approximate any shape. (At this point it's basically just a packing problem like this.)
+
+
+In this video, minute 42, the lecturer says
+
+In the limit that's going to be a perfect cylinder. If the cylinder is small enough. It's gonna be a perfect cylinder. Right ? I have control over the radius.
+
+Here are the slides.
+
+Here is a pdf version from another university, so you do not have to watch the video.
+Why am I not convinced?
+I created a program to plot this, and even if I decrease the radius by orders of magnitude it still has the same shape.
+Let's start with a simple tower of radius 0.1:
+
+Now let's decrease the radius to 0.01:
+
+Now, you might think that it gets close to a cylinder, but it just looks like it is approximating a perfect cylinder, because of the zoomed out effect.
+Let's zoom in:
+
+Let's decrease the radius to 0.0000001.
+
+Still not a perfect cylinder. In fact, the "quality" of the cylinder is the same.
+Python code to reproduce (requires NumPy and matplotlib): https://pastebin.com/CMXFXvNj.
+So my questions are:
+Q1: Is it true that we can get a perfect cylinder solely by decreasing the radius of the tower to 0?
+Q2: If this is true, why is there no difference when I plot it with different radii (0.1 vs 1e-7)?
+Both towers have the same shape.
+Clarification: what do I mean by the same shape? Let's say we calculate the volume of an actual cylinder (Vc) with the same radius and height as our tower and divide it by the volume of the tower (Vt):
+Vc = Volume Cylinder
+Vt = Volume Tower
+ratio(r) = Vc/Vt
+What these documents/lectures claim is that the ratio of these 2 volumes depends on the radius, but in my view it's just constant.
+So what they are saying is: lim r -> 0 of ratio(r) = 1.
+But my experiments show that: lim r -> 0 of ratio(r) = const, and it doesn't depend on the radius at all.
+Q3 Preface
+An objection I got multiple times, once from Dutta and once from D.W., is that just decreasing the radius and plotting it isn't mathematically rigorous.
+So let's assume that, in the limit of r = 0, it's really a perfect cylinder.
+One possible explanation for this would be that the limit is a special case and one can't approximate towards it.
+But if that is true this would imply that there is no use for it since it's impossible to have a radius of exactly zero. It would only be useful if we could get gradually closer to a perfect cylinder by decreasing the radius.
+Q3 So why should we even care about this then?
+Further Clarifications
+The original universal approximation theorem proof for single-hidden-layer neural networks was done by G. Cybenko. Then, I think, people tried to make some visual explanations for it. I am NOT questioning the paper! But I am questioning the visual explanation given in the linked lecture/pdf (made by other people).
+"
+"['comparison', 'models', 'image-segmentation', 'u-net', 'state-of-the-art']"," Title: How and why do state-of-the-art models in medical segmentation differ from general segmentation models?Body: I am just getting into medical image segmentation and have been able to understand the state-of-the-art architectures, like Double UNet, UNet++, and Multiresunet.
+What I haven't understood yet: Why are these approaches better for medical segmentation than, for example, HRNet-OCR, which currently tops the rankings of the Cityscapes dataset, and vice versa?
+"
+"['autoencoders', 'variational-autoencoder']"," Title: VAE giving near zero output when latent space dimension is largeBody: I'm training a VAE to reconstruct some input (channels picked up by some MIMO BS for context) and I ran an experiment on the training set to see how the performance improves with the latent space dimension.
+My VAE structure is as follows: Input: 2048 -> 1024 -> 512 -> latent space dimension -> 512 -> 1024 -> Output: 2048
+Here is what I get in terms of relative error when the latent space dimension goes from 2 to 100:
+
+Everything works as expected at the beginning, but the error starts rising up at around 50 and I have no idea why. With a large latent space dimension, the output is orders of magnitude smaller than the input, which explains the relative error of value 1.
+Here is the same figure when I run the exact same experiment but with a normal autoencoder this time.
+This time the results are consistent.
+What's wrong with my VAE ?
+"
+"['classification', 'datasets', 'meta-learning', 'few-shot-learning', 'reptile-algorithm']"," Title: In few-shot classification, should I use my custom dataset as the validation dataset and mini-ImageNet as the training dataset?Body: I am new to few-shot learning, and I wanted to get a hands-on understanding of it, using Reptile algorithm, applied to my custom dataset.
+My custom dataset has 30 categories, with 5 images per category, so this would be a 30-way 5-shot problem.
+Given a new image, I wish to be able to classify it into one of the 30 categories. I changed train_shots = 5 and classes = 30 in the linked example, and got the following training output:
+batch 0: train=0.050000 test=0.050000
+batch 1: train=0.050000 test=0.050000
+
+Should the custom dataset be used as a validation set, with mini-ImageNet as a training dataset, so that the knowledge is transferred? Or can I use only a custom dataset with only $30*5=150$ images for training?
+"
+"['deep-learning', 'monte-carlo-methods', 'dropout', 'uncertainty-quantification', 'mc-dropout']"," Title: How can I use Monte Carlo Dropout in a pre-trained CNN model?Body: In Monte Carlo Dropout (MCD), I know that I should enable dropout during training and testing, then get multiple predictions for the same input $x$ by performing multiple forward passes with $x$, then, for example, average these predictions.
+Let's suppose I want to fine-tune a pre-trained model and get MCD uncertainty estimates. How should I add dropout layers?
+
+- on the fully-connected layers;
+- after every convolutional layer.
+
+I've read some papers and implementations where one applies dropout at fully-connected layers only, using a pre-trained model. However, when using a custom model, usually one adds dropout after every convolutional layer. This work builds two configurations:
+
+- dropout on the fully-connected layers;
+- dropout after resnet blocks.
+
+The first configuration performs better, but I'm unsure whether this is an actual uncertainty estimate from the ResNet. The results show that there is a correlation between high-uncertainty predictions and incorrect predictions. So, would this be a good way of estimating uncertainty? My guess is yes, because, even though no nodes are being sampled from the backbone, the sampling in the fully-connected layer forces a smooth variation in the backbone, generating a low-variance ensemble. But I'm quite a beginner at MCD, so any help would be appreciated.
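+For reference, this is the test-time MCD procedure I have in mind (a PyTorch sketch; the model and number of samples are placeholders, and only plain nn.Dropout modules are toggled):
+
+    import torch
+
+    def enable_mc_dropout(model):
+        # keep everything in eval mode except the dropout layers
+        model.eval()
+        for m in model.modules():
+            if isinstance(m, torch.nn.Dropout):
+                m.train()
+
+    def mc_predict(model, x, n_samples=20):
+        enable_mc_dropout(model)
+        with torch.no_grad():
+            preds = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
+        return preds.mean(dim=0), preds.std(dim=0)   # mean prediction and per-class spread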
+"
+"['machine-learning', 'terminology', 'intelligent-agent']"," Title: Is a team of ML scientists an ""intelligent agent""?Body: I am writing about the role of machine learning scientists in developing a solution. Is there a term for the humans who do learning?
+Can we call a "team of machine learning scientists with their computers working on some ML problem" an intelligent agent? Is "cognizer" the right term? I know that "learner" is reserved for an ML algorithm. I just want a shorter term for their role in cognition, learning.
+"
+"['machine-learning', 'terminology', 'papers', 'support-vector-machine']"," Title: What is the smoothness assumption in SVMs?Body: In this research paper, we have the following claim
+
+the smoothness assumption that underlies many kernel methods such as Support Vector Machines (SVMs) does not hold for deep neural networks trained through backpropagation
+
+Does smoothness here refer to no sharp rise/fall in gradients?
+"
+"['deep-learning', 'definitions', 'transfer-learning', 'fine-tuning']"," Title: What is the definition of pre-training?Body: I want to pre-train a model (combined by two popular modules A and B, and both are large blocks), then fine-tune it on downstream tasks.
+What if, for the weight initialization before pre-training, module A is initialized from some checkpoint, while B is initialized randomly? Can I still call the process pre-training? Or must all the modules in the model be initialized randomly for it to be called pre-training?
+If parts of the model's weights are 'contaminated' by checkpoints, can it only be called fine-tuning?
+"
+"['reinforcement-learning', 'intelligent-agent', 'multi-agent-systems']"," Title: What place do Agent Communications Language have in Multi-Agent Systems nowadays?Body: I am currently working on implementing a Multi-Agent System for Smart Grids.
+There's a lot of literature for that and some things confuse me. I have read that there is FIPA, which aimed to create a unified Agent Communication Language. So multiple Agents are talking to each other and FIPA specifies how the messages should be sent and processed. However, it is pretty old.
+In newer papers, where Multi-Agent Reinforcement Learning algorithms are proposed, FIPA, or generally any ACL, isn't mentioned. I believe that is because, in MARL, communication is done by observing the states of the other agents, rather than by communicating explicitly. Also, in MARL, the decision making is not based on negotiation, like in FIPA, but on the learned policy.
+I am now super confused if I got it right.
+Is FIPA still a thing I should worry about when I design my Multi-Agent System?
+Is there any other way to handle communication in MARL other than sharing states?
+Any help would be really appreciated, thank you very much :)
+"
+"['machine-learning', 'deep-learning', 'autoencoders', 'pytorch', 'variational-autoencoder']"," Title: variational auto encoder loss goes down but does not reconstruct input. out of debugging ideasBody: My variational autoencoder seems to work for MNIST, but fails on slightly "harder" data.
+By "fails" I mean there are at least two apparent problems:
+
+- Very poor reconstruction, for example sample reconstructions from the last epoch on validation set
+
+
+
+without any regularization at all.
+The last losses reported on the console are val_loss=9.57e-5 and train_loss=9.83e-5, which I thought would imply exact reconstructions.
+- validation loss is low (which does not seem to reflect the reconstruction), and always lower than training loss which is very suspicious.
+
+
+For MNIST everything looks fine (with fewer layers!).
+
+I will give as much information as I can, since I am not sure what I should provide in order for anyone to be able to help me.
+
+Firstly, here is the full code
+You will notice that the loss calculation and logging are very simple and straightforward, and I can't seem to find what's wrong.
+import torch
+from torch import nn
+import torch.nn.functional as F
+from typing import List, Optional, Any
+from pytorch_lightning.core.lightning import LightningModule
+from Testing.Research.config.ConfigProvider import ConfigProvider
+from pytorch_lightning import Trainer, seed_everything
+from torch import optim
+import os
+from pytorch_lightning.loggers import TensorBoardLogger
+# import tfmpl
+import matplotlib.pyplot as plt
+import matplotlib
+from Testing.Research.data_modules.MyDataModule import MyDataModule
+from Testing.Research.data_modules.MNISTDataModule import MNISTDataModule
+from Testing.Research.data_modules.CaseDataModule import CaseDataModule
+import torchvision
+from Testing.Research.config.paths import tb_logs_folder
+from Testing.Research.config.paths import vae_checkpoints_path
+from pytorch_lightning.callbacks.model_checkpoint import ModelCheckpoint
+
+
+class VAEFC(LightningModule):
+ # see https://towardsdatascience.com/understanding-variational-autoencoders-vaes-f70510919f73
+ # for possible upgrades, see https://arxiv.org/pdf/1602.02282.pdf
+ # https://stats.stackexchange.com/questions/332179/how-to-weight-kld-loss-vs-reconstruction-loss-in-variational
+ # -auto-encoder
+ def __init__(self, encoder_layer_sizes: List, decoder_layer_sizes: List, config):
+ super(VAEFC, self).__init__()
+ self._config = config
+ self.logger: Optional[TensorBoardLogger] = None
+ self.save_hyperparameters()
+
+ assert len(encoder_layer_sizes) >= 3, "must have at least 3 layers (2 hidden)"
+ # encoder layers
+ self._encoder_layers = nn.ModuleList()
+ for i in range(1, len(encoder_layer_sizes) - 1):
+ enc_layer = nn.Linear(encoder_layer_sizes[i - 1], encoder_layer_sizes[i])
+ self._encoder_layers.append(enc_layer)
+
+ # predict mean and covariance vectors
+ self._mean_layer = nn.Linear(encoder_layer_sizes[
+ len(encoder_layer_sizes) - 2],
+ encoder_layer_sizes[len(encoder_layer_sizes) - 1])
+ self._logvar_layer = nn.Linear(encoder_layer_sizes[
+ len(encoder_layer_sizes) - 2],
+ encoder_layer_sizes[len(encoder_layer_sizes) - 1])
+
+ # decoder layers
+ self._decoder_layers = nn.ModuleList()
+ for i in range(1, len(decoder_layer_sizes)):
+ dec_layer = nn.Linear(decoder_layer_sizes[i - 1], decoder_layer_sizes[i])
+ self._decoder_layers.append(dec_layer)
+
+ self._recon_function = nn.MSELoss(reduction='mean')
+ self._last_val_batch = {}
+
+ def _encode(self, x):
+ for i in range(len(self._encoder_layers)):
+ layer = self._encoder_layers[i]
+ x = F.relu(layer(x))
+
+ mean_output = self._mean_layer(x)
+ logvar_output = self._logvar_layer(x)
+ return mean_output, logvar_output
+
+ def _reparametrize(self, mu, logvar):
+ if not self.training:
+ return mu
+ std = logvar.mul(0.5).exp_()
+ if std.is_cuda:
+ eps = torch.FloatTensor(std.size()).cuda().normal_()
+ else:
+ eps = torch.FloatTensor(std.size()).normal_()
+ reparameterized = eps.mul(std).add_(mu)
+ return reparameterized
+
+ def _decode(self, z):
+ for i in range(len(self._decoder_layers) - 1):
+ layer = self._decoder_layers[i]
+ z = F.relu((layer(z)))
+
+ decoded = self._decoder_layers[len(self._decoder_layers) - 1](z)
+ # decoded = F.sigmoid(self._decoder_layers[len(self._decoder_layers)-1](z))
+ return decoded
+
+ def _loss_function(self, recon_x, x, mu, logvar, reconstruction_function):
+ """
+ recon_x: generating images
+ x: origin images
+ mu: latent mean
+ logvar: latent log variance
+ """
+ binary_cross_entropy = reconstruction_function(recon_x, x) # mse loss TODO see if mse or cross entropy
+ # loss = 0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
+ kld_element = mu.pow(2).add_(logvar.exp()).mul_(-1).add_(1).add_(logvar)
+ kld = torch.sum(kld_element).mul_(-0.5)
+ # KL divergence Kullback–Leibler divergence, regularization term for VAE
+ # It is a measure of how different two probability distributions are different from each other.
+ # We are trying to force the distributions closer while keeping the reconstruction loss low.
+ # see https://towardsdatascience.com/understanding-variational-autoencoders-vaes-f70510919f73
+
+ # read on weighting the regularization term here:
+ # https://stats.stackexchange.com/questions/332179/how-to-weight-kld-loss-vs-reconstruction-loss-in-variational
+ # -auto-encoder
+ return binary_cross_entropy + kld * self._config.regularization_factor
+
+ def _parse_batch_by_dataset(self, batch, batch_index):
+ if self._config.dataset == "toy":
+ (orig_batch, noisy_batch), label_batch = batch
+ # TODO put in the noise here and not in the dataset?
+ elif self._config.dataset == "mnist":
+ orig_batch, label_batch = batch
+ orig_batch = orig_batch.reshape(-1, 28 * 28)
+ noisy_batch = orig_batch
+ elif self._config.dataset == "case":
+ orig_batch, label_batch = batch
+
+ orig_batch = orig_batch.float().reshape(
+ -1,
+ len(self._config.case.feature_list) * self._config.case.frames_per_pd_sample
+ )
+ noisy_batch = orig_batch
+ else:
+ raise ValueError("invalid dataset")
+ noisy_batch = noisy_batch.view(noisy_batch.size(0), -1)
+
+ return orig_batch, noisy_batch, label_batch
+
+ def training_step(self, batch, batch_idx):
+ orig_batch, noisy_batch, label_batch = self._parse_batch_by_dataset(batch, batch_idx)
+
+ recon_batch, mu, logvar = self.forward(noisy_batch)
+
+ loss = self._loss_function(
+ recon_batch,
+ orig_batch, mu, logvar,
+ reconstruction_function=self._recon_function
+ )
+ # self.logger.experiment.add_scalars("losses", {"train_loss": loss})
+ tb = self.logger.experiment
+ tb.add_scalars("losses", {"train_loss": loss}, global_step=self.current_epoch)
+ # self.logger.experiment.add_scalar("train_loss", loss, self.current_epoch)
+ if batch_idx == len(self.train_dataloader()) - 2:
+ # https://pytorch.org/docs/stable/_modules/torch/utils/tensorboard/writer.html#SummaryWriter.add_embedding
+ # noisy_batch = noisy_batch.detach()
+ # recon_batch = recon_batch.detach()
+ # last_batch_plt = matplotlib.figure.Figure() # read https://github.com/wookayin/tensorflow-plot
+ # ax = last_batch_plt.add_subplot(1, 1, 1)
+ # ax.scatter(orig_batch[:, 0], orig_batch[:, 1], label="original")
+ # ax.scatter(noisy_batch[:, 0], noisy_batch[:, 1], label="noisy")
+ # ax.scatter(recon_batch[:, 0], recon_batch[:, 1], label="reconstructed")
+ # ax.legend(loc="upper left")
+ # self.logger.experiment.add_figure(f"original last batch, epoch {self.current_epoch}", last_batch_plt)
+ # tb.add_embedding(orig_batch, global_step=self.current_epoch, metadata=label_batch)
+ pass
+ self.logger.experiment.flush()
+ self.log("train_loss", loss, prog_bar=True, on_step=False, on_epoch=True)
+ return loss
+
+ def _plot_batches(self, orig_batch, noisy_batch, label_batch, batch_idx, recon_batch, mu, logvar):
+ # orig_batch_view = orig_batch.reshape(-1, self._config.case.frames_per_pd_sample,
+ # len(self._config.case.feature_list))
+ #
+ # plt.figure()
+ # plt.plot(orig_batch_view[11, :, 0].detach().cpu().numpy(), label="feature 0")
+ # plt.legend(loc="upper left")
+ # plt.show()
+
+ tb = self.logger.experiment
+ if self._config.dataset == "mnist":
+ orig_batch -= orig_batch.min()
+ orig_batch /= orig_batch.max()
+ recon_batch -= recon_batch.min()
+ recon_batch /= recon_batch.max()
+
+ orig_grid = torchvision.utils.make_grid(orig_batch.view(-1, 1, 28, 28))
+ val_recon_grid = torchvision.utils.make_grid(recon_batch.view(-1, 1, 28, 28))
+
+ tb.add_image("original_val", orig_grid, global_step=self.current_epoch)
+ tb.add_image("reconstruction_val", val_recon_grid, global_step=self.current_epoch)
+
+ label_img = orig_batch.view(-1, 1, 28, 28)
+ pass
+ elif self._config.dataset == "case":
+ orig_batch_view = orig_batch.reshape(-1, self._config.case.frames_per_pd_sample,
+ len(self._config.case.feature_list)).transpose(1, 2)
+ recon_batch_view = recon_batch.reshape(-1, self._config.case.frames_per_pd_sample,
+ len(self._config.case.feature_list)).transpose(1, 2)
+
+ # plt.figure()
+ # plt.plot(orig_batch_view[11, 0, :].detach().cpu().numpy())
+ # plt.show()
+ # pass
+
+ n_samples = orig_batch_view.shape[0]
+ n_plots = min(n_samples, 4)
+ first_sample_idx = 0
+
+ # TODO either plotting or data problem
+ fig, axs = plt.subplots(n_plots, 1)
+ for sample_idx in range(n_plots):
+ for feature_idx, (orig_feature, recon_feature) in enumerate(
+ zip(orig_batch_view[sample_idx + first_sample_idx, :, :],
+ recon_batch_view[sample_idx + first_sample_idx, :, :])):
+ i = feature_idx
+ if i > 0: continue # or scale issues don't allow informative plotting
+
+ # plt.figure()
+ # plt.plot(orig_feature.detach().cpu().numpy(), label=f'orig{i}, sample{sample_idx}')
+ # plt.legend(loc='upper left')
+ # pass
+
+ axs[sample_idx].plot(orig_feature.detach().cpu().numpy(), label=f'orig{i}, sample{sample_idx}')
+ axs[sample_idx].plot(recon_feature.detach().cpu().numpy(), label=f'recon{i}, sample{sample_idx}')
+ # sample{sample_idx}')
+ axs[sample_idx].legend(loc='upper left')
+ pass
+ # plt.show()
+
+ tb.add_figure("recon_vs_orig", fig, global_step=self.current_epoch, close=True)
+
+ def validation_step(self, batch, batch_idx):
+ orig_batch, noisy_batch, label_batch = self._parse_batch_by_dataset(batch, batch_idx)
+
+ recon_batch, mu, logvar = self.forward(noisy_batch)
+
+ loss = self._loss_function(
+ recon_batch,
+ orig_batch, mu, logvar,
+ reconstruction_function=self._recon_function
+ )
+
+ tb = self.logger.experiment
+ # can probably speed up training by waiting for epoch end for data copy from gpu
+ # see https://sagivtech.com/2017/09/19/optimizing-pytorch-training-code/
+ tb.add_scalars("losses", {"val_loss": loss}, global_step=self.current_epoch)
+
+ label_img = None
+ if len(orig_batch) > 2:
+ self._last_val_batch = {
+ "orig_batch": orig_batch,
+ "noisy_batch": noisy_batch,
+ "label_batch": label_batch,
+ "batch_idx": batch_idx,
+ "recon_batch": recon_batch,
+ "mu": mu,
+ "logvar": logvar
+ }
+ # self._plot_batches(orig_batch, noisy_batch, label_batch, batch_idx, recon_batch, mu, logvar)
+
+ outputs = {"val_loss": loss, "recon_batch": recon_batch, "label_batch": label_batch,
+ "label_img": label_img}
+ self.log("val_loss", loss, prog_bar=True, on_step=False, on_epoch=True)
+ return outputs
+
+ def validation_epoch_end(self, outputs: List[Any]) -> None:
+ first_batch_dict = outputs[-1]
+
+ self._plot_batches(
+ self._last_val_batch["orig_batch"],
+ self._last_val_batch["noisy_batch"],
+ self._last_val_batch["label_batch"],
+ self._last_val_batch["batch_idx"],
+ self._last_val_batch["recon_batch"],
+ self._last_val_batch["mu"],
+ self._last_val_batch["logvar"]
+ )
+ self.log(name="VAEFC_val_loss_epoch_end", value={"val_loss": first_batch_dict["val_loss"]})
+
+ def test_step(self, batch, batch_idx):
+ orig_batch, noisy_batch, label_batch = self._parse_batch_by_dataset(batch, batch_idx)
+
+ recon_batch, mu, logvar = self.forward(noisy_batch)
+
+ loss = self._loss_function(
+ recon_batch,
+ orig_batch, mu, logvar,
+ reconstruction_function=self._recon_function
+ )
+
+ tb = self.logger.experiment
+ tb.add_scalars("losses", {"test_loss": loss}, global_step=self.global_step)
+
+ return {"test_loss": loss, "mus": mu, "labels": label_batch, "images": orig_batch}
+
+ def test_epoch_end(self, outputs: List):
+ tb = self.logger.experiment
+
+ avg_loss = torch.stack([x['test_loss'] for x in outputs]).mean()
+ self.log(name="test_epoch_end", value={"test_loss_avg": avg_loss})
+
+ if self._config.dataset == "mnist":
+ tb.add_embedding(
+ mat=torch.cat([o["mus"] for o in outputs]),
+ metadata=torch.cat([o["labels"] for o in outputs]).detach().cpu().numpy(),
+ label_img=torch.cat([o["images"] for o in outputs]).view(-1, 1, 28, 28),
+ global_step=self.global_step,
+ )
+
+ def configure_optimizers(self):
+ optimizer = optim.Adam(self.parameters(), lr=self._config.learning_rate)
+ return optimizer
+
+ def forward(self, x):
+ mu, logvar = self._encode(x)
+ z = self._reparametrize(mu, logvar)
+ decoded = self._decode(z)
+ return decoded, mu, logvar
+
+
+def train_vae(config, datamodule, latent_dim, dec_layer_sizes, enc_layer_sizes):
+ model = VAEFC(config=config, encoder_layer_sizes=enc_layer_sizes, decoder_layer_sizes=dec_layer_sizes)
+
+ logger = TensorBoardLogger(save_dir=tb_logs_folder, name='VAEFC', default_hp_metric=False)
+ logger.hparams = config
+
+ checkpoint_callback = ModelCheckpoint(dirpath=vae_checkpoints_path)
+ trainer = Trainer(deterministic=config.is_deterministic,
+ # auto_lr_find=config.auto_lr_find,
+ # log_gpu_memory='all',
+ # min_epochs=99999,
+ max_epochs=config.num_epochs,
+ default_root_dir=vae_checkpoints_path,
+ logger=logger,
+ callbacks=[checkpoint_callback],
+ gpus=1
+ )
+ # trainer.tune(model)
+ trainer.fit(model, datamodule=datamodule)
+ best_model_path = checkpoint_callback.best_model_path
+ print("done training vae with lightning")
+ print(f"best model path = {best_model_path}")
+ return trainer
+
+
+def run_trained_vae(trainer):
+ # https://pytorch-lightning.readthedocs.io/en/latest/test_set.html
+ # (1) load the best checkpoint automatically (lightning tracks this for you)
+ trainer.test()
+
+ # (2) don't load a checkpoint, instead use the model with the latest weights
+ # trainer.test(ckpt_path=None)
+
+ # (3) test using a specific checkpoint
+ # trainer.test(ckpt_path='/path/to/my_checkpoint.ckpt')
+
+ # (4) test with an explicit model (will use this model and not load a checkpoint)
+ # trainer.test(model)
+
+
+
+Parameters
+I am getting very similar results for any combination of parameters I am (manually) using. Maybe I didn't try something.
+num_epochs: 40
+batch_size: 32
+learning_rate: 0.0001
+auto_lr_find: False
+
+noise_factor: 0.1
+regularization_factor: 0.0
+
+train_size: 0.8
+val_size: 0.1
+num_workers: 1
+
+dataset: "case" # toy, mnnist, case
+mnist:
+ enc_layer_sizes: [784, 512,]
+ dec_layer_sizes: [512, 784]
+ latent_dim: 25
+ n_classes: 10
+ classifier_layers: [20, 10]
+toy:
+ enc_layer_sizes: [2, 200, 200, 200]
+ dec_layer_sizes: [200, 200, 200, 2]
+ latent_dim: 8
+ centers_radius: 4.0
+ n_clusters: 10
+ cluster_size: 5000
+case:
+ #enc_layer_sizes: [ 1800, 600, 300, 100 ]
+ #dec_layer_sizes: [ 100, 300, 600, 1800 ]
+ #frames_per_pd_sample: 600
+
+ enc_layer_sizes: [ 10, 600, 300, 300 ]
+ dec_layer_sizes: [ 600, 300, 300, 10 ]
+ frames_per_pd_sample: 10
+
+ latent_dim: 300
+ n_classes: 10
+ classifier_layers: [ 20, 10 ] # unused right now.
+
+ feature_list:
+ #- V_0_0 # 0, X
+ #- V_0_1 # 0, Y
+ #- V_0_2 # 0, Z
+ - pads_0
+ enc_kernel_sizes: [] # for conv
+ end_strides: []
+ dec_kernel_sizes: []
+ dec_strides: []
+
+is_deterministic: False
+
+real_data_pd_dir: "D:/pressure_pd"
+case_dir: "real_case_20_min"
+case_file: "pressure_data_0.pkl"
+
+
+
+Data
+For Mnist everything works fine.
+When changing to my specific data, results are as above.
+The data is a time series of several features. To dumb this down even more, I am feeding just a single feature, sliced into equal-length chunks that are fed into the input layer as vectors.
+The fact that the data is a time series could maybe help modeling in the future, but for now I want to just refer to it as chunks of data, which I believe I am doing.
+code:
+from torch.utils.data import Dataset
+import matplotlib.pyplot as plt
+import torch
+from Testing.Research.config.ConfigProvider import ConfigProvider
+import os
+import pickle
+import pandas as pd
+from typing import Tuple
+import numpy as np
+
+
+class CaseDataset(Dataset):
+ def __init__(self, path):
+ super(CaseDataset, self).__init__()
+ self._path = path
+
+ self._config = ConfigProvider.get_config()
+ self.frames_per_pd_sample = self._config.case.frames_per_pd_sample
+ self._load_case_from_pkl()
+ self.__len = len(self._full) // self.frames_per_pd_sample # discard last non full batch
+
+ def _load_case_from_pkl(self):
+ assert os.path.isfile(self._path)
+ with open(self._path, "rb") as f:
+ p = pickle.load(f)
+
+ self._full: pd.DataFrame = p["full"]
+ self._subsampled: pd.DataFrame = p["subsampled"]
+ self._misc: pd.DataFrame = p["misc"]
+
+ feature_list = self._config.case.feature_list
+ self._features_df = self._full[feature_list].copy()
+
+ # normalize from -1 to 1
+ features_to_normalize = self._features_df.columns
+ self._features_df[features_to_normalize] = \
+ self._features_df[features_to_normalize].apply(lambda x: (((x - x.min()) / (x.max() - x.min())) * 2) - 1)
+
+ pass
+
+ def __len__(self):
+ # number of samples in the dataset
+ return self.__len
+
+ def __getitem__(self, index: int) -> Tuple[np.array, np.array]:
+ data_item = self._features_df.iloc[index * self.frames_per_pd_sample: (index + 1) * self.frames_per_pd_sample, :].values
+ label = 0.0
+ # plt.figure()
+ # plt.plot(data_item[:, 0], label="feature 0")
+ # plt.legend(loc="upper left")
+ # plt.show()
+ return data_item, label
+
+
+The amount of time-steps per batch does not seem to affect convergence.
+Train test val split
+is done like so:
+import os
+from pytorch_lightning import LightningDataModule
+import torchvision.datasets as datasets
+from torchvision.transforms import transforms
+import torch
+from torch.utils.data import DataLoader
+from torch.utils.data import Subset
+from Testing.Research.config.paths import mnist_data_download_folder
+from Testing.Research.datasets.real_cases.CaseDataset import CaseDataset
+from typing import Optional
+
+
+class CaseDataModule(LightningDataModule):
+ def __init__(self, config, path):
+ super().__init__()
+ self._config = config
+ self._path = path
+
+ self._train_dataset: Optional[Subset] = None
+ self._val_dataset: Optional[Subset] = None
+ self._test_dataset: Optional[Subset] = None
+
+ def prepare_data(self):
+ pass
+
+ def setup(self, stage):
+ # transform
+ transform = transforms.Compose([transforms.ToTensor()])
+ full_dataset = CaseDataset(self._path)
+
+ train_size = int(self._config.train_size * len(full_dataset))
+ val_size = int(self._config.val_size * len(full_dataset))
+ test_size = len(full_dataset) - train_size - val_size
+ train, val, test = torch.utils.data.random_split(full_dataset, [train_size, val_size, test_size])
+
+ # assign to use in dataloaders
+ self._full_dataset = full_dataset
+ self._train_dataset = train
+ self._val_dataset = val
+ self._test_dataset = test
+
+ def train_dataloader(self):
+ return DataLoader(self._train_dataset, batch_size=self._config.batch_size, num_workers=self._config.num_workers)
+
+ def val_dataloader(self):
+ return DataLoader(self._val_dataset, batch_size=self._config.batch_size, num_workers=self._config.num_workers)
+
+ def test_dataloader(self):
+ return DataLoader(self._test_dataset, batch_size=self._config.batch_size, num_workers=self._config.num_workers)
+
+
+Questions
+
+- I believe having the validation loss consistently lower than the train loss shows something is very wrong here, but I can't put my finger on what, or come up with how to verify this.
+- How can I just make the model auto-encode the data correctly? Basically, I would want it to learn the identity function, and for the loss to reflect that.
+- The loss does not seem to reflect the reconstruction. I think this is probably the most fundamental issue.
+
+
+My thoughts
+
+- Try a convolutional net instead of FC? maybe it would be able to better learn features?
+- Out of ideas :(
+
+
+I will provide any missing information.
+"
+"['neural-networks', 'papers', 'neural-architecture-search']"," Title: How can an ""architectural motif"" be extracted from a trained MLP?Body: I am trying to reproduce the paper Synthetic Petri Dish: A novel surrogate model for Rapid Architecture Search. In the paper, the authors try to reduce the architecture of an MLP model trained on MNIST (2 layers - 100 neurons) by initializing a motif network from it, that is, 2 layers, 1 neuron each, and extracting the sigmoid function. I have been searching a lot, but I have not found the answer of how can someone extract an 'architectural motif' from a trained neural network.
+"
+"['activation-functions', 'geometric-deep-learning', 'hyper-parameters', 'graph-neural-networks', 'representation-learning']"," Title: Given a 2-layer GCN, can we choose the dimensions of the 2nd weight matrix, such that this architecture has the same capacity as a 1-layer GCN?Body: This might be more of a question about nested function classes:
+For $k$-class node classification in a graph with $n$ nodes and $d$-dimensional feature vectors.
+I want to compare
+Architecture I: the GCN model of Kipf/ Welling with two graph convolutional layers:
+$$
+\mathbf{Y}=\operatorname{softmax}\left(\mathbf{A} \xi\left(\mathbf{A X W}_{1}\right) \mathbf{W}_{2}\right)
+$$
+where
+
+- $\mathbf{X}$ is $n \times d$,
+- $\mathbf{Y}$ is $n \times k$,
+- $\mathbf{A}$ is a fixed $n \times n$ graph diffusion matrix,
+- $\mathbf{W}_{1}, \mathbf{W}_{2}$ are learnable weight matrices of size $d \times d^{\prime}$ and $d^{\prime} \times 2,$ respectively, shared across all nodes, and
+- $\xi$ is a nonlinearity.
+
+Architecture II: a single-layer graph neural network of the form:
+$$
+\mathbf{Y}=\operatorname{softmax}\left(\mathbf{A}^{2} \mathbf{X W}\right)
+$$
+where $\mathbf{W}$ is a learnable weight matrix of size $d \times 2$.
+
+Now I'm wondering
+
+- Can $\xi, d^{\prime}$ be chosen in a way that both architectures have the same expressive power (i.e. can represent the same class of functions)?
+
+- Can $\xi, d^{\prime}$ be chosen in a way that Architecture II is more expressive?
+
+- What would be the advantage in training complexity of Architecture II when applied to large-scale graphs?
+
+
+"
+"['reinforcement-learning', 'comparison', 'actor-critic-methods', 'policy-based-methods', 'continuous-tasks']"," Title: What are the advantages of RL with actor-critic methods over actor-only methods?Body: In general, what are the advantages of RL with actor-critic methods over actor-only (or policy-based) methods?
+I am not asking for a comparison with Q-learning-style (value-based) methods, but rather about methods that learn using only the actor.
+I think it's effective to use only actors, especially for sparse rewards. Is that correct?
+Please, let me know if you have any specific use cases that use only actors.
+"
+"['neural-networks', 'deep-learning', 'batch-normalization', 'normalisation', 'standardisation']"," Title: Why does batch norm standardize with sample mean/variance, when it also learns parameters to scale the mean/variance?Body: Batch norm is a normalizing layer that is shown to help deep networks learn faster and with higher generalization accuracy. It normalizes the activations of the previous layer to a mean $\beta$ and variance $\gamma^2$ to prevent things like activations from exploding or shifting during the learning process.
+More specifically:
+$$\hat{x} = \displaystyle \frac{x - \mu_t}{\sqrt{\sigma_t^2 + \epsilon}}\label{1}\tag{1}$$
+$$ BatchNorm_{\mu_t, \sigma_t}(x) = \gamma \hat{x} + \beta \label{2}\tag{2}$$
+where
+
+- $x$ is the input of the layer
+- $\mu_t, \sigma_t$ are the sample mean and standard deviation at time step $t$
+- $\epsilon$ is a small constant, and
+- $\gamma$ and $\beta$ are learnable parameters so that the output is not necessarily standardized to mean $0$ and variance $1$, but possibly to another mean and variance that may be better for the neural network.
+
+My question is, why does BatchNorm first standardize the input $x$ to $\hat{x}$ before applying the learnable parameters $\gamma$ and $\beta$? Isn't this redundant? The parameters $\gamma$ and $\beta$ could learn to standardize the input themselves right?
+In fact, as training progresses, $\mu_t$ and $\sigma_t$ becomes updated to new values $\mu_{t+1}$ and $\sigma_{t+1}$, so the learned parameters at that time step, $\gamma_t$ and $\beta_t$, no longer apply for time step $t+1$ since that involves a different standardization process with a different mean and variance. So by adding this standardization step, it may even hurt the convergence of the layer during learning, since it is adding the gradient of $BatchNorm_{\mu_{t+1}, \sigma_{t+1}}(x)$ to $BatchNorm_{\mu_t, \sigma_t}(x)$, which are two different functions right?
+Why not just simply make it like this?
+$$BatchNorm(x) = \gamma x + \beta \label{3}\tag{3}$$
+This would simplify the calculation of the gradients, which would make learning faster to compute.
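+To make the comparison concrete, here is a minimal runnable sketch of the two variants I mean (the batch size, feature dimension and epsilon are arbitrary, and I ignore the running statistics used at inference time):
+import torch
+
+x = torch.randn(32, 64)                      # activations from the previous layer
+gamma = torch.ones(64, requires_grad=True)   # learnable scale
+beta = torch.zeros(64, requires_grad=True)   # learnable shift
+eps = 1e-5
+
+# Equations (1)-(2): standardize with the batch statistics, then rescale.
+mu = x.mean(dim=0)
+var = x.var(dim=0, unbiased=False)
+x_hat = (x - mu) / torch.sqrt(var + eps)
+out_standard = gamma * x_hat + beta
+
+# Equation (3): the simplified variant I am asking about, with no standardization step.
+out_simple = gamma * x + beta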
+BatchNorm is one of the most successful developments of deep learning, so I know my intuition on these things is wrong -- I'm just curious as to what I am missing.
+"
+"['training', 'q-learning', 'off-policy-methods', 'batch-learning']"," Title: Offline/Batch Reinforcement Learning: when to stop training and what agent to selectBody: Context:
+My team and I are working on a RL problem for a specific application. We have data collected from user interactions (states, actions, rewards, etc.).
+It is too costly for us to emulate agents. We decided therefore to concentrate on Offline RL techniques. For this, we are currently using the RL-Coach library by Intel, which offers support for Batch/Offline RL. More specifically, to evaluate policies in offline settings, we train a DDQN-BCQ model and evaluate the learned policies using Offline Policy Estimators (OPEs).
+Problem:
+In an Online RL setting, the decision of when to stop the training of an agent generally depends on the goal one wants to achieve (as described in this post: https://stats.stackexchange.com/questions/322933/q-learning-when-to-stop-training). If the goal is to train until convergence (of rewards) but no longer, then you could for example stop when the standard deviation of your rewards over the last n steps drops under some threshold. If the goal is to compare the performance of two algorithms, then you should simply compare the two using the same number of training steps.
+However, in the Offline RL setting, I believe the conditions to stop training are not so clear. As stated above, no environment is directly available to evaluate our agents and the evaluation of the quality of the learned policy almost solely relies on OPEs, which are not always accurate.
+For me, I believe that there are two different options that would make sense. I am unsure if both those options are actually equivalent though.
+
+- The first option would be to stop training when the Q-values have converged/reached a plateau (i.e. when the Q-value network loss has converged) -- if they ever do, as we don't really have any guarantee of this happening with artificial neural networks. If the Q-values do reach a plateau, this would mean that our agent has reached some local optimum (or in the best case, the global optimum).
+- The second option would be to only look at the OPEs reward estimation, and stop when they reach a plateau. However, different OPEs do not necessarily reach a plateau at the same time, as it can be seen in the figure below. In the Batch-RL tutorial of RL-Coach, it seems that they would simply select the agent at the epoch where the different OPEs give the highest policy value estimation, without checking that the loss of the network had converged or not (but this is only a tutorial, so I suppose we can't rely too much on it).
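+For instance, what I have in mind for the second option is a simple plateau check on the OPE estimates, something like this sketch (the window size and threshold are arbitrary):
+def ope_plateaued(ope_history, window=5, tol=1e-3):
+    # Return True when the OPE estimate has stopped improving over the last `window` epochs.
+    if len(ope_history) < 2 * window:
+        return False
+    recent = sum(ope_history[-window:]) / window
+    previous = sum(ope_history[-2 * window:-window]) / window
+    return abs(recent - previous) < tol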
+
+Questions:
+
+- What would be the best criteria for choosing when to stop the training of an agent in an Offline-RL setting?
+- Also, the performance of an agent often heavily depends on the seed used for training. To evaluate the general performance, I believe you have to run multiple training with different seeds? However, in the end, you still want only a single agent to deploy. Should you simply select the one having the highest OPEs values among all the runs?
+
+P.S. I am not sure if this question should be split into two different posts, so please let me know if this is the case and I will edit the post!
+"
+"['neural-networks', 'machine-learning', 'convolutional-neural-networks', 'natural-language-processing', 'natural-language-understanding']"," Title: How to extract parameters from a text using AI/NLPBody: lets say I have three texts:
+
+- "make a heading that says hello word"
+- "make a heading of hello world"
+- "create heading consist of hello world"
+
+How can I fetch, using AI, the group of words that the heading refers to (i.e. "hello world" in this case)? Which AI frameworks or libraries can do that?
+In all the examples the heading points to "hello world" (which I am referring to as a group of words). So basically I want the words which will be part of the heading; in other words, there is a relationship between them. Another example I can give is "I am watching Breaking Bad": there is a relationship between watching and Breaking Bad, and I want to extract what you are watching.
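+Just to be explicit about what I am after, a hopelessly naive hand-written version would look like the sketch below (clearly too brittle, which is why I am asking about AI/NLP approaches; the keyword argument is just an assumption for illustration):
+def extract_after_keyword(sentence: str, keyword: str) -> str:
+    # Return everything after the first occurrence of `keyword`.
+    words = sentence.lower().split()
+    if keyword in words:
+        return " ".join(words[words.index(keyword) + 1:])
+    return ""
+
+print(extract_after_keyword("make a heading of hello world", "of"))        # hello world
+print(extract_after_keyword("I am watching Breaking Bad", "watching"))     # breaking bad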
+What's the best approach? Do I have to train a model for that or there are some other techniques that can get it done?
+"
+"['math', 'notation', 'meta-learning', 'gradient', 'model-agnostic-meta-learning']"," Title: What is $ \nabla_{\theta_{k-1}} \theta_{k}$ in the context of MAML?Body: I am attempting to fully understand the explicit derivation and computation of the Hessian and how it is used in MAML. I came across this blog: https://lilianweng.github.io/lil-log/2018/11/30/meta-learning.html.
+Specifically, could someone help to clarify this for me: is this term in the red box literally interpreted as the gradient at $\theta_{k-1}$ multiplied by the $\theta_k$?
+
+"
+"['neural-networks', 'deep-learning', 'math', 'function-approximation']"," Title: Are monotonically increasing functions easier to learn?Body: A monotonically increasing function is a function that as x gets bigger so does its output. So, if plotted, it will never go down. Although the outputs might stay constant.
+Logically this seems like an easier function to learn when compared to something that can, when plotted, go up or down.
+Wikipedia has some example diagrams on monotonic functions.
+If I were to say that it is easier for a neural network to learn a monotonic function compared to a non-monotonic function would the statement be correct? If so, is there any reason to it other than 'it only goes one way'?
+"
+"['reinforcement-learning', 'convolutional-neural-networks']"," Title: Can a convolutional network predict states for a RL AgentBody: During the course of training a DQN agent, all visited states are stored in a replay buffer. Therefore would it be practically possible for a CNN, given a reasonable amount of data, to predict the next RL state (in the form of an image) for a given action?
+For example, take the game of Atari as shown below -
+
+Here the agent can take 2 major actions - go left and go right. Would a CNN be able to generate the image of the bar going right/left for the respective actions?
+My practical knowledge of CNNs is quite limited and therefore I'm trying to gauge the abilities of CNNs before I take up a project.
+"
+"['neural-networks', 'convolutional-neural-networks', 'pytorch', 'hyperparameter-optimization']"," Title: Is it possible to train one part of the network with a particular learning rate and the other part with a different one?Body: I have a combined network consisting of two parts: one is for images and the other is for numerical data. Each sample is matched with a numerical case by an ID. For this combined network, a lr
of 0.01 was found to be best working via hyperparameter tunings
+However, when I trained them as separate tasks (binary classification), a lr of 0.001 for images and 0.01 for numerical data worked best.
+As for an AUC metric, the combined network (0.818) is performing on average of image and numerical networks (0.799 and 0.821, respectively).
+Here I thought maybe the combined network's lr is too high for the image part and that I should apply a lower lr for that part. But I don't know if it is possible.
+If anyone has any idea of what is going on, please let me know.
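+For reference, I have read that PyTorch optimizers accept per-parameter-group options, so what I imagine is roughly the sketch below (the two Linear layers are just stand-ins for my image and numerical sub-networks), but I am not sure this is the recommended way:
+import torch.nn as nn
+import torch.optim as optim
+
+# Stand-ins for my two sub-networks (the real ones are a CNN and an MLP).
+image_branch = nn.Linear(128, 16)
+numeric_branch = nn.Linear(10, 16)
+
+optimizer = optim.Adam([
+    {"params": image_branch.parameters(), "lr": 1e-3},
+    {"params": numeric_branch.parameters(), "lr": 1e-2},
+])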
+"
+"['machine-learning', 'deep-learning', 'tensorflow', 'object-detection', 'yolo']"," Title: Is it possible to modify or replace the basic network of YOLO?Body: I have an idea to adapt YOLO algorithm to my application, the original YOLO algorithm is for image classifications, which have 24 convolutional layers with output class of 1000, is it possible to replace the basic network of YOLO with Alexnet or Resnet or the custom network structure designed by myself? Noted that my application have input shape of 500 * 10000 * 1 and only 4 classes for classification.
+"
+"['machine-learning', 'training', 'data-preprocessing', 'feature-selection', 'features']"," Title: How to predict the best from a set of messages - best practiceBody: Suppose I have a set of messages A,B,C,D and I want to produce the best message for a website user at a given time.
+For training I plan to show random users a random single message [A/B/C/D] and fill these columns (I'm simplifying the data for illustration):
+
+- converted before
+- funnel state (e.g awareness, search, decision)
+- number of page views
+- message shown [A-D]
+- Time to convert (this will be updated later if there is a conversion)
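+For example, one training row might look like this (made-up values): converted before = no, funnel state = search, page views = 7, message shown = B, time to convert = 36 hours.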
+
+I want to predict what is the best message to show to a specific user in order to maximise the chance of conversion (=min time to convert).
+I'm not sure how to represent this for training and inference. It's not a simple prediction like predicting one of the given data points.
+One option is to run a prediction of time-to-buy for each of the messages, but:
+1. it's not efficient;
+2. it will prefer messages that are shown closer to purchase time, regardless of whether they fit the current user's timing.
+"
+"['reinforcement-learning', 'value-functions', 'multi-armed-bandits', 'pomdp', 'contextual-bandits']"," Title: How do I learn the value function for a POMDP with a single-step horizon (bandit)?Body: Consider a POMDP with a finite number of environment states, $|\mathcal{S}| = N$, but the number of belief states is uncountably infinite. The belief state space is the convex hull of an $N$ simplex. Each turn this space is sampled with a flat probability distribution. As you are sampling from an uncountably infinite set of belief states, the probability of a belief state recurring in a finite number of samples is zero.
+Now, let's suppose that there are a finite number of episodes, and each episode ends after 1-time step. At the only time step of the episode, the agent receives a belief state $b(s)$ over some fixed set of contexts $s$, selects a single action, and receives a single reward, before the episode ends.
+I understand that the belief state value function, $V(b)$, is piecewise linear and convex, with a single hyperplane for each action (see e.g. [1]).
+My question is, given that I only observe the belief states and the sampled rewards, how do I identify the value function, given that a belief state $b(s)$ has an infinitesimally small probability of occurring again?
+The expected reward for a given belief state $b$ is just a linear function $\alpha \cdot b$, where $\alpha$ is the vector of the rewards for each state of the environment. But I cannot simply learn a linear model here because $\alpha \cdot b$ gives me the expected reward for a given belief state, but I may never start with this belief state again and so cannot simply calculate the sample mean expected reward.
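+To be concrete, the data I observe looks roughly like this toy sketch (ignoring the action for simplicity; the reward values and noise are made up):
+import numpy as np
+
+N = 3                                     # number of environment states
+alpha = np.array([1.0, 0.0, 2.0])         # unknown per-state rewards I would like to recover
+rng = np.random.default_rng(0)
+
+data = []
+for _ in range(1000):
+    b = rng.dirichlet(np.ones(N))         # belief state sampled uniformly from the simplex
+    s = rng.choice(N, p=b)                # hidden state drawn according to the belief
+    r = alpha[s] + rng.normal(0.0, 0.1)   # the single reward observed in that episode
+    data.append((b, r))                   # each particular belief b essentially never recurs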
+"
+"['machine-learning', 'deep-learning', 'data-preprocessing']"," Title: Is pre-processing used in deep learning?Body: I'm new to deep learning. I wanted to know: do we use pre-processing in deep learning? Or it is only used in machine learning. I searched for it and its methods on the internet, but I didn't find a suitable answer.
+"
+"['machine-learning', 'tensorflow', 'models']"," Title: How to embed/deploy an arbitrary machine learning model on microcontrollers?Body: Say I have a machine learning model trained on a laptop and I then want to embed/deploy the model on a microcontroller. How can I do this?
+I know that TensorflowLite Micro generates a C header to be added in the project and then be embedded, but every example I read shows how it is done with neural networks, which seems legit as TensorFlow is primarily used for deep learning.
+But how can I do the same with any type of model, like the ones there are in scikit-learn? So, I'm not necessarily interested in doing this with TensorflowLite Micro, but I'm interested in the general approach to solving this problem.
+"
+"['computer-vision', 'object-detection', 'yolo']"," Title: Aggregating 2D object detections into 3D object detectionsBody: I have a data set of 3D images with some bounding box annotations. The images are too large to train something like YOLO 3D (would run out of memory), so I instead created slices of the 3D images with corresponding 2D bounding boxes and trained a 2D object detector. During inference, I assemble the 2D detections into 3D detections. I constructed some simple heuristics to do that, which works fine, but I am wondering if there aren't any established methods of doing that.
+I would appreciate it if you could point me in the right direction.
+"
+"['reinforcement-learning', 'python', 'environment', 'state-spaces']"," Title: Is object-based representation of the observation space feasible?Body: I just started working on a DRL project from scratch. The state of each episode can be expressed as a state set $S=(S^A, S^B, S^C, S^D)$. Each subset is a feature set of a constituent component of the environment, say, $S^A=(a_1, a_2, a_3)$. To model components, I decided to create four pythonic classes with attributes as features. For example, class A
is like:
+class A:
+ def __init__(self, a1, a2, a3):
+ self.a1 = a1
+ self.a2 = a2
+ self.a3 = a3
+
+Each class has some methods that help in the interaction with other components (classes) and is used in the environment's step function to generate actions.
+I am going to create one instance of class A, 10 instances of class B, 20 instances of class C, and a random number (between 1 and 10) of instances of class D at the beginning of each episode. So, my observation includes 33-42 states of entities.
+As far as I know, the observation space is usually encoded in n-dimensional arrays as it is in OpenAI Gym. Is it possible, feasible, or considered good practice to store instances as a sub-state of the whole observation space? In my case, it would be like storing 33-42 instances in an array (list) of 33-42 elements.
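+To make the idea concrete, here is a rough sketch of the two representations I am weighing (the feature values and the flattening are just placeholders):
+import numpy as np
+
+class A:
+    def __init__(self, a1, a2, a3):
+        self.a1, self.a2, self.a3 = a1, a2, a3
+
+    def features(self):
+        return [self.a1, self.a2, self.a3]
+
+components = [A(0.1, 0.2, 0.3), A(0.4, 0.5, 0.6)]   # in reality: one A, 10 B, 20 C and 1-10 D instances
+
+# Option 1: keep the observation as the list of instances themselves.
+obs_as_objects = components
+
+# Option 2: flatten each instance's features into one numeric vector (Gym-style).
+obs_as_array = np.concatenate([np.asarray(c.features(), dtype=np.float32) for c in components])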
+Thanks for your time and suggestions!
+"
+"['machine-learning', 'comparison', 'definitions', 'statistical-ai', 'statistics']"," Title: Can AI be understood as a generalized statistics tool?Body: I am a (soon-to-become, to be honest) theoretical physicist. I want to learn a bit about AI. So as you know in physics we develop theories based on as few and as simple basic equations as possible which shall explain as much of the experimental results and observations as possible. I feel that this is kind of not how AI solves problems.
+My understanding is that AI can be understood as a very generalized and abstract statistics software package handling input data in a general way to find the "best fit" to some form of problem. Is that correct? I know it isn't. But is it vaguely correct?
+I give you an example. In weather prediction there is a technique called MOS (model output statistics). It collects output from numerical weather prediction models (simulation software) as well as observational data and finds statistical relations between them to correct the model output for errors. For example, it might be that the intensity of precipitation in London is on average underestimated by the model by 10 %, so MOS will correct for that. Over time, it improves itself, because it collects more and more data. Is this already a form of AI?
+"
+"['reinforcement-learning', 'gym']"," Title: Difference in average rewards between taking random actions and following random policiesBody: I wrote two programs that simulated 10000 episodes in gym environment CartPole-v0.
+The first program takes a random move at every step in each episode. The average reward over 10000 episodes is 22.1582.
+The second program uses a random policy in each episode. For each episode, initialize a 4 by 2 matrix $M$ with random numbers from a uniform distribution on $[0,1)$ that maps state observations to action values. Then choose the action with higher value in each step. The average reward over 10000 episodes is 46.8291.
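+Roughly, the two programs do the following (simplified sketch, using the classic gym reset/step API):
+import gym
+import numpy as np
+
+env = gym.make("CartPole-v0")
+
+def run_episode(select_action):
+    obs, total, done = env.reset(), 0.0, False
+    while not done:
+        obs, reward, done, _ = env.step(select_action(obs))
+        total += reward
+    return total
+
+# Program 1: a fresh random action at every step.
+program_1 = lambda obs: env.action_space.sample()
+
+# Program 2: a random linear policy M, drawn once per episode and then followed greedily.
+def program_2_policy():
+    M = np.random.uniform(0.0, 1.0, size=(4, 2))   # maps the 4-dim observation to 2 action values
+    return lambda obs: int(np.argmax(obs @ M))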
+The linear mapping given by $M$ covers only a portion of the search space so it seems the way actions are selected in the first program is more "random" than the second program. How can we go about explaining the huge discrepancy in the average rewards obtained by the two methods?
+"
+"['neural-networks', 'machine-learning', 'training', 'efficiency']"," Title: How to train/update neural networks faster without a decrease in performance?Body: I noticed that there are many studies in recent years on how to train/update neural networks faster/quicker with equal or better performance. I find the following methods(except the chips arms race):
+
+- using few-shot learning, for instance, pre-training, etc.
+- using the minimum viable dataset, for instance using (guided) progressive sampling.
+- model compression, for instance, efficient transformers
+- data echoing, or, simply put, letting the data pass multiple times through the graph (or GPU)
+
+Is there a systematic overview of this topic, and how can we train or update a model faster without losing capacity?
+"
+"['deep-learning', 'backpropagation', 'math', 'cross-entropy', 'softmax']"," Title: How to compute the gradient of the cross-entropy loss function with respect to the parameters with softmax activation function?Body: I've seen plenty of examples of people doing Sigmoid + MSE backpropagation implementations, yet I do not seem to understand how to implement backpropagation as stated in the title in the case of multi-class classifications.
+What confuses me mainly are the matrix-vector shapes and their multiplications, and their implementations in code.
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks', 'keras', 'activation-functions']"," Title: What happens if there is no activation function in some layers of a neural network?Body: What if I don't apply an activation function on some layers in a neural network. How will it affect the model?
+Take for instance the following code snippet:
+def model(x):
+ a = Conv2D(64, (3, 3))(x)
+ x = Conv2D(64, (3, 3), activation = 'relu')(x)
+ b = Conv2D(128, (3, 3))(x)
+ x = Conv2D(128, (3, 3), activation = 'relu')(b)
+ return x, a, b
+
+"
+"['neural-networks', 'deep-learning', 'tensorflow', 'long-short-term-memory', 'ctc-loss']"," Title: Why won't my model train with CTC loss?Body: I am trying to train an LSTM using CTC loss, but the loss does not decrease when I train it. I have created a minimal example of my issue by creating training data where the network simply has to copy the current input element at each time step. Moreover, I have made the length of the label the same as the length of the input sequence and no adjacent elements in the label sequence the same so that both CTC loss and categorical cross-entropy loss can be used. I found that when using categorical cross-entropy loss the model very quickly converges, whereas when using CTC loss it gets nowhere.
+I have uploaded by minimal example to colab. Does anyone know why CTC loss is not working in this case?
+"
+"['neural-networks', 'bayesian-deep-learning', 'bayesian-neural-networks']"," Title: Why is neural networks being a deterministic mapping not always considered a good thing?Body: Why is neural networks being a deterministic mapping not always considered a good thing?
+So I'm excluding models like VAEs since those aren't entirely deterministic. I keep thinking about this, and my conclusion is that neural networks are often used to model things in reality, which often have some stochasticity; since neural networks are deterministic, if they are not trained on enough examples of the variance that inputs can have in relation to outputs, they cannot generalize well. Are there other reasons this is not a good thing?
+"
+"['reinforcement-learning', 'proofs', 'bellman-equations', 'eligibility-traces']"," Title: How to prove the formula of eligibility traces operator in reinforcement learning?Body:
+I don't understand how the formula in the red circle is derived. The screenshot is taken from this paper
+"
+"['reference-request', 'agi', 'problem-solving', 'godel-machine']"," Title: Current research on Gödel machinesBody: Is there any current research on Gödel machines? It seems that the last article by Jürgen Schmidhuber on this topic was published in 2012: http://people.idsia.ch/~juergen/goedelmachine.html
+"
+"['deep-learning', 'convolutional-neural-networks', 'python', 'keras', '1d-convolution']"," Title: Keras 1D CNN always predicts the same result even if accuracy is high on training setBody: The validation accuracy of my 1D CNN is stuck on 0.5 and that's because I'm always getting the same prediction out of a balanced data set. At the same time my training accuracy keeps increasing and the loss decreasing as intended.
+Strangely, if I do model.evaluate() on my training set (that has close to 1 accuracy in the last epoch), the accuracy will also be 0.5. How can the accuracy here differ so much from the training accuracy of the last epoch? I've also tried with a batch size of 1 for both training and evaluating and the problem persists.
+Well, I've been searching for different solutions for quite some time but still no luck. Possible problems I've already looked into:
+
+- My data set is properly balanced and shuffled;
+- My labels are correct;
+- Tried adding fully connected layers;
+- Tried adding/removing dropout from the fully connected layers;
+- Tried the same architecture, but with the last layer with 1 neuron and sigmoid activation;
+- Tried changing the learning rates (went down to 0.0001 but still the same problem).
+
+
+Here's my code:
+import pathlib
+import numpy as np
+import ipynb.fs.defs.preprocessDataset as preprocessDataset
+import pickle
+import tensorflow as tf
+from tensorflow.keras.models import Sequential
+from tensorflow.keras import Input
+from tensorflow.keras.layers import Conv1D, BatchNormalization, Activation, MaxPooling1D, Flatten, Dropout, Dense
+from tensorflow.keras.optimizers import SGD
+
+main_folder = pathlib.Path.cwd().parent
+datasetsFolder=f'{main_folder}\\datasets'
+trainDataset = preprocessDataset.loadDataset('DatasetTime_Sg12p5_Ov75_Train',datasetsFolder)
+testDataset = preprocessDataset.loadDataset('DatasetTime_Sg12p5_Ov75_Test',datasetsFolder)
+
+X_train,Y_train,Names_train=trainDataset[0],trainDataset[1],trainDataset[2]
+X_test,Y_test,Names_test=testDataset[0],testDataset[1],testDataset[2]
+
+model = Sequential()
+
+model.add(Input(shape=X_train.shape[1:]))
+
+model.add(Conv1D(16, 61, strides=1, padding="same"))
+model.add(BatchNormalization())
+model.add(Activation('relu'))
+model.add(MaxPooling1D(2, strides=2, padding="valid"))
+
+model.add(Conv1D(32, 3, strides=1, padding="same"))
+model.add(BatchNormalization())
+model.add(Activation('relu'))
+model.add(MaxPooling1D(2, strides=2, padding="valid"))
+
+model.add(Conv1D(64, 3, strides=1, padding="same"))
+model.add(BatchNormalization())
+model.add(Activation('relu'))
+model.add(MaxPooling1D(2, strides=2, padding="valid"))
+
+model.add(Conv1D(64, 3, strides=1, padding="same"))
+model.add(BatchNormalization())
+model.add(Activation('relu'))
+model.add(MaxPooling1D(2, strides=2, padding="valid"))
+
+model.add(Conv1D(64, 3, strides=1, padding="same"))
+model.add(BatchNormalization())
+model.add(Activation('relu'))
+model.add(Flatten())
+model.add(Dropout(0.5))
+
+model.add(Dense(200))
+model.add(Activation('relu'))
+
+model.add(Dense(2))
+model.add(Activation('softmax'))
+
+opt = SGD(learning_rate=0.01)
+
+model.compile(loss='binary_crossentropy',optimizer=opt,metrics=['accuracy'])
+
+model.summary()
+
+model.fit(X_train,Y_train,epochs=10,shuffle=False,validation_data=(X_test, Y_test))
+
+model.evaluate(X_train,Y_train)
+
+
+Here's model.fit():
+model.fit(X_train,Y_train,epochs=10,shuffle=False,validation_data=(X_test, Y_test))
+
+Epoch 1/10
+914/914 [==============================] - 277s 300ms/step - loss: 0.6405 - accuracy: 0.6543 - val_loss: 7.9835 - val_accuracy: 0.5000
+Epoch 2/10
+914/914 [==============================] - 270s 295ms/step - loss: 0.3997 - accuracy: 0.8204 - val_loss: 19.8981 - val_accuracy: 0.5000
+Epoch 3/10
+914/914 [==============================] - 273s 298ms/step - loss: 0.2976 - accuracy: 0.8730 - val_loss: 1.9558 - val_accuracy: 0.5002
+Epoch 4/10
+914/914 [==============================] - 278s 304ms/step - loss: 0.2897 - accuracy: 0.8776 - val_loss: 20.2678 - val_accuracy: 0.5000
+Epoch 5/10
+914/914 [==============================] - 277s 303ms/step - loss: 0.2459 - accuracy: 0.8991 - val_loss: 5.4945 - val_accuracy: 0.5000
+Epoch 6/10
+914/914 [==============================] - 268s 294ms/step - loss: 0.2008 - accuracy: 0.9181 - val_loss: 32.4579 - val_accuracy: 0.5000
+Epoch 7/10
+914/914 [==============================] - 271s 297ms/step - loss: 0.1695 - accuracy: 0.9317 - val_loss: 14.9538 - val_accuracy: 0.5000
+Epoch 8/10
+914/914 [==============================] - 276s 302ms/step - loss: 0.1423 - accuracy: 0.9452 - val_loss: 1.4420 - val_accuracy: 0.4988
+Epoch 9/10
+914/914 [==============================] - 266s 291ms/step - loss: 0.1261 - accuracy: 0.9497 - val_loss: 4.3830 - val_accuracy: 0.5005
+Epoch 10/10
+914/914 [==============================] - 272s 297ms/step - loss: 0.1142 - accuracy: 0.9548 - val_loss: 1.6054 - val_accuracy: 0.5009
+
+Here's model.evaluate():
+model.evaluate(X_train,Y_train)
+
+914/914 [==============================] - 35s 37ms/step - loss: 1.7588 - accuracy: 0.5009
+
+Here's model.summary():
+Model: "sequential"
+_________________________________________________________________
+Layer (type) Output Shape Param #
+=================================================================
+conv1d (Conv1D) (None, 4096, 16) 992
+_________________________________________________________________
+batch_normalization (BatchNo (None, 4096, 16) 64
+_________________________________________________________________
+activation (Activation) (None, 4096, 16) 0
+_________________________________________________________________
+max_pooling1d (MaxPooling1D) (None, 2048, 16) 0
+_________________________________________________________________
+conv1d_1 (Conv1D) (None, 2048, 32) 1568
+_________________________________________________________________
+batch_normalization_1 (Batch (None, 2048, 32) 128
+_________________________________________________________________
+activation_1 (Activation) (None, 2048, 32) 0
+_________________________________________________________________
+max_pooling1d_1 (MaxPooling1 (None, 1024, 32) 0
+_________________________________________________________________
+conv1d_2 (Conv1D) (None, 1024, 64) 6208
+_________________________________________________________________
+batch_normalization_2 (Batch (None, 1024, 64) 256
+_________________________________________________________________
+activation_2 (Activation) (None, 1024, 64) 0
+_________________________________________________________________
+max_pooling1d_2 (MaxPooling1 (None, 512, 64) 0
+_________________________________________________________________
+conv1d_3 (Conv1D) (None, 512, 64) 12352
+_________________________________________________________________
+batch_normalization_3 (Batch (None, 512, 64) 256
+_________________________________________________________________
+activation_3 (Activation) (None, 512, 64) 0
+_________________________________________________________________
+max_pooling1d_3 (MaxPooling1 (None, 256, 64) 0
+_________________________________________________________________
+conv1d_4 (Conv1D) (None, 256, 64) 12352
+_________________________________________________________________
+batch_normalization_4 (Batch (None, 256, 64) 256
+_________________________________________________________________
+activation_4 (Activation) (None, 256, 64) 0
+_________________________________________________________________
+flatten (Flatten) (None, 16384) 0
+_________________________________________________________________
+dropout (Dropout) (None, 16384) 0
+_________________________________________________________________
+dense (Dense) (None, 200) 3277000
+_________________________________________________________________
+activation_5 (Activation) (None, 200) 0
+_________________________________________________________________
+dense_1 (Dense) (None, 2) 402
+_________________________________________________________________
+activation_6 (Activation) (None, 2) 0
+=================================================================
+Total params: 3,311,834
+Trainable params: 3,311,354
+Non-trainable params: 480
+_________________________________________________________________
+
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'objective-functions', 'temporal-difference-methods']"," Title: When calculating the cost in deep Q-learning, do we use both the input and target states?Body: I just finished Andrew Ngs's deep learning specialization, but RL was not covered, so I don't know the basics of RL. So, I have been having trouble understanding the cost function in deep Q-learning. Like other cost functions in machine learning, you usually have $\hat{y}$ (the network prediction) and $y$ (the target, or what the network is being optimized for.)
+I've read through a few online articles on deep Q-learning. So far, there has been no mention of setting up a target state ($y$) for the agent to produce. There has been mention of calculating a temporal-difference, however, which is where I am confused.
+When calculating the cost function, are you taking the input state ($\hat{y}$) and a target state ($y$) into consideration to determine the temporal-difference?
+Otherwise, I'm not sure how the cost function could determine a reward based on the input alone (the state of the environment the agent is in).
+"
+"['objective-functions', 'generative-adversarial-networks']"," Title: In this implementation of pix2pix, why are the weights for the discriminator and generator losses set to 1 and 100 respectively?Body: I am working on a pix2pix GAN model that was inspired by the code in this Github repository. The original code is working and I have already customized most of the code for my needs. However, there is one part I am unable to understand.
+The pix2pix GAN is a conditional GAN that takes an image as a condition and outputs a modified image - such as blurry to clear, facades to buildings, or filling in a cut-out part of an image. The combined model thus takes a conditional image as input; the discriminator compares its output with a dummy matrix named valid or fake, containing 0s or 1s according to validity (0 for generated samples, 1 for real samples). The generator loss is based on similarity with the real sample plus the discriminator term. The following code corresponds to what I described:
+self.combined = Model(inputs=[img_A, img_B], outputs=[valid, fake_A])
+self.combined.compile(loss=['mse', 'mae'],
+ loss_weights=[1, 100],
+ optimizer=optimizer)
+
+The losses are thus set as MSE for the discriminator output and MAE for the generator. That seems to be OK, but I cannot understand why the implementation uses 1 and 100 for the weights of the discriminator and generator losses, respectively, which seems to imply that the discriminator loss counts 100 times less than the loss of the generator. I couldn't find the reason in the original article. Is my understanding of the GAN incorrect?
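+If I read the Keras documentation on loss_weights correctly, the combined model minimizes the weighted sum $1 \cdot \text{MSE}(\text{validity output}, \text{valid target}) + 100 \cdot \text{MAE}(\text{generated image}, \text{real image})$, so the reconstruction (MAE) term dominates the objective numerically. Is that reading right?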
+Disclaimer: I have posted this question on Stats SE, but have no luck with answers. Maybe it is more suitable for AI.
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'reward-design', 'reward-functions']"," Title: Is better to reward short- or long-term progress in Q-learning?Body: I have been training some kind of agent to reach a target using a Q-learning based approach, and I have tried two different types of rewards:
+
+- Long-term reward: $\mathrm{reward} = - \mathrm{distance}(\mathrm{agent,target})(t+1)$
+
+- Short-term reward: $\mathrm{reward} = \mathrm{distance}(\mathrm{agent,target})(t) - \mathrm{distance}(\mathrm{agent,target})(t+1)$
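+In code, the two variants I am comparing are essentially the following (distance() is whatever metric I use between agent and target):
+def long_term_reward(dist_t, dist_t1):
+    return -dist_t1            # penalize the distance remaining after the step
+
+def short_term_reward(dist_t, dist_t1):
+    return dist_t - dist_t1    # reward the progress made during the step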
+
+
+In the first case, I am rewarding the current progress. In the second case, I am rewarding direct progression, but this may lead to less progression in the future. My question is, what kind of reward does Q-learning need?
+I understand that the $\gamma$ factor should incorporate long term rewards, so it makes more sense to reward direct progression. However, using long-term rewards gave better results for my scenario...
+"
+"['computer-vision', 'training', 'papers', 'attention']"," Title: How are the parameters $\alpha_i$ of hard attention trained?Body: I have a question about Show, Attend and Tell: Neural Image CaptionGeneration with Visual Attention paper by Xu. The basic mechanism of stochastic hard attention is that each pixel of the input image has a corresponding parameter $\alpha_i$, which describes the probability that this pixel will be chosen for further processing.
+But I don't see an explanation of how to train or define this parameter in the paper. Can someone explain how to train this $\alpha_i$ for each pixel?
+"
+"['convolutional-neural-networks', 'image-recognition', 'continuous-tasks']"," Title: Predicting continous value with CNN (prediction of fruit maturity)Body: I want to train some IA algorithm to be able to evaluate the maturity of a fruit (say, measured in numbers of days before rotten) based on an image of the fruit.
+My first instinct is to go with convolutional neural network (CNN), since those have proven very efficient for recognizing images. However, I am not sure what the output layer should look like in this case.
+I could separate the data into a bunch of classes (1 day left, 2 days left, 3 days left, etc.) and use one output node for each of these classes, as in a usual classification task, but in doing so I completely lose the continuous nature of the output, which makes me think it might not be the optimal way to proceed.
+Another option would be to just have a unique output node, whose activation would correspond to the continuous value to predict, here the number of days left (normalized appropriately to lie between 0 and 1). This would have the advantage of taking the continuity into account, but I have been told that neural networks aren't made to predict values in that way, they really are best suited for classification into discrete classes.
+What do you think would be the best way to proceed? Is there another way to nudge a neural network so that its output is continuous? Or maybe CNNs just aren't suited for this task? If you have any suggestions of other algorithms that would be efficient for this kind of task, I would be happy to know them.
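+For clarity, this is a minimal Keras sketch of the second option I describe above (the input size and layer sizes are placeholders, not from a real model):
+from tensorflow.keras import Sequential
+from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
+
+model = Sequential([
+    Conv2D(32, (3, 3), activation="relu", input_shape=(128, 128, 3)),
+    MaxPooling2D((2, 2)),
+    Conv2D(64, (3, 3), activation="relu"),
+    MaxPooling2D((2, 2)),
+    Flatten(),
+    Dense(64, activation="relu"),
+    Dense(1, activation="sigmoid"),   # normalized "days left" in [0, 1]
+])
+model.compile(optimizer="adam", loss="mse")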
+"
+"['convolutional-neural-networks', 'weights']"," Title: What are acting as weights in a convolution neural network?Body: Looking at some old notes I took on CNN's and I wrote down that the weights in a CNN are acting like filters in a CNN but to be honest I don't really know what the weights are acting as in a CNN and was wondering if someone could explain that clearly to me.
+"
+"['game-ai', 'game-theory', 'games-of-chance', 'state-space-complexity']"," Title: What is the state-space complexity of Spades?Body: AI reached super-human level in many complex games, including imperfect information games such as six-player no-limit Texas hold’em poker. However, it still did not reached that level in Trick-taking card games such as Spades, Bridge, Skat and Whist. In a related question, I am asking Why Trick-Taking games are a challenge for AI.
+An important factor that makes those games a challenge for AI is their size, to be precise lets talk about the State-space complexity which is define as the number of legal game positions reachable from the initial position of the game [Victor Allis (section 6.2.4)].
+What is the size of Spades?
+"
+"['reinforcement-learning', 'papers', 'autoencoders', 'generalization', 'state-spaces']"," Title: How does replacing states with latent representations help RL agents?Body: I have seen many papers using autoencoders to replace images (states) with latent representations. Some of those methods have shown higher rewards using such techniques. However, I do not understand how this helps the RL agent learn better. Perhaps viewing latent representations allows the agent to generalize to novel states more quickly?
+Here are 2 papers I have read -
+
+"
+"['deep-learning', 'reference-request', 'deep-neural-networks', 'weights', 'weights-initialization']"," Title: Are there any new weight initialization techniques for DNN published after 2015?Body: Considering weights initialization in my personal projects, I always used some standard techniques such as:
+
+- Glorot (also known as Xavier) initialization (2010).
+- Mertens initialization (2010).
+- He initialization (2015).
+
+As it is a very active research field, are there some innovations in recent years that have increased the performance of DNNs?
+I am thinking specifically of architectures such as DNNs and CNNs with activation functions, such as ReLU, ELU, PReLU, Leaky ReLU, SELU, Swish, and Mish.
+"
+"['tensorflow', 'object-detection', 'weights-initialization', 'bounding-box']"," Title: Bounding Box Regression - An Adventure in FailureBody: I've solved many problems with neural networks, but rarely work with images. I have about 18 hours into creating a bounding box regression network and it continues to utterly fail. With some loss functions it will claim 80% accuracy during training and validation (with a truly massive loss on both) but testing the predictions reveals a bounding box that only moves one or two pixels in any given direction and seems to totally ignore the data. I've now implemented a form of IoU loss, but find that IoU is pinned at zero... which is obviously true based on the outputs after training. :). I'd like someone to look this over and give me some advice on how to proceed next.
+What I Have
+I am generating 40000 examples of 200x100x3 images with a single letter randomly placed in each. Simultaneously I am generating the ground truth bounding boxes for each training sample. I have thoroughly validated that this all works and the data is correct.
+What I Do To It
+I am then transforming the 200x100x3 images down to greyscale to produce a 200x100x1 image. The images are then normalized and the bounding boxes are scaled to fall between 0 and 1. In simplified form, this happens:
+x_train_normalized = (x_data - 127.5) / 127.5
+y_train_scaled = boxes[:TRAIN]/[WIDTH,HEIGHT,WIDTH,HEIGHT]
+
+I've been through this data carefully, even reconstituting images and bounding boxes from it. This is definitely working.
+Training
+To train, after trying mse and many others, all of which fail equally badly, I have implemented a simple custom IoU loss function. It actually returns -ln(IoU). I made this change based on a paper since the loss was (oddly?) pinned at zero over multiple epochs.
+(Loss function:)
+import tensorflow as tf
+import tensorflow.keras.backend as kb
+def iou_loss(y_actual,y_pred):
+ b1 = y_actual
+ b2 = y_pred
+ # tf.print(b1)
+ # tf.print(b2)
+ zero = tf.convert_to_tensor(0.0, b1.dtype)
+ b1_ymin, b1_xmin, b1_ymax, b1_xmax = tf.unstack(b1, 4, axis=-1)
+ b2_ymin, b2_xmin, b2_ymax, b2_xmax = tf.unstack(b2, 4, axis=-1)
+ b1_width = tf.maximum(zero, b1_xmax - b1_xmin)
+ b1_height = tf.maximum(zero, b1_ymax - b1_ymin)
+ b2_width = tf.maximum(zero, b2_xmax - b2_xmin)
+ b2_height = tf.maximum(zero, b2_ymax - b2_ymin)
+ b1_area = b1_width * b1_height
+ b2_area = b2_width * b2_height
+
+ intersect_ymin = tf.maximum(b1_ymin, b2_ymin)
+ intersect_xmin = tf.maximum(b1_xmin, b2_xmin)
+ intersect_ymax = tf.minimum(b1_ymax, b2_ymax)
+ intersect_xmax = tf.minimum(b1_xmax, b2_xmax)
+ intersect_width = tf.maximum(zero, intersect_xmax - intersect_xmin)
+ intersect_height = tf.maximum(zero, intersect_ymax - intersect_ymin)
+ intersect_area = intersect_width * intersect_height
+
+ union_area = b1_area + b2_area - intersect_area
+ iou = -1 * tf.math.log(tf.math.divide_no_nan(intersect_area, union_area))
+ return iou
+
+The Network
+This has been through many, many iterations. As I said, I've solved many other problems with NNs... This is the first one to get me completely stuck. At this point, the network is dramatically stripped down but continues to fail to train at all:
+import tensorflow as tf
+from tensorflow import keras
+from tensorflow.keras import layers, optimizers
+
+tf.keras.backend.set_floatx('float32') # Use Float32s for everything
+
+input_shape = x_train_normalized.shape[-3:]
+model = keras.Sequential()
+model.add(layers.Conv2D(4, 16, activation = tf.keras.layers.LeakyReLU(alpha=0.2), input_shape=input_shape))
+model.add(layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2)))
+model.add(layers.Dropout(0.2))
+model.add(layers.Flatten())
+model.add(layers.Dense(200, activation = tf.keras.layers.LeakyReLU(alpha=0.2)))
+model.add(layers.Dense(64, activation=tf.keras.layers.LeakyReLU(alpha=0.2)))
+model.add(layers.Dense(4, activation="sigmoid"))
+
+model.compile(loss = iou_loss, optimizer = "adadelta", metrics=['accuracy'])
+history = model.fit(x_train_normalized, y_train_scaled, epochs=8, batch_size=100, validation_split=0.4)
+
+All pointers are welcome! In the meantime I'm implementing a center point loss function to see if that helps at all.
+"
+"['neural-networks', 'proofs', 'statistics', 'ensemble-learning']"," Title: When do two identical neural networks have uncorrelated errors?Body: In Chapter 9, section 9.1.6, Raul Rojas describes how committees of networks can reduce the prediction error by training N identical neural networks and averaging the results.
+If $f_i$ are the functions approximated by the $N$ neural nets, then:
+$$
+Q=\left|\frac{1}{N}(1,1, \ldots, 1) \mathbf{E}\right|^{2}=\frac{1}{N^{2}}(1,1, \ldots, 1) \mathbf{E} \mathbf{E}^{\mathrm{T}}(1,1, \ldots, 1)^{\mathrm{T}}\tag{9.4}\label{9.4}
+$$
+is the quadratic error of the average of the networks, where
+$$
+\mathbf{E}=\left(\begin{array}{cccc}
+e_{1}^{1} & e_{2}^{1} & \cdots & e_{m}^{1} \\
+\vdots & \vdots & \ddots & \vdots \\
+e_{1}^{N} & e_{2}^{N} & \cdots & e_{m}^{N}
+\end{array}\right),
+$$
+and $\mathbf{E}$'s rows are the errors of the approximations of the $N$ functions, i.e. $e^{i}_{j} = f_i(\mathbf{x}^{j}) - t_j$, for each of the input-output pairs $\left(\mathbf{x}^{1}, t_{1}\right), \ldots,\left(\mathbf{x}^{m}, t_{m}\right)$ used in training.
+Is there a way to assure that the errors for a neural network are uncorrelated to the errors of the others?
+Raul Rojas says that the uncorrelation of residual errors is true for a not too large $N$ (i.e. $N < 4$). Why is that?
+"
+"['machine-learning', 'reinforcement-learning', 'multi-armed-bandits', 'upper-confidence-bound']"," Title: In UCB, is the actual upper bound an upper bound of an one-sided or two-sided confidence interval?Body: I'm a bit confused about the visualization of the upper bound (following the notation of (c.f. Sutton & Barto (2018))
+$$Q_t(a)+C\sqrt{\frac{\mathrm{ln}(t)}{N_t(a)}}$$
+In many blog posts about the UCB(1) algorithm, it is visualized as in the following image (cf. link):
+
+Isn't the upper (confidence) bound simply the upper bound of a one-sided confidence interval instead of a two-sided confidence interval as shown in the image above?
+A lower bound of the interval is completely useless in this case, or am I wrong?
+"
+"['convolutional-neural-networks', 'batch-normalization']"," Title: Understanding Batch Normalization for CNNsBody: I am trying to understand how batch normalization (BN) works in CNNs. Suppose I have a feature map tensor $T$ of shape $(N, C, H, W)$
+where $N$ is the mini-batch size,
+$C$ is the number of channels, and
+$H,W$ is the spatial dimension of the tensor.
+Then it seems there could be a few ways of going about this:
+Method 1: $T_{n,c,x,y} := \gamma*\frac {T_{n,c,x,y} - \mu_{x,y}} {\sqrt{\sigma^2_{x,y} + \epsilon}} + \beta$ where $\mu_{x,y} = \frac{1}{NC}\sum_{n, c} T_{n,c,x,y}$ is the mean over all channels $c$ and batch elements $n$ at spatial location $x,y$ of the minibatch, and
+$\sigma^2_{x,y} = \frac{1}{NC} \sum_{n, c} (T_{n, c,x,y}-\mu_{x,y})^2$ is the variance of the minibatch over all channels $c$ at spatial location $x,y$.
+Method 2: $T_{n,c,x,y} := \gamma*\frac {T_{n,c,x,y} - \mu_{c,x,y}} {\sqrt{\sigma^2_{c,x,y} + \epsilon}} + \beta$ where $\mu_{c,x,y} = \frac{1}{N}\sum_{n} T_{n,c,x,y}$ is the mean for a specific channel $c$ over the batch elements $n$ at spatial location $x,y$ of the minibatch, and
+$\sigma^2_{c,x,y} = \frac{1}{N} \sum_{n} (T_{n, c,x,y}-\mu_{c,x,y})^2$ is the variance of the minibatch for a channel $c$ at spatial location $x,y$.
+Method 3: For each channel $c$ we compute the mean/variance over the entire spatial values for $x,y$ and apply the formula as
+$T_{n, c,x,y} := \gamma*\frac {T_{n, c,x,y} - \mu_{c}} {\sqrt{\sigma^2_{c} + \epsilon}} + \beta$, where now $\mu_c = \frac{1}{NHW} \sum_{n,x,y} T_{n,c,x,y}$ and $\sigma^2_{c} = \frac{1}{NHW} \sum_{n,x,y} (T_{n,c,x,y}-\mu_c)^2$
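+To make sure Method 3 is unambiguous, here is a small NumPy sketch of that computation (the shapes and names here are just my own illustration):
+import numpy as np
+
+N, C, H, W = 4, 3, 8, 8
+T = np.random.randn(N, C, H, W)
+gamma, beta, eps = 1.0, 0.0, 1e-5
+
+mu = T.mean(axis=(0, 2, 3), keepdims=True)   # per-channel mean, shape (1, C, 1, 1)
+var = T.var(axis=(0, 2, 3), keepdims=True)   # per-channel variance, shape (1, C, 1, 1)
+T_hat = gamma * (T - mu) / np.sqrt(var + eps) + beta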
+In practice, which of these methods (if any) is the one that is actually used and correct?
+The original paper on batch normalization, https://arxiv.org/pdf/1502.03167.pdf, states on page 5, section 3.2, last paragraph, left side of the page:
+
+For convolutional layers, we additionally want the normalization to
+obey the convolutional property – so that different elements of the
+same feature map, at different locations, are normalized in the same
+way. To achieve this, we jointly normalize all the activations in a
+minibatch, over all locations. In Alg. 1, we let $\mathcal{B}$ be the set of all
+values in a feature map across both the elements of a mini-batch and
+spatial locations – so for a mini-batch of size $m$ and feature maps of
+size $p \times q$, we use the effective mini-batch of size $m^\prime = \vert \mathcal{B} \vert = m \cdot pq$. We learn a pair of parameters $\gamma^{(k)}$ and $\beta^{(k)}$ per feature map,
+rather than per activation. Alg. 2 is modified similarly, so that
+during inference the BN transform applies the same linear
+transformation to each activation in a given feature map.
+
+I'm not sure what the authors mean by "per feature map", does this mean per channel?
+"
+"['reinforcement-learning', 'actor-critic-methods', 'reinforce', 'eligibility-traces']"," Title: How to implement REINFORCE with eligibility traces?Body: The pseudocode below is taken from Barto and Sutton's "Reinforcement Learning: an introduction". It shows an actor-critic implementation with eligibility traces. My question is: if I set $\lambda^{\theta}=1$ and replace $\delta$ with the immediate reward $R_t$, do I get a backwards implementation of REINFORCE?
+
+"
+"['reinforcement-learning', 'python', 'implementation', 'function-approximation', 'tile-coding']"," Title: Can someone explain to me this implementation of Tile Coding using Hash Tables?Body: The code below is adapted from this implementation.
+from math import floor
+
+basehash = hash
+
+class IHT:
+ "Structure to handle collisions"
+ def __init__(self, sizeval):
+ self.size = sizeval
+ self.overfullCount = 0
+ self.dictionary = {}
+
+ def count (self):
+ return len(self.dictionary)
+
+ def getindex (self, obj, readonly=False):
+ d = self.dictionary
+ if obj in d: return d[obj]
+ elif readonly: return None
+ size = self.size
+ count = self.count()
+ if count >= size:
+ if self.overfullCount==0: print('IHT full, starting to allow collisions')
+ self.overfullCount += 1
+ return basehash(obj) % self.size
+ else:
+ d[obj] = count
+ return count
+
+def hashcoords(coordinates, m, readonly=False):
+ if type(m)==IHT: return m.getindex(tuple(coordinates), readonly)
+ if type(m)==int: return basehash(tuple(coordinates)) % m
+ if m==None: return coordinates
+
+
+def tiles(ihtORsize, numtilings, floats, ints=[], readonly=False):
+ """returns num-tilings tile indices corresponding to the floats and ints"""
+ qfloats = [floor(f*numtilings) for f in floats]
+ Tiles = []
+ for tiling in range(numtilings):
+ tilingX2 = tiling*2
+ coords = [tiling]
+ b = tiling
+ for q in qfloats:
+ coords.append( (q + b) // numtilings )
+ b += tilingX2
+ coords.extend(ints)
+ Tiles.append(hashcoords(coords, ihtORsize, readonly))
+ return Tiles
+
+if __name__ == '__main__':
+ tc=IHT(4096)
+ tiles = tiles(tc, 8, [0, 0.5], ints=[], readonly=False)
+ print(tiles)
+
+I'm trying to figure out how the function tiles() works. It implements tile coding, which is explained in "Reinforcement Learning: An Introduction" (2020) by Sutton and Barto on page 217.
+So far I've figured out:
+
+- qfloats rescales the floating numbers to be the largest integer less than or equal to the original floating number; these are then re-scaled by the number of tilings.
+
+- Then, for each tiling, a list is created: the first element of the list is the tiling number, followed by the coordinates of the floats for that tiling, i.e. each of the re-scaled numbers is offset by the tiling number, b, and integer divided by numtilings.
+
+- Finally, hashcoords first checks the dictionary d to see if this list has appeared before; if it has, it returns the number relating to that list. If not, it either creates a new entry with that list as the key and the count as the value or, if the count is greater than or equal to the size, it adds one to overfullCount and returns basehash(obj) % self.size.
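+To check my reading, this is a stripped-down trace of the coordinate computation for the example call, with the hashing step left out (names follow the code above):
+from math import floor
+
+numtilings = 8
+floats = [0, 0.5]
+qfloats = [floor(f * numtilings) for f in floats]   # -> [0, 4]
+
+for tiling in range(numtilings):
+    tilingX2 = tiling * 2
+    coords = [tiling]
+    b = tiling
+    for q in qfloats:
+        coords.append((q + b) // numtilings)
+        b += tilingX2
+    print(tiling, coords)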
+
+
+I'm struggling to understand two parts:
+
+- What is the tilingX2 part doing?
+
+- Why is tilingX2 added to b after the first coordinate has been calculated? It seems to me that each coordinate should be treated separately.
+
+- And why by a factor of 2?
+
+- What is the expression basehash(obj) % self.size doing? I'm quite new to the concept of hashing. I know that, generally, they create a unique number for a given input (up to a limit), but I'm really struggling to understand what is going on in the line above.
+
+
+"
+"['objective-functions', 'continuous-action-spaces', 'soft-actor-critic']"," Title: How to make SAC (Soft-Actor-Critic) learn a policy?Body: I cannot make SAC learn a task in a certain environment. The point is that it actually sometimes finds a very good policy, but it never learns the policy in the end. I am using the SAC implementation from stable-baselines3, which is correct as far as I have seen.
+I have an environment driven by complex dynamics. I have to control a variable to be in a certain range. Every time the variable goes out of minimum or maximum range the environment is done. The action is continuous (between 0 and 30). My goal is to keep the variable (1D) in the range for as long as possible (millions of steps per episode would be ideal). There are certain characteristics of the environment that may make it particular:
+
+- The action can only drive the variable to lower values. The variable can go up as a result of the environment dynamics (not controlled) and as a consequence of certain events (not controlled) that occur at random intervals.
+- The observation is a noisy sample of the variable. The observation is just a real number.
+- The effect of actions in the variable is usually delayed. That is, applying an action does not immediately lower the value of the variable.
+
+I have tried SAC with many different hyperparameters. It sometimes finds very good policies, policies that last for thousands and even millions of steps in evaluation or rollout. But it never learns such policies. Even when saving the policy in those cases, it is not able to reproduce a long episode later. In the attached image, it can be seen that during training (in some evaluations) the policy is able to run for thousands of steps. But then it never learns that. I only show 500K steps here, but I have run tests for 1.5 million training timesteps.
+
+So, my question is (I have several ones actually):
+
+- Is SAC not suitable for this problem? I have also run TD3 and PPO but without better results and SAC is the only one actually able to find those policies that make very long episodes. Any other algorithm?
+- I have tried several reward functions, and, in the end, a simple one that gives 1 for every step and 0 when done is the one that seems to give better results. In the image, the reward is one for every step and -100 when done.
+- Since the values of the variable are time correlated due to the dynamics, I have also tried with RNN actors (with TF Agents), but results do not improve.
+- I cannot see any relationship between the actor loss and critic loss and the results (maybe that is my problem). The loss seem to be larger when the episodes are longer (which is what I want).
+
+Any advice is highly appreciated. Thanks
+"
+"['neural-networks', 'training', 'optimization', 'inverse-rl']"," Title: How to make input variable as trainable parameter in a neural network?Body: I am working on an optimization problem. First, I have done forward training to work the network as a surrogate model, then I freeze the output and I want to find an optimal value of input for a given output.
+"
+"['natural-language-processing', 'datasets', 'generative-adversarial-networks', 'image-generation', 'natural-language-understanding']"," Title: What dataset might Elon Musk's Dall-E have used?Body: Dall-E, it can generate many imaginative images from the description, even some peculiar images, how did they actually create this kind of dataset to train this AI , because there is not much of that kind of data which include weird images and descriptive text, how did they create this massive dataset. Does anyone have any idea?
+If you have no idea what I am talking about, please refer to this link: https://openai.com/blog/dall-e/.
+"
+"['reinforcement-learning', 'training', 'deep-rl', 'incremental-learning', 'travelling-salesman-problem']"," Title: How to train a policy model incrementally to solve a problem similar to the vehicle routing problem?Body: I have a problem similar to the vehicle routing problem (VRP) that I want to solve with reinforcement learning. In this problem, the agent starts from the point $(x_0, y_0)$, then it needs to travel through $N$ other points, $(x_1, y_1), \dots, (x_n, y_n)$. The goal is to minimize the distance traveled.
+Right now, I am modeling a state as a point $(x, y)$. There are 8 possible actions: go east, go north-east, go north, go north-west, go-west, go south-west, go south, go south-east. Each action goes by a pace of 100 metres.
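+For concreteness, this is roughly how I encode the actions and a single move (just a sketch; the names are mine):
+import math
+
+PACE = 100.0  # metres per step
+ACTIONS = [(math.cos(k * math.pi / 4) * PACE,
+            math.sin(k * math.pi / 4) * PACE) for k in range(8)]  # E, NE, N, ..., SE
+
+def step(state, action_index):
+    x, y = state
+    dx, dy = ACTIONS[action_index]
+    return (x + dx, y + dy)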
+After reaching near a destination point, that destination point is removed from the list of destination points.
+The reward is the reciprocal of the total distance travelled until all destination points are reached (there's a short optimisation to arrange the remaining points for a better reward).
+I'm using a DNN model to hold the policy of the reinforcement learning agent, so this DNN maps a certain state to a suitable action.
+However, after every action of the agent with a good reward, one more sample is added to the training data, so it's a kind of incremental learning.
+Should the policy model be trained again and again with every new sample added in? This does take too much time.
+Any better RL approach to the problem above?
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'experience-replay', 'importance-sampling']"," Title: Where does this variation of the importance sampling weight come from?Body: I have seeing a variation in importance sampling (IS) in Prioritized Experience Replay (PER) in some implementations regarding the original paper approach stated as (in section 3.4):
+$$
+w_{i}=\left(\frac{1}{N} \cdot \frac{1}{P(i)}\right)^{\beta}
+$$
+For something like this:
+$$
+w_{i}=\left(\frac{\min (P(i))}{P(i)}\right)^{\beta}
+$$
+Does anyone know where it comes from? A reference that explains the reason for that new formula and improvements obtained?
+My intuition guides me to some conclusions, not necessarily correct, using this new formula:
+
+- In the beginning, supposing that the PER buffer still has empty positions, $\min(P(i)) \sim 0$, so the samples are not given too much weight. But the weight grows substantially once the capacity is reached, as well as when the error becomes low (plus the incrementing Beta).
+
+A code on github that applies this: link
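+For reference, this is how I understand the two variants numerically (a NumPy sketch with toy priorities):
+import numpy as np
+
+priorities = np.array([0.1, 0.5, 1.0, 2.0])
+P = priorities / priorities.sum()
+N, beta = len(P), 0.4
+
+w_paper = (1.0 / (N * P)) ** beta       # original formulation
+w_impl = (P.min() / P) ** beta          # variant seen in implementations
+print(w_impl, w_paper / w_paper.max())  # identical: the variant is the paper's weights normalized by their maximum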
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'generalization', 'discrete-state-spaces']"," Title: Does DQN generalise to unseen states in the case of discrete state-spaces?Body: In my understanding, DQN is useful because it utilises a neural network as a q-value function approximator, which, after the training, can generalise to unseen states.
+I understand how that would work when the input is a vector of continuous values, however, I don't understand why DQN would be used with discrete state-spaces. If the input to the neural network is just an integer with no clear structure, how is this supposed to generalise?
+If, instead of feeding to the network just an integer, we fed a vector of integers, in which each element represents a characteristic of the state (separating things like speed, position, etc.) instead of collapsing everything in a single integer, would that generalise better?
+"
+['long-short-term-memory']," Title: Is it possible and if so does it make sense to have dense layers in between LSTM layers?Body: I am new to LSTMs and I was wondering if it is possible to have an LSTM layer, then a dense layer, then an LSTM layer again, and whether that makes sense?
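+A minimal Keras sketch of the stacking I have in mind (the shapes and sizes are made up): the first LSTM must return sequences so that the layers after it still see time steps.
+from tensorflow.keras import Sequential
+from tensorflow.keras.layers import LSTM, Dense, TimeDistributed
+
+model = Sequential([
+    LSTM(32, return_sequences=True, input_shape=(20, 8)),
+    TimeDistributed(Dense(16, activation="relu")),   # dense layer between the LSTMs
+    LSTM(16),
+    Dense(1),
+])
+model.summary()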
+"
+"['natural-language-processing', 'reference-request', 'sentiment-analysis']"," Title: What is the best approach for sentiment analysis when the text is very brief?Body: I'm working on a project to do sentiment analysis but my data is not long and properly formatted text. It's more likely to be very short sentences, e.g. tweets (in full tweet lingo), quick reviews of maybe 2-5 short sentences, etc.
+If my text is of that nature, what approach would you recommend? E.g. CNNs (spaCy has a ready-made text classifier), LSTM (e.g. something like Keras), etc.
+What are the pros/cons of your suggested approach (i.e. why is it better suited for classifying short paragraphs/sentences)?
+I'm starting out in the area so any links/papers/etc. will be most welcome!
+"
+"['neural-networks', 'natural-language-processing', 'natural-language-understanding', 'expert-systems', 'natural-language-generation']"," Title: Is there any neural network model that can perform multiple NLP steps at once?Body: I realize most NLP algorithms have multiple steps. (e.g. OCR/speech rec > syntax > semantics > response logic > semantic output > natural language output)
+Is there any NN model that can perform multiple steps in NLP at once? For example, a single network which accepts audio input and returns a semantic analysis of the given speech, or a single network which accepts text input and returns natural language output?
+"
+"['deep-learning', 'objective-functions', 'activation-functions', 'image-segmentation', 'dice-loss']"," Title: What does Dice Loss should receive in case of binary segmentationBody: I implemented Dice loss class in pytorch
:
+import torch
+import torch.nn as nn
+
+
+class DiceLoss(nn.Module):
+ def __init__(self, weight=None, size_average=True):
+ super(DiceLoss, self).__init__()
+
+ def forward(self, inputs, targets, smooth=1):
+ smooth = 1.
+
+ input_flat = inputs.contiguous().view(-1)
+ target_flat = targets.contiguous().view(-1)
+
+ intersection = (input_flat * target_flat).sum()
+ A_sum = torch.sum(input_flat * input_flat)
+ B_sum = torch.sum(target_flat * target_flat)
+
+ dsc = (2. * intersection + smooth) / (A_sum + B_sum + smooth)
+ return 1 - dsc
+
+Now I tested it in 2 scenarios:
+
+- where inputs is the prediction from the network without applying activation (in my case sigmoid), only convolution with a kernel of size 1.
+- where inputs are the result of the network including activation of the sigmoid.
+
+Now I get comparable results between the 2 ways, but I was wondering what is the "Right" way out of the 2.
+"
+"['chess', 'alpha-beta-pruning']"," Title: How to implement very simple move-ordering for alpha-beta pruningBody: I've done implementing alpha-beta, and transpositional table on my search tree algorithm so I decided to implement move-ordering next. But once I implemented it, it's way more longer to respond than before?
+Here's my code so far:
+function sortMoves(chess)
+{
+ const listA = [], listB = chess.moves({ verbose: true });
+ const scores = [];
+
+ // calc best moves
+ const moves = [...listB];
+ for (const move of moves)
+ {
+ const state = chess.move(move, {
+ promotion: 'q'
+ });
+ scores.push(evaluate(chess.board()));
+ chess.undo();
+ }
+
+ // sort move
+ for (var i = 0; i < Math.min(5, moves.length); i++)
+ {
+ let maxEval = -Infinity;
+ let maxIndex = 0;
+
+ for (var j = 0; j < scores.length; j++)
+ {
+ if (scores[j] > maxEval)
+ {
+ maxEval = scores[j];
+ maxIndex = j;
+ }
+ }
+
+ scores[maxIndex] = -Infinity;
+ listA.push(moves[maxIndex]);
+ listB.splice(maxIndex, 1);
+ }
+
+ const newList = listA.concat(listB);
+ return newList;
+}
+
+I expected this to respond more quickly than before, but it turns out it doesn't. So my question is: am I actually sorting the moves correctly, or do I need to change something in the alpha-beta pruning code to make use of the sorted moves?
+Here's my negamax function:
+function negamax(chess, depth, depthOrig, alpha, beta, color)
+{
+ // transposition table look up
+ const alphaOrig = alpha;
+ const hashKey = zobrist.hash(chess);
+ const lookup = transposition.get(hashKey);
+ if (lookup)
+ {
+ if (lookup.depth >= depth)
+ {
+ if (lookup.flag === EXACT)
+ return lookup.score;
+ else if (lookup.flag === LOWERBOUND)
+ alpha = Math.max(alpha, lookup.score);
+ else if (lookup.flag === UPPERBOUND)
+ beta = Math.min(beta, lookup.score);
+
+ if (alpha >= beta)
+ return lookup.score;
+ }
+ }
+
+
+ if (depth === 0 || chess.game_over())
+ {
+ // if current turn is checkmated,
+ // remove the king on the board
+ // so the AI knows if the move
+ // will lead to checkmate or not, if
+ // it's remove on the board,
+ // the checkmated team will
+ // reduce the king's value leading
+ // the AI to move in checkmate
+ const kingPos = getPiecePos(chess, 'k', chess.turn());
+ if (chess.in_checkmate())
+ chess.remove(kingPos);
+
+ const evaluation = evaluate(chess.board());
+ chess.put({ type: chess.KING, color: chess.turn() }, kingPos);
+
+ return color * evaluation;
+ }
+
+
+ /* let moves = chess.moves();
+ if (lookup)
+ {
+ console.log(moves, depth)
+ const bestMove = lookup.move;
+ const moveIndex = moves.indexOf(bestMove);
+ const arr = moves.splice(moveIndex, 1);
+ moves = arr.concat(moves);
+ console.log(moves, depth)
+ } */
+ const moves = sortMoves(chess);
+ /* const moves = chess.moves({ verbose: true }); */
+
+
+ let count = 0;
+ let score = -Infinity;
+ let bestMove = null;
+ if (lookup)
+ bestMove = lookup.move;
+
+
+ for (const move of moves)
+ {
+ const state = chess.move(move, {
+ promotion: 'q'
+ });
+ searchedMoves++;
+ if (count === 0)
+ score = -negamax(chess, depth-1, depthOrig, -beta, -alpha, -color);
+ else
+ {
+ score = -negamax(chess, depth-1, depthOrig, -alpha-1, -alpha, -color);
+ if (alpha < score && score < beta)
+ score = -negamax(chess, depth-1, depthOrig, -beta, -score, -color);
+ }
+ chess.undo();
+
+ if (score > alpha)
+ {
+ alpha = score;
+ bestMove = move;
+ }
+
+ // do I add something on this part?
+ count++;
+ if (alpha >= beta)
+ break;
+ }
+
+
+ // transposition table store
+ const key = zobrist.hash(chess);
+ keyArr.push(key);
+ const entry = new Transposition();
+ entry.score = score;
+ entry.depth = depth;
+ entry.move = bestMove;
+ if (score <= alphaOrig)
+ entry.flag = UPPERBOUND;
+ else if (score >= beta)
+ entry.flag = LOWERBOUND;
+ else
+ entry.flag = EXACT;
+ transposition.set(key, entry);
+
+
+ return alpha;
+}
+
+"
+"['reinforcement-learning', 'comparison', 'q-learning', 'dqn', 'deep-rl']"," Title: What is the difference between Q-learning, Deep Q-learning and Deep Q-network?Body: Q-learning uses a table to store all state-action pairs. Q-learning is a model-free RL algorithm, so how could there be the one called Deep Q-learning, as deep means using DNN; or maybe the state-action table (Q-table) is still there but the DNN is only for input reception (e.g. turning images into vectors)?
+Deep Q-network seems to be only the DNN part of the Deep Q-learning program, and Q-network seems to be short for Deep Q-network.
+Q-learning, Deep Q-learning, and Deep Q-network: what are the differences? Could someone provide a comparison table of these 3 terms?
+"
+"['neural-networks', 'machine-learning', 'reference-request', 'applications']"," Title: Is non-negative matrix factorization for machine learning obsolete?Body: I am taking a course about using matrix factorization for machine learning.
+The first thing that came to my mind is that, by using matrix factorization, we are always limited to linear relationships between the data, which is very limiting when predicting complex patterns.
+Compare this with neural networks, where we can use non-linear activation functions. It seems to me that all the tasks that matrix factorization can achieve would score better using a simple multilayer neural network.
+So, can I conclude that NMF, and matrix factorization for machine learning in general, are not that practical, or are there cases where it's better to use NMF?
+"
+"['heuristics', 'constraint-satisfaction-problems', 'local-search']"," Title: CSP heuristic to simultaneously reduce conflicts and find near optimal assignmentBody: I am trying to design a good heuristic to solve a constraint satisfaction problem (CSP). I think that a possible heuristic to use is
+$$h_1(\text{state}) = \text{number of conflicts in state}$$
+However, of the possible solutions to the CSP, some have a lower cost (are better). So I can't just use $h_1$ as my heuristic.
+Since the state space is pretty huge, I want to use the local search with a heuristic $h$, that guides my variable assignments towards a low-cost solution while reducing the conflicts. The way I am thinking about going about this is: of the variable assignments which do not cause conflicts (are valid), apply $h$ to them, pick the variable assignment which has the lowest/best value $h$ value. So $h$ would not handle conflicts, I would make sure any assignments considered by $h$ are guaranteed to be valid.
+Ideally, though, I want $h$ to both drive down the conflicts to 0 and simultaneously guide the assignments to the lowest cost solution. Is this generally possible?
+"
+"['neural-networks', 'recurrent-neural-networks', 'papers', 'gradient-descent', 'optimization']"," Title: Is it possible to ensure the convergence when training a RNN weight on its SVD decomposition?Body: I'm reading the following paper in which the author seems to do 2 things interesting:
+
+- The hidden-to-hidden weight matrix of the RNN is SVD-decomposed and trained separately.
+- Each orthogonal part of the decomposition is optimized multiplicatively according to Cayley Transformation to maintain its orthogonal properties.
+
+Now, I'm not so strong with the math behind the technique, but I could be hand-waving and say that albeit being multiplicative, it is just another method of gradient descent, and each orthogonal part is still minimizing the Loss function. So far so good.
+But what they are doing is actually splitting the original optimization problem into multiple sub-optimizations (2 for the orthogonal matrices and n for the number of singular values), and then multiplying the results together. How can we be sure about the convergence and the optimality of such a method? Or is this a case where we can say nothing and let the experiments speak for themselves?
+"
+"['neural-networks', 'classification', 'data-labelling']"," Title: For binary classification learning problems, how should I label instances where I'm only 60% sure?Body: I've come across a few binary classification problems lately where the labelling was challenging even for an expert. I'm wondering what I should do with this. Here are some of my suggestions to get the ball rolling:
+
+- Make a third category called "unsure" then make it a three-class classification problem instead.
+- Make a third category called "unsure" and just remove these from your training set.
+- Make a third category called "unsure" and during training model this as a 0.5 such that the binary cross entropy loss looks like $-0.5\log(\hat{y})-0.5\log(1-\hat{y})$
+- Allow the labeller to pick a percentage on a sliding scale (or maybe multiple choice: 0%, 25%, 50%, 75%, 100%), and take that into account when calculating cross entropy (as in my point above; a small sketch of what I mean is just below this list).
+
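+For options 3 and 4, this is a small sketch of the soft-label cross entropy I mean (PyTorch, with made-up numbers): a target of 0.5 gives exactly $-0.5\log(\hat{y})-0.5\log(1-\hat{y})$ per example.
+import torch
+import torch.nn.functional as F
+
+logits = torch.randn(8)                                       # hypothetical model outputs
+targets = torch.tensor([1., 0., 0.5, 1., 0.5, 0., 1., 0.5])   # "unsure" labelled as 0.5
+loss = F.binary_cross_entropy_with_logits(logits, targets)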
+I recently saw a paper which goes for option 2, although that's not enough to convince me. Here's the relevant quote:
+
+In case of a high-level risk, collision is imminent and the driver must react in less than 0.5 s (TTC < 0.5 s). For low-level risk, the TTC is more than 2.0 s (TTC > 2.0 s). Videos that show intermediate-level risk (0.5 s ≤ TTC ≤ 2.0 s), which is a mixture of high- and low-level risks, were not included in the NIDB because when training a convnet, it must be possible to make a clear visual distinction of risk.
+
+"
+"['genetic-algorithms', 'evolutionary-algorithms', 'genetic-programming', 'norvig-russell']"," Title: Are Genetic Algorithms suitable for problems like the Knuth problem?Body: We all know that Genetic Algorithms can give an optimal or near-optimal solution. So, in some problems like NP-hard ones, with a trade-off between time and optimal solution the near-optimal solution is good enough.
+Since there is no guarantee to find the optimal solution, is GA considered to be a good choice for solving the Knuth problem?
+According to Artificial intelligence: A modern approach (third edition), section 3.2 (p. 73):
+
+Knuth conjectured that, starting with the number 4, a sequence of
+factorial, square root, and floor operations will reach any desired
+positive integer.
+
+For example, 5 can be reached from 4:
+
+floor(sqrt(sqrt(sqrt(sqrt(sqrt((4!)!))))))
+
+So, if we have a number (5) and we want to know the sequence of the 3 mentioned operations that reaches the given number, each gene of the chromosome will be a number that represents a certain operation, with an additional number for "no operation", and the fitness function will be the absolute difference between the given number and the number we get from applying the chromosome's operations in order (to be minimized). Now suppose the allowed number of iterations (generations) is used up with no optimal solution found, and the nearest number we have is 4 (with fitness 1). The problem is that we can get 4 by applying no operation to 4, while for 5 we need many operations, so the near-optimal solution is not even near to the actual solution.
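+For clarity, here is a small sketch of the encoding and fitness I am describing (the chromosome shown is just an example):
+import math
+
+OPS = {0: lambda x: x,                       # no operation
+       1: lambda x: math.factorial(int(x)),  # factorial
+       2: math.sqrt,                         # square root
+       3: math.floor}                        # floor
+
+def fitness(chromosome, target, start=4):
+    value = start
+    for gene in chromosome:
+        value = OPS[gene](value)
+    return abs(target - value)               # to be minimized
+
+print(fitness([1, 1, 2, 2, 2, 2, 2, 3], 5))  # (4!)!, five square roots, floor -> fitness 0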
+So, is GA not suitable for this kind of problem? Or are the suggested chromosome representation and fitness function not good enough?
+"
+"['testing', 'test-datasets']"," Title: How to build a test set for a model in industry?Body: Most of the tutorials only teach us to split the whole dataset into three parts: training set, develop set, and test set. But in the industry, we are kind of doing test-driven development, and what comes most important is the building of our test set. We are not given a large corpus first, but we are discussing how to build the test set first.
+The most resource-saving method is just sampling (simple random sampling) cases from the log and then having them labeled, which represents the population. Perhaps we are concerned that some groups are more important than others, in which case we do stratified sampling.
+Are there any better sampling methods?
+What to do when we are releasing a new feature and we cannot find any cases of that feature from the user log?
+"
+"['deep-learning', 'convolutional-neural-networks', 'convolution', 'notation', 'cross-correlation']"," Title: What do the variables in the cross-correlation formula mean?Body: I understand what cross-correlation does given a kernel and an input image, but the formula confuses me a little. Given here in Goodfellow's Deep Learning (page 329), I can't quite understand what $m$ and $n$ are. Are they the dimensions of the kernel along the height and width dimensions?
+$$S(i,j) =(K*I)(i,j) = \sum_m \sum_n I(i+m, j+n)K(m,n)$$
+So, for the input image $I$ and kernel $K$, we take the sum product of $I*K$, but what do the $m$ and $n$ represent? How is the input image $I$ indexed?
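+To show how I am reading the indices, here is a small NumPy sketch of the formula at a single output location (the image and kernel are just examples):
+import numpy as np
+
+def cross_correlate_at(I, K, i, j):
+    s = 0.0
+    for m in range(K.shape[0]):        # m indexes the kernel rows
+        for n in range(K.shape[1]):    # n indexes the kernel columns
+            s += I[i + m, j + n] * K[m, n]
+    return s
+
+I = np.arange(25, dtype=float).reshape(5, 5)
+K = np.ones((3, 3))
+print(cross_correlate_at(I, K, 0, 0))  # sum of the top-left 3x3 patch of I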
+"
+"['machine-learning', 'deep-learning', 'monte-carlo-tree-search', 'alphazero']"," Title: Alpha Zero does not converge for Connect 6, a game with huge branching factor - why?Body: I have a problem with applying alpha zero self-play to a game (Connect 6) with a huge branching factor (30,000 on average).
+I have implemented the MCTS as described but I found that during the MCTS simulations for the first move, because P(s, a) is so much smaller than Q(s, a), the MCTS search tree is extremely narrow (in fact, it only had 1 branch per level, and later I have added dirichlet noise and it changed the tree to have 2 child branches instead of 1, per level).
+With some debugging I figured that after the first simulation, the last visited child would have a Q value which is on average 0.5, and all the rest of the children would have 0 for their Q values (because they haven't been explored) while their U are way smaller, about 1/60000 on average. So comparing all the Q + U on the 2nd and subsequent simulations would result in selecting the same node over and over again, building this narrow tree.
+To make matters worse, because the first simulation built a very narrow tree, and we don't clear the statistics on the subsequent simulations for the next move, the end result is that the simulations for the first move dictated the whole self-play for the next X moves or so (where X is the number of simulations per move) - this is because the N values on this path are accumulated from the previous simulations of the prior moves. Imagine I run 800 simulations per move, but the N value on the only child inherited from previous simulations is > 800 when I start the simulation for the 3rd move in the game.
+This is basically related to question 2 raised in this thread:
+AlphaGo Zero: does $Q(s_t, a)$ dominate $U(s_t, a)$ in difficult game states?
+I don't think the answer addressed the problem I am having here. Surely we are not comparing Q and U, but when Q dominates U we end up building a narrow and deep tree, and the accumulated N values are set on this single path, preventing the self-play from exploring meaningful moves.
+At the end of the game these N values on the root nodes of moves played ended up training the policy network, reinforcing these even more on the next episode of training.
+What am I doing wrong here? Or is the AlphaZero algorithm not well suited for games with a branching factor of this order?
+Note: as an experiment I have tried to use a variable C PUCT value, which is proportional to the number of legal moves on each game state. However this doesn't seem to help either.
+"
+"['neural-networks', 'computer-vision', 'tensorflow', 'python', 'image-recognition']"," Title: How to recognize sequence of digits in an imageBody: I am learning to program neural networks and others, and I would like to know how I can get the numbers that are in an image, for example, if I pass an image that has 123 written, get with my model that there are 123 written, I have tried to use PyTesseract
is not very precise, and I would like to do it with a neural network, my current code is quite simple, it recognizes the digits of the mnist
dataset such that:
+import tensorflow as tf
+from tensorflow.keras import Sequential, optimizers
+from tensorflow.keras.utils import to_categorical
+from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2D
+import matplotlib.pyplot as plt
+
+mnist = tf.keras.datasets.mnist
+(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
+
+print('train_images.shape:', train_images.shape)
+print('test_images.shape:', test_images.shape)
+plt.imshow(train_images[0])
+
+train_images = train_images.reshape((60000, 28, 28, 1))
+test_images = test_images.reshape((10000, 28, 28, 1))
+
+train_images = train_images.astype('float32') / 255
+test_images = test_images.astype('float32') / 255
+
+train_labels = to_categorical(train_labels)
+test_labels = to_categorical(test_labels)
+
+
+model = Sequential()
+
+model.add(Conv2D(32, (5, 5), activation = 'relu', input_shape = (28, 28, 1)))
+model.add(MaxPooling2D((2, 2)))
+
+model.add(Conv2D(64, (5, 5), activation = 'relu'))
+model.add(MaxPooling2D((2, 2)))
+
+model.add(Flatten())
+
+model.add(Dense(10, activation = 'softmax'))
+
+model.summary()
+
+model.compile(loss = 'categorical_crossentropy', optimizer = 'sgd', metrics = ['accuracy'])
+
+model.fit(train_images, train_labels, batch_size = 100, epochs = 5, verbose = 1)
+
+test_loss, test_accuracy = model.evaluate(test_images, test_labels)
+
+print('Test accuracy:', test_accuracy)
+
+but I would need to know how I can pass it an image with a sequence of digits and have it recognize the digits in question. Does anyone know how I could do that?
+"
+"['deep-learning', 'tensorflow', 'deep-neural-networks', 'pytorch', 'graph-neural-networks']"," Title: How to deal with dynamically changing input tensor in neural networks without padding?Body: I have a dataset about the monitored health/growth of a community of people. The dataset has tensor shaped (batch_size, features, person, window)
, where:
+
+- person==10 means there are 10 people in the community
+- features==9 means that there are 9 features being monitored, for example, blood pressure, sugar level, ..etc
+- window==15 means the recorded value of each feature every day for 15 days (time dimension)
+
+Moreover, people can join/leave the community, so the person dimension would increase/decrease over time. For simplicity, the window dimension is fixed at 15, then a new person that joined has to be in the community for a minimum of 15 days to be included in the dataset as 1 data point/sample. Also, say the number of features is fixed at 9. Hence, for this problem, only the number of people at an instance may change over each input interaction.
+For example, assume batch_size==1 then the input dimension into the neural network would be something like:
+Iter 1: (1, 9, 7, 15)
+Iter 2: (1, 9, 7, 15)
+Iter 3: (1, 9, 7, 15)
+Iter 4: (1, 9, 8, 15) # 1 person joins the community
+Iter 5: (1, 9, 8, 15)
+Iter 6: (1, 9, 7, 15) # 1 person left the community
+Iter 7: (1, 9, 6, 15) # 1 person left the community
+Iter 8: (1, 9, 6, 15)
+Iter 9: (1, 9, 10, 15) # 4 person joins the community
+
+Is there a way to deal with this dynamically changing input tensor in neural networks without padding? As we won't know in advance how many people will join/leave the community (related to continual learning?) hence won't know the maximum pad.
+Also, how do I deal with this when batch_size is not 1?
+"
+"['monte-carlo-tree-search', 'upper-confidence-bound', 'tree-search']"," Title: What should the initial UCT value be with MCTS, when leaf's simulation count is zero? Infinity?Body: I am implenting a Monte Carlo Tree Search algorithm, where the selection process is done through Upper Confidence Bound formula:
+def uct(state):
+ log_n = math.log(state.parent.sim_count)
+ explore_term = self.exploration_weight * math.sqrt(log_n / state.sim_count)
+ exploit_term = (state.win_count / state.sim_count)
+
+ return exploit_term + explore_term
+
+I have trouble however choosing the initial value for UCT, when the sim_count of the node is 0. I tried with +inf (which would be appropriate as approaching lim -> 0 from the positive side would give infinity), but that just means the algorithm will be always choosing an unexplored child.
+What would you suggest as an initial value for the UCT?
+Thank you in advance!
+"
+"['neural-networks', 'reference-request', 'prediction']"," Title: Can cryptocurrency charts be estimated using neural networks?Body: If I were to make a neural network that predicts the value of e.g. Bitcoin tomorrow based on the chart of the last month, would that work? Of course, 100% accuracy cannot be reached, but a success rate over 50% on determining if I should buy or sell Bitcoin could be very profitable. Have there been any attempts to create such neural networks so far?
+"
+"['comparison', 'training', 'alphazero', 'chess', 'muzero']"," Title: Do AlphaZero/MuZero learn faster in terms of number of games played than humans?Body: I don't know much about AI and am just curious.
+From what I read, AlphaZero/MuZero outperform any human chess player after a few hours of training. I have no idea how many chess games a very talented human chess player on average has played before he/she reaches the grandmaster level, but I would imagine it is a number that can roughly be estimated. Of course, playing entire games is not the only training for human chess players.
+Nonetheless, how does this compare to AI? How many games do AI engines play before reaching the grandmaster level? Do (gifted) humans or AI learn chess faster?
+"
+"['neural-networks', 'machine-learning', 'reinforcement-learning', 'graph-neural-networks']"," Title: Reinforcement learning and Graph Neural Networks: Entropy drops to zeroBody: I am currently working on an experiment to link reinforcement learning with graph neural networks.
+This is my architecture:
+Feature Extraction with GCN:
+
+- there is a fully meshed topology with 23 nodes. Therefore there are 23*22=506 edges.
+- the original feature vector comprises 43 features that range from about -1 to 1.
+- First, a neural network f calculates a vector per edge, given the source node and target node features.
+- After we have calculated 506 edge vectors, a function u aggregates the results from f per node (aggregation over 22 edges).
+- A function g takes the original target feature vector and concatenates the aggregated results from u. Finally, the output dimension of g determines the new feature vector size for each node.
+- At last, the function agg decides which information is returned from the feature extraction, e.g. just flattening the 23 x g_output_dim feature vectors or building the average.
+
+After that:
+
+- The output of the feature extractor is passed to the OpenAI Baselines PPO2 implementation. The framework adds a flatten layer to the output and maps it to 19 action values and 1 value-function value.
+
+I have made some observations in the experiments that I cannot explain. The hyperparameters are: an output dimension for f and g of 512, U=sum, aggr=flatten. A tanh activation is applied to the outputs of f and g. For PPO2: lr=0.000343, stepsize=512.
+This gives me the following weight matrices:
+<tf.Variable 'ppo2_model/pi/f_w_0:0' shape=(86, 512) dtype=float32_ref>
+<tf.Variable 'ppo2_model/pi/g_w:0' shape=(555, 512) dtype=float32_ref>
+<tf.Variable 'ppo2_model/pi/w:0' shape=(11776, 19) dtype=float32_ref>
+<tf.Variable 'ppo2_model/vf/w:0' shape=(11776, 1) dtype=float32_ref>
+
+The following problem occurs. Normally you wait for the entropy in the PPO2 to decrease during the training, because the algorithm learns which actions lead to more reward.
+With the described hyperparameters, the entropy drops abruptly to 0 within 100 update steps and stays zero even after >15,000 updates (=150M steps in the game). This means that the same action is always selected.
+What I found out: the problem is that by taking the sum over 22 edges, very large values are created (maximum 22*1 and 22*-1). These values are then given to the function g and thus end up in the saturation region of the tanh. As a result, the new features of the 23 nodes contain many 1's and -1's. Because we flatten, the weighted sum of 11776 input neurons flows into each of the 19 action neurons, resulting in very large values in the policy. An action is then calculated from the policy with the following formula:
+u = tf.random_uniform(tf.shape(logits), dtype=logits.dtype)
+action = tf.argmax(logits - tf.log(-tf.log(u)), axis=-1),
+
+Most of the time tf.log(-tf.log(u)) gives something between 2 and -2 (in my opinion). This means that as soon as a very large value appears in the policy, the corresponding action is always selected, and not the second or third most probable one, which might lead to more exploration.
+What I don't understand 1): As soon as negative reward occurs, shouldn't the likelihood decrease again, so that in the end I choose other actions again?
+I did some experiments with relu and elu activations:
+These are the value histograms of the output of g when using relu, tanh and elu:
+
+These are the value histograms of the policy when using relu, tanh and elu:
+
+Histogram over resulting actions:
+What I don't understand 2): Using Relu you see that in the first steps there were large values in the policy, but then the model learns to reduce the range, which is why in this example the entropy also does not drop. Why does this not work when using tanh or elu?
+
+We have found 3 things with which the problem does not occur or is delayed. These are my assumptions. Are they correct in your opinion?
+
+- Using a smaller output dimension for f and g, like 6, or using aggr=mean -> for each of the 19 action neurons, fewer input neurons are averaged -> smaller values in the policy -> more exploration.
+- Using u=mean and not sum averages the outputs of f, therefore the aggregated values are not only 1 and -1.
+- A smaller learning rate -> making the weights too big increases the chance of the 19 action values being big. If there is no negative reward, there is no need for the algorithm to make the weights smaller.
+
+I know this is a lot of information, so I would be grateful for any small tip!
+"
+"['neural-networks', 'deep-learning', 'natural-language-processing', 'recurrent-neural-networks', 'long-short-term-memory']"," Title: Is the working of RNNs, LSTM and GRU sequential or parallel?Body: You take any blog or any example and all they tell you about is the given picture below.
+
+It has 4 different matrices, 3 of which have shared weights. So, I'm wondering how this is achieved in practice.
+Please correct me:
+I think the first word "hello" goes in as a one-hot encoded form and changes the hidden matrix. And then after it, "world" goes in, gets multiplied, and then changes the matrix again, and so on. What people make it look like is that all of the words go in in parallel. That can't be the case, because the hidden matrix depends on the previous word, and without changing the matrix, you cannot pass in the current word. Please correct me if my idea is wrong, but I think the execution is in sequential order.
+"
+"['deep-learning', 'natural-language-processing', 'transformer', 'text-classification', 'sentiment-analysis']"," Title: Is using a LSTM, CNN or any other neural network model on top of a Transformer(using hidden states) overkill?Body: I have recently come across transformers, I am new to Deep Learning. I have seen a paper using CNN and BiLSTM on top of a transformer, the paper uses a transformer(XLM-R) for sentiment analysis in code-mixed domain. But many of the blogs only use a normal feed formal network on top of the transformer.
+I am trying to use transformers for sentiment analysis, short text classification.
+Is it overkill to use models like CNN and BiLSTM on top of the transformer considering the size of the data it is trained on and its complexity?
+"
+"['recurrent-neural-networks', 'long-short-term-memory', 'overfitting', 'regularization', 'dropout']"," Title: How should we regularize an LSTM model?Body: There are five parameters from an LSTM layer for regularization if I am correct.
+To deal with overfitting, I would start with
+
+- reducing the layers
+- reducing the hidden units
+- Applying dropout or regularizers.
+
+There are kernel_regularizer, recurrent_regularizer, bias_regularizer, activity_regularizer, dropout and recurrent_dropout.
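+For reference, this is where those arguments go on a Keras LSTM layer (the values here are just placeholders):
+from tensorflow.keras.layers import LSTM
+from tensorflow.keras.regularizers import l2
+
+layer = LSTM(
+    64,
+    kernel_regularizer=l2(1e-4),      # penalty on the input weights
+    recurrent_regularizer=l2(1e-4),   # penalty on the recurrent weights
+    bias_regularizer=l2(1e-4),
+    activity_regularizer=l2(1e-5),    # penalty on the layer output
+    dropout=0.2,                      # dropout on the inputs
+    recurrent_dropout=0.2,            # dropout on the recurrent state
+)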
+They have their definitions on the Keras website, but can anyone share more experience on how to reduce overfitting?
+And how are these parameters used? For example, which parameters are most frequently used, and what kind of values should be passed in?
+"
+"['reinforcement-learning', 'q-learning', 'actor-critic-methods']"," Title: Relationship between Rewards and Q Value (Graph between Q(s, a) vs episodes)Body: I'm employing the Actor-Critic algorithm. The critic network approximates the action-value function, i.e. $Q(s, a)$, which determines how good a particular state is, when provided with an action.
+$Q(s, a)$ is approximated using the backpropagation of the temporal difference error (TD error). We can understand that $Q(s, a)$ has been approximated properly when TD error is minimized, i.e. when it is saturated at lower values.
+My question is, when exactly can you say that $Q(s, a)$ is approximated properly, if you don't have TD error, i.e. if you have to plot the graph between $Q(s, a)$ vs episodes, then what would be the optimal behaviour?
+Will it be an increasing exponential that saturates around the reward values, or an increasing exponential that saturates around any value?
+Follow up: what could the possible mistake be if the output of the Q-value function is around 5x the rewards and not saturating?
+"
+"['neural-networks', 'comparison', 'papers', 'anomaly-detection']"," Title: What is the difference between out of distribution detection and anomaly detection?Body: I'm currently reading the paper Likelihood Ratios for Out-of-Distribution Detection, and it seems that their problem is very similar to the problem of anomaly detection. More precisely, given a neural network trained on a dataset consisting of classes $A,B,$ and $C$, then they can detect if an input to the neural network is anomalous if it is different than these three classes. What is the difference between what they are doing and regular anomaly detection?
+"
+"['machine-learning', 'object-detection', 'object-recognition', 'text-detection']"," Title: Can object detection approaches be used to solve text/detection problems?Body: I have been working on text detection and recognition for almost two months and new on this field. So far, I have fine-tuned, tested, and trained several text detection/recognition methods, such as CRAFT, TextFuseNet, CharNet for detection, and clova.ai model for recognition. Now I come up with this question:
+
+- Can object detection approaches (YOLOv5, EfficientDet) be used to solve text detection problems?
+
+"
+"['reinforcement-learning', 'q-learning', 'terminology', 'value-functions', 'policies']"," Title: What is a ""learned policy"" in Q-learning?Body: I am completing an assignment at the moment. One of the assignment questions asks how you identified the learned policy and how you obtained it. The question is a reinforcement learning question, and the task is to apply the Q-learning algorithm to fill out a Q-table (which I've done) but confused on what it may mean by the learned policy.
+So, what is a "learned policy" in Q-learning?
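+For illustration, the learned policy is often read off the finished Q-table by taking the greedy (highest-valued) action in every state. A minimal numpy sketch, with a made-up table:
+import numpy as np
+
+Q = np.random.rand(10, 4)        # hypothetical Q-table: 10 states x 4 actions
+policy = np.argmax(Q, axis=1)    # policy[s] = action the agent would pick in state s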
+"
+"['deep-learning', 'terminology', 'generative-adversarial-networks', 'probability', 'wasserstein-gan']"," Title: Aren't scores in the Wasserstein GAN probabilities?Body: I am quite new to GAN and I am reading about WGAN vs DCGAN.
+Relating to the Wasserstein GAN (WGAN), I read here
+
+Instead of using a discriminator to classify or predict the probability of generated images as being real or fake, the WGAN changes or replaces the discriminator model with a critic that scores the realness or fakeness of a given image.
+
+In practice, I don't understand what the difference is between a score of the realness or fakeness of a given image and a probability that the generated images are real or fake.
+Aren't scores probabilities?
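+For a concrete picture of the distinction, the architectural difference usually comes down to the final layer. A hedged sketch with made-up layer sizes:
+import torch.nn as nn
+
+# DCGAN-style discriminator head: the sigmoid squashes the output into [0, 1],
+# so it can be read as a probability that the input is real.
+disc_head = nn.Sequential(nn.Linear(128, 1), nn.Sigmoid())
+
+# WGAN critic head: no sigmoid, so the output is an unbounded real-valued
+# "realness" score rather than a probability.
+critic_head = nn.Linear(128, 1)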
+"
+"['deep-learning', 'convolutional-neural-networks', 'autoencoders', 'overfitting', 'underfitting']"," Title: Underfitting a single batch: Can't cause autoencoder to overfit multi-sample batches of 1d data. How to debug?Body: TL;DR
+I am unable to overfit batches with multiple samples using autoencoder.
+A fully connected decoder seems to handle more samples per batch than a conv decoder, but it also fails when the number of samples increases.
+Why is this happening, and how to debug this?
+
+In depth
+I am trying to use an autoencoder on 1D data points of size (n, 1, 1024), where n is the number of samples in the batch.
+I am trying to overfit to that single batch.
+Using a convolutional decoder, I am only able to fit a single sample (n=1), and when n>1 I am unable to drop the loss (MSE) below 0.2.
+In blue: expected output (=input), in orange: reconstruction.
+Single sample, single batch:
+
+Multiple samples, single batch, loss won't go down:
+
+Using more than one sample, we can see that the net learns the general shape of the input (=output) signal, but misses it by a roughly constant offset overall.
+
+Using a fully connected decoder does manage to reconstruct batches of multiple samples:
+
+
+Relevant code:
+class Conv1DBlock(nn.Module):
+ def __init__(self, in_channels, out_channels, kernel_size):
+ super().__init__()
+ self._in_channels = in_channels
+ self._out_channels = out_channels
+ self._kernel_size = kernel_size
+
+ self._block = nn.Sequential(
+ nn.Conv1d(
+ in_channels=self._in_channels,
+ out_channels=self._out_channels,
+ kernel_size=self._kernel_size,
+ stride=1,
+ padding=(self._kernel_size - 1) // 2,
+ ),
+ # nn.BatchNorm1d(num_features=out_channels),
+ nn.ReLU(True),
+ nn.MaxPool1d(kernel_size=2, stride=2),
+ )
+
+ def forward(self, x):
+ for layer in self._block:
+ x = layer(x)
+ return x
+
+
+class Upsample1DBlock(nn.Module):
+ def __init__(self, in_channels, out_channels, factor):
+ super().__init__()
+ self._in_channels = in_channels
+ self._out_channels = out_channels
+ self._factor = factor
+
+ self._block = nn.Sequential(
+ nn.Conv1d(
+ in_channels=self._in_channels,
+ out_channels=self._out_channels,
+ kernel_size=3,
+ stride=1,
+ padding=1
+ ), # 'same'
+ nn.ReLU(True),
+ nn.Upsample(scale_factor=self._factor, mode='linear', align_corners=True),
+ )
+
+ def forward(self, x):
+ x_tag = x
+ for layer in self._block:
+ x_tag = layer(x_tag)
+ # interpolated = F.interpolate(x, scale_factor=0.5, mode='linear') # resnet idea
+ return x_tag
+
+encoder:
+self._encoder = nn.Sequential(
+ # n, 1024
+ nn.Unflatten(dim=1, unflattened_size=(1, 1024)),
+ # n, 1, 1024
+ Conv1DBlock(in_channels=1, out_channels=8, kernel_size=15),
+ # n, 8, 512
+ Conv1DBlock(in_channels=8, out_channels=16, kernel_size=11),
+ # n, 16, 256
+ Conv1DBlock(in_channels=16, out_channels=32, kernel_size=7),
+ # n, 32, 128
+ Conv1DBlock(in_channels=32, out_channels=64, kernel_size=5),
+ # n, 64, 64
+ Conv1DBlock(in_channels=64, out_channels=128, kernel_size=3),
+ # n, 128, 32
+ nn.Conv1d(in_channels=128, out_channels=128, kernel_size=32, stride=1, padding=0), # FC
+ # n, 128, 1
+ nn.Flatten(start_dim=1, end_dim=-1),
+ # n, 128
+ )
+
+conv decoder:
+self._decoder = nn.Sequential(
+ nn.Unflatten(dim=1, unflattened_size=(128, 1)), # 1
+ Upsample1DBlock(in_channels=128, out_channels=64, factor=4), # 4
+ Upsample1DBlock(in_channels=64, out_channels=32, factor=4), # 16
+ Upsample1DBlock(in_channels=32, out_channels=16, factor=4), # 64
+ Upsample1DBlock(in_channels=16, out_channels=8, factor=4), # 256
+ Upsample1DBlock(in_channels=8, out_channels=1, factor=4), # 1024
+ nn.ReLU(True),
+ nn.Conv1d(in_channels=1, out_channels=1, kernel_size=3, stride=1, padding=1),
+ nn.ReLU(True),
+ nn.Flatten(start_dim=1, end_dim=-1),
+ nn.Linear(1024, 1024)
+)
+
+FC decoder:
+self._decoder = nn.Sequential(
+ nn.Linear(128, 256),
+ nn.ReLU(True),
+ nn.Linear(256, 512),
+ nn.ReLU(True),
+ nn.Linear(512, 1024),
+ nn.ReLU(True),
+ nn.Flatten(start_dim=1, end_dim=-1),
+ nn.Linear(1024, 1024)
+)
+
+
+Another observation is that when the batch size increases more, to say, 16, the FC decoder also starts to fail.
+In the image, 4 samples of a 16 sample batch I am trying to overfit
+
+
+What could be wrong with the conv decoder?
+How to debug this or make the conv decoder work?
+Please note that the same infrastructure, with only the encoder and decoder changed, does manage to overfit and generalize on MNIST.
+(This is also posted here, but I think this is still ok to do. If not, please tell me and I will delete one).
+"
+"['reinforcement-learning', 'markov-decision-process', 'implementation', 'temporal-difference-methods', 'transition-model']"," Title: How should I implement the state transition when it is a Gaussian distribution?Body: I am reading this paper Anxiety, Avoidance and Sequential Evaluation and is confused about the implementation of a specific lab study. Namely, the authors model what is called the Balloon task using a simple MDP for which the description is below:
+
+My confusion is the following sentence:
+
+...The probability of this bad transition was modeled using normal density function, with parameters $N(16, 0.5)$
+
+But the fact that this is a continuous, normal distribution makes me stumped. In MDP's, usually there is a nice, discrete transition matrix and so there is no ambiguity as to how to implement it. For instance, if they said the transition to a bad state is modeled by a Bernoulli random variable with parameter $p,$ then it is clear how to implement it. I would do something like:
+def step(curr_state, curr_action):
+    if np.random.uniform(0, 1) < p:   # Bernoulli(p) draw for the bad transition
+        next_state = bad_state
+
+But they are using a normal random variable for this "bad" transition, so how do I implement this?
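+One possible reading (an assumption on my part, not necessarily what the paper's authors did) is to sample the balloon's burst point from $N(16, 0.5)$ once per episode, and trigger the bad transition when the pump count reaches it:
+import numpy as np
+
+burst_point = np.random.normal(loc=16, scale=0.5)   # drawn once at the start of the episode
+
+def step(pump_count, action):
+    if action == "pump" and pump_count + 1 >= burst_point:
+        return "bad_state"        # the balloon bursts
+    return pump_count + 1         # otherwise keep inflating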
+"
+"['machine-learning', 'generative-adversarial-networks', 'deepfakes', 'video-classification']"," Title: Why don't those developing AI Deepfake detectors use two detectors at once so as to catch deepfakes in one or the other?Body: Why don't those developing AI Deepfake detectors use two differently trained detectors at once that way if the Deepfake was trained to fool one of the detectors the other would catch it and vice-versa?
+To be clear this is really a question of can deepfakes be made to fool multiple high-accuracy detectors at the same time. And if so then how many can they fool before they become human detectable from noticeable noise?
+I've heard of papers where they injected a certain noise into their deepfake videos which allows them to fool a given detector (https://arxiv.org/abs/2009.09213, https://delaat.net/rp/2019-2020/p74/report.pdf), so I thought that, if they simply used two high-accuracy detectors, any pattern of noise used to fool one detector would interfere with the pattern of noise used to fool the other.
+"
+"['machine-learning', 'deep-learning', 'recurrent-neural-networks', 'attention']"," Title: Factors that causing totally different outcomes from an exactly same model and datasetsBody: Here is a model that trains time series data in (batch, step, features) way.
+I have kept the random state for train test split function the same. Every parameter below the same, running the model training yields different outcomes every time and the outcomes are drastically different.
+What may be the factors that led to this? Regularization?
+
+
+
+X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=666)
+def attention_model(X_train, y_train, X_test, y_test,num_classes,dropout=0.2, batch_size=68, learning_rate=0.0001,epochs=20,optimizer='Adam'):
+
+ Dense_unit = 12
+ LSTM_unit = 12
+
+ attention_param = LSTM_unit*2
+ attention_init_value = 1.0/attention_param
+
+
+ u_train = np.full((X_train.shape[0], attention_param),
+ attention_init_value, dtype=np.float32)
+ u_test = np.full((X_test.shape[0],attention_param),
+ attention_init_value, dtype=np.float32)
+
+
+ with keras.backend.name_scope('BLSTMLayer'):
+ # Bi-directional Long Short-Term Memory for learning the temporal aggregation
+ input_feature = Input(shape=(X_train.shape[1],X_train.shape[2]))
+ x = Masking(mask_value=0)(input_feature)
+ x = Dense(Dense_unit,kernel_regularizer=l2(0.005), activation='relu')(x)
+ x = Dropout(dropout)(x)
+ x = Dense(Dense_unit,kernel_regularizer=l2(0.005),activation='relu')(x)
+ x = Dropout(dropout)(x)
+ x = Dense(Dense_unit,kernel_regularizer=l2(0.005),activation='relu')(x)
+ x = Dropout(dropout)(x)
+ x = Dense(Dense_unit,kernel_regularizer=l2(0.005), activation='relu')(x)
+ x = Dropout(dropout)(x)
+
+
+ y = Bidirectional(LSTM(LSTM_unit,activity_regularizer=l2(0.000029),kernel_regularizer=l2(0.027),recurrent_regularizer=l2(0.025),return_sequences=True, dropout=dropout))(x)
+# y = Bidirectional(LSTM(LSTM_unit, kernel_regularizer=l2(0.01),recurrent_regularizer=l2(0.01), return_sequences=True, dropout=dropout))(y)
+
+ with keras.backend.name_scope('AttentionLayer'):
+ # Logistic regression for learning the attention parameters with a standalone feature as input
+ input_attention = Input(shape=(LSTM_unit * 2,))
+ u = Dense(LSTM_unit * 2, activation='softmax')(input_attention)
+
+ # To compute the final weights for the frames which sum to unity
+ alpha = dot([u, y], axes=-1) # inner prod.
+ alpha = Activation('softmax')(alpha)
+
+ with keras.backend.name_scope('WeightedPooling'):
+ # Weighted pooling to get the utterance-level representation
+ z = dot([alpha, y], axes=1)
+
+ # Get posterior probability for each emotional class
+ output = Dense(num_classes, activation='softmax')(z)
+
+ model = Model(inputs=[input_attention, input_feature], outputs=output)
+
+ optimizer = opt_select(optimizer,learning_rate)
+ model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer=optimizer)
+
+
+ hist = model.fit([u_train, X_train],
+ y_train,
+ batch_size=batch_size,
+ epochs=epochs,
+ verbose=2,
+ validation_data=([u_test, X_test], y_test))
+
+
+#kernel_regularizer=l2(0.002),recurrent_regularizer=l2(0.002),
+ return hist
+
+
+batch_size= 150
+#217
+epochs = 1000
+learning_rate = 0.00081
+optimizer = 'RMS'
+num_classes = y_train.shape[1]
+dropout=0.22
+
+tf.keras.backend.clear_session()
+
+history = attention_model(X_train, y_train, X_test, y_test, num_classes,dropout = dropout,batch_size=batch_size, learning_rate=learning_rate,epochs=epochs,optimizer=optimizer
+)
+
+"
+"['neural-networks', 'computer-vision', 'transformer', 'embeddings', 'vision-transformer']"," Title: How does the embeddings work in vision transformer from paper?Body: I get the part from the paper where the image is split into P
say 16x16
(smaller images) patches and then you have to Flatten
the 3-D (16,16,3) patch to pass it into a Linear
layer to get what they call "Liner Projection". After passing from the Linear layer, the patches will be vectors but with some "meaning" to them.
+Can someone please explain how the two types of embeddings are working?
+I visited this implementation on github, looked at the code too and looked like a maze to me.
+If someone could just explain how these embeddings are working in laymen's terms, I'll look at the code again and understand.
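+To make the mechanics concrete, here is a hedged sketch of the patch embedding (the names and sizes are my own, not taken from that repository):
+import torch
+import torch.nn as nn
+
+img = torch.randn(1, 3, 224, 224)                 # one RGB image
+patch, dim = 16, 768
+
+# Cut the image into 16x16 patches and flatten each one: (1, 196, 16*16*3)
+patches = img.unfold(2, patch, patch).unfold(3, patch, patch)
+patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(1, -1, patch * patch * 3)
+
+proj = nn.Linear(patch * patch * 3, dim)          # the "linear projection" of flattened patches
+tokens = proj(patches)                            # (1, 196, 768) patch embeddings
+
+# Learned position embeddings are simply added element-wise to the patch tokens.
+pos_emb = nn.Parameter(torch.zeros(1, tokens.shape[1], dim))
+tokens = tokens + pos_emb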
+"
+"['convolutional-neural-networks', 'objective-functions', 'pytorch', 'image-segmentation']"," Title: How to incorporate a symmetry constraint in the loss function to train a CNN?Body: I have a task of extremely sparse binary segmentation, i.e. the segmentation mask contains either 0 or 1, and there are ~95% zeros and only ~5% ones. I use the focal loss to address the sparseness (which is equivalent in my case to imbalances). I have another piece of information that I want to incorporate in the loss term.
+The desired output is always symmetric over the diagonal. I was searching for a way to use this information in the loss, but I couldn't find a solution. How would I do this?
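+One option (a hedged sketch of my own, not a standard recipe) is to keep the base loss and add a soft penalty that pushes the predicted map towards its own transpose:
+import torch
+import torch.nn.functional as F
+
+def symmetric_loss(logits, target, lam=0.1):
+    # Stand-in for your focal loss; swap the real one in here.
+    base = F.binary_cross_entropy_with_logits(logits, target)
+    pred = torch.sigmoid(logits)                           # (B, 1, H, W) with H == W
+    symmetry = F.mse_loss(pred, pred.transpose(-1, -2))    # distance to the transpose
+    return base + lam * symmetry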
+For some example of the symmetry in the segmentation maps, I added an arrow to show the axis of symmetry:
+
+
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'generative-adversarial-networks', 'heuristics']"," Title: Why is my GAN more unstable with bigger networks?Body: I am working with generative adversarial networks (GANs) and one of my aims at the moment is to reproduce samples in two dimensions that are distributed according to a circle (see animation). When using a GAN with small networks (3 layers with 50 neurons each), the results are more stable than with bigger layers (3 layers with 500 neurons each). All other hyperparameters are the same (see details of my implementation below).
+I am wondering if anyone has an explanation for why this is the case. I could obviously try to tune the other hyperparameters to get good performance, but I would be interested in knowing if someone has heuristics about what needs to change whenever I change the size of the networks.
+
+
+Network/Training parameters
+I use PyTorch with the following settings for the GAN:
+Networks:
+
+- Generator/Discriminator Architecture (all dense layers): 100-50-50-50-2 (small); 100-500-500-500-2 (big)
+- Dropout: p=0.4 for generator (except last layer), p=0 for discriminator
+- Activation functions: LeakyReLU (slope 0.1)
+
+Training:
+
+- Optimizer: Adam
+- Learning Rate: 1e-5 (for both networks)
+- Beta1, Beta2: 0.9, 0.999
+- Batch size: 50
+
+"
+"['reinforcement-learning', 'proximal-policy-optimization', 'reward-design', 'state-spaces', 'markov-property']"," Title: Reinforcement Learning algorithm with rewards dependent both on previous action and current actionBody: Problem description:
+Suppose we have an environment, where a reward at time step $t$ is dependent not only on the current action, but also on previous action in the following way:
+
+- if current action == previous action, you get reward = $R(a,s)$
+- if current action != previous action, you get reward = $R(a,s) - \text{penalty}$
+
+In this environment, switching actions bears a significant cost. We would like the RL algorithm to learn optimal actions under the constraint that switching action is costly, i.e. we would like to stay in selected action as long as possible.
+The penalty is significantly higher than an immediate reward, so if we do not take it into account, the model evaluation will have a negative total reward with almost 100% probability, since the agent will be constantly switching and extracting rewards from environment that are smaller than the cost of switching actions.
+Action space is small (2 actions: left, right). I'm trying to beat this game with PPO (Proximal Policy Optimization)
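+For reference, a minimal sketch of the reward rule described above; folding the previous action into the observation (my own design choice, not part of the original problem statement) is what keeps the process Markov:
+def shaped_reward(base_reward, action, prev_action, penalty):
+    # base_reward corresponds to R(a, s); the penalty is charged only on a switch.
+    if prev_action is not None and action != prev_action:
+        return base_reward - penalty
+    return base_reward
+
+# The observation passed to PPO could then be (state, prev_action),
+# so the policy can condition on the action it would be switching away from.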
+Questions
+
+- How might one address this constraint, i.e. explicitly make the agent learn that switching is costly and that it's worth staying in one action even if immediate rewards are negative?
+
+- How can you make the RL algorithm learn that it's not the reward term $R(a_t|s_t)$ that is negative, and thus decreasing $Q(a_t|s_t)$ and $V(s_t)$, but it's the penalty term (taking the action that is different from the previous action at step $t-1$) that is pushing total reward down?
+
+
+"
+"['reinforcement-learning', 'papers', 'rewards', 'inverse-rl', 'derivative']"," Title: What is the dimensionality of these derivatives in the paper ""Active Learning for Reward Estimation in Inverse Reinforcement Learning""?Body: I'm trying to implement in code part of the following paper: Active Learning for Reward Estimation in Inverse Reinforcement Learning.
+I'm specifically referring to section 2.3 of the paper.
+Let's define $\mathcal{X}$ as the set of states, and $\mathcal{A}$ as the set of actions. We then sample a set of observations $\mathcal{D}$ from an agent which follows an optimal policy.
+$$
+\mathcal{D}=\left\{\left(x_{1}, a_{1}\right),\left(x_{2}, a_{2}\right), \ldots,\left(x_{n}, a_{n}\right)\right\}
+$$
+Our goal is to find the reward vector $\mathbf{r}$ s.t. the total likelihood $\Lambda_{r}(\mathcal{D})$ is maximised (every time we compute a new $\mathbf{r}$, the likelihood is updated by computing the action-value function $Q_{r}^{*}$ and taking the softmax).
+$$
+L_{r}(x, a)=\mathbb{P}[(x, a) \mid r]=\frac{e^{\eta Q_{r}^{*}(x, a)}}{\sum_{b \in A} e^{\eta Q_{r}^{*}(x, b)}}
+$$
+$$
+\Lambda_{r}(\mathcal{D})=\sum_{\left(x_{i}, a_{i}\right) \in \mathcal{D}} \log \left(L_{r}\left(x_{i}, a_{i}\right)\right)
+$$
+Then, the paper suggests how to compute the derivatives w.r.t. $\mathbf{r}$ by defining the following quantities:
+$$
+\left[\nabla_{r} \Lambda_{r}(\mathcal{D})\right]_{x a}=\sum_{\left(x_{i}, a_{i}\right) \in \mathcal{D}} \frac{1}{L_{r}\left(x_{i}, a_{i}\right)} \frac{\partial L_{r}\left(x_{i}, a_{i}\right)}{\partial r_{x a}}
+$$
+$$
+\nabla_{r} L_{r}(x, a)=\frac{d L_{r}}{d Q^{*}}(x, a) \frac{d Q^{*}}{d r}(x, a)
+$$
+Then, considering $\mathbf{T}=\mathbf{I}-\gamma \mathbf{P}_{\pi^{*}}$
+$$
+\frac{\partial Q^{*}}{\partial r_{z u}}(x, a)=\delta_{z u}(x, a)+\gamma \sum_{y \in \mathcal{X}} \mathrm{P}_{a}(x, y) \mathbf{T}^{-1}(y, z) \pi^{*}(z, u)
+$$
+$$
+\frac{d L_{r}}{d Q_{y b}^{*}}(x, a)=\eta L_{r}(x, a)\left(\delta_{y b}(x, a)-L_{r}(y, b) \delta_{y}(x)\right)
+$$
+with $x, y \in \mathcal{X}$ and $a, b \in \mathcal{A} .$
+In the above expression, $\delta_{u}(v)$ denotes the Kronecker delta function.
+Finally, the update is trivially computed by
+$$
+\mathbf{r}_{t+1}=\mathbf{r}_{t}+\alpha_{t} \nabla_{r} \Lambda_{r_{t}}(\mathcal{D})
+$$
+Here I suppose that the paper's author is considering $\mathbf{r}$ as a matrix of dimension number of states $\times$ number of actions (i.e. each element of this matrix represents $R(s,a)$)
+My question is: what is the dimensionality of $\frac{d L_{r}}{d Q^{*}}(x, a)$ and $\frac{d Q^{*}}{d r}(x, a)$? (is that a point-wise product, a matrix-matrix product, a vector-matrix product?)
+The more reasonable solution, dimensionally speaking, for me would be something like:
+$$
+\nabla_{r} L_{r}(x, a)=\\
+\frac{d L_{r}}{d Q^{*}}(x, a) \frac{d Q^{*}}{d r}(x, a) = \\
+\left(\sum_{s'\in\mathcal{X}}\sum_{a'\in\mathcal{A}}\frac{d L_{r}}{d Q^{*}_{s'a'}}(x, a)\right)
+\begin{bmatrix}
+\frac{d Q^{\star}}{d r_{s_1a_1}}(x, a) & \dots &\frac{d Q^{\star}}{d r_{s_1a_m}}(x, a) \\
+\vdots& \ddots & \vdots \\
+\frac{d Q^{\star}}{d r_{s_na_1}}(x, a) & \dots & \frac{d Q^{\star}}{d r_{s_na_m}}(x, a)
+\end{bmatrix}
+$$
+(where $n = |\mathcal{X}|$ and $m = |\mathcal{A}|$)
+"
+"['machine-learning', 'reference-request', 'signal-processing']"," Title: What are some of the main high level approaches to applying ML on kinematic sensor data?Body: I've just started a project which will involve having to detect certain events in a stream of kinematic sensor data. By searching through the literature, I've found a lot of highly specific papers, but no general reviews.
+If I search up on computer vision, I'm likely to get 100s of articles giving overviews of different types of architectures for various vision tasks. They would look something like this:
+
+- We mainly use CNNs which work like this ...
+- For object detection we use one or two stage detectors which look like this...
+- For video classification we can use 3D CNNs or RNNs...
+- .... etc
+
+So I'm looking for something similar with regard to kinematic motion sensors. As was pointed out to me on the signal processing SE, "kinematic" could mean a lot of things. So specifically, I'm referring to 1d time series data for:
+
+- acceleration/velocity/position
+- angular velocity / absolute orientation
+
+"
+"['convolutional-neural-networks', 'deep-neural-networks', 'dimensionality-reduction', 'principal-component-analysis']"," Title: Estimating dimensions to reduce input image size to in CNNsBody: Considering input images to a CNN that have a large dimension (e.g. 256X256), what are some possible methods to estimate the exact dimensions (e.g. 16X16 or 32X32) to which it can be condensed in the final pooling layer within the CNN network such that the important features are retained? I have found references to using linear dimensionality estimates (such as PCA) and the Riemannian Metric for non-linear estimation, but am not confident of how accurate the predicted dimensions may be.
+One paper that explores this issue in Deep Neural Networks in a better way can be found here. Answers specifically pertaining to processing of SAR images would be more helpful.
+"
+"['machine-learning', 'deep-learning', 'computer-vision']"," Title: How to predict multiple set of coordinates (of bounding boxes) for signboards text localization through neural network?Body: I am creating a signboard translation application from scratch. I have images of signboards where there are multiple texts and I have the corresponding set of coordinates of bounding boxes for multiple texts. I want to create a regression model which will try to predict the coordinates if there is some text in the image. I am really stuck at a place. In some cases, I have multiple words in the image, so each word will have its own set of coordinates. So, how can I make a model such that if there is a single word then it will output single set of coordinates, but if there are 5 words then it should give me 5 set of coordinates? The number of output may vary with each image. What kind of neural net should I use? I don't want to use sliding window approach. Please help me out.
+"
+['agi']," Title: Are there any approaches to AGI that will definitely not work?Body: Is there empirical evidence that some approaches to achieving AGI will definitely not work? For the purposes of the question the system should at least be able to learn and solve novel problems.
+Some possible approaches:
+
+- A Prolog program
+- A program in a traditional procedural language such as C++ that doesn't directly modify its own code
+- A program that evolves genetically in response to selection pressures in some constructed artificial environment
+- An artificial neural net
+- A program that stores its internal knowledge only in the form of a natural human language such as English, French, etc (which might give it desirable properties for introspection)
+- A program that stores its internal knowledge only in the form of a symbolic language which can be processed unambiguously by logical rules
+
+"
+"['machine-learning', 'gradient-boosting']"," Title: House price inflation modellingBody: I have a data set of house prices and their corresponding features (rooms, meter squared, etc). An additional feature is the sold date of the house. The aim is to create a model that can estimate the price of a house as if it was sold today. For example a house with a specific set of features (5 rooms, 100 meters squared) and today's date (28-1-2020), what would it sell for? Time is an important component, because prices increase (inflate over time). I am struggling to find a way to incorporate the sold date as a feature in the gradient boosting model.
+I think there are a number of approaches:
+
+- Convert the data into an integer, and include it directly in the model as a feature.
+- Create a separate model for modelling the house price development over time. Let's think of this as some kind of an AR(1) model. I could then adjust all observations for inflation, so that we would get an inflation adjusted price for today. These inflation adjusted prices would be trained on the feature set.
+
+What are your thoughts on these two options? Are there any alternative methods?
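+As a concrete version of the first option (a hedged sketch; the column names are hypothetical), the sold date can be mapped to an integer before feeding it to the gradient boosting model:
+import pandas as pd
+
+df = pd.DataFrame({"sold_date": ["2019-05-01", "2020-01-28"], "price": [250000, 280000]})
+df["sold_ordinal"] = pd.to_datetime(df["sold_date"]).map(pd.Timestamp.toordinal)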
+"
+"['machine-learning', 'deep-learning']"," Title: Accuracy goes straight for about 200 epochs then start increasingBody: Can anyone explain the following observation?
+Why did the accuracies keep to be a straight line with a very smooth decrease of loss?
+Is this because of the learning rate or other reasons?
+Some info:
+The input is in the dimension of (319,50,40) as (batch, step, features)
+The dataset consists of 319 samples. It was split using train_test_split() to yield a 0.2 test size.
+I used a self-attention LSTM model with 4 Dense layers and 1 self-attention LSTM layer. The codes are long, I will post them if it is needed.
+The hyperparameters:
+batch_size= 100
+epochs =1400
+learning_rate = 0.00014
+optimizer = 'RMS'
+num_classes = y_train.shape[1]
+dropout=0.37
+
+In addition,
+If I don't set a random seed and keep shuffle=True, sometimes I get a horizontal line till the end of training without any increase.
+
+"
+['long-short-term-memory']," Title: Using an LSTM model to train on spatial inputsBody: I have an $x$-$y$ plane; inside that plane I have 9 paths $(p_1, p_2, \dots, p_9)$. Each path is classified into one of three classes $(c_1, c_2, c_3)$. Each path has 100 coordinate points, i.e. $((x_1, y_1),(x_2, y_2), \dots, (x_n, y_n))$. In total, I have 1800 input coordinate values. Now I am interested in training the LSTM model in such a way that, if I feed it some test path $p_{10}$, the model should predict which class it belongs to. This is my problem definition. Regarding this, I have some questions:
+
+- First of all, is it necessary to use LSTM models to obtain a solution?
+- Are there any other simple models to attain a solution to this problem?
+- I did some literature survey on this kind of problem using LSTMs; those works have time as one of the parameters, along with the coordinates, i.e. $(x_1, y_1, t_1)$.
+
+The paper I have read is "A Single-Shot Approach Using an LSTM for Moving Object Path Prediction".
+I am a beginner with sequence-model neural networks. Links or examples of similar works would be very much appreciated.
+"
+"['reinforcement-learning', 'markov-decision-process', 'reward-functions']"," Title: How can I go from $R(s)$ to $R(s,a)$ in this specific MDP?Body: I'm trying to implement a research paper, as explained in this other post, here the author of the paper assumed R as a function of both states and actions, while the code (and the MDP) I'm using to test this algorithm assumes R as a function of only states.
+My question is:
+Let $\mathcal{X}$ be the set of states of an MDP and $\mathcal{A}$ its set of actions. Suppose I have four states ($1$, $2$, $3$, $4$), two actions $a$ and $b$, and a reward function $R: \mathcal{X}\to\mathbb{R}$ s.t.
+$R(1) = 0$
+$R(2) = 0$
+$R(3) = 0$
+$R(4) = 1$
+If I need to change the current reward function to a new reward function $R:\mathcal{X}\ \times \mathcal{A} \to\mathbb{R}$ is it ok to compute it as $\forall a,R(s,a) = R(s)$?
+$R(1,a) = 0$
+$R(1,b) = 0$
+$R(2,a) = 0$
+$R(2,b) = 0$
+$R(3,a) = 0$
+$R(3,b) = 0$
+$R(4,a) = 1$
+$R(4,b) = 1$
+More generally, what's the correct way of generalising a reward function
+$R: \mathcal{X}\to\mathbb{R}$ to a reward function $R:\mathcal{X}\ \times \mathcal{A} \to\mathbb{R}$?
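+For what it's worth, this generalisation can be written as a simple broadcast if states and actions are indexed by integers (a minimal sketch):
+import numpy as np
+
+R_s = np.array([0.0, 0.0, 0.0, 1.0])              # R(s) for states 1..4
+num_actions = 2
+R_sa = np.tile(R_s[:, None], (1, num_actions))    # R(s, a) = R(s) for every action a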
+"
+"['neural-networks', 'reference-request', 'optimization', 'gradient-descent', 'activation-functions']"," Title: Why are most commonly used activation functions continuous?Body: I have come to notice that the most commonly used activation functions are continuous. Is there any specific reason behind this? Results such as this paper have worked on training networks with discontinuous activations yet this does not seem to have taken off. Does anybody have insight into why this happens, or better yet an article talking about this?
+"
+"['reinforcement-learning', 'reference-request', 'deep-rl', 'ddpg', 'catastrophic-forgetting']"," Title: Gradual decrease in performance of a DDPG agentBody: I'm trying to solve the OpenAI's CarRacing-v0 environment with the DDPG algorithm. I've observed that after a period of learning, the agent's performance starts to deteriorate slowly. For some hyperparameter configurations this is followed by a rebound and again a slump. Here's what a typical reward plot looks like:
+Since I'm new to reinforcement learning (this is my first shot at it), I don't know if this is a common phenomenon. I know of catastrophic forgetting, but I believe that's not the case here, since this is more akin to a "languishing dementia". As far as I understand, "catastrophic forgetting" is an abrupt event, which contrasts with the gradual change I've been seeing in my attempts.
+Is this some kind of general phenomenon with coverage in the existing literature or is this rather a quirk of my specific setup (algorithm + hyperparameters) for which the solution would be "change the setup"?
+For reference, the implementation I'm using: https://github.com/hirekk/pytorch-rl
+"
+"['reinforcement-learning', 'ddpg', 'off-policy-methods', 'experience-replay', 'behavioral-cloning']"," Title: Can I add expert data to the replay buffer used by the DDPG algorithm in order to make it converge faster?Body: I am working on a restricted reinforcement learning environment, i.e. the environment breaks very often (i.e.: the communication between the simulator and reinforcement learning agent breaks after some time). So, it is getting difficult for me to continue training in this environment.
+The continuous state-space is $\mathcal{S} \subseteq \mathbb{R}^{10}$ and the continuous action-space $\mathcal{A} \subseteq \mathbb{R}^{2}$.
+What I want to know is whether I can add expert data to the replay buffer, given that DDPG is an off-policy algorithm?
+Or I should go with the behavior cloning technique to train the actor-network only, so that it converges rapidly?
+I just want to get the work done first and then I can think of exploring the environment.
+"
+"['reinforcement-learning', 'markov-decision-process', 'state-spaces']"," Title: Is there a natural way to define the terminal state from the MDP transition probabilities $p(s',r|s,a)$?Body: I'm learning the basics of RL and I'm struggling to understand the notion of terminal state in MDPs.
+To ask my question straightforwardly: is there a natural way to define the terminal state from the MDP transition probabilities $p(s',r|s,a)$? If I need to be more restrictive, assume a game setting, for example, chess.
+My first hypothesis would be to define the terminal state as the state $s_T$ such that $p(s',r|s_T,a) = p(s',r|s_T)$, a state from which the transition is independent of the agent's actions. But that does not seem quite right. First, there is no particular reason why this state should be unique. Second, from this definition, it could also just be an intermittent state of "lag".
+"
+"['reinforcement-learning', 'deep-rl', 'applications']"," Title: What are the biggest barriers to get RL in production?Body: I am studying the state of the art of Reinforcement Learning, and my point is that we see so many applications in the real world using Supervised and Unsupervised learning algorithms in production, but I don't see the same thing with Reinforcement Learning algorithms.
+What are the biggest barriers to get RL in production?
+"
+"['game-ai', 'minimax', 'chess', 'evaluation-functions', 'negamax']"," Title: What is the meaning of the terms in this evaluation function for chess?Body: I'm trying to improve my evaluation and I saw this here
+materialScore = kingWt * (wK-bK)
+ + queenWt * (wQ-bQ)
+ + rookWt * (wR-bR)
+ + knightWt* (wN-bN)
+ + bishopWt* (wB-bB)
+ + pawnWt * (wP-bP)
+
+How do I get the value of, let's say, wK? Do I get the position of the king and score it relative to the board? For example, wK is safer than bK, so let's say wK - bK = 1 - 0.5. So the result would be 90 * (0.5). Is this really how it works?
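+For comparison, the usual reading of that formula (as far as I know; it is worth checking the source you copied it from) is that wK, wQ, etc. are simply piece counts, so the expression only measures material, not king safety or position. A hedged sketch with conventional centipawn weights:
+piece_values = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900, "K": 20000}
+
+def material_score(pieces):
+    # pieces: iterable of piece letters, uppercase for white, lowercase for black
+    score = 0
+    for p in pieces:
+        if p.upper() in piece_values:
+            value = piece_values[p.upper()]
+            score += value if p.isupper() else -value
+    return score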
+"
+"['reinforcement-learning', 'proofs']"," Title: Is my proof of equation 0.6 in the book ""Reinforcement Learning: Theory and Algorithms"" correct?Body: In Sham Kakade's Reinforcement Learning: Theory and Algorithms, this equation (page 17) is used preceding the proof of performance difference lemma.
+
+I am attempting to prove equation 0.6. Here is my current attempt:
+\begin{align*}
+ \mathbb{E}_{\tau \sim \rho^\pi}\left[\sum\limits_{t=0}^\infty \gamma^t f(s_t,a_t)\right] &= \sum\limits_{t=0}^\infty \gamma^t \mathbb{E}_{\tau \sim \rho^\pi} [f(s_t,a_t)]\\
+ &= \sum\limits_{t=0}^\infty \gamma^t \mathbb{E}_{s_t, a_t} [f(s_t,a_t)]\\
+ &= \sum\limits_{t=0}^\infty \gamma^t \sum\limits_{s, a} \mathbb{P}(s_t = s, a_t = a) f(s,a)\\
+ &= \sum\limits_{t=0}^\infty \gamma^t \sum\limits_{s} \mathbb{P}(s_t = s) \sum\limits_{a}\pi(a_t = a|s_t = s) f(s,a)\\
+ &= \frac{1 - \gamma}{1 - \gamma}\sum\limits_{t=0}^\infty \gamma^t \sum\limits_{s} \mathbb{P}(s_t = s) \mathbb{E}_{a \sim \pi(s)} [f(s,a)]\\
+ &= \frac{1}{1 - \gamma} \sum\limits_{s} (1-\gamma) \sum\limits_{t=0}^\infty \gamma^t \mathbb{P}(s_t = s) \mathbb{E}_{a \sim \pi(s)} [f(s,a)]\\
+ &=\frac{1}{(1-\gamma)} \mathbb{E}_{s \sim d^\pi}\left[\mathbb{E}_{a \sim \pi(s)}\left[f(s,a)\right]\right] \\
+\end{align*}
+Is the swapping of expectation and summation in this way allowed (given that the series converges)?
+Note that this is not the proof of the performance difference lemma, but just an attempt to show equation 0.6, which is used but not proved in the book.
+"
+"['neural-networks', 'attention', 'neural-turing-machine', 'hopfield-network']"," Title: Reasoning behind performance improvement with hopfield networksBody: In the paper Hopfield networks is all you need, the authors mention that their modern Hopfield network layers are a good replacement for pooling, GRU, LSTM, and attention layers, and tend to outperform them in various tasks.
+I understand that they show that the layers can store an exponential number of vectors, but that should still be worse than attention layers, which can focus on parts of an arbitrary-length input sequence.
+Also, in their paper, they briefly allude to Neural Turing Machine and related memory augmentation architectures, but do not comment on the comparison between them.
+Has someone studied how these layers help improve the performance over pooling and attention layers, and is there any comparison between replacing layers with Hopfield layers vs augmenting networks with external memory like Neural Turing Machines?
+Edit 29 Jan 2020
+I believe my intuition that attention mechanism should outperform hopfield layers was wrong, as I was comparing the hopfield layer that uses an input vector for query $R (\approx Q)$ and stored patterns $Y$ for both Key $K$ and Values $V$. In this case my assumption was that hopfield layer would be limited by its storage capacity while attention mechanism does not have such constraints.
+However the authors do mention that the input $Y$ may be modified to ingest two extra input vectors for Key and Value. I believe in this case it would perform hopfield network mapping instead of attention and I do not know how the 2 compare.
+
+"
+"['computer-vision', 'object-detection', 'model-request']"," Title: Object detection approaches without anchors and NMSBody: The Context
+Of all the problems I have worked on in computer vision, the most challenging one is object detection. This is not because the problem itself is complex to understand or badly formulated, but because we need to inject some strong priors about how we understand the world. Those priors are the anchors (which are priors about object shapes, aspect ratios, ...).
+This prior information, although very simple to understand, it is very hard to inject on the training logic. Hence making the computation of the ground truth very messy and prone to errors. It is even harder when different object detection backbones propose different ground truth computation methods.
+The Question
+From mid-2019 till now there is a growing trend on research about one-stage object detectors that do not rely on anchors: hence dropping the costly NMS postprocessing and in some cases even the IoU computation. I would like to do a proof of concept with some of them so here is my question:
+What are some good object detectors that do not use anchors? Or said in other words, what are the go-to object detectors for this new research trend?
+"
+"['genetic-algorithms', 'homework', 'fitness-functions', 'fitness-design']"," Title: How to design a fitness function for a problem where there are 2 objectives?Body: I am told to express a fitness function for a question I have been presented. I am unsure how I would express the function. In words, what I have written down makes sense but turning this into a mathematical formula is proving a bit difficult. My understanding is:
+The fitness function for this scenario will want to ensure that the best offer for building the computers is chosen whilst the price of the final optimal offer is low.
+The fitness function in this case would want to consider a few factors. The first factor is that the selected offers together provide a sufficient quantity of each computer part. Ideally, it would be best if we did not have any duplicate parts across the offers. The cost should be low too, while all parts are still covered amongst the different offers that we select.
+The fitness function will need to ensure all of this is factored in.
+The scenario and question are below:
+
+For the production of a number of laptops, a computer company needs a quantity of each component such as screens (S), hard drives (HD), optical drives (OD), RAM, video cards (VC), CPU, Ports, etc. The company received a number of priced offers. Note that offers do not contain all components. As examples:
+
+- Offer 1: 1000 RAMs, 800 HDs, 2000 ODs – £75K
+- Offer 2: 1850 S, 1570 OD - £40K
+- Offer 3: 3000 HD, 2000 RAM – £70K
+- Offer 4: 1500 RAM, 2000 VC, 1700 S – £55K, etc.
+
+The company would be interested to accept cheaper offers in the first place. Answer the following: Give the expression of the fitness function.
+
+Any help would be greatly appreciated 😊.
+"
+"['convolutional-neural-networks', 'computer-vision', 'reference-request', 'image-processing', 'template-matching']"," Title: Can I use a CNN for template matching, so that there is robustness, as the background of the target image is not that good?Body: I have to extract part of a source image, then I have to check if it is similar or almost similar to any of the 10 target images, so that I can do further processing on that one specific target image, which is similar to the source image. It's like template matching, but they have to loop over 10 different images to find whether a matching template is found in any of those images or not.
+I wanted to use a CNN-based solution, as a classical distance-based solution is giving poor results.
+Can I use a CNN for template matching, so that there is robustness, since the background of the target image is not that good and it causes a problem? If some resources could be pointed out, that would be great too.
+"
+"['reference-request', 'story-generation']"," Title: What is the first short film completely made by AI?Body: I have created (not me exactly) a short film entirely made by AI. There many short films (like Sunspring) 'written' by AI but were acted out by humans. In my short film, the story is by the AI, the music is by the AI, the title art is by the AI, the visuals and acting is by the AI (yes the AI can act) and the dialogue is by the AI. So, everything is AI. What I wanted to know is if this is the first one like this. I can't seem to find others online.
+"
+"['reinforcement-learning', 'deep-rl', 'function-approximation', 'experience-replay', 'stochastic-gradient-descent']"," Title: Can stochastic gradient descent be properly used in any sample based learning algorithm in Reinforcement Learning?Body: Assuming we use an MSE cost function of the form
+$$ \sum_s\mu(s)(V_{\pi}(S_t)-\hat{V}(S_t,\theta_t))^2 = E_{\mu(s)}[(V_{\pi}(S_t)-\hat{V}(S_t,\theta_t))^2]$$
+The Stochastic Gradient Descent is used to approximate the true update algorithm, which looks like this
+$$\theta_{t+1} = \theta_{t} - \frac{\eta_t}{2}\nabla_{\theta}(E_{\mu(s)}[(V_{\pi}(S_t)-\hat{V}(S_t,\theta_t))^2])$$
+to this
+$$\theta_{t+1} = \theta_{t} - \frac{\eta_t}{2}\nabla_{\theta}(U_t-\hat{V}(S_t,\theta_t))^2$$
+where, for simplicity, $U_t$ represents an unbiased estimate of the true value function $V_{\pi}(s_t)$. This expression is the source of many learning algorithms used in reinforcement learning.
+One of the conditions for SGD requires that samples used for updating the parameters must be I.I.D according to the distribution $\mu(s)$. However, in both on-policy and off-policy learning methods, updates at each time-step are based on trajectories generated. Since, along a trajectory, the state $s_{t+1}$ depends on the state at ${s_t}$, this means that the sample used to update $\theta_t$ and $\theta_{t+1}$ are not independent. Many, if not all, sample-based learning algorithms used in RL rely on using SGD, such as the Gradient Monte Carlo Algorithm but I've not really seen anywhere that mentions these algorithms have the "issue" that I mention so I feel like I'm missing something. More formally,
+
+My Question: Does the fact that parameter updates are not I.I.D mean we can't really use stochastic gradient descent AT ALL in learning algorithms, and, if so, why then do these algorithms "work"?
+
+As far as I know, this question applies equally to all forms of parameterised function approximation that are used with learning algorithms (tabular functions*, linear functions and non-linear functions). But, if anyone knows a special reason as to whether these cases should be treated separately could they make clear why
+*I understand that when learning algorithms with tabular functions, there exists theory beyond SGD that ensures convergence, however, I'm not entirely sure what this theory is and whether if this makes them exempt, so if anyone knows whether or not it does make them exempt could they also make this clear!
+
+Edit:
+It has been highlighted in the comments that replay buffers have been used to resolve the issue of correlated sampling in cases such as DQN and variants of it. This implies correlated sampling is an issue in these cases. Aside from this, I've not heard of replay buffers being used elsewhere (correct me if I'm wrong), so why are replay buffers needed with this off-policy NN approach but not in other learning algorithms given that they all suffer from the issue of correlated sampling.
+"
+"['convolutional-neural-networks', 'generative-adversarial-networks', 'image-processing', 'denoising-autoencoder', 'image-denoising']"," Title: Is the GAN architecture better suited for medical image denoising than the CNN?Body: I'm considering using GANs for medical image denoising, based on previous literature, like this and this. My input to the GAN would be a high-noise image and my ideal output would be a low-noise, high-quality image.
+Is the GAN architecture better suited for applications where the inputs are just random noise? Is the discriminator necessary in this case or is it better to just use a Deep CNN/Autoencoder? How do I justify using a GAN for my application?
+"
+"['neural-networks', 'deep-rl', 'genetic-algorithms', 'regularization']"," Title: If one of the inputs to a neural network (that represents a policy) is noisy and degrades the performance, would this architecture solve the issue?Body: I'm using genetic algorithms to train deep reinforcement learning (DRL) agents, similarly to what was done in this paper. DRL policies are therefore represented by deep neural networks, which map states into deterministic actions. My state space consists of three state variables $v_1, v_2$ and $v_3$. Variable $v_1$ is extremely noisy and seems to be degrading the performance (i.e. the return or discounted cumulative reward) of my RL agent but, for certain reasons, I have to include it. The return is precisely my fitness function.
+Currently, my DNN looks like this:
+
+There is only 1 output since the action space is 1-dimensional (one degree of freedom, which is used to control a system).
+The DNN tends to overfit more quickly when $v_1$ is present. I'm considering creating a custom NN that looks like this:
+
+By doing this I would reduce the complexity of the influence of the variable $v_1$ on the output, since the number of layers between $v_1$ and the output node would be reduced.
+I have reasons to believe that the optimal control depends linearly or (something close to linearity) on $v_1$.
+Does this make any practical sense and are there are reasons why one should avoid doing this?
+"
+"['deep-learning', 'papers', 'generative-adversarial-networks', 'discriminator']"," Title: Why does the relativistic discriminator increase the probability that generated data are real and decrease the probability that real data are real?Body: I was reading the ESRGAN whitepaper, where I came across this line:
+
+Relativistic discriminator [2] is developed not only to increase the probability that generated data are real but also to simultaneously decrease the probability that real data are real.
+
+Can somebody explain this?
+"
+"['neural-networks', 'machine-learning', 'reference-request', 'research']"," Title: Are there any active areas of research in machine learning that do not involve neural networks at all?Body: So far, I have not been able to find many papers that do not involve neural networks, so I was hoping I can gain some insight here. Any references would be greatly appreciated.
+"
+"['deep-learning', 'computer-vision', 'object-detection', 'object-recognition', 'yolo']"," Title: How are Ground truth provided to each Pyramid map in RetinaNet or YOLOv3 Paper? How is the mapping of Feature Pyramids done to Ground TruthBody: SO the YOLO V3 and RetinaNet both uses the Feature pyramids which look something like this:
+(except b
and e
which have one output)
+I'm just confused about how the predictions and training are done.
+Do we have to give EACH feature map a different Y label? If yes, how is that possible? We would need N different ground truths, in my opinion. (Also, there would be 3 different losses, I think?)
+If not, then how are these done at once?
+There is a lot of confusion about these networks because I am not able to get my head around how the y-labels are provided, trained, and predicted in YOLOv3 and RetinaNet. Everything about the loss, the multiple outputs, and so on will make sense if I know this one thing.
+"
+"['reinforcement-learning', 'policy-gradients', 'softmax', 'a3c']"," Title: Understanding loss function gradient in asynchronous advantage actor-critic (A3C) algorithmBody: This is a question I posted here. I am asking it on this StackExchange branch as well, so that more people who could potentially answer get to see the question.
+In the A3C algorithm from the original paper:
+
+the gradient with respect to log policy involves the term
+$$\log \pi(a_i|s_i;\theta')$$
+where $s_i$ is the state of the environment at time step $i$, and $a_i$ is the action produced by the policy. If I understand correctly, the output of the policy is a softmax function, so that if there are $n$ different actions, then we get the $n$-dimensional vector output
+$$\pi(s_i;\theta')=\left(\frac{e^{o_1(s_i)}}{\sum_{l=1}^n e^{o_l(s_i)}},\frac{e^{o_2(s_i)}}{\sum_{l=1}^n e^{o_l(s_i)}},...,\frac{e^{o_n(s_i)}}{\sum_{l=1}^n e^{o_l(s_i)}}\right),$$
+where the $o_j(s)$ are softmax layer activations obtained from forward propagation of state $s_i$ through the neural network.
+Do I understand correctly that in the A3C algorithm above the term $\log \pi(a_i|s_i;\theta')$ refers to
+$$\log \pi(a_i|s_i;\theta') = \log\left(\frac{e^{o_j(s_i)}}{\sum_{l=1}^n e^{o_l(s_i)}}\right)$$
+with index $j$ referring to the position of the largest element in vector $\pi(s_i;\theta')$ above? Or maybe all action options should be contributing according to their probabilistic weights, like so
+$$\log \pi(a_i|s_i;\theta') = \sum_{j=1}^n\log\left(\frac{e^{o_j(s_i)}}{\sum_{l=1}^n e^{o_l(s_i)}}\right)~~~?$$
+Or perhaps neither of these expressions is correct? In that case, what is the correct explicit formula for the expression $\log \pi(a_i|s_i;\theta')$ in terms of softmax layer activations $o_j$?
+"
+"['neural-networks', 'backpropagation', 'weights', 'sigmoid', 'vanishing-gradient-problem']"," Title: Why does sigmoid saturation prevent signal flow through the neuron?Body: As per these slides on page 35:
+
+Sigmoids saturate and kill gradients.
+when the neuron's activation saturates at either tail of 0 or 1, the gradient at these regions is almost zero.
+the gradient and almost no signal will flow through the neuron to
+its weights and recursively to its data.
+
+So, if the gradient is close to zero, then the error correction would be very minimal. But why would that cause no signal to flow through the neuron?
+$$w(n+1) = w(n) - \text{gradient}$$
+That would only cause the weights not to change.
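+For a sense of scale, the sigmoid derivative is $\sigma'(x)=\sigma(x)\,(1-\sigma(x))$, which peaks at $0.25$ for $x=0$ and is already about $4.5\times10^{-5}$ at $x=10$; with an update of the form $w(n+1) = w(n) - \eta \, \delta$, a factor that small (multiplied again at every earlier layer) leaves the weights essentially frozen, which is what "no signal flows" is meant to convey.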
+"
+"['reference-request', 'emotional-intelligence', 'affective-computing', 'emotion-recognition']"," Title: Is there any research on the detection of the user's emotion and stress based on the mouse movement and keyboard?Body: I have to create a model that can detect the user's emotion and stress level based on their mouse movement and keyboard typing activity. I didn't found any research work based on this. Is there any research on this?
+"
+"['reinforcement-learning', 'q-learning', 'exploration-exploitation-tradeoff', 'exploration-strategies']"," Title: In Q-learning, wouldn't it be better to simply iterate through all possible states?Body: In Q-learning, all resources I've found seem to say that the algorithm to update the Q-table should start at some initial state, and pick actions (which are sometimes random) to explore the state space.
+However, wouldn't it be better/faster/more thorough to simply iterate through all possible states? This would ensure that the entire table is updated, instead of just the states we happen to visit. Something like this (for each epoch):
+for state in range(NUM_STATES):
+ for action in range(NUM_ACTIONS):
+ next_state, reward = env.step(state, action)
+ update_q_table(state, action, next_state, reward)
+
+Is this a viable option? The only drawback I can think of is that it wouldn't be efficient for huge state spaces.
+"
+"['python', 'time-series', 'regression']"," Title: What model to use to get a robust model to predict next 3 days of sales even for products that have just sold once ever?Body:
+- PROJECT: I am working on an e-commerce site where digital products can run out so there is need to reorder them 72h before they run out (reordering them sooner is not a problem but having notification a bit later so if the product would sell better that would be a problem because we cannot reorder products in time).
+- GOAL: is to know if products run out at least 72h earlier.
+- DATA COLUMNS: sales datetime, product id, current number of products, price of product, what currency it was purchased in, other data like profit currency was used for the purchase…
+- SIZE: Before grouping I have a few millions of rows after grouping hundreds of thousands so it is a lot of data point but DASK can handle them.
+- GRUPPING COLUMNS: I have grouped the data by PURCAHSEDATE & ID so each day has the product that were sold with all its feature. Features have been aggregated mostly buy summing (profit, expenses) and mean (percentage features like margin%)
+- HOW FAR I HAVE GONE WITH THE PROECJT: I have looked up a couple of Kaggle projects online that were focused on use https://www.kaggle.com/tejasrinivas/simple-xgb-starter
+- PROBLEMS: A.) Some product has been sold in the past but they are selling out in 1-2 days so it is hard to put trendline on it. B.) Some item just has 1-2 days of data because it just started to sell a few days ago. C.) I also have data of products that have been sold a lot for a mid or long run (hundreds of days thousands of times). So I could do time series modelling on the whole of the sales but for each individual item I don't always have data on it
+- CURRENT RESULTS: I have used XGBOSOT Regression like It predicts well number of products sales after the days is over with all the features, but that is not the goal - https://www.kaggle.com/tejasrinivas/simple-xgb-starter
+- PROJECT RECOMMEND:I am trying to use the following pick ideas from the following competitions: https://www.kaggle.com/c/demand-forecasting-kernels-only/notebooks?competitionId=9999&sortBy=voteCount , https://www.kaggle.com/c/competitive-data-science-predict-future-sales/code
+- GOAL: simple and easy solution, not LSTM or something complicated but something quick and easy (like xgboost regression so if I have more data I can use rapids.ai to GPU teach it) to implement because as I said it is not a problem if it is missing on the time frame on the positive side and the item gets reordered 96h early and not 72h early. I am guessing that somehow, I should shift the dates but as I said in many case items have not enough dates to shift their sales date.
+
+"
+"['training', 'tensorflow', 'python', 'keras', 'objective-functions']"," Title: Single-value loss/training in a CNN with a tensor outputBody: I am playing around with an idea of using using Q-learning with a DQN (Deep Q-Network), to determine the optimal position of a number of 'units' on a grid of allowed locations, according to some reward-metric that is only calculated after placing all of the units. I am following much of Deep Convolutional Q-Learning with Python and TensorFlow 2.0, but I diverge with the grid output.
+To do this, I use a CNN that takes in a one-hot encoded grid of the units (0 = no unit here, 1 = one unit here), and outputs a grid with the same shape as the input, containing the expected reward of placing a unit at each location.
+I can use the following Tensorflow/Keras code to get the expected rewards, where units can be placed in 3 dimensions and channels determining the different unit-styles:
+from tensorflow.keras import layers, models
+
+model = models.Sequential(name="DQN")
+model.add(
+ layers.Conv3D(
+ filters=10,
+ kernel_size=3,
+ activation="relu",
+ padding="same",
+ input_shape=input_shape,
+ bias_initializer="random_normal"
+ )
+)
+model.add(layers.Conv3D(filters=10, kernel_size=3, activation="relu", padding="same"))
+model.add(layers.Conv3D(filters=input_shape[-1], kernel_size=3, activation="relu", padding="same"))
+
+model.compile(optimizer="adam", loss=tf.keras.losses.Huber(), metrics=["accuracy"])
+
+Currently I am using a very simple training scheme, where Q-values are first generated from the current state. At the position where the agent earlier placed a unit, the calculated true reward is given and trained against.
+If the following state was a terminated state, the calculated reward is used directly, while the discounted reward is used in non-terminated states.
+for state, action, next_state, reward, terminated in minibatch:
+ # state: one-hot grid
+ # action: index in the state where a unit is placed by the agent
+ # terminated: True/False whether the 'next_state' is the terminated state
+
+ q_target = model.predict(state)
+
+ if terminated:
+ q_target[0][action] = reward
+ else:
+ following_target = model.predict(next_state)
+ q_target[0][action] = reward + gamma * np.amax(following_target)
+
+ model.fit(state, q_target, epochs=1, verbose=0)
+
+This means that only a single value in the entire training tensor is the true reward; all others are approximated by the CNN.
+However, all of the expected rewards are used in training, instead of this singular value. So I was considering whether it would be possible to train the CNN towards this single value only, and whether it would make any sense at all?
+I thought of creating a custom loss function that would calculate the loss function for this single action, so training is done against this. However, I can't really figure out how I would go about doing this. I've looked at something like Custom training with tf.distribute.Strategy, but I wasn't successful at it..
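+One way this is often done (a hedged sketch under my own assumptions about the shapes involved, not the only option) is to compute the loss only at the grid cell that was actually acted on, using a GradientTape instead of model.fit:
+import tensorflow as tf
+
+huber = tf.keras.losses.Huber()
+optimizer = tf.keras.optimizers.Adam()
+
+def train_on_single_action(model, state, action, target_value):
+    # action is assumed to be the index tuple of the cell where the unit was placed
+    with tf.GradientTape() as tape:
+        q_values = model(state, training=True)                # full predicted grid
+        q_at_action = tf.gather_nd(q_values[0], [action])     # value at the chosen cell only
+        loss = huber([target_value], q_at_action)
+    grads = tape.gradient(loss, model.trainable_variables)
+    optimizer.apply_gradients(zip(grads, model.trainable_variables))
+    return loss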
+"
+"['reinforcement-learning', 'q-learning', 'monte-carlo-methods', 'value-functions']"," Title: What are the popular approaches to estimating the Q-function?Body: I need the q-value for my RL training, there are some approaches:
+
+- Brute-force the action sequence (this won't work for long sequence)
+- Use a classic algorithm to optimise and estimate (this ain't much AI)
+- Create Monte Carlo samples and train an approximator network for calculating q-value
+
+I find the Monte Carlo method above rather widely applicable to different problems, and the more computing power, the more precise it is. Any other methods for calculating q-value?
+"
+"['u-net', 'batch-normalization', 'adam', 'mini-batch-gradient-descent', 'semantic-segmentation']"," Title: Why does my model not improve when training with mini-batch gradient descent, while it does with Adam?Body: I am currently experimenting with the U-Net. I am doing semantic segmentation on the 2018 Data Science Bowl dataset from Kaggle without any data augmentation.
+In my experiments, I am trying different hyper-parameters, like using Adam, mini-batch GD (MBGD), and batch normalization. Interestingly, all models with BN and/or Adam improve, while models without BN and with MBGD do not.
+How could this be explained? If it is due to the internal covariate shift, the Adam models without BN should not improve either, right?
+In the image below is the binary CE (BCE) train loss of my three models where the basic U-Net is blue, the basic U-Net with BN after every convolution is green, and the basic U-Net with Adam instead of MBGD is orange. The learning rate used in all models is 0.0001. I have also used other learning rates with worse results.
+
+"
+"['deep-learning', 'backpropagation']"," Title: Why are the weights of the previous layers updated only considering the old values of the weights of the later layer, not the updated values?Body: Why are the weights of a neural net updated only considering the old values of the later layer, not the already updated values?
+I use this example to explain my problem. When applying the backpropagation chain rule, the weights of the previous layer ($w_1, w_2, w_3, w_4$) are updated making use of the chain rule:
+$$\frac{\partial E_{total}}{\partial w_1} = \frac{\partial E_{total}}{\partial out_{h1}} * \frac{\partial out_{h1}}{\partial net_{h1}}*\frac{\partial net_{h1}}{\partial w_1}$$
+He then says:
+$$\frac{\partial net_{o1}}{\partial out_{h1}}=w_5$$
+Although he has already calculated the updated value for $w_5$, he uses the old value of $w_5$ to update $w_1$? Because the updated value of $w_1$ will have an impact on the outcome together with the updated value of $w_5$?
+"
+"['reinforcement-learning', 'dqn', 'reward-functions', 'reward-design', 'multi-objective-rl']"," Title: Can the rewards be matrices when using DQN?Body: I have a basic question. I'm working towards developing a reward function for my DQN. I'd like to train an RL agent to edit pixels on an image. I understand that convolutions are ideal for working with images, but I'd like to observe the agent doing it in real-time. Just a fun side project.
+Anyway, to encourage an RL agent to craft a specific image I'm crafting a reward function that returns a $N \times N$ dimensional matrix. Which represents the distance between the state of the target image (RGB values for each pixel location) and the image the agent crafted.
+Generally speaking, is it better for rewards to be a scalar, or is using matrices okay?
+"
+"['machine-learning', 'tensorflow']"," Title: NN to find arbitrary transformationBody: Problem description
+I'm creating a clock with 4 seven-segment LED displays. In an effort to get more familiar with tensorflow, I figured I should try to drive this clock with use of a Neural Network.
+The input of the network is the Unix time.
+Initially I wanted to make the network output a UINT32 value, which I can then bit-shift into the shift-registers I use for driving the LEDS. Because this proved to be unsuccessful, I removed the last step, and instead went with 32 booleans for every led-segment as output.
+Current status
+This last action was unfruitful as well, the best I get my network is a loss of about 0.48, indicating to me that it's best effort is to guess what the output could be.
+Input
+print(x_validate[0])
+print(y_validate[0])
+3116360099
+[0 1 1 0 0 1 1 0 1 1 1 1 0 0 1 0 1 1 1 1 1 1 0 0 1 1 1 1 1 1 0 0]
+
+The input is the unix time code. I tried normalizing this by dividing by 2^32, but this didn't make any significant difference.
+Validation is an array with either 0 or 1 based on whether the LED needs to be on or off. The first 8 bits represent the first 7-segment display, etc. (The 8th bit is never used here because that's connected to the dot on the display.)
+Data generation
+# Number of sample datapoints
+SAMPLES = 2000000
+
+# Generate a uniformly distributed set of random numbers in the range from
+# 0 to uin32 max
+x_values = np.random.uniform(
+ low=0, high=2**32, size=SAMPLES).astype(np.uint32)
+
+# Shuffle the values to guarantee they're not in order
+np.random.shuffle(x_values)
+
+print(x_values)
+
+# Time helper function
+def to_utc(x):
+ return datetime.utcfromtimestamp(x).replace(tzinfo=timezone.utc).astimezone(pytz.timezone("Europe/Amsterdam")).strftime('%H%M')
+to_utc_batch = np.vectorize(to_utc)
+
+y_values = to_utc_batch(x_values)
+print(y_values)
+
+# translate to bitstream
+def lookup(number):
+ switch = {
+ 0: [1,1,1,1,1,1,0,0],
+ 1: [0,1,1,0,0,0,0,0],
+ 2: [1,1,0,1,1,0,1,0],
+ 3: [1,1,1,1,0,0,1,0],
+ 4: [0,1,1,0,0,1,1,0],
+ 5: [1,0,1,1,0,1,1,0],
+ 6: [1,0,1,1,1,1,1,0],
+ 7: [1,1,1,0,0,0,0,0],
+ 8: [1,1,1,1,1,1,1,0],
+ 9: [1,1,1,1,0,1,1,0]
+ }
+ return switch.get(number)
+
+print(y_values)
+
+def compile_output(value):
+ f = []
+ for i, c in reversed(list(enumerate(value))):
+ f = f + lookup(int(c))
+ return f
+
+output_values = []
+
+for y in y_values:
+ output_values.append(compile_output(y))
+
+y_values = output_values
+
+After, data is distributed in 3 sets:
+Training: 60%
+Validation: 20%
+Testing: 20%
+
+Model
+model = tf.keras.Sequential()
+model.add(keras.layers.Dense(32, activation='relu', input_shape=(1,)))
+model.add(keras.layers.Dense(64, activation='relu'))
+model.add(keras.layers.Dense(64, activation='relu'))
+
+# Final layer
+model.add(keras.layers.Dense(32, activation='sigmoid'))
+
+model.compile( optimizer='adam', loss='binary_crossentropy', metrics=["accuracy"])
+
+I went with binary_crossentropy and sigmoid as I figured the case is essentially a multi-label setup.
+I have tried the following already, but did not succeed:
+
+- Add more layers
+- Make dense layers wider (to max of 512)
+- Add a Dropout layer -> so to have it try out more things to find the relation
+- Use Softmax activation
+- Normalize input data by dividing Unix time by 2^32
+- enlarge sample data to 20.000.000
+- do anything between 10 and 50 epochs ( I usually quit after 15 epochs, when absolutely no change was observed between the last 5 epochs)
+
+Question
+
+- Why is this model not successful in finding a relation between the data?
+- How can I improve this model or the data so it will be able to succeed?
+
+Bonus
+When successful, ideally the output of the model would be a UINT32 number instead of this array of booleans. Any tips on how to get there would be appreciated as well.
+Edit:
+Really sorry, left out this particular line:
+# Train the model
+history = model.fit(x_train, y_train, epochs=10, batch_size=64,
+ validation_data=(x_validate, y_validate))
+
+"
+"['neural-networks', 'reinforcement-learning', 'q-learning', 'dqn', 'deep-rl']"," Title: What is the target output for updating a Deep Q NetworkBody: I'm trying to implement Deep Q-Learning for a pet problem having a continuous state space and discretized action space.
+The algorithm for table-based Q-Learning updates a single entry of the Q table - i.e. a single $Q(s, a)$. However, a neural network outputs an entire row of the table - i.e. the Q-values for every possible action for a given state. So, what should the target output vector be for the network?
+I've been trying to get it to work with something like the following:
+q_values = model(state)
+action = argmax(q_values)
+next_state = env.step(state, action)
+next_q_values = model(next_state)
+max_next_q = max(next_q_values)
+
+target_q_values = q_values
+target_q_values[action] = reward(next_state) + gamma * max_next_q
+
+The result is that my model tends to converge on some set of fixed values for every possible action - in other words, I get the same Q-values no matter what the input state is. (My guess is that this is because, since only 1 Q-value is updated, the training is teaching my model that most of its output is already fine.)
+What should I be using for the target output vector for training? Should I calculate the target Q value for every action, instead of just one?
+"
+"['reinforcement-learning', 'sutton-barto', 'policy-evaluation', 'dynamic-programming']"," Title: Why is the update in-place faster than the out-of-place one in dynamic programming?Body: In Barto and Sutton's book, it's written that we have two types of updates in dynamic programming
+
+- Update out-of-place
+- Update in-place
+
+The update in-place is the faster one. Why is that the case?
+This is the pseudocode that I used to test it.
+if in_place:
+ state_values = new_state_values
+else:
+ state_values = new_state_values.copy()
+old_state_values = state_values.copy()
+
+for i in range(WORLD_SIZE):
+ for j in range(WORLD_SIZE):
+ value = 0
+ for action in ACTIONS:
+ (next_i, next_j), reward = step([i, j], action)
+ value += ACTION_PROB * (reward + discount * state_values[next_i, next_j])
+ new_state_values[i, j] = value
+
+max_delta_value = abs(old_state_values - new_state_values).max()
+if max_delta_value < 1e-4:
+ break
+
+Why is the in-place version faster, and what is the difference? What I think is that it is only better for storage usage, I don't understand the speed increase part.
+"
+"['convolutional-neural-networks', 'object-detection', 'mask-rcnn']"," Title: Improving Mask RCNN by arbitrary scaling head inputBody: Currently, I am looking at how Mask R-CNN works. It has a backbone, RPN, heads, etc. The backbone is used for creating the feature maps, which are then passed to the RPN to create proposals. Those proposals would then be aligned with feature maps and rescaled to some $n \times n$ pixels before entering box head or mask head or keypoint head.
+Since conv2D is not scale-invariant, I think this scaling to $n \times n$ would introduce scale-invariant characteristics.
+For an object that is occluded or truncated, I think scaling to $n \times n$ is not really appropriate.
+Is it possible if I predict the visibility of the object inside the box head (outputting not only xyxy [bounding box output], but also xyxy+x_size y_size [bounding box output + width height scale of object]). This x_size and y_size value would then be used to rescale $n \times n$ input.
+So, if only half of the object is seen (occluded or truncated), inputs inside the keypoint head or mask head would be 0.5x by 0.5x.
+Is this a good approach to counter occlusion and truncation?
+"
+"['neural-networks', 'policies', 'deterministic-policy', 'stochastic-policy', 'softmax-policy']"," Title: Is a learned policy, for a deterministic problem, trained in a supervised process, a stochastic policy?Body: If I trained a neural network with 4 outputs (one for each action: move down, up, left, and right) to move an agent through a grid (deterministic problem). The output of the neural network is a probability distribution over the 4 actions, due to the softmax activation function.
+Is the policy (based on the neural network) a stochastic policy, even if the action space is discrete?
+"
+"['machine-learning', 'classification', 'text-classification']"," Title: How to go about classifying 1000 classes?Body: I am trying to find research paper with theory(preferably implementation) that is about classifying 1000 (or more) classes. I have heard of an implementation, that initially clustering needs to be done then classification with something like softmax. Does anyone know of any research paper that implements 1000+ class classification.
+"
+"['computer-vision', 'classification', 'datasets', 'object-detection', 'yolo']"," Title: How to treat (label and process) edge case inputs in machine learning?Body: In every computer vision project, I struggle with labeling guidelines for border cases. Benchmark datasets don't have this problem, because they are 'cleaned', but in real life unsure cases often constitute the majority of data.
+Is 15% of a cat's tail a cat? Is a very blurred image of a cat still a cat? Are 4 legs of a horse, but the rest of its body of the frame still a horse?
+Would it be easier or harder to learn a regression problem instead of classification? I.e by taking 5 subclasses of class confidence (0.2,0.4,0.6,0.8,1.) and using them as soft targets?
+Or is it better to just drop every unsure case from training or/and testing set?
+I experimented a lot with different options, but weren't able to get any definitive conclusion. This problem is so common that I wonder if it has already been solved for good by someone?
+"
+"['reinforcement-learning', 'reward-functions', 'sparse-rewards', 'delayed-rewards', 'potential-reward-shaping']"," Title: How to improve the reward signal when the rewards are sparse?Body: In cases where the reward is delayed, this can negatively impact a models ability to do proper credit assignment. In the case of a sparse reward, are there ways in which this can be negated?
+In a chess example, there are certain moves that you can take that correlate strongly with winning the game (taking the opponent's queen) but typically agents only receive a reward at the end of the game, so as to not introduce bias. The downside is that training in this sparse reward environment requires lots of data and training episodes to converge to something good.
+Are there existing ways to improve the agent's performance without introducing too much bias to the policy?
+"
+"['python', 'keras', 'long-short-term-memory', 'ensemble-learning']"," Title: How to make an ensemble model of two LSTM models with different window sizes i.e. different data shapesBody: Below is the Python code for making an ensemble model. All the inputs are the same for all three models. But what if the models have different input shapes due to different window size, such as LSTM models. So the input shapes for Model A would be (window_size_A, features) and for Model B would be (window_size_B, features). The window sizes are different but the number of features are the same. As such, due to the different window size, the training data of the same dataset is split differently for each model such that the X_train.shape for model A: (train_data_A, window_size_A, output) And for Model B: (train_data_B, window_size_B, output). Note the training data is from the same dataset but the length is different due to the different window size. How would you make an ensemble of these models?
+def get_model():
+ inputs = keras.Input(shape=(128,))
+ outputs = layers.Dense(1)(inputs)
+ return keras.Model(inputs, outputs)
+
+
+model1 = get_model()
+model2 = get_model()
+model3 = get_model()
+
+inputs = keras.Input(shape=(128,))
+y1 = model1(inputs)
+y2 = model2(inputs)
+y3 = model3(inputs)
+outputs = layers.average([y1, y2, y3])
+ensemble_model = keras.Model(inputs=inputs, outputs=outputs)
+
+"
+"['neural-networks', 'transformer', 'attention', 'machine-translation']"," Title: What is the purpose of ""alignment"" in the self-attention mechanism of transformers?Body: I've been reading about transformers & have been having some difficulty understanding the concept of alignment.
+Based on this article
+
+Alignment means matching segments of original text with their corresponding segments of the translation.
+
+Does this mean that, with transformers, we're adding the fully translated sentences as inputs too? What's the purpose of alignment? How exactly do these models figure out how to match the different segments together? I'm pretty sure there's some underlying assumption/knowledge that I'm not fully getting -- but I'm not entirely sure what.
+"
+"['multi-armed-bandits', 'online-learning', 'upper-confidence-bound']"," Title: UCB-like algorithms: how do you compute the exploration bonus?Body: My question concerns Stochastic Combinatorial Multiarmed Bandits. More specifically, the algorithm called CombUCB1 presented in this paper. It is a UCB-like algorithm.
+Essentially, in each round of the sequential game the learner chooses a super-arm to play. A super-arm is a $d$-dimensional vector $a \in \mathcal{A}$ where $\mathcal{A} \subset \{0,1\}^d$. In each super-arm $a$, when the $i$-element equals to $1$ ( $i \in \{0, \dots, d\}, a(i)=1$ ), that means that the basic action $i$ is active. Basically, in each round the learner plays the basic actions that are active in the chosen super-arm. The rewards of the basic actions are stochastics and a super-arm receives as a reward the sum of the rewards of the basic active actions.
+The algorithm mentioned above presents a UCB-like algorithm, where with each basic action is associated a UCB-index and in each round the learner plays the super-arm that maximises that index.
+My question concerns the confidence interval around the mean of the rewards of the basic actions, presented in equation $2$ of the mentioned paper. Here, the exploration bonus is
+$c_{t,s} = \sqrt{\frac{1.5 \log t}{s}}$
+I don't understand where that $1.5$ is coming from.
+I've always known that one needs to use Chernoff-Hoeffding inequality to derive the exploration bonus in a UCB-algorithm. Am I wrong and it needs to be computed in other ways? I've always seen the same coefficient but with $2$ instead of $1.5$ (reference).
+Could someone explain me where does $1.5$ come from, please?
+I know there is a similar question here, but I cannot really understand how that works here.
+Thank you in advance in case you have time to read and answer my questions.
+"
+"['machine-learning', 'deep-learning', 'classification']"," Title: Can ML/DL solve my classification problem?Body: I'm new to AI but would still like to try and get a project off the ground.
+I've read a lot about ML/DL the past few days but I just can't figure out if my problem can be solved with ML/DL. What I'm trying to do looks like a classification job to me but maybe isn't.
+I have 100s of images of compacted soil samples, on these images there may be multiple layers visible.
+I will include a picture below, this sample had a sticker on it, normally they don't. On the image there are 3 layers, separated above and under the sticker.
+With every image there is data (xml file) available on the size of the layer(s) and the type of soil in that layer, which costs a lot of time to produce, so I want to automate this classification in the future.
+The data files contains info like:
+layer0:
+ type 004
+ 2cm
+ 12cm
+layer1:
+ type 003
+ 12cm
+ 25cm
+
+If there would be just one layer, the AI could learn what these layers look like and sort them in the right soil class.
+But I don't know if my problem can be solved with AI as there could be 1, 2, 3 or 4 different layers (classes) on one image and I haven't seen any examples on classification where there can be multiple classes in one image.
+As AI is quite a steep learning curve I would like to know if my problem is suited for ML/DL before I spend more of my nights reading for something that might not work.
+I've read numerous websites and a few short books but can't find an answer to my questions.
+Can ML/DL solve my multi-class single-image classification problem and which strategy should I read into?
+
+"
+"['recommender-system', 'knowledge-base', 'decision-support-system']"," Title: Recent methods for Decision Support System (DSS)Body: In Decision Support System (DSS), we rank items based on predetermined weighted criteria. For example, we want to rank prospective programmers based on their working experience, required salary, set of skills, age, etc. We rank using weights for each criterion that we have previously defined. The simplest method is using Simple Additive Weighting (SAW).
+As far as I know, DSS is included in knowledge-based AI (it's a mandatory subject in AI specialization in most universities in my country).
+My question:
+
+With the development of AI/ML/DL today, is there another modern approach that can be used to solve similar problems?
+
+At first, I thought it's similar with Content-Based Recommender System, but it looks different as we don't have "user" in DSS.
+"
+"['neural-networks', 'recurrent-neural-networks', 'feedforward-neural-networks', 'multilayer-perceptrons']"," Title: Why do feedforward neural networks require the inputs to be of a fixed size, while RNNs can process variable-size inputs?Body: Why does a vanilla feedforward neural network only accept a fixed input size, while RNNs are capable of taking a series of inputs with no predetermined limit on the size? Can anyone elaborate on this with an example?
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'action-spaces', 'discretization']"," Title: What should the input and output of the Q-network be in the case of an ordinal action space?Body: I recently started looking into implementations of the DQN algorithm (e.g. TensorFlow) in some more detail. All the implementations that I found use a network that gives an output for each possible action (e.g. if you have three possible actions you will have three output units in your network). This makes a lot of sense from a computational standpoint and seems to work fine if you are dealing with categorical action spaces (e.g "left" or "right").
+However, I am currently working with an action space that I discretized and the actions have an ordinal meaning (e.g. you can drive left or right in 5-degree increments). I assume that the action-value function has some monotonicity in the action component (think driving 45 degrees to the left instead of 40 will have a similar value).
+Am I losing information on the similarity of actions, if I use a network that has an output unit for each possible action?
+Are there implementations of the DQN available in which actions are used as network inputs?
+"
+"['natural-language-processing', 'python', 'pretrained-models', 'fine-tuning']"," Title: How to scrape product data on supplier websites?Body: I'm currently trying to build a semantic scraper that can extract product information from different company websites of suppliers in the packaging industry (with as little manual customization per supplier/website as possible).
+The current approach that I'm thinking of is the following:
+
+- Get all the text data via scrapy (so basically a HTML-tag search). This data would hopefully be already semi-structured with for example: name, description, product image, etc.
+- Fine-tune a pre-trained NLP model (such as BERT) on a domain specific dataset for packaging to extract more information about the product. For example: weight and size of the product
+
+What do you think about the approach? What would you do differently?
+One challenge I already encountered is the following:
+
+- Not all of the websites of the suppliers are as structured as for example e-commerce sites are → So small customisations of the XPath for all websites is needed. How can you scale this?
+
+Also does anyone know an open-source project as a good starting point for this?
+"
+"['dqn', 'deep-rl', 'convergence', 'value-functions', 'policies']"," Title: Why do I get the best policy before Q values converge using DQN?Body: I have implemented DQN algorithm and wonder why during testing, the best performance is achieved by a policy from about 300 episode, when mean Q values converge at about 800 episode?
+
+- Mean Q-values are calculated on a fixed set of states by taking mean of max Q-values for each state.
+- By convergence I mean that the plot of mean Q-values converge to some level (those values does not increase to infinity).
+
+It can be seen in here (page 7) that mean Q-values converge and average rewards plot is quite noisy. I get similar results and in tests, the best policy is where the peaks are during training (average reward plot). I don't understand why don't I get better average scores over time (and better policies) when Q-values converge.
+"
+"['machine-learning', 'deep-learning', 'facial-recognition']"," Title: Is it possible to do face recognition with just the eyes?Body: Assuming the input photo is focused on a person's face, if the person is wearing a surgical mask, most face recognition software fail to identify the subject's face.
+Most facial landmark models are trained to identify at least the eyes and the tip of the nose (for example, dlib's 5 point landmark).
+Is it possible to construct a model that is trained to identify a face based on only the eyes?
+Edit: Sorry for my broken english, but by "eyes" I mean the periocular area. I am terribly sorry because english isn't my first language.
+"
+"['natural-language-processing', 'word-embedding']"," Title: Is there a reason why no one combines word embeddings with the median?Body: Could you combine word embeddings with the median per dimension to get a document embedding? In my case I have a huge amount of words to build one document, which in turn should describe a topic. I feel like using the median is the right thing to do, as I get the most common parameter value per dimension. However, I cannot find anyone trying it before. This is why I'm wondering, is there something speaking against it?
+"
+"['linear-regression', 'mean-squared-error', 'l2-regularization', 'l1-regularization']"," Title: Would either $L_1$ or $L_2$ regularisation lower the MSE on the training and test data?Body: Consider linear regression. The mean squared error (MSE) is 120.5 for the training dataset. We've reached the minimum for the training data.
+Is it possible that by applying Lasso (L1 regularization) we would get a lower MSE for the training data? Would it get lower for the test data? Would this also hold for ridge regression (L2 regularization)?
+"
+"['neural-networks', 'terminology', 'papers', 'filters']"," Title: Visualizing the Loss Landscape of Neural Nets: Meaning of the word 'filter'?Body: I found myself scratching my head when I read the following phrase in the paper Visualizing the Loss Landscape of Neural Nets:
+
+To remove this scaling effect, we plot loss functions using filter-wise normalized directions. To obtain such directions for a network with parameters $\theta$, we begin by producing a random Gaussian direction vector $d$ with dimensions compatible with $\theta$. Then, we normalize each filter in $d$ to have the same norm of the corresponding filter in $\theta$. In other words, we make the replacement $d_{i,j} \leftarrow d_{i,j} \| d_{i,j}\| \| \theta_{i,j}\| $
+
+I'm completely unclear what the authors are referring to when they refer to the filters of the vector $d$ in weight space. As far as I can tell, the vector $d$ is a standard vector in weight space ($W$) with a number of components equal to the number of changeable weights in the network. In my opinion, it could be said that each layer in the network can be visualized as a vector in weight space ($\theta_{i}$) with:
+$$\theta = \sum_{i}\theta_{i}$$
+and then maybe these vectors $\theta_{i}$ are called filters? But how this would have anything to do with the random vector $d$, generated in this space, remains a complete mystery to me.
+"
+"['recurrent-neural-networks', 'objective-functions', 'cross-entropy', 'mean-squared-error', 'root-mean-square']"," Title: What error should I use for RNN?Body: I'm relatively new to machine learning, and I don't know what error I should use for an RNN.
+I want to use a simple Elman RNN to predict the cases of Covid-19 there will be in a hospital for the next 15 days. I modeled this as a regression problem, treating the input like a bunch of dots in a graph to predict the tendency that the data is going to take (only show if there will be more cases or less).
+With that bunch of dots I in fact refer to this:
+
+Then I would treat this problem as a regression.
+I actually don't have anything programmed yet. Firstly I want to write it all on a paper and then get down to work. I am also considering focusing the problem to predict the actual plot of the time-series input, but right now I want to try the regression.
+I've come to the conclusion that I can use these four different errors:
+
+- MSE
+- RMSE
+- Entropy
+- Cross-entropy
+
+What are the different characteristics of these errors? Which to use? Where and when to use them?
+"
+"['machine-learning', 'training', 'pattern-recognition']"," Title: Number Series Continuation?Body: I am new to AI.
+I have a series of numbers ranging from x to y and I have a lot of data to train with
+What I am trying to do is, let's say from 0 to 1, I train it with data calculated over time and predict what may happen next, training it with my data and then feeding it the last few days and continue the pattern.
+I have been thinking about using char-rnn, but from what i understand the data exported is arbitrary and not a continuation of a series. I oftentimes see videos on youtube "AI continues this song" so I'm wondering which I can use and where I can get started to do this myself.
+Thank you and have a nice day ☺
+"
+"['natural-language-processing', 'transformer', 'attention', 'word-embedding']"," Title: What kind of word embedding is used in the original transformer?Body: I am currently trying to understand transformers.
+To start, I read Attention Is All You Need and also this tutorial.
+What makes me wonder is the word embedding used in the model. Is word2vec or GloVe being used? Are the word embeddings trained from scratch?
+In the tutorial linked above, the transformer is implemented from scratch and nn.Embedding from pytorch is used for the embeddings. I looked up this function and didn't understand it well, but I tend to think that the embeddings are trained from scratch, right?
+"
+"['machine-learning', 'computational-learning-theory', 'learning-algorithms']"," Title: Can an ML model sort a random sequence of numbers from 1 to $ 2^{2^{512}} $ in our universe in infinite time?Body: I am pondering on the question in the title. As a human being, somehow I can sort a random sequence of numbers from 1 to $ 2^{2^{512}} $ in our universe in infinite time (But I am not sure.). Can an ML model do that in our universe if it is provided with infinite time? There is no restriction on how the learning algorithm is supposed to learn how to sort. (Be careful, even $ 2^{512} $ is bigger than the number of atoms in the universe. Therefore you will have limited memory.)
+"
+"['machine-learning', 'classification', 'k-nearest-neighbors']"," Title: Is it possible to use k-nearest neighbour for classification with more than two attributes?Body: If I were to have a dataset of 9 attributes of different types that describe current weather, such as temperature, humidity, etc., and want to classify the current weather by use of a k-NN algorithm, is this possible?
+From what I understand, k-NN has two different attributes that are plotted, and, wherever a point is drawn, its nearest neighbors will classify it.
+Could I do the same thing but each data point is placed based on its 9 attributes?
+"
+"['reinforcement-learning', 'dqn', 'state-spaces', 'action-spaces', 'double-dqn']"," Title: How should I model the state and action spaces for a problem where the goal is to draw a line between two points?Body: I have a problem where the goal is for the agent to draw a single line between two points on a $500 \times 500$ white image.
+I have built my DQN. For now, the output layer's size of the network is $[1, 500 * 500]$. This way, when a Q value is given within a single step, it can be mapped to a single coordinate within the space of the image.
+So, with that, I'm able to get a starting point for the line. However, what that doesn't give me is the location of the second point for completion of the line.
+One thing I have tried is drawing a line for every two steps. However, this means that the state of the environment does not change for each step.
+Should the goal be to change the environment/state for each step or does this not matter? Maybe I have not found the ideal way of modelling the state and action spaces for this problem.
+"
+"['reinforcement-learning', 'game-ai']"," Title: Embedding Isolation game states into key values for RLBody: I'm trying to think of how I can embed a game's state into a unique key value. The game I'm specifically working with is Isolation: https://en.wikipedia.org/wiki/Isolation_(board_game). The game state has the coordinates of player 1's pawn, coordinates of player 2's pawn, coordinates of free spaces and coordinates of already used spaces. Is there a way to embed this into a unique key value? My plan is to generate a dict and use that for value iteration with RL to learn the optimal value function for every state.
+"
+"['deep-learning', 'convolutional-neural-networks', 'keras', 'image-processing', 'data-science']"," Title: Does the order of data augmentation and normalization matter?Body: What is the preferred order of data augmentation and normalization? Is it the former followed by the latter?
+"
+"['alpha-beta-pruning', 'iddfs']"," Title: Beating iterative alpha beta search in Isolation GameBody: I'm having trouble beating an AI in Isolation game: https://en.wikipedia.org/wiki/Isolation_(board_game) with 3 queens on a 7x7 board. I tried applying alpha beta iteration with a scoring function on the state. I tried 3 scoring functions:
+
+- (0.75/# moves taken) * number of legal moves - (# moves taken/0.75) * number of opponent legal moves
+- number of legal moves - 3 * number of opponent legal moves
+- 3 * number of legal moves - number of opponent legal moves
+
+The first is an annealed aggressive strategy so the agent gets more aggressive as the game goes longer. The 2nd is a pure aggression strat and the last is a pure defensive strat. None of them consistently beat standard alpha beta iteration with the state scoring function: number of legal moves - number of opponent legal moves. They all broke roughly even.
+Any suggestions of scoring state functions or search algorithms are appreciated. The search has a limit of 6000 seconds per turn though.
+"
+"['prediction', 'text-classification', 'imbalanced-datasets', 'multiclass-classification']"," Title: Multi class text classification when having only one sample for classesBody: I have a dataset of texts, each text was identified with an ID number. I would like to do a prediction by finding the best match ID number for upcoming new texts. To use multi text classification, I am not sure if this is the right approach since there is only one text for most of ID numbers. In this case, I wouldn't have any test set. Can up-sampling help? Or is there any other approach than classification for such a problem?
+The data set looks like this:
+id1 'text1', id2 'text2', id3 'text3', id3 'text4', id3 'text5', id4 'text6', . . id200 'text170'
+I would appreciate any guidance to find the best approach for this problem.
+"
+"['neural-networks', 'implementation', 'weights']"," Title: Is there a convention on the order of multiplication of the weights with the inputs in neural nets?Body: Is there a convention on how the input data and the weights are multiplied? The input data can be anything, including the result from the previous layers.
+There are two options:
+Option 1:
+$$\begin{bmatrix}i_1 & i_2\end{bmatrix} \times \begin{bmatrix} w_1 & w_2 & w_3\\w_4 & w_5 & w_6\end{bmatrix} = \begin{bmatrix}i_1*w_1 + i_2*w_4 & i_1*w_2+i_2*w_5 &i_1*w_3+i_2*w_6\end{bmatrix}$$
+Option 2:
+$$\begin{bmatrix} w_1 & w_4\\ w_2 & w_5\\ w_3 & w_6\end{bmatrix} \times \begin{bmatrix}i_1 \\ i_2\end{bmatrix} = \begin{bmatrix}i_1*w_1 + i_2*w_4 & i_1*w_2+i_2*w_5 &i_1*w_3+i_2*w_6\end{bmatrix}$$
+"
+['monte-carlo-tree-search']," Title: Does Monte Carlo Tree Search not work on games without the same initial state?Body: I'm curious how you would apply Monte Carlo Tree Search to a game that has a random initial state. You generate a tree where the root node is the initial state, then you expand if the options from that state are not explored yet.
+I'm also wondering how this works in 2 player games. After your opponent moves, does each state in the tree have a key to look up in a dictionary? Otherwise, the algorithm won't know what to do when there's a jump in a state between choosing your action on your turn and when your opponent moves, unless you also store your opponent's move in the tree.
+"
+"['machine-learning', 'random-variable']"," Title: What machine learning model should I use for a random dice-based game?Body: Consider a game like Pig (https://en.wikipedia.org/wiki/Pig_(dice_game)), but with a few additions: namely functions of both player's score and turn number that have unique impacts on scoring.
+What machine learning model should I use to try and get the optimal number of dice roles per turn (say number of dice roles are bounded between 1 and 10)?
+I was reading this tutorial: https://towardsdatascience.com/playing-cards-with-reinforcement-learning-1-3-c2dbabcf1df0, and they suggested reinforcement learning with a Q value function. I don't know how this work though, because turn number isn't bounded, but also needs to be a parameter to the Q value function. Multiplying the range of all parameters suggest this Q value function needs 2,000,000 states. Is this too many? - I have no idea how to judge this.
+Is there a better model I should use to try and solve this problem which at its core takes the parameters of (my_score, opponent_score, turn_number) and should return a number 0 - 10 representing how many dice to roll.
+"
+"['neural-networks', 'training', 'data-preprocessing', 'data-augmentation', 'data-labelling']"," Title: Data Augmentation of store images using handwritten labelsBody: I am new to AI and NN. I've started learning using Geron's book on Tensorflow.
+My first project ("Smart Shelf") is to determine which items in a store have been purchased and need refilled. The store camera periodically takes pictures of the tops of items on store shelves. To start, we have only 5 distinct products.
+We have created ~250 handwritten images of product-labels that cover these 5 distinct products. So far, the training results are way below our expectation.
+I am thinking to augment the training data and see whether it would make any difference.
+I have thought about the following strategies:
+
+- Train the model again using grayscale images. https://stackoverflow.com/questions/45320545/impact-of-converting-image-to-grayscale/45321001
+- Invert images, translate them horizontally or vertically https://www.tensorflow.org/tutorials/images/data_augmentation, https://nanonets.com/blog/data-augmentation-how-to-use-deep-learning-when-you-have-limited-data-part-2/
+
+Which of the above will yield better results and why? I am curious. Thanks for any help. I feel that I know various data augmentation techniques, but not sure how and why to apply them.
+
+It seems this is a popular question, as learned from Choosing Data Augmentation smartly for different application etc.
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'autoencoders', 'overfitting']"," Title: Dealing with bias in multi-channel auto encodersBody: The problem
+I have a multi-channel 1D signal I want to auto-encode.
+I am unable to resonstruct the input when the number of channels increases.
+
+Code
+I am using a convolutional encoder, and a convolutional decoder:
+latent_dim: 512
, frames_per_sample: 128
+ self._encoder = nn.Sequential(
+ nn.Conv1d(in_channels=self._n_in_features, out_channels=50, kernel_size=15, stride=1, padding=7),
+ nn.LeakyReLU(inplace=True),
+ nn.Conv1d(in_channels=50, out_channels=50, kernel_size=7, stride=1, padding=3),
+ nn.LeakyReLU(inplace=True),
+ nn.Conv1d(in_channels=50, out_channels=50, kernel_size=3, stride=1, padding=1),
+ nn.LeakyReLU(inplace=True),
+ # nn.Flatten(start_dim=1, end_dim=-1)
+ nn.Conv1d(in_channels=50, out_channels=1, kernel_size=1, stride=1, padding=0),
+ nn.Flatten(start_dim=1, end_dim=-1),
+ nn.Linear(frames_per_sample, self._config.case.latent_dim)
+ )
+
+and
+ start_channels = 256
+ start_dim = frames_per_sample // (2 ** 4)
+
+ start_volume = start_dim * start_channels
+ self._decoder = nn.Sequential(
+ nn.Linear(self._config.case.latent_dim, start_volume),
+ nn.LeakyReLU(inplace=True),
+ # b, latent
+ nn.Unflatten(dim=1, unflattened_size=(start_channels, start_dim)),
+ # b, start_channels, start_dim
+ nn.Upsample(scale_factor=2, mode='linear', align_corners=False),
+ # b, start_channels, start_dim*2
+ nn.Conv1d(in_channels=start_channels, out_channels=128, kernel_size=3, stride=1, padding=1),
+ # b, 128, start_dim*2
+ nn.LeakyReLU(inplace=True),
+ # b, 128, start_dim*2
+ nn.Upsample(scale_factor=2, mode='linear', align_corners=False),
+ # b, 128, start_dim*4
+ nn.Conv1d(in_channels=128, out_channels=64, kernel_size=7, stride=1, padding=3),
+ # b, 64, start_dim*4
+ nn.LeakyReLU(inplace=True),
+ # b,64, start_dim*4
+ nn.Upsample(scale_factor=2, mode='linear', align_corners=False),
+ # b, 64, start_dim*8
+ nn.Conv1d(in_channels=64, out_channels=32, kernel_size=11, stride=1, padding=5),
+ # b, 32, start_dim*8
+ nn.LeakyReLU(inplace=True),
+ # b, 32, start_dim*8
+ nn.Upsample(scale_factor=2, mode='linear', align_corners=False),
+ # b, 32, start_dim*16
+ nn.Conv1d(in_channels=32, out_channels=16, kernel_size=21, stride=1, padding=10),
+ # b, 16, start_dim*16
+ nn.LeakyReLU(inplace=True),
+ # b, 16, start_dim*16
+ nn.Conv1d(in_channels=16, out_channels=self._n_features, kernel_size=3, stride=1, padding=1),
+ )
+
+I am not putting the entire code/data here because this is a theoretical question, and I don't expect anyone to go and run this.
+
+Results
+The result (orange) has artifacts on the edges, relative to the input data (blue):
+This is easy to see on training examples:
+
+Worse - for unseen examples (validation), reconstruction misses on the bias
+
+
+Observation
+The above only starts to happen when adding more channels, which have different biases.
+I am normalizing the entire dataset to sit between -1 and 1, but still each channel has its own typical boundary.
+Here is a (nice) result, for a single channel:
+
+
+What I think
+My guess - Multiple channels force filters to have a single bias, which doesn't fit all of them.
+The edges problems are due to bias + zero padding, and the validation data is due to bias that doesn't agree with all channels.
+
+Questions:
+
+- Does my analysis make sense?
+- What is a possible way to solve this?
+
+
+My thoughts:
+
+- A distinct bias per channel on at least the last layer. How to do this in Pytorch?
+- Normalizing per-sample (and not per channel) just before passing it to the model, then de normalizing the reconstructed sample.
+
+I don't know how to correctly implement either of those, nor if they make sense, or how to check.
+Also posted here, but I think this also belongs on ai.stackexchange
+"
+"['reinforcement-learning', 'markov-decision-process', 'planning', 'dynamic-programming']"," Title: What trait of a planning problem makes reinforcement learning a well suited solution?Body: Planning problems have been the first problems studied at the dawn of AI (Shakey the robot). Graph search (e.g. A*) and planning (e.g. GraphPlan) algorithms can be very efficient at generating a plan. As for problem formulation, for planning problems PDDL is preferred. Although planning problems in most cases only have discrete states and actions, the PDDL+ extention covers continuos dimensions of the planning problem also.
+If a planning problem has non-deterministic state transitions, classical planning algorithms are not a well suited solution method, (some) reinforcement learning is considered to be a well suited solution in this case. From this point of view, if a planning problem has a state transition probability less than 1, classical planning methods (A*, GraphPlan, FF planner, etc.) are not the right tool to solve these problems.
+Looking at reinforcement learning examples, in some cases environments are used to showcase reinforcement learning algorithms which could be very well solved by search/planning algorithms. They are fully deterministic, fully observable and sometimes they even have discrete action and state spaces.
+Given an arbitrary fully deterministic planning problem, what is/are the chartacteristic(s) which make reinforcement learning, and not "classical planning" methods better suited to solve the problem?
+"
+"['genetic-algorithms', 'optimization', 'evolutionary-algorithms', 'fitness-functions']"," Title: How to deal with evolutionary/genetic fitness function that can have both negative and positive values?Body: I am optimising function that can have both positive and negative values in pretty much unknown ranges, might be -100, 30, 0.001, or 4000, or -0.4 and I wonder how I can transform these results so I can use it as a fitness function in evolutionary algorithms and it can be optimised in a way that, for example, it can go from negative to positive along the optimisation process (first generation best chromosome can have -4.3 and best at 1000 generation would have 5.9). Although the main goal would always be to maximise the function.
+Adding a constant value like 100 and then treating it simply as positive is not possible because like I said, the function might optimise different ranges of results in different runs for example (-10000 to +400 and in another run from -0.002 to -0.5).
+Is there a way to solve this?
+"
+"['natural-language-processing', 'transformer', 'attention', 'word-embedding', 'bert']"," Title: What does the outputlayer of BERT for masked language modelling look like?Body: In the tutorial BERT – State of the Art Language Model for NLP the masked language modeling pre-training steps are described as follows:
+
+In technical terms, the prediction of the output words requires:
+
+- Adding a classification layer on top of the encoder output.
+
+2.Multiplying the output vectors by the embedding matrix, transforming them into the vocabulary dimension.
+3.Calculating the probability of each word in the vocabulary with softmax.
+
+In the Figure below this process is visualized and also from the tutorial.
+I am confused about what exactly is done. Does it mean that each output vector O is fed into a fully connected layer with embedding_size neurons and then multiplied by the embedding matrix from the input layer?
+Update:
+In the tutorial The Illustrated GPT-2 (Visualizing Transformer Language Models) I found an explanation for GPT-2 which seems to be similar to my question.
+In the tutorial is said that each output vector is multiplied by the input embedding matrix to get the final output.
+Does the same mechanic apply to BERT?
+
+"
+"['deep-learning', 'alpha-fold']"," Title: What kind of deep learning model does latest version of AlphaFold use for protein folding problem?Body: I understand there are multiple versions used in AlphaFold. What kind of deep learning model does the more advanced version use? CNN, RNN, or something else?
+(Additionally, is there an open-source reference model for the protein folding problem?)
+"
+"['terminology', 'definitions', 'models', 'intelligent-agent']"," Title: What are the differences between an agent and a model?Body: In the context of Artificial Intelligence, sometimes people use the word "agent" and sometimes use the word "model" to refer to the output of the whole "AI-process". For examples: "RL agents" and "deep learning models".
+Are the two words interchangeable? If not, in what case should I use "agents" instead of "models" and vice versa?
+"
+"['neural-networks', 'weights-initialization', 'randomness']"," Title: Can the quality of randomness in neural network initialization affect model fitting?Body: This is a topic I have been arguing about for some time now with my colleagues, maybe you could also voice your opinion about it.
+Artificial neural networks use random weight initialization within a certain value range. These random parameters are derived from a pseudorandom number generator (Gaussian etc.) and they have been sufficient so far.
+With a proper sample simple, pseudorandom numbers can be statistically tested that they are in fact not true random numbers. With a huge neural network like GPT-3 with roughly 175 billion trainable parameters, I guess that if you would use the same statistical testing on the initial weights of GPT-3 you would also get a clear result that these parameters are pseudorandom.
+With a model of this size, could in theory at least the repeatable structures of initial weights caused by their pseudorandomness affect the model fitting procedure in a way that the completed model would be affected (generalization or performance-wise)? In other words, could the quality of randomness affect the fitting of huge neural networks?
+"
+"['classification', 'multiclass-classification']"," Title: Using one-class classification first to find anomalies then apply multi-class classificationBody: I'm new to machine learning and trying to apply it for fault detection, an idea came to mind which is using only anomaly detection after which if the results after a while come up as positive, a multi-class classification algorithm (using 7 different classes) is used to classify the fault type. would that be efficient and saves on resources power?
+"
+"['convolutional-neural-networks', 'computer-vision', 'math', 'variational-autoencoder', 'kl-divergence']"," Title: How do you calculate KL divergence on a three-dimensional space for a Variational Autoencoder?Body: I'm trying to implement a variational auto-encoder (as seen in Section 3.1 here: https://arxiv.org/pdf/2004.06271.pdf).
+It differs from a traditional VAE because it encodes its input images to three-dimensional latent feature maps. In other words, the latent feature maps have a width, height and channel dimension rather than just a channel dimension like a traditional VAE.
+When calculating the Kullback-Liebler divergence as part of the loss function, I need the mean and covariance that is the output of the encoder. However, if the latent feature maps are three-dimensional, this means that the output of the encoder is three-dimensional, and therefore each latent feature is a 2D matrix.
+How can I derive a mean and covariance from a 2D matrix to calculate the KL divergence?
+"
+"['unsupervised-learning', 'clustering', 'metric', 'umap']"," Title: Which metric should I use to assess the quality of the clusters?Body: I have a model that outputs a latent N-dimensional embedding for all data points, trained in a way that clusters data-points from the same class together, while being separated from other clusters belonging to other different classes.
+The N-dimensional embedding is projected down to 2D using UMAP. At each epoch, I wish to test the clustering capability of the model on these 2D projections for use as validation accuracy. I have the labels for each class.
+How should I proceed?
+
+"
+"['neural-networks', 'classification', 'representation-learning', 'active-learning', 'zero-shot-learning']"," Title: classification of unseen classes of image in open set classificationBody: I have a scanned image, and they need to be classified in one of the pre-defined image classes, so that it can be sorted. However, the problem is the open nature of the classes. At testing time, new classes of scanned images can be added and the model should not only classify them as unseen (open set image recognition), but it should be able to tell in which new class it should belong (not able to figure out the implementation for this.)
+So, I am thinking that the below option can work for the classification of unseen classes
+
+- Zero-shot learning: Once the image is classified as unseen, we can then apply zero-shot learning to find its respective class for sorting.
+
+- Template matching: Match the test image of unseen classes with all available class images, and, once we have a match, we can do sorting of images.
+
+- Meta learning-based approach: I am not sure how to implement this, suggestions are much appreciated.
+
+
+Note: I already tried the classical computer vision approach, but it's not working out. So, more open for neural net-based approach.
+Is my approach to solving the problem correct? If possible, suggest some alternative to find the corresponding match/classification of the unseen class image. As I could think of these 2 alternative solutions only.
+"
+"['research', 'human-like', 'turing-test', 'robots']"," Title: What is the reverse of passing a Turing test by a human pretending to be a robot that can't be identified?Body: A Turing Test is a method of inquiry for determining whether or not a computer is capable of thinking like a human being. In an ideal Turing test, it would be clear to differentiate between a real human being and a robot or AI with human characteristics.
+However, it is also possible in a Turing test that a human tries to mimic the behaviour of a computer so that the person applying the test cannot distinguish between a human being and a robot/AI.
+Is this a concept that is explored much in computer science? As in research into the variations of Turing tests that can be used to identify whether a human is trying to mimic or impersonate as a robot or AI.
+"
+"['neural-networks', 'tensorflow', 'prediction', 'biology']"," Title: Extracting information from RNA sequenceBody: I am relatively new to machine learning, and I am trying to use a deep neural network to extract some information from sequences of RNA.
+A quick overview of RNA: there is both sequence and structure. I am currently expressing the sequence with one-hot encoding (so a sequence of length $60$ would be expressed as a $60 \times 4$ matrix, with one row for each letter of the sequence, and one column for each possible value of that letter). I am also feeding the 2D structure of the RNA into the network, which is expressed as $60 \times 60$ matrix for a sequence of length $60$.
+I am trying to use these inputs to predict a single continuous value for a given sequence.
+Currently, I am using pretty much the exact setup from this tutorial. I chose this architecture because it allows me to separate the inputs (sequence and structure) and have individual layers for them before merging them into a single model. I think this makes more sense than trying to glue the two separate pieces of data together into a single input.
+However, the model doesn't seem to be learning anything - validation loss decreases very slightly then plateaus.
+If anyone has suggestions, especially someone who has worked with RNA, DNA, or proteins before, I would really, really appreciate it. I am new to this, and I am not sure how to improve my model from here.
+def create_mlp(height,width,filters=(16, 16, 32, 32, 64), regress=False):
+ # initialize the input shape and channel dimension, assuming
+ # TensorFlow/channels-last ordering
+ inputShape = (height, width)
+ chanDim = -1
+ # define the model input
+ inputs = Input(shape=inputShape)
+ # loop over the number of filters
+ for (i, f) in enumerate(filters):
+ # if this is the first CONV layer then set the input
+ # appropriately
+ if i == 0:
+ x = inputs
+ # CONV => RELU => BN => POOL
+ x = Conv1D(f, 3, padding="same")(x)
+ x = Activation("relu")(x)
+ x = BatchNormalization(axis=chanDim)(x)
+ x = MaxPooling1D(pool_size=2)(x)
+ # flatten the volume, then FC => RELU => BN => DROPOUT
+ print(x.shape)
+ x = Flatten()(x)
+ x = Dense(16)(x)
+ x = Activation("relu")(x)
+ x = BatchNormalization(axis=chanDim)(x)
+ x = Dropout(0.5)(x)
+ # apply another FC layer, this one to match the number of nodes
+ # coming out of the MLP
+ x = Dense(4)(x)
+ x = Activation("relu")(x)
+ # check to see if the regression node should be added
+ if regress:
+ x = Dense(1, activation="linear")(x)
+ # construct the CNN
+ model = Model(inputs, x)
+ # return the CNN
+ return model
+
+def create_cnn(width, height, depth, filters=(16, 16, 32, 32, 64), regress=False):
+    # initialize the input shape and channel dimension, assuming
+    # TensorFlow/channels-last ordering
+    inputShape = (height, width, depth)
+    chanDim = -1
+    # define the model input
+    inputs = Input(shape=inputShape)
+    # loop over the number of filters
+    for (i, f) in enumerate(filters):
+        # if this is the first CONV layer then set the input
+        # appropriately
+        if i == 0:
+            x = inputs
+        # CONV => RELU => BN => POOL
+        x = Conv2D(f, (3, 3), padding="same")(x)
+        x = Activation("relu")(x)
+        x = BatchNormalization(axis=chanDim)(x)
+        x = MaxPooling2D(pool_size=(2, 2))(x)
+    # flatten the volume, then FC => RELU => BN => DROPOUT
+    print(x.shape)
+    x = Flatten()(x)
+    x = Dense(16)(x)
+    x = Activation("relu")(x)
+    x = BatchNormalization(axis=chanDim)(x)
+    x = Dropout(0.5)(x)
+    # apply another FC layer, this one to match the number of nodes
+    # coming out of the MLP
+    x = Dense(4)(x)
+    x = Activation("relu")(x)
+    # check to see if the regression node should be added
+    if regress:
+        x = Dense(1, activation="linear")(x)
+    # construct the CNN
+    model = Model(inputs, x)
+    # return the CNN
+    return model
+
+mlp = create_mlp(l, 4, regress=False)
+cnn = create_cnn(l, l, 1, regress=False)
+# create the input to our final set of layers as the *output* of both
+# the MLP and CNN
+#cnn.output.reshape()
+combinedInput = concatenate([mlp.output, cnn.output])
+
+"
+"['monte-carlo-tree-search', 'upper-confidence-bound']"," Title: How UCT in MCTS selection phase avoids starvation?Body: The first step of MCTS is to keep choosing nodes based on Upper Confidence Bound applied to trees (UCT) until it reaches a leaf node where UCT is defined as
+$$\frac{w_i}{n_i}+c\sqrt{\frac{ln(t)}{n_i}},$$
+where
+
+- $w_i$= number of wins after i-th move
+- $n_i$ = number of simulations after the i-th move
+- $c$ = exploration parameter (theoretically equal to $\sqrt{2}$)
+- $t$ = total number of simulations for the parent node
+
+I don't really understand how this equation avoids sibling nodes being starved, i.e. never explored. Let's say you have 3 nodes, and one of them, call it node A, is chosen randomly to be explored and just so happens to simulate a win. Then node A's UCT is $1+\sqrt{2}\sqrt{\frac{\ln(1)}{1}}$, while the other 2 nodes have UCT = 0, because they are unexplored and the game has just started. So, by UCT, won't the other 2 nodes never be explored? After this the algorithm goes into the expansion phase, and expansion only happens when it reaches a leaf node in the tree. So, because node A is the only one with a UCT $> 0$, it will choose a child of node A and keep going down that branch, since all the siblings of node A have a UCT of 0 and so never get explored.
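+To make my arithmetic explicit, this is roughly how I am computing the values (treating an unvisited node as having a UCT of 0, which is exactly the assumption I am unsure about):
+import math
+
+def uct(wins, visits, parent_visits, c=math.sqrt(2)):
+    # my current reading: a node with no simulations simply scores 0
+    if visits == 0:
+        return 0.0
+    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)
+
+print(uct(1, 1, 1))  # node A after its single winning simulation -> 1.0
+print(uct(0, 0, 1))  # the two unexplored siblings -> 0.0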
+"
+"['natural-language-processing', 'natural-language-understanding', 'word2vec', 'semantic-networks']"," Title: How would one disambiguate between two meanings of the same word in a sentence?Body:
+The boy lifted the bat and hit the ball.
+
+In the above sentence, the noun "bat" means the wooden stick. It does not mean bat, the flying mammal, which is also a noun. Using NLP libraries to find the noun version of the definition would still be ambiguous.
+How would one go about writing an algorithm that gets the exact definition, given a word, and the sentence it is used in?
+I was thinking you could use word2vec, then use autoextend https://arxiv.org/pdf/1507.01127.pdf to differentiate between 2 different lexemes e.g. bat (animal) and bat (wooden stick).
+Then the closest cosine distance between the dictionary definition and any of the words of the sentence might indicate the correct definition.
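+To make the idea concrete, here is a rough sketch of what I have in mind (get_vector is a placeholder for looking up whatever pretrained word/lexeme embeddings are used):
+import numpy as np
+
+def cosine(a, b):
+    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
+
+def pick_sense(context_words, sense_definitions, get_vector):
+    # average the embeddings of the sentence's words, then choose the sense whose
+    # definition embedding is closest (by cosine similarity) to that context vector
+    context = np.mean([get_vector(w) for w in context_words], axis=0)
+    scores = {}
+    for sense, definition_words in sense_definitions.items():
+        definition = np.mean([get_vector(w) for w in definition_words], axis=0)
+        scores[sense] = cosine(context, definition)
+    return max(scores, key=scores.get)
+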
+Does this sound correct?
+"
+"['neural-networks', 'reinforcement-learning', 'deep-learning', 'deep-rl', 'papers']"," Title: Understanding policies in helicopter control in the paper by Andrew Ng et alBody: I was going through this paper on helicopter flight control using reinforcement learning by Andrew Ng et al.
+It defines two policy classes to learn two policies, one for hovering the helicopter and another for maneuvering (aka trajectory following). The goal for the hovering policy is defined as follows:
+
+We want a controller that, given the current helicopter state and a desired hovering position and orientation $\{x^*, y^*, z^*, \omega^*\}$ computes controls $a\in [-1,1]^4$ to make it hover stably.
+
+The goal for the maneuvering policy is given in terms of the hovering policy as follows:
+
+Given a controller for keeping a system’s state at a point $(x^*,y^*,z^*,\omega^*)$, one standard way to make the system move through a particular trajectory is to slowly vary $(x^*,y^*,z^*,\omega^*)$ along a sequence of set points on that trajectory points.
+
+The neural network for these policy classes is shown as follows:
+
+In this, $(\dot{x},\dot{y},\dot{z})$ are velocity estimates, $(\phi,\theta)$ are the helicopter roll and pitch estimates, and $\dot{w}$ is an angular velocity component estimate (more on this at the bottom of page 1 of the linked paper).
+Each edge with an arrow in the picture denotes a tunable parameter. The solid lines show the hovering policy class. The dashed lines show the extra weights added for trajectory
+following (maneuvering). With this observation, I had the following doubts:
+Q1. Does the addition of the dashed lines to the hovering policy to get the maneuvering policy make the maneuvering policy a superset of the hovering policy?
+Q2. Rephrasing Q1: can we use the maneuvering policy for the hovering task (say, by setting the weights corresponding to the dashed lines to zero)?
+Q3. If the maneuvering policy is indeed a superset of the hovering policy, why don't the authors just use the maneuvering policy for both tasks, or at least for the hovering task as well? Is it because it involves extra computation for the helicopter's additional sub-dynamics represented by the dashed lines, and this additional computation is not required for the hovering task?
+Or am I getting all of these questions completely wrong?
+"
+"['game-ai', 'search', 'heuristics', 'graphs', 'a-star']"," Title: Incorrect node expansion in game board with A* searchBody: I have the following game board below, and we're using A* search to find the optimal path from the agent to the key. There are 8 directions. Up, down, left, right have a cost of 1, and diagonal directions have cost 3. We will be using a priority queue with function $f(v) = g(v) + h(v)$ where $g(v)$ is the backwards cost from the goal through the given edges and up to the vertex v while $h(v)$ is the optimal least cost distance from v to the goal node.
+
+So I calculated the f(s) for the different states, assuming no prior edges specified:
+
+And then I started the search; these are the steps I took:
+
+- expand C: (CD,3), (CE,3), (CF,3), (CA,5), (CB,5)
+- expand CD: (CDF,3), (CE,3), (CF,3), (CA,5), (CB,5), (CDB,5)
+- expand CDF: (CDFH,3), (CE,3), (CF,3), (CA,5), (CB,5), (CDB,5), (CDFG,6)
+- expand CDFH: (CE,3), (CF,3), (CA,5), (CB,5), (CDB,5), (CDFG,6)
+
+So I only expanded C, D, F, H. I got the correct answer for the optimal path, but not the correct answer for the nodes expanded, which is supposed to be C, D, E, F, G, H. What am I doing wrong?
+"
+"['reinforcement-learning', 'terminology', 'function-approximation', 'state-spaces', 'contextual-bandits']"," Title: What is the relation between the context in contextual bandits and the state in reinforcement learning?Body: Conceptually, in general, how is the context being handled in contextual bandits (CB), compared to states in reinforcement learning (RL)?
+Specifically, in RL, we can use a function approximator (e.g. a neural network) to generalize to other states. Would that also be possible or desirable in the CB setting?
+In general, what is the relation between the context in CB and the state in RL?
+"
+"['reference-request', 'papers', 'explainable-ai']"," Title: What is the paper that states that humans incorrectly trust the incorrect explanations of the AI?Body: I was reading a paper on the subject of explainable AI and interpretability, in particular the tendency of people (even experts) to excessively trusting explanations given by AI. In the intro the author describes riding in a self-driving car with a screen on the passenger side that depicts the car's vision and classification of the objects on the road, ostensibly to improve the level of trust in the car's decision-making. Later the author quotes a study in which experts in a field give good ratings to an AI's explanations for its decision-making, even when the explanations given are intentionally incorrect.
+I cannot for the life of me remember which paper this is or what it covers in its main sections, and after searching through dozens of my saved papers as well as online search engines I cannot recover it. The paper also mentions a specific term for trusting machines/AI, which I also can't remember and would definitely help me find the paper if I could.
+If anyone is familiar with this paper or the study it quotes, I would really appreciate a link.
+"
+"['neural-networks', 'definitions', 'support-vector-machine', 'binary-classification', 'hinge-loss']"," Title: What is the definition of the hinge loss function?Body: I came across the hinge loss function for training a neural network model, but I did not know the analytical form for the same.
+I can write the mean squared error loss function (which is more often used for regression) as
+$$\frac{1}{N}\sum\limits_{i=1}^{N}(y_i - \hat{y_i})^2$$
+where $y_i$ is the desired output in the dataset, $\hat{y_i}$ is the actual output by the model, and $N$ is the total number of instances in our dataset.
+Similarly, what is the (basic) expression for hinge loss function?
+"
+"['neural-networks', 'papers', 'research']"," Title: Is the framework provided by this paper for checking the constraints of AI systems really new?Body: The authors of this paper present a framework for checking the constraints of AI systems using formal argumentative logic between 2 agents: an interrogating agent and a suspect agent. The interrogating agent is attempting to find contradictions in the responses of the suspect agent by querying about information and the suspect agent must provide all the relevant information.
+Is this framework really new? I am pretty certain that I saw a very similar framework in the context of program verification some years ago.
+"
+"['machine-learning', 'classification', 'python', 'algorithm', 'text-classification']"," Title: What approach to use for selecting one of the category according to short category text?Body: I need some tool to classify articles based on short category text which consists of two or three words separated by '-'. The RSS/XML tag content is for example:
+
+Foreign - News
+
+
+Football - Foreign
+
+I created my own categories, and now I need to map the categories parsed from this news source's RSS onto the categories I defined.
+For example, I need all articles whose category text contains "football" to be identified as my category Sport, but sometimes the category XML tag is an exact match, e.g. Foreign - News should be stored in the DB under my category Foreign.
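+To illustrate the kind of mapping I mean (the category names here are just examples of my own target categories):
+# exact matches on the full category text are checked first, then keyword rules
+EXACT_RULES = {"Foreign - News": "Foreign"}
+KEYWORD_RULES = {"football": "Sport", "hockey": "Sport"}
+
+def map_category(raw):
+    if raw in EXACT_RULES:
+        return EXACT_RULES[raw]
+    for keyword, target in KEYWORD_RULES.items():
+        if keyword in raw.lower():
+            return target
+    return "Uncategorised"
+
+print(map_category("Football - Foreign"))  # Sport
+print(map_category("Foreign - News"))      # Foreign
+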
+I can of course also use longer description text if that were needed, but I think for this simple problem it would not even be necessary.
+Since I have so far only used decision-tree frameworks, on another project, I would like advice about the approach, AI technique, or particular framework I could use to solve this problem. Being inexperienced in this field, I don't want to head down a dead-end street because of a poor decision of my own.
+"
+"['convolutional-neural-networks', 'computer-vision', 'tensorflow']"," Title: CNN leaf segmentation throught classification of edges how to improveBody: I am trying to design a CNN that can do pixel wise segmentation of edges leaves in dense foliage agriculture images. Such as these:
+
+On the basis of this article https://arxiv.org/pdf/1904.03124.pdf, two classes are defined, namely the external contours and the internal contours of the leaf boundaries. In addition, a multi-scale approach is used through a U-Net-like architecture, and an auxiliary loss is used to learn edge detection at different scales (which corresponds to the main idea of this article https://arxiv.org/pdf/1804.01646.pdf). I train the network using a mIoU loss whose weight varies depending on the class and the scale. Finally, my last activation layer is a clipped ReLU. The results are starting to be good:
+
+However, the network is not able to reconnect some internal edges. The following image shows a broken inner edge that does not continue to the next part (in blue) :
+
+So I'm looking for papers, GitHub repositories, code, or anything else that can improve the reconstruction of the missing edges (inside or outside the CNN).
+"
+"['weights', 'features']"," Title: Does the weight vector form imply feature space curvature?Body: I came across this sentence when exploring a simple nearest neighbor classifier method using Euclidean distance (link):
+
+The slightly odd thing about using the Euclidean distance to compare features is that each dimension (or feature) is compared using the same scale.
+
+This got me thinking - if flat feature space implies that each feature contributes equally to the distance (score function), then curved feature space changes the scale between features, so that the features then contribute different amounts to the score function. For example, imagine we have a 2D feature space - a flat piece of paper - with two points, $X_1$ and $X_2$ on it, between which we wish to calculate the distance. If we then bend this into U-shape along, say, y-axis (so, no curvature introduced in y-dimension), the distances along the x-axis would be larger in the bent case than in the flat case:
+
+In other words, feature x would contribute more to the score function than feature y. This sounds awfully like weighting the feature inputs with weight vectors. Does this imply that weight vectors (and matrices) have a direct effect on the curvature of feature space? Does an identity weight matrix (or a vector of all 1s) imply our feature space is flat (and curved otherwise)? Lastly, could it then be said that whenever we are training an ML model, we are in fact learning the approximate curvature of the feature space we wish to model?
+"
+"['machine-learning', 'reinforcement-learning', 'tensorflow', 'deep-rl', 'models']"," Title: Is there a machine learning model that can be trained with labels that only say how ""right"" or ""wrong"" it was?Body: I'm trying to find the name for a model that is used to output a decision (maybe something like right
, left
, or do nothing
= -1
, 0
,1
) but that can be trained with labels that contain how "correct" or "incorrect" it was. I've tried to google around and ask some friends in my machine learning class, but no one seems to have an answer.
+The classic example I seem to always see is the models used in the snake game. We don't know what the right decision was per se, but we can say that if it ran into the wall, that was really wrong. Or if it got an apple and gained 50 points, then it was correct and if it got 2 apples and gained 100 points then it was even more correct, etc.
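+A rough sketch of the kind of feedback signal I mean (the event names and numbers are made up):
+def feedback(event):
+    # there is no "correct action" label, only a score for how good or bad the outcome was
+    scores = {"hit_wall": -100, "ate_apple": 50, "ate_two_apples": 100, "nothing": 0}
+    return scores.get(event, 0)
+
+print(feedback("ate_apple"))  # 50
+print(feedback("hit_wall"))   # -100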
+I'm looking for a network where the exact labels don't exist, but where we can penalize or reward its decisions.
+I'm assuming this requires some kind of modified cost function, but I would imagine this type of network already exists. I'm hoping someone can provide me with the name for this type of network and whether or not there is a Keras frontend for something like this.
+"
+"['neural-networks', 'reference-request', 'graph-neural-networks', 'model-request', 'algorithm-request']"," Title: Is there a graph neural network algorithm that can deal with a different number of input and output nodes?Body: I am new to graph neural networks and their applications. I have an input graph $G = \{V, E\}$ and an output graph $G' = \{V', E'\}$ where the number of nodes $V$ and $V'$ are different. I am trying to learn the function where $f(G) = G'$ and $V > V'$, thus, the function is mapping many-to-one ($n$ number of nodes map to one). The Graph Convolution Network (GCN) seems to have the same number of nodes in input and output with the function being learnt. Could I utilize the GCN for my task?
+"
+"['reinforcement-learning', 'deep-rl', 'policy-gradients']"," Title: Confusion about computing policy gradient with automatic differentiation ( material from Berkeley CS285)Body: I am taking Berkeley’s CS285 via self-study. On this particular lecture regarding Policy Gradient, I am very confused about the inconsistency between the concept explanation and the demonstration of code snippet. I am new to RL and hope someone could clarify this for me.
+Context
+1. The lecture defines the policy gradient as follows:
+
+log(pi_theta(a | s)) denotes the log probability of action given state under policy parameterized by theta
+gradient log(pi_theta(a |s)) denotes the gradient of parameter theta with respect to the predicted log probability of action
+2. The lecture defines a pseudo-loss. By auto-differentiating the pseudo-loss, we recover the policy gradient.
+
+Here Q_hat is shorthand for the sum of r(s_i_t, a_i_t) in the policy gradient equation under 1).
+
+- The lecture then proceeds to give a pseudo-code implementation of 2).
+
+
+My confusion
+
+
+
+From 1) above,
+gradient log(pi_theta(a|s)) denotes the gradient of parameter theta with respect to the predicted log probability of the action, not a loss value calculated from a label action and a predicted action.
+Why does the snippet below in 2) imply that gradient log(pi_theta(a|s)) just morphs into the output of a loss function, instead of just the predicted action probability as defined in 1)?
+
+2.
+In this pseudo-code implementation,
+
+Particularly, this line below.
+negative_likelihoods = tf.nn.softmax_cross_entropy_with_logits(labels=actions, logits=logits)
+
+
+Where do the actions even come from? If they come from the collected trajectories, aren't the actions the result of logits = policy.predictions(states) to begin with?
+Then won't tf.nn.softmax_cross_entropy_with_logits(labels=actions, logits=logits) always return 0?
+
+- Based on the definition of policy gradient in 1), shouldn’t the implementation of pseudo-loss be like below ?
+
+
+# Given:
+# actions - (N*T) x Da tensor of actions
+# states - (N*T) x Ds tensor of states
+# q_values – (N*T) x 1 tensor of estimated state-action values
+# Build the graph:
+logits = policy.predictions(states) # This should return (N*T) x Da tensor of action logits
+weighted_predicted_probability = tf.multiply(tf.nn.softmax(logits), q_values)
+loss = tf.reduce_mean(weighted_predicted_probability)
+gradients = loss.gradients(loss, variables)
+
+
+"
+"['convolutional-neural-networks', 'generative-adversarial-networks', 'image-processing', 'image-generation']"," Title: Best Machine Learning Model for ""Predicted"" Image GenerationBody: I am currently working on undergraduate research to determine hotspots for hand-surface contact. Ideally, I would like to give the model a depth image as input:
+
+Example of synthetic depth image
+and return an image mask indicating where the surface was touched:
+
+Example of synthetic contact mask
+I have worked with Machine Learning before but am struggling to determine what model I should use. My understanding is that CNNs are typically intended for classification tasks. And while GANs are used to generate new images, they can produce these images independently of an input. Assuming I have a large dataset of depth images and the respective black and white contact mask, what model can be used to efficiently predict a contact mask given an unseen depth image?
+"
+"['python', 'papers', 'monte-carlo-methods', 'game-theory', 'games-of-chance']"," Title: How exactly is Monte Carlo counterfactual regret minimization with external sampling implemented?Body: I have read many papers, such as this or this, explaining how external sampling works, but I still don't understand how the algorithm works.
+I understand you divide $Q$, which is the set of all terminal histories into subsets $Q_1,..., Q_n$.
+What is the probability of reaching some subset $Q_i$? Is it just the product of chance probability, the opponent's probability, and my probability?
+As I understand it, the sampling only occurs in the opponent's information sets. How does that work? If there are two players, player 1's strategy is based on what strategy I use.
+What happens after you have determined a subset $Q_i$ you want to sample? How many times do you iterate over the subset $Q_i$?
+I have searched around and I cannot find any Python code that uses external sampling, but plenty of papers that give formulas, but do not explain the algorithm in detail. So, a Python example of MC-CFR external sampling would probably make it a lot easier for me to understand the algorithm.
+"
+"['natural-language-processing', 'bert', 'sentiment-analysis']"," Title: How to keep track of the subject/entity in a sentence?Body: I'm working on Sentiment Analysis, using HuggingFace to perform sentiment analysis on articles
+from transformers import pipeline
+
+classifier = pipeline('sentiment-analysis', model="nlptown/bert-base-multilingual-uncased-sentiment")
+classifier(['We are very happy to show you the 🤗 Transformers library.', "We hope you don't hate it."])
+
+This returns
+
+label: POSITIVE, with score: 0.9998
+
+
+label: NEGATIVE, with score: 0.5309
+
+Now I'm trying to understand how to keep track of a subject when performing the sentiment analysis.
+Suppose I'm given a sentence like this.
+
+StackExchange is a great website. It helps users answer questions. Hopefully, someone will help answer this question.
+
+I would like to keep track of the subject when performing sentiment analysis. In the example above, in the 2nd sentence, 'it' refers to 'StackExchange'. I would like to be able to track a subject between sentences.
+Now, I could try to manually parse this by finding the verb and the phrase that comes before it. However, that doesn't sound like a very safe or accurate way to find the subject.
+Alternatively, I could train something similar to a Named Entity Recognition model. However, finding a dataset for this is very hard, and training it would be very time-consuming.
+How can I keep track of an entity within an article?
+"
+"['neural-networks', 'machine-learning', 'datasets', 'generative-model']"," Title: Would it be possible to determine the dataset a neural network was trained on?Body: Let's say we have a neural network that was trained with a dataset $D$ to solve some task. Would it be possible to "reverse-engineer" this neural network and get a vague idea of the dataset $D$ it was trained on?
+"
+"['reinforcement-learning', 'deep-rl', 'proximal-policy-optimization', 'trust-region-policy-optimization']"," Title: Is (log-)standard deviation learned in TRPO and PPO or fixed instead?Body: After having read Williams (1992), where it was suggested that actually both the mean and standard deviation can be learned while training a REINFORCE algorithm on generating continuous output values, I assumed that this would be common practice nowadays in the domain of Deep Reinforcement Learning (DRL).
+In the supplementary material associated with the paper introducing Trust Region Policy Optimization (TRPO), however, it is stated that:
+
+A neural network with several fully-connected
+(dense) layers maps from the input features to the mean of a Gaussian distribution. A separate set of parameters specifies the log standard deviation of each element. More concretely, the parameters include a set of weights and biases for the neural network computing the mean, $\{W_i , b_i\}_{i=1}^L$ , and a vector $r$ (log standard deviation) with the same dimension as $a$. Then, the policy is defined by the normal distribution $\mathcal{N}(\text{mean}=\text{NeuralNet}(s; \{W_i , b_i\}_{i=1}^L), \text{stdev}=\text{exp}(r))$.
+
+where $s$ refers to a state and $a$ to a predicted action (respectively a vector of actions if multiple outputs are generated concurrently).
+To me this suggests that the standard deviation stdev (being a function of $r$) is actually not learned when training a TRPO agent, but that it is solely determined by some possibly constant vector $r$.
+Since I found the idea of adjusting both the mean and standard deviation together when training a REINFORCE agent quite reasonable, I got wondering whether it is actually true that TRPO agents do not treat the standard deviation for sampling output values as a trainable parameter, but just as a function of the state-independent vector $r$.
+(Pretty much the same shall then apply to Proximal Policy Optimization (PPO) agents as well, since they are reported to follow TRPO's model architecture in the continuous output case.)
+In search for an answer, I browsed OpenAI's baselines repository containing reference implementations of both TRPO and PPO.
+In my understanding of their code, it seems to confirm my assumption that the standard deviation is not a trainable parameter, but is instead taken to be a constant.
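+To show what I mean, here is a minimal sketch (my own illustration, not the baselines code) of the two variants I am trying to distinguish - a state-independent log-std vector that is either registered as a trainable parameter or kept as a fixed constant:
+import torch
+import torch.nn as nn
+
+class GaussianPolicy(nn.Module):
+    def __init__(self, obs_dim, act_dim, learn_log_std=True):
+        super().__init__()
+        self.mean_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
+        log_std = torch.zeros(act_dim)
+        if learn_log_std:
+            # variant 1: r is a trainable parameter, updated together with the mean network
+            self.log_std = nn.Parameter(log_std)
+        else:
+            # variant 2: r is a fixed constant (a non-trainable buffer)
+            self.register_buffer("log_std", log_std)
+
+    def forward(self, obs):
+        mean = self.mean_net(obs)
+        return torch.distributions.Normal(mean, self.log_std.exp())
+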
+Now, I was wondering whether my understanding of how TRPO (and PPO) compute the standard deviation(s) is correct, or whether I misunderstood or overlooked something important here.
+"
+"['neural-networks', 'recurrent-neural-networks', 'long-short-term-memory', 'function-approximation', 'universal-approximation-theorems']"," Title: Is it possible to predict $x^2$, $\log(x)$, or variable function of $x$ using RNN?Body: There were some posts that using RNN can predict the next point of the sine wave function with data history.
+However, I wondered if it also works on all the functions of $x$, such as $x^2$, $x^3$, $\log(x)$, $\frac{1}{(x+1)}$ functions.
+"
+"['papers', 'objective-functions', 'calculus', 'derivative']"," Title: BlackOut - ICLR 2016: need help understanding the cost function derivativeBody: In the ICLR 2016 paper BlackOut: Speeding up Recurrent Neural Network Language Models with very Large Vocabularies, on page 3, for eq. 4:
+$$ J_{ml}^s(\theta) = log \ p_{\theta}(w_i | s) $$
+They have shown the gradient computation in the subsequent eq. 5:
+$$ \frac{\partial J_{ml}^s(\theta)}{\partial \theta} = \frac{\partial}{\partial \theta}<\theta_i \cdot s> - \sum_{j=1}^V p_{\theta}(w_j|s)\frac{\partial}{\partial \theta} <\theta_j \cdot s>$$
+
+I am not able to understand how they have obtained this - I have tried to work it out as follows:
+from eq. 3 we have
+$$ p_{\theta}(w_i|s) = \frac{exp(<\theta_i \cdot s>)}{\sum_{j=1}^V exp(<\theta_j \cdot s>)} $$
+re-writing eq. 4, we have:
+$$\begin{eqnarray}
+J_{ml}^s(\theta) &=& log \ \frac{exp(<\theta_i \cdot s>)}{\sum_{j=1}^V exp(<\theta_j \cdot s>)} \nonumber \\
+ &=& log \ exp(<\theta_i \cdot s>) - log \ \sum_{j=1}^V exp(<\theta_j \cdot s>) \nonumber \nonumber
+\end{eqnarray}$$
+Now, taking derivatives w.r.t. $ \theta $:
+$$\begin{eqnarray}
+\frac{\partial}{\partial \theta} J_{ml}^s(\theta) &=& \frac{\partial}{\partial \theta} log \ exp(<\theta_i \cdot s>) - \frac{\partial}{\partial \theta} log \ \sum_{j=1}^V exp(<\theta_j \cdot s>) \nonumber \nonumber
+\end{eqnarray}$$
+
+So, that's it; the second term (after the negative sign), how did that change to the term they have given in eq. 5? Or did I commit a blunder?
+
+Update
+I did commit a blunder and I have edited it out, but, the question remains!
+correct property:
+$$log \ (\prod_{i=1}^K x_i) = \sum_{i=1}^K log \ (x_i)$$
+"
+"['pytorch', 'proofs', 'implementation', 'variational-autoencoder', 'kl-divergence']"," Title: How is this Pytorch expression equivalent to the KL divergence?Body: I found the following PyTorch code (from this link)
+-0.5 * torch.sum(1 + sigma - mu.pow(2) - sigma.exp())
+
+where mu is the mean parameter that comes out of the model and sigma is the sigma parameter out of the encoder. This expression is apparently equivalent to the KL divergence. But I don't see how this calculates the KL divergence for the latent.
+"
+"['neural-networks', 'tensorflow', 'python', 'keras']"," Title: Is it possible to transform audio with neural networks to make it sound like 3d soundBody: so the idea is to feed neural network data like
+input: mono audio(extracted from existing 3d audio) output: 3d audio
+after training it should convert mono audio to 3d sound
+do you think it is possible? does it already implemented?(I didn't found)
+P.S
+it should sound like https://www.youtube.com/watch?v=kVH_y0rOyGM
+not like usual 3d youtube.com/watch?v=QFaSIti5_d0
+"
+['alphazero']," Title: What is the consensus on the ""correct"" temperature settings for the AlphaZero algorithm?Body: In the AlphaZero learning algorithm, during self-play to generate training games, the move played is chosen with probability proportional to the MCTS visit counts raised to the power $1/\tau$, where $\tau$ is the so-called temperature. Higher temperatures correspond to more exploration. It seems that in DeepMind's original paper (on AlphaGo Zero, if I'm not mistaken) it is mentioned that the temperature is decayed to zero after move 30 in Go/Baduk; this is then contradicted in the AlphaZero paper, which says that the temperature is not decayed at all; and finally, in AlphaZero's pseudocode, I believe it is implied that the temperature is decayed after some number of moves. Specifically, I believe that lczero concluded that they decayed after 15 moves for chess. It's not clear to me after searching what the current training regime for lczero is with regards to temperature. Also, I believe that the ELF OpenGo efforts used $\tau=1$ for the entire game.
+Question: Is there a consensus on what $\tau$ should be? Does it matter if the training is in early phases or not (i.e. if the AI is not advanced yet is it beneficial to explore seemingly "worse" moves?) How dependent on the game is this optimal $\tau$? If I have a game which lasts 50 moves average, and I want to decay $\tau$, is there a best practice?
+"
+"['sentiment-analysis', 'naive-bayes']"," Title: How can I apply naive Bayes classifier for three classes (Positive, Negative and Neutral) in text data?Body: I found a naive Bayes classifier for positive sentiment or a negative sentiment Citius: A Naive-Bayes Strategy for Sentiment Analysis on English Tweets. But with most available datasets online, sentiments are classified into 3 types: positive, negative, and neutral.
+How does the naive Bayes formula change for such cases? Or does it remain the same, and we only consider the positive and negative classes to calculate the log-likelihoods?
+"
+"['convolutional-neural-networks', 'convolutional-layers']"," Title: Error in MobileNet V1 Architecture?Body: From the architecture table of the first MobileNet paper, a depthwise convolution with stride 2 and an input of 7x7x1024 is followed by a pointwise convolution with the same input dimensions, 7x7x1024.
+Shouldn't the pointwise layer's input be 4x4x1024 if the depthwise conv. layer has stride 2? (Assuming a padding of 1.)
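+For reference, the arithmetic I am using (assuming a 3x3 depthwise kernel, stride 2, and padding 1) is
+$$\left\lfloor \frac{7 + 2\cdot 1 - 3}{2} \right\rfloor + 1 = 4.$$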
+Is this an error on the authors' side? Or is there something that I've missed between these layers?
+I've checked implementations of MobileNet V1 and it seems that everyone just treated this depthwise layer's stride as 1.
+
+"
+"['machine-learning', 'reinforcement-learning', 'dqn', 'actor-critic-methods']"," Title: Can I train a DQN on the same dataset for multiple epochs?Body: I am trying to learn about reinforcement learning and chose the stock market to experiment with. I have minute by minute historical data on a particular stock for the past 20 years. I am using a generator to feed the data into my DQN. I've been running some automated tuning on the hyperparameters and seem to have found some good values.
+Now I am wondering if I should be training on the dataset more than once or whether that would cause the network to simply memorize past experiences and cause overfitting. Is there a standard practice when it comes to training on historical data in regards to the number of epochs?
+Edit: I'm not necessarily looking for an answer to how many epochs I should be using; rather, I'd like to know if running over the same data more than once is okay with DQNs.
+"
+"['pytorch', 'variational-autoencoder', 'kl-divergence', 'mnist']"," Title: Why does the VAE using a KL-divergence with a non-standard mean does not produce good images?Body: I know I can make a VAE do generation with a mean of 0 and std-dev of 1.
+I tested it with the following loss function:
+def loss(self, data, reconst, mu, sig):
+ rl = self.reconLoss(reconst, data)
+ #dl = self.divergenceLoss(mu, sig)
+ std = torch.exp(0.5 * sig)
+ compMeans = torch.full(std.size(), 0.0)
+ compStd = torch.full(std.size(), 1.0)
+ dl = kld(mu, std, compMeans, compStd)
+ totalLoss = self.rw * rl + self.dw * dl
+ return (totalLoss, rl, dl)
+
+def kld(mu1, std1, mu2, std2):
+ p = torch.distributions.Normal(mu1, std1)
+ q = torch.distributions.Normal(mu2, std2)
+ return torch.distributions.kl_divergence(p, q).mean()
+
+In this case, mu and sig are from the latent vector, and reconLoss is MSE. This works well, and I am able to generate MNIST digits by feeding in noise from a standard normal distribution.
+However, I'd now like to concentrate the distribution at a normal distribution with std-dev of 1 and mean of 10. I tried changing it like this:
+compMeans = torch.full(std.size(), 10.0)
+
+I did the same change in reparameterization and generation functions. But what worked for the standard normal distribution is not working for the mean = 10 normal one. Reconstruction still works fine but generation does not, only producing strange shapes. Oddly, the divergence loss is actually going down too, and reaching a similar level to what it reached with standard normal.
+Does anyone know why this isn't working? Is there something about KL that does not work with non-standard distributions?
+Other things I've tried:
+
+- Generating from 0,1 after training on 10,1: failed
+- Generating on -10,1 after training on 10,1: failed
+- Custom version of KL divergence: worked on 0,1. failed on 10,1
+- Using sigma directly instead of std = torch.exp(0.5 * sig): failed
+
+Edit 1:
+Below are my loss plots with 0,1 distribution.
+Reconstruction:
+
+Divergence:
+
+Generation samples:
+
+Reconstruction samples (left is input, right is output):
+
+And here are the plots for 10,1 normal distribution.
+Reconstruction:
+
+Divergence:
+
+Generation sample:
+
+Note: when I ran it this time, it actually seemed to learn the generation a bit, though it's still printing mostly 8's or things that are nearly an 8 by structure. This is not the case for the standard normal distribution. The only difference from last run is the random seed.
+Reconstruction sample:
+
+Sampled latent:
+tensor([[ 9.6411, 9.9796, 9.9829, 10.0024, 9.6115, 9.9056, 9.9095, 10.0684,
+ 10.0435, 9.9308],
+ [ 9.8364, 10.0890, 9.8836, 10.0544, 9.4017, 10.0457, 10.0134, 9.9539,
+ 10.0986, 10.0434],
+ [ 9.9301, 9.9534, 10.0042, 10.1110, 9.8654, 9.4630, 10.0256, 9.9237,
+ 9.8614, 9.7408],
+ [ 9.3332, 10.1289, 10.0212, 9.7660, 9.7731, 9.9771, 9.8550, 10.0152,
+ 9.9879, 10.1816],
+ [10.0605, 9.8872, 10.0057, 9.6858, 9.9998, 9.4429, 9.8378, 10.0389,
+ 9.9264, 9.8789],
+ [10.0931, 9.9347, 10.0870, 9.9941, 10.0001, 10.1102, 9.8260, 10.1521,
+ 9.9961, 10.0989],
+ [ 9.5413, 9.8965, 9.2484, 9.7604, 9.9095, 9.8409, 9.3402, 9.8552,
+ 9.7309, 9.7300],
+ [10.0113, 9.5318, 9.9867, 9.6139, 9.9422, 10.1269, 9.9375, 9.9242,
+ 9.9532, 9.9053],
+ [ 9.8866, 10.1696, 9.9437, 10.0858, 9.5781, 10.1011, 9.8957, 9.9684,
+ 9.9904, 9.9017],
+ [ 9.6977, 10.0545, 10.0383, 9.9647, 9.9738, 9.9795, 9.9165, 10.0705,
+ 9.9072, 9.9659],
+ [ 9.6819, 10.0224, 10.0547, 9.9457, 9.9592, 9.9380, 9.8731, 10.0825,
+ 9.8949, 10.0187],
+ [ 9.6339, 9.9985, 9.7757, 9.4039, 9.7309, 9.8588, 9.7938, 9.8712,
+ 9.9763, 10.0186],
+ [ 9.7688, 10.0575, 10.0515, 10.0153, 9.9782, 10.0115, 9.9269, 10.1228,
+ 9.9738, 10.0615],
+ [ 9.8575, 9.8241, 9.9603, 10.0220, 9.9342, 9.9557, 10.1162, 10.0428,
+ 10.1363, 10.3070],
+ [ 9.6856, 9.7924, 9.9174, 9.5064, 9.8072, 9.7176, 9.7449, 9.7004,
+ 9.8268, 9.9878],
+ [ 9.8630, 10.0470, 10.0227, 9.7871, 10.0410, 9.9470, 10.0638, 10.1259,
+ 10.1669, 10.1097]])
+
+Note, this does seem to be in the right distribution.
+Just in case, here's my reparameterization method too. Currently with 10,1 distribution:
+def reparamaterize(self, mu, sig):
+ std = torch.exp(0.5 * sig)
+ epsMeans = torch.full(std.size(), 10.0)
+ epsStd = torch.full(std.size(), 1.0)
+ eps = torch.normal(epsMeans, epsStd)
+ return eps * std + mu
+
+"
+"['convolutional-neural-networks', 'terminology']"," Title: Is there any difference between ConvNet and CNN?Body: ConvNet stands for Convolutional Networks and CNN stands for Convolutional Neural Networks.
+Is there any difference between both?
+If yes, then what is it?
+If no, is there any reason behind using ConvNet at some places and CNN at some other places in literature?
+"
+"['machine-learning', 'loss', 'batch-learning', 'validation-loss']"," Title: Is it okay to calculate the validation loss over batches instead of the whole validation set for speed purposes?Body: I have about 2000 items in my validation set, would it be reasonable to calculate the loss/error after each epoch on just a subset instead of the whole set, if calculating the whole dataset is very slow?
+Would taking random mini-batches to calculate loss be a good idea as your network wouldn't have a constant set? Should I just shrink the size of my validation set?
+"
+"['training', 'game-ai', 'alphazero', 'muzero']"," Title: Is it practical to train AlphaZero or MuZero (for indie games) on a personal computer?Body: Is it practical/affordable to train an AlphaZero/MuZero engine using a residential gaming PC, or would it take thousands of years of training for the AI to learn enough to challenge humans?
+I'm having trouble wrapping my head around how much computing power '4 hours of Google DeepMind training' equates to my residential computer running 24/7 trying to build a trained AI.
+Basically, are AlphaZero or MuZero practical for indie board games that want a state of the art AI, or is it too expensive to train?
+"
+"['markov-decision-process', 'policies', 'bellman-equations', 'transition-model']"," Title: How can we find the value function by solving a system of linear equations without knowing the policy?Body: An MDP is a Markov Reward Process with decisions, it’s an environment in which all states are Markov. This is what we want to solve. An MDP is a tuple $(S, A, P, R, \gamma)$, where $S$ is our state space, $A$ is a finite set of actions, $P$ is the state transition probability function,
+$$P_{ss'}^a = \mathbb{P}[S_{t+1} = s' | S_t = s, \hspace{0.1cm}A_t = a] \label{1}\tag{1}$$
+and
+$$R_s^a = \mathbb{E}[R_{t+1}| S_t =s, A_t = a]$$
+and a discount factor $\gamma$.
+This can be seen as a linear equation in $|S|$ unknowns, which is given by,
+$$V = R + \gamma PV \hspace{1mm} \label{2}\tag{2}$$
+$V$ is the vector of state values, $R$ is the vector of immediate rewards, and $P$ is the transition probability matrix, where each element at $(i,j)$ in $P$ is given by $P[i][j] = P(i \mid j)$, i.e., the probability of transitioning from state $j$ to state $i$.
+As $P$ is given, we treat equation $\ref{2}$ as a linear equation in $V$.
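+For concreteness, this is the kind of direct solve I have in mind once $P$ and $R$ are fully specified (a toy two-state example with made-up numbers; note that in this snippet I use the row convention P[i][j] = probability of going from state i to state j):
+import numpy as np
+
+gamma = 0.9
+P = np.array([[0.9, 0.1],   # row i: transition probabilities out of state i
+              [0.2, 0.8]])
+R = np.array([1.0, 2.0])
+
+# V = R + gamma * P V  <=>  (I - gamma * P) V = R
+V = np.linalg.solve(np.eye(2) - gamma * P, R)
+print(V)
+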
+But $P[i][j] = \sum_a (\pi(a \mid j) \times \mathrm{p}(i \mid j, a) )$. But, $ \pi (a \mid s)$ (i.e., probability that I will take action a in state s) is NOT given.
+So, how can we frame this problem as the solution to a system of linear equations in \ref{2}, if we only know $ P^a_{ss'}$ and we do not know $ \pi(a \mid s)$, which is needed to calculate $P[i][j]$?
+"
+"['natural-language-processing', 'recurrent-neural-networks', 'word-embedding']"," Title: NLP: Are hashtags tokenised?Body: I am exploring a potential NLP project. I was wondering what generally is done with the hashtags words (e.g. #hello
). Are those words ignored? is the #
removed and the word tokenised? Is it tokenised with the #
?
+"
+"['u-net', 'generalization', 'data-augmentation']"," Title: Late Onset AugmentationBody: If I train a U-Net model for image segmentation (e.g. medical images) and start training until it converges and then add augmentation - can i expect similar results as if i train with augmentation from the beginning ?
+
+"
+"['reinforcement-learning', 'deep-rl', 'alphazero', 'chess']"," Title: Clarifying representation of Neural Nerwork input for Chess Alpha ZeroBody: In the Alpha Zero paper (https://arxiv.org/pdf/1712.01815.pdf) page 13, the input for the NN is described. In the beggining of the page, the authors state that:
+"The input to the Neural Network is an N x X x (MT + L) image stack [...]"
+From this, I understand that (for one training example) each input feature is an 8x8 plane. (Technically speaking, every value of every 8x8 plane is a feature, but for the purpose of the question let's suppose that a plane is an input feature).
+In the description of the table on top of the image, the following statement is made:
+"[...] Counts are represented by a single real-valued input; other input features are represented by a one-hot encoding using thespecified number of binary input planes. [...]"
+I understand how they convert the P1 and P2 pieces to one-hot encodings. My questions are:
+
+- When they say single real-valued input, since every input feature should be an 8x8 plane, do they mean that they create an 8x8 plane where every entry has the same single-real value? For example, for the 'Total move count' plane, if 10 moves had been played in the game so far, it would look like the one below?
+
+ move_count_plane = [[10, 10, 10, 10, 10, 10, 10, 10],
+ [10, 10, 10, 10, 10, 10, 10, 10],
+ [10, 10, 10, 10, 10, 10, 10, 10],
+ [10, 10, 10, 10, 10, 10, 10, 10],
+ [10, 10, 10, 10, 10, 10, 10, 10],
+ [10, 10, 10, 10, 10, 10, 10, 10],
+ [10, 10, 10, 10, 10, 10, 10, 10],
+ [10, 10, 10, 10, 10, 10, 10, 10]]
+
+
+- For the 'Repetitions' plane, is it the same case as above? They mean a plane where every value is the number of times a specific board setup has been reached? For example, if a specific position has been reached 2 times, then the repetitions plane for that position would be
+
+ # for a specific timestep in the T=8 step history
+ repetitions_plane = [[2, 2, 2, 2, 2, 2, 2, 2],
+ [2, 2, 2, 2, 2, 2, 2, 2],
+ [2, 2, 2, 2, 2, 2, 2, 2],
+ [2, 2, 2, 2, 2, 2, 2, 2],
+ [2, 2, 2, 2, 2, 2, 2, 2],
+ [2, 2, 2, 2, 2, 2, 2, 2],
+ [2, 2, 2, 2, 2, 2, 2, 2],
+ [2, 2, 2, 2, 2, 2, 2, 2]]
+
+? Also, why do they keep 2 repetition planes? Is it one for each player? (8 repetition planes for the past T=8 moves for P1, and 8 more repetition planes for the past T=8 moves for P2?)
+Thanks in advance.
+"
+"['python', 'transformer', 'bert', 'sentiment-analysis']"," Title: Sentiment analysis does not handle neturalsBody: I'm writing some financial tools, I've found highly performant models for question and answering but when it comes to sentiment analysis I haven't found anything that good. I'm trying to use huggingface:
+from transformers import pipeline
+classifier = pipeline('sentiment-analysis')
+print(classifier("i'm good"))
+print(classifier("i'm bad"))
+print(classifier("i'm neutral"))
+print(classifier("i'm okay"))
+print(classifier("i'm indifferent"))
+
+Which returns results
+
+[{'label': 'POSITIVE', 'score': 0.999841034412384}]
+
+
+[{'label': 'NEGATIVE', 'score': 0.9997877478599548}]
+
+
+[{'label': 'NEGATIVE', 'score': 0.999396026134491}]
+
+
+[{'label': 'POSITIVE', 'score': 0.9998164772987366}]
+
+
+[{'label': 'NEGATIVE', 'score': 0.9997762441635132}]
+
+The scores for all of the neutral words come up very high in a positive or negative direction; I would have figured the model would put the score lower.
+I've looked at some of the more fine-tuned models yet they seem to perform the same.
+I would assume there would be some pretrained models which could handle these use cases. If not, how can I find neutral sentiments?
+"
+"['terminology', 'definitions', 'academia', 'federated-learning']"," Title: What is Federated Learning?Body: How would you explain Federated Learning in simple layman terms for a non-STEM person?
+What are the main ideas behind Federated Learning?
+"
+"['machine-learning', 'generative-adversarial-networks', 'wasserstein-gan']"," Title: Classifying generated samples with Wasserstein-GAN as real or fakeBody: I'm quite new to GANs and I am trying to use a Wasserstein GAN as an augmentation technique. I found this article
+https://www.sciencedirect.com/science/article/pii/S2095809918301127,
+and would like to replicate their method of evaluating the GAN. The method is shown in the figure.
+In the article they write that they extract the generated samples that fooled the discriminator and use these to train a classifier. They also say that they use a Wasserstein GAN. Does anyone know how it is possible to extract samples that fooled the discriminator, since for a Wasserstein GAN the critic (discriminator) only puts a rating and not a label on the generated data?
+
+"
+"['neural-networks', 'classification', 'model-request']"," Title: Is it possible to make a neural network to solve this ""reaction time test""?Body: I'm thinking about writing an essay on the comparison between the human nervous system (reaction time) and a neural network that does the same reaction time test. I am very new in this area, so I was wondering if I can build a neural network that can perform a test like this: https://humanbenchmark.com/tests/reactiontime
+I just wanted to know how I should approach this problem, and what would be the best way to compare it to the human nervous system.
+I have thought about maybe using an image classification neural network and having it look for different colors and such, but I am not too sure about the technical aspects as of yet. Any help is appreciated.
+"
+"['tensorflow', 'backpropagation', 'relu', 'gradient']"," Title: Why is tf.abs non-differentiable in Tensorflow?Body: I understand why tf.abs is non-differentiable in principle (discontinuity at 0) but the same applies to tf.nn.relu yet, in case of this function gradient is simply set to 0 at 0. Why the same logic is not applied to tf.abs? Whenever I tried to use it in my custom loss implementation TF was throwing errors about missing gradients.
+"
+"['convolutional-neural-networks', 'papers', 'embeddings']"," Title: Converting age and sex variables to a 64-unit dense layerBody: I am studying a preprint for my own learning (https://www.medrxiv.org/content/medrxiv/early/2020/04/27/2020.04.23.20067967.full.pdf) and I am befuddled by the following detail of the neural network architecture:
+
+This is in accord with the paper's description of the architecture (p. 5):
+
+Age and sex were input into a 64-unit hidden layer and was concatenated with the other branches.
+
+How can the two scalars of age and sex be implemented as a 64-unit dense layer?
+"
+"['machine-learning', 'deep-learning', 'ethics', 'green-ai']"," Title: Will there be some promising techniques that can make AI greener and affordable in the future?Body: The recent advances in machine learning were mostly achieved by the hardware, and the hardware is said to continue driving the development of AI, but I was still shocked by this thread which reads that the projected future cost for the largest model would be 1B dollars in 2025. And I learned that universities are suffering from an academic AI brain drain partly due to the scarce hardware resources.
+
+Some people proposed the so-called Green AI that encourages sustainable AI development but provides few constructive methods to prevent the trend.
+I wonder if the redder and redder AI would be in fact truly inevitable. It seems to me that all companies should build an expensive compute infrastructure to be competitive, but I think the investment would be very risky since most companies cannot get a higher return.
+But, on the other hand, we human beings have evolved for tens of millions of years (or billions of years, counting life itself) on Earth, with the hundreds of billions of brains that have ever lived acting as a whole "human brain". The biological wetware seems much, much redder than today's hardware and has consumed much, much more energy than all the supercomputers. To make machines as intelligent as we humans are, shouldn't we pay as high a price? It reminds me of the NFL theorem, though the analogy is imprecise in this scenario.
+So, will there be some promising techniques on the algorithm side that can make AI greener, affordable and sustainable in the future? If not, could anyone please explain why AI should be unavoidably red and inevitably redder?
+"
+"['deep-learning', 'natural-language-processing', 'word-embedding', 'word2vec', 'embeddings']"," Title: Are the Word2Vec encoded embeddings available online?Body: I am trying to do an NLP project and was wondering if there is anywhere online where the Word2Vec embeddings are stored (the actual n-dimmensional vectors).
+I want to search up a word and see what its encoding is. I have tried looking but couldn't find anything.
+Thank you
+"
+"['long-short-term-memory', 'prediction']"," Title: How to improve prediction performance of periodic data?Body: I have a 1 column dataset of $50 000$ points where 95% of the values equal $-50$. The data looks like the following: $$\begin{matrix}
+\text{time} & \text{value}\\
+1&-50 \\
+2&-50 \\
+3&-50 \\
+4& -50 \\
+5&3 \\
+6&-50\\
+7&-50\\
+8&5
+\end{matrix}$$
+In addition, I know the exact time instants at which I will get a value $\neq -50$ (in the example above, these are instants $5$ and $8$). The data is somewhat periodic, so the values which are different from $-50$ are chosen from a finite set $\mathcal{S}$.
+To predict the values, I use a 3-layer LSTM network with an L2 regularizer, where along with the values I input another column that looks like this:
+$$\begin{matrix}
+\text{time} & \text{value} & \text{expect a change}\\
+1&-50 & 0 \\
+2&-50 & 0\\
+3&-50 & 0\\
+4& -50 & 1\\
+5&3 & 0\\
+6&-50& 0\\
+7&-50& 1\\
+8&5 & 0
+\end{matrix}$$
+which identifies the change one time instant in advance, so the LSTM will know to expect a change. However, the prediction performance is quite poor: it always predicts that the value changes, but the predicted value is far from the real one and usually lies outside the set $\mathcal{S}$. Any idea of how this could be improved?
+"
+"['q-learning', 'markov-decision-process', 'off-policy-methods', 'on-policy-methods', 'markov-property']"," Title: Why can we take the action $a$ from the next state $s'$ in the max part of the Q-learning update rule, if that action doesn't lead to any reward?Body: I'm using OpenAI's cartpole environment. First of all, is this environment not Markov?
+Knowing that, my main question concerns Q-learning and off-policy methods:
+For me, there is something weird in updating a Q value based on the max Q of a state and a reward value that did not come from the action taken. How does this make learning better and make you learn the optimal policy?
+"
+"['convolutional-neural-networks', 'reference-request', 'terminology', 'adversarial-ml']"," Title: Can CNNs be made robust to tricks where small changes cause misclassification?Body: I while ago I read that you can make subtle changes to an image that will ensure a good CNN will horribly misclassify the image. I believe the changes must exploit details of the CNN that will be used for classification. So we can trick a good CNN into classifying an image as a picture of a bicycle when any human would say it's an image of a dog. What do we call that technique, and is there an effort to make image classifiers robust against this trick?
+"
+"['convolutional-neural-networks', 'image-processing', 'convolution', 'hyper-parameters', 'filters']"," Title: Is it a good idea to use different width and height of the kernel in a CNN?Body: I always see that the width and height of the kernel are the same. But is it a good idea to use different numbers?
+Recently I tried to use GoogLeNet (which expects images to be 224x224) on my images (500x150) and I got an error:
+
+Negative dimension size caused by subtracting 7 from 5 for 'average_pooling2d_5/AvgPool'...
+
+I know that this error is because the height of my image is too small. If I use the height of about 200, then everything is ok. So, maybe, in this situation, I could just use a smaller height and bigger width in the kernel. For example (5, 3).
+Is it a good idea in this case? Or in general? How can it affect the accuracy of the network and the ability to extract different features?
+"
+"['deep-learning', 'computer-vision', 'object-detection', 'data-labelling']"," Title: Is there a methodology for splitting up annotated orthophotos into smaller photos that retain the original bounding boxes?Body: I'm trying to train an object detection algorithm (i.e. YOLOv4 Scaled, Faster R-CNN) on data taken from large orthophotos. Let's say I have one class, and I label the entire orthophoto with bounding boxes. After labeling, is there a way to slice up the entire image into individual photos of specified pixel sizes (i.e. 416x416 pixels) while keeping the bounding boxes? I can easily slice the photo into the specified dimensions, but the problem I am having is keeping the bounding boxes in these new images.
+That way, I would not be exhausting my GPU's memory requirements.
+"
+"['tensorflow', 'keras', 'objective-functions', 'activation-functions']"," Title: setting up last layer in tensoflow for class type of labelBody: I am creating a NN in tensorflow keras. the inputs are all float and the output is a class.
+The output currently encoded as a float, but only has 4 values (0,1,2,3).
+My model is similar to this:
+import tensorflow as tf
+from tensorflow.keras import layers
+
+# 'normalize' is assumed to be a preprocessing layer (e.g. layers.Normalization) adapted elsewhere
+model = tf.keras.Sequential([
+ normalize,
+ layers.Dense(128, activation='relu'),
+ layers.Dense(254, activation='relu'),
+ layers.Dense(512, activation='relu'),
+ layers.Dense(512, activation='relu'),
+ layers.Dense(512, activation='relu'),
+ layers.Dense(512, activation='relu'),
+ layers.Dense(512, activation='relu'),
+ layers.Dense(512, activation='relu'),
+ layers.Dense(254, activation='relu'),
+ layers.Dense(128, activation='relu'),
+ layers.Dense(4)
+])
+
+model.compile(loss = tf.keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy'],
+ optimizer = tf.optimizers.Adam())
+
+history=model.fit(data_set_features, data_set_labels,validation_split=0.33, epochs=100)
+
+Is the model's last layer correct?
+What type of activation and loss function should I use?
+"
+"['reinforcement-learning', 'probability', 'proximal-policy-optimization']"," Title: PPO2: Intuition behind Gumbel Softmax and Exploration?Body: I'm trying to understand the logic behind the magic of using the gumbel distribution for action sampling inside the PPO2 algorithm.
+This code snippet implements the action sampling, taken from here:
+def sample(self):
+ u = tf.random_uniform(tf.shape(self.logits), dtype=self.logits.dtype)
+ return tf.argmax(self.logits - tf.log(-tf.log(u)), axis=-1)
+
+I've understood that this is a mathematical trick to be able to backprop through the action sampling in the case of categorical variables.
+
+- But why can't I just put a softmax layer on top of the logits and sample according to the given probabilities? Why do we need u?
+
+- There is still the argmax, which is not differentiable. How can backprop work?
+
+- Does u allow exploration? Imagine that at the beginning of the learning process, Pi holds small, similar values (nothing is learned so far). In this case the action sampling does not always choose the maximum value in Pi, because of logits - tf.log(-tf.log(u)).
+In the further course of the training, larger values arise in Pi, so that the maximum value is also taken more often in the action sampling? But doesn't this mean that the whole process of action sampling is extremely dependent on the value range of the current policy?
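+For clarity, the alternative I have in mind for the first point is simply something like this (illustrative only):
+import numpy as np
+
+def sample_with_softmax(logits, rng=None):
+    # turn the logits into probabilities with a softmax and sample an action index from them
+    rng = rng or np.random.default_rng()
+    probs = np.exp(logits - np.max(logits))
+    probs = probs / probs.sum()
+    return rng.choice(len(logits), p=probs)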
+
+
+"
+"['neural-networks', 'convolutional-neural-networks', 'generative-adversarial-networks', 'semi-supervised-learning']"," Title: GAN Generator Output w/ Periodic NoiseBody: I am training a Semi-Supervised GAN, using multivariate time-series with window of shape (180*80) with the generator and discriminator architecture below. My data is scaled using Robust Scaler, so I kept linear activation for the generator output.
+During training I get noise in the generated signals, and I can't understand why, given that the original data is smooth. What can be the reason for this noise?
+# Assumed imports for the code below (Conv1DTranspose requires TF >= 2.3):
+import tensorflow as tf
+from tensorflow.keras.layers import (Input, Dense, Reshape, Conv1D, Conv1DTranspose,
+                                     BatchNormalization, LeakyReLU, Add, Flatten, Dropout)
+from tensorflow.keras.models import Model
+
+def make_generator_model(noise):
+ w_init = tf.random_normal_initializer(stddev=0.02)
+ gamma_init = tf.random_normal_initializer(1., 0.02)
+
+ def residual_layer(layer_input):
+
+ res_block = Conv1D(128, 3, strides=1, padding='same')(layer_input)
+ res_block = BatchNormalization(gamma_initializer=gamma_init)(res_block)
+ res_block = LeakyReLU()(res_block)
+ res_block = Conv1D(128, 3, strides=1, padding='same')(res_block)
+ res_block = BatchNormalization(gamma_initializer=gamma_init)(res_block)
+ res_block = LeakyReLU()(res_block)
+ res_add = Add()([res_block, layer_input])
+
+ return res_add
+
+ in_noise = Input(shape=(100,))
+
+
+ gen = Dense(180*65, kernel_initializer=w_init, use_bias=None)(in_noise)
+ gen = BatchNormalization(gamma_initializer=gamma_init)(gen)
+ gen = LeakyReLU()(gen)
+
+ gen = Reshape((180, 65))(gen)
+ #assert model.output_shape == (None, 45, 256) # Note: None is the batch size
+
+ gen = Conv1D(64, 7, strides=1, padding='same', kernel_initializer=w_init, use_bias=None)(gen)
+ #assert model.output_shape == (None, 45, 128)
+ gen = BatchNormalization(gamma_initializer=gamma_init)(gen)
+ gen = LeakyReLU()(gen)
+
+ gen = Conv1D(64, 4, strides=2, padding='same', kernel_initializer=w_init, use_bias=None)(gen)
+ #assert model.output_shape == (None, 45, 128)
+ gen = BatchNormalization(gamma_initializer=gamma_init)(gen)
+ gen = LeakyReLU()(gen)
+
+ gen = Conv1D(128, 4, strides=2, padding='same', kernel_initializer=w_init, use_bias=None)(gen)
+ #assert model.output_shape == (None, 45, 128)
+ gen = BatchNormalization(gamma_initializer=gamma_init)(gen)
+ gen = LeakyReLU()(gen)
+
+ for i in range(6):
+ gen = residual_layer(gen)
+
+ gen = Conv1DTranspose(128, 4, strides=2, padding='same', kernel_initializer=w_init, use_bias=None)(gen)
+ #assert model.output_shape == (None, 90, 64)
+ gen = BatchNormalization(gamma_initializer=gamma_init)(gen)
+ gen = LeakyReLU()(gen)
+
+ gen = Conv1DTranspose(128, 4, strides=2, padding='same', kernel_initializer=w_init, use_bias=None)(gen)
+ #assert model.output_shape == (None, 90, 64)
+ gen = BatchNormalization(gamma_initializer=gamma_init)(gen)
+ gen = LeakyReLU()(gen)
+
+
+ out_layer = Conv1D(65, 7, strides=1, padding='same', kernel_initializer=w_init, use_bias=None)(gen)
+ #assert model.output_shape == (None, 180, 65)
+
+ model = Model(in_noise, out_layer)
+
+ return model
+
+def make_discriminator_model(n_classes=8):
+ w_init = tf.random_normal_initializer(stddev=0.02)
+ gamma_init = tf.random_normal_initializer(1., 0.02)
+
+ in_window = Input(shape=(180, 65))
+
+ disc = Conv1D(64, 4, strides=1, padding='same', kernel_initializer=w_init)(in_window)
+ disc = LeakyReLU()(disc)
+ disc = Dropout(0.3)(disc)
+
+ disc = Conv1D(64*2, 4, strides=1, padding='same', kernel_initializer=w_init)(disc)
+ disc = LeakyReLU()(disc)
+ disc = Dropout(0.3)(disc)
+
+ disc = Conv1D(64*4, 4, strides=1, padding='same', kernel_initializer=w_init)(disc)
+ disc = LeakyReLU()(disc)
+ disc = Dropout(0.3)(disc)
+
+ disc = Conv1D(64*8, 4, strides=1, padding='same', kernel_initializer=w_init)(disc)
+ disc = LeakyReLU()(disc)
+ disc = Dropout(0.3)(disc)
+
+ disc = Conv1D(64*16, 4, strides=1, padding='same', kernel_initializer=w_init)(disc)
+ disc = LeakyReLU()(disc)
+ disc = Dropout(0.3)(disc)
+
+ disc = Flatten()(disc)
+
+ disc = Dense(128)(disc)
+ disc = Dense(128)(disc)
+
+ out_layer = Dense(1)(disc)
+
+ c_out_layer = Dense(8, activation='softmax')(disc)
+
+ model = Model(in_window, out_layer)
+ c_model = Model(in_window, c_out_layer)
+
+ return model, c_model
+
+"
+"['machine-learning', 'data-preprocessing', 'normalisation']"," Title: Is data leakage relevant when scaling across samples?Body: I have a question about data leakage when pre-processing data for a neural network and whether data leakage actually applies in my instance.
+I have variance-stabilising transformed genomic data. Because it is genomic data, we know a priori that lower numbers translate to lower levels of a gene being made and vice versa. Before input into the neural network, the data are squashed to between 0 and 1 using sklearn:
+preprocessing.minmax_scale(data, feature_range=(0,1), axis=1)
+
+The min-max scaling needs to be done across each sample (axis=1), as opposed to across features, because of this a priori assumption about gene levels - low genes need to remain low and vice versa...
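+To illustrate what I mean by scaling across samples, here is a small sketch (the numbers are made up): each row is scaled using only its own min and max, so the other samples never enter the computation.
+import numpy as np
+from sklearn import preprocessing
+
+train = np.array([[2.0, 4.0, 6.0],
+                  [1.0, 5.0, 9.0]])
+test = np.array([[3.0, 3.5, 10.0]])
+
+print(preprocessing.minmax_scale(train, feature_range=(0, 1), axis=1))
+print(preprocessing.minmax_scale(test, feature_range=(0, 1), axis=1))
+# The scaled test row comes out the same whether or not the training rows are present.
+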
+Because of this, my question is: do training samples still need to be scaled separately from test samples as it doesn't seem there is a risk of data leakage here? Is this the correct assumption to make?
+"
+"['neural-networks', 'machine-learning', 'reference-request', 'ai-security']"," Title: Is there any research work on known malware detection systems based on AI?Body: I'm working on writing an article about the possibilities of modern AI-based algorithms to produce invisible self-learning malware, that can distribute itself throughout the internet and create flexible botnets.
+So far, I can not find any additional information about that, except this one.
+Is there any research work on known malware detection systems based on AI?
+"
+"['neural-networks', 'gradient-descent']"," Title: Should I use batch gradient descent when I have a small sample size?Body: I have a dataset with an input size of 155x155, with the output being 155 x 1 with a 3-4 layer neural net being used for regression. With such a small sample size, should I use full batch gradient descent (so all 155 samples) or use mini batch/stochastic gradient descent. I have read that using smaller mini batch sizes allows better generalisation, but as the batch size is very very small computationally it shouldn't be a burden to use BGD.
+"
+"['reinforcement-learning', 'python', 'deep-rl', 'open-ai']"," Title: How should I simulate this Markov Decision Process?Body: I am working on solving a problem on nodes in a graph communicating with each other. They try to estimate a central state using Kalman consensus filter, with the connections described by the graph's adjacency matrix. Considering time to be discrete, the adjacency matrix changes at each time instant with a transition probability matrix (unknown) to some other matrix (in some finite set of matrices). I want to simulate this in python to solve an MDP with some cost/reward function. (An external agent is concerned with this cost/reward and takes actions accordingly)
+Since the state space can be large, my advisor suggested using deep-RL techniques. However I have only studied (formally) basic RL (Q learning, stochastic approximation, etc. with finite states and finite actions at every instant). I tried looking at RL libraries but I can't figure out which one to pick. And even before that I am very confused by how to simulate KCF between nodes in python (from scratch?). How should I proceed?
+"
+"['reinforcement-learning', 'backpropagation', 'proximal-policy-optimization']"," Title: Why is Openai's PPO2 implementation differentiable?Body: I'm trying to understand the concept behind the implementation of the OpenAI PPO2 algorithm. The loss function that is minimized is as follows: loss = pg_loss - entropy * ent_coef + vf_loss * vf_coef
.
+First question: The computation of pg_loss
requires to use operations like tf.reduce_mean
and tf.maximum
. Are these two functions differentiable? Apparently, they are, otherwise, it would not work. Can someone explain why so I can understand the implementation?
+Second question: During training, an action is sampled by using the Gumbel Distribution: Noise from such a distribution is added to the logits
and then tf.argmax
is applied. This index is then used to calculate the negative log-likelihood. However, the tf.argmax
should also not be differentiable, so how can this work?
+"
+['recurrent-neural-networks']," Title: Can the hidden state of an RNN be a matrix?Body: If I'm dealing with a sequence of images as the input (frame by frame), and I want to output a matrix at each timestamp, can the hidden state be a matrix?
+"
+"['bayesian-deep-learning', 'bayesian-neural-networks', 'uncertainty-quantification', 'variance']"," Title: Why does this formula $\sigma^2 + \frac{1}{T}\sum_{t=1}^Tf^{\hat{W_t}}(x)^Tf^{\hat{W_t}}(x_t)-E(y)^TE(y)$ approximate the variance?Body: How does:
+$$\text{Var}(y) \approx \sigma^2 + \frac{1}{T}\sum_{t=1}^Tf^{\hat{W_t}}(x)^Tf^{\hat{W_t}}(x_t)-E(y)^TE(y)$$
+approximate variance?
+I'm currently reading What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?, and the authors wrote the above formula for the approximate estimation of the variance. I'm confused about how the above is an approximation for $\frac{\sum(y-\bar{y})^2}{N-1}$. So, in the above equation, they're using a Bayesian neural network to quantify uncertainty. $\sigma$ is the predictive variance (I'm also confused about how they get this). $x$ is the input and $y$ is the label for the classification. $f^{\hat{W_t}}(\cdot)$ outputs a mean of a Gaussian distribution, with $\sigma$ being the SD for that distribution, and $T$ is a predefined number of samples because the gradient is evaluated using Monte Carlo sampling.
+"
+"['comparison', 'search', 'a-star', 'breadth-first-search']"," Title: Is there any situation in which breadth-first search is preferable over A*?Body: Is there any situation in which breadth-first search is preferable over A*?
+"
+"['reinforcement-learning', 'q-learning', 'convergence', 'epsilon-greedy-policy', 'exploration-strategies']"," Title: Why does Q-learning converge under 100% exploration rate?Body: I am working on this assignment where I made the agent learn state-action values (Q-values) with Q-learning and 100% exploration rate. The environment is the classic gridworld as shown in the following picture.
+
+Here are the values of my parameters.
+
+- Learning rate = 0.1
+- Discount factor = 0.95
+- Default reward = 0
+
+Reaching the trophy gives the final reward; no negative reward is given for bumping into walls or for taking a step.
+After 500 episodes, the arrows have converged. As shown in the figure, some states have longer arrows than others (i.e., larger Q-values). Why is this so? I don't understand how the agent learns and finds the optimal actions and states when the exploration rate is 100% (each action, N-S-E-W, has a 25% chance of being selected).
+"
+"['machine-learning', 'classification', 'datasets']"," Title: Theoretical limits on correlation between classification algorithm performancesBody: Are there any known theoretical bounds, or at least heuristic approaches, regarding the relation or correlation between the performances of any two different classification algorithms?
+For example, would there exist binary classification datasets for which, say, $k$-nearest-neighbour classifiers would perform with say >90% accuracy, whereas say decision tree classifiers would do no better than 50-60%? (Accuracy here is measured by say $k$-fold cross-validation.)
+It seems to me, at first glance, that a dataset which is able to achieve a very high accuracy on some classification algorithm would necessarily have some structure that would make it highly improbable that some other general classification algorithm would be able to perform very poorly. Yet it's also not impossible that there might be some 'exotic' type of dataset that does exhibit such a phenomenon.
+"
+"['path-planning', 'path-finding']"," Title: How to pathfind with volatile probabilities (Slay the spire)Body: I'm attempting to write an AI for the game Slay the Spire. One of the tasks it will need to do is navigate the map. The map is a directed acyclic graph with the same start and end node.
+Each node (including the end node) will have 2 values associated with it: Expected value and death probability. The goal of the AI should be to maximize the expected value without dying. So far, none of this seems tough.
+The twist here is that the death probability changes over time. Some nodes (elites) have high volatility: The death probability may go up or down drastically as we move through the map. The pathfinding algorithm would need to consider adjacent alternatives. A path that allows me to switch to a low-volatile node if things are getting tough is important.
+As an example, the following map has two major routes. Both routes have an elite (the creature with horns), but the one on the right is a forced elite, while the one on the left can be skipped if death probability is too high. The ability for me to be flexible mid-route is an attractive feature, and one I'd like to take into account when pathfinding somehow.
+
+How can my path-finding algorithm take into account adjacent paths/path flexibility? Is this even a job for pathfinding at all?
+"
+"['reinforcement-learning', 'reference-request', 'deep-rl', 'research']"," Title: Is there a document with a list of conjectures or research problems regarding reinforcement learning (like the Millennium Prize Problems)?Body: Is there a document with a list of conjectures or research problems regarding reinforcement learning like the Millennium Prize Problems?
+"
+"['reinforcement-learning', 'q-learning', 'value-functions', 'bellman-equations']"," Title: How would I compute the optimal state-action value for a certain state and action?Body: I am currently trying to learn reinforcement learning and I started with the basic gridworld application. I tried Q-learning with the following parameters:
+
+- Learning rate = 0.1
+- Discount factor = 0.95
+- Exploration rate = 0.1
+- Default reward = 0
+- The final reward (for reaching the trophy) = 1
+
+After 500 episodes I got the following results:
+
+How would I compute the optimal state-action value, for example, for state 2, where the agent is standing, and action south?
+My intuition was to use the following update rule of the $q$ function:
+$$Q[s, a] = Q[s, a] + \alpha (r + \gamma \max_{a'}Q[s', a'] - Q[s, a])$$
+But I am not sure of it. The math doesn't add up for me (when using the update rule).
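+To be concrete, this is the update I mean, written out as a small Python function (my own sketch, with alpha = 0.1 and gamma = 0.95 as in my setup):
+def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
+    # One Q-learning backup: move Q[s][a] toward the TD target r + gamma * max_a' Q[s'][a'].
+    td_target = r + gamma * max(Q[s_next].values())
+    Q[s][a] += alpha * (td_target - Q[s][a])
+    return Q
+
+Q = {s: {a: 0.0 for a in 'NSEW'} for s in range(12)}   # toy table for a 12-state gridworld
+q_update(Q, s=2, a='S', r=0, s_next=6)
+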
+I am also wondering whether I should use the backup diagram to find the optimal state-action q value by propagating the reward (gained from reaching the trophy) back to the state in question.
+For reference, this is where I learned about the backup diagram.
+"
+"['deep-learning', 'convolutional-neural-networks', 'computer-vision']"," Title: What are the metrics to be used for unsupervised monocular depth estimation in computer vision?Body: I am currently replicating the results of this paper. In this paper they have not mentioned how they are evaluating the results as no ground truth is available for comparison. Same goes for other papers of this topic (unsupervised depth estimation). So I am very much confused about how to evaluate the model for overfitting, underfitting for this.
+"
+"['neural-networks', 'deep-learning', 'reference-request', 'gradient-descent', 'learning-rate']"," Title: Is there an ideal range of learning rate which always gives a good result almost in all problems?Body: I once read somewhere that there is a range of learning rate within which learning is optimal in almost all the cases, but I can't find any literature about it. All I could get is the following graph from the paper: The need for small learning rates on large problems
+
+In the context of neural networks trained with gradient descent, is there a range of the learning rate, which should be used to reduce the training time and get a good performance in almost all problems?
+"
+"['machine-learning', 'training', 'overfitting', 'principal-component-analysis', 'dimensionality-reduction']"," Title: When using PCA for dimensionality reduction of the feature vectors to speed up learning, how do I know that I'm not letting the model overfit?Body: I'm following Andrew Ng's course for Machine Learning and I just don't quite understand the following.
+Using PCA to speed up learning
+
+Using PCA to reduce the number of features, thus lowering the chances for overfitting
+
+Looking at these two separately, they make perfect sense. But practically speaking, how am I going to know that, when my intention is to speed up learning, I'm not letting the model over-fit?
+Do I have to find a middle ground between these two scenarios when applying PCA? If so, how exactly can I do that?
+"
+"['reinforcement-learning', 'experience-replay']"," Title: What is the purpose of storing the action $a$ within an experience tuple?Body: From what I understand, experience replay works by storing tuples of $(s, a, r, s')$ to be sampled for training. I understand why we store $s$, $r$ and $s'$. However, I do not understand the need for storing the action $a$.
+As I recall, the reward $r$ and the next state $s'$ are both used to calculate the target values. We can then compare these target values to the output we get when we do a forward pass using state $s$. It seems to me that the stored action $a$ is not required for this process to work; or am I missing something? Why would we store the action $a$ if it isn't used in the training process itself?
+Please, forgive me if this question has been answered before. I looked, but was unable to find anything other than generic explanations as to what experience replay is and why we do it.
+"
+"['natural-language-processing', 'bert', 'fine-tuning']"," Title: Adding corpus to BERT for QABody: I was wondering about SciBERT's QA abilities using SQuAD. I have a scarce textual dataset consisting of less than 100 files where doctors are discussing cancer in dialogues. I want to add it to SciBERT to see if the QA abilities will improve in the cancer disease domain.
+After concatenating them into one large file which will be our vocab, I then clean the file (all char to lower, white space splitting, char filtering, punctuation, stopword filtering, short tokens and etc) which leaves me with a list of 3000 unique tokens
+If I wanted to add these tokens, do I just do scibert_tokenizer.add_tokens(myList), where myList is the 3k tokens?
+I can confirm that more tokens are added by doing print(len(scibert_tokenizer)), and I can see that the tokenization does change, e.g. corona and ##virus changing to coronavirus and ##virus.
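+For reference, the full sequence of steps I have in mind looks roughly like this (a sketch; the model class and the example tokens are just placeholders, not my actual setup):
+from transformers import AutoTokenizer, AutoModelForQuestionAnswering
+
+tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
+model = AutoModelForQuestionAnswering.from_pretrained("allenai/scibert_scivocab_uncased")
+
+my_list = ["chemoradiotherapy", "neoadjuvant"]      # stand-in for my 3k tokens
+num_added = tokenizer.add_tokens(my_list)           # returns how many tokens were actually new
+model.resize_token_embeddings(len(tokenizer))       # grow the embedding matrix to match
+print(num_added, len(tokenizer))
+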
+Does the model need to be trained from scratch again?
+"
+"['deep-rl', 'dqn', 'epsilon-greedy-policy']"," Title: $\epsilon$-greedy policy in environments where actions are performed in a long term. Does it has influence?Body: I'm working in an environment where once an action $a \in A$ is performed, it must hold this action selection for a while. To clarify this, assumes a horizon length $h$ and the set of actions: $\{a_{1}, a_{2}, a_{3}\} \in A$. Assumes now that the length $h$ to converge for an optimal solution must be split by 3 where each action is applied distinctly because if one of these actions is selected twice during an episode, the reward penalizes it severely in such way that there's no possible policy that can get a better return than selects each action just once. Thus, the length of each of "sub-horizon" $h_{i}$ is $h_{i} > 0$ and $h_{i} < h - 2$.
+Now seeing the DQN setting in the naive approach for exploration using $\epsilon$-greedy policy that selects at each time-step an action following:
+n = random value between 0 and 1
+if n < epsilon then:
+ a = random action in A
+else:
+ a = maxQ(S, a)
+
+For my particular problem (and probably many with similar settings), this way of selecting an action seems counterproductive and makes it hard to converge to the optimal solution, shifting the distribution of samples far away from it. Does this make sense?
+Actually, I believe that sticky actions could be quite beneficial for many similar settings, beyond the scope of their original approach.
+"
+"['reinforcement-learning', 'multi-armed-bandits', 'continuous-action-spaces', 'algorithm-request']"," Title: Which RL algorithm would be suitable for this multi-dimensional and continuous action space?Body: Is there an RL approach/algorithm that would be suited for the following kind of problem?
+
+- There is a continuous action space with an action value $A_{a,t}$ for each action dimension $a$.
+- The objective function is a non-linear function of the satisfaction factors $S_{s}$ for each satisfaction dimension $s$ and some other random & independent factors. This objective function can be known to the agent.
+- Each satisfaction factor depends on an independent variable $\Delta^S_{s,t}$ and the effect $\delta^S_{a,s,t}$ of each action: $S_{s,t}=\Delta^S_{s,t} +\sum_a A_{a,t} * \delta^S_{a,s,t}$.
+- Each action can further have an effect $\delta^R_{a,r,t}$ on the inventory factors $I_{r,t}$ for each resource dimension $r$, with inventories being kept between time-steps and a factor $\Delta^R_{r,t}$ that is added or removed from the inventory at each step independent of the actions: $I_{r,t+1}=I_{r,t}+\Delta^R_{r,t} + \sum_a A_{a,t} * \delta^R_{a,r,t}$
+- The agent is constrained by each of these resources (i.e. the inventory has to remain positive).
+- The agent should be able to deal both with $\delta$ and $\Delta$ factors that are visible (states) and invisible (have to be learned).
+- A trained agent should be able to know how to adapt to changes of the $\delta$ and $\Delta$ factors, as well as the introduction or removal of activity dimensions.
+
+EDIT: I have adapted the problem description after some feedback.
+"
+['deep-learning']," Title: Using numerical/categorical data and image data to detect objectsBody: Let's say that I want to create a program capable of detecting lamps on some pictures. Those pictures can be, for instance, of a room, a street, etc.
+I would like to know if the following is possible:
+
+- Create a program that is trained using both pictures of lamps (so a dataset of images) and numerical/categorical data of lamps (so that would be a .csv file which contains features ranging from type, height, etc.)
+The idea is to combine both types of data in order to detect on another unseen picture if it contains a lamp.
+
+I'm not entirely sure if the resulting algorithm will be performant, but I would like to try.
+If what I described above is possible, could you please point me in the right direction as to which literature to consult? My research has led me nowhere.
+"
+"['neural-networks', 'backpropagation', 'gradient-descent', 'feedforward-neural-networks', 'batch-normalization']"," Title: Bias gradient of layer before batch normalization always zeroBody: From the original paper and this post we have that batch normalization backpropagation can be formulated as
+
+I'm interested in the derivative of the previous layer outputs $x_i=\sigma(w X_i+b)$ with respect to $b$, where $\{X_i\in\mathbb{R}, i=1,\dots,m\}$ is a network input batch and $\sigma$ is some activation function with weight $w$ and bias $b$.
+I'm using Adam optimizer so I average the gradients over the batch to get the gradient $\theta=\frac{1}{m}\sum_{i=1}^m\frac{\partial l}{\partial x_i}\frac{\partial x_i}{\partial b}$.
+Further, $\frac{\partial x_i}{\partial b}=\frac{\partial}{\partial b}\sigma(wX_i+b)=\sigma'(wX_i+b)$.
+I am using ReLu activation function and all my inputs are positive, i.e. $X_i>0 \ \forall i$, as well as $w>0$ and $b=0$. That is, I get
+$\frac{\partial x_i}{\partial b} = 1\ \forall i$. That means that my gradient $\theta=\frac{1}{m}\sum_{i=1}^m\frac{\partial l}{\partial x_i}$ is just the average over all derivatives of the loss function with respect to $x_i$. But summing all the $\frac{\partial l}{\partial x_i}$ up is zero, which can be seen from the derivations of $\frac{\partial l}{\partial x_i}$ and $\frac{\partial l}{\partial \mu_B}$.
+This means that my bias change would always be zero, which makes no sense to me. Also i created a neural network in Keras just for validation and all my gradients match except the bias gradients, which are not always zero in Keras.
+Does one of you know where my mistake is in the derivation?
+Thanks for the help.
+"
+"['ai-design', 'ai-basics']"," Title: AI approach for layout mappingBody: I am researching different AI approaches and was curious what approach would be useful in my scenario.
+Assume you are tiling a room. The tiles, and the room itself, can be any shape. In this room you could encounter N number of obstacles, such as a wall, or built-in. The goal is to layout the tiles, taking into account cutting into the obstacles mentioned above, along with the shape and dimensions of the destination room. This would have to account the shape, and measurements of said tile being placed onto the room.
+Which AI approach would prove useful in this scenario?
+"
+"['neural-networks', 'autoencoders', 'principal-component-analysis']"," Title: Why Autoencoder Weights Are Not Always TiedBody: To me, tying weights in an autoencoder makes sense if we think of the auto encoder as doing PCA. Why in any situation would it make sense to not tie the weights? If we don't tie the weights, would it not try to learn something that is PCA anyway or rather something that might not be as optimal as PCA?
+Also, if weights are not tied, it doesn't make sense to me that the auto-encoder is invertible i.e. if the decoder is looking for an inverse operation because it's a mapping between spaces of different dimension which should not invertible.
+So, if the weights are not tied then why do we expect the decoder to learn anything meaningful i.e neither PCA nor an inverse operation?
+"
+"['transformer', 'attention', 'encoder-decoder']"," Title: How is the transformers' output matrix size arrived at?Body: In this tensorflow article, the comments in the code say that MHA should output with one of the dimensions being the sequence length of the query/key. However, that means that the second MHA in the decoder layer should output something with one of the dimensions being the input sequence length, but clearly it should actually be the output sequence length! From all that I have read on transformers, it seems that the output of the left side of the SHA should be a matrix with dimensions q_seq_length x q_seq_length, and the output of the right side of the SHA should be v_seq_length x d_model. These matrices can't even multiply when using the second MHA in the decoder to incorporate the encoder output! Please help. I would appreciate a clear-cut explanation. Thanks
+"
+"['machine-learning', 'natural-language-processing', 'recurrent-neural-networks', 'transformer', 'natural-language-generation']"," Title: How are certain machine learning models able to produce variable-length outputs given variable-length inputs?Body: Most machine learning models, such as multilayer perceptrons, require a fixed-length input and output, but generative (pre-trained) transformers can produce sentences or full articles of variable length. How is this possible?
+"
+"['game-ai', 'game-theory', 'decision-trees', 'tree-search']"," Title: Find the expected reward in an expectimax-based dice rolling game?Body: I have this question that I'm kinda stuck on.
+It's a game scenario in which we set up an expectimax tree. In the game, you have 3 dice with sides 1-4 that you roll at the beginning. Then, depending on the roll, the player can choose one of the dice to reroll or not reroll anything. Points are assigned like so:
+
+- 10 points if there's 2 of a kind
+- 15 if there's 3 of a kind
+- 7 if there's a series like 1-2-3 or 2-3-4
+- Otherwise, or if the sum is higher than the rewards from above, the score = sum of the rolls
+
+For additional context, this is an example expectimax tree I came up with, for the case that the player rolled a 1,2,4 and is considering rerolling or not:
+
+Now let's introduce a new agent -- a robot that's supposed to help the human player. We assume
+
+- the human player chooses any action with uniform probability regardless of the initial roll
+- there's a robot that, given a configuration of dice and the human's desired action, actually implements the action with probability 1-p and overrides it with a "no reroll" order with probability p>0. It has no effect if the human's decision is already to not reroll.
+
+For that scenario, I came up with this expectimax tree:
+
+Now for the part I'm actually stuck on -- let's define A, B, C, and D as the expected reward of performing actions "reroll die 1", "reroll die 2", "reroll die 3", and "no reroll." How do we find $R_H$, the expected reward for the human acting without the robot's help, and $R_{AH}$, the expected reward if the robot helps? We can't use p in the expression; we only have access to A, B, C, D and we're supposed to write it in the form $X + Yp$.
+*EDIT: I asked again and the question was worded weirdly. They said we should definitely use p. What was meant by not using p is X and Y themselves can't contain p. But Y will be multiplied by p in the final simplified form.
+For $R_{H}$ I think the answer should be $\frac{A + B + C + D}{4}$ because of the uniform distribution over A-D.
+I'm supposing that $R_{AH}$ would be $\frac{(A + B + C)(1-p) + D + Dp}{4}$? Because the robot doesn't override with probability $1-p$, but he can only override A-C and does with probability $p$.
+I think something feels slightly wrong about my answer but I'm not sure what.
+"
+"['deep-learning', 'regression', 'geometric-deep-learning']"," Title: How to restrain a model's outputs to a certain range without affecting its representative capacity?Body: CONTEXT
+I am trying to build a regression model that finds the optimal parameters for a given input. The data I am using are point clouds, with N points and 3 coordinates (x, y, z) each. Each point cloud is divided into neighborhoods of constant size and, during inference, a batch of these neighborhoods is fed into the model, which outputs a set of parameters. The parameters represent a family of surfaces, and the goal is to find parameters such that the surface fits the neighborhood of points as tightly as possible (in the least squares sense).
+THE ISSUE
+The problem is that each type of parameter must fall into a specific range, otherwise it has no meaning. For example, the first two parameters must lie inside [0.1, 1.9], the next three must be strictly positive, etc. I have tried restraining the outputs by adding a scaled sigmoid activation or simply clamping the output to the range that I want. However, it seems that such hacks result in saturation: the model outputs negative values and all the outputs become 0 from clamping.
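+To make the ranges concrete, here is a sketch (a NumPy toy with made-up names) of the kind of mapping I mean: a scaled sigmoid for the bounded parameters, and a softplus for the strictly positive ones purely as one possible example.
+import numpy as np
+
+def map_to_ranges(raw):
+    # raw: (batch, 5) unconstrained network outputs
+    sig = 1.0 / (1.0 + np.exp(-raw[:, 0:2]))
+    p_bounded = 0.1 + 1.8 * sig                    # first two parameters squashed into [0.1, 1.9]
+    p_positive = np.log1p(np.exp(raw[:, 2:5]))     # softplus, strictly positive
+    return np.concatenate([p_bounded, p_positive], axis=1)
+
+print(map_to_ranges(np.random.randn(2, 5)))
+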
+I can't imagine I'm the first one to encounter such a problem, but I haven't been able to find a way to solve it. Is there a de facto way of dealing with this situation?
+P.S. I am not including details of the model architecture to keep this question general interest, but I will include them upon request, if it helps.
+"
+"['classification', 'algorithm-request']"," Title: Can I do topic classification of Arabic text (software requirements) without a training dataset?Body: I am trying to make a text classification for Arabic data. The problem is that there is no labeled Arabic dataset for this data. My question is then: is possible to do a classification without a training dataset? If yes, what methods can I use?
+"
+"['computational-learning-theory', 'wasserstein-gan', 'sample-efficiency']"," Title: How can I estimate the minimum number of training samples needed to get interesting results with WGAN?Body: Let's say we have a WGAN where the generator and critic have 8 layers and 5 million parameters each. I know that the greater the number of training samples the better, but is there a way to know the minimum number of training examples needed? Does it depend on the size of the network or the distribution of the training set? How can I estimate it?
+"
+"['data-preprocessing', 'features', 'feature-engineering']"," Title: Does feature scaling have any benefits if all features are on the same scale?Body: By scaling features, we can prevent one feature from dominating the decisions of a model. For example, say heights (cm), and age (years) are two features in my data. Since range of heights is larger than of years, a trained model could weight importance of heights much more than years. This could result in a poor model in return.
+However, say that all of my features are binary, they take a value of either 0 or 1. In such a case, does feature scaling still have any benefits?
+"
+"['neural-networks', 'backpropagation', 'history', 'weights', 'perceptron']"," Title: Why did the developement of neural networks stop between 50s and 80s?Body: In a video lecture on the development of neural networks and the history of deep learning (you can start from minute 13), the lecturer (Yann LeCunn) said that the development of neural networks stopped until the 80s because people were using the wrong neurons (which were binary so discontinuous) and that is due to the slowness of multiplying floating point numbers which made the use of backpropagation really difficult.
+He said, I quote, "If you have continuous neurons, you need to multiply the activation of a neuron by a weight to get a contribution to the weighted sum."
+But the statement stays true even with binary (or any discontinuous activation function) neurons. Am I wrong? (At least, as long as you're in a hidden layer, the output of your neuron will be multiplied by a weight, I guess.) The same professor said that the perceptron and ADALINE relied on weighted sums, so they were computing multiplications anyway.
+I don't know what I'm missing here and I hope someone will enlighten me.
+"
+"['reinforcement-learning', 'markov-decision-process']"," Title: Does stochasticity of an environment necessarily mean non-stationarity in MDPs?Body: Is a stochastic environment necessarily also non-stationary? To elaborate, consider a two-state environment ($s_1$ and $s_2$), with two actions $a_1$ and $a_2$. In $s_1$, taking action $a_1$ has a certain probability $p_1$ of transitioning you into $s_2$, and a probability $1-p_1$ of keeping you in $s_1$. There is also a similar probability for taking $a_2$ in $s_1$, and taking either action in $s_2$. Let's also say that there is a reward $r$ given only when a transition occurs from either state, and 0 otherwise. This is a stochastic environment. But isn't this non-stationary in one sense and stationary in another? I think it is stationary because the expected return from taking a particular action in a particular state converges to a constant value. But it is non-stationary in the sense that the reward obtained from taking a certain action in a given state may change at a given time. Which is really the case?
+"
+"['papers', 'transpose-convolution']"," Title: How to implement the deconv which is used in “Visualizing and Understanding Convolutional Networks”Body: I'm trying to understand the deconv referenced in the paper Visualizing and Understanding Convolutional Networks
+The paper states (section 2, p. 3):
+
+the deconvnet uses transposed versions of the same filters, but applied to the rectified maps
+
+Is it possible to implement this step in a short code example? Given an unpooled, rectified map; how would the transposed filter be applied against it?
+I did try looking at the referenced paper Adaptive Deconvolutional Networks for Mid and High Level Feature Learning. However, I'm not wrapping my head around its explanations too well; and it references a third paper with regard to its work on "layers of convolutional sparse coding" (deconvolution [M. Zeiler, D. Krishnan, G. Taylor, and R. Fergus. Deconvolutional networks. In CVPR, 2010]), but this 2010 paper appears to require access to download.
+"
+"['neural-networks', 'classification', 'computational-learning-theory', 'architecture', 'capacity']"," Title: In classification, how does the number of classes affect the model size and amount of data needed to train?Body: When solving a classification problem with neural nets, be it text or images, how does the number of classes affect the model size and amount of data needed to train?
+Are there any soft or hard limitations where the number of outputs starts to stall learning?
+Do you know about any analysis of how the number of classes scales the model?
+Does the optimal size increase proportionally with the number of outputs? Does it increase at all? If it does increase, is the relationship linear or exponential?
+"
+"['game-ai', 'search', 'heuristics']"," Title: Determining minimal state representation for maze gameBody: I came across this question set. It asks following question:
+
+Let’s revisit our bug friends from assignment 2. To recap, you control one or more insects in a rectangular maze-like environment with dimensions M × N , as shown in the figures below. At each time step, an insect can move North, East, South, or West (but not diagonally) into an adjacent square if that square is currently free, or the insect may stay in its current location. Squares may be blocked by walls (as denoted by the black squares), but the map is known.
+For the following questions, you should answer for a general instance of the problem, not simply for the example maps shown.
+(a) You now control a single flea as shown in the maze above, which must reach a designated target location X. However, in addition to moving along the maze as usual, your flea can jump on top of the walls. When on a wall, the flea can walk along the top of the wall as it would when in the maze. It can also jump off of the wall, back into the maze. Jumping onto the wall has a cost of 2, while all other actions (including jumping back into the maze) have a cost of 1. Note that the flea can only jump onto walls that are in adjacent squares (either north, south, west, or east of the flea).
+i. Give a minimal state representation for the above search problem.
+Sol. The location of the flea as an (x, y) coordinate.
+ii. Give the size of the state space for this search problem.
+Sol. M ∗ N
+(b) You now control a pair of long lost bug friends. You know the maze, but you do not have any information about which square each bug starts in. You want to help the bugs reunite. You must pose a search problem whose solution is an all-purpose sequence of actions such that, after executing those actions, both bugs will be on the same square, regardless of their initial positions. Any square will do, as the bugs have no goal in mind other than to see each other once again. Both bugs execute the actions mindlessly and do not know whether their moves succeed; if they use an action which would move them in a blocked direction, they will stay where they are. Unlike the flea in the previous question, bugs cannot jump onto walls. Both bugs can move in each time step. Every time step that passes has a cost of one.
+i. Give a minimal state representation for the above search problem.
+Sol. A list of boolean variables, one for each position in the maze, indicating whether the position could contain a bug. You don’t keep track of each bug separately because you don’t know where each one
+starts; therefore, you need the same set of actions for each bug to ensure that they meet.
+ii. Give the size of the state space for this search problem.
+Sol. $2^{MN}$
+
+I don't get why (a).i. uses $(x,y)$ coordinates whereas (b).i. uses a boolean list. I guess they can be used interchangeably, right? And correspondingly the answers to ii will change.
+Update
+I now understand following:
+
+For single flea maze, the representation $(x,y)$ will have $M\times N$ state space, whereas boolean list will have $2^{M\times N}$ state space. For two bug maze, the representation $(x_1,y_1,x_2,y_2)$ will have $(M\times N)^2$ state space, whereas boolean list will have $2^{M\times N}$ state space. I am able to understand, we prefer $(x,y)$ representation for single flea maze since $M\times N < 2^{M\times N}$. But for two bug maze, I am not able to understand why we prefer boolean list representation (and not $(x_1,y_1,x_2,y_2)$ representation), since $(M\times N)^2<2^{M\times N}$.
+
+"
+"['convolutional-neural-networks', 'object-detection', 'image-segmentation', 'semantic-segmentation']"," Title: What are the state-of-the-art Person-Detektion / Human-Segmentation?Body: I would like to use a deep learning approach to detect people in videos. I have found some freely accessible implementations like Human Segementation with Pytorch or BodyPix / DeepLab / Pixellib with Tensorflow. They all work well, but with many it happens that, for example, half hand is not detected or if a person is sitting in the picture only the legs and the head are detected. Are there other approaches to detect people who are freely accessible or is that state-of-the-art?
+I had imagined such problems have been solved, but I don't know so much about it. Thanks for your answers.
+"
+"['deep-learning', 'image-segmentation', 'u-net']"," Title: How to use mixed data for image segmentation?Body: I have a task for which I have to do image segmentation (cancer detection on MRIs). If possible, I would also like to include clinical data (i.e. numeric/categorical data which comes in the form of a table with features such as age, gender, ...).
+I know that for classification purposes, it's possible to create a model that uses both numeric data as well as image data (as mentioned in the paper by Huang et al. : "Multimodal fusion with deep neural networks for leveraging CT imaging and electronic health record: a case‑study in pulmonary embolism detection"
+The problem I have is that, for image segmentation tasks, it doesn't really make sense to me as to how to use both types of data.
+In the above-mentioned paper, they create one model with only the image data and another with only the numeric data, and then they fuse them (there are multiple strategies for fusing them together). For classification tasks, it makes sense. However, for my task, it does not make sense to have a model which only uses the clinical data for image segmentation and that's where I get confused.
+How do you think I should proceed with my task? Is it even possible to mix both types of data for image segmentation tasks?
+"
+"['reinforcement-learning', 'training', 'q-learning', 'markov-decision-process']"," Title: What is a good convergence criterion for Q-learning in a stochastic environment?Body: I have a stochastic environment and I'm implementing a Q-table for the learning that happens on the environment. The code is shown below. In short, there are ten states (0, 1, 2,...,9), and three actions: 0, 1, and 2. The action 0 does nothing, action 1 subtracts 1 with a probability of 0.7, and action 2 adds 1 with a probability of 0.7. We get a reward of 1 when we are in state 5, and 0 otherwise.
+import numpy as np
+import matplotlib.pyplot as plt
+
+def reward(s_dash):
+ if s_dash == 5:
+ return 1
+ else:
+ return 0
+states = range(10)
+Q = np.zeros((len(states),3))
+Q_previous = np.zeros((len(states),3))
+episodes = 2000
+trials = 100
+alpha = 0.1
+decay = 0.995
+gamma = 0.9
+ls_av = []
+ls = []
+for episode in range(episodes):
+ print(episode)
+ s = np.random.choice(states)
+ eps = 1
+ for i in range(trials):
+ eps *= decay
+ p = np.random.random()
+ if p < eps:
+ a = np.random.randint(0,3)
+ else:
+ a = np.argmax(Q[s, :])
+
+ if a == 0:
+ s_dash = s
+ elif a == 1:
+ if p >= 0.7:
+ s_dash = max(s-1, 0)
+ else:
+ s_dash = s
+ else:
+ if p >= 0.7:
+ s_dash = min(s+1, 9)
+ else:
+ s_dash = s
+ r = reward(s_dash)
+ Q[s][a] = (1-alpha)*Q[s][a] + alpha*(r + gamma*np.max(Q[s_dash]))
+ s = s_dash
+ ls.append(np.max(abs(Q - Q_previous)))
+ Q_previous = np.copy(Q)
+print(Q)
+for i in range(10):
+ print(i, np.argmax(Q[i, :]))
+plt.plot(ls)
+plt.show()
+
+
+When I plot the absolute value of the maximum change in the Q-table at the end of each episode, I get the following, which indicates that the Q-table is constantly being updated.
+
+However, I see that when I print out the action with the max Q-value for each state, it shows what I expect to be the optimal policy. For each state, the best action is given as shown below:
+(0, 2)
+(1, 2)
+(2, 2)
+(3, 2)
+(4, 2)
+(5, 0)
+(6, 1)
+(7, 1)
+(8, 1)
+(9, 1)
+
+
+My question is: why do I not have convergence in the Q-table? If I had a stochastic environment for which I didn't know before-hand what the optimal policy is, how will I be able to judge if I need to stop training when the Q-table isn't converging?
+"
+"['convolutional-neural-networks', 'probability-distribution', 'explainable-ai', 'binary-classification']"," Title: Why are CNN binary classifier output probability distributions often skewed?Body: I've been working on a lot of simple resnet18 binary classifiers lately and I've started to notice that the probability distributions are often skewed one way or the other. This figure shows one such example. The red and blue color code the negative and positive ground truths respectively. And the bottom axis is the output prediction of the binary classifier (sigmoid activated output neuron). Notice how the red is more bunched towards 0, but the blue has quite some spread.
+
+At first I began to reason this to myself with arguments like "well the positive clues in the image have a small footprint, so they are hard to find, therefore the model should be unsure about positives more of the time."
+Later I found oppositely skewed distributions and tried to say "well the positive clues in the image have a small footprint, so it might be easy to confuse some other things for the positive clues, therefore the model should be unsure about negatives more of the time"
+You can see where I'm going with this. It took me training up quite a few models like this in a short amount of time to realise I was kidding myself. Even the exact same architecture and similar dataset may produce a different skew over different training runs. And if you think about it, negative probability is just the complement of positive probability, so any argument you make in favor of one over the other can be easily reversed.
+So what's influencing this skew? Why is there a skew at all? If there's a skew, is it because of something "real", or is it just random?
+These all may seem like philosophical questions, but they have great practical significance. Because that skew basically tells me where I should put my decision threshold in production level inference!
+"
+"['reinforcement-learning', 'q-learning', 'markov-decision-process', 'pomdp']"," Title: Are there any known disadvantages of implementing vanilla Q-learning on a discretized-state space environment?Body: For an RL problem on a continuous state space, the states could be discretized into buckets and these buckets used in implementing the Q-table. I see that is what is done here. However, according to van Hasselt from his book, this discretization changes the problem into a partially observable MDP (POMDP), and this is understandable. And I know POMDPs require special treatment from the vanilla Q-learning we are used to (observation space, belief states, etc).
+But my question is: is there a specific technical reason why a discretized-state problem (which is now POMDP) should be solved using POMDP algorithms, instead of plainly constructing a vanilla Q-table using the discretized states (i.e. the buckets from discretization)? In other words, is there a disadvantage in not using POMDP algorithms to tackle the discretized-state problem?
+"
+"['unsupervised-learning', 'pattern-recognition', 'text-classification', 'algorithm-request']"," Title: Which algorithm can be used for extracting text patterns in tabular data?Body: I am working with tabular data that is similar to the below:
+
+| Name | Phone Number | ISO3 Country | Amount | Email | ... | ... | Outcome | Possible Reason |
+|------|--------------|--------------|--------|-------|-----|-----|---------|-----------------|
+| Leona Sunfurry | (555)-555-5555 | United States | 58.96 | leo_sun@gmail.com | ... | ... | 0 | Not ISO3 country |
+| Diana Moonglory | (333)-555-5555 | USA | 8.32 | di.moon@gmail.com | ... | ... | 1 | |
+| Fiora Quik | (111)-555-5555 | FRA | 0.35 | null | ... | ... | 1 | |
+| Darius Guy | 12345678901234 | CAN | 555.01 | null | ... | ... | 0 | Too many digits in phone |
+| LULU | (333)-555-5555 | CAN | 0.00 | null | ... | ... | 0 | Odd name format |
+| Eve K. | (111)-555-5555 | FRA | 69.25 | e.k@gmail.com | ... | ... | 1 | |
+| Lucian Light | (999)-555-5555 | ENG | 65.00 | null | ... | ... | 1 | |
+| Lux D. | (333)-555-5555 | USA | 11.64 | test@test.com | ... | ... | 1 | |
+| Jarvin Crown | (333)-555-5555 | USA | 1357.13 | j4@gmail.com | ... | ... | 0 | Unknown reason |
+
+
+
+The table contains information about users. Some of the fields are user-generated while others are generated by the program (like device location, amount, etc.). When this data is collected, it is sent to third parties (we will say a bank). Sometimes the bank rejects the data and it is not good for our users. The rejection could have happened because the user did not input the data correctly or the banks did not like how a field is formatted despite the data being correct and acceptable to other banks.
+So we want to find the fields that are causing the most errors and how to fix the issue.
+Does it make sense to do pattern recognition on the values to find the reason why the row was rejected? It would need to be an alpha-numeric type of algorithm, it seems.
+We know the outcomes from the bank, which are labeled as Outcome. Although we have labeled data, it still feels like we need an unsupervised learning algorithm, because we do not have labels for why the rows of data were rejected.
+Does anyone know what type of algorithm would be best? Any feedback would be appreciated!
+"
+"['neural-networks', 'tensorflow', 'recurrent-neural-networks', 'long-short-term-memory']"," Title: If I want to predict two unrelated values given the same sequence of data points, should I have a model with two outputs or two models?Body: I want to predict two separate y-values (not really logically connected) based on an input sequence of data (values x). Using LSTM cells.
+Should I train two models separately or should I just increase the dimension of the last layer to 2 instead of 1 and feed the fitting algorithm y-values as 2D pairs? In other words, will there be a lot of interference or can I expect on average similar results with one or two models?
+"
+"['neural-networks', 'reference-request', 'backpropagation', 'feedforward-neural-networks', 'multilayer-perceptrons']"," Title: What are examples of good free books that cover the back-propagation algorithm?Body: What are examples of good free books that cover the back-propagation used to train multilayer perceptrons? I've just started to learn about artificial neural networks, so I'm looking for books that cover the theoretical basics of back-propagation.
+"
+"['neural-networks', 'convolutional-neural-networks', 'deep-neural-networks', 'features', 'representation-learning']"," Title: How do we know that the neurons of an artificial neural network start by learning small features?Body: I'd like to ask you how do we know that neural networks start by learning small, basic features or "parts" of the data and then use them to build up more complex features as we go through the layers. I've heard this a lot and seen it on videos like this one of 3Blue1Brown on neural networks for digit recognition. It says that in the first layer the neurons learn and detect small edges and then the neurons of the second layer get to know more complex patterns like circles... But I can't figure out based on pure maths how it's possible.
+"
+"['deep-learning', 'convolutional-neural-networks', 'model-request']"," Title: Setting up a deep learning architecture for multi-dimensional dataBody: The input data is thousands, millions of 4x1000 matrices. Each row consists of 3 small natural numbers (1000 combinations) and a corresponding real number between 0 and 1.
+The output is a 1x1000 vector for each of the matrices. Each output vector value [1 or 0] is not defined by the corresponding 4-argument entry, but the whole 4x1000 matrix. Each input matrix defines a few valid output 1x1000 vectors that cannot be computed analytically from the 4x1000 matrices.
+What would be the options for setting up a deep learning architecture to try to tackle this challenge?
+"
+"['reinforcement-learning', 'dqn', 'pytorch', 'state-spaces']"," Title: What's the best way to take a list of lists as DQN input?Body: I have my own environment for the DQN algorithm. In my environment, the state space is represented by a list of lists, where each sublist can be of different lengths. In my case, the length of the global list is 300 and the length of each of the sublists varies from 0 to 10. What is the best way to use such state representation as a DQN input if I want to use the PyTorch platform?
+# example state with only 4 sublists; each sublist length can be at most 5
+state = [[1,2,3,4], [1,20,20], [10], [20,4,5,6,7]]
+
+I am thinking of using the raw data with zero(s) at the end of every sublist to make them all of equal length.
+state = [[1,2,3,4,0], [1,20,20,0,0], [10,0,0,0,0], [20,4,5,6,7]]
+
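+In code, that padding plus conversion could look roughly like this (my own sketch; pad_sequence does the zero-padding for me):
+import torch
+from torch.nn.utils.rnn import pad_sequence
+
+state = [[1, 2, 3, 4], [1, 20, 20], [10], [20, 4, 5, 6, 7]]
+padded = pad_sequence([torch.tensor(s, dtype=torch.float32) for s in state],
+                      batch_first=True)   # shape (4, 5), zero-padded at the end
+flat = padded.flatten()                   # shape (20,), usable as a flat DQN input
+print(padded)
+print(flat.shape)
+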
+Then I can convert the padded lists to a torch.tensor (and maybe flatten it, as in the sketch above) and take that as the DQN input. However, I am wondering - is there a better approach?
+"
+"['deep-learning', 'convolutional-neural-networks']"," Title: Why identity mapping is so hard for deeper neural network as suggested by Resnet paper?Body: In resnet paper they said that a deeper network should not produce more error than its shallow counterpart since it can learn the identity map for the extra added layer. But empirical result shown that deep neural networks have a hard time finding the identity map. But the solver can easily push all the weights towards zero and get an identity map in case of residual function($\mathcal{H}(x) = \mathcal{F}(x)+x$). My question is why it is harder for the solver to learn identity maps in the case of deep nets?
+Generally, people say that neural nets are good at pushing the weights towards zero. So it is easy for the solver to find identity maps for residual function. But for ordinary function ($\mathcal{H}(x) = \mathcal{F}(x)$) it have to learn the identity like any other function. But I do not understand the reason behind this logic. Why neural nets are good to learn zero weights ?
+"
+"['natural-language-processing', 'recurrent-neural-networks', 'seq2seq', 'padding']"," Title: What is the difference between zero-padding and character-padding in Recurrent Neural Networks?Body: For RNN's to work efficiently, we vectorize the operations, which results in an input matrix of shape
+(m, max_seq_len)
+
+where m
is the number of examples, e.g. sentences, and max_seq_len
is the maximum length that a sentence can have. Some examples have smaller lengths than this max_seq_len
. A solution is to pad these sentences.
+One method to pad the sentences is called "zero-padding". This means that each sequence is padded with zeros. For example, given a vocabulary where each word is related to some index number, we can represent a sentence with length 4,
+I am very confused
+
+by
+[23, 455, 234, 90]
+
+Padding it to achieve a max_seq_len=7
, we obtain a sentence represented by:
+[23, 455, 234, 90, 0, 0, 0]
+
+The index 0 is not part of the vocabulary.
+Another method to pad is to add a padding character, e.g. <<pad>>
, in our sentence:
+I am very confused <<pad>>> <<pad>> <<pad>>
+
+to achieve the max_seq_len=7
. We also add <<pad>>
in our vocabulary. Let's say its index is 1000. Then the sentence is represented by
+[23, 455, 234, 90, 1000, 1000, 1000]
+
+I have seen both methods used, but why is one used over the other? Are there any advantages or disadvantages comparing zero-padding with character-padding?
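+For concreteness, the two variants I am comparing would look roughly like this in code (a sketch; 1000 is just the made-up index of <<pad>> from above):
+from tensorflow.keras.preprocessing.sequence import pad_sequences
+
+sentence = [[23, 455, 234, 90]]
+zero_padded = pad_sequences(sentence, maxlen=7, padding='post', value=0)
+char_padded = pad_sequences(sentence, maxlen=7, padding='post', value=1000)  # index of <<pad>>
+print(zero_padded)  # roughly [[23 455 234 90 0 0 0]]
+print(char_padded)  # roughly [[23 455 234 90 1000 1000 1000]]
+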
+"
+"['terminology', 'goal-based-agents']"," Title: What are the different types of goals for an AI system called?Body: I remember reading about two different types of goals for an intelligence. The gist was that the first type of goal is one that "just is" - it's an end goal for the system. There doesn't need to be any justification for wanting to achieve that goal, since wanting to do that is a fundamental purpose for that system. The second type of goal is a stepping stone, for lack of better words. Those aren't end goals in and of themselves, but they would help the system achieve its primary goals better.
+I've forgotten the names for these types of goals and Googling didn't help me much. Is there a standard definition for these different types of goals?
+"
+"['proofs', 'a-star', 'admissible-heuristic', 'tree-search', 'heuristic-functions']"," Title: If $h_1(n)$ is admissible, why does A* tree search with $h_2(n) = 3h_1(n)$ return a path that is at most thrice as long as the optimal path?Body: Consider a heuristic function $h_2(n) = 3h_1(n)$. Where $h_1(n)$ is admissible.
+Why are the following statements true?
+
+- $A^*$ tree search with $h_2(n)$ will return a path that is at most thrice as long as the optimal path.
+- $h_2(n) + 1$ is guaranteed to be inadmissible for any $h_1(n)$
+
+"
+"['convolutional-neural-networks', 'data-preprocessing', 'normalisation', 'standardisation']"," Title: How to normalize images before training?Body: I have seen people normalize images by just dividing 255. But why? Why not use mean normalization or Z-score Normalization? I also came across this StackOverflow topic while searching but the answers there were not enough enlightening for me.
+"
+"['deep-learning', 'residual-networks']"," Title: What are the benefits of Cross Stage Partial Connections over Residual Connections?Body: Cross Stage Partial Connections (CSPC) try to solve the next problems:
+
+- Reduce the computations of the model in order to make it more suitable for edge devices.
+- Reduce memory usage.
+- Better backpropagate the gradient.
+
+I cannot really understand how the first two points are actually achieved with this type of connection. Furthermore, in CSPC, the skip connection is just a slice of the feature map, and in Residual Connections, the skip connection is all the feature map. Aren't CSPC and Residual Connections (with concatenation) actually "almost" the same thing? Then, what advantages do you get for connecting with deeper layers only a slice of the previous feature map (CSPC) vs the whole feature map (Residual Connection)?
+"
+"['deep-learning', 'classification', 'regression', 'imbalanced-datasets']"," Title: Handling imbalanced data with multiple targetsBody: I have the model which has 3 outputs (it is a regression task, I have the angle of the steering wheel, brake and acceleration). I can divide my values to some smaller bins and in this way I can change this into classification problem. I can balance data to have the same number of data points in each bin.
+But now I wonder how to balance this data correctly.
+I found some good resources and libraries
+imbalanced-learn | Python official documentation
+multi-imbalance | Python official documentation
+Multi-imbalance | Poznan University of Technology
+But to my understanding, these algorithms can deal with imbalanced data (in normal and multi class classification) only if you have one output.
+But I have 3 outputs. And these outputs can be correlated somehow.
+How to balance them correctly?
+I thought about 2 ideas:
+
+- Creating tuples consist of 3 elements and balancing in such a way that you have the same number of different tuples
+But you can have this situation:
+(A, X, 1), (A, Y, 2), (A, Y, 3), (B, Z, 3)
+These tuples are different, but you can see that we have a lot of tuples with the value A at first position. So the data is still quite imbalanced.
+
+- Balancing data iteratively considering only one column at a time. You balance first column, then you balance second column etc.
+
+
+Are these ideas good or not? Maybe there are some other options for balancing data if you have multiple targets?
+"
+"['machine-learning', 'classification', 'dimensionality-reduction']"," Title: Does Linear Discriminant Analysis make dimensionality reduction before classification?Body: I'm trying to understand what LDA exactly does when used as a classifier. I've understood how the dimensionality reduction works and I've understood that the classification task is carried out with the application of Bayes' theorem, but I still can't figure out if LDA executes both operations when used as a classification algorithm.
+Is it correct to say that LDA, as a classifier, executes by itself dimensionality reduction and then applies Bayes' theorem for classification?
+If that makes any difference, I've used LDA in Python from the sklearn library.
+"
+"['machine-learning', 'classification', 'ensemble-learning']"," Title: Does it make sense to combine classifiers trained on the same dataset?Body: I am working on a classification problem.
+I have a dataset $S$ and I am training several prediction algorithms using S: Naive Bayes, SVM, classification trees.
+Intuitively, I was planning to combine my models, and, for each data point in the test sample $S'$, take the majority vote as my prediction.
+Does that make sense? I feel this is a very simplistic way to combine different models.
+"
+"['classification', 'python', 'bert', 'probability', 'multiclass-classification']"," Title: How do I calculate the probabilities of the BERT model prediction logits?Body: I might be getting this completely wrong, but please let me first try to explain what I need, and then what's wrong.
+I have a classification task. The training data has 50 different labels. The customer wants to differentiate the low probability predictions, meaning that, I have to classify some test data as "Unclassified / Other" depending on the probability (certainty?) of the model.
+When I test my code, the prediction result is a numpy array. One example is:
+[[-1.7862008 -0.7037363 0.09885322 1.5318055 2.1137428 -0.2216074
+ 0.18905772 -0.32575375 1.0748093 -0.06001111 0.01083148 0.47495762
+ 0.27160102 0.13852511 -0.68440574 0.6773654 -2.2712054 -0.2864312
+ -0.8428862 -2.1132915 -1.0157436 -1.0340284 -0.35126117 -1.0333195
+ 9.149789 -0.21288703 0.11455813 -0.32903734 0.10503325 -0.3004114
+ -1.3854568 -0.01692022 -0.4388664 -0.42163098 -0.09182278 -0.28269592
+ -0.33082992 -1.147654 -0.6703184 0.33038092 -0.50087476 1.1643585
+ 0.96983343 1.3400391 1.0692116 -0.7623776 -0.6083422 -0.91371405
+ 0.10002492]]
+
+I'm then using numpy.argmax()
to identify the correct label.
+My question is, is it possible to define a threshold (say, 0.6), and then compare the probability of the argmax() element so that I can classify the prediction as "other" if the probability is less than the threshold value?
+
+Edit 1:
+We are using 2 different models. One is Keras, and the other is BertTransformer. We have no problem in Keras since it gives the probabilities so I'm skipping Keras model.
+The Bert model is pretrained. Here is how it is generated:
+def model(self, data):
+ number_of_categories = len(data['encoded_categories'].unique())
+ model = BertForSequenceClassification.from_pretrained(
+ "dbmdz/bert-base-turkish-128k-uncased",
+ num_labels=number_of_categories,
+ output_attentions=False,
+ output_hidden_states=False,
+ )
+
+ # model.cuda()
+
+ return model
+
+The output given above is the result of model.predict()
method. We compare both models, Bert is slightly ahead, therefore we know that the prediction works just fine. However, we are not sure what those numbers signify or represent.
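+If those values are raw logits, my rough idea is to push them through a softmax and then threshold the winning class, something like this (just a sketch of what I have in mind, not necessarily the right interpretation of the outputs):
+import numpy as np
+
+def softmax(logits):
+    z = logits - np.max(logits)   # subtract the max for numerical stability
+    e = np.exp(z)
+    return e / e.sum()
+
+probs = softmax(prediction[0])    # prediction is the array shown above
+label = np.argmax(probs)
+if probs[label] < 0.6:            # hypothetical threshold
+    label = "Unclassified / Other"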
+Here is the Bert documentation.
+"
+"['papers', 'deep-neural-networks', 'applications']"," Title: What's up with Neural Stochastic Differential Equations from a practical standpoint?Body: I've spent a few days reading some of the new papers about Neural SDEs. For example, here is one from Tzen and Raginsky and here is one that came out simultaneously by Peluchetti and Favaro. There are others which I plan to read next. The basic idea, which are attained via different routes in each paper, is that if we consider the input data arriving at time $t=0$ and the output data arriving at time $t=1$, and with certain assumptions on the distribution of network weights and activations, the evolution of the data from one layer to the next inside the network is akin to a stochastic process. The more layers you have, the smaller the $\Delta t$ is between the layers. In the limit as the number of layers goes to infinity, the network approaches a true stochastic differential equation.
+I am still working on the math, which is my main objective. However, what I find missing from these papers is: Why is this important? The question is not, why is this interesting?. It is certainly interesting from a purely mathematical perspective. But what is the importance here? What is the impact of this technology?
+I was at first excited about this because I thought it proposed a way to apply a neural network to learn the parameters of an SDE by fitting it to real time-series data where we don't know the form of the underlying data generation process. However, I noticed that the experiment in Peluchetti and Favaro simply uses the MNIST data set, while the data experiment from Tzen and Raginsky is in fact a simulated SDE. The latter fits better with my intuition.
+So, again, my question is, what is the general importance of Neural SDEs? And a secondary question is: am I correct in thinking this technology proposes a new way to fit a model to data which we suppose is generated by a stochastic process?
+"
+"['computer-vision', 'tensorflow', 'object-detection', 'yolo', 'single-shot-multibox-detector']"," Title: What are the main differences between YOLOv3 and RetinaNet object detection algorithms?Body: I am looking at a certain project that compares performance on a certain dataset for an object detection problem using YOLOv3 and RetinaNet (or the "SSD_ResNet50_FPN" from TF Model Zoo). Both YOLOv3 and RetinaNet seem to have similar features like detection at scales, skip connections, etc.
+So, what is the exact main difference between YOLOv3 and SSD_ResNet50_FPN?
+"
+"['game-ai', 'game-theory', 'games-of-chance', 'poker', 'state-space-complexity']"," Title: What is the size of 6-players no limit Texas holdem Poker?Body: What is the number of game states/information sets in 6-players, no limit, Texas Holdem?
+A year ago, Pluribus reached a super-human level in 6-players no limit Holdem Poker. I am interested in the size of poker because it is a simple heuristic method to compare the complexity of different games.
+
+In the paper Measuring the Size of Large No-Limit Poker Games (2013), they write
+
+The size of a game is a simple heuristic that can be used to describe
+its complexity and compare it to other games, and a game’s size can be
+measured in several ways. The most commonly used measurement is to
+count the number of game states in a game.
+...
+In imperfect information games, an alternate measure is to count the number of decision points, which are more formally called information sets.
+
+Here's the definition of game states and information sets.
+
+Game states are the number of possible sequences of actions by the players or by chance, as viewed by a third party that observes all of the players' actions. In the poker setting, this would include all of the ways that the players private and public cards can be dealt and all of the possible betting sequences.
+Information sets: When a player cannot observe some of the actions or chance events in a game, such as in poker when the opponent’s private cards are unknown, many game states will appear identical to the player. Each such set of indistinguishable game states forms one information set, and an agent's strategy or policy for a game must necessarily depend on its information set and not on the game state: it cannot choose to base its actions on information it does not know.
+
+
+Here are the number of game states of certain variants of Poker.
+
+"
+"['datasets', 'representation-learning', 'disentangled-representation']"," Title: Why different images of the same person, under some restrictions, are in a 50 dimension manifold?Body: In this lecture (starting from 1:31:00) the professor says that the set of all images of a person lives in a low dimensional surface (compared the the set of all possible images). And he says that the dimension of that surface is 50 and that they get this number by adding the three translations of the body, the three rotations of the head and the independent movements of the face's muscles. He also adds that it may be more than 50 but less than 100. How do we get the number 50 ?
+The professor previously said (in the same lecture, 1:29:00) that the set of all the images that we could describe as natural and that we could interpret lies in a manifold. I try to understand how the number 50 came up as follows: let's take an image of a person; since it's "natural", it belongs to that manifold. Hence there is an open set to which this image belongs, and there is a homeomorphic map from this open set to a Euclidean space. Let's suppose (I don't know why, but it's the only possible thing I could come up with to understand this) that all the images of that same person, regardless of his position, expressions, etc., are in that open set. Then, through the homeomorphic mapping, we have the "same points" in a Euclidean space. Do we get the basis of it by decomposing all the possible movements of the person?
+I hope someone can clarify things for me. It seems this doesn't only work with images, but with all types of unstructured data.
+"
+"['convolutional-neural-networks', 'data-preprocessing']"," Title: How to deal with a variable number of channels of the inputs?Body: I have a problem in which my input data may have a varying number of channels. Let me explain with an example.
+
+Imagine we have a classification problem in which we wish to identify
+if certain species are present in wildlife photographs. This can be
+done via a neural network including maybe some convolutions. For the
+first layer of the network we could set up a convolutional layer with
+3 input channels (one for R, G and B respectively) and this would
+probably work well enough.
+Now imagine that someone comes along with some new data for us and
+this time they have not only taken regular RGB images but they have
+used an IR-camera as well. Great, but how do we treat this data, we
+have one more channel?! One could of course simply add an extra channel
+and re-train the network but that would mean that our old data (without
+IR-info) is useless and what if someone comes along with a
+UV-camera.....
+
+My situation is similar but I will most definitely be dealing with varying numbers of channels and the range can be quite wide (from 5 channels all the way up to maybe 50). Is there a good way of dealing with a situation like this?
+"
+"['machine-learning', 'classification', 'definitions', 'regression', 'clustering']"," Title: How to define machine learning to cover clustering, classification, and regression?Body: How to define machine learning to cover clustering, classification, and regression? What unites these problems?
+"
+"['convolutional-neural-networks', 'facial-recognition', 'content-based-image-retrieval']"," Title: Why are the landmark retrieval and facial recognition literature so divergent?Body: Context and detail
+I've been working on a particular image retrieval problem and I've found two popular threads in the literature:
+Image retrieval (usually benchmarked with landmark retrieval datasets)
+
+Face recognition/verification:
+
+I'm still making my way through these lists and more (I've checked the ones I've looked at already) but I'm starting to get a sense that there's not much overlap in the techniques used, or the collective trains of thought in the research community. Here are the main points of divergence where I think both communities should be borrowing from each other.
+
+- Facial recognition seems to focus on getting embeddings to be as discriminative as possible by playing around with loss functions and training methods, whereas image retrieval seems to care more about ways of extracting feature descriptors from CNN pretrained backbones (types of pooling operations, which feature maps to look at, etc..).
+- Image retrieval has a considerable amount of work on what needs to happen after an embedding is obtained. E.g.: dimensionality reduction, whitening + l2 norm, database-side augmentation, query expansion, reranking, etc.
+- Facial recognition cares about keeping a minimum margin between non-matching faces in order to avoid mismatches, but I would think that should be imposed in image retrieval tasks as well (this is kind of a sub-point to my first point)
+
+So to sum up: Why is it that facial recognition focuses on generating discriminative embeddings, while landmark retrieval focusses on generating rich "descriptors"? Why does landmark retrieval use this cool bag of tricks for database search while facial recognition just mentions kNN? Shouldn't all these considerations boost performance in either domain?
+"
+"['neural-networks', 'computer-vision', 'face-recognition']"," Title: Face recognition from single image providedBody: I am working on a computer vision project, based on face detection to record the time spent by a person in an office.
+It consists of detecting the face by camera number 1 (input), temporarily storing the detected face, calculating the time spent until this same person leaves and his face is detected by camera number 2. (We don't have a customer database).
+Is there a better approach to follow? I would also appreciate articles to read on the topic.
+"
+"['convolutional-neural-networks', 'meta-learning', 'model-agnostic-meta-learning', 'template-matching']"," Title: Which meta-learning approach selection methodology should I use for similarity learning of an image?Body: Meta-learning has 3 broad approaches: model, metric and optimization-based approach. Each of them has its own sub-approach, like matching network, meta-agonistic and Siamese-based network, and so on.
+How do I decide which approach to select for a task? For my case, I have a noisy image, and they need to be compared with 10 different new images every time. Do I have to start with the trial and error method, or there is some methodology behind this approach selection?
+"
+"['natural-language-processing', 'captcha']"," Title: CAPTCHA based on text comprehension and random tokensBody: I developed a novel type of CAPTCHA based on text comprehension and random tokens. Given a task Pick the first pair of adjacent letters
and a random token 8NBA596V
, the user has to provide the solution NB
. It offers basic protection and an attacker can solve individual tasks with specific effort. I am curious, whether contemporary AI can solve it generically?
+You can access more example tasks here:
+https://www.topincs.com/manual/captcha
+There is a task database and at every attempt a new task is presented with a new random token. They always have a solution of varying length and pure guessing thus has limited chances of success. It is easy to attack an individual task by writing a small piece of code, thus a large task database is essential. What intrigues me is the question whether natural language processing or machine learning at its current state can attack the CAPTCHA generically by building a model of the meaning of the task
+– essentially a predicate in a tiny universe of discourse – and then applying it to the random token.
+"
+"['deep-learning', 'dqn', 'proofs']"," Title: Can the law of iterated expectation be used on the inner expectation of the DQN cost function described in the DQN paperBody: Is the expression for the DQN cost function, Equation (2) of the DQN paper
+$$\begin{align}L_1 &= E_{\mu,\pi}\left[\left(y_i - q(s,a;\theta)\right)^2\right]\\
+&=E_{\mu,\pi}\left[\left(E_{\mathcal{E}}[r + \gamma \max\limits_{a'}q(s',a';\theta^-)] - q(s,a;\theta)\right)^2\right] \end{align}$$
+equivalent to this? (Substituting the expression for $y_i$ defined in the paragraph directly after, $\mathcal{E}$ represents the transition distribution governed by the environment, $\pi$ represents the behaviour policy and $\mu$ represents the stationary distribution of states)
+$$L_2 = E_{\mu,\pi,\mathcal{E}}\left[\left(r + \gamma \max\limits_{a'}q(s',a';\theta^-) - q(s,a;\theta)\right)^2\right]$$
+Can the law of iterated expectation be used to derive the second expression from the first, if not, is there another way to go about showing their equivalence IF they are equivalent.
+It seems as though $L_2$ is used for sampling but I'm not sure how it's possible to get here from the original cost function $L_1$. If it is possible to use $L_2$ to sample I assume that means the two expressions must be equivalent. The second expression is used for sampling in the DQN paper here.
+I do realise that the gradient for each function is the same and thus so is the $n^{th}$ derivative for some $n\geq1$ and since the curvature and optimas align I guess that also means they are the same function (minus some constant difference)?
+$$\nabla_{\theta} L = E_{\mu,\pi,\mathcal{E}}\left[\left(r + \gamma \max\limits_{a'}q(s',a';\theta^-) - q(s,a;\theta)\right)\nabla q(s,a;\theta)\right]$$
+
+Related Question
+A related problem that concerns equivalence of sampling from $L_1$ and $L_2$. Is it possible to sample from a nested expectation that is squared as follows?
+$$E[E[X|Y]^2] \approx \frac{1}{n}\sum X^2$$
+Where $X$ is generated according to the marginalised distribution $P(X)$. I don't think it is true since $E[X]^2 \neq E[X^2]$ which should mean sampling from $L_1$ and $L_2$ are not equivalent.
+"
+"['reinforcement-learning', 'reward-functions', 'constrained-optimization']"," Title: Intuition behind $1-\gamma$ and $\frac{1}{1-\gamma}$ for calculating discounted future state distribution and discounted rewardBody: In the appendix of the Constrained Policy Optimization (CPO) paper (Arxiv), the authors denote the discounted future state distribution $d^\pi$ as:
+$$d^\pi(s) = (1-\gamma) \sum_{t=0}^\infty{\gamma^t P(s_t = s \vert \pi)}\tag1$$
+and the discounted total reward $J(\pi)$ as:
+$$J(\pi) = \frac{1}{1-\gamma} E_{\substack{s\sim d^\pi \\ a \sim \pi \\ s' \sim P}}[R(s,a,s')]\tag2$$
+I have two questions regarding these equations.
+Question 1
+Intuitively, I understand that $d^\pi(s)$ returns the discounted probability of landing on state $s$ when executing policy $\pi$.
+I understand that the summation part of $(1)$ results in values that are greater than $1$, and are, therefore, not fit for a probability distribution. But I do not understand why the value that results from this is multiplied by $(1-\gamma)$.
+I have read in this question that "$(1−\gamma)$ normalizes all weights introduced by γ so that they are summed to $1$". I have confirmed that this is true, but I don't understand why.
+I tested this with a simple example:
+Suppose there are only two states $s_A$ and $s_B$ and the probability of landing on $s_A$ is $0.4$ and on $s_B$ is $0.6$, independently of the previous state or action taken (therefore, independently of the policy $\pi$). Also suppose we set the maximum number of time steps $t_{max} = 1000$ (to make the equation easy to compute) and $\gamma = 0.9$.
+Then:
+$$d^\pi(s_A) = (1-0.9) \sum_{t=0}^{1000} 0.9^t \cdot 0.4 \approx (1-0.9) \cdot 4$$
+and
+$$d^\pi(s_B) \approx (1-0.9) \cdot 6$$
+So indeed if we sum them and multiply by $(1-\gamma)$ we get:
+$$(1-0.9)\cdot(4+6) = 1$$
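+For completeness, here is the little numerical check I ran for the example above (just a sketch):
+gamma = 0.9
+p = {"s_A": 0.4, "s_B": 0.6}
+
+d = {s: (1 - gamma) * sum(gamma**t * p[s] for t in range(1001)) for s in p}
+print(d["s_A"], d["s_B"], sum(d.values()))   # ~0.4, ~0.6, ~1.0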
+Q: My question is why does multiplying by $(1-\gamma)$ normalize to $1$? And what does $(1-\gamma)$ represent in this context?
+Question 2
+Similarly, I can't understand the use of $\frac{1}{1-\gamma}$ in $(2)$.
+Q: How does multiplying the expected value of the reward function by $\frac{1}{1-\gamma}$ result in the discounted reward, instead of multiplying by $\gamma$? What does $\frac{1}{1-\gamma}$ represent?
+"
+"['natural-language-processing', 'long-short-term-memory', 'word-embedding', 'sentiment-analysis', 'word2vec']"," Title: Given the word embeddings, how do I create the sentence composed of the corresponding words?Body: I have done some reading. I want to implement an LSTM with pre-trained word embeddings (I also have plans to create my word embeddings, but let's cross that bridge when we come to it).
+In any given sentence, you don't usually need to have all the words as most of them do not contribute to the sentiment, such as the stop words and noise. So, let's say there is a sentence. I remove the stop words and anything else that I deem unnecessary for the project. Then I run the remaining words through the word embedding algorithm to get the word vectors.
+Then what? How does that represent the sequence or the sentence, given that each of these is just a vector for a single word?
+For example, take the sentence:
+
+The burger does not taste good.
+
+I could remove certain words and still retain the same sentiment like so:
+
+Burger not good.
+
+Let's assume some arbitrary vectors for those three words:
+
+Burger
: $[0.45, -0.78, .., 1.2]$
+
+not
: $[9.6, 4.0, .., 5.6]$
+
+good
: $[3.5, 0.51, 0.8]$
+
+
+So, those vectors represent the individual words. How do I make a sentence out of them? Just concatenate them?
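+To make that last question concrete, the only thing I can picture is literal concatenation, something like this (with the made-up, truncated vectors from above):
+import numpy as np
+
+burger = np.array([0.45, -0.78, 1.2])
+not_   = np.array([9.6, 4.0, 5.6])
+good   = np.array([3.5, 0.51, 0.8])
+
+sentence_vector = np.concatenate([burger, not_, good])   # one long flat vector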
+"
+"['neural-networks', 'accuracy', 'weights-initialization', 'mnist', 'validation']"," Title: Why would my neural network have either an accuracy of 90% or 10% on the validation data, given a random initialization?Body: I'm making a custom neural network framework (in C++, if that is of any help). When I train the model on MNIST, depending on how happy the network is feeling, it'll give me either 90%+ accuracy, or get stuck at 10-9% (on validation set).
+I shuffle all my data before feeding it to the neural net.
+Is there a better randomizer I should be using, or maybe I am not initializing my weights properly (Using srand
to generate values between +/-0.1). Did I somehow hit a saddle point?
+My network consists of a 784-unit input layer; hidden layers of 256, 64, 32, and 16 neurons, all with ReLU; and a 10-unit output layer with softmax.
+Where should I start investigating based on this kind of behavior, when I can't even replicate what is going on?
+"
+"['neural-networks', 'tensorflow', 'keras', 'weights']"," Title: How to train a neural network with few weights and biases held constant?Body: I am a beginner in neural networks. I am building a neural network with 3 layers. The input $X$ has 7 features and the output $Y$ is a real number. In the hidden layer, there are two nodes. The bottom node contains weights and biases which should be hard set.
+
+Now, I want to train this neural network with the training data $X$ and $Y$, such that the red weights are held constant while all other weights are learnable.
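+For reference, the plain, fully trainable version of the network I'm describing would look roughly like this in Keras (a sketch; the hidden activation is my own arbitrary choice, and it does not yet pin any weights):
+import tensorflow as tf
+
+model = tf.keras.Sequential([
+    tf.keras.layers.Dense(2, activation="sigmoid", input_shape=(7,)),  # hidden layer with two nodes
+    tf.keras.layers.Dense(1),                                          # single real-valued output
+])
+model.compile(optimizer="adam", loss="mse")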
+Is there a way of doing this during the training of the neural network? I'm using TensorFlow and Keras, so, if you could provide also the code necessary to do this, that would be very useful.
+"
+"['deep-learning', 'accuracy', 'action-recognition', 'testing', 'test-datasets']"," Title: Is there a way, while training (with contrastive learning) the embedding network, to find the test accuracy?Body: I aim to do action recognition in videos on a private dataset.
+To compare with the existing state-of-the-art implementations, other guys published their code on Github, like the one here (for the paper Self-supervised Video Representation Learning Using Inter-intra Contrastive Framework). Here, the author first trains the embedding network (3D ResNet without final classification layer) with contrastive learning. Finally, he adds a final layer and finetunes the weights, training the whole network again for some epochs.
+Now, here is my doubt: Is there a way, while training just the embedding network, to find the test accuracy?
+One way to tell if the final accuracy after finetuning would be good is to see if the training loss is decreasing or not. If the training loss decreases, that certainly builds up the hope that the test accuracy would be improving during the training, but in no way gives an idea about how much the test accuracy would be.
+Another way is to plot the t-SNE, see if, on the test data, the data points from the same class are close together, thus forming a cluster. Then it could be said that the test accuracy would also be good. But it's not quantifiable, and hence it would be hard to compare the t-SNE plots obtained from two different models.
+I was also suggested to add a final layer to my embedding network and just test it on the test data, without training or fine-tuning again. The reason for that is that the embedding network should have learned the weights reasonably by now; even if I finetune the model, the test dataset's test accuracy won't vary a lot. I need some advice here. Is that suggestion good? Are there any potential pitfalls with this suggestion?
+Or do you have any other possible suggestions I could try?
+"
+"['reinforcement-learning', 'terminology', 'hierarchical-rl', 'maxq']"," Title: Where does the hierarchical reinforcement learning framework name ""MAXQ"" come from?Body: I've been researching different frameworks for hierarchical RL (mainly options, HAMs, and MAXQ) and noticed that both options and HAMs have names that relate to how they function. I can't seem to find anything stating how MAXQ got its name and I was wondering if anyone knew what the name was referencing.
+"
+"['genetic-algorithms', 'feature-selection', 'genetic-programming', 'algorithm-request']"," Title: How can I select features for a symbolic regression problem to be solved with genetic programming?Body: I want to solve a symbolic regression problem with genetic programming. My dataset is similar to this one, but I have 30 features, and I want to use only the most sensitive features. I found this library interesting for Symbolic Regression, but could not find the right approach for feature selection.
+"
+"['neural-networks', 'machine-learning', 'binary-classification', 'c++', 'model-request']"," Title: Which approach should I use to classify points above and below a sine function $y(x) = A + B \sin(Cx)$?Body: In a linear regression problem, a line can divide a data set into two categories. So, basically, points above the line belong to category 1
, and points below the line belong to category -1
.
+However, my professor has asked me to write a C++ program in which the program will classify whether the data points lie above or below a sine function.
+Let me explain a bit more. So, first, we will generate a data set $$D = \{(x_i, y_i) \} \label{0}\tag{0} $$ with random $x$ and $y$ coordinates, for example, according to this equation
+$$y(x) = A + B \sin(Cx)\label{1}\tag{1},$$
+where $A$, $B$, and $C$ are known.
+The data points above the sine function will have a label 1
on them, and the points below the function will have -1
.
+Now, this data set $D$ in \ref{0} has to be fed to a C++ program. This C++ program has to somehow learn the curve separating the two data point categories. After training, the program will then classify some new query data points.
+The key difficulty is that the program does not know in advance that the points were scattered around a sine curve. It does not know the values of $A$, $B$, or $C$ in equation \ref{1}. It also does not know that the curve is a sine curve.
+Now, this is where I am stuck. I do not know if I need to use a neural network to solve this problem. If a neural network is to be used, then I presume that backpropagation will have to be used in some way. I can generate the data set and I can feed the data into the program.
+Which approach (algorithm and model) should I use to solve this problem?
+I have studied linear classification with the perceptron learning algorithm, but this sine-classifier stuff is a huge step-up for me. Another important thing is that I am not allowed to use any ready-made C++ libraries for Machine Learning. If a neural network solution is needed, then I will have to design the neural network from scratch. Note that I don't need any C++ code, but I am just looking for some guidance on how to approach this problem.
+"
+"['deep-learning', 'feature-extraction', 'data-augmentation']"," Title: What is the difference between feature extraction with or without data augmentation?Body: Here's an extract from Chollet's book "Deep Learning with Python" about using pre-trained CNN to predict class from a photo set (p. 146):
+
+At this point, there are two ways you could proceed:
+
+- Running the convolutional base over your dataset, recording its output to a Numpy array on disk, and then using this data as input to
+a standalone, densely connected classifier similar to those you saw in
+part 1 of this book. This solution is fast and cheap to run, because
+it only requires running the convolutional base once for every input
+image, and the convolutional base is by far the most expensive part of
+the pipeline. But for the same reason, this technique won’t allow you
+to use data augmentation.
+
+- Extending the model you have (conv_base) by adding Dense layers on top, and running the whole thing end to end on the input data. This
+will allow you to use data augmentation, because every input image
+goes through the convolutional base every time it’s seen by the model.
+But for the same reason, this technique is far more expensive than the
+first.
+
+
+
+The first method is called (1) and the second is (2).
+If I use data augmentation to expand my data set, then could (1) be as good as (2)? If not, why?
+"
+"['research', 'history', 'data-compression']"," Title: Did the Hutter Prize help research in artificial intelligence in any way?Body: Wikipedia states:
+
+
+- The Hutter Prize is a cash prize funded by Marcus Hutter which rewards data compression improvements on a specific 1 GB English text file.
+- The goal of the Hutter Prize is to encourage research in artificial intelligence (AI). The organizers believe that text compression and AI are equivalent problems.
+
+
+Did the Hutter Prize help research in artificial intelligence in any way?
+"
+"['reinforcement-learning', 'function-approximation', 'feature-engineering', 'heuristic-functions', 'discrete-state-spaces']"," Title: How to find good features for a linear function approximation in RL with large discrete state set?Body: I've recently read much about feature engineering in continuous (uncountable) feature spaces. Now I am interested what methods exist in the setting of large discrete state spaces. For example consider a board game with grid as a basic layout. Each position on a grid can contain exactly one of multiple elements and the agent makes decisions according to the current board position. If the grid is large enough, say 30x30, and there are only two different elements we could model the states as a linear model with $2*30*30 = 1800$ variables (using dummy variables) and this model can't even distinguish relationships between positions. For this we would need to use $\binom{90}{2}$ or even $\binom{90}{k}$, $k = 2, 3, 4$ more features.
+How would one approach this problem? Are there methods for feature selection for linear approximations that even automatically find/learn non-linear combinations? What was the approach to solving these problems when NNs were not around?
+"
+"['deep-learning', 'image-recognition', 'papers', 'math']"," Title: Is there a full and precise formulation of Theorem 1 in the Integrated Gradients paper?Body: Theorem 1 (page 5) in the paper about Integrated Gradients states that
+
+Integrated gradients is the unique path method that is symmetry-preserving.
+
+What I miss is
+
+- A precise formulation of the theorem: in particular, the exact properties that must be satisfied by the function $f$ used in the proof (continuity, differentiability, etc.). Also, should the paths be assumed to be monotonic?
+
+- A consistent definition of function $f$ in the proof - note that $f$ is defined inconsistently, e.g. in the region where $x_i<a$ and $x_j>b$, where it is not clear whether its value should be $0$ or $(b-a)^2$.
+
+
+Point 2 is easy to fix with an appropriate redefinition (e.g. replacing "if $\text{max}(x_i,x_j)\geq 0$" with "else if $\text{max}(x_i,x_j)\geq 0$"). What is not clear is whether there is a redefinition that:
+
+- preserves the properties that have been assumed in the rest of the paper, in particular in Proposition 1 (proving completeness), where the function is assumed to be continuous everywhere, and the set of discontinuous points of each of its partial derivatives along each input dimension has measure zero, and
+
+- the function is a constant for $t \notin [t_1,t_2]$.
+
+
+Does anybody have a precise formulation and full proof of the theorem?
+"
+"['deep-rl', 'dqn', 'gym']"," Title: Why does Q-value become negative during training of DQN, while the agent learns to play?Body: I have implemented a simple version of the DQN algorithm for CartPole-v0
. The algorithm works fine, in the sense that it achieves the highest possible scores. The diagram below shows the cumulative reward versus training episode.
+
+The scary part is when I tried to plot the q values during training. For this purpose, 1000 random states were generated and stored. After each training episode, I fed these states to the Q-network and computed the Q-value for all actions. Then for each state, the max Q-value was computed. Finally, these max Q-values were averaged to yield a single number. The following plot shows this quantity during training.
+
+As you can see, there is a slight increase in average-max-q in the beginning, followed by a sharp decrease. My question is: how can this value be negative? Since all rewards received by the agent are positive, the expected cumulative reward of a state cannot be negative.
+Edit
+I added the code for clarity:
+from matplotlib import pyplot as plt
+
+import torch as th
+import torch.nn as nn
+import torch.nn.functional as F
+
+import gym
+import random
+import numpy as np
+from collections import deque
+
+from test_avg_q import TestEnv
+
+
+class ReplayBuffer():
+ def __init__(self, maxlen):
+ self.buffer = deque(maxlen=maxlen)
+
+ def add(self, experience):
+ self.buffer.append(experience)
+
+ def sample(self, batch_size):
+ sample_size = min(len(self.buffer), batch_size)
+ samples = random.choices(self.buffer, k=sample_size)
+ return map(list, zip(*samples))
+
+
+class QNetwork():
+ def __init__(self, state_dim, action_size):
+ self.action_size = action_size
+ self.q_net = nn.Sequential(nn.Linear(state_dim, 100),
+ nn.ReLU(),
+ nn.Linear(100, action_size))
+ self.optimizer = th.optim.Adam(self.q_net.parameters(), lr=0.001)
+
+ def update_model(self, state, action, q_target):
+ action = th.Tensor(action).to(th.int64)
+ action_one_hot = F.one_hot(action, num_classes=self.action_size)
+ q = self.q_net(th.Tensor(state))
+ q_a = th.sum(q * action_one_hot, dim=1)
+ loss = nn.MSELoss()(q_a, q_target)
+ self.optimizer.zero_grad()
+ loss.backward()
+ self.optimizer.step()
+
+
+class DQNAgent():
+ def __init__(self, env):
+ self.state_dim = env.observation_space.shape[0]
+ self.action_size = env.action_space.n
+ self.q_network = QNetwork(self.state_dim, self.action_size)
+ self.replay_buffer = ReplayBuffer(maxlen=1_000_000)
+ self.gamma = 0.97
+ self.eps = 1.0
+
+ def get_action(self, state):
+ with th.no_grad():
+ q_state = self.q_network.q_net(th.Tensor(state).unsqueeze(0))
+ action_greedy = th.argmax(q_state).item()
+ action_random = np.random.randint(self.action_size)
+ action = action_random if random.random() < self.eps else action_greedy
+ return action
+
+ def train(self, state, action, next_state, reward, done):
+ self.replay_buffer.add((state, action, next_state, reward, done))
+ states, actions, next_states, rewards, dones = self.replay_buffer.sample(50)
+ with th.no_grad():
+ q_next_states = self.q_network.q_net(th.Tensor(next_states))
+ q_next_states[dones] = th.zeros(self.action_size)
+ q_targets = th.Tensor(rewards) + self.gamma * th.max(q_next_states, dim=1)[0]
+ self.q_network.update_model(states, actions, q_targets)
+
+ if done: self.eps = max(0.1, 0.99 * self.eps)
+
+
+env_name = "CartPole-v0"
+env = gym.make(env_name)
+
+agent = DQNAgent(env)
+num_episodes = 400
+testEnv = TestEnv()
+avg_qs = []
+rewards = []
+render = False
+
+for ep in range(num_episodes):
+ state = env.reset()
+ total_reward = 0
+ done = False
+ while not done:
+ action = agent.get_action(state)
+ next_state, reward, done, info = env.step(action)
+ agent.train(state, action, next_state, reward, done)
+ total_reward += reward
+ state = next_state
+ if render and ep > 80:
+ env.render()
+
+ avg_qs.append(testEnv.run(agent.q_network.q_net))
+ rewards.append(total_reward)
+ print("Episode: {}, total_reward: {:.2f}".format(ep, total_reward))
+
+plt.plot(avg_qs)
+plt.show()
+plt.figure()
+plt.plot(rewards)
+plt.show()
+
+
+here is the code for class TestEnv
:
+import torch as th
+from torch.utils.data import Dataset, DataLoader
+
+
+class TestEnv:
+ def __init__(self):
+ self.dataloader = DataLoader(TestDataset(), batch_size=100, num_workers=10)
+
+ def run(self, model):
+ out_list = []
+ model.eval()
+ with th.no_grad():
+ for batch in self.dataloader:
+ out = model(batch)
+ out_list.append(out)
+ qs = th.cat(out_list, dim=0)
+ maxq = th.max(qs, dim=1)[0]
+ result = th.mean(maxq)
+ return result.item()
+
+
+
+class TestDataset(Dataset):
+ def __init__(self):
+ self.db = th.load('test_states.pt')
+
+ def __len__(self):
+ return len(self.db)
+
+ def __getitem__(self, idx):
+ return self.db[idx]
+
+
+"
+"['neural-networks', 'deep-learning', 'feature-extraction', 'features', 'feature-engineering']"," Title: When is it necessary to manually extract features to feed into the neural network rather than providing raw data?Body: Usually, Neural Networks uses raw data. You do not need to extract features manually. NN's can find & extract good features which is a pattern of an image, signal or any kind of data. When we check layer outputs in a NN, we can see and visualize how NNs extract features.
+Do neural networks extract features by themselves every time? When is it necessary to manually extract or engineer features to feed into the neural network rather than providing raw data?
+For example, I had time-series sensor data. When I used LSTM & GRU on the raw dataset, I had bad test accuracy, but when I extracted some features manually I had really good test set accuracy. I extracted Fast Fourier Transform and cross-correlation features, which helped a lot to increase accuracy. "Extracting features manually" helped to solve my problem.
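+To give an idea of what I mean by manual extraction, the kind of features I computed look roughly like this (a sketch; the window length and the number of kept coefficients are simplified):
+import numpy as np
+
+def extract_features(window):
+    # window: 1D array of raw sensor samples
+    spectrum = np.abs(np.fft.rfft(window))            # FFT magnitude features
+    autocorr = np.correlate(window, window, "full")   # (auto-)correlation features
+    return np.concatenate([spectrum[:10], autocorr[autocorr.size // 2:][:10]])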
+"
+"['machine-learning', 'pattern-recognition', 'geometric-deep-learning', 'algorithm-request']"," Title: Is there any known technique to determine a graph from a 1D signal pattern?Body: I'd like to evaluate the possibility of using a Machine/Deep Learning technique as a sort of pattern recognition and parameters estimation.
+The problem I want to address can be stated as follows: Let's consider that I have a set of interacting "particles" that can be represented as a graph in which the vertices represent the particles and the edges the magnitude of the interaction amongst them. For instance, in the diagram below I'm showing a particle graph formed by 4 interacting particles.
+
+So each particle/vertex has a value (e.g. $A=3.1$, $B = 4.2$, etc.) and each edge contains the magnitude of the interaction between two connected nodes/particles (e.g. $AB = 5.3$, $AC = 1.1$, $DB = 0$, etc.).
+With all this information, there exists a quantum mechanics algorithm that, after some complex calculations, results in a 1D signal (the pattern; essentially a vector of X-Y values). The overall process is illustrated in the figure below:
+
+The appearance of the obtained signal will therefore depend upon the values of the graph. The goal is, in this case, the inverse problem: given one of these 1D signals (that is, a characteristic pattern), is it possible to determine the graph with its corresponding values?
+I could create a training set formed by a very large number of simulated graphs with corresponding 1D patterns.
+Since my experience with ML has so far focused only on simple classification problems, it is not clear to me which ML method would be more convenient or whether or not this problem can actually be addressed by an ML technique. Any general recommendation or advice would be highly appreciated.
+"
+"['reinforcement-learning', 'deep-rl', 'continuous-action-spaces', 'discrete-action-spaces']"," Title: Can a large discrete action space be represented using Gaussian distributions?Body: I have a large 1D action space, e.g. dim(A)=2000-10000. Can I use continuous action space where I could learn the mean and std of the Gaussian distributions that I would use to sample action from and round the value to the nearest integer? If yes, can I extend this idea to multi-dimensional large action space?
+"
+"['image-recognition', 'object-detection']"," Title: Why do popular object detecting models output heatmaps instead of coordinators of object directly?Body: I think heatmap outputs of architectures like CenterNet, OpenPose, etc. can be changed to coordinator outputs, and loss functions like focal loss can be modified so they can deal with coordinators instead of heatmaps. Is there any particular reasons that researchers use heatmaps instead of coordinators?
+"
+"['convolutional-neural-networks', 'computer-vision', 'keras', 'image-recognition', 'reference-request']"," Title: Should one use an ""other"" category in image classification?Body: In image classification, there are sometimes images that do not fit in any category.
+For example, if I build a CNN in Keras to classify Dogs and Cats, does it help (in terms of training time and performance) to create an "other" (or "unclassified") category in which images of houses, people, birds, etc., are classified? Is there any research paper that discusses this?
+A similar question was asked before here, but, unfortunately, it has no answer.
+"
+"['deep-learning', 'computer-vision', 'classification']"," Title: What model structure I should use to train on low res and blurry images?Body: I am looking for advice or suggestion.
+I have photos like these: photo_1 and photo_2, and many more similar to that. The average shape of these photos is about 160 x 100. What we are doing is trying to find whether or not the person in a photo is wearing a safety vest and helmet (if the person is wearing both it is 1, if something is missing or both are missing it is 0). The training data consists of about 5k almost equally distributed image sets. I have tried to use augmentation techniques (flipping, adding noise, brightness correction) but the results didn't improve.
+I tried to train on many pretrained popular models: resnet101, mobilenet_v2, efficientnetb3, efficientnetb0, DenseNet121, InceptionResNetV2, InceptionV3, ResNet152V2, ResNet50V2, but the results are not eye-pleasing. I have tried different input sizes ranging from 224x224 to 112x112, but the results didn't improve as much as I would have liked. And the weird thing is that the image shape does not correlate with whether or not there are more wrong predictions using bigger or smaller images.
+As a side note, I would like to ask a couple of questions:
+
+- Should I use my own written small net?
+- Are the models that I use too big for this problem?
+
+Any advice will be appreciated.
+"
+"['natural-language-processing', 'comparison', 'word-embedding', 'language-model', 'bleu']"," Title: What is the difference between a language model and a word embedding?Body: I am self-studying applications of deep learning on the NLP and machine translation.
+I am confused about the concepts of "Language Model", "Word Embedding", "BLEU Score".
+It appears to me that a language model is a way to predict the next word given its previous word. Word2vec is the similarity between two tokens. BLEU score is a way to measure the effectiveness of the language model.
+Is my understanding correct? If not, can someone please point me to the right articles, paper, or any other online resources?
+"
+"['machine-learning', 'deep-learning', 'data-preprocessing', 'feature-engineering']"," Title: Is feature engineer an important step for a deep learning approach?Body: I'd like to ask you if feature engineering is an important step for a deep learning approach.
+By feature engineering I mean some advanced preprocessing steps, such as looking at histogram distributions and try to make it look like a normal distribution or, in the case of time series, make it stationary first (not filling missing values or normalizing the data).
+I feel like with enough regularization, the deep learning models don't need feature engineering compared to some machine learning models (SVMs, random forests, etc.), but I'm not sure.
+"
+"['reference-request', 'explainable-ai']"," Title: One hot encoding vs dummy variables best practices for explainable AI (XAI)Body: When creating artificial columns for your categorical variables there are two mainstream methods you could use:
+Disclaimer: For this example, I use the following definitions of dummy variables and one-hot-encoding. I'm aware both methods can be used to either return n
or n-1
columns.
+Dummy variables: each category is converted to it's own column and the value 0 or 1 indicates if that category is present for each record
+one-hot-encoding: similar to dummy variables, but one column is dropped, as its value can be derived from the other columns. This is to prevent multicollinearity and the dummy variable trap.
+As an arbitrary example, let's take people's favorite color: pink, blue and green. For a person who's favorite color is pink, the dummy and one-hot-encoded data would look as follows:
+dummy variables
+
+| person_id | favorite_color_pink | favorite_color_blue | favorite_color_green |
+|-----------|---------------------|---------------------|----------------------|
+| xyz       | 1                   | 0                   | 0                    |
+
+one-hot-encoded variables
+
+| person_id | favorite_color_blue | favorite_color_green |
+|-----------|---------------------|----------------------|
+| xyz       | 0                   | 0                    |
+
+From a statistics point of view, I would use the one-hot encoded columns to build my model. In addition, I can infer the favorite color is pink, because I encoded the variables.
+However, when I'm applying XAI to explain the prediction to someone else and they see the favorite color wasn't blue or green, I'm not so sure they will infer the favorite color was pink unless it's explicitly stated. So using dummy variables might serve explainability better, but brings other risks.
+Are there any best practices on this?
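+For reproducibility, this is roughly how I generate the two encodings above (a sketch with pandas; note that drop_first drops the alphabetically first category, so the exact dropped column can differ from my table):
+import pandas as pd
+
+df = pd.DataFrame({"person_id": ["xyz"], "favorite_color": ["pink"]})
+
+dummies = pd.get_dummies(df, columns=["favorite_color"])                  # one column per category
+one_hot = pd.get_dummies(df, columns=["favorite_color"], drop_first=True) # one category dropped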
+"
+"['neural-networks', 'deep-learning', 'tensorflow', 'keras', 'graph-theory']"," Title: It is possible to use deep learning to give approximate solutions to NP-hard graph theory problems?Body: It is possible to use deep learning to give approximate solutions to NP-hard graph theory problems?
+If we take, for example, the travelling salesman problem (or the dominating set problem). Let's say I have a bunch of smaller examples, where I compute the optimal values by checking all possibilities, can this be then used for bigger problems?
+In particular, let's say I take a large graph and just optimize subgraphs of this large graph. This is perhaps a more general question: My experience with deep learning (TensorFlow/Keras) is to predict values. How can I get graph isomorphism and/or a list of local moves on the graph, to obtain a better solution? Can ML/DL give you a list of moves or local changes to get closed to an optimal value, or does it just return the predicted optimal value?
+"
+"['neural-networks', 'tensorflow', 'activation-functions', 'regression']"," Title: Why is no activation function needed for the output layer of a neural network for regression?Body: I'm a bit confused about the activation function in the output layer of a neural network trained for regression. In most tutorials, the output layer uses "sigmoid" to bring the results back to a nice number between 0 and 1.
+But in this beginner example on the TensorFlow website, the output layer has no activation function at all? Is this allowed? Wouldn't the result be a crazy number that's all over the place? Or maybe TensorFlow has a hidden default activation?
+This code is from the example where you predict miles per gallon based on horsepower of a car.
+// input layer
+model.add(tf.layers.dense({inputShape: [1], units: 1}));
+
+// hidden layer
+model.add(tf.layers.dense({units: 50, activation: 'sigmoid'}));
+
+// output layer - no activation needed ???
+model.add(tf.layers.dense({units: 1}));
+
+"
+"['deep-learning', 'classification', 'long-short-term-memory', 'time-series', 'normalisation']"," Title: How to define a ""don't care"" class in time series classification in Pytorch?Body: This is a theoretical question.
+Setup
+I have a time series classification task in which I should output a classification of 3 classes for every time stamp t
.
+All data is labeled per frame.
+The problem:
+In the data set are more than 3 classes [which are also imbalanced].
+My net should see all samples sequentially, because it uses that for historical information.
+Thus, I can't just eliminate all irrelevant class samples at preprocessing time.
+In case of a prediction on a frame which is labeled differently than those 3 classes, I don't care about the result.
+
+My thoughts:
+
+- The net will predict for 3 classes
+- The net will only learn (pass backward gradient) for valid classes, and just not calculate loss for other classes.
+
+Questions
+
+- Is this the way to go for "don't care" classes in classification?
+- How to calculate loss only for relevant classes in Pytorch?
+- Should I apply some normalization per batch, or change batch norm layers if dropping variable samples per batch?
+
+I am using nn.CrossEntropyLoss()
as my criterion, which has only mean
or sum
as reductions.
+I need to mask the batch so that the reduction will only apply for samples whose label is valid.
+I could use reduction='none'
and do that manually, or I could do that before the loss and keep using reduction='mean'
.
+Is there some method to do this using built-in PyTorch tools?
+Maybe this can be done in the data-fetching phase somehow?
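+For reference, the manual masking I have in mind (indexing out the "don't care" frames before the loss) would be roughly this (a sketch; logits has shape (N, 3) and labels outside 0-2 mark frames I don't care about):
+import torch.nn as nn
+
+criterion = nn.CrossEntropyLoss()                 # default reduction='mean'
+
+valid = (labels >= 0) & (labels <= 2)             # frames whose label is one of my 3 classes
+loss = criterion(logits[valid], labels[valid])    # loss averaged only over the frames I care about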
+
+I am looking for some standard, vanilla, rule-of-thumb implementation to tackle this. The less fancy the better.
+
+I am aware this is more than a single question.
+They are still not separable, as the solution will be unified most likely.
+"
+"['reinforcement-learning', 'multi-agent-systems']"," Title: How would I design a finite budget, cascaded multi agent deep reinforcement learning model?Body: In most of the multi-agent reinforcement learning models I've found, it seems to generate the observations for each of the agents simultaneously and then uses a centralized critic to assess all of the agent's actions together.
+However, what if two agents have a finite amount of resources to allocate, and the more one agent spends the less the other agent can spend. So really, the state space of the second agent is conditional on the action of the first agent.
+Are there any papers or resources that describe an architecture like this?
+"
+['variational-autoencoder']," Title: In VQ-VAE code what does this line of code signify?Body: The VQ-VAE implimentation:https://colab.research.google.com/github/zalandoresearch/pytorch-vq-vae/blob/master/vq-vae.ipynb
+quantized = inputs + (quantized - inputs).detach()
+
+Why are we subtracting and adding input to quantized result?
+"
+"['natural-language-processing', 'language-model', 'iid']"," Title: Are training sequences for LMs sampled in an IID fashion?Body: If I understand correctly, when training language models, we take a document and then chunk the document into a sequences of k tokens. So if the document is of length 30 and k=10, then we'll have 20 chunks of 10 tokens each (token 1-11, 2-12, and so on).
+However these training sequences are not iid, right? If so, are there any papers that try and deal with this?
+"
+"['reinforcement-learning', 'q-learning', 'state-spaces']"," Title: Compute state space from variables in Q-learning (RL)Body: I'm trying to use Q-learning, but I'm stuck because I don't know how to compute the state.
+Let's say, in my problem, there are the following variables, which I'm using to compute state:
+x in range 0-3
+y in range 0-3
+d in range 0-3
+g in range 0-1
+a in range 0-1
+s in range 0-4
+br in range 0-4
+bu in range 0-4
+gl in range 0-1
+So, the state space is equal to $64000$ ($4 * 4 * 4 * 2 * 2 * 5 * 5 * 5 * 2$). I'd like to create a number, from the above variables, which is contained in the range $[0, 63999]$.
+My previous idea was to create a binary number from the binary representation of state variables (just write them next to each other and convert into an int
). It seems to fail if a variable is not a power of two (bonus question: why doesn't it work?).
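+For reference, the binary-concatenation idea I tried is roughly equivalent to this (a sketch; the bit widths are my own choice):
+def state_to_int(x, y, d, g, a, s, br, bu, gl):
+    bits = ""
+    for value, n_bits in [(x, 2), (y, 2), (d, 2), (g, 1), (a, 1), (s, 3), (br, 3), (bu, 3), (gl, 1)]:
+        bits += format(value, "0{}b".format(n_bits))
+    return int(bits, 2)
+
+# with these widths the result can be as large as 2**18 - 1 = 262143, far outside [0, 63999]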
+"
+"['reinforcement-learning', 'deep-rl', 'state-spaces', 'data-compression']"," Title: How can I compress the states of a reinforcement learning agent?Body: I'm working on a problem that involves an RL agent with very large states. These states consist of several pieces of information about the agent. The states are not images, so techniques like convolutional neural networks will not work here.
+Is there some general solutions to reduce/compress the size of the states for reinforcement learning algorithms?
+"
+"['deep-learning', 'facial-recognition', 'accuracy', 'siamese-neural-network']"," Title: Why is my siamese network learning very well in e.g. 1 out of every 5 runs?Body: Why is my siamese network learning very well in e.g. 1 out of every 5 runs? The rest of the time it's not learning and maintains an accuracy of 0.5.
+Any explanations? Is the contrastive loss taken in the embedded space too loose a constraint?
+The task is greyscale signature matching.
+Additionally, trying the model on facial matching gives a constant 0.5 accuracy, no learning at all - the images are RGB, and maybe it's a higher-order task in general.
+Anyways, would appreciate any and all enlightenment in this matter.
+P.S. I'm thinking to try a variational autoencoder for the face dataset, where I then use the trained encoder as the siamese network "head".
+I would appreciate any guidance or thoughts on this approach as well.
+"
+"['machine-learning', 'computer-vision', 'feature-extraction', 'sift']"," Title: Can I use the SIFT feature detector on data other than images?Body: I know how to use SIFT algorithm for images but I never use it for other kinds of data. I have tabular data (x, y, z, time) where x,y,z is the joint position along x, y, z coordinates. Now, can I apply the SIFT algorithm to this data to find features that will act as input to traditional machine learning algorithms, like SVM, DT, etc.?
+"
+"['tensorflow', 'long-short-term-memory', 'weights']"," Title: Is this LSTM layer learning anything?Body: I've trained a CNN-LSTM model but the results weren't satisfactory, so I took a look at my weight distributions and this is what I got:
+
+I don't understand. Is this layer learning anything? Or no?
+Update: I've also tried LeakyReLU activation and also removed l2 regularization and this is what I got:
+
+So I guess my layer isn't learning, or does it just take more epochs to train LSTM layers? The gradients are not vanishing, because the CNN layer before this one is changing.
+"
+"['neural-networks', 'deep-learning', 'tensorflow']"," Title: Validation Accuracy remains constant while training VGG?Body: I posted this question on stackoverflow and got downvoted for unmentioned reason, so I'll repost it here, hoping to get some insights
+This is the plot
+
+This is the code:
+with strategy.scope():
+
+ model2 = tf.keras.applications.VGG16(
+ include_top=True,
+ weights=None,
+ input_tensor=None,
+ input_shape=(32, 32, 3),
+ pooling=None,
+ classes=10,
+ classifier_activation="relu",
+ )
+
+ model2.compile(optimizer='adam',
+ loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
+ metrics=['accuracy'])
+
+ history = model2.fit(
+ train_images, train_labels,epochs=10,
+ validation_data=(test_images, test_labels)
+ )
+
+I'm trying to train VGG16 from scratch, hence not importing pretrained weights. I also tried a model which I created myself, with the same hyperparameters, and that worked fine.
+Any help is highly appreciated
+Here's the full code.
+"
+"['natural-language-processing', 'word-embedding', 'bert', 'word2vec']"," Title: Should I need to use BERT embeddings while tokenizing using BERT tokenizer?Body: I am new to BERT and NLP and I am a little confused with tokenization and word embedding.
+My doubt is: if I use the BertTokenizer to tokenize a sentence, do I then have to use BERT embeddings to generate the word vectors for the resulting tokens, or can I train my own word2vec model to generate the word embeddings while still using the BertTokenizer?
+Pardon me if this question doesn't make any sense.
+"
+"['generative-adversarial-networks', 'probability-distribution', 'normal-distribution', 'uniform-distribution']"," Title: Why do we sample vectors from a standard normal distribution for the generator?Body: I am new to GANs. I noticed that everybody generates a random vector (usually 100 dimensional) from a standard normal distribution $N(0, 1)$. My question is: why? Why don't they sample these vectors from a uniform distribution $U(0, 1)$? Does the standard normal distribution has some properties that other probability distributions don't have?
+"
+"['python', 'keras', 'mean-squared-error']"," Title: ""Porpoising"" in latter stages of validation loss and MSE charts in KerasBody: Performing a prediction of a continuous y target using Keras, the simple structure of the code revolves around;
+model = Sequential()
+model.add(Dense(200, input_dim=15, activation= "relu"))
+model.add(Dense(750, activation= "relu"))
+model.add(Dense(500, activation= "relu"))
+model.add(Dense(750, activation= "relu"))
+model.add(Dense(500, activation= "relu"))
+model.add(Dense(200, activation= "relu"))
+model.add(Dense(100, activation= "relu"))
+model.add(Dense(50, activation= "relu"))
+model.add(Dense(1))
+
+model.compile(loss= 'mse' , optimizer='adam', metrics=['mse','mae'])
+history=model.fit(X_train, y_train, batch_size=50, epochs=150,
+ verbose=1, validation_split=0.2)
+
+This has resulted in the following metric chart;
+
+
+What might be causing these spikes, and how can I eliminate (or greatly reduce) them?
+UPDATE: Just reduced the learning rate to 0.0001 per Neil Slater's suggestion, and the loss curve may have had the spikes reduced, though the scale of the graph has changed. The training loss has increased from 0.00007 to 0.00037, the validation loss from 0.0014 to 0.002, and the prediction error increased from 0.037 to 0.046.
+
+I then changed epsilon from its value of 1e-07 to 0.1 and increased the number of epochs from 150 to 500. The validation loss increased to 0.0082 and the prediction error increased to 0.093, with the corresponding model loss shown below.
+
+While not an overall improvement at either step, this did remove the spikes as I requested, hence Neil's advice gives me additional considerations to explore and measure within the Adam optimizer (along with other optimizers), so I consider this to have been an important learning experience. One such exploration uncovered this more detailed explanation of optimizers than I had been exposed to before, as well as this 3D visualization of loss topologies and the effect differing optimizers and parameters have on finding the optimal minima (keep a sharp eye on the options being chosen in the upper right corner).
+"
+"['reinforcement-learning', 'reference-request', 'deep-rl', 'gym', 'multi-agent-rl']"," Title: How do I get started with multi-agent reinforcement learning?Body: Is there any tutorial that walks through a multi-agent reinforcement learning implementation (in Python) using libraries such as OpenAI's Gym (for the environment), TF-agents, and stable-baselines-3?
+I searched a lot, but I was not able to find any tutorial, mostly because Gym environments and most RL libraries are not for multi-agent RL.
+"
+"['neural-networks', 'machine-learning', 'convolutional-neural-networks', 'keras', 'yolo']"," Title: Preparing data set for the YOLO algorithmBody: Hi I am working on a project which requires the You Only Look Once algorithm in order to classify and localise objects within images. I have to prepare my dataset (which has 2 classes, and predicts 6 objects per grid cell, and the 448 * 448 image is split into a 7*7 grid). What would be a viable approach to do that? I found this code, found in this article. However I do not understand why he has done what he has done, e.g why is he specifically checking the 24th element of the “box”, and so what element of the box would I have to check? Is there any tutorial running through that? Would it be possible for someone to explain or even adapt his approach to fit my dataset?
+FYI: I am coding the YOLO algorithm from scratch
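+For reference, here is a rough sketch of the kind of target layout I think is meant, assuming a YOLOv1-style encoding with S=7 grid cells, B=6 boxes per cell, and C=2 classes; the exact slot ordering (and hence which element is the 24th) is my guess, not necessarily what the article uses:
+import numpy as np
+
+S, B, C = 7, 6, 2                 # grid size, boxes per cell, classes
+depth = B * 5 + C                 # [x, y, w, h, conf] per box, then a class one-hot
+
+def encode_label(boxes):
+    # boxes: list of (class_id, x_c, y_c, w, h), coordinates normalised to [0, 1]
+    target = np.zeros((S, S, depth), dtype=np.float32)
+    for cls, x, y, w, h in boxes:
+        j, i = int(x * S), int(y * S)          # responsible cell (column, row)
+        cell_x, cell_y = x * S - j, y * S - i  # offset of the centre inside that cell
+        target[i, j, 0:5] = [cell_x, cell_y, w, h, 1.0]   # fill the first box slot
+        target[i, j, B * 5 + cls] = 1.0                   # class one-hot after the box slots
+    return target
+
+label = encode_label([(1, 0.55, 0.3, 0.2, 0.4)])   # one object of class 1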
+"
+"['machine-learning', 'computer-vision', 'reference-request']"," Title: Resources for Computer Vision Algorithms and ApplicationsBody: Are there any videos or other books/notes/slides that anyone has come across that follow Computer Vision Algorithms and Applications by Richard Szeliski? We are using this book in class but the professor did a bad job explaining and I have some trouble getting through the book. Thanks a lot!
+"
+"['supervised-learning', 'support-vector-machine', 'decision-theory']"," Title: Can you correlate decision boundary of final layer of a neural network to predictive distribution?Body: I was reading in a On the Decision Boundary of Deep Neural Networks
+ that the final layer of a MLP can be equated to an SVM and can generate decision boundaries similar to methods with SVM. I was wondering if using this boundary detection method or another can you quantify how much probability a model assigns to each bin where a bin is a class. So for example, if a project of an input before the final layer has a SVM margin of let's say 4 from the boundaries and classification 1 and 2, can we determine how much probability it'll give class 1 or 2 after the final layer?
+"
+"['reinforcement-learning', 'q-learning', 'state-spaces']"," Title: Q-learning in gridworld with random boardBody: I'm trying to use Q-learning in order to solve Wumpus world environment.
+Wumpus world is a toy problem on a 4x4 gridworld. The agent starts at the entry position of the cave, looks for the gold (the agent can sense when it is on the gold field), then has to pick it up and leave the cave from the entry position. Some fields are safe, others contain a pit or the wumpus (a monster). If the agent moves onto a pit or wumpus field, it dies. The fields next to the wumpus or a pit (not diagonally!) have properties the agent can sense: stench (wumpus) and breeze (pit). The agent receives a positive reward if it leaves the cave with the gold and a negative one if it dies. Action space: turn right/left, move forward, shoot arrow (if shot in the right direction it can kill the wumpus; only 1 arrow available), pick up gold, leave cave. A more detailed description of the environment can be found here: https://www.javatpoint.com/the-wumpus-world-in-artificial-intelligence
+It is easy to solve this problem if the gridworld is constant. I have a huge problem even starting to think about it if the gridworld is random in every learning episode (random fields and random size) and also random during testing. How should I define it (especially the state space)? How should I create the Q-table? I'm using Python.
+Thank you in advance for any help.
+"
+['neural-networks']," Title: Finding Specific Patterns in DataBody: I'm trying to research modeling that can help me find very specific patterns in data. I've done a fair amount of work about generalized predictions with machine learning, but I'm very confused about how to approach something that gets into very specific predictions.
+As an example, for an IT infrastructure that has 1000 Cisco Router across the world. I'm trying to find patterns in this data when outages occur. Outages typically are either power related on transport circuit related. I have historical data for over a year. I'm trying build a model to help predict the outage type. But, I want it to have very specific knowledge of previous outages. Maybe there is a pattern when Routers 17,245 and 813 fail it is always a power problem at each of the sites.
+I will have input data about geospacial, diagnostics, and other IT type information.
+I know a lot of modeling can generalize this type of scenario, but I'm trying to see if there are options to remember more specific patterns within a large dataset.
+"
+"['machine-learning', 'reference-request', 'computational-learning-theory', 'vc-dimension', 'k-nearest-neighbors']"," Title: What is the effect of K in K-NN on the VC dimension?Body: What is the effect of K in K-NN on the VC dimension? When K increases, is the VC dimension decreased or increased, or we can't say anything about this? Is there a reference book that discusses this?
+"
+"['reinforcement-learning', 'policy-gradients', 'reinforce']"," Title: It is mathematically correct to use a Policy Gradient method for 1-step trajectories?Body: I have come across a Google paper that uses the REINFORCE
algorithm (a Policy Gradient Method) for a case where the trajectory of the episodes it proposes would be only one step.
+When trying to replicate the experiments they propose I found that there are some problems with the stability of the method (maybe that's why it is not accepted by peer review).
+Researching on my own, I found something I suspected: the problem they present could be solved as a multi-armed bandit problem, as in THIS link. Because of this, I wonder whether using methods based on trajectories (such as policy gradient methods) has some mathematical problem in situations where the trajectory is a single step.
+PS: I think the problem with this paper may also be that they average after only one execution of a trajectory and not over k trajectories, as is necessary for a policy gradient method, so I would also like to hear the opinions of more people about this issue.
+"
+"['reinforcement-learning', 'deep-rl', 'inverse-rl']"," Title: Proving existence or non existence of reward function to make given policy ""uniquely"" optimal when reward function is dependent only on S or both S,ABody: I was going through paper titled "Algorithms for Inverse Reinforcement Learning" by Andrew Ng and Russell.
+It states following basics:
+
+
+- MDP $M$ is a tuple $(S,A,\{P_{sa}\},\gamma,R)$, where
+
+- $S$ is a finite set of $N$ states
+- $A=\{a_1,...,a_k\}$ is a set of $k$ actions
+- $\{P_{sa}(.)\}$ are the transition probabilities upon taking action $a$ in state $s$.
+- $R:S\rightarrow \mathbb{R}$ is a reinforcement function (I guess this is what is also called the reward function). For simplicity in exposition, we have written rewards as $R(s)$ rather than $R(s,a)$; the extension is trivial.
+
+
+- A policy is defined as any map $\pi : S \rightarrow A$
+
+- Bellman Equation for Value function $V^\pi(s)=R(s)+\gamma \sum_{s'}P_{s\pi(s)}(s')V^\pi(s')\quad\quad...(1)$
+
+- Bellman Equation for Q function $Q^\pi(s,a)=R(s)+\gamma \sum_{s'}P_{sa}(s')V^\pi(s')\quad\quad...(2)$
+
+- Bellman Optimality: The policy $\pi$ is optimal iff, for all $s\in S$, $\pi(s)\in \text{argmax}_{a\in A}Q^\pi(s,a)\quad\quad...(3)$
+
+- All these can be represented as vectors indexed by state, for which we
+adopt boldface notation $\pmb{P,R,V}$.
+
+- Inverse Reinforcement Learning is: given MDP $M=(S,A,P_{sa},\gamma,\pi)$, finding $R$ such that $\pi$ is an optimal policy for $M$
+
+- By renaming actions if necessary, we will assume
+without loss of generality that $\pi(s) = a_1$.
+
+
+
+Paper then states following theorem, its proof and a related remark:
+
+Theorem: Let a finite state space $S$, a set of actions $A=\{a_1,..., a_k\}$, transition probability matrices ${\pmb{P_a}}$, and a discount factor $\gamma \in (0, 1)$ be given. Then the policy $\pi$ given by $\pi(s) \equiv a_1$ is optimal iff, for all $a = a_2, ... , a_k$, the reward $\pmb{R}$ satisfies $$(\pmb{P}_{a_1}-\pmb{P}_a)(\pmb{I}-\gamma\pmb{P}_{a_1})^{-1}\pmb{R}\succcurlyeq 0 \quad\quad ...(4)$$
+Proof:
+Equation (1) can be rewritten as
+$\pmb{V}^\pi=\pmb{R}+\gamma\pmb{P}_{a_1}\pmb{V}^\pi$
+$\therefore\pmb{V}^\pi=(\pmb{I}-\gamma\pmb{P}_{a_1})^{-1}\pmb{R}\quad\quad ...(5)$
+Putting equation $(2)$ into $(3)$, we see that $\pi$ is optimal iff
+$\pi(s)\in \text{arg}\max_{a\in A}\sum_{s'}P_{sa}(s')V^\pi(s') \quad...\forall s\in S$
+$\iff \sum_{s'}P_{sa_1}(s')V^\pi(s')\geq\sum_{s'}P_{sa}(s')V^\pi(s')\quad\quad\quad\forall s\in S,a\in A$
+$\iff \pmb{P}_{a_1}\pmb{V}^\pi\succcurlyeq\pmb{P}_{a}\pmb{V}^\pi\quad\quad\quad\forall a\in A\setminus \{a_1\} \quad\quad ...(6)$
+$\iff\pmb{P}_{a_1} (\pmb{I}-\gamma\pmb{P}_{a_1})^{-1}\pmb{R}\succcurlyeq\pmb{P}_{a} (\pmb{I}-\gamma\pmb{P}_{a_1})^{-1}\pmb{R} \quad\quad \text{...from (5)}$
+Hence proved.
+Remark: Using a very similar argument, it is easy to show (essentially by replacing all inequalities in the proof above with strict inequalities) that the condition $(\pmb{P}_{a_1}-\pmb{P}_a)(\pmb{I}-\gamma\pmb{P}_{a_1})^{-1}\pmb{R}\succ 0 $ is necessary and sufficient for $\pi\equiv a_1$ to be the unique optimal policy.
+
+I don't know if the above text from the paper is relevant to what I want to prove; still, I have stated it as background.
+I want to prove following:
+
+- If we take $R : S → \mathbb{R}$—there need not exist $R$ such that $π^*$ is the unique optimal policy for $(S, A, T, R, γ)$
+- If we take $R : S × A → \mathbb{R}$. Show that there must exist $R$ such that $π^*$ is the unique optimal policy for $(S, A, T, R, γ)$.
+
+I guess point 1 follows directly from the above theorem, as it says "$\pi(s)$ is optimal iff ..." and not "uniquely optimal iff". Also, I feel it follows from the operator $\succcurlyeq$ in equation $(6)$. In addition, I feel it's quite intuitive: if we have the same reward for a given state regardless of the action, then different policies choosing different actions will yield the same reward from that state, hence resulting in the same value function.
+I don't feel point 2 is correct. I guess this directly follows from the remark above, which requires an additional condition to hold for $\pi$ to be "uniquely optimal", and this condition won't hold if we simply define $R : S × A → \mathbb{R}$ instead of $R : S → \mathbb{R}$. Additionally, I feel this condition will hold iff we had $=$ in equation $(3)$ instead of $\in$ (as this would replace all $\succcurlyeq$ with $\succ$ in the proof). This also follows directly from point 1 itself: we can still have the same reward for all actions from a given state despite defining the reward as $R : S × A → \mathbb{R}$ instead of $R : S → \mathbb{R}$, which is the case with point 1.
+Am I correct with the analysis in the last two paragraphs?
+Update
+After some more thinking, I felt I was doing it all wrong. Also, I feel the text from the paper which I quoted is not of much help in proving these two points. So let me restate my intuition for the proofs of the two points:
+
+- For $R: S\rightarrow \mathbb{R}$, if some state $S_1$ and next state $S_2$ have two actions between them, $S_1-a_1\rightarrow S_2$ and $S_1-a_2\rightarrow S_2$, and if the optimal policy $π_1^*$ chooses $a_1$, then a policy $π_2^*$ choosing $a_2$ will also be optimal, thus making NEITHER "uniquely" optimal, since both $a_1$ and $a_2$ will yield the same reward, as the reward is associated with $S_1$ rather than with $(S_1,a_x)$.
+
+- For $R: (S,A)\rightarrow\mathbb{R}$, we can assign a large reward, say $+\infty$, to all actions specified in the given $π^*$ and $-\infty$ to all other actions. This reward assignment will make $π^*$ the unique optimal policy.
+
+
+Is the above reasoning correct and sufficient to prove the given points?
+"
+"['convolutional-neural-networks', 'pooling']"," Title: What are the purposes of pooling in CNNs?Body: There are at least three questions on this site related to this
+
+- What is the effect of using pooling layers in CNNs?
+- Is pooling a kind of dropout?
+- What are the benefits of using max-pooling in convolutional neural networks?
+
+I got the following useful information regarding the purpose of pooling. As per my understanding, the general purposes of pooling, in order of priority, are as follows:
+
+- To decrease the size of the feature maps
+- To make the model stronger in feature extraction
+
+Are there any other purposes of pooling in CNN other than them?
+"
+"['reinforcement-learning', 'deep-rl', 'terminology', 'gradient-descent', 'reinforce']"," Title: What is the name of this algorithm that estimates the gradient with an average by sampling from a distribution?Body: Consider maximizing the function $R(w)$ with parameter $w$ using gradient ascent. However, we don't know the gradient $\nabla_wR(w)$ formula. Now suppose $w$ is sampled from a probability distribution $\pi(w,\theta)$ parameterized by $\theta$. Then we can define
+$$J(\theta)=E[R(w)]=\int R(w)\pi(w,\theta)dw.$$
+And we have
+$$\nabla_\theta J(\theta)=E[R(w)\nabla_\theta \log \pi(w,\theta)]$$.
+Then, if we sample $w_1,\ldots,w_N$, we can estimate the gradient as $$\nabla_\theta J(\theta)\approx \frac{1}{N}\sum_{i=1}^N R(w_i) \nabla_\theta \log \pi(w_i, \theta).$$
+It looks like the REINFORCE algorithm in deep reinforcement learning. Does this algorithm have a name? Is the above derivation correct?
+I wonder if it is useful for optimizing the function $R(w)$.
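+For concreteness, here is a toy sketch of the estimator above, assuming $\pi(w,\theta)$ is a Gaussian $N(\theta, \sigma^2)$ with fixed $\sigma$ and a made-up objective $R(w) = -(w-3)^2$:
+import numpy as np
+
+rng = np.random.default_rng(0)
+sigma, theta = 1.0, 0.0                       # fixed std, learnable mean
+
+def R(w):                                     # toy objective, maximised at w = 3
+    return -(w - 3.0) ** 2
+
+for step in range(2000):
+    w = rng.normal(theta, sigma, size=64)     # sample w_1, ..., w_N
+    grad_log_pi = (w - theta) / sigma ** 2    # d/d(theta) of log N(w | theta, sigma^2)
+    grad = np.mean(R(w) * grad_log_pi)        # the sampled gradient estimate from above
+    theta += 1e-2 * grad                      # gradient ascent on J(theta)
+
+print(theta)                                  # ends up near 3 (noisily)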
+"
+"['tensorflow', 'keras', 'long-short-term-memory']"," Title: What is the input to the left most LSTM cell c(t-1) and h(t-1)?Body: Given an LSTM model with 3 cells shown below, what would be the input to the left most cell c(t-1) and h(t-1)?
+"
+"['papers', 'generative-adversarial-networks', 'hyperparameter-optimization', 'implementation', 'wasserstein-gan']"," Title: What to do with a GAN that trained well but got worse over time?Body: I am training a WGAN-GP network based on the following paper, though I am using a different dataset. Now, for the first ~ 60-70 epochs, my network trained really well, which I could see in the loss going down, but I also made sure to regularly check the quality of the images.
+Unfortunately, what I am seeing now (for the last $20$ epochs) is that the generator is getting worse and worse, the images don't look that good anymore. I save checkpoints every epochs, so in principle, I could stop training and get myself a state of the network from where it was still performing quite okay.
+However, my question would be: How can I improve the training of the GAN? Would you decrease the learning rate?
+I use a batch size of 124 and a learning rate of 1e-3. Maybe I could/should continue training (with a checkpoint that was still quite okay) with a learning rate of 5e-4?
+Any other hints would be appreciated!
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'architecture', 'regularization']"," Title: Are there regularisation methods related only to architecture of the CNNs?Body: Are there any methods of regularisation of deep neural networks, particularly CNNs (or generally ANN but that will also work on CNNs) that are related only to the network's architecture and not the training itself?
+I mean things like how deep they are, the number of conv/pooling/fully-connected layers, the size of the filters, the stride of the filters, etc.; any pointers that would help with regularisation.
+EDIT: To explain what I mean in more depth, I should add that I am exploring an experimental idea for training CNNs that is not in any way related to typical gradient descent with backpropagation. That is why typical methods related to training will not work. I can already see that the models train satisfactorily on the training set but don't perform that well on a test set, and since I couldn't figure out any regularization methods for this type of training, I thought maybe there are some related to architecture that the training process would have to abide by.
+"
+"['reinforcement-learning', 'game-ai', 'implementation', 'gaming', 'algorithm-request']"," Title: How is the AI in 3d games implemented?Body: A few days ago, I started looking a bit more into AI and learning about the way it works, and it is very interesting, but I can't find a clear answer on how the artificial intelligence is implemented in 3d shooter games, like COD or practically any 3d game.
+I just don't understand how they teach the enemies such different things, depending on the game, to fit its narrative. For example, is the enemy "AI" in 3d games just a bunch of if-else statements, or do they actually teach the enemies to think strategically? In big AAA games, you can clearly see that enemies hide from you during firefights and peek out to shoot, rather than just rushing and getting killed.
+So, how is the AI in 3d games implemented? How do they code it? You don't need to explain in detail, but just give me the idea. Do they use algorithms?
+"
+"['python', 'regression', 'multi-label-classification']"," Title: Multi-target regression using scikit-learn without ytrainBody: I would like to use the multi-target regression with scikit-learn. However, the examples I've seen use Xtrain and ytrain?
+What is ytrain in regression?
+I know y it is used for classes in classification. My data is composed of two columns of data, and I want to predict both values independently (but using a single MTR). So, I have clear X is a training set of n number of samples from those two values, however, I don't know how to create y.
+My data is composed of two attributes not correlated, I guess this is Xtrain. What should I use as ytrain?
+I'm based on this source https://machinelearningmastery.com/multi-output-regression-models-with-python/.
+Any clue?
+"
+"['neural-networks', 'tensorflow']"," Title: Can we use Multiple data as Input in a NN for a single Output?Body: So I am new to NN and I'm trying to go deep and apply it to my subject. I would like to ask: the input of the NN can be 2 or more values for example-> the measurement of a value, distance, and time? An example of input data would be [ [1,2,3, ....],[11,22,33, .....],[5] ] whose output is a value 1 for example or cat or an generated model.
+"
+['reinforcement-learning']," Title: Train agent to surround a burning fireBody: I have built a wildfire 'simulation' in unity. And I want to train an RL agent to 'control' this fire. However, I think my task is quite complicated, and I can't work out to get the agent to do what I want.
+A fire spreads in a tree-like format, where each node represents a point burning in the fire. When a node has burned for enough time, it spreads in all possible cardinal directions (as long as it does not spread to where it came from). The fire has a list of 'perimeter nodes' which represent the burning perimeter of the fire. These are the leaf nodes in the tree. The rate of spread is calculated using a mathematical model (Rothermel model) that takes into account wind speed, slope, and parameters relating to the type of fuel burning.
+I want to train the agent to place 'control lines' on the map, which completely stop the fire from burning. The agent will ideally work out where the fire is heading and place these control lines ahead of the fire such that it runs into them.
+Please could you guide me (or refer me to any reading that would be useful) on how I can decide the rules by which I give the model rewards?
+Currently, I give positive rewards for the following:
+
+- the number of fire nodes contained by a control line increases.
+
+And I give negative rewards for:
+
+- the number of fire nodes contained by a control line does not increase.
+- the agent places a control line (these resources are valuable and can only be used sparingly).
+
+I end the session with a win when all nodes are contained, and with a loss if the agent places a control line out of the bounds of the world.
+I am currently giving the agent the following information as observations:
+
+- the direction that the wind is heading, as a bearing.
+- the wind speed
+- the vector position that the fire is started at
+- the current percentage of nodes that are contained
+- the total number of perimeter nodes
+
+I am new to RL, so I don't really know what is the best way to choose these parameters to train on. Please could you guide me to how I can better solve this problem?
+"
+"['datasets', 'image-segmentation', 'u-net', 'training-datasets', 'test-datasets']"," Title: Why doesn't U-Net work with images different from the dataset?Body: I have implemented a U-Net, similar to this implementation, but for a different dataset, this one, to segment roads.
+It works fine on the images in the test folder, but, for example, when I take a screenshot from Bing Maps and try to run inference with the trained model, this is returned:
+
+Why is this happening?
+I already tried to change the thresholding values, normalization, etc.
+Tensorboard
+"
+"['binary-classification', 'sigmoid', 'softmax', 'categorical-crossentropy', 'binary-crossentropy']"," Title: Is it appropriate to use a softmax activation with a categorical crossentropy loss?Body: I have a binary classification problem where I have 2 classes. A sample is either class 1 or class 2 - For simplicity, lets say they are exclusive from one another so it is definitely one or the other.
+For this reason, in my neural network, I have specified a softmax activation in the last layer with 2 outputs and a categorical crossentropy for the loss. Using tensorflow:
+model=tf.keras.models.Sequential()
+model.add(tf.keras.layers.Dense(units=64, input_shape=(100,), activation='relu'))
+model.add(tf.keras.layers.Dropout(0.4))
+model.add(tf.keras.layers.Dense(units=32, activation='relu'))
+model.add(tf.keras.layers.Dropout(0.4))
+model.add(tf.keras.layers.Dense(units=2, activation='softmax'))
+
+model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
+
+Here are my questions.
+
+- If the sigmoid is equivalent to the softmax, firstly is it valid to specify 2 units with a softmax and categorical_crossentropy?
+
+- Is it the same as using binary_crossentropy (in this particular use case) with 2 classes and a sigmoid activation, and if so why?
+
+
+I know that for non-exclusive multi-label problems with more than 2 classes, binary_crossentropy with a sigmoid activation is used. Why is the non-exclusivity of the multi-label case fundamentally different from a binary classification with only 2 classes, a single (class 0 or class 1) output, and a sigmoid with binary_crossentropy loss?
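+For reference, the equivalence I am assuming between a 2-unit softmax and a sigmoid can be written out explicitly (a general identity, not specific to Keras):
+$$\mathrm{softmax}(z_1, z_2)_1 = \frac{e^{z_1}}{e^{z_1} + e^{z_2}} = \frac{1}{1 + e^{-(z_1 - z_2)}} = \sigma(z_1 - z_2),$$
+so the first softmax output is a sigmoid of the difference of the two logits, and the categorical cross-entropy over the two outputs matches the binary cross-entropy on $\sigma(z_1 - z_2)$.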
+"
+"['reinforcement-learning', 'deep-rl', 'policy-gradients', 'ddpg', 'notation']"," Title: Why is the behaviour policy denoted by $\beta$ and the exploration policy by $ \mu'$ in the DDPG paper?Body: I am learning about the deep deterministic policy gradient (DDPG) (Lillicrap et al, 2016) and got confused about the notation of the behavior policy.
+Lillicrap et al. denote the policy gradient by
+$$\nabla _{\theta^\mu} J \approx \mathbb{E}_{s_t \sim \rho^\beta} \left[ \nabla _{\theta^\mu} Q(s,a|\theta^Q) | s=s_t, a=\mu(s_t ; \theta ^\mu) \right],$$
+where $\beta$ denotes the behavior policy (equation 5 in the original paper).
+However, when they talk about exploration, they denote the exploration policy by $ \mu'$. This notation seems confusing to me since the target actor network is also denoted by $\mu'(s|\theta^{\mu'})$.
+As far as I understand, the exploration policy is not directly linked to the target critic network but rather corresponds to the previously mentioned behavior policy $\beta$. Is this correct or am I understanding it wrong?
+"
+"['python', 'regression', 'time-series']"," Title: How to forecast multiple target attributes in Python?Body: I need to forecast two non-correlated time-series (non-stationary). A sample is presented below:
+414049364,21773560
+414049656,21773926
+414049938,21774287
+414050204,21774638
+414050453,21774975
+414050682,21775296
+414050895,21775597
+414051093,21775874
+414051278,21776125
+414051453,21776344
+414051620,21776530
+414051780,21776678
+414051935,21776785
+414052089,21776849
+414052242,21776865
+
+The above is the input (two attributes) and the output (prediction) is composed of two targets (the same as input) for instance,
+414052252,21776765
+
+However, the regression techniques I have found only consider forecasting a single attribute (class), not two or more. I've checked the following site https://machinelearningmastery.com/multi-output-regression-models-with-python/ for multi-target regression and predictive clustering trees. Unfortunately, I don't know how to adapt my data to those techniques. Ideally, I would like to predict multiple steps ahead.
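+To illustrate what I mean, here is a minimal sketch of the windowing I have in mind (a window of 3 past rows as input and the next row, i.e. both targets, as output; RandomForestRegressor is just one scikit-learn estimator that supports multi-output targets natively):
+import numpy as np
+from sklearn.ensemble import RandomForestRegressor
+
+series = np.array([
+    [414049364, 21773560],
+    [414049656, 21773926],
+    [414049938, 21774287],
+    [414050204, 21774638],
+    [414050453, 21774975],
+    [414050682, 21775296],
+])                                             # shape (n_rows, 2)
+
+window = 3
+X, y = [], []
+for i in range(len(series) - window):
+    X.append(series[i:i + window].flatten())   # 3 past rows -> 6 features
+    y.append(series[i + window])               # next row -> 2 targets
+X, y = np.array(X), np.array(y)
+
+model = RandomForestRegressor().fit(X, y)
+next_step = model.predict(series[-window:].flatten().reshape(1, -1))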
+Any idea?
+"
+"['machine-learning', 'reinforcement-learning', 'q-learning', 'optimization', 'bellman-equations']"," Title: How to avoid being stuck local optima in q-learning and q-networkBody: When using the Bellman equation to update q-table or train q-network to fit greedy max values, the q-values very often get to the local optima and get stuck although randomization rate ($\epsilon$) has already been applied since the start.
+The sum of q-values of all very first steps (of different actions at the original location of the agent) increases gradually until a local optimum is reached. It gets stuck and this sum of q-values starts decreasing slowly a bit by a bit.
+How can I avoid being stuck in a local optimum? And how can I know whether the local optimum is already the global optimum? One idea I have, although it seems chaotic, is to switch randomization back on for a while: worse values may come at first, but maybe better ones later (a rough sketch of what I mean is below).
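+A sketch of the kind of periodically reset epsilon schedule I am thinking of, inside the usual epsilon-greedy selection (all numbers are arbitrary):
+import numpy as np
+
+def epsilon(step, period=10_000, eps_start=1.0, eps_min=0.05, decay=0.999):
+    # decay epsilon within each period, then reset it so exploration is switched back on
+    t = step % period
+    return max(eps_min, eps_start * decay ** t)
+
+def act(q_values, step, rng=np.random.default_rng()):
+    if rng.random() < epsilon(step):
+        return int(rng.integers(len(q_values)))   # explore
+    return int(np.argmax(q_values))               # exploit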
+"
+"['bayesian-deep-learning', 'bayesian-neural-networks', 'bayes-theorem']"," Title: What's the likelihood in Bayesian Neural Networks?Body: I'm trying to understand the concept behind BNN.
+They are based on Bayes' theorem:
+$$p(w \mid \text{data}) = \frac{p(\text{data} \mid w)*p(w)}{p(\text{data})}$$
+which boils down to
+$$\text{posterior} = \frac{\text{likelihood} * \text{prior}}{\text{evidence}}.$$
+I understand that if we assume a Gaussian distribution for the model, the likelihood function comprises a product (or sum if we use log) of each data point inserted into the Gaussian pdf. The parameters which we can change are $\mu$ and $\sigma^2$. We want to increase the likelihood because higher values of the Gaussian pdf means higher probability.
+How do things work when using a neural network? I assume that the likelihood function looks something like inserting each data point into the calculation (weighted sums) of the neural network. But does the neural network need a softmax layer at the end so that we can interpret the outputs as probabilities/likelihoods? Or do we measure the likelihood by applying some error measure like cross-entropy or squared loss?
+"
+"['neural-networks', 'monte-carlo-tree-search', 'tree-search']"," Title: What is this algorithm? Is it a variant of Monte-Carlo Tree Search?Body: I'm using a Neural Network as an agent in a simple car racing game. My goal is to train the network to imitate a brute-force tree search to an arbitrary depth.
+My algorithm goes something like the following:
+
+- The agent starts with depth 0
+- Generate a bunch of test states
+- For each test state s:
+
+  - For each action a:
+
+    - s' = env.step(s, a)
+    - for x in range(agent.depth): s' = env.step(s', agent.predict(s'))
+    - calculate the reward at state s'
+
+  - Set the label for test state s as whichever action a produced the highest reward
+
+- Train the agent using the test states and labels, and increment agent.depth
+- Loop until the desired depth is reached
+
+The idea is that an agent trained in this way to depth N should produce output close to a brute-force tree search to depth N...so by using it to play out N moves, it should be able to find me the best final state at that depth. In practice, I've found that it performs somewhere between N and N-1 (but of course it never reaches 100% accuracy).
+My question is: what is the name of this algorithm? When I search for tree search with playout, everything talks about MCTS. But since there's no randomness here (the first step is to try ALL actions), what would this be called instead?
+"
+"['neural-networks', 'tensorflow', 'keras', 'pytorch', 'implementation']"," Title: How can I model any structure for a neural network?Body: Hello I am currently doing research on the effect of altering a neural network's structure. Particularly I am investigating what affect would putting a random DAG (directed acyclic graph) in the hidden layer of a network instead of a usual fully connected bipartite graph.
+For instance my neural network would look something like this:
+
+Basically, I want the ability to create any structure in my hidden layer as long as it remains a DAG [add any edge between any nodes, regardless of layers]. I have tried creating my own library to do so, but it proved to be much more tedious than anticipated; therefore, I am looking for ways to do this with existing libraries such as Keras, PyTorch, or TensorFlow.
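+To make the idea concrete, here is a minimal PyTorch sketch of what I have been trying to build; all of the conventions here are my own toy assumptions (nodes 0..n_in-1 are the inputs, the remaining node indices are consecutive, every edge goes from a lower to a higher index so the graph stays acyclic, each edge has a scalar weight, and tanh is the activation):
+import torch
+import torch.nn as nn
+
+class DAGNet(nn.Module):
+    def __init__(self, n_in, edges, n_out):
+        # edges: dict {hidden node index: [lower-indexed source nodes]}
+        super().__init__()
+        self.n_in, self.edges = n_in, edges
+        self.weights = nn.ParameterDict({
+            f'{s}_{d}': nn.Parameter(torch.randn(()) * 0.1)
+            for d, srcs in edges.items() for s in srcs
+        })
+        self.bias = nn.ParameterDict({str(d): nn.Parameter(torch.zeros(())) for d in edges})
+        self.readout = nn.Linear(n_in + len(edges), n_out)   # read out all node activations
+
+    def forward(self, x):                       # x: (batch, n_in)
+        acts = [x[:, i] for i in range(self.n_in)]
+        for d in sorted(self.edges):            # index order == topological order
+            z = self.bias[str(d)] + sum(self.weights[f'{s}_{d}'] * acts[s] for s in self.edges[d])
+            acts.append(torch.tanh(z))
+        return self.readout(torch.stack(acts, dim=1))
+
+# example: 3 inputs, 4 hidden nodes with arbitrary acyclic wiring, 2 outputs
+edges = {3: [0, 1], 4: [1, 2], 5: [0, 3], 6: [2, 3, 4]}
+net = DAGNet(n_in=3, edges=edges, n_out=2)
+out = net(torch.randn(8, 3))                    # shape (8, 2)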
+"
+"['meta-learning', 'training-datasets', 'model-agnostic-meta-learning']"," Title: How to split data for meta-learning?Body: I've been trying to understand the meta-learning paradigm, more precisely, the optimization-based models, such as MAML, but I have a hard time understanding how I should correctly split my data to train such models.
+For example, let's consider we have 4 traffic datasets, and we would like to use 3 of them as source datasets (to train the model) and the remaining one as target (to fine-tune on it and then test the model performance). As far as I understood, for each source dataset, I need to split it into train and validation. During training, I would randomly select 2 batches of samples from the training datasets, use one batch to compute the task-specific parameters and the other one to compute the loss. Then repeat the same process with the validation dataset, such that I can select the best candidate model. After the training is done, I need to fine-tune the model on the target dataset. I assume the process is the same, but I need to use the target dataset instead.
+During testing (after the model is fully learned and fine-tuned), how exactly do I test it? It is the same procedure as if I was training a supervised model? I would like to know if the setup I described is correct and it fits the meta-learning paradigm.
+"
+"['image-processing', 'hardware-evaluation']"," Title: What amount of ressources is involved in building an image recognition system?Body: I would like to have an order of magnitude of ressources required to build an image recognition system.
+Let say you want to build a startup company which main product will have to distinguish 20 different kinds of objects (bottle, dogs, car, flowers...). Images are already tagged.
+
+- How many images are needed as a learning set? 1k, 10k, 100k, 1 million?
+- What kind of hardware and how long will the learning process take ?
+- How many developers, how much time ?
+- Does it change a lot if the number of target outputs is reduced to two kinds, or increased to one thousand?
+
+A link to a real life paper would be perfect. Thank you
+"
+"['machine-learning', 'long-short-term-memory', 'autoencoders']"," Title: Convert LSTM univariate Autoencoder to multivariate AutoencoderBody: I have the following code snippet which takes in a single column of value i.e. 1 feature. How do I modify the LSTM model such that it accepts 3 features?
+import numpy as np
+from keras.models import Sequential
+from keras.layers import LSTM, Input, Dropout
+from keras.layers import Dense
+from keras.layers import RepeatVector
+from keras.layers import TimeDistributed
+import pandas as pd
+from matplotlib import pyplot as plt
+from sklearn.preprocessing import MinMaxScaler, StandardScaler
+from keras.models import Model
+import seaborn as sns
+
+dataframe = pd.read_csv('GE.csv')
+dataframe.head()
+
+df = dataframe[['Date', 'EnergyInWatts']]
+df['Date'] = pd.to_datetime(df['Date'])
+sns.lineplot(x=df['Date'], y=df['EnergyInWatts'])
+
+#train, test = df.loc[df['Date'] <= '2003-12-31'], df.loc[df['Date'] > '2003-12-31']
+train = df.loc[df['Date'] <= '2003-12-31']
+test = df.loc[df['Date'] > '2003-12-31']
+
+scaler = StandardScaler()
+
+scaler = scaler.fit(train[['EnergyInWatts']])
+
+train['EnergyInWatts'] = scaler.transform(train[['EnergyInWatts']])
+test['EnergyInWatts'] = scaler.transform(test[['EnergyInWatts']])
+
+seq_size = 30
+
+
+def to_sequences(x, y, seq_size=1):
+ x_values = []
+ y_values = []
+
+ for i in range(len(x)-seq_size):
+ #print(i)
+ x_values.append(x.iloc[i:(i+seq_size)].values)
+ y_values.append(y.iloc[i+seq_size])
+
+ return np.array(x_values), np.array(y_values)
+
+trainX, trainY = to_sequences(train[['EnergyInWatts']], train['EnergyInWatts'], seq_size)
+testX, testY = to_sequences(test[['EnergyInWatts']], test['EnergyInWatts'], seq_size)
+
+
+model = Sequential()
+model.add(LSTM(128, input_shape=(trainX.shape[1], trainX.shape[2])))
+model.add(Dropout(rate=0.2))
+model.add(RepeatVector(trainX.shape[1]))
+model.add(LSTM(128, return_sequences=True))
+model.add(Dropout(rate=0.2))
+model.add(TimeDistributed(Dense(trainX.shape[2])))
+model.compile(optimizer='adam', loss='mae')
+model.summary()
+
+history = model.fit(trainX, trainY, epochs=10, batch_size=32, validation_split=0.1, verbose=1)
+
+"
+"['natural-language-processing', 'comparison', 'symbolic-computing']"," Title: Is there a relationship between Computer Algebra and NLP?Body: My intuition is that there is some overlap between understanding language and symbolic mathematics (e.g. algebra). The rules of algebra are somewhat like grammar, and the step-by-step arguments get you something like a narrative. If one buys this premise, it might be worth training an AI to do algebra (solve for x, derive this equation, etc).
+Moreover, when variables represent "real" numbers (as seen in physics, for example) algebraic equations describe the real world in an abstracted, "linear," way somewhat similar to natural language.
+Finally, there are exercises in algebra, like simplifying, deriving useful equations, etcetera which edge into the realm of the subjective, yet it is still much more structured and consistent than language. It seems like this could be a stepping stone towards the ambiguities of natural language.
+Can anyone speak to whether this has either (1) been explored or (2) is a totally bogus idea?
+"
+"['machine-learning', 'maximum-likelihood', 'bayesian-inference']"," Title: Does the Bayesian MAP give a probability distribution over unseen t*?Body: I'm working my way through the Bayesian world. So far, I've understood that the MLE or the MPA are point estimates, therefore using such models just output one specific value and not a distribution.
+Moreover, vanilla neuronal networks do in fact s.th. like MLE, because minimizing the squared-loss or the cross-entropy is similar to finding parameters that maximize the likelihood. Moreover, using neural networks with regularisation is comparable to the MAP estimates, as the prior works like the penalty term in error functions.
+However, I've found this work. It shows that the weights $W_{PLS}$ gained from a penalized least-squared are the same as the weights $W_{MAP}$ gained through maximum a posteriori:
+
+However, the paper says:
+
+The first two approaches result in similar predictions, although the MAP Bayesian model does give a probability distribution for $t_*$. The mean of this distribution is the same as that of the classical predictor $y(x_*; W_{PLS})$, since $W_{PLS} = W_{MAP}$.
+
+What I don't get here is how the MAP Bayesian approach can give a probability distribution over $t_*$ when it is only a point estimate.
+Consider a neural network: a point estimate would mean some fixed weights, so how can there be an output probability distribution? I thought this is only achieved in the fully Bayesian treatment, where we integrate out the unknown weights, thereby building something like a weighted average of all outcomes, using all possible weights.
+Can you help me?
+"
+"['neural-networks', 'natural-language-processing', 'recurrent-neural-networks', 'data-preprocessing']"," Title: Is my approach to building an RNN to predict the probability that the word is in English appropriate?Body: Goal
+To build an RNN which would receive a word as an input, and output the probability that the word is in English (or at least would be English sounding).
+Example
+input: hello
+output: 100%
+
+input: nmnmn
+output: 0%
+
+Approach
+Here is my approach.
+RNN
+I have built an RNN with the following specifications: (the subscript $i$ means a specific time step)
+The vectors (neurons):
+$$
+x_i \in \mathbb{R}^n \\
+s_i \in \mathbb{R}^m \\
+h_i \in \mathbb{R}^m \\
+b_i \in \mathbb{R}^n \\
+y_i \in \mathbb{R}^n \\
+$$
+The matrices (weights):
+$$
+U \in \mathbb{R}^{m \times n} \\
+W \in \mathbb{R}^{m \times m} \\
+V \in \mathbb{R}^{n \times m} \\
+$$
+This is how each time step is being fed forward:
+$$
+y_i = softmax(b_i) \\
+b_i = V h_i \\
+h_i = f(s_i) \\
+s_i = U x_i + W h_{i-1} \\
+$$
+Note that the $ + W h_{i-1}$ term is not used at the first time step.
+Losses
+Then, for the loss of each layer, I used cross entropy ($t_i$ is the target, or expected output at time $i$):
+$$
+L_i = -\sum_{j=1}^{n} t_{i,j} \ln(y_{i,j})
+$$
+Then, the total loss of the network:
+$$
+L = \sum L_i
+$$
+RNN diagram
+Here is a picture of the network that I drew:
+
+Data pre-processing
+Here is how data is fed into the network:
+Each word is split into characters, and every character is converted into a one-hot vector. Two special tokens, START and END, are appended to the word at the beginning and the end. Then the input at each time step is the sequence of characters without END, and the target output at each time step is the character following the input.
+Example
+Here is an example:
+
+- Start with a word: "cat"
+- Split it into characters and append the special tags: START c a t END
+- Transform into one-hot vectors: $v_1, v_2, v_3, v_4, v_5$
+- Then the input is $v_1, v_2, v_3, v_4$ and the output $v_2, v_3, v_4, v_5$
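+To make the pre-processing concrete, here is a minimal sketch of the encoding described above (plain numpy; the ordering of the 28 symbols is just my own convention):
+import numpy as np
+
+alphabet = list('abcdefghijklmnopqrstuvwxyz') + ['START', 'END']   # n = 28 symbols
+idx = {ch: i for i, ch in enumerate(alphabet)}
+
+def one_hot(symbol):
+    v = np.zeros(len(alphabet))
+    v[idx[symbol]] = 1.0
+    return v
+
+def encode(word):
+    symbols = ['START'] + list(word) + ['END']
+    vectors = [one_hot(s) for s in symbols]
+    x = np.array(vectors[:-1])    # START c a t
+    t = np.array(vectors[1:])     # c a t END  (target: the next character)
+    return x, t
+
+x, t = encode('cat')              # x.shape == t.shape == (4, 28)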
+
+Dataset
+For the dataset, I used a list of English words.
+Since I am working with English characters, the size of the input and output is $n=26+2=28$ (the $+2$ is for the extra START and END tags).
+Hyper-parameters
+Here are some more specifications:
+
+- Hidden size: $m=100$
+- Learning rate: $0.001$
+- Number of training cycles: $15000$ (each cycle is a loss calculation and backpropagation of a random word)
+- Activation function: $f(x) = \tanh(x)$
+
+Problem/question
+However, when I run my model, I get that the probability of some word being valid is about 0.9 regardless of the input.
+For the probability of a word being valid, I used the output at the final time step of the RNN, at the position of the END tag, after feeding the word forward.
+I wrote a gradient-checking algorithm and the gradients seem to check out.
+Is there conceptually something wrong with my neural network?
+I played a bit with $m$, the learning rate, and the number of cycles, but nothing really improved the performance.
+"
+"['deep-learning', 'objective-functions', 'gradient-descent', 'multi-task-learning']"," Title: How to deal with losses on different scales in multi-task learning?Body: Say I'm training a model for multiple tasks by trying to minimize sum of losses $L_1 + L_2$ via gradient descent.
+If these losses are on a different scale, the one whose range is greater will dominate the optimization. I'm currently trying to fix this problem by introducing a hyperparameter $\lambda$, and trying to bring these losses to the same scale by tuning it, i.e., I try to minimize $L_1 +\lambda \cdot L_2$ where $\lambda > 0 $.
+However, I'm not sure if this is a good approach. In short, what are some strategies to deal with losses having different scales when doing multi-task learning? I'm particularly interested in deep learning scenarios.
+"
+"['papers', 'generative-adversarial-networks', 'discriminator']"," Title: Why does this paper say that the Nash-equilibrium of GAN is given by a discriminator which is 0 everywhere on the data distribution?Body: I am facing difficulty in understanding the bolded portion of the following statement from this paper
+
+GANs are defined by a min-max two-player game between a discriminative network $D_\Psi(x)$ and generative network $G_\theta(z)$. While the discriminator tries to distinguish between real data point and data points produced by the generator, the generator tries to fool the discriminator. It can be shown that if both the generator and discriminator are powerful enough to approximate any real-valued function, the unique Nash-equilibrium of this two-player game is given by a generator that produces the true data distribution and a discriminator which is 0 everywhere on the data distribution.
+
+My understanding is that the discriminator outputs $\dfrac{1}{2}$ for any input after training. But what is the $0$ mentioned here?
+"
+"['classification', 'k-nearest-neighbors', 'curse-of-dimensionality']"," Title: Why the number of training points to densely cover the space grows exponentially with the dimension?Body: In this lecture (minute 42), the professor says that the number of training examples we need to densely cover the space of training vectors grows exponentially with the dimension of the space. So we need $4^2=16$ training data points if we're working on $2D$ space. I'd like to ask why this is true and how is it proved/achieved? The professor was talking before about K-Nearest Neighbors and he was using $L^{1}$ and $L^{2}$ metrics. I don't think these metrics induce a topology that makes a discrete set of points dense in the ambient space.
+"
+"['deep-learning', 'data-preprocessing', 'u-net', 'semantic-segmentation', 'multi-task-learning']"," Title: How should I incorporate numerical and categorical data as part of the inputs to the U-net for semantic segmentation?Body: I am using a U-Net to segment cancer cells in images of patients' arms. I would like to add patient data to it in order to see if it is possible to enhance the segmentation (patient data comes in the form of a table containing features such as gender, age, etc.). So far, my researches have led me nowhere. What can I do in order to achieve this?
+"
+"['deep-learning', 'natural-language-processing', 'sentiment-analysis', 'question-answering', 'fine-tuning']"," Title: Can Facebook's LASER be used like BERT?Body: Can Facebook's LASER be fine-tuned like BERT for Question Answering tasks or Sentiment Analysis?
+From my understanding, they created an embedding that allows for similar words in different languages to be close to each other. I just don't understand if this can be used for fine-tuning tasks like BERT can.
+"
+['philosophy']," Title: Where is the difference between a neural network mapping a problem space and learning a behaviour?Body: I've been looking at neural networks for control applications. Let's say I used an RL algorithm to train a controller for the cart pole balancing problem.
+Assuming the neural network is simple and very small, I can pretty much deduce what exactly the network is doing. For instance, if the network takes inputs for pole angle and cart position and outputs a motor force, the neural network is approximating a function that will move the cart left if the pole is falling left etc. and I can forward propagate through the network manually, again assuming that it is simple. In this case however, I could say that the neural network isn't truly learning a behavior, and instead is just mapping the problem space.
+However, what if I trained another, larger network for a similar problem, where there are environmental uncertainties that randomly occur (ie. oil patch on the ground so the cart dynamics change, or the ground is made of ice, or there are stochastic disturbances that simulate someone bumping the cart). If the training is successful, the resulting neural network would be learning the behaviour of balancing the cart for a variety of situations (robustness), instead of just pushing it left or right depending on the pole angle.
+The cart pole problem may not be the best example for this since it's a relatively simple control problem, but for more complex behaviors (ie. autonomous driving), where does this inflection point between learning and mapping exist?
+Is this even a valid question, or am I just completely mistaken and everything is technically just a function approximation and there is never any "true" robust learning happening?
+"
+"['deep-learning', 'backpropagation', 'gradient-descent']"," Title: In gradient descent's update rule, why do we use $\sigma(z^{l-1})\frac{\delta C_0}{ \delta w^{l}}$ instead of $\frac{\delta C_0}{\delta w^{l}}$?Body: I am trying to code a two layered neural network simple NN as I have described here https://itisexplained.com/html/NN/ml/5_codingneuralnetwork/
+I am getting stuck on the last step of updating the weights after calculating the gradients for the outer and inner layers via back-propagation
+#---------------------------------------------------------------
+
+# Two layered NW. Using from (1) and the equations we derived as explanations
+# (1) http://iamtrask.github.io/2015/07/12/basic-python-network/
+#---------------------------------------------------------------
+
+import numpy as np
+# seed random numbers to make calculation deterministic
+np.random.seed(1)
+
+# pretty print numpy array
+np.set_printoptions(formatter={'float': '{: 0.3f}'.format})
+
+# let us code our sigmoid funciton
+def sigmoid(x):
+ return 1/(1+np.exp(-x))
+
+# let us add a method that takes the derivative of x as well
+def derv_sigmoid(x):
+ return x*(1-x)
+
+# set learning rate as 1 for this toy example
+learningRate = 1
+
+# input x, also used as the training set here
+x = np.array([ [0,0,1],[0,1,1],[1,0,1],[1,1,1] ])
+
+# desired output for each of the training set above
+y = np.array([[0,1,1,0]]).T
+
+# Explanation - as long as the input has two ones, but not three, the output is One
+"""
+Input [0,0,1] Output = 0
+Input [0,1,1] Output = 1
+Input [1,0,1] Output = 1
+Input [1,1,1] Output = 0
+"""
+
+input_rows = 4
+# Randomly initialised weights
+weight1 = np.random.random((3,input_rows))
+weight2 = np.random.random((input_rows,1))
+
+print("Shape weight1",np.shape(weight1)) #debug
+print("Shape weight2",np.shape(weight2)) #debug
+
+# Activation to layer 0 is taken as input x
+a0 = x
+
+iterations = 1000
+for iter in range(0,iterations):
+
+ # Forward pass - Straight Forward
+ z1= x @ weight1
+ a1 = sigmoid(z1)
+ z2= a1 @ weight2
+ a2 = sigmoid(z2)
+
+ # Backward Pass - Backpropagation
+ delta2 = (y-a2)
+ #---------------------------------------------------------------
+    # Calculating change of Cost/Loss w.r.t. weights of 2nd/last layer
+ # Eq (A) ---> dC_dw2 = delta2*derv_sigmoid(z2)
+ #---------------------------------------------------------------
+
+ dC_dw2 = delta2 * derv_sigmoid(a2)
+
+ if iter == 0:
+ print("Shape dC_dw2",np.shape(dC_dw2)) #debug
+
+ #---------------------------------------------------------------
+    # Calculating change of Cost/Loss w.r.t. weights of 1st/inner layer
+ # Eq (B)---> dC_dw1 = derv_sigmoid(a1)*delta2*derv_sigmoid(a2)*weight2
+ # note delta2*derv_sigmoid(a2) == dC_dw2
+ # dC_dw1 = derv_sigmoid(a1)*dC_dw2*weight2
+ #---------------------------------------------------------------
+
+ dC_dw1 = (np.multiply(dC_dw2,weight2.T)) * derv_sigmoid(a1)
+ if iter == 0:
+ print("Shape dC_dw1",np.shape(dC_dw1)) #debug
+
+
+ #---------------------------------------------------------------
+    # Gradient descent
+ #---------------------------------------------------------------
+
+ #weight2 = weight2 - learningRate*dC_dw2 --> these are what the textbook tells
+ #weight1 = weight1 - learningRate*dC_dw1
+
+ weight2 = weight2 + learningRate*np.dot(a1.T,dC_dw2) # this is what works
+ weight1 = weight1 + learningRate*np.dot(a0.T,dC_dw1)
+
+
+print("New ouput\n",a2)
+
+
+Why is
+ weight2 = weight2 + learningRate*np.dot(a1.T,dC_dw2)
+ weight1 = weight1 + learningRate*np.dot(a0.T,dC_dw1)
+
+done instead of
+ #weight2 = weight2 - learningRate*dC_dw2
+ #weight1 = weight1 - learningRate*dC_dw1
+
+I do not understand the source of the equation that updates the weights by multiplying with the activation of the previous layer.
+As per gradient descent, the weight update should be
+$$
+ W^{l}_{new} = W^{l}_{old} - \gamma * \frac{\delta C_0}{\delta w^{l}}
+$$
+However, what works in practice is
+$$
+ W^{l}_{new} = W^{l}_{old} - \gamma * \sigma(z^{l-1})\frac{\delta C_0}{ \delta w^{l}},
+$$
+where $\gamma$ is the learning rate.
+"
+"['reinforcement-learning', 'papers', 'proofs', 'soft-actor-critic']"," Title: How is the discounted maximum entropy objective obtained for soft-q-learning and SACBody: In the soft q-learning paper, they provide an expression for the maximum entropy objective that takes discounting into account.
+My main question is: can someone explain how they incorporated discounting into the objective?
+I've also got a few other questions related to the form of the discounted objective as well.
+The first issue is: they first define the objective in terms of obtaining $\pi_{\text{MaxEnt}}^*$.
+In this first expression,
+$$
+\pi_{\mathrm{MaxEnt}}^{*}=\arg \max _{\pi} \sum_{t} \mathbb{E}_{\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right) \sim \rho_{\pi}}\left[\sum_{l=t}^{\infty} \gamma^{l-t} \mathbb{E}_{\left(\mathbf{s}_{l}, \mathbf{a}_{l}\right)}\left[r\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right)+\alpha \mathcal{H}\left(\pi\left(\cdot \mid \mathbf{s}_{t}\right)\right) \mid \mathbf{s}_{t}, \mathbf{a}_{t}\right]\right],
+$$
+I don't really understand the purpose of the inner expectation. If it's an expectation over $(s_l,a_l)$, the terms within the expectation are constants, so they can be taken out of the expectation and even the inner sum too. So, I think the subscript might be wrong, but was hoping someone could confirm this.
+My second issue is: they rewrite the maximum entropy objective using $Q_{soft}$ in (16)
+$$J(\pi) \triangleq \sum_{t} \mathbb{E}_{\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right) \sim \rho_{\pi}}\left[Q_{\mathrm{soft}}^{\pi}\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right)+\alpha \mathcal{H}\left(\pi\left(\cdot \mid \mathbf{s}_{t}\right)\right)\right]$$
+I'm not sure how they do this. If someone could provide a proof of this connection, that would be much appreciated.
+"
+"['neural-networks', 'pretrained-models', 'self-supervised-learning', 'commonsense-knowledge']"," Title: What are some most promising ways to approximate common sense and background knowledge?Body: I learned from this blog post Self-Supervised Learning: The Dark Matter of Intelligence that
+
+We believe that self-supervised learning (SSL) is one of the most promising ways to build such background knowledge and approximate a form of common sense in AI systems.
+
+What are other promising approaches that would be competitive with self-supervised learning?
+I only know of knowledge bases, but I don't think they would be that promising, due to the curation problem of large-scale automated knowledge base construction.
+"
+"['neural-networks', 'natural-language-processing', 'python', 'sentiment-analysis', 'named-entity-recognition']"," Title: How to train a sequence labeling model with annotations from three annotators?Body: I have a dataset of movie reviews annotated by 3 persons. The following example contains one sentence with corresponding annotations from 3 different persons.
+sentence = ['I', 'like', 'action', 'movies','!']
+annotator_1 = ['O','O', 'B_A', 'I_A', 'O']
+annotator_2 = ['O','O', 'B_A', 'I_A', 'O']
+annotator_3 = ['O','O', 'B_A', 'O', 'O']
+
+The labels follow the BIO format. That is, B_A means the beginning of an aspect term (action) and I_A indicates the inside of an aspect term (movies).
+Unfortunately, the annotators do not agree always together. While the first two persons assigned the right labels for aspect-term (action movie), the last one mislabeled the token (movies).
+I am using a Bi-LSTM-CRF sequence tagger to train the model. However, I am not sure if I am using the training data correctly.
+Is it correct to feed the model the same sentence with annotations from 3 persons? Then test it in the same way, i.e., the same sentence with different annotations?
+Another question.
+I merged the annotations in one final list of labels as follows:
+final_annotation = ['O','O', 'B_A', 'I_A', 'O']
+
+In this case, the final label is chosen based on the majority of labels among three annotators.
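+For reference, this is roughly how I compute that majority vote, reusing the three annotator lists from above (a minimal sketch in plain Python):
+from collections import Counter
+
+annotations = [annotator_1, annotator_2, annotator_3]
+
+final_annotation = []
+for labels in zip(*annotations):
+    # pick the label chosen by most annotators at this token position
+    final_annotation.append(Counter(labels).most_common(1)[0][0])
+
+print(final_annotation)  # ['O', 'O', 'B_A', 'I_A', 'O']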
+Is it right to feed the model the same sentence with corresponding annotations from all users during the testing phase?
+"
+"['artificial-neuron', 'neurons', 'sigmoid']"," Title: Why is the sigmoid function interpreted as a saturating firing rate of a neuron?Body: I've seen several people say that sigmoids are like a saturating firing rate of a neuron but I don't see how or why they interpret it as such. I especially don't see the relationship between a "rate" (so a number of something over time, I guess here it's the number that a neuron activates in a unit of time) and the sigmoid graph. For me it resembles more to the voltage output of an operational amplifier in some cases.
+"
+"['reinforcement-learning', 'deep-learning', 'policy-gradients', 'ddpg', 'continuous-action-spaces']"," Title: How to have zero value or a value between 200 and 400 in the output of a deep learning model?Body: I want to implement a DDPG method and obviously, the action space will be continuous. I have three outputs. The first output should be zero or a value between 200 and 400, and the other outputs have similar conditions. I don't know how can I implement this condition in the layers and activation functions. Should I use a binary activation before the scaled sigmoid function? How can I scale the activation function for this example?
+
+(a1 = 0) or (200 < a1 < 400)
+(a2 = 0) or (100 < a2 < 500)
+(a3 = 0) or (200 < a3 < 1000)
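+To make this concrete, here is a rough sketch of what I had in mind for the first output (purely illustrative; the gate idea is my own assumption, and the hard threshold obviously breaks the gradient, which is part of what I am unsure about):
+import torch
+import torch.nn as nn
+
+class ActionHead(nn.Module):
+    # one "gate" deciding whether the action is 0, and one value scaled into (low, high)
+    def __init__(self, in_features, low, high):
+        super().__init__()
+        self.low, self.high = low, high
+        self.gate = nn.Linear(in_features, 1)
+        self.value = nn.Linear(in_features, 1)
+
+    def forward(self, x):
+        on = (torch.sigmoid(self.gate(x)) > 0.5).float()                        # 0 or 1, not differentiable
+        scaled = self.low + torch.sigmoid(self.value(x)) * (self.high - self.low)
+        return on * scaled                                                       # 0, or a value in (low, high)
+
+head = ActionHead(in_features=64, low=200.0, high=400.0)
+print(head(torch.randn(1, 64)))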
+
+"
+"['reinforcement-learning', 'terminology', 'notation']"," Title: Does a trajectory in reinforcement learning contain the last action?Body: From what I learn from CS285 and OpenAI's spinning up, a trajectory in RL is a sequence of state-action pairs:
+$$\tau = \{s_0, a_0, ..., s_t, a_t\}$$
+And the resulting trajectory probability is:
+$$ P(\tau \mid \pi)=\rho_{0}\left(s_{0}\right) \prod_{t=0}^{T-1} P\left(s_{t+1} \mid s_{t}, a_{t}\right) \pi\left(a_{t} \mid s_{t}\right) $$
+
+- From CS285: http://rail.eecs.berkeley.edu/deeprlcourse/static/slides/lec-4.pdf
+
+- From spinning up: https://spinningup.openai.com/en/latest/spinningup/rl_intro.html#trajectories
+
+
+However, from my derivation, the above trajectory probability actually corresponds to the following sequence where the last action $a_t$ is absent:
+$$ \tau = \{s_0, a_0, ..., s_t\} $$
+Can someone please help me clarify this confusion?
+"
+"['neural-networks', 'machine-learning', 'bayesian-deep-learning', 'bayesian-neural-networks', 'variational-inference']"," Title: What is the intuition behind variational inference for Bayesian neural networks?Body: I'm trying to understand the concept of Variational Inference for BNNs. My source is this work. The aim is to minimize the divergence between the approx. distribution and the true posterior
+$$\text{KL}(q_{\theta}(w) \,\|\, p(w \mid D)) = \int q_{\theta}(w) \log \frac{q_{\theta}(w)}{p(w \mid D)} \, dw$$
+This can be expanded out as $$- F[q_{\theta}] + \log \ p(D)$$ where $$F[q_{\theta}] = -\text{KL}(q_{\theta}(w) || p(w)) + E[\log p(D \mid w)]$$
+Because $\log p(D)$ does not contain any variational parameters, the derivative will be zero. I really would like to summarize the concept of VI in words.
+How can one explain the last formula intuitively in words, and with it the fact that one approximates a function without really knowing it / being able to compute it?
+My attempt would be: minimizing the KL between the approximate distribution and the true posterior boils down to minimizing the KL between the approximate distribution and the prior (?) and maximizing the expected log-likelihood of the data under the approximate distribution. Is this roughly correct?
+"
+"['intelligent-agent', 'intelligence-testing', 'data-compression']"," Title: Do text compression tests qualify winRar or 7zip as intelligent?Body: I read this paper Text Compression as a Test for Artificial Intelligence, Mahoney, 1999.
+So far I understood the following:
+Text Compression tests can be used as an alternative to Turing Tests for intelligence.
+The bits-per-character score obtained from compressing a standard benchmark corpus can be used as a quantitative measure of intelligence.
+My questions:
+
+- Is my understanding of the topic correct?
+- Does this mean that applications like 7zip/WinRar are intelligent?
+- How are the ways a human compresses information (e.g., in the form of a summary) and the ways a computer compresses (using Huffman coding or something similar) compatible? How can we compare them?
+
+"
+"['computer-vision', 'audio-processing', 'sentience']"," Title: Is it likely that a sentient AI experience synesthesia?Body: The reason I ask this question is because we humans tend to compartmentalize our sensory inputs, except in some individuals that experience synesthesia. If an Artificial Intelligence Entity (AIE) can correlate all sensory input (as a bunch of tensors), wouldn't that be the ultimate form of synesthesia?
+An AIE geometry / color synesthesia might lead to an explanation of how Joan Miro colorized his doodles.
+"
+"['neural-networks', 'deep-learning', 'backpropagation']"," Title: Is vectorizing backpropagation feasible?Body: Does it make sense to have the backpropagation of a neural network layer happen all at once if the learning rate is lowered? This would mean the new weights of that layer would be independent of each other, but, it would be extremely fast. Is this method feasible in any way for a neural network, or would it create a cost threshold which the network can't reach because of it's independent inaccuracy?
+"
+"['machine-learning', 'papers', 'bayesian-neural-networks', 'bayes-theorem', 'probability-theory']"," Title: Bayesian Perceptron: Why to marginalize over neuron's output instead of it's weights?Body: I found a very interesting paper on the internet that tries to apply Bayesian inference with a gradient-free online-learning approach: Bayesian Perceptron: Towards fully Bayesian Neural Networks.
+I would love to understand this work, but unfortunately I am reaching my limits with my Bayesian knowledge. Let us assume that we have the weights $\mathcal{w}$ of our model and observed the data $\mathcal{D}$. Using the Bayes rule, we obtain the posterior according to $$p(\mathcal{w}|D)=\frac{p(D|\mathcal{w})p(\mathcal{w})}{p(D)}$$.
+In words: we update our prior belief over our weights by multiplying the prior with the likelihood and dividing everything by the evidence. In order to calculate the true posterior, we would need to calculate the evidence by marginalizing over (integrating out) our unknown parameters. This gives the integral $$p(D) = \int p(D|\mathbf{w})p(\mathbf{w})dw$$.
+So far so good. Now I refer to the paper mentioned above. Here, the approach is presented exemplarily on a neuron whose weighted sum is called $a$, which is then given to the activation function $f(.)$. Moreover it is assumed that $\mathbf{w}\sim N (\mu_w, \mathbf{C}_w)$. Because of the linearity it can be exploited that also $\mathbf{a}\sim N (\mu_a, \mathbf{C}_a)$.
+What I am confused about now is formula (14), which seems to show how to compute the true posterior:
+$$p(w) = \int p(a, w|D_i)da = \int p(w|a, D_i)p(a|D_i)da$$
+Why is $a$ integrated out here and not $w$? We want a distribution over $w$, don't we? But without marginalization over $w$ there is still uncertainty inside $w$.
+I'd be glad about any help and food for thought ;)
+"
+"['reinforcement-learning', 'dqn', 'value-based-methods', 'd3qn']"," Title: Why do we need to have two heads in D3QN to obtain value and advantage separately, if V is the average of Q values?Body: I have two questions on the Dueling DQN paper. First, I have an issue on understanding the identifiability that Dueling DQN paper mentions:
+
+Here is my question: if we are given the Q-values $Q(s, a; \theta)$ for all actions, I assume we can get the value of state $s$ by:
+$$V(s) = \frac {1} {|\mathcal{A}(s)|} \sum_{a \in \mathcal{A}(s)} Q(s, a; \theta)$$
+and the advantage by:
+$$A(s,a) = Q(s, a; \theta) - V(s), ~~~ \forall ~a \in \mathcal{A}(s)$$
+in which $\mathcal{A}(s)$ is the action space for state $s$. If this is correct, why do we need to have two heads in the network to obtain value and advantage separately?
+and then obtain Q-value using
+$$Q(s, a; \theta, \alpha, \beta) = V(s; \theta, \beta) + \left( A(s, a; \theta, \alpha) - \max_{a' \in \mathcal{A}} A(s, a'; \theta, \alpha) \right). \tag{8}$$
+or
+$$Q(s, a; \theta, \alpha, \beta) = V(s; \theta, \beta) + \left( A(s, a; \theta, \alpha) - \frac {1} {|\mathcal{A}|} \sum_{a' \in \mathcal{A}} A(s, a'; \theta, \alpha) \right). \tag{9}$$
+Am I missing something?
+My second question is why Dueling DQN does not use the target network as it is used in the DQN paper?
+"
+"['machine-learning', 'reference-request', 'performance', 'federated-learning', 'iid']"," Title: How would the performance of federated learning compare to the performance of centralized machine learning when the data is i.i.d.?Body: How would the performance of federated learning (FL) compare to the performance of centralized machine learning (ML), when the data is independent and identically distributed (i.i.d.)?
+Moreover, what is the difference in the performance of FL when the data is i.i.d. as compared to non-i.i.d?
+"
+"['agi', 'definitions', 'intelligent-agent', 'intelligence']"," Title: How widely accepted is the definition of intelligence by Marcus Hutter & Shane Legg?Body: I came across several papers by M. Hutter & S. Legg.
+Especially this one:
+Universal Intelligence: A Definition of Machine Intelligence, Shane Legg, Marcus Hutter
+Given that it was published back in 2007, how much recognition or agreement has it received?
+Has any other work better formalizing the idea of intelligence been done since?
+What is considered the current standard on this topic in the field?
+"
+"['machine-learning', 'papers', 'bayesian-neural-networks', 'bayes-theorem']"," Title: Bayesian Perceptron: How is it compatible to Bayes Theorem?Body: I found a very interesting paper on the internet that tries to apply Bayesian inference with a gradient-free online-learning approach: [Bayesian Perceptron: Bayesian Perceptron: Towards fully Bayesian Neural Networks.
+I would love to understand this work, but, unfortunately, I am reaching my limits with my Bayesian knowledge. Let us assume that we have the weights $\mathcal{w}$ of our model and observed the data $\mathcal{D}$. Using the Bayes rule, we obtain the posterior according to $$p(\mathcal{w}|D)=\frac{p(D|\mathcal{w})p(\mathcal{w})}{p(D)}$$.
+In words: we update our prior belief over our weights by multiplying the prior with the likelihood and dividing everything by the evidence. In order to calculate the true posterior, we would need to calculate the evidence by marginalizing over (integrating out) our unknown parameters. This gives the integral $$p(D) = \int p(D|\mathbf{w})p(\mathbf{w})dw$$.
+So far so good. Now I refer to the paper mentioned above. Here, the approach is presented exemplarily on a neuron whose weighted sum is called $a$, which is then given to the activation function $f(.)$. Moreover it is assumed that $\mathbf{w}\sim N (\mu_w, \mathbf{C}_w)$. Because of the linearity, it can be exploited that also $\mathbf{a}\sim N (\mu_a, \mathbf{C}_a)$.
+What I am confused about now is formula (14), which seems to show how to compute the true posterior:
+$$p(w) = \int p(a, w|D_i)da = \int p(w|a, D_i)p(a|D_i)da$$
+How is this formula of the posterior compatible with the Bayes Theorem? Where is the evidence, likelihood and prior?
+"
+"['backpropagation', 'activation-functions', 'sigmoid', 'gradient']"," Title: Why is it a problem if the outputs of an activation function are not zero-centered?Body: In this lecture, the professor says that one problem with the sigmoid function is that its outputs aren't zero-centered. Are the explanation provided by the professor regarding why this is bad is that the gradient of our loss w.r.t. the weights $\frac{\partial L}{\partial w}$ which is equal to $\frac{\partial L}{\partial \sigma}\frac{\partial \sigma}{\partial w}$ will always be either negative or positive and we'll have a problem updating our weights as she shows in this slide, we won't be able to move in the direction of the vector $(1,-1)$. I don't understand why since she only talks about one component of our gradient and not the whole vector. if the components of the gradient of our loss will have different signs which will allow us to adjust to different directions I'm I wrong? But the thing that I don't understand is how this property generalizes to non zero-centered functions and non-zero centered data?
+"
+"['natural-language-processing', 'natural-language-generation']"," Title: I have 5000 html files (structured text), how can I generate a new one that ""resembles"" those?Body: I don't know anything about ML or NLP, but I was asked by someone to create brand new statutes (written laws) that resemble the ones currently in effect in my country. I have already gathered the laws, and have 5000 html files now, one per law.
+The average size of each html file is 49 kB. The entire corpus is 300 MB.
+I have two alternative goals (doing both would be perfect of course):
+
+- Generate a new, complete HTML file, that would imitate the 5000 existing ones (it would typically have 1 big heading at the top, sub-headings, articles with their own title and number, etc.)
+
+- Generate sentences that sound as if they could be found in a typical law (the laws are written in French)
+
+
+Are any of those goals feasible, with such a small corpus (~300 MB total)?
+Should I try and fine-tune an existing model (but in that case, wouldn't the small size of my corpus be a problem? Wouldn't it be "drowned out" in the rest of the training data?), or should I create one from scratch?
+I've tried following guides on huggingface, but between the obsolete files, the undocumented flags and my general lack of knowledge of the subject, I'm completely lost.
+Thanks in advance.
+BTW, if you want to take a peek at the data, there it is: https://github.com/Biganon/rs/
+"
+"['neural-networks', 'deep-learning', 'overfitting', 'accuracy', 'binary-classification']"," Title: Could I just choose the other (non-predicted) class when the accuracy is low?Body: I have a binary classification problem.
+My neural network is getting between 10% and 45% accuracy on the validation set and 80% on the training set. Now, if I have a 10% accuracy and I just take the opposite of the predicted class, I will get 90% accuracy.
+I am going to add a KNN module that shuts down that process if the input data is, or is very similar to, data present in the dataset.
+Would this be a valid approach for my project (which is going to go on my resume)?
+"
+"['deep-learning', 'transformer']"," Title: How to construct Transformers to predict multidimensional time series?Body: There is plenty of information describing Transformers in a lot of detail how to use them for NLP tasks. Transformers can be applied for time series forecasting. See for example "Adversarial Sparse Transformer for Time Series Forecasting" by Wu et al.
+For understanding it is best to replicate everything according to already existing examples. There is a very nice example for LSTM with flights dataset https://stackabuse.com/time-series-prediction-using-lstm-with-pytorch-in-python/.
+I guess I would like to know how to implement transformers first for univariate (the flights dataset) and later for multivariate time series data. What should be removed from the Transformer architecture to form a model that would predict time series?
+"
+"['neural-networks', 'tensorflow', 'overfitting', 'accuracy', 'test-datasets']"," Title: What are possible ways to combat overfitting or improve the test accuracy in my case?Body: I have asked a question here, and one of the comments suggested that this is a case of severe overfitting. I made a neural network, which uses residual boosting (which is done via a KNN), and I am still just able to get < 50% accuracy on the test set.
+What should I do?
+I tried everything from reducing the number of epochs to replacing some layers with dropout.
+Here is the source code.
+"
+"['reference-request', 'heuristics', 'games-of-chance', 'card-games']"," Title: Are there heuristics that play Klondike Solitaire well?Body: Are there heuristics that play Klondike Solitaire well?
+I know there are some good exhaustive search solvers for Klondike Solitaire. The best one that I know of is Solvitaire (2019) which uses DFS, (see paper, code).
+I searched the web for heuristics that play as a human would play, with no backward moves; however, I found only one. In that paper, they report a win rate of 13.05%. In comparison, human experts reach a 36.6% win rate in thoughtful solitaire, which is Klondike Solitaire where the location of all the cards is known. Source: Solitaire: Man Versus Machine (2005).
+Are there any other published heuristics for Klondike Solitaire?
+When determining if a heuristic is interesting, I would consider its win-rate and the similarity to how humans are playing.
+"
+"['papers', 'geometric-deep-learning', 'graph-neural-networks']"," Title: How does the K-dimensional WL test work?Body: I am reading a paper on the K-WL GCN. I did not complete the paper yet, but I just skimmed over it. There I am trying to understand the K-WL test (page 3 Weisfeiler-Leman Algorithm). I think my understanding is quite ambiguous, so I looking for an example problem that is solved using K-WL test. But I can't find any of them on the web.
+Does anyone have any solved example problem on K-WL or can anyone explain to me how the K-WL test works?
+Note: If anyone also explains how K-WL GCN uses the K-WL test, I will be thankful.
+"
+"['genetic-algorithms', 'crossover-operators', 'homework', 'mutation-operators', 'chromosomes']"," Title: How should the 1-point crossover and mutation be defined for the problem of finding the largest circle that does not enclose any point?Body:
+For a random scattering of points, in a bounded area, the goal is to find the largest circle that can be drawn inside those same bounds that does not enclose any points. Solving this problem with a genetic algorithm requires deciding how to encode as a genome, information sufficient to represent any solution. In this case, the only information we need is the center point of the circle, so our genome of point $p_i$ will look like $(x_i, y_i)$, representing the Cartesian coordinates.
+
+In this case, what does each of the genetic operators mean for this simplistic genome? Geometrically speaking, what would a 1-point crossover look like in this case? What about mutation?
+This is my answer, but I am not sure.
+Consider two individuals with 2 variables each (2 dimensions), $p_1=(x_1, y_1)$ and $p_2=(x_2, y_2)$. For each variable, the parent who contributes its variable to the offspring is chosen randomly with equal probability. Geometrically speaking, a 1-point crossover would represent a quadrilateral in the Cartesian space, where one of the diagonals is formed by the parents and the other one by the offspring $c_1=(x_1, y_2)$ and $c_2=(x_2, y_1)$.
+On the other hand, a mutation operator is an r-geometric mutation under the metric $d$ if all its offspring are in the $d$-ball of radius $r$ centered in the parent.
+The radius (fitness function) would be the distance between the center (genome) and the closest point (star) from the random points in the bounded area.
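+To make this concrete, here is a minimal sketch of the two operators as I understand them (the coordinates and the mutation radius are made up, and I use the max-norm ball for simplicity):
+import random
+
+def crossover(p1, p2):
+    # 1-point crossover on a 2-gene genome (x, y): the offspring swap the y coordinates
+    (x1, y1), (x2, y2) = p1, p2
+    return (x1, y2), (x2, y1)
+
+def mutate(p, r=0.1):
+    # r-geometric mutation: the offspring stays within a ball of radius r around the parent
+    x, y = p
+    return x + random.uniform(-r, r), y + random.uniform(-r, r)
+
+print(crossover((0.2, 0.8), (0.6, 0.1)))  # ((0.2, 0.1), (0.6, 0.8))
+print(mutate((0.2, 0.8)))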
+"
+"['reference-request', 'backpropagation']"," Title: Are there relatively new research papers that describe how to make back-propagation more efficient?Body: I read Yann LeCun's paper Efficient BackProp, which was published in 2000. I looked for similar but more recent papers on Arxiv, but I have not yet found any.
+Are there relatively new research papers that describe how to make back-propagation more efficient?
+So, I am looking for papers similar to Efficient Backprop by LeCun but newer. The papers could describe why ReLU now "dominates" tanh or even sigmoid (but tanh was Yann's favorite, as explained in the paper). ReLU is just one thing I am interested in, but the paper could also analyze e.g. the inputs from a statistical standpoint.
+"
+"['markov-decision-process', 'pomdp']"," Title: Is there a way of path reconstruction using only the history of belief states?Body: Given a history of belief states, is there a common method that backtracks the most likely path of ending up in the current belief state?
+I have a Markov model which calculates belief states after every step. The belief state is a representation of the most likely states one could be in. A belief state may look like this:
+$$b=[1,0,0,0,0],$$ where I am in the state $s_0$ with 100% certainty.
+I can store the belief state history like $b_0, b_1, b_2,\dots, b_n$.
+Is there a common way to represent and estimate the most likely states one has been in?
+A naive approach could be to just look for the state with the highest value per belief state and take that as the node along the reverse path. But I am not confident that this is a common or good practice, as it does not consider the fuzziness that comes with a belief state. Then again, if I took all states with probability greater than 0, I might not know which state leads to which state, or whether that transition is even possible.
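+Concretely, the naive version I mean would be something like this (the numbers are made up):
+import numpy as np
+
+belief_history = [
+    np.array([1.0, 0.0, 0.0, 0.0, 0.0]),
+    np.array([0.1, 0.7, 0.2, 0.0, 0.0]),
+    np.array([0.0, 0.2, 0.1, 0.6, 0.1]),
+]
+
+# naive path reconstruction: take the most likely state per belief state
+path = [int(np.argmax(b)) for b in belief_history]
+print(path)  # [0, 1, 3]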
+"
+"['computer-vision', 'object-detection', 'algorithm-request', 'model-request', 'template-matching']"," Title: Should I use U-net to label keys in a keyboard image?Body: This is a 600*800 image.
+
+Which algorithm/model should I use to get an image like the one below, in which each key is detected and labeled by a rectangle?
+I guess this is some kind of a segmentation problem where U-net is the most popular algorithm, though I don't know how to apply it to this particular problem.
+
+"
+"['terminology', 'bayesian-networks', 'books', 'probabilistic-graphical-models']"," Title: In Probabilistic Graphical Model (written by Daphne Koller), what's the meaning of ""parameter"" in representation of the distribution?Body: I just started to read the PGM book written by Daphne Koller.
+In the chapter of Bayesian Network Representation(Chapter 3), there are some descriptions about the standard parameterization of the joint distribution corresponding to n-trial coin tosses.
+
+The book also says,
+
+Here I'm very confused about the meaning of $2^n$ parameters. In terms of a random variable or probability distribution, a parameter means a characteristic of the distribution. But "parameter" in this paragraph sounds like $O(2^n)$ space complexity, because the book also describes that we can reduce the space of all joint distributions to $n$ dimensions by using the expression $\prod_{i} \theta_{x_{i}}$.
+
+So, what's the meaning of parameter in this context? Does it mean space complexity for computation of the joint distribution?
+"
+"['machine-learning', 'convolutional-neural-networks', 'reference-request', 'dimensionality']"," Title: Huge dimensionality of input and output — any recommendations?Body: At work there is an idea of solving a problem with machine learning. I was assigned the task to have a look at this, since I'm quite good at both mathematics and programming. But I'm new to machine learning.
+In the problem a box would be discretized into smaller boxes (e.g. $100 \times 100 \times 100$ or even more), which I will call 'cells'. Input data would then be a boolean for each cell, and output data would be a float for each cell. Thus both input and output have dimensions of order $10^6$ to $10^9$.
+Do you have any recommendations about how to do this? I guess that it should be done with a ConvNet since the output depends on relations between close cells.
+I have concerns about the huge dimensions, especially as our training data is not at all that large, but at most contains a few thousands of samples.
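+To make the setup concrete, this is roughly the kind of fully convolutional model I was imagining (a sketch only; the grid here is much smaller than the real $100^3$ case, and the layer sizes are arbitrary):
+import torch
+import torch.nn as nn
+
+# boolean occupancy grid in, one float per cell out
+model = nn.Sequential(
+    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
+    nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
+    nn.Conv3d(16, 1, kernel_size=3, padding=1),
+)
+
+x = torch.zeros(1, 1, 32, 32, 32)   # batch, channel, depth, height, width
+print(model(x).shape)               # torch.Size([1, 1, 32, 32, 32])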
+
+Motivation
+It can be a bit sensitive to reveal information from a company, but since this is a common problem in computational fluid dynamics (CFD) and we already have a good solution, it might not be that sensitive.
+The big boxes are virtual wind tunnels, the small boxes ('cells' or voxels) are a discretization of the tunnel. The input tells where a model is located and the output would give information about where the cells of a volume mesh need to be smaller.
+"
+"['recurrent-neural-networks', 'time-series', 'binary-classification', 'algorithm-request', 'model-request']"," Title: Can RNNs be used to classify these time series into two classes?Body: My task is to classify into two classes the time series like these shown in the figure.
+
+The figure shows one class on the left sub-figure and the second one on the right. The series are shown in pairs for more clarity, but each series on the left (and right) belongs to one of the respective classes. The scale of the right panel is reduced to show all series, but these series are of the same amplitude as the left ones.
+Is it possible to apply RNN (or other methods) to classify the series of this kind into two classes?
+I have never used neural networks, but I am just looking for an adequate method for this problem.
+"
+"['deep-learning', 'overfitting', 'accuracy', 'generalization', 'dropout']"," Title: Is it possible that the model is overfitting when the training and validation accuracy increase?Body: I am aware of similar questions that have been asked, and I have gone through many. I want to bring my case to SE to understand better what my results are.
+I am working with a large dataset (around 75 million records), but, for the purpose of testing techniques, I am actually using 2M records. I am working towards malicious traffic identification using NetFlow data. After employing some undersampling to obtain a balanced dataset according to my target variable (benign or attack), I have 1,240,950 records in the training set and 310,238 in the validation set. Therefore, I believe there is a good amount of data to train a deep neural network properly.
+After using the Yeo-Johnson transform and standardizing the data, I train the network with a very basic model:
+from tensorflow import keras
+from tensorflow.keras.models import Sequential
+from tensorflow.keras.layers import Dense, Activation
+def basem():
+
+ model = Sequential()
+
+ model.add(Dense(25, input_dim=38))
+ model.add(Activation("relu"))
+
+ model.add(Dense(50))
+ model.add(Activation("relu"))
+
+ model.add(Dense(50))
+ model.add(Activation("relu"))
+
+ model.add(Dense(25))
+ model.add(Activation("relu"))
+
+ model.add(Dense(1, activation='sigmoid'))
+ model.compile(loss='binary_crossentropy', optimizer="adam", metrics=['accuracy'])
+ return model
+
+
+model_base = basem()
+model_base._name = 'base'
+
+history_base = model_base.fit(X_train, y_train, batch_size=2048,
+ epochs=15, validation_data=(X_val,y_val), shuffle=True)
+
+This gives me the following plot
+It may be because I am a newbie, but this plot looks too perfect. It is weird to see validation and training accuracy growing together, although I believe this is what we want, right? But now I have the feeling it is overfitting. Therefore, I use the model and 5-fold cross-validation to understand how well it generalizes. The results (mean accuracy and mean std in %) are:
+test acc: 0.9816503485233088
+test_prec: 0.9840033637114158
+test_f1: 0.9816046990113001
+test_recall: 0.9792384866432975
+test_roc_auc: 0.9980004347946355
+
+Dev acc: 0.052931962886091546
+Dev prec: 0.2854656099314699
+Dev f1: 0.057228805478181974
+Dev recall: 0.3597811552056071
+Dev roc auc: 0.0036456892671197097
+
+If I understand correctly, accuracy is high which is generally good and the standard deviation is very low for each metric, the highest being 0.359% for recall. Does this mean my model generalizes well?
+Edit
+Adding dropout (0.3) to each layer yields the following:
+
+Now, my validation accuracy is higher than my training. I can't make sense of any of this.
+"
+"['principal-component-analysis', 'dimensionality-reduction']"," Title: Why does PCA of the vertices of a hexagon result in principal components of equal length?Body: I do PCA on the data points placed in the corners of a hexagon, and get the following principal components:
+
+The PCA variance is $0.6$ and is the same for each component. Why is that? Shouldn't it be greater in the horizontal direction than in the vertical direction? The data is between $-1$ and $1$ in the $x$-direction but only between $-\sqrt{3}/2$ and $\sqrt{3}/2$ in the $y$-direction. Why does PCA result in equal-length components?
+The length of each vector in the picture is twice the square root of the variance.
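+For reference, this is essentially how I generate the six points and compute the components (a minimal sketch, assuming scikit-learn):
+import numpy as np
+from sklearn.decomposition import PCA
+
+# vertices of a regular hexagon with circumradius 1
+angles = np.arange(6) * np.pi / 3
+points = np.column_stack([np.cos(angles), np.sin(angles)])
+
+pca = PCA(n_components=2).fit(points)
+print(pca.explained_variance_)  # [0.6 0.6] -- the same in both directions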
+UPDATE: added more points, the variances changed to $0.477$ but still they are equal.
+
+UPDATE 2: Added even more points, the variances changed to $0.44$ but still they are equal.
+
+"
+"['natural-language-processing', 'pytorch', 'transformer', 'data-preprocessing']"," Title: Why (not) using pre-processing before using Transformer models?Body: Regarding the use of pre-processing techniques before using Transformers models, I read this post that apparently says that these measures are not so necessary nor interfere so much in the final result.
+The arguments raised seemed quite convincing to me, but would someone know how to explain better, perhaps with a bibliographic reference, why it is not so necessary to use these techniques?
+"
+"['neural-networks', 'implementation', 'softmax', 'xor-problem', 'categorical-crossentropy']"," Title: Why does my neural network to solve the XOR problem always output 0.5?Body: I'm trying to create a neural network to simulate an XOR gate.
+Here's my dataset:
+╔════════╦════════╗
+║ x1, x2 ║ y1, y2 ║
+╠════════╬════════╣
+║ 0, 0 ║ 0, 1 ║
+║ 0, 1 ║ 1, 0 ║
+║ 1, 0 ║ 1, 0 ║
+║ 1, 1 ║ 0, 1 ║
+╚════════╩════════╝
+
+And my neural network:
+
+I use logistic loss to get the error between target $y_{k}$ and output $\hat{y}_{k}$:
+$$ E(y_{k}, \hat{y}_{k}) = - \left( y_{k} \cdot \log(\hat{y}_{k}) + (1 - y_{k}) \cdot \log(1 - \hat{y}_{k}) \right) $$
+And then use the chain rule to update the weights. For example weight $w_{3}$'s contribution to the error is:
+$$ \sum_{k=1}^{2} \frac{\partial E(y_{k}, \hat{y}_{k})}{\partial w_{3}} = \sum_{k=1}^{2} \left(\frac{\partial E(y_{k}, \hat{y}_{k})}{\partial \hat{y}_{k}} \cdot \frac{\partial s_{k}}{\partial c_{1}}\right) \cdot \frac{\partial c_{1}}{\partial w_{3}} $$
+Which in developed form is:
+$$ \sum_{k=1}^{2} \frac{\partial E(y_{k}, \hat{y}_{k})}{\partial w_{3}} = \left( \left( - \frac{y_{1}}{\hat{y}_{1}} + \frac{1 - y_{1}}{1 - \hat{y}_{1}} \right) \cdot \hat{y}_{1} \cdot (1 - \hat{y}_{1}) + \left( - \frac{y_{2}}{\hat{y}_{2}} + \frac{1 - y_{2}}{1 - \hat{y}_{2}} \right) \cdot (- \hat{y}_{2}) \cdot\frac{c_{1}}{c_{1} + c_{2}} \right) \cdot s_{0} $$
+My issue is that, after a couple epochs of training on the entire dataset, the network always outputs:
+$$ \hat{y}_{1} = \hat{y}_{2} = 0.5 $$
+What am I doing wrong?
+"
+"['reinforcement-learning', 'q-learning', 'deep-rl', 'markov-decision-process', 'markov-property']"," Title: Reinforcement Learning for an environment that is non-markovianBody: I will start working on a project where we want to optimize the production of a chemical unit through reinforcement learning approach. From the SME's, we already obtained a simulator code that can take some input and render us the output. A part of our output is our objective function that we want to maximize by tuning the input variables. From a reinforcement learning angle, the inputs will be the agent actions, while the state and reward can be obtained from the output. We are currently in the process of building a RL environment, the major part of which is the simulator code described above.
+We were talking to an RL expert and she mentioned that one of the things that is conceptually wrong here is that our environment will not have the Markov property, in the sense that it is really a 'one-step process': the process does not continue from the previous state, and there is no sort of continuity in state transitions. She is correct there. This made me think: how can we get around this? Can we perhaps append some part of the current state to the next state, etc.? More importantly, I have seen RL applied to optimal control in other examples as well which are non-Markovian, e.g. scheduling, TSP problems, process optimization, etc. What is the explanation in such cases? Does one simply assume the process to be Markovian with an unknown transition function?
+"
+"['markov-decision-process', 'value-functions', 'stochastic-policy', 'optimal-policy', 'optimality']"," Title: How is $v_*(s) = \max_{\pi} v_\pi(s)$ also applicable in the case of stochastic policies?Body: I am reading Sutton & Bartos's Book "Introduction to reinforcement learning". In this book, the defined the optimal value function as:
+$$v_*(s) = \max_{\pi} v_\pi(s),$$ for all $s \in \mathcal{S}$.
+Do we take the max over all deterministic policies, or do we also look at stochastic policies (is there an example where a stochastic policy always performs better than a deterministic one?)
+My intuition is that the value function of a stochastic policy is more or less a linear combination of the deterministic policies it tries to model (however, there are some self-references, so it is not mathematically true).
+If we do look over all stochastic policies, shouldn't we take the supremum? Or do we know that the supremum is achieved, and therefore it is truly a maximum?
+"
+"['reference-request', 'neuromorphic-engineering', 'books', 'neuroscience', 'human-inspired']"," Title: Is there any comprehensive book that reviews topics in the area of brain-inspired computing?Body: I am looking to write my master's thesis next year about brain-inspired computing. Hence, I am looking to get a good overview of this domain.
+Do you know of any comprehensive book that reviews topics in the area of brain-inspired computing (such as spiking neural networks)?
+In spirit and scope, it should be similar to Ian Goodfellow's book deep learning.
+"
+"['machine-learning', 'regression', 'time-series', 'probability', 'probability-distribution']"," Title: Predicting the probability of a periodically happening event occurring at a given timeBody: I have encountered this problem on how to predict the probability of a periodically happening event occurring at a given time.
+For example, we have an event called being_an_undergrad. There are many data points: bob is an undergrad from (1999 - 2003), Bill is an undergrad from (1900 - 1903), Alice is an undergrad from (1900 - 1905), and there are many other data points such as (2010 - 2015), (2011 - 2013) ....
+There are many events(data points) of being_an_undergrad. The lasting interval varies, it might be 1 year, 2 years, 3 years, .... or even 10 years. But the majority is around 4 years.
+However, given all the data points above, I am wondering: if I now know that Jason starts college in 2021, how can I calculate/predict the probability that he will still be an undergrad in 2022? And in 2023? And in 2024 ... 2028, etc.?
+My current dataset consists of 10000 tuples representing events of different relations. The relations are all continuous relations similar to the example above. There are about 10 continuous relations in total in this dataset, such as isMarriedTo, beingUndergrad, livesIn, etc. For each relation, there are about 1000 data points (1000 durations), for example,
+<Leo, isUndergrad, Harvard, 2010 - 2011>, <Leo, isUndergrad, Stanford, 2013 - 2016>.....
+
+<Jason, livesIn, US, 1990 - 2021>, <Richard, livesIn, UK, 1899- 1995> ...
+
+My problem now is that I want to get a confidence level(probability) when I want to predict one event happening at a specific time point. For example, I want to predict the probability that event <Jason, livesIn, US, 2068> happens, given:
+1. the above dataset, which includes info about the relation livesIn
+2. the starting time when Jason starts living in the US, say he has lived in the US since 2030.
+I have used normal distribution to simulate, but I am wondering if there are any other better AI / ML / Stats approaches. Thanks a lot!
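+What I have so far is basically the following (a rough sketch, assuming SciPy; the durations are made up):
+import numpy as np
+from scipy import stats
+
+# observed durations (in years) of one relation, e.g. beingUndergrad
+durations = np.array([4, 4, 5, 3, 4, 6, 2, 4, 5, 4])
+
+mu, sigma = durations.mean(), durations.std(ddof=1)
+
+# P(the relation still holds after t years) = P(duration > t) under a fitted normal
+for t in range(1, 8):
+    print(t, 1 - stats.norm.cdf(t, loc=mu, scale=sigma))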
+"
+"['logic', 'symbolic-ai']"," Title: Do these FOL formula both represent ""You can fool some of the people all of the time""?Body:
+You can fool some of the people all of the time.
+
+This can be represented in FOL as follows
+$$\exists x \; \forall t \; (\text{person}(x) \land \text{time}(t)) \Rightarrow \text{can-fool}(x,t) \tag{1}\label{1}$$
+Is $\exists x \; \forall t \; \text{can-fool}(\text{person}(x), \text{time}(t))$ equivalent to (\ref{1}) ?
+"
+"['machine-learning', 'convolutional-neural-networks', 'image-processing', 'normalisation', 'standardisation']"," Title: For image preprocessing, is it better to use normalization or standartization?Body: For a neural network model that classifies images, is it better to use normalization (dividing by 255.0) or using standardization (subtract mean and divide by STD)?
+When I started learning convolutional neural networks, I always used normalization because it's simple and effective, but then I started to learn PyTorch and in one of the tutorials https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html they preprocess images like this:
+transform = transforms.Compose(
+ [transforms.ToTensor(),
+ transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
+
+trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
+ download=True, transform=transform)
+
+The transform object is created, which has the Normalize step, which in itself has the mean and STD values for each channel.
+At first, I didn't understand how this works, but then I learned how standardization works from Andrew Ng's video; still, I didn't find an answer to why it is better to use standardization over normalization or vice versa. I understand that normalization scales inputs to [0, 1], and standardization first subtracts the mean, so that the dataset is centered around 0, and then divides everything by the STD, so that the variance is normalized.
+Though I know how each of these techniques works (I think I know), I still don't understand why anybody would use one over the other to preprocess images.
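+This is how I picture the two operations, as a minimal numpy sketch (the image is random, just for illustration):
+import numpy as np
+
+img = np.random.randint(0, 256, size=(3, 224, 224)).astype(np.float32)
+
+# normalization: scale pixel values into [0, 1]
+normalized = img / 255.0
+
+# standardization: zero mean, unit variance (here computed per channel)
+mean = normalized.mean(axis=(1, 2), keepdims=True)
+std = normalized.std(axis=(1, 2), keepdims=True)
+standardized = (normalized - mean) / std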
+Could anybody explain where and why you would use normalization or standardization (if possible, could you give an example)? And as a side question: is it better to use the combined version, where you first normalize the image and then standardize it?
+"
+"['regression', 'audio-processing', 'binary-classification', 'logistic-regression']"," Title: How to get more accuracy of the logistic regression model?Body: I am working on a Baby Crying Detection model using logistic regression.
+Out of $581$ audios, $222$ are of a baby crying. Each audio is of $5$ seconds.
+What I have done is convert each audio into numbers, and those numbers go into a .csv file. First I took $100$ samples from each audio, then $1000$ samples, and then all $110250$ samples into a .csv file, and at the end of each row was a number 1 (crying) or 0 (not crying). Then I trained the model using logistic regression from that .csv file.
+The problem I'm facing is that with $100$ samples I get 64% accuracy on each audio, while with $1000$ samples and $110250$ samples (the full dataset) it reaches only 66% accuracy. How can I improve the accuracy of my model to up to 80% using logistic regression?
+I can only use simple logistic regression because I have to deploy the model on Arduino.
+"
+"['natural-language-processing', 'transformer', 'attention', 'bert', 'gpt']"," Title: Why does GPT-2 Exclude the Transformer Encoder?Body: After looking into transformers, BERT, and GPT-2, from what I understand, GPT-2 essentially uses only the decoder part of the original transformer architecture and uses masked self-attention that can only look at prior tokens.
+Why does GPT-2 not require the encoder part of the original transformer architecture?
+GPT-2 architecture with only decoder layers
+
+"
+"['transformer', 'attention', 'bert', 'gpt', 'forecasting']"," Title: Can an existing transformer model be modified to estimate the next most probable number in a sequence of numbers?Body: Models based on the transformer architectures (GPT, BERT, etc.) work awesome for NLP tasks including taking an input generated from words and producing probability estimates of the next word as the output.
+Can an existing transformer model, such as GPT-2, be modified to perform the same task on a sequence of numbers and estimate the next most probable number? If so, what modifications do we need to perform (do we still train a tokenizer to tokenize integers/floats into token IDs?)?
+"
+"['neural-networks', 'deep-learning']"," Title: Is the final model scaling done on the full training set?Body: We have our training set and our test set. When we scale our data we "fit" the scaler transform to the training set and then we scale both the training set and test set using this scaler object. Using splitting and cross-validation techniques, one can use the training set as training and validation. Finally, reporting on the test set.
+Now, if I want to use a model in a real-life environment, it's common to use the entire dataset (training and test) to train our already optimized model, to obtain a final, production-ready model.
+My question is regarding scaling. Should we fit the scaler to the entire set and then scale? Or can we simply append the scaled training set and scaled test set (both have been scaled using the training set's scaling parameters)?
+I am making use of sklearn.preprocessing.PowerTransformer, using the "Yeo-Johnson" power transform and also standardizing the data.
+"
+"['deep-neural-networks', 'generative-adversarial-networks', 'image-processing', 'attention', 'attn-gan']"," Title: SAGAN - is there a mistake in the original paper?Body: in the original paper the following scheme of the self-attention appears:
+https://arxiv.org/pdf/1805.08318.pdf
+
+In a later overview:
+https://arxiv.org/pdf/1906.01529.pdf
+this scheme appears:
+
+referring to the original paper.
+My understanding more correlates with the second paper scheme, as:
+
+Where there are two dot-product operations and three hidden parametric matrices:
+$$W_k, W_v, W_q$$
+which corresponds to $W_f, W_g, W_h$, without $W_v$, as in the original paper's explanation, which is as follows:
+
+Is this a mistake in the original paper ?
+"
+"['deep-learning', 'pytorch']"," Title: Does anybody know what would happen if I changed input shape of pytorch models?Body: In this https://pytorch.org/vision/stable/models.html tutorial it clearly states:
+
+All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224. The images have to be loaded in to a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225].
+
+Does that mean that for example if I want my model to have input size 128x128 it is or if I calculate mean and std which is unique to my dataset that it is gonna perform worse or won't work at all? I know that with tensorflow if you are loading pretrained models there is a specific argument input_shape which you can set according to your needs just like here:
+tf.keras.applications.ResNet101(
+include_top=True, weights='imagenet', input_tensor=None,
+input_shape=None, pooling=None, classes=1000, **kwargs)
+
+I know that I can pass any shape to those (pytorch) pretrained models and it works. What I want to understand is: can I change the input shape of those models without decreasing the model's training performance?
+"
+"['natural-language-processing', 'recurrent-neural-networks', 'natural-language-understanding']"," Title: Extracting values from text based on keywordsBody: I am trying to read a PDF file and put it in Python string and trying to fetch information based on keywords. The text here is completely irregular.
+Example of text
+
+Ram has taken an insurance of his premises with total sum insured of INR 256,200,000,000. XYZ company provides an insured with limit of liability of INR 100,250,000 and 90 days indemnity period. Insured with deductible of INR 200,000.
+
+Here I want to find 3 things from this text
+
+- limit of liability amount
+- Deductible amount
+- Sum insured amount
+
+For example
+Limit of liability = 100,250,000
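+For this exact example sentence, a plain keyword + regex lookup like the one below would already do it, but the real documents are much more irregular, which is why I am asking (hypothetical sketch):
+import re
+
+text = ("XYZ company provides an insured with limit of liability of INR 100,250,000 "
+        "and 90 days indemnity period. Insured with deductible of INR 200,000.")
+
+match = re.search(r"limit of liability of INR\s+([\d,]+)", text, flags=re.IGNORECASE)
+print(match.group(1))  # 100,250,000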
+"
+"['neural-networks', 'convolutional-neural-networks', 'long-short-term-memory', 'object-detection', 'feature-extraction']"," Title: How can Image Caption work?Body: I have two models and a file contains captions for images. The output of model 1 is .pkl files that contain the features of the images. Model 2 is the language model that will be trained with the captions. How can I link between two models to predict a caption for any image? The output of model 1 should be the input of model 2. But the features only are not enough so the input of model 2 will be .pkl files + caption file. Right?
+If someone can help me in getting the link between the two models, I will appreciate it.
+"
+"['search', 'proofs', 'hill-climbing', 'optimality', 'completeness']"," Title: Are hill climbing variations always optimal and complete?Body: Are hill climbing variations (like steepest ascent hill climbing, stochastic hill climbing, random restart hill climbing, local beam search) always optimal and complete?
+"
+"['convolutional-neural-networks', 'classification', 'image-recognition', 'accuracy', 'coco-dataset']"," Title: Accuracy Not Going Above 30%Body: I am trying to make a big classification model using the coco2017 dataset. Here is my code:
+import tensorflow as tf
+from tensorflow import keras
+import numpy as np
+import matplotlib.pyplot as plt
+import IPython.display as display
+from PIL import Image, ImageSequence
+import os
+import pathlib
+from tensorflow.keras.models import Sequential
+from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
+from tensorflow.keras.preprocessing.image import ImageDataGenerator
+import cv2
+import datetime
+
+gpus = tf.config.list_physical_devices('GPU')
+if gpus:
+ try:
+ # Currently, memory growth needs to be the same across GPUs
+ for gpu in gpus:
+ tf.config.experimental.set_memory_growth(gpu, True)
+ logical_gpus = tf.config.experimental.list_logical_devices('GPU')
+ print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
+ except RuntimeError as e:
+ # Memory growth must be set before GPUs have been initialized
+ print(e)
+
+epochs = 100
+steps_per_epoch = 10
+batch_size = 70
+IMG_HEIGHT = 200
+IMG_WIDTH = 200
+
+train_dir = "Train"
+test_dir = "Val"
+
+train_image_generator = ImageDataGenerator(rescale=1. / 255)
+
+test_image_generator = ImageDataGenerator(rescale=1. / 255)
+
+train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
+ directory=train_dir,
+ shuffle=True,
+ target_size=(IMG_HEIGHT, IMG_WIDTH),
+ class_mode='sparse')
+
+test_data_gen = test_image_generator.flow_from_directory(batch_size=batch_size,
+ directory=test_dir,
+ shuffle=True,
+ target_size=(IMG_HEIGHT, IMG_WIDTH),
+ class_mode='sparse')
+
+model = Sequential([
+ Conv2D(265, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
+ MaxPooling2D(),
+ Conv2D(64, 3, padding='same', activation='relu'),
+ MaxPooling2D(),
+ Conv2D(32, 3, padding='same', activation='relu'),
+ MaxPooling2D(),
+ Flatten(),
+ keras.layers.Dense(256, activation="relu"),
+ keras.layers.Dense(128, activation="relu"),
+ keras.layers.Dense(80, activation="softmax")
+])
+
+optimizer = tf.keras.optimizers.Adam(0.001)
+optimizer.learning_rate.assign(0.0001)
+
+model.compile(optimizer='adam',
+ loss="sparse_categorical_crossentropy",
+ metrics=['accuracy'])
+
+model.summary()
+tf.keras.utils.plot_model(model, to_file="model.png", show_shapes=True, show_layer_names=True, rankdir='TB')
+checkpoint_path = "training/cp.ckpt"
+checkpoint_dir = os.path.dirname(checkpoint_path)
+
+cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
+ save_weights_only=True,
+ verbose=1)
+
+os.system("rm -r logs")
+
+log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
+tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
+
+model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
+history = model.fit(train_data_gen,steps_per_epoch=steps_per_epoch,epochs=epochs,validation_data=test_data_gen,validation_steps=10,callbacks=[cp_callback, tensorboard_callback])
+model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
+model.save('model.h5', include_optimizer=True)
+
+test_loss, test_acc = model.evaluate(test_data_gen)
+print("Tested Acc: ", test_acc)
+print("Tested Acc: ", test_acc*100, "%")
+
+I have tried different optimizers like SGD, RMSProp, and Adam. I also tried changing the configuration of the hidden layers, and I tried changing the metric from accuracy to sparse_categorical_accuracy, with no improvement. I cannot go beyond 30% accuracy. My guess is that the MaxPooling layers are doing something, because I just added them but don't know what they mean. Can somebody explain what the MaxPooling layer does and what is stopping my neural network from gaining accuracy?
+"
+"['reinforcement-learning', 'comparison', 'reference-request', 'model-based-methods', 'transition-model']"," Title: What is the difference between a distribution model and a sampling model in Reinforcement Learning?Body: The book from Sutton and Barto, Reinforcement Learning: An Introduction, define a model in Reinforcement Learning as
+
+something that mimics the behavior of the environment, or more generally, that allows inferences to be made about how the environment will behave.
+
+In this answer, the answerer makes a distinction:
+
+There are broadly two types of model:
+
+- A distribution model which provides probabilities of all events. The most general function for this might be $p(r,s'|s,a)$ which is the probability of receiving reward $r$ and transitioning to state $s'$ given starting in state $s$ and taking action $a$.
+
+- A sampling model which generates reward $r$ and next state $s'$ when given a current state $s$ and action $a$. The samples might be from a simulation, or just taken from history of what the learning algorithm has experienced so far.
+
+
+
+The main difference is that in sampling models I only have a black box, which, given a certain input $(s,a)$, generates an output, but I don't know anything about the probability distributions of the MDP. However, having a sampling model, I can reconstruct (approximately) the probability distributions by running thousands of experiments (e.g. Monte Carlo Tree Search).
+On the other hand, if I have a distribution model, I can always sample from it.
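+To illustrate the distinction I have in mind, here is a toy sketch (the states, actions and probabilities are made up):
+import random
+
+# distribution model: explicit probabilities p(s', r | s, a) for every outcome
+def distribution_model(s, a):
+    return {("s1", 1.0): 0.8, ("s0", 0.0): 0.2}   # {(next_state, reward): probability}
+
+# sampling model: a black box that only ever returns one sampled outcome
+def sampling_model(s, a):
+    outcomes, probs = zip(*distribution_model(s, a).items())
+    return random.choices(outcomes, weights=probs)[0]
+
+print(sampling_model("s0", "a0"))   # e.g. ('s1', 1.0)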
+I was wondering if
+
+- what I wrote is correct;
+
+- this distinction has been remarked in literature and where I can find a more in-depth discussion on the topic;
+
+- someone has ever separated model-based algorithms which use a distribution model and model-based algorithms which use only a sampling model.
+
+
+"
+"['neural-networks', 'python', 'q-learning']"," Title: MicroPython MicroMLP: How do I reward the program based on state?Body: I have been trying to use MicroMLP to teach a small neural network to converge to correct results. Ultimately, I want to have three outputs, one which is high priority (X
must be as close to XTarget
as possible; Y
and Z
must be within bounds, and should approach YTarget
and ZTarget
as best as possible). Right now, I'm just trying to get convergence of one variable to understand this library.
+The below code works for xor
, but I don't really understand why it works, or how to extend this to reward behavior:
+from microMLP import MicroMLP
+import utime
+import random
+import machine
+import gc
+
+DEPTH=3
+mlp = MicroMLP.Create( neuronsByLayers = [DEPTH, DEPTH, 1],
+ activationFuncName = MicroMLP.ACTFUNC_GAUSSIAN,
+ layersAutoConnectFunction = MicroMLP.LayersFullConnect )
+
+nnFalse = MicroMLP.NNValue.FromBool(False)
+nnTrue = MicroMLP.NNValue.FromBool(True)
+
+led = machine.Pin(25, machine.Pin.OUT)
+tl = 0
+xor = []
+c=0
+for i in range(5000):
+ if not i % 100:
+ led.toggle()
+ gc.collect()
+ print(" Iteration: %s \t Correct: %s of 10" % (i,c))
+ c = 0
+ xor.clear()
+ xorOut = nnFalse
+ for j in range(DEPTH):
+ if random.random() > 0.5:
+ xor.append(nnTrue)
+ xorOut = nnFalse if xorOut == nnTrue else nnTrue
+ else:
+ xor.append(nnFalse)
+ p = mlp.Predict(xor)
+ mlp.QLearningLearnForChosenAction(None, xorOut, xor, 0)
+ if p[0].AsBool == xorOut.AsBool:
+ c += 1
+
+led.off()
+
+print( "LEARNED :" )
+
+c = 0
+tries = 0
+for i in range(100):
+ led.toggle()
+ gc.collect()
+ xor.clear()
+ xorOut = nnFalse
+ for j in range(DEPTH):
+ if random.random() > 0.5:
+ xor.append(nnTrue)
+ xorOut = nnFalse if xorOut == nnTrue else nnTrue
+ else:
+ xor.append(nnFalse)
+ tries += 1
+ p = mlp.Predict(xor)
+ c += 1 if mlp.Predict(xor)[0].AsBool == xorOut.AsBool else 0
+
+print( " %s of %s" % (c, tries) )
+
+del mlp
+print(gc.mem_alloc())
+gc.collect()
+print(gc.mem_alloc())
+
+I'm trying to achieve two goals, first for me to understand, second for the machine to do useful work.
+Goal #1: learn to adjust a value properly.
+Inputs:
+
+- Target (0,1)
+- Value (0,1)
+
+Outputs:
+
+- An adjustment toward Value (-1,1)
+
+- Possibly this has to be (0,1) so I've considered using the adjustment as
adjustment - 0.5
to put it into the (-0.5,0.5) range
+
+
+
+I want to reward the thing based on the degree to which value comes closer to the target. (As a special case, if it's impossible to adjust that far given the output, I want to maximize its reward for making the maximum adjustment.) I don't want to know the value adjustment should target; I only want to know that whatever value it gave produced a state I like better, and what that value was. If I can know the correct output, I don't need deep learning.
+Goal #2, the later one I expect to be able to do myself if I can get one variable working, is to have several inputs and three outputs. These inputs relate to the current targets and the deviation from those targets. One of these is of the highest priority to track toward a target value; the other two should track toward a target value, but are allowed to deviate by some amount with no harm done. If I can just figure out how to use the neural network, I should be able to assemble that.
+Does this sound reasonable? Is this the correct tool, or is Q-Learning wrong for this?
+Feel free to suggest a better package for regular Python as well, although MicroMLP is the only usable one of which I'm aware for the platform I'm targeting. I'll likely want a much more powerful one that I can use with extra available hardware if present.
+If I get something I can work with, I'll write documentation and submit a PR to the MicroMLP repo so nobody has to ask this again.
+"
+"['reinforcement-learning', 'deep-rl', 'policy-gradients', 'reinforce', 'sutton-barto']"," Title: How to simplify policy gradient theorem to $E_{\pi}[G_t \frac{\nabla_{\theta}\pi(a|S_t,\theta)}{\pi(a|S_t,\theta)}]$?Body: In "Introduction to Reinforcement Learning" (Richard Sutton) section 13.3(Reinforce algorithm) they have the following equation:
+\begin{align}
+ \nabla_{\theta}J &\propto \sum_s \mu(s) \sum_a q_{\pi}(s,a)\nabla_{\theta}\pi(a|s,\theta) \\
+ &= E_{\pi}[\sum_a q_{\pi}(S_t,a) \nabla_{\theta}\pi(a|S_t,\theta)] \tag{1}\label{1}
+\end{align}
+But in my opinion equation 1 should be expectation over state distribution: $$E_{\mu}[\sum_a q_{\pi}(S_t,a) \nabla_{\theta}\pi(a|S_t,\theta)]$$
+If I am right here then the rest of the lines follows like this:
+\begin{align}
+\nabla_{\theta}J &= E_{\mu}[\sum_a q_{\pi}(S_t,a) \nabla_{\theta}\pi(a|S_t,\theta)] \\
+ &= E_{\mu}[\sum_a \pi(a|S_t,\theta) q_{\pi}(S_t,a) \frac{\nabla_{\theta}\pi(a|S_t,\theta)}{\pi(a|S_t,\theta)}] \\
+ &= E_{\mu}[E_{\pi}[q_{\pi}(S_t,A_t)\frac{\nabla_{\theta}\pi(A_t|S_t,\theta)}{\pi(A_t|S_t,\theta)}]]\\
+ &= E_{\mu}[E_{\pi}[G_t\frac{\nabla_{\theta}\pi(A_t|S_t,\theta)}{\pi(A_t|S_t,\theta)}]]
+\end{align}
+Now the final update rule using stochastic gradient descent will be:
+$$\triangle \theta = \alpha E_{\pi}[G_t\frac{\nabla_{\theta}\pi(A_t|S_t,\theta)}{\pi(A_t|S_t,\theta)}] \tag{2}$$
+I think I am doing something wrong here, because equation 2 does not match the book or other materials. Can anyone please show me where I am going wrong?
+"
+"['neural-networks', 'reinforcement-learning', 'cross-entropy']"," Title: How do I implement the cross-entropy-method for a RL environment with a continuous action space?Body: I found many tutorials and posts on how to solve RL environments with discrete action spaces using the cross entropy method (e.g., in this blog post for the OpenAI Gym frozen lake environment).
+However, now I have built my first custom environment, which simulates a car driving on a road with leading and following vehicles. I want to control the acceleration of my vehicle without crashing into anyone. The state consists of the velocity and distance to the leading and following vehicles. The observation and action spaces are continuous and not discrete, which is why I cannot implement my training loop like in the examples that use the cross entropy method. That is because the method relies on modifying each training tuple <s, a, r> (state, action, reward) so that the probability distribution over a is equal to 1 in one dimension and equal to 0 in all others (meaning, it is very confident in its action, i.e., [0, 1, 0, 0]).
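+For reference, this is roughly the elite-selection step those discrete examples use, as I understand it (my own paraphrase in Python, not their exact code):
+import numpy as np
+
+def select_elites(states, actions, rewards, percentile=70):
+    # Keep the episodes whose reward is above the given percentile;
+    # the policy network is then trained to imitate the elite actions (one-hot targets).
+    threshold = np.percentile(rewards, percentile)
+    elite_states = [s for s, r in zip(states, rewards) if r >= threshold]
+    elite_actions = [a for a, r in zip(actions, rewards) if r >= threshold]
+    return elite_states, elite_actions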
+How do I implement the cross-entropy method for a continuous action space (in Python and PyTorch), or is that even possible? The answer to this question probably describes what I want to do in a very mathematical form, but I do not understand it.
+"
+"['image-recognition', 'object-recognition']"," Title: How can I train a model to recognize object with zoomed-in image?Body: Humans are good at guessing animals with zoomed-in images from patterns of fur/skin.
+(For example, if we saw a black-white pattern fur, it must be a zebra)
+I have some experience guessing a car model from an interior/exterior photo without a brand logo.
+(based on the dashboard/gear level/air vent or something like that)
+I think it would be helpful for my coworkers to have such a model.
+(I'm working at a car forum, and I have some limited experience working with TensorFlow).
+Is this possible?
+Where should I start?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'declarative-programming']"," Title: How to transfer declarative knowledge into neural networksBody: Humans learn facts about the world like "most A are B" by own experience and by being told so (by other people or texts). The systems and mechanisms of storage and usage of such facts (by an "experience system" and a "declarative system") are presumably quite different and may have to do with "episodic memory" and "semantic memory". Nevertheless at least in the human brain the common currency are synaptic weights, and it would be quite interesting to know how these two systems cooperate.
+I assume that machine learning is mainly concerned with "learning by own experience" (= training data + annotations), be it supervised or unsupervised learning. I wonder which approaches there are that allow a neural network to "learn by being told". One brute force approach might be to translate a declarative statement like "most A are B" into a set of synthetic training data, but that's definitely not how it works for humans.
+"
+"['machine-learning', 'deep-learning', 'statistical-ai']"," Title: Is the target assumed to be a noisy version of the output of the model in machine learning?Body: I wonder if the following equation (you can find it in almost every ML book) refers to a general assumption that we make when using machine learning:
+$$y = f(x)+\epsilon,$$
+where $y$ is our output, $f$ is e.g. a neural network and $\epsilon$ is an independent noise term.
+Does this mean that we assume the $y$'s contained in our training data set come from a noisy version of our network output?
+"
+"['deep-learning', 'gradient-descent', 'geometric-deep-learning', 'information-theory']"," Title: Do Gradient Descent and Natural Gradient solve the same problem?Body: I am troubled by natural gradient methods.
+If we have a function $f(x)$ we wish to minimize, gradient descent minimizes $f(x)$ of course, but what does the natural gradient do?
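+For context, the plain gradient descent update I have in mind, versus what I usually see written for the natural gradient (this is my reading, so it may be off), is
+$$\theta_{k+1} = \theta_k - \alpha \nabla f(\theta_k) \qquad \text{versus} \qquad \theta_{k+1} = \theta_k - \alpha F^{-1}\nabla f(\theta_k),$$
+where $F$ is the Fisher information matrix of some distribution.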
+I found on https://towardsdatascience.com/natural-gradient-ce454b3dcdfa:
+
+Instead of fixing the euclidean distance each parameter moves(distance in the parameter space), we can fix the distance in the distribution space of the target output.
+
+Where did the distributions come from? If we wish to minimize $f(x)$, the target output is just a minimizer $x^*$, right, and not a distribution, or am I missing something?
+"
+"['neural-networks', 'deep-learning', 'optimization']"," Title: What are some use cases of discrete optimization in Deep Learning?Body: When we talk of optimization, it usually boils down to gradient descent and its variants in the context of deep learning. However, I wonder if there are some works that use discrete optimization in one way or another in deep learning.
+In brief, what are some applications of discrete optimization to deep learning?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'function-approximation', 'universal-approximation-theorems']"," Title: How can ""any process you can imagine"" be thought of as function computation?Body: I stumbled upon this passage when reading this guide.
+
+Universality theorems are a commonplace in computer science, so much
+so that we sometimes forget how astonishing they are. But it's worth
+reminding ourselves: the ability to compute an arbitrary function is
+truly remarkable. Almost any process you can imagine can be thought of
+as function computation.* Consider the problem of naming a piece of
+music based on a short sample of the piece. That can be thought of as
+computing a function. Or consider the problem of translating a Chinese
+text into English. Again, that can be thought of as computing a
+function. Or consider the problem of taking an mp4 movie file and
+generating a description of the plot of the movie, and a discussion of
+the quality of the acting. Again, that can be thought of as a kind of
+function computation.* Universality means that, in principle, neural
+networks can do all these things and many more.
+
+How is this true? How can any process be thought of as function computation? How would one compute a function in order to translate Chinese text into English?
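+For instance, my attempt at writing the first example as a function (my own notation, not from the guide) would be something like
+$$f : \{\text{audio samples}\} \subset \mathbb{R}^n \to \{\text{song titles}\},$$
+and the claim, as I understand it, is that a neural network can in principle approximate such an $f$.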
+"
+"['neural-networks', 'convolutional-neural-networks', 'backpropagation']"," Title: In a convolutional neural network, how is the error delta propagated between convolutional layers?Body: I'm coding some stuff for CNNs, just relying on numpy (and scipy just for the convolution operation for pure performance reasons).
+I've coded a small network consisting of a convolutional layer with several feature maps, the max pooling layer and a dense layer for the output, so far so good, extrapolating the backpropagation from fully connected neural networks was quite intuitive.
+But now I'm stuck when several convolutional layers are chained. Imagine the following architecture:
+
+- Output neurons: 10
+- Input matrix (I): 28x28
+- First convolutional layer (CN1): 3x5x5, stride 1 (output shape is 3x24x24)
+- First pooling layer (MP1): 2x2 (output shape is 3x12x12)
+- Second convolutional layer (CN2): 3x5x5, stride 1 (output shape is 3x8x8)
+- Second pooling layer (MP2): 2x2 (output shape is 3x4x4)
+- Dense layer (D): 10x48 (fully connected to flattened MP2)
+
+Propagating the error back:
+
+- Error delta in output layer: 10x1 (cost delta)
+- Error delta in MP2: 3x4x4 (48x1 unflattened, calculating the error delta for the dense layer as usual)
+- Error delta in CN2: 3x8x8 (error delta of MP2, just upsampled)
+How do I continue from here? I don't know how to keep propagating the error to the previous layer: the error delta in the current layer is 3x8x8 and the kernel is 3x5x5, so performing the convolution between the error delta and the filter to calculate the delta for the previous layer gives a 3x4x4 delta.
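+To make the shape mismatch concrete, this is roughly what I am computing at the moment (a small numpy/scipy sketch of my own, with random values standing in for the real tensors):
+import numpy as np
+from scipy import signal
+
+delta_cn2 = np.random.randn(3, 8, 8)   # error delta in CN2, one map per filter
+kernels = np.random.randn(3, 5, 5)     # the CN2 filters
+# Convolving each 8x8 delta with its 5x5 filter in 'valid' mode gives 4x4,
+# but the layer below (MP1) produced a 3x12x12 output, so the shapes do not line up.
+delta_prev = np.stack([signal.convolve2d(delta_cn2[i], kernels[i], mode='valid')
+                       for i in range(3)])
+print(delta_prev.shape)  # (3, 4, 4)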
+"
+"['machine-learning', 'ensemble-learning']"," Title: In ensemble learning, does accuracy increase depending on the number of models you want to combine?Body: I want to predict using the same model as multivariate time series data in a time series prediction problem.
+Example:
+pa = model predict result(a)
+pb = model predict result(b)
+pc = model predict result(c)
+...
+model ensemble([pa, pb, pc,...]) -> predict(y)
+
+Can I expect better performance by using a model ensemble with more kinds of time series data here?
+"
+"['optimization', 'gradient-descent']"," Title: Optimizer that prevents parameters from oscillatingBody: When we perform gradient descent, especially in an online setting where the training data is presented in a non-random order, a particular 1-dimensional parameter (such as an edge weight) may first travel in one direction, then turn around and travel the other way for a while, then turn around and travel back, and so forth. This is wasteful, and the problem is that the learning rate for that parameter was too high, making it overshoot the optimal point. We don't want parameters to oscillate as they are trained; instead, ideally, they should settle directly to their final values, like a critically damped spring.
+Is there an optimizer that sets learning rates based on this concept? Rprop seems related, in that it reduces the learning rate whenever the gradient changes direction. The problem with Rprop is that it only detects oscillations of period 2. What if the oscillation is longer, e.g. the parameter is moving in a sine wave with a period of dozens or hundreds of time steps? I am looking for an optimizer that can suppress oscillations of any period length.
+Let's be specific. Say that $w$ is a parameter, receiving a sequence of gradient updates $g_0, g_1, g_2, ... $ . I am looking for an optimizer that would pass the following tests:
+
+- If $g_t = sin(t) - w$, then $w$ should settle to the value 0.
+- If $g_t = sin(t) + 100 - 100 cos(0.00001t) - w$, then $w$ should settle to the value 100.
+- If $g_t = sin(t) - w$ for $0 < t < 1000000$, and $g_t = sin(t) + 100 - w$ for $1000000 \leq t$, then $w$ should at first settle to the value 0, and then not too long after time step $1000000$ it should settle to the value 100.
+- If $g_t = sin(t) - w$ for $floor(t / 1000000)$ even, and $g_t = sin(t) + 100 - w$ for $floor(t / 1000000)$ odd, then $w$ should at first settle to the value 0, then not too long after time step $1000000$ it should settle to the value 100, and then not too long after step $2000000$ it should settle back to 0, but eventually after enough iterations it should settle to the value 50 and stop changing forever after.
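+For clarity, this is roughly how I picture running such a test (plain Python of my own; step(w, g) stands for whatever update rule is being proposed and should return the new value of w):
+import math
+
+def run_test(step, T=2_000_000):
+    w = 0.0
+    for t in range(T):
+        g = math.sin(t) - w   # the gradient signal from the first test above
+        w = step(w, g)
+    return w                  # should end up close to 0 for a passing optimizer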
+
+"
+"['reinforcement-learning', 'deep-rl', 'model-based-methods', 'algorithm-request']"," Title: Are there RL algorithms that also try to predict the next state?Body: So far I've developed simple RL algorithms, like Deep Q-Learning and Double Deep Q-Learning. Also, I read a bit about A3C and policy gradient but superficially.
+If I remember correctly, all these algorithms focus on the value of the action and try to get the maximum one. Is there an RL algorithm that also tries to predict what the next state will be, given a possible action that the agent would take?
+Then, in parallel to the constant training for getting the best reward, there will also be constant training to predict the next state as accurately as possible? And then have that prediction of the next state always be passed as an input into the NN that decides on the action to take. Seems like a useful piece of information.
+"
+"['neural-networks', 'objective-functions']"," Title: Is it normal getting noise values in the error history along training iteration?Body: I'm giving my first steps in really learning machine learning.
+As an exercise in my online course, I was asked to code the cost function of a neural network that should solve the handwritten digit recognition problem with digits between 1 and 10.
+As most of you know, the cost function of NN is given by:
+$$
+J(\theta)=\frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K}\left[-y_{k}^{(i)} \log \left(\left(h_{\theta}\left(x^{(i)}\right)\right)_{k}\right)-\left(1-y_{k}^{(i)}\right) \log \left(1-\left(h_{\theta}\left(x^{(i)}\right)\right)_{k}\right)\right]
+$$
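+To double-check my understanding before showing my Octave code below, this is how I read the formula in plain Python/numpy (my own sketch, where H is the (m, K) matrix of hypotheses and Y the (m, K) one-hot label matrix):
+import numpy as np
+
+def nn_cost(H, Y):
+    # element-wise cross-entropy, summed over classes and averaged over the m examples
+    m = Y.shape[0]
+    return np.sum(-Y * np.log(H) - (1 - Y) * np.log(1 - H)) / m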
+so I tried to code it considering the following information:
+
+where $\mathrm{h_{\theta}}(\mathrm{x^{i}})$ is computed as shown in Figure 2 and $K=10$ is the total number of possible labels. Note that $h_{\theta}\left(x^{(i)}\right)_{k}= a_{k}^{(3)}$ is the activation (output value) of the $k$ -th output unit Also, recall that whereas the original labels (In the variable $y$) were $1,2, \ldots, 10$, for the purpose of training a neural network, we need to recode the labels as vectors containing only values 0 or 1, so that
+$$
+y=\left[\begin{array}{c}
+1 \\
+0 \\
+0 \\
+\vdots \\
+0
+\end{array}\right],\left[\begin{array}{l}
+0 \\
+1 \\
+0 \\
+\vdots \\
+0
+\end{array}\right], \ldots . \text { or }\left[\begin{array}{c}
+0 \\
+0 \\
+0 \\
+\vdots \\
+1
+\end{array}\right]
+$$
+For example, if $x^{i}$ is an image of the digit $5,$ then the corresponding $y^{(i)}$ (that you should use with the cost function) should be a $10-$ dimensional vector with $y_{5}=1,$ and the other elements equal to $0$. You should implement the feedforward computation that computes $\mathrm{h_{\theta}}(\mathrm{x^{i}})$ for every example $i$ and sum the cost over all examples. Your code should also work for a dataset of any size, with any number of labels (you can assume that there are always at least $K \geq 3$ labels)
+
+plotting the graph, I got:
+
+here's my error cost function code:
+function [J grad] = nnCostFunction(nn_params, ...
+ input_layer_size, ...
+ hidden_layer_size, ...
+ num_labels, ...
+ X, y, lambda)
+
+% Setup some useful variables
+ m = size(X, 1)
+% bias = ones(m,1)';
+Theta1 = [ ones(401,1) reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
+ hidden_layer_size, (input_layer_size + 1))']';
+
+Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
+ num_labels, (hidden_layer_size + 1));
+
+J =0;
+error_history = zeros(1,m);
+y_coded = zeros(1,num_labels);
+for i = 1:m
+ y_coded(y) = 1;
+ X_in = [1 X(i,:)]';
+ hypotesis_array = sigmoid(Theta2*sigmoid(Theta1*X_in));
+
+ for k =1:num_labels
+ J = J -(y_coded(k)*log10(hypotesis_array(k)) - (1- y_coded(k))*log10(hypotesis_array(k)));
+ end
+ J =J/m;
+ error_history(i) = J;
+
+
+end
+plot(1:5000, error_history);
+ylabel("Error_Value");
+xlabel("training iteration");
+
+I did that considering that the weights were previously given.
+Is it normal to get this noisy error history, or did I do something wrong in my code?
+"
+['generative-adversarial-networks']," Title: How does the output distribution of a GAN change if the parameters are slightly purturbed?Body: Suppose $G_{\phi}:\mathcal{Z}\rightarrow \mathcal{X}$ is a generator (neural network, non-invertible) that can sample from some distribution $\pi$ on $\mathcal{X}$. That is, $G_{\phi}(z)\sim \pi$ when $z\sim \mathcal{N}(0,I)$. Let $\phi+\delta_{\phi}$ represent a (small) perturbation of the parameters of $G_{\phi}$ and let $G_{\phi+\delta_{\phi}}(z)\sim \pi'$ when $z\sim \mathcal{N}(0,I)$.
+Are there any results that quantify or bound $\mathcal{D}(\pi,\pi')$ in terms of $\delta_{\phi}$, where $\mathcal{D}$ is a distance measure for distributions (let's say KL-divergence, or the Wasserstein-1 distance)?
+Basically, I want to know what kind of geometry is induced on the space of distributions by the Euclidian geometry on the parameter space of a generative adversarial network.
+To explain further, let's consider a parametric family of distributions $p_{\phi}$, where $\phi\in\Phi$ (some parameter space). It is a fairly well-known result in statistics that $\text{KL}(p_{\phi}||p_{\phi+\delta_{\phi}})\approx \frac{1}{2}\delta_{\phi}^\top F_{\phi} \delta_{\phi}$, where $F_{\phi}$ is the Fisher information matrix. When the family $p_{\phi}$ is generated by a GAN with parameter $\phi$ (in which case we don't know $p_{\phi}$ in closed-form), can we have an analogous result?
+"
+"['reinforcement-learning', 'policy-gradients', 'actor-critic-methods', 'proximal-policy-optimization']"," Title: Understanding advantage estimator in proximal policy optimizationBody: I was reading Proximal Policy Optimization paper. It states following:
+
+The advantage estimator used is:
+$\hat{A}_t=-V(s_t)+r_t+\gamma r_{t+1}+...+\gamma^{T-t+1}r_{T-1}+\color{blue}{\gamma^{T-t}}V(s_T) \quad\quad\quad\quad\quad\quad\quad(10)$
+where $t$ specifies the time index in $[0, T]$, within a given length-$T$ trajectory segment. Generalizing
+this choice, we can use a truncated version of generalized advantage estimation, which reduces to
+Equation (10) when $λ = 1$:
+$\hat{A}_t=\delta_t+(\gamma\lambda)\delta_{t+1}+...+(\gamma\lambda)^{T-t+1}\delta_{T-1}\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad(11)$
+where, $\delta_t=r_t+\gamma V(s_{t+1})-V(s_t)\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad(12)$
+
+How does equation (11) reduce to equation (10)? Putting $\lambda=1$ in equation (11), we get:
+$\hat{A}_t=\delta_t+\gamma\delta_{t+1}+...+\gamma^{T-t+1}\delta_{T-1}$
+Putting equation (12) in equation (11), we get:
+$\hat{A}_t$
+$=r_t+\gamma V(s_{t+1})-V(s_t) $
+$+\gamma[r_{t+1}+\gamma V(s_{t+2})-V(s_{t+1})]+...$
+$+\gamma^{T-t+1}[r_{T-1}+\gamma V(s_{T})-V(s_{T-1})]$
+$=-V(s_t)+r_t\color{red}{+\gamma V(s_{t+1})} $
+$+\gamma r_{t+1}+\gamma^2 V(s_{t+2})\color{red}{-\gamma V(s_{t+1})}+...$
+$+\gamma^{T-t+1}r_{T-1}+\color{blue}{\gamma^{T-t+2}} V(s_{T})-V(s_{T-1})$
+I understand that the terms cancel out. I am not getting the difference in the blue-colored power of $\gamma$ in the last terms. I must have made some stupid mistake.
+"
+"['reinforcement-learning', 'objective-functions', 'policy-gradients', 'reinforce']"," Title: Why does the implementation of REINFORCE algorithm minimize the gradient term but not the loss?Body: I read the book "Foundation of Deep Reinforcement Learning, Laura Graesser and Wah Loon Keng", and when I go through the REINFORCE algorithm, they show the objective function:
+$$
+J\left(\pi_{\theta}\right)=\mathbb{E}_{\tau \sim \pi_{\theta}}[R(\tau)]=\mathbb{E}_{\tau \sim \pi_{\theta}}\left[\sum_{t=0}^{T} \gamma^{t} r_{t}\right]
+$$
+and the gradient of the objective:
+$$
+\nabla_{\theta} J\left(\pi_{\theta}\right)=\mathbb{E}_{\tau \sim \pi_{\theta}}\left[\sum_{t=0}^{T} R_{t}(\tau) \nabla_{\theta} \log \pi_{\theta}\left(a_{t} \mid s_{t}\right)\right]
+$$
+But when they implement it,
+# Assumed imports (not shown in this snippet) so that it runs as-is:
+import numpy as np
+import torch
+import torch.nn as nn
+import torch.optim as optim
+from torch.distributions import Categorical
+import gym
+
+gamma = 0.99  # discount factor (assumed value; defined elsewhere in the book's full listing)
+
+class Pi(nn.Module):
+ def __init__(self, in_dim, out_dim):
+ super(Pi, self).__init__()
+ layers = [
+ nn.Linear(in_dim, 64),
+ nn.ReLU(),
+ nn.Linear(64, out_dim)
+ ]
+ self.model = nn.Sequential(*layers)
+ self.onpolicy_reset()
+ self.train()
+
+ def onpolicy_reset(self):
+ self.log_probs = []
+ self.rewards = []
+
+ def forward(self, x):
+ pdparam = self.model(x)
+ return pdparam
+
+ def act(self, state):
+ x = torch.from_numpy(state.astype(np.float32))
+ pdparam = self.forward(x) # (1, num_action), each number represent the raw logits for that specific action
+ # model contain the paremeters theta of the policy, pd is the probability
+ # distribution parameterized by model's theta
+ pd = Categorical(logits = pdparam)
+ action = pd.sample()
+ log_prob = pd.log_prob(action)
+ self.log_probs.append(log_prob)
+ return action.item()
+
+def train(pi, optimizer):
+ T = len(pi.rewards)
+ rets = np.empty(T, dtype = np.float32)
+ future_ret = 0.0
+ for t in reversed(range(T)):
+ future_ret = pi.rewards[t] + gamma*future_ret
+ rets[t] = future_ret
+
+ rets = torch.tensor(rets)
+ log_probs = torch.stack(pi.log_probs)
+ loss = -log_probs*rets
+ loss = torch.sum(loss)
+ optimizer.zero_grad()
+ loss.backward()
+ optimizer.step()
+ return loss
+
+def main():
+ env = gym.make('CartPole-v0')
+ # in_dim is the state dimension
+ in_dim = env.observation_space.shape[0]
+ # out_dim is the action dimension
+ out_dim = env.action_space.n
+ pi = Pi(in_dim, out_dim)
+ optimizer = optim.Adam(pi.parameters(), lr = 0.005)
+ for epi in range(300):
+ state = env.reset()
+ for t in range(200): # max timstep of cartpole is 200
+ action = pi.act(state)
+ state, reward, done, _ = env.step(action)
+ pi.rewards.append(reward)
+ # env.render(mode='rgb_array')
+ if done:
+ break
+ loss = train(pi, optimizer)
+ total_reward = sum(pi.rewards)
+ solved = total_reward > 195.0
+ pi.onpolicy_reset()
+ print(f'Episode {epi}, loss: {loss}, total reward: {total_reward}, solve: {solved}')
+ return pi
+
+In train(), they minimize the gradient term, and I cannot understand why that is.
+Can someone shed light on that?
+I am new to this, so please forgive me if this question is stupid.
+"
+"['deep-learning', 'recurrent-neural-networks', 'long-short-term-memory', 'feedforward-neural-networks', 'multilayer-perceptrons']"," Title: Can RNNs get inputs and produce outputs similar to the inputs and outputs of FFNNs?Body: RNN and LSTM models have many architectures that can be modified. We can also compose their input and output data. However, in the examples that I found on the web, the inputs and outputs of RNNs/LSTMs are usually sequences.
+Let's say we have a 3-column dataset:
+data= np.array([[1.022 0.94 1.278]
+ [2.096 1.404 2.035]
+ [1.622 2.348 1.909]
+ [1.678 1.638 1.742]
+ [2.279 1.878 2.045]])
+
+where the first two columns contain the inputs (features) and the third one contains the labels.
+Usually, when modeling with feedforward neural networks (FFNNs), the input and output look like this:
+Input:
+x_input = np.vstack((data[:, 0], data[:, 1])).reshape(5, 2)
+
+[[1.022 2.096]
+ [1.622 1.678]
+ [2.279 0.94 ]
+ [1.404 2.348]
+ [1.638 1.878]]
+
+Output:
+y_output = np.vstack((data[:, 2])).reshape(5, 1)
+
+[[1.278]
+ [2.035]
+ [1.909]
+ [1.742]
+ [2.045]]
+
+When modeling with RNN, the input and output are:
+Input:
+[[1.022 0.94 1.278]
+ [2.096 1.404 2.035]
+ [1.622 2.348 1.909]
+
+Output (as a sequence):
+ [1.678 1.638 1.742]
+ [2.279 1.878 2.045]]
+
+I would like to ask: Is it possible to shape the input and output the same way as in the FFNN case above when modeling with an RNN? Would that be correct?
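+To be explicit, what I mean is keeping the FFNN-style (features, label) arrays from above and only adding a timestep axis, something like this (my own sketch, using the x_input and y_output defined earlier):
+# Many RNN/LSTM layers expect input of shape (samples, timesteps, features)
+x_rnn = x_input.reshape(5, 1, 2)   # treat each row as a sequence of length 1
+y_rnn = y_output                   # the labels keep their (5, 1) shape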
+"
+"['value-functions', 'bellman-equations', 'value-iteration', 'policy-iteration']"," Title: Bellman Expectation Equation leading to results where value iteration would not converge to the optimal policyBody: When applying the bellman expectation equation:
+$$v(s)=\mathbb{E}\left[R_{t+1}+\gamma v\left(S_{t+1}\right) \mid S_{t}=s\right]$$
+to the MRP below, states further away from the terminal state will have the same value $v(s)$ as states closer to the terminal state, even though it is clear that the expected total reward from states further away is lower. If the discount factor $\gamma$ were even lower, states further away would get a higher value. If we now make this an MDP where the agent can decide to go in either direction from all states (with the first state having an action leading to itself), the agent would then choose to go further away from the terminal state, getting less reward over the whole episode. So, this seems to be an example where policy/value iteration would not converge to an optimal policy. I know there is something wrong with the reasoning here; I just cannot seem to figure out what.
+What am I missing here?
+
+EDIT: So, the problem actually was that I didn't take into account that the terminal state has to get a value of 0. If you put it at 0 at all times this will converge as expected because all the other states will get lower and lower values while, assuming a greedy policy, the one-to-last state will retain a value of -1. After a bit over 10 iterations (if gamma is close to 1) it will converge because the states further away will get a value less than -1.
+"
+"['feedforward-neural-networks', 'stochastic-gradient-descent', 'universal-approximation-theorems']"," Title: Issue with graphical interpretation of the universal approximation theoremBody: This article attempts to provide a graphical justification of the universal approximation theorem.
+It succeeds in showing that a linear combination of two sigmoids can produce essentially a bounded constant function or step function, and can therefore, to a reasonable degree of approximation, produce any function by splitting that function up into a cluster (linear combination?) of these towers or steps.
+However, he produced the steps and towers using specific weight parametrizations.
+Since when are we allowed to specify weights and biases? Isn't this all out of our hands and in the hands of cost function minimization?
+I don't understand why he was dealing with setting weights to this, biases to that, when in my experience that is all done by "the machine" to minimize the cost function. I doubt the weights that minimize the cost function are arranged in the ways specified in order to form the towers and steps that were formed in this tutorial, so I kind of don't understand what all the hubbub is about.
+"
+"['convolutional-neural-networks', 'audio-processing']"," Title: I want to determine how similar a given song is to Queen's songs. Am I headed in the right direction?Body: I've asked this question before (@ Reddit) and people suggested CNNs on a mel spectrogram more than anything else. This is great.
+But I'm sort of stuck at: label some music data as "Queen" and "not Queen" and have this be the training set. Like, download 300 songs, 70 Queen (that's all they have) and 230 not Queen, and create their mel spectrograms using some Python package that can do that.
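+(For concreteness, the spectrogram step I have in mind is something like the following, assuming a package such as librosa; the file name is made up:)
+import librosa
+import numpy as np
+
+y, sr = librosa.load("some_song.mp3")             # audio samples and sample rate
+mel = librosa.feature.melspectrogram(y=y, sr=sr)  # mel-scaled spectrogram
+mel_db = librosa.power_to_db(mel, ref=np.max)     # log scale, which a CNN handles better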
+First of all, is 300 songs even enough?
+I only have a basic understanding of what I'm doing, and I need some help.
+"
+"['machine-learning', 'papers', 'topic-model', 'latent-dirichlet-allocation']"," Title: Why does $E_q[\log p(\mathbf{w}|\mathbf{z},\beta)]=\sum_{n=1}^{N}\sum_{i=1}^{k}\sum_{j=1}^{V}\phi_{ni}w_n^j\log \beta_{ij}$ hold in LDA?Body: I'm having trouble understanding an equality that comes up in the original LDA paper by Blei et al.:
+Consider the classical LDA model, i.e. for every document $\textbf{w}=(w_1,\ldots,w_N)$ in a text corpus $\mathcal{D}=\{w_1,\ldots,w_M\}$ assume that the document is created as follows$^{\dagger}$:
+
+- Choose $N\sim \text{Poisson}(\xi)$.
+
+- Choose $\theta\sim\text{Dir}(\alpha)$.
+
+- For each of the $N$ words $w_n$:
+(a) Choose a topic $z_n\sim \text{Multinomial}(\theta)$.
+(b) Choose a word $w_n$ from $p(w_n|z_n,\beta)$, a multinomial probability conditioned on the topic $z_n$.
+
+
+In order to do inference for LDA, the authors use a variational approach with the following graphical model where $\gamma$ and $\phi$ are Dirichlet and multinomial parameters, respectively:
+
+Let us write $p$ for the original LDA distribution and $q$ for the variational one. The equality I don't understand is given in the appendix$^{*}$ of the paper and states that:
+$$E_q[\log p(\mathbf{w}|\mathbf{z},\beta)]=\sum_{n=1}^{N}\sum_{i=1}^{k}\sum_{j=1}^{V}\phi_{ni}w_n^j\log \beta_{ij}.$$
+My work so far:
+We can write the RHS as
+$$\sum_{n=1}^{N}\sum_{i=1}^{k}\sum_{j=1}^{V}\phi_{ni}w_n^j\log \beta_{ij}=\sum_{n=1}^{N}\sum_{i=1}^{k}\sum_{j=1}^{V}q(z_n=i)\cdot w_n^j\cdot \log p(w_n^j=1|z_n=i)$$
+and the LHS via exchangeability and de Finetti's theorem as
+$$E_q[\log p(\mathbf{w}|\mathbf{z},\beta)]=\sum_{n=1}^{N}E_q[\log p(w_n|z_n,\beta)].$$
+I now want to obtain the double sum from the right-hand side on the LHS as well. This looks like some sort of expected value with respect to $q$, of a discrete random variable that only depends on the values of $z_n$, conditional on $w_n$ (seemingly, as if $w_n$ was fixed) but the R.V. that we do have on the left-hand side is $\log p(w_n|z_n,\beta)$ which depends on both the values of $z_n$ and $w_n$, both not fixed. How do I continue?
+
+$^{\dagger}$ Also assume that both $M$, the number of documents, and $k$, the dimensionality of the Dirichlet, are known and fixed. Furthermore, let $V$ denote the number of possible words (the "size of the vocabulary") and write every word as a $\{0,1\}^V$ vector with zeros everywhere except for the index of the word in the vocabulary list. Finally, let $\beta$ be a $k\times V$ matrix with $\beta_{ij}=p(w^j=1|z^i=1)$ and assume that both words and documents are exchangeable.
+$^*$ Eq. 15, A.3
+"
+"['comparison', 'terminology', 'papers', 'unsupervised-learning', 'transfer-learning']"," Title: What is the relation between self-taught learning and transfer learning?Body: I am new to transfer learning and I start by reading A Survey on Transfer Learning, and it stated the following:
+
+according to different situations of labeled and unlabeled data in the source domain, we can
+further categorize the inductive transfer learning setting into two cases:
+case $(a)$ (It is irrelevant to my question).
+case $(b): $ No labeled data in the source domain are
+available. In this case, the inductive transfer
+learning setting is similar to the self-taught
+learning setting, which is first proposed by Raina
+et al. [22]. In the self-taught learning setting, the
+label spaces between the source and target
+domains may be different, which implies the
+side information of the source domain cannot be
+used directly. Thus, it’s similar to the inductive
+transfer learning setting where the labeled data
+in the source domain are unavailable.
+
+From that, I understand that self-taught learning is inductive transfer learning.
+But I opened the paper of self-taught learning that was mentioned (i.e., the paper by Raina et al. [22]), and it stated the following in the introduction:
+
+Because
+self-taught learning places significantly fewer restrictions on
+the type of unlabeled data, in many practical applications
+(such as image, audio or text classification) it is much easier
+to apply than typical semi-supervised learning or transfer learning
+methods.
+
+And here it looks like transfer learning is different from self-taught learning.
+So what is the right relation between them?
+"
+"['reinforcement-learning', 'q-learning', 'markov-decision-process', 'value-functions']"," Title: Given a sequence of states followed by the agent, is it possible to find the Q-value for a state-action pair not in this sequence?Body: Assume you are given a sequence of states followed by the agent, generated by a random policy, $[s_0, s_1, s_2, \dots, s_n]$. Furthermore, assume the MDP is fully observable and time is discrete.
+Is it possible to find the Q-value for a state-action pair $(s_j, a_j)$ which was not encountered along this sequence?
+From my understanding of the MDP, yes, it would be possible. However, I'm unsure how to get this Q-value.
+"
+"['deep-learning', 'convolutional-neural-networks', 'datasets', 'overfitting', 'residual-networks']"," Title: Can residual connections be beneficial when we have a small training dataset?Body: I have a classification problem, for which an inadequate amount of training data is available. Also, there is no known practical data augmentation approach for this problem (as no unlabelled data is available either), but I am working on it.
+As we know, deep neural networks require a large amount of data for training, especially when a deep architecture with many layers is used. Using these complex architectures with less data can easily lead to over-fitting. Residual connections can shortcut some blocks or layers, which can result in simpler models, while we have the benefit of complex structures.
+Can residual connections be beneficial when we have a small training dataset?
+"
+"['reinforcement-learning', 'python', 'implementation', 'bellman-operators']"," Title: Off-policy Bellman Operators: Writing Operator and Weight Update Function for a 2-State SystemBody: I am studying for RL on my own and was trying to solve this question I came across.
+
+- Write an operator function $T(w, \pi, \mu, l, g)$ that takes weights $w$, a target policy $\pi$, a behaviour policy $\mu$, a trace parameter $l$, and a discount $g$, and outputs an off-policy-corrected lambda-return. For this question, implement the standard importance-weighted per-decision lambda-return. There will only be two actions, with the same policy in each state,
+so we can define $\pi$ to be a number which is the target probability of selecting action a in any state (s.t. $1 - \pi$ is the probability of selecting $b$), and similarly for the behaviour $\mu$.
+
+- Write an expected weight update, that uses the operator function $T$ and a value function $v$ to compute the expected weight update. The expectation should take into account the probabilities of actions in the future, as well as the steady-state (=long-term) probability of being in a state. The step size of the update should be $\alpha=0.1$.
+
+
+Here is what my solution looks like (I am a total beginner in RL, and in addition to studying Rich's book, I was trying to solve the basic intro course assignments as well to help understand the topic in detail).
+x1 = np.array([1., 1.])
+x2 = np.array([2., 1.])
+
+def v(w, x):
+ return x.T*w
+
+def T(w, pi, mu, l, g):
+ states = [0, 1]
+ n_states = len(states)
+ #initial_dist = np.array([[1.0, 0.0]])
+ transition_matrix = np.array([[pi, 1-pi],
+ [pi, 1-pi]])
+
+ if pi <= mu: # thresholding to select the state
+ val = v(w, x1)
+ else:
+ val = v(w, x2)
+ pi = 1 - pi
+
+ l_power = np.power(l, n_states - 1)
+ lambda_corrected = l_power * val
+ lambda_corrected *= 1 - l
+
+ return lambda_corrected - val
+
+def expected_update(w, pi, mu, l, g, lr):
+ delta = T(w, pi, mu, l, g)
+
+ w += lr * delta
+ return w
+
+The state diagram looks like this: there are two states, $s_0$ and $s_1$. All rewards are $0$, and the state features $x_0 = x(s_0)$ and $x_1 = x(s_1)$ for the two states are given as x1 and x2 in the code ([1., 1.], [2., 1.]). There are only two actions in each state, $a$ and $b$. Action $a$ always transitions to state $s_0$ (i.e. from $s_1$ or from $s_0$ itself) and action $b$ always transitions to state $s_1$ (i.e. from $s_0$ or $s_1$ itself):
+This is what the caller portion of the code looks like.
+def caller(w, pi, mu, l, g):
+ ws = [w]
+ for _ in range(100):
+ w = w + expected_update(w, pi, mu, l, g, lr=0.1)
+ ws.append(w)
+ return np.array(ws)
+
+mu = 0.2 # behaviour
+g = 0.99 # discount
+
+lambdas = np.array([0, 0.8, 0.9, 0.95, 1.])
+pis = np.array([0., 0.1, 0.2, 0.5, 1.])
+
+I would appreciate any help.
+
+Edit:
+I tried implementing the T() following the Bellman backup operator, but I am still not sure if I did this right or not.
+return pi * g*v(w, x1) + (1-pi) * g*v(w, x2)
+
+"
+"['autoencoders', 'multilayer-perceptrons', 'dense-layers', 'deep-belief-network', 'forward-pass']"," Title: What is the difference between the forward pass of the Multi-Layer Perceptron, Deep AutoEncoder and Deep Belief Network?Body: Multi-Layer Perceptron (MLP), Deep AutoEncoder (DAE), and Deep Belief Network (DBN) are trained differently.
+However, do they follow the same process during the inference phase, i.e., do they calculate a weighted sum, then apply a non-linear activation function, for each layer until the last layer, or is there any difference? Moreover, are they only composed of fully connected layers?
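+(By "the same process" I mean something like this minimal numpy sketch of my own:)
+import numpy as np
+
+def forward(x, weights, biases, activation=np.tanh):
+    # one weighted sum followed by a non-linearity, per layer, until the output
+    for W, b in zip(weights, biases):
+        x = activation(W @ x + b)
+    return x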
+"
+"['reinforcement-learning', 'monte-carlo-tree-search', 'value-based-methods']"," Title: What is the advantage of using MCTS with value based methods over value based methods only?Body: I have been trying to understand why MCTS is very important to the performance of RL agents, and the best description I found was from the paper Bootstrapping from Game Tree Search stating:
+
+Deterministic, two-player games such as chess provide an ideal
+test-bed for search bootstrapping. The intricate tactics require a
+significant level of search to provide an accurate position
+evaluation; learning without search has produced little success in
+these domains.
+
+However, I don't understand why this is the case, and why value-based methods are unable to achieve similar performance.
+So my question would be:
+
+- What are the main advantages of incorporating search based algorithms with value based methods?
+
+"
+"['monte-carlo-tree-search', 'monte-carlo-methods', 'upper-confidence-bound']"," Title: In MCTS, what to do if I do not want to simulate till the end of the game?Body: I'm trying to implement MCTS with UCT for a board game and I'm kinda stuck. The state space is quite large (3e15), and I'd like to compute a good move in less than 2 seconds. I already have MCTS implemented in Java from here, and I noticed that it takes a long time to actually reach a terminal node in the simulation phase.
+So, would it be possible to simulate games up until a specific depth?
+Instead of returning the winner of the game after running until the max depth, I could return an evaluation of the board (the board game is simple enough to write an evaluation function), which then back propagates.
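+Roughly, the depth-limited simulation I have in mind looks like this (my own pseudo-Python sketch, not code from the linked implementation; the game-specific helpers are passed in as functions):
+def simulate(state, max_depth, is_terminal, random_move, evaluate):
+    # roll out random moves, but stop after max_depth plies
+    depth = 0
+    while not is_terminal(state) and depth < max_depth:
+        state = random_move(state)
+        depth += 1
+    # instead of a win/loss outcome, return a heuristic evaluation of the board
+    return evaluate(state)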
+The issue I'm having is in handling the backpropagation. I'm not quite sure what to do here. Any help/resources/guidance is appreciated!
+"
+"['neural-networks', 'deep-learning', 'backpropagation', 'incremental-learning']"," Title: How to improve a trained model over time (i.e. with more predictions)?Body: I built a model using the tutorial on the TensorFlow site. It was a simple image classification neural network. I trained it and saved the model and weights together on a .h5
file.
+Recently, I have been reading about backpropagation. From what I understand, it's basically a way to tell the neural network whether it has identified the correct output, and it is applied only during training.
+So, I was wondering if there is a way for the model to 'improve' over time as it makes more and more predictions. Or is that not how it would work with Neural Networks?
+"
+"['machine-learning', 'testing']"," Title: What are the differences in testing between traditional software and artificial intelligence?Body: The testing problem in traditional software has been fully explored over the last decades, but it seems that testing in artificial intelligence/machine learning has not (see this question and this one).
+What are the differences between the two?
+"
+"['bayesian-networks', 'bayesian-inference']"," Title: What Constitutes Messages in Junction Tree Algorithm?Body: I'm currently studying the Junction Tree Algorithm: I'm referring to the process of transforming a Bayesian Network into a Junction Tree in order to apply inference. I understand how you build the Junction Tree, but I'm stuck on the idea of message passing.
+What exactly are these messages? Are they numbers, or vectors?
+If any of you could direct me to a numerical example, that would be very much appreciated.
+"
+"['deep-learning', 'papers', 'variance', 'entropy']"," Title: How does high entropy targets relate to less variance of the gradient between training cases?Body: I've been trying to understand the Distilling the Knowledge in a Neural Network paper by Hinton et al. But I cannot fully understand this:
+
+When the soft targets have high entropy, they provide much more information per training case than hard targets and much less variance in the gradient between training cases [...]
+
+The information part is very clear, but how does high entropy correlate to less variance between training cases?
+"
+"['reinforcement-learning', 'reference-request', 'policy-gradients', 'proximal-policy-optimization', 'action-spaces']"," Title: Is there any research on the application of policy gradients to problems where the selection of an action requires the selection of another one?Body: I am working on a problem and want to explore if it can be solved with PPO (or other policy gradient methods). The problem is that the action space is a bit special, compared to classic RL environments.
+At each time $t$, we choose between 4 actions: $a_1\in \{0, 1, 2, 3\}$,
+but given $a_1 = 0 \text{ or } 2$, we need to choose three more actions: $a_2, a_3, a_4$ (all three of which can be chosen from categorical distributions).
+I know I can design this kind of policy myself and re-work the entropy terms, and so on for PPO.
+My question is: is there any research into this kind of RL?
+I am having a hard time finding anyone working with problems in which the actions chosen are dependent on others chosen at the same time. I have looked into Hierarchical RL, but the papers I have found have not worked with this particular kind of problem.
+If these action spaces were small (though $a_2, a_3$ are chosen from categorical distributions with $\sim$800 different options), one solution would be to roll it out into one big policy where each possible combination of actions is represented by one choice in the policy. But my concern with doing this for a bigger action space is that the choice of $a_1 = 1, 3$, where we don't choose the other separate actions, will get lost in the policy.
+"
+"['machine-learning', 'deep-learning', 'recurrent-neural-networks', 'long-short-term-memory', 'attention']"," Title: Do RNNs/LSTMs really need to be sequential?Body: There are many articles comparing RNNs/LSTMs and the Attention mechanism. One of the disadvantages of RNNs that is often mentioned is that while Attention can be computed in parallel, RNNs are highly sequential. That is, the computation of the next tokens depends on the result of previous tokens, thus, RNNs are losing to Attention in terms of speed.
+Even though I fully agree that RNNs are sequential as stated above, I think they are still parallelizable by splitting the mini-batch into sub-batches, each of which is processed independently by a dedicated thread. For example, a training batch of size 32 can be split into 4 sub-batches of size 8; 4 threads process the 4 sub-batches independently. That way, RNNs/LSTMs are parallelizable and this is not a disadvantage compared to Attention.
+Is my thought correct?
+"
+"['reinforcement-learning', 'markov-decision-process', 'discount-factor']"," Title: Why do we discount the state distribution?Body: In Reinforcement Learning, it is common to use a discount factor $\gamma$ to give less importance to future rewards when calculating the returns.
+I have also seen mention of discounted state distributions. It is mentioned on page 199 of the Sutton and Barto textbook that if there is discounting then (for the state distribution) it should be treated as a form of termination, and it is implied that this can be achieved by adding a factor of $\gamma$ to the state transition dynamics of the MDP, so that now we have
+$$\mu(s) = \frac{\eta(s)}{\sum_{s'} \eta(s')}\;;$$
+where $\eta(s) = h(s) + \sum_{\bar{s}} \eta(\bar{s})\sum_a \pi(a|\bar{s}) \gamma p(s|\bar{s}, a)$ and $h(s)$ is the probability of the episode beginning in state $s$.
+In my opinion, the book kind of skips over this and it is not immediately clear to me why we need to discount our state distribution if we have discounting in the episode.
+My intuition would suggest that it is because we usually take an expectation of the returns over the state distribution (and action/transition dynamics), but, if we are discounting the (future) rewards, then we should also discount the future states to give them less importance. In Sergey Levine's lectures he provides a brief aside that I think agrees with my intuition but in a rather unsatisfactory way -- he introduces the idea of a 'death state' that we transition into at each step with probability $1-\gamma$ but he does not really provide a rigorous enough justification for thinking of it this way (unless it is just a useful mental model and not supposed to be rigorous).
+I am wondering whether someone can provide a more detailed explanation as to why we discount the state distribution.
+"
+"['machine-learning', 'overfitting', 'regularization', 'cross-validation', 'capacity']"," Title: Does adding a model complexity penalty to the loss function allow you to skip cross-validation?Body: It's my understanding that selecting for small models, i.e. having a multi-objective function where you're optimizing for both model accuracy and simplicity, automatically takes care of the danger of overfitting the data.
+Do I have this right?
+It would be very convenient for my use case to be able to skip lengthy cross-validation procedures.
+"
+['pytorch']," Title: Pytorch - Evaluation loss on the training set higher than loss during trainingBody: def forward(self, image, proj, proj_inv):
+ return self.predict_2d_joint_locations(image, proj, proj_inv)
+
+ def criterion(self, predicted, gt):
+ return self.mse(predicted, gt)
+
+ def training_step(self, batch, batch_idx):
+ player_images, j2d, j3d, proj, proj_inv, is_synth = batch
+ predicted_2d_joint_locations = self.predict_2d_joint_locations(player_images, proj, proj_inv)
+ train_loss = self.criterion(predicted_2d_joint_locations, j2d)
+ self.log('train_loss', train_loss)
+ return train_loss
+
+ def validation_step(self, batch, batch_idx):
+ player_images, j2d, j3d, proj, proj_inv, is_synth = batch
+ predicted_2d_joint_locations = self.predict_2d_joint_locations(player_images, proj, proj_inv)
+ val_loss = self.criterion(predicted_2d_joint_locations, j2d)
+ self.log('val_loss', val_loss)
+ return val_loss
+
+I have this simple code for training_step() and forward() in PyTorch. Both functions do essentially the same thing.
+Owing to a relatively small dataset, my model grossly overfits on the training data (as is evident from the orders-of-magnitude difference between the training and validation losses). But that's fine for now; I am perfectly aware of that and will add more data soon.
+What surprises me is when I try to evaluate (infer). I don't have a separate test set (for now) and only have a training and a validation set. When I evaluate on the validation set, the mean squared error turns out to be in the same range as the validation loss from training, as expected. However, when I evaluate on the training set, the mean squared error I get is again in the same range as the validation loss (not the training loss).
+ if args.val:
+ check_dl = dataset.val_dataloader()
+ else:
+ check_dl = dataset.train_dataloader()
+
+ for player_images,j2d,j3d,proj,proj_inv,is_synth in check_dl:
+ if args.visualize:
+ # visualize dataset
+ player_images = player_images.cpu().numpy()
+ j2d_predicted = model(torch.from_numpy(player_images), proj, proj_inv).cpu().detach().numpy()
+ print(((j2d - j2d_predicted) ** 2).mean(), model.training_step((torch.from_numpy(player_images),j2d,j3d,proj,proj_inv,is_synth), 0))
+
+When I print print(((j2d - j2d_predicted) ** 2).mean()) for images in the training set after fetching the model from the trained checkpoint, I get numbers in the range of the validation loss. I retried the same by printing the loss using the training_step() function, but I again receive high losses (in the validation loss range).
+Note: The inference mean squared errors I receive on the training set are high but they are not as high as when the training actually started. So, the pre-trained model is fetched properly. On a model with completely random weights, I should have received orders of magnitudes of higher errors. So, the model is definitely fetched correctly.
+I have been scratching my head over this. Any help would be really appreciated.
+"
+"['neural-networks', 'keras', 'regression']"," Title: Can predictions of a neural network using ReLU activation be non-linear (i.e. follow the pattern) outside of the scope of trained data?Body: Training on a quadratic function
+x = np.linspace(-10, 10, num=1000)
+np.random.shuffle(x)
+y = x**2
+
+Will predict an expected quadratic curve between -10 < x < 10.
+
+Unfortunately my model's predictions become linear outside of the trained dataset.
+See -100 < x < 100 below:
+
+Here is how I define my model:
+model = keras.Sequential([
+ layers.Dense(64, activation='relu'),
+ layers.Dense(64, activation='relu'),
+ layers.Dense(1)
+ ])
+
+model.compile(loss='mean_absolute_error', optimizer=tf.keras.optimizers.Adam(0.1))
+
+history = model.fit(
+ x, y,
+ validation_split=0.2,
+ verbose=0, epochs=100)
+
+Here's a link to a google colab for more context.
+"
+"['reinforcement-learning', 'terminology', 'markov-decision-process', 'state-spaces', 'markov-property']"," Title: What is the difference between environment states and agent states in terms of Markov property?Body: I'm going through the David Silver RL course on YouTube. He talks about environment internal state $S^e_t$, and agent internal state $S^a_t$.
+We know that state $s$ is Markov if
+$$\mathbb{P}\{S_t=s|S_{t-1}=s_{t-1},...,S_1=s_1\}=\mathbb{P}\{S_t=s|S_{t-1}=s_{t-1}\}.$$
+When we say that Decision Process is Markov Decision Process, does that mean:
+
+- All environment states must be Markov states
+- All agent states must be Markov states
+- Both (All environment states and all agent states must be Markov states)
+
+and according to this, if we specify corresponding MDP as $(\mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R}, \gamma, T)$, is $\mathcal{S}$ the state space of environment states or agent states?
+Why I'm confused by this? He claims that environment states are Markov (I'm also confused why, but I'll make another post for this), and then claims that if the agent can directly see environment internal state $S^e_t$, then observations $O_t=S^e_t$, and agent constructs its state trivially as $S^a_t=O_t=S^e_t$. Now, both environment and agent states are Markov (since they are the same), so this makes sense. If we specify MDP as $(\mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R}, \gamma, T)$, it's clear that state space $\mathcal{S}$ is state space of both agent internal states and environment internal states (again, since they are the same).
+Now consider the case when the environment is not fully observable. Now $O_t\ne S^e_t$, and agent must construct it's state $S^a_{t}=f(S^a_{t-1}, H_t)$, where $H_t=(O_0, A_0, R_1, O_1,...,O_{t-1}, A_{t-1}, R_t, O_t)$ is history until time step $t$, and $f$ is some function (such as Recurrent Neural Network for example). In the case of $f$ being a recurrent neural network, we have that both environment internal states are Markov (by this hypothesis), and agent internal states are Markov (approximately), so again, the process is an MDP $(\mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R}, \gamma, T)$, but state space of agent states is different to that of environment states, so I'm confused about what is $\mathcal{S}$ here. Is it environment state-space or agent state space?
+Lastly, what if $f(S^a_t, H_t)=O_t$? That is, the agent's internal state is simply the last observation. Considering that environment states are always Markovian (again, don't know why we can claim this), but agent states are not, this is the case of POMDP. Even here I don't know what $\mathcal{S}$ stands for in specification of POMDP. Is it environment state-space or action state space?
+"
+"['reinforcement-learning', 'model-based-methods']"," Title: How does a model based agent learn the model?Body: I want to build model-based RL. I am wondering about the process of building the model.
+If I already have data, from real experience:
+
+- $S_1, a \rightarrow R,S_2$
+- $S_2, a \rightarrow R,S_3$
+
+Can I use this information to build a model-based RL agent? Or is it necessary that the agent directly interacts with the environment (I mean, should the same above-mentioned data be collected by the agent itself)?
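+(What I picture is something like building a tabular model directly from such logged tuples; a rough sketch of my own, with made-up state names:)
+transitions = [("s1", "a", 1.0, "s2"), ("s2", "a", 1.0, "s3")]  # (state, action, reward, next_state)
+model = {}
+for s, a, r, s_next in transitions:
+    model[(s, a)] = (r, s_next)  # a deterministic, tabular model learned purely from logged data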
+"
+"['neural-networks', 'machine-learning', 'convolutional-neural-networks', 'training']"," Title: How to interpret this learning curve of my neural network?Body: How to interpret the following learning curves?
+Background: The accuracy starts at 50%, because the network has a binary output (0 or 1). I chose an exponentially decreasing learning rate for the optimizer - I believe that this is the reason why the network starts learning after 10 epochs or so.
+lr_schedule = keras.optimizers.schedules.ExponentialDecay(**h_p["optimizer"]["adam"])
+optimizer = keras.optimizers.Adam(learning_rate=lr_schedule)
+h_p["Compile"]["optimizer"] = optimizer
+
+
+"
+['recurrent-neural-networks']," Title: What does 'clock rate' mean in the context of recurrent neural networks (RNNs)?Body: I have often encountered the term 'clock rate' when reading literature on recurrent neural networks (RNNs). For example, see this paper. However, I cannot find any explanations for what this means. What does 'clock rate' mean in this context?
+"
+"['variance', 'unsupervised-learning']"," Title: Is there a bias-variance equivalent in unsupervised learning?Body: In supervised learning, bias, variance are pretty easy to calculate with labeled data. I was wondering if there's something equivalent in unsupervised learning, or like a way to estimate such things?
+If not, how do we calculate loss functions in unsupervised learning?
+"
+"['neural-networks', 'convolutional-neural-networks', 'tensorflow', 'python', 'regression']"," Title: Is it possible to use RGB image with decimal values when feeding training data to CNN?Body: I am working with four grayscale images of float32 data type to perform regression using Keras. Three images are stacked using np.dstack
to form a RGB data-set. The last grayscale image is used as label. The grayscale images contains different variations, including [0 , 790.65], [ 150.87 , 260.45], [ -2.74174 , 2.4126 ], [-32.927 , 69.333].
+If I convert the images to uint8, the maximum values for the first and second image will be 255, and the decimal values for all images will be lost. I am struggling to find a solution for using the images in their original data type (float32) when I try to use ImageDataGenerator and flow_from_directory. Can anyone suggest a way to do that?
+"
+"['reinforcement-learning', 'definitions', 'markov-decision-process', 'markov-chain', 'ergodicity']"," Title: What is ergodicity in a Markov Decision Process (MDP)?Body: I have read about the concept of ergodicity on the safe RL paper by Moldovan (section 3.2) and the RL book by Sutton (chapter 10.3, 2nd paragraph).
+The first one says that "a belief over MDPs is ergodic if and only if any state is reachable from any other state via some policy or, equivalently, if and only if":
+$$\forall s, s', \exists \pi_r \text{ such that } E_\beta E_{s, \pi_r}^P [B_{s'}] = 1$$
+where:
+
+- $B_{s'}$ is an indicator random variable of the event that the system reaches state $s'$ at least once, i.e., $B_{s'} = 1 \{ \exists t < \infty \text{ such that } s_t = s'\}$
+- $E_\beta E_{s, \pi_r}^P[B_{s'}]$ is the expected value for $B_{s'}$, under the belief over the MDP dynamics $\beta$, policy $\pi$ and transition measure $P$.
+
+The second one says "$\mu_\pi$ is the steady-state distribution, which is assumed to exist for any $\pi$ and to be independent of $s_0$. This assumption about the MDP is known as ergodicity.". They define $\mu_\pi$ as:
+$$\mu_\pi(s) \doteq \lim_{t \to \infty} \Pr\{s_t=s \vert a_{0:t-1} \sim \pi\}$$
+
+- i.e., there is a chance of landing on state $s$ by executing actions according to policy $\pi$.
+
+I noticed that the first definition requires that at least one policy should exist for each $(s, s')$ pair for the MDP to be ergodic. The second definition, however, requires that all policies eventually visit all the states in an MDP, which seems to be a stricter definition.
+Then, I came across the ergodicity definition for Markov chains:
+
+A state $i$ is said to be ergodic if it is aperiodic and positive recurrent. In other words, a state $i$ is ergodic if it is recurrent, has a period of $1$, and has finite mean recurrence time. If all states in an irreducible Markov chain are ergodic, then the chain is said to be ergodic.
+
+This leads me to believe that the second definition (the stricter one) is the most appropriate one, considering the ergodicity definition in an MDP derives from the definition in a Markov chain. As an MDP is basically a Markov chain with choice (actions), ergodicity should mean that independently of the action taken, all states are visited, i.e., all policies ensure ergodicity.
+Am I correct in assuming these are different definitions? Can both still be called "ergodicity"? If not, which one is the most correct?
+"
+"['reinforcement-learning', 'optimization', 'genetic-algorithms', 'algorithm-request', 'fitness-functions']"," Title: Is it possible to optimize a multi-variable function with a reinforcement learning method?Body: I want to use RL instead of genetic or any other evolutionary algorithm in order to find the best parameter for a function.
+Here is the problem:
+Given a function $$f(x,y,z, \text{data}),$$
+where $x$, $y$ and $z$ are some integers from 1 to 50.
+So I can say I have a 3-dimensional array which is a way to save fitness values:
+$$\text{parameters} = [[1..50], [1..50], [1..50]]$$
+The $\text{data}$ is another input, on which $f$ needs to do some calculation.
+Currently, I am optimizing it using a genetic algorithm with $$\text{cost}(\text{fitness}) = f(x,y,z,data)$$ which is a customized cost function.
+Any value for $x$, $y$, and $z$ will result in a cost for example:
+$$f(1, 5, 8, X) = 15$$
+$$\text{parameters}: [1, 5, 8] = 15$$
+or
+$$ \text{parameters}: [2, 9, 11] = 30$$
+In the provided example, 2, 9, and 11 form the better set of parameters.
+So I run a genetic algorithm, generate some children (each a sequence of $x$, $y$, and $z$), calculate their cost (fitness), perform selection, and so on.
+I want to know whether there is any alternative method in reinforcement learning that I can use instead of a genetic algorithm. If yes, please provide the name or any helpful link.
+Note that $f$ is completely defined by the user and may change in other contexts.
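+To make the setup concrete, here is a minimal sketch of how I currently evaluate and select candidate parameter triples (Python; f and data below are placeholders for my real function and input):
+import random
+
+def f(x, y, z, data):
+    # placeholder objective; my real f does some calculation on data and returns a fitness
+    return -(abs(x - 2) + abs(y - 9) + abs(z - 11)) + 0 * len(data)
+
+data = [1, 2, 3]  # placeholder input
+population = [tuple(random.randint(1, 50) for _ in range(3)) for _ in range(20)]
+fitness = {p: f(*p, data) for p in population}
+parents = sorted(population, key=fitness.get, reverse=True)[:5]  # keep the fittest for crossover/mutation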
+"
+"['convolutional-neural-networks', 'recurrent-neural-networks', 'long-short-term-memory', 'terminology', 'dense-layers']"," Title: Must all CNNs and RNNs not have a fully connected layer in order to be considered as such?Body: In the paper Wrist-worn blood pressure tracking in healthy free-living individuals using neural networks, the authors talk about a combination of feed-forward and recurrent layers, as if FC layers cannot be part of the RNN.
+So, must all Convolutional Neural Networks and Recurrent Neural Networks not have a fully connected layer in order to be considered CNNs and RNNs, respectively? If yes, should we consider CNNs and RNNs with an FC layer "hybrid models"?
+"
+"['reinforcement-learning', 'dqn', 'math', 'function-approximation', 'bellman-equations']"," Title: Why the optimal Bellman operator of a Q-function can be approximated by a single pointBody: I am currently studying reinforcement learning, especially DQN.
+In DQN, learning proceeds in such a way as to minimize the norm (least-squares, Huber, etc.) of the optimal Bellman equation and the approximate Q-function as follows (roughly):
+$$
+\min\|B^*Q^*-\hat{Q}\|.
+$$
+Here $\hat{Q}$ is an estimator of Q function, $Q^*$ is the optimal Q function, and $B^*$ is the optimal Bellman operator.
+$$
+B^*Q^*(s,a)=\sum_{s'}p_T(s'|s,a)[r(s,a,s')+\gamma \max_{a'}Q^*(s',a')],
+$$
+where $p_T$ is a transition probability, $r$ is an immediate reward, and $\gamma$ is a discount factor.
+As I understand it, in the DQN algorithm, the optimal Bellman equation is approximated by a single point, and the optimal Q function $Q^*$ is further approximated by an estimator different from $\hat{Q}$, say $\tilde{Q}$.
+\begin{equation}\label{question}
+B^*Q^*(s,a)\approx r(s,a,s')+\gamma\max_{a'}Q^*(s',a')\approx r(s,a,s')+\gamma\max_{a'}\tilde{Q}(s',a'),\tag{*}
+\end{equation}
+therefore the problem becomes as follows:
+$$
+\min\|r(s,a,s')+\gamma\max_{a'}\tilde{Q}(s',a')-\hat{Q}(s,a)\|.
+$$
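+For concreteness, this is roughly how I understand the single-sample target is computed in DQN implementations (a minimal PyTorch-style sketch; q_target denotes the target network and is a placeholder name):
+import torch
+
+def td_target(reward, next_state, done, q_target, gamma=0.99):
+    # single-sample approximation of the Bellman backup:
+    # target = r + gamma * max_a' Q(s', a'), with bootstrapping cut off at terminal states
+    with torch.no_grad():
+        max_next_q = q_target(next_state).max(dim=1).values
+    return reward + gamma * (1.0 - done) * max_next_q
+
+# the loss then compares this target with the online network's estimate of Q(s, a)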
+What I want to ask:
+I would like to know the mathematical or theoretical background of the approximation in \eqref{question}, especially why the first approximation is possible. It looks like a very rough approximation. Can the right-hand side be defined as an "approximate Bellman equation"? I have looked at various pieces of literature and online resources, but none of them gives an exact derivation, so I would be very grateful if you could point me to references as well.
+"
+"['deep-rl', 'objective-functions', 'math', 'policy-gradients', 'policies']"," Title: Is the policy gradient expression in Fundamentals of Deep Learning wrong?Body: I don't understand the policy gradient as explained in Chapter-9 (Deep Reinforcement Learning) of the book Fundamentals of deep learning.
+Here is the whole paragraph:
+Policy Learning via Policy Gradients
+
+In typical supervised learning, we can use stochastic gradient descent to update our parameters to minimize the loss computed from our network's output and the true label. We are optimizing the expression:
+$$
+\arg \min _{\theta} \Sigma_{i} \log p\left(y_{i} \mid x_{i} ; \theta\right)
+$$
+In reinforcement learning, we don't have a true label, only reward signals. However, we can still use SGD to optimize our weights using something called policy gradients. We can use the actions the agent takes, and the returns associated with those actions, to encourage our model weights to take good actions that lead to high reward, and to avoid bad ones that lead to low reward. The expression we optimize for is:
+$$
+\arg \min _{\theta}-\sum_{i} R_{i} \log p\left(y_{i} \mid x_{i} ; \theta\right)
+$$
+where $y_{i}$ is the action taken by the agent at time step $t$ and where $R_{i}$ is our discounted future return. In this way, we scale our loss by the value of our return, so if the model chose an action that led to negative return, this would lead to greater loss. Furthermore, if the model is very confident in that bad decision, it would get penalized even more, since we are taking into account the log probability of the model choosing that action. With our loss function defined, we can apply SGD to minimize our loss and learn a good policy.
+
+The first expression about the loss computed in a network already seems false since the log of a probability is always negative, and taking the $θ$ (weights) for which the expression is minimal doesn't seem right because it would favor very unsure answers.
+The same goes for the next expression on the policy gradient. A very negative $R_i$ and a very unsure $p(y_i)$ would both be big negatives and, multiplied together, give a big positive value. Since there is a minus sign in front of the expression, this would be the best configuration for the argmin, meaning we are looking for policy weights that give highly negative rewards and highly unsure actions. This just doesn't make sense to me.
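+For reference, this is how I would translate that expression into code as written (a minimal sketch, where log_probs are the log-probabilities of the actions actually taken and returns are the discounted returns $R_i$):
+import torch
+
+def policy_loss(log_probs: torch.Tensor, returns: torch.Tensor) -> torch.Tensor:
+    # literal reading of the book's expression: minimize -sum_i R_i * log p(y_i | x_i; theta),
+    # i.e. maximize the return-weighted log-likelihood of the chosen actions
+    return -(returns * log_probs).sum()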
+Is it just a sign error (or we could just change to argmax
)? Or is there more to it?
+"
+"['unsupervised-learning', 'data-science', 'clustering']"," Title: Is there a clustering algorithm that can make n clusters and the n+1 ""others"" cluster?Body: As far as I know all clustering algorithms assume that all delivered data points have to find its cluster.
+My question is, is there an algorithm that could focus only on n clusters (number stated by user) and try to dismiss the rest of the points that (according to algorithm) do not belong to n clusters, like in the picture shown below? Where we know that there are for example 2 classes that we need to cluster (red and green) and the rest (blue) we do not need in any cluster and therefore algorithm does not try to assign them to any cluster?
+For example if we would have 1 000 pictures of animals, of which 200 are dogs, 200 are cats and the rest are all other animals known to men and we want to make 1 cluster for cats, 1 for dogs and maybe another for collectively all others that do not match dogs or cats.
+
+"
+"['game-theory', 'combinatorial-games', 'policy-iteration']"," Title: What are some strong algorithms for Perfect Information, Deterministic Multiplayer Games?Body: I have a series of games with the following properties:
+
+- 3 or more players, but purely non-cooperative (i.e., no coalition forming);
+- sequential moves;
+- perfect information;
+- deterministic state transitions and rewards; and
+- game size is large enough to make approximate methods required (e.g. $1000^{120}$ for certain problems).
+
+For example, Chinese Checkers, or, for a more relevant example to my work, a multi-player knapsack problem (where each player, in round-robin fashion, can choose without replacement from a set of items with the goal of maximizing their own knapsack value).
+Question: what policy improvement operators or algorithms a) converge to optimality or b) provide reasonably strong results on these games?
+What have I researched
+
+- In a small enough game (e.g., 3-person Nim with (3,4,5) starting board), full tree search is possible.
+- In a one-person setting, certain exact Dynamic Programming formulations can reduce complexity. For example, in a one-person setting with a small enough knapsack, any standard array-based approach can solve the problem. I'm unsure if or how these "cost-to-achieve" shortest path formulations carry over to multi-player games.
+- In a one-person setting, policy improvement approaches like rollout algorithms and fortified rollout algorithms have the cost improvement property. I'm unsure if this property carries over to multi-player versions.
+- Work has been done (for example, this thesis) to demonstrate that Monte Carlo Tree search strategies can generate powerful policies on the types of games I'm interested in. I believe it's been proven that they converge to Nash Equilibrium in two-player perfect information games, but am not aware of any guarantees regarding multiplayer games.
+- For imperfect information games, Monte Carlo Counterfactual Regret Minimization (e.g. here) is required for any convergence guarantees. Given I am working in a perfect information environment, these seem like overkill.
+
+"
+"['deep-learning', 'convolutional-neural-networks', 'state-of-the-art', 'text-generation', 'model-request']"," Title: What would be the state of the art image captioning deep learning model?Body: I saw a couple of architectures, like CNN-LSTM, with and without attention model, use of Glove vector, self-critical models, etc. I am overwhelmed looking at different notebooks and architectures, came here for a guidance. I am looking to build a personal project on image annotations. Also, if I wanted to use this deep learning model together with TFX pipeline, what would be the best type of architecture I can go with?
+"
+"['deep-learning', 'training', 'data-preprocessing', 'normalisation', 'standardisation']"," Title: Why do you calculate the mean and standard deviation over the complete dataset before training rather than for every batch?Body: In most implementations of neural networks the features are scaled to make the optimization of the loss function as stable as possible.
+Mostly a min-max scaler is used. Alternatively, there is also a standard scaler.
+Why do you calculate the mean and standard deviation offline over the complete dataset before training? Couldn't this be calculated per batch or even per file? What is the disadvantage? Why doesn't anyone do this?
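+To make the question concrete, here is a minimal sketch of the two options I have in mind (NumPy; X stands for the full training matrix and batch for one mini-batch):
+import numpy as np
+
+X = np.random.randn(10000, 13)   # placeholder for the complete training set
+batch = X[:32]                   # placeholder for one mini-batch
+
+# option 1: offline statistics, computed once over the complete dataset
+mu, sigma = X.mean(axis=0), X.std(axis=0)
+batch_scaled_offline = (batch - mu) / sigma
+
+# option 2: statistics recomputed from each batch (or each file) on the fly
+batch_scaled_online = (batch - batch.mean(axis=0)) / batch.std(axis=0)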
+"
+"['neural-networks', 'deep-learning', 'papers', 'deep-neural-networks']"," Title: The MLP output of a neural network can be written as $\|x\|\|w_l\|\cos(\theta_l)$: why is the norm easier to maximize?Body: The MLP output of a neural network is a dot product between the weights and the input and therefore can be written as $\|x\|\|w_l\|\cos(\theta_l)$ (see this for more details), where $x$ is the input, $w_l$ is the weights of layer $l$ and $\theta_l$ is the angle between them.
+I read this in a paper: Angular Visual Hardness. The paper stated that it's much easier to maximize the norms $\|x\|$ and $\|w_l\|$ than the cosine similarity. Why is this the case? Just because $\cos(\theta_l)$ gives less weight because it's bounded between $[-1,1]$? Or is it due to the gradient? So, why is the norm easier to maximize?
+"
+"['gradient-descent', 'fitness-functions']"," Title: Does gradient descent in deep learning assume a smooth fitness landscape?Body: I've come across the concept of fitness landscape before and, in my understanding, a smooth fitness landscape is one where the algorithm can converge on the global optimum through incremental movements or iterations across the landscape.
+My question is: Does deep learning assume that the fitness landscape on which the gradient descent occurs is a smooth one? If so, is it a valid assumption?
+Most of the graphical representations I have seen of gradient descent show a smooth landscape.
+This Wikipedia page describes the fitness landscape.
+"
+"['terminology', 'intelligent-agent', 'norvig-russell', 'utility-based-agents', 'learning-agents']"," Title: What is the difference between a performance standard and performance measure?Body: I am reading AI: A Modern Approach. In the 2nd chapter when introducing different agent types, i.e., reflex, utility-based, goal-based, and learning agents, I understood that all types of agents, except learning agents, receive feedback and choose actions using the performance measure.
+But they do so in different ways. Model-based reflex agents possess an internal state (like a memory), while goal-based agents predict the outcome of actions and choose the one serving the goal. Lastly, utility-based functions measure the 'happiness' of each state using the utility function, which is again an internalization of the performance measure, hence all have similar nature overall.
+The learning agents, however, can be wrapped around the entire structure of the previous agents. The entire agent's architecture is now called a performance element, and the learning agent has an additional learning element, which modifies each component of the agent, so as to bring the components into closer agreement with the available feedback information. But the feedback information in learning agents does not come from the performance measure embedded in the agent's structure, but from a fixed external performance standard, which is part of the critic element.
+For the purpose of illustration, the structure of a utility-based agent and that of a learning agent are presented in the figure:
+
+What boggles my mind is figuring out the actual difference and interaction between performance standard and performance measure, which is perhaps related to those between learning agents and other ones. Here are my thoughts thus far:
+
+- Other agents aim for maximizing the performance measure, causing them to do perfect actions. On the other hand, learning agents have the freedom of doing sub-optimal actions, which allow them to discover better actions on the long run using the performance standard.
+
+- Through the performance standard's feedback (which comes from the critic as shown in the figure), the learning agent can also learn a utility function or reflex component.
+
+
+For providing examples, the book states that giving tip to an automated taxi is considered a performance standard. And also
+
+hard-wired performance standards such as pain and hunger in animals can be understood in this way.
+
+But I am still not sure about the discrepancy and interaction between the performance measure and performance standard. For instance, in the automated taxi, when confronting a road junction, the utility-based agent chooses a path that maximizes its utility function. The learning agent, however, must check different roads and after testing them, it receives feedback from outside so that eventually it would detect the user's preference.
+But what if we wrap a learning agent around a utility-based agent in such a condition? Which has more effect, the utility function from inside, or the performance standard from outside (critic)? If they happen to contradict each other, which one would have the prevalent effect?
+"
+"['reinforcement-learning', 'training', 'deep-rl', 'dqn']"," Title: How to recover the target Q network's weights solely from the snapshots of the primary Q network's weights in DQN?Body: Suppose that I have a DQN agent, which has two neural networks: one is the primary Q network and the other is the target Q network. In every update, the target Q network is updated with a soft update strategy:
+$$Q_{target} = (1-\tau) \times Q_{target} + \tau \times Q_{prime}$$
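+In code, my soft update looks roughly like this (a PyTorch-style sketch; q_target and q_primary are placeholders for my two networks):
+import torch
+
+@torch.no_grad()
+def soft_update(q_target: torch.nn.Module, q_primary: torch.nn.Module, tau: float = 0.005):
+    # target <- (1 - tau) * target + tau * primary, parameter by parameter
+    for p_t, p_p in zip(q_target.parameters(), q_primary.parameters()):
+        p_t.mul_(1.0 - tau).add_(tau * p_p)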
+I saved the primary Q network's weights every $n$ episodes (say $n=10$), but, unfortunately, I did not save the target Q network's weights.
+Say that my training process is aborted for some reason, and now I would like to continue the training using the latest saved weights. I can load the primary Q network's weights, but what about the target Q network's weights? Should I also use the latest primary Q network's weights for the target Q network's weights, or should I use the primary Q network's weights from several episodes ago, or how should it be?
+"
+"['convolutional-neural-networks', 'object-detection', 'feature-extraction']"," Title: Advice required for identifying bone fragments in CT-scans using STL Files (3D image segmentation)Body: I am working on a project related to automating the procedure of manually segmenting some bones in CT scans and hopefully if everything goes alright in this stage, move on to do something more with them - like bone reconstruction etc.
+I have been doing extensive research regarding this - and CNNs were something in my target as a ML method that could be used here. Emphasis is more on using Deep learning for this project.
+So, what I have - the data: CT scans of chest/shoulder and for each of the CT scan, I have 4-6 STL files of the individual bone fragments or segments located in the shoulder or near shoulder region. I am a tad uncertain as to how to use those individual STL files.
+Target: To label/classify/identify these fragments in the CT scan - automate it.
+My MOA (Method of Approach) or what I understand - I believe it is object (bone fragment being the object) detection and feature (of those bone pieces that I need to lock on in the CT-scan) extraction using CNNs. I am looking at Mask R-CNN etc, use a pre-trained CNN for this.
+But I am not entirely sure if my understanding is correct. This is my first time with this stuff, but hoping to learn more. CT-scans are in nifti format.
+I could provide more info if required, would gladly appreciate any insight or help or advice with what could be the way forward and if I am thinking along the correct lines.
+Thank you.
+"
+"['machine-learning', 'papers']"," Title: How many papers about AI / ML were published in the recent years?Body: I am trying to formulate an argument at work saying the disruption in AI/ML is very high and that it is hard to stay "state of the art". I would like to support that hypothesis by numbers.
+Question:
+How many papers were published in 2018-2020 related to AI (or if that is too generic: ML)?
+"
+"['natural-language-processing', 'named-entity-recognition', 'spacy']"," Title: Extracting ""hidden"" costs from financial statements using NLPBody: I'm designing a NLP model to extract various kinds of "hidden" expenses from 10-K and 10-Q financial statements. I've come up with about 7 different expense categories (restructuring costs, merger and acquisitions, etc.) and for each one I have a list of terms/synonyms that different companies call them. I'm new to NLP would like some advice on the best approach for extracting them.
+Values are usually hidden in two different areas of the document:
+Type 1: Free-form text (footnotes)
+Values are nested in sentences. Here are some examples, with the Expense Type and Monetary value indicated.
+
+Exploratory dry-hole costs were \$12.7 million, \$1.3 million, and \$1.0 million for the years ended December 31, 2012, 2011, and 2010, respectively.
+
+
+2012 includes the recognition of a $3,340 million impairment charge related to the carrying value of Citi's remaining 35% interest in the Morgan Stanley Smith Barney joint venture
+
+
+During the year ended December 31, 2017, we decided to discontinue the internal development of AMG 899, resulting in an impairment charge of $400 million for the IPR&D asset
+
+Type 2: Table data
+SEC statements also contain "structured" data in HTML tables. Some line items, like the first row below, correspond to the expense type I'm looking for:
+
+| Item | 2020 | 2019 | 2018 |
+| --- | --- | --- | --- |
+| impairment related to real estate assets(2): | 398.2 | 200 | 0 |
+| research and development | 100 | 200 | 300 |
+| other expenses | 20 | 30 | 40 |
+
+Correct value = 398.2
+
+I'm thinking about a two-model approach:
+
+- Define a new NER model based off the terms I already know (e.g. "dry-hole costs", "impairment charges"). I would need to manually annotate extracts from historic statements that contain these terms for the training set.
+
+- For free-form text, it would match the sentence and pass it on for further processing (see 2).
+- For table data, I would loop over each row using beautifulsoup and pandas, check the first column for a match (e.g. using spaCy's similarity function), and then grab that year's value from the dataframe and finish (a rough sketch is given after this list).
+
+
+- For free-form matches, I still need to grab the monetary value for the correct year (sometimes multiple values are given for various years, see the first example above).
+
+
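+A rough sketch of the table-matching loop I have in mind for the table branch above (pandas + spaCy; the term list and file path are placeholders):
+import pandas as pd
+import spacy
+
+nlp = spacy.load("en_core_web_md")                       # medium model so similarity uses word vectors
+known_terms = ["impairment", "restructuring charges"]    # placeholder list of expense terms
+
+for df in pd.read_html("10k_filing.html"):               # placeholder path to a statement
+    for _, row in df.iterrows():
+        item = str(row.iloc[0])
+        if any(nlp(item).similarity(nlp(term)) > 0.8 for term in known_terms):
+            print(item, row.iloc[1])                      # take the most recent year's column
+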
+One potential problem here is that sentences like this would cause problems:
+
+We gained $100 million this year, despite facing restructuring charges.
+
+If the NLP algo is split into the above two-model process, model 1 would pass (because it contains a known term like "restructuring charges"), and model 2 would extract $100 million, which is incorrect because it doesn't actually correspond to the expense itself.
+Is there a better solution here? As I said, I'm new to NLP and data extraction so would really appreciate any advice or resources to learn more about solving these types of key/value problems.
+"
+"['machine-learning', 'tensorflow', 'python', 'keras', 'recurrent-neural-networks']"," Title: Is a true RNN auto encoder possible with Keras/TFBody: I want to get some encodings for temporal data (with a highly varying number of timesteps).
+The dataset is of the format: array<TemporalSample = list, SAMPLE_COUNT> (where array is fixed size and list is variable).
+The TemporalSamples are simply lists of size TemporalSample::timesteps.
+
+Currently, I use a standard RNN network of the form:
+from tensorflow import keras
+from tensorflow.keras import layers
+
+model = keras.Sequential()
+model.add(layers.GRU(256, dropout=0.1, input_shape=[None, 1], return_sequences=False))
+model.add(layers.Dense(len(output_names), activation="softmax"))
+model.compile(optimizer='adam',
+ loss='categorical_crossentropy',
+ metrics=['accuracy'])
+model.summary()
+
+The problem is that my inputs have variable lengths; if I want an auto-encoder, my output needs to be variable-length (just like my input), and I need an inverse RNN layer (something like layers.InverseGRU(output_shape=[None, 1])), but from my reading this does not seem to be something that has been considered/done before.
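+The closest I have found is a fixed-length sequence-to-sequence autoencoder along these lines (a sketch), but it requires choosing a fixed number of timesteps, which is exactly what I want to avoid:
+timesteps = 50  # would have to be fixed
+encoder = keras.Sequential([layers.GRU(200, input_shape=(timesteps, 1))])
+decoder = keras.Sequential([
+    layers.RepeatVector(timesteps),                   # tile the encoding back into a sequence
+    layers.GRU(200, return_sequences=True),
+    layers.TimeDistributed(layers.Dense(1)),
+])
+autoencoder = keras.Sequential([encoder, decoder])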
+Is this at all possible?
+"
+"['neural-networks', 'tensorflow', 'keras', 'objective-functions', 'generative-adversarial-networks']"," Title: Is it possible to use an internal layer's outputs in a loss function?Body: For a network of the form:
+Input(10)
+Dense(200)
+Dense(100+10)
+Dense(20)
+Output()
+
+Those +10 outputs are what I want to add to the standard 20 outputs, for my loss function.
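+For illustration, this is roughly the shape of what I am after (a Keras functional-API sketch; the custom loss itself is omitted):
+from tensorflow import keras
+from tensorflow.keras import layers
+
+inp = keras.Input(shape=(10,))
+h = layers.Dense(200, activation="relu")(inp)
+h = layers.Dense(110, activation="relu")(h)         # the 100 + 10 layer
+extra = layers.Lambda(lambda t: t[:, -10:])(h)      # the 10 internal outputs I want the loss to see
+out = layers.Dense(20)(h)
+model = keras.Model(inp, [out, extra])              # both tensors are exposed as model outputs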
+Is this possible - in theory or even with some pre-existing library?
+"
+"['computer-vision', 'object-detection', 'image-processing']"," Title: Viola-Jones algorithm: Haar-like features, how are the features extracted?Body: If I have an image like this
+1 2 3 4 5 6 7 8
+a b c d e f g h
+...
+
+And I apply a Haar-like feature with a template
+1 1 1 1
+-1 -1 -1 -1
+
+Then in the first position we get X1 = 1+2+3+4+a+b+c+d. If we slide one step to the right, we again get X2 = 2+3+4+5+b+c+d+e.
+This way we will get X1, X2, X3, and so on. Now, how are these values combined to get the feature? Because when we say a feature, we are not just running that template in one place; rather, we will run it over multiple places in the image. This gives lots of values like X1, X2, X3, and so on. Now, how are those combined to get the final feature which will be passed to AdaBoost?
+"
+"['natural-language-processing', 'transformer', 'hyperparameter-optimization', 'attention', 'gpt']"," Title: How to Select Model Parameters for Transformer (Heads, number of layers, etc)Body: Is there a general guideline on how the Transformer model parameters should be selected, or the range of these parameters that should be included in a hyperparameter sweep?
+
+- Number of heads
+- Number of encoder & decoder layers
+- Size of transformer model (d_model in PyTorch)
+- Size of hidden layers
+
+Are there general guidelines like number of decoder layers should be equal to encoder layers? Thank you
+"
+"['machine-learning', 'deep-learning', 'overfitting', 'loss']"," Title: How does the loss landscape look like or change when a model is overfitting?Body: My understanding is that when a model starts overfitting, it no longer learns useful features and starts remembering the training data set. Given enough epochs and sufficient parameters, a model can over-fit any arbitrary dataset. My question is how does the loss landscape look like for the training dataset vs the test dataset when a model is overfitting? Is there a weird dip around some point in the training dataset but the same dip is not there in the test dataset?
+"
+"['reinforcement-learning', 'convolutional-neural-networks', 'deep-rl', 'epsilon-greedy-policy', 'self-play']"," Title: How to fight with unstability in self play?Body: I'm working on a neural network that plays some board games like reversi or tic-tac-toe (zero-sum games, two players). I'm trying to have one network topology for all the games - I specifically don't want to set any limit for the number of available actions, thus I'm using only a state value network.
+I use a convolutional network - some residual blocks inspired by the Alpha Zero, then global pooling and a linear layer. The network outputs one value between 0 and 1 for a given game state - it's value.
+The agent, for each possible action, chooses the one that results in a state with the highest value, it uses the epsilon greedy policy.
+After each game I record the states and the results and create a replay memory. Then, in order to train the network, I sample from the replay memory and update the network (if the player that made a move that resulted in the current state won the game, the state's target value is 1, otherwise it's 0).
+The problem is that after some training, the model plays quite well as one of the players, but loses as the other one (it plays worse than the random agent). At first, I thought it was a bug in the training code, but after further investigation it seems very unlikely. It successfully trains to play vs a random agent as both players, the problem arises when I'm using only self play.
+I think I've found some solution to that - initially I train the model against a random player (half of the games as the first player, half as the second one), then when the model has some idea what moves are better or worse, it starts training against itself. I achieved pretty good results with that approach - in tic-tac-toe, after 10k games, I have 98.5% win rate against the random player as the starting player (around 1% draws), 95% as the second one (again around 3% draws) - it finds a nearly optimal strategy. It seems to work also in reversi and breakthrough (80%+ wins against random player after the 10k games as both players). It's not perfect, but it's also not that bad, especially with only 10k games played.
+I believe that, when training with self play from the beginning, one of the players gains a significant advantage and repeats the strategy in every game, while the other one struggles with finding a counter. In the end, the states corresponding to the losing player are usually set to 0, thus the model learns that whenever there is the losing player's turn it should return a 0. I'm not sure how to deal with that issue, are there any specific approaches? I also tried to set the epsilon (in eps-greedy) initially to some large value like 0.5 (50% chance for a random move) and gradually decrease it during the training, but it doesn't really help.
+"
+"['tensorflow', 'keras', 'recurrent-neural-networks']"," Title: Are there any inverse RNN layers?Body: Given the model:
+Sequential([
+GRU(200, input_shape=(None,100), return_sequences=False)
+])
+
+Which maps the space (None, 100) -> (200,)
+Is there an InverseGRU such that it maps the space (200,) -> (None, 100),
+or, at least, is it possible to simulate this behaviour?
+"
+"['neural-networks', 'machine-learning', 'papers', 'hidden-layers', 'batch-normalization']"," Title: Why does Batch Normalization work?Body: Adding BatchNorm layers improves training time and makes the whole deep model more stable. That's an experimental fact that is widely used in machine learning practice.
+My question is - why does it work?
+The original (2015) paper motivated the introduction of the layers by stating that these layers help fixing "internal covariate shift". The rough idea is that large shifts in the distributions of inputs of inner layers makes training less stable, leading to a decrease in the learning rate and slowing down of the training. Batch normalization mitigates this problem by standardizing the inputs of inner layers.
+This explanation was harshly criticized by the next (2018) paper -- quoting the abstract:
+
+... distributional stability of layer inputs has little to do with the success of BatchNorm
+
+They demonstrate that BatchNorm only slightly affects the inner layer inputs distributions. More than that -- they tried to inject some non-zero mean/variance noise into the distributions. And they still got almost the same performance.
+Their conclusion was that the real reason BatchNorm works was that...
+
+Instead BatchNorm makes the optimization landscape significantly smoother.
+
+Which, to my taste, is slightly tautological to saying that it improves stability.
+I've found two more papers trying to tackle the question: In this paper the "key benefit" is claimed to be the fact that Batch Normalization biases residual blocks towards the identity function. And in this paper that it "avoids rank collapse".
+So, is there any bottom line? Why does BatchNorm work?
+"
+"['natural-language-processing', 'papers', 'open-ai', 'gpt-3']"," Title: What is the meaning of ""Our current objective weights every token equally and lacks a notion of what is most important to predict"" in the GPT-3 paper?Body: On page 34 of OpenAI's GPT-3, there is a sentence demonstrating the limitation of objective function:
+
+Our current objective weights every token equally and lacks a notion of what is most important to predict and what is less important.
+
+I am not sure if I understand this correctly. In my understanding, the objective function is to maximize the log-likelihood of the token to predict given the current context, i.e., $\max L \sim \sum_{i} \log P(x_{i} | x_{<i})$. Although we aim to predict every token that appears in the training sentence, the tokens have a certain distribution, and therefore we do not actually assign equal weight to every token in loss optimization.
+And what would be an example of a model having the notion of "what is important and what is not"? What does the importance refer to here? For example, does it mean that "the" is less important compared to a less common noun, or does it mean that "the current task we are interested in is more important than the scenario we are not interested in"?
+Any idea how to understand the sentence by OpenAI?
+"
+"['convolutional-neural-networks', 'training', 'knowledge-representation', 'bayesian-deep-learning']"," Title: How to add prior information when predicting using deep learning models?Body: Background
+I'm building a binary classification model for a pair match problem using CNN, e.g. whether person A1 likes product B1 or not. Model input features are sequence features (text descriptions) of the person and the product. The model accuracy is around 78%. So for a new person, the model can predict the probability whether he likes each product in our dataset.
+Problem
+The model is good if we know nothing about the person. However, in the real scenario, we already know the new person likes one or two products. We want to predict whether he likes other products. Is there any way to incorporate this prior information to improve the model?
+My thought
+A simple method would just retrain the model, giving the new person's pair higher sample weight. But we can't do this for each new person.
+Any suggestion would be appreciated. Thanks
+"
+"['deep-learning', 'hyperparameter-optimization', 'overfitting', 'hidden-layers', 'accuracy']"," Title: Do larger numbers of hidden layers have a bigger effect on a classification model's accuracy?Body: I trained different classification models using Keras with different numbers of hidden layers and the same number of neurons in each layer. What I found was the accuracy of the models decreased as the number of hidden layers increased However, the decrease was more significant in larger numbers of hidden layers. The accuracies refer to the test data and were obtained using k-fold=5. Also, no regularization was used. The following graph shows the accuracies of different models where the number of hidden layers changed while the rest of the parameters stayed the same (each model has 64 neurons in each hidden layer):
+
+My question is why is the drop in accuracy between 8 hidden layers and 16 hidden layers much greater than the drop between 1 hidden layer and 8 hidden layers, even though the difference in the number of hidden layers is the same (8).
+"
+"['reinforcement-learning', 'implementation', 'proximal-policy-optimization', 'normal-distribution']"," Title: Why is the logarithm of the standard deviation used in this implementation of proximal policy optimization?Body: I am currently writing my bachelor thesis, which is an implementation of proximal policy optimization. Sometimes, I hit a wall because of the gaps in my mathematical knowledge. However, implementing the algorithm helped me to understand the math behind the algorithm.
+Unfortunately, I still have a question.
+When the action space is continuous, I am using the normal distribution (same as in the PPO implementation by Spinning Up). In the mentioned implementation, the logarithm of the standard deviation is used initially to give the same probability to all of the possible actions; then they use the standard deviation when choosing an action. Why do we use the logarithm? Why not simply use the standard deviation directly?
+I know that the logarithm is easier when it comes to the computations, but I cannot see the benefits of the logarithm in the Spinning Up implementation.
+class MLPGaussianActor(Actor):
+
+ def __init__(self, obs_dim, act_dim, hidden_sizes, activation):
+ super().__init__()
+ log_std = -0.5 * np.ones(act_dim, dtype=np.float32)
+ self.log_std = torch.nn.Parameter(torch.as_tensor(log_std))
+ self.mu_net = mlp([obs_dim] + list(hidden_sizes) + [act_dim], activation)
+
+ def _distribution(self, obs):
+ mu = self.mu_net(obs)
+ std = torch.exp(self.log_std)
+ return Normal(mu, std)
+
+ def _log_prob_from_distribution(self, pi, act):
+ return pi.log_prob(act).sum(axis=-1) # Last axis sum needed for Torch Normal distribution
+
+"
+"['q-learning', 'rewards', 'open-ai', 'gym']"," Title: Is is not possible to achieve average reward of more than 20-40 with simple Q-LearningBody: I have implemented the simple Q-Learning based solution for AI-gym's Cartpole-v0.
+However, despite changing hyper-parameters, and rechecking my code, I cannot get an average reward (N-running reward) of more than 30. My question is, is it not possible to get successful completion of Cartpole without using sophisticated algorithms such as Deep learning etc.?
+I am glad to share my code, but I am sure no one would have time to check it.
+
+PS. I know there are many implementations out there; I have learned from them, but I want to implement my own code for learning purposes and do not just want to copy-paste.
+PPS (Edit): I have added the code in the answer to this question for reference.
+"
+"['reinforcement-learning', 'monte-carlo-methods', 'off-policy-methods', 'importance-sampling']"," Title: With Monte Carlo off-policy learning what do we correct by using importance sampling?Body: I do not understand the link of importance sampling to Monte Carlo off-policy learning.
+We estimate a value using sampling on whole episodes, and we take these values to construct the target policy.
+So, it is possible that in the target policy, we could have state values (or state action values) coming from different trajectories.
+If the above is true, and if the values depend on the subsequent actions (the behavior policy), there is something wrong there, or rather, something I do not understand.
+Linking this question with importance sampling, do we use the importance sampling ratio $\rho$ to correct this inconsistency?
+Any clarification is welcome.
+"
+"['neural-networks', 'activation-functions', 'dense-layers']"," Title: Why my Fully Connected Neural Network outputs the same prediction?Body: I have a relatively small data set comprised of $3300$ data points where each data point is a $13$ dimensional vector where the $12$ first dimensions depict a "category" by taking the form of $[0,...,1,...,0]$ where $1$ is in the $i-th$ position for the $i-th$ category and the last dimension is an observation of a continuous variable, so typically one data point would be $[1,...,0,70.05]$.
+I'm not trying to have something extremely accurate so I went with a Fully Connected Network with two hidden layers each comprising two neurons, the activation functions are ReLus, one neuron at the output layer because I'm trying to predict one value, and I didn't put any activation function for it. The optimizer is ADAM, the loss is the MSE while the metric is the RMSE.
+I get this learning curve below:
+
+Even though at the beginning the validation loss is lower than the training loss (which I don't understand), I think at the end it shows no sign of overfitting.
+What I don't understand is why my neural network predicts the same value, $0.9747201$, as long as the $13$-th dimension takes values greater than $5$. If the $13$-th dimension takes, for example, $4.9$, then the prediction is $1.0005863$. I thought it had something to do with the ReLU, but even when I switched to sigmoid I get this "saturation" effect. The value is different, but I still get the same value once I pass a certain threshold.
+EDIT: I'd also like to add that I get this issue even when normalizing the 13th dimension (subtracting the mean and dividing by the standard deviation).
+I'd like to add that all the values in my training and validation set are at least greater than $50$ if that may help.
+"
+"['reinforcement-learning', 'deep-learning', 'training', 'q-learning']"," Title: How to properly resume training of deep Q-learning network?Body: I'm currently training a deep q-learning network. Due to resource limitations, I am not able to train the model to the desired performance in one go. So what I'm doing now is training the model for a certain number of episodes and save the resultant model. Later, I will load up the previously saved model and resume training.
+However, what I'm noticing is that when training resumes, the average rewards goes back to very low again, compared to what it achieved at the end of the previous training session. What I'm currently doing is to load up the previously saved model into the prediction and target models, and I keep all hyperparameters unchanged.
+
+- Is this behaviour expected?
+- If not, how do I properly resume training of a deep q-learning
+network?
+- Do I start off with the epsilon value at the end of the
+previous session, currently I reinitialize that as well?
+
+"
+"['neural-networks', 'tensorflow', 'keras']"," Title: Number of parameters in Keras/Tensorflow Dense layersBody: I am a bit confused about how the number of parameters are calculated in Dense model for the Kera/Tensorflow.
+For example, in the figure below I thought that both the statements were the same, but I found a different number of parameters for both. In particular, I am talking about the model.add(Dense(...)) command.
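+My understanding so far is that a fully connected layer with $n$ units on top of a $d$-dimensional input should have $n \times (d + 1)$ parameters ($n \times d$ weights plus $n$ biases), but the two summaries do not seem to match that.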
+
+"
+"['neural-networks', 'recurrent-neural-networks', 'prediction', 'time-series', 'ai-basics']"," Title: Predict time series from initial non-time dependant parametersBody: I'm trying to create an algorithm (neural network) that is able to predict a time series from a set of different parameters that are not given through time. Let's say I have a plane flying under the following conditions:
+
+| Parameters | Value |
+| --- | --- |
+| Angle of attack | 8 degrees |
+| Lateral angle | 12 degrees |
+| Wind speed | -20 m/s |
+| Plane speed | 200 m/s |
+
+From this point, I would like to predict the translational velocities along the x, y, and z axes for the next 2-3 seconds.
+In order to train my model, I have a database with different initial situations (input) and different motions of the plane (desired output) linked to their initial situation. Therefore, I want to train my model to predict the motions mentioned before, based only on the initial situation described.
+In other words, the basics of what I'm trying to do could be summed up as the following:
+Parameters describing the initial situation -> Model -> Time series of translational velocities.
+"
+"['reinforcement-learning', 'markov-decision-process', 'discount-factor']"," Title: Relation between discounted MDP and stochastic shortest path problems in RLBody: I have been reading about discounted MDPs and Stochastic Shortest Path (SSP). I recently came to know (from a friend) that every discounted MDP can be converted to an equivalent SSP but not the other way around. Questions:
+
+- Is this claim true? Is the discount factor equal to 1 when the MDP is converted to an SSP?
+- More generally, what is the relationship between these two problem categories?
+
+"
+"['machine-learning', 'papers', 'math', 'evolutionary-algorithms', 'biology']"," Title: Clonal operator in Immune Clonal StrategyBody: I was reading about Immune Clonal Strategy, specifically about Monoclonal operator from Immunity clonal strategies, and it goes as follows:
+
+Here $a_i $ is a point and $a_i = \{ x_1, x_2, \cdots, x_m \}$.
+I do not understand what $I_i$ really is. It seems to just copy $a_i$ $q_i$ times, or something like that. Can someone please explain what is really happening here?
+"
+"['deep-learning', 'convolutional-neural-networks', 'feature-extraction']"," Title: How is a ResNet-50 used for deep feature extraction?Body: I'm trying to implement the vehicle re-identification model described in https://arxiv.org/pdf/2004.06271.pdf.
+My question focuses on Section 3.2 of the paper, which uses a ResNet-50 for deep feature extraction in order to generate discriminative features which can be used to compare images of vehicles by Euclidean distance for re-identification. It takes a 256x256x3 image as input.
+My understanding of ResNet-50 is that its output is of the shape N, where N is the number of classes which an input image could be, and ground truth labels take the form of a one-hot encoding where the '1' value represents the node in the output layer which is associated with the given class.
+I am therefore confused by the usage of ResNet-50 in a re-identification task in which the goal is to generate an array of discriminative features which can be compared by Euclidean distance. There is no discrete set of N classes, as the model should work on any of the infinite number of vehicles in the world.
+What is the ground truth label in a ResNet-50 in the context of a re-identification task?
+"
+"['genetic-algorithms', 'evolutionary-algorithms', 'constraint-satisfaction-problems', 'mutation-operators']"," Title: How to handle equality constraints in the mutation operation of evolutionary algorithms?Body: I am new in evolutionary algorithms field. I have a chromosome of 6 variables (real variable), where the sum of these variables is equal to 1.
+I am looking for mutation formulas that can generate a new chromosome respecting the equality constraint: in my case, the sum of new chromosome should always equal to 1.
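+The only idea I have so far is to mutate freely and then project back onto the constraint, roughly as in the sketch below (assuming all genes stay non-negative), but I do not know whether this is a sound mutation operator:
+import numpy as np
+
+def mutate_and_renormalize(chromosome, sigma=0.05, rng=np.random.default_rng()):
+    # add Gaussian noise, clip to keep genes non-negative, then rescale so the sum is 1 again
+    child = np.clip(chromosome + rng.normal(0.0, sigma, size=chromosome.shape), 1e-9, None)
+    return child / child.sum()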
+"
+"['neural-networks', 'convolutional-neural-networks']"," Title: Showing first layer RGB weights similarly to AlexNetBody: I would like to show the RGB features learned in the first layer of a convolutional neural network similarly to this visualization of the same layer's features from AlexNet:
+
+My learned weights are in the range [-1.1, 1.1]. When I use imshow in Python or imagesc in Matlab, the weight values are clipped to [0,1], leaving only positive weights intact, everything else black (obviously).
+Negative weight values could be informative, so I don't want to clip them. Rescaling the weights to [0,1] works fine for grayscale features, but not for RGB features as it is unclear how negative values of a channel should be visualized. In the above picture 0 furthermore seems to map to the middle of the range (gray).
+How are such RGB features visualized so that they look similar to the above AlexNet visualization?
+(Sorry for the beginner's question.)
+"
+"['deep-learning', 'reference-request', 'recurrent-neural-networks', 'long-short-term-memory', 'time-series']"," Title: How to train an LSTM to classify based on rare historic event?Body: I want an LSTM to output one of two classes (Y, N), per frame, based on all the input so far.
+My original inputs are very long (~100000 samples long, far more than a standard LSTM training can handle due to vanishing gradients).
+
+- If the last seen instance out of the tokens (A, B) was A, output Y.
+- If the last seen instance out of the tokens (A, B) was B, output N.
+- The very long sequence is guaranteed to start with either A or B.
+
+If the sequence was short, this would be quite easy.
+For example, the following top lines and bottom lines correspond to inputs and required outputs:
+ABCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
+YNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN
+
+ACCCCCCCCCBCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCACCCCCCCCCCCCCCCCACCCCCCCC
+YYYYYYYYYYNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNYYYYYYYYYYYYYYYYYYYYYYYYYY
+
+
+Looks easy enough, just push batches comprised of chunks of the long sequence to the LSTM and have a coffee, right?
+However, in my case, the available inputs are (A, B, C), of which (A, B) are extremely rare, meaning I can have batches comprised of 100% C's. The LSTM then has no chance, unless it is fed some current state telling it about the last A or B seen.
+Unfortunately, this "state" is really something learned, and I can't just feed it as input AFAIK.
+CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
+????????????????????????????????????????????????????????????????????
+
+
+I am looking for a standard practice, or other references on how to train an LSTM or other RNN based model to be able to classify based on rare events far in history.
+I hope this is clear, if not please ask and I will edit.
+
+Please note that the data is labeled, and labeling can't be generated automatically for this task. The above is just an example for ease of understanding, the reality is more complicated.
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks', 'adversarial-ml']"," Title: Expected behavior of adversarial attacks on deep NN?Body: I am trying adversarial attack (AA) for a simple CNNs. Instead of the clean image, my simple CNN is trained with attacked images as suggested by some papers. As the training goes on, I am not sure if the training is well going or something is wrong.
+Here is what I observed:
+When the epsilon value is large, the classification performance of the model from the adversarial training is low. I understand that if an attacked image is given to the model, the performance is poor: although the model comes from adversarial training, because the epsilon is large, the model performs poorly. However, even when a clean image is given, the performance of the model is still low. Performance on clean images is higher than performance on attacked images, but not as high as that of the baseline model without adversarial training.
+So, I wonder if adversarial training also degrades the performance of the model on clean images. When I read papers, I only see the results on the adversarial images, not clean images. If you have any experience, it would be very helpful for checking whether my training code is working well or not.
+When the epsilon is very large, the accuracy of the model on clean images is around 15%. The model without adversarial training is around 81%.
+Some details.
+I use a PGD attack with 5 iterations, and epsilon is one of eps = [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.03, 0.05, 0.07]. The step size is eps/3. Only one epsilon is selected and the adversarial training is conducted, so there are 8 different models trained with different epsilons. I use a natural image dataset.
+"
+"['deep-learning', 'comparison', 'batch-normalization', 'layer-normalization']"," Title: What are the consequences of layer norm vs batch norm?Body: I'll start with my understanding of the literal difference between these two. First, let's say we have an input tensor to a layer, and that tensor has dimensionality $B \times D$, where $B$ is the size of the batch and $D$ is the dimensionality of the input corresponding to a single instance within the batch.
+
+- Batch norm does the normalization across the batch dimension $B$
+- Layer norm does the normalization across $D$
+
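+In code, I understand the literal difference as just the axis over which the mean and variance are taken (a NumPy sketch of my understanding, ignoring the learned scale and shift):
+import numpy as np
+
+x = np.random.randn(32, 64)  # (B, D)
+batch_normed = (x - x.mean(axis=0)) / x.std(axis=0)                                 # per feature, across the batch
+layer_normed = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)   # per instance, across features
+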
+What are the differences in terms of the consequences of this choice?
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'long-short-term-memory', 'text-classification']"," Title: Advantages of CNN vs. LSTM for sequence data like text or log-filesBody: When do you tend to use CNN rather than LSTM (or the other way round) in classification or generation tasks of sequential data like text or log-data? What are the reasons for the decision and what does it depend on? Are there any papers or statistics that confirm this?
+I'm thinking of data like Linux log entries or short sentences of fewer than 20 words/tokens.
+Personally, I would almost always use an LSTM, but I'm curious whether a CNN wouldn't be better in some cases, if it's possible to implement one in a meaningful way. On short sentences there isn't much room to use a CNN, if I'm not mistaken.
+"
+"['regression', 'relu', 'weights-initialization', 'dense-layers', 'learning-curve']"," Title: How to explain that a same DNN model have radically different behaviours with each new initialization and training?Body: I'm trying to predict the continuous values of a variable $y$ using a Fully Connected Neural Network while providing it with data from a $(3300, 13)$ matrix $X$ where $X[i, :]=[0,...,1,...,0,x_{i}]$. So the first $12$ elements of a data vector are all zeros except for one element which is equal to $1$ to denote the belonging of this data to a category. I'd like to add that my $X$ data is normalized with regard to the $13$-th column and that both $X$ and $y$ are shuffled in the same manner. Please find below my code for my model:
+from tensorflow.keras.models import Sequential
+from tensorflow.keras.layers import Input, Dense
+
+model = Sequential()
+model.add(Input(shape=(13,)))
+model.add(Dense(6, activation = 'relu'))
+model.add(Dense(2, activation = 'relu'))
+model.add(Dense(1))
+
+model.compile(loss = 'mean_squared_error',
+ optimizer = 'adam',
+ metrics = ['RootMeanSquaredError'])
+
+history = model.fit(X, y, validation_split = 0.1, epochs=64)
+
+When trying to plot the learning curve using:
+import matplotlib.pyplot as plt
+
+plt.plot(history.history['loss'])
+plt.plot(history.history['val_loss'])
+plt.title('model loss')
+plt.ylabel('rmse')
+plt.xlabel('epoch')
+plt.legend(['train', 'val'], loc='upper left')
+plt.show()
+
+I get these curves:
+
+There's already an "unusual" element to point out here; I've noticed that throughout the training the loss decreases, but sometimes in an oscillating manner, yet we don't notice that on the training learning curve. For example, the last four values of the loss are $2.5176$, $3.4718$, $3.0704$, and it settles down at $3.8177$. I've also noticed that the losses provided by history.history are different from those shown during training; I suspect one is computed before the epoch and the other after, but I'm not sure.
+I've tried to predict on the first $275$ elements of the training data. Most of the predictions took the value $4.2138872e+00$, but there are other predictions that took smaller values. I've computed the maximum of the predictions on the whole training set, and it is $4.2138872e+00$.
+I've also tried to train on the whole training set without a validation set to see what'll happen. I've made sure to rerun the cells of the model so that it doesn't take the weights it already found. I've noticed the same behaviour for the loss during training, but this time there is no constant predicted value that comes up as a maximum limit for the predictions.
+I've already asked this question here, and a user suggested that I ask this question separately while providing the whole code. I ran the same code I was running before, and it was giving me the same predictions no matter what my input vectors were.
+I think, as the user @Kostya that answered my previous question pointed out, what's happening here is called "dying ReLus". It's the same code that I'm running over and over but gives different predictions and the only random parameters are the weights and the biases. I'm sure the biases are initially initialized to zero but I don't know how the weights are handled. I suppose they're randomly generated by a centered and reduced normal distribution.
+I have lastly come to this question: does the number of neurons, and hence the number of weights, influence the phenomenon of "dying ReLUs"? I came to think that because, if we had a large number of weights, their values would be likely to fill the interval where the majority of the probability mass is concentrated. And since we have a small number of weights, we can get some "outlier" weights which lead to dying ReLUs.
+"
+"['neural-networks', 'training', 'backpropagation', 'feedforward-neural-networks', 'multilayer-perceptrons']"," Title: Why is the backpropagation algorithm used to train the multilayer perceptron?Body: I've read in the book Neural Network Design, by Martin Hagan et al. (chapter 11), that, to train the feed-forward neural network (aka multilayer perceptron), one uses the backpropagation algorithm.
+Why this algorithm? Could someone please explain in simple terms and detailed terms?
+"
+"['reinforcement-learning', 'markov-decision-process', 'convergence', 'policy-iteration']"," Title: Does the policy iteration convergence hold for finite-horizon MDP?Body: Most RL books (Sutton & Barto, Bertsekas, etc.) talk about policy iteration for infinite-horizon MDPs. Does the policy iteration convergence hold for finite-horizon MDP? If yes, how can we derive the algorithm?
+"
+"['neural-networks', 'hyperparameter-optimization', 'hyper-parameters', 'testing', 'statistics']"," Title: What is the most statistically acceptable method for tuning neural network hyperparameters on very small datasets?Body: Neural networks are usually evaluated by dividing a dataset into three splits:
+
+- training,
+- validation, and
+- test
+
+The idea is that critical hyperparameters of the network such as the number of epochs and the learning rate can be tuned by testing the network on the validation data while keeping the test data completely unseen until a final evaluation that happens only after the hyperparameters have been tuned.
+However, if the amount of data is very small (e.g. 10-20 examples per class), then dividing the dataset into three splits may negatively impact the model due to lack of training data, and two splits is therefore preferable. A two split approach that makes a reasonable amount of data available for training is ten-fold stratified cross validation.
+My question is -- is it statistically sound to tune hyperparameters by repeatedly evaluating hyperparameter sets using cross validation? Keep in mind that there is no held-out test data in this case, as the amount of available data is too small. I'd like some evidence/citations if possible showing that specifically for small datasets, this is the best approach for estimating the best hyperparameters that lead to the best generalizable model. Or if there is another approach that is better, I'd like to learn about that too.
+"
+"['machine-learning', 'reinforcement-learning', 'convolutional-neural-networks', 'ai-design', 'optimization']"," Title: Any RL approaches for this 2D space optimization problem?Body: I have a list of rectangles, they are in a certain order in 2D at the beginning. The task is to move them to get the boundary (rectangular) of the minimal area. It's OK to push off the dotted border as long as the area is minimal.
+The starting state may look similar to this (top view):
+
+Are there any reinforcement learning approaches to this problem? I'm thinking of actions such as 'rotate 90 degs', 'push east', 'push west', 'push south', 'push north', but it is still not clear how these actions should be applied: which rectangle to push, and how far to push it.
+The 2D state can be mapped to a grid of zeros (free) and ones (occupied) to utilize conv2D layers. Before feeding to conv2D, all rectangle coords should be translated to make the ($x_{min}$,$y_{min}$) be at the origin.
+"
+"['neural-networks', 'deep-learning', 'terminology', 'backpropagation', 'weights']"," Title: Is the bias also a ""weight"" in a neural network?Body: I'm learning about how neural networks are trained. I understand how a neuron works, backpropagation, and all that. In neurons, there is a clear distinction between a "weight" and a "bias".
+$$
+Y= \sigma(\text{weight} * \text{input})+ \text{bias}
+$$
+However, all the sources I've found say that, when you train the network, you just adjust the weights, not the bias.
+
+However, they never mention what the bias should do, which leads me to think that you just merge all weights and biases into a $W$ vector and call them weights, even though there are also biases. Is that correctly understood?
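+For what it's worth, here is a small numeric sketch of what I mean by "merging" the bias into the weights (the numbers are made up):
+import numpy as np
+
+x = np.array([0.5, -1.2, 3.0])        # inputs
+w = np.array([0.1, 0.4, -0.2])        # weights
+b = 0.7                               # bias
+
+z1 = w @ x + b                        # the usual form
+
+# "merged" form: append a constant 1 to the input and treat the bias as one more weight
+x_aug = np.append(x, 1.0)
+w_aug = np.append(w, b)
+z2 = w_aug @ x_aug
+
+print(np.isclose(z1, z2))             # True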
+"
+"['papers', 'variational-autoencoder', 'notation']"," Title: In variational autoencoders, what does p(x|z) mean?Body: If $x \sim \mathcal{N}(\mu,\,\sigma^{2})$, then it is a continuous variable, and therefore $P(x) = 0$ for any x. One can only consider things like $P(x<X)$ to get a probability greater than 0.
+So what is the meaning of probabilities such as $P(x|z)$ in variational autoencoders? I can't think of $P(x|z)$ as meaning $P(x<X|z)$, if $x$ is an image, since $x<X$ doesn't really make sense (all images smaller than a given one?).
+"
+"['neural-networks', 'deep-learning', 'terminology', 'transformer']"," Title: What part of the Vaswani et al. is the ""transformer""?Body: Which part of this is the transformer?
+
+Ok, the caption says the whole thing is the transformer, but that's back in 2017 when the paper was published. My question is about how the community uses the term "transformer" now.
+I'm not looking for an inline response to these questions. They are all a way of asking the same general thing.
+
+- Is this whole thing a transformer?
+- What parts or what relationships between parts make it a transformer?
+- Equivalently, what aspects can I change before it becomes something else and not a transformer?
+- If I only care about self-attention I suppose I don't need the right hand column. If I just keep the self-attention, is it still a transformer?
+
+For context, I've only just become familiar with transformers and have not read much of the literature on them beyond this paper.
+"
+"['image-segmentation', 'semantic-segmentation']"," Title: Semantic segmentation - background or ignore for non-target classes?Body: I am training a deep learning model for semantic segmentation. I am using the cityscapes dataset for training/evaluation.
+In Cityscapes, there are 34 classes, of which we consider only 19; the rest of the classes are ignored. For training, I have assigned the 19 classes train_ids 0-19.
+Now, since the rest of the classes are ignored, I have ignored them when computing the loss using cross entropy with ignore_index=255.
+But the above effect can also be achieved by assigning a background class, i.e., 20 as a background class, and assigning all the ignored classes to it.
+Now my question is, which method would be better to achieve a high mIoU in cityscapes? And what would be your intuition in choosing the approach?
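+For reference, this is roughly how I set up the two options (a PyTorch-style sketch with placeholder shapes):
+import torch
+import torch.nn as nn
+
+logits = torch.randn(2, 19, 64, 128)              # (batch, 19 train classes, H, W), placeholder
+target = torch.randint(0, 19, (2, 64, 128))       # train ids; ignored pixels set to 255
+target[0, :8, :8] = 255
+
+# Option 1: keep the 19 classes and ignore the rest in the loss
+loss_ignore = nn.CrossEntropyLoss(ignore_index=255)(logits, target)
+
+# Option 2: add one extra background class and map all ignored classes to it
+logits_bg = torch.randn(2, 20, 64, 128)
+target_bg = target.clone()
+target_bg[target_bg == 255] = 19                  # background gets the extra (last) index
+loss_bg = nn.CrossEntropyLoss()(logits_bg, target_bg)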
+"
+"['reinforcement-learning', 'open-ai', 'gym']"," Title: Open AI Taxi - Agent fails to learn an effective policyBody: I'm trying to solve the openai gym taxi problem (v3) using deep q learning. I've already had some success with the q-table approach, but for the life of me cannot manage to train a NN to learn a reasonable action policy. I'm doing the training using an AWS p3.2xlarge instance.
+My approach is fairly straightforward, I set up the environment and agent, then run the training loop.
+My code more or less looks like this:
+import gym
+from tensorflow.keras.optimizers import Adam  # needed for the optimizer created below
+from taxi_agent import Agent
+
+env = gym.make('Taxi-v3').env
+optimizer = Adam(learning_rate=0.001)
+agent = Agent(env, optimizer)
+
+batch_size = 32
+num_of_episodes = 200
+timesteps_per_episode = 120
+
+The agent was cobbled together from various examples online:
+import numpy as np
+import random
+from IPython.display import clear_output
+from collections import deque
+from tensorflow.keras import Model, Sequential
+from tensorflow.keras.layers import Dense, Embedding, Reshape
+from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard
+
+
+class Agent:
+ def __init__(self, environment, optimizer):
+
+ # Initialize atributes
+ self._state_size = environment.observation_space.n
+ self._action_size = environment.action_space.n
+ self._optimizer = optimizer
+
+ self.expirience_replay = deque(maxlen=2000)
+
+ # Initialize discount and exploration rate
+ self.gamma = 0.6
+ self.epsilon = 0.5
+
+ # Build networks
+ self.q_network = self._build_compile_model()
+ self.target_network = self._build_compile_model()
+ self.align_target_model()
+
+ #: Set up some callbacks
+ self.checkpoint_filepath = 'checkpoints/'
+ model_checkpoint_callback = ModelCheckpoint(
+ filepath=self.checkpoint_filepath,
+ save_weights_only=True,
+ save_freq='epoch'
+ )
+ # tensorboard_callback = TensorBoard('logs', update_freq=1)
+ self.model_callbacks = [model_checkpoint_callback]
+
+ self.history = []
+
+ def store(self, state, action, reward, next_state, terminated):
+ self.expirience_replay.append((state, action, reward, next_state, terminated))
+
+ def _build_compile_model(self):
+ model = Sequential()
+ model.add(Embedding(self._state_size, 10, input_length=1))
+ model.add(Reshape((10,)))
+ model.add(Dense(48, activation='tanh'))
+ model.add(Dense(24, activation='tanh'))
+ model.add(Dense(self._action_size, activation='linear'))
+
+ model.compile(loss='mse', optimizer=self._optimizer)
+ return model
+
+ def restore_weights(self):
+ path = self.checkpoint_filepath
+ print(f"restoring model weights from {path}")
+ self.q_network.load_weights(path)
+
+ def align_target_model(self):
+ self.target_network.set_weights(self.q_network.get_weights())
+
+ def act(self, state, environment):
+ if np.random.rand() <= self.epsilon:
+ return environment.action_space.sample()
+
+ q_values = self.q_network.predict(state)
+ return np.argmax(q_values[0])
+
+ def retrain(self, batch_size, epochs=1):
+ minibatch = random.sample(self.expirience_replay, batch_size)
+
+ for state, action, reward, next_state, terminated in minibatch:
+
+ target = self.q_network.predict(state)
+
+ if terminated:
+ target[0][action] = reward
+ else:
+ t = self.target_network.predict(next_state)
+ target[0][action] = reward + self.gamma * np.amax(t)
+
+ history = self.q_network.fit(state, target, epochs=1, verbose=0, callbacks=self.model_callbacks)
+ self.history.append(history.history)
+
+The training loop uses the agent to act in the environment up to a number of batch_size actions. Next, it retrains the model based on a random sample of the experience for every subsequent timestep.
+I have it set to print out feedback whenever the environment terminates (achieves the objective). In practice this never happens.
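+For reference, here is a minimal sketch of the kind of loop I described (a reconstruction for illustration; my exact code differs slightly):
+for episode in range(num_of_episodes):
+    state = np.reshape(env.reset(), [1, 1])
+    for timestep in range(timesteps_per_episode):
+        action = agent.act(state, env)
+        next_state, reward, terminated, info = env.step(action)
+        next_state = np.reshape(next_state, [1, 1])
+        agent.store(state, action, reward, next_state, terminated)
+        state = next_state
+        if len(agent.expirience_replay) > batch_size:
+            agent.retrain(batch_size)
+        if terminated:
+            agent.align_target_model()
+            print(f"Episode {episode} finished after {timestep} steps")
+            break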
+I've reloaded trained models from weights and trained for cumulative 24 hours without much success. I've also tried silly things like updating the target network after N steps just so it learns something - no luck.
+If I try to use my trained model to solve an example env instance, it just wants to move south, aside from the random actions it is set to take 50% of the time.
+It would be great if someone could give me some advice on what to try next. I can keep playing around with hyperparameters, but I don't have the best intuition about where to focus my efforts.
+iterations = 0
+terminated = False
+state = env.reset()
+env.render()
+while not terminated:
+    state = np.reshape(state, [1, 1])
+    action = agent.act(state, env)
+    next_state, reward, terminated, info = env.step(action)
+    state = next_state
+    if iterations % 10 == 0: env.render()
+    iterations += 1
+    if iterations > 1000: break
+
+"
+"['search', 'evolutionary-algorithms', 'meta-heuristics', 'pseudocode']"," Title: Does a differential evolution algorithm mutate its population during a generation?Body: I'm implementing a differential evolution algorithm and when it comes to evolving a population, the page I am referencing is vague on how the new population is generated.
+https://en.wikipedia.org/wiki/Differential_evolution#Algorithm
+The algorithm looks like the population is mutated during evolution.
+# x, a, b, c are agents in the population
+# subscriptable to find position in specific dimension
+# pop is filled with Agent objects
+for i, x in enumerate(pop):
+ [a, b, c] = random.sample(pop[:i]+pop[i+1:], 3)
+    ri = random.randint(0, len(x) - 1)  # index forced to take the mutant value
+    new_pos = []
+ for j in range(len(x)):
+ if random.uniform(0, 1) < CR or j == ri:
+ new_pos.append(a[j] + (F * (b[j] - c[j])))
+ else:
+ new_pos.append(x[j])
+ # Agent() class constructor for agent, takes position as arg
+ new_agent = Agent(new_pos)
+ if fitness(new_agent) <= fitness(x):
+ pop[i] = new_agent # replace x with new_agent
+
+But I wonder if instead it means a new population is made and then populated iteratively:
+new_pop = []
+for i, x in enumerate(pop):
+ [a, b, c] = random.sample(pop[:i]+pop[i+1:], 3)
+    ri = random.randint(0, len(x) - 1)
+    new_pos = []
+ for j in range(len(x)):
+ if random.uniform(0, 1) < CR or j == ri:
+ new_pos.append(a[j] + (F * (b[j] - c[j])))
+ else:
+ new_pos.append(x[j])
+ new_agent = Agent(new_pos)
+ if fitness(new_agent) <= fitness(x):
+ new_pop.append(new_agent)
+ else:
+ new_pop.append(x)
+pop = new_pop
+
+Note new_pop is made, and filled with agents, as the for loop continues.
+The first allows previously evolved agents to be used again in the same generation; in other words, the population is changed during the evolution. The second doesn't allow updated agents to be re-used, and only at the end is the original population changed.
+Which is it?
+"
+"['gradient-descent', 'linear-algebra', 'calculus']"," Title: How can the gradient of the weight be calculated in the viewpoint of matrix calculus?Body: Let $\sigma(x)$ be sigmoid function. Consider the case where $\text{out}=\sigma(\vec{x} \times W + \vec{b})$, and we want to compute $\frac{\partial{\text{out}}}{\partial{w}
+}.$
+Set the dimension as belows:
+$\vec{x}$: $(n, n_{\text{in}})$, $W$: $(n_{\text{in}}, n_{\text{out}})$, $\vec{b}$: $(1, n_{\text{out}})$.
+Then $\text{out}$ has the dimension $(n, n_{\text{out}})$. So we need to calculate the matrix by matrix derivative, as I know there is no such way to define that.
+I know that finally it is calculated as $\vec{x}^T \times (\text{out}\cdot(1-\text{out}))$.
+But I still can't get the exact procedure of the calculation: why it should be $\vec{x}^T \times (\text{out}\cdot(1-\text{out}))$ and not $(\text{out}\cdot(1-\text{out})) \times \vec{x}^T$. I know it by considering dimensions, but not by calculation.
+My intuition about this problem is that all the calculation can be considered as vector-by-vector differentiation: since $n$ is a batch size, we can calculate the matrix differentiation by considering each column vector.
+I'm not sure about my intuition yet, and I need an exact mathematical calculation procedure for this problem.
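+To check the claimed result numerically, here is a small sketch; it assumes the scalar loss is simply the sum of all entries of $\text{out}$, so the upstream gradient is a matrix of ones:
+import numpy as np
+
+def sigmoid(z):
+    return 1.0 / (1.0 + np.exp(-z))
+
+n, n_in, n_out = 4, 3, 2
+rng = np.random.default_rng(0)
+x = rng.normal(size=(n, n_in))
+W = rng.normal(size=(n_in, n_out))
+b = rng.normal(size=(1, n_out))
+
+def loss(W_):
+    return sigmoid(x @ W_ + b).sum()          # scalar loss: sum of all outputs
+
+out = sigmoid(x @ W + b)
+analytic = x.T @ (out * (1 - out))            # shape (n_in, n_out), the same as W
+
+eps = 1e-6
+numeric = np.zeros_like(W)
+for i in range(n_in):
+    for j in range(n_out):
+        Wp = W.copy(); Wp[i, j] += eps
+        Wm = W.copy(); Wm[i, j] -= eps
+        numeric[i, j] = (loss(Wp) - loss(Wm)) / (2 * eps)
+
+print(np.max(np.abs(analytic - numeric)))     # agrees to ~1e-9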
+"
+"['reinforcement-learning', 'alphazero', 'chess', 'multi-agent-systems', 'action-spaces']"," Title: How does the Alpha Zero's move encoding work?Body: I am a beginner in AI. I'm trying to train a multi-agent RL algorithm to play chess. One issue that I ran into was representing the action space (legal moves/or honestly just moves in general) numerically. I looked up how Alpha Zero represented it, and they used an 8x8x73 array to encode all possible moves. I was wondering how it actually works since I got a bit confused in their explanation:
+
+A move in chess may be described in two parts: selecting the piece to move, and then selecting among the legal moves for that piece. We represent the policy $\pi(a \mid s)$ by a $8 \times 8 \times 73$ stack of planes encoding a probability distribution over 4,672 possible moves. Each of the $8 \times 8$ positions identifies the square from which to "pick up" a piece. The first 56 planes encode possible "queen moves" for any piece: a number of squares $[1..7]$ in which the piece will be moved, along one of eight relative compass directions {N, NE, E, SE, S, SW, W, NW}. The next 8 planes encode possible knight moves for that piece. The final 9 planes encode possible under-promotions for pawn moves or captures in two possible diagonals, to knight, bishop or rook respectively. Other pawn moves or captures from the seventh rank are promoted to a queen.
+
+How would one numerically represent the move 1. e4 or 1. Nf3 (and how would the integer for 1. Nf3 differ from 1. f3), for example? How do you tell which integer corresponds to which move? This is what I'm essentially asking.
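+To make the question concrete, here is how I imagine such an encoding could be indexed; the plane ordering below is only an illustrative guess, not necessarily the ordering AlphaZero actually uses:
+DIRECTIONS = ['N', 'NE', 'E', 'SE', 'S', 'SW', 'W', 'NW']
+
+def queen_move_plane(direction, distance):
+    # planes 0..55: 8 directions x 7 possible distances
+    return DIRECTIONS.index(direction) * 7 + (distance - 1)
+
+def move_to_index(from_file, from_rank, plane):
+    # flatten (from-square, plane) into a single integer in [0, 8*8*73)
+    return (from_rank * 8 + from_file) * 73 + plane
+
+# 1. e4 as a "queen move": picked up from e2 (file 4, rank 1), direction N, distance 2
+print(move_to_index(4, 1, queen_move_plane('N', 2)))
+# 1. f3 would also be a queen-move plane, but picked up from f2, so a different square;
+# 1. Nf3 would instead use one of the 8 knight planes (56..63), picked up from g1.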
+"
+"['deep-learning', 'generative-adversarial-networks', 'generative-model']"," Title: Decision boundary figure in Least square GAN paperBody: I currently reading Least Square GAN paper. But, I cannot interpret one of its figures.
+Explanation of the figure goes like this:
+
+Figure 1: Illustration of different behaviors of two loss functions. (a): Decision boundaries of two loss functions. Note that the decision boundary should go across the real data distribution for a successful GANs learning. Otherwise, the learning process is saturated. (b): Decision boundary of the sigmoid cross entropy loss function. It gets very small errors for the fake samples (in magenta) for updateing G as they are on the correct side of the decision boundary. (c): Decision boundary of the least squares loss function. It penalize the fake samples (in magenta), and as a result, it forces the generator to generate samples toward decision boundary.
+
+I already know the vanilla GAN structure, but I could not understand why the decision boundary looks like this. Any help will be appreciated.
+"
+"['objective-functions', 'autoencoders', 'variational-autoencoder', 'mean-squared-error', 'evidence-lower-bound']"," Title: In variational autoencoders, why do people use MSE for the loss?Body: In VAEs, we try to maximize the ELBO = $\mathbb{E}_q [\log\ p(x|z)] + D_{KL}(q(z \mid x), p(z))$, but I see that many implement the first term as the MSE of the image and its reconstruction. Here's a paper (section 5) that seems to do that: Don't Blame the ELBO! A Linear VAE Perspective on Posterior Collapse (2019) by James Lucas et al. Is this mathematically sound?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'activation-functions']"," Title: What are the pros and cons of using sigmoid or softmax approach when dealing with 2 classes?Body: I know that when using Sigmoid, you only need 1 output neuron (binary classification) and for Softmax - it's 2 neurons (multiclass classification). But for performance improvement (if there is one), is there any difference which of these 2 approaches works better, or when would you recommend using one over the other. Or maybe there are certain situations when using one of these is better than the other.
+Any comments or shared experience will be appreciated.
+"
+"['math', 'activation-functions', 'function-approximation', 'universal-approximation-theorems']"," Title: Why can a neural network use more than one activation function?Body: From trying to understand neural networks better, I've come upon a tentative notion that an activation function aims to build a function it's approximating via linear combinations with biases and weights as their constants, like Fourier sums and other orthogonal basis functions.
+How, then, can one neural network layer use an activation function like a sigmoid, while another layer (like the output) uses softmax? How do we know a linear combination of sigmoids and something else can still build that function no matter what? To me, it's like saying a function is approximated using sine functions with $N$ different $k$ values and then also randomly a few Hermite polynomials are thrown in as well. In this case, Hermite polynomials and the sine function aren't even orthogonal (to be honest I haven't checked, but I'd assume they're not).
+This question highlights some misconceptions I have about activation functions, perhaps, and I'd like to know where I'm going wrong here.
+"
+"['deep-learning', 'object-detection', 'yolo', 'scalability']"," Title: Object Detection: Can I modify this script to support larger images (Scaled YOLOv4)?Body: I am looking at training the Scaled YOLOv4 on TensorFlow 2.x, as can be found at this link. I plan to collect the imagery, annotate the objects within the image in VOC format, and then use these images/annotations to train the large-scale model. If you look at the multi-scale training commands, they are as follows:
+python train.py --use-pretrain True --model-type p5 --dataset-type voc --dataset dataset/pothole_voc --num-classes 1 --class-names pothole.names --voc-train-set dataset_1,train --voc-val-set dataset_1,val --epochs 200 --batch-size 4 --multi-scale 320,352,384,416,448,480,512 --augment ssd_random_crop
+
+As we know that Scaled YOLOv4 (and any YOLO algorithm at that) likes image dimensions divisible by 32, I have plans to use larger images of 1024x1024. Is it possible to modify the --multi-scale commands to include larger dimensions such as 1024, and have the algorithm run successfully?
+Here is what it would look like when modified:
+--multi-scale 320,352,384,416,448,480,512,544,576,608,640,672,704,736,768,800,832,864,896,928,960,992,1024
+
+"
+"['activation-functions', 'function-approximation']"," Title: Trying to understand why nonlinearity is important for neural networks by analogyBody: Is the reason why linear activation functions are usually pretty bad at approximating functions the same reason why combinations of hermitian polynomials or combinations of sines and cosines are better at approximating a function than combinations of linear functions?
+For example, regardless of the amount of terms in this combination of linear functions, the function will always be some form of $y = mx + b$. However, if we're summing sines, you absolutely cannot express a combination of sines and cosines as something of the form $A \sin{bx}$. For example, a combination of three sinusoids cannot be simplified further than $A \sin{bx} + B \sin{cx} + D \sin{ex}$.
+Is this fact essentially why the Fourier series is able to approximate functions (other than obviously the fact that $A \sin{bx}$ is orthogonal to $B \sin{cx}$)? Because if it could be simplified into one sinusoid, it could never approximate an arbitrary function because it's lost its robustness? Because with other terms combined, whereas linear functions summed up gain no further ability to approximate, things like sinusoids actually begin to approximate really well with enough terms and with the right constants.
+In that vein, is this the reason why non-linear activiation functions (also called non-linear classifiers?) are generally valued more than linear ones? Because linear activation functions simply are lousy function approximators, while, with enough constants and terms, non-linear activation functions can approximate any function?
+"
+"['reinforcement-learning', 'supervised-learning', 'algorithm-request']"," Title: Is this a supervised or reinforcement learning problem, and which algorithm should I use to solve it?Body: I have a time series data with a little unusual cost/reward function (I haven't seen it before)
+The model must predict a $Y$ value for any $X(t)$.
+The reward is computed as follows. The model will receive a reward equal to $Y_\text{true} * Y_\text{prediction}$. But if the reward is a positive value, the model won't receive a positive reward in the next $5$ time steps (it can still get negative rewards at any time). It means sometimes it is better for the model to predict 0 and wait for a better reward.
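+To make the rule concrete, this is my reading of it as code (whether a suppressed positive reward becomes exactly 0 is an assumption on my part):
+def compute_reward(y_true, y_pred, steps_since_last_positive):
+    r = y_true * y_pred
+    if r > 0:
+        if steps_since_last_positive < 5:
+            return 0.0, steps_since_last_positive + 1   # positive reward suppressed during the lock-out
+        return r, 0                                      # positive reward granted, lock-out restarts
+    return r, steps_since_last_positive + 1              # negative rewards apply at any time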
+I have two questions:
+
+- Is it a supervised learning or reinforcement learning problem?
+
+- If it is a supervised learning, which optimization method should I use for it?
+
+
+"
+"['machine-learning', 'tensorflow', 'math', 'linear-algebra']"," Title: What's the difference between a 1d tensor and a 2d tensor with 1 dimension?Body: I'm doing a TensorFlow tutorial, where they convert an array of the numbers [1,2,3]
to a tensor like this:
+const xs = tf.tensor2d([1, 2, 3], [3, 1])
+
+The shape is [3,1] because there are 3 rows with 1 number each.
+My question is, why would they use a 2D tensor, isn't this just exactly the same as:
+const xs = tf.tensor1d([1, 2, 3])
+
+"
+"['terminology', 'unsupervised-learning', 'self-supervised-learning']"," Title: Does Yann LeCun consider k-means self-supervised learning?Body: I was discussing the topic of self-supervised learning with a colleague. After a while we realized we were using different definitions. That's never helpful.
+Both of us were introduced to self-supervised learning by reading or listening to Yann LeCun. He is renaming (part of) unsupervised learning to self-supervised learning. For example in this Facebook post.
+Probably the definitions of unsupervised and self-supervised learning overlap. But to me the terms are not interchangeable. For example, a prototypical example of age-old unsupervised learning technique is k-means. To me that is unsupervised but not self-supervised learning.
+Is Yann LeCun renaming the entire concept of unsupervised learning to self-supervised learning? More specifically, is his opinion that we should call clustering and anomaly detection self-supervised learning? And in the limit, does he call k-means self-supervised learning?
+References are appreciated.
+"
+"['generative-adversarial-networks', 'variational-autoencoder']"," Title: Are there architectures to generate pictures from four labels? (VAEs, GANs)Body: I want to try something with image creation via NNs. I have come across Variational Autoencoders and Generative Adversarial Networks as possible solutions but have only found image creatinon with CIFAR-100 and GANs (two class-labels, non continuous)
+Im looking for an idea for generating a picture with four continuous labels (Age and Rotation around X-, Y-, Z-axis). Is there anything usable for this?
+Im especially looking for an VAE-Model on this task.
+
+"
+"['neural-networks', 'reference-request', 'math', 'multilayer-perceptrons', 'perceptron']"," Title: What are the math theorems regarding the Multilayer Perceptron?Body: I've come across a theorem "Convergence theorem
+Simple Perceptron" for the first time, here-> https://zaguan.unizar.es/record/69205/files/TAZ-TFG-2018-148.pdf, page 27, (is in Spanish)
+Are there others like this one but for the Multilayer Perceptron?
+Could someone please point me out to them?
+Thank you in advance.
+"
+"['neural-networks', 'objective-functions', 'math', 'gradient-descent', 'loss']"," Title: Could the inputs of the mean squared-error loss function be transformed to allow larger learning rates?Body: In the context of a neural network $\hat{y} = f_\theta(\mathbf{x})$ with parameters $\theta$ that is trained to perform regression such that the prediction $\hat{\mathbf{y}} = [\hat{y}_1,\hat{y}_2,...,\hat{y}_N]$ is close the target $\mathbf{y} = [y_1,y_2,...,y_N]$, the mean squared-error (MSE) loss function is:
+$$
+\mathcal{L}(\mathbf{y},\hat{\mathbf{y}}) = \frac{1}{N} \sum_{i=1}^N (y_i - \hat{y}_i)^2
+$$
+The parameters $\theta$ are then adjusted using the gradient descent update rule:
+$$
+\theta_{k+1} \leftarrow \theta_{k} - \alpha \cdot \nabla \mathcal{L}(\mathbf{y},\hat{\mathbf{y}}_{\theta_k})
+$$
+Where $\alpha$ is the learning rate. I am aware that if $\alpha$ is too small, the parameters $\theta$ might never converge, or be too slow to converge, to the optimal set of parameters $\theta^*$, and if $\alpha$ is too large, the iterates of $\theta$ could oscillate and also never converge. My question has to do with the latter scenario, where $\alpha$ is too big, which leads to overshooting and oscillation.
+A good way of choosing $\alpha$ is using backtracking line search. However, because the neural network has many parameters, it is not practical to perform line search, and $\alpha$ needs to be chosen in another way.
+Is it possible to allow a larger value of the learning rate before overshooting and oscillation by "elongating" valleys in the MSE loss function $\mathcal{L}(\mathbf{y},\hat{\mathbf{y}})$? More precisely, by transforming $\mathbf{y}$ and $\hat{\mathbf{y}}$ in some way before computing the loss? For example, in practice, I have found the following modification to the MSE loss function to be very helpful in avoiding overshooting and oscillation, even with large learning rates:
+$$
+\mathcal{L}(\mathbf{y},\hat{\mathbf{y}}) = \frac{1}{N} \sum_{i=1}^N (\log(y_i + \epsilon) - \log(\hat{y}_i + \epsilon))^2
+$$
+Where $\epsilon$ is a small value. However, I am not sure why this modification helps.
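+For completeness, this is roughly how I implement the modified loss (a TensorFlow sketch; it assumes targets and predictions stay above $-\epsilon$):
+import tensorflow as tf
+
+EPS = 1e-6
+
+def log_mse(y_true, y_pred):
+    # mean squared error computed in log space
+    return tf.reduce_mean(tf.square(tf.math.log(y_true + EPS) - tf.math.log(y_pred + EPS)))
+
+# model.compile(optimizer='adam', loss=log_mse)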
+"
+"['comparison', 'papers', 'a-star', 'path-finding', 'heuristic-functions']"," Title: Comparing heuristics in A* search and rescue operationBody: I was reading a research paper titled A Comparative Study of A-star Algorithms for Search and rescue in Perfect Maze (2011).
+I have some doubts regarding it:
+1.
+
+The Evaluation Function of $\mathrm{A}^{*}(2)[5]$ is:
+$$
+f_{2}(i)=g_{2}(i)+h_{2}(i)+h_{2}(j)
+$$
+Where, $j$ is the father point of the current point, $h_{2}(j)$ is the Euclidean distance from the father point of the current point to the target point. This term is added to the father point for improving the search speed because it reduces the number of nodes.
+
+In this section (page 2 middle-right) it says that the father point is added to improve search speed as it reduces the number of nodes searched.
+Is this because the added father point in some way overestimates the cost function, similar to Greedy Best-First Search? Can it be interpreted as something between $A^{*}$ and Greedy BFS? If not, what is the reason for the increase in speed?
+2.
+
+$\mathrm{A}^{*}(3)$ that employed a heuristic function with angle and distance has not been demonstrated well in this experiment, the reason is: in this experiment, we have added deviations not only on distance but also on an angle, so the $A^{*}(3)$ algorithm has no advantage in this searching.
+
+In this section (page 3 upper-right) it is saying that $\mathrm{A}^{*}(3)$ is not so useful, as there are deviations in angle also. What does this statement mean, and how are deviations in angle added? I would appreciate help in understanding $\mathrm{A}^{*}(3)$.
+I need to understand why one heuristic is better than another. Is there some way to determine that apart from experimental evidence?
+"
+"['neural-networks', 'machine-learning', 'overfitting', 'performance', 'underfitting']"," Title: Identifying if a model is over or under-fitting via graphsBody: I am working on a Neural Network and have plotted the performance of my model. However the plots seem not to fit the "trends" (which help you identify the issue with your model) presented in this illustration.
+Here is the performance of my model
+The loss metric I used was Binary Cross Entropy (due to my problem being a binary classification task).
+Is my model over- or under-fitting? And how can you tell?
+"
+"['machine-learning', 'terminology', 'papers', 'transfer-learning']"," Title: What does ""semantic gap"" mean?Body: I was reading DT-LET: Deep transfer learning by exploring where to transfer, and it contains the following:
+
+It should be noted direct use of labeled source domain data on a new scene of target domain would result in poor performance due to the semantic gap between the two domains, even they are representing the same objects.
+
+Can someone please explain what the semantic gap is?
+"
+"['training', 'datasets', 'resource-request', 'ai-security', 'training-datasets']"," Title: How to source training data in ML for information security?Body: A company entrusts a Data Scientist with the mission of processing and valuing data for the research or treatment of events related to traces of computer attacks. I was wondering how would he get the train data.
+I guess he would need to exploit the logs of the different devices of the clients and use statistical, Machine Learning and visualization techniques in order to bring a better understanding of the attacks in progress and to identify the weak signals of attacks... But how would he get labelled data?
+He might get the logs of attacks received before, but those might not have the same signature as the attacks that are going to come later? So it might be difficult to create a reliable product?
+"
+"['machine-learning', 'deep-learning', 'papers', 'hebbian-learning']"," Title: What is a Hebbian linear classifier?Body: I was reading Deep Learning of Representations for Unsupervised and Transfer Learning,
+and they state the following:
+
+They have only a small number of unlabeled examples (4096) and very few labeled examples (1
+to 64 per class) available to a Hebbian linear classifier (which discriminates according to
+the median between the centroids of two classes compared) applied separately to each class
+against the others.
+
+I have searched for what a Hebbian linear classifier is, but I couldn't find more than an explanation of what Hebbian learning is, so can anybody explain what a Hebbian linear classifier is?
+"
+"['objective-functions', 'gradient-descent']"," Title: Why is loss displayed as a parabola in mean squared error with gradient descent?Body: I'm looking at the loss function: mean squared error with gradient descent in machine learning. I'm building a single-neuron network (perceptron) that outputs a linear number. For example:
+Input * Weight + Bias > linear activation > output.
+Let's say the output is 40 while I expect the number 20. That means the loss function has to correct the weights+bias from 40 towards 20.
+What I don't understand about mean squared error + gradient descent is: why is this number 40 displayed as a point on a parabola?
+Does this parabola represent all possible outcomes? Why isn't it just a line? How do I know where on the parabola the point "40" is?
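+Here is a tiny sketch of what I think the plot is showing, for a single example with the input fixed at 2 (so a weight of 20 produces the output 40 I mentioned):
+import numpy as np
+import matplotlib.pyplot as plt
+
+x, bias, target = 2.0, 0.0, 20.0
+w = np.linspace(-10, 30, 200)
+loss = (w * x + bias - target) ** 2        # squared error as a function of the weight only
+
+plt.plot(w, loss)
+plt.scatter([20.0], [(20.0 * x + bias - target) ** 2])   # output 40 -> loss (40 - 20)^2 = 400
+plt.xlabel('weight'); plt.ylabel('squared error'); plt.show()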
+
+"
+"['neural-networks', 'genetic-algorithms', 'fitness-functions']"," Title: How to design fitness function for multiple objectives?Body: I am currently building a neural network with genetic algorithms that learns to fly a 2D drone to a target. My goal is that it achieves all tasks as fast as possible, but I want the drone to also fly stable and upright. The way I tried to calculate the fitness was to create a function that has the greatest value when the drone does everything I want right.
+fitness += 1/distToTarget + cos(drone_angle)
+
+My current inputs are:
+difference_target_X
+difference_target_Y
+velocity_X
+velocity_Y
+angular_velocity (degree per second)
+drone_angle | = 0; |_ = 90 _| = -90
+
+The output (I don't think it is important, but here it is):
+left_thruster_angle
+left_thruster_boost
+right_thruster_angle
+right_thruster_boost
+
+The NN is programmed in Unity; the drone uses a 2D rigid body, and the NN adds a force to each thruster at the given angle.
+How do I get the GA to find weights that fulfill all objectives: fly stably, fly fast, fly to the target?
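+For reference, what I have in mind is a weighted sum of the objectives, something like the sketch below (my actual code is C# in Unity; the weights are guesses to be tuned):
+import math
+
+def evaluate_fitness(dist_to_target, drone_angle_rad, speed_to_target):
+    # weighted sum of the three objectives: reach the target, stay upright, be fast
+    w_dist, w_upright, w_speed = 1.0, 0.5, 0.2
+    return (w_dist / (dist_to_target + 1e-3)
+            + w_upright * math.cos(drone_angle_rad)
+            + w_speed * speed_to_target)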
+"
+"['reinforcement-learning', 'deep-rl', 'dqn', 'control-problem', 'policy-evaluation']"," Title: Using states (features) and actions from a heuristic model to estimate the value function of a reinforcement learning agentBody: new to RL here.
+As far as i understood from RL courses, that there is two sides of reinforcement learning. Policy Evaluation, which is the task of knowing the value function for certain policy. and Control, which is maximizing the reward or the value function. what if i have a heuristic agent that performs almost acceptable performance in an environment but i want to find a policy that tends to be the optimal policy, is there a way to cut the first half of the task by teaching the agent ? will be a side by side buffer of the (states, actions) be sufficient ?
+"
+"['classification', 'training', 'data-preprocessing', 'supervised-learning', 'imbalanced-datasets']"," Title: How to handle class imbalance when the actual data are that wayBody: My supervised learning training data are obtained from actual data; and in real cases, there's one class that happens less often than other classes, just around 5% of all cases.
+To be precise, the first 2 classes make up 95% of the training data and the last one 5%. Training while keeping the data ratio intact makes accuracy reach 50% at the very first step and 90%+ almost immediately, which doesn't make sense.
+Should I exclude some data from classes 1 and 2 to make the numbers of samples of the 3 classes equal? But then it's not the real-world ratio.
+"
+"['convolutional-neural-networks', 'terminology', 'books']"," Title: What does ""statistical efficiency"" mean in this context?Body: Consider the following statement(s) from Deep Learning book (p. 333, chapter 9: Convolutional Networks) by Ian Goodfellow et al.
+
+Convolution is thus dramatically more efficient than dense matrix
+multiplication in terms of the memory requirements and statistical
+efficiency.
+
+The book is saying that statistical efficiency is due to the decrease in the number of parameters from convolution (using a kernel) compared to fully connected feed-forward neural networks.
+What is meant by statistical efficiency in this context? And how does decrease in the number of parameters increase statistical efficiency?
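+A small back-of-the-envelope example of the parameter reduction the book refers to:
+# parameters needed to map a 32x32 single-channel input to a same-sized output
+in_h = in_w = 32
+dense_params = (in_h * in_w) * (in_h * in_w)   # a fully connected layer, ignoring biases
+conv_params = 3 * 3                            # one 3x3 kernel, shared across all positions
+print(dense_params, conv_params)               # 1048576 vs 9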
+"
+"['object-detection', 'object-recognition', 'optical-character-recognition', 'ctc-loss']"," Title: Why object detection algorithms are poor in optical character recognition?Body: OCR is still a very hard problem. We don't have universal powerful solutions. We use the CTC loss function
+An Intuitive Explanation of Connectionist Temporal Classification | Towards Data Science
+Sequence Modeling With CTC | Distill
+which is very popular, but it's still not enough.
+The simple solution would be to use object detection algorithms for recognizing every single character and combine them to form words and sentences.
+We already have really powerful object detection algorithms like Faster-RCNN, YOLO, SSD. They can detect even very complicated objects that are not fully visible.
+But I read that these object detection algorithms are very poor if you use them for recognizing characters. It's very strange since these are very simple objects, just a few lines and circles. And mainly grayscale images. I know that we use object detection algorithms to detect the regions of text on big images. And then we recognize this text. Why can't we just use object detection algorithms (small versions of popular neural networks) for recognizing single characters?
+Why do we use CTC or other approaches (besides the fact that character-level detection would require much more labeling)?
+Why not object detection?
+"
+"['tensorflow', 'bert']"," Title: What's new in LaBSE v2?Body: I can't find what's new in LaBSE v2 (https://tfhub.dev/google/LaBSE/2). What are the main highlights of v2 versus v1? And how did you find out?
+"
+"['tensorflow', 'transfer-learning', 'pretrained-models', 'inception']"," Title: How to train my model using transfer learning on inception_v3 pre-trained model?Body: I am trying to train my model to classify 10 classes of hand gestures but I don't get why am I getting validation accuracy approx. double than training accuracy.
+My dataset is from kaggle:
+https://www.kaggle.com/gti-upm/leapgestrecog/version/1
+My code for training model:
+from sklearn.preprocessing import LabelEncoder
+from sklearn.model_selection import train_test_split
+from tensorflow import keras
+from tensorflow.keras import optimizers
+from tensorflow.keras.layers import Input, Conv2D, Dense, Dropout, GlobalAveragePooling2D
+from tensorflow.keras.models import Sequential
+from tensorflow.keras.utils import to_categorical
+
+print(x.shape, y.shape)
+# ((10000, 240, 320), (10000,))
+
+# preprocessing
+x_data = x/255
+le = LabelEncoder()
+y_data = le.fit_transform(y)
+x_data = x_data.reshape(-1,240,320,1)
+x_train,x_test,y_train,y_test = train_test_split(x_data,y_data,test_size=0.25,shuffle=True)
+y_train = to_categorical(y_train)
+y_test = to_categorical(y_test)
+
+# Training
+
+base_model = keras.applications.InceptionV3(input_tensor=Input(shape=(240,320,3)),
+ include_top=False,
+ weights='imagenet')
+base_model.trainable = False
+
+CLASSES = 10
+input_tensor = Input(shape=(240,320,1) )
+model = Sequential()
+model.add(input_tensor)
+model.add(Conv2D(3,(3,3),padding='same'))
+model.add(base_model)
+model.add(GlobalAveragePooling2D())
+model.add(Dropout(0.4))
+model.add(Dense(CLASSES, activation='softmax'))
+model.compile(loss='categorical_crossentropy',
+ optimizer=optimizers.Adam(lr=1e-5), metrics=['accuracy'])
+
+history = model.fit(
+x_train,
+y_train,
+batch_size=64,
+epochs=20,
+validation_data=(x_test, y_test)
+)
+
+I am getting accuracy like:
+Epoch 1/20
+118/118 [==============================] - 117s 620ms/step - loss: 2.4571 - accuracy: 0.1020 - val_loss: 2.2566 - val_accuracy: 0.1640
+Epoch 2/20
+118/118 [==============================] - 70s 589ms/step - loss: 2.3253 - accuracy: 0.1324 - val_loss: 2.1569 - val_accuracy: 0.2512
+
+
+I have tried removing the Dropout layer, changing train_test_split, but nothing works.
+EDIT:
+On changing the dataset to color images from https://www.kaggle.com/vbookshelf/v2-plant-seedlings-dataset, I am still getting higher validation accuracy in the initial epochs. Is this acceptable, or am I doing something wrong?
+
+"
+['deep-neural-networks']," Title: Unable to 'learn' a rotational angle by parametrising the angle as a neural network layerBody: I'm trying to implement a neural network that can capture the drift in a measured angle as a way of dynamic calibration. i.e, I have a reference system that may change throughout the course of the data gathering and would like to train a network layer which actually converts the drifting reference to the desired reference by updating the angle parameter.
+For example: Consider the 2d case. We would have a set of 2d points $X\in \mathbb{R}^2$ and a trainable parameter called $\theta$ in the layer. The output of the layer would then be:
+$$X_o = XR$$ where
+$$R = \begin{bmatrix}
+\cos(\theta) & -\sin(\theta) \\
+\sin(\theta) & \cos(\theta)
+\end{bmatrix}$$
+Using Adam optimizer I then try to find the $\theta$ which transforms a given angle to the desired reference.
+However, the $\theta$ value seems to fluctuate around the initial value probably because of a diverging gradient(?). How can I overcome this issue?
+The code is below.
+import tensorflow as tf
+import numpy as np
+import matplotlib.pyplot as plt
+
+
+class Rotation2D(tf.keras.layers.Layer):
+ def __init__(self):
+ super(Rotation2D, self).__init__()
+
+ def build(self, input_shape):
+ self.kernel = self.add_weight("kernel", initializer=tf.keras.initializers.Constant(90),
+ shape=[1, 1])
+
+ def call(self, input):
+ matrix = ([[tf.cos(self.kernel[0, 0]), -tf.sin(self.kernel[0, 0])],
+ [tf.sin(self.kernel[0, 0]), tf.cos(self.kernel[0, 0])]])
+ return tf.matmul(input, tf.transpose(matrix))
+
+layer = Rotation2D()
+
+t = np.arange(0, 1000)/200.
+
+y_in = np.array([np.sin(t), np.cos(t)]).T
+y_ta = np.array([np.cos(t), np.sin(t)]).T
+
+model = tf.keras.Sequential()
+model.add(layer)
+
+model.compile(tf.keras.optimizers.SGD(lr=1.), loss='MSE')
+model.fit(y_in, y_ta, epochs=1)
+for i in range(100):
+ print(layer.get_weights())
+ model.fit(y_in, y_ta,verbose=0, batch_size=5)
+y_out = (model.predict(y_in))
+
+fig, axes = plt.subplots(2, 1)
+
+for i in range(2):
+ ax = axes[i]
+
+ ax.plot(y_in.T[i], label = 'input')
+ ax.plot(y_ta.T[i], label = 'target')
+ ax.plot(y_out.T[i], label = 'prediction')
+
+plt.legend()
+
+plt.show()
+
+"
+['neural-networks']," Title: Are neural networks invertible?Body: I am interested in learning about the inverse of neural networks and I would like to understand about the invertibility of neural networks, as for example described in On the Invertibility of Invertible Neural Networks.
+Researchers who are working on this domain, can you help me understand these two questions.
+
+- Are all neural network invertible ?
+- What exactly qualifies a neural network to be invertible ?
+
+"
+"['research', 'matlab']"," Title: On what basis is MATLAB ""inflexible"" to perform ML/AI research on it?Body: During a course review, I have provided my opinion on the course overall. I stated that MATLAB is also a great environment to program and do research for ML/AI, but my professor seemed to have taken my comments as a joke and told me "If you take a look at the statistics, then you'll see that MATLAB is not a feasible environment to innovate and research topics in ML/AI".
+As an undergraduate who is new to machine learning, I would hope to understand more and not debate on which is better (MATLAB vs Python), but rather to know whether there is a bias against MATLAB or there are actually reasons that make MATLAB not a good environment to research and program ML topics.
+"
+"['machine-learning', 'reinforcement-learning', 'temporal-difference-methods', 'importance-sampling']"," Title: What would be the importance sampling ratio for off-policy TD learning control using Q values?Body: The off-policy TD learning control using state value function from page 34 of David Silver's RL lecture is:
+$$ V(S_t) \leftarrow V(S_t) + \alpha \left( \frac{ \pi(A_t|S_t)}{\mu (A_t|S_t)} (R_{t+1} + \gamma V(S_{t+1})) - V(S_t) \right). $$
+I'd like to change this update rule to action value function Q, something like:
+$$ Q(S_t,A_t) \leftarrow Q(S_t,A_t) + \alpha \left( \frac{ \pi(A_t|S_t)}{\mu (A_t|S_t)} (R_{t+1} + \gamma Q(S_{t+1},A_{t+1})) - Q(S_t,A_t) \right). $$
+Then what is the corresponding importance sampling ratio?
+Since $A_t$ is already determined (because we are calculating $Q(S_t,A_t)$), I think $\pi(A_t|S_t)$ is definitely 1. But what about $\mu (A_t|S_t)$? Is it 1 or not?
+"
+"['feature-extraction', 'accuracy', 'signal-processing', 'feature-engineering', 'padding']"," Title: Is it a good practice to pad signal before feature extraction?Body: Is padding, before feature extraction with VGGish, a good practice?
+Our padding technique is to find the longest signal (which is a loaded .wav signal) and then, in every shorter signal, put zeros up to the size of the longest one. We need to use it because a single input size is desirable.
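+Here is a minimal sketch of the padding step (the signal lengths below are placeholders):
+import numpy as np
+
+signals = [np.random.randn(16000), np.random.randn(9000), np.random.randn(12000)]
+max_len = max(len(s) for s in signals)
+padded = np.stack([np.pad(s, (0, max_len - len(s))) for s in signals])  # zero-pad at the end
+print(padded.shape)   # (3, 16000)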
+Perhaps there are other techniques you would recommend?
+The difference in accuracy between padding before and after the feature extraction is quite big - more than 20%. Padding before extraction gives 97% accuracy.
+I'd be glad to read your feedback: please explain why that happens, and tell me whether that kind of padding is a correct approach or whether there is a better solution.
+"
+"['neural-networks', 'activation-functions', 'feedforward-neural-networks', 'multilayer-perceptrons']"," Title: About the choice of the activation functions in the Multilayer Perceptron, and on what does this depends?Body: I've read in this: F. Rosenblatt, Principles of neurodynamics. perceptrons and the theory of brain mechanisms that in the Multilayer Perceptron the activation functions in the second, third, ..., are all non linear, and they all can be different. And in the first layer, they are all linear.
+Why?
+On what does this depend?
+
+- When it is said "the neural network learns automatically", in colloquial words, what does it mean?
+
+AFAIK, one first trains the NN, and then at some point the NN learns. When does the "automatically" enter, then?
+Thanks in advance for your help.
+"
+"['neural-networks', 'reinforcement-learning', 'alphazero', 'chess', 'multi-agent-systems']"," Title: Why does Alpha Zero's Neural Network flip the board to be oriented towards the current player?Body: While reading the AlphaZero paper in preparation to code my own RL algorithm to play Chess decently well, I saw that the
+
+"The board is oriented to the perspective of the current player."
+
+I was wondering why this is the case if there are two agents (black and white). Is it because there is only one central DCNN network used for board and move evaluation (i.e. there aren't two separate networks/policies used for the respective players - black and white) in the algorithm AlphaZero uses to generate moves?
+If I were to implement a black move policy and a white move policy for the respective agents in my environment, would reflecting the board to match the perspective of the current player be necessary since theoretically the black agent should learn black's perspective of moves while the white agent should learn white's perspective of moves?
+"
+"['neural-networks', 'deep-learning', 'backpropagation']"," Title: Backpropagation - what does rate of change calculated from the partial derivatives actually relate to?Body: I understand conceptually how backpropagation works according to the chain rule, and I understand that partial derivatives calculate the rate of change of a function containing multiple variables with respect to one of those variables, the rest being fixed.
+What I'm struggling with is what the value from these partial derivatives actually relates to. I found this https://activecalculus.org/multi/S-10-2-First-Order-Partial-Derivatives.html which gives some good examples. But with a NN I'm not sure what units the results of the derivatives relate to.
+One of the examples on the website used z = f(x,y), where z is the horizontal distance travelled by a projectile, x is the initial speed in feet per second, and y is the angle. So, taking the partial derivative with respect to x, the result tells us how much the distance travelled changes with respect to the change in speed. It might be that for every one foot per second increase in the initial speed, we get an increase of 8 feet of horizontal travel, using a fixed value for y.
+But when calculating the derivatives for backpropagation, does this mean that if we get an answer of (random value) 0.08, this means that for every change of 1 to the non-static variable we would get a change of 0.08 to our output? And what units (if any) do these values relate to?
+"
+"['training', 'ai-design', 'video-classification']"," Title: Video Analysis: Providing a success score for a of a student carrying out a specific taskBody: I have an AI/ML challenge in relation to video analysis and am unsure where to start.
+I am investigating an application that will grade students performance of carrying out a task, based on analysis of a video of them carrying out the task.
+The problem has not been sufficiently defined yet, but to get a high-level idea, imagine a video showing a close-up of a trainee doctor performing stitches to close a wound. The AI model would be trained using many videos of someone performing the stitches correctly, and would score the trainee on a number of criteria.
+Most frameworks will allow detection of objects but taking a video of a person carrying out a task and assessing their success using an AI/ML model feels a step above regular object analysis.
+Assumption is we will create the training material, having professionals video themselves carrying out the task successfully, which will also be graded by other professionals to provide a rubric of scores.
+I understand this is not something that can be simply answered but an idea of where to start would be very helpful.
+
+- are there specific areas of AI i should investigate?
+- are there frameworks that can actually do this ( i have not found any)?
+
+Appreciate any advice.
+Thank you
+"
+"['deep-learning', 'classification', 'image-recognition', 'deep-neural-networks', 'knowledge-representation']"," Title: What algorithm to use to classify data by spatial relations?Body: Let's assume I have dataset of image-like 2D samples where values can be divided into few discrete levels (for example 1, 2, 3 and 4) like in the image below, where each color maps different value, from 1 to 4. Number of how many times given color occurs on the picture varies from sample to sample though.
+
+I would like to classify these images into different classes but based on the spatial relations of these values between each other (not the values themselves). By spatial relations I mean basically (left, right, up, down), for example:
+
+- If blue is above and to the right of the red
+- Another blue is above and to the left of the same red
+- Yellow is to the right of one blue (same height)
+- One green is below red
+- ...
+
+My question is, what algorithm (probably some deep neural network) should I use for this task?
+I would appreciate even just some keywords or clues of what might help.
+"
+"['natural-language-processing', 'transformer', 'bert', 'embeddings']"," Title: Embedding from Transformer-based model from paragraph or documnet (like Doc2Vec)Body: I have a set of data that contains the different lengths of sequences. On average the sequence length is 600. The dataset is like this:
+S1 = ['Walk','Eat','Going school','Eat','Watching movie','Walk'......,'Sleep']
+S2 = ['Eat','Eat','Going school','Walk','Walk','Watching movie'.......,'Eat']
+.........................................
+.........................................
+S50 = ['Walk','Going school','Eat','Eat','Watching movie','Sleep',.......,'Walk']
+
+The number of unique actions in the dataset are fixed. That means some sentences may not contain all of the actions.
+By using Doc2Vec (Gensim library particularly), I was able to extract embedding for each of the sequences and used that for later task (i.e., clustering or similarity measure)
+As transformer is the state-of-the-art method for NLP task. I am thinking if Transformer-based model can be used for similar task. While searching for this technique I came across the "sentence-Transformer"-
+https://github.com/UKPLab/sentence-transformers. But it uses a pretrained BERT model (which is probably for language but my case is not related to language) to encode the sentences. Is there any way I can get embedding from my dataset using Transformer-based model?
+"
+"['neural-networks', 'machine-learning', 'classification', 'ensemble-learning', 'random-forests']"," Title: How do I take the correct classification predictions of an ml algo (i.e. random forest/neural net) and sort the instances in each category?Body: I am trying to sort the instances within each of 5 classification categories in a dataset that has been put through both a random forest classifier and a neural network with 99% accuracy on each.
+Essentially what I am trying to do is stack a sorting algorithm on top of a random forest or a neural net (depending on which will boast a much more efficient and swift process in being integrated with a sorting algorithm post-output) so that the correctly classified instances within each category can be sorted into separate organized and easily comprehensible lists.
+I have tried researching this but all I have found are examples of ensemble learning and traditional stacking in order to achieve higher overall accuracy on an algorithm.
+I am trying to see how I can take the correct predictions from either of these algorithms and sort them by some arbitrary features that I will engineer, but all I am wondering at the moment is how to integrate a sorting algorithm in the first place into the output of a classification algorithm.
+"
+"['search', 'breadth-first-search', 'evaluation-functions', 'best-first-search']"," Title: What would happen if we set the evaluation function in the best-first search algorithm as the cost of paths taken to new nodes?Body: I am reading Artificial Intelligence: A Modern Approach.
+In Chapter 3, Section 3.3.1, The best-first search algorithm is introduced. We learn that in each iteration, this algorithm chooses which node to expand based on minimizing an evaluation function, $f(n)$, for new nodes. And if the expanded nodes are either not already reached, or they generate a less costly path to a reached state, they will be added to the frontier. So, the kind of queue used in the best-first search is a priority queue, i.e., ordering nodes by the function $f(n)$. If we set the $f(n)$ as the depth of the nodes, the queue type will be changed to FIFO (first-in-first-out), which is used in the breadth-first search algorithm.
+Therefore, we can change the nature of algorithms using the $f(n)$ function.
+I am wondering what would happen if we set $f(n)$ as the cost of the paths taken from the common parent node of new nodes to each new node $n$. Since new nodes might stem from different previous nodes, we might have to measure the cost of these nodes' path all the way back till we find a common parent of them (which, in the worst case, is the root node, indicating the initial state). In this way, each time a new node is chosen for expansion (using $f(n)$), and each time an expanded node is chosen for joining the frontier (using cost function), the choice is taken by the similar criterion since $f(n)$ and the cost function is now identical.
+What would be the nature of such an algorithm? Is measuring the cost of paths to new nodes computationally feasible? Can this be a practical algorithm?
+I read later sections and realized that the Dijkstra’s algorithm (uniform-cost search) is so similar to what I had in mind. However, it sets the evaluation function as the cost of the path from the root to the current node. I proposed grouping new nodes by their common parent and compare the cost of nodes within a certain group first. Then, after selecting out best nodes in each group, form a new group based on the selected nodes' new common parent, and do this until we reach the root node, when we will have the last group and comparing costs within that group will find the optimal node for us.
+Would the algorithm I have in mind have any advantage over Dijkstra's algorithm?
+"
+"['variational-autoencoder', 'evidence-lower-bound', 'variational-inference']"," Title: What does the approximate posterior on latent variables, $q_\phi(z|x)$, tend to when optimising VAE'sBody: The ELBO objective is described as follows
+$$ ELBO(\phi,\theta) = E_{q_\phi(z|x)}[log p_\theta (x|z)] - KL[q_\phi (z|x)||p(z)] $$
+This form of ELBO includes a regularisation term in the form of the KL divergence which drives $q_\phi(z|x) \rightarrow p(z)$ when optimising ELBO.
+However we also have the overall expression for the loglikelihood which is defined as follows (proof provided here)
+$$ p_\theta(x) = ELBO(\phi,\theta) + KL[q_\phi(z|x)||p_\theta(z|x)] $$
+Rearranging the above equation as follows
+$$ \max\limits_\phi ELBO(\phi,\theta) = \max\limits_\phi p_\theta(x) - KL[q_\phi(z|x)||p_\theta(z|x)] $$
+We can see that maximising ELBO w.r.t $\phi$ in this form causes $q_\phi(z|x) \rightarrow p_\theta(z|x)$
+These two ways of describing how VAEs learn conflict with my understanding of what happens to the approximate distribution during training.
+Is it simply trying to match both the prior $p(z)$ and the posterior $p_\theta(z|x)$, or am I missing something?
+"
+"['reinforcement-learning', 'rewards', 'multi-armed-bandits', 'upper-confidence-bound']"," Title: Difference in UCB performance when scaling the rewardsBody: I notice the following behavior when running experiments with $\epsilon$-greedy and UCB1. If the reward is kept binary (0 or 1) both algorithm's performances are on par with each other. However, if I make the reward continuous (and bounded [0, 1]) then $\epsilon$-greedy remains good but UCB1 performance plummets. As an experiment, I just scaled the reward of 1 by a factor of 1/10 which negatively influences the performance.
+I have plotted the reward values estimated by the algorithms and see that (due to the confidence interval term) UCB1 largely overestimates the rewards.
+Is there a practical trick to fix that? My guess is that the scaling coefficient $c$ in front of the upper confidence bound is meant just for this case. Nevertheless, the difference in performance is staggering to me. How do I know when and what scaling coefficient will be appropriate?
+===
+update 1 The reward distribution is very simple. There are 17 arms, for arm 3 and 4 the learning algorithm gets a reward of 1, the other arms return reward of 0. No stochasticity, the algorithm has 1000 iterations.
+If I scale the reward by a factor 1/10, for instance, then UCB1 takes a whole lot of time to start catching up with $\epsilon$-greedy
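+For reference, this is the UCB1 rule I am using, with the scaling coefficient $c$ in front of the bonus (a minimal sketch):
+import numpy as np
+
+def ucb1_select(counts, values, t, c=1.0):
+    # counts: pulls per arm, values: empirical mean reward per arm, t: total pulls so far
+    untried = np.flatnonzero(counts == 0)
+    if untried.size > 0:
+        return int(untried[0])
+    bonus = c * np.sqrt(2.0 * np.log(t) / counts)   # with rewards in [0, 0.1] this bonus dwarfs the means
+    return int(np.argmax(values + bonus))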
+"
+"['computational-complexity', 'causation', 'uncertainty-quantification', 'epistemic-uncertainty', 'philosophy-of-math']"," Title: Do we need as much information to know if we can can answer a question as we need to actually answer the question?Body: I am reading The Book of Why: The New Science of Cause and Effect by Judea Pearl, and in page 12 I see the following diagram.
+
+The box on the right side of box 5 "Can the query be answered?" is located before box 6 and box 9 which are the processes to actually answer the question. I thought that means that telling if we can answer a question would be easier than actually answering it.
+Questions
+
+- Do we need less information to tell if we can answer a problem (epistemic uncertainty) than actually answer it?
+- Do we need to try to answer the problem and then realize that we cannot answer it?
+- Do we answer the problem and at the same time provide an uncertainty estimation?
+
+"
+"['genetic-algorithms', 'evolutionary-algorithms', 'time-complexity', 'crossover-operators', 'selection-operators']"," Title: What is the most computationally efficient genetic algorithm?Body: In researching genetic algorithms, it seems that there are various methods of selection and other operator methods that can significantly change the performance. For example, this picture contains some of the methods that could be used:
+
+Presumably, you can mix & match these operators to optimize whatever problem you're trying to solve.
+What most people care about is how many iterations it takes to get to the target. This is understandable, but I've seen things that would be inefficient in real systems such as:
+
+- sorting the current population ($\mathcal{O}(n \log n)$) and picking the first $n$ members for the mating pool
+
+- appending to a constantly resizing slice to create a mating pool instead of rewriting on the current array
+
+
+What I am looking for is how to arrive at the target using the least amount of computation and memory possible. The number of iterations and the time taken to get there are secondary priorities.
+It's possible that it may be the process of picking the right operators, but what I am also considering is how parallelizable the implementation could be as well.
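+As an example of what I mean by cheaper operators, the sort in the first bullet point could be replaced by tournament selection, which costs $\mathcal{O}(k)$ per pick and never touches the whole population; this is just an illustrative sketch:
+import random
+
+def tournament_select(population, fitness, k=3):
+    # pick k random challengers and keep the fittest; no global sort of the population
+    best = random.randrange(len(population))
+    for _ in range(k - 1):
+        challenger = random.randrange(len(population))
+        if fitness[challenger] > fitness[best]:
+            best = challenger
+    return population[best]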
+"
+"['machine-learning', 'natural-language-processing', 'agi', 'singularity', 'neuroscience']"," Title: Is there any specific SW framework, libraries or algorithms (supported by any theory) designed for implementing a practical AGI system?Body: Any (AGI)-KERAS like libraries? Any deep-learning framework to develop AGI applications?
+Existing frameworks/algorithms used in NN, NLP, ML, etc. are not enough, in my opinion. Any such framework has to be based on building blocks from cognitive science, neuroscience, mathematics, artificial intelligence, computer science, psychology, sociology, etc.
+"
+"['reinforcement-learning', 'inverse-rl']"," Title: Why is it that the state visitation frequency equals the sum of state visitation frequency from initial time step to the horizon?Body: In the maximum entropy inverse reinforcement learning paper, Ziebart et al. show that the state visitation frequency $\rho(s)$ of a state $s$ can be computed as
+$$
+\rho_{\pi}(s) = \sum_{t}^{T} P(s_t=s|\pi),
+$$
+which is the sum, over time steps, of the probability that the state is visited at each time step.
+I just don't understand why it is a sum. From my perspective, a frequency should be less than one, so I would expect the average value
+$$
+\rho_{\pi}(s) = \frac{1}{T}\sum_{t}^{T} P(s_t=s|\pi).
+$$
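+To illustrate where my confusion comes from, here is a small sketch (my own, assuming known tabular policy-induced dynamics) of how I understand the first quantity being computed; it accumulates per-step probabilities, so it behaves like an expected visit count rather than a normalised frequency:
+import numpy as np
+
+def visitation_counts(P_pi, mu0, T):
+    # P_pi[s, s'] is the transition matrix induced by the policy, mu0 the initial state distribution
+    d = mu0.copy()            # distribution over states at time t
+    rho = np.zeros_like(mu0)
+    for _ in range(T):
+        rho += d              # each term of the sum is a probability <= 1
+        d = d @ P_pi
+    return rho                # sums to T, not to 1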
+"
+"['recurrent-neural-networks', 'long-short-term-memory', 'seq2seq', 'encoder-decoder']"," Title: Seq2Seq model produces repeating wordsBody: My framework is an encoder-decoder (LSTM-to-LSTM) model, similar to this post. The model basically reads a sentence and generate another sentence. But, the thing is, after a few epochs training, the model cannot produce any meaningful output, instead it keeps generating repeating words.
+The framework can translate French-English very effectively, but for my problem it generates results like this.
+Can you explain why it produces such a result? Thank you.
+
+- Here is my printed output:
+
+
+"
+"['text-classification', 'text-detection']"," Title: Is there an AI that can extract proper nouns from free text?Body: I have some free text (think: blog articles, interview transcripts, chat comments), and would like to explore the text data by analysing the proper nouns it contains.
+I know of many ways to simply look up the text against a 'list' of proper nouns. The problem with the approach is many false positives and false negatives, as well as inaccuracies where one proper noun (e.g. "John Allen") is identified as two proper nouns ("John" and "Allen"), as well as other problems, mostly to do with long or unusual proper nouns (e.g. "the Gulf of Carpentaria" - a single proper noun containing the word "of", and long names like "Joost van der Westhuizen"). These kinds of longer, non-conformist proper nouns tend to really trip up grep-style proper noun identification models.
+Does anyone know if any AI available to the public can more accurately identify proper nouns in free text?
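+For context, the closest thing I have looked at so far is off-the-shelf named-entity recognition, e.g. with spaCy (assuming the small English model has been downloaded with "python -m spacy download en_core_web_sm"); it does return multi-word spans as single entities:
+import spacy
+
+nlp = spacy.load("en_core_web_sm")
+doc = nlp("Joost van der Westhuizen crossed the Gulf of Carpentaria with John Allen.")
+for ent in doc.ents:
+    print(ent.text, ent.label_)  # multi-word names come out as single entity spans
+I am wondering whether there is anything publicly available that is noticeably more accurate than this kind of model.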
+"
+"['natural-language-processing', 'reference-request', 'chat-bots', 'resource-request']"," Title: Creating a NLP driven chatbotBody: I would like to create a chat bot for an e-commerce website that sells a wide range of general merchandize items, from t-shirts, jumpers to calculators. Its primary objective is to develop a Q&A option for visitors/potential customers, to improve engagement on the website. As such, the chat bot is required to be fairly conversational.
+I am experienced in classification et al., but know only the very basics of NLP. Can you provide suggestions on where to begin, e.g., recommended readings/sources?
+Also note that there is currently no chat bot system in place, and hence no historical conversation data of any form.
+"
+"['machine-learning', 'object-detection', 'overfitting', 'generalization']"," Title: When exactly am I overfitting -- contradicting metricsBody: I am training an object detection machine learning pipeline. Among the many metrics provided out of the box by tensorflow object detection API, I look at total_loss and DetectionBoxes_Precision/mAP@.75IOU:
+(figure omitted: total_loss and DetectionBoxes_Precision/mAP@.75IOU curves for the training and validation data)
+Here the x-axis is the number of steps i.e. model experience. The orange line is for the training data and the blue line is for the validation data.
+From the loss graph I would conclude, that at approx 2k steps overfitting starts, so using the model at approx 2k steps would be the best choice. But looking at the precision graph, training e.g. until 24k steps would be a much better model.
+Which one is the best model?
+Here, the loss and the precision metric were picked just to illustrate the dilemma; there are many more metrics available, leading to multiple conclusions about when overfitting actually starts.
+"
+"['reinforcement-learning', 'state-of-the-art', 'algorithm-request', 'contextual-bandits']"," Title: What are the state-of-the-art learning algorithms for contextual bandits with stochastic rewardsBody: I am building a solution for an environment with stochastic rewards in an online setting. I am wondering what the state of the art is in this setting.
+Is it $\epsilon$-greedy (with logistic regression), LinUCB (with ridge regression), or Thompson Sampling (with some approximator)? Could you maybe point me to the relevant papers/articles?
+"
+"['papers', 'math', 'attention', 'evidence-lower-bound']"," Title: How is the variational lower bound for hard attention derived in Show, Attend and TellBody: How is the jump from line 1 to line 2 done in equation 10 of Show, Attend and Tell?
+
+While we're at it, another thing that might be muddying the waters for me is that I'm not clear on what the sum is over. I know that $s$ is indexed as $s_{t,i}$, but the one-hot trick from line 2 to 3 makes me believe that the sum is over just $i$.
+"
+"['deep-learning', 'datasets', 'data-preprocessing', 'data-augmentation']"," Title: How do I increase the size of an (almost) balanced dataset?Body: I am trying to add more data points in my (almost) balanced dataset for training my neural network. I have come across techniques such as SMOTE or Random Over Sampling, but they work best for imbalanced data (as they balance the dataset). How can I do this and is it even worth it?
+P.S. I know copying the same data points and appending them at the end doesn't add much value, but can we do it, and can it help to increase the prediction accuracy?
+"
+"['reinforcement-learning', 'policy-gradients', 'proximal-policy-optimization', 'on-policy-methods', 'continuous-action-spaces']"," Title: PPO in continuous control not workingBody: I have PPO agent for discrete action space for LunarLander-v2
env in gym and it works well. However, when i am trying to solve continuous version of the same env - LunarLanderContinuous-v2
it is totally failing. I guess i made some mistakes in converting algorithm to continuous version. So, my steps of changing at algorithm are:
+
+- Change network: return
mu
and var
of shape n_actions
. I have 2 different last layers for that, for mu
i return Tanh
of logits and for var
i return Softplus
of logits.
+- Choosing action: sampling action from normal distribution with expectation
mu
and variance var
- torch.distributions.multivariate_normal.MultivariateNormal(torch.squeeze(mu), torch.torch.diag_embed(var))
+- For
log
of action probability i am using dist.log_prob(actions)
+
+With this small changes my algorithm totally doesn't work. Is it right steps to convert algorithm with discrete action space to algorithm with continuous action space? I really confused, because my PPO for discrete action space work very well and with only this changes it is failing. Could you please suggest what i am doing wrong here?
+"
+"['reinforcement-learning', 'q-learning', 'reward-functions', 'normalisation', 'standardisation']"," Title: How to scale all positive continuous reward?Body: My RL project has all positive continuous rewards for every step and the goal is to have the maximum cumulative reward (episodic reward). The problem is that the rewards are too close and all between 5 and 6, therefore achieving the optimum episodic reward will be harder.
+What scaling methods are recommended? (like min-max scaling or reward ** 3)
+How can I emphasize the episodic reward?
+"
+"['neural-networks', 'generative-adversarial-networks', 'models']"," Title: Which approach best suits vector encodings?Body: I want to build a model that when given two vectors, outputs the probability of one vector being the encoded form of the other. I have 2 strategies for this: (Dataset available)
+
+- I can directly feed them in concatenation to a neural network and take the output as the probability.
+
+- I can train a conditional GAN with the conditional vector being the encoded vector and using the original vector as the generated one. In this case, the discriminator works as the network that I train in the first approach.
+
+
+Which approach is better? Am I thinking in the right direction?
+"
+"['classification', 'long-short-term-memory', 'time-series']"," Title: Recommended Time serie forecasting model for Fibonacci levels classificationBody: I have a set of time series data which gives me fibonacci levels and the duration at which the value is at this level.
+Data structure to look like:
+Date / Duration (minutes) / Level
+
+201201 / 380 / 2
+.....
+
+210422 / 400 / 4
+
+I'd like to create a NN model (LSTM maybe) that would forecast the next level, the probability it reaches it and this for several steps ahead (1 step = 400 minutes).
+Which time series forecasting model would you recommend ?
+Thanks in advance.
+"
+"['reinforcement-learning', 'q-learning', 'deep-rl', 'dqn']"," Title: In DQN, would it be cheaper to have $N$ neural networks with a single real-valued output, one for each of the $N$ actions?Body: In the classical examples of deep q-learning, I often see neural networks in which the input represents the state of the agent, while the output is a tuple with all the values of $Q(s, a)$ predicted for all the possible $N$ actions.
+Would it be cheaper to have $N$ neural networks with a single real-valued output, one for each of the $N$ actions?
+With cheaper I mean cheaper in terms of the time complexity of a single training step of the network.
+"
+['faster-r-cnn']," Title: How to design training loop in RPN?Body: I have a short question. I understand the concept of RPN but one small details keeps me from implementing it. How should I design the training loop given that I have to use only a subset of anchor boxes (128 positive and 128 negative). In other words, if the ouput of reg layer is a map of every bounding box, how do I update only the bounding boxes that match my 128 positive/128 negative samples
+"
+"['reinforcement-learning', 'definitions', 'markov-decision-process', 'regret']"," Title: Is there any reasonable notion of regret for infinite horizon discounted MDPs?Body: I am thinking about episodic MDPs. Usually, in episodic MDPs, it seems that we have a finite fixed horizon per episode and no discount factor. Then, a very intuitive notion of regret after $T$ episodes is to sum over the difference of optimal expected return and achieved expected return.
+I was wondering about notions of regret for infinite horizon discounted MDPs. It is not clear to me what a reasonable notion of regret for this setting would be, and I am also not aware of any standard definition of regret in this setting.
+Maybe, as a justification for infinite horizon episodic MDPs, this quote by Littman in his paper: Markov games as a framework for multi-agent reinforcement learning
+
+As in MDP's, the discount factor can be thought of as the probability that the game will be allowed to continue after the current move. It is possible to define a notion of undiscounted rewards [Schwartz, 1993], but not all Markov games have optimal strategies in the undiscounted case [Owen, 1982]. This is because, in many games, it is best to postpone risky actions indefinitely. For current purposes, the discount factor has the desirable effect of goading the players into trying to win sooner rather than later.
+
+"
+"['deep-learning', 'convolutional-neural-networks', 'terminology', 'convolution', 'books']"," Title: What is meant by ""real-valued argument"" in this context of the convolution operation?Body: Consider the following statement from Deep Learning book (p. 327, chapter 9: Convolutional Networks)
+
+In its most general form, convolution is an operation on two functions
+of a real-valued argument.
+
+Suppose $f$ and $g$ are functions on which I want to apply convolution operation. What is meant by two functions of a "real-valued argument" in this context?
+Does it mean $f$ and $g$ are real-valued functions? Or does it mean $f$ and $g$ are real functions? or any other?
+
+- Real-valued function: Function whose codomain is a subset of real numbers
+
+- Real function: Function whose domain and codomain are a subset of real numbers.
+
+
+"
+"['classification', 'long-short-term-memory', 'time-series', 'algorithm-request', 'model-request']"," Title: What kind of neural network should I build to classify each instance of a time series sequence?Body: Let's say I have the time-series dataset below-left. I would like to train a model in such a way that, if I feed the model with an input like the test sequence below, it should be able to classify each sample with the correct class label.
+ Training Sequence: Test Sequence:
+
+Time, Bitrate, Class Time, Bitrate Predicted Class
+ 0, 312, 1 0, 234 -----> 0
+ 0.3, 319, 1 0.2, 261 -----> 0
+ 0.5, 227, 0 0.4, 277 -----> 0
+ 0.6, 229, 0 0.7, 301 -----> 1
+ 0.7, 219, 0 0.8, 305 -----> 1
+ 0.8, 341, 1 0.9, 343 -----> 1
+ 0.9, 281, 0 1.0, 299 -----> 0
+ ... ...
+
+So, what kind of neural network should I build to classify each instance of a time series sequence?
+"
+"['neural-networks', 'reference-request', 'weights', 'weights-initialization']"," Title: Which methods for weight initialization in Neural Networks are currently common practice?Body: I am currently researching the topic of weight initialization methods for (deep) neural networks and I'm a little stuck. The result of my work should be an overview of methods that are currently in use.
+I already collected information about different methods (Xavier/He, LSUV, Pre-Training, etc.), but I was wondering if anyone has a method that comes to mind (maybe recently developed) that I should look at more closely?
+"
+"['alpha-beta-pruning', 'expectiminimax']"," Title: How to alpha-beta pruning to expectiminimaxBody:
+I have this problem above and I'm trying to think of how to apply alpha-beta pruning to the above. The algorithm states that on you're opponents turn the (expecti turn) you just return the lowest value, but does that mean you apply the probabilities to those values? So for the far left you'd get 2 as the largest value then multiply that by 0.5, but then that set's $\\beta$ in the expecti node to $0.5*2=1$ and when it goes into the branch to the right it's comparing values without the probabilities applied to it when updating $\beta$.
+"
+"['convolutional-neural-networks', 'u-net', 'semantic-segmentation']"," Title: What does the ""number of channels"" correspond to in U-Net?Body: I'm studying the U-Net CNN architecture. I'm new to CNNs and am confused regarding the "number of channels".
+Referring to the U-Net diagram, the input image is convolved with a 3x3 mask which generates a 570x570 output. This output image is then convolved again by a 3x3 mask to produce a 568 x 568 signal. However, what do the 64's correspond to?
+The U-net says something about a multi-channel feature map. But how does convolving an image by a 3x3 mask results in a "64".
+
+"
+"['reinforcement-learning', 'incremental-learning', 'online-learning']"," Title: Why isn't RL considered a continual learning strategy itself?Body: I have read about methods that apply continual learning strategies to reinforcement learning.
+Since reinforcement learning also learns step by step (i.e., task by task, in a sense) during the training phase, why isn't it itself considered a continual learning strategy?
+Of course, I understand that if an agent catastrophically forgets previously learned tasks, there is a need to prevent this and therefore develop strategies to mitigate catastrophic forgetting, but my question is more about the definition. If continuous learning (or online learning) is about learning one task at a time, and RL somehow does this, why is it not considered a continual learning strategy (regardless of the fact that it may not be as effective)?
+To clarify, I haven't read anywhere the claim that RL is not a CL approach, but also none that it would be. Only the fact that CL methods are proposed for RL gives me the impression that RL is not considered an approach. Nor have I seen anyone mention RL for this purpose. I'm just wondering why that is.
+"
+"['neural-networks', 'papers', 'neurons', 'hopfield-network']"," Title: Why some dynamical systems in the form of ODE system called Neural networks in the 90sBody: I am familiar with the currently popular neural network in deep learning, which has weights and is trained by gradient descent.
+However, I found many papers entitled "Neural networks for solving XXX optimization problems." These papers were popular from the 1980s to the 2000s.
+For example, the first one is that "Neural network to solve linear programming problems" 1.
+Later, Kennedy et al. used "Neural network" to solve nonlinear programming problems 2.
+I summarize the difference between the current popular neural network and the "Neural networks":
+
+- They do not have parameter weights and biases to train or to learn from data.
+- They used a circuit diagram to present the model.
+- The model can be simplified as an ODE system and has a Lyapunov function to prove stability.
+
+So, my question is, why these ODE systems are called "Neural networks"?
+Reference:
+1: J. J. Hopfield, D. W. Tank, “neural” computation of decisions in optimization problems, Biological265 cybernetics 52 (3) (1985) 141–152.
+2: M. P. Kennedy, L. O. Chua, Neural networks for nonlinear programming, IEEE Transactions on Circuits and Systems 35 (5) (1988) 554–562.
+"
+"['constraint-satisfaction-problems', 'norvig-russell']"," Title: Understanding conflict set generation for conflict directed backjumpingBody: I was reading Constraint Satisfaction Problem chapter from Artificial Intelligence 3rd ed book by Peter Norvig et al. On page 219, section 6.3 it explains computation of conflict set for conflict directed backjumping as follows:
+
+In our example, $SA$ fails, and its conflict set is (say) $\{WA, NT,Q\}$. We backjump to $Q$,and $Q$ absorbs the conflict set from $SA$ (minus $Q$ itself, of course) into its own direct conflict set, which is $\{NT, NSW\}$; the new conflict set is $\{WA, NT, NSW\}$. That is, there is no solution from $Q$ onward, given the preceding assignment to $\{WA, NT, NSW\}$. Therefore, we backtrack to $NT$, the most recent of these. NT absorbs $\{WA, NT, NSW\}−\{NT\}$ into its own direct conflict set $\{WA\}$,giving $\{WA, NSW\}$ (as stated in the previous paragraph). Now the algorithm backjumps to $NSW$, as we would hope.
+To summarize: let $X_j$ be the current variable, and let $conf(X_j)$ be its conflict set. If every possible value for $X_j$ fails, backjump to the most recent variable $X_i$ in $conf(X_j)$, and set
+$conf(X_i) ← conf(X_i) ∪ conf(X_j) −\{X_i\}$.
+
+I didn't get the line:
+
+Therefore, we backtrack to $NT$, the most recent of these.
+
+How is $NT$ most recent of $Q$'s conflict set $\{WA,NT,NSW\}$?
+PS: Here is the map coloring example the book discusses:
+
+"
+"['machine-learning', 'reinforcement-learning', 'q-learning', 'algorithm', 'sarsa']"," Title: Reinforcement learning for rearranging the mobile home screen icon layout: what inputs/states do I need to pass into the algorithm?Body: I have a problem where I need to rearrange a particular user's mobile home screen icon layout. Let's say that the social media app usage of a user is high compared to other app usage. So I need the reinforcement algorithm to process this information and send back the instructions to the android operating system as to how the icons needs to be arranged. To address this problem, I have chosen three algorithms:
+
+- Q-learning.
+- State-Action-Reward-State-Action.
+- Deep Deterministic Policy Gradient.
+
+I have decided to first consider only Q-learning, so I am trying to understand the states, rewards, and the actions I need to pass in order to make this algorithm work.
+The principles I have considered are:
+– The environment is the mobile device operating system.
+– Moving the apps up in the list depending on their usage can be the action where the app can be moved left, right, up or down.
+– The reward can be a periodic reward, check if the user has rearranged an app which was given prominence in the list by the algorithm and receive a negative feedback if the user has rearranged the icon position or if it is in the same position receive a positive reward.
+The initial challenge I am facing is to understand what inputs/states I need to pass into the algorithm and is there any reinforcement learning library I can use to mimic such an environment?
+Are there any resources or papers I can use to solve this problem I am facing?
+"
+"['speech-recognition', 'markov-chain', 'hidden-markov-model', 'gaussian-mixture-models']"," Title: Looking for help on initializing continuous HMM model for word level ASRBody: I have been studying HMM implementation approaches on ASR for the last couple of weeks. This probabilistic model is very new to me. I am currently using a Python package called Pomegranate to implement an ASR model of my own for the Librispeech dataset. I plan to use my own small-size dataset once I feel comfortable assessing the results of this one with Librispeech.
+My problem: Librispeech (and my own dataset) has word-level transcription labels of the audio. The audio files have several word utterances each. Upon generating the MFCCs, I am not sure how to initialize the HMM matrices since the MFCCs, to my knowledge, capture phoneme level context at 10ms windows whereas I am trying to use the current word-level labels. There is no match up to which word each MFCC window belongs. Are the unique words in my corpus to be considered as the individual states in the transition probability matrix? I’m missing the point of how the extracted MFCCs are fed to the model for initialization and/or training?
+I’ve been stumped on this for several days and I can’t seem to understand a clear cut explanation in the literature I have read. Any advice and help is very very much appreciated.
+"
+"['neural-networks', 'architecture', 'generalization']"," Title: How can we get a differentiable neural network to count things?Body: Imagine I have images with apples in them. I want to train a neural network which can count the number of apples in each image.
+BUT, I don't want to use a detector, then count the number of bounding boxes. I'm interested in knowing if there's a way to bake this sort of logic into a differentiable neural network.
+The simplest variation of this problem might be: I have an input vector $x \in \{0, 1\}^N$ and I want to count the number of 1s in it. I can make a single layer neural network, setting all the weights to 1, bias to 0, and linear activation. And my answer pops out. But how would I train the network to do this from scratch? Sure, I could regress to an output in $[0,1]$ and multiply the result by $N$, then the network is differentiable. But did the model really learn how to count? If so, would this behaviour be generalisable to counting multiple types of objects at once? Would it generalise to inputs where there can be any number of said object (like an image can have many apples in it, despite the size of the image)?
+I want to know if there's a model which can learn how to count.
+Here's another way I'm thinking about it: I can look at an aerial view of pine trees and say "yeah maybe there are 30 trees", but then I can do the task of looking at and identifying each tree individually, incrementing a counter, and making sure not to revisit that tree. The latter is what I consider truly "counting".
+"
+"['markov-decision-process', 'policies', 'optimality']"," Title: Are optimal policies always deterministic, or can there also be optimal policies that are stochastic?Body: Let $M$ be an MDP with two states, $A$ and $B$, where $A$ is the starting state, and you always transit to the final state $B$ using two possible actions. $A_1$ gives you rewards that are normally distributed $\mathcal{N}(0, 1)$ and $A_2$ gives you rewards that are normally distributed $\mathcal{N}(0, 3)$.
+How many optimal policies do we have? What is the optimal value of the states? Is any policy preferred over the other? Why? If you prefer one over the other, is there some way to detect it using what we have studied?
+In my view, there are infinite policies that give the same expected reward.
+$\pi_\alpha$ be a policy that is stochastic, which maps as follows - $\pi_\alpha(s, A_1) = \alpha $ and $ \pi_\alpha (s, A_2) = 1 - \alpha$ for $ \alpha \in [0,1]$. It is clear that, for each $\alpha$, we get infinite policies but have the same expected return.
+But, according to some google searches (for example, here it says optimal policies are generally deterministic), I found optimal policies are always deterministic. Hence this implies there are only 2 policies, i.e., either take action $A_1$ or $A_2$ but not probabilistic.
+So, my doubt is: what are the optimal policies here? Is it deterministic (only 2 policies) or stochastic (infinite)? Or is it an assumption that optimal policies are deterministic?
+"
+"['convolutional-neural-networks', 'autoencoders', 'anomaly-detection']"," Title: How to train a model for 1 image class to detect anomaly?Body: I want to train a model with python over the images, and these images are for a metal product.
+my aim is to detect the defects, to notice if a product is a failure.
+what kind of architecture do you suggest? should I train over the class? or should I use an autoencoder?
+"
+"['reinforcement-learning', 'deep-rl']"," Title: PPO agent for vehicle control does not learn to stop at traffic lightsBody: I have built a custom RL environment with gym
, which simulates the RL vehicle and potential vehicles in front of the RL vehicle as well as traffic lights (including their state; red, yellow, green). I trained a PPO agent from stable_baselines3
without considering setting of hyperparameters and the agent learned to follow the vehicle in front of it without crashing. However it does not learn to stop at red lights after extensive training.
+I tried training it without surrounding vehicles to get more interactions of the RL vehicle with traffic lights and this helped the agent to learn stopping a red light. However when I then continue training of the agent in a new environment with surrounding traffic, the agent again un-learns
stopping at red lights.
+I am still a novice with RL and do not understand as to why this happens and what I can do here. Should I set hyperparameters? Or try a different model? Or should I exchange the default policy of the PPO model?
+"
+"['neural-networks', 'reinforcement-learning', 'tensorflow', 'python', 'softmax']"," Title: Exploration for softmax should be binary or continuous softmax?Body: Maybe it's silly to ask but for random exploration in an RL for choosing discrete action, that in the neural network last layer softmax will be used, what random samples should we provide? binary like (0,0,1,0,0,0) or continuous softmax like (0.1, 0.15, 0.45, 0.25, 0,5, 0.1)??
+if the answer is continuous, what algorithm do you suggest? like generating random numbers between 0 and 1 and then using softmax? (this algorithm mostly provides close numbers and I think it's not the correct way)
+"
+"['neural-networks', 'machine-learning', 'architecture', 'model-request', 'constrained-optimization']"," Title: Which neural network can I use to solve this constrained optimisation problem?Body: Let $\mathcal{S}$ be the training data set, where each input $u^i \in \mathcal{S}$ has $d$ features.
+I want to design an ANN so that the cost function below is minimized (the sum of the square of pairwise differences between model outputs) and the given constraint is satisfied, where $w$ is the ANN model parameter vector.
+\begin{align}
+\min _{w}& \sum_{\{i, j\} \in \mathcal{S}}\left(f\left(w, u^{i}\right)-f\left(w, u^{j}\right)\right)^{2} \\
+&f\left(w, u^{i}\right) \geq q_{\min }, \quad i \in \mathcal{S}
+\end{align}
+What kind of ANN is suitable for this purpose?
+"
+"['convolutional-neural-networks', 'image-processing', 'data-augmentation']"," Title: Should one rescale (normalize) image before or after data augmentation?Body: During image preprocessing pipeline, should one rescale each pixel value to [0, 1] by dividing 255 first, and then perform data transformation such as color distortion, gaussian blur? or vice versa?
+I believe for correctness, it may depend on the particular image transformation algorithm, or if you use some libraries. Is there any general advice? If anyone has experience trying both before or after, please share, particularly if you use external library or framework that may make implicit assumption to the value range of the pixel.
+"
+"['machine-learning', 'data-preprocessing', 'data-augmentation', 'training-datasets']"," Title: Is creating dataset only by augmentation a bad practice?Body: I wonder if creating data set only by augmentation base images is a bad practice.
+I mean the situation when you have to train net to predict really simple patterns, for example printed-like digits. And all digits from specific group looks basically the same, for example all one's look the same and so on. The only difference is rotation/translation etc. in the image.
+Is it bad way to create data set by taking digit image and randomly rotate, translate and maybe erode/dilate it?
+My intuition tells me that something's wrong with that approach, but I cannot find any reason why it should be wrong.
+"
+"['neural-networks', 'deep-learning', 'deep-neural-networks']"," Title: What is the difference between multi-head and normal output?Body: Let's say that I have a neural network with 2 heads. The first consists of X neurons. The second consists of Y neurons. I have these 2 heads because I want to predict 2 different variables. And I can see the loss for each head during training.
+Now, let's say that I have only one head that consists of X+Y neurons. I can interpret the output because I know that the first X neurons describe some variable and the latter Y neurons describe the second variable.
+I want to know if there is any difference between these 2 methods (maybe in performance or something). What are the pros and cons? Are there any advantages of one method over another for some particular tasks?
+"
+"['reinforcement-learning', 'deep-rl', 'actor-critic-methods', 'ddpg', 'gym']"," Title: How to deal with a moving target in the Lunar Lander environment with DDPG?Body: I have noticed that DDPG does rather well at solving environments with a static target.
+For example, the default of Lunar Lander, the flags do not change position. So the DDPG model learns how to get to the center of the screen and land fairly quickly.
+As soon as I start moving the landing position around randomly and adding the landing position as an input to the model, the model has an extremely hard time putting this connection together.
+A few questions/points about adding this complexity to the environment:
+
+- Would more nodes per layer or more layers help in figuring out this connection? I have tested this but seems the bigger I go, the harder it is to learn anything.
+- Is it a common RL AI issue that it has a hard time connecting data?
+- I realize that I could change the environment to always have a static target and instead change the position of the lunar lander ship, which in effect accomplishes the same thing, but want to know if we could solve it with moving target
+- Is there any good documentation on Actor/Critic analyzing models? I have some results where my critic target is falling out but my critic loss is going down nicely. At the same time my actor target is going up and up and eventually plateau. It is hard to really understand what is happening and would be great to understand actor loss vs critic loss vs critic target.
+
+Essentially, I added a random int (left side of flags), added 1 to get x position middle of landing and add 1 more to get distance of flags to be 3 out of 11 chunks.
+rand_chunk = random.randint(0, CHUNKS-3)
+self.x_pos_middle_landing = chunk_x[rand_chunk + 1]
+self.helipad_x1 = chunk_x[rand_chunk]
+self.helipad_x2 = chunk_x[rand_chunk + 2]
+height[rand_chunk] = self.helipad_y
+height[rand_chunk + 1] = self.helipad_y
+height[rand_chunk + 2] = self.helipad_y
+
+Old State:
+state = [
+ (pos.x - VIEWPORT_W/SCALE/2) / (VIEWPORT_W/SCALE/2),
+ (pos.y - (self.helipad_y+LEG_DOWN/SCALE)) / (VIEWPORT_H/SCALE/2),
+ vel.x*(VIEWPORT_W/SCALE/2)/FPS,
+ vel.y*(VIEWPORT_H/SCALE/2)/FPS,
+ self.lander.angle,
+ 20.0*self.lander.angularVelocity/FPS,
+ 1.0 if self.legs[0].ground_contact else 0.0,
+ 1.0 if self.legs[1].ground_contact else 0.0
+ ]
+
+Added to State, same as state[0] but using middle of 3 for landing
+(self.x_pos_middle_landing - VIEWPORT_W/SCALE/2) / (VIEWPORT_W/SCALE/2)
+
+And update the obs space from 8 spaces to 9
+self.observation_space = spaces.Box(-np.inf, np.inf, shape=(9,), dtype=np.float32)
+
+Rewards need to be updated, Old Rewards:
+shaping = \
+ - 100*np.sqrt(state[0]*state[0] + state[1]*state[1]) \
+ - 100*np.sqrt(state[2]*state[2] + state[3]*state[3]) \
+ - 100*abs(state[4]) + 10*state[6] + 10*state[7] # And ten points for legs contact, the idea is if you
+
+New Rewards:
+shaping = \
+ - 100*np.sqrt((state[0]-state[8])*(state[0]-state[8]) + state[1]*state[1]) \
+ - 100*np.sqrt(state[2]*state[2] + state[3]*state[3]) \
+ - 100*abs(state[4]) + 10*state[6] + 10*state[7] # And ten points for legs contact, the idea is if you
+
+DDPG Model
+ import tensorflow as tf
+import matplotlib.pyplot as plt
+import numpy as np
+import gym
+from tensorflow.keras.models import load_model
+import os
+import envs
+import time
+import scipy.stats as stats
+# from stable_baselines.common.policies import MlpPolicy, MlpLstmPolicy
+from stable_baselines.common.vec_env import SubprocVecEnv, DummyVecEnv, VecCheckNan
+from stable_baselines.common import set_global_seeds, make_vec_env
+from stable_baselines.ddpg import DDPG
+from stable_baselines.ddpg.policies import MlpPolicy
+# from stable_baselines.sac.policies import MlpPolicy
+from stable_baselines import PPO2, SAC
+from stable_baselines.common.noise import NormalActionNoise, OrnsteinUhlenbeckActionNoise, AdaptiveParamNoiseSpec
+
+def make_env(env_id, rank, seed=0):
+    # helper assumed by the original snippet (not shown there): one env instance per subprocess
+    def _init():
+        env = gym.make(env_id)
+        env.seed(seed + rank)
+        return env
+    set_global_seeds(seed)
+    return _init
+
+if __name__ == '__main__':
+    env_id = 'LunarLanderContinuous-v2'  # placeholder id; the custom random-target env would need its own registration
+    num_cpu = 1  # Number of processes to use
+    env = SubprocVecEnv([make_env(env_id, i) for i in range(num_cpu)])
+    env = VecCheckNan(env, raise_exception=True, check_inf=True)
+    n_actions = env.action_space.shape[-1]
+
+    #### DDPG
+    policy_kwargs = dict(act_fun=tf.nn.sigmoid, layers=[512, 512, 512], layer_norm=False)
+    param_noise = None
+    # action_noise = OrnsteinUhlenbeckActionNoise(mean=np.zeros(n_actions), sigma=float(0.5) * np.ones(n_actions))
+    action_noise = NormalActionNoise(0, 0.1)
+    model = DDPG(MlpPolicy, env, policy_kwargs=policy_kwargs,
+                 param_noise=param_noise, action_noise=action_noise, verbose=1)
+
+    # Train Model
+    model.learn(total_timesteps=int(3e5))
+    model.save('./models/lunar_lander')
+
+Full Code:
+ """
+Rocket trajectory optimization is a classic topic in Optimal Control.
+
+According to Pontryagin's maximum principle it's optimal to fire engine full throttle or
+turn it off. That's the reason this environment is OK to have discreet actions (engine on or off).
+
+The landing pad is always at coordinates (0,0). The coordinates are the first two numbers in the state vector.
+Reward for moving from the top of the screen to the landing pad and zero speed is about 100..140 points.
+If the lander moves away from the landing pad it loses reward. The episode finishes if the lander crashes or
+comes to rest, receiving an additional -100 or +100 points. Each leg with ground contact is +10 points.
+Firing the main engine is -0.3 points each frame. Firing the side engine is -0.03 points each frame.
+Solved is 200 points.
+
+Landing outside the landing pad is possible. Fuel is infinite, so an agent can learn to fly and then land
+on its first attempt. Please see the source code for details.
+
+To see a heuristic landing, run:
+
+python gym/envs/box2d/lunar_lander.py
+
+To play yourself, run:
+
+python examples/agents/keyboard_agent.py LunarLander-v2
+
+Created by Oleg Klimov. Licensed on the same terms as the rest of OpenAI Gym.
+"""
+
+
+import sys, math
+import numpy as np
+import random
+
+import Box2D
+from Box2D.b2 import (edgeShape, circleShape, fixtureDef, polygonShape, revoluteJointDef, contactListener)
+
+import gym
+from gym import spaces
+from gym.utils import seeding, EzPickle
+
+FPS = 50
+SCALE = 30.0 # affects how fast-paced the game is, forces should be adjusted as well
+
+MAIN_ENGINE_POWER = 13.0
+SIDE_ENGINE_POWER = 0.6
+
+INITIAL_RANDOM = 1000.0 # Set 1500 to make game harder
+
+LANDER_POLY =[
+ (-14, +17), (-17, 0), (-17 ,-10),
+ (+17, -10), (+17, 0), (+14, +17)
+ ]
+LEG_AWAY = 20
+LEG_DOWN = 18
+LEG_W, LEG_H = 2, 8
+LEG_SPRING_TORQUE = 40
+
+SIDE_ENGINE_HEIGHT = 14.0
+SIDE_ENGINE_AWAY = 12.0
+
+VIEWPORT_W = 600
+VIEWPORT_H = 400
+
+
+class ContactDetector(contactListener):
+ def __init__(self, env):
+ contactListener.__init__(self)
+ self.env = env
+
+ def BeginContact(self, contact):
+ if self.env.lander == contact.fixtureA.body or self.env.lander == contact.fixtureB.body:
+ self.env.game_over = True
+ for i in range(2):
+ if self.env.legs[i] in [contact.fixtureA.body, contact.fixtureB.body]:
+ self.env.legs[i].ground_contact = True
+
+ def EndContact(self, contact):
+ for i in range(2):
+ if self.env.legs[i] in [contact.fixtureA.body, contact.fixtureB.body]:
+ self.env.legs[i].ground_contact = False
+
+
+class LunarLander(gym.Env, EzPickle):
+ metadata = {
+ 'render.modes': ['human', 'rgb_array'],
+ 'video.frames_per_second' : FPS
+ }
+
+ continuous = False
+
+ def __init__(self):
+ EzPickle.__init__(self)
+ self.seed()
+ self.viewer = None
+
+ self.world = Box2D.b2World()
+ self.moon = None
+ self.lander = None
+ self.particles = []
+
+ self.prev_reward = None
+
+ # useful range is -1 .. +1, but spikes can be higher
+ self.observation_space = spaces.Box(-np.inf, np.inf, shape=(9,), dtype=np.float32)
+
+ if self.continuous:
+ # Action is two floats [main engine, left-right engines].
+ # Main engine: -1..0 off, 0..+1 throttle from 50% to 100% power. Engine can't work with less than 50% power.
+ # Left-right: -1.0..-0.5 fire left engine, +0.5..+1.0 fire right engine, -0.5..0.5 off
+ self.action_space = spaces.Box(-1, +1, (2,), dtype=np.float32)
+ else:
+ # Nop, fire left engine, main engine, right engine
+ self.action_space = spaces.Discrete(4)
+
+ self.reset()
+
+ def seed(self, seed=None):
+ self.np_random, seed = seeding.np_random(seed)
+ return [seed]
+
+ def _destroy(self):
+ if not self.moon: return
+ self.world.contactListener = None
+ self._clean_particles(True)
+ self.world.DestroyBody(self.moon)
+ self.moon = None
+ self.world.DestroyBody(self.lander)
+ self.lander = None
+ self.world.DestroyBody(self.legs[0])
+ self.world.DestroyBody(self.legs[1])
+
+ def reset(self):
+ self._destroy()
+ self.world.contactListener_keepref = ContactDetector(self)
+ self.world.contactListener = self.world.contactListener_keepref
+ self.game_over = False
+ self.prev_shaping = None
+
+ W = VIEWPORT_W/SCALE
+ H = VIEWPORT_H/SCALE
+
+ # terrain
+ CHUNKS = 11
+ height = self.np_random.uniform(0, H/2, size=(CHUNKS+1,))
+ chunk_x = [W/(CHUNKS-1)*i for i in range(CHUNKS)]
+ rand_chunk = random.randint(0, CHUNKS-3)
+ self.x_pos_middle_landing = chunk_x[rand_chunk + 1]
+ self.helipad_x1 = chunk_x[rand_chunk]
+ self.helipad_x2 = chunk_x[rand_chunk + 2]
+ self.helipad_y = H/4
+ height[rand_chunk] = self.helipad_y
+ height[rand_chunk + 1] = self.helipad_y
+ height[rand_chunk + 2] = self.helipad_y
+
+ self.moon = self.world.CreateStaticBody(shapes=edgeShape(vertices=[(0, 0), (W, 0)]))
+ self.sky_polys = []
+ for i in range(CHUNKS-1):
+ p1 = (chunk_x[i], height[i])
+ p2 = (chunk_x[i+1], height[i+1])
+ self.moon.CreateEdgeFixture(
+ vertices=[p1,p2],
+ density=0,
+ friction=0.1)
+ self.sky_polys.append([p1, p2, (p2[0], H), (p1[0], H)])
+
+ self.moon.color1 = (0.0, 0.0, 0.0)
+ self.moon.color2 = (0.0, 0.0, 0.0)
+
+ initial_y = VIEWPORT_H/SCALE
+ self.lander = self.world.CreateDynamicBody(
+ position=(VIEWPORT_W/SCALE/2, initial_y),
+ angle=0.0,
+ fixtures = fixtureDef(
+ shape=polygonShape(vertices=[(x/SCALE, y/SCALE) for x, y in LANDER_POLY]),
+ density=5.0,
+ friction=0.1,
+ categoryBits=0x0010,
+ maskBits=0x001, # collide only with ground
+ restitution=0.0) # 0.99 bouncy
+ )
+ self.lander.color1 = (0.5, 0.4, 0.9)
+ self.lander.color2 = (0.3, 0.3, 0.5)
+ self.lander.ApplyForceToCenter( (
+ self.np_random.uniform(-INITIAL_RANDOM, INITIAL_RANDOM),
+ self.np_random.uniform(-INITIAL_RANDOM, INITIAL_RANDOM)
+ ), True)
+
+ self.legs = []
+ for i in [-1, +1]:
+ leg = self.world.CreateDynamicBody(
+ position=(VIEWPORT_W/SCALE/2 - i*LEG_AWAY/SCALE, initial_y),
+ angle=(i * 0.05),
+ fixtures=fixtureDef(
+ shape=polygonShape(box=(LEG_W/SCALE, LEG_H/SCALE)),
+ density=1.0,
+ restitution=0.0,
+ categoryBits=0x0020,
+ maskBits=0x001)
+ )
+ leg.ground_contact = False
+ leg.color1 = (0.5, 0.4, 0.9)
+ leg.color2 = (0.3, 0.3, 0.5)
+ rjd = revoluteJointDef(
+ bodyA=self.lander,
+ bodyB=leg,
+ localAnchorA=(0, 0),
+ localAnchorB=(i * LEG_AWAY/SCALE, LEG_DOWN/SCALE),
+ enableMotor=True,
+ enableLimit=True,
+ maxMotorTorque=LEG_SPRING_TORQUE,
+ motorSpeed=+0.3 * i # low enough not to jump back into the sky
+ )
+ if i == -1:
+ rjd.lowerAngle = +0.9 - 0.5 # The most esoteric numbers here, angled legs have freedom to travel within
+ rjd.upperAngle = +0.9
+ else:
+ rjd.lowerAngle = -0.9
+ rjd.upperAngle = -0.9 + 0.5
+ leg.joint = self.world.CreateJoint(rjd)
+ self.legs.append(leg)
+
+ self.drawlist = [self.lander] + self.legs
+
+ return self.step(np.array([0, 0]) if self.continuous else 0)[0]
+
+ def _create_particle(self, mass, x, y, ttl):
+ p = self.world.CreateDynamicBody(
+ position = (x, y),
+ angle=0.0,
+ fixtures = fixtureDef(
+ shape=circleShape(radius=2/SCALE, pos=(0, 0)),
+ density=mass,
+ friction=0.1,
+ categoryBits=0x0100,
+ maskBits=0x001, # collide only with ground
+ restitution=0.3)
+ )
+ p.ttl = ttl
+ self.particles.append(p)
+ self._clean_particles(False)
+ return p
+
+ def _clean_particles(self, all):
+ while self.particles and (all or self.particles[0].ttl < 0):
+ self.world.DestroyBody(self.particles.pop(0))
+
+ def step(self, action):
+ if self.continuous:
+ action = np.clip(action, -1, +1).astype(np.float32)
+ else:
+ assert self.action_space.contains(action), "%r (%s) invalid " % (action, type(action))
+
+ # Engines
+ tip = (math.sin(self.lander.angle), math.cos(self.lander.angle))
+ side = (-tip[1], tip[0])
+ dispersion = [self.np_random.uniform(-1.0, +1.0) / SCALE for _ in range(2)]
+
+ m_power = 0.0
+ if (self.continuous and action[0] > 0.0) or (not self.continuous and action == 2):
+ # Main engine
+ if self.continuous:
+ m_power = (np.clip(action[0], 0.0,1.0) + 1.0)*0.5 # 0.5..1.0
+ assert m_power >= 0.5 and m_power <= 1.0
+ else:
+ m_power = 1.0
+ ox = (tip[0] * (4/SCALE + 2 * dispersion[0]) +
+ side[0] * dispersion[1]) # 4 is move a bit downwards, +-2 for randomness
+ oy = -tip[1] * (4/SCALE + 2 * dispersion[0]) - side[1] * dispersion[1]
+ impulse_pos = (self.lander.position[0] + ox, self.lander.position[1] + oy)
+ p = self._create_particle(3.5, # 3.5 is here to make particle speed adequate
+ impulse_pos[0],
+ impulse_pos[1],
+ m_power) # particles are just a decoration
+ p.ApplyLinearImpulse((ox * MAIN_ENGINE_POWER * m_power, oy * MAIN_ENGINE_POWER * m_power),
+ impulse_pos,
+ True)
+ self.lander.ApplyLinearImpulse((-ox * MAIN_ENGINE_POWER * m_power, -oy * MAIN_ENGINE_POWER * m_power),
+ impulse_pos,
+ True)
+
+ s_power = 0.0
+ if (self.continuous and np.abs(action[1]) > 0.5) or (not self.continuous and action in [1, 3]):
+ # Orientation engines
+ if self.continuous:
+ direction = np.sign(action[1])
+ s_power = np.clip(np.abs(action[1]), 0.5, 1.0)
+ assert s_power >= 0.5 and s_power <= 1.0
+ else:
+ direction = action-2
+ s_power = 1.0
+ ox = tip[0] * dispersion[0] + side[0] * (3 * dispersion[1] + direction * SIDE_ENGINE_AWAY/SCALE)
+ oy = -tip[1] * dispersion[0] - side[1] * (3 * dispersion[1] + direction * SIDE_ENGINE_AWAY/SCALE)
+ impulse_pos = (self.lander.position[0] + ox - tip[0] * 17/SCALE,
+ self.lander.position[1] + oy + tip[1] * SIDE_ENGINE_HEIGHT/SCALE)
+ p = self._create_particle(0.7, impulse_pos[0], impulse_pos[1], s_power)
+ p.ApplyLinearImpulse((ox * SIDE_ENGINE_POWER * s_power, oy * SIDE_ENGINE_POWER * s_power),
+ impulse_pos
+ , True)
+ self.lander.ApplyLinearImpulse((-ox * SIDE_ENGINE_POWER * s_power, -oy * SIDE_ENGINE_POWER * s_power),
+ impulse_pos,
+ True)
+
+ self.world.Step(1.0/FPS, 6*30, 2*30)
+
+ pos = self.lander.position
+ vel = self.lander.linearVelocity
+ state = [
+ (pos.x - VIEWPORT_W/SCALE/2) / (VIEWPORT_W/SCALE/2),
+ (pos.y - (self.helipad_y+LEG_DOWN/SCALE)) / (VIEWPORT_H/SCALE/2),
+ vel.x*(VIEWPORT_W/SCALE/2)/FPS,
+ vel.y*(VIEWPORT_H/SCALE/2)/FPS,
+ self.lander.angle,
+ 20.0*self.lander.angularVelocity/FPS,
+ 1.0 if self.legs[0].ground_contact else 0.0,
+ 1.0 if self.legs[1].ground_contact else 0.0,
+ (self.x_pos_middle_landing - VIEWPORT_W/SCALE/2) / (VIEWPORT_W/SCALE/2)
+ ]
+ assert len(state) == 9
+
+ reward = 0
+ shaping = \
+ - 100*np.sqrt((state[0]-state[8])*(state[0]-state[8]) + state[1]*state[1]) \
+ - 100*np.sqrt(state[2]*state[2] + state[3]*state[3]) \
+ - 100*abs(state[4]) + 10*state[6] + 10*state[7] # And ten points for legs contact, the idea is if you
+ # lose contact again after landing, you get negative reward
+ if self.prev_shaping is not None:
+ reward = shaping - self.prev_shaping
+ self.prev_shaping = shaping
+
+ reward -= m_power*0.30 # less fuel spent is better, about -30 for heuristic landing
+ reward -= s_power*0.03
+
+ done = False
+ if self.game_over or abs(state[0]) >= 1.0:
+ done = True
+ reward = -100
+ if not self.lander.awake:
+ done = True
+ reward = +100
+ return np.array(state, dtype=np.float32), reward, done, {}
+
+ def render(self, mode='human'):
+ from gym.envs.classic_control import rendering
+ if self.viewer is None:
+ self.viewer = rendering.Viewer(VIEWPORT_W, VIEWPORT_H)
+ self.viewer.set_bounds(0, VIEWPORT_W/SCALE, 0, VIEWPORT_H/SCALE)
+
+ for obj in self.particles:
+ obj.ttl -= 0.15
+ obj.color1 = (max(0.2, 0.2+obj.ttl), max(0.2, 0.5*obj.ttl), max(0.2, 0.5*obj.ttl))
+ obj.color2 = (max(0.2, 0.2+obj.ttl), max(0.2, 0.5*obj.ttl), max(0.2, 0.5*obj.ttl))
+
+ self._clean_particles(False)
+
+ for p in self.sky_polys:
+ self.viewer.draw_polygon(p, color=(0, 0, 0))
+
+ for obj in self.particles + self.drawlist:
+ for f in obj.fixtures:
+ trans = f.body.transform
+ if type(f.shape) is circleShape:
+ t = rendering.Transform(translation=trans*f.shape.pos)
+ self.viewer.draw_circle(f.shape.radius, 20, color=obj.color1).add_attr(t)
+ self.viewer.draw_circle(f.shape.radius, 20, color=obj.color2, filled=False, linewidth=2).add_attr(t)
+ else:
+ path = [trans*v for v in f.shape.vertices]
+ self.viewer.draw_polygon(path, color=obj.color1)
+ path.append(path[0])
+ self.viewer.draw_polyline(path, color=obj.color2, linewidth=2)
+
+ for x in [self.helipad_x1, self.helipad_x2]:
+ flagy1 = self.helipad_y
+ flagy2 = flagy1 + 50/SCALE
+ self.viewer.draw_polyline([(x, flagy1), (x, flagy2)], color=(1, 1, 1))
+ self.viewer.draw_polygon([(x, flagy2), (x, flagy2-10/SCALE), (x + 25/SCALE, flagy2 - 5/SCALE)],
+ color=(0.8, 0.8, 0))
+
+ return self.viewer.render(return_rgb_array=mode == 'rgb_array')
+
+ def close(self):
+ if self.viewer is not None:
+ self.viewer.close()
+ self.viewer = None
+
+
+class RandomTargetLunarLander(LunarLander):
+ continuous = True
+
+def heuristic(env, s):
+ """
+ The heuristic for
+ 1. Testing
+ 2. Demonstration rollout.
+
+ Args:
+ env: The environment
+ s (list): The state. Attributes:
+ s[0] is the horizontal coordinate
+ s[1] is the vertical coordinate
+ s[2] is the horizontal speed
+ s[3] is the vertical speed
+ s[4] is the angle
+ s[5] is the angular speed
+ s[6] 1 if first leg has contact, else 0
+ s[7] 1 if second leg has contact, else 0
+ s[8] is the target coordinate
+ returns:
+ a: The heuristic to be fed into the step function defined above to determine the next step and reward.
+ """
+
+ angle_targ = s[0]*0.5 + s[2]*1.0 # angle should point towards center
+ if angle_targ > 0.4: angle_targ = 0.4 # more than 0.4 radians (22 degrees) is bad
+ if angle_targ < -0.4: angle_targ = -0.4
+ hover_targ = 0.55*np.abs(s[0]) # target y should be proportional to horizontal offset
+
+ angle_todo = (angle_targ - s[4]) * 0.5 - (s[5])*1.0
+ hover_todo = (hover_targ - s[1])*0.5 - (s[3])*0.5
+
+ if s[6] or s[7]: # legs have contact
+ angle_todo = 0
+ hover_todo = -(s[3])*0.5 # override to reduce fall speed, that's all we need after contact
+
+ if env.continuous:
+ a = np.array([hover_todo*20 - 1, -angle_todo*20])
+ a = np.clip(a, -1, +1)
+ else:
+ a = 0
+ if hover_todo > np.abs(angle_todo) and hover_todo > 0.05: a = 2
+ elif angle_todo < -0.05: a = 3
+ elif angle_todo > +0.05: a = 1
+ return a
+
+def demo_heuristic_lander(env, seed=None, render=False):
+ env.seed(seed)
+ total_reward = 0
+ steps = 0
+ s = env.reset()
+ while True:
+ a = heuristic(env, s)
+ s, r, done, info = env.step(a)
+ total_reward += r
+
+ if render:
+ still_open = env.render()
+ if still_open == False: break
+
+ if steps % 20 == 0 or done:
+ print("observations:", " ".join(["{:+0.2f}".format(x) for x in s]))
+ print("step {} total_reward {:+0.2f}".format(steps, total_reward))
+ steps += 1
+ if done: break
+ return total_reward
+
+
+if __name__ == '__main__':
+ demo_heuristic_lander(LunarLander(), render=True)
+
+Here are the results:
+Green is Normal Lunar Lander Continuous
+Pink is the Random Target Lunar Lander Continuous
+
+
+
+"
+"['reinforcement-learning', 'deep-rl', 'markov-decision-process']"," Title: What would happen to an agent trained using Markov Decision Process if the goal node changes?Body: I was reading up a paper that did routing based on an MDP, and I was wondering because, in routing, there is a sender node and a receiver node, so if the receiver node changes (sending a message to someone else), would we have to train the MDP algorithm all over again?
+This also got me thinking about what would happen even if one node in the process of transmission changes. Does using an MDP for training the agent mean that the obstacle and goals should never change?
+"
+"['neural-networks', 'machine-learning', 'pattern-recognition', 'geometric-deep-learning', 'graph-theory']"," Title: How can abstract graphs be recognized by neural nets?Body: Recognition of optical patterns (as pixel maps) by neural networks is standard. But optical patterns may be only slightly distorted or noisy, and may not be arbitrarily scrambled – e.g. by permutations of rows and columns of the pixel map – without losing the possibility to recognize them. This in turn is the normal case for abstract graphs in their standard representation as adjacency matrices: only under some permutations of nodes a possible pattern is visible. In general, for almost all random graphs under no permutation a pattern is visible, but for all graphs under almost all permutations a pattern is invisible.
+How can this be handled in the context of either unsupervised or supervised learning? Assume you have a huge set of graphs with 100 nodes and 1,000 edges, given as 100$\times$100 adjacency matrices under arbitrary permutations, but with only two isomorphism classes. How could a neural network find this out and learn from the samples?
+Is this possibly common knowledge: that it can not? Or are there any tricks?
+(One trick might be to draw the graph force-directed and hope that it settles in a recognizable configuration. But this to be detectable would require a much larger pixel map than 100$\times$100. But why not?)
+"
+"['machine-learning', 'training', 'image-recognition', 'architecture']"," Title: How to properly use Flatten layer?Body: Context
+I'm trying to create net that will be able to recognize printed-like digits. Something like MNIST, but only for standard printing font.
+Images are of size 40x40 and I'd like to put them into a feedforward net, since a ConvNet seems too powerful for this task.
+Question
+How should I use Flatten layer in this task?
+Code
+My current net:
+# assumed imports (not shown in the original snippet)
+from sklearn.model_selection import train_test_split
+from tensorflow.keras.models import Sequential
+from tensorflow.keras.layers import Dense, Flatten
+
+X, test_X, y, test_y = train_test_split(X, y, test_size=0.25, random_state=42)
+
+self.model = Sequential()
+self.model.add(Flatten())
+self.model.add(Dense(64, activation='relu', input_shape=X.shape[1:]))
+self.model.add(Dense(no_classes, activation='softmax'))
+self.model.compile(loss="categorical_crossentropy",
+ optimizer="rmsprop",
+ metrics=['accuracy'])
+
+self.history = self.model.fit(X, y, batch_size=256, epochs=20, validation_data=(test_X, test_y))
+print(self.model.summary())
+
+Example images
+
+Current results
+
+"
+"['reinforcement-learning', 'training', 'monte-carlo-methods', 'temporal-difference-methods']"," Title: Comparison between TD(0) and MC ( or GAE )?Body: I'm getting started with DRL and have trouble distinguishing TD(0), MC, and GAE; and which scenarios one's better than others. Here is what I understand so far:
+
+- TD(0): incremental learning; it can learn after each step instead of waiting for the episode to end. The update is based on one reward, hence low variance but high bias.
+
+- MC: learns after each episode; the calculation of the return is exact. However, its drawback is high variance, and you have to make decisions for the whole episode without any update of the parameters.
+
+- GAE: combines returns over all steps, getting a better trade-off between variance and bias. However, it still has to wait until the end of the episode for an update.
+
+
+I have some questions as follows:
+
+- Are the variance and bias about the return of each episode? What are their effects on the outcome (convergence speed of the training process, performance of the model)?
+
+- Is incremental learning important? The ability to correct behaviors after each step may improve convergence speed. However, it can lead to unstable learning (if I understand correctly, this is why the target model in double DQN only updates its parameters every k mini-batches). In which scenarios should I use TD(0) or GAE?
+
+- Concretely, in my case, I run a batch of 12 environments in parallel, each with 1000 steps. If I use GAE, I make 12000 decisions for each update. All losses of the model are summed up to calculate the gradients; after that, I clip the gradients to 2.0. Is that too expensive a way to learn the correct direction? Should I consider using TD(0) here?
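+For reference, this is the GAE computation I have in mind (a standard sketch over a single trajectory; the values array must include one extra entry, the bootstrap value of the final state):
+import numpy as np
+
+def gae(rewards, values, gamma=0.99, lam=0.95):
+    # lam=0 recovers the one-step TD advantage, lam=1 the Monte Carlo return minus the baseline
+    T = len(rewards)
+    adv = np.zeros(T)
+    last = 0.0
+    for t in reversed(range(T)):
+        delta = rewards[t] + gamma * values[t + 1] - values[t]
+        last = delta + gamma * lam * last
+        adv[t] = last
+    return adv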
+
+
+"
+"['deep-learning', 'convolutional-neural-networks', 'terminology', 'history']"," Title: Origins of the name of convolutional neural networksBody: Convolutional neural networks (CNNs) contain convolutional layers. In modern deep learning libraries such as Tensorflow and PyTorch among others, convolutional layers are implemented by using the cross-correlation operator instead of the convolution operator. The difference is that in convolution, the kernel is flipped before applying it on the input.
+For example in the book "Deep Learning", it is explained as follows.
+
+Many machine learning libraries implement cross-correlation but call
+it convolution. --- In the
+context of machine learning, the learning algorithm will learn the
+appropriate values of the kernel in the appropriate place, so an
+algorithm based on convolution with kernel flipping will learn a
+kernel that is flipped relative to the kernel learned by an algorithm
+without the flipping. It is also rare for convolution to be used alone
+in machine learning; instead convolution is used simultaneously with
+other functions, and the combination of these functions does not
+commute regardless of whether the convolution operation flips its
+kernel or not.
+
+This makes perfect sense, and convincingly argues why implementing the flipping of the kernel would be unnecessary.
+But how come CNNs are not commonly called "cross-correlational neural networks" instead of "convolutional neural networks"? To the best of my knowledge, the first concrete implementations of CNNs predate any of the above mentioned libraries. Did these early implementations of CNNs indeed use the convolution operator, leading to the name? Or is there another reason?
+"
+"['neural-networks', 'deep-learning', 'architecture', 'model-request']"," Title: What type of ANN architecture to choose?Body: I have $N$ number of teachers each of which has an input feature vector ($25$ dimensional) consisting of positive numerical values for different quality of aspects (for example: lecturing ability, knowledge capacity, communication skills, etc.). I want to design an ANN to output a single quality index based on these quality features.
+What type of ANN architecture is appropriate for this problem?
+"
+"['neural-networks', 'natural-language-processing', 'recurrent-neural-networks', 'machine-translation', 'seq2seq']"," Title: Training seq2seq translation model with one source and multiple targetBody: So basically I'm training a sequence to sequence model that translates English sentences to Arabic sentences. I'm using the data provided by Anki @ manythings. I realized that some of the sentences in English (source) have multiple sentences in Arabic (target), for example:
+This is one case, where the Arabic harakat are not shown, but the idea is that the same word has different translations (yes, in Arabic the first, fourth and fifth are not the same translations).
+
+A better example is the following one:
+
+I'm not sure how to deal with these cases: should I reduce the data and keep one translation, or should I keep a list of target values for each source key? Any advice or "tips & tricks" on preparing the data before training translation models?
+"
+"['machine-learning', 'reference-request', 'prediction', 'algorithm-request']"," Title: Machine Learning in relation to personality and behaviors predictionsBody: I am tasked with making a machine learning model that predicts personality traits and behaviours of children based on simple and interactive quizzes.
+Currently I am lost and have no idea where to start!
+I am looking for guidance on where I can start my research and the actual coding part, and whether NLP is a good place to start from.
+"
+"['machine-learning', 'deep-learning']"," Title: What is the name of algorithms that train by competing each other?Body: In some learning algorithms, we don't directly train models by datasets with labels to predict, but rather we create 2 competing models and let them fight/compete against each other. As the many millions of epochs pass, the models fight each other, and every time each model improves itself (further optimise its weights) to win. After many epochs of the models smashing each other, eventually they become really strong super-hero models that can totally blow any human out of the water. This approach seems to be often used with machine learning models that are tasked to play multiplayer games. Instead of letting them play with the slow humans, they fight with each other to death for many many epochs to become way stronger than any human can naturally be.
+What is the name of such kind of machine learning approach?
+"
+"['deep-learning', 'math', 'explainable-ai', 'gradient']"," Title: Why don't integrated gradients explain samples correctly?Body: I have a linear tabular dataset made of floats. The dataset follows a simple rule like:
+if features A and B are in a certain range then target class is 1, otherwise target class is 0.
+
+Since I want to get some interpretability from my ANN model, I opted for using the integrated gradients method implemented by alibi.
+Unfortunately, most individual samples don't show A and B as the leading features, as expected. Even weirder is the fact that, when I average the attributions of all the individual samples, A and B get the highest scores. In other words, the local explanations fail but, on average, the global explanation is correct.
+Can anyone help me out to understand why this happens? Isn't integrated gradients method suitable for tabular datasets?
+By the way, my baseline is based on a uniform distribution of random floats ranging from 0 to the maximum of each column.
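+For reference, this is the attribution I believe the method computes for feature $i$ of a sample $x$ with baseline $x'$ and model $F$ (my own understanding of integrated gradients, not a quote from the alibi docs):
+$$IG_i(x) = (x_i - x'_i) \int_0^1 \frac{\partial F\left(x' + \alpha (x - x')\right)}{\partial x_i} \, d\alpha.$$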
+"
+"['transformer', 'sequence-modeling', 'matrices']"," Title: Representing variable-length sequencesBody: I want to train a model over variable-length sequential data (e.g. the temperature at different times of day) where the output depends on what the temperature is at a time T
.
+Ideally, I want to represent the input using a variable-length compacted format of [temperature, duration]. Alternatively, I can divide a matrix into time slices where each cell contains the current temperature.
+I prefer the compacted format as it is more space-efficient and allows me to represent arbitrary-length durations, but I am afraid that a Transformer architecture won't be able to figure out what the temperature is at a time T using the compact format.
+Is it safe to compact sequential inputs?
+"
+"['terminology', 'definitions']"," Title: Is there a clear distinction between Artificial Intelligence and running a sequential program?Body: Artificial Intelligence (AI) is often defined as a machine that is intelligent, or one that can think rationally.
+From a high-level perspective, things like self-driving car or Alpha-Go can easily be classified as an AI system, while things like a washing machine that follows a strict sequential program is not considered as AI.
+However, what confuses me is that, when looking at the definition from a low-level perspective, there does not seem to be a clear distinction between AI and non-AI.
+For example, consider an Artificial Neural Network from Deep Learning. Fundamentally, it is just a complex non-linear function. Why is this considered AI while a washing machine is not considered that?
+Is it because of the learning involved? But then path-finding would not be considered AI either.
+Is it because of the calculations? But then traditional calculators would be considered AI.
+Is there even a clear distinction between AI and a sequential program? Or is it just a vague term that is only valid when viewed from a high-level perspective?
+"
+"['machine-learning', 'deep-learning', 'comparison']"," Title: What is the difference between applying shallow-learning methods repeatedly and deep learning?Body: In the book Deep Learning with Python, François Chollet writes (section 1.2.6, page 18)
+
+In practice, there are fast-diminishing returns to successive applications of shallow-learning methods, because the optimal first representation layer in a three-layer model isn't the optimal first layer in a one-layer or two-layer model. What is transformative
+about deep learning is that it allows a model to learn all layers of representation jointly, at the same time, rather than in succession (greedily, as it's called).
+
+By shallow learning, we mean traditional machine learning models that aren't deep learning, such as support vector machines.
+I understood the above as below.
+
+Using a model with three layers of shallow-learning methods has the same output (predicted) value as using a one-layer shallow-learning method. The effect of using multiple layers of shallow-learning methods is to 'increase running time or repetition'.
+
+Did I understand properly?
+"
+"['neural-networks', 'deep-learning', 'training', 'hyperparameter-optimization', 'optimizers']"," Title: Why does Adam optimizer work slower than Adagrad, Adadelta, and SGD for Neural Collaborative Filtering (NCF)?Body: I've been working on Neural Collaborative Filtering (NCF) recently to build a recommender system using Tensorflow Recommenders. Doing some hyperparameter tuning with different optimizers available in the module tf.keras.optimizers, I found out that Adam and its other variants, such as Adamax and Nadam, work much slower than seemingly less advanced optimizers, like Adagrad, Adadelta, and SGD. With Adam and its variants, training each epoch takes about 30x longer.
+It came out as a surprise to me, knowing one of the most cherished properties of Adam optimizer is its convergence speed, especially compared to SGD. What could be the reason for such a significant difference in computation speed?
+"
+"['neural-networks', 'machine-learning', 'backpropagation', 'calculus']"," Title: What is the correct formula for updating the weights in a 1-single hidden layer neural network?Body: I'm creating a neural network with 3 layers and no bias.
+On the internet, I saw that the expression for the derivative of the weights between the hidden layer and the output layer was:
+$$\Delta W_{j,k} = (o_k - t_k) \cdot f'\left[\sum_j (W_{j,k} \ \cdot o_j)\right] \cdot o_j,$$
+where $t$ is the target output, $o$ is the activated output layer and $f'$ the derivative of the activation function.
+But the shape of these weights is $\text{output nodes}\times\text{hidden nodes}$, and $\text{hidden nodes}$ can be bigger than $\text{output nodes}$, so the formula is wrong, because I'm taking $o_k$ and $o$ has length $\text{output nodes}$.
+
+- In simple terms, what is the right formula for updating these weights?
+
+- Also, what is the right formula for updating the weights between the input layer and the hidden layer?
+
+
+"
+"['machine-learning', 'convolutional-neural-networks', 'binary-classification', 'training-datasets']"," Title: Is binary classification using CNN possible if the training data only consists of one class?Body: Is binary classification using CNN possible if the training data only consists of one class?
+I am working on landslide risk assessment using Convolutional Neural Networks and I want to train a network that can recognize high-risk areas using multi-spectral imagery. The bands will contain numeric and categorical data that I have found to be related to my field of work.
+The problem is that I only have historical data indicating where a landslide has happened before. Defining zones as low-risk is not reliable in this field (since we are not yet sure how these variables affect the risk or susceptibility, and I don't want to bias my categorization), so my training data will be made up of only one class.
+Can this be done? Is training a network from scratch using only one class of training data possible?
+If so, after building this network, can I use it to classify any zone and get any meaningful data from its output for risk assessment (for example, output value "1" being "similar to past landslides" and "0" being "not similar at all")?
+"
+"['definitions', 'federated-learning']"," Title: What is dynamic data sampling in federated learning?Body: I am trying to learn about Federated Learning (FL), but I have a question.
+What is dynamic data sampling in FL?
+
+Cai, Lingshuang, et al. "Dynamic Sample Selection for Federated Learning with Heterogeneous Data in Fog Computing." ICC 2020-2020 IEEE International Conference on Communications (ICC). IEEE, 2020.
+
+"
+"['data-preprocessing', 'sequence-modeling', 'normalisation']"," Title: Normalization of possibly not fully representative dataBody: I am trying to train a classification RNN model on a sequence of table medical data, but I stuck with the normalization problem. I realized that I cannot simply use MinMaxScaler, because of 3 problems:
+
+- outliers, but I could fight them or use RobustScaler instead.
+- I am not sure that some features in my dataset cover their full possible ranges. For example, I have max(feature_A) == 10 now, but with a data update it could become 20. And if I preprocess the data the same way, I will get bad prediction results.
+- Some features do not have a limit at all and will only increase with time, like how many years patients were treated, for example. I could suppose that this value is not greater than 100 years, for example, but if my mean value is 10 years, this will squeeze the feature values a lot.
+
+My dataset is pretty large, like millions of observations, so there is a pretty good chance that it is representative. But I am concerned about the small time range: all those observations cover only 2 years, so some feature values (like how many years patients were treated) could still grow beyond their current bounds.
+How should I handle this?
+My concerns example:
+import pandas as pd
+from sklearn.preprocessing import MinMaxScaler
+
+scaler = MinMaxScaler()
+
+#### like, initial state
+df1 = pd.DataFrame({'A': [1, 2, 3, 4, 5], 'B': [10, 40, 60, 80, 100]})
+""" output:
+ A B
+0 1 10
+1 2 40
+2 3 60
+3 4 80
+4 5 100
+"""
+
+scaler.fit_transform(df1)
+""" output:
+array([[0. , 0. ],
+ [0.25 , 0.33333333],
+ [0.5 , 0.55555556],
+ [0.75 , 0.77777778],
+ [1. , 1. ]])
+"""
+
+#### new data arrived, while preprocessing is the same
+df2 = pd.DataFrame({'A': [1, 2, 3, 4, 5, 10, 10], 'B': [10, 40, 60, 80, 100, 120, 140]})
+""" output:
+ A B
+0 1 10
+1 2 40
+2 3 60
+3 4 80
+4 5 100
+5 10 120
+6 10 140
+"""
+
+# now 5 in "A" scaled to 0.4 instead of 1, same in "B"
+scaler.fit_transform(df2)
+""" output:
+array([[0. , 0. ],
+ [0.11111111, 0.23076923],
+ [0.22222222, 0.38461538],
+ [0.33333333, 0.53846154],
+ [0.44444444, 0.69230769],
+ [1. , 0.84615385],
+ [1. , 1. ]])
+"""
+
+PS: I've duplicated this question in different communities (question in ai got most of views):
+
+"
+"['neural-networks', 'transformer', 'human-like', 'gpt-3']"," Title: How do transformers understand data and answer custom questions?Body: I recently heard of GPT-3 and I don't understand how the attention models and transformers encoders and decoders work. I heard that GPT-3 can make a website from a description and write perfectly factual essays. How can it understand our world using algorithms and then recreate human-like content? How can it learn to understand a description and program in HTML?
+"
+"['search', 'proofs', 'admissible-heuristic', 'consistent-heuristic', 'heuristic-functions']"," Title: Is $\min(h_1(s),\ h_2(s))$ consistent?Body: If $h_1(s)$ is a consistent heuristic and $h_2(s)$ is a admissible heuristic, is $\min(h_1(s),\ h_2(s))$ consistent?
+"
+"['neural-networks', 'deep-learning', 'weights', 'weights-initialization', 'residual-networks']"," Title: Why are weights not initialized with mean=1?Body: I wonder why weights are initialized with zero-mean. It is one of the reasons, why deep architectures cannot be trained without skip connections. Without the skip connections, the zero initialization becomes problematic, because the identity function cannot be learned in earlier layers (this is a simplified explanation I know). But why can we not initialize weights around one? This would enhance the intrinsic learning of the identity function. Of course, the skip connections also allow a better backpropagation of the gradients, but couldn't this be helpful anyways? Can anyone tell me, why this is not done?
+"
+"['machine-learning', 'reinforcement-learning', 'deep-learning', 'q-learning', 'dqn']"," Title: Reward firstly increase, but after more episodes, start decrease, and weights divergesBody: I'm making a simple deep Q learning algorithm, with cartpole-v1 env.
+As you can see in the chart, after many episodes the reward decreases. What are some possible reasons?
+The exploration vs. exploitation strategy is epsilon decay. I used a target network (used at every mini-batch gradient descent update, for calculating both the current Q values and the next Q values; is that right?).
+The neural network is implemented from scratch.
+The complete code is here:
+https://github.com/LorenzoTinfena/deep-q-learning-itt-final-project
+# %%
+from core.dqn_agent import DQNAgent
+from cartpole.cartpole_neural_network import CartPoleNeuralNetwork
+from cartpole.cartpole_wrapper import CartPoleWrapper
+import gym
+import numpy as np
+import torch
+from tqdm import tqdm
+import glob
+import os
+from IPython.display import Video
+import matplotlib
+import matplotlib.pyplot as plt
+import numpy as np
+from itertools import cycle
+import sys
+import shutil
+from pathlib import Path
+import shutil
+import pyvirtualdisplay
+_display = pyvirtualdisplay.Display(visible=False, size=(1400, 900))
+_ = _display.start()
+
+
+
+# %% [markdown]
+# Initialize deep Q-learning agent, neural network, and parameters
+# %%
+np.random.seed(20)
+agent = DQNAgent(env=CartPoleWrapper(gym.make("CartPole-v1")),
+ nn=CartPoleNeuralNetwork(), replay_memory_max_size=5000, batch_size=30)
+
+DISCOUNT_FACTOR = 0.995
+LEARNING_RATE = 0.0001
+
+n_episodes = []
+total_rewards = []
+number_steps = []
+total_episodes = 0
+
+
+# %% [markdown]
+# Training
+# %%
+while total_episodes <= 10000:
+ total_reward, steps = agent.start_episode_and_evaluate(DISCOUNT_FACTOR, LEARNING_RATE, epsilon=0, min_epsilon=0, render=False, optimize=False)
+ print(f'\ntotal_episodes_training: {total_episodes}\tsteps: {steps}\ttotal_reward: {total_reward}', flush = True)
+ n_episodes.append(total_episodes)
+ total_rewards.append(total_reward)
+ number_steps.append(steps)
+
+ for i in tqdm(range(50), 'learning...'):
+ agent.start_episode_and_evaluate(DISCOUNT_FACTOR, LEARNING_RATE, epsilon=1, epsilon_decay=0.99, min_epsilon=0.01, render=False, optimize=True)
+ total_episodes += i+1
+
+
+
+"
+"['neural-networks', 'proofs', 'regularization', 'dropout', 'l1-regularization']"," Title: How to prove that a regularisation method simplified a neural network?Body: There are a few ways to regularise a neural network, for example dropout or the L1. Now, both these methods, and possibly most other regularisation methods, tend to remove from, or simplify the neural network. The Dropout deactivates nodes and the L1 shrinks the weights of the model, and so on.
+The main argument in favour of regularising a neural network is that by simplifying the model you are forcing it to learn more general functions and thus making the neural network more robust to overfitting or noisy input.
+Once you have a model trained with, and without, regularisation, it is possible to compare their performance by calculating the error metrics on their outputs. This will prove whether the regularised model performs better than the standard model or not.
+However, considering that the regularised model achieved better performance on its error metrics, how can one prove that the weights of the regularised model have less variance (i.e. are simpler) than those of the standard neural network?
+"
+"['machine-learning', 'computational-learning-theory', 'statistical-ai', 'data-augmentation']"," Title: Does distribution of data augmentation parameters matter?Body: Idea
+Let's say we have a simple picture dataset containing 40x40 images of digits. We have only one image of each digit. We want to use that as the training set, but we need more data, so we use data augmentation. We use only simple operations like translating and rotating, and generate 1000 more images of each digit.
+Question
+A natural way to do data augmentation would be to randomly generate parameters like translate_x, translate_y and rotate, and apply them to our base image.
+Does the distribution of these parameters matter? On the one hand, we would like to have a net that is able to recognize a digit placed at the side of the image and rotated, as well as a digit placed in the center and not rotated at all, but maybe we don't need such accuracy with those borderline images? Maybe we know that our prediction data will be close to the centered ones, so we want high accuracy in those cases, and the more a digit is translated and rotated, the lower our net's accuracy may be.
+What I mean is: can we augment the data with parameters drawn from, e.g., a Gaussian distribution, to make our net more sensitive to cases closest to the ideal and less sensitive to these borderline cases (see the sketch below)? The advantage of that would be less training data that we don't need and more control over the characteristics of our neural net.
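+For concreteness, something like this minimal sketch is what I have in mind (the parameter ranges and the crude "digit" are made up, and scipy is used only for illustration):
+import numpy as np
+from scipy.ndimage import rotate, shift
+
+rng = np.random.default_rng(0)
+
+def augment(image, sigma_shift=2.0, sigma_angle=10.0):
+    """Sample small, centered transformations far more often than extreme ones."""
+    dx, dy = rng.normal(0.0, sigma_shift, size=2)   # translation in pixels
+    angle = rng.normal(0.0, sigma_angle)            # rotation in degrees
+    out = shift(image, (dy, dx), mode='nearest')
+    out = rotate(out, angle, reshape=False, mode='nearest')
+    return out
+
+base = np.zeros((40, 40))
+base[10:30, 18:22] = 1.0   # a crude "1"-like digit
+augmented = [augment(base) for _ in range(1000)]
+With a Gaussian, most generated samples stay close to the centered, unrotated original, while the borderline cases become rare.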
+Disclaimer
+This digits case is just a simple example to show you what I mean.
+"
+"['reinforcement-learning', 'comparison', 'deep-rl', 'multi-armed-bandits']"," Title: What are the major differences between multi-armed bandits and the other well-known algorithms (DQN, A3C, PPO, etc)?Body: I have studied in the past different algorithms, i.e. DQN, DDQN, REINFORCE, A3C, PPO, TRPO, so on. I am doing an internship this summer where I have to use a multi-armed bandit (MAB). I am a bit confused between MAB and the other above algorithms.
+What are the major differences between MAB and REINFORCE, for instance? What are the major differences between MAB and the other well-known algorithms (DQN, A3C, PPO, etc)?
+EDIT
+@Kostya's answer is fine, but it would be interesting for me to have a deeper answer to that question. I am still a bit confused.
+Question: Do we use the Markov Reward formula $$G_t = R_{t+1} + \gamma R_{t+2} + ... = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}$$ the same way in a Multi-Armed bandit problem versus a DQN problem?
+"
+"['neural-networks', 'activation-functions', 'softmax', 'multiclass-classification']"," Title: Why do we use the softmax instead of no activation function?Body: Why do we use the softmax activation function on the last layer?
+Suppose $i$ is the index that has the highest value (in the case when we don't use softmax at all). If we use softmax and take the $i$th value, it would still be the highest value, because $e^x$ is an increasing function, so that's why I am asking this question. Taking argmax(vec) and argmax(softmax(vec)) would give us the same value.
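+A tiny check of what I mean (my own snippet, with an arbitrary example vector):
+import numpy as np
+
+vec = np.array([1.3, -0.2, 4.1, 0.7])
+softmax = np.exp(vec) / np.sum(np.exp(vec))
+print(np.argmax(vec), np.argmax(softmax))  # both print 2, since exp is monotonic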
+"
+"['neural-networks', 'q-learning', 'genetic-algorithms']"," Title: Is a genetic algorithm efficient for a snake game?Body: I am working on a DIY project in which I want to be able to train a neural network to play Snake.
+
+- Is a genetic algorithm an efficient way of training a network for this application?
+
+- For a GA, what should the inputs of the network be? (distance to walls and fruit or the squares in the proximity of the snake head as a vector)
+
+- What would the difference in efficiency be depending on the algorithm and what limitation does each one have? Are there any other alternatives I should consider?
+
+
+"
+"['object-detection', 'semantic-segmentation', 'mask-rcnn', 'non-max-suppression']"," Title: How does Mask R-CNN automatically output a different number of objects on the image?Body: Recently, I was reading Pytorch's official tutorial about Mask R-CNN.
+When I run the code on colab, it turned out that it automatically outputs a different number of channels during prediction. If the image has 2 people on it, it would output a mask with the shape 2xHxW. If the image has 3 people on it, it would output the mask with the shape 3xHxW.
+How does Mask R-CNN change the channels? Does it have a for loop inside it?
+My guess is that it has region proposals and it outputs masks based on those regions, and then it thresholds them (it removes masks that have low probability prediction). Is this right?
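+In pseudo-Python, my guess about the head's behaviour looks something like this (purely my assumption, with made-up names, not the actual torchvision internals):
+# hypothetical sketch of my guess, not real torchvision code
+def predict_masks(image, model, score_threshold=0.5):
+    proposals = model.region_proposals(image)                       # variable number of boxes
+    detections = [model.predict_instance(image, box) for box in proposals]
+    kept = [d for d in detections if d.score > score_threshold]     # drop low-probability masks
+    # one HxW mask per kept detection -> output of shape (len(kept), H, W)
+    return [d.mask for d in kept]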
+"
+"['deep-learning', 'convolutional-neural-networks', 'convolution']"," Title: Why might the convolution be inappropriate when the task involves incorporating information from very distant locations in the input?Body: When I am reading about convolutional neural networks, I have encountered the following sentence from the textbook(page 341) that says about the limitation of the usage of the convolution in CNNs.
+
+When a task involves incorporating information from very distant
+locations in the input, then the prior imposed by convolution may be
+inappropriate
+
+My interpretations are
+
+- when the object is very small, then the convolution may not work
+well.
+
+- when the object is very large, then the components of the object are far away from each other and hence the convolution may not work well.
+
+
+Which of these interpretations is correct? If both are wrong, then what is the correct interpretation?
+If possible, please provide an example to help me understand it.
+"
+"['dimensionality-reduction', 'embeddings', 'dimensionality']"," Title: How does t-SNE preserves embedding orders?Body: According to the triplet loss Wikipedia page:
+
+t-SNE (t-distributed Stochastic Neighbor Embedding) preserves embedding orders via probability distributions, whereas triplet loss works directly on embedded distances.
+
+I don't understand how t-SNE preserves embedding order from the description given on its Wikipedia page.
+I am trying to understand this claim in order to translate the page into other languages. I don't have a very quick understanding of maths, so don't be afraid to explain it as if I were a teenager.
+"
+"['natural-language-processing', 'audio-processing', 'speech-recognition', 'spectral-clustering']"," Title: Speech diarization for a conversation detector: A good idea or not?Body: I am trying to write a program in which an ai can detect whether a conversation is occurring or not. The ai does not need to transcribe words or have any meaning about the conversation, simply if one is occurring. A conversation can then simply be defined as having more than one speaker.
+Anyways, while searching for past research on the subject, I came across the field of speech diarization, where an AI is trained to distinguish the number of speakers in a conversation. This seemed perfect to me; however, while implementing it I ran into a few troubles. First of all, it wasn't good. I used this tutorial: https://medium.com/saarthi-ai/who-spoke-when-build-your-own-speaker-diarization-module-from-scratch-e7d725ee279 to write a simple program for this task, but I found it wasn't good at determining whether there was a single speaker or two speakers. Also, the times at which it distinguished speakers were all off.
+It occurred to me that perhaps speech diarization may not be the best approach for this problem, so I decided to ask here whether this is the best solution, or if there are better ones out there. If it is the best solution, I would love some insight into why this wasn't working for me. Is the tutorial simply not good enough? I used 45-second to 1-minute long clips of just myself speaking or other people speaking with me, and it did not work well at all, as I said.
+"
+"['optimization', 'deep-neural-networks', 'pytorch']"," Title: When should you not use the bias in a layer?Body: I'm not really that experienced with deep learning, and I've been looking at research code (mostly PyTorch) for deep neural networks, specifically GANs, and, in many cases, I see the authors setting bias=False
in some layers without much justification. This isn't usually done in a long stack of layers that have a similar purpose, but mostly in unique layers like the initial linear layer after a conditioning vector, or certain layers in an attention architecture.
+I imagined there must be a strategy to this, but most articles online seem to confirm my initial perception that bias is a good thing to have available for training in pretty much every layer.
+Is there a specific optimization / theoretical reason to turn off biases in specific layers in a network? How can I choose when to do it when designing my own architecture?
+"
+"['generative-adversarial-networks', 'generative-model', 'geometric-deep-learning', 'mo-flow', 'drug-design']"," Title: Are generative models actually used in practice for industrial drug design?Body: I just finished reading this paper MoFlow: An Invertible Flow Model for Generating Molecular Graphs.
+The paper, which is about generating molecular graphs with certain chemical properties improved the SOTA at the time of writing by a bit and used a new method to discover novel molecules. The premise of this research is that this can be used in drug design.
+In the paper, they beat certain benchmarks and even create a new metric to compare themselves against existing methods. However, I kept wondering if such methods were actually used in practice. The same question is valid for any comparable generative models such as GAN's, VAE's or autoregressive generative models.
+So, basically, are these models used in production already? If so, do they speed up existing molecule discovery and/or discover new molecules? If not, why not? And are there any remaining bottlenecks to be solved before this can be used?
+Any further information would be great!
+"
+"['reinforcement-learning', 'math', 'value-functions', 'books', 'expectation']"," Title: How is the state-value function expressed as a product of sums?Body: The state-value function for a given policy $\pi$ is given by
+$$\begin{align}
+V^{\pi}(s) &=E_{\pi}\left\{r_{t+1}+\gamma r_{t+2}+\gamma^{2} r_{t+3}+\cdots \mid s_{t}=s\right\} \\
+&=E_{\pi}\left\{r_{t+1}+\gamma V^{\pi}\left(s_{t+1}\right) \mid s_{t}=s\right\} \tag{4.3}\label{4.3} \\
+&=\sum_{a} \pi(s, a) \sum_{s^{\prime}} \mathcal{P}_{s s^{\prime}}^{a}\left[\mathcal{R}_{s s^{\prime}}^{a}+\gamma V^{\pi}\left(s^{\prime}\right)\right] \tag{4.4}\label{4.4}
+\end{align}$$
+It is given in section 4.1 of the first edition of Sutton and Barto's book (equations 4.3 and 4.4).
+I don't understand how equation \ref{4.4} derives from equation \ref{4.3}. How can I get the product in equation \ref{4.4} from the expectation in equation \ref{4.3}?
+"
+['models']," Title: What is a model lineage?Body: We know about the lineage of datasets. Is there anything called "(ML) model lineage". What are all the works that had been remarkable regarding "model lineage"?
+There are few links available on internet which talk about model lineage. According to one of the articles[1], the definition of model lineage is as follows:
+
+Model Lineage keeps the history of a model: when it was trained, using which data, algorithms, and parameters. This should be automatically generated each time a new version of a model is generated.
+
+The next reference that I could find on the internet is the session by data bricks[2].
+Apart from these two links, I could not find many resources or standards regarding model lineage. It will be helpful if anyone could provide more resources or pointers towards this topic.
+
+- https://blog.tail.digital/en/we-need-to-talk-about-model-lineage/#:~:text=Model%20Lineage%20keeps%20the%20history,of%20a%20model%20is%20generated
+
+- https://databricks.com/session_na20/machine-learning-data-lineage-with-mlflow-and-delta-lake
+
+
+"
+"['architecture', 'facial-recognition', 'principal-component-analysis', 'content-based-image-retrieval']"," Title: Under what circumstances is a fully connected layer similar to PCA?Body: I am reading this paper on image retrieval where the goal is to train a network that produces highly discriminative descriptors (aka embeddings) for input images. If you are familiar with facial recognition architectures, it is similar in that the network is trained with matching / non-matching pairs and triplet loss.
+The paper discusses the use of PCA and whitening on the training set of descriptors as a means of further improving the discriminability (second to last block in image below, fig 1a of paper). This all make sense to me.
+
+Where I'm confused is where they replace PCA/whitening with a trainable fully connected layer with bias. I do understand that PCA+whitening is just the composition of two linear transformations (i.e. a rotation + (un)squishing in each dimension) and that these are the same as having one linear transformation, but:
+
+- How is PCA+whitening equivalent to a learnable fully connected layer? Is there some theorem or paper explaining that training a fully connected layer with triplet loss is somehow statistically equivalent to PCA and whitening?
+- Why is there a bias?
+
+"
+"['neural-networks', 'implementation', 'artificial-neuron', 'perceptron']"," Title: In practice, are perceptrons typically implemented as objects?Body: I'm fairly new to ANNs. I know the general structure, the math, and the algorithms behind them. I figured the logical next step on my journey to fully understanding them would to be implement one myself from scratch, even if it's a fairly small one.
+So I'm curious, coming from those who actually work with and deploy these things, are perceptrons/neurons typically implemented as objects with class variables, methods, etc. (kind of like nodes in a Linked List)? Or is there a more practical/memory-conservative way to build them?
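+To be concrete, this is the object-based style I am imagining (hypothetical class and variable names, just to illustrate the question):
+import random
+
+class Perceptron:
+    """One neuron that holds its own weights and bias, like a node object."""
+    def __init__(self, n_inputs):
+        self.weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
+        self.bias = 0.0
+
+    def forward(self, inputs):
+        total = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
+        return 1 if total > 0 else 0
+
+layer = [Perceptron(3) for _ in range(4)]          # a "layer" as a list of objects
+outputs = [p.forward([0.5, -1.0, 2.0]) for p in layer]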
+"
+"['reinforcement-learning', 'deep-rl', 'value-functions', 'function-approximation', 'state-spaces']"," Title: Is there any thumb rule on the cardinality of state space in order to use the parameterized function to estimate value functions?Body: Value functions for a given MDP can be learned in at least two ways by experience.
+
+- The first way (tabular calculation) is generally used in the case of state spaces that are small enough.
+
+- The second way (using parameterized functions) is generally used in the case of large state spaces.
+
+
+It can be understood from the following statement from section 3.7 of the first edition of Sutton and Barto's book.
+
+The value functions $V^{\pi}$ and $Q^{\pi}$ can be estimated from experience. For example,
+if an agent follows policy $\pi$ and maintains an average, for each state
+encountered, of the actual returns that have followed that state, then
+the average will converge to the state's value, $V^{\pi}$(s), as the number of
+times that state is encountered approaches infinity. If separate
+averages are kept for each action taken in a state, then these
+averages will similarly converge to the action values, $Q^{\pi}(s,a)$ . We call
+estimation methods of this kind Monte Carlo methods because they
+involve averaging over many random samples of actual returns. Of course, if there are
+very many states, then it may not be practical to keep separate
+averages for each state individually. Instead, the agent would have to
+maintain $V^{\pi}$ and $Q^{\pi}$ as parameterized functions and adjust the parameters to
+better match the observed returns. This can also produce accurate
+estimates, although much depends on the nature of the parameterized
+function approximator.
+
+Is there any rule of thumb or strict threshold, in the literature, on the cardinality of the state space above which parameterized functions should be used to estimate value functions?
+"
+"['game-ai', 'proofs', 'minimax', 'multi-agent-systems', 'alpha-beta-pruning']"," Title: Why is the number of examined nodes $ O(b^{3d/4})$ in $\alpha$-$\beta$ pruning?Body: I'm taking a course 'Introduction to AI' and, in one of the tutorials, it was written that when pruning the game tree using $\alpha$-$\beta$ boundaries, the number of nodes that will be developed, when using random sort function for the children (i.e., in the average case) will be $$ O(b^{3d/4})$$.
+Since the proof is not in the scope of the course we weren't given one at the tutorial, so I tried looking for the proof online, but couldn't find anything, and I didn't think of anything myself. I wondered whether someone could give me a lead or refer me to some reading material that shows the full proof?
+"
+"['machine-learning', 'python', 'time-series', 'clustering', 'algorithm-request']"," Title: Is there a technique for analyzing the relationship between time-series clusters?Body: I have two time-series datasets (temperature and speed of the vehicle). I will use Agglomerative Hierarchical Clustering and DTW to cluster both datasets.
+I am looking for a technique (like regression model) to find the relationship between two time-series clustered data. I am curious to find the relationship between changes in temperature and vehicle speed. Does anyone have an idea?
+"
+"['machine-learning', 'prediction', 'features', 'binary-classification']"," Title: How to train a machine learning model with multiple attributes and one target value?Body: I'm working on a machine learning problem where I need to guess which customers will churn and which of them will continue to be customers.
+I have attributes $X_0, X_1, X_2, X_3, X_4, X_5$ and $X_6$ representing whether they have credit cards, whether they are active customers, whether they have money in their accounts, etc. So, from these multiple $X$ values and the target value $Y$, which is either $0$ or $1$, I need to develop a model that can do the prediction.
+I have always worked with only one $X$ attribute and one target value $Y$. Right now, I'm confused about how I should work with multiple $X_n$ values.
+Any help is appreciated.
+"
+"['machine-learning', 'definitions', 'computational-learning-theory', 'vc-dimension']"," Title: How would you intuitively but rigorously explain what the VC dimension is?Body: The VC dimension is a very important concept in computational/statistical learning theory. However, the first time you read its definition, you may not immediately understand what it really represents or means, as it involves other concepts, such as shattering, hypothesis class, learning and sets. For example, let's take a look at the definition given by Shai Shalev-Shwartz and Shai Ben-David (p. 70)
+
+DEFINITION $6.5$ (VC-dimension) The VC-dimension of a hypothesis class $\mathcal{H}$, denoted $\operatorname{VCdim}(\mathcal{H})$, is the maximal size of a set $C \subset \mathcal{X}$ that can be shattered by $\mathcal{H}$. If $\mathcal{H}$ can shatter sets of arbitrarily large size we say that $\mathcal{H}$ has infinite VC-dimension.
+
+Without knowing what a hypothesis class is, or what the specific $C$, $X$ and $H$ in this definition are, it's difficult to understand this definition. Even if you are familiar with what a hypothesis class is (i.e. a set of sets, i.e. our set of functions/hypotheses/models, e.g. the set of all possible neural networks with a specific topology) and you know that $C$ and $X$ are sets of input points, it should still not be clear what the VC dimesion really is or represents.
+So, how would you intuitively and rigorously explain the exact definition of the VC dimension?
+Note that I am not asking for answers like
+
+The VC dimension represents the complexity (or expressive power, richness, or flexibility) of your model/hypothesis class.
+
+Of course, this is easy to memorize, but it's quite vague. So, I am not looking for vague/general answers. I am looking for answers that rigorously but intuitively describe the mathematical definition of the VC dimension. For example, you could provide an illustration that shows what the VC dimension is, and, in your example (e.g. the XOR problem cannot be solved by a set of lines), you can describe what $H$, $C$, and $X$ are, and how they relate to the typical concepts you will find in an introductory course to machine learning, but you should not forget to describe the concept of shattering. If you have other ideas of how to illustrate this concept mnemonically, feel free to provide an answer.
+"
+"['image-generation', 'data-augmentation']"," Title: Data augmentation for very small image datasetsBody: I am looking for techniques for augmenting very small image datasets. I have a classification problem with 3 classes. Each class consists of 20 different shapes. The shapes are similar between the classes, but the task is to identify which class the shapes belong to. Per shape, I have between 1 and 35 training examples. For two classes, I have 25 training examples per shape, but the number of examples per shape for the third classes is usually around 5. Now, what data augmentation schemes do you recommend? Geometric / affine transformations seem like a good place to start. However, I have also thought of applying Fast Fourier Transform (do the forward transform, add some noise, do the inverse transform). GANs seem infeasible, right? Not enough data, I suspect. In any case, I am grateful for your advice.
+"
+"['neural-networks', 'non-linear-regression', 'curve-fitting']"," Title: Neural Network for Picking Parameters of a Nonlinear Function to Data PointsBody: I'm trying to make a neural network in pytorch that picks the parameters of a nonlinear function, the radius and (x,y) center of a circle in the example below, based on a sample of values from the nonlinear function.
+More concretely, the neural network trained in the code below takes as input 100 (x,y) points on a circle and outputs radius, x_center, y_center of the circle.
+I don't consider this a very difficult problem, but the trained neural network doesn't work very well, as you can see from two example plots after the code. How can the code be improved?
+And in case this informs your recommendation, the goal is not to fit circles, which no one needs a neural network to do. I'm trying to use a neural network to calculate 9 parameters in a nonlinear function taking a single real valued input and outputting a complex number f(t) -> a + b*sqrt(-1). The input into the neural network is 54 complex values, and the output is 9 parameter values. I am guaranteed that the 54 complex input values can always be well approximated by f(t) with an appropriately picked 9 parameters. The parameters can easily be guessed by a human because different parameters intuitively change the shape of the complex curve, but I've been unable to use a minimization math algorithm for curve fitting. The problem is there are a lot of local minima the minimization algorithms can encounter before reaching the global minimum. The goal of the neural network is to get a good guess of the 9 parameters at the global minimum for a minimization math algorithm to be close to the global minimum initially, and thus converge to the global minimum rather than get stuck at a local minima.
+You probably guessed that I know a bit of math, but I don't know much about machine learning. I was able to pick it up pretty quickly because of my math background, but I am severely lacking in practical experience. I don't know what to do at this point other than randomly changing the number of samples on a circle, number of examples circles, adding more layers to the neural network, adding different types of layers to the neural network, changing the loss function, changing the learning rate, changing the optimizer, changing the loss function, et cetera, but I have no method to my madness.
+Post Script
+I've found someone who did almost what I need. This paper paired with this github repo used 1,000 samples in a set of 100,000 with 1% failure rate, so there's hope. I have to dig deeper for the innards of their neural network training.
+import torch
+import numpy as np
+import math
+import matplotlib.pyplot as plt
+
+#circle parameterized by t, < x(t) , y(t) >
+t_parameter = np.linspace(-math.pi, math.pi, 100)
+
+#create random radius,(x,y) center or circle paired with points on circle evaluated at all t in t_parameter
+examples = 1000
+max_radius = 4
+random_rxy = np.random.rand(examples,3)
+input_list = []
+for i in range(examples):
+ r_rand = random_rxy.item(i,0) * max_radius
+ x_rand = random_rxy.item(i,1) * 7 - 2 #-2 < x_rand < 5
+ y_rand = random_rxy.item(i,2) - 2 #-2 < y_rand < -1
+ x_coordinates = [r_rand*math.cos(t) + x_rand for t in t_parameter]
+ y_coordinates = [r_rand*math.sin(t) + y_rand for t in t_parameter]
+ input_list.append(x_coordinates + y_coordinates)
+input_tensor = torch.Tensor(input_list)
+output_tensor = torch.Tensor(random_rxy)
+
+print(input_tensor)
+'''
+tensor([[ x_0_0, x_0_1, ..., x_0_99, y_0_0, y_0_1, ..., y_0_99 ],
+ [ x_1_0, x_1_1, ..., x_1_99, y_1_0, y_1_1, ..., y_1_99 ],
+ [ x_2_0, x_2_1, ..., x_2_99, y_2_0, y_2_1, ..., y_2_99 ],
+ ...,
+ [ x_997_0, x_997_1, ..., x_997_99, y_997_0, y_997_1, ..., y_997_99 ],
+ [ x_998_0, x_998_1, ..., x_998_99, y_998_0, y_998_1, ..., y_998_99 ],
+ [ x_999_0, x_999_1, ..., x_999_99, y_999_0, y_999_1, ..., y_999_99 ]])
+'''
+print(output_tensor) #radious, x circle center, y circle center
+'''
+tensor([[r_0, x_0, y_0 ],
+ [r_1, x_1, y_1 ],
+ [r_2, x_2, y_2 ],
+ ...,
+ [r_997, x_997, y_997],
+ [r_998, x_998, y_998],
+ [r_999, x_999, y_999]])
+'''
+
+#define model and loss function.
+model = torch.nn.Sequential(
+ torch.nn.Linear(200, 200),
+ torch.nn.Tanh(),
+ torch.nn.Tanh(),
+ torch.nn.Linear(200, 3)
+)
+loss_fn = torch.nn.MSELoss(reduction='mean')
+
+#train model
+learning_rate = 1e-4
+optimizer = torch.optim.Adagrad(model.parameters(), lr=learning_rate)
+for t in range(10000):
+ # Forward pass: compute predicted y by passing x to the model.
+ output_pred = model(input_tensor)
+
+ # Compute and print loss.
+ loss = loss_fn(output_pred, output_tensor)
+ if t % 100 == 99:
+ print(t, loss.item())
+ '''
+ 99 0.0337635762989521
+ 199 0.0285916980355978
+ 299 0.025961756706237793
+ 399 0.024196302518248558
+ 499 0.022839149460196495
+ ...
+ 9799 0.004136151168495417
+ 9899 0.0040830159559845924
+ 9999 0.004030808340758085
+ '''
+
+ #typical procedure
+ optimizer.zero_grad()
+ loss.backward()
+ optimizer.step()
+
+print(output_tensor[0].tolist())
+print(output_pred[0].tolist())
+#[0.7722834348678589, 0.46600303053855896, 0.5080233812332153 ]
+#[0.7921068072319031, 0.46946045756340027, 0.49222415685653687]
+
+plt.xlabel('x')
+plt.ylabel('y')
+r_rand, x_rand, y_rand = output_tensor[0].tolist()
+plt.scatter([r_rand*math.cos(t) + x_rand for t in t_parameter],[r_rand*math.sin(t) + y_rand for t in t_parameter],label="Measured Data")
+r_rand, x_rand, y_rand = output_pred[0].tolist()
+plt.scatter([r_rand*math.cos(t) + x_rand for t in t_parameter],[r_rand*math.sin(t) + y_rand for t in t_parameter],label="Fit Data")
+plt.legend(loc='upper right')
+plt.tight_layout()
+plt.show()
+
+
+
+"
+"['natural-language-processing', 'programming-languages', 'c++', 'gpt-3', 'c']"," Title: What language is the GPT-3 engine written in?Body: I know that the API is python based, but what's the gpt-3 engine written in mostly? C? C++? I'm having some trouble finding this info.
+"
+"['terminology', 'evolutionary-algorithms']"," Title: Is there a name for this approach to evolutionary algorithms?Body: I am considering an approach to evolutionary algorithms, in which instead of maintaining a population of individuals, we maintain a pool of $N$ mutations that can be applied to a base genome. For every possible subset (or many possible subsets) of the mutation pool, we apply that subset to the base genome to produce an individual, then test that individual. We discard mutations for which the population with that mutation performs worse than the population without it. We merge the best-performing mutations into the base genome for the next generation. Then we replace the discarded or merged mutations with new ones.
+Is this known under some name? Is it a good idea?
+"
+"['agi', 'cognitive-science', 'cognitive-architecture']"," Title: Why can't cognitive architectures achieve general intelligence?Body: Newbie here.
+I recently read about cognitive architectures (see: https://en.wikipedia.org/wiki/Cognitive_architecture). They are supposed to be modeled after the human mind and represent a promising approach towards artificial general intelligence (AGI).
+My question is, however, why haven't these cognitive architectures achieved AGI yet? What are the specific limitations and roadblocks that cognitive architectures face?
+"
+"['reinforcement-learning', 'proofs', 'value-functions', 'sutton-barto', 'policy-evaluation']"," Title: Is the existence and uniqueness of the state-value function for $\gamma < 1$ theoretical?Body: Consider the following statement from 4.1 Policy Evaluation of the first edition of Sutton and Barto's book.
+
+The existence and uniqueness of $V^{\pi}$ are guaranteed as long as
+either $\gamma < 1$ or eventual termination is guaranteed from all
+states under the policy $\pi$.
+
+I have a doubt about the first condition, $\gamma < 1$. If $\gamma < 1$, then in practice $\gamma^k$ becomes zero for sufficiently large $k$, but that is entirely due to the finite-precision hardware architecture. In theory, $\gamma^k$ can never be zero; it only approaches zero.
+In this context, how can the condition $\gamma < 1$ assure the existence and uniqueness of $V^{\pi}$ theoretically?
+"
+"['machine-learning', 'python', 'statistical-ai', 'model-based-methods', 'statistics']"," Title: What would be the reason behind using plots (such as box-plots or histograms) for ML development?Body: I've been learning Python machine-learning using this project report and the guy who wrote it begins by visualizing his data using various statistical analysis methods: histograms, density plots, box plots, scatter plots, etc.
+The problem is that he doesn't explain what this is for. The only detail he provides is that "univariate plots help to understand each attribute" and "multivariate plots help to understand the relationships between attributes."
+What would be the reason behind using these plots for ML development? Do they help you to determine which algorithm(s) you should try? If so, how? Can anyone explain the main points or maybe point me to a resource that will help?
+"
+"['neural-networks', 'machine-learning', 'reference-request', 'activation-functions']"," Title: Is there literature on Neural Network with activation functions of bounded domain?Body: I think to have found a somewhat interesting connection between neural networks and another area of mathematics. However, it requires the activation functions in the network to have a bounded - ideally small - domain. For the sake of simplicity, I am restricting this to feedforward networks.
+My approach has been the following: Assuming bounded input and weights, a maximal input can be derived. Before each application of an activation function, I thus just scale down the range from the maximal one to the one permitted.
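+Concretely, the scaling I described looks roughly like this (a sketch with made-up bounds, not a full network):
+import numpy as np
+
+def bounded_preactivation(x, W, weight_bound, input_bound, domain_bound):
+    """Scale the pre-activation from its worst-case range into the allowed domain."""
+    z = W @ x
+    # with |x_i| <= input_bound and |W_ij| <= weight_bound, |z_j| <= max_abs
+    max_abs = W.shape[1] * weight_bound * input_bound
+    return z * (domain_bound / max_abs)
+
+x = np.array([0.3, -0.8, 0.5])                  # inputs assumed in [-1, 1]
+W = np.random.uniform(-1, 1, size=(4, 3))       # weights assumed in [-1, 1]
+z = bounded_preactivation(x, W, weight_bound=1.0, input_bound=1.0, domain_bound=0.5)
+# z now lies in [-0.5, 0.5], the bounded domain of the activation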
+This however causes nearly all weights that are not close (in absolute terms) to the maximal ones to lead to very small outputs of the network, meaning nearly all weight combinations lead to outputs near zero. The network thus has issues learning even simple tasks.
+My question, therefore, is: Has anyone ever studied these issues and maybe found a network architecture that works well with this? Or another solution for bounded domains?
+"
+"['deep-learning', 'computer-vision', 'tensorflow', 'image-segmentation', 'u-net']"," Title: Unet Overfitting for binary segmentation of fake imagesBody: I am working on a project where I am trying to detect and localize forgeries in images. I am using the CASIA v2 dataset and using Unet model for the task. I have the binary masks of all the images in the CASIA v2 dataset. The metric I am using for the model are F1 score.
+The issue with the model is that it is highly overfitting, the validation loss plateaus up.
+Batch size is 128 and Learning rate is 0.000001. Image size is 128 x 128.
+Updated graph for batch size 16 with the changes mentioned by @spb is as follows:
+
+I have also tried using a learning rate scheduler to decrease the learning rate (starting with a high learning rate) on plateaus, but that didn't help much.
+I am also using the package Albumentations for data augmentation of both the images and its masks. I load the images and the masks and then apply the augmentations and save the augmented images and masks in a separate arrays and finally extend the original images and masks with the augmented images and masks. So technically I have original plus the augmented images and masks that I use for training the model. The augmentations I am using are:
+Augment = A.Compose([
+    A.VerticalFlip(p=0.5),
+    A.RandomRotate90(p=0.5),
+    A.HorizontalFlip(p = 0.5)
+])
+
+I have split the dataset into 70% Training, 20% Validation and 10% for testing.
+Here is a snippet of my model. Updated Code below
+def conv2d_block(input_tensor, n_filters, kernel_size = 3, batchnorm = True):
+    """Function to add 2 convolutional layers with the parameters passed to it"""
+    # first layer
+    x = Conv2D(filters = n_filters, kernel_size = (kernel_size, kernel_size),
+               kernel_initializer = 'he_normal', padding = 'same')(input_tensor)
+    if batchnorm:
+        x = BatchNormalization()(x)
+    x = Activation('relu')(x)
+
+    # second layer
+    x = Conv2D(filters = n_filters, kernel_size = (kernel_size, kernel_size),
+               kernel_initializer = 'he_normal', padding = 'same')(input_tensor)
+    if batchnorm:
+        x = BatchNormalization()(x)
+    x = Activation('relu')(x)
+
+    return x
+
+def get_unet(input_img, n_filters = 16, dropout = 0.1, batchnorm = True):
+    """Function to define the UNET Model"""
+    # Contracting Path
+    c1 = conv2d_block(input_img, n_filters * 1, kernel_size = 3, batchnorm = batchnorm)
+    p1 = MaxPooling2D((2, 2))(c1)
+    #p1 = Dropout(dropout)(p1)
+
+    c2 = conv2d_block(p1, n_filters * 2, kernel_size = 3, batchnorm = batchnorm)
+    p2 = MaxPooling2D((2, 2))(c2)
+    #p2 = Dropout(dropout)(p2)
+
+    c3 = conv2d_block(p2, n_filters * 4, kernel_size = 3, batchnorm = batchnorm)
+    p3 = MaxPooling2D((2, 2))(c3)
+    #p3 = Dropout(dropout)(p3)
+
+    c4 = conv2d_block(p3, n_filters * 8, kernel_size = 3, batchnorm = batchnorm)
+    p4 = MaxPooling2D((2, 2))(c4)
+    #p4 = Dropout(dropout)(p4)
+
+    c5 = conv2d_block(p4, n_filters * 16, kernel_size = 3, batchnorm = batchnorm)
+    p5 = MaxPooling2D((2, 2))(c5)
+    #p5 = Dropout(dropout)(p5)
+
+    c6 = conv2d_block(p5, n_filters = n_filters * 32, kernel_size = 3, batchnorm = batchnorm)
+
+    # Expansive Path
+    u7 = Conv2DTranspose(n_filters * 16, (3, 3), strides = (2, 2), padding = 'same')(c6)
+    u7 = concatenate([u7, c5])
+    u7 = Dropout(dropout)(u7)
+    c7 = conv2d_block(u7, n_filters * 16, kernel_size = 3, batchnorm = batchnorm)
+
+    u8 = Conv2DTranspose(n_filters * 8, (3, 3), strides = (2, 2), padding = 'same')(c7)
+    u8 = concatenate([u8, c4])
+    u8 = Dropout(dropout)(u8)
+    c8 = conv2d_block(u8, n_filters * 8, kernel_size = 3, batchnorm = batchnorm)
+
+    u9 = Conv2DTranspose(n_filters * 4, (3, 3), strides = (2, 2), padding = 'same')(c8)
+    u9 = concatenate([u9, c3])
+    u9 = Dropout(dropout)(u9)
+    c9 = conv2d_block(u9, n_filters * 4, kernel_size = 3, batchnorm = batchnorm)
+
+    u10 = Conv2DTranspose(n_filters * 2, (3, 3), strides = (2, 2), padding = 'same')(c9)
+    u10 = concatenate([u10, c2])
+    u10 = Dropout(dropout)(u10)
+    c10 = conv2d_block(u10, n_filters * 2, kernel_size = 3, batchnorm = batchnorm)
+
+    u11 = Conv2DTranspose(n_filters * 1, (3, 3), strides = (2, 2), padding = 'same')(c10)
+    u11 = concatenate([u11, c1])
+    u11 = Dropout(dropout)(u11)
+    c11 = conv2d_block(u11, n_filters * 1, kernel_size = 3, batchnorm = batchnorm)
+
+    outputs = Conv2D(1, (1, 1), activation='sigmoid')(c11)
+    model = Model(inputs=[input_img], outputs=[outputs])
+    return model
+
+Currently I am not using the dropout as it leads to higher validation loss plateaus in my case.
+The F1 score and F1 loss I am calculating are as follows
+def f1(y_true, y_pred):
+
+    y_pred = K.round(y_pred)
+    tp = K.sum(K.cast(y_true*y_pred, 'float'), axis=0)
+    tn = K.sum(K.cast((1-y_true)*(1-y_pred), 'float'), axis=0)
+    fp = K.sum(K.cast((1-y_true)*y_pred, 'float'), axis=0)
+    fn = K.sum(K.cast(y_true*(1-y_pred), 'float'), axis=0)
+
+    p = tp / (tp + fp + K.epsilon())
+    r = tp / (tp + fn + K.epsilon())
+
+    f1 = 2*p*r / (p+r+K.epsilon())
+    f1 = tf.where(tf.is_nan(f1), tf.zeros_like(f1), f1)
+    return K.mean(f1)
+
+def f1_loss(y_true, y_pred):
+
+    tp = K.sum(K.cast(y_true*y_pred, 'float'), axis=0)
+    tn = K.sum(K.cast((1-y_true)*(1-y_pred), 'float'), axis=0)
+    fp = K.sum(K.cast((1-y_true)*y_pred, 'float'), axis=0)
+    fn = K.sum(K.cast(y_true*(1-y_pred), 'float'), axis=0)
+
+    p = tp / (tp + fp + K.epsilon())
+    r = tp / (tp + fn + K.epsilon())
+
+    f1 = 2*p*r / (p+r+K.epsilon())
+    f1 = tf.where(tf.is_nan(f1), tf.zeros_like(f1), f1)
+    return 1 - K.mean(f1)
+
+I have also tried using other losses, like focal_tversky, but got similar results.
+What can be the issue and how can I solve it?
+Is it
+
+- Issue with my data like presence of outliers
+- Model related issue
+- Batch size and Learning rate related issue
+- Or anything else?
+
+Your help in this regard is really appreciated, as I really need to solve this soon.
+"
+"['convolutional-neural-networks', 'convolution']"," Title: What is the difference between same convolution and full convolution in terms of feature map size?Body: In valid convolution, the size of the output shrinks at each layer. So after some point of time additional layers cannot meaningfully performs convolution. For this reason, same convolution is introduced, where where the size of the output remains intact. This is achieved by padding with enough number of zeroes at the borders of input image.
+What happens to the size of the output feature map in the case of full convolution?
+If it remains intact, then what is the difference between same convolution and full convolution?
+"
+['training']," Title: Is it possible to design an AI with two inputs and a Boolean output?Body: I am having a difficult time explaining to my boss that what he is trying to achieve may not be possible or within reason. We have a database of 3 Million data points per computer across hundreds of machines and when any data point is updated, changed, or removed. Some of these data points are the number of times a computer has been logged in, the names of printers attach, folders on the root of the drive. Some of the data points we do care about, others like a printer being out of ink, we don't care about but the same method would return if the printer was offline which we do care about.
+He wants to design an AI that would check these data points and return with true or false on whether the data point is significant when they are changed, removed or updated. We are storing the name of the method to retrieve the data, the current data, all previous data, and the time the change was made. I can not foresee a way to train the data efficiently as we currently don't know which methods retrieve significant data or which values are not significant within the method.
+Is it possible to design such an AI without hours of supervisor learning?
+"
+"['reinforcement-learning', 'policy-gradients', 'sutton-barto', 'on-policy-methods', 'discount-factor']"," Title: If $\gamma \in (0,1)$, what is the on-policy state distribution for episodic tasks?Body: In Reinforcement Learning: An Introduction, section 9.2 (page 199), Sutton and Barto describe the on-policy distribution in episodic tasks, with $\gamma =1$, as being
+\begin{equation}
+\mu(s) = \frac{\eta(s)}{\sum_{k \in S} \eta(k)},
+\end{equation}
+where
+\begin{equation}
+\eta(s) = h(s) + \sum_{\bar{s}} \eta(\bar{s}) \sum_a \pi(a|\bar{s})p(s|\bar{s},a), \text{ (9.2)}
+\end{equation}
+is the number of time steps spent, on average, in state $s$ in a single episode.
+Another way to represent this is setting $\eta(s) = \sum_{t=0}^{\infty} d_{j,s}^{(t)}$, the average number of visits to $s$ starting from $j$, and $d_{j,s}^{(t)}$ being the probability of going from $j$ to $s$ in $t$ steps under policy $\pi_{\theta}$. In particular, $d_{j,s}^{(1)} = d_{j,s} = \sum_{a \in A}[\pi_{\theta}(a|j)P(s|j,a)]$. This formulation is obtained through the page 325 proof of the Policy Gradient Theorem (PGT) and some basic stochastic processes theory.
+If instead of defining $\gamma = 1$, we prove PGT using any $\gamma \in (0,1)$, we would get
+\begin{equation*}
+\eta_{\gamma}(s) = \sum_{t=0}^{\infty} \gamma^t d_{j,s}^{(t)}
+\end{equation*}
+This is not anymore the average number of visits to $s$. Here comes my first question. Would we still get a $\mu_{\gamma}$ on-policy distribution through the same trick done before? That is
+\begin{equation}
+\mu_{\gamma}(s) = \frac{\eta_{\gamma}(s)}{\sum_{k \in S} \eta_{\gamma}(k)},
+\end{equation}
+would be the on-policy distribution?
+My second question is related and has to do with the phrase on page 199, which says:
+
+If there is discounting ($\gamma <1$) it should be treated as a form of termination, which can be done simply by including a factor of $\gamma$ in the second term of (9.2).
+
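+If I read that sentence correctly, the modified recursion would be the following (this is my own reading, not an equation from the book):
+\begin{equation*}
+\eta_{\gamma}(s) = h(s) + \gamma \sum_{\bar{s}} \eta_{\gamma}(\bar{s}) \sum_a \pi(a|\bar{s})p(s|\bar{s},a)
+\end{equation*}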
+What do the authors mean by "as a form of termination"?
+As inferred by my previous question, my conclusion is that having $\gamma < 1$ does not alter $\mu_{\gamma}$ being the on-policy distribution, so I don't get this last comment on page 199.
+"
+"['machine-learning', 'deep-learning', 'image-processing', 'image-segmentation']"," Title: Predicting the the motion of a 3D object when the motion of a set of markers is knownBody: trying to figure out where to get started with this:
+I have a few hundred CT images where certain three-dimensional features in the image (anatomy) are moving in a correlated fashion with a set of radio-opaque markers. These anatomic features can rotate, translate and deform and the markers can move together or sometimes in relation to each other. The position and motion of the markers are indicative of the position and motion of the anatomy in which they are embedded in.
+I'd like to develop a model whereby given the positions and motion of the markers I can then predict the position and motion of the full anatomy in which they are embedded and use this for segmentation.
+What deep-learning software and techniques are suited for this type of problem?
+Thanks!
+"
+"['machine-learning', 'convolutional-neural-networks', 'image-recognition', 'datasets']"," Title: Cnn for Combination of both digits and letters(small and capital)Body: Hi I am new to machine learning can anyone suggest open dataset consists of both digits and letters(small,capital)
+I want images consisisting of both digits and letters to train my cnn model and make the model recognize the real time images
+Can anyone please suggest me that dataset link
+Thanks in advance
+"
+"['reinforcement-learning', 'policy-gradients', 'multi-agent-systems', 'continuous-action-spaces', 'advantage-actor-critic']"," Title: Policy Gradient ( Advantage actor-critic) for multiple simultaneous continuous actionsBody: i'm trying to solve a problem in which i need to carry out reinforcement learning with multiple simultaneous actions in continuous action space . i checked the multiagent structure; however, im trying to solve a problem in which there are difficulties to set up connection between the agents. for instance, they should take actions simultaneously so there is no way they can be aware of each other's actions. so i decided to go with the multivariate normal solution. has anybody tried that out ever?
+first of all i have have difficulties finding the covariance matrix. since it has to be PSD so i decided to assume covariance is zero. something like:
+covariance matrix = [[variance1 0][0 variance2]]
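+In code, this is roughly how I sample the simultaneous actions from the diagonal Gaussian (a simplified sketch with placeholder numbers and scipy instead of my actual network code):
+import numpy as np
+from scipy.stats import multivariate_normal
+
+mean = np.array([0.3, 0.7])    # outputs of the policy network (assumed values)
+var = np.array([0.05, 0.10])   # predicted variances (assumed values)
+cov = np.diag(var)             # diagonal covariance, so it stays PSD
+
+action = multivariate_normal.rvs(mean=mean, cov=cov)                # two simultaneous actions
+log_prob = multivariate_normal.logpdf(action, mean=mean, cov=cov)   # used in the policy-gradient update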
+But that's not everything: the agent doesn't seem to be learning. The problem to be solved by the agent is about resource allocation, so the "mean" cannot be negative, which is why I decided to go with the "ReLU" activation function for the output of the neural network. Surprisingly, the mean is usually zero, so, as you can guess, it is updating the weights in a way that does nothing (a negative mean). On the other hand, the variances are on the rise. Although I have checked it a million times, there might no doubt be a flaw in the code of the environment. I just wanted to make sure whether it is mathematically OK to go this way. I checked for papers and found a bunch of them, but they don't seem to be enough. I would appreciate any guidance.
+"
+"['reinforcement-learning', 'q-learning', 'deep-rl', 'dqn', 'double-dqn']"," Title: Update Rule with Deep Q-Learning (DQN) for 2-player gamesBody: I am wondering how to correctly implement the DQN algorithm for two-player games such as Tic Tac Toe and Connect 4. While my algorithm is mastering Tic Tac Toe relatively quickly, I cannot get great results for Connect 4. The agent is learning to win quickly, if it gets the chance, but it only plays in the centre. It is unable to detect threats in the first and last columns. I am using DDQN with memory replay. Also teacher
and student
refer to two agents at different strengths, while the teacher is frequently replaced by a new student. My algorithm looks simplified as follows:
+for i in range(episodes):
+ observation = env.reset()
+ done = False
+ while not done:
+ if env.turn == 1:
+ action = student.choose_action(observation)
+ observation_, reward, done, info = env.step(action)
+            loss = student.learn(observation, action, reward, observation_, done)
+ observation = observation_
+ else:
+ action = teacher.choose_action(-observation)
+ observation_, reward, done, info = env.step(action)
+ observation = observation_
+
+The observation is -1 for player "o", 1 for player "x" and 0 for empty. The agent learns to play as player "x" and through action = teacher.choose_action(-observation) it should find the best move for player "o". I hope that is clear.
+The update rule looks as follows:
+# Get predicted q values for the actions that were taken
+q_pred = Q_eval.forward(state, action)
+# Get Q value for opponent's next move
+state_ *= -1.
+q_next = Q_target.forward(state_, max_action)
+# Update rule
+q_target = reward_batch - gamma * q_next * terminal
+loss = Q_eval.loss(q_pred, q_target)
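+Written out for a single transition, this is the target I intend (my own restatement of the rule above, not the actual batched code):
+import numpy as np
+
+def one_step_target(reward, next_state, done, gamma, q_values_fn):
+    # q_values_fn returns the Q-values of all actions for a given board
+    if done:
+        return reward
+    opponent_q = q_values_fn(-next_state)        # the opponent sees the flipped board
+    return reward - gamma * np.max(opponent_q)   # their best move is my worst case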
+
+I am using -gamma * q_next * terminal, because the reward is negative if the opponent wins in the next move. Am I missing anything important, or is it just a question of hyperparameter tuning?
+"
+"['reinforcement-learning', 'proofs', 'monte-carlo-methods', 'policy-improvement-theorem']"," Title: When showing that the policy improvement theorem applies to MC control, why is $q_{\pi_{k}}\left(s, \pi_{k}(s)\right) \geq v_{\pi_{k}}(s)$ true?Body: When discussing why the policy improvement theorem is true, when we do Monte Carlo control by updating greedily, it says on page 98 of Sutton and Barto's book (2nd edition) that:
+$$
+\begin{aligned}
+q_{\pi_{k}}\left(s, \pi_{k+1}(s)\right)
+&=
+q_{\pi_{k}}\left(s, \underset{a}{\arg \max } q_{\pi_{k}}(s, a)\right)
+\\
+&=
+\max _{a} q_{\pi_{k}}(s, a)
+\\
+&
+\geq
+q_{\pi_{k}}\left(s, \pi_{k}(s)\right)
+\\
+&
+\geq v_{\pi_{k}}(s) \end{aligned}
+$$
+I don't understand why the last inequality is not an equality.
+The policy improvement theorem was derived on page 78 for deterministic policies.
+So, the $\pi_{k}(s)$ we see in $q_{\pi_k}(s, \pi_{k}(s))$ is a fixed action, call it $a'$. Then, in this case, since $v_{\pi_k}(s)= \sum_a \pi_k(a|s) q_{\pi_k}(s, a) = 1 * q_{\pi_k}(s, a') = q_{\pi_k}(s, a')$ (where the second equality is because the probability of all other actions is 0), shouldn't the last inequality be an equality? When is it possible that we have a greater than relation?
+"
+"['reinforcement-learning', 'proximal-policy-optimization']"," Title: How should I interpret the surrogate and mean_noise_std plots of training a PPO model (from the Nvidia's Isaac gym)?Body: I am currently using the PPO method from the Nvidia's Isaac gym to train an agent for my robot. Below, you can see the plot which corresponds to a training process.
+I know that something is massively wrong with my training process (since the robot does not manage to get a nice policy), so I am trying to understand the training process more with the help of the below values being logged during the problematic training.
+
+
+So far, I could only work out what the value function plot means: the reward function stabilizes, thus the value function loss also stabilizes, which means that my robot should explore more, fail, and learn from those failures!
+But what about the other two plots, surrogate and mean_noise_std? How should one interpret those values?
+
+The ideal training process
+
+
+
+"
+"['reinforcement-learning', 'deep-rl', 'papers']"," Title: Understanding Generalized Advantage Estimate in reinforcement learningBody: I was reading the paper on Generalized Advantage Estimate. It first introduces a generalized form of policy gradient equation without involving $\gamma$ and then it says the following:
+
+We will introduce a parameter $\gamma$ that allows us to reduce variance by downweighting rewards corresponding to delayed effects, at the cost of introducing bias. This parameter corresponds to the discount factor used in discounted formulations of MDPs, but we treat it as a variance reduction parameter in an undiscounted problem.
+
+I know the Monte Carlo estimate of value function is given as:
+$$V(s_t)=\sum_{l=t}^\infty \gamma^{l-t}r_l$$
+The bootstrapped estimate of value function is given as:
+$$V(s_t)=r_t+\gamma V(s_{t+1})$$
+(In both equations, $\gamma$ is a discount factor.)
+The bootstrapped estimate is biased because it is based on $V(s_{t+1})$, which is usually a biased estimate produced by some estimator, such as a neural network. The Monte Carlo estimate is unbiased because it contains all rewards sampled from the environment. In this case, however, because the agent might take a lot of actions over the course of an episode, it's hard to assign credit to the right action, which means that a Monte Carlo estimate will have high variance.
+Does this contradict what the paper says: "a parameter $\gamma$ that allows us to reduce variance"? Or does it simply mean the following: a lower $\gamma$ gives smaller weights to distant future rewards, thus making the value estimate less dependent on them and reducing the variance in comparison to a larger $\gamma$, which makes distant future rewards contribute significantly to the value estimate. So the introduction/existence of $\gamma$ itself does not reduce the variance, but rather gives a way to increase or decrease it.
+"
+"['machine-learning', 'natural-language-processing', 'prediction', 'data-preprocessing', 'feature-engineering']"," Title: Do the training and test datasets need to be equally preprocessed as one whole dataset?Body: I have developed, trained and tested an NLP model. It is persisted in a pickle file. The model contains the data preprocessing function that includes text cleaning and new features engineered with word2vec.
+With the trained model, I want to make predictions on a new text. The new text data, after preprocessing, won't contain the same engineered features of the training dataset.
+Therefore my question is, how can the trained model make predictions on the new dataset as it has different engineered features (different numbers of columns and different columns)?
+Should I preprocess the new text data and the training dataset as one dataset?
+"
+"['reinforcement-learning', 'monte-carlo-methods', 'off-policy-methods', 'sutton-barto']"," Title: In off-policy MC control algorithm by Sutton & Barto, why do we perform a last update when sample action is inconsistent with target policy?Body: I have a question about the $W$ term in the off-policy MC control algorithm on Page 111 of Sutton & Barto. I have also included it in the figure below.
+
+My question: shouldn't the check $A_{t} = \pi(S_{t})$ be made before updating $C(S_{t}, A_{t})$ and $Q(S_{t}, A_{t})$? And, at this point if $A_{t} \neq \pi(S_{t}) $ then the inner loop should exit before updating $Q(\cdot)$. If $A_{t} = \pi(S_{t})$ then shouldn't $W$ be updated to
+$W = W \frac{1}{b(A_{t}|S_{t})}$ before updating the $Q(s, a)$ and $C(s, a)$ functions?
+The algorithm as stated seems problematic to me. For example, say the target policy $\pi$ is deterministic and the behavior policy $b$ is stochastic. If in period $T-1$ the behavior policy takes an action that is not consistent with $\pi$, then the importance sampling ratio $\rho_{T-1:T-1} = 0$. However, the algorithm as shown would update $Q(S_{T-1}, A_{T-1})$, since the checks I referred to above don't occur until the end of the inner loop. What am I missing here?
+"
+"['deep-learning', 'tensorflow', 'training']"," Title: What is the process working on Tensorflow model.fit()?Body: I created a binary image classification model. The dataset contains about 500K images in each class, with ratio = Train : Validation : Test = 7 : 2 : 1. Total images = 1M
+I split my dataset into 5 parts (compute constraints)—5 training subsets, 5 validation subsets, and 1 test subset.
+I trained and evaluated my model stage by stage. In the first stage (evaluation), my model's accuracy was 65%. I re-fitted it with the 2nd subset and the accuracy was 43%. I did the same with the rest, and my accuracies were: 65%, 43%, 57%, 21%, 30%.
+How can I train my model in staged training?
+I want to train the model on the different subsets without reinitializing the weights in every training run.
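+This is roughly what I mean by staged training (a simplified sketch; the subset variables, hyperparameters and checkpoint path are placeholders):
+# reuse the SAME compiled model object for every stage, so the weights carry over
+for x_tr, y_tr, x_va, y_va in zip(train_subsets, train_labels, val_subsets, val_labels):
+    model.fit(x_tr, y_tr, validation_data=(x_va, y_va), epochs=5, batch_size=32)
+    model.save_weights('stage_checkpoint.h5')   # assumed checkpoint path
+model.evaluate(x_test, y_test)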
+"
+"['neural-networks', 'machine-learning']"," Title: Tensforflow schedule - does not change boundariesBody: I'm trying to manipulate the learning rate with tf PiecewiseConstantDecay.
+I can easily check if the algorithm switches learning rate values, because one rate is extremely low 1e-20 !!
+However, NO setting of the "boundaries" causes the algorithm to switch learning rate... What am I doing wrong?
+step = tf.Variable(0, trainable=False)
+boundaries = [100]
+values = [1e2, 1e-20]
+
+schedule = tf.optimizers.schedules.PiecewiseConstantDecay(boundaries, values)
+lr = 1e-4 * schedule(step)
+
+optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
+model.compile(optimizer=optimizer, loss='mean_squared_error')
+
+history = model.fit(x = x_train, y = y_train, validation_data=(x_val, y_val), epochs=100,batch_size=32)
+
+Would love your input.
+"
+"['deep-learning', 'convolutional-neural-networks', 'backpropagation', 'convolution', 'convolutional-layers']"," Title: Convolutional Layer Multichannel Backpropagation ImplementationBody: I have been working on coding a CNN in python from scratch using numpy as a semester project and I think I have successfully implemented it up to backpropagation in the MaxPool Layers. However, my model seems to never converge whenever there is a Convolutional Layer(s) added. I am assuming there is a problem with the way I have implemented the backpropagation.
+Most examples that I have seen for this implementation either really simplify it by using a one-channel input and a single one-channel filter, or just dive straight into the Mathematics which doesn't only not help but also confuses me more.
+Here is the way I have tried to implement both Forward and Backward Propagation for multichannel inputs and outputs based on my own understanding and things I read online.
+Forward Prop:
+
+Backward Prop for Filter Gradients:
+
+Backward Prop for Input Gradients:
+
+Kindly point out anything that's wrong here. I have been working on this part for the last 2 days but there has to be a problem because my model never seems to converge.
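+(In case the images above are not readable, here is a plain-numpy restatement of the multichannel forward pass I am trying to match; this is my own summary, assuming no stride or padding, not the exact code from the images.)
+import numpy as np
+
+def conv_forward(x, w, b):
+    # x: (C_in, H, W), w: (C_out, C_in, kH, kW), b: (C_out,)
+    C_out, C_in, kH, kW = w.shape
+    H_out, W_out = x.shape[1] - kH + 1, x.shape[2] - kW + 1
+    out = np.zeros((C_out, H_out, W_out))
+    for co in range(C_out):
+        for i in range(H_out):
+            for j in range(W_out):
+                # each output channel sums over ALL input channels
+                out[co, i, j] = np.sum(x[:, i:i+kH, j:j+kW] * w[co]) + b[co]
+    return out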
+Thanks!
+"
+"['neural-networks', 'machine-learning', 'reference-request', 'autonomous-vehicles']"," Title: What are the typical sizes of practical/commercial artificial neural networks?Body: I'm interested in artificial neural networks (ANN) and I wonder how big ANNs in practical use are, for example, Tesla Autopilot, Google Translate, and others.
+The only thing I found about Tesla is this one:
+
+"A full build of Autopilot neural networks involves 48 networks that
+take 70,000 GPU hours to train. Together, they output 1,000 distinct
+tensors (predictions) at each timestep."
+
+It seems like most companies don't publish clear information about their ANN sizes. I really can't find anything detailed on this subject.
+Is there any information about the size of big practical/commercial ANNs that include something like the amount of neurons/connections/layers etc.?
+I'm looking for a few examples in this scale with more precise information on the size of the neural networks.
+"
+"['machine-learning', 'proofs', 'function-approximation', 'statistical-ai', 'probability-theory']"," Title: Why is the equation $\mathbb{E} \left[ (Y - \hat{Y})^2 \right] = \left(f(X) - \hat{f}(X) \right)^2 + \operatorname{Var} (\epsilon)$ true?Body: In the book An Introduction to Statistical Learning, the authors claim (equation 2.3, p. 19, chapter 2)
+$$\mathbb{E} \left[ (Y - \hat{Y})^2 \right] = \left(f(X) - \hat{f}(X) \right)^2 + \operatorname{Var} (\epsilon) \label{0}\tag{0},$$
+where
+
+- $Y = f(X) + \epsilon$, where $\epsilon \sim \mathcal{N}(0, \sigma)$ and $f$ is the unknown function we want to estimate
+- $\hat{Y} = \hat{f}(X)$ is the output of our estimate of $f$, i.e. $\hat{f} \approx f$
+
+They claim that this is easy to prove, but this may not be easy to prove for everyone. So, why is equation \ref{0} true?
+"
+"['applications', 'transformer', 'sequence-modeling', 'efficiency', 'seq2seq']"," Title: Are there any successful applications of transformers of small size (<10k weights)?Body: In the problems of NLP and sequence modeling, the Transformer architectures based on the self-attention mechanism (proposed in Attention Is All You Need) have achieved impressive results and now are the first choices in this sort of problem.
+However, most of the architectures, which appear in the literature, have a lot of parameters and are aimed at solving rather complicated tasks of language modeling ([1], [2]). These models have a large number of parameters and are computationally expensive.
+There exist multiple approaches to reduce the computational complexity of these models, like knowledge distillation or multiple approaches to deal with the $O(n^2)$ computational complexity of the self-attention ([3], [4]).
+However, these models are still aimed at language modeling and require quite a lot of parameters.
+I wonder whether there are successful applications of transformers with a very small number of parameters (1k-10k), in the signal processing applications, where inference has to be performed in a very fast way, hence heavy and computationally expensive models are not allowed.
+So far, the common approaches are CNN or RNN architectures, but I wonder whether there are some results, where lightweight transformers have achieved SOTA results for these extremely small models.
+"
+"['machine-learning', 'convolutional-neural-networks', 'generative-adversarial-networks', 'image-processing', 'style-transfer']"," Title: In style transfer, why does the comparison between channels give a good sense of style?Body: I have been learning about Style Transfer recently. Style is defined as
+
+The correlation of activations between channels.
+
+I can't seem to understand why that would be true. Intuitively, style seems to be the patterns that exist in one particular channel/image rather than the patterns between channels. When filters in CNNs have different weights for acting as filters for different channels, why do we even expect 2 channels to be correlated? And further, why do we expect the style to be conveyed by it?
+I expected a style function that could compare activations in some layer of a CNN condensed into one channel so that an algorithm can search for which activations occur simultaneously and hold style information.
+I understand how we are carrying out the operations with the matrix and defining the loss function; what I don't get is why we are assuming that style information lies in the correlation between channels in the first place.
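+(For reference, the matrix operation I am referring to is the Gram matrix of the layer activations; a minimal sketch of what I mean:)
+import numpy as np
+
+def gram_matrix(features):
+    # features: activations of one layer, shape (C, H, W)
+    C, H, W = features.shape
+    F = features.reshape(C, H * W)     # one row per channel
+    return F @ F.T / (H * W)           # (C, C) matrix of channel-by-channel correlations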
+"
+"['reinforcement-learning', 'deep-rl', 'discrete-action-spaces', 'multi-agent-rl']"," Title: Is there a multi-agent deep reinforcement learning algorithm which is for environments with only discrete action spaces (Not hybrid)?Body: Is there a multi-agent deep reinforcement learning algorithm which is for environments with only discrete action spaces (Not hybrid) and have centralized training?
+I have been looking for algorithms, (A2C, MADDPG etc.) but still havent find any algorithm that provides all of properties i mentioned (Multi agent + discrete action space + deep learning + centralized training).
+I am wondering if we use an actor network that gets state as input and concatenated discrete actions of agents as output (For example if agent has 3 actions and we have 4 agents output can be [0,0,1, 0,1,0, 0,0,1, 1,0,0]) is that would be bad idea ?
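+To illustrate what I mean by the concatenated output (a toy sketch, not my actual code):
+import numpy as np
+
+n_agents, n_actions = 4, 3
+chosen = [2, 1, 2, 0]                                            # one action index per agent
+output = np.concatenate([np.eye(n_actions)[a] for a in chosen])
+print(output)   # [0. 0. 1. 0. 1. 0. 0. 0. 1. 1. 0. 0.]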
+"
+"['convolutional-neural-networks', 'computer-vision', 'r-cnn', 'data-labelling', 'labeled-datasets']"," Title: Is intersection of labels acceptable in computer vision?Body: I have a dataset, where objects are very close to each other. So, the question is: what is the best approach to label them?
+There are two possible options:
+
+- mark objects so that they will not intersect (it is difficult, surroundings are not included in the label area)
+- mark a larger area of objects, but labels will intersect
+
+What is more practical?
+
+
+"
+"['reinforcement-learning', 'dqn', 'policy-gradients']"," Title: Are policy gradient methods good for large discrete action spaces?Body: I have seen this question asked primarily in the context of continuous action spaces.
+I have a large action space (~2-4k discrete actions) for my custom environment that I cannot reduce down further.
+I am currently trying DQN approaches, but was wondering whether, given the large action space, policy gradient methods are more appropriate, and whether they are appropriate for large discrete action spaces as in my scenario above. I have seen answers to this question with regard to large continuous action spaces.
+Finally, I imagine there isn't a simple answer to this, but: does this effectively mean DQN will not work?
+"
+"['reinforcement-learning', 'open-ai', 'gym', 'atari-games']"," Title: Why does the Atari Gym Amidar environment only move after a certain number of episodes?Body: When I try to run Amidar even without RL code, I cannot get the environment to move immediately. It takes about 100 steps before the game actually starts moving. I use the following simple code to display some images and print some actions (I always try to do the same action, namely going up):
+import gym
+import matplotlib.pyplot as plt
+
+env = gym.make('Amidar-v0')
+env.reset()
+
+for i in range(1000):
+ action = 2
+    next_state, reward, terminated, info = env.step(action) # always take the same action (going up)
+ print(f"Timestep {i}")
+ print(next_state.shape)
+ print(reward)
+ print(action)
+ print(info)
+ plt.imshow(next_state)
+ plt.show()
+
+When running this code, it takes until about step 85 before the environment starts to move. After that, each step, it moves until the agent is hit by the enemy. Then the environment restarts in the start state, and it takes quite some time before it starts to move again. I have tried doing 'FIRE' as my first action; however, this is not working since it also takes a while before the environment starts moving. Because of this, my buffer is almost always filled with the same images and hence my network isn't learning anything. How do I get this environment to move immediately?
+"
+['batch-normalization']," Title: Does batch normalization affects the possible solution distribution?Body: As far as I recall, in Deep Learning, batch normalization normalizes each layer activations as a gaussian every batch. If so, Let $x$ be the input and $z_i$ the activation in the $i$-th layer: $p(z_i)$ becomes a gaussian with batch-norm. Right?
+Does this constraint affects $p(z_i|x)$?
+"
+"['reference-request', 'definitions', 'computational-learning-theory', 'history', 'vc-dimension']"," Title: Why was the VC dimension not defined for all configurations of $d$ points?Body: Let's start with a typical definition of the VC dimension (as described in this book)
+
+Definition $3.10$ (VC-dimension) The $VC$-dimension of a hypothesis set $\mathcal{H}$ is the size of the largest set that can be shattered by $\mathcal{H}$:
+$$
+\operatorname{VCdim}(\mathcal{H})= \max \left\{m: \Pi_{\mathcal{H}}(m)=2^{m}\right\}
+$$
+
+So, if there exists some set of size $d$ that $\mathcal{H}$ can shatter and it cannot shatter any set of size $d+1$, then the $\operatorname{VCdim}(\mathcal{H}) = d$.
+Now, my question is: why would we be just interested in the existence of some set of size $d$ and not all sets of size $d$?
+For instance, if you consider one of the typical examples that are used to illustrate the concept of the VC dimension, i.e. $\mathcal{H}$ is the set of all rectangles, then we can show that $\operatorname{VCdim}(\mathcal{H}) = d = 4$, given that there's a configuration of $d=4$ points that, for all possible labellings of those points, there's a hypothesis in $\mathcal{H}$ that correctly classifies those points. However, we can also easily show that, if the 4 points are collinear, there's some labelling of them (i.e. the 1st and 3rd are of colour $A$, while the 2nd and 4th are of colour $B \neq A$) that a rectangle cannot classify correctly.
+So, the class of all rectangles can shatter some sets of points, but not all, so we would need another class of hypotheses to classify all sets of four points correctly. The VC dimension does not seem to provide any intuition on which set of classes would do the trick.
+So, do you know why the VC dimension wasn't defined for all configurations of $d$ points? Was this just a need of Vapnik and Chervonenkis for the work they were developing (VC theory), or could have they defined it differently? So, if you know the rationale behind this specific definition, feel free to provide an answer. References to relevant work by Vapnik and Chervonenkis are also appreciated.
+"
+"['machine-learning', 'reference-request', 'python', 'facial-recognition', 'opencv']"," Title: What is the minimum video resolution I need to identify anyone with facial recognition?Body: I am currently working on a small project where I am trying to automate some stuff at home. I am building a model capable of identifying my face with OpenCV. This will be a live feed.
+I am making the project's estimations and have a really low budget. Therefore I am trying to identify what could be the minimum quality video feed I can pass to my algorithm to identify any face. For now I am just trying to identify mine.
+I understand facial recognition works primarily on the unique pattern that could be found in the face. What is the minimum video resolution I need to identify anyone with facial recognition?
+"
+"['audio-processing', 'model-request', 'speech-synthesis']"," Title: Model for direct audio-to-audio speech re-encodingBody: There are many resources available for text-to-audio (or vice versa) synthesis, for example Google's 'Wavenet'.
+These tools do not allow the finer degree of control that may be required regarding the degree of inflections / tonality retained in the output, for example to change the vocal characteristics (implied ethnicity / sex, for example) of a dubbed voice-over from one voice whilst retaining tonality (shouting vs calm).
+Text-to-speech 'and back' seems a suboptimal approach due to data loss (e.g. tonality) before reconstruction.
+Re-encoding audio-to-audio would/may allow the alteration of characteristics in a manner not available via standard audio processing methods whilst retaining more of the desired tonality.
+Is AI able to distinguish between characteristics and tonality as implied above, and is such a speech-to-speech re-encoding tool available, ideally open source?
+"
+"['reinforcement-learning', 'deep-rl', 'reward-functions']"," Title: What happens with policy gradient methods if rewards are differentiable?Body: I would like some help with understanding why there is no explicit flow of information from the reward gradient to the parameters of the policy in policy gradient methods.
+What I mean is the following, there are 2 scenarios:
+1st - deterministic framework with given initial state $s_0$, actions $a_t = \mu_\theta(s_t)$, rewards $r(s_t, a_t, s_{t+1})$, and transitions $s_{t+1}=f(s_t, a_t)$. Assume all of these things are differentiable (maybe all is continuous). By drawing the computational graph I found I can compute the gradient of $J(\mu_\theta) = \sum_{t} r(s_t, a_t, s_{t+1})$ with respect to $\theta$. I could optimize for cumulative reward by doing gradient ascent on this objective.
+2nd - framework in https://spinningup.openai.com/en/latest/spinningup/rl_intro3.html#id16 , which seems general, reading $\nabla_\theta J(\pi_\theta) = \nabla_\theta E_{\tau \sim \pi_\theta}[R(\tau)] = E_{\tau \sim \pi_\theta}[R(\tau)\sum_{t=0}^T \nabla_\theta \log{\pi_\theta (a_t | s_t)}]$.
+What I don't understand (and I certainly feel confused about in an ignorant way) is why the derivation assumes that $\nabla_\theta R(\tau) = 0$ when this scenario (as it is stochastic) should include the 1st as a particular case, which does have a derivative!
+It makes sense that $R(\tau)$ is only affected by $\theta$ through the change in probability of the trajectories $\tau$, but it still feels strange that $\tau = (s_0, a_0, \dots)$ is indeed one sample of $\tau = (s_0\sim \rho(s_0), a_0 \sim \pi_\theta(s_0), \dots)$ which depends on $\theta$. The deterministic reduction is obvious, but one could also think about reparametrization tricks in order to show the same point.
+In other words, if the reward function were differentiable, i.e. fully differentiable known environment, how could I use this information?
+"
+['notation']," Title: What is the correct notation for an operation that applies to each element of an array independently?Body: I am looking for the standard notation to define element-wise / Hadamard-style functions, if there is one.
+That is to say, if the operator I am looking for were represented by a hexagon ⬡, I could use it as such:
+$$A(x) = \underset{i}{\Large{⬡} } f(x_i)$$
+$$A : \mathbb{R}^n \rightarrow \mathbb{R}^n$$
+$$f : \mathbb{R} \rightarrow \mathbb{R}$$
+It is very convenient to define such functions explicitly because I want to manipulate them: $B \circ A$ . It seems to me that the following notation is correct: $A_i(x) = f(x_i)$ but I worry it is nonstandard and confusing.
+My functions are non-linear so I cannot simply apply them directly to the array as a vector.
+As stated in an answer, this is unnecessary when a function is strictly scalar because it is implied to apply element-wise. There are still some situations I would hope to have it:
+$$\underset{ij}{\Large{⬡} } e^{M_{ij}} \ne e^M$$
+The answers suggest to me the best option would be something like these:
+$$A(M) = \text{for each } i,j: e^{M_{ij}}$$
+$$A(M) = \text{element-wise}: e^{M_{ij}}$$
+The question is now closed in the negative, but I would welcome a new answer. Would be nice to find something like $\forall$.
+Related:
+
+"
+"['recurrent-neural-networks', 'long-short-term-memory', 'data-preprocessing', 'time-series']"," Title: Rescaling time-series data with very spiky pattern for training data in LSTM networkBody: I am working with some time-series hydrology data. Our goal is to forecast the time series forward, meaning predicting the data 1 month, 3 months ,6 months into the future. The data itself(image below) is characterized by mostly 0 or very small rates of flow expect for brief periods that are characterized by high flow. So I get this crazy spiky pattern where the median is around 0 or 1-2 meters^3/min, but at the same time there are periods of 5000 meters^3/minute, etc. I am not sure of the exact scale dimensions, but the picture below tells the tale.
+
+So I was trying to figure out a good way to scale this type of spiky data. I have been using a MinMaxScaler just to start with, to rescale the values between (-1, 1). But that approach is not going to work well, especially because at the top ends of the range, the difference between 1000 m^3/min and 5000 m^3/min will be like 0.001 difference.
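+This is roughly how I am scaling at the moment (a simplified sketch; the variable holding the flow values is a placeholder):
+import numpy as np
+from sklearn.preprocessing import MinMaxScaler
+
+scaler = MinMaxScaler(feature_range=(-1, 1))
+flow_scaled = scaler.fit_transform(flow.reshape(-1, 1))   # flow: 1-D array of m^3/min values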
+Does anyone have a good suggestion of how to rescale data like this for time-series analysis in an LSTM or RNN network?
+"
+"['neural-networks', 'regression']"," Title: Regression for a discrete variableBody: I'm building a model (neural net) that would predict a quality score for images.
+Ground truth is given by a 4-level discrete variable (0%, 33%, 67%, 100%), and I would like to build a model that would give something that looks like a continuous result over the 0-100% scale.
+What should I pay attention to?
+What I'm afraid of is that the model might stick to ground-truth levels and prefer them over any value in between.
+"
+"['convolutional-neural-networks', 'papers', 'convolution', 'geometric-deep-learning', 'graph-neural-networks']"," Title: Why the non-exploitation of edge labels in current graph convolutions ""results in an overly homogeneous view of local graph neighborhoods""?Body: I am currently reading a paper called Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs (2017, CPPR), and I cannot understand the following sentence:
+
+We identify that the current formulations of graph convolution do not
+exploit edge labels, which results in an overly homogeneous view of
+local graph neighborhoods, with an effect similar to enforcing
+rotational invariance of filters in regular convolutions on images.
+
+What does this sentence mean?
+"
+['classification']," Title: How to classify when the label is seldom knownBody: When I think about classification I think of the cancer/not cancer example. You have a bunch of attributes and you know whether the person had cancer during the relevant time period and you determine which attributes predict that result.
+I work in a highly-regulated industry that serves the public. There are certain people we are not allowed to do business with, let's say because they will use our service for illegal purposes. Sometimes we tell the (potential) customer "yes" and sometimes "no".
+When we say "yes" and the potential customer intends to use our service for illegal activity, they certainly won't inform us of the mistake.
+Likewise, when we say "no" the potential customer will sometimes go away, will sometimes complain, but that potential customer will not self-identify and say "yes, you are correct, I intended to use your service for illegal activity".
+Occasionally we will receive a report from a 3rd-party that will label a customer, but these reports are a tiny fraction of the number of customers. Unlike the cancer classification we almost always don't know the actual label, we only know what we guessed.
+What techniques should we consider to measure our accuracy?
+"
+"['convolutional-neural-networks', 'keras', 'datasets', 'u-net', 'semantic-segmentation']"," Title: Pixel values of segmap in multi-class semantic segmentationBody: I'm preparing a dataset for a multiclass semantic segmentation using U-Net like architecture. To be precise, I've got it ready but a question came to my mind. How does pixel values of a segmentation map influence the training?
+Also, is it better to have a greyscale segmap, or RGB one?
+I have the dataset labeled and augmented, the only thing I am thinking now is if I should alter the segmaps.
+I am planning to use Keras; is it smart enough to take in segmaps in both forms? I haven't found an answer anywhere (I double-checked whether it was already on this forum), and I hope it's not something super trivial. It's my first time trying to do a segmentation task, so it's all a bit new to me :)
+Edit: right now the segmaps look like:
+
+- background [0 0 0]
+
+- object_type1 [16 180 75] green
+
+- object_type2 [225 225 25] yellow
+
+- object_type3 [230 25 75] red
+
+
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'deep-neural-networks', 'universal-approximation-theorems']"," Title: Do we ever need more then 1 hidden layer in a binary classification problem with ANNs? If yes why?Body: I have read about the universal approximation theorem. So, why do we need more than 1 layer? Is it somehow computationally efficient to add layers instead of more neurons in the hidden layer?
+"
+"['terminology', 'gradient']"," Title: Terminology for the weight of likelihood ratio/score function?Body: If we estimate the gradient of $f(x)$ using the likelihood ratio/score function, i.e.
+$$\nabla f = f^*\dfrac{\partial \log p(x)}{\partial \theta}$$
+is there any agreed upon terminology to call "$f^*$"? Specifically I'm thinking of the case where you may use some sort of baseline/control variate or a critic, so $f^*$ is not $f$.
+I've seen $f^*$ called the learning signal or the cost. In reinforcement learning, you would call $f^*$ the advantage, but I think that terminology is only specific for RL. What is a general way to call $f^*$ that is not specific to RL?
+"
+"['reinforcement-learning', 'markov-decision-process', 'bellman-equations', 'policy-iteration', 'finite-markov-decision-process']"," Title: Converging to a wrong optimal policy if the agent is given more choicesBody: I am a bit new to Reinforcement learning. So, I am extremely sorry if I am asking something obvious. I have written a small piece of code to find the optimal policy for a 5x5 grid problem.
+
+- Scenario 1. The agent is only given two choices (Up, Right). I believe I am getting an optimal policy.
+- Scenario 2. The agent is given four choices (Up, Right, Down, Left). I am getting the wrong answer.
+
+I have represented actions with numbers:
+0 - Right
+1 - Up
+2 - Down
+3 - Left
+
+When the action Up is chosen, with 0.9 probability it will move up or 0.1 probability move right and vice-versa. When the action Down is chosen, with 0.9 probability it will move down or 0.1 probability move left and vice-versa.
+I did not use any convergence criteria; instead, I let it run for a sufficient number of iterations. I have indeed confirmed that my optimal state values and policy are converging, but to the wrong values. I am attaching the code below:
+def take_right(state):
+ if (state/n < n-1): state = state + n
+ return state
+
+def take_up(state):
+ if (state%n!=n-1): state = state + 1
+ return state
+
+def take_left(state):
+ if (state/n > 0): state = state - n
+ return state
+
+def take_down(state):
+ if (state%n > 0): state = state - 1
+ return state
+
+Scenario 1 result:
+
+Scenario 2 result:
+
+Green has a reward of 100 and Blue has a penalty of 100. Rest of the states have a penalty of 1. Discount factor is chosen as 0.5
+Edit:
+This was a really silly question. The problem with my code was more pythonic than RL-related. Check the comments to get the clue.
+"
+"['generative-adversarial-networks', 'hyper-parameters', 'recommender-system', 'collaborative-filtering']"," Title: Hyperparameters for Reproducing the Results of IRGAN on MovieLens 1MBody: I am trying to reproduce results reported for IRGAN (information retrieval GAN) on the MovieLens 1M dataset. The results I want to reproduce and their sources are listed in the table below.
+| Model | Precision@5 | NDCG@5 | Source |
+| --- | --- | --- | --- |
+| IRGAN | 26.30% | 26.40% | CFGAN |
+| IRGAN | 30.98% | 31.59% | CoFiGAN |
+| IRGAN | 31.82% | 33.72% | BiGAN |
+While my implementation of IRGAN is able to reproduce the results on the MovieLens 100k dataset, I am having problems discovering the hyperparameters for reproducing the results on the MovieLens 1M dataset; currently my IRGAN implementation is achieving a precision@5 score of 21.7%.
+Unfortunately, the authors of the aforementioned papers do not share the hyperparameters used for training their version of IRGAN.
+Thus, I want to ask if there is a repository with the used hyperparameters?
+Furthermore, I would be most grateful if you could provide me information on how to contact the authors?
+"
+"['markov-decision-process', 'rewards', 'reward-shaping', 'interpolation']"," Title: Reward interpolation between MDPs. Will an optimal policy on both ends stay optimal inside the interval?Body: Say I've got two Markov Decision Processes (MDPs):
+$$\mathcal{M_0} = (\mathcal{S}, \mathcal{A}, P, R_0),\quad\text{and}\quad\mathcal{M}_1 = (\mathcal{S}, \mathcal{A}, P, R_1)$$
+Both have the same set of states and actions, and the transition probabilities are also the same. The only difference is in the reward functions $R_0$ and $R_1$. Suppose that we've found an optimal deterministic policy $\pi^*_0$ for the problem $\mathcal{M}_0$ and we've checked that this policy is also optimal for $\mathcal{M}_1$
+$$\pi_0^*(s) = \arg\max\limits_a Q^*_0(s,a)\qquad Q_1^*(s,\pi_0^*(s)) = \max\limits_a Q^*_1(s,a)$$
+Now, given the two MDPs one can build a whole family of MDPs interpolating between them:
+$$\mathcal{M}_\alpha = (\mathcal{S}, \mathcal{A}, P, \alpha R_0 + (1-\alpha) R_1)$$
+Where $\alpha\in[0,1]$ is the interpolation parameter between the two problems - the rewards are linearly changing from $R_0$ to $R_1$ with this parameter. My question is: in general, will $\pi_0^*$ be optimal for all MDPs in the middle of the interpolation interval?
+$$Q_\alpha(s,\pi_0^*(s))\stackrel{?}{=}\max\limits_aQ^*_\alpha(s,a),\; \forall\alpha\in[0,1]$$
+I feel like this could be generally true due to the linearity of the dependence and the convexity of the optimization problem. But I can neither prove it nor find a counterexample.
+"
+"['reinforcement-learning', 'definitions', 'environment', 'state-spaces', 'pomdp']"," Title: What exactly are partially observable environments?Body: I have trouble understanding the meaning of partially observable environments. Here's my doubt.
+According to what I understand, the state of the environment is what precisely determines the next state and reward for any particular action taken. So, in a partially observable environment, you don't get to see the full environment state.
+So, now, consider the game of chess. Here, we are the agent and our opponent is the environment. Even here we don't know what move the opponent is going to take. So, we don't know the next state and reward we are going to get. Also, what we can see can't precisely define what is going to happen next. Then why do we call chess a fully observable game?
+I feel I am wrong about the definition of an environment state or the definition of fully observable, partially observable. Kindly correct me.
+"
+"['hyperparameter-optimization', 'neat', 'hyper-parameters', 'neuroevolution']"," Title: Is there an optimal number of species for NEAT?Body: Is there an optimal number of species for NEAT?
+Since both too few and too many species are bad, I am thinking about adjusting the threshold of the distance function at runtime in order to keep the number of species always between some bounds. Does this make sense? Is there an optimal range?
+"
+"['deep-learning', 'computer-vision', 'face-recognition']"," Title: Arcface implementation for image similarity produces opposite embeddings for positive negative image pairsBody: So I've built an arcface model with this arcface layer implementation:
+https://github.com/4uiiurz1/keras-arcface/blob/master/metrics.py
+I trained for a few epochs and now that I'm comparing the results I'm quite baffled.
+According to the paper and resources I've read, the model should produce embeddings where similar images are closer and have a higher cosine similarity.
+But I have the exact opposite case: I ran the model's embedding layers over the hold-out set, and for 95% of the cases the mismatches are closer than the matches. Thus I have a reversed 95% accuracy.
+My feed and labels are correct.
+I binned similar images in groups similar to here:
+https://www.kaggle.com/ragnar123/unsupervised-baseline-arcface
+but for a different dataset.
+Could someone guess why this is happening? Is it possible that some initialization would produce the opposite goal?
+"
+"['classification', 'long-short-term-memory', 'time-series', 'binary-classification']"," Title: Is my dataset unlearnable, or is my LSTM model not smart enough?Body: I have time-series data obtained from a video. The data is composed of bitrate and corresponding label pairs for each timestamp:
+
+The distribution over the first 30 seconds is as follows:
+
+I have built an LSTM model for this dataset to be able to classify the labels based on the bitrate. However, it seems that my model is not able to learn. Validation accuracy starts from approximately 0.3 (makes sense, since I have 2 classes (log2 = 0.3)) and it does not improve.
+Do you have any idea about this? Is it normal considering this sample data distribution, or might something be wrong with my model? Thanks!
+"
+"['convolutional-neural-networks', 'training', 'deep-neural-networks', 'neural-architecture-search']"," Title: A neural network to learn the connection between two totally different type of imagesBody: I have a dataset of two different type of images. Say, I have images of a person and his all 10 fingerprints. I want to create a relation between them to predict one from another. How I can do that and which architecture is suitable for this problem or similar type of problem.
+"
+"['deep-learning', 'python', 'facenet', 'triplet-loss-function']"," Title: How to retrain a Facenet model with the triplet loss function?Body: I want to calculate the similarity or distance of two faces. I'm using Python.
+I have read and done what this tutorial says. However, the result is not good (the similarity of the same faces and the similarity of different faces are very, very close to each other!).
+I have downloaded and used this Facenet model to get face embedding vectors, and then used 3 distance metrics (Euclidean, Manhattan, Cosine) to calculate the distance.
+After that, I decided to retrain that Facenet model with my dataset. I read this article. I want to use the triplet loss to retrain that Facenet model.
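+(For clarity, the triplet loss I am referring to is the standard one below; the margin value is just an assumed hyperparameter.)
+import numpy as np
+
+def triplet_loss(anchor, positive, negative, margin=0.2):
+    # anchor/positive/negative: embedding vectors of the same dimension
+    pos_dist = np.sum((anchor - positive) ** 2)
+    neg_dist = np.sum((anchor - negative) ** 2)
+    return max(pos_dist - neg_dist + margin, 0.0)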
+How can I retrain that Facenet model with the triplet loss function? Or can you please send me some links to read?
+"
+['natural-language-processing']," Title: How to detect the description of spine segments in short text using a neural network?Body: The input data is a set of text chunks containing the description of the pathology or the surgical procedure:
+For instance:
+
+- Tere is a lumbar stenosis L3/4
+- Patient ist suffering from [...], MRI and X ray showed lumbar stenosis L3/4, segmental instability L3-5, foraminal stenosis L5/S1 both sides
+- The patient [...] underwent an MRI showing cervical stenosis C4-7 with myelopathy
+- [...] showed lumbar adult scoliosis L2-S1 with Cobb angle of 42°
+- Patient fell from the chair [...] showed osteoporotic fracture L3
+
+Now, the ideal classificator would give me:
+
+- Segments: L3,L4; typeofpathology: degenerative; subtypepathology: stenosis
+- Segments: L3,L4,L5,S1; type of pathology: degenerative; subtypepathology: stenosis, instability
+- Segments C4, C5, C6, C7;type of pathology: degenerative; subtypepathology: myelopathy
+- Segments L2,L3,L4,L5,S1;type of pathology: deformity; subtypepathology: de novo scoliosis
+- Segments L3; type of pathology: pathological fracture; subtypepathology: -
+
+I think that this cannot be reasonably achieved by a pre-programmed algorithm, because the amount of text before the description can vary, and the choice of words can vary too. Is there an approach using neural networks or NLP tools that would have a chance of achieving such a classification? How large would the dataset used for training have to be (approximately)?
+Maybe it would be reasonable to separate the two problems: detection of the segments AND detection of the pathology. For the segments, one could search for a pattern of C? T? L? or S? with ? being a number, then include all such segment descriptions in the next 20-30 characters, and then use an algorithm to mark the continuous segments from the upper to the lower vertebra.
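+A rough, untested sketch of the segment pattern I have in mind (the exact regex is just an assumption):
+import re
+
+# a vertebra letter (C/T/L/S) followed by a number, possibly chained like 'L3/4', 'L3-5' or 'L5/S1'
+segment_pattern = re.compile(r'\b([CTLS])(\d{1,2})(?:\s*[/-]\s*([CTLS]?\d{1,2}))?\b')
+
+text = 'MRI and X ray showed lumbar stenosis L3/4, segmental instability L3-5'
+print(segment_pattern.findall(text))   # e.g. [('L', '3', '4'), ('L', '3', '5')]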
+Having done this, do neural networks offer any significant advantages over simple keyword-matching classification? Most importantly, which NLP neural network tools would be the ones you would start trying with?
+"
+"['reinforcement-learning', 'deep-rl', 'proximal-policy-optimization']"," Title: What is the effect of parallel environments in reinforcement learning?Body: Do parallel environments improve the agent's ability to learn or does it not really make a difference? Specifically, I am using PPO, but I think this applies across the board to other algorithms too.
+"
+"['reinforcement-learning', 'multi-armed-bandits', 'epsilon-greedy-policy']"," Title: What is the probability of selecting the greedy action in a 0.5-greedy selection method for the 2-armed bandit problem?Body: I'm new to reinforcement learning and I'm going through Sutton and Barto. Exercise 2.1 states the following:
+
+In $\varepsilon$-greedy action selection, for the case of two actions and $\varepsilon=0.5$, what is the probability that the greedy action is selected?
+
+They describe the $\varepsilon$-greedy method on pages 27-28 as follows:
+
+...behave greedily most of the time, but every once in a while, say with small probability $\varepsilon$, instead select randomly from among all the actions with equal probability...
+
+The above method makes the agent select an action randomly "every once in a while" from the action space uniformly with probability $\varepsilon$. I find the question imprecise, since we don't know what "once in a while" means in this exercise (i.e. is it once every $50$ timesteps? every time step?). If it's at every timestep, isn't it like a Bernoulli problem where the parameter is $0.5$? I'd say that the agent has a $0.5$ chance to select a greedy action, but I'm not sure at all.
+"
+"['reinforcement-learning', 'deep-rl', 'proximal-policy-optimization']"," Title: How does sharing parameters between the policy and value functions help in PPO?Body: The PPO objective may include a value function error term when parameters are shared between the policy and value functions. How does this help, and when to use a neural network architecture that shares parameters between the policy and value functions, as opposed to two neural networks with separate parameters?
+"
+['reinforcement-learning']," Title: Sutton and Barto 2nd Edition Exercise 13.1Body: I'm attempting exercise 13.1 in the Sutton and Barto textbook. It asks for an optimal probability for selecting action right in the short corridor scenario (see first 6 lines of the image below for the scenario).
+Exercise 13.1: Use your knowledge of the gridworld and its dynamics to determine an
+exact symbolic expression for the optimal probability of selecting the right action in
+Example 13.1.
+My attempt:
+Letting $p$ denote the probability of choosing right, I understand that using the Bellman equations, we can solve for the value of $s_1, s_2, s_3$ where the states are numbered from left to right in terms of $p$. We have $v(s_1) = \frac{2-p}{p-1}$, $v(s_2) = \frac{1}{(p-1)p}$, $v(s_3) = -\frac{p+1}{p}$.
+I can see how we can find the max of each of these functions to get the best optimal policy, given the state we're currently in.
+However, how do you find the optimal policy generally (irrespective of starting state)? I found solutions here, which magically arrive at
+$\frac{p^2-2p+2}{p(1-p)}$. Can someone explain this part?
+https://github.com/brynhayder/reinforcement_learning_an_introduction/blob/master/exercises/exercises.pdf
+
+"
+"['machine-learning', 'comparison', 'metric', 'speech-recognition']"," Title: What is the difference between the ""equal error rate"" and ""detection cost function"" metrics?Body: I was designing a multi-speaker identification model, so I searched for some metrics that one may use. I found two metrics:
+
+- EER (equal error rate)
+- DCF (detection cost function)
+
+What is the difference between them? Is one better than the other for my model?
+"
+"['reinforcement-learning', 'q-learning', 'deep-rl', 'dqn', 'gym']"," Title: CartPoleV0 model is not getting trained in even after 1500+ episodes using deep Q-learningBody: I am new to deep Q learning and trying to train the open AI cartpole_V0 game using deep Q learning. Here is my code:
+import gym
+import os
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+import tensorflow as tf
+from collections import deque
+import numpy as np
+import random
+import matplotlib
+matplotlib.use('tkagg')
+import matplotlib.pyplot as plt
+
+
+EPISODES = 5000
+output_dir = "/home/ug2018/mst/18114017/ML/"
+EPSILON = 1
+REPLAY_MEMORY = deque(maxlen=800)
+MIN_EPSILON = 0.01
+DECAY_RATE = 0.995
+MINIBATCH = 750
+GAMMA = 0.99
+
+env = gym.make('CartPole-v0')
+state_size = env.observation_space.shape[0]
+action_size = env.action_space.n
+
+
+class DQNagent:
+
+ def __init__(self):
+
+ self.fit_model = self.create_model()
+
+ self.predict_model = self.create_model()
+ self.predict_model.set_weights(self.fit_model.get_weights())
+
+ self.targets = []
+ self.states = []
+
+ def create_model(self):
+
+ model = tf.keras.models.Sequential()
+ model.add(tf.keras.layers.Dense(64, activation ="relu",input_dim = state_size))
+ model.add(tf.keras.layers.Dense(128, activation ="relu"))
+ model.add(tf.keras.layers.Dense(256, activation ="relu"))
+ model.add(tf.keras.layers.Dense(128, activation ="relu"))
+ model.add(tf.keras.layers.Dense(64, activation ="relu"))
+ model.add(tf.keras.layers.Dense(32, activation ="relu"))
+ model.add(tf.keras.layers.Dense(action_size, activation="linear"))
+ model.compile(loss="mse", optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), metrics=['accuracy'])
+ return model
+
+ def model_summary(self,model):
+ return model.summary()
+
+ def get_q(self, state):
+ return self.predict_model.predict(state)
+
+ def train(self,batch_size):
+ minibatch = random.sample(REPLAY_MEMORY, batch_size)
+ for state, reward, action, new_state, done in minibatch:
+ if done :
+ target = reward
+ else:
+ target = reward + (GAMMA * np.amax(self.get_q(new_state)[0]))
+ target_f = self.get_q(state)
+ target_f[0][action] = target
+
+ self.states.append(state[0])
+ self.targets.append(target_f[0])
+
+ self.fit_weights(self.states,self.targets)
+
+
+ def fit_weights(self, states, targets):
+ self.fit_model.fit(np.array(states), np.array(targets), batch_size = MINIBATCH, epochs = 1 ,verbose=0)
+
+ def predict_save(self, name):
+ self.predict_model.save_weights(name)
+ def fit_save(self, name):
+ self.fit_model.save_weights(name)
+
+
+
+
+
+agent = DQNagent()
+print(agent.fit_model.summary())
+
+
+
+
+x=[]
+y=[]
+z=[]
+
+def update_graph(z,y):
+ plt.xlabel("Episodes")
+ plt.ylabel("Score")
+ plt.plot(z,y)
+ plt.pause(0.5)
+plt.show()
+
+
+for eps in range(EPISODES):
+ env.reset()
+ done = False
+ state = env.reset()
+ state = np.reshape(state, [1,state_size])
+ time = 0
+ exp=0
+ elp=0
+ while not done:
+
+ if EPSILON >= np.random.rand():
+ exp +=1
+ action = random.randrange(action_size)
+ else:
+ elp +=1
+ action = np.argmax(agent.get_q(state)[0])
+ new_state, reward, done, _ = env.step(action)
+ new_state = np.reshape(new_state,[1, state_size])
+ if not done:
+ reward = -10
+ else:
+ reward = 10
+ REPLAY_MEMORY.append((state,reward,action,new_state,done))
+ state = new_state
+ time += 1
+ x.append([eps,exp,elp,time,EPSILON])
+ y.append(time)
+ z.append(eps)
+ update_graph(z,y)
+ if (len(REPLAY_MEMORY)) >= MINIBATCH:
+ agent.train(MINIBATCH)
+ if EPSILON > MIN_EPSILON:
+ EPSILON *= DECAY_RATE
+ if eps % 50 == 0:
+ agent.predict_save(output_dir + "predict_weights_" + '{:04d}'.format(eps) + ".hdf5")
+ agent.fit_save(output_dir + "fit_weights_" + '{:04d}'.format(eps) + ".hdf5")
+ with open("score_vs_eps.txt", "w") as output:
+ output.write("Episodes"+" "+"Exploration"+" " + "Exploitation" + " "+ "Score" + " " + "Epsilon"+"\n")
+ for eps,exp,elp,time,epsilon in x:
+ output.write(" "+str(eps)+" "+str(exp)+" "+str(elp)+" "+str(time)+" "+"{:.4f}".format(epsilon) +"\n")
+
+agent.predict_model.save('CartPole_predict_model')
+agent.fit_model.save('CartPole_fit_model')
+
+The code runs without errors, but the model takes too many episodes to get trained, and even after it reaches a score of 200 it does not stay there. Could you please help me with how I can train the model in fewer episodes and maintain the 200 score consistently?
+Here are some of the steps listed:
+Episodes Exploration Exploitation Score Epsilon
+ 0 18 0 18 1.0000
+ 1 32 0 32 1.0000
+ 2 43 0 43 1.0000
+ 3 17 0 17 1.0000
+ 4 17 0 17 1.0000
+ 5 16 0 16 1.0000
+ 6 13 0 13 1.0000
+ 7 21 0 21 1.0000
+ 8 16 0 16 1.0000
+ 9 20 0 20 1.0000
+ 10 35 0 35 1.0000
+ 11 14 0 14 1.0000
+ 12 13 0 13 1.0000
+ 13 12 0 12 1.0000
+ 14 16 0 16 1.0000
+ 15 17 0 17 1.0000
+ 16 27 0 27 1.0000
+ 17 24 0 24 1.0000
+ 18 14 0 14 1.0000
+ 19 28 0 28 1.0000
+ 20 20 0 20 1.0000
+ 21 13 0 13 1.0000
+ 22 12 0 12 1.0000
+ 23 23 0 23 1.0000
+ 24 17 0 17 1.0000
+ 25 43 0 43 1.0000
+ 26 61 0 61 1.0000
+ 27 29 0 29 1.0000
+ 28 21 0 21 1.0000
+ 29 17 0 17 1.0000
+ 30 41 0 41 1.0000
+ 31 9 0 9 1.0000
+ 32 18 0 18 1.0000
+ 33 23 0 23 1.0000
+ 34 28 0 28 0.9950
+ 35 24 0 24 0.9900
+ 36 25 0 25 0.9851
+ 37 28 1 29 0.9801
+ 38 26 1 27 0.9752
+ 39 35 2 37 0.9704
+ 40 19 0 19 0.9655
+ 41 48 0 48 0.9607
+ 42 25 2 27 0.9559
+ 43 13 0 13 0.9511
+ 44 20 2 22 0.9464
+ 45 21 0 21 0.9416
+ 46 13 0 13 0.9369
+ 47 28 5 33 0.9322
+ 48 23 3 26 0.9276
+ 49 24 1 25 0.9229
+ 50 20 2 22 0.9183
+ 51 13 0 13 0.9137
+ 52 19 1 20 0.9092
+ 53 13 1 14 0.9046
+ 54 18 1 19 0.9001
+ 55 12 1 13 0.8956
+ 56 29 7 36 0.8911
+ 57 28 2 30 0.8867
+ 58 16 1 17 0.8822
+ 59 28 6 34 0.8778
+ 60 13 3 16 0.8734
+ 61 15 4 19 0.8691
+ 62 19 2 21 0.8647
+ 63 27 4 31 0.8604
+ 64 19 4 23 0.8561
+ 65 16 1 17 0.8518
+ 66 60 9 69 0.8475
+ 67 24 1 25 0.8433
+ 68 21 7 28 0.8391
+ 69 14 0 14 0.8349
+ 70 31 4 35 0.8307
+ 71 64 13 77 0.8266
+ 72 58 13 71 0.8224
+ 73 32 9 41 0.8183
+ 74 15 1 16 0.8142
+ 75 23 6 29 0.8102
+ 76 27 5 32 0.8061
+ 77 66 6 82 0.8021
+ 78 30 6 36 0.7981
+ 79 74 22 96 0.7941
+ 80 14 1 15 0.7901
+ 81 18 1 19 0.7862
+ 82 28 7 35 0.7822
+ 83 28 4 32 0.7783
+ 84 12 2 14 0.7744
+ 85 10 2 12 0.7705
+ 86 21 4 25 0.7667
+ 87 13 6 19 0.7629
+ 88 19 6 25 0.7590
+ 89 16 4 20 0.7553
+ 90 46 16 62 0.7515
+ 91 12 1 13 0.7477
+ 92 30 15 45 0.7440
+ 93 38 9 47 0.7403
+ 94 14 7 21 0.7366
+ 95 10 1 11 0.7329
+ 96 16 8 24 0.7292
+ 97 10 2 12 0.7256
+ 98 20 5 25 0.7219
+ 99 19 7 26 0.7183
+ 100 31 9 40 0.7147
+ .
+ .
+ .
+ 1522 0 104 104 0.0100
+ 1523 0 35 35 0.0100
+ 1524 0 27 27 0.0100
+ 1525 0 52 52 0.0100
+ 1526 0 25 25 0.0100
+ 1527 1 199 200 0.0100
+ 1528 0 30 30 0.0100
+ 1529 0 57 57 0.0100
+ 1530 0 35 35 0.0100
+ 1531 0 25 25 0.0100
+ 1532 0 22 22 0.0100
+ 1533 0 24 24 0.0100
+ 1534 1 199 200 0.0100
+ 1535 0 68 68 0.0100
+ 1536 0 200 200 0.0100
+ 1537 0 22 22 0.0100
+ 1538 2 42 44 0.0100
+ 1539 1 111 112 0.0100
+ 1540 0 91 91 0.0100
+ 1541 0 45 45 0.0100
+ 1542 2 108 110 0.0100
+ 1543 1 181 182 0.0100
+ 1544 0 30 30 0.0100
+ 1545 0 21 21 0.0100
+ 1546 1 25 26 0.0100
+ 1547 4 196 200 0.0100
+ 1548 0 95 95 0.0100
+ 1549 0 53 53 0.0100
+ 1550 0 55 55 0.0100
+ 1551 0 29 29 0.0100
+ 1552 0 40 40 0.0100
+ 1553 0 25 25 0.0100
+ 1554 0 33 33 0.0100
+ 1555 0 63 63 0.0100
+ 1556 0 23 23 0.0100
+ 1557 0 45 45 0.0100
+ 1558 0 25 25 0.0100
+ 1559 0 36 36 0.0100
+ 1560 0 24 24 0.0100
+ 1561 1 31 32 0.0100
+ 1562 0 30 30 0.0100
+ 1563 1 56 57 0.0100
+ 1564 0 22 22 0.0100
+ 1565 0 20 20 0.0100
+ 1566 1 22 23 0.0100
+ 1567 0 45 45 0.0100
+ 1568 1 50 51 0.0100
+ 1569 0 25 25 0.0100
+ 1570 0 30 30 0.0100
+ 1571 2 198 200 0.0100
+ 1572 2 198 200 0.0100
+ 1573 1 185 186 0.0100
+ 1574 0 26 26 0.0100
+ 1575 4 196 200 0.0100
+ 1576 3 197 200 0.0100
+ 1577 1 29 30 0.0100
+ 1578 0 25 25 0.0100
+ 1579 0 32 32 0.0100
+ 1580 3 197 200 0.0100
+ 1581 1 23 24 0.0100
+ 1582 0 25 25 0.0100
+ 1583 0 66 66 0.0100
+ 1584 1 27 28 0.0100
+ 1585 0 32 32 0.0100
+ 1586 0 21 21 0.0100
+ 1587 0 23 23 0.0100
+ 1588 1 47 48 0.0100
+ 1589 0 42 42 0.0100
+ 1590 0 26 26 0.0100
+ 1591 0 47 47 0.0100
+ 1592 0 200 200 0.0100
+ 1593 2 52 54 0.0100
+ 1594 1 19 20 0.0100
+ 1595 0 33 33 0.0100
+ 1596 0 27 27 0.0100
+ 1597 1 79 80 0.0100
+ 1598 0 54 54 0.0100
+ 1599 0 50 50 0.0100
+ 1600 0 25 25 0.0100
+
+I initialized epsilon at 1 and it has already reached its minimum possible value. Still, the score fluctuates a lot. Why is this happening? How can I maintain the continuity?
+"
+"['actor-critic-methods', 'advantage-actor-critic', 'a3c']"," Title: How can I compare the results of AC1 with the results of A3C (on the CartPole environment)?Body: I am implementing A3C for the CartPole environment. I want to compare the results I got from A3C with the ones I got from AC1. The problem is I don't know which process to look at. If I use, let's say, 11 processes, should I take the first one which got to average 495 points (over the last 100 episodes), last one, or should I take mean of all?
+I don't want to take the first one that got to 495 since it is using a model that was already updated by the first few processes and it looks like cheating. Does some norm exist I can follow for valid results?
+
+"
+"['machine-learning', 'convolutional-neural-networks', 'image-recognition', 'supervised-learning']"," Title: How can my CNN produce an ""unknown"" label?Body: I have a dataset of 20k images of infected mango. I have built a web-based app using Flask, where a user can upload a picture, and my CNN model detects the disease. I have 6 classes in the model, which correspond to 6 types of diseases.
+My question is: how do I train the model so that, if a user uploads any picture other than an infected mango, the model will show "not mango"?
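+For context, this is roughly how I produce a prediction at the moment (a minimal sketch; the file paths, image size and class names are placeholders, not my real ones):
+import numpy as np
+from tensorflow.keras.models import load_model
+from tensorflow.keras.preprocessing import image
+
+CLASSES = ["disease_1", "disease_2", "disease_3", "disease_4", "disease_5", "disease_6"]  # placeholder names
+model = load_model("mango_disease_model.h5")  # placeholder path
+
+img = image.load_img("upload.jpg", target_size=(224, 224))
+x = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)
+probs = model.predict(x)[0]
+print(CLASSES[int(np.argmax(probs))])  # always returns one of the 6 diseases, even for a non-mango image
+As the last line shows, the argmax always maps to one of the 6 disease classes, which is exactly the behaviour I want to avoid.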
+"
+"['neural-networks', 'classification', 'evolutionary-algorithms', 'fuzzy-logic', 'random-forests']"," Title: What approach would work well for predicting earthquake intensity based on historical data?Body: My problem: I own warning system where I collect data from institutions and send them over through various ways to users. I would like to hear your advice on what approach I can use for solving my problem with earthquake intensity far from epicenter. Since seismogical institutions mostly issue info about intenstiy of an earthquake for the epicenter, I would need to predict and classify what intensity the earthquake can have for places distant of several km/miles from the epicenter.
+As an input/training set, I can use data of historical earthquakes and their magnitudes in an epicenter. Then I would need to fill mostly "by hand" an information about intensity based on seismological records, historical testimonies, chronicles atc.
+What I need from AI: I need "something" that would predict earthquake intensity based on dataset of historical earthquakes.
+Example/TLDR: There is an earthquake with magnitude 3.8, distant 80 km with depth 6 km. Based on dataset of historical earthquakes (with same type of information + witnessed and collected intensity), and output, I would need prediction of intensity of an eartquake 80 km from the epicenter.
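+To make the data concrete, this is roughly how I picture framing the historical records (the column names, the values and the random forest are just my own sketch, not an existing dataset or a recommendation):
+import pandas as pd
+from sklearn.ensemble import RandomForestClassifier
+
+# each row: one historical earthquake as witnessed at one location away from the epicenter
+history = pd.DataFrame({
+    "magnitude":   [3.8, 4.5, 5.1, 3.2],
+    "distance_km": [80, 15, 120, 40],
+    "depth_km":    [6, 10, 8, 5],
+    "intensity":   ["II", "V", "III", "II"],  # intensity collected from records/testimonies
+})
+
+features = ["magnitude", "distance_km", "depth_km"]
+clf = RandomForestClassifier(n_estimators=200, random_state=0)
+clf.fit(history[features], history["intensity"])
+
+query = pd.DataFrame([[3.8, 80, 6]], columns=features)
+print(clf.predict(query))  # predicted intensity 80 km from the epicenter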
+"
+"['transformer', 'gpt', 'audio-processing', 'embeddings', 'self-supervised-learning']"," Title: Is it realistic to train a transformer-based model (e.g. GPT) in a self-supervised way directly on the Mel spectrogram?Body: In music information retrieval, one usually converts an audio signal into some kind "sequence of frequency-vectors", such as STFT or Mel-spectrogram.
+I'm wondering if it is a good idea to use the transformer architecture in a self-supervised manner -- such as auto-regressive models, or BERT in NLP -- to obtain a "smarter" representation of the music than the spectrogram itself. Such smart pretrained representation could be used for further downstream tasks.
+From my quick google search, I found several papers which do something similar, but -- to my surprise -- all use some kind of symbolic/discrete music representation such as scores. (For instance here or here).
+My question is this:
+
+Is it realistic to train such an unsupervised model directly on the
+Mel spectrogram?
+
+The loss function would not be "log softmax of next word probability", but some kind of l2-distance between "predicted vector of spectra" and "observed vector of spectra", in the next time step.
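+To illustrate what I mean, here is a rough sketch (my own, not taken from any paper; the shapes, the causal mask and the encoder-only stack are all assumptions):
+import torch
+import torch.nn as nn
+
+B, T, F = 4, 128, 80                       # batch size, number of frames, Mel bins (assumed)
+frames = torch.randn(B, T, F)              # Mel spectrogram frames
+
+layer = nn.TransformerEncoderLayer(d_model=F, nhead=8, batch_first=True)
+encoder = nn.TransformerEncoder(layer, num_layers=4)
+mask = nn.Transformer.generate_square_subsequent_mask(T)   # causal mask: each frame sees only the past
+
+pred = encoder(frames, mask=mask)          # "predicted vector of spectra" at each step
+loss = nn.functional.mse_loss(pred[:, :-1, :], frames[:, 1:, :])   # l2 distance to the next observed frame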
+Did someone try it?
+"
+"['neural-networks', 'batch-normalization', 'residual-networks', 'normalisation']"," Title: Does it make sense to apply batch normalization to a batch size of 1?Body: I am interested in your opinion on the topic if you think that it makes sense to use batch normalization layer in a network that is trained with a batch size of 1. This is a special case as part of an experiment. What effects can be expected?
+"
+"['natural-language-processing', 'reference-request', 'turing-test', 'ai-milestones', 'journalism']"," Title: Current extensions of the ""Turing Test""?Body: In 2014 it was widely reported that the Turing Test had been passed, and that this was a major AI milestone.
+See: Computer AI passes Turing test in 'world first [BBC]; Turing Test Success Marks Milestone in Computing History [reading.ac.uk]; What is the Turing test? And are we all doomed now? [The Guardian]
+Never mind that the "Imitation Game" is subjective, and that porn bots have been passing it since there were porn bots—University of Reading was clear about their metrics.
+But I understand that it did lead to revised tests, and extension of thinking on what constitutes passing such a threshold.
+
+- How have Turing Tests been extended since 2014?
+
+How strong was the 2014 test? What have been the criticisms of the 2014 determinations?
+"
+"['reinforcement-learning', 'alphazero', 'alphago-zero', 'deepmind']"," Title: How does policy network learn in AlphaZero?Body: I'm currently trying to understand how AlphaZero works. There is one thing with the training of the AlphaZero's policy head that confuses me. Basically, in AlphaGo Zero's paper (where the major part of AlphaZero algorithm is explained) a combined neural network is used to estimate both, the value of the position and a good policy. More precisely, the loss function used is:
+$$L = (z-v)^2 - \pi^\top \log(\textbf{p}) + c \Vert \theta \Vert^2$$
+where $z$ is the outcome of the game, $v$ is the value estimated by the neural network, $\pi$ is the policy calculated by the MCTS and $\textbf{p}$ is the policy predicted by the neural network.
+I would like to focus on the policy head loss. Basically, we are trying to minimize the difference between the policy calculated by the MCTS and the policy predicted by the neural network. That makes sense when the player has won the game, but it doesn't (at least from my point of view) when the player has lost it. You would be teaching your neural network a policy that has lost. Maybe the loss was unavoidable, but if it wasn't that's definitely not the policy we want to learn.
+I have programmed a slightly simplified version that works well with Tic Tac Toe. But for Connect 4, some problems related to this arise. Basically, it learns a bad policy. At the beginning of the training, the values estimated for each board are quite random, and that makes the policy shift in a random (and wrong) direction. At the same time, that makes the value function wrong (because we are losing games that we could have easily won), which worsens the policy even more.
+I suppose that with enough training this problem disappears. The correct value and policy should backpropagate from the leaf nodes of our simulation. Even if the neural network policy gives a probability of 0 to the optimal action, thanks to the Dirichlet noise added to the probabilities the MCTS can find that optimal action and learn it.
+However, several things confuse me:
+
+- In AlphaGo's paper, they take into account whether the outcome of the game was a win or a loss when training the policy network with reinforcement learning. More precisely, the update performed is
+$$\Delta p \propto \frac{\partial \log{p(a_t|s_t)}}{\partial p} z_t $$
+where $z_t = 1$ if we have won and $z_t = -1$ if we have lost. So DeepMind takes into account whether the action was good or not and flips the optimization direction accordingly.
+
+- I haven't found anywhere in AlphaGo Zero's paper that we are training just with the examples where the player has won, so they might be using all the data gathered, including also the examples where the player has lost. As far as I know, they don't mention anything related to this problem.
+
+- $\pi$ (the policy provided by the MCTS) is calculated from the exponentiated visit count of each action,
+$$\pi(a|s_0) = \frac{N(s_0,a)^{1/\tau}}{\sum_b N(s_0,b)^{1/\tau}}$$
+where $\tau$ is a parameter that controls the ``temperature''. DeepMind's team sets $\tau = 1$ during the first 30 moves to ensure exploration. After that, they set $\tau \approx 0$ to ensure that the action considered best (and thus simulated the most times) is the one played. However, that means $\pi$ is something like
+$$[0,0,\dots,0,1,0, \dots, 0]$$
+making the policy updates quite aggressive and especially harmful if the move is not a good one (making it even harder to recover from a bad action).
+
+
+Am I missing something? Is this the intended way of working of the algorithm? Is there any way to overcome the learning of bad policies?
+"
+"['natural-language-processing', 'reference-request', 'word-embedding', 'books', 'vector-semantics']"," Title: Book(s) for text embeddingBody: Text here refers to either character or word or sentence.
+Is there any recent textbook that encompasses from classical methods to the modern techniques for embedding texts?
+If a single textbook is unavailable then please recommend a list of books covering the whole spectrum as mentioned above.
+Modern textbooks that are similar to Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze, Introduction to Information Retrieval, Cambridge University Press. 2008 are highly encouraged.
+This question asks for textbook/research paper on word embedding only.
+"
+['natural-language-processing']," Title: How to build a custom morphological analyser for translation systemBody: I want to build a machine translation system from English to Georgian. Georgian is a language similar (and simpler) to the Russian language. its syntax looks like base + suffix, only suffix changes, most of the time base is frozen, to describe the time the only suffix is changed. Unfortunately, I couldn't find a morphological analyser for the Georgian language, so could you link or provide useful resources to help me to build one? or can you give me some suggestions?
+"
+"['machine-learning', 'training', 'generative-model']"," Title: Necessity of likelihood in training energy-based modelsBody: Lately, I've been getting into energy-based models (EBMs) through some of Yann LeCun's recent talks, where he advocates the use of non-normalized models because it allows for more flexibility in the choice of the loss function and convenient inference over high-dimensional spaces.
+However, after reading some papers on the recent approaches to training EBMs (e.g Kingma's How to Train Your Energy-Based Models), most approaches still use likelihood to optimize the EBMs parameters.
+I'm confused about the need for a normalized likelihood during training, when the whole idea of EBMs is that they are not normalized. Why are methods that shape the energy function directly not popular?
+"
+"['reinforcement-learning', 'comparison', 'evolutionary-algorithms']"," Title: What is the difference between ERL and EA by considering it as RL?Body: I am currently studying as an MSCS student and my research is based on Evolutionary Algorithm as Reinforcement Learning, and I am confused about the following terms:
+
+- What is the difference between Evolutionary Reinforcement Learning and an Evolutionary Algorithm considered as Reinforcement Learning?
+- If the Evolutionary Algorithm is Reinforcement Learning, what is the definition of the state?
+- Which part of the Evolutionary Algorithm corresponds to the reward in Reinforcement Learning?
+- What would the transition and action functions be? Can anyone please help me with this?
+
+"
+"['convolutional-neural-networks', 'implementation', 'convolution', 'convolutional-layers', 'residual-networks']"," Title: How to implement a (3 + 2)-dimensional convolutional layer where the 2d space is ""internal""?Body: I am trying to train a CNN to learn 5D (kind of) data. The data is structured as follows. It has three spatial dimensions [x, y, z]
, but it also has two "internal dimensions" [theta, phi]
at each [x, y, z]
. What I am trying to do is upsample the internal space from fewer [theta, phi]
data points.
+When I train a 2d residual network with random [x, y, z]
points in just the internal space it learns -- but there is some noise in the x, y, z
space, there should be a correlation with neighbouring points.
+What I wanted was some way to also include convolutions over the 3D [x, y, z]
space to try and remedy this.
+A possible but maybe naive approach is to do the following: Stack the images as [theta * phi, x, y, z]
(so, many input channels) and then have some 3d convolution layers, then after that stack as [x * y * z, theta, phi]
and take 2d convolutions in the internal space.
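+To show what I mean by that reshaping trick, here is a rough sketch in PyTorch (all sizes, the single 2d channel and the layer widths are placeholders, and this is only the naive variant described above):
+import torch
+import torch.nn as nn
+
+B, X, Y, Z, T, P = 2, 8, 8, 8, 4, 4          # batch, spatial (x, y, z), internal (theta, phi)
+x = torch.randn(B, X, Y, Z, T, P)
+
+# step 1: treat the internal grid as channels and convolve over (x, y, z)
+spatial = x.permute(0, 4, 5, 1, 2, 3).reshape(B, T * P, X, Y, Z)
+spatial = nn.Conv3d(T * P, T * P, kernel_size=3, padding=1)(spatial)
+
+# step 2: treat each (x, y, z) location as a batch item and convolve over (theta, phi)
+internal = spatial.reshape(B, T, P, X, Y, Z).permute(0, 3, 4, 5, 1, 2)
+internal = internal.reshape(B * X * Y * Z, 1, T, P)
+internal = nn.Conv2d(1, 1, kernel_size=3, padding=1)(internal)
+
+out = internal.reshape(B, X, Y, Z, T, P)
+print(out.shape)  # torch.Size([2, 8, 8, 8, 4, 4])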
+Another approach is to use 5d filters that span over all dimensions. This might be hard to implement for me and probably very memory hungry.
+Are there any other ways?
+"
+"['machine-learning', 'classification', 'objective-functions', 'regression']"," Title: Which loss function could I use to solve a regression problem as a classification problem (where we discretize the labels into buckets)?Body: I am considering a rather typical regression problem, but, for practice, I am trying to implement this as a classification problem.
+The setup is as follows. I have $\mathbb{R}$-valued labels $y_i \in [-1,1]$, which I then discretize to $N$ buckets -- my classification problem is to then predict the labels to the nearest bucket.
+This is rather straightforward and easy to implement with a cross-entropy loss function. However, I do not believe that this is the best option, as I would ideally like my predictions to be close to their correct bucket even when I do not predict them exactly (which will become more difficult as I take $N$ larger).
+My current approach involves using a mean-squared error loss function. My network outputs logits for each bucket, I apply a softargmax (so the network remains differentiable) and then convert the output of the network into the $\mathbb{R}$-valued prediction.
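+To make that concrete, this is essentially what I am computing (a minimal sketch; the number of buckets and the evenly spaced bucket centres are my own choices):
+import torch
+import torch.nn.functional as F
+
+N = 21                                          # number of buckets (assumed)
+centers = torch.linspace(-1.0, 1.0, N)          # bucket centres in [-1, 1]
+
+def soft_argmax_prediction(logits):
+    # differentiable "soft" bucket selection: expectation of the centres under the softmax
+    probs = F.softmax(logits, dim=-1)
+    return (probs * centers).sum(dim=-1)
+
+logits = torch.randn(8, N, requires_grad=True)  # stand-in for the network output on 8 samples
+y = torch.rand(8) * 2 - 1                       # real-valued labels in [-1, 1]
+loss = F.mse_loss(soft_argmax_prediction(logits), y)
+loss.backward()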
+My (very premature) results are nothing to write home about. So, I ask, is there a more natural loss function that I could consider for this exercise?
+"
+"['reinforcement-learning', 'reference-request', 'applications', 'sequence-modeling', 'seq2seq']"," Title: Can Reinforcement Learning be used to generate sequences?Body: Can we use reinforcement learning for sequence-to-sequence tasks? If yes, whether or not this is a good choice, how could this be done?
+"
+"['deep-learning', 'definitions', 'transformer']"," Title: What makes a transformer a transformer?Body: Transformers are modified heavily in recent research. But what exactly makes a transformer a transformer? What is the core part of a transformer? Is it the self-attention, the parallelism, or something else?
+"
+"['reinforcement-learning', 'deep-rl', 'dqn', 'policies']"," Title: DQN learns to always choose the same action for all statesBody: I have created an RL model that uses QBased policy with a neural network for estimating Q values.
+My action space has 27 actions, where each action is a 3-tuple and each value can be 1, 2 or 3. After training, the model always chooses the same action regardless of the state, for example (1, 2, 3) for all states. I know this is wrong and not an optimal policy, but I cannot figure out why it is happening. The policy I am using is given below; the code is in Julia and uses the ReinforcementLearning.jl library.
+# Now we use a QBasedPolicy and neural network to estimate values
+# Create a flux based DNN for q - value estimation
+STATE_SIZE = length(env.channels) # 3
+ACTION_SIZE = length(action_set) # 27
+model = Chain(
+ Dense(STATE_SIZE, 48, relu),
+ Dense(48, 48, relu),
+ Dense(48, 48, relu),
+ Dense(48, 48, relu),
+ Dense(48, ACTION_SIZE)
+ ) |> cpu
+
+# optimizer
+η = 1f-2 # Learning rate
+η_decay = 1f-3
+opt = Flux.Optimiser(ADAM(η), InvDecay(η_decay))
+
+# Create policies for each agent
+single_agent_policy = Agent(
+ policy = QBasedPolicy(;
+ learner = BasicDQNLearner(;
+ approximator = NeuralNetworkApproximator(;
+ model = model,
+ optimizer = opt
+ ),
+ min_replay_history = 500
+ ),
+ explorer = EpsilonGreedyExplorer(
+ kind = :linear,
+ ϵ_stable = 0,
+ ϵ_init = 0.5,
+ warmup_steps = 300,
+ decay_steps = 700,
+ is_training = true,
+ is_break_tie = false,
+ step = 1
+ )
+ ),
+ trajectory = CircularArraySARTTrajectory(;
+ capacity = 500,
+ state=Array{Float64, 1} => (STATE_SIZE)
+ )
+ )
+
+During training, the model explores and exploits various actions in different states, but, during the testing/exploitation phase, it always outputs the same action for every state.
+I searched for similar questions on the web, but none of the questions were well answered.
+"
+"['machine-learning', 'natural-language-processing', 'transfer-learning', 'pretrained-models', 'multiclass-classification']"," Title: How to perform multi-class text classification with a dataset of 80 documents?Body: I have a training dataset of 80 text documents with an average number of characters in each document of 25000 and 210 unique tags.
+How can I perform multi-class text classification with such a small dataset, without using a pre-trained model? If it cannot be done without a pre-trained model, then which pre-trained model should I use?
+"
+"['python', 'autoencoders', 'principal-component-analysis', 'dimensionality-reduction']"," Title: How do I select the number of neurons for each layer in an auto-encoder for dimensionality reduction?Body: I am trying to apply an auto-encoder for dimensionality reduction. I wonder how it will be applied on a large dataset.
+I have tried this code below. I have total of 8 features in my data and I want to reduce it to 3.
+from keras.models import Model
+from keras.layers import Input, Dense
+from keras import regularizers
+from sklearn.preprocessing import MinMaxScaler
+import pandas as pd
+data = pd.read_csv('C:/user/python/HR.csv')
+columns_names=data.columns.tolist()
+print("Columns names:", columns_names)
+print(data.shape)
+data.head()
+print(data.dtypes)
+# Normalise
+scaler = MinMaxScaler()
+data_scaled = scaler.fit_transform(data)
+# Fixed dimensions
+input_dim = data.shape[1] # 8
+encoding_dim = 3
+# Number of neurons in each Layer [8, 6, 4, 3, ...] of encoders
+input_layer = Input(shape=(input_dim, ))
+encoder_layer_1 = Dense(6, activation="tanh", activity_regularizer=regularizers.l1(10e-5))(input_layer)
+encoder_layer_2 = Dense(4, activation="tanh")(encoder_layer_1)
+encoder_layer_3 = Dense(encoding_dim, activation="tanh")(encoder_layer_2)
+# Create encoder model
+encoder = Model(inputs=input_layer, outputs=encoder_layer_3)
+# Use the model to predict the factors which sum up the information of interest rates.
+encoded_data = pd.DataFrame(encoder.predict(data_scaled))
+encoded_data.columns = ['factor_1', 'factor_2', 'factor_3']
+
+I have read in this tutorial that, if you have 8 features and your aim is to get 3 components, in order to set up a relationship with PCA, we need to create four layers of 8 (the original amount of series), 6, 4, and 3 (the number of components we are looking for) neurons, respectively. How does it make sense?
+Now, let's say that I initially have 500 features and I want to reduce them to 20, what should I do?
+According to my understanding, I need to reduce the number of neurons from the first to the last layer. So,
+
+- in the first layer, I have 500 neurons
+- in the second layer, it will be 250
+- in the third layer, it will be 130
+- in the fourth layer, it will be 60
+- in the fifth layer, it will be 20
+
+Is this correct, and why?
+And can I get matrix-like PCA at the end to see the components I got?
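+For concreteness, this is the deeper encoder I have in mind, following the same pattern as my code above (the intermediate layer sizes 250, 130 and 60 are just my guess):
+from keras.models import Model
+from keras.layers import Input, Dense
+
+input_dim = 500      # assumed number of original features
+encoding_dim = 20    # target number of components
+
+input_layer = Input(shape=(input_dim, ))
+x = Dense(250, activation="tanh")(input_layer)
+x = Dense(130, activation="tanh")(x)
+x = Dense(60, activation="tanh")(x)
+encoded = Dense(encoding_dim, activation="tanh")(x)
+
+encoder = Model(inputs=input_layer, outputs=encoded)
+# encoder.predict(data_scaled) would then give an (n_samples, 20) matrix,
+# analogous to the scores matrix obtained from PCA.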
+"
+"['machine-learning', 'genetic-algorithms', 'evolutionary-algorithms', 'convergence', 'elitism']"," Title: Does elitism cause premature convergence in genetic algorithms?Body: I have a genetic algorithm which is working fairly well. It's got all the standard operators, including initial random population, crossover ratio, mutation rate, degree of mutation, etc.
+This works fairly well, and I have tuned and optimized the hyperparameters as much as possible, including some adaptive variants. The one thing that ruins the results EVERY TIME is when I implement elitism. It does not seem to matter if I include 1 elite, or a certain percentage of elites. I have tried 1% through 10%, tried a decay variable so that elites would only survive a certain number of generations, and numerous other tactics. Every single time I add elitism, the solution gets stuck in a local optimum so deeply that there is no escape.
+Most of the literature recommends to have elites, but the elites ruin my GA every single time, without fail.
+Ideas?
+"
+"['reinforcement-learning', 'reference-request', 'deep-rl']"," Title: Why did Distributional Q Learning go out of popularity?Body: I read some papers (for example, this) and blogs that spoke about the advantages of distributional Q learning. However, it no longer seems to come up in literature. Did it have any shortcomings that led to its failure? If yes, can someone can talk about it here?
+"
+"['machine-learning', 'comparison', 'terminology', 'feature-extraction', 'representation-learning']"," Title: Where do the feature extraction and representation learning differ?Body: Feature selection is a process of selecting a subset of features that contribute the most.
+Feature extraction allows getting new features that are not actually present in the given set of features.
+Representation learning is the process of learning a new representation that contributes the most.
+I can see no difference between feature extraction and representation learning.
+Is feature extraction the same as representation learning? If not, where do they differ? Do they differ at the application level only?
+"
+"['natural-language-processing', 'unsupervised-learning', 'natural-language-understanding', 'machine-translation', 'natural-language-generation']"," Title: Can unsupervised models learn something from cat vocalizations?Body: I love cats, and over the years have noticed that they have recurrent patterns of vocalizations. For example, upon seeing a bird, a cat may start chittering, but the same cat would never chitter at humans. Then there are complex vocalizations, like meow-wow, which I have observed across multiple cats on different continents. At the same time, we have birds and monkeys which have vocabularies of up to 300 (?) words. It seems like cats are communicating something, but humans may be too tone-deaf to understand that.
+It seems to me like the task of understanding what a cat is trying to communicate to humans is suitable for some kind of machine learning process.
+My question is: has any of these unsupervised models been applied to cat vocalizations? In other words, if a model can draw or generate text, can it generate cat meows? How close are we to understanding what cats are meowing about and translating it into English?
+I remember that some work has been done with trying to decode dolphin vocalizations, but as you can imagine, that requires specialized equipment, while a cat model can be tested in the real world with simpler equipment.
+"
+"['reinforcement-learning', 'deep-rl', 'hyperparameter-optimization', 'hyper-parameters', 'proximal-policy-optimization']"," Title: What are the best hyper-parameters to tune in reinforcement learning?Body: Obviously, this is somewhat subjective, but what hyper-parameters typically have the most significant impact on an RL agent's ability to learn? For example, the replay buffer size, learning rate, entropy coefficient, etc.
+For example, in "normal" ML, the batch size and learning rate are typically the main hyper-parameters that get optimised first.
+Specifically, I am using PPO, but this can probably be applied to a lot of other RL algorithms too.
+"
+"['deep-learning', 'cross-validation', 'generalization', 'learning-rate', 'multi-label-classification']"," Title: Why is the validation loss less than the training loss, and what can be said about the effect of the learning rate?Body: I have the following results I am trying to make sense of. I have attached the loss curves here for reference.
+
+- As you can see, the first issue is that the validation loss is lower than the training loss. I think this is due to using a pre-trained model with a high dropout rate (please correct me if I am wrong here).
+
+- As one can see, the
mean_auc
score is increasing consistently, and so it seems that the network is indeed learning something and the validation loss is also better behaved relatively.
+
+- The training loss is what bugs me a lot. It is not at all consistent and varies a lot. This is a naive question, but is this graph giving me any sort of information about the learning rate, etc, or am I in a situation wherein everything is incorrect essentially?
+
+
+Any response would be really appreciated.
+
+"
+"['reinforcement-learning', 'deep-rl', 'hyper-parameters', 'soft-actor-critic', 'td3']"," Title: Optimal episode length in reinforcement learningBody: I have a custom environment for stock trading where an episode can be as long as 2000-3000 steps. I've run several experiments with td3 and sac algorithms, average reward per episode flattens after few episodes. I believe average reward per episode should further improve, so I thought whether my training episode is too long. What is the recommended upper limit on the episode length?
+"
+"['convolutional-neural-networks', 'image-recognition', 'object-detection', 'deep-neural-networks']"," Title: Number of classes vs number of parameters/layers?Body: How to estimate the number of parameters in CNN for object detection?
+I know that there are some well-known architectures that was trained on a lot of data (AlexNet, ResNet, VGG, GoogleLeNet). But they were trained for example for classifying 1000 classes. Or they were used as backbones in the algorithms like YOLO to localize 80 classes of objects.
+Now let's say that I want to classify only 5 classes. Or I want to perform object detection and I am interested only in cars and people. I want to detect/classify this small number of objects. So the network must learn only the features of cars and people (instead of learning the features of hundreds of objects).
+So my intuition is that I can use a smaller network with fewer parameters. Correct me if I am wrong. My second intuition is that the number of layers should not have a big impact. I mean, you shouldn't decrease the number of layers only because you have fewer classes, because the network learns more and more sophisticated features in deeper layers, and it wouldn't be able to detect advanced features of cars (or other objects) if you don't have enough layers.
+Recently I tried to use CenterNet https://arxiv.org/abs/1904.07850 to detect digits on 64x64 grayscale images and I achieved success with a quite simple 900k-parameter convnet. Then I tried to use a slightly modified GoogLeNet to detect cars using 224x224, 448x448, and 512x512 images. I trained it on 450 images. After a lot of trial and error, I still cannot train a good model. GoogLeNet is quite a small network compared to other well-known architectures, but I heard that it's very good: it was carefully designed to be very powerful despite being small (7M parameters).
+So, to be clear, my question is about the dependency between the number of classes and the number of layers and parameters.
+"
+"['terminology', 'word-embedding', 'embeddings']"," Title: Is an embedding a representation of a word or its meaning?Body: What does the term "embedding" actually mean?
+An embedding is a vector, but is that vector a representation of a word or its meaning? Literature loosely uses the word for both purposes. Which one is actually correct?
+Or is there anything like: A word is its meaning itself?
+"
+"['reinforcement-learning', 'deep-rl', 'policy-gradients', 'return', 'finite-markov-decision-process']"," Title: How to formulate discounted return in cartpole?Body: I am trying to formulate a problem that aims to prolong the lifetime of the simulation, the same as the Cartpole problem. I aware that there are two types of return:
+
+- finite horizon undiscounted return (used for episodic problems)
+
+$G = \sum_{t=0}^T R_t$
+
+- infinite horizon discounted return (used for non-episodic problems).
+
+$G = \sum_{t=0}^\infty \gamma^t R_t$
+However, I'm confused: is CartPole an episodic task? Ideally, the simulation lasts forever; this is my final objective (prolonging the lifetime). But it still has some termination states. Should I introduce the termination states and use them with a discounted return like
+$G = \sum_{t=0}^T \gamma^t R_t$
+"
+"['ai-basics', 'convergence', 'stochastic-gradient-descent', 'weights-initialization']"," Title: Why don't we use this intialization with SGD rather than random?Body: Suppose I have a loss function as a polynomial with its variables being the weights of a network I wish to tune. Now, we want to find the minima of the loss function - so basically argmin
.
+In ML, we simple use SGD with any initialization. But consider this: we take a few $n$ random combinations of weights and plot a visual graph
(not to be confused with computation graph) where we find the local minimas of the graph (basically any point surrounded by larger point values would be minima). We store the weights (value of the variables in the polynomial) used for each point in the graph in a data structure.
+Theoretically, if $n$ is big enough to be computationally efficient while being quite descriptive, we can simply take the weights of a random minima as initialization to the network and then perform SGD
on it to converge to a global minima (hopefully).
+This method would be quite faster since the initialization is better, and we don't need to compute for large values of $n$ - simply having a decent enough estimate. SGD
would finally be used with a low learning rate to give the final push and we can be done with easier and faster.
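+A toy sketch of the procedure I mean (the two-weight polynomial loss and all the numbers are made up purely for illustration):
+import numpy as np
+
+def loss(w):
+    # toy polynomial loss in the weights (stands in for the real network loss)
+    return (w[0] - 1) ** 2 * (w[0] + 2) ** 2 + w[1] ** 2
+
+rng = np.random.default_rng(0)
+n = 1000
+candidates = rng.uniform(-3, 3, size=(n, 2))          # n random weight combinations
+best = candidates[np.argmin([loss(w) for w in candidates])]
+
+# plain gradient descent from the best random candidate ("the final push")
+w, lr = best.copy(), 1e-2
+for _ in range(500):
+    g = np.zeros_like(w)                               # numerical gradient, for simplicity
+    for i in range(len(w)):
+        e = np.zeros_like(w); e[i] = 1e-5
+        g[i] = (loss(w + e) - loss(w - e)) / 2e-5
+    w -= lr * g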
+
+So why don't we do this instead of using random initialization? Is there a theoretical basis on which this can't work?
+
+"
+"['deep-learning', 'training', 'deep-neural-networks', 'computational-learning-theory', 'stability']"," Title: What is meant by ""stable training"" of a deep learning model?Body: I have read it said that the "stable training" of a deep learning model is important. What is meant by "stable training" of a deep learning model?
+"
+"['natural-language-processing', 'embeddings']"," Title: What is an ""input embedding"" in the context of NLP?Body: When reading about NLP, I saw it said that "input embeddings" are a main element of encoder-decoder learning frameworks for sequence modelling. What is an "input embedding" in the context of NLP?
+"
+"['image-segmentation', 'downsampling', 'upsampling']"," Title: What does 'downsampling' and 'upsampling' mean in coarse-to-fine segmentation?Body: The paper here in section 2.1 Coarse-to-fine prediction:
+
+To increase the field of view presented to the CNN and reduce the
+redundancy among neighboring voxels, each image is downsampled by a factor of 2. The resulting prediction maps are then resampled back to the original resolution using nearest-neighbor interpolation.
+
+What does it actually mean to downsample by a factor of 2?
+If I have an image size of $256 \times 256 \times 170$, and if I downsample it by a factor of 2, then will it result in an image of size $128 \times 128 \times 85$?
+Similarly, would upsampling/resampling be the opposite interpolation method, getting back to the original size of $256 \times 256 \times 170$?
+"
+"['pytorch', 'transformer', 'attention', 'data-visualization']"," Title: Visualizing encoder-attention after ResNet in terms of ResNet inputBody: I have a transform-encoder only architecture, which has the following structure:
+ Input
+ |
+ v
+ ResNet(-50)
+ |
+ v
+fully-connected (on embedding dimension)
+ |
+ v
+positional-encoding
+ |
+ v
+transformer encoder
+ |
+ v
+Linear layer to alphabet.
+
+I am trying to visualize the self-attention of the encoder layer to check how each input of the attention attends other inputs. (E.g. https://github.com/jessevig/bertviz)
+Where I encounter difficulty is in how I can visualize these activations in terms of the original input of the ResNet and not its output, in order to make my model visually interpretable.
+Do you have any ideas or suggestions?
+"
+"['convolutional-neural-networks', 'reference-request', 'transformer', 'convolution', 'positional-encoding']"," Title: Has positional encoding been used in convolutional layers?Body: Positional encoding (PE) is an essential part of the self-attention layers in the transformer architectures since without adding it in some way (fixed of learnable) to the input embeddings model has ultimately no notion of order and is permutationally equivariant and the given token attends to the far separate and local tokens identically.
+The convolution operation with a local filter, say of size $3, 5$ for 1D convolutions or $3 \times 3, 5 \times 5$ for 2D convolutions, has some notion of locality by construction. However, within this neighborhood, all pixels are treated in the same way.
+However, it may be important that a given pixel is central to the application of the filter, whereas another is close to its boundary. For small filters ($3 \times 3$) it is probably not an issue, but for larger ones the injection of PE could be useful.
+Has this question been investigated in the literature? Are there any architectures with PE + convolutions?
+"
+"['computer-vision', 'reference-request', 'face-detection']"," Title: What are the most relevant resources that define the face detection problem formally?Body: I am new to AI, and I am a bit lost about finding the relevant materials that define the face detection problem formally/mathematically.
+Can anyone help me formally define face detection, or at least point me towards papers that define it formally?
+"
+"['geometric-deep-learning', 'graph-neural-networks', 'spectral-analysis']"," Title: Are spectral approaches to Graph Neural Networks still considered?Body: I've been reading several papers and reviews about Graph Neural Networks, and I still feel a bit confused about the difference between the two approaches, and also if the spatial approaches have somehow 'overcome' spectral ones. I will add some of my understanding:
+Graph Neural Networks take inspiration from the convolution operation between two signals in a Euclidean domain, as a way to combine the features on the nodes as it happens for Convolutional Neural Networks. To do this, a notion of convolution has been required for the graph domain. Given $\bf{x},\bf{y} \in \mathbb{R}^N$ two signals, then
+$$\bf{x} \, \, *_G \, \, \bf{y} := U(U^Tx \, \odot \, U^Ty)$$
+namely, we perform convolution on the graph domain and then take everything back using the inverse Fourier Transform. If we choose a filter $\bf g_\theta = diag(\theta_1, \dots, \theta_N)$ parametrized by some $\theta \in \mathbb{R}^N$ then the convolution becomes
+$$\bf{x} \, *_G \bf{g_\theta} = Ug_\theta U^Tx$$
+The approach above presents several limitations in terms of non-localization of the filters (which depend on the entire graph) and scalability issues in the presence of perturbations. So the authors of ChebNet proposed the following approximation:
+$$\bf{x} \, *_G \bf{g_\theta} = \sum_{i=0}^K \theta_iT_i(\tilde{L})x$$
+where $\tilde{L} = 2L/\lambda_{max}-I_n$, $L$ is the Laplacian and $T_i$ are Chebyshev polynomials.
+Now the crucial step is that Kipf et Al. (2017) have bridged the gap between spectral and spatial approaches by proposing a first order approximation of the above equation (assuming $\lambda_{max} =2$ and $\theta = \theta_0 = -\theta_1$):
+$$\bf{x} \, *_G \bf{g_\theta} = \theta(I_n+ D^{-1/2}AD^{-1/2})x$$
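+(To check my own understanding, here is a tiny NumPy sketch of that first-order propagation rule; the graph and all sizes are made up.)
+import numpy as np
+
+N, F_in, F_out = 4, 3, 2
+A = np.array([[0, 1, 0, 0],
+              [1, 0, 1, 1],
+              [0, 1, 0, 0],
+              [0, 1, 0, 0]], dtype=float)      # adjacency matrix
+X = np.random.randn(N, F_in)                   # node features
+Theta = np.random.randn(F_in, F_out)           # learnable filter parameters
+
+D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
+S = np.eye(N) + D_inv_sqrt @ A @ D_inv_sqrt    # I_n + D^{-1/2} A D^{-1/2}
+H = S @ X @ Theta                              # one propagation step on the node features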
+Now, from what I've read so far, it seems that from now on several improvements have been made on the spatial approach which defines convolutions on top of node's graph neighbourhood.
+The question is, does it still make sense to focus on spectral approaches?
+"
+"['neural-networks', 'autoencoders']"," Title: Why would an auto-encoder produce latent vectors with many zeros?Body: My autoencoder give latent vectors with many zeroes components like:
+[3.0796502 2.9488854 0.9002177 0. 0. 0.
+ 0. 0. 0. 1.0181859 0. 0.68507403
+ 0. 0.6128702 0. 0. 0. 0.
+ 0. 1.763725 0. 0. 0. 1.0947669
+ 0. 1.5330162 0. 0. 0. 0.
+ 0. 1.7434856 0. 0. 1.8942142 2.0379465
+ 0. 0. 1.2500542 0. 0. 0.
+ 0. 0. 0. 2.7917862 0. 0.
+ 2.2105153 0. 0. 0. 1.5798858 0.
+ 0. 3.7405093 0.8692952 0.01490922 0. 0.
+ 2.8320081 0. 0. 0. ]
+
+Certain components are always zero, others only sometimes. Why might this be happening? How can I figure this out?
+"
+"['machine-learning', 'definitions', 'algorithm-request']"," Title: Fitting a Gaussian distribution into another distributionBody: Assume we have two vectors, containing random samples (maybe audio data?). Their distribution can be approximated to a normal distribution, so we can calculate their mean and standard deviation.
+
+- I am looking for a way to "fit" the second vector's samples, in a way that their mean and standard deviation correspond to the first vector's mean and standard deviation.
+
+- Also, I am looking for a way to do this by "moving the second vector's samples the least possible". This is because, an easy way to solve this problem could be to replace the second vector's data, with random samples that fit the first vector's parameters. This solution is easy, but not interesting.
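+To make the goal concrete, the simplest moment-matching transform I know of looks like the sketch below (my own baseline; I am asking whether something smarter, e.g. learned, exists):
+import numpy as np
+
+rng = np.random.default_rng(0)
+a = rng.normal(2.0, 3.0, 1000)    # reference vector
+b = rng.normal(-1.0, 0.5, 1000)   # vector whose samples should be "fitted" to a
+
+# standardize b, then rescale it to a's mean and standard deviation
+b_fitted = (b - b.mean()) / b.std() * a.std() + a.mean()
+print(b_fitted.mean(), b_fitted.std())  # approximately 2.0 and 3.0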
+
+
+Questions
+
+- Is this kind of problem correlated with machine learning in general? If yes "how"?
+
+- Is there a way to perform this kind of operation with some kind of neural network? If yes, how could it be modelled?
+
+
+"
+"['deep-learning', 'autoencoders', 'variational-autoencoder']"," Title: Weird KL divergence behaviourBody: I'm training a complex model for motion prediction using a VAE, however the KL divergence has a very strange behavior
.
+A scheleton of the network is the following:
+
+At the end, my network computes the MSE loss of the trajectory and the Kullback-Leibler loss (with a Gaussian prior with mean 0 and std equal to 1), given as:
+kld_loss = -0.5*torch.sum(1 + sigma - mu.pow(2) - sigma.exp())
+
+Any idea of the possible causes? Do you need further details?
+"
+"['convolutional-neural-networks', 'long-short-term-memory', 'time-series']"," Title: Forecasting of spatio-temporal event dataBody: I’m currently working on my dissertation which is centred around forecasting social conflict events. I’m using data from GDELT (Global Database of Events, Tone, and Language) to develop my forecasting model. For the sake of conveying the problem and limiting the length of this post, I have simplified the features used in my investigation. These can be summarised as follows:
+(please feel free to skip the feature description to the end of this post indicated by "The Question" marked in bold if TL:DR)
+Temporal Attribute:
+
+FractionDate: Date of event [numerical].
+
+Actor Attributes:
+
+Actor1Type: The type of actor who performed the action [factor]. (e.g. Government, Rebels, Civilians, etc.)
+Actor2Type: The type of actor who received the action [factor]. (e.g. Government, Rebels, Civilians, etc.)
+
+Event Action Attributes:
+
+EventClass: Verbal cooperation, material cooperation, verbal conflict, and material conflict encoded as 1,2,3,4 respectively [factor].
+EventImpact: A numeric score from [-10,10] capturing the potential impact that type of event may have on the stability of a country [numerical].
+
+Spatial Attributes:
+
+ActionGeoLong: Longitude where the action took place [numerical].
+ActionGeoLat: Latitude where the action took place [numerical].
+
+The database is updated on a daily schedule and is roughly 50 MB on average for single days data. The data is filtered to include only events that took place in a single country, which decreases the file size to about 1-2 MB. These events are then aggregated on a weekly basis.
+One notable modeling method to predict spatio-temporal data is by means of ConvLSTM models. These models have been successfully implemented in, for example, predicting precipitation or traffic flow. So the strategy that I have so far is:
+
+- Aggregate the spatial data to generate weekly geographical heatmaps, showing the intensity (for the sake of simplicity, this can be thought of as a weighted product of frequency and EventImpact) of events for each EventClass. That is, you are left with a time series of 4 heatmaps similar to the ones below.
+
+
+
+- Aggregate the actor data to generate weekly actor "Interaction" matrices [I]. These matrices show the intensity (yet again, this can be thought of as a weighted product of frequency and EventImpact) of the interaction between each pair of actors for each EventClass. Actor 1 (the performer) is on the rows and Actor 2 (the receiver) is on the columns; therefore, [I]_{n,m} would mean the intensity of Actor n doing something to Actor m. (Note that these matrices won't be symmetrical: the intensity of actor n doing something to actor m is different from actor m doing something to actor n.) Then you are left with a time series of 4 matrices similar to the ones below:
+
+
+The two above (geographical heatmaps and interaction matrices) will be the "input" to my model, and it should be able to predict the next week's heatmap and interaction matrix given the history of events. In theory, I should be able to construct a ConvLSTM model for the geographical heatmaps or the interaction matrices separately. Therefore, the problem I am faced with is building a sort of ensemble of ConvLSTMs which is able to learn from both input sources simultaneously.
+The Question:
+Is there a way to construct a ConvLSTM that can learn from two different "types" of input tensors? The first being a sequence of geographical heatmaps (with 4 channels), and the second being a matrix (also with 4 channels). If so, how would you implement this in Keras? It is very important that the model considers both sources in order to learn underlying mechanisms of the system. An example of the model Input and Output is provided below.
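+To show the shape of what I am after, here is the rough kind of two-branch model I imagine in Keras (untested; the grid sizes, sequence length and layer widths are all placeholders):
+from tensorflow.keras import layers, Model
+
+T = 8                                             # number of past weeks fed to the model (assumed)
+heatmaps = layers.Input(shape=(T, 64, 64, 4))     # weekly geographic heatmaps, 4 EventClass channels
+matrices = layers.Input(shape=(T, 20, 20, 4))     # weekly actor interaction matrices, 4 channels
+
+h = layers.ConvLSTM2D(16, kernel_size=3, padding="same")(heatmaps)
+m = layers.ConvLSTM2D(16, kernel_size=3, padding="same")(matrices)
+
+# merge the two summaries and predict next week's heatmap and interaction matrix
+merged = layers.Concatenate()([layers.Flatten()(h), layers.Flatten()(m)])
+next_heatmap = layers.Reshape((64, 64, 4))(layers.Dense(64 * 64 * 4)(merged))
+next_matrix = layers.Reshape((20, 20, 4))(layers.Dense(20 * 20 * 4)(merged))
+
+model = Model(inputs=[heatmaps, matrices], outputs=[next_heatmap, next_matrix])
+model.compile(optimizer="adam", loss="mse")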
+
+Thank you for taking the time to read. I would appreciate additional opinion or other applicable modeling methods very much.
+"
+['datasets']," Title: Split on dataset with some shared features?Body: I have a dataset with financial stock data, some of the features are shared, for example daily gold prices, while the stock price for each individual stock is different, the gold price would be the same for everybody that day.
+When I split 80/10/10 randomly, it's "cheating" and while the result accuracy is great the actual real world live result is bad.
+When I split sequentially, ie first 8 years of data in training, next year in validation, last year in testing. The result accuracy is bad, and live testing is also bad.
+What I want to ask is: should I do a random split between just training and validation on the first 9 years of data, and then do testing on the last year of data separately?
+Or is the sequential split as good as it's going to get, and I simply can't predict the future?
+"
+"['natural-language-processing', 'word-embedding', 'sparse-word-embedding']"," Title: How do sparse word embeddings fail to capture synonymy?Body: While reading some explanations of why dense word embeddings work better than sparse word embeddings, the following statement has been given in the chapter Vector Semantics and Embeddings, showing a drawback of sparse word embeddings.
+
+Dense vectors may also do a better job of capturing synonymy. For example, in a sparse vector representation, dimensions for synonyms like car and automobile dimension are distinct and unrelated; sparse vectors may thus fail to capture the similarity between a word with car as a neighbor and a word with automobile as a neighbor.
+
+It says that the dimensions of synonyms may be unrelated and distinct. I am facing difficulty in understanding it.
+Can anyone provide me a simple example to understand it by taking some simple dimensions which are unrelated and distinct?
+You can consider either documents or (context) words as dimensions for the example.
+"
+"['deep-learning', 'generative-adversarial-networks', 'variational-autoencoder']"," Title: Is there a performace benefits using VAE-GAN instead of just GAN?Body: I have read that when using VAE-GANs, first what happens is the VAE's encoder encodes some image to another encoded image, which from GAN's point of view is considered a noise, and then the GAN part generates another image from that noise which from VAE's point of view is just an encoded image.
+Is that encoded image better suited for GAN to generate better images or not?
+The problem which bugs me is that there are not that many articles about VAE-GANs, especially in the last 2 years.
+As a side question, does that mean that VAE-GANs do not have any significant performance benefits than just simple GAN?
+"
+"['neural-networks', 'terminology', 'papers', 'semantic-segmentation']"," Title: What is meant by Hinton when he refers to ""Part-Whole Hierarchies"" in his GLOM frameworkBody: I was recently reading Hinton's GLOM idea How to represent part-whole hierarchies in a neural network, and I am simply unsure about what exactly he means when he says parsing images into "part-whole hierarchies".
+Moreover, wouldn't semantic segmentation "parse" the parts from the whole image? So what is different here?
+"
+"['neural-networks', 'deep-learning', 'research']"," Title: In the field of Deep Learning research, what considerations do researchers take into account when inventing new neural network models?Body: I am not a researcher, but I am curious to know what considerations are relevant to take into account during research for the invention of a new neural network model, and what relevant knowledge researchers typically possess in the area.
+And an accompanying question: is a background in neuroscience relevant to such an investigation?
+"
+"['binary-classification', 'multi-label-classification', 'binary-crossentropy']"," Title: What are pros and cons of using a multi-head neural network versus a single neural network for multi-label classification?Body: I haven't been able to find a good discussion specifically comparing the two (only one describing a classification and regression problem). I am training a classifier to learn both age and gender based on genomic data. Every sample has a known age and known gender (20 classes in total).
+Currently, I am using a single neural network with a sigmoid activation in the last layer with a binary_crossentropy loss. This works fine. However, I also see people using multi-head neural networks where, for example, a set of shared layers would split in to two either additional dense layers or in to two final layers for classification – each with an independent loss (in my case likely a categorical_ce).
+What I am unsure of, though, are the advantages and disadvantages between the two (maybe advantages and disadvantages are not the right words to use – actual differences between the two might be more appropriate and when one might use one of those over the other)?
+I want to be able to calculate the usual metrics – TP, FP, etc. after training – presumably it would be easier with two heads at the end of the network, as you can work with two independent sets of predictions to calculate these?
+"
+"['natural-language-processing', 'word-embedding', 'bert', 'embeddings']"," Title: Why are BERT embeddings interpreted as representations of the corresponding words?Body: It's often assumed in literature that BERT embeddings are contextual representations of the corresponding word. That is, if the 5th word is "cold", then the 5th BERT embedding is a representation of that word, using context to disambiguate the word (e.g. determine whether it's to do with the illness or temperature).
+However, because of the self-attention encoder layers, this embedding can in theory incorporate information from any of the other words in the text. BERT is trained using masked language modelling (MLM), which would encourage each embedding to learn enough to predict the corresponding word. But why wouldn't it contain additional information from other words? In other words, is there any reason to believe the BERT embeddings for different words contain well-separated information?
+"
+"['machine-learning', 'training', 'recurrent-neural-networks', 'overfitting', 'computational-learning-theory']"," Title: Is it possible to overfit a model on infinite amounts of data?Body: This is a theoretical question. Is it possible to overfit a model on infinite amounts of data?
+Let me clarify there are no duplicates.
+Say, we have a generator function that produces data, with the correct classification/regression value, and we can generate infinite amounts of valid data. How long does it take for the model to overfit?
+This question arose because I'm training an RNN model for fake news classification, and the MSE loss is almost always 0.000 after only 25% of the training data.
+Will it be possible to overfit with one epoch of training on the infinite data generator?
+(I'm thinking what will happen is the model will either get perfect, or sync into the generator's non-perfect randomness, and learn nothing)
+"
+"['natural-language-processing', 'reference-request', 'optical-character-recognition', 'named-entity-recognition', 'text-detection']"," Title: Which AI techniques are there that combine multiple models to make sense of data at different stages?Body: I have been working to design a system that uses multiple machine learning models to make sense of data that is dynamically webscraped. Each AI would handle a specific task, for example:
+An AI model would identify text in an image, then attempt to create plain text of what it might be. Once the text is extracted, it would be passed in a stored variable to an AI that can read the text to determine if it is a US city/state.
+I tried to look into if others have done this, but didn't find much on it relating to what I was looking for. Does anyone know if there are potential issues with this? Logically, it looks good to me, but I figured I'd ask.
+If anyone can put me in the right direction for reading material or further information, I would appreciate it.
+"
+"['markov-decision-process', 'reward-functions', 'bellman-equations', 'policy-iteration', 'dynamic-programming']"," Title: How do we get the value of this state of an MDP, at time-step $h-2$, using dynamic programming?Body: I am trying to understand the problem below, represented as an MDP with four states (PU, PF, RU, and RF) and two actions (AS).
+
+Let's consider V(RF), the value of the state RF. At time-step $h$, V(RF) = 10. When we go to the previous time-step $h-1$, V(RF) increases to 19.
+Why is the value of RF increasing backward, i.e. at time-step $h$, which is the last step, it's 10, but in $h-1$ it's 19?
+Also, when I apply the Bellman equation, I am not getting the value of V(RF) at time-step $h-2$, which is 25.08, according to the table.
+Below is my solution which I am applying on V(RF):
+Let's suppose that, for RF, I know that
+$$V_h(RF) = \max \{R(RF, A),\; R(RF, S)\} = \max \{10, 10\} = 10$$
+For $h-1$:
+$$V_{h-1}(RF) = \max_{a} \Big[ R(RF, a) + \gamma \sum_{s'} P(s' \mid RF, a)\, V_h(s') \Big] = \max \{10 + 0.9(1 \cdot 0),\; 10 + 0.9(0.5 \cdot 10 + 0.5 \cdot 10)\} = \max \{10, 19\} = 19$$
+For $h-2$:
+$$V_{h-2}(RF) = \max_{a} \Big[ R(RF, a) + \gamma \sum_{s'} P(s' \mid RF, a)\, V_{h-1}(s') \Big] = \max \{19 + 0.9(1 \cdot 0),\; 19 + 0.9(0.5 \cdot 10 + 0.5 \cdot 10)\} = 28.0$$
+
+So, in the above scenario, the discount factor is 0.9, but I am not sure how we get the third result, $V_{h-2}(RF) = 25.08$, as given in the table. Where are we using the last part, $V_h(\text{State})$, from the equation?
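+For reference, here is a minimal Python sketch of finite-horizon value iteration, which computes $V_{h-1}$ from $V_h$ by working backwards; the rewards and transition probabilities below are placeholders (only the RF entries from my own calculation are filled in), not the full numbers from the problem's table:
+gamma = 0.9
+states = ['PU', 'PF', 'RU', 'RF']
+actions = ['A', 'S']
+# Placeholder model: reward R[(s, a)] and transition distribution P[(s, a)] = {s': prob}.
+R = {('RF', 'A'): 10.0, ('RF', 'S'): 10.0}
+P = {('RF', 'A'): {'PU': 1.0}, ('RF', 'S'): {'RF': 0.5, 'RU': 0.5}}
+
+def backward_values(horizon):
+    V = {s: 0.0 for s in states}          # value with 0 steps remaining
+    for _ in range(horizon):
+        V = {s: max(R.get((s, a), 0.0)
+                    + gamma * sum(p * V[s2] for s2, p in P.get((s, a), {}).items())
+                    for a in actions)
+             for s in states}
+    return V                              # value of each state with `horizon` steps remaining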
+"
+"['training', 'recurrent-neural-networks']"," Title: Initial Input $h_0$ for RNN and updation of weightsBody: Consider an input to RNN $ x = \{x_i\}_{1}^{n}$. Assume that the length of each input $x_i$ is k.
+Now, consider the following diagram from p5 of this pdf
+
+My doubts are:
+
+- What should I pass as $h_0$? Is it a zero vector?
+
+- Does the RNN update its weight matrices $U, W, V$ after each token of input $x_i$, or does it update only after passing all tokens of a particular input $x$?
+
+
+"
+"['q-learning', 'deep-rl', 'gym', 'double-q-learning']"," Title: Deep Q-Learning ""catastrophic drop"" reasons?Body: I am implementing some "classical" papers in Model Free RL like DQN, Double DQN, and Double DQN with Prioritized Replay.
+Through the various models I'm running on CartPole-v1 using the same underlying NN, I am noticing that all of the above 3 exhibit a sudden and severe drop in average reward (with a sudden and significant increase in loss) after achieving peak scores.
+After reading online, I can see that this is a recognized problem, but I can't find a suitable explanation. Things I have tried to mitigate it:
+
+- adapt model architecture
+- tune hyperparams like LR, batch_size, loss function (MSE, Huber)
+
+This problem persists, and I cannot seem to achieve any sustained peak performance.
+Useful links I found:
+
+Example:
+
+- till ~250 episodes in Double DQN with PR (with annealing beta), performance steadily goes up, with both an increase in reward and a decrease in loss
+- after that stage, the performance dips suddenly, with both a decreased average reward and an increased loss, as seen in the output below
+
+Episode: Mean Reward: Mean Loss: Mean Step
+ 200 : 173.075 : 0.030: 173.075
+ 400 : 193.690 : 0.011: 193.690
+ 600 : 168.735 : 0.015: 168.735
+ 800 : 135.110 : 0.015: 135.110
+ 1000 : 157.700 : 0.013: 157.700
+ 1200 : 99.335 : 0.013: 99.335
+ 1400 : 97.450 : 0.015: 97.450
+ 1600 : 102.030 : 0.012: 102.030
+ 1800 : 130.815 : 0.010: 130.815
+ 1999 : 89.76 : 0.013: 89.76
+
+Questions:
+
+- what is the theoretical reasoning behind this? Does this fragile nature mean we cannot use the above-mentioned 3 algorithms to solve CartPole-v1?
+- if not, what steps can help mitigate this? Could this be overfitting and what does this brittle nature indicate?
+- any references to follow up with regarding this "catastrophic drop"?
+- I observe similar behavior in other environments as well, does this mean that the above mentioned 3 algorithms are insufficient?
+
+Edit:
+Taking from @devidduma's answer, I added time-based LR decay to the DDQN+PRB model and kept everything else the same. Here are the numbers; they look better than before in terms of the magnitude of the performance drop.
+ 10 : 037.27 : 0.5029 : 037.27
+ 20 : 121.40 : 0.0532 : 121.40
+ 30 : 139.80 : 0.0181 : 139.80
+ 40 : 157.40 : 0.0119 : 157.40
+ 50 : 225.10 : 0.0107 : 225.10 <- decay starts here, factor = 0.001
+ 60 : 227.90 : 0.0101 : 227.90
+ 70 : 227.00 : 0.0087 : 227.00
+ 80 : 154.30 : 0.0064 : 154.30
+ 90 : 126.90 : 0.0054 : 126.90
+ 99 : 154.78 : 0.0057 : 154.78
+
+Edit:
+
+- after further testing, pytorch's ReduceLROnPlateau seems to be working best with the patience=0 param.
+"
+"['neural-networks', 'convolutional-neural-networks', 'activation-functions', 'weights', 'softmax']"," Title: Which solutions are there to the problem of having too large activations before the softmax (or sigmoid) layer?Body: I'm trying to build a neural network (NN) for classification using only N-bit integers for both the activations and weights, then I will train it with some heuristic algorithm, based only on the NN evaluation.
+Currently, I'm using a non-linear activation function for hidden units. Because of its probability interpretation, I am forced to use the softmax (or the sigmoid for 2-class case) for the output layer. However, because of the use of integers, the linear combination of the activations and weights can easily be too large, and this causes a problem to the exponential in the softmax evaluation.
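+To illustrate the issue, a small NumPy sketch (the logits are made-up values); the max-subtraction shift in the second half is the usual numerical trick and leaves the softmax mathematically unchanged:
+import numpy as np
+
+logits = np.array([1000.0, 2000.0, 3000.0])      # hypothetical, too-large pre-activations
+print(np.exp(logits))                             # overflows: [inf inf inf]
+
+# Shifting by the maximum does not change softmax(x) but keeps exp() in range.
+shifted = logits - logits.max()
+print(np.exp(shifted) / np.exp(shifted).sum())    # [0., 0., 1.] up to underflow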
+Any solution?
+"
+"['generative-adversarial-networks', 'image-generation']"," Title: Text to image GANs and failureBody: My knowledge of GANs is relatively basic at the moment but I seem to remember reading somewhere that GANs that generate images from a text prompt, when they fail to understand some of the text/words render those text/words in the image itself instead of interpreting and rendering what they refer to - this is apparently a known bug or failure.
+Can somebody confirm whether this is true or false? And, if possible, provide a link to writing about it that will allow me to verify the details, e.g. whether it's specific to a type of GAN, a version, etc. Many thanks in advance.
+"
+"['comparison', 'terminology', 'word-embedding', 'categorical-data', 'one-hot-encoding']"," Title: Is categorical encoding a type of word embedding?Body: Word embedding refers to the techniques in which a word is represented by a vector. There are also integer encoding and one-hot encoding, which I will collectively call categorical encoding.
+I can see no fundamental difference between categorical encoding and word embedding. They may be different at an application level.
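+For example (toy numbers, just for illustration):
+vocab = ['cat', 'dog', 'fish']
+
+# Integer encoding: each word is an arbitrary index.
+integer_enc = {'cat': 0, 'dog': 1, 'fish': 2}
+
+# One-hot encoding: a sparse vector with a single 1, one dimension per vocabulary word.
+one_hot = {'cat': [1, 0, 0], 'dog': [0, 1, 0], 'fish': [0, 0, 1]}
+
+# Word embedding: a dense, learned, real-valued vector whose dimension is
+# independent of the vocabulary size (values here are made up).
+embedding = {'cat': [0.21, -1.30, 0.07], 'dog': [0.18, -1.12, 0.11]}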
+Is it true that categorical encoding is a type of word embedding? And are the different names solely due to the task to which the technique is applied?
+"
+"['neural-networks', 'tensorflow', 'keras']"," Title: Neural Network Regression Experiment Going WrongBody: I've been trying to get a simple regression experiment going with a neural network and I would like some help interpreting what is going wrong.
+My goal is to see what level of regression accuracy I can achieve with a feed forward neural network. I have N pairs of inputs, x, and outputs, y.
+As an example:
+X is made up of serial integers starting at 0: e.x [0 1 2 3 ... N]
+Y is made up of pairs of random floats between 0 and 1: e.x [[.263 .548] [.157 .014] [.988 .478] ... Nth [.356 .245]].
+
+So my neural network's structure has 1 input neuron and 2 output neurons and some hidden layers in between whose properties are part of this experiment.
+These are the questions I seek to answer:
+
+- Can this neural network with some configuration of hidden layers map these inputs and outputs with perfect accuracy?
+- If not, what is the best accuracy that can be achieved with a reasonably sized network?
+- Is there a limit on the value of N such that the accuracy of the mapping deteriorates past an accuracy threshold t for each value a and b of an output pair, [a, b], of +- z where z is fairly small? In other words, can the prediction stay within a - t and a + t, and the same for b?
+
+Here is my model and some relevant functions as it stands:
+# Custom Loss
+def absoluteDifference(y_true, y_pred):
+ return abs(y_pred - y_true)
+
+model = Sequential()
+model.add(Dense(1, activation='relu', input_dim=1, kernel_initializer='he_uniform'))
+model.add(Dense(10, activation='relu'))
+model.add(Dense(10, activation='relu'))
+model.add(Dense(magnitudes))  # magnitudes = number of output values per example (2 here)
+
+model.compile(loss=absoluteDifference, optimizer='adam', metrics=['accuracy'])
+
+The results of my experiments are confusing. It always seems to bottom out at some accuracy threshold and never progress. When I compare the outputs of the network to the y value pairs they are either wildly different than y or the predictions seem to be flattening at one value:
+Here are some actual results:
+Results 1: Flattening:
+Y:
+[[0.2890625 0.43554688]
+ [0.1171875 0.02734375]
+ [0.44921875 0.11328125]
+ [0.04296875 0.25585938]
+ [0.4921875 0.42578125]
+ [0.09960938 0.04101562]
+ [0.265625 0.05273438]
+ [0.421875 0.26757812]
+ [0.40625 0.0859375 ]
+ [0.25976562 0.1328125 ]]
+
+Predictions:
+[[0.27030337 0.11708295]
+ [0.27030337 0.11708295]
+ [0.27030337 0.11708295]
+ [0.27030337 0.11708295]
+ [0.27030337 0.11708295]
+ [0.27030337 0.11708295]
+ [0.27030337 0.11708295]
+ [0.27030337 0.11708295]
+ [0.27030337 0.11708295]
+ [0.27030337 0.11708295]]
+
+Results 2: Different:
+Y:
+[[0.3515625 0.47851562]
+ [0.16796875 0.28320312]
+ [0.140625 0.453125 ]
+ [0.21679688 0.44726562]
+ [0.21484375 0.0859375 ]
+ [0.47265625 0.15429688]
+ [0.37304688 0.30078125]
+ [0.06054688 0.04492188]
+ [0.49609375 0.41992188]
+ [0.4453125 0.40820312]]
+
+Predictions:
+[[0.18319808 0.33377975]
+ [0.16718353 0.33032405]
+ [0.19876595 0.32554907]
+ [0.23596771 0.3200497 ]
+ [0.2726316 0.31566542]
+ [0.30836326 0.31124872]
+ [0.34609824 0.30569595]
+ [0.38202912 0.30291694]
+ [0.4184485 0.2967524 ]
+ [0.45500547 0.29231113]]
+
+I would have expected Y and Predictions to match after training. What am I doing wrong here?
+"
+"['reinforcement-learning', 'q-learning']"," Title: How is it possible that Q-learning can learn a state-action value without taking into account the policy followed thereafter?Body: From my readings, I have been taught that the state-action value depends on the policy being followed. That seems logical because the expected return from actual actions will be different depending on which actions follow it.
+On page 58 of Sutton & Barto's book, we have
+
+So, how is it possible that Q-learning can learn a state-action value without taking into account the policy followed thereafter (i.e. the policy followed after having taken action $a$ in the state $s$)?
+"
+"['neural-networks', 'machine-learning', 'objective-functions']"," Title: Loss function to minimize the distance between setsBody: Are there references or links to examples about loss functions "Distance Metrics" which could be used to minimize the distance between two sets for a neural network. More precisely, this distance metric must depend on the whole set in calculation and not only a single point as the Euclidean distance.
+It is known that the Hausdorff distance is used to find the distance between two sets, and it is widely used for image comparison, but it additionally depends on a point in its calculation. In my case, I can't depend on a single point for the distance metric; I must consider the whole set to compare it with the other set! Is there any recommendation?
+"
+"['recurrent-neural-networks', 'long-short-term-memory', 'activation-functions', 'sigmoid', 'gated-recurrent-unit']"," Title: What is it about sigmoid activations in particular that allows for the keeping and forgetting of past information from different time scales?Body: My understanding is that normal recurrent neural networks (RNNs) are not good at keeping past information from different time scales. Furthermore, my understanding is that Gated RNNs, such as Long Short-Term Memory, model the keeping and forgetting mechanisms explicitly with sigmoid activations, namely gates. What is it about sigmoid activations in particular that allows for the keeping and forgetting of past information from different time scales?
+"
+"['neural-networks', 'machine-learning', 'reinforcement-learning', 'python']"," Title: Best way to use/learn ML for board-game reinforcement learningBody: I am relatively new to Python but I taught myself enough to code a two-player board game that is similar to chess. It has a simple Tkinter UI. Now I am dipping into machine learning, and I want to write another program to play itself in this game repeatedly and "naturally" learn strategies for playing the game.
+Can anyone give advice on what I might be able to use for this? Is Tensorflow a good option? Is there a Python library well suited for this that I could adapt and train? I am partially through the buildingai.elementsofai.com course, but I am still very new at ML / AI.
+"
+"['neural-networks', 'machine-learning', 'math', 'definitions']"," Title: how to go from mathematical problem to neural network (and back)?Body: I am a little confused on how, you can find online papers that describe complex Machine Learning formulas in a mathematical/probabilistic way, and, in the other hands, easy tutorials that teach you how to use frameworks to create neural networks, without mentioning the maths behind.
+What is not clear is, what is the correlation between these two worlds? What are the "parameters" that make you understand i.e. how many layer to code, what kind of perceptrons to use, etc?
+to make an example:
+Let's take this formula, which in Wikipedia Italy is described as "the standard learning algorytm":
+
+And suppose that the size of w and x is 4, and that g(x) and f(x) are, for example, linear functions.
+What next? Where do I start coding a neural network that solves this problem?
+It would seem more logical to me to code this "directly" without defining perceptrons, convolution, layers etc.
+"
+"['ai-design', 'philosophy', 'agi', 'human-like', 'halting-problem']"," Title: Might AGI need to be flawed?Body: An example is the halting problem, which states computing cannot be solved by exhaustion, but which humans avoid trivially by becoming exhausted.
+Humans typically give up what seems like a lost cause after a certain point, whereas a computer will just keep chugging along.
+
+- Do flaws have utility?
+
+Can inability be a strength and will AGI require such limitations to achieve human-level intelligence? Humans are simply not capable of infinite loops, except arguably in cases of mental illness. Are there other problems similar to the halting problem where weakness is a benefit?
+"
+"['machine-learning', 'comparison', 'terminology', 'data-labelling']"," Title: What is the difference between ""ground truth"" and ""ground-truth labels""?Body: I'm aware that the ground-truth of the example at the top left-hand corner of the image below is "zero"
+
+However, I am confused about the meaning of the terms ground truth and ground-truth labels. What is the difference between them?
+"
+"['unsupervised-learning', 'clustering', 'algorithm-request', 'anomaly-detection', 'categorical-data']"," Title: What is the best clustering method to detect anomalies for data with mostly categorical data?Body: I have a dataset with about 85 columns. Out of the 85 columns, 70+ are categorical. My goal is to identify the outliers in this dataset through clustering methods as I do not have a target column.
+What is the best way to approach this? Is it advisable to convert all the 70+ columns to dummies in pandas and use a clustering algorithm like DBScan?
+"
+"['neural-networks', 'deep-learning', 'python', 'transformer', 'attention']"," Title: How do autoregressive attention mechanism work in multi-headed attention?Body: [LONG POST!!] I am working on a DNN model that works as an improviser to generate music sequences. The idea of generating music is based on taking a sequence of music nodes (their index representation) and generating sequences that are distinctive with more context and coherent structure as well as capturing syntactic and structural information from the original sequences. Therefore I am dealing with a time series dataset. Similar work was reported in "Attentional Networks for music generation" but in our case, we have a different model architecture and different dataset.
+It has been known that Transformer (attention) suffers in multivariate time series dataset (Source: Attention for time series forecasting and classification). But given these problems were reported two years ago, the SOTA should be better by now. For that reason, my target is to use the attention mechanism in a way to overcome these challenges.
+Recently I have been using the multiheaded attention layer from TF and testing with head size between 128 and 3074 and head number from 1 to 10 and dropout from 0.1 to 0.5. Based on the results there was no noticeable improvement in the model performance, it seems that the multi-headed attention layer didn't have contribution during training.
+Therefore, after carefully reading the literature, I found that autoregressive attention is the best option for this type of problem. Basically, by making the attention autoregressive, it will compute the attention over the previous (decoder) outputs in such a way as to avoid using future information to make current predictions (to preserve the notion of causality). So the attention has to be designed so that at each time step it is autoregressive, for example, using previously generated sequences as extra input while generating the next symbol.
+In the "Autoregressive Attention for Parallel Sequence Modeling" paper, they introduce the autoregressive attention mechanism in order to maintain causality in the decoder. I didn't understand what they mean in Section 3.3, which describes the implementation of autoregressive attention. My problem is with the autoregressive implementation: in the paper they state that the autoregressive mechanism was implemented using a masking technique which changes all of the elements in the upper-right triangle and the diagonal to −∞, to ensure that all the scores that would introduce future information into the attention calculation are equal to 0 after the softmax. I was hoping to see how it is implemented in code to get a better idea of how it works.
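+For illustration, here is a minimal NumPy sketch of how such a mask is typically applied to the raw score matrix before the softmax (this is my own sketch of the standard causal mask, not code from the paper; the paper's variant additionally masks the diagonal):
+import numpy as np
+
+def causal_attention_weights(scores):
+    # scores: (seq_len, seq_len) raw attention scores (rows = queries, columns = keys).
+    seq_len = scores.shape[0]
+    # Standard causal mask: forbid attending to future positions (column > row).
+    mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
+    masked = np.where(mask, -np.inf, scores)
+    # After the softmax, every -inf score becomes exactly 0 attention weight,
+    # so no future information can leak into the current position.
+    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
+    return weights / weights.sum(axis=-1, keepdims=True)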
+Here is how the attention is implemented in tensorflow:
+import tensorflow_addons as tfa  # needed for tfa.layers.MultiHeadAttention
+
+def multiHeadedAttentionLayer(cell_input):
+ cell_state = None
+
+ if cell_state is None:
+ cell_state = cell_input
+
+ mha = tfa.layers.MultiHeadAttention(head_size=128, num_heads=5, dropout = 0.5)
+ cell_output = mha([cell_input, cell_state])
+ cell_state = cell_input
+
+ return cell_output
+
+Then the function is called in the model architecture with the rest of the layers (below is a section of the model architecture only):
+x = MaxPooling1D(pool_size=2)(x) # previous layer
+x = multiHeadedAttentionLayer(x) # attention layer
+x = LSTM(lstmneurons, kernel_regularizer=regularizers.l2(kreg3_rate), dropout=dropout3_rate, recurrent_dropout=dropout4_rate)(x) # following layer
+x = BatchNormalization()(x) # following layer
+
+etc....
+Based on my intuition the autoregression should take the output results and feed them back to the input at every time step, so my questions are:
+Why do we need the masking technique?
+How to implement the masking technique in this case?
+Is there code for autoregressive attention that I can have a look at for reference?
+Is my current intuition about autoregressive attention correct as shown in the diagram?
+
+"
+"['convolutional-neural-networks', 'tensorflow', 'recurrent-neural-networks', 'image-recognition']"," Title: Document clustering from ordered pages listBody: I have a series of ordered pdf pages which own to different documents. Let me give you an example:
+Pages: 1 2 3 4 5 6
+True Pages: 1 2 | 1 2 3 4
+So I have like six ordered pages, two of which from document A, and the remaining from document B. I do not have documents labels so the grouping should be done in an unsupervised way.
+Which could be a reasonable approach? Using only CNN to detect border pages shouldn't be enough to discern documents, so I was thinking to something like RNN->CNN or CNN->RNN but I don't know how it would practically work because it is the first time I don't use labels in my TF model.
+Do you think it would be a reasonable idea?
+"
+"['machine-learning', 'deep-learning', 'unsupervised-learning', 'architecture']"," Title: How to learn transition type in a 1-hour extended DJ Mix?Body: How would you design a model which learns the transitions in a given 1-hour DJ Mix? To be specific, the model should be able to learn transitions, specify the occurring time and the type (Crossfade, Infinite Loop, and so on). Data annotation is way too long since I have 3000+ DJ Mixes that are 1 hour long each, as mentioned. It's almost impossible to annotate the transitions in each mix without spending lots of money. Is there a way to do it unsupervised?
+"
+"['feature-selection', 'algorithmic-trading']"," Title: Selecting features for a neural network: is it redundant to have a feature that is an average (or max, or min) of some other featuresBody: I'm trying to create a neural network that would able to look at the current price of a crypto asset and classify between a "BUY", "SELL" or "HOLD". So far for my input features, I've decided to go with the past 40 opens, closes, highs, lows, turnover, and volumes (240 features + the current price so 241 total features).
+Would it be redundant/not ideal if I had another feature that was the average of the past 40 opens for example? What about the max/min of the past opens?
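+For concreteness, this is the kind of derived feature I mean (a minimal pandas sketch with made-up data):
+import numpy as np
+import pandas as pd
+
+df = pd.DataFrame({'open': np.random.rand(200)})   # placeholder price series
+df['open_mean_40'] = df['open'].rolling(40).mean()
+df['open_max_40'] = df['open'].rolling(40).max()
+df['open_min_40'] = df['open'].rolling(40).min()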
+My thinking was that with only the raw prices data of the past 40 days, the neural network would be able to "detect" and create the most optimum features like the average or max in the hidden layers. And therefore, having the avg. or the max/min of some existing features would be unnecessary or perhaps worsen the performance of the model.
+Or is there no clear answer and would this be something I'd only be able to figure out by testing against data?
+Thanks for your help!!
+"
+"['deep-learning', 'natural-language-processing', 'computer-vision', 'long-short-term-memory', 'pytorch']"," Title: Changing a CNN-LSTM image captioning architecture to use BiLSTMsBody: Currently I'm dealing with an assignment that made us implement the network mentioned in this paper. The network has an architecture similar to this:
+
+As you can see it uses a Unidirectional RNN (in my case LSTM), which does the many to many sequence prediction task while training, giving LSTM outputs to dense layers with softmax activation. For generating the captions, the network is only given the image at first, and then using the prediction of the image, generates a word, which is then fed to the network along with the generated hidden state, and the model does this recursively to find a unique stop token. Here's the prediction code:
+def predict(self, image, max_len = 30):
+ output = []
+ hidden = None
+ inputs = self.encoder(image).unsqueeze(1) # Image features
+ for i in range(max_len): # Recursively feed generated words to LSTM
+ lstm_out, hidden = self.decoder.lstm(inputs,hidden)
+ output_vocab = self.decoder.fc(lstm_out)
+ output_vocab = F.softmax(output_vocab.squeeze(1), dim=1).detach().cpu().numpy()
+ words_indices = output_vocab.argsort(axis=1).squeeze()
+ word = words_indices[-1]
+ if word == self.unk_token_index:
+            word = words_indices[-2]
+ output.append(int(word))
+ if word == self.end_token_index:
+ break
+ inputs = self.decoder.embed(torch.LongTensor([[word]]).to(image.device))
+ return output
+
+The problem I'm having right now is that I don't know whether this generation scheme works with BiLSTMs. Right now my training loss is way better for the sequence to sequence prediction task than the UniLSTM, but my generated captions are far worse.
+This is a sample caption generated by Bi-LSTM:
+
+This is a sample caption generated by UniLSTM:
+
+My training loss for BiLSTM converges to 10e-3, while for UniLSTM it converges to 0.5. But the problem is that even before overfitting, BiLSTM is only generating gibberish.
+"
+"['classification', 'semantic-segmentation', 'semantics', 'multiclass-classification', 'labels']"," Title: What is the difference (if any) between semantic segmentation and multi-class, mutually exclusive classification?Body: Multi-class classification is simply assigning all data points into one of up to any finite number of mutually exclusive labels. I am new to the field(s) of AI/ML and I keep hearing people use the term "semantic segmentation."
+I want to "translate" this AI/ML jargon into something more familiar to me. The best video I have found so far to explain what it is made me wonder, what is the difference between semantic segmentation and classification?
+NOTE: I am specifically not referring to so-called multi-label "classification" which allows a data point to have more than one label at a time. In my experience, that sort of labeling is not classification at all, which is a division into mutually exclusive sets (no overlap).
+"
+"['reinforcement-learning', 'papers', 'reward-design', 'combinatorial-optimization']"," Title: How to choose the reward in reinforcement learning?Body: I am solving a combinatorial optimization problem, where I do not have a global optimum, so the goal is to improve the objective function as much as possible. So, to do this, I was inspired by this article Reactive Search strategies using Reinforcement Learning, local search algorithms and Variable Neighborhood Search, I apply during several iterations, heuristics to improve the solution, that is to say, that at each iteration I must choose a heuristic and apply it on the current solution.
+In this article, they have defined the state space as the set of heuristics to apply and the action space is the choice of a heuristic among these heuristics.
+Regarding the reward, they gave +1 if the solution is improved and -1 if the solution is not improved.
+Honestly, I did not understand how the reward is defined (for example, -1 and +1 here), nor according to which criteria we choose the reward to use.
+"
+"['terminology', 'papers', 'meta-learning']"," Title: What's mutual exclusivity in meta-learning?Body: What do we mean by mutual exclusivity of tasks?
+This work (E Pan, 21) and this one (M Yin, 20) state that most classification meta-learning algorithms fail for non-mutually exclusive tasks as the model may over-fit to a task, and no model can solve all the tasks at once (respectively).
+I had trouble understanding the exact meaning of a "task" in meta classification here. [E Pan, 21] uses "task" synonymously with "new class", while [M Yin, 20] states "...prior work uses a per-task random assignment of image classes to N-way classification labels". However, some priors on few-shot learning [S. Hugo, 17], and [Y Wang, 19] agree with FFLab's, (20) description of "task" which I found more clear:
+
+The number of classes (N) in the support set defines a task as an N-class classification task or N-way task, and the number of labeled examples in each class (k) corresponds to k-shot, making it an N-way, k-shot learning problem.
+
+Where the support set $D_s$ here is part of the meta training data $D$ which comprises a support and test set $D_t$ $D = <D_s, D_t>$ [Weng, 18].
+However, even with a better understanding of what a "task" is, I still couldn't get what constitutes mutually exclusive tasks.
+"
+"['deep-learning', 'transfer-learning', 'feature-extraction', 'fine-tuning', 'emotion-recognition']"," Title: What is the difference between feature extraction and fine-tuning in transfer learning?Body: I'm building a model for facial expression recognition, and I want to use transfer learning. From what I understand, there are different steps to do it. The first is the feature extraction and the second is fine-tuning. I want to understand more about these two stages, and the difference between them. Must we use them simultaneously in the same training?
+"
+"['neural-networks', 'pytorch']"," Title: Why do I have better RMSE when I don't scale the target?Body: I use PyTorch for training a simple neural net for a regression task on a dataset with 12 numerical features + target (target is the 13th column) + 2 categorical features
+Before training, I execute
+# numeric_columns = numeric_columns[:-1]
+scaler = StandardScaler()
+scaler.fit(df_train[numeric_columns])
+
+Also, in my custom torch.utils.data.Dataset I scale the data using my scaler object.
+After each epoch, I evaluate the RMSE("reversed scaled" prediction, non-scaled target), like the following:
+y_pred = (y_pred * self.scaler.scale_[13]) + self.scaler.mean_[13]
+loss += self.criterion(y_pred , y_true).item()
+
+RMSE if I don't scale the target (the first comment would be uncommented and the y_pred row would be commented) is around 0.95 (I tried multiple hyperparameters)
+RMSE if I scale the target is 1.7
+The target has mean 3.3 and standard deviation of 2.
+What am I doing wrong? I thought scaling the target is a must when dealing with neural networks.
+"
+['reinforcement-learning']," Title: What are the various problems RL is trying to solve?Body: I have read most of Sutton and Barto's introductory text on reinforcement learning. I thought I would try to apply some of the RL algorithms in the book to a previous assignment I had done on Sokoban, in which you are in a maze-like grid environment, trying to stack three snowballs into a snowman on a predefined location on the grid.
+The basic algorithms (MC control, Q-learning, or Dyna-Q) seemed to all be based on solving whichever specific maze the agent was trained on. For example, the transition probabilities of going from coordinate (1,2) to (1,3) would be different for different mazes (since in one maze, we could have an obstacle at (1,3)). An agent that calculates its rewards based on one maze using these algorithms doesn't seem like it would know what to do given a totally different maze. It would have to retrain: 1) either take real life actions to relearn from scratch how to navigate a maze, or 2) be given the model of the maze, either exact or approximate (which seems infeasible in a real life setting) so that planning without taking actions is possible.
+When I started learning RL, I thought that it would be more generalizable. This leads me to the question: Is this problem covered in multi-task RL? How would you categorize the various areas of RL in terms of the general problem that it is looking to solve?
+"
+"['deep-rl', 'testing']"," Title: How to test the robustness of an agent in a custom reinforcement learning environment?Body: I have used the stable-baseline3 implementation of the SAC algorithm to train policies in a custom gym environment. So far the results look promising. However, I would like to test the robustness of the results. What are common ways to test robustness? So far, I have considered testing different seeds. Which other tests are recommended?
+"
+"['neural-networks', 'machine-learning']"," Title: Can you use a graph as input for a neural network?Body: We want to try and distinguish real voices from (deep)fake voices using the graphs generated by a discrete fourier transform (generated from .wav audio files). We know from each image if it is a real or a fake voice, so it's a supervised classification problem. An image would look like this:
+
+We think that real voices generate a graph with clear spikes, whereas fake voices have more noise, resulting in less clear spikes. For this reason, we thought of using a CNN to take such an image as input (with x and y-axes omitted), and classify it as real or fake. Our concern is that it's actually a graph and not an image of an object, so we're not sure if this would be a good approach. We could also use the arrays generated from the Fourier transform, but we're not sure how we could use them as input, as we want to classify whether it's real or fake, and not predict y for each x.
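+For the second option, here is a minimal sketch (Python; the file name is a placeholder) of using the DFT magnitudes directly as a 1-D feature vector instead of rendering them as an image:
+import numpy as np
+from scipy.io import wavfile
+
+rate, samples = wavfile.read('voice.wav')   # placeholder file
+if samples.ndim > 1:                        # stereo -> mono
+    samples = samples.mean(axis=1)
+spectrum = np.abs(np.fft.rfft(samples))     # magnitude of the real FFT
+features = spectrum / spectrum.max()        # simple normalisation
+# `features` could then be fed to a 1-D CNN or MLP classifier rather than an image CNN.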
+"
+"['reinforcement-learning', 'comparison', 'terminology', 'markov-decision-process', 'state-spaces']"," Title: What is the difference between terminal state, nonterminal states and normal states?Body: In Sutton & Barto's Reinforcement Learning: An Introduction, page 54, the authors define the terminal state as following:
+
+Each episode ends in a special state called the terminal state
+
+But the authors also say:
+
+the episodes can all be considered to
+end in the same terminal state, with different rewards for the different outcomes. Tasks
+with episodes of this kind are called episodic tasks.
+
+I believe there is also a fundamental difference between a terminal state, nonterminal states and plain, normal states:
+
+In episodic tasks we sometimes need to distinguish the set of all nonterminal states, denoted S, from the set of all states plus the terminal state, denoted S+.
+
+In the first quote, it appears as if the terminal state is just a term to describe the final state of an episode, but, from the second quote, I understand that the terminal state is the same no matter the outcome of the episode. If we consider the game of chess, what would we consider as a terminal state? Would it be the state that, if reached, will end the game (checkmate), no matter the result (win, loss)? But then how can we describe a state that would lead to draw? If we say about a state that leads to a draw that it's a nonterminal state since we can play an "infinite" number of turns without reaching a win or a loss hence without reaching the terminal state, aren't we implicitly supposing that reaching a draw isn't a result for which we should attribute a reward (e.g. 0)? And if we name a state that leads to a draw a terminal state, then what would be the difference between a normal state and a nonterminal state?
+"
+"['machine-learning', 'statistical-ai', 'ai-field', 'statistics']"," Title: How much statistics is involved in AI?Body: I am a 3rd-year math major, who is interested in computer science, particularly algorithms and competitive programming (did some olympiads in high school, ACM ICPC in university, etc.), and I have been meaning to get into AI.
+I have all the prerequisites to get started, but the problem is that I really, really hate statistics. I took a course on it last year and found it to be very dry.
+I've heard people say that AI is mostly statistics and I am very concerned if it's true. I can tolerate some amount of stats, but, if the field literally revolves around it, I will not be able to do it.
+So, exactly how much statistics is involved in AI? Are there fields of AI which use it less than others?
+"
+"['accuracy', 'sentiment-analysis', 'naive-bayes', 'sensitivity']"," Title: Is it ok to have an accuracy of 65% and a sensitivity of 90% with Naive Bayes for sentiment analysis?Body: I am creating a sentiment analysis model using Naive Bayes. When I test the model, I get an average accuracy of 65%; however, the sensitivity of the model is much higher, 90%.
+So, I am wondering if there are methods for fixing this; or, since the sensitivity is very high, would it be OK to move forward with the model?
+"
+"['tensorflow', 'keras', 'transfer-learning']"," Title: Validation accuracy very low with transfer learningBody: I am using MobileNetV3 from TF keras for doing transfer learning; I removed the last layer, added two dense layers, and trained for 20 epochs.
+
+- How many dense layers should I add after the MobileNet and How dense should they be?
+
+- How many epochs should I train for?
+
+- Validation loss and validation accuracy have a strange pattern, is that normal?
+
+
+Is there anything I am missing?
+
+"
+"['tensorflow', 'keras', 'recurrent-neural-networks', 'long-short-term-memory', 'gated-recurrent-unit']"," Title: What do RNN, LSTM, and GRU layers do in Tensorflow?Body: I have gone through some theoretical introductions of RNN and LSTM, which do not contain any code, and they describe in fair detail what the cells do, how they apply operations like forget, sigmoid, etc.
+Then I am trying to implement them with tensorflow, and even after reading the documentation, I am unable to connect the layers' API with my theoretical understanding of the operations. For example, take the following simple code:
+import tensorflow as tf # tensorflow 2.5.0
+inputs = tf.random.normal(shape=(32, 10, 8))  # (batch, timesteps, features)
+lstm = tf.keras.layers.LSTM(units=4, return_sequences=True, return_state=True)
+# Calling the layer gives a list of three tensors: the per-timestep outputs
+# (32, 10, 4), the final hidden state (32, 4) and the final cell state (32, 4).
+outputs = lstm(inputs)
+# Three weight tensors: kernel (8, 16), recurrent_kernel (4, 16) and bias (16,);
+# the 16 packs the input, forget, cell and output gates (4 * units).
+lstm.trainable_weights
+
+So what exactly is the layer doing here based on the input it receives and the weights that were initialised randomly?
+If I am to implement the layer's operation myself, how do I do that?
+The Google and Keras documentation contain a lot of example code, but not really explanations of the internal mathematical operations. So any help in this area, or any reference that explains the mathematical operations (not in general, but what's happening in the Tensorflow layer) would be greatly appreciated.
+I have the exact same question regarding RNN and GRU layers too.
+"
+"['comparison', 'terminology', 'word-embedding', 'books']"," Title: What is the exact difference between distributional semantics and distributed semantics?Body: While studying word embeddings in natural language processing, I encountered the following statement on page 327 of the textbook Natural Language Processing by Jacob Eisenstein
+
+Distributional semantics are computed from context statistics. Distributed semantics are a related but distinct idea: that meaning can be represented by numerical vectors rather than symbolic structures.
+
+The dissimilarity between them is that distributed semantics represent the meaning of a word by a vector of numbers. Distributional semantics represent the meaning of a word by symbolic structure (inferred from paragraph).
+I can say, in distributed semantics, the word cat can be represented by the vector $[23, 43,21,16]$ (for example).
+Similarly, please, give me a small example of how the meaning of a word is represented by symbolic structure (which should not be necessarily correct).
+What is meant by symbolic structure here?
+"
+"['image-recognition', 'perceptron']"," Title: Is it possible to train a perceptron to tell if a picture is a dog or cat?Body: I know perceptron is a linear classifier that tells linearly separable binary class data, such as iris setosa vs. iris versicolor via their sepal's length and width.
+I'd just like to know if I have 2 groups of photos, one is for dog and the other is for cat, is it possible to train a perceptron to tell if a picture is a dog or cat?
+
+"
+"['convolutional-neural-networks', 'object-detection', 'non-max-suppression']"," Title: How to reject boxes inside each other with Non Max SuppressionBody: I’m working on an object detection cnn, and having some issues with non max suppression. When I have a small box inside a large box, NMS is not rejecting the smaller, incorrect box, because its IOU is small (large union, small intersection). How is this scenario typically dealt with? When using out of the box pretrained models for object detection I don’t seem to get boxes completely inside other boxes. Example here:
green is ground truth, blue is prediction. The center box has a tiny blue box inside that’s not getting rejected by NMS
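+To make the geometry concrete, a small Python sketch with made-up box coordinates:
+def iou(a, b):
+    # Boxes as (x1, y1, x2, y2). A tiny box fully inside a big box has a small
+    # intersection but a large union, so the IoU stays low and plain NMS keeps it.
+    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
+    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
+    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
+    area_a = (a[2] - a[0]) * (a[3] - a[1])
+    area_b = (b[2] - b[0]) * (b[3] - b[1])
+    return inter / (area_a + area_b - inter)
+
+print(iou((0, 0, 100, 100), (40, 40, 50, 50)))   # 0.01, far below any usual NMS threshold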
+"
+"['terminology', 'books', 'bag-of-words', 'cbow']"," Title: What is the meaning of ""continuous"" in a continuous bag-of-words model?Body: The word continuous in mathematics is a property of either a set or a function that says that the underlying object has no discontinuity in the range mentioned. If the object is a set, then $[-1,1]$ is a continuous one while $\{-1, +1\}$ is not. Similarly, a function is said to be continuous if the actual value and the limiting value at every point in the domain are equal.
+Now, coming to CBOW. I read the following statement from p:334 of Natural Language Processing by Jacob Eisenstein
+
+Thus, CBOW is a bag-of-words model, because the order of the context words does not matter; it is continuous, because rather than conditioning on the words themselves, we condition on a continuous vector constructed from the word embeddings.
+
+What is meant by continuous in this case? Does continuous vector stand for a vector of real numbers?
+"
+"['machine-learning', 'deep-learning', 'computer-vision', 'agi']"," Title: Why the collection of background/negative image dataset is not taught in object detection tutorials and books?Body: While I was doing an object detection project, I have encountered the problem of getting FALSE POSITIVES and FALSE NEGATIVES. After days of research on StackOverflow, I figured out that I need to collect more negative images or background images.I decided to document this process so other people could easily solve this issue and the result of documentation is this. After training the model with Negative/Background images, my FP/FN rates were normalized so that in video frames I started getting fewer FPs. All of us, machine learning developers get experience by getting hands dirty - this is clear to all of us. But I haven't seen(probably missed) any video tutorials or examples on books showing how to collect background images and why we need them at all.
+So here is the question: every experienced ML engineer knows what FPs/FNs are and how to prevent them. But why is this topic so rarely taught in popular object detection tutorials and books? Or am I missing something?
+"
+"['neural-networks', 'terminology', 'word2vec', 'books']"," Title: Why Word2Vec is called a neural model if no neural network is used in it?Body: Word2Vec model does not use any neural network. It uses logistic regression only.
+Consider the following paragraph from p:18 of Vector Semantics and Embeddings
+
+We’ll see how to do neural networks in the next chapter,
+but word2vec is a much simpler model than the neural network
+language model, in two ways. First,word2vec simplifies the task
+(making it binary classification instead of word prediction). Second,
+word2vec simplifies the architecture (training a logistic
+regression classifier instead of a multi-layer neural network with
+hidden layers that demand more sophisticated training algorithms). The
+intuition of skip-gram is:
+
+- Treat the target word and a neighboring context word as positive examples.
+
+- Randomly sample other words in the lexicon to get negative samples.
+
+- Use logistic regression to train a classifier to distinguish those two cases.
+
+- Use the learned weights as the embeddings.
+
+
+
+But, why it is called a neural model then? Is there any version of Word2Vec that use neural network?
+"
+"['reinforcement-learning', 'deep-rl', 'policy-gradients']"," Title: How do I quantify the difference in sample efficiency for two almost similar methods?Body: I am comparing my coded TD3 (Twin-Delayed DDPG) and the same TD3 (same hyperparameters) but with Priority Replay Buffer instead of a normal Replay Buffer.
+From what I have read, PER (Priority Experience Replay, Priority Replay Buffer) aims to improve sample efficiency. But how do I measure or quantify sample efficiency on these two? Is it who gets the highest average reward in a given number of episodes? Does it have something to do with the batch size?
+"
+"['neural-networks', 'activation-functions', 'perceptron', 'sigmoid']"," Title: How do sigmoid functions make it so that the prediction $\hat{y}$ indicates the probability that the observed value, $y$, is $1$?Body: I am currently studying the textbook Neural Networks and Deep Learning by Charu C. Aggarwal. Chapter 1.2.1.3 Choice of Activation and Loss Functions says the following:
+
+The choice of activation function is a critical part of neural network design. In the case of the perceptron, the choice of the sign activation function is motivated by the fact that a binary class label needs to be predicted. However, it is possible to have other types of situations where different target variables may be predicted. For example, if the target variable to be predicted is real, then it makes sense to use the identity activation function, and the resulting algorithm is the same as least-squares regression. If it is desirable to predict a probability of a binary class, it makes sense to use a sigmoid function for activating the output node, so that the prediction $\hat{y}$ indicates the probability that the observed value, $y$, of the dependent variable is $1$.
+
+I've read about sigmoid functions, but it isn't clear to me how they make it so that the prediction $\hat{y}$ indicates the probability that the observed value, $y$, of the dependent variable is $1$. So how do sigmoid functions make it so that the prediction $\hat{y}$ indicates the probability that the observed value, $y$, of the dependent variable is $1$?
+EDIT: I am specifically asking about the probability that the value is $1$ (that is, how sigmoid functions specifically check for this).
+"
+"['convolutional-neural-networks', 'feature-extraction', 'bias']"," Title: How is the bias added after the convolution in a CNN?Body: I'm having trouble understanding how bias is added to the feature extraction convolution. I've seen people either refer to the bias as a single number that changes per filter or the whole matrix that is the size of the output. Here is what I mean:
+
+
+
+- $I$ is the input single-channel image.
+- $F$ is the filter.
+- $b$ is the bias.
+- "Izhod" means "output".
+
+Which is actually the correct bias used in CNN?
+"
+"['reinforcement-learning', 'proofs', 'value-functions', 'bellman-equations']"," Title: How to prove the second form of Bellman's equation?Body: I'd like to prove this "second form" of Bellman's equation: $v(s) = \mathbb{E}[R_{t + 1} + \gamma v(S_{t+1}) \mid S_{t} = s]$ starting from Bellman's equation: $v(s) = \mathbb{E}[G_{t} \mid S_{t} = s]$ where the return $G_{t}$ is defined as follows: $G_{t} = \sum_{k=0}^{\infty}{\gamma^{k}R_{t+k+1}}$.
+I tried to use the linearity of the expectation as follows: $v(s) = \mathbb{E}[R_{t+1} \mid S_{t} = s] + \mathbb{E}[\sum_{k = 1}^{\infty}{\gamma^{k}R_{t+k+1}} \mid S_{t} = s]$
+Which gives us: $v(s) = \mathbb{E}[R_{t+1} \mid S_{t} = s] + \gamma\mathbb{E}[\sum_{k = 0}^{\infty}{\gamma^{k}R_{(t + 1) + k + 1}} \mid S_{t} = s] = \mathbb{E}[R_{t+1} \mid S_{t} = s] + \gamma\mathbb{E}[G_{t + 1} \mid S_{t} = s]$
+I also tried to develop the second formula: $v(s) = \mathbb{E}[R_{t+1} \mid S_{t} = s] + \gamma\mathbb{E}[v(S_{t+1}) \mid S_{t} = s]$ and I'm tempted to say that $\mathbb{E}[G_{t+1} \mid S_{t} = s] = \mathbb{E}[v(S_{t+1}) \mid S_{t} = s]$ but that would only be right in the case that both follow conditions are verified:
+
+- We have the value function of a particular state $s^\prime$ inside the expectation of the second term (something like $\mathbb{E}[v(s^\prime) \mid S_{t} = s]$ which would directly give $v(s^\prime)$ since it's a scalar) and not $v(S_{t+1})$.
+- We have $\mathbb{E}[G_{t+1} \mid S_{\textbf{t+1}} = s^\prime]$ in the second term.
+
+I'm probably not understanding something correctly especially what $v(S_{t+1})$ would mean (that wasn't covered in the material I'm following but for me it would be just a function that maps the possible states at time step $t+1$ to the expected return starting from that step at that time step).
+"
+"['machine-learning', 'terminology', 'papers', 'accuracy', 'federated-learning']"," Title: What is the difference between the definition of ""accuracy"" in machine learning and federated learning?Body: What is the difference between the definition of "accuracy" in machine learning and federated learning?
+In particular, how is the accuracy calculated in the following paper:
+
+Cai, Lingshuang, et al. "Dynamic Sample Selection for Federated Learning with Heterogeneous Data in Fog Computing." ICC 2020-2020 IEEE International Conference on Communications (ICC). IEEE, 2020.
+
+"
+"['reinforcement-learning', 'deep-rl', 'dqn', 'hierarchical-rl']"," Title: How to detect entities in Montezuma's Revenge environmentBody: I'm thinking of implementing "Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation" paper. In this paper authors used some custom object detector for entity detection(eg: Key, rope, ladder, etc) but they did not give any information about this custom detector. Can you please give me a suggestion on how to Implement this object detector?
+"
+"['machine-learning', 'deep-learning', 'data-preprocessing']"," Title: What to do when you have massive amount of data but you don't have enough computation power for training a machine learning model?Body: For example, I have a massive amount of data, but I have limited computational resources and time to train on the full data. Other cases may include, I have huge amounts of 360-degree images, where I need to train on full-size images (without cropping down), but I have limited computation power (GPU, RAM, etc.), what can I do in those cases?
+"
+"['machine-learning', 'natural-language-understanding']"," Title: Text matching: fuzzy names matching with learningBody: I'm new to AI/ML and I want to research and learn about techniques that could help me to solve this complex task. Any hint would be appreciated.
+Let me explain it with an example:
+Let's look at two columns PUR.SUPPLY.MTL_REQ_HDR_ID
and MTL.PO_REQUISITION_HEADERS_TAB.ID
. It is likely they are related (one is the FK and the other one is PK).
+
+As a human I was able to do it by doing the following:
+
+- I decoded abbreviations,
+- I identified context (MTL module),
+- I identified the subject (requisition header),
+- I identified ID keyword,
+- I identified irrelevant information (TAB postfix),
+- I matched words that are not in the exact order,
+- I estimated which elements/words do not have to match/occur,
+
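+To make the steps above concrete, here is a rough Python sketch (the abbreviation dictionary, the noise tokens and the naive singularisation are all made-up placeholders):
+import re
+
+ABBREVIATIONS = {'MTL': 'MATERIAL', 'REQ': 'REQUISITION', 'HDR': 'HEADER',
+                 'PUR': 'PURCHASING', 'PO': 'PURCHASE ORDER'}   # hypothetical expansions
+NOISE = {'TAB', 'ID'}                                           # treated as irrelevant here
+
+def tokens(column_name):
+    parts = re.split(r'[._\s]+', column_name.upper())
+    out = set()
+    for p in parts:
+        if not p or p in NOISE:
+            continue
+        p = ABBREVIATIONS.get(p, p)
+        out.add(p.rstrip('S'))              # very naive singularisation, sketch only
+    return out
+
+def similarity(col_a, col_b):
+    a, b = tokens(col_a), tokens(col_b)
+    return len(a & b) / len(a | b)          # Jaccard similarity of the token sets
+
+print(similarity('PUR.SUPPLY.MTL_REQ_HDR_ID', 'MTL.PO_REQUISITION_HEADERS_TAB.ID'))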
+I would like to match millions of columns relatively quickly (seconds). I would like algorithm to learn:
+
+- what words are likely context,
+- what are irrelevant,
+- what are subjects,
+- ideally learn some patterns (prefixes, postfixes, name formats, etc),
+- based on user responses - approve/reject match,
+- build dictionary of abbreviations,
+- estimate probability...
+
+I know this is a complex task, but maybe you know a technique, tool, library, article, example ... anything that could be helpful? Any help would be appreciated.
+Thanks.
+"
+"['natural-language-processing', 'definitions', 'books', 'tf-idf', 'logarithm']"," Title: Why do we commonly use the $\log$ to squash frequencies?Body: Term frequency and inverse document frequency are well-known terms in information retrieval.
+I am presenting the definitions for both from p:12,13 of Vector Semantics and Embeddings
+On term frequency
+
+Term frequency is the frequency of the word $t$ in the
+document $d$. We can just use the raw count as the term frequency:
+$$tf_{t, d} = \text{count}(t, d)$$
+More commonly we squash the raw frequency a bit, by using the
+$\log_{10}$ of the frequency instead. The intuition is that a word appearing 100 times in a document doesn’t make that word 100 times more likely to be relevant to the meaning of the document.
+
+On inverse document frequency
+
+The $\text{idf}$ is defined using the fraction $\dfrac{N}{df_t}$, where $N$ is the total number of documents in the collection, and $\text{df}_t$ is the number of documents in which term $t$ occurs.......
+Because of the large number of documents in many collections, this measure too is usually squashed with a log function. The resulting definition for inverse document frequency ($\text{idf}$) is thus
+$$\text{idf}_t = \log_{10} \left(\dfrac{N}{df_t} \right)$$
+
+If we observe the bolded portion of the quotes, it is evident that the $\log$ function is used commonly. It is not only used in these two definitions. It has been across many definitions in the literature. For example: entropy, mutual information, log-likelihood. So, I don't think squashing is the only purpose behind using the $\log$ function.
+Is there any reason for selecting the logarithm function for squashing? Are there any advantages for $\log$ compared to any other squash functions, if available?
+"
+"['terminology', 'probability', 'entropy']"," Title: What does the product of probabilities raised to own powers used for entropy calculation quantify?Body: Suppose $X$ is a random variable taking $k$ values.
+$$Val(X) = \{x_1, x_2, x_3, \cdots, x_k\} $$
+Then what is the following expression of $N(X)$ called in literature if exists? What does it signify?
+$$ N(X) = \prod \limits_{i = 1}^{k} p(x_i)^{p(x_i)}$$
+I am using the notation $N(X)$ for the sake of my convenience only.
+
+Background: I am asking this question because of the definition of entropy I encountered. Entropy is calculated as follows.
+$$ H(X) = - \sum\limits_{i = 1}^{k} p(x_i) \log p(x_i) $$
+If I further solve $H(X)$ as follows, I will get $H(X)$ in terms of N(X).
+$$ H(X) = - \sum\limits_{i = 1}^{k} p(x_i) \log p(x_i) = - \sum\limits_{i = 1}^{k} \log p(x_i)^{p(x_i)} $$
+$$\implies H(X)= - \log \prod \limits_{i = 1}^{k} p(x_i)^{p(x_i)}
+ = - \log N(X)$$
+Entropy is used to characterize the unpredictability of a random variable.
+
+A logarithm is generally applied to many quantities in AI in order to bring them into a desirable range where overflow and underflow won't happen. Hence, I am thinking that $\dfrac{1}{N(X)}$ is the actual quantity one has to measure (the entropy?), and I am guessing that $N(X)$ can be treated as the reciprocal of entropy. So, is $N(X)$ a quantity that quantifies the predictability of a random variable?
+$$N(X) = \dfrac{1}{2^{H(X)}} = \dfrac{1}{2^{entropy}}$$
+So, I am wondering whether there is any quantity that $N(X)$ quantifies.
+"
+"['terminology', 'notation']"," Title: To denote a training example should I use row vector or column vector?Body: This code accesses the first 3 examples in the iris data set,
+from sklearn.datasets import load_iris
+iris = load_iris()
+print(iris.data[:3])
+
+and gives
+[[5.1 3.5 1.4 0.2]
+ [4.9 3. 1.4 0.2]
+ [4.7 3.2 1.3 0.2]]
+
+To denote the first example, $x_1$, should I use a column vector like
+$$\begin{bmatrix}
+5.1\\3.5\\1.4\\0.2
+\end{bmatrix}$$
+or a row vector like the following?
+$$[5.1 \ 3.5 \ 1.4 \ 0.2]$$
+Andrew Ng suggests putting examples in columns
+
+while typical relational databases put examples in rows.
+I'd just like to know the pros and cons of different notations so that I can decide which one I would follow.
+"
+"['reinforcement-learning', 'deep-learning', 'convolutional-neural-networks', 'transpose-convolution']"," Title: CFD Reinforcement Learning Topology optimization wind tunnelBody: I want to create a reinforcement learning environment, designed for win tunnel simulations, where for each iteration a deep convolutional model could receive the 3D vector/scalar fields from the past simulation and output a better shape that maximizes the reward function (e.g. minimize drag, maximize lift, etc.)
+The observation and action space for the neural network is the same, the inputs of the model will be 3D arrays representing velocity field, pressure field, etc. and the output will be a 3D array (created using Conv3DTranspose) with values [0, 1] which represents the mesh. I'm thinking that the architecture of the model could be something similar to an auto-encoder.
+My plan is to use the Marching Cubes algorithm in order to create the mesh from those points, and openFoam for the CFD simulations.
+This is a small diagram showing the workflow
+
+The goal will be to have multiple trained models specialized in optimizing a particular reward function, like minimizing drag or maximizing lift, for any object/shape given as input.
+What are your thoughts on this? Do you think it makes sense?
+"
+"['long-short-term-memory', 'forecasting']"," Title: LSTM Forecast EvolutionBody: I have a confusion about the way the LSTM networks work when forecasting with an horizon that is not finite but I'm rather searching for a prediction in whatever time in future. In physical terms I would call it the evolution of the system.
+Suppose I have a time series $y(t)$ (output) I want to forecast, and some external inputs $u_1(t), u_2(t),\cdots u_N(t)$ on which the series $y(t)$ depends.
+It's common to use the lagged value of the output $y(t)$ as input for the network, such that I schematically have something like (let's consider for simplicity just lag 1 for the output and no lag for the external input):
+$$
+[y(t-1), u_1(t), u_2(t),\cdots u_N(t)] \to y(t)
+$$
+With this way of setting up the network, when one wants to do a recursive forecast, it is forced to use the predicted value at the previous step as input for the next step. In this way we get an error-propagation effect that makes the long-term forecast behave badly.
+Now, my confusion is this: I'm thinking of an RNN as a kind of (simplified) implementation of a state-space model, where I have the inputs, my output, and one or more state variables responsible for the memory of the system. These variables are hidden and not observed.
+So now the question: if there is this kind of variable already taking into account previous states of the system, why would I need to use the lagged output value as input of my network/model?
+If I get rid of this, would my long-term forecast be better, since I would no longer expect the propagation of the error of the forecasted output? (I guess there will be an error propagating in the internal state anyway.)
+Thanks !
+"
+"['deep-learning', 'convolutional-neural-networks', 'reference-request', 'structured-data', 'model-request']"," Title: What is the best way to train neural network with imbalanced mixed data (images and structured data)?Body: I have structured data and image data to solve a regression problem. One sample of structured data can be related to N images.
+If I use only structured data, I get decent performance, but not enough to properly solve the problem. I want to use related images to the structured data to improve performance.
+My approach was to create 3 neural networks. The first one for the image input, the second one for structured input, and the third one to combine both image and structured networks and output the final result.
+The main problem is how to properly combine one sample of structured data with N images. All the images are already saved as bottleneck features from one of the Keras applications. I combined the structured data with each corresponding image and got a very good result (duplicating the structured sample for each corresponding image). But investigation showed that the validation dataset had training structured samples, only combined with different images. So the network just memorized the dataset very well (on 110k samples), giving great synthetic results and bad generalization on real-world data. After I fixed the validation and training datasets (so that they do not share the same structured samples), the neural net showed its real performance, which is bad.
+So my question is: What is the state-of-the-art way to combine one sample of structured data with N images? Of course, the structured data and images are logically connected. Train 2 neural networks separately and then combine their outputs in a third network? Or train all three networks at once? Or maybe train the images with a CNN and then combine the CNN output with the structured data using some gradient boosting algorithm?
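+To make the "train all three networks at once" option concrete, here is a rough sketch of what I'm considering (layer sizes and names are placeholders, not my actual code), where the N image feature vectors belonging to one structured sample are pooled before fusion:
+from tensorflow import keras
+from tensorflow.keras import layers
+
+# hypothetical shapes: 20 structured features, N=5 images with 1280-d bottleneck features each
+structured_in = keras.Input(shape=(20,), name='structured')
+images_in = keras.Input(shape=(5, 1280), name='image_features')
+
+x1 = layers.Dense(64, activation='relu')(structured_in)   # structured branch
+x2 = layers.GlobalAveragePooling1D()(images_in)            # pool the N image vectors into one
+x2 = layers.Dense(64, activation='relu')(x2)               # image branch
+
+merged = layers.concatenate([x1, x2])                      # fusion head
+out = layers.Dense(1, name='regression_output')(merged)
+
+model = keras.Model([structured_in, images_in], out)
+model.compile(optimizer='adam', loss='mse')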
+"
+"['machine-learning', 'adversarial-ml']"," Title: How could poisoning attacks be prevented in adversarial Machine LearningBody: How we could prevent poisoning attacks in adversarial Machine Learning?
+I read it from this link and other sources. As per my understanding, poisoning could be done after the ML algorithm has been made or while building up the model with test data. One example I read was like a car is driving and a small image could be pasted on a wall, which could make it turn left, so the car's AI algorithm misclassifies it.
+But for poisoning the test data the attacker needs access to internal software at the time before the model is built so that the model that is built is corrupted. How could an attacker do that? That seems impossible. Or it could be in cases where the ML model is being built dynamically.
+Just poured my thoughts out. I am interested in knowing thoughts about the above, and, specifically, what are the ways in which poisoning could be prevented?
+"
+"['machine-learning', 'adversarial-ml']"," Title: How could an attacker poision the training data?Body: I came across the following definition of Backdoor attack (in this paper):
+
+These attacks are accomplished in two steps. First, special patterns are embedded in the targeted model during the training phase, which is typically achieved by poisoning training data.
+
+How could the training data be poisoned? Isn't the training data local to the software developer who is developing the ML algorithm? And won't he train the model on it on his local machine (it could be a company too) before releasing the software?
+"
+"['computational-learning-theory', 'pac-learning', 'approximation-error']"," Title: Characterize the high probability bound for learning algorithmBody: Suppose we have a dataset $S = (x_1, \dots x_n)$ drawn i.i.d from distribution $D$, a learning algorithm $A$ and error function $err$. The performance of $A$ is therefore defined by the error/confidence pair $(\alpha, \beta)$, that is
+$$P(err(A(S)) \geq \alpha) \leq \beta$$
+where the randomness is taken on $S$. Usually, by solving this inequality, we can get some constraints between $\alpha$ and $\beta$, in the form that $\alpha \geq f(\beta, n)$. My understanding is that if we treat $\beta$ as a constant, then we have the high probability error bound in terms of $n$. Is that correct?
+Another question is: what if the function $f$ we get is not uniform across all $\beta$? For example,
+\begin{equation}
+\alpha \geq \begin{cases} f_1(n, \beta) \quad \beta \geq 0.5 \\
+f_2(n, \beta) \quad \beta< 0.5
+\end{cases}
+\end{equation}
+In this case, how to derive the high probability error bound?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'training', 'datasets']"," Title: Identifying rotating and resizing letters with background noiseBody: I'm trying to complete a captcha, and here is what it looks like:
+
+Between captchas the calligraphy of the letters is the same, but the letters may be resized and rotated. And the background noise (the small dots and lines over and around the letters) will be different. Any Hangul letter may appear.
+Edit 1: I can generate any number of new captchas with an answer for each of them. But to be clear, the answers that are generated are for entire captchas, that is, multiple Hangul letters arranged in a specific order as the answer for each captcha, not for individual letters.
+
+- What type of machine learning is best for this problem?
+- How do I extract good data from the image above for this problem?
+
+Update 1: Unfortunately no one has given any suggestions for how to solve this yet. My idea at the moment is to mimic the model in this paper: https://www.ics.uci.edu/~xhx/publications/HHR.pdf
+"
+"['transformer', 'natural-language-understanding', 'seq2seq', 'natural-language-generation']"," Title: What is the best way to generate German paraphrases?Body: What is the best method to generate German paraphrases? The state-of-the-art are seq2seq transformer models, like T5, but they only work for English sentences. I found the multilingual MT5 model, but how do you fine-tune this for German?
+"
+"['objective-functions', 'activation-functions', 'artificial-neuron', 'perceptron', 'loss']"," Title: Where does the so-called 'loss' / 'loss function' fit into the idea of a perceptron / artificial neuron (as presented in the figure)?Body: I am currently studying the textbook Neural Networks and Deep Learning by Charu C. Aggarwal. Chapter 1.2.1.3 Choice of Activation and Loss Functions presents the following figure:
+
+
+
+$\overline{X}$ is the features, $\overline{W}$ is the weights, and $\phi$ is the activation function.
+So this is a perceptron (which is a form of artificial neuron).
+But where does the so-called 'loss' / 'loss function' fit into this? This is something that I've been unable to reconcile.
+
+EDIT
+The way the loss function was introduced in the textbook seemed to imply that it was part of the architecture of the perceptron / artificial neuron, but, according to hanugm's answer, it is external and instead used to update the weights of the neuron. So it seems that I misunderstood what was written in the textbook.
+In my question above, I pretty much assumed that the loss function was part of the architecture of the perceptron / artificial neuron, and then asked how it fit into the architecture, since I couldn't see any indication of it in the figure.
+Is the loss / loss function part of the architecture of a perceptron / artificial neuron? I cannot see any indication of a loss / loss function in figure 1.7, so I'm confused about this. If not, then how does the loss / loss function relate to the architecture of the perceptron / artificial neuron?
+"
+"['natural-language-processing', 'intuition']"," Title: How do very rare words tend to have very high PMI values?Body: Consider the following formulation for pointwise mutual information (PMI):
+$$\text{PMI}(w, c) = \dfrac{p(w, c)}{p(w)p(c)}$$
+Suppose there are $W$ words with $C$ context words. Then one can write in terms of frequency that
+$$\text{PMI}(w, c) = \dfrac{\sum\limits_{i = 1}^{W} \sum\limits_{j = 1}^{C} f_{ij} }{\sum\limits_{i = 1}^{W}f_i \sum\limits_{j = 1}^{C} f_j} $$
+I am going to calculate $\text{PMI}(w, c)$ for two different words and contexts based on the following table. The table is taken from fig 6.10 of this book.
+
+I calculated PMI for all pairs and tabulated below.
+$$\begin{array}{|c|c|c|} \hline
+ & \text{computer} & \text{data} & \text{result} & \text{pie} & \text{sugar} \\ \hline
+ \text{cherry} & 8.2 \times 10^{-7} & 2.9 \times 10^{-6} & 3.9 \times 10^{-5} & 1.7 \times 10^{-3} & 8.4 \times 10^{-4} \\ \hline
+\text{strawberry} & 0 & 0 & 2.6 \times 10^{-5} & 1.4 \times 10^{-3} & 3.8 \times 10^{-3} \\ \hline
+\text{digital} & 9.6 \times 10^{-5} & 8.6 \times 10^{-5} & 5.2 \times 10^{-5} & 2.8 \times 10^{-6} & 1.9 \times 10^{-5} \\ \hline
+\text{information} & 8.6 \times 10^{-5} & 9.1 \times 10^{-5} & 1.03 \times 10^{-4} & 1.2 \times 10^{-6}& 2.7 \times 10^{-5}\\ \hline
+\end{array}$$
+Based on the above values, we can also notice the following fact:
+
+PMI has the problem of being biased toward infrequent events; very rare words tend to have very high PMI values.
+
+However, it's unclear to me how this apparent behaviour is related to the mathematical formulation of the PMI above.
+How do we understand the fact quoted above from the fractional form of PMI given by the equations above?
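+To make my confusion concrete, here is a toy calculation with made-up counts (using the usual log2 form of PMI):
+import math
+N = 1_000_000                      # total number of (word, context) events
+# frequent pair: each side occurs 10,000 times, they co-occur 1,000 times
+pmi_frequent = math.log2((1_000 / N) / ((10_000 / N) * (10_000 / N)))   # log2(10) ≈ 3.3
+# rare pair: each side occurs only twice, and both occurrences are together
+pmi_rare = math.log2((2 / N) / ((2 / N) * (2 / N)))                     # log2(N / 2) ≈ 18.9
+print(pmi_frequent, pmi_rare)
+So the rare pair gets a much larger value simply because the denominator shrinks quadratically, but I'd like to understand how to read this off the fractional form above.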
+"
+"['neural-networks', 'reference-request', 'generative-adversarial-networks', 'generative-model', 'state-of-the-art']"," Title: What is the state of the art in melody generation?Body: Generative Adversarial Networks can generate realistic photos of people, such as thispersondoesnotexist.com. I wonder whether one can train an artificial intelligence on a batch of plain solo melodies (no instruments) and ask it to produce a new and similar one.
+This article suggests the techniques require a lot of work and are still young:
+
+We have explored and evaluated the generation of music using a Generative Adversarial Network as well as with an alternative method in the form of an N-gram model. Our GAN is able to capture some of the structure of single track music. We have accomplished our goal of identifying structural similarities shared across music compositions. However, the music we created lacks coherent melodies and needs improvement.
+
+What is the state of the art in melody generation?
+"
+"['neural-networks', 'deep-learning', 'deep-neural-networks', 'transfer-learning', 'fine-tuning']"," Title: Would this count as a Transfer Learning approach?Body: I have two datasets, Dataset 1(D1) and Dataset 2(D2). D1 has around 22000 samples, and D2 has around 8000 samples. What I am doing is that I train a Deep Neural Network model with around three layers on the dataset D1, which has an accuracy of around 84% (test accuracy = 82%).
+Now, when I use that model to make predictions on D2 without any fine-tuning or anything, I get an accuracy of around 15%(test accuracy = 12.3%). But when I add three more layers to the pre-trained model while keeping the three layers of the initial model(trained on D1) frozen, I get around 90% accuracy (test accuracy = 87.6%) on D2.
+This tells me that, because the initial model was performing so poorly without any fine-tuning, most of the learning that led to the 90% accuracy was only because of the additional layers, not the layers that were transferred from the model trained on the D1 dataset. Is this a correct inference? And if it is, then is it still valid to call this a Transfer Learning application? Or does it have to have more accuracy without fine-tuning to be rightly described as a Transfer Learning problem?
+"
+"['neat', 'hyper-parameters', 'neuroevolution']"," Title: How to ensure that the ES-HyperNEAT algorithm generates an ANN in the substrate?Body: I'm trying to implement the ES-HyperNEAT algorithm using the original paper, as well as the pseudocode provided in the official user page. Occasionally, the algorithm would be unable to generate a network in the substrate. This happens when it finds no valid nodes that could connect a path between the input and output neurons.
+I've noticed that this is highly dependent on how the hyperparameters (e.g., variance threshold and band threshold) were tuned.
+Is my implementation correct, i.e., is this normal behavior? If so, is there a good way to ensure that a network is always generated (aside from directly connecting the input and output neurons)?
+"
+['complexity-theory']," Title: How is complex systems research interacting with AI research?Body: Because future AI may produce emergent phenomena, and because these are probably gaps in our current understanding of this, it feels like complex systems may be an increasingly important research field.
+Unless there is some kind of commonality of emergent behaviour (or aspects of complex systems more generally) it seems that future AI systems may behave in ways very difficult to predict. Potentially complex system research of the brain could help in artificial neural network understanding but because of the diversity of the latter this may be a loose similarity. Further, this may or may not hold for other types of AI.
+The main question, at a basic educational level, is how complex systems research affects AI and vice versa. Possible subquestions (whatever helps understanding of this topic better): Is there much research at the intersection of AI and complex systems (so someone like Melanie Mitchell)? What kinds of things might complex systems research inform in AI? Is AI now helping us understand AI better?
+"
+"['recurrent-neural-networks', 'teacher-forcing', 'bidirectional-rnn']"," Title: Do bi-directional RNNs necessarily use 100% teacher forcing?Body: I typically think of teacher forcing as optional when training an RNN. We may either:
+
+- use the output of time-step $t$ as the input to time-step $t+1$
+
+- use the $(t+1)$th input as the input to time-step $t+1$
+
+
+When I actually sat down to write a bidirectional RNN from scratch today I realised it would be impossible to do without 100% teacher forcing, because each time step needs access to the "history" going back to the 0th time-step (forward direction) and going back (or forward - however you want to think of it) to the last time-step (backward direction).
+Is that correct?
+"
+"['reinforcement-learning', 'optimal-policy']"," Title: What does $v(S_{t+1})$ mean in the optimal state-action value function?Body: In Sutton & Barto's Reinforcement Learning: An Introduction page 63 the authors introduce the optimal state value function in the expression of the optimal action-value function as follows: $q_{*}(s,a)=\mathbb{E}[R_{t+1}+\gamma v_{*}(S_{t+1})|S_{t}=s, A_{t}=a], \forall s \in S, \forall a \in A$.
+I don't understand what $v_{*}(S_{t+1})$ could possibly mean since $v_{*}$ is a mapping, under the optimal policy $\pi_{*}$, from states to numbers which are expected returns starting from those states and at different time steps.
+I believe that the authors use the same notation to denote the state-value function $v$ that verify $v(s)=\mathbb{E}[G_{t}|S_{t}=s], \forall s \in S$ and the random variable $\mathbb{E}[G_{t+1}|S_{t+1}]$ but I'm not sure.
+"
+"['reinforcement-learning', 'deep-rl', 'soft-actor-critic']"," Title: How to enforce action bounds between 0 & 1 in soft actor-critic algorithm?Body: In the paper "Soft Actor-Critic Algorithms and Applications", appendix C shows enforcing action bounds using the tanh squashing function which is in (-1, 1). I have action bounds in (0, 1), so can I just modify the tanh output by applying the following transformation:
+output = 0.5 * (tanh_output + 1). If so, do I need to change the log-prob formula too?
+I have not seen any SAC implementation with different action bounds other than the paper's (-1, 1).
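+To make the question concrete, here is the kind of change I have in mind (my own sketch, not code from the paper; names are mine): if a = 0.5 * (tanh(u) + 1), the change-of-variables correction picks up an extra log(0.5) term per action dimension:
+import math
+import torch
+
+def squash_to_unit_interval(u, base_dist):
+    # u: pre-squash sample from the Gaussian policy, base_dist: its torch.distributions object
+    a = 0.5 * (torch.tanh(u) + 1.0)   # action now lies in (0, 1)
+    log_prob = base_dist.log_prob(u).sum(-1)
+    # change of variables: log|da/du| = log(0.5) + log(1 - tanh(u)^2), summed over action dims
+    log_prob -= (math.log(0.5) + torch.log(1.0 - torch.tanh(u).pow(2) + 1e-6)).sum(-1)
+    return a, log_prob
+Is this the correct way to account for the rescaling, or am I missing something?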
+"
+"['convolutional-neural-networks', 'image-processing', 'feature-engineering']"," Title: Feeding CNN FFT of an image, a dumb idea?Body: My dataset consists of about 40,000 200x200px grayscale images of centered blobs bathed in noise and occasional artifacts like stripes other blobs of different shapes and sizes, fuzzy speckles and so on in their neighborhood.
+They are used in a binary classification problem, with emphasis on recall.
+I read that using FFT of image and FFT of the convolutional kernel and multiplying the two, produces a similar result as convolutions would but at a way lower resource expense. This is probably the most straightforward article I found if you need a more detailed description(https://medium.com/analytics-vidhya/fast-cnn-substitution-of-convolution-layers-with-fft-layers-a9ed3bfdc99a)
+What I want to do however is simply feed the FFT of images to the standard CNN. The reasoning being, maybe it would be easier for the network to catch on to features that it would miss or tend to weigh less. Or in other words, FFT as a feature engineering technique.
+Would this be an idea worth trying to pursue?
+If so, any suggestion on which FFT components to extract (Amplitude/Phase, Real/Imaginary)?
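+For concreteness, the input preparation I have in mind is something like this (just a sketch of the idea, untested):
+import numpy as np
+
+def fft_channels(img):                                # img: (200, 200) grayscale array
+    spectrum = np.fft.fftshift(np.fft.fft2(img))
+    log_amplitude = np.log1p(np.abs(spectrum))        # compress the huge dynamic range
+    phase = np.angle(spectrum)
+    return np.stack([log_amplitude, phase], axis=-1)  # (200, 200, 2) input for the CNN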
+"
+"['reinforcement-learning', 'comparison', 'markov-decision-process', 'notation', 'multi-armed-bandits']"," Title: Is the Bandit Problem an MDP?Body: I've read Sutton and Barto's introductory RL book. They define a policy as a mapping from states to probabilities of selecting each possible action. If the agent is following policy $\pi$ at time $t$, then $\pi(a|s)$ as the probability of taking action $A_t = a$ when the current state is $S_t = s$. This definition is the context of the markov assumption, which is why the policy is only dependent on the current state.
+When discussing the standard k-armed bandits problem, they write $\pi(a)$ to denote the probability of taking action $a$, since there are no states. However, when designing the agent, clearly, the agent needs to keep track of what the past rewards are for each lever, so either there is a summary statistic of each lever, or the entire history of actions and rewards must be kept.
+Is the k-armed bandit problem then a MDP? Why isn't the notation $\pi(a|A_0, R_1, A_1, \ldots, R_T)$ for some sequence $A_0, R_1, A_1, \ldots, R_T$?
+"
+"['deep-learning', 'caffe']"," Title: Finetuning solver for Caffe neural networkBody: We're working on object detection in thermal images using neural network with Caffe framework. We use SSD ResNet-10 network available in OpenCV repository as it seems to provide the best performance on Raspberry Pi for our needs (in comparison to MobileNet etc.)
+https://github.com/opencv/opencv/blob/master/samples/dnn/face_detector/solver.prototxt
+train_net: "train.prototxt"
+test_net: "test.prototxt"
+
+test_iter: 2312
+test_interval: 5000
+test_initialization: true
+
+base_lr: 0.01
+display: 10
+lr_policy: "multistep"
+max_iter: 140000
+stepvalue: 80000
+stepvalue: 120000
+gamma: 0.1
+momentum: 0.9
+weight_decay: 0.0005
+average_loss: 500
+iter_size: 1
+type: "SGD"
+
+solver_mode: GPU
+random_seed: 0
+debug_info: false
+snapshot: 1000
+snapshot_prefix: "snapshot/res10_300x300_ssd"
+
+eval_type: "detection"
+ap_version: "11point"
+
+Train batch size is 16. Test batch size is 1.
+The training process starts at loss 23.4728 and reaches a plateau around loss 1.2; the learning rate is decreased at iteration 80000 and the loss falls down to 0.89. The loss then continues to decrease very slowly up to iteration 120000, where it is around 0.85. Then the LR is decreased again. The process ends at iteration 140000 with loss around 0.80 and test evaluation around 0.90.
+I noticed that selecting a different optimizer gives different results. I tried Nesterov, Adam (with fixed LR) and SGD with a different base_lr (0.05) and step size (100000). Are there any recommendations that I could try other than trial & error and waiting 12 hours to compare the results? Reduce/increase batch size? More iterations? Different step sizes?
+Adam provides the worst test evaluation. SGD with base_lr reduced to 0.05 and step size 100000 seems to provide the best result now (test eval = 0.94)
+
+"
+"['long-short-term-memory', 'time-series', 'forecasting']"," Title: LSTM Recursive ForecastBody: I am confused about the way the LSTM networks work when forecasting with a horizon that is not finite, but I'm rather searching for a prediction in whatever time in future. In physical terms, I would call it the evolution of the system.
+Suppose I have a time series $y(t)$ (output) I want to forecast, and some external inputs $u_1(t), u_2(t),\cdots u_N(t)$ on which the series $y(t)$ depends.
+It's common to use the lagged value of the output $y(t)$ as input for the network, such that I schematically have something like (let's consider for simplicity just lag 1 for the output and no lag for the external input):
+$$
+[y(t-1), u_1(t), u_2(t),\cdots u_N(t)] \to y(t)
+$$
+With this way of framing the network, when one wants to do a recursive forecast, it is forced to use the predicted value at the previous step as input for the next step. This causes a propagation of error that makes the long-term forecast behave badly.
+Now, my confusion is, I'm thinking of an RNN as a kind of a (simple version) implementation of a state-space model where I have the inputs, my output and one or more state variables responsible for the memory of the system. These variables are hidden and not observed.
+So, now, the question: if there is this kind of variable that already takes previous states of the system into account, why would I need to use the lagged output value as input to my network/model?
+If I get rid of this, would my long-term forecast be better, since I'm no longer expecting propagation of the error of the forecasted output? (I guess there will be an error in the internal state propagating anyway)
+"
+"['reinforcement-learning', 'value-iteration', 'policy-iteration', 'dynamic-programming']"," Title: Are policy and value iteration used only in grid world like scenarios?Body: I am trying to self learn reinforcement learning. At the moment I am focusing on policy and value iteration, and I am finding several problems and doubts.
+One of the main doubts is given by the fact that I can't find many diversified examples of how to implement these in Python; instead, I always find only the classical grid world example.
+So, my doubt is: Are policy and value iteration used only in grid-world-like scenarios, or can they also be used in other contexts?
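+For reference, this is the generic form I'm working from (my own sketch, where the MDP is given as plain transition lists rather than a grid):
+import numpy as np
+
+def value_iteration(P, R, gamma=0.9, tol=1e-6):
+    # P[s][a] is a list of (prob, next_state) pairs, R[s][a] is the immediate reward;
+    # nothing here assumes a grid, only a finite MDP
+    n_states = len(P)
+    V = np.zeros(n_states)
+    while True:
+        V_new = np.array([max(R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
+                              for a in range(len(P[s]))) for s in range(n_states)])
+        if np.max(np.abs(V_new - V)) < tol:
+            return V_new
+        V = V_new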
+"
+"['neural-networks', 'backpropagation', 'supervised-learning', 'perceptron']"," Title: Backpropagation not working as expectedBody: I'm new to neural networks and I try to make a model that is guessing if a point is below or above relative to a function output. The idea is inspired from this video https://youtu.be/DGxIcDjPzac .
+What am I doing wrong?
+In the gif below I start the training, but it seems that it is not working. The blue line is the function (y = x + 50) and all the points above it should be green, but aren't. In order to simplify the example and to make debugging easier, I picked a simple function such that I can use only a perceptron for the model.
+I also made a method backPropagationDebug(...) to display, for the points that are predicted wrong, all the matrices in each step, but I couldn't find what's wrong.
+
+public void backPropagation(double[][] input, double[][] expected) {
+ double[][][] outputs = getOutputs(input);
+
+ double[][] currentOutput = outputs[outputs.length - 1];
+ double[][] currentError = Matrix.subtract(expected, currentOutput);
+
+ for (int i = brain.length - 1; i >= 0; i--) {
+ final double[][] layer = brain[i];
+ final double[][] previousOutput = outputs[i];
+
+ final double[][] layerTranspose = Matrix.transpose(layer);
+ final double[][] previousError = Matrix.multiply(layerTranspose, currentError);
+
+        /* FIRST BIT */
+ double[][] errorSigmoid = Matrix.copyOf(currentError);
+
+ for (int k = 0; k < errorSigmoid.length; k++) {
+ errorSigmoid[k][0] *= - derivativeActivationFunction(currentOutput[k][0]);
+ }
+
+ /* SECOND BIT */
+ final double[][] slopeMatrix = Matrix.multiply(errorSigmoid, Matrix.transpose(previousOutput));
+
+ /* UPDATE THE WEIGHTS */
+ for (int k = 0; k < layer.length; k++) {
+ for (int l = 0; l < layer[0].length; l++) {
+ layer[k][l] = layer[k][l] - learningRate * slopeMatrix[k][l];
+ }
+ }
+
+ currentOutput = previousOutput;
+ currentError = previousError;
+ }
+}
+
+The backpropagation steps are inspired from this formulas:
+
+
+(From: Make Your Own Neural Network By Tariq Rashid)
+The code is on github: https://github.com/StamateValentin/Artificial-Intelligence-Playground/tree/7a7446b7faedd7673bc53a62304ff3a5180d77eb
+The resources I used are in the README.md file.
+"
+"['neural-networks', 'relu']"," Title: Can Neural Networks using ReLU activation work without using the bias term in their neurons?Body: I created a super simple NN of 1 input, 2 hidden layers of 2 neurons each and 1 output neuron as shown below.
+
+All activations are ReLUs and the neurons don't use the bias term. What I found is that the output graph is a combination of two linear functions (one when the input is negative and another when the input is positive), kind of like this.
+
+I think that, without the bias term, the output will be a linear function (for negative and positive inputs separately) no matter how big the network is. My question is, is this useful at all as an architecture? I assume it might be - if multiple output nodes are available, or is it? Does any of this mean that a bias term is mandatory? Just trying to get my intuition right here...
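+Here is a tiny numpy check of what I mean (random weights, so the numbers themselves are arbitrary): without biases the network is positively homogeneous, which seems to be exactly why I only ever get two linear pieces:
+import numpy as np
+
+relu = lambda z: np.maximum(z, 0.0)
+W1, W2, W3 = np.random.randn(2, 1), np.random.randn(2, 2), np.random.randn(1, 2)  # made-up weights
+f = lambda x: W3 @ relu(W2 @ relu(W1 @ np.array([[x]])))
+
+print(f(2.0), 2 * f(1.0))    # equal: f(a*x) = a*f(x) for a >= 0 when there are no biases
+print(f(-3.0), 3 * f(-1.0))  # the same on the negative side, with a different slope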
+"
+"['deep-learning', 'image-recognition', 'pooling']"," Title: DeepLabV3: Why use global average pooling in the ASPP module?Body: I'm trying to understand the rationale of the various modifications the authors of the DeepLab models have made to their third version, DeepLabV3. In the paper, the following is written:
+
+ASPP with different atrous rates effectively captures
+multi-scale information. However, we discover that as the sampling
+rate becomes larger, the number of valid filter weights (i.e.,
+the weights that are applied to the valid feature region, instead of
+padded zeros) becomes smaller. This effect is illustrated in Fig. 4
+when applying a 3×3 filter to a 65×65 feature map with different atrous
+rates. In the extreme case where the rate value is close to the
+feature map size, the 3×3 filter, instead of capturing the whole image
+context, degenerates to a simple 1×1 filter since only the center filter
+weight is effective. To overcome this problem and incorporate global
+context information to the model, we adopt image-level
+features, similar to [58,95]. Specifically, we apply global average
+pooling on the last feature map of the model, feed the resulting
+image-level features to a 1×1 convolution with 256 filters (and batch
+normalization [38]), and then bilinearly upsample the feature to the
+desired spatial dimension.
+
+I do not understand how global pooling solves this problem. Is it simply because it does not suffer from the same issue of ASPP (the degeneration of the weights), and serves as an alternative?
+From: Chen, L. C., Papandreou, G., Schroff, F., & Adam, H. (2017). Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587.
+"
+"['neural-networks', 'convolutional-neural-networks', 'filters', 'explainable-ai', 'convolutional-layers']"," Title: Are these visualisations the filters of the convolution layer or the convolved images with the filters?Body: There are several images related to convolutional networks on the Internet, an example of which I have given below
+
+My question is: are these images the weights/filters of the convolution layer (the weights that are learned in the learning process), or the convolved images of the previous layer's image with the filters of the current layer?
+image source:
+https://stats.stackexchange.com/questions/362988/in-cnn-do-we-have-learn-kernel-values-at-every-convolution-layer
+"
+"['deep-learning', 'convolutional-neural-networks', 'computer-vision', 'object-detection', 'image-segmentation']"," Title: How to change a single object detection network to a multiple object detection network?Body: I have trained a CNN network to detect a circle and approximate its centre and radius in an image. What I want to do now is detect the centre and radius of all the circles if there are multiple circles present in an image.
+How do I proceed to go on about it? Do I have to make changes to my dataset to be able to do so? I tried to look at different architectures that do multiple object detection, but I couldn't understand what changes I could make to my architecture.
+"
+"['deep-learning', 'convolutional-neural-networks', 'classification', 'imbalanced-datasets', 'pose-estimation']"," Title: Evaluating a convolutional neural network on an imbalanced (academic) datasetBody: I have trained a posture analysis network to classify in a video of humans recorded in public places if there is a) shake-hand between two humans, b) Standing close together that their hands touch each other but not shake hand and c) No interaction at all.
+There are multiple labels to identify different parts of a human. The labels are done to train the network to spot hand-shaking in a large dataset of videos of humans recorded in public.
+As you can guess, this leads to an imbalanced dataset.
+To train, I sampled data such that 60% of my input contained handshaking images and the rest contained images other than hand-shaking. In this network, we are not looking at just the labels but also the relative position of individual labels with respect to one another. We have an algorithm that can then classify them into the three classes.
+I am stuck on how to evaluate the performance of this network. I have a large dataset and it is not labeled. So I have decided to pick 25 from class A) and B) and 50 from class (C) to create a small dataset to show the performance of the network.
+I also plan to run the network on the large unlabelled dataset; because classes A and B are quite rare events, I would be able to individually assess the accuracy of the network's predictions in terms of true-positive and false-positive cases.
+Is this a sound way to evaluate? Can anyone with experience or an opinion share their input on this? How else can I evaluate this?
+"
+"['objective-functions', 'optimization', 'gradient-descent', 'binary-classification', 'cross-entropy']"," Title: In logistic regression, why is the binary cross-entropy loss function convex?Body: I am studying logistic regression for binary classification.
+The loss function used is cross-entropy. For a given input $x$, if our model outputs $\hat{y}$ instead of $y$, the loss is given by
+$$\text{L}_{\text{CE}}(y,\hat{y}) = -[y \log \hat{y} + (1 - y) (\log{1 - \hat{y}})]$$
+Suppose there are $m$ such training examples, then the overall total loss function $\text{TL}_{\text{CE}}$ is given by
+$$\text{TL}_{\text{CE}} = \dfrac{1}{m} \sum\limits_{i = 1}^{m} \text{L}_{\text{CE}} (y_i , \hat{y_i}) $$
+It is said that the loss function is convex. That is, if I plot the loss values against the corresponding weights, the curve will be convex. The material from the textbook did not give any explanation regarding the convex nature of the cross-entropy loss function, as you can observe from the following passage.
+
+For logistic regression, this (cross-entropy) loss function is conveniently convex. A
+convex function has just one minimum; there are no local minima
+to get stuck in, so gradient descent starting from any point is
+guaranteed to find the minimum. (By contrast, the loss for multi-layer
+neural networks is non-convex, and gradient descent may get stuck in
+local minima for neural network training and never find the global
+optimum.)
+
+How did they conclude conveniently that the loss function is convex? Is it by plotting or some other means?
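+My own attempt at the argument (I'm not sure this is what the book intends): for a single example with $z = \mathbf{w}^\top \mathbf{x}$ and $\hat{y} = \sigma(z)$,
+$$\dfrac{\partial \text{L}_{\text{CE}}}{\partial \mathbf{w}} = (\hat{y} - y)\,\mathbf{x}, \qquad \dfrac{\partial^2 \text{L}_{\text{CE}}}{\partial \mathbf{w}\,\partial \mathbf{w}^\top} = \hat{y}(1 - \hat{y})\,\mathbf{x}\mathbf{x}^\top,$$
+which is positive semi-definite since $\hat{y}(1-\hat{y}) \geq 0$ and $\mathbf{x}\mathbf{x}^\top$ is positive semi-definite, and a sum of convex functions ($\text{TL}_{\text{CE}}$) stays convex. Is this the intended reasoning, or is there a simpler argument?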
+"
+"['machine-learning', 'terminology', 'overfitting', 'books']"," Title: What are the 'noisy factors' leading to overfitting?Body: Consider the following excerpt from section 5.5 Regularization (p. 13) of this chapter Logistic Regression.
+
+There is a problem with learning weights that make the model perfectly match the training data. If a feature is perfectly predictive of the outcome because it happens to only occur in one class, it will be assigned a very high weight. The weights for features will attempt to perfectly fit details of the training set, in fact too perfectly, modeling noisy factors that just accidentally correlate with the class. This problem is called overfitting.
+
+What are the 'noisy factors' here? Does it refer to the features that are irrelevant to the class label?
+Or does it mean the noise/errors in the values taken by features that accidentally correlate with the class label?
+"
+"['reinforcement-learning', 'deep-rl', 'reward-functions', 'reward-design', 'reward-shaping']"," Title: How would you shape a reward function if there was four quantities to optimize?Body: I found this article quite useful on how to shape a reward function in RL. However, the example they gave is quite simple, where the goal is to minimize only two quantities (velocity and distance).
+How would you formulate the reward function if you had, for instance, 4 quantities to optimize?
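+What I'm currently considering is a plain weighted sum; the quantity names and weights below are placeholders I made up, not from the article:
+def reward(drag, energy, jerk, time_penalty, w=(1.0, 0.5, 0.1, 0.01)):
+    # all four quantities are to be minimized, so the reward is their negative weighted sum
+    return -(w[0] * drag + w[1] * energy + w[2] * jerk + w[3] * time_penalty)
+Is such a weighted sum the usual approach, or is there a better way to balance four objectives?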
+
+"
+"['machine-learning', 'natural-language-processing', 'supervised-learning', 'named-entity-recognition']"," Title: Extracting keywords from messagesBody: I'm starting a project where I want to extract keywords from given messages. The keywords are for example something like: "hard disk", "watch" or other technical components. I'm working with a dataset where a technician wrote a small text if he maintenanced something on a given object.
+The messages are often very different in their form. For example sometimes the messages start with the repaired object and sometimes with the current date.
+I looked into some NER libraries and it doesn't seem like they can handle tasks like that. Especially the German language makes it hard for those libraries to detect entities.
+I had the idea to use CRFsuite to train my own NER-model. But I'm not sure how accurate the outcome will be. It would mean that I have to tag a lot of training data and I'm not sure if the outcome will match the time I have to spend to tag those keywords.
+Does anybody have any experience with such custom NER-models? How accurate can such a model extract wanted keywords?
+"
+"['neural-networks', 'recurrent-neural-networks']"," Title: Can we modelize an RNN by an ANN that takes precedent output as a part of input?Body: Is it possible to consider an RNN as a classical feedforward neural network that just take the precedent output as a part of the input ?
+"
+"['reinforcement-learning', 'q-learning', 'reference-request', 'papers', 'convergence']"," Title: Is there any work that applies the approach in ""Finite-Sample Convergence Rates for Q-Learning and Indirect Algorithms"" to standard Q-learning?Body: I am trying to mathematically characterize the finite sample convergence rates for Q-learning. To this end, I have read the following papers
+
+In the latter, they introduce a rather simple approach that seems appealing to me; however, they only sketch it for phased Q-learning.
+I would be interested in knowing about any source where I could find the same approach modified for standard Q-learning; in section 4 of the paper, they claim that
+
+all of our results can be generalized to apply to standard Q-learning
+
+Moreover, if you feel I am missing any paper that could be of interest with regards to the finite sample convergence of Q-learning, I would greatly appreciate it if you could post the name of it.
+"
+"['problem-solving', 'homework']"," Title: What is the possible solution to the Problem?Body: Don't need a complete solution just some guidance on how to solve it.
+
+Consider that a person has never been to the city airport. It's early in the morning, and assume that no other person is awake in the town who can guide him on the way. He has to drive in his car but doesn't know the way to the airport. Clearly identify the four components of problem-solving in the above statement, i.e. problem statement, operators, solution space, and goal state. Should he follow a blind or heuristic search strategy? Try to model the problem in a graphical representation.
+
+"
+"['neural-networks', 'deep-learning', 'generative-adversarial-networks', 'pytorch']"," Title: GAN performs worse after 50 epochs than after 2Body: I am training GAN on SVHN dataset (house numbers in Google Street View images, dimensions: 3x32x32 - 3 color channels).
+The problem is that it performs worse after some training (e.g. after 50 epochs) than after only 2.
+Could you please check out my code? Maybe you will be able to notice what I can improve.
+I have already tweaked betas in ADAM optimizer (it helped a little bit, because before that, with default settings, d_loss went to 0 after 5 epochs). I also added an extra discriminator training step.
+You can find the code below:
+batch_size=128
+
+# the input data is scaled to a mean of 0.5 and a standard deviation of 0.5:
+transform = transforms.Compose([
+ transforms.ToTensor(),
+ transforms.Normalize(mean=(0.5,), std=(0.5,)),
+ transforms.Lambda(lambda x: x.view(-1))])
+
+
+# flattening for the loader (view) or a convolutional approach
+# lambda
+train_dataset = SVHN(root=".", split='train', download=True,
+ transform=transform)
+
+test_dataset = SVHN(root=".", split='test', download=True,
+ transform=transform)
+
+train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True, drop_last=True, num_workers=0)
+test_loader = DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=True, drop_last=True, num_workers=0)
+
+# Initialize random noise
+
+def noise(size):
+ n = torch.randn(size, 100)
+ return n.to(device)
+
+# Define the generator model
+
+class Generator(nn.Module):
+ def __init__(self):
+ super().__init__()
+ self.model = nn.Sequential(
+ # take a 100-dimensional input (random noise)
+ nn.Linear(100, 256),
+ nn.LeakyReLU(0.2),
+ nn.Linear(256, 512),
+ nn.LeakyReLU(0.2),
+ nn.Linear(512, 1024),
+ nn.LeakyReLU(0.2),
+ nn.Linear(1024, 3*32*32),
+ nn.Tanh()
+
+ )
+
+ def forward(self, x): return self.model(x)
+
+# Define the discriminator model
+
+class Discriminator(nn.Module):
+ def __init__(self):
+ super().__init__()
+ self.model = nn.Sequential(
+ nn.Linear(3*32*32, 1024),
+ nn.LeakyReLU(0.2),
+ nn.Dropout(0.3),
+ nn.Linear(1024, 512),
+ nn.LeakyReLU(0.2),
+ nn.Dropout(0.3),
+ nn.Linear(512, 256),
+ nn.LeakyReLU(0.2),
+ nn.Dropout(0.3),
+ nn.Linear(256, 1),
+ nn.Sigmoid()
+
+ )
+ def forward(self, x): return self.model(x)
+
+# define generator training - input is fake data
+def generator_train_step(fake_data):
+ # reset the gradient so the parameters will update correctly
+ g_optimizer.zero_grad()
+
+ # predict the output of the discriminator on fake data
+ prediction_fake = discriminator(fake_data)
+
+ # torch.ones, because we want 1s outputted by the discriminator when training the generator
+ error = loss(prediction_fake, torch.ones(len(real_data), 1).to(device))
+ error.backward()
+
+ g_optimizer.step()
+ return error
+
+# define discriminator training:
+
+# discriminator as an input takes real data and fake data
+def discriminator_train_step(real_data, fake_data):
+ # reset the gradient so the parameters will update correctly
+ d_optimizer.zero_grad()
+
+ #print(real_data.shape)
+ #print(fake_data.shape)
+
+ prediction_real = discriminator(real_data)
+
+ # calculate loss on real data (expected 1, so torch.ones)
+ real_loss = loss(prediction_real, torch.ones(len(real_data),1).to(device))
+ real_loss.backward(retain_graph=True)
+
+ prediction_fake = discriminator(fake_data)
+
+ # calculate loss on fake data (expected 0, so torch.zeros)
+ fake_loss = loss(prediction_fake, torch.zeros(len(fake_data), 1).to(device))
+ fake_loss.backward(retain_graph=True)
+
+ d_optimizer.step()
+ return real_loss + fake_loss
+
+lr = 1e-3
+
+discriminator = Discriminator().to(device)
+generator = Generator().to(device)
+d_optimizer = optim.Adam(discriminator.parameters(), lr=lr, betas=(0.5, 0.999))
+g_optimizer = optim.Adam(generator.parameters(), lr=lr, betas=(0.4, 0.9))
+loss = nn.BCELoss()
+num_epochs = 50
+log = Report(num_epochs)
+
+for epoch in range(num_epochs):
+ N = len(train_loader)
+ for i, (images, _) in enumerate(train_loader):
+ real_data = images.view(len(images), -1).to(device)
+ fake_data = generator(noise(len(real_data))).to(device)
+ fake_data = fake_data.detach()
+ d_loss = discriminator_train_step(real_data, fake_data)
+ fake_data = generator(noise(len(real_data))).to(device)
+
+ d_loss = discriminator_train_step(real_data, fake_data)
+
+ g_loss = generator_train_step(fake_data)
+
+ log.record(epoch+(1+i)/N, d_loss=d_loss.item(), g_loss=g_loss.item(), end='\r')
+ log.report_avgs(epoch+1)
+ z = torch.randn(10, 100).to(device)
+ # 10 pictures of dimensions of 3x32x32:
+ sample_images = generator(z).data.cpu().view(10, 3, 32, 32)
+ grid = make_grid(sample_images, nrow=4, normalize=True)
+ show(grid.cpu().detach().permute(1,2,0), sz=10)
+log.plot_epochs(['d_loss', 'g_loss'])
+
+In case you would like to see the results of training this GAN, I enclose an example generated images after epoch 2:
+EPOCH: 2.000 d_loss: 4.936 g_loss: 1.919 (130.46s - 3131.14s remaining)))
+
+
+and after epoch 50:
+
+EPOCH: 50.000 d_loss: 1.253 g_loss: 1.038 (3487.47s - 0.00s remaining))
+
+And here is the plot of discriminator loss and generator loss:
+
+"
+"['deep-learning', 'papers', 'object-detection', 'r-cnn']"," Title: What to do when the ROIs are smaller than $227 \times 227$ in R-CNN?Body: As English is not my native language, I have some hard time understanding the following sentence:
+
+Regardless of the size or aspect ratio of the candidate region, we warp all pixels in a tight bounding box around it to the required size. Prior to warping, we dilate the tight bounding box so that at the warped size there are exactly p pixels of warped image context around the original box (we use p = 16).
+
+This is from the R-CNN paper. I already extracted the ROI, but now, they say that the input of the CNN should be 227 x 227, but a lot of my ROIs are much smaller. How can I deal with it?
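+What I understand the warping step to be is roughly the following (my own sketch with p = 16 context pixels, not code from the paper):
+import cv2
+
+def warp_roi(image, box, p=16, size=227):
+    x1, y1, x2, y2 = box
+    # dilate the tight box so that, after warping, about p pixels of context surround it
+    w, h = x2 - x1, y2 - y1
+    pad_x = int(round(p * w / (size - 2 * p)))
+    pad_y = int(round(p * h / (size - 2 * p)))
+    x1, y1 = max(0, x1 - pad_x), max(0, y1 - pad_y)
+    x2, y2 = min(image.shape[1], x2 + pad_x), min(image.shape[0], y2 + pad_y)
+    # warp (anisotropically resize) the crop to the fixed CNN input size, even if the ROI is tiny
+    return cv2.resize(image[y1:y2, x1:x2], (size, size), interpolation=cv2.INTER_LINEAR)
+Is simply upsampling small ROIs like this really all the paper means, or is something else needed?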
+"
+"['computer-vision', 'terminology', 'image-processing']"," Title: What do ""spatial"" and ""temporal"" mean in the context of image processing?Body: I am new to image processing. I am trying to understand CNNs from this blog post. Here's an excerpt from that article that mentions these terms.
+
+A ConvNet is able to successfully capture the Spatial and Temporal dependencies in an image through the application of relevant filters. The architecture performs a better fitting to the image dataset due to the reduction in the number of parameters involved and reusability of weights.
+
+I am not able to understand the terms spatial and temporal, and their respective dependencies in images. I have encountered the spatial and temporal many times. However, still, I am not able to understand how space (spatial) and temporal(time) concepts map to an image.
+(By the way, in the quote above, what does the term "reusability of weights" mean?)
+"
+"['convolutional-neural-networks', 'comparison', 'autoencoders']"," Title: What is the conceptual difference between convolutional neural networks and auto-encoders?Body: I'm familiar with Auto-Encoders and I'm about to dive into CNNs. By having a look at the most important component of a CNN, the filter:
+
+I wonder how it is different from Auto-Encoders:
+
+For me, it looks conceptually the same. I would even say both are instances of another, higher-level concept: dimensionality reduction. In PCA, for example, as well as in AEs and (?) in CNNs, you transform higher-dimensional data into lower-dimensional / compressed data.
+I can see that the methods are somehow different, but I can't really explain in which way.
+"
+['convolutional-neural-networks']," Title: How is parameter sharing done in CNN?Body: I am trying to understand the concept of parameter sharing in a convolution neural network from Parameter Sharing. I have a few confusions:
+Parameter sharing refers to the fact that for generating a single activation map, we use the same kernel throughout the image. And for that activation map, the weights of that kernel remain the same through the image?
+Denoting a single 2-dimensional slice of depth as a depth slice (e.g. a volume of size [55x55x96] has 96 depth slices, each of size [55x55]), we are going to constrain the neurons in each depth slice to use the same weights and bias.
+Does the above paragraph refer to the fact that output of neurons in one activation map is generated by using the same weights in kernel throughout the image? And that kernel is convolved on the entire image?
+No. of parameters without parameter sharing:
+There are 55*55*96 = 290,400 neurons in the first Conv Layer, and each has 11*11*3 = 363 weights and 1 bias. Together, this adds up to 290,400 * 364 = 105,705,600 parameters on the first layer of the ConvNet alone. Clearly, this number is very high.
+No. of parameters with parameter sharing
+With the parameter sharing scheme, the first Conv Layer in our example would now have only 96 unique sets of weights (one for each depth slice), for a total of 96*11*11*3 = 34,848 unique weights, or 34,944 parameters (+96 biases). Alternatively, all 55*55 neurons in each depth slice will now be using the same parameters. What does this bold sentence mean?
+Also, how are the parameters different for the two schemes? In both cases, we are using 96 kernels of size 11*11*3 and the resulting output is 55*55. Then how does the number of parameters for the two schemes come out to be different?
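+Here is the arithmetic as I reconstruct it (just to check my reading):
+neurons      = 55 * 55 * 96                 # 290,400 output positions in the volume
+no_sharing   = neurons * (11 * 11 * 3 + 1)  # 290,400 * 364 = 105,705,600 parameters
+with_sharing = 96 * 11 * 11 * 3 + 96        # 34,848 weights + 96 biases = 34,944 parameters
+print(no_sharing, with_sharing)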
+"
+"['models', 'word-embedding', 'text-generation']"," Title: How to generate text descriptions from keywords?Body: I wonder how can I build a neural network which will generate text description from given tag/tags. Let's assume I have created such data structure:
+{
+ 'tag1': ['some description1', 'some description2', 'some description3'],
+ 'tag2': ['some description4', 'some description5', 'some description6'],
+ 'tag3': ['some description7', 'some description8', 'some description9']
+}
+
+Then I would like to create a neural network which will generate randomly generated description based on given tags. For example:
+INPUT: ['TAG1', 'TAG2', 'TAG3'] => OUTPUT: 'some description1. some description5 some description9'
+
+Then I thought it could be a good idea to implement an LSTM and do text generation, but here I have a problem: I know how to do it for one tag. I can create one corpus of text containing different sentences for a tag, then do the training and generate a sentence for the given tag, but what if I have multiple tags? Should I create a corpus for each tag, or maybe there is a better way to do that? If you know any articles which cover this problem, I would appreciate it if you shared them with me. If you have a neural network proposal which would solve this problem, I am also open to proposals.
+
+PS. I know I can solve this problem with a simple map, for example: ['tag1', 'tag2', 'tag3'].map(tag => tagSentenceMap.get(tag).randomChoice()).join('. '), but this is not the case for me.
+
+"
+"['neural-networks', 'reference-request', 'terminology', 'philosophy', 'books']"," Title: Is there a recent book that covers the theoretical and philosophical aspects of artificial intelligence?Body: What are some recent books that introduce AI and neural networks while also discussing the related philosophical issues, like epistemology and whether AI is really thinking, etc.?
+"
+"['neural-networks', 'reinforcement-learning', 'genetic-algorithms', 'neat', 'neuroevolution']"," Title: How does the paper implement NEAT without a global set tracking Innovations?Body: I have been reading this paper on NEAT and trying to implement the algorithm in C#. For the most part, I understand everything in the paper however, there are 2 things I don't understand that confuse me.
+
+- In the paper it states:
+
+
+A possible problem is that the same structural innovation will receive different innovation numbers in the same generation if it occurs by chance more than once. However, by keeping a list of the innovations that occurred in the current generation, it is possible to ensure that when the same structure arises more than once through independent mutations in the same generation, each identical mutation is assigned the same innovation number. Extensive experimentation established that resetting the list every generation as opposed to keeping a growing list of mutations throughout evolution is sufficient to prevent innovation numbers from exploding.
+
+This implies that no global list/set is used to track innovations. If you have a global set to track them, then you wouldn't need to use a list during the evolution of the current generation because as soon as an innovation is created, it would be added to the set. This would be seen by the next evaluated Net/Genome.
+From what I have read everyone uses a global list to track the innovations. This makes sense to me. I am just very confused as to how they did it in the paper considering they did extensive testing to figure out they only need a list for the evaluation of the current generation.
+"
+"['computer-vision', 'classification', 'papers', 'transformer', 'vision-transformer']"," Title: Why class embedding token is added to the Visual Transformer?Body: In the famous work on the Visual Transformers, the image is split into patches of a certain size (say 16x16), and these patches are treated as tokens in the NLP tasks. In order to perform classification, a CLS token is added at the beginning of the resulting sequence:
+$$
+[\textbf{x}_{class}, \textbf{x}_{p}^{1}, \ldots, \textbf{x}_{p}^{N}]
+,$$
+where $ \textbf{x}_{p}^{i}$ are image patches. There are multiple layers in the architecture, and the state of the CLS token at the output layer is used for classification.
+I think this architectural solution is done in the spirit of NLP problems (BERT in particular). However, for me, it would be more natural not to create a special token, but to perform 1d global pooling at the end and attach an nn.Linear(embedding_dim, num_classes), as a more conventional CV approach.
+Why is it not done this way? Or is there some intuition or evidence that this would perform worse than the approach used in the paper?
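+What I mean by the more conventional CV approach is roughly this (my own sketch):
+import torch.nn as nn
+
+class MeanPoolHead(nn.Module):
+    def __init__(self, embedding_dim, num_classes):
+        super().__init__()
+        self.fc = nn.Linear(embedding_dim, num_classes)
+
+    def forward(self, tokens):              # tokens: (batch, num_patches, embedding_dim), no CLS token
+        return self.fc(tokens.mean(dim=1))  # 1d global average pooling over the patch dimension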
+"
+"['reinforcement-learning', 'comparison', 'rewards', 'value-iteration', 'return']"," Title: What is the difference between a reward and a value for a given state?Body: I am trying to learn reinforcement learning and I am focusing on the value iteration. I am looking at the example of grid world, and I am trying to implement it in python. While doing this, I encountered the situation in which I had to set the rewards for the agent, but looking at the theory, I have found that each state has also a value, which is found using the value iteration.
+So, my doubt is: What is the difference between a reward and a value for a given state? And should the initial values of the states always be set equal to zero?
+"
+"['neural-networks', 'reinforcement-learning', 'genetic-algorithms', 'neat', 'neuroevolution']"," Title: In NEAT, how do node numbers work?Body: I have read a lot of debates about node ids and such. I'm not 100% sure how it works, but I am assuming the next node added to a network would be the next number in that specific networks list?
+For example, say we start with a network with 2 inputs 1 output (nodes 1,2,3). Let's say in generation 1, one network splits a connection creating node 4. Then in generation 2, a different network splits a different connection. This would be node 4 for that specific network right? From my understanding (correct me if I'm wrong), this second split would result in 1 new innovation connection. The connection from the input to node 4 would be new but the connection from node 4 to 3 would already exist from the first split?
+"
+"['reinforcement-learning', 'deep-learning', 'papers', 'pytorch', 'unsupervised-learning']"," Title: How does CURL extract labels from logits?Body: While going over the pseudocode of the CURL paper, the method to identify labels from the logits wasn't clear to me. I believe this technique might be common in other PyTorch/Deep Learning tasks. I have attached the pseudocode below -
+
+"
+"['reinforcement-learning', 'monte-carlo-tree-search', 'open-ai', 'gym', 'atari-games']"," Title: Too slow search using MCTS in OpenAI Atari gamesBody: I'm recently using Monte Carlo Tree Search in OpenAi Gym Atari, but the result isn't satisfying.
+Without rendering, the game lasts about 180 steps (env.step() was called that many times) with a random agent. However, my MCTS agent only made the game last 12 steps, and it took quite a lot of time to give the next step.
+I guess it's a problem with the rollout. I build the MCTS tree using nodes containing AtariEnv objects, deepcopy the environment each time I roll out, and add up the reward.
+So it takes about 1 second to expand nodes and roll out; if I do 100 iterations, that would be a massive waiting time.
+My rollout code is shown below:
+def rollout_(current,if_render):
+ '''
+ current is going to be a Node object
+ '''
+ sandBox = deepcopy(current.state)
+ endReward = 0
+ done = False
+ while done != True:
+ action=sandBox.action_space.sample()
+        _, reward, done, info = sandBox.step(action)  # weird: also returns obs_next
+ if reward > 0:
+ reward *= 2
+ endReward += reward-0.008
+ return endReward
+
+Anyone can help?
+"
+"['machine-learning', 'perceptron']"," Title: Is my flowchart a good representation of the perceptron learning algorithm?Body: I made a flowchart for a simplified perceptron leaning algorithm.
+
+Here is the process of the learning algorithm.
+
+- Initialize the weights first.
+
+- Get a training example randomly and make a prediction. If the prediction matches the ground-truth value, then get another training example. If the prediction doesn't match the ground-truth value, update the weights.
+
+- repeat step 2 until all predictions match the ground-truth value (or other stop criteria)
+
+
+Is my flowchart a good representation? If not, what are the errors, and what might be improved?
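+In code form, the process I'm trying to draw is roughly this (my own sketch; X, y, the learning rate and the epoch limit are placeholders):
+import numpy as np
+
+def train_perceptron(X, y, lr=0.1, max_epochs=100):
+    w = np.zeros(X.shape[1])                      # step 1: initialize the weights
+    for _ in range(max_epochs):                   # step 3: repeat until all predictions match
+        mistakes = 0
+        for i in np.random.permutation(len(X)):   # step 2: pick a training example randomly
+            pred = 1 if X[i] @ w > 0 else 0
+            if pred != y[i]:                      # update the weights only when the prediction is wrong
+                w += lr * (y[i] - pred) * X[i]
+                mistakes += 1
+        if mistakes == 0:
+            break
+    return w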
+"
+"['recurrent-neural-networks', 'word-embedding', 'books', 'maximum-likelihood']"," Title: Why can't recurrent neural network handle large corpus for obtaining embeddings?Body: In order to learn the embeddings, we need to train a model based on some objective function. The model can be an RNN and the objective function can be the likelihood. We learn the embeddings by calculating the likelihood, and the embeddings are considered good if the likelihood is maximum for them.
+The following paragraph says that it is difficult to scale RNN to estimate the maximum likelihood for large corpus due to scaling issues:
+
+Likelihood-based optimization is derived from the objective $\log p(w; U)$, where $U \in R_{K \times V}$ is matrix of word embeddings, and $w =\{w_m \}_{m=1}^M$ is a corpus, represented as a list of $M$ tokens. Recurrent neural network language models optimize this objective directly, backpropagating to the input word embeddings through the recurrent structure. However, state-of-the-art word embeddings employ huge corpora with hundreds of billions of tokens, and recurrent architectures are difficult to scale to such data. As a result, likelihood-based word embeddings are usually based on simplified likelihoods or heuristic approximations.
+
+What type of scaling, with respect to RNNs, is being referred to here? Why is it difficult to scale RNNs?
+
+The paragraph above is taken from the page 329 of Chapter 14: Distributional and distributed semantics of the textbook Natural Language Processing by Jacob Eisenstein
+"
+"['agi', 'singularity']"," Title: Can an animal-level artificial general intelligence kickstart the Singularity?Body: Most people seem to assume that we need a human-level AI as a starting point for the Singularity.
+Let's say someone invents a general intelligence that is not quite on the scale of a human brain, but comparable to a rat. This AI can think on its own, learn and solve a wide range of problems, and basically demonstrates rat-level cognitive behavior. It's just not as smart as a human.
+Is this enough to kickstart the exponential intelligence explosion that is the Singularity?
+In other words, do we really need a human-level AI to start the Singularity, or will an AI with animal-level intelligence suffice?
+"
+"['tensorflow', 'image-recognition', 'transfer-learning', 'multiclass-classification', 'data-labelling']"," Title: How to deal with images that do not contain any object of interest?Body: I'm currently working on an iOS App where I want to detect if there is a table, chair or bench in the current camera input.
+My idea was to take the MobileNetV2 model and get it to classify these three categories with transfer learning in TensorFlow. Because there are cases where none of these three objects are visible, I would add a fourth class "none" and feed it with random pictures of different unrelated things.
+Is this approach a good idea, or is there a better way of doing this?
+"
+"['tensorflow', 'python', 'keras', 'faster-r-cnn']"," Title: Confusion about faster RCNN neither object nor background labelBody: I am trying to construct a faster RCNN from scratch using KERAS. I am generating the tensor which contains whether anchor at each location corresponds to object or background or neither for training the RPN.
+The output tensor for the RPN is supposed to be of shape H x W x L, where the L dimension encodes whether each anchor corresponds to an object, to background, or to neither, based on IoU thresholds.
+My question is this: what should the label value be for the "neither object nor background" case, and how do I stop the gradient flow for this label?
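+To make the setup concrete, this is roughly how I am building the label tensor right now (my own sketch; the value used for the "neither" anchors is exactly the thing I am unsure about):
+```python
+import numpy as np
+
+POSITIVE_IOU = 0.7   # my thresholds, taken from the Faster R-CNN paper
+NEGATIVE_IOU = 0.3
+
+def rpn_labels(best_ious):
+    """best_ious: array of shape (H, W, L) holding the best IoU of each anchor
+    against the ground-truth boxes. Returns 1 for object, 0 for background,
+    and -1 (a placeholder value) for anchors that are neither."""
+    labels = np.full(best_ious.shape, -1, dtype=np.int8)  # start with "neither"
+    labels[best_ious >= POSITIVE_IOU] = 1                 # object anchors
+    labels[best_ious < NEGATIVE_IOU] = 0                  # background anchors
+    return labels
+```
+In the loss I would then mask out the -1 entries so they contribute no gradient, but I am not sure this is the standard way to do it.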
+"
+"['neural-networks', 'batch-normalization', 'normalisation', 'layer-normalization']"," Title: How to choose proper normalization strategy for the activations?Body: I am reading a survey on various normalization techniques adopted in neural network architectures.
+The purpose of introducing normalization is understandable - to stabilize the training and avoid covariate shifts.
+There is a plethora of proposed approaches:
+
+- Batch Normalization. Probably the most well-known approach. One averages over the batch and spatial dimensions and gets mean and std vectors of size (num_channels,):
+$$
+\mu_c = \frac{1}{NHW}\sum_{n, h, w}^{N, H, W} x_{nchw}
+\quad
+\sigma_c = \sqrt{\frac{1}{NHW}\sum_{n, h, w}^{N, H, W}(x_{nchw} - \mu_c)^2}
+$$
+- Layer Normalization. This technique became very popular after the success of Transformer architectures. One averages over the channel and spatial dimensions and gets mean and std vectors of size (batch_size,):
+$$
+\mu_n = \frac{1}{CHW}\sum_{c, h, w}^{C, H, W} x_{nchw}
+\quad
+\sigma_n = \sqrt{\frac{1}{CHW}\sum_{c, h, w}^{C, H, W}(x_{nchw} - \mu_n)^2}
+$$
+- Instance Normalization. This approach is popular in style transfer applications. One averages over the spatial dimensions only and gets mean and std vectors of size (batch_size, num_channels):
+$$
+\mu_{nc} = \frac{1}{HW}\sum_{h, w}^{H, W} x_{nchw}
+\quad
+\sigma_{nc} = \sqrt{\frac{1}{HW}\sum_{h, w}^{H, W}(x_{nchw} - \mu_{nc})^2}
+$$
+There are many more approaches, but I listed these three as the simplest.
+
+Then there are trainable parameters $\gamma$ and $\beta$, and the final output is:
+$$
+\gamma \left(\frac{x - \mu(x)}{\sigma(x)}\right) + \beta
+$$
+As far as I understand, batch normalization forces the activations to be distributed roughly like $\mathcal{N}(\beta, \gamma)$ (a normal distribution with mean $\beta$ and std $\gamma$). However, there are problems when the batch size is small, since the estimate would be inaccurate. Also, it averages over all images in the batch, but if there are different classes, one would probably like them to be distributed slightly differently. This choice is still the most widely used in CNNs, although some recent work (https://arxiv.org/abs/2102.06171) suggests that this layer can be replaced by another strategy.
+Layer normalization seems to equalize different channels. Is there some intuition why it has to be so? Why do we need to make output activations similar to each other?
+Instance normalization seems to be the most specific in the list, but I have not seen much usage of it outside style transfer and GANs.
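+To make sure I am reading the formulas above correctly, here is a minimal NumPy sketch of the three sets of statistics on an input of shape (N, C, H, W) (the axis choices are my own interpretation):
+```python
+import numpy as np
+
+x = np.random.randn(8, 16, 32, 32)  # (N, C, H, W)
+
+# Batch norm: one mean/std per channel -> shape (C,) = (num_channels,)
+mu_bn, sd_bn = x.mean(axis=(0, 2, 3)), x.std(axis=(0, 2, 3))
+
+# Layer norm: one mean/std per sample -> shape (N,) = (batch_size,)
+mu_ln, sd_ln = x.mean(axis=(1, 2, 3)), x.std(axis=(1, 2, 3))
+
+# Instance norm: one mean/std per sample and channel -> shape (N, C)
+mu_in, sd_in = x.mean(axis=(2, 3)), x.std(axis=(2, 3))
+
+# The affine part is the same in every case, e.g. for batch norm:
+gamma, beta = np.ones(16), np.zeros(16)
+x_hat = (x - mu_bn[None, :, None, None]) / (sd_bn[None, :, None, None] + 1e-5)
+out = gamma[None, :, None, None] * x_hat + beta[None, :, None, None]
+```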
+
+Overall, the ultimate question is - how to choose a particular
+normalization strategy for the given problem and architecture?
+
+"
+"['machine-learning', 'perceptron']"," Title: Is $(y_i - \hat y_i)x_i$, part of the formula for updating weights for perceptron, the gradient of some kind of loss function?Body: A post gives a formula for perceptron to update weights
+
+I understand almost all the parts of it, except for the term $(y_i - \hat y_i)x_i$. Where does it come from? Is it the gradient of some kind of loss function? If yes, what is the definition of that loss function?
+The OP doesn't seem to give the hypothesis $h$ such that $\hat y_i = h(x_i)$.
+However, the following hypothesis seems prevalent:
+\begin{align}
+\hat{y} &= sign(\mathbf{w} \cdot \mathbf{x} + b) \tag{1}\\
+&= sign({w}_{1}{x}_{1}+{w}_{2}{x}_{2} + ... + w_nx_n + b) \\
+\end{align}
+where
+$$
+sign(z) =
+\begin{cases}
+1, & z \ge 0 \\
+-1, & z < 0
+\end{cases}
+$$
+How do I get $(y_i - \hat y_i)x_i$ from function (1)?
+"
+"['reinforcement-learning', 'dynamic-programming', 'policy-improvement', 'policy-improvement-theorem']"," Title: How do we get from conditional expectation on both state and action to only state in the proof of the Policy Improvement Theorem?Body: I'm going through Sutton and Barto's book Reinforcement Learning: An Introduction and I'm trying to understand the proof of the Policy Improvement Theorem, presented at page 78 of the physical book.
+The theorem goes as follows:
+
+Let $\pi$ and $\pi'$ be any pair of deterministic policies such that, for all $s\in S$,
+$q_{\pi}(s,\pi'(s))\geq v_{\pi}(s)$.
+Then the policy $\pi'$ must be as good as, or better than, $\pi$. That is, it must obtain greater or equal expected return from all states $s\in S$:
+$v_{\pi'}(s)\geq v_{\pi}(s)$.
+
+I take it that for the proof, the policy $\pi'$ is identical to $\pi$ except for one particular state $s$ (at each time step) for which we have $\pi'(s)=a\neq \pi(s)$, as suggested by @PraveenPalanisamy in his answer here.
+The proof starts from the statement of the theorem: $v_{\pi}(s)\leq q_{\pi}(s,\pi'(s))$
+Then $q_{\pi}(s,\pi'(s))$ is expanded as $\mathbb{E}[R_{t+1}+\gamma v_{\pi}(S_{t+1})|S_{t}=s,A_{t}=\pi'(s)]=\mathbb{E}_{\pi'}[R_{t+1}+\gamma v_{\pi}(S_{t+1})|S_{t}=s]$
+I don't understand how we got rid of the condition $A_{t}=\pi'(s)$. I don't think it's related to adding the subscript $\pi'$ to the expectation, because that should hold by definition, since for the following time steps we follow policy $\pi$, which is exactly $\pi'$.
+"
+"['deep-learning', 'object-detection', 'data-labelling']"," Title: How to add negative samples for object detection?Body: My question is: how to add certain negative samples to the training dataset to suppress those samples that are recognized as the object.
+For example, suppose I want to train a car detector. All my training images are outdoor images with at least one car. However, when I use the trained detector on indoor images, I sometimes get false positives. How can I add more indoor images (negative samples) to the training dataset to improve the accuracy? Can I just add them without any labeling?
+"
+"['neural-networks', 'reinforcement-learning', 'deep-rl', 'activation-functions', 'hyperparameter-optimization']"," Title: Are there guiding principles as to which activation functions suit a given RL algorithm?Body: Are there rules of thumb as to which activation functions work well (or which one would not) on the policy and value network of a class of RL algorithms? For hidden layers and for the output layer.
+For example, I came across [1], which mentions that ELU is indispensable to MPO [2], and that tanh (as the output activation) is indispensable to SAC's Gaussian policy.
+"
+"['hyperparameter-optimization', 'hyper-parameters', 'feature-selection', 'singular-value-decomposition']"," Title: How many singular vectors do we need to calculate for SVD?Body: In the geometrical interpretation of SVD, the data points that we have need to be imagined as points in high dimensional space (say $d$-dimensional space). But we need to find a hyperplane in $k-$dimensional subspace that best fits the given data points
+
+To gain insight into the SVD, treat the rows of an $n \times d$ matrix $A$ as $n$ points in a $d$-dimensional space and consider the problem of finding the best $k$-dimensional subspace with respect to the set of points.
+
+My doubt here is about the uniqueness of $k$. Can we do the decomposition for any $k \le d$, or only for certain values of $k$, or only for a unique $k$?
+
+The paragraph is taken from the material on Singular Value Decomposition available here.
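+To make the question concrete, this is the kind of truncation I have in mind (a rough NumPy sketch where $k$ is a value I pick myself):
+```python
+import numpy as np
+
+A = np.random.randn(100, 20)                 # n = 100 points in d = 20 dimensions
+U, s, Vt = np.linalg.svd(A, full_matrices=False)
+
+k = 5                                        # the value whose uniqueness I am asking about
+A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # rank-k reconstruction from the top-k singular vectors
+```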
+"
+"['machine-learning', 'deep-learning', 'time-complexity', 'space-complexity']"," Title: How would we get a good estimation of the asymptotic performance of machine learning algorithms?Body: The following question is from the webbook Neural Networks and Deep Learning by Michael Nielson:
+
+How do our machine learning algorithms perform in the limit of very large data sets? For any given algorithm it's natural to attempt to define a notion of asymptotic performance in the limit of truly big data. A quick-and-dirty approach to this problem is to simply try fitting curves to graphs like those shown above, and then to extrapolate the fitted curves out to infinity. An objection to this approach is that different approaches to curve fitting will give different notions of asymptotic performance. Can you find a principled justification for fitting to some particular class of curves? If so, compare the asymptotic performance of several different machine learning algorithms.
+
+The ability to mimic complex curves and fit the data points comes from the non-linearity used; had we only used a linear combination of weights and biases, we would not have been able to mimic them. The output therefore depends a lot on our choice of non-linearity. Suppose we have a model: in one case it overfits and we get an order-5 polynomial, while in another case it underfits and we get a linear model. So how would we get a good estimate of the asymptotic performance, as the author asks?
+"
+"['neural-networks', 'convolutional-neural-networks', 'optimization', 'convolutional-layers', 'dense-layers']"," Title: What gets optimized in convolutional neural network?Body: In a convolutional neural network, the hyperparameters such as number of kernels and stride, kernel size, etc are determined. After some combination of convolutions, ReLU and pooling layer there is the fully connected (FC) layer in the end which yields a classification result. I originally thought that during training the values of kernels would be optimized and that kernels such as edge detection are a result of optimization.
+But at the end, if we have weights to optimize at the FC layer, what is it that gets optimized during training of the CNN? Do both the kernel values and the weights in the FC layer get optimized? If so, it seems like we're dealing with two different types of parameters. How are both trained simultaneously? If not, are there simply sets of kernels known to work that are automatically implemented in CNN modules?
+"
+"['agi', 'research', 'academia', 'open-cog']"," Title: Which topics about/in OpenCog could be researched in a Ph.D. thesisBody: In this interview with Lex Fridman and Ben Goertzel, at 2:23:48, Lex asks about possibilities for young people in the domain of AGI research. Ben Goertzel then answers that there are various possibilities we can find on the OpenCog framework, including Ph.D. theses.
+I was wondering if anyone here knows what exactly he meant when he said we can find Ph.D. theses there. (He talks about them at 2:25:18.)
+(I am considering doing a Ph.D. in AI, so I am interested in finding interesting topics for research)
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'convolutional-neural-networks']"," Title: AI model to predict/generate person's imageBody: I want to make a model that predicts person's shape depending on his son's image.
+My plan is to create a dataset where each data point consists of two images: one of the father or mother and one of the son, and then build a model and train it with this dataset.
+So when I give the model an image of a son, it predicts / generates / draws the father's image.
+
+- Is that possible?
+- If yes, how can I make it? ML? Deep learning? Something else?
+
+I searched a lot but didn't find anything helpful, so any ideas or opinions are welcome.
+"
+"['machine-learning', 'sampling']"," Title: Why is ancestral sampling used in autoregressive models?Body: I have been reading about autoregressive models. Based on what I've read, it seems to me that all autoregressive models use ancestral sampling. For instance, this paper says the following in Abstract:
+
+We introduce a deep, generative autoencoder capable of learning hierarchies of distributed representations from data. Successive deep stochastic hidden layers are equipped with autoregressive connections, which enable the model to be sampled from quickly and exactly via ancestral sampling.
+
+However, what I don't understand is why (as I understand it) all autoregressive models use ancestral sampling. Why is ancestral sampling used in autoregressive models?
+"
+['probability-distribution']," Title: Rotationally independent distributionsBody: Maxwell's theorem states that multivariate normal distribution $\mathcal{N}(\mathbf{0}, \sigma^2\mathbf{I})$ is the only distribution of a random vector that is invariant and have independent components after random rotations (orthogonal transformation).
+Generally, many distributions are rotationally invariant; these are called spherically symmetric distributions. However, any non-normal spherically symmetric distribution has uncorrelated but dependent components.
+Consequently, I am wondering whether there are distributions of vectors that are not necessarily rotationally invariant but whose components remain independent under rotations.
+"
+"['prediction', 'uncertainty-quantification', 'validation']"," Title: What are the standard ways to measure the quality of a set of numerical predictions that include uncertainties?Body: I have a radial basis function that supplies uncertainties (standard deviations) with its predictions, which are numerical values.
+This function is computed for a particular point by computing its relative distance to a large set of other reference points in high dimensional space, and compositing a prediction from them.
+Over the training set I can compute R to get a correlation between prediction and actual. Weights are assigned to each dimension and optimized to maximize R.
+Over a validation set, it seems I'd want to calculate something other than R to measure the model's predictive power, since its predictions are not single values, but ranges.
+"
+['alpha-beta-pruning']," Title: Alpha beta pruning - rules for updating alpha/beta valueBody: I have been working on a problem to which I've applied alpha-beta pruning. While I got most of the answers right, there is one part I'm not quite getting:
+
+Note that I've only provided a part of the tree I'm working on. Node $B$ starts with the following values:
+$B$
+
+- $v = \infty$
+- $\alpha = - \infty$
+- $\beta = \infty$
+
+Now, we push the alpha and beta values down to node $D$ from parent node $B$, and calculate its value:
+$D$
+
+- $v = -\infty$
+- $\alpha = -\infty$
+- $\beta = \infty$
+
+As leaf node $J$ has a value of $-7$, we push that back up to parent node $D$, changing node $D$'s value to $-7$ (as it is better than the old value of $-\infty$), and we also change the $\alpha$ value of node $D$ (as it is also better than the old value of $-\infty$).
+New $D$
+
+- $v = -7$
+- $\alpha = -7$
+- $\beta = \infty$
+
+We now push the value of $-7$ back up to parent node $B$, changing node $B$'s value to $-7$ (as it is better than the old value of $\infty$), and we also change the $\beta$ value of node $B$ (as it is also better than the old value of $\infty$).
+New $B$
+
+- $v = -7$
+- $\alpha = -\infty$
+- $\beta = -7$
+
+We now traverse down to node $E$ (and we don't prune it because node $B$'s value is NOT <= its $\alpha$ value), and we push the alpha and beta values down from node $B$, and calculate its value:
+$E$
+
+- $v = -\infty$
+- $\alpha = -\infty$
+- $\beta = -7$
+
+As leaf node $K$ has a value of $0$, we push that back up to parent node $E$, changing node $E$'s value to $0$ (as it is better than the old value of $-\infty$). Now, this is the point where my confusion lies. According to my understanding, at this point we would also set the $\alpha$ value of node $E$ to $0$ (as it is better than the old value of $-\infty$). However, the answer I received to this question specifies that we do NOT change the $\alpha$ value of node $E$, and rather leave it as $-\infty$.
+Can someone please explain to me why this is the case?
+UPDATE
+I did not originally include the full subtree - this is it:
+
+In this instance, only node M should be pruned. However, my question still stands as to why the answer did not update the alpha value of node E, as no pruning happened in that part of the tree.
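+For reference, the update rules I have been working from look roughly like this (my own sketch, where `node` is a placeholder object exposing is_leaf(), value and children; the relative ordering of the cutoff check and the $\alpha$/$\beta$ update is exactly the part I may be getting wrong):
+```python
+def max_value(node, alpha, beta):
+    if node.is_leaf():
+        return node.value
+    v = float("-inf")
+    for child in node.children:
+        v = max(v, min_value(child, alpha, beta))
+        if v >= beta:           # beta cutoff: the MIN parent will never allow this branch
+            return v
+        alpha = max(alpha, v)   # alpha is only raised if we did not cut off
+    return v
+
+def min_value(node, alpha, beta):
+    if node.is_leaf():
+        return node.value
+    v = float("inf")
+    for child in node.children:
+        v = min(v, max_value(child, alpha, beta))
+        if v <= alpha:          # alpha cutoff
+            return v
+        beta = min(beta, v)
+    return v
+```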
+"
+"['machine-learning', 'terminology', 'datasets', 'math']"," Title: What is the meaning or implications of the rank of a dataset for machine learning algorithms?Body: Consider a dataset with $n$ training examples and $d$ features.
+Let $D_{n \times d}$ be the data matrix and $r$ be the rank of it.
+In matrices, rank $r$ is generally useful in
+
+- Knowing the dimension of (optimal) vector space that can generate the rows or columns of the matrix.
+
+- Knowing the number of linearly independent rows or linearly independent columns in the matrix. Note that the column rank and the row rank are the same for a matrix, and this common value is generally called the rank of the matrix.
+
+
+In fact, both 1 and 2 are the same, just rephrased.
+What is the meaning or implications of the rank $r$ of a dataset $D_{n \times d}$ for machine learning algorithms?
+"
+"['machine-learning', 'data-science']"," Title: Taking a machine learning model to production\deploymentBody: I've designed a machine learning model for the predictive maintenance of machines. The data used for training and testing the ML model is the data from various sensors connected to various parts of the machines. Now, I'm searching for a good approach for deploying the model in the real-time environment as explained here. I did some research and found some information about using real-time data for prediction such as using Kafka. I have some questions unanswered regarding the deployment of the ML model. Following are the details of my system:
+
+- The sensors (pressure, temperature, flow, vibration, etc) are deployed across the parts of the machines.
+- The ML model is trained with historical data.
+- For predictive maintenance (anomaly detection), streams of data will be available via MQTT. As there are 3000 machines, the volume of data will be very high.
+
+My questions are:
+
+- Where would be the best place to perform the prediction operation: at the factory premises where the machines are located (edge computing), at our office (which designs the ML model), or on a cloud server? I am asking this with regard to operational cost.
+- Is there any way to estimate the effectiveness of the complete system (full-stack ML architecture)?
+
+"
+"['image-recognition', 'facial-recognition', 'object-tracking']"," Title: Is it possible to create a simple face-tracking app that can monitor how much time one spends at their desk?Body: Context: I'm an experienced programmer with a graduate education in AI and previous CUDA programming experience. I'm versed in Machine Learning but am out of the loop -- I've not used any of the modern software packages of the last 10 years.
+Question: Is it possible using modern AI software to easily create a face-tracking application that can use a webcam to track the amount of time spent at one's desk.
+My environment is Fedora Linux. I also have an NVidia GTX 1660 for acceleration.
+To make this question and answer precise, I've narrowed it to the following sub-questions:
+As of June 2021,
+
+- Is there existing software that one can simply "set up" with a small amount of programming work (or none at all) that would facilitate training a video classifier from webcam recordings?
+
+- How does one provide training examples to this software, or how is data labeled? Does it provide some sort of GUI or accessory tool to label still frames or video sequences?
+
+- Does said software provide "hooks" or an event API so that one may invoke code on the event of e.g. a classifier edge?
+
+- Finally (and consider this optional), would it be realistic for a seasoned programmer to accomplish such a project using said software in about 30 hours? I understand that this is subjective -- just assume a graduate student in the AI field and ballpark terms. Or, answer in terms of the software's intended audience.
+
+
+First posting in this community, so just offer guidance if you'd like to see this question refined.
+"
+"['objective-functions', 'activation-functions', 'perceptron']"," Title: An explanation involving the sign activation, its affect on the loss function, and the perceptron and perceptron criterion: what is this saying?Body: I am currently studying the textbook Neural Networks and Deep Learning by Charu C. Aggarwal. Chapter 1.2.1.3 Choice of Activation and Loss Functions says the following:
+
+The classical activation functions that were used early in the development of neural networks were the sign, sigmoid, and the hyperbolic tangent functions:
+$$\Phi(v) = \text{sign}(v) \ \ \text{(sign function)} \\ \Phi(v) = \dfrac{1}{1 + e^{-v}} \ \ \text{(sigmoid function)} \\ \Phi(v) = \dfrac{e^{2v} - 1}{e^{2v} + 1} \ \ \text{(tanh function)}$$
+While the sign activation can be used to map to binary outputs at prediction time, its non-differentiability prevents its use for creating the loss function at training time. For example, while the perceptron uses the sign function for prediction, the perceptron criterion in training only requires linear activation.
+
+I am having trouble understanding this part:
+
+While the sign activation can be used to map to binary outputs at prediction time, its non-differentiability prevents its use for creating the loss function at training time. For example, while the perceptron uses the sign function for prediction, the perceptron criterion in training only requires linear activation.
+
+I've read over this a number of times, but I still don't have a good idea of what it is saying (or at least the point it is trying to make). What is this actually saying? What is the point this is trying to make? Perhaps a more detailed explanation of what this is saying will clarify it for me.
+"
+"['deep-learning', 'convolutional-neural-networks', 'computer-vision', 'objective-functions', 'unsupervised-learning']"," Title: Loss function to Push response value towards extremesBody: I have a feature map whose values are in the range of [0,1]. I want to push these values either towards extreme 0 or 1 using some loss function. Since I don't have any target value so it had to be in unsupervised way. I want to visualize this feature map in a way that either pixels values are approaching 1 or 0. One possible technique is to use entropy loss. What are other possible techniques used in Loss function to get extreme pixel values?
+"
+"['reinforcement-learning', 'value-functions', 'value-iteration']"," Title: Is it possible to have values of the states equal to $0$ at the end of the value iteration?Body: I am new to Reinforcement Learning and I am trying to self learn it. I have already posted some quesiton here and your answershave been really useful to me, so here I am posting another one.
+I am studying the value iteration, and while doing the simulation using python, I get that at some states it is associated a value of $0$. I think I have to mention that I have tried to assign to the ststes an initial value different from zero, in order to simulate the fact that the agent already have some information about the enviroment before starting.
+So, my questio is:
+Is it possible to have values of the states equal to $0$ at the end of the value iteration?
+"
+"['neural-networks', 'reinforcement-learning', 'deep-rl', 'policy-gradients', 'reinforce']"," Title: How does the neural network learn when used in the REINFORCE algorithm?Body: As per my understanding, you run an entire episode, which contains many steps, and then back-propagate using just a single loss value. How does the neural network learn to differentiate between good and bad actions?
+"
+['natural-language-processing']," Title: Is there any dataset to convert text to sign language?Body: I'm going to start working on one university project and I would like to ask a question regarding it. My project is about "Sign language synthesis from NLP" and I need to develop an application where:
+
+- Take spoken language from user microphone
+- Recognize a word with an algorithm and convert words to sign language
+
+Output should be images with sign language.
+For instance, if we say "I go home", we should have images of those words in sign language.
+My question is: is there any dataset you would recommend for getting the sign language images?
+"
+['neural-networks']," Title: When does a neural network have a single and when does it have multiple outputs?Body: What I understand is, each input in a neural network is a feature.
+However, what I don't understand is, when we need multiple outputs in a neural network.
+For example, say, if we are classifying cats and dogs, only one output is enough. 0 = cat, 1 = dog.
+When does a neural network have a single and when does it have multiple outputs?
+"
+"['singular-value-decomposition', 'vector-space']"," Title: What is the role of left singular vectors in SVD?Body: SVD decomposition of a data matrix $A$ of order $n \times d$ and rank $r$ can be expressed as follows
+$$A_{n\times d} = U_{n\times r}D_{r \times r}V^{T}_{r \times d}$$
+The rows of the data matrix $A$ are the data points in $d$ dimensional space. Thus, there are $n$ points in $d$ dimensional space.
+The matrix $V$ contains $r$ right singular vectors as columns. Right singular vectors are orthonormal and forms a $r-$dimensional subspace that best fits the given $n$ data points.
+The matrix $D$ is a diagonal matrix that contains the singular values. The singular values signify the least-squares fit (loss) of the $n$ data points on the subspace spanned by the $r$ right singular vectors.
+The matrix $U$ contains left singular vectors as columns. Left singular vectors are also orthonormal.
+But what do the $r$ left singular vectors signify?
+"
+"['natural-language-processing', 'terminology']"," Title: Which tasks are called as downstream tasks?Body: The following paragraph is from page no 331 of the textbook Natural Language Processing by Jacob Eisenstein. It mentions about certain type of tasks called as downstream tasks. But, it provide no further examples or details regarding these tasks.
+
+Learning algorithms like perceptron and conditional random fields
+often perform better with discrete feature vectors. A simple way to
+obtain discrete representations from distributional statistics is by
+clustering, so that words in the same cluster have similar
+distributional statistics. This can help in downstream tasks, by
+sharing features between all words in the same cluster. However, there
+is an obvious tradeoff: if the number of clusters is too small, the
+words in each cluster will not have much in common; if the number of
+clusters is too large, then the learner will not see enough examples
+from each cluster to generalize.
+
+Which tasks in artificial intelligence or NLP are called downstream tasks?
+"
+"['neural-networks', 'objective-functions', 'definitions', 'perceptron']"," Title: Is the formula $\frac {1}{s}\sum _{j=1}^{s}|d_{j}-y_{j}(t)|$ the correct form of 0-1 loss function, in the context of Perceptron?Body: Per page 7 of this MIT lecture notes, the original single-layer Perceptron uses 0-1 loss function.
+Wikipedia uses
+$${\displaystyle {\frac {1}{s}}\sum _{j=1}^{s}|d_{j}-y_{j}(t)|} \tag{1}$$
+to denote the error.
+Is the formula (1) the correct form of 0-1 loss function?
+"
+"['neural-networks', 'perceptron']"," Title: What are the labels in figure1 in the Paper ""The perceptron: A probabilistic model for information storage and organization in the brain""?Body: This figure
+
+comes from The perceptron: A probabilistic model for information storage and organization in the brain
+I guess the first circle (neuron) is labeled RETINA and the second is labeled the perceptron area; what about the third one? What are the labels pointed to by the arrows?
+"
+"['agi', 'ai-safety', 'value-alignment']"," Title: The only convergent instrumental goal for self modifying AIBody: Conjecture: regardless of the initial reward function, one of the winning strategies would be to change the reward function to a simpler one (e.g. "do nothing"), thus getting a full reward for each passing unit of time. For such an agent, the only priority would be to prolong its existence (to maximize the overall reward collected).
+So:
+
+- The notion of externally defined reward function is incompatible with the concept of self-adjusting AGI.
+- Any AGI will always settle on self-preservation as its only goal
+- It is therefore impossible to create an AGI with benevolence towards humans "built-in". Instead, the problem of AI alignment should be reformulated in terms of "what changes to the environment (the physical reality that the AGI shares with humans) would irreversibly tie the wellbeing of humanity to the AGI's existence".
+- Since any "kill switch" or similar artificial measure is not irreversible and can be overcome by a super-intelligent agent, the only way to tie AGI existence and human wellbeing is the modification of laws of physics, logic, and reasoning. Which is impossible.
+- AI alignment is impossible
+
+What flaws do you see in this line of reasoning?
+"
+"['image-processing', 'feature-extraction', 'c++', 'opencv', 'sift']"," Title: In SIFT, how is the coordinate system being rotated?Body: I need to understand how SIFT calculates the descriptors for the keypoints.
+Intuitively, I understand that it takes each keypoint, calculates the gradients for each pixel in a neighborhood of the keypoint, and that's basically the descriptor for the keypoint. The paper mentions a coordinate system rotation at the keypoint; I assume this is so that, when the image is rotated, the keypoint descriptor doesn't change.
+My question:
+I'm following this implementation of SIFT. In the part of the calculation function, there is this cos/sin calculation:
+I think this is related to the coordinate system rotation. Can you explain how the coordinate system is being rotated? What does that have to do with hist_width?
+"
+"['deep-learning', 'math']"," Title: What math should I learn before and while using and applying deep learning?Body: I want to learn deep learning. After researching a little, I came to the conclusion that I need a lot of math. I've started a linear algebra course, and it takes a long time (2-3 weeks). I want to start using and applying deep learning to solve problems in this summer, but I assume I would not have enough time to learn all subjects (linear algebra, statistics and probability and calculus 1).
+So, what math should I learn before and while using and applying deep learning?
+"
+"['natural-language-processing', 'books']"," Title: Example of lemma having multiple boldface formsBody: Number of lemmas can be used as a rough measure for the number of words in a language. A lemma can have multiple word-form types. It can be understood from the following paragraph taken from p12 of Regular Expressions,Text Normalization, Edit Distance
+
+Another measure of the number of words in the language is the number
+of lemmas instead of wordform types. Dictionaries can help in giving
+lemma counts; dictionary entries or boldface forms are a very rough
+upper bound on the number of lemmas (since some lemmas have multiple
+boldface forms). The 1989 edition of the Oxford English Dictionary had
+615,000 entries.
+
+It is also stated that a lemma can have multiple boldface forms. What are the boldface forms referred to here? Are they different from wordforms?
+If possible, provide an example for lemma having multiple boldface forms.
+"
+"['reinforcement-learning', 'deep-rl', 'rewards', 'return', 'sample-efficiency']"," Title: How do I represent sample efficiency of RL rewards in mathematical notation?Body: I define sample efficiency as the area under the curve/graph, where $x$-axis is the number of episodes while y-axis is the cumulative reward for that episode. I would like to formally define it with a mathematical function.
+If the notation for cumulative reward for $x$th episode is:
+$$R_x = \sum_{t=0}^{t=T} r_t,$$
+where $r_t$ is the reward at timestep $t$ and $T$ is the maximum number of steps per episode.
+So is the equation for area under the graph/curve the one below?
+$$\text{Sample Efficiency} =\int_{a}^{b} R_x \ dx$$
+I will be just using a Python library to get the area under the graph which uses Simpson's rule for integrating.
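+Concretely, the computation I plan to do is roughly this (assuming a recent SciPy, where the Simpson's rule function is called simpson; older versions call it simps):
+```python
+import numpy as np
+from scipy.integrate import simpson
+
+episode_returns = np.array([1.0, 3.0, 2.5, 4.0, 5.5, 6.0])  # R_x, dummy values for illustration
+episodes = np.arange(1, len(episode_returns) + 1)            # x-axis: episode index
+
+sample_efficiency = simpson(episode_returns, x=episodes)     # area under the curve
+print(sample_efficiency)
+```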
+"
+"['reinforcement-learning', 'value-iteration', 'policy-improvement']"," Title: In value iteration, what happens if we try to obtain the greedy policy while looping through the states?Body: I am referring to the Value Iteration (VI) algorithm as mentioned in Sutton's book below.
+
+Rather than getting the greedy deterministic policy after VI converges, what happens if we try to obtain the greedy policy while looping through the states (i.e. using the argmax equation inside the loop)? Once our $\Delta < \theta$ and we break out of the loop, do we have an optimal policy from the family of optimal policies? Is this a valid thing to do?
+I implemented the gambler's problem exercise mentioned in Sutton's book. The policies obtained after using standard VI and the method I described above are mostly similar, yet different for some states.
+"
+"['neural-networks', 'hyperparameter-optimization', 'cross-validation', 'multi-label-classification', 'k-fold-cv']"," Title: Is it valid to implement hyper-parameter tuning and THEN cross-validation?Body: I have a multi-label classification task I am solving. I have done hyperparameter tuning (with Keras Tuner) to determine the best configuration for my neural network.
+Is it valid to do this (determine the best hyper-parameters) and then do cross-validation to get a more accurate test estimation of the dataset?
+I don't see how this would be invalid, given that the cross-validation examples I have seen already have network architectures known a priori, presumably because this is what they chose or feel is the best way of proceeding.
+For hyperparameter tuning, all data is split into training and test sets - the training set is further split, when fitting the model, for a 10% validation set - the optimal model is then used to predict on the test set.
+For k-fold cross-validation, all data (same as above) is used, but I just split (with sklearn) the data into training and test datasets (so no validation dataset). The test set is used to determine the model performance at each iteration of k-fold cross-validation.
+"
+"['reinforcement-learning', 'terminology', 'bellman-equations', 'bellman-operators']"," Title: What do the terms 'Bellman backup' and 'Bellman error' mean?Body: Some RL literature use terms such as: 'Bellman backup' and 'Bellman error'. What do these terms refer to?
+"
+['neural-networks']," Title: What type of neural network do I need?Body: I am working on protein structure prediction.
+Suppose, I am solving a problem using Neural Networks. I know how many inputs and outputs there will be in the model, as it directly depends on the problem statement.
+However, how do I know:
+
+"
+"['neural-networks', 'convolutional-neural-networks', 'regression']"," Title: Regress values inside the bounding boxes to predict a value in Object DetectionBody: I am currently working on an object detection task. I have a dataset of Grayscale and Depth Images. The annotation format is x1, y1, x2, y2, class, depth. I have calculated this depth (of each object/bounding box) using a clustering algorithm and depth image.
+My plan is to use the Grayscale images to detect the bounding boxes and the class labels using a pre-trained CNN.
+Furthermore, I want to use the depth images to predict the depth (ground truth values in the dataset as mentioned above). For this task, my plan is to build a Regression-based Neural Network that regresses the depth values inside the bounding boxes of the depth image and compares them to the ground truth values. An RMSE loss function can be used to keep track of the predictions.
+How do I go about making this NN and is there a better alternative?
+"
+"['reinforcement-learning', 'q-learning', 'markov-decision-process', 'reward-functions', 'exploration-strategies']"," Title: In addition to the reward function, which other functions do I need to implement Q-learning?Body: In general, $Q$ function is defined as
+$$Q : S \times A \rightarrow \mathbb{R}$$
+$$Q(s_t,a_t) = Q(s_t,a_t) + \alpha[r_{t+1} + \gamma \max\limits_{a} Q(s_{t+1},a) - Q(s_t,a_t)] $$
+$\alpha$ and $\gamma$ are hyper-parameters. $r_{t+1}$ is the reward at next time step. $Q$ values are initialized arbitrarily.
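+For reference, this is how I currently picture the update in code (a rough tabular sketch of my own; the choice of behaviour policy is exactly the extra piece I am asking about):
+```python
+import numpy as np
+
+def q_learning_episode(env, Q, alpha=0.1, gamma=0.99, epsilon=0.1):
+    """One episode of tabular Q-learning; Q has shape (num_states, num_actions).
+    `env` is a placeholder environment exposing reset() and step(action)."""
+    s = env.reset()
+    done = False
+    while not done:
+        # behaviour policy (epsilon-greedy here) -- is this the missing function?
+        if np.random.rand() < epsilon:
+            a = np.random.randint(Q.shape[1])
+        else:
+            a = int(np.argmax(Q[s]))
+        s_next, r, done = env.step(a)
+        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
+        s = s_next
+    return Q
+```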
+In addition to the reward function, which other functions do I need to implement Q-learning?
+"
+"['reinforcement-learning', 'alphago-zero']"," Title: Is there a way to beat AlphaGo Zero with different method?Body: As I read the research from
+https://deepmind.com/research
+It seems AlphaGo Zero uses zero human knowledge and relies on reinforcement learning to improve its playing skill.
+Is there a way to beat AlphaGo Zero? Can anyone share an idea?
+My friend said we can find a specific move to beat AlphaGo Zero.
+From my point of view, I think the only way to beat AlphaGo Zero is computational power.
+But I can't find any other way to beat AlphaGo Zero; I hope there are many other ways to beat it.
+Here is a related topic:
+Would AlphaGo Zero become perfect with enough training time?
+"
+"['reinforcement-learning', 'value-iteration', 'policy-iteration']"," Title: Can we combine policy evaluation and value iteration steps for solving model-based MDP?Body: In Sutton & Barto (2nd edition), at the very end on page 83, the following is mentioned:
+
+In general, the entire class of truncated policy iteration algorithms can be thought of as sequences of sweeps, some of which use policy evaluation updates and some of which use value iteration updates.
+
+and this on the beginning of page 84:
+
+max operation is added to some sweeps of policy evaluation.
+
+I understand that the entire class of truncated policy iteration algorithms can be classified as generalized policy iteration (GPI). Also, I know value iteration (VI) is a combination of one sweep of policy evaluation (PE) and one sweep of policy improvement.
+My question: What do we mean by combining multiple PE and VI updates in truncated policy iteration?
+"
+"['neural-networks', 'convolutional-neural-networks', 'optimization', 'filters', 'pruning']"," Title: Is pruning only applicable to convolutional neural networks?Body: This article talks about pruning in the context of convolutional neural networks:
+
+One of the first methods of pruning is pruning entire convolutional filters. Using an L1 norm of the weight of all the filters in the network, they rank them. This is then followed by pruning the ‘n’ lowest ranking filters globally. The model is then retrained and this process is repeated.
+There also exist methods for implementing structured pruning for a more light-touch approach of regulating the output of the method. This method utilizes a set of particle filters that are the same in number as the number of convolutional filters in the network.
+
+Is pruning only applicable to CNNs?
+"
+"['supervised-learning', 'contextual-bandits', 'exploration-strategies']"," Title: (explore-exploit + supervised learning ) vs contextual banditsBody: Lets take an ad recommendation problem for 1 slot. Feedback is click/no click. I can solve this by contextual bandits. But I can also introduce exploration in supervised learning, I learn my model from collected data every k hours.
+What can contextual bandits give me in this example which supervised learning + exploration cannot do?
+"
+"['reinforcement-learning', 'q-learning', 'terminology', 'off-policy-methods', 'exploration-strategies']"," Title: Which policy do I need to use in updating Q function?Body: Policy function can be of two types: deterministic policy and stochastic policy.
+Deterministic policy is of the form $\pi : S \rightarrow A$
+Stochastic policy is defined using conditional probability distributions, and I generally remember it as $\pi: S \times A \rightarrow [0,1]$. (I personally don't know whether this function prototype is correct or not.)
+I am guessing that both types of policies can be used for Q-learning. As one can read from this answer, both the reward and the policy function are needed to implement the $Q$-learning algorithm:
+
+In addition to the RF, you also need to define an exploratory policy
+(an example is the $\epsilon$-greedy), which allows you to explore the
+environment and learn the state-action value function $\hat{q}$.
+
+I have no doubt about the necessity of the reward function, as it is obvious from the update equation of $Q$.
+And coming to the usage of the policy, you can find it in line 5 of the pseudocode provided in the answer:
+
+Choose $a$ from $s$ using policy derived from $Q$
+
+One can notice that a policy is used for computing $Q$, and updating $Q$ also needs a policy.
+Hence, I concluded that the correct statement for line 5 of the pseudocode has to be
+
+Choose $a$ from $s$ using policy derived from $Q$ updated so far
+
+Is my conclusion correct? If not, how is it possible to break this cyclic dependency between the policy and the $Q$ function?
+"
+"['convolutional-neural-networks', 'object-detection']"," Title: Vector input to CNN for object detectionBody: I am training a 3D object detection network (Retinanet-based as of the moment) for re-detecting tracked objects. I would like to be able to add the velocity vector of the tracked object as an input to the detection network, as the velocity directly informs the direction along which the principal axis of the 3D bounding box should lie. I would like to include this information as early as possible (i.e. pass in as an input feature map) rather than simply adding it at the end with a few fully connected layers.
+Is there a good or established way to encode such a vector in a feature map?
+"
+"['neural-networks', 'convolutional-neural-networks']"," Title: What is a Silhouette Neural NetworkBody: I was going through a study in which I found something called a dilated Silhouette Neural Network. I want to know what it is, what it can do, and how it is better from a CNN?
+Link to the journal: Link
+"
+"['machine-learning', 'terminology', 'statistical-ai']"," Title: Is there any model that is probabilistic but not statistical?Body: While studying about the n-gram models, I encountered the terms "statistical model" and "probabilistic model" several times.
+I have a basic doubt: can there be any probabilistic model that is not statistical, if we restrict ourselves to models that work on datasets?
+In machine learning, we use datasets. Any model that uses a dataset can be called a statistical model, since statistics is a branch of mathematics that tries to find insights related to data.
+All the models that calculate probabilities using datasets, for any task, are called (empirical) probabilistic models.
+Thus, if I am not wrong, every probabilistic model has to be a statistical model since it uses data. Am I wrong?
+Is there any model in literature that is a statistical model but not probabilistic?
+"
+"['neural-networks', 'convolutional-neural-networks', 'image-processing']"," Title: Convolutional Neural Network (CNN) with Tree architecture to organize the number of classesBody: At the moment, I have around 1.000 classes with accuracy and loss that are acceptable. In the long term, there could be more than 100.000 classes. The main problem is that every time a new class is needed, the model needs to be rebuilt.
+For this, I made a POC with a Siamese Network with the goal that new classes can be added without the need to rebuild. The results were not what I expected, and probabilities are a must. As far as I know, this could not be done with this network. The conclusion was that this was not the best option for this case.
+Before I start implementing, I would appreciate some feedback and second opinion on the following architecture:
+The next thing I would do is build a hierarchy chain of CNN’s. The structure is already available in a database and I could automate the build of the CNN’s to a certain level.
+The first CNN could have 4 “main” classes. Based on the probability, the next layer will be determined.
+Then the second CNN would have 50 to 200 classes. Based on the probability, the next layer will be determined as well.
+Then the last layer would be a CNN with up to 1.000 classes. In case there are more, this could be divided even further.
+This way, I could gradually build up the model without the need to rebuild everything (last layer). And the first and second layer only needs to be rebuilt if the accuracy and probability start dropping.
+I found a paper with a similar proposal, but could not find feedback or experiences of others. Is this something that is feasible? What could be the problems I will face with a structure like this? Or would you tackle this problem in another way?
+"
+"['reinforcement-learning', 'monte-carlo-methods']"," Title: GLIE MC control (reinforcement learning): how the policy affects evaluation?Body: In his lecture 5 of the course "Reinforcement Learning", David Silver introduced GLIE Monte-Carlo Control.
+
+I understand that we do policy evaluation for one step and then policy improvement. My question is how does the improved policy come into play in this GLIE algorithm?
+Is Gt (the return) based on the policy somehow? Is that where the new policy comes in?
+Asked another way, how are policy evaluation and policy improvement connected in this image?
+"
+"['generative-model', 'artificial-creativity']"," Title: What approaches are there to generate complex structures like syntactic trees?Body: What approaches are there to generate and evaluate complex structures like, let's say, syntactically correct code? I know the approach of Genetic Programming (GP) as a type of Evolutionary Algorithm, but I wonder if there are any other techniques that are being used to produce complex structures more efficiently.
+Note that the syntactically correct code example is just that, an example. The code wouldn't be generated to solve a specific task, although it could try to maximize a fitness function. We could be talking about 3D models, music compositions, etc. What interests me about this issue is if there are Computational Creativity techniques being used or researched in the last years, apart from the mentioned GP.
+"
+"['deep-rl', 'actor-critic-methods', 'loss', 'soft-actor-critic', 'learning-curve']"," Title: How to interpret the training loss curves in Soft-Actor-Critic (SAC)?Body: I am using stable-baseline3 implementation of the Soft-Actor-Critic (SAC) algorithm. The plotted training curves look promising. However, I am not fully sure how to interpret the actor and critic losses. The entropy coefficient $\alpha$ is automatically learned during training. As the entropy decreases, the critic loss and actor loss decrease as well.
+
+- How does the entropy coefficient affect the losses?
+- Can this be interpreted as the estimations becoming more accurate as the focus is shifted from exploration to exploitation?
+- How can negative actor losses be interpreted, what do actor losses tell in general?
+
+
+Thanks a lot in advance
+"
+"['classification', 'image-segmentation']"," Title: What are the existing AI methods to approach 3D volumes of computed tomography?Body: I have a dataset which consists of computed tomography images (CT scans) of parts that contain pores and cracks. The sets for each part are of about 1100 * 1100 * 3000-ish resolution. Currently, I use a method of thresholding and calculations to find the volumes and locations of these defects, and I would like to reproduce those results with a machine learning approach.
+What are the methods known for this type of problem, and what are your general recommendations?
+Edit:
+
+- Here is the current method I am using :
+
+- And this is what I aim to achieve :
+
+
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'recurrent-neural-networks', 'pretrained-models']"," Title: How to design a neural network with arbitrary input and output length?Body: I am trying to build a neural network that has an input of $n$ pairs of integer values (where $n$ is random) and a corresponding output of a binary array with length $n$.
+The input will be a set of integer value coordinates $[(x_{1}, y_{1}), (x_{2}, y_{2}), (x_{3}, y_{3}), \dots, (x_{50}, y_{50}), \dots]$, where each instance can be of various lengths, like $[(x_{1}, y_{1}), (x_{2}, y_{2}), (x_{3}, y_{3}), \dots, (x_{52}, y_{52})]$ or $[(x_{1}, y_{1}), (x_{2}, y_{2}), (x_{3}, y_{3}), \dots, (x_{101}, y_{101})]$, etc.
+The output is a set of binary arrays with each instance having the same length as the corresponding input.
+
+May I know if anyone has any recommendations on what neural network would fit this use case?
+"
+"['reinforcement-learning', 'q-learning', 'off-policy-methods', 'exploration-exploitation-tradeoff', 'exploration-strategies']"," Title: Which policy has to be followed by a player while construction of its own Q-table?Body: Consider the scenario, where there are two players. One of the players perform the action randomly, whereas I want second player as a Q-player. I mean, the player selects a best action from the Q-table for given state i.e., the action with maximum Q-value. So, in case of second player, a Q-table is required.
+It is known that the Q-table has to be constructed by running several episodes after its arbitrary initialization. So, the two players have to play with some policy in order to construct a Q-table for player two. The first player uses a random policy; my doubt is about the policy of the second player while the Q-table is being constructed.
+My doubt concerns the policy of the second player only. I know the policy he needs to follow after the Q-table has been fully updated; I am not sure about the policy he needs to follow while the Q-table is still being updated.
+Which policy does my second player need to follow while constructing the Q-table for himself? Can he use a random policy like player one? Or does he need to use the arbitrarily initialized or partially updated Q-table itself for selecting the best action? Or can he use some other policy until the Q-table update is complete?
+"
+"['natural-language-processing', 'terminology', 'books', 'test-datasets', 'validation-datasets']"," Title: Are the held-out datasets used for testing, validation or both?Body: I came across a new term "held-out corpora" and I confused regarding its usage in the NLP domain
+Consider the following three paragraphs from N-gram Language Models
+#1: held-out corpora as a non-train data
+
+For an intrinsic evaluation of a language model we need a test set. As
+with many of the statistical models in our field, the probabilities of
+an $n-$gram model come from the corpus it is trained on, the training
+set or training corpus. We can then measure the quality
+of an n-gram model by its performance on some unseen data called the
+test set or test corpus. We will also sometimes call test sets and
+other datasets that are not in our training sets held out corpora
+because we hold them out from the training data.
+
+This paragraph clearly says that held-out corpora can be used for testing, validation, or anything else except training.
+#2: development set or devset for hyperparameter tuning
+
+Sometimes we use a particular test set so often that we implicitly
+tune to its characteristics. We then need a fresh test set that is
+truly unseen. In such cases, we call the initial test set the
+development test set or,devset. How do we divide our data into
+training, development, and test sets? We want our test set to be as
+large as possible, since a small test set may be accidentally
+unrepresentative, but we also want as much training data as possible.
+At the minimum, we would want to pick the smallest test set that gives
+us enough statistical power to measure a statistically significant
+difference between two potential models. In practice, we often just
+divide our data into 80% training, 10% development, and 10% test.
+Given a large corpus that we want to divide into training and test,
+test data can either be taken from some continuous sequence of text
+inside the corpus, or we can remove smaller “stripes” of text from
+randomly selected parts of our corpus and combine them into a test
+set.
+
+This paragraph clearly says that development set is used for hyperparameter tuning.
+#3: held-out corpora for hyperparameter tuning
+
+How are these $\lambda$ values set? Both the simple interpolation and
+conditional interpolation $\lambda'$s are learned from a held-out
+corpus. A held-out corpus is an additional training corpus that we
+use to set hyperparameters like these $\lambda$ values, by choosing
+the $\lambda$ values that maximize the likelihood of the held-out
+corpus.
+
+This paragraph clearly says that the held-out corpus is used for hyperparameter tuning.
+I am interpreting or understanding the terms as follows:
+Train corpus is used to train the model for learning parameters.
+Test corpus is used for evaluating the model wrt parameters.
+Development set is used for evaluating the model wrt hyperparameters.
+Held-out corpus includes any corpus outside training corpus. So, it can be used for evaluating either parameters or hyperparameters.
+To be concise, informally, data = training data + held-out data = training data + development set + test data
+Is my understanding correct? I got confused because of paragraph 3, which says that the held-out corpus is used (only) for learning the hyperparameters, while paragraph 1 says that the held-out corpus includes any corpus outside the training corpus. Do held-out corpora include the devset, or are they the same as the devset?
+"
+"['computer-vision', 'papers', 'image-processing', 'image-super-resolution', 'tecogan']"," Title: Why is AI Super Resolution Reconstruction more than just guessing?Body: I saw a video on Youtube about AI and Super Resolution Image Reconstruction with TecoGAN. I must say I am impressed.
+Now, I am wondering how reliable this is.
+I have learned at university that you lose information if you do not sample at the Nyquist rate. I also don't think that the example images are in any way sparse...
+Is the AI just trying to fill in the blanks by guessing?
+This would be fine for entertainment, but probably not so much to enhance robbery pictures and charge people based on enhanced pictures. It also wouldn't be a good solution for improving the resolution of scientific data if it is just "guessing".
+"
+"['reinforcement-learning', 'dqn']"," Title: In a DDQN architecture, why is the value of a state assumed to be the average of the Q values of the actions?Body: In a Dueling DQN agent (Wang et al.), the Q function is decomposed as
+$$
+Q(s, a)=V(s) + A(s, a) - \frac{1}{|A|}\sum_{a'\in \mathcal{A}}A(s, a')
+$$
+representing the value of the state, plus the advantage of the action, minus the average advantage of all actions available in that state.
+However, this formulation means that the value of the given state is skewed, even after subtracting out the mean advantage of all actions available. Why isn't it set up so that the advantage of the best action is 0 (with the other actions' advantages being negative), thus leading to a more accurate $V(s)$?
+"
+"['genetic-algorithms', 'genetic-programming', 'path-finding']"," Title: What exactly is the population in the problem of finding the best path in a network of nodes using genetic algorithms?Body: I have 17 nodes in my network with 3000 different paths in total. I have to select the path with highest available bandwidth, using genetic algorithm. I'm confused about the approach! Should I have all paths as the population, or should I create a population same size as the nodes(17).
+"
+"['computer-vision', 'adversarial-ml', 'captcha']"," Title: Why adversarial images are not the mainstream for captchas?Body: In order to check, whether the visitor of the page is a human, and not an AI many web pages and applications have a checking procedure, known as CAPTCHA. These tasks are intended to be simple for people, but unsolvable for machines.
+However, some text recognition challenges are often difficult, like discerning badly rendered, overlapping digits, or telling whether a bus is in the CAPTCHA image.
+As far as I understand, so far, robustness against adversarial attacks is an unsolved problem. Moreover, adversarial perturbations are rather generalizable and transferrable to various architectures (according to
+https://youtu.be/CIfsB_EYsVI?t=3226).
+This phenomenon is relevant not only to DNNs but also to simpler linear models.
+With the current state of affairs, it seems to be a good idea to make CAPTCHAs from these adversarial examples: the classification problem would be simple for a human, without the need to make several attempts to pass the test, but hard for an AI.
+There is some research in this field and proposed solutions, but they seem not to be very popular.
+Are there some other problems with this approach, or the owners of the websites (applications) prefer not to rely on this approach?
+"
+"['neural-networks', 'reinforcement-learning', 'objective-functions']"," Title: Can people set loss function of neural network by themselves instead of choosing cross entropy or mean square error?Body: I found people used deep neural network to get optimal policy by solving a nonconvex optimization problem. Moreover, they didn't use any set of training data and claimed that it's the difference between their approach and the supervised learning. I wonder can people set loss function of neural network by themselves instead of choosing cross entropy or mean square error?
+My experience in machine learning is very limited. I audited two machine learning courses offered by the applied math department at my school. I read twenty or more papers on the application of machine learning. I began to use Keras very recently.
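+To make my question concrete, here is a minimal sketch of what I mean by defining a loss myself in Keras, since that is the library I use (the model shape and the extra penalty term are only illustrative assumptions, not taken from any particular paper):
+import tensorflow as tf
+from tensorflow import keras
+
+# A custom loss only has to map (y_true, y_pred) to a scalar tensor.
+def my_custom_loss(y_true, y_pred):
+    squared_error = tf.reduce_mean(tf.square(y_true - y_pred))
+    magnitude_penalty = 0.01 * tf.reduce_mean(tf.square(y_pred))  # illustrative extra term
+    return squared_error + magnitude_penalty
+
+model = keras.Sequential([
+    keras.layers.Dense(32, activation="relu", input_shape=(10,)),
+    keras.layers.Dense(1),
+])
+model.compile(optimizer="adam", loss=my_custom_loss)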
+"
+"['generative-model', 'resource-request']"," Title: Book(s) on generative modelsBody: Generative models in artificial intelligence span from simple models like Naive Bayes to the advanced deep generative models like current day GANs. This question is not about coding and involves only science and theoretical part only.
+Are there any standard textbooks that cover these topics from the basics to the advanced level?
+"
+"['neural-networks', 'machine-learning', 'reinforcement-learning', 'training-datasets']"," Title: Can people use neural networks without providing the set of training data?Body: It seems that neural networks (NNs) can be applied to supervised learning, unsupervised learning and reinforcement learning. Some people even train neural networks without the set of training data. If NNs are used in reinforcement learning, is it possible that we don't need training data?
+"
+"['bayesian-deep-learning', 'uncertainty-quantification', 'single-shot-multibox-detector', 'epistemic-uncertainty']"," Title: Does MobileNet SSD v2 only capture aleatoric uncertainty (and so not the epistemic one)?Body: Regarding the MobileNet SSD v2 model, I was wondering to what extend it captures uncertainty of the predictions.
+There are 2 types of uncertainty, data uncertainty (aleatoric) and model uncertainty (epistemic).
+The model outputs bounding boxes with a confidence score, but what uncertainty does that score represent?
+From what I know models usually only capture aleatoric uncertainty in their predictions, but not the epistemic one. Is this also true for MobileNet SSD v2?
+"
+"['natural-language-processing', 'classification', 'probability', 'naive-bayes', 'probability-theory']"," Title: How would the probability of a document $P(d)$ be computed in the Naive Bayes classifier?Body: In naive Bayes classification, we estimate the class of a document as follows
+$$\hat{c} = \arg \max_{c \in C} P(c \mid d) = \arg \max_{c \in C} \dfrac{ P(d \mid c)P(c) }{P(d)} $$
+It has been said in page 4 of this textbook that we can ignore the probability of document since it remains constant across classes.
+
+We can conveniently simplify the above equation by dropping the denominator $p(d)$. This is possible because we will be computing $\dfrac{P(d \mid c)P(c)}{P(d)}$ for each possible class. But $P(d)$ doesn't change for each class; we are always asking about the most likely class for the same document $d$, which must have the same probability $P(d)$. Thus, we can choose the class that maximizes this simpler
+formula
+$$\hat{c} = \arg \max_{c \in C} P(c \mid d) = \arg \max_{c \in C}
+ P(d \mid c)P(c) $$
+
+Since the probability of the document does not influence the choice of the class, the naive Bayes algorithm does not consider it.
+But I want to know the value of $P(d)$. Is it $\dfrac{1}{N}$ if the total number of documents is $N$? How should I calculate $P(d)$?
+"
+"['natural-language-processing', 'computational-linguistics']"," Title: How to Study Improving In-depth Reading Comprehension?Body: There are multiple datasets for machine comprehension tasks such as SQuAD. However, most of the questions are straightforward. One can find the answers easily by using the find feature of the browser to look for the question keywords in the passage.
+I'd appreciate it if you let us know about standardized in-depth reading comprehension tests for either human or machine that are generalizable. By generalizable, I mean they include a broad range of disciplines and academic levels and are not specifically designed for a target population.
+I thought of GRE reading comprehension but was not able to find any study indicating that GRE reading comprehension questions are standardized or generalizable.
+"
+"['reinforcement-learning', 'actor-critic-methods', 'proximal-policy-optimization', 'multi-agent-systems', 'continuous-action-spaces']"," Title: RLlib's Multi-agent PPO continuous actions turn into nanBody: After some amount of training on a custom Multi-agent sparse-reward environment using RLlib's (1.4.0) PPO network, I found that my continuous actions turn into nan (explodes?) which is probably caused by a bad gradient update which in turn depends on the loss/objective function.
+As I understand it, PPO's loss function relies on three terms:
+
+- The clipped surrogate objective which depends on outputs of old policy and new policy, the advantage, and the "clip" parameter(=0.3)
+
+- The Value Function Loss
+
+- The Entropy Loss [mainly there to encourage exploration]
+
+
+Total Loss = Surrogate objective (clipped) - vf_loss_coeff * VF Loss + entropy_coeff * entropy.
+
+The surrogate loss (Reference: https://arxiv.org/abs/1707.06347)
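+To make sure I understand how these three terms fit together, here is a minimal PyTorch-style sketch of the standard PPO objective (my own illustration, not RLlib's actual implementation; the coefficient names and defaults are assumptions):
+import torch
+
+def ppo_total_loss(logp_new, logp_old, advantages, value_pred, value_target,
+                   entropy, clip_param=0.3, vf_loss_coeff=1.0, entropy_coeff=0.0):
+    # Probability ratio of the sampled actions under the new vs. old policy.
+    ratio = torch.exp(logp_new - logp_old)
+    # Clipped surrogate objective (to be maximized, hence the minus sign below).
+    surrogate = torch.min(
+        ratio * advantages,
+        torch.clamp(ratio, 1.0 - clip_param, 1.0 + clip_param) * advantages,
+    ).mean()
+    vf_loss = torch.mean((value_pred - value_target) ** 2)
+    # Total loss that the optimizer minimizes.
+    return -surrogate + vf_loss_coeff * vf_loss - entropy_coeff * entropy.mean()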
+I have a bunch of questions:
+
+- Is the ratio rt(theta) computed from the actual actions taken by the new policy vs. the old policy, or from the probabilities (densities) of those actions under the two policies (since the actions are continuous)?
+
+- Follow-up question to 1: assuming it is a probability, can the probability ever be 0? Because if it is ever 0, then the log probability would be log(0) = -inf/undefined, which would prove that this is the root cause?
+
+- If 1 and 2 are safely debunked, then do I
+
+
+(A) lower my learning rate?
+(B) Reduce the number of layers in my network?
+(C) Use gradient clipping or action or reward clipping of some sort?
+To anyone who would be kind enough to share any insights into the matter, you have my gratitude.
+For more information, see the relevant part of the progress table below, where the total loss becomes inf. The only change I found is that the policy loss was negative in every row until row #445.
+
+| Row | Total loss | policy loss | VF loss |
+| --- | --- | --- | --- |
+| 430 | 6.068537 | -0.053691725999999995 | 6.102932 |
+| 431 | 5.9919114 | -0.046943977000000005 | 6.0161843 |
+| 432 | 8.134636 | -0.05247503 | 8.164852 |
+| 433 | 4.222730599999999 | -0.048518334 | 4.2523246 |
+| 434 | 6.563492 | -0.05237444 | 6.594456 |
+| 435 | 8.171028999999999 | -0.048245672 | 8.198222999999999 |
+| 436 | 8.948264 | -0.048484523 | 8.976327000000001 |
+| 437 | 7.556602000000001 | -0.054372005 | 7.5880575 |
+| 438 | 6.124418 | -0.05249534 | 6.155608999999999 |
+| 439 | 4.267647 | -0.052565258 | 4.2978816 |
+| 440 | 4.912957700000001 | -0.054498855 | 4.9448576 |
+| 441 | 16.630292999999998 | -0.043477765999999994 | 16.656229 |
+| 442 | 6.3149705 | -0.057527818 | 6.349851999999999 |
+| 443 | 4.2269225 | -0.05446908599999999 | 4.260793700000001 |
+| 444 | 9.503102 | -0.052135203 | 9.53277 |
+| 445 | inf | 0.2436709 | 4.410831 |
+| 446 | nan | -0.00029848056 | 22.596403 |
+| 447 | nan | 0.00013323531 | 0.00043436907999999994 |
+| 448 | nan | 1.5656527000000002e-05 | 0.0002645221 |
+| 449 | nan | 1.3344318000000001e-05 | 0.0003139485 |
+| 450 | nan | 6.941916999999999e-05 | 0.00025863337 |
+| 451 | nan | 0.00015686743 | 0.00013607396 |
+| 452 | nan | -5.0206604e-06 | 0.00027541115000000003 |
+| 453 | nan | -4.5543664e-05 | 0.0004247162 |
+| 454 | nan | 8.841756999999999e-05 | 0.00020278389999999998 |
+| 455 | nan | -8.465959e-05 | 9.261127e-05 |
+| 456 | nan | 3.8680790000000003e-05 | 0.00032097592999999995 |
+| 457 | nan | 2.7373152999999996e-06 | 0.0005146417 |
+| 458 | nan | -6.271608e-06 | 0.0013273798000000001 |
+| 459 | nan | -0.00013192794 | 0.00030621013 |
+| 460 | nan | 0.00038987884 | 0.00038019830000000004 |
+| 461 | nan | -3.2747877999999998e-06 | 0.00031471922 |
+| 462 | nan | -6.9349815e-05 | 0.00038836736000000006 |
+| 463 | nan | -4.666238e-05 | 0.0002851575 |
+| 464 | nan | -3.7067155e-05 | 0.00020161088 |
+| 465 | nan | 3.0623291e-06 | 0.00019258813999999998 |
+| 466 | nan | -8.599938e-06 | 0.00036465342000000005 |
+| 467 | nan | -1.1529375e-05 | 0.00016500981 |
+| 468 | nan | -3.0851965e-07 | 0.00022042097 |
+| 469 | nan | -0.0001133984 | 0.00030230957999999997 |
+| 470 | nan | -1.0735256e-05 | 0.00034000343000000003 |
+
+Optional
+For even further context, check my related question
+"
+"['transfer-learning', 'weights-initialization', 'multiclass-classification', 'uncertainty-quantification', 'k-fold-cv']"," Title: Why would the ""improvement"" be the result of random initialization, and so why should we use multiple runs?Body: I got this feedback for my thesis paper.
+
+The improvement shown in the results section could be the result of random initialization. There should be multiple runs with means and standard deviations.
+
+Can anyone explain this feedback with details?
+I used a neural network with pre-trained weights for transfer learning (specifically, EfficientNetB0, with 'noisy-student' for the weights). It was a classification problem to classify between Covid-19, Viral Pneumonia, and normal cases. I normalised the dataset so that the images are in the range [0, 255] and I also did k-fold cross-validation.
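+If it helps to make the request concrete, here is a minimal sketch of the protocol I understand the reviewer to be asking for (the train_and_evaluate function and the range of seeds are placeholders, not my actual code): run the same experiment with several random seeds and report the mean and standard deviation of the metric.
+import numpy as np
+
+def train_and_evaluate(seed: int) -> float:
+    # Placeholder: build, train, and evaluate the model with this seed,
+    # returning e.g. the validation accuracy of that run.
+    rng = np.random.default_rng(seed)
+    return float(rng.uniform(0.90, 0.95))  # stand-in for a real training run
+
+accuracies = [train_and_evaluate(seed) for seed in range(5)]
+print(f"accuracy: {np.mean(accuracies):.4f} +/- {np.std(accuracies):.4f}")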
+"
+"['papers', 'perceptron']"," Title: Could someone help tell what the labels are pointed out by red rectangles?Body: The following figure comes from the paper The perceptron: A probabilistic model for information storage and organization in the brain
+
+I can tell the labels pointed out by blue rectangles are: "Projection area", "A-units", "$R_1$", "inhibitory connections" and "$R_2$".
+Could someone help tell what the labels are pointed out by red rectangles?
+"
+"['machine-learning', 'convolutional-neural-networks', 'training', 'backpropagation', 'weights']"," Title: Can some of the weights be fixed during the training of a neural network?Body: Is it possible to exclude specific layers from the optimization?
+For example, let's say I have an input layer, 2 hidden layers, and the output layer. I know there is a perfect solution for my problem with this setup and I already know the perfect weights between the first and the second hidden layer.
+Can I have the weights between the first and the second hidden layer be fixed during the training phase?
+I understand that I could just not update these specific weights after I computed the backpropagation for the entire network. But if I throw away those specific weights, will this affect the optimization of the rest of my weights?
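+To make the question concrete, here is a minimal Keras sketch of what I mean by fixing one layer (the layer sizes are only an example). As far as I understand, gradients still flow through the frozen layer, so the remaining weights keep being updated.
+from tensorflow import keras
+
+model = keras.Sequential([
+    keras.layers.Dense(8, activation="relu", input_shape=(4,), name="hidden1"),
+    keras.layers.Dense(8, activation="relu", name="hidden2"),
+    keras.layers.Dense(1, name="output"),
+])
+
+# Freeze the weights feeding into "hidden2", i.e. between the two hidden layers.
+model.get_layer("hidden2").trainable = False
+model.compile(optimizer="adam", loss="mse")  # compile after changing trainable flags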
+"
+"['neural-networks', 'training', 'overfitting', 'accuracy', 'cross-validation']"," Title: Should I continue training if the neural network attains 100% training accuracy?Body: I have a neural network where there are two hidden layers. Each hidden layer has 128 neurons. The input layer has 20 inputs, and the output layer has 3 outputs.
+I have 1 million records of data. 80% is used to train the network, 20% is used for validation. I run the training for 100000 epochs.
+I see that the neural network attains 100% accuracy on the training data after only 12000 epochs.
+Should I stop training or continue until all 100000 epochs are complete? Please, explain why.
+"
+"['reinforcement-learning', 'td3']"," Title: TD3 sticking to end valuesBody: I am using TD3 on a custom gym environment, but the problem is that the action values stick to the end. Sticking to the end values makes reward negative, to be positive it must find action values somewhere in the mid. But, the agent doesn't learn that and keeps action values to maximum.
+I am using one step termination environment (environment needs actions once for each episode).
+How can I improve my model? I want action values to be roughly within 80% of maximum values.
+In DDPG, we have inverted gradients, but could something similar be applied to TD3 to make action values search within legal action space more?
+The score decreases as the number of episodes increases.
+
+"
+"['reference-request', 'transfer-learning', 'bayesian-deep-learning', 'bayesian-neural-networks', 'one-shot-learning']"," Title: How could Bayesian neural networks be used for transfer learning?Body: In transfer learning, we use big data from similar tasks to learn the parameters of a neural network, and then fine-tune the neural network on our own task that has little data available for it. Here, we can think of the transfer learning step as learning a (proper) prior, and then fine-tuning as learning the posterior.
+So, we can argue that Bayesian neural networks can also address the problem of small-dataset regimes. But in what directions can we mix Bayesian neural networks with approaches similar to transfer learning, for example, few-shot learning?
+Both make sense as solutions to low-data-regime problems, but I can't think of a way to combine them to tackle this issue.
+Is it possible, for example, to learn a BNN for which we have picked a good prior to learn the posterior with little data and use the weight distribution for learning our new task? Is there any benefit in this?
+"
+"['machine-learning', 'terminology', 'overfitting', 'test-datasets']"," Title: What does it mean by overfitting the test set?Body: Consider the following statement from p14 of Naive Bayes and Sentiment Classification
+
+While the use of a devset avoids overfitting the test set, having a
+fixed training set, devset, and test set creates another problem: in
+order to save lots of data for training, the test set (or devset)
+might not be large enough to be representative.
+
+I have heard about overfitting on the training data: a model is said to overfit the training data if it gives low training error and high test error.
+But what does it mean to overfit the test set?
+"
+"['deep-learning', 'terminology', 'sequence-modeling']"," Title: What is the difference between model setup, model configuration, and model customization?Body: In the context of research papers related to deep learning models, the authors usually mention these terms in the experiment section when they are talking about the model: configuration, setup. For example: Akbik et al. 2018.
+For example:
+
+- "We utilize the BiLSTM-CRF sequence labeling architecture proposed by Huang et. al (2015) in all configurations of our comparative evaluation."
+
+- "Baselines. We also evaluate setups that involve only previous word embeddings."
+
+
+What is the difference between the terms? Is the model architecture the same with different hyperparameters?
+Thank you in advance.
+"
+"['probability', 'statistical-ai']"," Title: How can the probability of two disjoint events be non-zero?Body: Let $A$ and $B$ be two models for a classification task. Let $x$ be a test set and $M$ be a metric for the classification task. $X$ be a random variable on test sets.
+Now,
+$M(A,x) = $ Score of model $A$ on test set $x$
+$M(B,x)$ = Score of model $B$ on test set $x$
+$\delta(x) =$ difference in performance of models wrt test set $x$ $= M(A, x)-M(B,x)$
+Now, consider the following (statistical) hypothesis on the performance difference $\delta$
+$$H_o : \delta(x) \le 0$$
+$$H_1 : \delta(x) > 0$$
+We define $p-$value as follows
+$$P(\delta(X) \ge \delta(x) \mid H_o \text{ is true} ) $$
+With this as context, I am confused by the following paragraph (taken from p15 of Naive Bayes and Sentiment Classification)
+
+So in our example, this $p-$value is the probability that we would see
+$\delta(x)$ assuming $A$ is not better than B. If $\delta(x)$ is huge
+(let’s say $A$ has a very respectable $M$ of $.9$ and $B$ has a
+terrible $M$ of only $.2$ on $x$), we might be surprised, since that
+would be extremely unlikely to occur if $H_0$ were in fact true, and so
+the $p-$value would be low (unlikely to have such a large $\delta$ if
+$A$ is in fact not better than $B$). But if $\delta(x)$ is very small,
+it might be less surprising to us even if $H_0$ were true and $A$ is
+not really better than $B$, and so the $p-$value would be higher.
+
+The paragraph says that the $p$-value is very low if $A$'s performance is much better than $B$'s.
+I am thinking that the $p$-value should be zero if $A$'s performance is better than $B$'s, since that is an event disjoint from $H_0$. Where am I going wrong?
+"
+"['natural-language-processing', 'natural-language-understanding', 'speech-synthesis']"," Title: How to measure the similarity the pronunciation of two words?Body: I would like to know how I could measure the pronunciation of two words. These two words are quite similar and differ only in one vowel.
+I know there is, e.g., the Hamming distance or the Levenshtein distance, but they measure the "general" difference between words. I'm also interested in that, but mainly I would like to know how differently the words sound. I think there must be something like this to test text-to-speech results?
+Best would even be an online source where I could just type in those two words.
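+In case it clarifies what I am after, here is a minimal sketch of one direction I considered: compute an edit distance over phoneme transcriptions instead of letters (the IPA transcriptions here are assumed to come from some external tool or dictionary).
+def edit_distance(a, b):
+    # Levenshtein distance between two sequences (here: lists of phonemes).
+    dp = list(range(len(b) + 1))
+    for i, x in enumerate(a, 1):
+        prev, dp[0] = dp[0], i
+        for j, y in enumerate(b, 1):
+            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
+    return dp[-1]
+
+# Hypothetical IPA transcriptions of two words differing in one vowel.
+word1 = ["b", "ɪ", "t"]   # "bit"
+word2 = ["b", "ɛ", "t"]   # "bet"
+print(edit_distance(word1, word2))  # 1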
+"
+"['reference-request', 'prediction', 'biology', 'healthcare']"," Title: Has someone correctly predicted one of the variants of SARS-CoV-2 (like the Delta variant)?Body: Without any evidence, I have wondered it might be possible to predict the upcoming mutations of the COVID-19 virus. I am further assuming people did so.
+So, has someone correctly predicted the emergence of one of the variants of SARS-CoV-2 (like the Delta variant)?
+I would be happy to have an explanation in layman's terms and citations to papers (if any).
+"
+"['bert', 'pretrained-models', 'fine-tuning', 'named-entity-recognition']"," Title: Does BERT freeze the entire model body when it does fine-tuning?Body: Recently, I came across the BERT model. I did some research and tried some implementations.
+I wanted to tackle an NER task, so I chose the BertForSequenceClassification model provided by HuggingFace.
+for epoch in range(1, args.epochs + 1):
+ total_loss = 0
+ model.train()
+ for step, batch in enumerate(train_loader):
+ b_input_ids = batch[0].to(device)
+ b_input_mask = batch[1].to(device)
+ b_labels = batch[2].to(device)
+ model.zero_grad()
+
+ outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels)
+ loss = outputs[0]
+
+ total_loss += loss.item()
+ loss.backward()
+ torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
+        # Update the parameters based on the computed gradients and the learning rate.
+ optimizer.step()
+
+The main part of my fine-tuning follows as above.
+I am curious about the extent to which the fine-tuning alters the model. Does it freeze the weights provided by the pre-trained model and only alter the top classification layer, or does it also change the hidden layers contained in the already pre-trained BERT model?
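+For what it's worth, here is a minimal sketch of how I could check and control this myself with standard PyTorch mechanics (as far as I know, all parameters are trainable by default, and freezing the BERT body is an explicit choice one has to make):
+# Inspect which parameter tensors will receive gradient updates.
+trainable = [name for name, param in model.named_parameters() if param.requires_grad]
+print(len(trainable), "trainable parameter tensors")
+
+# Optionally freeze the pre-trained BERT body so only the classification head is trained.
+for param in model.bert.parameters():
+    param.requires_grad = False
+
+optimizer = torch.optim.AdamW(
+    [p for p in model.parameters() if p.requires_grad], lr=2e-5
+)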
+"
+"['reinforcement-learning', 'value-functions', 'bellman-equations', 'linear-algebra']"," Title: How can we find the value function by solving a system of linear equations?Body: I am following the book "Reinforcement Learning: An Introduction" by Richard Sutton and Andrew Barto, and they give an example of a problem for which the value function can be computed explicitly by solving a system of $\lvert S \rvert $ equations that have $\lvert S \rvert $ unknowns. Each of these $\lvert S \rvert$ equations is given by:
+$$v_{\pi}(s) = \sum_{a} \pi(a\rvert s) \sum_{s^{\prime}}\sum_{r} p(s^{\prime}, r \rvert s,a)[r + \gamma v_{\pi}(s^{\prime})] $$
+I am having a hard time understanding how one could solve this system of equations. It seems to me as if each equation consists of a summation of an infinite number of terms, and therefore one would not be able to solve them analytically. Could anyone offer any intuition as to how this system of equations could be explicitly solved?
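+For reference, here is the compact form I have seen for this system, assuming finite state, action, and reward sets (so that every sum above is finite):
+$$
+\mathbf{v}_{\pi} = \mathbf{r}_{\pi} + \gamma \mathbf{P}_{\pi} \mathbf{v}_{\pi}
+\quad \Longrightarrow \quad
+\mathbf{v}_{\pi} = (\mathbf{I} - \gamma \mathbf{P}_{\pi})^{-1} \mathbf{r}_{\pi},
+$$
+where $[\mathbf{P}_{\pi}]_{s s^{\prime}} = \sum_{a} \pi(a \mid s) \sum_{r} p(s^{\prime}, r \mid s, a)$ and $[\mathbf{r}_{\pi}]_{s}$ is the expected one-step reward under $\pi$ from state $s$.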
+"
+"['deep-learning', 'hyperparameter-optimization', 'hyper-parameters', 'embeddings']"," Title: How to determine the embedding size?Body: When we are training a neural network, we are going to determine the embedding size to convert the categorical (in NLP, for instance) or continuous (in computer vision or voice) information to hidden vectors (or embeddings), but I wonder if there are some rules to set the size of it?
+"
+"['policy-gradients', 'proofs', 'open-ai', 'reinforce']"," Title: Why adding a baseline doesn't affect the policy gradient?Body: On the OpenAI's Spinning Up, they justify the fact that adding a baseline $b(s_t)$ in the policy gradient doesn't change its gradient by saying that this is
+
+an immediate consequence of the EGLP Lemma
+
+However, I did not manage to prove it with this lemma. Can somebody help me, please?
+The proof is trivial when $b$ is a constant, but I struggle to derive it whenever $b$ is a function of the current state $s$ because you can't take it out of the integral.
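+For context, here is the step I believe the lemma is meant to justify, conditioning on the state so that $b(s_t)$ acts as a constant inside the inner expectation over actions:
+$$
+\mathbb{E}_{a \sim \pi_{\theta}(\cdot \mid s_t)}\left[ \nabla_{\theta} \log \pi_{\theta}(a \mid s_t)\, b(s_t) \right]
+= b(s_t) \int \pi_{\theta}(a \mid s_t)\, \nabla_{\theta} \log \pi_{\theta}(a \mid s_t)\, da
+= b(s_t)\, \nabla_{\theta} \int \pi_{\theta}(a \mid s_t)\, da
+= b(s_t)\, \nabla_{\theta} 1 = 0
+$$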
+"
+"['natural-language-processing', 'machine-translation', 'text-generation']"," Title: Semantic-based evaluation of translations instead of BLEUBody: I have a text generation model and I want to evaluate its output by comparing it to a set of gold human-annotated references.
+I went through machine-translation metrics and I found that BLEU is used as the main metric usually.
+I didn't like using it because it is shallow: it relies on n-gram comparison, so the semantics of the translation are missed.
+Is there any other metric to do a semantic-based evaluation?
+I've thought of using a text similarity model to evaluate the output or even an NLI (Natural language inference) system. I am not sure how precise the evaluation will be because SOTA systems are not really accurate.
+"
+"['reinforcement-learning', 'probability-distribution', 'proximal-policy-optimization', 'continuous-action-spaces', 'normal-distribution']"," Title: How to define a continuous action distribution with a specific range for Reinforcement Learning?Body: Specifically for continuous control PPO, let's say my action space range is between $X$ (low) and $Y$ (high) and they are all sampled from a Gaussian Action Distribution with mean $\mu$ and standard deviation $\rho$.
+From what I understood, the actions sampled should fall between $\mu - \rho$ and $\mu + \rho$, but that's not what happens in practice? What am I misunderstanding here? How do I ensure this range constraint from a custom action distribution with a given mean and standard deviation?
+Any advice or tips for me? I would really appreciate any insights!
+"
+"['neural-networks', 'terminology', 'math', 'activation-functions', 'tanh']"," Title: Why is tanh a ""smoothly"" differentiable function?Body: The sigmoid, tanh, and ReLU are popular and useful activation functions in the literature.
+The following excerpt, taken from p4 of Neural Networks and Neural Language Models, says that tanh has a couple of interesting properties.
+
+For example, the tanh function has the nice properties of being
+smoothly differentiable and mapping outlier values toward the mean.
+
+A function is said to be differentiable if it is differentiable at every point in the domain of the function. The domain of tanh is $\mathbb{R}$, and $\dfrac{e^x-e^{-x}}{e^x+e^{-x}}$ is differentiable on $\mathbb{R}$.
+But what is meant by "smoothly differentiable" in the case of the tanh activation function?
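+For concreteness, the derivative itself is
+$$\dfrac{d}{dz}\tanh(z) = 1 - \tanh^2(z),$$
+which is again differentiable everywhere, so tanh can be differentiated repeatedly without ever hitting a kink like the one ReLU has at $0$.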
+"
+"['comparison', 'perceptron', 'neurons']"," Title: What are (all) the differences between a neuron and a perceptron?Body: I know two differences between a neuron and a perceptron
+
+
+- Neuron employs non-linear activation function and perceptron employs only a threshold activation function.
+
+- The output of a neuron is not necessarily a binary number and the output of a perceptron is always a binary number
+
+
+
+I know no other difference between a perceptron and a neuron other than the above.
+Are there any other differences between perceptron and neuron?
+"
+"['neural-networks', 'hyper-parameters', 'input-layer']"," Title: Why is the input layer of a neural network usually not counted?Body: I came across the following statement from the caption of figure 7.8 from the textbook Neural Networks and Neural Language Models
+
+the input layer is usually not counted when enumerating layers
+
+Why is the input layer excluded from counting?
+Is the reason just convention or based on its contribution?
+"
+"['neural-networks', 'deep-learning', 'ai-design', 'conditional-random-field']"," Title: Conditional input deep neural networkBody: I need to input data conditionally to my deep network. In order to explain cases, I'd like to give an example. Assume that I have a 50-attribute dataset. For some attributes, a specific part of hidden layers is responsible, and for others, a different part is responsible. Also, for some cases, the same parts of the hidden layers might intersect. I think I can decide which attributes must go which hidden neurons in the input layer by using some kind of if-else block. However, I could not figure out how.
+
+My current idea
+
+I can enter an identity element for some attributes. For example, I have att1, att2, att3, etc. I have ins1, ins2, etc.
+For ins1 -> att1 = 0.5, att2 = 0.2, att3 = None
+For ins2 -> att1 = 0.1, att2 = None, att3 = None
+But with this approach, the number of attributes for an instance becomes unnecessarily large.
+
+End of my current idea
+
+Are there any opinions on this? Should I rearrange my excel file or is there any way to use if-else conditions?
+Regards,
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'multi-armed-bandits', 'contextual-bandits']"," Title: Why do I get bad results no matter my neural network function approximator for parametrized Q-learning implementation for Contextual Bandits?Body: I'd like to ask you why, no matter my neural network function approximator for parametrized Q-learning implementation for a Contextual Bandits environment, I'm getting bad results. I don't know if it's a problem with my formulation of the problem and how I'm trying to solve it, or is it the neural architecture. I tried different fully-connected neural networks with different number of layers and different number of neurons (sticking to low numbers since my environment is not complex) but I always get bad results, and it seems the results are random.
+I'd also like to know if my implementation of the Q-learning algorithm for the Contextual Bandits problem is right. I made an environment that randomly generates three integers between 0 and 89 and, given an action (an integer between 0 and 4), it returns a reward following a certain logic (if all three integers are between 0 and 29 and the action is 0, then the reward is 0; otherwise it's -1).
+My environment is:
+import numpy as np
+
+class Environment():
+
+ def __init__(self):
+
+ self._observation = np.zeros((3,))
+
+ def interact(self, action):
+ self._observation = np.zeros((3,))
+ c1, c2, c3 = np.random.randint(0, 90, 3)
+ self._observation[0]=c1
+ self._observation[1]=c2
+ self._observation[2]=c3
+ reward = -1.0
+ condition = False
+ if (c1<30) and (c2<30) and (c3<30) and action==0:
+ condition = True
+ elif (30<=c1<60) and (30<=c2<60) and (30<=c3<60) and action==1:
+ condition = True
+ elif (60<=c1<90) and (60<=c2<90) and (60<=c3<90) and action==2:
+ condition = True
+ else:
+ if action==4:
+ condition = True
+ if condition:
+ reward = 0.0
+
+ return {"Observation": self._observation,
+ "Reward": reward}
+
+The interact method doesn't return the state or the time step, unlike the step method of TF-Agents environments. I just thought it's not necessary for the current problem; I don't rely on time steps since each state doesn't influence the next one. I thought the observation is what should be returned, the state being more general data that could contain information the agent can't observe. I also don't return the action, because we can get it outside the environment.
+My function approximator of the Q-values are neural networks, always a fully connected architecture. For instance:
+model = keras.models.Sequential([
+ keras.layers.Dense(16, activation="relu", input_shape=[n_inputs]),
+ keras.layers.Dense(16, activation="relu"),
+ keras.layers.Dense(n_outputs)])
+
+I took the next blocks of code from Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, 2nd Edition and adapted them to my situation:
+env = Environment()
+
+n_inputs = 3 #Observations are made of three integers
+n_outputs = 4 #Four actions
+
+def epsilon_greedy_policy(observation, epsilon=0):
+ if np.random.rand() < epsilon:
+ return np.random.randint(4)
+ else:
+ Q_values = model.predict(observation[np.newaxis])
+ return np.argmax(Q_values[0])
+
+replay_buffer = deque(maxlen=2000)
+
+def sample_experiences(batch_size):
+ indices = np.random.randint(len(replay_buffer), size=batch_size)
+ batch = [replay_buffer[index] for index in indices]
+ observations, rewards, actions = [np.array([experience[field_index] for experience in batch]) for field_index in range(3)]
+ return observations, rewards, actions
+
+def play_one_step(env, observation, epsilon):
+ action = epsilon_greedy_policy(observation, epsilon)
+ observation, reward = env.interact(action).values()
+ replay_buffer.append((observation, reward, action))
+ return observation, reward
+
+batch_size = 16
+optimizer = keras.optimizers.Adam(learning_rate=1e-3)
+loss_fn = keras.losses.mean_squared_error
+
+def training_step(batch_size):
+ experiences = sample_experiences(batch_size)
+ observations, rewards, actions = experiences
+ target_Q_values = rewards
+ mask = tf.one_hot(actions, n_outputs)
+ with tf.GradientTape() as tape:
+ all_Q_values = model(observations)
+ Q_values = tf.reduce_sum(all_Q_values * mask, axis=1, keepdims=True)
+ loss = tf.reduce_mean(loss_fn(target_Q_values, Q_values))
+ grads = tape.gradient(loss, model.trainable_variables)
+ optimizer.apply_gradients(zip(grads, model.trainable_variables))
+
+epsilon = 0.01
+obs = np.random.randint(0,90,3)
+for episode in tqdm(range(1000)):
+ if episode<250:
+ obs, reward = play_one_step(env, obs, epsilon)
+ else:
+ obs, reward = play_one_step(env, obs, epsilon)
+ training_step(batch_size)
+
+I'm not sure how to evaluate the performance of the agent, but I tried this as a first approach, just to see if the predicted Q-values enable a greedy policy to choose the best action:
+check0 = np.random.randint(0,30,3)
+
+for i in range(30):
+ arr = np.random.randint(0,30,3)
+ check0 = np.vstack((check0, arr))
+
+predictions = model.predict(check0)
+
+c = 0
+for i in range(predictions.shape[0]):
+ if np.argmax(predictions[i])==0:
+ c+=1
+
+(c/predictions.shape[0])*100
+
+Every time I ran the code above it gave me a totally different value. Sometimes it's 0%, sometimes it's 45%, sometimes it's 19%...
+The issue is that, no matter the model architecture, in the end I get random results. I wonder if there is something wrong in the overall approach to the problem. I want to solve a Contextual Bandit where the agent observes a continuous context, takes actions, and tries to link the rewards obtained with the actions and the context in order to "understand" the logic behind it.
+I hope you can help me figure out why I get these random results.
+Thank you.
+"
+"['neural-networks', 'reinforcement-learning', 'multi-agent-systems']"," Title: RLLib - What exactly do the avail_action and action_embed_size represent? How do they work with the action_mask to phase out invalid actions?Body: So, I'm fairly new to reinforcement learning and I needed some help/explanations as to what the action_mask and avail_action fields alongside the action_embed_size actually mean in RLlib (the documentation for this library is not very beginner friendly/clear).
+For an example, this is one of the resources (Action Masking With RLlib) I tried to use to help understand the above concepts. After reading the article, I completely understand what the action_mask does, but I'm still a bit confused as to what exactly the action_embed_size is and what the avail_actions fields actually are/represent (are the indices of avail_actions supposed to represent the action 0 if invalid, 1 if valid? Or are the elements supposed to represent the actions themselves - a value of 1, 4, 5, etc corresponding to the actual value of the action itself?).
+Also when/how would there be a difference with the action_space and action_embed_size?
+This is from the article that I used to sort of familiarize myself with the whole concept of Action Masking (this network is designed to solve the Knapsack Problem):
+class KP0ActionMaskModel(TFModelV2):
+
+ def __init__(self, obs_space, action_space, num_outputs,
+ model_config, name, true_obs_shape=(11,),
+ action_embed_size=5, *args, **kwargs):
+
+ super(KP0ActionMaskModel, self).__init__(obs_space,
+ action_space, num_outputs, model_config, name,
+ *args, **kwargs)
+
+ self.action_embed_model = FullyConnectedNetwork(
+ spaces.Box(0, 1, shape=true_obs_shape),
+ action_space, action_embed_size,
+ model_config, name + "_action_embedding")
+ self.register_variables(self.action_embed_model.variables())
+
+ def forward(self, input_dict, state, seq_lens):
+ avail_actions = input_dict["obs"]["avail_actions"]
+ action_mask = input_dict["obs"]["action_mask"]
+ action_embedding, _ = self.action_embed_model({
+ "obs": input_dict["obs"]["state"]})
+ intent_vector = tf.expand_dims(action_embedding, 1)
+ action_logits = tf.reduce_sum(avail_actions * intent_vector,
+ axis=1)
+ inf_mask = tf.maximum(tf.log(action_mask), tf.float32.min)
+ return action_logits + inf_mask, state
+
+ def value_function(self):
+ return self.action_embed_model.value_function()
+
+From my understanding, the action_embedding is the output of the neural network and is then dotted with the action_mask to mask out illegal/invalid actions and finally passed to some kind of softmax function to get the final neural network output?
+Please, correct me if I'm wrong.
+"
+"['machine-learning', 'prediction', 'time-series', 'forecasting']"," Title: What is a better approach to perform predictions of time-series several values ahead?Body: Suppose one has a time series (univariate or multivariate) and the goal is to predict values of these series several steps ahead. I see two possible strategies:
+
+- Create a model (recurrent, convolutional, transformer, whatever) that predicts the value of the signal in the next moment of time, based on the values from the previous timestamps (t_start, t_end). If we aim to predict not one but several steps ahead, we can pass (signal[t_start + 1: t_end], signal[t_end + 1]) to predict signal[t_end + 2], and so on. In the training stage, we can pass the predicted value of signal[t_end + 1] or the ground truth with some probability; this can be seen as some kind of teacher forcing. In the inference stage, one passes the predicted signal each time. The optimization algorithm aims to minimize the (MSE, MAE) loss between the ground truth and the prediction. In other words
+$$
+\begin{aligned}
+x_{t+1} &= f(x_t, \ldots, x_{t-N+1}) \\
+x_{t+2} &= f(x_{t+1}, \ldots, x_{t-N+2}) \\
+x_{t+k} &= f(x_{t+k-1}, \ldots, x_{t-N+k}) \\
+\end{aligned}
+$$
+
+- Create a model that predicts several values ahead simultaneously. Standard layers from DL frameworks (PyTorch or TensorFlow) for sequence processing problems have two options - output a single hidden state at the end or the whole sequence of hidden states. Therefore, it seems they do not have the functionality to, say, predict the values of the time series 16 steps ahead from the values of the last 256 timestamps.
+$$
+[y_{t+k}, \ldots, y_{t+1}] = f(x_t, \ldots x_{t - N + 1})
+$$
+I see two potential solutions:
+
+- output hidden state (16) times larger than the expected output and reshape - however, it seems that this approach breaks the locality and causal structure and would not achieve good performance.
+- Choose the option, that returns the sequence of the same length as the input (here 256) and take the last (16) tokens of the output. This approach is inapplicable if the length of the prediction exceeds the length of the previous history, but I think, that such long predictions would produce poor quality in any case.
+
+
+
+How are stock, weather, and sales prediction problems usually solved in practice?
+"
+"['academia', 'programming-languages', 'software-evaluation', 'education']"," Title: How much C++ is needed for research in machine learning and artificial intelligence?Body: I am currently doing a master's in applied mathematics, and I recently got interested in machine learning and artificial intelligence, and I am thinking of going for a Ph.D. in this area. I have a reasonable maths and stats background, but I haven't done any course in ML/AI. Next semester, I am thinking of doing courses in ML (uses the book by Bishop), AI (uses the book by Norvig) and reinforcement learning at my university. Another advanced course in C++ is being offered, which I am also very interested to take, but the problem is it will be very difficult to manage all of these courses together. I have some knowledge of C++ (built some parts of a reasonably big project in the past but got a bit rusty nowadays) and very basic knowledge of Python, though I find Python much easier to learn and use than C++.
+So, my question is: how important is C++ if I go for a Ph.D. in ML/AI/CV/NLP, etc.? Should I bother taking the C++ course or be more focused on Python and do the other three courses i.e., ML, AI, and reinforcement learning?
+"
+"['decision-trees', 'feature-engineering']"," Title: How does a decision tree split a continuous feature?Body: Decision trees learn by measuring the quality of a split through some function, apply this to all features and you get the best feature to split on.
+However, with a continuous feature it becomes problematic, because there are seemingly infinitely many ways you can split the feature. How is the optimal split for a continuous feature chosen?
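+For what it's worth, here is a minimal sketch of the candidate-threshold idea as I understand it (my own illustration, not any particular library's code): only midpoints between consecutive sorted values need to be tried, so the set of candidate splits is finite.
+import numpy as np
+
+def candidate_thresholds(x):
+    # Midpoints between consecutive distinct sorted values of a continuous feature.
+    v = np.unique(x)
+    return (v[:-1] + v[1:]) / 2.0
+
+def best_split(x, y):
+    # Pick the threshold minimizing the weighted variance of y (a regression-style impurity).
+    best_t, best_score = None, np.inf
+    for t in candidate_thresholds(x):
+        left, right = y[x <= t], y[x > t]
+        score = len(left) * left.var() + len(right) * right.var()
+        if score < best_score:
+            best_t, best_score = t, score
+    return best_t
+
+x = np.array([2.0, 5.0, 5.5, 9.0])
+y = np.array([1.0, 1.1, 3.0, 3.2])
+print(best_split(x, y))  # 5.25, the midpoint between 5.0 and 5.5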
+"
+"['convolutional-neural-networks', 'attention', 'convolution']"," Title: Are there any advantages of the local attention against convolutions?Body: Transformer architectures, based on the self-attention mechanism, have achieved outstanding performance in a variety of applications.
+The main advantage of this approach is that the given token can interact with any token in the input sequence and extract global information since the first layer, whereas CNN has to stack multiple convolutional or pooling layers in order to achieve a receptive field, that would involve the whole input sequence.
+By receptive field I mean the number of timestamps from the input signal on which the output depends. For example, for a sequence of two Conv1D layers with kernel_size=3, the receptive field is 5. And in a transformer, the output of the first block depends on the whole sequence.
+However, this comes at large computational and memory cost in the vanilla formulation:
+$$
+O(L^2)
+$$
+where $L$ is the length of the sequence.
+There have been proposed various mechanisms, that try to reduce this amount of computation:
+
+- Random attention
+- Window (Local attention)
+- Global attention
+
+All these forms of attention are illustrated below:
+
+And one can combine different of these approaches as in the Big Bird paper
+My question is about local attention, attending only to the tokens in the fixed neighborhood of size $K$.
+By doing so, one reduces the number of operations to:
+$$
+O(L K)
+$$
+However, now it is local like an ordinary convolution, and a global receptive field will be achieved only by stacking many layers.
+Are there any advantages of Local self-attention against CNN, or it can be beneficial only in combination with other forms of attention?
+"
+"['reinforcement-learning', 'deep-rl', 'math', 'policy-gradients']"," Title: How to interpret the policy gradient expression in reinforcement learning?Body: I'm currently going through the OpenAI's spinning up introduction course to reinforcement learning. On one of the final sections, they derive an expression for the gradient of the undiscounted return with respect to the policy weights:
+$$\nabla_{\theta} J\left(\pi_{\theta}\right)=\underset{\tau \sim \pi_{\theta}}{\mathrm{E}}\left[\sum_{t=0}^{T} \nabla_{\theta} \log \pi_{\theta}\left(a_{t} \mid s_{t}\right) R(\tau)\right]$$
+Then they give the following explanation:
+
+Taking a step with this gradient pushes up the log-probabilities of each action in proportion to $R(\tau$).
+
+My question is: How does this expression mathematically reflect the fact that this gradient will push up the log probabilities of the actions?
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks', 'deep-neural-networks', 'fully-convolutional-networks']"," Title: Computational complexity of a CNN networkBody: In the following network, the convolution operations of convolutional blocks are performed by three 1-D kernels with the sizes 8, 5, and 3 respectively along with stride equal to 1. The final network is constructed by stacking three convolution blocks with the filters of sizes 128, 256, and 128 in each block. Pooling operation is excluded from the network. I wanted to find the computational complexity of the following network. I was wondering if you could give me some hints to compute the computational complexity of this network. I appreciate your time! Thanks!
+
+"
+"['game-ai', 'minimax']"," Title: How to handle cycles in minimax algorithmBody: For example, I am implementing AI for turn based game and have enough computational resources for build full game tree. My problem is the game can be infinite if both players will repeat moves and my minimax implementation stucks because game tree is infinite respectively.
+For example, my game is in state S1, player 1 does action A1, player 2 does action A2, and we are again in state S1. I can't evaluate the S1 node because I need to evaluate all its subnodes.
+I have no idea how to handle this.
+"
+"['machine-learning', 'hidden-markov-model']"," Title: What is meant by decoding in a Hidden Markov Model?Body: HMM contains two types of states: observable and hidden. Let $\{ h_1,h_2,h_3,\cdots,h_n\}$ be hidden states and $\{o_1,o_2,o_3,\cdots, o_m\}$ be the observable states.
+Suppose the $n^2$ transition probabilities $p(h_j \mid h_i)$ and the $mn$ emission probabilities $p(o_j \mid h_i)$ are given, along with the initial probability distribution vector $\pi =[\pi_1, \pi_2, \pi_3, \cdots, \pi_n]$.
+Then what is meant by decoding in HMM?
+"
+['terminology']," Title: Does this code mean the model trains 10 epochs?Body: Here is an implementation for Perceptron
+import numpy as np
+
+class Perceptron:
+ def __init__(self, eta=.1, n_iter=10, model_w=[.0, .0], model_b=.0):
+ self.eta = eta
+ self.n_iter = n_iter
+ self.model_w = model_w
+ self.model_b = model_b
+
+ def predict(self, x):
+ if np.dot(self.model_w, x) + self.model_b >= 0:
+ return 1
+ else:
+ return -1
+
+    def update_weights(self, x_i, y_i, model_w, model_b):
+        w = np.asarray(model_w, dtype=float) + self.eta * y_i * x_i
+        b = model_b + self.eta * y_i
+        return w, b
+
+ def fit(self, x, y):
+ if len(x) != len(y):
+ print('error')
+ return False
+ for i in range(self.n_iter):
+ for idx in range(len(x)):
+ if y[idx] != self.predict(x[idx]):
+                    self.model_w, self.model_b = self.update_weights(
+                        x[idx], y[idx], self.model_w, self.model_b)
+
+Does this code
+Perceptron(eta=.1, n_iter=10)
+
+mean the model trains 10 epochs?
+"
+"['comparison', 'training', 'activation-functions', 'perceptron', 'logistic-regression']"," Title: Is the main difference between the logistic regression and the perceptron the activation function they use?Body: I went through a Stats StackExchange's post about the difference between logistic regression and perceptron, which is too long to get the key point.
+I'd like to consider the question in terms of the formulas for them.
+The logistic regression is defined as
+$$\hat{y} = \sigma(\mathbf{w} \cdot \mathbf{x} + b)$$
+where
+$$
+\sigma(z) =
+\dfrac {1}{1+e^{-z}}
+$$
+The perceptron is defined as
+$$\hat{y} = sign(\mathbf{w} \cdot \mathbf{x} + b)$$
+where
+$$
+sign(z) =
+\begin{cases}
+1, & z \ge 0 \\
+-1, & z < 0
+\end{cases}
+$$
+So, the main difference between the two models is the activation function; is my understanding correct?
+"
+"['deep-learning', 'long-short-term-memory', 'time-series', 'multilayer-perceptrons', 'forecasting']"," Title: Why doesn't the LSTM model improve the time-series forecasting significantly with respect to the MLP model?Body: I have recently started learning time series forecasting. I have a dataset of the weekly payment history of 10k clients over 1 year, and I want to predict the future 5 payments for a test set of 1k clients.
+From what I have tried, I've found that using LSTMs instead of a simple MLP doesn't improve the prediction as much as I anticipated.
+My understanding is that LSTMs capture the relations between time steps, whereas simple MLPs treat each time step as a separate feature (they don't take succession into consideration).
+So, my question is: why doesn't the LSTM model improve the forecasting significantly? What are the best models for such a task, given that the time series are short (maximum sequence's length = 52)?
+"
+"['computer-vision', 'deep-neural-networks', 'image-processing', 'image-segmentation', 'semantic-segmentation']"," Title: Dissection of a depth mapBody: I am curious about how depth maps work. While searching I came across this website which contains some images and their depth maps. I took this depth map and tried to study it using a python pillow.
+
+from PIL import Image
+import numpy as np
+
+image = Image.open('elephant_depth_s.png')
+img = np.asarray(image)
+
+print(img.shape)
+
+The depth map shape is (400, 400, 3) with 3 channels. Contrary to my assumption, this depth map has three channels instead of one. Even though most of the values are zeros, some are not. This len(np.where(img>0)) shows that all the channels have some values greater than zero. My question is:
+
+- In color images, RGB channel values are used for creating corresponding color pixels. Example RGB (255,255,0) creates yellow.
+
+In this depth map, how do these three channels correspond to the depth?
+Can you please give us some more information on depth maps and their real-world applications?
+"
+"['deep-learning', 'keras', 'transformer', 'time-series', 'sequence-modeling']"," Title: What is the proper way to process continuous sequence data, such as time-series, using the Transformer?Body: What is the right way to input continuous, temporal (time-series) data into the Transformer? Assume we're using the basic TransformerBlock here.
+Since data is continuous with no tokens, Token embedding can be directly skipped. How about positional encoding? I tried this example, removing Token embedding while keeping positional encoding but ended up with shape-related errors. Skipping both token and positional encoding resulted in a network that runs and trains but results were relatively poor compared to the LSTM benchmark with the same data.
+I am unsure if the positional encoding is still needed.
+Overall, my question is, what is the proper way to process continuous sequence data, such as time-series, using the Transformer architecture?
+"
+"['computer-vision', 'style-transfer']"," Title: Upscaling a low-res IR image with a high res webcam imageBody: I have a low resolution thermal/IR image (for example 32x32 or 80x64) and a high resolution webcam image. I would like to combine the two to "fake" a high resolution thermal image (I can already map them together via homography). One could probably just apply a FLIR-like palette to the IR image, scale it up, and combine it with the brightness channel of the visible spectrum image. But that would of cause visible artifacts at the pixel edges of the IR picture.
+I wonder if there is an AI based approach to colorize the webcam image with the IR data. When a warm IR pixel partially covers a person and partially the background, it would only color the "person" warm, and take the "background" color from the neighboring IR pixel. For this it would have to consider a small vicinity of either picture at a time.
+Although I'm familiar with machine learning in the context of multivariate analysis and classification, I have no experience with modern deep learning or AI based image processing. I would guess that something like style transfer would be a starting point for what I'm trying to achieve. One would need 1) a way to identify features (like foreground/background, person/wall) and 2) a way to combine these features with the IR truth to produce a colorized bitmap, I assume.
+What would be the best approach to do this? I have a feeling this might already be a solved problem. In any case, I would be grateful for literature pointers.
+"
+"['implementation', 'random-variable']"," Title: Is it possible to use (infinite cardinal) random variables during implementation?Body: Random variables can be broadly classified into three types:
+
+- random variables whose range is finite,
+- random variable whose range is countably infinite and
+- random variables whose range is uncountable.
+
+
+Random variable is called discrete if its range (the set of values
+that it can take) is finite or at most countably infinite.
+Random variables that can take an uncountably infinite number of
+values are not discrete
+
+Almost all the probabilistic models used in artificial intelligence contain random variables.
+In theory, one can deal with all three types of random variables. For example, in reinforcement learning or probabilistic graphical models, we can take any type of random variable as the state or action space (in RL) or as nodes (in PGMs) and analyze them.
+But, in several textbooks, most of the analysis is restricted to random variables of the first type. The reason they mention is "to make analysis easy". It will be complex if we deal with either type 2 or type 3 random variables. So, textbooks and materials generally prefer analysis with type 1 only.
+My doubt is:
+Do researchers use random variables of type 2 or type 3 during the implementation of (any) AI tasks? Is it impossible to use them due to their (infinite) cardinality? If possible, please provide an example mechanism for implementing such random variables.
+"
+"['terminology', 'definitions', 'zero-shot-learning']"," Title: What is meant by ""Zero-Shot Visual Recognition""?Body: Many recent research papers contain the phrase "Zero-Shot Visual Recognition".
+What exactly is meant by zero-shot visual recognition? Does the task need only images or also the other data like text?
+"
+"['neural-networks', 'applications', 'bayesian-networks', 'bayesian-deep-learning']"," Title: What are the practical problems where full bayesian treatment is affordable?Body: Suppose, I have a problem, where there is rather a small number of training samples, and transfer learning from ImageNet or some huge NLP dataset is not relevant for this task.
+Due to the small amount of data, say several hundred samples, the use of a large network will very probably lead to overfitting. Indeed, various regularization techniques can partly solve this issue, but, I suppose, not always. A small network will not have much expressive power; however, with the use of Bayesian approaches, like HMC integration, one can effectively obtain an ensemble of models. Provided the models in the ensemble are weakly correlated, one can boost the classification accuracy significantly.
+Here I provide a picture from MacKay's book "Information Theory, Inference and Learning Algorithms". The model under consideration is a single-layer neural network with a sigmoid activation function:
+$$
+y(x, \mathbf{w}) = \frac{1}{1 + e^{-(w_0 + w_1 x_1 + w_2 x_2)}}
+$$
+On the left picture, there is a result of Hamiltonian Monte Carlo after a sufficient number of samples, and, on the right, there is an optimal fit.
+
+Integration over the ensemble of models produces a nonlinear separating boundary for NN.
+I wonder: can this approach be beneficial for some small-size (but not toy) problems with real-life applications?
+"
+"['natural-language-processing', 'resource-request']"," Title: Language Processing: Determine if one paragraph is relevant to another paragraphBody: Context: I want to determine if someone's written review contains content that is relevant to a paragraph that they are reviewing.
+To do so, I am trying to determine if one paragraph is relevant to another paragraph. I initially tried to use TF-IDF to calculate the relevancy, but I think TF-IDF works well for determining if one paragraph is relevant to a whole set of paragraphs. I only want to determine if two paragraphs are relevant with each other.
+What would be a good approach for this problem?
+"
+"['machine-learning', 'long-short-term-memory', 'datasets', 'data-preprocessing', 'data-science']"," Title: How can I address missing values for LSTM?Body: I'm a student and writing my first paper for submission on conference. I have a question
+There is a dataset below; it is a temporal-spatial dataset.
+Date Hour City Sensor1 Sensor2 Sensor3 Sensor4 ...
+21-06-10 0 Region1 0.12 0.52 0.33 0.44 ...
+21-06-10 1 Region2 0.16 0.83 0.34 0.49 ...
+21-06-10 2 Region1 0.21 0.44 0.57 0.5 ...
+...
+
+My task is anomaly detection for each region.
+I want to use an LSTM, so I represent the temporal-spatial data as two time-series datasets. My dataset can be represented as below.
+City Date Hour Sensor1 Sensor2 Sensor3 Sensor4 ...
+Region1 21-06-10 0 0.12 0.52 0.33 0.44 ...
+Region1 21-06-10 2 0.21 0.44 0.57 0.5 ...
+...
+
+
+City Date Hour Sensor1 Sensor2 Sensor3 Sensor4 ...
+Region2 21-06-10 1 0.16 0.83 0.34 0.49 ...
+...
+
+However, then there is no row with attribute 'Hour=1' in the Region1 dataset (you can see the table below).
+City Date Hour Sensor1 Sensor2 Sensor3 Sensor4 ...
+Region1 21-06-10 0 0.12 0.52 0.33 0.44 ...
+Region1 21-06-10 1 NaN NaN NaN NaN ...
+Region1 21-06-10 2 0.21 0.44 0.57 0.5 ...
+...
+
+Can I insert estimated values into the row with attribute 'Hour=1' in the Region1 dataset? (For example, I want to insert the average of the first row and the third row.)
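+To make the question concrete, here is a minimal sketch of that kind of gap filling with pandas (column names follow my example above; linear interpolation gives exactly the average of the neighbouring rows):
+import numpy as np
+import pandas as pd
+
+region1 = pd.DataFrame({
+    "Hour": [0, 1, 2],
+    "Sensor1": [0.12, np.nan, 0.21],
+    "Sensor2": [0.52, np.nan, 0.44],
+})
+
+# Linear interpolation fills Hour=1 with the average of Hour=0 and Hour=2.
+region1_filled = region1.interpolate(method="linear")
+print(region1_filled)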
+Can I claim to have utilized a real world dataset even with this missing value estimation?
+"
+['classification']," Title: How can I approach this problem of producing a 3-bit binary string given a sequence of letters?Body: Suppose, I have the following data-set:
+... ...
+... ...
+AABBB 7.027 5.338 5.335 8.122 5.537 6.408
+ABBBA 5.338 5.335 5.659 5.537 5.241 7.043
+BBBAA 5.335 5.659 6.954 5.241 8.470 8.474
+BBAAA 5.659 6.954 5.954 8.470 9.266 9.334
+BAACA 6.954 5.954 6.117 9.266 9.243 12.200
+AABAA 5.954 6.117 6.180 9.243 8.688 11.842
+ACAAA 6.117 6.180 5.393 8.688 5.073 7.722
+ABAAC 6.180 5.393 6.795 5.073 8.719 7.854
+BAACC 5.393 6.795 5.796 8.719 9.196 9.705
+... ...
+... ...
+
+Apparently, the feature values represent a string pattern comprising only the three letters A, B, and C.
+I have to design a neural network that would be able to detect these patterns and spit out a binary representation of these strings where the letters should be encoded in 3-bit binary(one-hot encoding).
+My first question is, What kind of problem is it and why?
+My next question is, How should I approach this problem to solve it?
+"
+"['reinforcement-learning', 'reference-request', 'markov-decision-process', 'monte-carlo-methods', 'sample-complexity']"," Title: What is the sample complexity of Monte Carlo Exploring Starts in RL?Body: We can use a model-free Monte Carlo approach to solving an MDP $(S,A,R,P,\gamma)$ with transition dynamics $P$ unknown by estimating Q-values by rolling out trajectories starting from random states $s_0 \in S$ and improving the policy $\pi$ greedily. This is the Monte Carlo Exploring Starts algorithm in Sutton and Barto page 99 2nd edition.
+Does anyone know if there is a sample complexity result for this algorithm?
+"
+"['reinforcement-learning', 'q-learning', 'reference-request', 'deep-rl', 'dqn']"," Title: Where can I find the original conference paper that introduced Q-learning and Deep Q-Learning?Body: I tried searching a lot, but I could neither find the paper that introduced Q-Learning nor the paper that introduced Deep Q Learning. If anyone knows anything about it please do tell me.
+"
+"['classification', 'training', 'transfer-learning']"," Title: Model not learning anything, what can be the problem?Body: I've trained a model for heart sound classification with transfer learning (MobileNet) on Physionet dataset, and it works fine.
+However, when I train it on my own dataset, it seems that it can not learn anything: more specifically, the loss is not decreasing and the accuracy is not going up. I've checked my labels and they seem to be correct. What other things should I check?
+"
+"['machine-learning', 'objective-functions', 'multi-task-learning']"," Title: Is optimizing weighted sum multi objective tasks considered a multi-task learning?Body: I have two sequence prediction tasks, finding $\vec{\pi} \in \Pi$ and $\vec{\psi} \in \Psi$. Each sequence has its own objective function, i.e. $f_1(\vec{\pi})$ and $f_2(\vec{\psi})$. The input for the two sequence prediction tasks are also of different domain.
+Say that by modification and extension in the model design, I can use one seq2seq or Pointer Network (or its variants) to produce the two sequence one at a time. In the training stage, however, the two objective functions are combined into $F(\vec{\pi}, \vec{\psi}) = \alpha f_1(\vec{\pi}) + \beta f_2(\vec{\psi})$ and the loss function to train the model use the combined objective function $F(\vec{\pi}, \vec{\psi})$.
+Is this considered multi-task learning?
+"
+"['tensorflow', 'object-detection', 'transfer-learning', 'pretrained-models', 'fine-tuning']"," Title: Is it possible that the fine-tuned pre-trained model performs worse than the original pre-trained model?Body: I have downloaded a pre-trained EfficientDet D2 model (Tensorflow 2.0) and trained it on some data (about 20000 images with 20 classes). I set the number of steps to 25000 and batch size to 3 (computer resources are not the best).
+However, if I try to make predictions, the pre-trained model makes better predictions than the model I have trained on the additional data. Is this expected behaviour?
+For example, a person in an image may be detected with 78% confidence by the pre-trained model but only 54% by the fine-tuned model on the same image.
+"
+"['reinforcement-learning', 'deep-learning', 'dqn']"," Title: Why don't I get the same results of Q-Learning as in Aurélion Géron's Hands-on Machine Learning book?Body: I noticed something rather intriguing while testing the Deep Q-Network implementation from Aurélion Géron's book Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, 2nd Edition; I copy-pasted the code exactly as it is but added some lines to get the graph on figure 18-10 presenting the sum of total rewards gained during each episode.
+So everything is the same as the book except the training part where I added the lines related to rewards lists and the plotting:
+all_rewards = []
+for episode in range(600):
+ obs = env.reset()
+ episode_rewards = []
+ for step in range(200):
+ epsilon = max(1 - episode / 500, 0.01)
+ obs, reward, done, info = play_one_step(env, obs, epsilon)
+ episode_rewards.append(reward)
+ if done:
+ break
+ all_rewards.append(episode_rewards)
+ if episode > 50:
+ training_step(batch_size)
+
+sum_rewards = []
+for i in range(len(all_rewards)):
+ sum_rewards.append(sum(all_rewards[i]))
+
+import matplotlib.pyplot as plt
+episodes = range(1,601)
+plt.plot(episodes, sum_rewards)
+
+To my surprise, I didn't get the same graph as the one the author presents in his book, so I reran the code and got a totally different graph from what I had the first time. Please find below two of the graphs that I obtained. I'm plotting the total sum of rewards obtained during each episode with respect to the episode.
+
+
+I'd like to ask whether there is something intrinsic to the algorithm that makes it so random; if so, I'd like some references (if there are any) that explain it. Otherwise, am I just doing something wrong? Thank you.
+"
+"['reference-request', 'agi', 'research', 'algorithm-request', 'model-request']"," Title: What algorithms are used in Artificial General Intelligence research?Body: I've read on wiki that already in 2017 there were over 40 institutions researching AGI, and I wonder what type of algorithms are being studied and developed in this field.
+For example, in comparison with narrow AI, where models/techniques such as ANNs, CNNs, SVMs, DT/RT, evolutionary algorithms, or reinforcement learning are used, how would AGI models differ? Do they also use these models, but in some specialised way, or are the algorithms completely new and different from those currently used in narrow AI?
+"
+"['function-approximation', 'logistic-regression']"," Title: Is it possible to compute the logical AND and OR with logistic regression?Body: It's easy to build a perceptron that can compute the logical AND and OR functions of its binary inputs.
+Logistic regression could be used as a binary classifier.
+$$z^{(i)} = w^T x^{(i)} + b$$
+$$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})$$
+$$
+\sigma(z) = \frac{1}{1+e^{-z}}
+$$
+Is it possible to compute the AND and OR with logistic regression?
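+For concreteness, this is the kind of computation I mean, as a minimal sketch with hand-picked (not learned) weights that reproduces AND when the sigmoid output is thresholded at 0.5:
+import numpy as np
+
+def sigmoid(z):
+    return 1.0 / (1.0 + np.exp(-z))
+
+# Hand-picked weights and bias (chosen for illustration, not fitted):
+# z = 20*x1 + 20*x2 - 30 is positive only when both inputs are 1.
+w, b = np.array([20.0, 20.0]), -30.0
+for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
+    p = sigmoid(w @ np.array(x) + b)
+    print(x, int(p > 0.5))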
+"
+"['reinforcement-learning', 'reference-request', 'continuous-action-spaces', 'continuous-state-spaces']"," Title: Is there a gentle introduction to reinforcement learning applied to MDPs with continuous state spaces?Body: I am looking for a gentle introduction (videos, lecture notes, tutorials, books) on reinforcement learning (MDPs) involving continuous states (or very large cardinality of state space). In particular, I am looking for ways on how to deal with them, including a good discussion on the build up to important and relevant concepts.
+Most of the books I encountered just state that we need function approximation, and then moved on to talk about radial basis functions. These ideas, however, are very abstract and are not easy to understand. For example, why specifically those functions?
+"
+"['neural-networks', 'comparison', 'terminology', 'multi-label-classification', 'multi-task-learning']"," Title: Do the terms multi-task and multi-output refer to the same thing in the context of deep learning?Body: Do the terms multi-task and multi-output refer to the same thing in the context of deep learning (with neural networks)? For example, do neural networks for multi-task learning use multiple outputs?
+If not, what is the difference between them?
+It would be helpful if you can also give examples.
+I found some of the terms here. When I went to study this term on the Internet, I found it very convoluted, as different authors are found to be mixing up those terms.
+"
+"['reinforcement-learning', 'q-learning', 'state-spaces']"," Title: Defining states and possible actions in Q learningBody: I am trying to define the number of states and possible actions for a reinforcement learning problem that I want to solve with Q-learning, but I am a bit confused, as I'm totally new to reinforcement learning.
+The problem I'm trying to solve is to assign people to groups such that the people in each group end up with sequential numbers. Let's say there are three people in each group.
+Group 1, Group2, Group3.
+1:{"group: Group1, "number": 1},
+2:{"group: Group2, "number": 2},
+3:{"group: Group3, "number": 3},
+4:{"group: Group2, "number": 4},
+5:{"group: Group1, "number": 5},
+6:{"group: Group3, "number": 6},
+7:{"group: Group3, "number": 7},
+8:{"group: Group2, "number": 8},
+9:{"group: Group1, "number": 9},
+
+An optimal output will be a case where the numbers are sequential with respect to the group. For example, all in group1 should have number 1, 2, 3, or 4,5,6 or 7, 8, 9. and not 1, 5, 9 as in the dictionary above.
+In other words, group1, group2, group3 represent the group ids, which means I have 3 groups. The number represents seat numbers. All in each of the groups need to sit close to each other e.g seat numbers 1,2,3 or 4,5,6, or 7,8,9.
+I am wondering whether the possible states are all possible combinations of groups and numbers, in which case there would be 1680 of them, and whether the possible actions are choosing which of the 9 numbers to swap to get the desired output.
+Any useful information will be very much appreciated.
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'terminology', 'backpropagation']"," Title: What does ""differentiable architecture"" mean?Body: I'm currently reading a paper that uses CNN's as a base approach to solving some image classification issues and I've found that they kept mentioning the term "Differentiable Architecture", for which I have no idea about its meaning, as I'm new to this world of Deep Learning, Neural Networks, etc., so to sum up my question is
+What does "differentiable architecture" mean?
+"
+"['convolutional-neural-networks', 'computer-vision', 'feature-engineering', 'sample-efficiency']"," Title: Does randomly adding hand-engineered features increase the CNN's sample efficiency/performance?Body: It is a known fact that preprocessing images using CV techniques will improve CNN performance (see this answer).
+But what happens when you feed in the entire image and the filtered image randomly to the network? Would the Neural Network learn to focus on the relevant aspects of an unfiltered image?
+If yes, please explain how randomly processed images improve the CNN's performance/sample efficiency.
+"
+"['reinforcement-learning', 'rewards', 'reinforce', 'batch-normalization', 'standardisation']"," Title: How to normalize rewards in REINFORCE?Body: I'm trying to solve a reinforcement learning problem using a Monte Carlo policy gradient algorithm and, more specifically, REINFORCE, with rewards attributed to individual moves instead of applied to all steps in a rollout.
+For this, I do $M$ rollouts, each with $N$ steps, and record the rewards. Let's say I fill an $M \times N$ matrix with the rewards. Sometimes just using these rewards as-is will work, but sometimes the rewards are always positive (or always negative), or the magnitudes cover a large range.
+A simple thing is to just subtract the overall mean and divide it by the overall standard deviation.
+In my particular case, though, the beginning is easier, and during bootstrapping the rewards will be higher. A typical case would have high rewards at the beginning with a taper to zero before the end of the rollout. So, it seems to make sense to subtract the mean along the trial ($M$) dimension. Likewise, it might make sense to normalize by the standard deviation along that dimension as well.
+My question: Have others already figured this out and developed best practices for normalizing rewards?
+I may have added my own twist, but I'm training in batches and using the statistics of the batch (multiple rollouts) to do the normalization. Subtracting the mean is called a baseline in some papers, I think. This article discusses it a bit.
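+To be explicit about what I mean, here is a minimal NumPy sketch (the array shapes and names are my own) of standardizing along the trial dimension, i.e. per time step across the batch of rollouts:
+import numpy as np
+
+M, N = 32, 100                      # M rollouts, N steps per rollout
+rewards = np.random.randn(M, N)     # placeholder for the recorded rewards
+
+# Mean/std taken over the rollout (M) dimension, so each time step (column)
+# is standardized across the batch; the mean term acts like a baseline.
+mean = rewards.mean(axis=0, keepdims=True)
+std = rewards.std(axis=0, keepdims=True) + 1e-8   # avoid division by zero
+normalized = (rewards - mean) / std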
+"
+"['reinforcement-learning', 'monte-carlo-methods', 'off-policy-methods']"," Title: Doubt in Sutton & Barto's off-policy Monte Carlo control algorithmBody: The algorithm is described as below:
+
+My understanding: In the third last step, we act greedily w.r.t $Q$. Since we use importance sampling, this $Q \approx Q_\pi$. However, in the next step, whenever $A_t \neq \pi(S_t)$, it means the behavior policy isn't aligned with the target (greedy) policy. Hence, we can't use importance sampling and for such $(S_t, A_t)$ we simply take the average of $Q(S_t, A_t)$. Which means these $Q$ values aren't estimates of $Q_\pi$ but rather $Q_b$.
+What's been bothering me is when the behavior and target policy eventually align for state $S_t$, won't that alignment be incorrect? Because in the previous step, we would be doing:
+
+$\pi(S_t) = \arg \max [Q_\pi(S_t, a_1), Q_b(S_t, a_2), Q_b(S_t, a_3)]$
+
+assuming $A(S_t) = \{a_1, a_2, a_3\}$ and the true greedy action is $a_1$.
+"
+['style-transfer']," Title: How to apply the formatting of one json file to another. Coding style transfer for JSONBody: Time after time I need to merge two large JSON files, or more precisely add a json fragment to another file.
+The two pieces are often written by different people and have different formatting (spacing), so merging them mechanically results in ugly code with ragtag spacing, and reformatting one of them manually or semi-manually takes a lot of time and effort.
+How can I reformat the fragment in the same style as the first file? I do not know what IDE, formatter, or style guide was used.
+I know that I can use an automated tool to reformat the whole document into whatever style is supported, but I prefer to keep the original style.
+I have heard that coding style can be imitated by AI, e.g. in the context of adversarial authorship recognition, and I imagine that for simple cases such as JSON this should be easy. I am interested merely in spacing, indentation, and bracket placement, not, say, property naming or nesting conventions (not sure about ordering, probably not so interesting).
+I am a software developer trying to automate my tasks, rather than an AI researcher, so please be patient.
+I posted to Stack Overflow, yet the task might be too challenging and open-ended for traditional programming methods: https://stackoverflow.com/questions/68396829
+"
+"['terminology', 'papers', 'one-hot-encoding']"," Title: Is label-embedding similar to one-hot encoding?Body: In one-hot encoding, a vector is given to each class label. For each class, only one entry of the vector is equal to 1 and the remaining entries are zeros in this encoding.
+Thus, in one-hot encoding, we are encoding the class label.
+Is it true that label-embedding gives a vector for each class label like in one-hot encoding? Is one-hot encoding a type of label-embedding?
+"
+"['convolutional-neural-networks', 'activation-functions']"," Title: Using a rectified Tanh to train a CNN?Body: I have been experimenting with activation functions on CNN, and it occurred to me to use a rectified tanh function. So that is basically if z > 0 tanh(z) else 0
+I have implemented it and compared it with ReLU on odd MNIST. They both achieved about a 94% success rate in 10 epochs. My logic was that humans usually tend to stop feeling more confident once they have learnt something. Similarly, I thought a convolutional-layer neuron should not feel much more confident (higher activation) with growing evidence (higher weighted input). So is there any evidence of such a rectified tanh being more successful?
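+For reference, this is the activation I mean, as a small sketch (the Keras wiring at the end is my own assumption of how one would plug it in):
+import numpy as np
+import tensorflow as tf
+
+def rectified_tanh_np(z):
+    # tanh(z) for positive inputs, 0 otherwise
+    return np.where(z > 0, np.tanh(z), 0.0)
+
+def rectified_tanh(z):
+    # the same function written with TensorFlow ops, usable as a Keras activation
+    return tf.where(z > 0, tf.tanh(z), tf.zeros_like(z))
+
+# e.g. tf.keras.layers.Conv2D(32, 3, activation=rectified_tanh)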
+"
+"['machine-learning', 'long-short-term-memory', 'time-series']"," Title: End-to-end learning using LSTM-AEBody: I want to use prediction models like LSTM-AE to predict time-series data. The feature that the neural network should learn is in frequency between 40-60Hz. So, in order to learn the feature more effectively and removing the noises, the signal will be filtered using a bandpass filter and the result will be then passed to the network.
+The problem is that, if I want to develop an end-to-end (i.e. omitting the bandpass filtering) solution, how can I do that?
+"
+"['reinforcement-learning', 'natural-language-processing']"," Title: The model learns well, but the validation decreases over timeBody: I have trained a model for four days. I noticed a behaviour quite strange/unnatural.
+During the training, the score and loss look like this:
+
+However, when I see the validation score, I got:
+
+It seems the model learns by heart at the beginning and does not generalise well afterwards. Is this natural behavior? Or maybe it's really not normal and there are errors in the code or algorithm? I don't know what to think anymore. Can you help me? What would be a good solution?
+"
+"['reinforcement-learning', 'markov-decision-process', 'proofs', 'policies']"," Title: Proof that there always exists a dominating policy in an MDPBody: I think that it is common knowledge that for any infinite horizon discounted MDP $(S, A, P, r, \gamma)$, there always exists a dominating policy $\pi$, i.e. a policy $\pi$ such that for all policies $\pi'$: $$V_\pi (s) \geq V_{\pi'}(s) \quad \text{for all } s\in S .$$
+However, I could not find a proof of this result anywhere. Given that this statement is fundamental for dynamic programming (I think), I am interested in a rigorous proof. (I hope that I am not missing anything trivial here)
+"
+"['neural-networks', 'terminology', 'pytorch', 'linear-algebra']"," Title: Is there any difference between affine transformation and linear transformation?Body: Consider the following statements from A Simple Custom Module of PyTorch's documentation
+
+To get started, let’s look at a simpler, custom version of PyTorch’s
+Linear module. This module applies an affine transformation to its
+input.
+
+Since the paragraph is saying PyTorch’s Linear module, I am guessing that affine transformation is nothing but linear transformation.
+Suppose $x = [x_1, x_2, x_3,\cdots,x_n]$ is an input; then the linear transformation on $x$ can be $a.x+b$, where $a$ and $b$ are $n$-dimensional vectors of real numbers, and the dot ($.$) stands for the dot product.
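+For reference, this is a quick sketch of what PyTorch's Linear module computes on a batch of inputs (the layer sizes here are my own example):
+import torch
+import torch.nn as nn
+
+lin = nn.Linear(3, 2)          # weight has shape (2, 3), bias has shape (2,)
+x = torch.randn(5, 3)          # a batch of 5 input vectors
+
+# The module computes x @ W^T + b: a matrix multiplication followed by a shift.
+manual = x @ lin.weight.T + lin.bias
+print(torch.allclose(lin(x), manual))   # True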
+Is an affine transformation the same as a linear transformation? If yes, then why is the name affine used? Does it cover something more or less than a linear transformation?
+"
+"['neural-networks', 'convolutional-neural-networks', 'convolution']"," Title: How is the convolution operation connected to neural networks?Body: I've been reading up on the convolution operation and neural networks. I understand that the convolution operation is defined as:
+$$(f * g)(t)=\int_{-\infty}^{\infty} f(\tau) g(t-\tau) d \tau$$
+The convolution operation has some properties, such as commutativity, associativity, etc.
+How is the convolution connected to neural networks? How do we use this operation in a CNN?
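+For reference, here is a small sketch of the discrete 1D analogue of the formula above (the signal and kernel values are just an example):
+import numpy as np
+
+f = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # input signal
+g = np.array([0.25, 0.5, 0.25])           # kernel / filter
+
+# np.convolve implements the discrete version of (f * g)(t) = sum_k f(k) g(t - k)
+print(np.convolve(f, g, mode="valid"))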
+"
+"['neural-networks', 'training', 'pytorch', 'gradient']"," Title: What does it mean by ""zeros the networks parameters gradients"" in the context of training a neural network?Body: Consider the following PyTorch code
+# Run a sample training loop that "teaches" the network
+# to output the constant zero function
+for _ in range(10000):
+ input = torch.randn(4)
+ output = net(input)
+ loss = torch.abs(output)
+ net.zero_grad()
+ loss.backward()
+ optimizer.step()
+
+and its corresponding explanation on training a neural network
+A training loop…
+
+- acquires an input,
+- runs the network,
+- computes a loss,
+- zeros the network’s parameters’ gradients,
+- calls loss.backward() to update the parameters’ gradients,
+- calls optimizer.step() to apply the gradients to the parameters.
+
+Code contains net.zero_grad() which has been explained as zeros the network’s parameters’ gradients.
+What does it mean to zero the network's parameters' gradients? In general, the loss is backpropagated by calculating the gradients of the loss with respect to the parameters. But I didn't understand the phrase "zeros the network's parameters' gradients". What does that particular step do?
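+To make the question concrete, here is a tiny snippet of the gradient buffers I am asking about (the scalar example is my own):
+import torch
+
+w = torch.ones(1, requires_grad=True)
+for _ in range(2):
+    loss = (3 * w).sum()
+    loss.backward()          # each call adds d(loss)/dw = 3 into w.grad
+print(w.grad)                # tensor([6.]) because the two passes accumulated
+w.grad.zero_()               # clearing the buffer by hand; net.zero_grad() clears every parameter's gradient
+print(w.grad)                # tensor([0.])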
+"
+"['neural-networks', 'regularization', 'decision-trees', 'dropout']"," Title: Is the dropout technique specific only to neural networks?Body: In one Udemy course was mentioned that "dropout is unique to neural networks". However, I remember an example of decision trees where nodes that are not participating in the overall result are removed, and I think that this technique is also called "dropout". Am I correct?
+"
+"['reinforcement-learning', 'deep-learning', 'tensorflow', 'keras']"," Title: Would the reward normalization be wrong in early episodes?Body: It's confusing me that how can we normalize the reward without actually knowing the true mean and variance of the reward distribution, specifically, at the early steps and episodes. This may cause problem for the RL algorithms that use the replay buffer such as DDPG, because this wrongly calculated rewards can stay in buffer for too long and the network will adapt with them. Is there something that I am missing or misunderstood? For algorithms with replay buffer, using standardization is better that normalization?
+"
+"['reinforcement-learning', 'game-ai']"," Title: How to approach a two-agent two-step action game?Body: A simple two-player sniper game:
+
+- Each player has 9 houses that he can reside in. So 18 houses in total. The houses can be considered in a row: e.g. 1-9 for player A, and 10-18 for player B.
+
+- Each step, the player has to make two actions! First, he can use his gun's limited view to check three consecutive houses of the enemy to see if he is there (for example, he chooses 3, 4, 5). Then, based on that result, he can choose one house to shoot. That means if he guessed correctly, he will know the other player is in one of those three houses. Otherwise, he can shoot one of the remaining six houses.
+
+- The killer wins!
+
+
+
+Please note that in each step, the player has to perform two actions without interruption from the other player. Based on the result of the first action (limited view), he will have more information to select his second action (shooting). Thus, the first action is informative to reduce action space.
+
+I have decided to use stable-baselines3. I have to create an environment. I am not sure about the policy network.
+How should I approach this game for training an AI agent? I would really appreciate it if you can guide me on env creation, policy selection, or any general tips.
+"
+"['reinforcement-learning', 'proximal-policy-optimization', 'loss', 'continuous-action-spaces', 'kl-divergence']"," Title: KL divergence coefficient update doesn't make sense in RLlib's PPO implementationBody: I am using RLlib (Ray 1.4.0)'s implementation of PPO for a multi-agent scenario with continuous actions, and I find that the loss includes the KL divergence penalty term, apart from the surrogate loss, value function loss, and entropy.
+The KL coefficient is updated in the update_kl() function as follows:
+    # Increase.
+    if sampled_kl > 2.0 * self.kl_target:
+        self.kl_coeff_val *= 1.5
+    # Decrease.
+    elif sampled_kl < 0.5 * self.kl_target:
+        self.kl_coeff_val *= 0.5
+    # No change.
+    else:
+        return self.kl_coeff_val
+
+I don't understand the reasoning behind this. If the point of the KL "target" is to reach the target, then why do the conditions above imply that the KL coefficient is getting larger (multiplied by 1.5 when the sampled KL is already found to be larger than the target?) when it is supposed to be made smaller instead? I feel like I am missing something here, but I am not able to get my head around it.
+I would appreciate any insights on this. Thank you.
+"
+"['neural-networks', 'pytorch', 'weights']"," Title: What are the numbers that are useful (may need to be stored) other than parameters of a model?Body: Consider the following method related to buffers in PyTorch
+buffers(recurse=True)
+
+Returns an iterator over module buffers.
+
+Parameters
+
+ recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.
+
+Yields
+
+ torch.Tensor – module buffer
+
+buffers() is a method used for models (say, neural networks) in PyTorch. model.buffers() contains the tensors related to the model, as you can see from the example provided below.
+>>> for buf in model.buffers():
+>>> print(type(buf), buf.size())
+<class 'torch.Tensor'> (20L,)
+<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
+
+The following method informs that a model in PyTorch has both parameters and buffers. So, buffers cannot be the same as the parameters (say weights) of the model.
+ cpu()
+
+ Moves all model parameters and buffers to the CPU.
+
+I am not aware of any numbers to store for a model other than its parameters. So, I have no idea what the buffers of a model in PyTorch store. But this implies that there are some other numbers related to a model that need to be stored for efficiency or other purposes.
+Is it PyTorch specific? Else, what are those numbers, other than parameters, that need to be stored for a model?
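+For context, here is a small sketch of how buffers show up in practice; BatchNorm's running statistics are the example I am aware of, and register_buffer is how a custom tensor of this kind is declared:
+import torch
+import torch.nn as nn
+
+bn = nn.BatchNorm1d(4)
+print([name for name, _ in bn.named_buffers()])
+# ['running_mean', 'running_var', 'num_batches_tracked']
+
+class MyModule(nn.Module):
+    def __init__(self):
+        super().__init__()
+        # a tensor that is saved and moved with the model but is not a trainable parameter
+        self.register_buffer("scale", torch.ones(4))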
+"
+"['neural-networks', 'reference-request', 'pytorch', 'implementation', 'weights']"," Title: What are the applications in which the precision of the neural network's weights is unimportant?Body: While reading about Module in PyTorch, I came across a new data type called half
+datatype. The half() method, when called on a Module, casts all floating-point parameters and buffers to the half datatype.
+It is a 16-bit floating-point number as mentioned here.
+It is mentioned in Wikipedia that
+
+It is intended for storage of floating-point values in applications
+where higher precision is not essential for performing arithmetic
+computations.
+
+It implies that the precision of parameters (say, weights for a neural network) is not important in certain applications and hence one can use half datatype while implementing a neural network.
+Is there any research that supports the statement that the precision of the weights (that is, the range of values they can take) is unimportant for certain applications?
+"
+"['convolutional-neural-networks', 'math']"," Title: How can I compute a mathematical formula for my CNN?Body: Let's say, for example, I have built the following CNN model using Keras:
+model = Sequential()
+model.add(Conv2D(32, (3,3), activation='relu', input_shape=(32,32,3)))
+model.add(MaxPooling2D((2, 2)))
+model.add(Conv2D(32, (3,3), activation='relu'))
+model.add(MaxPooling2D((2, 2)))
+model.add(Flatten())
+model.add(Dense(512))
+model.add(Dense(10, activation='softmax'))
+
+I wish to be able to transform the above model into a mathematical formula.
+
+I understand the basic structure of a CNN as a recursive formula that expresses each layer's output in terms of the previous layer's output (the formula and the definitions of its terms were images in the original post and are not reproduced here).
+
+
+
+However, I do not know how to go from the above recursive formula to something like this (the first two summations are weights and the second two are adjustable biases):
+
+Note: The formula above is just an example and not representative of the code given above.
+
+
+- Do I need to trace each weight, each bias and each connection of every neuron? If so, how?
+- Furthermore, I would highly appreciate it if someone could provide a generalized strategy for tackling such a problem (like finding a math formula to suit a different kind of classifier).
+- Lastly, is this an easy task and is it a worthwhile one?
+
+
+Note: This question was originally posted on Stack Overflow. Unfortunately, I received no responses even after offering a bounty. Hence, I am uploading the question here. Link to the original post here.
+"
+"['machine-learning', 'computer-vision', 'input-layer']"," Title: How can I take continuous video input into my model?Body: Let's say I have designed an ML model that can take video input of a dog running around and give the breed of the dog as output. However, I do not want to wait for the video to finish before it is input into my model. I want something like the following to happen:
+I am casually taking a video of my backyard when mid-way through a dog runs past the camera. Immediately, my model should identify (a) a dog has appeared within view and (b) the dog is a Labrador Retriever.
+In an attempt to achieve the above, I have the following questions:
+
+- Do I need to train a new model that detects when a dog has appeared within view?
+- How can I make my model such that the input is continuous and that the model keeps running providing instant output?
+
+Note: This question was originally posted on Stack Overflow. It was closed as a consequence of it being unfitting to the site. Hence, I am uploading the question here. Link to the original post here.
+"
+"['computer-vision', 'image-recognition', 'papers', 'optimization', 'explainable-ai']"," Title: What does the lambda parameter in the paper ""Interpretable Explanations of Black Boxes by Meaningful Perturbation"" do?Body: I do not understand the purpose of the $\lambda$ parameter in equation 3 of the paper Interpretable Explanations of Black Boxes by Meaningful Perturbation.
+$$m^{*}=\underset{m \in[0,1]^{\Lambda}}{\operatorname{argmin}} \lambda\|\mathbf{1}-m\|_{1}+f_{c}\left(\Phi\left(x_{0} ; m\right)\right) \tag{3}\label{3}$$
+As far as I understand, the argmin function returns the $m$ for which the term $\lambda\|\mathbf{1}-m\|_{1}$ is smallest. If that's the case, I don't understand the purpose of $\lambda$, since it doesn't change the result of $m$.
+"
+"['computer-vision', 'math', 'hill-climbing']"," Title: Questions about a research paper on salient region detection and segmentationBody: I am reading this paper in an attempt to recreate the salient region detection and segmentation model employed. I have the following questions pertaining to section 3 of the paper and I would highly appreciate it if someone could provide clarity on them.
+
+- The word "scales" is used at multiple points in the section, for example, line 4 of the section states "saliency maps are created at different scales". I do not exactly understand what the authors mean by the word scales. Moreover, is there a mathematical way to think about it?
+
+- I understand that a saliency value is computed for each pixel using the equation given in the paper (the pixel coordinates, the equation itself, and a symbol referenced here were inline images in the original post and are not reproduced). However, that pixel position is not mentioned in the equation, so I am confused as to which pixel the saliency value is being computed for.
+
+- I did not understand what the authors meant by the term "bin" in section 3.2 line 5 where it is stated, "The hill-climbing algorithm can be seen as a search window being run across the space of the d-dimensional histogram to find the largest bin within that window."
+
+Note 1: This question was originally posted on Stack Overflow. I was advised to post it on another platform as a consequence of it being unfitting to the site. Hence, I am uploading the question here. Link to the original post here.
+Note 2: In case you are unable to access the link to the research paper, the following citation may help: Achanta, R., Estrada, F., Wils, P., & Süsstrunk, S. (2008, May). Salient region detection and segmentation.
+"
+"['neural-networks', 'architecture', 'batch-normalization']"," Title: Where does batch normalization layers present in a neural network?Body: Batch normalization is a procedure widely used to train neural networks. Mean and standard deviation are calculated in this step of training.
+Since we train a neural network by dividing training data into batches, we use the word batch normalization as we consider a batch of training vectors at a time.
+My doubt is about the position of batch normalization layers in a neural network.
+Is it present before the input layer only? Or is it before every layer? Or is it dependent on the underlying task?
+Suppose there is a neural network of 4 layers: $I \rightarrow h1 \rightarrow h2 \rightarrow h3 \rightarrow O$
+Which one of the following is true?
+$$bn \rightarrow I \rightarrow h1 \rightarrow h2 \rightarrow h3 \rightarrow O$$
+$$bn1 \rightarrow I \rightarrow bn2 \rightarrow h1 \rightarrow bn3 \rightarrow h2 \rightarrow bn4 \rightarrow h3 \rightarrow bn5 \rightarrow O$$
+Here $I$ stands for input layer, $h$ for the hidden layer, $O$ for output layer, and $bn$ for batch normalization layer.
+"
+"['reinforcement-learning', 'game-ai', 'muzero', 'observation-spaces', 'board-games']"," Title: Scrabble rack observation with MuZeroBody: Currently I'm trying to implement Scrabble with MuZero.
+The $15 \times 15$ game board observation (as input) is of size $27 \times15 \times15$ (26 letters + 1 wildcard) with a value of 0 or 1.
+However, I'm having difficulty finding a suitable way to encode the player's rack of letters (there are always 7 letters on the rack).
+The available tiles are: 26 letters $(A-Z)$ and 1 wildcard.
+A rack can also contain multiple tiles of the same letter.
+Example: rack of player 1 is $[A,A,C,E,T,T,H] -> A:2x, C:1x, E:1x, T:2x, H:1x$
+How can I represent a rack of tiles as a $(? \times) 15 \times 15$ (or other board size) matrix?
+"
+"['computer-vision', 'terminology', 'papers', 'reasoning']"," Title: What is language-conditioned visual reasoning?Body: Can anyone explain what language-conditioned visual reasoning is?
+I saw this term in this paper and I searched on the internet but I couldn't find a proper explanation.
+"
+"['long-short-term-memory', 'datasets', 'prediction', 'time-series', 'forecasting']"," Title: How can I use a prediction model (e.g., ARMA model or LSTM) for multi-variate data?Body: I have a question
+I have a dataset like the one below.
+ sensor1 sensor2 sensor3 ...
+2021-01-01 1.32 2.2 1.0
+2021-01-02 4.3 2.0 0.8 ...
+...
+
+I know the ARMA model is useful for time-series forecasting.
+
+However, how can I use the ARMA model for data with multiple attributes?
+If only data with a single attribute can be an input to the ARMA model, should I aggregate the attribute set
+(for example, after normalizing each attribute, add up all the values in every row to transform all attributes into a single attribute)?
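+For clarity, this is the aggregation I describe above, as a small pandas sketch (the file name and column layout are hypothetical):
+import pandas as pd
+
+df = pd.read_csv("sensors.csv", index_col=0, parse_dates=True)  # columns: sensor1, sensor2, ...
+
+# z-score each sensor column, then sum across columns to get one univariate series
+normalized = (df - df.mean()) / df.std()
+aggregated = normalized.sum(axis=1)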
+"
+['natural-language-processing']," Title: Building an AI that predicts the pronunciation of wordsBody: I want to create an AI that converts words to International Phonetic Alphabet (IPA), but I am not sure which architecture I am supposed to use.
+It is not possible to translate the characters one by one, since multiple characters in the source word can correspond to one IPA character. There are solutions for this kind of problem, for example using an encoder that encodes the content of the input, which a decoder then translates, but I am uncertain whether this is too abstract for this problem.
+Can anyone think of a suitable solution for this task?
+"
+"['deep-learning', 'python']"," Title: Train a deep learning model with input as a vector and predicts as a vector?Body: I am trying to build a Deep Learning model that takes a numeric vector $X$ of dimension $1 \times 50$ and predicts a numeric vector $y$ of dimension $1 \times 50$.
+It's a linear regression problem. I am trying to obtain the coefficients/weights that map $X$ to $y$.
+Code I used:
+import numpy as np
+from tensorflow.keras.models import Sequential
+from tensorflow.keras.layers import Dense
+from tensorflow.keras.optimizers import Adam
+
+X = np.array(...)  # array of 50 features and 5 sample vectors (shape of X is 5x50)
+y = np.array(...)  # array of 50 features (shape of y is 5x50)
+model = Sequential([Dense(1, input_shape=[5, 50])])
+
+optimizer = Adam(0.001)
+
+model.compile(loss='mse', optimizer=optimizer, metrics=['mae', 'mse'])
+
+model.fit(X, y, epochs=250, validation_split=0.25)
+
+
+So, basically, we are trying to achieve X*w ~ y, where $w$ is the vector of weights/coefficients that we want to identify using DL.
+Programmatically, I tried using the same logic and calculated $w = y \cdot X^{-1}$ for all the vectors, took the average of the coefficients, and applied it to the test data.
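+For what it's worth, the same least-squares idea can be written directly with NumPy (a sketch; X and y are the arrays from the code above):
+import numpy as np
+
+# Solve min_w ||X w - y||^2 for w; with X of shape (5, 50) and y of shape (5, 50),
+# the returned w has shape (50, 50).
+w, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
+y_pred = X @ w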
+"
+"['terminology', 'pytorch', 'implementation', 'resource-request', 'hardware']"," Title: What exactly is an XPU?Body: I know about CPU, GPU and TPU. But, it is the first time for me to read about XPU from PyTorch documentation about MODULE.
+xpu(device=None)
+
+
+Moves all model parameters and buffers to the XPU.
+
+This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on XPU while being optimized.
+
+Note: This method modifies the module in-place.
+
+Parameters
+
+ device (int, optional) – if specified, all parameters will be copied to that device
+
+Returns
+
+ self
+
+Return type
+
+ Module
+
+CPU stands for Central Processing Unit.
+GPU stands for Graphical Processing Unit.
+TPU stands for Tensor Processing Unit.
+Most of us know that these processing units are highly useful in research on some computationally intensive domains of AI, including deep learning. So, I am wondering whether the XPU is also useful in AI research, since it is used in PyTorch.
+From the context, I can say that PU stands for processing unit, but I have no idea what the X is.
+What is the full form for XPU? Where can I read about XPU in detail?
+"
+"['terminology', 'image-processing', 'convolution', '1d-convolution', 'channel']"," Title: What does 'channel' mean in the case of an 1D convolution?Body: While reading about 1D-convolution in PyTorch, I encountered the concept of channels
+in_channels (int) – Number of channels in the input image
+
+out_channels (int) – Number of channels produced by the convolution
+
+Although I encountered this concept of channels early, I am confused about channels and might understand them in the wrong manner.
+Since the operation we are discussing is 1D convolution, then there will be two lists of numbers, one is the input list and the other is the filter list. The last one is the feature map (output list).
+They look like below
+
+The left one is the input list, the middle one is the filter list and the rightmost is the output list.
+Each cell in the input list contains a whole number. Each cell may take value in the fixed range $[a,b]$ of numbers.
+What is the concept of channels used here? Where do the channels come from? Does the number of channels stand for the number of elements in the corresponding list?
+"
+"['reinforcement-learning', 'deep-rl', 'markov-decision-process', 'proofs']"," Title: How to prove Lemma 1.6 in the book ""Reinforcement Learning: Theory and Algorithms""Body: I am trying to prove the following lemma from Reinforcement Learning: Theory and Algorithms on page 8.
+Lemma 1.6. We have that:
+$$
+\left[(1-\gamma)\left(I-\gamma P^{\pi}\right)^{-1}\right]_{(s, a),\left(s^{\prime}, a^{\prime}\right)}=(1-\gamma)\sum_{h=0}^{\infty} \gamma^{h} \mathbb{P}_{h}^{\pi}\left(s_{h}=s^{\prime}, a_{h}=a^{\prime} \mid s_{0}=s, a_{0}=a\right)
+$$
+where $\pi$ is a deterministic and stationary policy with:
+$$
+P_{(s, a),\left(s^{\prime}, a^{\prime}\right)}^{\pi}:=\left\{\begin{aligned}
+P\left(s^{\prime} \mid s, a\right) & \text { if } a^{\prime}=\pi\left(s^{\prime}\right) \\
+0 & \text { if } a^{\prime} \neq \pi\left(s^{\prime}\right)
+\end{aligned}\right.
+$$
+The Corollary 1.5 should also be useful: $Q^{\pi}=\left(I-\gamma P^{\pi}\right)^{-1} r$
+To be honest, I don't have much of an idea how to do it. I saw that the LHS of Lemma 1.6 is related to $Q^\pi$, so my idea is to expand $Q$ and see if it's possible to separate out the $r$. I did the following but ended up clueless:
+$$\begin{aligned}
+Q^{\pi}(s, a) &=E\left[\sum_{t=0}^{\infty} \gamma^{t} R\left(s_{t}, a_{t}\right) \mid \pi, s_{0}=s, a_{0}=a\right] \\
+&=R(s, a)+\gamma \sum_{s^{\prime}} P\left(s^{\prime} \mid s, a\right) E\left[\sum_{t=0}^{\infty} \gamma^{t} R\left(s_{t+1}, a_{t+1}\right) \mid \pi, s_{1}=s^{\prime}, a_{1}=\pi\left(s^{\prime}\right)=a^{\prime}\right] \\
+&= R(s, a)+\gamma \sum_{s^{\prime}} P\left(s^{\prime} \mid s, a\right) \left[R(s^\prime,a^\prime) + \gamma\sum_{s^{\prime\prime}}P(s^{\prime\prime}|s^\prime,a^\prime)Q(s^{\prime\prime}, a^{\prime\prime})\right]
+\end{aligned}$$
+I have been staring at this equation for hours with no progress. I hope I can get some guidance from you guys.
+"
+"['deep-learning', 'computer-vision', 'papers', 'transformer', 'attention']"," Title: Computing the mean attention distance for ViTBody: Recently I came across the paper that introduces the Vision Transformer (ViT) "AN IMAGE IS WORTH 16X16 WORDS: TRANSFORMERS FOR IMAGE RECOGNITION AT SCALE".
+The thing I don't really understand at the moment is what is meant by "mean attention distance".
+More specifically in the caption of Figure 11 on page 18 of the paper they state:
+" ... Attention distance was computed for 128 example images by averaging the distance
+ between the query pixel and all other pixels, weighted by the attention weight. ..."
+
+How can the query be on a pixel level?
+Isn't the overall approach of the ViT to divide the input image into patches, which are linearly embedded, combined with a positional embedding, and then fed into the transformer encoder?
+So the attention should be on the patch level not on the pixel level?
+I would be very happy if someone could elaborate a bit more on the above sentence, so far I found no deeper explanation.
+"
+"['reinforcement-learning', 'rewards', 'monte-carlo-methods', 'return']"," Title: In the cross-entropy method, should I select state-action pairs by their immediate reward or by the episode reward?Body: I am trying to understand the code mechanics when selecting the elite states and elite actions. It appears clear to me that they are those that appear in the episodes with the rewards bigger than the threshold.
+My question is: should I select state-action pairs by their immediate reward or by the episode reward?
+I am applying the method to a craft environment interesting to me and I have been studying an example applying the OpenAI's Gym taxi environment, but I do not fully understand the code.
+"
+"['terminology', 'pytorch', 'image-processing', 'convolution']"," Title: What does 'input planes' mean in the phrase 'input signal/image composed of several input planes'?Body: PyTorch documentation provided the following descriptions to the Convolution layers
+nn.Conv1d Applies a 1D convolution over an input signal composed of several input planes.
+
+nn.Conv2d Applies a 2D convolution over an input signal composed of several input planes.
+
+nn.Conv3d Applies a 3D convolution over an input signal composed of several input planes.
+
+nn.ConvTranspose1d Applies a 1D transposed convolution operator over an input image composed of several input planes.
+
+nn.ConvTranspose2d Applies a 2D transposed convolution operator over an input image composed of several input planes.
+
+nn.ConvTranspose3d Applies a 3D transposed convolution operator over an input image composed of several input planes.
+
+If you observe the descriptions on the right side, each description is of the form "Applies an operation over an input signal/image composed of several input planes." This is not confined to convolution layers; the same phrase has been used for several other layers, including the pooling layers and a normalization layer.
+I have a doubt about the phrase "input planes" used here.
+What is the meaning of "input plane" used here? Does it refer to a geometrical plane or something else?
+"
+"['training', 'datasets', 'data-augmentation']"," Title: Train Validation Test Splitting After or Before Data Augmentation?Body: I have seen tutorials online saying that you should do data augmentation AFTER doing the train/val/test split. However, when I go online to read some research papers, I see numerous instances of authors saying that they first do data augmentation on the dataset and then split it because they don't have enough data. Is it just that these are silly mistakes, even for papers with many citations, or is this acceptable?
+Example: Research paper.
+They say:
+"Among these selected 480 images, 94 images were collected while changing the viewing angle, including images of 30 young apples, 32 expanding apples, and 32 ripe apples. These 480 images were then expanded to 4800 images using data augmentation methods, yielding the training dataset. The training dataset is used to train the detection model. The remaining 480 images are used as the test dataset to verify the detection performance of the YOLOV3-dense model."
+"
+"['neural-networks', 'reference-request', 'research']"," Title: How much research, approximately, is done in ANNs?Body: Does someone know where can I find information about how much research, nowadays, is done in ANNs?
+I've checked in this document Redes Neuronales: Conceptos básicos y aplicaciones, Universidad Tecnológica Nacional, México (2001) by D. J. Matich, that "nowadays research is uncountable" but that was in 2001.
+I found nothing else on my further google search. Then, I've consulted Google Scholar, and by clicking on the option "Since 2021" it displayed 38,800 results, but, AFAIK, it includes a lot of different types of documents, e.g. books.
+"
+"['convolutional-neural-networks', 'pytorch', 'implementation', 'convolution']"," Title: Is there any gain by lazy initialization of weights, biases and number of input channels for a convolution operation?Body: The basic layers for performing convolution operations 1,2,3 in PyTorch are
+nn.Conv1d: Applies a 1D convolution over an input signal composed of several input planes.
+
+nn.Conv2d: Applies a 2D convolution over an input signal composed of several input planes.
+
+nn.Conv3d: Applies a 3D convolution over an input signal composed of several input planes.
+
+Along with them, there are lazy versions 1,2,3 of each of the aforementioned layers. They are
+nn.LazyConv1d: A torch.nn.Conv1d module with lazy initialization of the in_channels argument of the Conv1d that is inferred from the input.size(1).
+
+nn.LazyConv2d: A torch.nn.Conv2d module with lazy initialization of the in_channels argument of the Conv2d that is inferred from the input.size(1).
+
+nn.LazyConv3d: A torch.nn.Conv3d module with lazy initialization of the in_channels argument of the Conv3d that is inferred from the input.size(1).
+
+We can observe from the description of the lazy layers that the in_channels argument undergoes lazy initialization. Along with it, the attributes that will be lazily initialized are weight and bias.
+
+Lazy Initialization is a performance optimization where you defer
+(potentially expensive) object creation until just before you actually
+need it. Lazy initialization is primarily used to improve performance,
+avoid wasteful computation, and reduce program memory requirements
+
+Since in_channels, weight, and bias undergo lazy initialization in the lazy convolution layers of PyTorch, I am guessing that there may be cases in which the layers can perform the convolution operation without needing in_channels, weight, and bias, or by bypassing some of them.
+Is my guess correct? Are there any cases in which the convolution operation can be done without initializing the weights or the number of input channels? If not, what do we gain by such lazy initialization?
+Is it purely an implementation technique to postpone initialization of those three quantities till the actual execution of the convolution operation in order to use resources minimally?
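+For reference, this is the usage pattern I am asking about (a small sketch; the tensor shapes are my own example):
+import torch
+import torch.nn as nn
+
+conv = nn.LazyConv2d(out_channels=8, kernel_size=3)   # no in_channels given
+x = torch.randn(1, 3, 32, 32)
+y = conv(x)                     # the first forward pass infers in_channels = 3
+print(conv.weight.shape)        # torch.Size([8, 3, 3, 3])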
+"
+"['convolutional-neural-networks', 'pytorch', 'convolution', 'resource-request', 'convolutional-layers']"," Title: Is there any animation that illustrates the ""fold"" and ""unfold"" operations of convolutional layers?Body: There are fourteen convolution layers in PyTorch. Among them six are related to convolution, another six are related to transposed convolution. The remaining two are fold and unfold operations.
+The documentation of PyTorch itself provides this link to visualize the operations; from those visuals, the convolution and transposed-convolution variants are easy to understand.
+Are there any such visuals (e.g. a diagram or animation) or any other resources available to visualize and understand the remaining two operations: fold and unfold?
+"
+"['machine-learning', 'text-classification', 'cosine-similarity']"," Title: How to calculate cosine similarity for classification when you have say 10000 samples belonging to two classes have a bunch of samplesBody: Does anyone have experience with using Cosine Similarity for text classification? I see a number of articles on how to find cosine similarity between documents using Doc2Vec, Gensim, etc.
+I have a classification problem (binary) where I want to try out the cosine similarity. I do know how to calculate it, but all the articles that I see only explain until the point of calculating it between two documents.
+Right now, I am planning to do this.
+
+- Calculate the cosine similarity of 'my paragraph' (the one that I want to classify) with all samples in classi (their class is known). Then take the average (call that avgi).
+
+- Calculate the cosine similarity of my paragraph (the one that I want to classify) with all samples in classo (their class is known). Then take the average (call that avgo).
+
+- Compare avgi and avgo and then predict the class for 'my paragraph'.
+
+
+That sounds like a very manual way of doing it. Is there some better/widely used way of doing it?
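+For reference, here is a rough sketch of the procedure above (assuming TF-IDF features; the names are my own):
+import numpy as np
+from sklearn.feature_extraction.text import TfidfVectorizer
+from sklearn.metrics.pairwise import cosine_similarity
+
+def predict(paragraph, docs_class_i, docs_class_o):
+    vec = TfidfVectorizer()
+    X = vec.fit_transform(docs_class_i + docs_class_o + [paragraph])
+    Xi, Xo, q = X[:len(docs_class_i)], X[len(docs_class_i):-1], X[-1]
+    avg_i = cosine_similarity(q, Xi).mean()   # average similarity to class i
+    avg_o = cosine_similarity(q, Xo).mean()   # average similarity to class o
+    return "class_i" if avg_i > avg_o else "class_o"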
+"
+"['neural-networks', 'long-short-term-memory', 'input-layer']"," Title: Using a Neural Network (LSTM) to approve/reject word-type sequencesBody: I would like to train an LSTM neural network to either "approve" or "reject" a string based on the word-type sequence.
+For instance: "Mike's Airplane" would output "approved", but "Airplane Mike's" would output "reject".
+My method for doing this is to decompose the string into an array of words.
+eg.
+["Mike's", "Airplane"]
+
+, then convert the array of words to an array of word-types since the actual word is irrelevant.
+The word types (pronoun, noun, adjective etc.) are defined constants having numerical values.
+eg.
+const wordtypes={propernoun:1, adjective:2, noun:3, ownername:4};
+console.log(wordtypes.propernoun); // 1
+
+Mike's Fast Airplane is
+["Mike's", "Fast", "Airplane"]
+
+which becomes:
+input:[properNoun, adjective, noun]
+output: "approve"
+
+properNoun represents the first word(Mike's),
+adjective the second word(Fast),
+and noun the third word(Airplane).
+I would then like to use this array to train a Neural Network so that it can approve/reject other word-type sequences.
+I am concerned with the methodology/algorithm rather than the syntax; I'm extremely new to Machine Learning and Artificial Neural Networks, so, I am using brain.js and NodeJS because they're relatively easy to use.
+
+- I would like to input multiple parameters for a single word because
+many words have multiple word types (depending on the context). For
+example, a word can be both a "noun" and a "verb". How do I represent this input?
+
+- Is this a good application for LSTM? Or is there a better-suited ML
+algorithm? My dilemma is in deriving the proper inputs & methodology
+to effectively train the Neural Network.
+
+- How is my methodology for accomplishing this approval system?
+
+
+"
+"['python', 'game-ai', 'monte-carlo-tree-search']"," Title: Why doesn't this Monte Carlo Tree Search algorithm work properly?Body: PROBLEM
+I'm writing a Monte-Carlo tree search algorithm to play chess in Python. I replaced the simulation stage with a custom evaluation function. My code looks perfect but for some reason acts strange. It recognizes instant wins easily enough but cannot recognize checkmate-in-2 moves and checkmate-in-3 moves positions. Any ideas?
+WHAT I'VE TRIED
+I tried giving it more time to search but it still cannot find the best move even when it leads to a guaranteed win in two moves. However, I noticed that results improve when I turn off the custom evaluation and use classic Monte Carlo Tree Search simulation. (To turn off custom evaluation, just don't pass any arguments into the Agent constructor.) But I really need it to work with custom evaluation because I am working on a machine learning technique for board evaluation.
+I tried printing out the results of the searches to see which moves the algorithm thinks are good. It consistently ranks the best move in mate-in-2 and mate-in-3 situations among the worst. The rankings are based on the number of times the move was explored (which is how MCTS picks the best moves).
+MY CODE
+I've included the whole code because everything is relevant to the problem. To run this code, you may need to install python-chess (pip install python-chess).
+I've struggled with this for more than a week and it's getting frustrating. Any ideas?
+import math
+import random
+import time
+
+import chess
+import chess.engine
+
+
+class Node:
+
+ def __init__(self, state, parent, action):
+ """Initializes a node structure for a Monte-Carlo search tree."""
+ self.state = state
+ self.parent = parent
+ self.action = action
+
+ self.unexplored_actions = list(self.state.legal_moves)
+ random.shuffle(self.unexplored_actions)
+ self.colour = self.state.turn
+ self.children = []
+
+ self.w = 0 # number of wins
+ self.n = 0 # number of simulations
+
+class Agent:
+
+ def __init__(self, custom_evaluation=None):
+ """Initializes a Monte-Carlo tree search agent."""
+
+ if custom_evaluation:
+ self._evaluate = custom_evaluation
+
+ def mcts(self, state, time_limit=float('inf'), node_limit=float('inf')):
+ """Runs Monte-Carlo tree search and returns an evaluation."""
+
+ nodes_searched = 0
+ start_time = time.time()
+
+ # Initialize the root node.
+ root = Node(state, None, None)
+
+ while (time.time() - start_time) < time_limit and nodes_searched < node_limit:
+
+ # Select a leaf node.
+ leaf = self._select(root)
+
+ # Add a new child node to the tree.
+ if leaf.unexplored_actions:
+ child = self._expand(leaf)
+ else:
+ child = leaf
+
+ # Evaluate the node.
+ result = self._evaluate(child)
+
+ # Backpropagate the results.
+ self._backpropagate(child, result)
+
+ nodes_searched += 1
+
+ result = max(root.children, key=lambda node: node.n)
+
+ return result
+
+ def _uct(self, node):
+ """Returns the Upper Confidence Bound 1 of a node."""
+ c = math.sqrt(2)
+
+ # We want every WHITE node to choose the worst BLACK node and vice versa.
+ # Scores for each node are relative to that colour.
+ w = node.n - node.w
+
+ n = node.n
+ N = node.parent.n
+
+ try:
+ ucb = (w / n) + (c * math.sqrt(math.log(N) / n))
+ except ZeroDivisionError:
+ ucb = float('inf')
+
+ return ucb
+
+ def _select(self, node):
+ """Returns a leaf node that either has unexplored actions or is a terminal node."""
+ while (not node.unexplored_actions) and node.children:
+ # Pick the child node with highest UCB.
+ selection = max(node.children, key=self._uct)
+ # Move to the next node.
+ node = selection
+ return node
+
+ def _expand(self, node):
+ """Adds one child node to the tree."""
+ # Pick an unexplored action.
+ action = node.unexplored_actions.pop()
+ # Create a copy of the node state.
+ state_copy = node.state.copy()
+ # Carry out the action on the copy.
+ state_copy.push(action)
+ # Create a child node.
+ child = Node(state_copy, node, action)
+ # Add the child node to the list of children.
+ node.children.append(child)
+ # Return the child node.
+ return child
+
+ def _evaluate(self, node):
+ """Returns an evaluation of a given node."""
+ # If no custom evaluation function was passed into the object constructor,
+ # use classic simulation.
+ return self._simulate(node)
+
+ def _simulate(self, node):
+ """Randomly plays out to the end and returns a static evaluation of the terminal state."""
+ board = node.state.copy()
+ while not board.is_game_over():
+ # Pick a random action.
+ move = random.choice(list(board.legal_moves))
+ # Perform the action.
+ board.push(move)
+ return self._calculate_static_evaluation(board)
+
+ def _backpropagate(self, node, result):
+ """Updates a node's values and subsequent parent values."""
+ # Update the node's values.
+ node.w += result.pov(node.colour).expectation()
+ node.n += 1
+ # Back up values to parent nodes.
+ while node.parent is not None:
+ node.parent.w += result.pov(node.parent.colour).expectation()
+ node.parent.n += 1
+ node = node.parent
+
+ def _calculate_static_evaluation(self, board):
+ """Returns a static evaluation of a *terminal* board state."""
+ result = board.result(claim_draw=True)
+
+ if result == '1-0':
+ wdl = chess.engine.Wdl(wins=1000, draws=0, losses=0)
+ elif result == '0-1':
+ wdl = chess.engine.Wdl(wins=0, draws=0, losses=1000)
+ else:
+ wdl = chess.engine.Wdl(wins=0, draws=1000, losses=0)
+
+ return chess.engine.PovWdl(wdl, chess.WHITE)
+
+
+def custom_evaluation(node):
+ """Returns a static evaluation of a board state."""
+
+ board = node.state
+
+ # Evaluate terminal states.
+ if board.is_game_over(claim_draw=True):
+ result = board.result(claim_draw=True)
+ if result == '1-0':
+ wdl = chess.engine.Wdl(wins=1000, draws=0, losses=0)
+ elif result == '0-1':
+ wdl = chess.engine.Wdl(wins=0, draws=0, losses=1000)
+ else:
+ wdl = chess.engine.Wdl(wins=0, draws=1000, losses=0)
+
+ return chess.engine.PovWdl(wdl, chess.WHITE)
+
+ # Evaluate material.
+ material_balance = 0
+ material_balance += len(board.pieces(chess.PAWN, chess.WHITE)) * +100
+ material_balance += len(board.pieces(chess.PAWN, chess.BLACK)) * -100
+ material_balance += len(board.pieces(chess.ROOK, chess.WHITE)) * +500
+ material_balance += len(board.pieces(chess.ROOK, chess.BLACK)) * -500
+ material_balance += len(board.pieces(chess.KNIGHT, chess.WHITE)) * +300
+ material_balance += len(board.pieces(chess.KNIGHT, chess.BLACK)) * -300
+ material_balance += len(board.pieces(chess.BISHOP, chess.WHITE)) * +300
+ material_balance += len(board.pieces(chess.BISHOP, chess.BLACK)) * -300
+ material_balance += len(board.pieces(chess.QUEEN, chess.WHITE)) * +900
+ material_balance += len(board.pieces(chess.QUEEN, chess.BLACK)) * -900
+
+ # TODO: Evaluate mobility.
+ mobility = 0
+
+ # Aggregate values.
+ centipawn_evaluation = material_balance + mobility
+
+ # Convert evaluation from centipawns to wdl.
+ wdl = chess.engine.Cp(centipawn_evaluation).wdl(model='lichess')
+ static_evaluation = chess.engine.PovWdl(wdl, chess.WHITE)
+
+ return static_evaluation
+
+
+m1 = chess.Board('8/8/7k/8/8/8/5R2/6R1 w - - 0 1') # f2h2
+# WHITE can win in one move. Best move is f2-h2.
+
+m2 = chess.Board('8/6k1/8/8/8/8/1K2R3/5R2 w - - 0 1')
+# WHITE can win in two moves. Best move is e2-g2.
+
+m3 = chess.Board('8/8/5k2/8/8/8/3R4/4R3 w - - 0 1')
+# WHITE can win in three moves. Best move is d2-f2.
+
+agent = Agent(custom_evaluation)
+
+result = agent.mcts(m2, time_limit=30)
+print(result)
+````
+
+"
+"['deep-learning', 'convolutional-neural-networks', 'computer-vision', 'generative-model', 'image-generation']"," Title: Is it possible to use deep learning to generate a 2D image from a few numerical values?Body: Is it possible to train a DL model that will generate a full resolution 2D image based on few numbers describing this image and what type of model or architecture would that be?
+
+What I want to achieve is this: I deliver to the model some numbers, for example describing the positions of objects on the screen and a number describing how lit the scene is, and I get back a 2D image with the objects in their correct positions and proper lighting. For one set of input values I will always get the same image (see the image above). The input data could also be anything other than positions and lighting; these are only examples to help visualize what I mean.
+All of this, of course, assumes that I have a lot of annotated training data consisting of images and labels for the objects' positions and the scene lighting values.
+EDIT: The final model would be trained on real images taken with a Full HD camera, not simple shapes like the ones presented here, which I used only to explain my question better.
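+To make the setup concrete, here is a minimal PyTorch sketch of the kind of deterministic mapping I have in mind (the layer sizes and the 64x64 output resolution are made up just for illustration):
+import torch
+import torch.nn as nn
+
+class ParamsToImage(nn.Module):
+    """Deterministic decoder: a small parameter vector -> an RGB image."""
+    def __init__(self, num_params=5, img_channels=3):
+        super().__init__()
+        # Project the parameter vector to a small spatial feature map,
+        # then upsample it with transposed convolutions.
+        self.fc = nn.Linear(num_params, 128 * 8 * 8)
+        self.deconv = nn.Sequential(
+            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),           # 8x8 -> 16x16
+            nn.ReLU(),
+            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),            # 16x16 -> 32x32
+            nn.ReLU(),
+            nn.ConvTranspose2d(32, img_channels, 4, stride=2, padding=1),  # 32x32 -> 64x64
+            nn.Sigmoid(),                                                  # pixel values in [0, 1]
+        )
+
+    def forward(self, params):
+        x = self.fc(params).view(-1, 128, 8, 8)
+        return self.deconv(x)
+
+model = ParamsToImage()
+params = torch.tensor([[0.2, 0.7, 0.1, 0.9, 0.5]])  # e.g. object positions + lighting value
+image = model(params)                                # shape: (1, 3, 64, 64)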
+"
+['image-processing']," Title: how produce image of face working with AIBody: I came across a https://generated.photos/ site that claims to produce images entirely by artificial intelligence.
+My question is how does this program work? What mechanism and libraries should I use if I want to do a project like this?
+"
+"['neural-networks', 'stochastic-gradient-descent', 'batch-learning']"," Title: Methodologies for passing the best samples for a neural network to learnBody: Just an idea I am sure I read in a book some time ago, but I can't remember the name.
+Given a very large dataset and a neural network (or anything that can learn via something like stochastic gradient descent, passing a subset of samples to modify the model, as opposed to learning from the whole dataset at once), one can train a model for, say, classification.
+The idea was a methodology for selecting the samples the model would learn the most from, so you can spare the network from examples that would cause only small changes to the model, thereby reducing computing time.
+I guess an easy methodology would first pick a sample that is similar to a previous one but has a different label, and leave the samples that are most similar in both features and label for last. Does that make sense?
+Is there a googleable keyword for what I am talking about?
+"
+"['machine-learning', 'long-short-term-memory', 'time-series', 'forecasting']"," Title: Time series forecasting for multiple objects with common featuresBody: I know the title of this question may raise an eyebrow, but I can't find the technical terms to define or investigate the actual problem.
+To demonstrate my problem with a simple hypothetical scenario: let's say you have a dataset pertaining to fruits!
+
+- The dataset contains $N$ fruits
+
+- Each fruit has properties ${\{p}\}$, for example, $p_1$ is type, $p_2$ is color, $p_3$ flavour. It is important to note that (i) these properties are communal across all fruits (all fruits have the properties above) and (ii) these properties are constant for each individual fruit (for a fruit,${\{p}\}_n$ stays constant over time).
+
+- Each fruit has a time series $\{W_t\}_n$ which relate to, for example, measured weight over time. It is important to note that the fruits aren't measured at regular, or the same intervals. Therefore, each fruit in the dataset will have a different weight time series.
+
+- Therefore the aggregated dataset will have $\sum_{n=1}^{N} dim(\{W_t\}_n)$ observations
+
+- Let's assume there is some hidden correlation between the weight for a fruit $\{W_t\}_n$ over time and the fruit properties ${\{p}\}$.
+
+
+So the problem is: What model(s) can we use that are able to predict the next weight values $\{W_{(t+1)}\}$? More formally stated: $f(\{p\}_n,\{W_t\}_n) = \{W_{(t+1)}\}_n$ ?
+The challenges here are:
+
+- We want to maintain the 'uniqueness' of each fruit, that is, we can't simply say if two fruits have the same properties ${\{p}\}$ then they will have the same weight changes over time. To conceptualize this, imagine things happen to certain fruits during their life time, the model is supposed to remember this has happened to those specific fruits and incorporate that into the prediction.
+- Our measurement device was bought at IKEA and sometimes it provides inaccurate readings, so we can't expect a linear or smooth weight time series per fruit.
+- We don't have a lot of weight measurements, let's say 10 on average, but we have a lot of fruits, let's say 100 000.
+
+I have some experience with vanilla and stacked LSTM's. However, I struggle to consolidate my understanding of LSTM's in the abovementioned scenario.
+Thank you for reading. I hope this will get the creative juices flowing, or give you a fun mental challenge at least.
+"
+"['math', 'pytorch', 'resource-request', 'pooling']"," Title: Is there any closed form analytical expression to represent fractional max pooling?Body: There are Nineteen types of pooling layers in PyTorch.
+Almost all of the layers are provided with corresponding analytical formulae, but no analytical formula is provided for the fractional max-pooling layers. Instead, they point to this research paper for understanding fractional max pooling. So, I think it may be complex for a newcomer to understand fractional max pooling.
+Is there any closed-form analytical formula available for fractional max pooling, like for most of the other pooling layers? If not, is there any simple pseudo-code or visual (diagram or animation) available for this layer?
+"
+"['image-generation', 'metric', 'similarity']"," Title: Is there any metric for calculating how natural a single image is given a dataset of the same class images?Body: Suppose there is a dataset $D$ of images. We have enough number $n$ of images in the dataset and all the images are of a single class.
+Suppose I generated a new image $I$, which is not present in the given dataset, of the same class using a generator neural network. I want to calculate how natural the image $I$ is wrt the dataset $D$
+$m(I, D) = $ how natural the image $I$ with respect to dataset $D$ of images.
+I don't want metrics that are applied to a bunch of generated images. I have only one generated image.
+
+I came up with a naive metric
+$m(I, D) = \sum\limits_{x \in D} (x-I)^2 $
+where $x-I$, difference between two images, is defined as the sum of pixel differences of both the images i.e., $$x-I = \sum\limits_{x_i \in x, I_i \in I} \|x_i - I_i\|$$
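+As a rough illustration, this is how I would compute the above naive metric with NumPy (assuming the dataset is an array of images that all have the same shape as $I$):
+import numpy as np
+
+def naive_naturalness(I, D):
+    """Sum over the dataset of squared per-image pixel differences."""
+    # x - I for each dataset image x: sum of absolute pixel differences
+    per_image_diff = np.abs(D - I).sum(axis=tuple(range(1, D.ndim)))
+    return np.sum(per_image_diff ** 2)
+
+# toy usage with random data
+D = np.random.rand(100, 32, 32, 3)   # dataset of n = 100 images
+I = np.random.rand(32, 32, 3)        # generated image
+print(naive_naturalness(I, D))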
+But, this measure shows how similar the new image $I$ is to the set of images in my dataset at the pixel level. I want a measure of how natural it is.
+"
+"['pytorch', 'filters', 'max-pooling', 'stride']"," Title: What is the fundamental difference between max pooling and adaptive max pooling used in PyTorchBody: PyTorch provides max pooling and adaptive max pooling.
+Both max pooling and adaptive max pooling are defined in three dimensions: 1d, 2d and 3d. For simplicity, I am only discussing 1d in this question.
+For max pooling in one dimension, the documentation provides the formula to calculate the output.
+
+In the simplest case, the output value of the layer with input size
+$(N,C,L)$ and output $(N,C,L_{out})$ can be precisely described as:
+$$out(N_i,C_j,k) = \max\limits_{m=0, \cdots ,kernel\_size−1} input(N_i,C_j,stride×k+m)$$
+
+But, adaptive max pooling has no detailed explanation in the documentation.
+What is the fundamental difference between max pooling and adaptive max pooling? Max pooling expects kernel_size and stride as inputs, but adaptive max pooling does not expect them and asks only for the output size. Does it use a kernel and stride for performing the operation? If yes, how does it calculate both?
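+For reference, here is a rough sketch of how I imagine the windows could be derived from the output size alone (this is only my own guess, not something stated in the documentation):
+import math
+
+def adaptive_max_pool1d_sketch(x, output_size):
+    """For each output index i, pool over the window
+    [floor(i * L / output_size), ceil((i + 1) * L / output_size))."""
+    L = len(x)
+    out = []
+    for i in range(output_size):
+        start = math.floor(i * L / output_size)
+        end = math.ceil((i + 1) * L / output_size)
+        out.append(max(x[start:end]))
+    return out
+
+print(adaptive_max_pool1d_sketch([1, 3, 2, 7, 5, 4, 9, 0], output_size=3))  # [3, 7, 9]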
+"
+"['deep-learning', 'tensorflow', 'reference-request', 'keras', 'pytorch']"," Title: Are there any stats available on the usage of libraries by deep learning researchers?Body: I know three Python libraries that are popular in deep learning research community: Keras, PyTorch, Tensorflow. I don't know much about Theano.
+This question is not about the efficiency, flexibility or ease of the library for its users. This question is about the usage of the library by the deep learning (academic, research) community.
+Which library is used by most of the contemporary researchers? Is there any comparison or stats available among the libraries, based on GitHub implementations or by some other means?
+"
+"['reinforcement-learning', 'comparison']"," Title: What is the advantage of RL compared with my simple classic algorithm for the MountainCarEnv?Body: What is the advantage of RL compared with the following simple classic algorithm for the MountainCarEnv
? Considering that it takes a long time to train the agent just to achieve this simple task?
+import gym
+
+envName = 'MountainCar-v0'
+env = gym.make(envName)
+
+x, v = state = env.reset()
+done = False
+maxPotential = False
+steps = 0
+
+def computeAction(state):
+ global x, v, maxPotential, steps
+ xNew, vNew = state
+ action = 1
+ if xNew < -1.1:
+ maxPotential = True
+ if not maxPotential:
+ if xNew < x: action = 0
+ else: action = 2
+ else:
+ action = 2
+ x, v = xNew, vNew
+ steps += 1
+ return action
+
+while not done:
+ state, reward, done, info = env.step(computeAction(state))
+ env.render()
+
+print('steps', steps)
+
+
+- result: around 100 steps
+
+"
+"['computer-vision', 'comparison', 'transformer', 'attention']"," Title: In Computer Vision, what is the difference between a transformer and attention?Body: Having been studying computer vision for a while, I still cannot understand what the difference between a transformer and attention is?
+"
+"['neural-networks', 'reinforcement-learning', 'backpropagation', 'hyperparameter-optimization', 'meta-learning']"," Title: Why doesn't anyone use reinforcement learning to find the best possible alternative to backpropagation?Body: To be clear, I'm very uninformed on the topic of alternative learning algorithms to backprop, all my knowledge comes from articles like these:
+lets-not-stop-at-backprop
+backprop-alternatives
+we-need-a-better-learning-algorithm. I also don't know exactly how you would arrange a system to find the best learning algorithm it can, or if it's even possible to make something like that with reinforcement learning.
+I was thinking that you could take a system and have it generate neural nets in the space of all neural nets and generate rules for how to deal with the weights in the network, and then just let the system run trying to find the best possible arrangement of neurons and training rules such that it can learn how to do x thing very very fast, with very little training.
+Is this something that has already been tried or something that isn't possible?
+"
+"['machine-learning', 'unsupervised-learning', 'feature-selection', 'principal-component-analysis']"," Title: Is there a way to select the subset of most important features using PCA?Body: Is there a way to select the most important features using PCA? I am not looking for the principal components with the highest scores but a subset of the original features.
+"
+"['machine-learning', 'papers', 'objective-functions', 'decision-trees', 'gradient-boosting']"," Title: Why is the exponential loss used in this case?Body: I am reading the paper Tracking-by-Segmentation With Online Gradient Boosting Decision Tree. In Section 2.1, the paper says
+
+Given training examples, $\left\{\left(\mathbf{x}_{i}, y_{i}\right) \mid \mathbf{x}_{i} \in \mathbb{R}^{n}\right.$ and $y_{i} \in$ $\mathbb{R}\}_{i=1: N}, f(\cdot)$ is constructed in a greedy manner by selecting parameter $\theta_{j}$ and weight $\alpha_{j}$ of a weak learner iteratively to minimize an augmented loss function given by
+$$
+\mathcal{L}=\sum_{i=1}^{N} \ell\left(y_{i}, f\left(\mathbf{x}_{i}\right)\right) \equiv \sum_{i=1}^{N} \exp \left(-y_{i} f\left(\mathbf{x}_{i}\right)\right)
+$$
+where an exponential loss function is adopted ${ }^{1}$. The greedy optimization procedure is summarized in Algorithm 1.
+
+I cannot understand the exponential loss function. In my opinion, the loss function should get its smallest value when $y_i=f(x_i)$. But the loss function above obtains a smaller value when $-y_i f(x_i)$ becomes smaller.
+"
+['terminology']," Title: Is my understanding about the number of iterations correct?Body: Per google machine-learning glossary, when I have 100 training examples and update my model for each training example, if I train my model 5 epochs without early-stop, there are 500 iterations in total, is my understanding correct?
+"
+"['deep-learning', 'natural-language-processing', 'attention', 'bert']"," Title: Isn't attention mask for BERT model useless?Body: I have just dived into deep learning for NLP, and now I'm learning how the BERT model works. What I found odd is why the BERT model needs to have an attention mask. As clearly shown in this tutorial https://huggingface.co/transformers/glossary.html:
+from transformers import BertTokenizer
+tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
+
+sequence_a = "This is a short sequence."
+sequence_b = "This is a rather long sequence. It is at least longer than the sequence A."
+
+encoded_sequence_a = tokenizer(sequence_a)["input_ids"]
+encoded_sequence_b = tokenizer(sequence_b)["input_ids"]
+
+padded_sequences = tokenizer([sequence_a, sequence_b], padding=True)
+
+Output of padded sequences input ids:
+padded_sequences["input_ids"]
+
+[[101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]]
+
+Output of padded sequence attention mask:
+padded_sequences["attention_mask"]
+[[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
+
+In the tutorial, it clearly states that an attention mask is needed to tell the model (BERT) which input ids need to be attended and which not (if an element in attention mask is 1 then the model will pay attention to that index, if it is 0 then model will not pay attention).
+The thing I don't get is: why does BERT have an attention mask in the first place? Doesn't the model need only the input ids, since you can clearly see that the attention_mask has zeros at exactly the indices where the input_ids are zero (the padding)? Why does the model need an additional layer of difficulty added?
+I know that BERT was created in google's "super duper laboratories", so I think the creators had something in their minds and had a strong reason for creating an attention mask as a part of the input.
+"
+"['neural-networks', 'training', 'reference-request', 'forward-pass']"," Title: Is there any existing mechanism that allows us to pass input from randomly selected layers of neural network per iteration?Body: Consider the following neural network with $\ell$ layers.
+$$i_0 \rightarrow h_1 \rightarrow h_2 \rightarrow h_3 \cdots \rightarrow h_{\ell-1} \rightarrow o_{\ell} ,$$
+where $i, h, o$ stands for input, hidden and output layer respectively.
+In general, an input passes from $i_0$ to $o_{\ell}$, which is known as the forward pass. And then the weight updating happens from $o_{\ell}$ to $i_0$ which is called backward pass.
+I want to know whether the following mechanism exists in literature assuming that all layers have the same input and output dimensions.
+For each iteration
+
+- Select a subset $L \subseteq \{0, 1, 2, 3, \cdots, \ell\}$ randomly.
+- Input passes through layers whose indices are present in $L$ only, i.e., the forward pass happens by dropping some layers.
+- Update weights for layers whose indices are in $L$, i.e., update the weights of the layers that participated in step (2).
+
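+To make the mechanism above concrete, here is a minimal toy sketch of what I mean (assuming every layer maps vectors of size $d$ to vectors of size $d$):
+import random
+import torch
+import torch.nn as nn
+
+d, num_layers = 16, 6
+layers = nn.ModuleList([nn.Linear(d, d) for _ in range(num_layers)])
+optimizer = torch.optim.SGD(layers.parameters(), lr=0.01)
+
+def one_iteration(x, target):
+    # 1. Select a random subset of layer indices.
+    k = random.randint(1, num_layers)
+    subset = sorted(random.sample(range(num_layers), k))
+    # 2. Forward pass only through the selected layers.
+    h = x
+    for i in subset:
+        h = torch.relu(layers[i](h))
+    loss = ((h - target) ** 2).mean()
+    # 3. Backpropagation produces gradients (and hence updates) only for the layers that participated.
+    optimizer.zero_grad()
+    loss.backward()
+    optimizer.step()
+    return loss.item()
+
+x, target = torch.randn(8, d), torch.randn(8, d)
+print(one_iteration(x, target))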
+What is the name of the technique mentioned above, if it is present in literature?
+"
+"['deep-learning', 'natural-language-processing', 'long-short-term-memory']"," Title: How will actual labels be matched with predicted labels when LSTM discards data even from current time stamp input data?Body: I read the tutorial of LSTM from here. However, I have certain doubts that I need to address.
+
+- Since we use true labels and do not remove anything from the original data, then how is it possible for the LSTM model's predicted output to match the real labels as it throws data?
+
+- And how do we determine the number of output neurons?
+
+
+According to my understanding, in word-to-word prediction, one cell's outputs are the number of words (exiting in vocabulary).
+"
+"['deep-learning', 'datasets', 'pytorch', 'batch-normalization', 'data-augmentation']"," Title: Batch normalization for multiple datasets?Body: I am working on a task of generating synthetic data to help the training of my model. This means that the training is performed on synthetic + real data, and tested on real data.
+I was told that batch normalization layers might be trying to find weights that are good for all while training, which is a problem since the distribution of my synthetic data is not exactly equal to the distribution of the real data. So, the idea would be to have different 'copies' of the weights of batch normalization layers. So that the neural network estimates different weights for synthetic and real data, and uses just the weights of real data for evaluation.
+My question is, how to perform batch normalization in the aforementioned case? Is there already an implementation of batch norm layers in PyTorch that solves the problem?
+"
+['deep-learning']," Title: Not able to find a good fit for a simple function with neural networksBody: I have been trying to adjust a neural network to a simple function: the mass of an sphere.
+I have tried with different architectures, for example, a single hidden layer and two hidden layers, always with 128 neurons each, and training them for 5000 epochs.
+The code is the usual one. Just in case, I publish one of them
+model = keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])
+ ,keras.layers.Dense(128, activation="relu")
+ ,keras.layers.Dense(1, activation="relu")])
+model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
+history = model.fit(x, y, validation_split=0.2, epochs=5000)
+
+The results are shown in the graphs.
+
+
+I suspect that I am making an error somewhere, because I have seen that deep learning is able to match complex functions with far fewer epochs. I would appreciate any hint to fix this problem and obtain a good fit with the deep learning function.
+In order to make it clear I post the graph's code.
+rs =[x for x in range(20)]
+def masas_circulo(x):
+ masas_circulos =[]
+ rs =[r for r in range(x)]
+ for r in rs:
+ masas_circulos.append(model.predict([r])[0][0])
+
+ return masas_circulos
+
+masas_circulos = masas_circulo(20)
+masas_circulos
+esferas = [4/3*np.pi*r**3 for r in range(20)]
+import matplotlib.pyplot as plt
+plt.plot(rs,masas_circulos,label="DL")
+plt.plot(rs,esferas,label="Real");
+plt.title("Mass of an sphere.\nDL (1hl,128 n,5000 e) vs ground_truth")
+plt.xlabel("Radius")
+plt.ylabel("Sphere")
+plt.legend();
+
+"
+"['deep-learning', 'convolutional-neural-networks', 'training', 'recurrent-neural-networks', 'pytorch']"," Title: Feeding the output back to input in 3D CNN modelBody: I am currently designing a Model which takes Input 3D Grid and Model Output at $t-1$. The model figure is described below
+
+I have two thoughts in training the model for above situation.
+
+- Feed output $t-1$ from ground truth with some noise. And, during testing feed output of the model as previous output at $t-1$. Maybe we can fine tune with model output when training loss is sufficiently low.
+
+- Feed model output to next stage as $t-1$ output. But I am not sure if this works.
+
+
+The situation is similar to RNNs, but I am using 3D CNNs here. I don't know if an RNN can be used here. How can I train such a model?
+"
+"['model-based-methods', 'exploration-exploitation-tradeoff', 'dynamic-programming']"," Title: Is there a notion of exploration-exploitation tradeoff in dynamic programming (or model-based RL)?Body: Is there a notion of exploration-exploitation tradeoff in dynamic programming (or model-based RL)?
+"
+"['reinforcement-learning', 'deep-learning', 'deep-rl', 'image-processing']"," Title: What do equations 1 and 3 describe in the ""Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels"" paper?Body: This paper uses image augmentation to improve RL algorithms. It contains the following paragraph -
+"Our approach, DrQ, is the union of the three separate regularization mechanisms introduced above:
+
+- transformations of the input image (Section 3.1).
+- averaging the Q target over K image transformations (Equation (1)).
+- averaging the Q function itself over M image transformations (Equation (3))."
+
+I do not understand how part 2 and 3 (Equation 1 and 3) and would highly appreciate some detailed elaboration on it.
+Here are the equations -
+
+
+"
+"['machine-learning', 'deep-learning', 'semantic-segmentation']"," Title: What to do when model stops learning after some epochsBody: I am training a segmentation model on 3D data, after around 170 epochs which took around 4 days, I notice the model is no more learning and the dice score is at 0.51. What is the best approach at this point to keep model learning?
+Learning rate: 1e-4
+Batch size: 6
+optimizer: adamw
+loss: generalized dice loss
+PS: there is more augmentation in the training data than in the validation data; this might be the reason the validation loss curve is below the training loss curve.
+
+
+"
+"['neural-networks', 'convolutional-neural-networks', 'recurrent-neural-networks', 'padding']"," Title: Is reconciling shape discrepancies the only purpose of padding?Body: Padding is a technique used in some of the domains of artificial intelligence.
+Data is generally available in different shapes. But in order to pass the data as input to a model in deep learning, the model allows only a particular shape of data to pass through it. And hence there is a need to allow padding in case if the input data shape contains dimensions that are less than the dimensions of the input of the model under consideration. For example, we pad input sentences in RNN to match the input shape of the RNN model. Sometimes we pad the input data in order to make a desired shape output. For example, padding is used in convolution operation to keep the size of feature maps intact.
+Is handling this type of shape issues is the only purpose of padding? If no, what are the other purposes of padding that are not related to the shaping requirements of data?
+"
+"['comparison', 'terminology', 'image-generation']"," Title: Is there any difference between ""image generation"" and ""image synthesis""?Body: Generative Adversarial networks (aka GANs) are used for image generation. The phrase image synthesis is also used in literature.
+I know that the phrase image generation stands for
+
+An act of generating an image
+
+The formal definition for image synthesis is given by
+
+Image synthesis is the process of artificially generating images that
+contain some particular desired content.
+
+The only difference I can notice is that image synthesis is a focused image generation. The focus is on the parts of the image generated.
+But, I have an issue with the word synthesis. The word "synthesis" has the following meanings
+
+- The combination of components or elements to form a connected whole. Often contrasted with analysis
+- The production of chemical compounds by reaction from simpler materials.
+- (in Hegelian philosophy) the final stage in the process of dialectical reasoning, in which a new idea resolves the conflict between thesis and antithesis.
+
+It gives me a sense that I need to use the phrase "image synthesis" if I am generating an image by combining some simpler elements, which is not exactly the same as the focused sense given in the formal definition of image synthesis.
+Why does the word "synthesis" is used in the phrase "image synthesis"? Are we synthesizing/combining anything that does not happen in other variants of image generation?
+"
+"['machine-learning', 'reference-request', 'terminology', 'regularization']"," Title: Does regularization just mean using an augmented loss function?Body: We need to use a loss function for training the neural networks.
+In general, the loss function depends only on the desired output $y$ and actual output $\hat{y}$ and is represented as $L(y, \hat{y})$.
+As per my current understanding,
+
+Regularization is nothing but using a new loss function
+$L'(y,\hat{y})$ which must contain a $\lambda$ term (formally called
+as regularization term) for training a neural network and can be
+represented as
+$$L'(y,\hat{y}) = L(y, \hat{y}) + \lambda \ell(.) $$
+where $\ell(.)$ is called regularization function. Based on the
+definition of function $\ell$ there can be different regularization
+methods.
+
+Is my current understanding complete? Or is there any other technique in machine learning that is also considered a regularization technique? If yes, where can I read about that regularization?
+"
+"['neural-networks', 'machine-learning', 'backpropagation']"," Title: How to create a neural network from a set of equations?Body: Say I have these equations:
+$$x_1 = x_2 + 2y_1 + b$$
+$$x_2 = y_2 + c$$
+$$y_1 = z + a$$
+$$y_2 = y_3 + d$$
+$$z = z_1 + e$$
+$x_1$ depends on $x_2$ (depends on $y_2$ (depends on $y_3$)) and $y_1$ (depends on $z$ (depends on $z_1$)).
+$x_1$ is my final equation and $y_3$ and $z_1$ are my initial variables.
+How do I represent them in a neural network? My final aim is to backtrack from $x_1$, and see what change of an amount $n$ in $x_1$ resulted from which of $y_3$ or $z_1$.
+All these variables are item prices in the real world.
+My inputs are $z_1$ and $y_3$ and my output is $x_1$. $z_1$ and $y_3$ are prices and the final output $x_1$ is also a price.
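+To illustrate the structure, here is a small sketch (using PyTorch autograd, with made-up values for the constants $a, \dots, e$ and the inputs) that builds $x_1$ from $z_1$ and $y_3$ and reads off how sensitive $x_1$ is to each input:
+import torch
+
+# constants (made-up values)
+a, b, c, d, e = 1.0, 2.0, 0.5, 1.5, 0.3
+
+# initial variables (the inputs)
+z1 = torch.tensor(10.0, requires_grad=True)
+y3 = torch.tensor(4.0, requires_grad=True)
+
+# intermediate equations
+z  = z1 + e
+y1 = z + a
+y2 = y3 + d
+x2 = y2 + c
+x1 = x2 + 2 * y1 + b   # final equation
+
+x1.backward()
+print(x1.item())        # the final price
+print(z1.grad.item())   # d x1 / d z1 = 2 (through y1)
+print(y3.grad.item())   # d x1 / d y3 = 1 (through x2)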
+"
+"['reinforcement-learning', 'function-approximation', 'weights', 'temporal-difference-methods', 'finite-markov-decision-process']"," Title: Recursive Least squares (RLS) for mini batchBody: For my application I am considering a learning problem where I simulate a bunch of episodes say '$n$' first, and than carry out the recursive least squares update. Similar to $TD(1)$.
+I know that RLS can be used to update parameters being learned as they arrive. This can be done efficiently for single data point and the derivations are easily available online and also easy to understand.
+However for my case I am looking for same equations when data arrive as a mini batch and not a single data point at a time. I could not find any material regarding RLS for mini batches.
+According to my understanding the same equations can be also used by appropriately considering matrix dimensions. However I do not know if this is valid.
+What are the alternatives to be used?
+"
+"['natural-language-processing', 'datasets', 'data-labelling']"," Title: An online editor that allows data labeling formatBody: I have a set of students (~20) that will work on annotating data for an NLP project.
+The annotation task will be as in the following:
+text: I like this piza place.
+label: [pos, neg]
+comments:
+text fluency: [1,2,3,4,5]
+
+The students will need to correct the text first (e.g. correcting the word piza), and then fill in the fields below.
+Is there an online solution to add the data in this format and then to share the link with the students?
+I tried to do this in Google Forms, but I wasn't able to; I actually don't know if it's possible there.
+I am looking for a solution that can allow saving the edits after annotating # instances, as there are many instances and the students won't be able to annotate everything at once. I know that a good solution would be building a website, but I am looking for something that already exists.
+"
+"['tensorflow', 'pytorch', 'transformer', 'time-series', 'positional-encoding']"," Title: Positional Encoding in Transformer on multi-variate time series data hurts performanceBody: I set up a transformer model that embeds positional encodings in the encoder. The data is multi-variate time series-based data.
+As I just experiment with the positional encoding portion of the code I set up a toy model: I generated a time series that contains the log changes of a sine function and run a classification model that predicts whether the subsequent value is positive or negative. Simple enough. I also added a few time series with random walks to try to throw off the model.
+Predictably, the model very quickly reaches a categorical accuracy of around 99%. Without positional encoding that happens already in the 3rd epoch. However, with positional encoding (I use the same implementation as proposed in the "Attention is all you need" paper), it takes over 100 epochs to reach a similar accuracy level.
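+For reference, this is essentially what I mean by using the same implementation as in the paper (a plain NumPy sketch of the sin/cos encoding; the shapes here are arbitrary):
+import numpy as np
+
+def positional_encoding(seq_len, d_model):
+    """PE[pos, 2i] = sin(pos / 10000^(2i/d_model)), PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model))."""
+    pos = np.arange(seq_len)[:, None]
+    i = np.arange(d_model)[None, :]
+    angle = pos / np.power(10000, (2 * (i // 2)) / d_model)
+    pe = np.zeros((seq_len, d_model))
+    pe[:, 0::2] = np.sin(angle[:, 0::2])
+    pe[:, 1::2] = np.cos(angle[:, 1::2])
+    return pe
+
+# added to the embedded input values, as in the paper
+x = np.random.randn(128, 64)            # (time steps, model dimension)
+x = x + positional_encoding(128, 64)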
+So, clearly, all else being equal, learning with positional encoding takes much longer to reach an equal accuracy level than without positional encoding.
+Has anyone witnessed similar observations? Apparently adding the positional encodings to the actual values seems to confuse the model. I have not tried concatenations yet. Any advice?
+Edit: Or does it simply mean that learned positional encodings perform better than sin/cos encodings? I have not made any special provisions to encourage learned positional encodings, I simply either added the positional encodings to the actual values or I did not.
+"
+"['neural-networks', 'machine-learning', 'backpropagation', 'feedforward-neural-networks', 'relu']"," Title: ReLU function converging to local optimum in one case and diverging in the other oneBody: I implemented a simple neural network with 1 hidden layer. I used ReLU as activation function for the hidden layer and the output layer just uses the linear function.
+To check my implementation I tested my neural network with following architecture:
+Input Layer: 5 nodes
+Hidden Layer: 2 nodes (ReLU)
+Output Layer: 1 node (Linear Combination)
+
+Learning Algorithm: Batch Gradient Descent
+Error: Squared Error
+
+I trained the neural network for 1000 times over the same input and target output:
+Input: [[1, 2, 3, 4, 5], [1, 2, 3, 4, 6]]
+Target Output: [[15], [16]]
+
+I expected the network to learn the sum function. However, the network ended up learning a constant function i.e weights were all negative for first layer and the bias values were negative numbers, thus application of ReLU function to it resulted in all 0's. Thus the output was simply the bias values for output layer which was 15.5
+How should I interpret the above written results? I could think of a few reasons:
+
+- Should I consider that the network converged to a local optimum?
+- My test dataset (synthetic) was very poor. Had there been negative numbers, I could have ended up with better results?
+
+I tried to verify the 2nd point but it so happened that the results became no better. I used:
+Input: [[1, 2, 3, 4, 5], [-1, -2, -3, -4, -6]]
+Target Output: [[15], [-16]]
+
+It so happened that the neural network was able to evaluate both the training inputs accurately i.e 15 and -16. However, still it outputs 15 for case [1, 2, 3, 4, 6] instead of expected 16 as the weights for first layer are negative.
+This made me believe that my training dataset is poor but then I tried training on a 1000 random test inputs, and the results were very poor. The weights became very large. I really can't understand what the problem is. I doubt that there might be some error in my implementation.
+Another observation was:
+I initialized the weights and biases to optimal values i.e values that correspond to sum function:
+ 'W': [[ 1., -1.],
+ [ 1., -1.],
+ [ 1., -1.],
+ [ 1., -1.],
+ [ 1., -1.]]
+ 'b': [0., 0.]
+
+ 'W': [[ 1.],
+ [-1.]]
+ 'b': [0.]
+
+I ran the training on that 1000 length training set but there was no effect on parameters as the error was in any case 0. Why wasn't my neural network able to learn these parameters.
+For reference this is my code for neural network (hard coded for 3 layer network):
+class NeuralNetwork:
+ def __init__(self, layers, alpha):
+ self.num_layers = len(layers) # has to be 3
+ self.layers = layers
+ self.alpha = alpha
+ self.weights = [{'W': None, 'b': None} for i in range(self.num_layers - 1)]
+ for i in range(self.num_layers - 1):
+ self.weights[i]['W'] = np.array([[np.random.normal(0, np.sqrt(2/layers[i])) for ii in range(layers[i+1])] for jj in range(layers[i])])
+ self.weights[i]['b'] = np.array([np.random.normal(0, np.sqrt(2/layers[i])) for ii in range(layers[i+1])])
+
+ def evaluate(self, input_feature):
+ psi = input_feature @ self.weights[0]['W'] + self.weights[0]['b']
+ x = np.maximum(psi, 0)
+ y = x @ self.weights[1]['W'] + self.weights[1]['b']
+ return y
+
+ def update_weights(self, training_input, target_output):
+ training_output = self.evaluate(training_input)
+
+ dely = target_output - training_output
+
+ db1 = np.sum(dely, axis = 0)
+ dw1 = np.sum(a*dely, axis = 0).T
+
+ da = dely @ (self.weights[1]['W'].T)
+ z = training_input @ self.weights[0]['W'] + self.weights[0]['b']
+ dz = np.maximum(z, 0) * da
+
+ db0 = np.sum(dz, axis = 0).T
+ dw0 = training_input.T @ dz
+
+ self.weights[0]['W'] += self.alpha * dw0
+ self.weights[0]['b'] += self.alpha * db0
+ self.weights[1]['W'] += self.alpha * dw1
+ self.weights[1]['b'] += self.alpha * db1
+
+"
+['machine-learning']," Title: What is the meaning of R2 appearing as a negative in the RandomForestRegressor?Body: Machine learning model was created by reading an Excel file where data was stored. I applied RandomForestRegressor to create a model that predicts the size of the sieve particles according to pressure, but the value of R2 is too large negative. I found out through googling that R2 can be negative, but I don't know what it means to have such a large negative. When I applied the same amount of different data to the designed model, R2 showed a result that was close to 1, but I don't know why this data is only large negative. RMSE and SCORE scores are good, but I don't understand if only R2 scores are bad... I would appreciate it if you could let me know what is the problem and what to consider.
+My Data(Capture Image):
+
+
+My Code:
+import pandas as pd
+import numpy as np
+import sklearn
+from sklearn.model_selection import train_test_split
+from sklearn.ensemble import RandomForestRegressor
+from sklearn.ensemble import RandomForestClassifier
+from sklearn import metrics
+from google.colab import drive
+from sklearn.metrics import mean_squared_error
+from sklearn.metrics import r2_score
+
+drive.mount('/gdrive', force_remount=True)
+
+data = pd.read_csv(r"/gdrive/MyDrive/Coal_Inert_Case/Inert_Case_1.csv")
+
+x =data[['press1', 'press2', 'press3', 'press4', 'press5', 'press6', 'press7', 'press8', 'press9', 'press10', 'press11', 'press12', 'press13',
+ 'press14', 'press15', 'press16', 'press17', 'press18', 'press19', 'press20', 'press21',
+ 'press22', 'press23', 'press24', 'press25', 'press26', 'press27', 'press28', 'press29',
+ 'press30', 'press31', 'press32', 'press33', 'press34', 'press35', 'press36', 'press37',
+ 'press38', 'press39', 'press40', 'press41', 'press42', 'press43', 'press44', 'press45',
+ 'press46', 'press47', 'press48', 'press49', 'press50', 'press51', 'press52', 'press53']]
+
+
+y = data[['Sieve 16000', 'Sieve 11000', 'Sieve 8000', 'Sieve 5600', 'Sieve 4000',
+ 'Sieve 2800', 'Sieve 2000', 'Sieve 1400', 'Sieve 1000', 'Sieve 710',
+ 'Sieve 500', 'Sieve 355', 'Sieve 250', 'Sieve 180', 'Sieve 125',
+ 'Sieve 90', 'Sieve 63', 'Sieve 44', 'Sieve 31', 'Sieve 0']]
+
+X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.3,random_state= 42)
+
+forest = RandomForestRegressor(n_estimators=1000,random_state= 42)
+forest.fit(X_train, y_train)
+y_pred = forest.predict(X_test)
+
+mse = mean_squared_error(y_test, y_pred)
+rmse = np.sqrt(mse)
+r2_y_predict = r2_score(y_test, y_pred)
+
+print("RMSE:", rmse)
+print("R2 : ", r2_y_predict)
+
+The output (RMSE, R2) is:
+RMSE: 0.6913667737213217
+R2 : -7294765918428.414
+
+"
+"['deep-learning', 'objective-functions', 'math', 'optimization', 'gradient-descent']"," Title: What are the necessary mathematical properties to be a loss function in gradient based optimizations?Body: Loss functions are used in training neural networks.
+I am interested in knowing the mathematical properties that are necessary for a loss function to participate in gradient descent optimization.
+I know some possible candidates that may decide whether a function can be a loss function or not. They include
+
+- Continuous at every point in $\mathbb{R}$
+- Differentiable at every point in $\mathbb{R}$
+
+But, I am not sure whether these two properties are necessary for a function to become a loss function.
+Are these two properties necessary? Are there any other mathematical properties that are necessary for a function to become a loss function to participate in gradient descent optimization?
+Note that this question is not asking for recommended properties for a loss function. Asking only the mandatory properties in a given context.
+"
+"['terminology', 'definitions']"," Title: Does ""fusion"" in ""feature fusion"" has any formal definition?Body: I encountered the phrase "fusing features" several times in the literature. I am providing an excerpt from a research paper to provide context for usage of the word fusion.
+
+The reason is that the signals measured by multiple sensors are
+disordered and correlated with multiple sources. Those methods that
+are proposed with an attempt to use multiple data sources are called
+data fusion techniques. Upon the position where the fusion
+operation is conducted, there are three general approaches:
+signal-level fusion, feature-level fusion, and decision-level fusion.
+
+I am guessing that "fusing features" refers to an act of combining several features, from different domains, and then generating new features that serves the purpose of fusing.
+If yes, the word "fusion" here refers to its common English usage
+
+The process or result of joining two or more things together to form a
+single entity.
+
+That is, we need to combine multiple features in some manner and then come up with new features that are good enough to perform our AI task.
+Or does it have a formal definition and requirements based on the input or output features? Is there any formal definition of a fusion operator?
+"
+"['neural-networks', 'deep-learning', 'training', 'hyperparameter-optimization', 'learning-rate']"," Title: Can the optimal learning rate differ for different architectures?Body: In several courses and tutorials about neural networks, people often say that the learning rate (LR) should be the first hyper-parameter to be tuned before we tweak the others. For example, in this lecture
+(minute 59:55), the lecturer says that the learning rate is the first hyper-parameter that he tunes.
+However, is it possible that the optimal learning rate is different for different architectures (for example, a different number of layers and neurons)? Or maybe the LR is architecture-independent and it depends only on the characteristics of the particular dataset we train our model on?
+Moreover, should the LR be searched in the same process (e.g. grid-search) as the other hyper-parameters?
+"
+"['multi-agent-systems', 'rule-based-systems']"," Title: How to implement a rule-based decision maker for an agent-based model?Body: I had no idea that there is a stack exchange community for A.I. :-/ So I repost this question here in hope of some guidelines. I tried to delve into the materials discussed in AI: A Modern Approach course book, I am struggling to wrap my head around the model I'm trying to build without some code examples to aid me fill some gaps.
+I have a hard time understanding how to combine a rule-based decision making approach for an agent in an agent-based model I try to develop.
+The interface of the agent is a very simple one.
+public interface IAgent
+{
+ string ID { get; }
+
+ void Execute(IAgentMessage message,
+ IActionScheduler actionScheduler);
+}
+
+For the sake of the example, let's assume that the agents represent Vehicles which traverse roads inside a large warehouse, in order to load and unload their cargo. Their route (sequence of roads, from the start point until the agent's destination) is assigned by another agent, the Supervisor. The goal of a vehicle agent is to traverse its assigned route, unload the cargo, load a new one, receive another assigned route by the Supervisor and repeat the process.
+The vehicles must also be aware of potential collisions, for example at intersection points, and give priority based on some rules (for example, the one carrying the heaviest cargo has priority).
+As far as I can understand, this is the internal structure of the agents I want to build:
+
+So the Vehicle Agent can be something like:
+public class Vehicle : IAgent
+{
+ public VehicleStateUpdater VehicleStateUpdater { get; set; }
+
+ public RuleSet RuleSet { get; set; }
+
+ public VehicleState State { get; set; }
+
+ public void Execute(IAgentMessage message, IActionScheduler actionScheduler)
+ {
+ VehicleStateUpdater.UpdateState(VehicleState, message);
+ Rule validRule = RuleSet.Match(VehicleState);
+ VehicleStateUpdater.UpdateState(VehicleState, validRule);
+ validRule.Fire(this, VehicleState, actionScheduler);
+ }
+}
+
+For the Vehicle agent's internal state I was considering something like:
+public class VehicleState
+{
+ public Route Route { get; set; }
+
+ public Cargo Cargo { get; set; }
+
+ public Location CurrentLocation { get; set; }
+}
+
+For this example, 3 rules must be implemented for the Vehicle Agent.
+
+- If another vehicle is near the agent (e.g. less than 50 meters), then the one with the heaviest cargo has priority, and the other agents must hold their position.
+- When an agent reaches their destination, they unload the cargo, load a new one and wait for the Supervisor to assign a new route.
+- At any given moment, the Supervisor, for whatever reason, might send a command, which the recipient vehicle must obey (Hold Position or Continue).
+
+The VehicleStateUpdater must take into consideration the current state of the agent, the type of received percept and change the state accordingly. So, in order for the state to reflect that e.g. a command was received by the Supervisor, one can modify it as follows:
+public class VehicleState
+{
+ public Route Route { get; set; }
+
+ public Cargo Cargo { get; set; }
+
+ public Location CurrentLocation { get; set; }
+
+ // Additional Property
+ public RadioCommand ActiveCommand { get; set; }
+}
+
+Where RadioCommand can be an enumeration with values None, Hold, Continue.
+But now I must also register in the agent's state if another vehicle is approaching. So I must add another property to the VehicleState.
+public class VehicleState
+{
+ public Route Route { get; set; }
+
+ public Cargo Cargo { get; set; }
+
+ public Location CurrentLocation { get; set; }
+
+ public RadioCommand ActiveCommand { get; set; }
+
+ // Additional properties
+ public bool IsAnotherVehicleApproaching { get; set; }
+
+ public Location ApproachingVehicleLocation { get; set; }
+}
+
+This is where I have huge trouble understanding how to proceed, and I get the feeling that I am not really following the correct approach. First, I am not sure how to make the VehicleState class more modular and extensible. Second, I am not sure how to implement the rule-based part that defines the decision-making process. Should I create mutually exclusive rules (which means every possible state must correspond to no more than one rule)? Is there a design approach that will allow me to add additional rules without having to go back and forth to the VehicleState class and add/modify properties in order to make sure that every possible type of Percept can be handled by the agent's internal state?
+The examples I've seen in the Artificial Intelligence: A Modern Approach course book and in other sources are too simple for me to "grasp" the concept in question when a more complex model must be designed.
+I would be grateful if someone can point me in the right direction concerning the implementation of the rule-based part.
+I am writing in C# but as far as I can tell it is not really relevant to the broader issue I am trying to solve.
+An example of a rule I tried to incorporate:
+public class HoldPositionCommandRule : IAgentRule<VehicleState>
+{
+ public int Priority { get; } = 0;
+
+ public bool ConcludesTurn { get; } = false;
+
+
+ public void Fire(IAgent agent, VehicleState state, IActionScheduler actionScheduler)
+ {
+ state.Navigator.IsMoving = false;
+ //Use action scheduler to schedule subsequent actions...
+ }
+
+ public bool IsValid(VehicleState state)
+ {
+ bool isValid = state.RadioCommandHandler.HasBeenOrderedToHoldPosition;
+ return isValid;
+ }
+}
+
+A sample of the agent decision maker that I also tried to implement.
+public void Execute(IAgentMessage message,
+ IActionScheduler actionScheduler)
+{
+ _agentStateUpdater.Update(_state, message);
+ Option<IAgentRule<TState>> validRule = _ruleMatcher.Match(_state);
+ validRule.MatchSome(rule => rule.Fire(this, _state, actionScheduler));
+}
+
+"
+"['neural-networks', 'deep-learning', 'comparison', 'training-datasets', 'validation-datasets']"," Title: Why not make the training set and validation set one if their roles are similar?Body: If the validation set is used to tune the hyperparameters and the training set adjusts the weights, why don't they be one thing as they have a similar role, as in improving the model?
+"
+['reinforcement-learning']," Title: Reinforcement learning applicable to a scheduling problem?Body: I have a certain scheduling problem and I would like to know in general whether I can use Reinforcement learning (and if so what kind of RL) to solve it. Basically my problem is a mixed-integer linear optimization problem. I have a building with an electric heating device that converts electricity into heat. So the action vector (decision variable) is $x(t)$ which quantifies the electrical power of the heating device. The device has to take one decision for every minute of the day (so in total there are $24$ hours $\times 60$ minutes $= 1440$ variables). Each of those variables is a continuous variable and can have any value between $0$ and $2000 W$.
+The state space contains several continuous variables:
+
+- External varying electricity price per minute: Between $0$ Cents and $100$ Cents per kWh (amount of energy)
+- Internal temperature of the building: Basically between every possible value but there is a constraint to have the temperature between $20 °C$ and $22 °C$
+- Heat demand of the building: Any value between $0 W$ and $10.000 W$
+- Varying "efficiency" of the electrical heating device between $1$ and $4$ (depending on the external outside temperature)
+
+The goal is to minimize the electricity costs (under a flexible electricity tariff) and to not violate the temperature constraint of the building. As stated before, this problem can be solved by mathematical optimization (mixed-integer linear program). But I would like to know if you can solve this also with reinforcement learning? As I am new to reinforcement learning I would not know how to do this. And I have some concerns about this.
+Here I have a very large state space with continuous values, so I can't build a comprehensive $Q$-table as there are too many values. Further, I am not sure whether the problem is a dynamic programming problem (as most, or all, reinforcement learning problems are). From an optimization point of view it is a mixed-integer linear problem.
+Can anyone tell me if and how I could solve this by using RL? If it is possible I would like to know which type of RL method is suitable for this. Maybe Deep-Q-Learning but also some Monte-Carlo policy iteration or SARSA? Shall I use model-free or model-based RL for this?
+Reminder: Does nobody know whether and how I can use reinforcement learning for this problem? I'd highly appreciate every comment.
+Can nobody give me some more information on my issue? I'll highly appreciate every comment and would be quite thankful for more insights and your help.
+"
+"['deep-learning', 'terminology', 'math', 'definitions', 'tensor']"," Title: What is meant by an axis of a tensor?Body: Tensor is an ordered collection of elements. The elements are generally real numbers. Tensors are used in deep learning for storing data.
+There is a wide usage of the word "axis" related to tensor. Axes are not the same as indices, which are used to access the elements of a tensor. An axis is not the same as an element of a tensor.
+What exactly is an axis in a tensor? Is it a (sub-)tensor obtained from the actual tensor, or is it some other indexing mechanism? If so, why is it used?
+Suppose $a =[[1, 2, 3, 4],[5, 6, 7, 8],[9, 10, 11, 12],[13, 14, 15, 16]]$ is a tensor. Then what does axis do for $a$?
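+For example, the kind of usage I am asking about looks like this (NumPy is used only for illustration):
+import numpy as np
+
+a = np.array([[ 1,  2,  3,  4],
+              [ 5,  6,  7,  8],
+              [ 9, 10, 11, 12],
+              [13, 14, 15, 16]])
+
+print(a.shape)        # (4, 4) -> this tensor has two axes
+print(a.sum(axis=0))  # [28 32 36 40] -> sums along the first axis
+print(a.sum(axis=1))  # [10 26 42 58] -> sums along the second axis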
+"
+"['machine-learning', 'performance', 'metric']"," Title: How do I know what a good mean absolute error value is?Body: I have just run an MAE calculation for my machine learning models and the results show:
+
+SVM MAE = 28.850 deg.
+Random Forest MAE = 33.832 deg.
+
+How do I know what a good MAE value is? What is the range of the MAE?
+"
+"['comparison', 'terminology', 'definitions', 'activation-functions']"," Title: Is my understanding on ""smooth approximation"" correct?Body: Consider the following details regarding Softplus activation function
+
+$$\text{Softplus}(x) = \dfrac{\log(1+e^{\beta x})}{\beta}$$
+SoftPlus is a smooth approximation to the ReLU function and can be
+used to constrain the output of a machine to always be positive.
+
+It says that Softplus is a smooth approximation to the ReLU function. Let us consider the analytical form and plot of the RELU function.
+$$\text{ReLU}(x)=(x)^+=\max(0,x)$$
+
+The plot of Softplus function is
+
+If we observe both plots, we can see that Softplus is very similar to ReLU. Softplus has a property that ReLU does not have: ReLU is not differentiable at zero, and the derivative of ReLU is also not continuous.
+If we observe the behavior of Softplus, it is $n-$times continuously differentiable and hence a smooth function.
+Since Softplus is both a smooth function and approximates ReLU, it is considered as a smooth approximation of ReLU.
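+A quick numerical check of what I mean, showing that Softplus is close to ReLU away from zero and gets closer as $\beta$ grows:
+import numpy as np
+
+def softplus(x, beta=1.0):
+    return np.log(1.0 + np.exp(beta * x)) / beta
+
+for x in [-5.0, -0.5, 0.0, 0.5, 5.0]:
+    print(x, max(0.0, x), softplus(x, beta=1), softplus(x, beta=10))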
+Is my interpretation correct? If not, then what is meant by "smooth approximation" here?
+"
+"['neural-networks', 'deep-learning', 'terminology', 'activation-functions']"," Title: What does ""linear unit"" mean in the names of activation functions?Body: Activation functions, in neural networks, are used to introduce non-linearity. Many activation functions that are used in neural networks have the term "Linear Unit" in their full form. "Linear unit" can be abbreviated as LU.
+For example, consider some activation functions
+
+ELU - Exponential Linear Unit
+ReLU - Rectified Linear Unit
+................................................
+
+Why does the function name contain the term "Linear Unit"? What is meant by Linear Unit here? Is it saying anything about the nature of function under consideration?
+"
+"['reinforcement-learning', 'dqn', 'environment']"," Title: Creating DQN Learning Agent without Gym environment for a custom projectBody: In a project for college I created a simple turn based game, with up to 4 players that can either move or attack the opponents. The players are playing over the network, meaning the clients are supposed to be programmed AIs. The client itself is fully functional, meaning it has all the game logic and can simulate complete games.
+Now my task is to create an RL agent with a Deep Q-Network that learns to play the game. However, I can't really find any source on how that should be done. I was able to create an agent with a DQN for the CartPole environment of OpenAI Gym with PyTorch. Now my guess would be to create my own environment with the gym framework, but since the game itself is already implemented, I was wondering whether it is possible to feed data into the DQN without having to create the gym environment.
+As the state it would get the game state (which is a 2D grid with information about the players and their remaining hit points), and all the possible moves in the current state would form the action space. And since the game is network based, it would save the network after each game and reload it when the next one starts during training. For the training I would start the games over and over with a script and let it train for a while.
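+To clarify what I mean, this is roughly the interface I imagine wrapping my existing game client in instead of a full gym environment (all the method names here are placeholders for my own game logic, not real APIs):
+class GameWrapper:
+    """Thin wrapper around the existing networked game client (hypothetical method names)."""
+    def __init__(self, client):
+        self.client = client
+
+    def reset(self):
+        # Start a new game and return the initial state (the 2D grid plus hit points).
+        return self.client.get_game_state()
+
+    def legal_actions(self):
+        # All moves/attacks available in the current state.
+        return self.client.get_possible_moves()
+
+    def step(self, action):
+        # Send the chosen action to the server and wait for the next turn.
+        self.client.send_move(action)
+        next_state = self.client.get_game_state()
+        reward = self.client.compute_reward()   # e.g. damage dealt, win/loss
+        done = self.client.is_game_over()
+        return next_state, reward, done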
+As I'm quite new to machine learning, it seems really blurry how to tackle this problem, and I was hoping to be pointed in some direction on how to start.
+"
+"['transformer', 'attention', 'implementation']"," Title: Why people always say the Transformer is parallelizable while the self-attention layer still depends on outputs of all time steps to calculate?Body: When compared to an RNN seq-to-seq model, people always say the Transformer is parallelizable. In the original Attention Is All You Need paper, it also said that
+
+Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states $h_t$, as a function of the previous hidden state $h_{t−1}$ and the input for position $t$. This inherently sequential nature precludes parallelization within training examples
+
+I use the The illustrated Transformer to help to explain my question here. It said (You can search those sentences):
+
+Here we begin to see one key property of the Transformer, which is that the word in each position flows through its own path in the encoder. There are dependencies between these paths in the self-attention layer. The feed-forward layer does not have those dependencies, however, and thus the various paths can be executed in parallel while flowing through the feed-forward layer.
+
+However, in the self-attention layer, in order to calculate the selected values V, it needs the keys of all time steps! So each "time step" is not fully independent: there exists an operation in the layer that depends on the outputs (here, the keys) from all "time steps".
+In the original paper, the same block is repeated 6 times. That means there are at least 6 points where the otherwise independent flow of each "time step", or each token, has to wait for the others. Yes, it is better, but why do they call it parallelizable?
+"
+"['training', 'math', 'generative-adversarial-networks', 'discriminator']"," Title: What is Lipschitz constraint and why it is enforced on discriminator?Body: The following is the abstract for the research paper titled Improved Training of Wasserstein GANs
+
+Generative Adversarial Networks (GANs) are powerful generative models,
+but suffer from training instability. The recently proposed
+Wasserstein GAN (WGAN) makes progress toward stable training of GANs,
+but sometimes can still generate only poor samples or fail to
+converge. We find that these problems are often due to the use of
+weight clipping in WGAN to enforce a Lipschitz constraint on the
+critic, which can lead to undesired behavior. We propose an
+alternative to clipping weights: penalize the norm of gradient of the
+critic with respect to its input. Our proposed method performs better
+than standard WGAN and enables stable training of a wide variety of
+GAN architectures with almost no hyperparameter tuning, including
+101-layer ResNets and language models with continuous generators. We
+also achieve high quality generations on CIFAR-10 and LSUN bedrooms.
+
+Here, the critic stands for the discriminator of the GAN. I understood that the discriminator must obey the Lipschitz constraint, and hence weight clipping was generally done before this paper. The paper provides an alternative way, penalizing the norm of the gradient of the critic with respect to its input, to enforce the desired Lipschitz constraint.
+What actually is the Lipschitz constraint, and why is it mandatory for a discriminator to obey it?
+"
+"['neural-networks', 'training']"," Title: Mapping input vectors of variable length to output vectors of variable lengths with dummy variablesBody: I have a general question about supervised ANNs that map inputs to outputs. It is possible to vary the length of the input and output vectors by inserting some dummy variables that will not be considered in the mapping (or will be mapped to other dummy variables). So basically the mapping should look like this (v: value, d: dummy)
+Input vector 1 $[v,v,v,v,v] \rightarrow$ Output vector 1 $[v,v,v,v,v]$
+Input vector 2 $[v,v,v,v,v]\rightarrow$ Output vector 2 $[v,v,v,v,v]$
+Input vector 3 $[v,v,v,d,d] \rightarrow$ Output vector 3 $[v,v,v,d,d]$
+Input vector 4 $[v,v,d,d,d] \rightarrow$ Output vector 4 $[v,v,d,d,d]$
+Input vector 5 $[v,d,d,d,d] \rightarrow$ Output vector 5 $[v,d,d,d,d]$
+The input and output vectors have a length of 5 with 5 values. However, sometimes only a vector of size e.g. 3 (which is basically a vector of length 5 with 2 dummy variables) should be mapped to an output vector of length 3. So after training the ANN should know that if it for example gets an input vector of length 3 it should produce an output vector of length 3.
+Is something like this generally possible with ANNs or other machine learning approaches? If so, what type of ANN or machine learning approach can be used for this? I'll appreciate every comment.
+Reminder: Can anybody give me more insights into this?
+"
+"['reinforcement-learning', 'deep-rl', 'sample-efficiency']"," Title: How to use a heuristic policy to increase sample efficiency of a deep reinforcement learning agent?Body: I have a heuristic solution to a problem which works quite well when certain environmental parameters are known and unchanging. However, in a real world setting these parameters will not be known and are likely to fluctuate over the course of an episode. I'm hoping to use deep RL to develop a policy that will be similar to the heuristic, but robust to these unknowns.
+My question is: does the RL agent need to be trained "from scratch" as one would typically do or is there a way to leverage the existing policy to jump start the training progress?
+In the latter case, what would this look like? I've had a couple of thoughts, but I'm not sure how well any of them would work.
+
+- Reward actions that the heuristic would take in an environment with static parameter values, then gradually make the environment more complex and set a new reward function based on what I'm actually interested in.
+
+- Instead of taking random actions in the exploration stage, take actions dictated by the heuristic.
+
+
+"
+"['reinforcement-learning', 'markov-decision-process', 'multi-armed-bandits', 'contextual-bandits', 'discount-factor']"," Title: Is it better to model a Contextual Multi-Armed Bandit problem as an MDP with a non-zero discount factor than treating it as it is?Body: I'd like to ask if it is, generally, better to model a problem that naturally appears as a Contextual Multi-Armed Bandit like Recommender Systems as a Markov Decision Process with a non-zero discount factor (otherwise it's just an MDP with one step episodes) or is it better to treat it as it is; a Contextual Multi-Armed Bandit (MDP with a zero discount factor)
+I'm thinking about some problems like Recommender Systems where we can't define well the dynamics of the environment and so using a non-zero discount factor wouldn't make much sense since we'll take into account the recommendations for users that are independent of each other.
+"
+"['training', 'math', 'implementation', 'gradient', 'wasserstein-gan']"," Title: How to calculate the gradient penalty proposed in ""Improved Training of Wasserstein GANs""?Body: The research paper titled Improved Training of Wasserstein GANs proposed a gradient penalty in order to avoid undesired behavior due to weight clipping of the discriminator.
+
+We now propose an alternative way to enforce the Lipschitz constraint.
+A differentiable function is 1-Lipschtiz if and only if it has
+gradients with norm at most 1 everywhere, so
+we consider directly constraining the gradient norm of the critic’s
+output with respect to its input. To circumvent tractability issues,
+we enforce a soft version of the constraint with a penalty on the
+gradient norm for random samples $\hat{x} \sim P_\hat{x}$. Our new
+objective is
+$$L = \mathop{\mathbb{E}}\limits_{\tilde x \sim \mathbb{P}_g}
+ [D(\tilde x)] - \mathop{\mathbb{E}}\limits_{ x \sim \mathbb{P}_r}
+ [D(x)] + \mathop{\mathbb{E}}_{\hat{x} \sim P_{\hat{x}}} [
+ (\| \triangledown_{\hat{x}} D(\hat{x})\|_2 - 1 )^2 ]$$
+
+The last term in the discriminator's loss function is the gradient penalty. It is easy to calculate the first two terms: since the discriminator, in general, gives a value in the range $[0, 1]$, the first two terms are just the averages of the values given by the discriminator on generated and real images respectively.
+But, how to calculate $\triangledown_{\hat{x}} D(\hat{x})$ for a given image $\hat{x}$?
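+A minimal, self-contained sketch of what I imagine this looks like in PyTorch (the toy critic and toy samples below are my own placeholders for $D$ and $\hat{x}$, not code from the paper):
+import torch
+import torch.nn as nn
+
+critic = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # toy critic
+x_hat = torch.randn(4, 8, requires_grad=True)                          # toy interpolated samples
+d_hat = critic(x_hat)
+
+grads = torch.autograd.grad(
+    outputs=d_hat,
+    inputs=x_hat,
+    grad_outputs=torch.ones_like(d_hat),  # gradient of the summed critic outputs
+    create_graph=True,                    # keep the graph so the penalty is trainable
+)[0]
+
+grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)   # per-sample L2 norm
+gradient_penalty = ((grad_norm - 1) ** 2).mean()
+
+Is torch.autograd.grad the intended way to obtain $\triangledown_{\hat{x}} D(\hat{x})$ here, or is something else meant?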
+"
+"['machine-learning', 'image-processing', 'resource-request', 'channel']"," Title: Material(s) for understanding ""image channels""Body: I am pretty confused about the concept of "image channels".
+I want material that explains the concept of channels from scratch to whatever is required to understand their role in machine learning. I think that it is a small concept and possibly present as a chapter in good textbooks.
+Where can I read about channels of an image in detail?
+"
+"['neural-networks', 'deep-learning', 'batch-normalization']"," Title: When does a batch normalization layer becomes active?Body: Let us assume your dataset has $n$ training samples each of size $s$ and you divided them into $k$ batches for training. Then each batch has $n_k = \dfrac{n}{k}$ training samples.
+Batch normalization can be applied to any input or hidden layer in a neural network. So, assume that I am applying batch normalization at every possible place I can.
+Now, consider a particular batch normalization layer (say $b$) of a hidden layer $\ell$. Now, I am confused about the working frequency of $b$.
+Will it be activated only after every $n_k - 1$ forward passes, i.e., once per batch at the end of the batch? If not, then how does $b$ calculate the mean and standard deviation for every forward pass during training, if the $n_k$ output vectors of $\ell$ are not available at that instant?
+Will $b$ calculate the mean and standard deviation, for every forward pass, based on the outputs of $\ell$ that have been calculated so far? If yes, then why is it called batch normalization?
+To put it concisely, are batch normalization layers active for every iteration? If yes, then how are they normalizing a "batch" of vectors?
+
+You can check here which says
+
+The mean and standard-deviation are calculated per-dimension over the
+mini-batches
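+For what it is worth, here is a small sketch I tried (my own, not from the docs); in training mode, nn.BatchNorm1d normalizes whatever mini-batch is passed in the current forward pass, using that batch's own statistics:
+import torch
+import torch.nn as nn
+
+bn = nn.BatchNorm1d(num_features=4)    # training mode by default
+batch = torch.randn(3, 4)              # one mini-batch of 3 samples, 4 features
+
+out = bn(batch)                        # stats computed from these 3 samples only
+print(out.mean(dim=0))                 # approximately 0 per feature
+print(out.std(dim=0, unbiased=False))  # approximately 1 per feature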
+
+"
+"['neural-networks', 'machine-learning']"," Title: Backpropagation after N sequential input-output passBody: I'm trying to train a Neural Network in a particular situation -- similar to a genetic algorithm domain as far as I know.
+I have to run a simulation with a length of $K$ steps.
+I have a neural network $N$ that at each time step is used to produce an output, so that:
+$$
+o_{t+1} = N(i_{t})
+$$
+$i_t$ is a feature vector built upon $o_{t-1}$, and $i_0$ is given.
+My ground-truth value is $o_k$, namely the right value at the end of the simulation. So, I can evaluate the loss (e.g. MSE) only at the end of the simulation.
+Suppose we fix $k$ to 3; then the evaluation is $N(N(N(i_0)))$
+because:
+$$
+o_1 = N(i_o) \\
+o_2 = N(i_1) \\
+o_3 = N(i_2)
+$$
+So my questions are:
+
+- does it make sense to apply backpropagation in these settings?
+- if yes, what happens to the gradients?
+
+Practically, in some simple situations, the backpropagation seems to work, but in others, the gradients explode or vanish.
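+As a rough sketch of what I mean (a toy network standing in for $N$, with the feature builder replaced by the identity; this is just my own illustration):
+import torch
+import torch.nn as nn
+
+N = nn.Sequential(nn.Linear(8, 8), nn.Tanh())   # toy network in place of N
+i0 = torch.randn(1, 8)
+target = torch.randn(1, 8)
+
+o = i0
+for _ in range(3):                              # k = 3 sequential applications
+    o = N(o)
+
+loss = nn.functional.mse_loss(o, target)
+loss.backward()                                 # gradients flow back through all k applications, as in BPTT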
+"
+"['research', 'architecture', 'data-visualization']"," Title: Do deep learning researchers generally visualize intermediate steps?Body: Many researchers in deep learning research come up with new CNN architectures.
+The architectures are (just) combinations of a few existing layers.
+Along with their mathematical intuition, do they, in general, visualize intermediate steps by executing the model and then brute-force their way (by trial and error) to the state-of-the-art architectures?
+Visualizing intermediate steps refers to printing outputs in the proper format for analyzing them. Intermediate steps may refer to feature maps in CNN, hidden states in RNN, outputs of hidden layers in MLP, etc.
+"
+"['training', 'datasets']"," Title: Training on the dataset in parts vs training on the whole datasetBody: What is the difference between these two situations? are they the same ?
+#1 : train a model 20 epochs on the whole dataset
+#2 : divide dataset into n-parts then train the model 20 epochs on each part
+20 is a random number, just for clarification. Do we get the same result (accuracy) in these two situations? And why?
+Side note: this question arose when I faced a problem: the dataset is bigger than the storage space. So I want to divide it into 4 parts and train the model on each part. But does this affect accuracy? Is this method of training correct?
+"
+"['convolutional-neural-networks', 'convolution', 'channel']"," Title: Is a convolutional layer capable of converting, for example, a binary image into an RGBA image?Body: I am asking this question for a better understanding of the concept of channels in images.
+I am aware that a convolutional layer generates feature maps from a given image. We can adjust the size of the output feature map by proper padding and regulating strides.
+But I am not sure whether there exist kernels for a single convolution layer that are capable of changing an {RGBA, RGB, Grayscale, binary} image into (any) another {RGBA, RGB, Grayscale, binary} image.
+For example, if I have a binary image of a cat, is a convolutional layer capable of converting it into an RGBA image of a cat? If not, can it at least convert a binary cat image into some RGBA image?
+I am asking only from a theoretical perspective.
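+Still, purely as an illustration of the channel arithmetic (whether the four output maps form a meaningful RGBA image is of course a separate question), a sketch like this produces a 4-channel output from a 1-channel input:
+import torch
+import torch.nn as nn
+
+binary_img = torch.randint(0, 2, (1, 1, 64, 64)).float()   # one 1-channel "binary" image
+to_rgba = nn.Conv2d(in_channels=1, out_channels=4, kernel_size=3, padding=1)
+
+rgba_like = to_rgba(binary_img)
+print(rgba_like.shape)   # torch.Size([1, 4, 64, 64]) -- four output channels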
+"
+['optimization']," Title: Cover a surface with smaller predefined objectsBody: I'm trying to make a program that takes a surface designed by the user, and different 3D geometries from a dataset as inputs and gives a good approximation of the surface using only the objects found in the dataset. This program shouldn't do any warping, and should avoid geometries to collide, even though cutting them could be acceptable, but again with as little loss as possible.
+I thought about hardcoding this, but I can't find any good way to optimize the surface coverage without brute-forcing it. I'm wondering what ML techniques would be best for this, and how to find a good balance between precision and speed.
+"
+"['reinforcement-learning', 'alphazero']"," Title: At what point are MCTS results discarded in AlphaZero Training?Body: Regarding the AlphaZero paper, it is not clear to me when the Monte Carlo Tree Search (MCTS) results will be cleaned up.
+I assume this has to happen at some point, since mixing results could lead to lower-quality results. Imagine that in self-play the Neural Network (NN) is updated to a new version and evaluates certain patterns differently because it has detected a new trick. Many iterations must follow before the new evaluation outperforms the old best choice (by visit count). I imagine discarding old MCTS results should be done somewhere between the end of an episode and the next NN weight update.
+I feel that a wrong decision here could have a strong negative impact on the overall learning process.
+"
+"['neural-networks', 'machine-learning', 'deep-learning']"," Title: How to measure the significance of an input feature for the output of a linear layer in a neural networkBody: Suppose I have a simple linear layer $y = xA^T + b$ that is part of a neural network trained on some dataset. The weight matrix $A$ for this layer has the shape [num_outputs, num_inputs]
.
+For each layer input, I would like to find a value between 0 and 1, based on the weight matrix, representing the significance of that input to the layer output.
+Intuitively, if the values in the i-th column of the weight matrix are close to 0, then the significance of the i-th input should also be close to 0. Conversely, if the values are close to the maximum or minimum of the entire weight matrix, the significance should approach 1.
+This statistic should also adequately recognize cases where the vast majority of values in a column are close to 0, but at least one is not. Then the significance of such an input should not be close to 0, because it is important for a single neuron that detects, for example, an edge case.
+Can anyone point me in the right direction?
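+A naive heuristic along the lines of my intuition above (purely my own sketch, not a validated attribution method) would be to score each input by the strongest absolute weight in its column and rescale to $[0, 1]$:
+import torch
+
+A = torch.randn(16, 8)                              # [num_outputs, num_inputs], hypothetical weights
+col_strength = A.abs().max(dim=0).values            # strongest connection per input
+significance = col_strength / col_strength.max()    # scaled to [0, 1]
+
+But I suspect this ignores interactions between neurons and layers, which is why I am asking for a better-founded approach.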
+"
+"['reinforcement-learning', 'alphazero']"," Title: What does it mean there is no rollout in AlphaZero's training?Body: According to a blog post by DeepMind, AlphaZero doesn't have a real rollout.
+
+AlphaGo Zero does not use "rollouts" - fast, random games used by other Go programs to predict which player will win from the current board position. Instead, it relies on its high quality neural networks to evaluate positions.
+
+Instead, I assume it just interprets the winner at a given state by the NN values head result. This replaces the rollout. So the computation time saved could be used for many expansions instead. Evaluating a state from a root node would then be the best action derived from the visit count in MCTS, which is only based on the predictions of the NN value heads. (no current score, no policy?)
+With policy, I mean the NN's policy head (softmax).
+This would mean that the NN policy is only used in the loss calculation and nowhere else?
+"
+"['terminology', 'gradient-descent', 'stochastic-gradient-descent', 'mini-batch-gradient-descent']"," Title: Why is it called ""batch"" gradient descent if it consumes the full dataset before calculating the gradient?Body: While training a neural network, we can follow three methods: batch gradient descent, mini-batch gradient descent and stochastic gradient descent.
+For this question, assume that your dataset has $n$ training samples and we divided it into $k$ batches with $\dfrac{n}{k}$ samples in each batch. So, it can be easily understood that the word "batch" is generally used to refer to a portion of the dataset rather than the whole dataset.
+In batch gradient descent, we pass all the $n$ available training samples to the network and then calculate the gradient (only once). We can repeat this process several times.
+In mini-batch gradient descent, we pass $\dfrac{n}{k}$ training samples to the network and calculate the gradient. That is, we calculate the gradient once per batch. We repeat this process with all $k$ batches of samples to complete an epoch. And we can repeat this process several times.
+In stochastic gradient descent, we pass one training sample to the network and calculate the gradient. That is, we calculate the gradient once per iteration. We repeat this process $n$ times to complete an epoch. And we can repeat this process several times.
+Batch gradient descent can be viewed as a mini-batch gradient descent with $k = 1$ and stochastic gradient descent can be viewed as a mini-batch gradient descent with $k = n$.
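+To make my understanding concrete, here is a toy sketch (my own, purely illustrative) in which the same loop covers all three cases just by changing batch_size:
+import torch
+from torch.utils.data import DataLoader, TensorDataset
+
+X, Y = torch.randn(100, 4), torch.randn(100, 1)     # n = 100 toy samples
+model = torch.nn.Linear(4, 1)
+opt = torch.optim.SGD(model.parameters(), lr=0.01)
+
+batch_size = 10   # 100 -> batch GD, 10 -> mini-batch GD, 1 -> stochastic GD
+loader = DataLoader(TensorDataset(X, Y), batch_size=batch_size, shuffle=True)
+
+for epoch in range(20):
+    for xb, yb in loader:                           # one gradient and one update per batch
+        loss = torch.nn.functional.mse_loss(model(xb), yb)
+        opt.zero_grad()
+        loss.backward()
+        opt.step()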
+Am I correct regarding the usage of terms in the context provided above? If wrong then where did I go wrong?
+If correct, I am confused about the usage of the word "batch" in "batch gradient descent". In fact, we do not need the concept of a batch in batch gradient descent, since we pass all the training samples before calculating the gradient; there is no need to partition the training dataset into batches at all. Then why do we use the word "batch" in batch gradient descent? Similarly, we use the word "mini-batch" in "mini-batch gradient descent", yet there we are passing a batch of samples before calculating the gradient. Then why is it called "mini-batch" gradient descent instead of "batch" gradient descent?
+"
+['reinforcement-learning']," Title: Reinforcement learning algorithms that deal with noisy state observationsBody: I was recently considering training an agent that perform a task by reinforcement learning. Both the state and actions are continuous, but could be discretized if needed. The problem is that in my case the state observations and reward will be quite noisy, so given the same state and action, the next state and received reward will be different on each run, and the noise cannot be described by canonical probability distributions.
+Up to now, I have tried deep Q-networks, stochastic policy gradient, and deep deterministic policy gradient. While I could successfully implement these algorithms in the CartPole game, they all failed to learn my particular task.
+I would like to know: are there any reinforcement learning methods that can deal with noisy state observations?
+"
+"['pytorch', 'batch-normalization', 'channel']"," Title: What is an ""additional channel dimension"" contain in batch normalization?Body: Consider the following explanations regarding batch normalization layers in PyTorch
+#1: one dimensional batch normalization
+
+class torch.nn.BatchNorm1d(.........)
+Applies Batch Normalization over a 2D or 3D input (a mini-batch of 1D
+inputs with optional additional channel dimension)............
+
+#2: Second dimensional batch normalization
+
+class torch.nn.BatchNorm2d(..........)
+Applies Batch Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension)
+
+#3: Third dimensional batch normalization
+
+class torch.nn.BatchNorm3d(..............)
+Applies Batch Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension)
+
+All of these say that there can be an additional dimension, related to channels, in the tensors undergoing batch normalization.
+Is the channel referred to here the same as the channels of an image? If yes, then what does it contain? Does it contain the number of channels in that particular layer?
+Otherwise, what does this additional channel dimension contain?
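+To make the shapes concrete, here is a small sketch (my own) of the inputs each variant expects; in every case the second dimension is the channel dimension:
+import torch
+import torch.nn as nn
+
+x1 = torch.randn(8, 5, 100)         # (batch, channels, length)   -> BatchNorm1d(5)
+x2 = torch.randn(8, 3, 32, 32)      # (batch, channels, H, W)     -> BatchNorm2d(3)
+x3 = torch.randn(8, 3, 16, 32, 32)  # (batch, channels, D, H, W)  -> BatchNorm3d(3)
+
+print(nn.BatchNorm1d(5)(x1).shape)
+print(nn.BatchNorm2d(3)(x2).shape)
+print(nn.BatchNorm3d(3)(x3).shape)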
+"
+"['neural-networks', 'reinforcement-learning', 'datasets', 'time-series', 'algorithmic-trading']"," Title: Is there a way to parallelise the RL training on multiple stocks to avoid the memory issue?Body: I have some plans in working with Reinforcement Learning in order to predict the stock price movement. For a stock like TSLA
+
+some training features might be the pivot price values and the set of the difference between two consecutive pivot points.
+I would like my model to capture the general essence of the stock market. In other words, if I want my model to predict the stock price movement for TSLA, then my dataset will be built only on TSLA stock. If I try to predict the price movement of FB stock using that model, then it won't work, for many reasons. So, if I want my model to predict the price movement of any stock, then I have to build a dataset using all types of stock prices.
+For the purpose of this question, instead of taking an example of the dataset using all the stocks, I will use only three stocks, i.e. TSLA, FB, and AMZN. So, I will generate the dataset for two years for TSLA, two years of FB, and two years of AMZN, and then pass it back to back to my model. So, in this example, I pass 6 years of data to my model for training purposes. If I start with FB, then the model will learn and memorize some patterns from the FB features. The problem is when the model is made to train on the AMZN features, it already starts to forget the information of the training on the FB dataset.
+Is there a way to parallelise the training on multiple stocks to avoid the memory issue?
+Instead of my action being a real value, it will be an action vector where the size is depending on the number of parallel stocks.
+"
+"['generative-adversarial-networks', 'probability', 'probability-distribution', 'generator']"," Title: Which probability distribution a generator in Generative Adversarial Network (GAN) is capturing: dataset or ground truth?Body: Consider the following statement from the abstract of the paper titled Generative Adversarial Nets
+
+We propose a new framework for estimating generative models via an
+adversarial process, in which we simultaneously train two models: a
+generative model $G$ that captures the data distribution, and a
+discriminative model $D$ that estimates the probability that a sample
+came from the training data rather than $G$.
+
+Let $D'$ be a dataset of $n$ digital and discrete* images, each of size $C \times H \times W$. Suppose the generative adversarial network is trained on the dataset $D'$.
+Our sample space (or data set) is $D' = \{I_1, I_2, I_3, \cdots, I_n\}$, where $I_j$ is the $j^{th}$ image for $1 \le j \le n$
+The random variables are $X_1, X_2, X_3, \cdots, X_{CHW}$ where
+$$X_i \in \{a, a+1, a+2, \cdots, b\} =\text{ intensity of }i^{th} \text{ pixel;} \text{ for } 1 \le i \le CHW$$
+Since we have the dataset $D'$, we can calculate the empirical joint distribution $p_{data\_set}$, having $(b-a+1)^{CHW}$ parameters estimated from the dataset. But the parameters of the original (ground-truth) image distribution $p_{ground}$ are not equal to those of $p_{data\_set}$.
+
+Simple example:
+Suppose I flipped an unbiased coin 100 times and I got 45 heads, 55 tails, then $P_{data\_set}(H) = \dfrac{45}{100}$ and $P_{data\_set}(T) = \dfrac{55}{100}$. So, $P_{data\_set} = \{\dfrac{45}{100}, \dfrac{55}{100}\}$
+but the ground truth probability distribution is $P_{ground}(H) = \dfrac{50}{100}$ and $P_{ground}(T) = \dfrac{50}{100}$. So, $P_{ground} = \{\dfrac{50}{100}, \dfrac{50}{100}\}$
+
+Which distribution is our generator capturing by the end? Is it the probability distribution calculated based on our dataset $p_{data\_set}$ of images or the actual probability distribution $p_{ground}$?
+How to understand the act of capturing here? Does it only mean to be behaving (generating instances) in the same way as data distribution?
+
+Suppose I design a machine that is capable of generating an equal number of $0$'s and $1$'s over sufficiently many trials; can I then say that my machine has captured the coin-toss probability distribution?
+
+
+- discrete images refers to images whose pixel values take a finite number of integer values.
+
+"
+['generative-adversarial-networks']," Title: Are there any Generative Adversarial Networks without Multi Layer Perceptrons?Body: Although the main stream research is on Generative Adversarial Networks(GANs) using Multi Layer Percepteons (MLPs). The original paper titled Generative Adversarial Nets clealry says, in abstract, that GAN is possible with out MLP also
+
+In the space of arbitrary functions $G$ and $D$, a unique solution
+exists, with $G$ recovering the training data distribution and $D$
+equal to $\dfrac{1}{2}$ everywhere. In the case where G and D are defined
+by multilayer perceptrons, the entire system can be trained with backpropagation.
+
+Are there any research papers that use models other than MLPs and are comparatively successful?
+"
+"['deep-learning', 'terminology', 'papers', 'activation-functions', 'gradient']"," Title: What is meant by ""well-behaved gradient"" in this context?Body: Consider the following statement (from the paper Generative Adversarial Nets) about the success of discriminative models
+
+So far, the most striking successes in deep learning have involved discriminative models, usually those that map a high-dimensional, rich sensory input to a class label. These striking successes have primarily been based on the backpropagation and dropout algorithms, using piecewise linear units which have a particularly well-behaved gradient.
+
+The piece-wise linear units they are referring to are, I guess, the activation functions. The primary purpose of activation functions is to introduce non-linearity, and there are no other mathematical requirements, such as continuity, differentiability, etc. But not all the activation functions may work well, and some are preferred over others, based on their nature, as well as the task under consideration.
+After reading the quoted paragraph, one can conclude that the activation functions that have well-behaved gradient are showing better results than the others, at least in discriminative tasks.
+What does "well-behaved" in this context stand for? Can we have some mathematical properties of the gradient in order to recognize it as a well-behaved gradient? Or the usage of the phrase "well-behaved" is highly dependent on the discriminative task under consideration?
+"
+"['history', 'notation', 'etymology', 'noise']"," Title: Why is noise vector represented by letter $z$?Body: Most of the notations in Artificial Intelligence are borrowed from the mathematics.
+$x$ stands for input (vector), $y$ stands for output (vector) etc., and the list is long.
+But, I am not sure whether $z$ has any (widely used) role in mathematics.
+Is there any reason behind the usage of letter $z$ to represent a noise vector? Or is it just selected randomly without any reason?
+"
+"['probability-distribution', 'notation', 'random-variable']"," Title: Is it abuse of notation to use tilde operator in this context?Body: The following is a way to use tilde (∼) in context of random variables or random vectors.
+
+In statistics, the tilde is frequently used to mean "has the
+distribution (of)," for instance, $X∼N(0,1)$ means "the stochastic
+(random) variable $X$ has the distribution $N(0,1)$ (the standard
+normal distribution). If X and Y are stochastic variables then $X∼Y$
+means "$X$ has the same distribution as $Y$.
+
+Consider the following usage of tilde in the paper titled Generative Adversarial Nets
+$$x ∼ p_{data}(x)$$
+$$z ∼ p_z(z)$$
+I am thinking that the following is the standard (and possibly correct) notation
+$$x ∼ p_{data}$$
+$$z ∼ p_z$$
+$p_{data}$ is a probability distribution, whereas $p_{data}(x)$ is not a probability distribution: it is a value in $[0, 1]$. The same goes for the noise probability distribution.
+Is it an abuse of notation to use the tilde in such a way, or is it also a standard and accepted notation?
+"
+"['math', 'generative-adversarial-networks', 'expectation', 'random-variable', 'iid']"," Title: What are the iid random variables for a dataset in the GAN framework?Body: I am trying to understand why mean is used for expectation in training Generative Adversarial Networks.
+The answer tells that it is due to the law of large numbers which is based on the assumption that random variables are independent and identically distributed.
+
+If I have a dataset of all possible $32 \times 32$ grayscale images. Then my sample space consists of $256^{32 \times 32}$ elements. Suppose I define 1024 random variables as
+$$X_i = \text{ intensity of } i^{th} \text{ pixel for } 1 \le i \le 1024$$
+Then it is clear that all the random variables are iid since
+
+- $X_i \perp X_j$ for all $i, j$ such that $i \ne j$ and
+- $p(X_i = k) = \dfrac{1}{256}$ for all $i$
+
+
+But these properties do not hold if I take a dataset of (say flower) images since pixel intensities are not independent of each other and the intensity values are not uniformly distributed as well.
+Then how can the law of large numbers be applicable for a GAN when the dataset (sample space) does not cover all the possible elements? If I am wrong, then what is the sample space they are considering, and what are the random variables they are using implicitly, that lead to the satisfaction of the iid condition and hence to the law of large numbers?
+"
+"['neural-networks', 'deep-learning', 'activation-functions']"," Title: Does the output layer in a deep neural network need an activation function?Body: I have enrolled in a course that uses only one hidden layer, and that is the only layer that has activation functions. The model can be visualized as follows:
+
+and here is a PyTorch implementation:
+import torch.nn as nn
+import torch.nn.functional as F
+
+class MnistModel(nn.Module):
+    """Feedforward neural network with 1 hidden layer"""
+ def __init__(self, in_size, hidden_size, out_size):
+ super().__init__()
+ # hidden layer
+ self.linear1 = nn.Linear(in_size, hidden_size)
+ # output layer
+ self.linear2 = nn.Linear(hidden_size, out_size)
+
+ def forward(self, xb):
+ # Flatten the image tensors
+ xb = xb.view(xb.size(0), -1)
+ # Get intermediate outputs using hidden layer
+ out = self.linear1(xb)
+ # Apply activation function
+ out = F.relu(out)
+ # Get predictions using output layer
+ out = self.linear2(out)
+ return out
+
+Shouldn't the output layer also have an activation function?
+"
+"['terminology', 'definitions', 'probability']"," Title: What exactly is a Parzen?Body: I came across the term "Parzen" while reading the research paper titled Generative Adversarial Nets. It has been used in the research paper in two contexts.
+#1: In phrase "Parzen window"
+
+We estimate probability of the test set data under $p_g$ by fitting a
+Gaussian Parzen window to the samples generated with $G$ and reporting
+the log-likelihood under this distribution.
+
+#2: In phrase "Parzen density estimation"
+
+Evaluating $p(x)$ in Generative autoencoders and Adversarial models: Not
+explicitly represented, may be approximated with Parzen density
+estimation
+
+Is there a definition of the term "Parzen", and how is it related to probability distributions?
+"
+"['objective-functions', 'convergence', 'convex-function']"," Title: How to check whether my loss function is convex or not?Body: Loss functions are useful in calculating loss and then we can update the weights of a neural network. The loss function is thus useful in training neural networks.
+Consider the following excerpt from this answer
+
+In principle, differentiability is sufficient to run gradient descent. That said, unless $L$ is convex, gradient descent offers no guarantees of convergence to a global minimiser. In practice, neural network loss functions are rarely convex anyway.
+
+It implies that the convexity property of loss functions is useful in ensuring the convergence, if we are using the gradient descent algorithm. There is another narrowed version of this question dealing with cross-entropy loss. But, this question is, in fact, a general question and is not restricted to a particular loss function.
+How to know whether a loss function is convex or not? Is there any algorithm to check it?
+"
+"['comparison', 'training', 'objective-functions', 'generative-adversarial-networks']"," Title: Does average loss function in GAN training is just an approximation of value function and does not ensure convergence of generator and discriminator?Body: The value function on which convergence has been proved by the original paper of GAN is
+$$\min_G \max_DV(D, G) = \mathbb{E}_{x ∼ P_{data}}[\log D(x)] + \mathbb{E}_{z ∼ p_z}[log (1 - D(G(z)))]$$
+and the loss function used in training are
+$$\max L(D) = \frac{1}{m} \sum_{i=1}^{m}\left[\log D\left(\boldsymbol{x}^{(i)}\right)+\log \left(1-D\left(G\left(\boldsymbol{z}^{(i)}\right)\right)\right)\right]$$
+$$\min L(G) = \frac{1}{m} \sum_{i=1}^{m}\left[\log \left(1-D\left(G\left(\boldsymbol{z}^{(i)}\right)\right)\right)\right]$$
+where $\{z^{(1)}, z^{(2)}, z^{(3)}, \cdots, z^{(m)}\}$ and $\{x^{(1)}, x^{(2)}, x^{(3)}, \cdots, x^{(m)}\}$ are the noise samples and data samples for a mini-batch respectively.
+I found, after analyzing some questions 1, 2 on our main site, that the loss function used for training is just an approximation of the value function and that the two are not the same in a formal sense.
+Is this true? If yes, what is the reason behind the disparity? Does the loss function used for implementation also ensure convergence?
+"
+"['deep-learning', 'convolutional-neural-networks']"," Title: Is there a systematic way of conducting deep learning experiments?Body: I have been working on a computer vision problem with the use of cnns, but quite frustratingly I'm often in the situation of not knowing what to do to improve my results. It seems to me that most of the time I am mostly making random changes and experimenting in hope that this change will bring some improvement.
+I notice how this is different from non-AI software development where debugging can be performed by trying to pinpoint where exactly in the code lies an unexpected behaviour.
+I wonder if there is a technique that could better orient the research effort.
+"
+"['machine-learning', 'classification', 'support-vector-machine', 'binary-classification', 'anomaly-detection']"," Title: How can I weight each point in one-class SVM?Body: I want to give weights to some data points
+Specifically, these are points related to anomalies
+(I'm implementing one-class SVM for anomaly detection)
+Exactly, I want to consider some data points that are likely to be anomalies as more important data points
+Is it possible in one-class SVM ?
+"
+['convolutional-layers']," Title: Confusion about conversion of RGB image to grayscale image using a convolutional layer with 2-dimensional filtersBody: Let us imagine $x$ as a tensor containing 1000 RGB images, each of size $64 \times 32$.
+>>> x = torch.randn(1000, 3, 64, 32)
+>>> print(x.shape)
+torch.Size([1000, 3, 64, 32])
+
+I am using a 2d convolutional layer that converts RGB images to single channel (say grayscale) images
+>>> in_ch = 3
+>>> out_ch = 1
+>>> m = nn.Conv2d(in_ch, out_ch, 3, 1, 1)
+>>> print(m)
+Conv2d(3, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
+
+I passed the tensor $x$ in the convolutional layer and obtained another tensor of 1000 grayscale images, each of size $64 \times 32$.
+>>> output = m(x)
+>>> print(output.shape)
+torch.Size([1000, 1, 64, 32])
+
+Now, I can say that my convolutional layer converted an RGB image into a grayscale image using a 2D kernel.
+How is it doing this?
+An RGB image has 3 planes, each of size $64 \times 32$. If a 2-dimensional kernel is used, then we would get 3 planes in the output, corresponding to R, G, and B. How is it possible to convert an image with 3 channels into an image with one channel using a 2D kernel?
+I can visualize easily if I use a 3d kernel since the kernel considers three channels simultaneously and produces a single feature map for an RGB image.
+"
+"['convolutional-neural-networks', 'convolution', 'convolutional-layers', 'convolution-arithmetic']"," Title: Why do we add 1 in the formula to calculate the shape of the output of the convolution?Body: In the formula to calculate output shape of tensor after convolution operation
+$$
+W_2 = (W_1-F+2P)/S + 1,
+$$
+where:
+
+- $W_2$ is the output shape of the tensor
+- $W_1$ is the input shape
+- $F$ is the filter size
+- $P$ is the padding
+- $S$ is the stride.
+
+Why do we add $1$? It gets us to the correct answer, but how is this formula derived?
+Source: https://cs231n.github.io/convolutional-networks/#pool
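+As a sanity check (my own example, not from the course notes): with $W_1 = 5$, $F = 3$, $P = 0$, $S = 1$ the filter fits at positions 0, 1 and 2, i.e. $(5-3)/1 = 2$ shifts after the first placement, and the $+1$ counts that first placement, giving 3 outputs. A quick check in PyTorch:
+import torch
+import torch.nn as nn
+
+x = torch.randn(1, 1, 5, 5)
+conv = nn.Conv2d(1, 1, kernel_size=3, stride=1, padding=0)
+print(conv(x).shape)   # torch.Size([1, 1, 3, 3]), matching (5 - 3 + 0)/1 + 1 = 3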
+"
+"['comparison', 'terminology', 'pytorch', 'filters', 'convolutional-layers']"," Title: Is ""kernel"" different from ""filter"" in convolutional neural networks?Body: Recently I asked a question on how a convolution 2d layer changes an RGB image into a grayscale image. Assume that our task is to convert an RGB image into a grayscale image. I use to believe that filter and kernel are one and the same.
+Consider Conv2d in PyTorch.
+class torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)
+
+The parameters in_channels, out_channels and kernel_size are key to our discussion here.
+I have no doubt about in_channels. It simply says the number of channels in the input image. It is 3 for our task since we have RGB images as input.
+The doubt is regarding the parameters out_channels and kernel_size. out_channels refers to the number of channels in the output image. It is 1 for our task since we want grayscale images as output. It is also equal to the number of filters we need. So, we use just one filter to convert an RGB image into a grayscale image. kernel_size is the size of the kernel, which shows as $3 \times 3$ in our case. Now, my convolution layer is
+>>> in_ch = 3
+>>> out_ch = 1
+>>> m = nn.Conv2d(in_ch, out_ch, 3, 1, 1)
+>>> print(m)
+Conv2d(3, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
+
+Since I had doubts about the conversion of an RGB image to a grayscale image using a single filter whose size shows as $3 \times 3$, I checked the shape of the weights in the filter and realized that the single filter is a 3-dimensional filter of size $3 \times 3 \times 3$
+>>> print(m.weight.shape)
+torch.Size([1, 3, 3, 3])
+
+Now, the filter size is $3 \times 3 \times 3$ and kernel_size is $3 \times 3$.
+So, can I safely conclude that the filter is different from the kernel? Can I conclude that a kernel is just a part of a filter, and that a filter may comprise several kernels? Or is the usage in PyTorch a bit misleading, since I found that our site also uses the same tag for both filter and kernel?
+"
+"['training', 'terminology', 'papers', 'generative-adversarial-networks', 'gradient']"," Title: What does it mean by strong or sufficient gradient for training in this context?Body: It has been mentioned in the research paper titled Generative Adversarial Nets that generator need to maximize the function $\log D(G(z))$ instead of minimizing $\log(1 −D(G(z)))$ since the former provides sufficient gradient than latter.
+
+$$\min_G \max_DV(D, G) = \mathbb{E}_{x ∼ P_{data}}[\log D(x)] +
+ \mathbb{E}_{z ∼ p_z}[\log (1 - D(G(z)))]$$
+In practice, the above equation may not provide sufficient gradient
+for $G$ to learn well. Early in learning, when $G$ is poor, $D$ can
+reject samples with high confidence because they are clearly different
+from the training data. In this case, $\log(1 −D(G(z)))$ saturates.
+Rather than training $G$ to minimize $\log(1 −D(G(z)))$ we can train G
+to maximize $\log D(G(z))$. This objective function results in the same
+fixed point of the dynamics of $G$ and $D$ but provides much stronger
+gradients early in learning.
+
+A gradient is a vector containing the partial derivatives of outputs w.r.t inputs. At a particular point, the gradient is a vector of real numbers. These gradients are useful in the training phase by providing direction-related information and the magnitude of step in the opposite direction. This is my understanding regarding gradients.
+What is meant by sufficient or strong gradient? Is it the norm of the gradient or some other measure on the gradient vector?
+If possible, please show an example of strong and weak gradients with numbers so that I can quickly understand.
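+My own rough attempt at such numbers (please correct me if this is the wrong way to think about it): early in training the discriminator easily rejects fakes, so $D(G(z)) \approx 0.01$. Then
+$$\left|\frac{d}{dD}\log(1-D)\right| = \frac{1}{1-D} \approx 1.01, \qquad \left|\frac{d}{dD}\log D\right| = \frac{1}{D} = 100,$$
+so the signal propagated back to the generator is roughly 100 times larger for the $\log D(G(z))$ objective in this regime. Is this magnitude of the backpropagated derivatives what "stronger gradient" refers to?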
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'convolutional-neural-networks', 'attention']"," Title: Attention mechanism: Why apply multiple different transformations to obtain query, key, valueBody: I have two questions about the structure of attention modules:
+Since I work with imagery I will be talking about using convolutions on feature maps in order to obtain attention maps.
+
+- If we have a set of feature maps with dimensions [B, C, H, W] (batch, channel, height, width), why do we transform our feature maps before we calculate their affinity/correlation in attention mechanisms? What makes this better than simply taking the cosine distance between the feature vectors (e.g. resizing the maps to [B, C, HW] and [B, HW, C] and multiplying them together). Aren't the feature maps already in an appropriate feature/embedding space that we can just use them directly instead of transforming them first?
+
+- Most of the time, attention mechanisms will take as input some stack of feature maps (F), and will apply 3 transformations on them to essentially produce a "query", "key" and "value". The query and key will be multiplied together to get the affinity/correlation between a given feature vector and all other feature vectors. In computer vision these transformation will typically be performed by the different 1x1 convolutions. My question is, how come we use 3 different 1x1 convolutions? Wouldn't it make more sense to apply the same 1x1 convolution to the input F? My intuition tells me that since we want to transform/project the feature maps F into some embedding/feature space that it would make the most sense if the "query", "key" and "value" were all obtained by using the same transformation. To illustrate what I mean lets pretend we had a 1x1 feature map and we wanted to see how well the pixel correlates with itself. Obviously it should correlate 100% because it is the same pixel. But wouldn't applying two sets of 1x1 convs to the pixel lead to the chance that the pixel would undergo a different transformation and in the end would have a lower correlation than it should?
+
+
+"
+"['machine-learning', 'python', 'datasets', 'data-preprocessing', 'r']"," Title: Data analysis before feeding to ML pipelineBody: I'm new to machine learning and I've been working through a dataset of ~3000 records with ~100 features. I've been hand rolling Python and R scripts to analyse the data. For example, plotting the distribution of each feature to see how normal it is, identify outliers, etc. Another example is plotting heatmaps of the features against themselves to identify strong correlations.
+Whilst this has been a useful learning exercise, going forward I suspect there are tools that automate a lot of this data analysis for you, and produce the useful plots and possibly give recommendations on transforms, etc? I've had a search around but don't appear to be finding anything, I guess I'm not using the right terminology to find what I'm looking for.
+If any useful open source tools for this kind of thing spring to mind that would be very helpful.
+"
+"['deep-learning', 'image-segmentation', 'metric']"," Title: Aside from dice score, what other good metrics are used to evaluate segmentation models?Body: I have a segmentation which outputs only one channel image (2 class segmentation). I have used dice score for most of the time, but now higher powers in my team want me to expand evaluation metrics for segmentation model (if it's even possible). I have done some research and as far as right now I have found mainly that everybody uses dice score, and sometimes pixel to pixel binary accuracy, but for the latter seems not the best idea.
+If anybody knows something exciting or useful, I'd be glad to hear from them.
+"
+"['neural-networks', 'machine-learning', 'terminology', 'numerical-algorithms']"," Title: What is numerical stability?Body: I came across the phrase "numerical stability" several times. But almost in the same context.
+I encountered this word mostly in the analytical formula for batch normalization.
+
+$$y = \dfrac{x - \mathbb{E}[x]}{\sqrt{Var[x]+\epsilon}}* \gamma +
+ \beta$$
+eps – a value added to the denominator for numerical stability. Default: 1e-5
+
+Does the phenomenon of "numerical instability" occur during the training of neural networks? Or is it a general one that occurs in other models as well? What is the reason for its occurrence?
+"
+"['pytorch', 'vectors']"," Title: Is this calculation of the vector-Jacobian product in the PyTorch documention wrong?Body: In the official PyTorch documentation there is the following calculation (here):
+$$
+J^{T} \cdot \vec{v}=\left(\begin{array}{ccc}
+\frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{1}} \\
+\vdots & \ddots & \vdots \\
+\frac{\partial y_{1}}{\partial x_{n}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}}
+\end{array}\right)\left(\begin{array}{c}
+\frac{\partial l}{\partial y_{1}} \\
+\vdots \\
+\frac{\partial l}{\partial y_{m}}
+\end{array}\right)=\left(\begin{array}{c}
+\frac{\partial l}{\partial x_{1}} \\
+\vdots \\
+\frac{\partial l}{\partial x_{n}}
+\end{array}\right)
+$$
+However, I am wondering why the result isn't as follows:
+$$
+J^{T} \cdot \vec{v}=\left(\begin{array}{ccc}
+\frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{1}} \\
+\vdots & \ddots & \vdots \\
+\frac{\partial y_{1}}{\partial x_{n}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}}
+\end{array}\right)\left(\begin{array}{c}
+\frac{\partial l}{\partial y_{1}} \\
+\vdots \\
+\frac{\partial l}{\partial y_{m}}
+\end{array}\right)=\left(\begin{array}{c}
+m\frac{\partial l}{\partial x_{1}} \\
+\vdots \\
+m\frac{\partial l}{\partial x_{n}}
+\end{array}\right)
+$$
+As the matrix is multiplied by a vector and the $\partial y$ terms cancel out, the result should be $m$ times $\frac{\partial l}{\partial x_{i}}$ in each component.
+I know that the official PyTorch docs are probably right but I just can't get my head around why.
+"
+['batch-normalization']," Title: Example for batch normalization in an aritifical neural networkBody: Suppose the following is the neural network I want to train and assume that there is a batch normalization layer for each layer of the neural network
+My focus is on the activity of batch normalization layer of the hidden layer. Assume that mini-batch size is $3$. The following are the first four outputs of the hidden layer without using batch normalization layer.
+$$\left(\begin{array}{c} 4 \\ 5 \\ 2 \\ 1 \\ \end{array}\right), \left(\begin{array}{c} 3 \\ 4 \\ 6 \\ 0 \\ \end{array}\right), \left(\begin{array}{c} 1 \\ 4 \\ 7 \\ 9 \\ \end{array}\right), \left(\begin{array}{c} 3 \\ 5 \\ 7 \\ 9 \\ \end{array}\right)$$
+So the first mini-batch is$\left(\begin{array}{c} 4 \\ 5 \\ 2 \\ 1 \\ \end{array}\right), \left(\begin{array}{c} 3 \\ 4 \\ 6 \\ 0 \\ \end{array}\right), \left(\begin{array}{c} 1 \\ 4 \\ 7 \\ 9 \\ \end{array}\right)$ and has the following statistics
+mean = $\mathbb{E}[X] = \left(\begin{array}{c} \dfrac{8}{3} \\ \dfrac{13}{3} \\ 5 \\ \dfrac{10}{3} \\ \end{array}\right), Var(X) = \left(\begin{array}{c} \dfrac{14}{9} \\ \dfrac{2}{9} \\ \dfrac{14}{3} \\ \dfrac{146}{9} \\ \end{array}\right)$
+So, I am thinking that batch normalization can be applied from the fourth iteration onward, since a full mini-batch of hidden-layer outputs is available by then, and so the fourth vector $\left(\begin{array}{c} 3 \\ 5 \\ 7 \\ 9 \\ \end{array}\right)$ will be mapped to $\left(\begin{array}{c} 0.26 \\ 1.41 \\ 0.92 \\ 1.4 \\ \end{array}\right)$
+Is it true that if we apply batch normalization the outputs of neural network are
+$$\left(\begin{array}{c} 4 \\ 5 \\ 2 \\ 1 \\ \end{array}\right), \left(\begin{array}{c} 3 \\ 4 \\ 6 \\ 0 \\ \end{array}\right), \left(\begin{array}{c} 1 \\ 4 \\ 7 \\ 9 \\ \end{array}\right),\left(\begin{array}{c} 0.26 \\ 1.41 \\ 0.92 \\ 1.4 \\ \end{array}\right)$$
+I am thinking that I may be totally wrong, because the batch normalization layer would not be active till the completion of the first mini-batch, and I am also using the statistics of the first mini-batch on the outputs of the next mini-batch. If I am wrong, I want to know what is wrong, or to see a simple numerical example obtained by running the batch normalization layer of a neural network for at least mini-batch + 1 iterations.
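+To check my hand calculation of the first mini-batch's statistics, I imagine a small sketch like this (in training mode, nn.BatchNorm1d normalizes each mini-batch with that batch's own mean and biased variance):
+import torch
+import torch.nn as nn
+
+batch1 = torch.tensor([[4., 5., 2., 1.],
+                       [3., 4., 6., 0.],
+                       [1., 4., 7., 9.]])
+
+print(batch1.mean(dim=0))                 # tensor([2.6667, 4.3333, 5.0000, 3.3333])
+print(batch1.var(dim=0, unbiased=False))  # tensor([ 1.5556,  0.2222,  4.6667, 16.2222])
+
+bn = nn.BatchNorm1d(4)                    # gamma = 1, beta = 0 initially
+print(bn(batch1))                         # batch1 normalized with the statistics above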
+"
+"['comparison', 'batch-normalization']"," Title: Is there any difference between conditional batch normalization and batch normalization except the usage of MLPs for predicting $\beta$ and $\gamma$?Body: Batch normalization in neural networks uses $\beta$ and $\gamma$ for scaling. The analytical formula is given by
+$$\dfrac{x - \mathbb{E}[x]}{\sqrt{Var(X)}}* \gamma + \beta$$
+Conditional batch normalization uses multi-layer perceptrons to calculate the values of $\gamma$ and $\beta$ instead of giving fixed values to them.
+Is this the only difference between them, or is there any other fundamental difference in terms of functionality?
+"
+"['reinforcement-learning', 'ai-design', 'model-based-methods', 'model-free-methods']"," Title: If we can model the environment, wouldn't be meaningless to use a model-free algorithm?Body: I am trying to understand the concept of model-free and model-based approaches. As far as I understand, having a model of the environment does not mean that an RL agent has to be model-based. It is about the policy. However, if we can model the environment, why should we want to employ a model-free algorithm? Isn't it better to have a model and expectation about the next reward and state? If you have a better understanding of all these, can you explain them to me as well?
+"
+"['research', 'data-visualization', 'tensor']"," Title: Do researchers generally treat tensors just as mathematical objects with certain shape?Body: Most of the practical research in AI that includes neural networks deals with higher dimensional tensors. It is easy to imagine tensors up to three dimensions.
+When I ask the question How do researchers imagine vector space? on Mathematics Stack exchange, you can read the responses
+
+Response #1:
+I personally view vector spaces as just another kind of algebraic
+object that we sometimes do analysis with, along the lines of
+groups, rings, and fields.
+Response #2
+In research mathematics, linear algebra is used mostly as a
+fundamental tool, often in settings where there is no geometric
+visualization available. In those settings, it is used in the same
+way that basic algebra is, to do straightforward calculations.
+Response #3:
+Thinking of vectors as tuples or arrows or points and arrows... is
+rather limiting. I generally do not bother imagining anything visual
+or specific about them beyond what is required by the definition...
+they are objects that I can add to one another and that I can
+"stretch" and "reverse" by multiplying by a scalar from the scalar
+field.
+
+In short, mathematicians generally treat vectors as objects in a vector space rather than using popular academic/beginner imaginations such as points or arrows in space.
+A similar question on our site also recommends not to imagine higher dimensions and to treat dimensions as degrees of freedom.
+I know only two kinds of treatments regarding tensors:
+
+- Imagining at most up to three-dimensional tensors spatially.
+
+- Treating tensors as objects having shape attribute which looks like $n_1 \times n_2 \times n_3 \times \cdots n_d$
+
+
+Most of the time I prefer the first approach. But I have difficulty with the first approach when I try to understand code (programs) that uses higher-dimensional tensors. I am not accustomed to the second approach, although I think it is capable enough for understanding all the required operations on tensors.
+I want to know:
+
+- How do researchers generally treat tensors?
+- If it is the second approach I mentioned: Is it possible to understand all the high dimensional tensor-related tasks?
+
+"
+"['game-ai', 'monte-carlo-tree-search', 'minimax']"," Title: How does one handle different player turns in MCTS?Body: Suppose we have a two player game like Tic Tac Toe where the two players take turns to play their moves. It is my understanding that in the game tree that MCTS builds, consecutive levels in the tree correspond to different player's turns.
+So, for instance, in the root node it is Player1's turn to play, in the children of the root node it is Player2's turn to play, in the children of those children it is Player1's turn again, etc.
+Is that correct?
+If so, is it really prudent to treat nodes where it's the enemy's turn to play the same as those where we choose the next action (i.e. by averaging rollout results in backpropagation). Since, it's not us choosing the next action but the enemy, shouldn't we "pick" the minimum "return" (like in minimax) in those cases instead of the average like we do for nodes where we get to pick the next action?
+By picking I mean to only count the win ratio of that child node (i.e. the minimum win ratio).
+I suspect I am missing something (e.g. that might mess up exploration vs exploitation with UCT) but I can't put my finger on it.
+What do you guys think about this?
+Edit: Maybe a solution to this is only considering good moves for the opponent? But then again.. how do we define good? Heuristics?
+"
+"['generative-adversarial-networks', 'probability', 'notation']"," Title: Does generator in conditonal GAN obey probability laws?Body: In probability, we have two types of probability functions: unconditional probability $p(x)$ and conditional probability $p(x | y)$. Both are fundamentally different and the latter can be obtained by the following equation
+$$p(x|y) = \dfrac{p(x, y)}{p(y)} \text{ provided } p(y) \ne 0$$
+I never heard of formal definition for conditioning except for conditional probability function.
+But in case of neural networks, I came across the notion of conditioning.
+$$\min_G \max_DV(D, G) = \mathbb{E}_{x ∼ P_{data}}[\log D(x|y)] + \mathbb{E}_{z ∼ p_z}[log (1 - D(G(z|y)))]$$
+Since the neural network $D$ is intended to implement a probability function, we can at least think about conditioning it on an input. But the neural network $G$ is not intended to implement a probability function: $G$ is intended to provide data samples by learning an underlying probability distribution, and its output is not in the range $[0, 1]$.
+Does $G$ obey the laws of probability? If yes, how, since its output is not restricted to $[0, 1]$? If no, then why do the authors use the notation of conditional probability for $G$ as well?
+"
+"['generative-adversarial-networks', 'notation', 'input-layer', 'conditional-probability', 'conditional-gan']"," Title: Is there any difference between 'input' and 'conditional input' in the case of neural networks?Body: In the research paper titled Conditional Generative Adversarial Nets by Mehdi Mirza and Simon Osindero, there is a notion of conditioning a neural network on a class label.
+It is mentioned in the abstract that we need to simply feed extra input $y$ to the generator and discriminator of an unconditional GAN.
+
+Generative Adversarial Nets were recently introduced as a novel
+way to train generative models. In this work we introduce the
+conditional version of generative adversarial nets, which can be
+constructed by simply feeding the data, $y$, we wish to condition on to
+both the generator and discriminator. We show that this model can
+generate MNIST digits conditioned on class labels. We also illustrate
+how this model could be used to learn a multi-modal model, and provide
+preliminary examples of an application to image tagging in which we
+demonstrate how this approach can generate descriptive tags which are
+not part of training labels.
+
+So, I cannot see whether there is any special treatment for input $y$.
+If there is no special treatment for the data $y$, then why do they call $y$ a condition and follow the notation of conditional probability such as $G(z|y), D(x|y)$ instead of $G(z,y), D(x,y)$?
+If there is special treatment of the input $y$, then what is it? Don't they pass $y$ to the neural networks in the same way as $x$?
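+From the third-party implementations I have looked at (this is my reading, not something stated in the paper), the "conditioning" often amounts to nothing more than concatenating $y$ to the other input, for example:
+import torch
+import torch.nn.functional as F
+
+batch, z_dim, num_classes = 16, 100, 10
+z = torch.randn(batch, z_dim)
+y = F.one_hot(torch.randint(0, num_classes, (batch,)), num_classes).float()
+
+generator_input = torch.cat([z, y], dim=1)   # shape (16, 110), fed to G as usual
+
+If that is really all there is to it, then the conditional-probability notation seems purely suggestive, which brings me back to my question.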
+"
+"['neural-networks', 'deep-learning', 'autoencoders', 'time-series', 'representation-learning']"," Title: Train separate AutoEncoder's on each class or one AE for all classes to learn features?Body: I'm working on a project where the dataset contains time series of three classes, depending on the shape of the series. I want to learn the representations of these series as vectors, so naturally I use AutoEncoder for the task (precisely, I use LSTM-AutoEncoder to better handle the sequential data).
+My question is: should I train one model for all classes or one model for each class? If possible, could you also point out what are the pros and cons of each approach? One thing that worries me about the latter approach is that the AE will simply memorize the data without any learning (again, would that be a concern?)
+Thank you very much in advance!
+Sincerely,
+"
+"['comparison', 'terminology', 'objective-functions', 'generative-adversarial-networks', 'value-functions']"," Title: Is there any difference between an objective function and a value function?Body: I found the usage of both objective function and value function in the same context.
+Context #1: In the paper titled Generative Adversarial Nets by Ian J. Goodfellow et al.
+
+We simultaneously train G to minimize $\log(1 −D(G(z)))$. In other
+words, $D$ and $G$ play the following two-player minimax game with
+value function $V (G,D)$:
+$$\min_G \max_DV(D, G) = \mathbb{E}_{x ∼ P_{data}}[\log D(x)] +
+ \mathbb{E}_{z ∼ p_z}[log (1 - D(G(z)))]$$
+
+Context #2: In the paper titled Conditional Generative Adversarial Nets by Mehdi Mirza et al.
+
+The objective function of a two-player minimax game would be as
+$$\min_G \max_DV(D, G) = \mathbb{E}_{x ∼ P_{data}}[\log D(x|y)] +
+ \mathbb{E}_{z ∼ p_z}[log (1 - D(G(z|y)))]$$
+
+In fact, the second paper also reiterated context #1, i.e., used the term "value function" in another place.
+We can observe that objective function is a function which we want to optimize
+
+The objective function is the most general term that can be used to
+refer to a cost (or loss) function, to a utility function, or to a
+fitness function, so, depending on the problem, you either want to
+minimize or maximize the objective function. The term objective is a
+synonym for goal.
+
+Since the generator or discriminator has to perform optimization, it is agreeable to use the term objective function in this context.
+But what is the definition for the value function and how is it different from the objective function in this context?
+"
+"['research', 'academia']"," Title: Why many deep learning research papers continue to be in arXiv?Body: There are plenty of research papers, especially in deep learning are present only in arXiv with large number of citations. I cannot find them in journals as peer-reviewed ones.
+For example if I search for Conditional Generative Adversarial Nets then I can find only an arXiv pre-print and has been Cited by 5722
+This is not the single paper and I personally found lot of papers in pre-print only with no journal/conference affiliation. Many research papers are at-least 3 years old.
+Is it solely due to the will of authors or is there any other reason for this phenomenon of not getting published even though they are widely accepted especially in the domain of deep learning?
+"
+"['neural-networks', 'calculus', 'education']"," Title: Are calculus and differential geometry required for building neural networks?Body: I've been studying geometry and linear algebra for months with the goal to build neural networks. But now I'm reading that perceptrons require fitting curves, and curves are not expressed as linear functions. So, I might need to study differential geometry and calculus for building good fitting curves in perceptrons.
+I already know how to code and was hoping to get my hands dirty by coding a few neural networks. But should I study calculus and differential geometry before coding?
+From this video, I understand that the least squares approximation can be used to fit a curve through a set of points, so maybe linear algebra is enough for building good neural networks?
+"
+"['convolutional-neural-networks', 'convolution', 'convolutional-layers']"," Title: Why do we lose detail of an image as we go deeper into a ConvNet?Body: I was reading this research paper titled 'Image Style Transfer using Convolutional Neural Networks' which as the title suggests was based on Neural Style Transfer. I came across this line which didn't make immediate sense to me.
+
+Here's how it went -
+
+We reconstruct the input image from layers 'conv1_2' (a), 'conv2_2' (b), 'conv3_2' (c), 'conv4_2' (d) and 'conv5_2' (e) of the original VGG-Network. We find that reconstruction from lower layers is almost perfect (a–c). In higher layers of the network, detailed pixel information is lost while the high-level content of the image is preserved (d,e).
+
+The line that is italicised; Why does that happen?
+"
+"['definitions', 'social', 'education']"," Title: Explaining AI to Non-Technical IndividualsBody: How does one approach proposing AI to management? This is something I have struggled with for a long time. I want to implement AI toward a specific problem in my place of work. My supervisors are generally willing to listen; but they want to know how the algorithm(s) is going to work. They are not programmers. My tendency is to write out the math and step through it. However, most of them don't want to do that because they have a limited amount of time to sit there and listen. On top of that, some of these algorithms can get somewhat complex.
+Let's take a simple neural network, for example; how would you explain the way it works without diving into the math?
+"
+"['deep-learning', 'natural-language-processing', 'bert', 'text-classification']"," Title: Training and Evaluating BERT and XLNETBody: I am thinking about a project and have a few questions before I accept it. Would be grateful I anyone experienced of you could give me some advice.
+In the project, I have been given a data set with (rather small) 30,000 text documents, which are labeled with 0 and 1. I want to train and evaluate (with respect to accuracy) a BERT and an XLNet model.
+Can you give me some rough estimates for the following questions?:
+
+- How much computing power do I need for this task, i.e. can I simply use my private laptop for this or do I need a special CPU/GPU for it?
+- So far, I have just worked with classical machine learning models (e.g. random forests, SVMs, etc.). I am not experienced with deep learning architectures yet. How difficult would it be to implement a BERT or XLNet model with my own data set, having no experience with BERT or XLNet yet? I.e. how much code would I have to develop by myself? And would I need a deep understanding of it, or would it be sufficient to follow an online tutorial and basically copy the code from there?
+Many thanks.
+
+"
+"['neural-networks', 'reinforcement-learning', 'observation-spaces', 'padding']"," Title: How do neural networks deal with inputs of different sizes that are padded in order to have them of the same size?Body: I am trying to create an environment for RL where the size of my input (observation space) is not fixed. As a way around it, I thought about padding the size to a maximum value and then assigning "null" to those values that do not exist. Now, these "null" values are meaningful in a certain sense, because they are related to the shape and size of the input.
+If these "null" values were zeros, would neural networks be able to distinguish between these zeros (nulls) and the zeros that are actually part of the picture? If that's not the case, should I assign a different number for the padding? What should I be mindful of in these scenarios? Is there any example I can look at with a similar situation?
+"
+"['reinforcement-learning', 'deep-learning', 'deep-rl', 'dqn', 'pytorch']"," Title: Deep Q-Learning with multiple discrete actionsBody: I am working on a DQN project with Pytorch, where I should choose multiple discrete actions, each in a range, say, (0, 15)
. I am wondering how I can model it, such that the sum of actions is 15. Does anyone know how to model that?
+"
+"['generative-adversarial-networks', 'generative-model', 'history', 'image-generation']"," Title: Is image generation not existent before generative adversarial networks?Body: Although the GAN is widely used due to its capability, there were generative models before the GAN which are based on probabilistic graphical models such as Bayesian networks, Markov networks, etc.
+It is now a well-known fact that GANs excel at image generation tasks. But I am not sure whether the generative models that were invented before GANs were used for image generation or not.
+Is it true that other generative models were used for image generation before the proposal of the GAN in 2014?
+"
+"['deep-learning', 'implementation', 'computer-programming', 'tensor']"," Title: What are the (key) purposes of unsqueezing operation on tensors?Body: The unsqeeze operation is used in several deep learning algorithms. However, I only found this operation in the code/implementation of the algorithms presented in the papers, which do not mention it.
+The unsqueeze operation never modifies data in a tensor, and it only changes the positions of available data in the tensor. Wherever I see its use in coding till now, it is used before matrix multiplication only. So I am unaware of other uses of it.
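+For example, here is a small PyTorch snippet (just an illustration I put together) showing that unsqueeze only inserts a new dimension of size 1 and leaves the values untouched:
+import torch
+
+x = torch.tensor([1.0, 2.0, 3.0])      # shape: (3,)
+y = x.unsqueeze(0)                      # shape: (1, 3) -- a new leading dimension
+z = x.unsqueeze(1)                      # shape: (3, 1) -- a new trailing dimension
+print(x.shape, y.shape, z.shape)
+# The underlying values are identical; a common use is to make shapes
+# compatible, e.g. before a matrix multiplication:
+print(torch.matmul(z, y).shape)         # (3, 1) @ (1, 3) -> (3, 3)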
+Is the unsqueeze operation only useful for handling compatibility issues, i.e., to make the data compatible with the underlying operation, with no other significance, or is it used for other purposes in deep learning?
+"
+"['convolutional-neural-networks', 'bias']"," Title: Can we change bias and control the output of neural network?Body: I have read the use of Targeted Adversarial Attacks for making the model perform better. But can we change the bias of the neural networks and control the outcome of the network rather than changing the input. if yes, can you share some resources or research papers on targeted bias in neural networks?
+"
+['terminology']," Title: Assume 120 examples, a model makes 20 correct predictions and updates weight for the other 100. Should I count this epoch 100 iterations or 120?Body: Per google's glossary, an iteration refers to
+
+A single update of a model's weights during training ...
+
+The following code comes from a github repo
+def fit(self, x, y, verbose=False, seed=None):
+    # assumes `import numpy as np` at module level
+    indices = np.arange(len(x))
+    for i in range(self.n_epoch):
+        n_iter = 0  # counts examples that were already predicted correctly this epoch
+        np.random.seed(seed)
+        np.random.shuffle(indices)
+        for idx in indices:
+            if(self.predict(x[idx])!=y[idx]):
+                # wrong prediction: update the weights for this example
+                self.update_weights(x[idx], y[idx], verbose)
+            else:
+                # correct prediction: no weight update, just count it
+                n_iter += 1
+        if(n_iter==len(x)):
+            print('model gets 100% train accuracy after {} epoch(s)'.format(i))
+            break
+
+Note that this model doesn't update weights for each single example, because when the model make a correct prediction for some example, it skips the example without updating weights.
+In this kind of scenario, where the model makes a correct prediction for the $i$th input $x_i$ and jumps to the next example $x_{i+1}$ without updating the weights for $x_i$, does that count as an iteration?
+Assume there are 120 training examples; in one epoch, the model makes 20 correct predictions and updates the weights for the other 100. Should I count this epoch as 100 iterations or 120 iterations?
+Note: This question is NOT about coding. The code cited above works well. This question is about terminology. The code is just to illustrate the scenario in question.
+"
+"['reinforcement-learning', 'deep-rl', 'exploration-exploitation-tradeoff', 'exploration-strategies']"," Title: How to reduce the number of episodes before the agent learns in this game?Body: The initial environment state is 0.25. Each time step the agent performs a discrete action
of 0
or 1
. If action is 1
, then the new state will be state + 0.1
. If action is 0
, the new state will be state - random() * 0.2
. The reward is state - 0.5
, however if state > 0.98
(or state < 0
) the agent dies (with no reward).
+First question: How do I teach the agent not to be too greedy? How to verify that the agent learned?
+Main question: How to reduce the number of trials (i.e. the number of episodes) before the agent learns?
+I would also appreciate any relevant references.
+Here is the environment and here is what I tried.
+It works, however:
+
+- It took 1000 episodes of max 2000 timesteps, which is unacceptable
+for me (I wish to drastically reduce the number of episodes and timesteps).
+
+- The behavior is far from optimal. Ideally, the agent should choose action 0 only if the state is larger than 0.88 (or something below that and within a small interval such as 0.01). [Edit] However, the threshold is 0.75, which forces the agent to choose 0 even if it could safely choose 1, e.g. following a 0.8 -> 0.76 -> 0.75 -> 0.74 trajectory before choosing 1 again.
+
+
+"
+['perceptron']," Title: What's the difference between a ""perceptron"" and a GLM?Body: In a comment to this question user nbro comments:
+
+As a side note, "perceptrons" and "neural networks" may not be the same thing. People usually use the term perceptron to refer to a very simple neural network that has no hidden layer. Maybe you meant the term "multi-layer perceptron" (MLP).
+
+As I understand it, a simple neural network with no hidden layer would simply be a linear model with a non-linearity put on top of it. That sounds exactly like a generalized linear model (GLM), with the non-linearity being the GLM's link function.
+Is there a notable difference between (non-multi-layer) perceptrons and GLMs? Or is it simply another case of two equivalent methods having different names from different researchers?
+"
+"['deep-learning', 'tensorflow', 'papers', 'implementation', 'kl-divergence']"," Title: How is this statement from a TensorFlow implementation of a certain KL-divergence formula related to the corresponding formula?Body: I am trying to understand a certain KL-divergence formula (which can be found on page 6 of the paper Evidential Deep Learning to Quantify Classification Uncertainty) and found a TensorFlow implementation for it. I understand most parts of the formula and put colored frames around them. Unfortunately, there is one term in the implementation (underlined red) that I can't tell how it fits in the formula.
+
+Is this a mistake in the implementation? I don't understand how the red part is necessary.
+"
+"['neural-networks', 'logistic-regression', 'iris-dataset']"," Title: Does it make sense for a logistic regression model to perform better than a neural network on the Iris data set?Body: Per a review post, a simple Logistic Regression model on the Iris data set gets about 97% test accuracy on iris dataset whereas a neural network gets just 94%. The neural network model used in Keras is
+import tensorflow as tf
+
+model = tf.keras.Sequential([
+ tf.keras.layers.Dense(500, input_dim=4, activation='relu'),
+ tf.keras.layers.Dense(256, activation='relu'),
+ tf.keras.layers.Dense(256, activation='relu'),
+ tf.keras.layers.Dense(128, activation='relu'),
+ tf.keras.layers.Dropout(0.2),
+ tf.keras.layers.Dense(128, activation='relu'),
+ tf.keras.layers.Dense(3, activation='softmax')
+])
+model.compile(loss='categorical_crossentropy',
+ optimizer='adam',
+ metrics=['accuracy'])
+
+The model is fit for 30 epochs using a batch size of 20.
+Note that I did try fewer neurons and layers but none of them got better performance.
+Does this make sense? Can any other neural network get a higher test accuracy than a logistic regression model?
+"
+"['neural-networks', 'training', 'stability']"," Title: Is stability an attribute of model or training algorithm used or combination of both?Body: From this answer, stability is attributed to a learning algorithm
+
+A stable learning algorithm is one for which the prediction does not
+change much when the training data is modified slightly.
+
+In some other places, I have read the phrase "stability of a neural network model". I am not sure whether the stability of a learning algorithm and the stability of a model are the same or not. If they are the same, then
+
+A stable model is one for which the prediction does not
+change much when the training data is modified slightly.
+
+Is it true? If not, is there anything called stability of a model and is different from the stability of a learning algorithm?
+Suppose I am training a neural network model with a gradient descent algorithm. To which one do I need to attribute stability or instability? Is it to the neural network model or to the gradient descent training algorithm? Or should it be attributed to the combination of both?
+"
+"['math', 'gradient-descent', 'graphs', 'vanishing-gradient-problem']"," Title: Is there any geometrical interpretation on overcoming gradient related problems by adjusting/changing loss function?Body: There are instances in literature where we need to change loss function in order to escape from gradient problems.
+Let $L_f$ be a loss function for a model I need to train. Sometimes $L_f$ leads to problems due to the gradient. So I reformulate it as $L_g$ and can apply the optimization successfully. Most of the time, the new loss function is obtained by making small adjustments to $L_f$.
+
+For example: Consider the following excerpt from the paper titled Evolutionary Generative Adversarial Networks
+
+In the original GAN, training the generator was equal to minimizing
+the JSD between the data distribution and the generated distribution,
+which easily resulted in the vanishing gradient problem. To solve this
+issue, a nonsaturating heuristic objective (i.e., “$− \log D$ trick”)
+replaced the minimax objective function to penalize the generator
+
+
+How can one understand those facts geometrically? Are there any simple examples in either 2D or 3D that show two types of curves: one that gives no gradient issues and another that does, yet both achieve the same objective?
+"
+"['probability-distribution', 'metric']"," Title: Are there any other metrics available for calculating the distance between two probability distributions other than those mentioned?Body: The divergence between two probability distributions is used in calculating the difference between the true distribution and generated distribution. These divergence metrics are used in loss functions.
+Some divergence metrics that are generally used in literature are:
+
+- Kullback-Leibler Divergence
+- Jensen–Shannon divergence
+- f-divergence
+- Wasserstein distance
+
+Some other divergence measures include:
+
+- Squared Hellinger distance
+- Jeffreys divergence
+- Chernoff's $\alpha-$divergence
+- Exponential divergence
+- Kagan's divergence
+- $(\alpha, \beta)-$product divergence
+- Bregman divergence
+
+I think some naive divergence measures include
+
+- Least-squares divergence
+- Absolute deviation
+
+Along with these, are there any other divergence measures available to compute the distance between the true probability distribution and estimated probability distribution in artificial intelligence?
+"
+"['machine-learning', 'convolutional-neural-networks', 'image-processing', 'convolutional-layers', 'filters']"," Title: What is the significance behind having small kernel sizes over having one large kernel size that covers the entire input in a CNN?Body: I have hardly ever seen anyone cover the entire input image with a filter of the same dimensions. I was wondering why that is the case, and if the performance in say, an image detection application would decrease if someone used kernel size = the size of the input image
itself?
+"
+"['math', 'generative-model', 'image-generation']"," Title: Is the range of inception score flexible or bounded based on number of classes?Body: Inception score is used to evaluate the generative models. It is a score given based on quality and diversity of images generated.
+I have a doubt about the range of the inception score because an article mentions the possibility of the range $[0, \infty]$ and yet still talks about an upper bound in a practical setting:
+
+The lowest score possible is zero. Mathematically the highest possible
+score is infinity, although in practice there will probably emerge a
+non-infinite ceiling. For a ceiling to the IS, imagine that our
+generators produce perfectly uniform marginal label distributions and
+a single label delta distribution for each image — then the score
+would be bounded by the number of labels.
+
+Suppose I have 1000 classes/labels in my task, then is it possible to get an inception score of 2000? Or is it mandatory that the inception score must lie in $[1, 1000]$?
+To be concise: Is bounding inception score to a particular range $[1, \text{number of classes}]$ optional or mandatory?
+"
+"['tensor', 'affine-transformations']"," Title: Which product operation should be used in affine transformation?Body: Affine transformation as I am aware can be expressed as either dot product followed by addition or a matrix multiplication followed by addition
+$$a.x+b$$
+$$a^{T}x + b$$
+where the first one is based on the dot product and the second one is based on matrix multiplication. It should be noticed that $a, x$ are column vectors here and $b$ is a real number.
+In case a matrix $a$ is compatible (say of order $m \times n$) and $x, b$ are column vectors of order $m \times 1$, then the affine transformation is (generally) a matrix multiplication followed by a vector addition.
+$$a^{T}x + b$$
+I think it is correct.
+I have a doubt about the product operation between $a$ and $x$ if $a, b$ and $x$ are tensors of higher dimensions. In the case of tensors, should we perform the Hadamard product or normal matrix multiplication? Or are they both equivalent, like in the case of an affine transformation on column vectors?
+I got this doubt because I encountered an affine transformation that neither uses dot product nor matrix multiplication but uses Hadamard product.
+
+Background: Recently I came across an affine transformation that applies Hadamard product on reshaped weight $a$ and input $x$ and then adds reshaped bias $b$ to it
+Initially, the dimensions are as follows
+a -> [r, c]
+x -> [r, c, d1, d2]
+b -> [r, c]
+
+Later they reshaped weight and bias to the shape of the input
+a -> [r, c, d1, d2]
+x -> [r, c, d1, d2]
+b -> [r, c, d1, d2]
+
+and finally, they are performing a * x + b
+Here, $*$ is the Hadamard product, an element-wise multiplication operation, which (I think) is entirely different from normal matrix multiplication.
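+To make the shapes concrete, here is a small PyTorch sketch of what I think they are doing (the sizes and the exact reshape are my own guesses for illustration):
+import torch
+
+r, c, d1, d2 = 2, 3, 4, 5                      # placeholder sizes
+a = torch.randn(r, c)                          # weight
+x = torch.randn(r, c, d1, d2)                  # input
+b = torch.randn(r, c)                          # bias
+
+# Reshape weight and bias to the input's shape (my guess: add trailing
+# singleton dims and let them expand to d1, d2).
+a_r = a.view(r, c, 1, 1).expand(r, c, d1, d2)
+b_r = b.view(r, c, 1, 1).expand(r, c, d1, d2)
+
+y = a_r * x + b_r                              # element-wise (Hadamard) product plus bias
+print(y.shape)                                 # torch.Size([2, 3, 4, 5])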
+Is there any clue on what they did? Is it possible to view Hadamard's product as normal matrix multiplication?
+"
+"['reinforcement-learning', 'markov-decision-process', 'state-spaces']"," Title: Is there any inherent assumption of start and goal states in an MDP?Body: MDP stands for the Markov decision process. It is a 5-length tuple used in reinforcement learning.
+$$MDP = (S, A, T, R, \pi)$$
+$S$ stands for a set of states, also called state space.
+$A$ stands for a set of actions, also called action space.
+$T$ is a probability distribution function $$T: S\times A \times S\rightarrow [0,1]$$
+$R$ is a reward function
+$$R: S\times A \rightarrow \mathbb{R}$$
+$\pi$ is a policy function
+$$\pi: S\times A \rightarrow [0,1]$$
+This question is restricted to continuous spaces i.e., state and action spaces are continuous. And also to stochastic policy function. And also consider only the basic MDP instead of its flavors.
+In general, MDP in reinforcement learning is applied mostly to games. And most of the games have certain start states as well as goal states.
+Is there any reason for not specifying start and goal states in an MDP, like in a finite automaton?
+Or does an MDP have implicit start and goal states (say, from the values of the reward function)?
+Or is an MDP, by nature, defined irrespective of start and goal states? If yes, can I just imagine an MDP as a state-space search problem without a particular goal?
+"
+"['bayesian-networks', 'markov-chain', 'probabilistic-graphical-models']"," Title: In the original GAN paper, why is it mentioned that you can sample deep directed graphical models without a Markov chain?Body: In the original GAN paper (table 2), why is it mentioned that you can sample deep directed graphical models without a Markov chain (well, they say without difficulties, but others list MCMC as a difficulty).
+I was wondering how this is done because I have only seen MCMC based approaches.
+
+"
+"['reinforcement-learning', 'policies']"," Title: How to allow RL systems to find better policies after code changes?Body: Suppose that in version 1 of a reinforcement-learning system an optimal policy $A$ got generated for executing a task. But, in a newer version 2 of that application (with new code changes), there might be some policy $B$ that would do slightly (1-2%) better than policy $A$.
+How do you allow the system to learn that "better" policy $B$? I think the answer is retraining.
+But during the training process, the old policy $A$ might still keep accumulating rewards delaying policy $B$ to be recognised as the "better" policy than $A$. This could get worse if each newer version of the system would contain a better policy which is only slightly better than the previous release's best policy. It would take a very long time to find the best policy.
+Is this accepted in real-world RL systems? Or should I be figuring out a way to tell the system that "Hey, there might be a better policy somewhere, try to find that instead of rewarding existing policies."?
+"
+"['time-series', 'sequence-modeling', 'seq2seq']"," Title: Is seq2seq the best model when input/output sequences have fixed length?Body: I understand that seq2seq models are perfectly suitable when the input and/or the output have variable lengths. However, if we know exactly the input/output sequence lengths of the neural network. Is this the best approach?
+"
+"['transformer', 'attention', 'memory']"," Title: What is the bit memory task?Body: I learned from this post about the so-called bit memory:
+
+They froze its self-attention and feed-forward layers and, in separate copies, fine-tuned peripheral layers on each on a wide range of tasks: Bit memory (memorizing strings of bits), Bit XOR (performing logical operations on pairs of strings of bits), ListOps (parsing and performing mathematical operations), MNIST, CIFAR-10 (classification of images), CFAR-10 LRA (classification of flattened, greyscale images), and remote homology detection (predicting what kind of protein structure an amino acid is part of).
+
+I wonder what the "bit memory" task is? Is it an identity function as described in this post? Or the memory network?
+"
+"['reinforcement-learning', 'q-learning', 'deep-rl', 'model-based-methods', 'model-free-methods']"," Title: In deep reinforcement learning, what is this model with state as input and value as output?Body: I was looking at this implementation for creating an agent for playing Tetris using DeepRL.
+This model uses "a state based on the statistics of the board after a potential action. All predictions would be compared but the action with the best state would be used".
+So at each iteration, it feeds a set of future states, computed based on the current state (each future state made up of statistics of the game, like the number of holes in the board, cleared rows, total height, ...),
+to a neural network, which outputs one "value" per future state.
+So at every step, you predict N "values" from the neural network for all N possible future states reachable from the one you are currently in, and choose the greatest one as your future state and thus the associated action.
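+In pseudo-code, the selection step I am describing looks roughly like this (the function and variable names are my own, not taken from the linked implementation):
+def choose_action(env, value_network):
+    # Enumerate every legal action together with the board statistics
+    # (holes, cleared rows, total height, ...) of the state it would lead to.
+    candidates = [(action, env.simulate(action)) for action in env.legal_actions()]
+    # Score each candidate "future state" with the network and pick the best one.
+    best_action, _ = max(candidates, key=lambda pair: value_network.predict(pair[1]))
+    return best_action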
+Now, my issue: the implementation says it's "deep Q-learning", but I do not see it that way. Neither the action nor some sort of current state is given as input to the network.
+Since it is feeding the "future states", for me, it looks more like a value iteration algorithm with a neural network or at least something where you know the transition model?
+Did I miss something and it is actually DQN?
+If not, do you have any references for this kind of RL model? Does this have a name?
+"
+"['reinforcement-learning', 'deep-rl', 'activation-functions', 'soft-actor-critic', 'td3']"," Title: Is it possible to use Softmax as an activation function for actor (policy) network in TD3 or SAC Reinforcement learning algorithms?Body: As I understand from literature, normally, the last activation in an actor (policy) network in TD3 and SAC algorithms is a Tanh function, which is scaled by a certain limit.
+My action vector is perfectly described as a vector where all values are between 0 and 1, and which should sum up to 1. This is perfect for a Softmax function. But these values are not probabilities of discrete actions. Each value in the action vector should be the percentage of the whole portfolio to be invested in a certain stock.
+But I cannot figure out whether it would be mathematically fine to use Softmax as the final activation layer in TD3 or SAC.
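+For clarity, this is the kind of actor head I have in mind (a minimal PyTorch sketch I wrote to illustrate the question; the layer sizes are arbitrary):
+import torch
+import torch.nn as nn
+
+class PortfolioActor(nn.Module):
+    def __init__(self, state_dim=10, n_assets=5):
+        super().__init__()
+        self.net = nn.Sequential(
+            nn.Linear(state_dim, 64),
+            nn.ReLU(),
+            nn.Linear(64, n_assets),
+        )
+
+    def forward(self, state):
+        # Softmax instead of the usual scaled Tanh: the output is a vector of
+        # non-negative portfolio weights that sum to 1.
+        return torch.softmax(self.net(state), dim=-1)
+
+actor = PortfolioActor()
+print(actor(torch.randn(1, 10)).sum())   # ~1.0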
+"
+"['reinforcement-learning', 'value-functions', 'temporal-difference-methods']"," Title: What are the recurrences used for updating state value function in $TD$ and $TD(\lambda)$ learning?Body: There are two types of value functions in reinforcement learning: State value function $V^{\pi} (s)$, state-action value function $Q^{\pi}(s, a)$.
+State value function:
+This value tells us how good it is to be in state $s$ if we are following policy $\pi$. Formally, it can be defined as the average return obtained from time step $t$ onwards from state $s$ if we follow policy $\pi$.
+$$V^{\pi}(s) = \mathbb{E}_{\pi}[R_{t} \mid s_t = s] = \mathbb{E}_{\pi} \left[ \sum \limits_{k=0}^{\infty} \gamma^{k}r_{t+k+1} \mid s_t = s\right]$$
+State-action value function:
+This value tells us how good it is to perform action $a$ in state $s$ if we are following policy $\pi$. Formally, it can be defined as the average return obtained from time step $t$ onwards from state $s$ and action $a$ if we follow policy $\pi$ thereafter.
+$$Q^{\pi}(s, a) = \mathbb{E}_{\pi}[R_{t} \mid s_t = s, a_t = a] = \mathbb{E}_{\pi} \left[ \sum \limits_{k=0}^{\infty} \gamma^{k}r_{t+k+1} \mid s_t = s, a_t = a\right]$$
+Now, Q-learning and SARSA learning algorithms are generally used to update $Q$ function under policy $\pi$ using the following recurrences respectively
+$$Q(s_t,a_t) = Q(s_t,a_t) + \alpha[r_{t+1} + \gamma \max\limits_{a} Q(s_{t+1},a) - Q(s_t,a_t)] $$
+$$Q(s_t,a_t) = Q(s_t,a_t) + \alpha[r_{t+1} + \gamma Q(s_{t+1},a_{t+1}) - Q(s_t,a_t)] $$
+Now my doubt is about the recurrence relations in Temporal Difference (TD) algorithms that update state value functions. Are they the same as the recurrences provided above?
+$$V(s_t) = V(s_t) + \alpha[r_{t+1} + \gamma \max V(s_{t+1}) - V(s_t)] $$
+$$V(s_t) = V(s_t) + \alpha[r_{t+1} + \gamma V(s_{t+1}) - V(s_t)] $$
+If yes, what are the names of the algorithms that use these recurrences?
+"
+"['neural-networks', 'tensorflow', 'games-of-chance']"," Title: Training a sequential model that can only evaluate after several hundred cyclesBody: I'm attempting to build a neural network to play the card game, Lost Cities.
+A brief overview of the game:
+
+- The game involves two players taking turns to play cards on expeditions.
+- Expeditions incur a debt when you play the first card. Subsequent cards will buy out of that debt and return a profit.
+- Each player must either play or discard a card (with restrictions), and then draw a new card from the deck or the discard pile.
+- The game ends when the last card is drawn.
+
+The expanded rules can be found here.
+I'm attempting to train a sequential model to play this game, with the state of the board/hand/discard pile as a set of inputs and three heuristic values for each card (value of playing, value of discarding, value of picking up from the discard pile) as the output values. However, it's unclear to me how I should approach this from a training perspective.
+My biggest hurdle is that the network's success can only be evaluated by whether or not it can "beat" itself or a competing network in a game. A player's raw score during the game is not viable due to the nature of play. To me, this means that the network will have to be used with several hundred different sets of input data (one for each turn/state of the game board during a match) before any meaningful results are generated.
+So far, the only solution I've had minor success with is a generational algorithm that creates "fuzzy" children to compete against each other for the most "wins". This was done in Python's standard library, and was obscenely slow, even with a reduced sequential network.
+My question:
+Is there an established method to deliver delayed feedback to a network (after several uses of the network)?
+I'm very new at this, so any and all feedback is more than welcomed.
+"
+"['tensor', 'dimensionality']"," Title: How to visually or intuitively understand single element multi-dimensional tensors?Body: Consider the following code in PyTorch
+>>>torch.tensor([8]).shape
+torch.Size([1])
+>>>torch.tensor([[8]]).shape
+torch.Size([1, 1])
+>>>torch.tensor([[[8]]]).shape
+torch.Size([1, 1, 1])
+
+We can notice that we want to store only a single element, $8$, in a tensor. But it is possible to store $8$ in an $n$-dimensional tensor for any $n \in \mathbb{N}$. Strictly speaking, $\mathbb{N}$ may be replaced by $\mathbb{W}$ (the whole numbers, including zero).
+But I am facing difficulty in understanding this fact of a single element contributing to all dimensions. If the element is present in all dimensions, then I am assuming that it has to be present multiple times, which is not the case. I can't understand how a single element contributes to any number of dimensions without repeating itself multiple times.
+How to understand this phenomenon? How should I interpret or visualize this fact intuitively?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'convolutional-neural-networks', 'feature-engineering']"," Title: How to pass variable length data as feature to a neural network?Body: I am working on building a model to classify the type of touch the user makes(Long Press, Left Swipe, Right swipe and so on). I have data with features that characterise the user's touch, like duration, velocity in x-direction, velocity in y-direction etc. One feature that's also present is the trajectory of the touch.
+The problem is that for touches like taps or long presses, the length of the trajectory array is 2 or 3 points, but for swipes, it reaches up to 40-100 points. What I thought could work is either padding or CNNs. But the problem with padding is that, since I am using trajectories, padding them with 0s might affect the learning, because a '0' still has meaning in a trajectory (it is a valid point).
+And with CNNs, the problem I see is that I don't know if such an architecture could work for all features (touch duration, xVelocity, etc.), as they are not spatially related. I may be wrong about this, feel free to correct me. I also thought of using RNNs, but I did not, as they are mainly used for NLP tasks and the features are not sequentially related to each other.
+"
+"['geometric-deep-learning', 'graph-neural-networks', 'embeddings', 'knowledge-graph']"," Title: Is graph embedding linear in its maintaining of graph geometry?Body: It is claimed that the main goal of graph embedding methods is to pack every node's properties into a vector with a smaller dimension, so node similarity in the original complex irregular spaces can be easily quantified in the embedded vector spaces using standard metrics.
+However, I can find no formal explanation as to why nodes with similar properties should be embedded so that their separation in embedded space respects their similarity. Is there such a proof, or is it a convenient consequence of embedding?
+"
+"['comparison', 'information-theory', 'semantic-networks']"," Title: Why would the Dice coefficient be more suitable than mutual information when you don't want 0-0 matches to be significant?Body: I'm confused about the interpretation and assumptions of the Dice coefficient versus the more popular measure mutual information. I'm specifically referencing its use in hierarchical semantic network analysis, or ranking the significance of collocation of words.
+I'm referencing Translating Collocations for Bilingual
+Lexicons: A Statistical Approach which talks about how the Dice coefficient is more appropriate when you don't want 0-0 matches to be significant. However, as a amateur in probability, it's not really clear to me from the respective formulas why this would be.
+Could someone explain?
+"
+"['deep-learning', 'overfitting', 'regularization', 'batch-normalization']"," Title: Why is BatchNormalization causing severe overfitting to my data?Body: So I've been making a mini version of VGGNet, trying to tweak the hyperparameters to match the CIFAR-100 dataset.
+It was running slow at first but I was able to get decent accuracy after 60 epochs or so.
+However, when I added BatchNormalization layers to my two fully-connected hidden layers, it started learning at like 20% accuracy immediately, but began overfitting my data so badly that after 7 epochs my validation accuracy didn't improve from 0.01, compared to 20+% testing accuracy.
+Why would adding these layers, which are supposed to act as a regularizer, actually cause severe overfitting instead? I'm confused.
+"
+"['convolutional-neural-networks', 'filters', 'convolutional-layers', 'computational-complexity', 'flops']"," Title: Given an input of shape $(3, 32, 32)$, which is convolved with a $(3 \times 3)$ kernel, how do I calculate the FLOPS?Body: I have an input tensor of shape $\mathbf{(3, 32, 32)}$ consisting of 3 channels, 16 rows, and 16 columns. I want to convolve the input tensor using $\mathbf{(3 \times 3)}$ kernel/filter. How can I calculate the required FLOPs?
+"
+"['reference-request', 'books', 'bayesian-statistics', 'bayesian-inference']"," Title: Is there an entry level textbook on Bayesian Inference that is a nice blend of theory and applications?Body: I am looking for a textbook that is a nice entry level to Bayesian Inference. I was hoping that there is a nice blend of theory and applications (data sets) on how concepts are applied. Programming techniques presented are welcome.
+Just for perspective, I feel that Christopher Bishop's PRML is a theoretical treatment. It is very good theoretically, but I find myself not understanding how to apply it given a data set.
+I have tried jumping from one book to another and this has just confused me. Is there any authoritative book with these requirements?
+"
+"['monte-carlo-methods', 'temporal-difference-methods', 'environment', 'learning-rate']"," Title: How does the learning rate $\alpha$ vary in stationary and non-stationary environments?Body: In Sutton and Barto's book (Chapter 6: TD learning, 2nd edition), he mentions two ways of updating value function:
+
+- Monte Carlo method: $V(S_t) \leftarrow V(S_t) + \alpha[G_t - V(S_t)]$.
+- TD(0) method: $V(S_t) \leftarrow V(S_t) + \alpha[R_{t+1} + \gamma V(S_{t+1}) - V(S_t)]$.
+
+I understand that $\alpha$ acts like a learning rate where it take some proportion of MC/TD error and update value function.
+From my understanding, in stationary environments, transition probability distribution and reward distribution don't vary with time. Hence, one should supposedly use $\alpha-$decay to update value functions. On the other hand, since distributions change with time in non-stationary environments, $\alpha$ should be kept constant so as to keep updating the value function with recent TD/MC errors (in other words, history doesn't matter).
+What's been bothering me is that in Example 6.2, 6.5, and 6.7, probability and reward distribution doesn't change. So why is constant-$\alpha$ being used?
+Question: How does $\alpha$ vary in stationary and non-stationary environments?
+"
+"['generator', 'normal-distribution', 'uniform-distribution', 'noise']"," Title: How does noise samples from uniform distribution contribute to the diversity of generator output?Body: In a Generative Adversarial Network (GAN), there are two multi-layer perceptrons. One is the generator network and another is a discriminator network.
+The input for the generator network is a noise vector $z$. The input for a discriminator network is either a generated sample $G(z)$ i.e., the output of a generator network or a training sample $x$ for a training dataset.
+My doubt is regarding the input of the generator. The noise vector is generally sampled from the standard normal distribution.
+$$z \sim \mathcal{N(0, 1)}$$
+Although I am not sure, I think that, since the values sampled from the normal distribution vary, the output of the generator can vary accordingly.
+But some of the research papers say that the noise vector can also be sampled from a uniform distribution i.e., $z \sim \mathcal{U(a, b)}$ for $a<b$.
+$$ U(x) = \begin{cases}
+ \dfrac{1}{b-a} & x\in [a, b] \\
+ 0 & x\not\in [a, b] \\
+ \end{cases}
+$$
+It is clear that the uniform distribution does not vary like the normal distribution: its density takes only two possible values, hence all samples have equal probability in the given range. Then how can it contribute to the diversity of the output of the generator network?
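+For reference, in code the two choices differ only in the sampling call (a PyTorch illustration; the latent size is arbitrary):
+import torch
+
+latent_dim = 100
+z_normal  = torch.randn(8, latent_dim)          # z ~ N(0, 1)
+z_uniform = torch.rand(8, latent_dim) * 2 - 1   # z ~ U(-1, 1), i.e. a = -1, b = 1
+# Either tensor can be fed to the generator: fake_images = G(z)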
+"
+"['time-series', 'rule-based-systems']"," Title: Transforming a complex if-else decision-making to MLBody: I have a time series classification problem that uses a series of if-else statements to arrive at a particular label. I am attempting to use ML/DL to make the system simpler.
+So far, I have tried using a tabular data approach where I take a snapshot of information up to a particular point. For example, this will be the rolling sum of certain columns and so on. I have also tried LSTM and CNN. All these approaches have failed to give me F1 scores significantly above 50 %.
+Are there other ML/DL approaches that I should try before giving up? The models were built using AutoKeras and PyCaret.
+"
+"['reinforcement-learning', 'dqn', 'state-spaces', 'action-spaces']"," Title: How to incorporate action information in the state input of a DQN?Body: I am working on an RL problem that I am trying to solve using a Deep Q-network. The problem concerns choosing drivers to take specific taxi orders. I am familiar with most of the existing works and that they use RL to determine which orders to take. I specifically look at the situation where we want to determine which drivers to take.
+This means that action space concerns the various drivers we can choose. Initially, we assume a fixed number of drivers to ensure a fixed action space.
+My question is about defining the state space. First of all, the state space consists of information about the next order we are trying to assign to a driver from our action set. Besides that, we also want to incorporate state information about the different drivers (e.g. their location). However, this would mean we include state information about the actions as input of the DQN. The reason is that the state of the drivers is the main thing that changes when choosing a different action and therefore determines the choice we want to make at the next timestep. I am for example thinking about creating a list of size |drivers| with element i defining the location of agent i.
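+Concretely, the state construction I have in mind would look something like this (a rough NumPy sketch with made-up feature names, just to illustrate the idea):
+import numpy as np
+
+def build_state(order, drivers):
+    # Features of the order we are about to assign (names are placeholders).
+    order_features = np.array([order["pickup_x"], order["pickup_y"],
+                               order["dropoff_x"], order["dropoff_y"]])
+    # One entry (here: a 2-D location) per driver, in a fixed order,
+    # so the i-th slot always describes driver i.
+    driver_features = np.concatenate([np.array(d["location"]) for d in drivers])
+    return np.concatenate([order_features, driver_features])
+
+order = {"pickup_x": 0.1, "pickup_y": 0.4, "dropoff_x": 0.9, "dropoff_y": 0.2}
+drivers = [{"location": (0.3, 0.3)}, {"location": (0.7, 0.1)}]
+print(build_state(order, drivers))   # length 4 + 2 * |drivers|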
+I tried to find existing work that uses a similar setting (so that incorporates action states in the state input), however, I did not succeed in this yet. Therefore I am wondering:
+
+- Is this a logical/reasonable approach to the problem?
+- If yes, is someone familiar with existing works that use a comparable approach?
+
+I am familiar with works that take (state, action) as input, which describes the full pair of the state s and the action a, and then produce a single Q(s,a) for each specific pair of state + action. This is an approach we do not want to take, given that it leads to |A(s)| passes through the network instead of a single pass (as explained here).
+"
+"['neural-networks', 'machine-learning', 'generative-adversarial-networks']"," Title: Can NeuralHash be used as a loss for an Autoencoder?Body: I've recently read about NeuralHash, and immediately thought that it might be used as a loss for an autoencoder. However, it only seems to preserve "structure" from what I've read, not actual pixel values (which makes sense, given its purpose). Thus, how likely it is that an autoencoder performs well given a loss that compares the NeuralHash of its output with the NeuralHash of its input?
+I feel like, assuming that NeuralHash is secure, it should either work well, that is produce an image similar to its input (because the hash is approximately unique) or not work at all (otherwise we would have found a collision), no middle-ground. Is there any thoughts/research on this?
+"
+"['machine-learning', 'reinforcement-learning', 'deep-rl', 'neat']"," Title: In what situation would you want to use NEAT over reinforcement learning?Body: NEAT is an evolutionary algorithm. When would you want to use NEAT over more traditional/common RL algorithms like PPO or SAC etc. What advantage does it give you?
+"
+"['machine-learning', 'incremental-learning', 'online-learning', 'catastrophic-forgetting']"," Title: Do other online/incremental algorithms not suffer from catastrophic forgetting?Body: All the literature I read seems to indicate catastrophic forgetting affects only neural networks. Do other online/incremental algorithms not suffer from catastrophic forgetting (for example, SGDClassifier)? Why would that be the case?
+"
+"['deep-learning', 'convolutional-neural-networks', 'tensorflow', 'keras']"," Title: What would be the advantage of making channel dimension first in TensorFlow Keras implementation?Body: I was reproducing the findings of a research article in which I discovered that they had switched the Channel dimension from last to first. To clarify this concept, I went through A Gentle Introduction to Channels-First and Channels-Last Image Formats . The author of this link stated:
+
+When represented as three-dimensional arrays, the channel dimension
+for the image data is last by default, but may be moved to be the
+first dimension, often for performance-tuning reasons.
+
+There are two ways to represent the image data as a three dimensional array. The first involves having the channels as the last or third dimension in the array. This is called “channels last“. The second involves having the channels as the first dimension in the array, called “channels first“.
+
+Channels Last. Image data is represented in a three-dimensional array where the last channel represents the color channels, e.g.
+[rows][cols][channels].
+Channels First. Image data is represented in a three-dimensional array where the first channel represents the color channels, e.g.
+[channels][rows][cols].
+
+We are aware, theoretically, of how the kernels are applied when the channel dimension comes last. However, I'm curious about what happens when the channel dimension comes first, and how the kernels will process the input. More precisely:
+Assume we have [rows, columns, channels] -> [2,4,3] image dimensions. We may say we have three data channels, each with two rows and four columns, correct? Alternatively, if we assume that the channel dimension comes first, that means [channels, rows, columns] -> [3,2,4]. In other words, we now have four data channels, each with three rows and two columns, am I correct? If I am interpreting this correctly, then this is quite confusing, because we are completely modifying our image.
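+For reference, here is how I would convert between the two layouts in NumPy (just to make the example above concrete):
+import numpy as np
+
+img_last = np.arange(2 * 4 * 3).reshape(2, 4, 3)    # [rows, cols, channels] = [2, 4, 3]
+img_first = np.moveaxis(img_last, -1, 0)            # [channels, rows, cols] = [3, 2, 4]
+print(img_last.shape, img_first.shape)              # (2, 4, 3) (3, 2, 4)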
+Question:
+What is the benefit of shifting the channel dimension first, and how will the kernels move on it?
+
+For more detail check the code:
+input_layer = tf.keras.Input(shape=input_shape, name="Time_Series_Activity")
+con_l1 = tf.keras.layers.Conv2D(64, (5, 1), activation="relu", data_format='channels_first')(input_layer)
+
+Summary of Code
+Layer (type) Output Shape Param #
+=================================================================
+Time_Series_Activity (InputL [(None, 1, 30, 52)] 0
+_________________________________________________________________
+conv2d_2 (Conv2D) (None, 64, 26, 52) 384
+_________________________________________________________________
+
+"
+"['convolutional-neural-networks', 'relu', 'calculus', 'vgg', 'style-transfer']"," Title: Why the partial derivative is $0$ when $F_{ij}^l < 0$?. Math behind style transferBody: I am currently in the process of reading and understanding the process of style transfer. I came across this equation in the research paper which went like -
+$$\frac{\partial \mathcal{L}_{content}}{\partial F_{ij}^{l}} = \begin{cases} \left(F^{l} - P^{l}\right)_{ij} & \text{if } F_{ij}^{l} > 0 \\ 0 & \text{if } F_{ij}^{l} < 0 \end{cases} \tag{2}$$
+For context, here is the paragraph -
+
+Generally each layer in the network defines a non-linear
+filter bank whose complexity increases with the position of
+the layer in the network. Hence a given input image is
+encoded in each layer of the Convolutional Neural Network
+by the filter responses to that image. A layer with $N_l$ distinct filters has $N_l$ feature maps each of size $M_l$, where $M_l$
+is the height times the width of the feature map. So the responses in a layer $l$ can be stored in a matrix $F^l \in \mathbb{R}^{N_l \times M_l}$,
+where $F_{ij}^l$ is the activation of the $i$th filter at position $j$ in layer $l$.
+To visualise the image information that is encoded at
+different layers of the hierarchy one can perform gradient descent on a white noise image to find another image that matches the feature responses of the original image (Fig 1, content reconstructions). Let $\vec p$ and $\vec x$ be the original image and the image that is generated, and $P^l$ and $F^l$ their respective feature representation in layer l. We then define the squared-error loss between the two feature representations
+$\mathcal{L}_{content}(\vec p, \vec x, l) = \frac{1}{2} \sum_{i,j} \big(F_{ij}^l - P_{ij}^l \big)^2$. The derivative of this loss with respect to the activations in layer $l$ is given by [the equation above, $(2)$].
+
+I just want to know why the partial derivative is $0$ when $F_{ij}^l < 0$.
+"
+"['generative-adversarial-networks', 'applications', 'text-generation', 'gpt-3']"," Title: How can GPT-3 be used for designing electronic circuits from text descriptions?Body: I was wondering if it is possible to use GPT-3 to translate text description of a circuit to any circuit design language program, which in turn can be used to make the circuit.
+If it is possible, what approach will you suggest?
+"
+['residual-networks']," Title: Why do skip layer connections require the same layer sizes?Body: I know how skip connections work: you add the activations of the previous layer to the activations of a successive layer to stabilize information/gradient flow.
+My question is, why doesn't it just get implemented in the seemingly more sensible way of concatenating some previous layer's activations onto a later layer's activations?
+Most regularization methods are implemented somewhat transparently (to avoid possible negative consequences, e.g. BatchNorm having learnable parameters to disable it). While this method instead interferes with regular functioning of the network rather than simply making itself available in case it is useful.
+What is the reasoning behind the choice to do this rather than simply using concatenation?
+"
+"['deep-learning', 'terminology', 'geometric-deep-learning', 'books', 'non-euclidean-data']"," Title: What exactly is a grid-like topology according to the book Deep Learning?Body: I am reading this book called "Deep Learning" (by Goodfellow, Bengio and Courville).
+On page 326, in the first paragraph, it says:
+
+CNNs are a specialized kind of neural network for processing data that has a known grid-like topology. Examples include time-series data, which can be thought of as a 1-D grid taking samples at regular time intervals, and image data, which can be thought of as a 2-D grid of pixels.
+
+Considering an image as a grid is completely intuitive. And, similarly, we can extend the logic to a 1-D time series.
+But then what cannot be considered as having a grid-like structure?
+"
+"['reference-request', 'attention', 'convolution', 'sequence-modeling']"," Title: Couldn't the self-attention mechanism be replaced with a global depth-wise convolution?Body: The main advantages of the self-attention mechanism are:
+
+- Ability to capture long-range dependencies
+- Ease to parallelize on GPU or TPU
+
+However, I wonder why the same goals cannot be achieved by global depthwise convolution (with the kernel size equal to the length of the input sequence) with a comparable amount of flops.
+Note:
+In the following, I am comparing against the original architecture from the paper Attention Is All You Need.
+Idea:
+Consider the depthwise convolution of size $L$ with circular padding:
+$$
+y_{t,c} = W_{t^{'},c} x_{t^{'} + t, c}
+$$
+Here, $x$ is the input signal and $y$ is the output signal,
+$t$ is the position in the sequence, and $c$ is the channel index.
+Since the convolution is depthwise, each output channel depends only on the corresponding input channel (we would like to have linear complexity in the dimension of the embedding vector).
+After a single convolution, one definitely would not have any interactions between the tokens in the sequence.
+However, a two-layer convolutional network with these tokens is able to capture long-range pair-wise interactions:
+$$
+x_{t,c}^{(2)} = W_{t^{''},c}^{(2)} \sigma(W_{t^{'},c}^{(1)} x_{t^{'} + t, c}^{(0)})
+$$
+And by stacking a not very large number of these layers (like 12 or 24) one can model interactions between tokens in the sequence of arbitrary complexity.
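+As a concrete sketch of the operation I have in mind (a PyTorch illustration; the sequence length and embedding size are arbitrary):
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+L, d = 128, 64                       # sequence length, embedding dimension
+x = torch.randn(1, d, L)             # (batch, channels, length)
+
+# Depthwise convolution with a kernel as long as the sequence:
+# groups=d makes every output channel depend only on its own input channel.
+conv = nn.Conv1d(d, d, kernel_size=L, groups=d, bias=False)
+
+# Circular padding by L-1 positions so that the output keeps length L.
+x_padded = F.pad(x, (0, L - 1), mode='circular')
+y = conv(x_padded)
+print(y.shape)                       # torch.Size([1, 64, 128])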
+Comparison of complexity:
+The asymptotic complexity of both approaches seems to be the same.
+
+- Attention: $O (L^2 d)$
+
+- Depthwise convolution: $O (L^2 d)$
+
+
+However, dot product attention seems to be a rather intuitive and biologically motivated operation that is crucial for sequence problems.
+Has this question been studied in the literature or discussed somewhere before?
+EDIT
+De facto, global depthwise convolution is used in MLP-Mixer. One stage performs convolution with a global receptive field (of the size of the feature map), and the other operation is pointwise convolution with kernel_size=1.
+
+"
+"['neural-networks', 'genetic-algorithms', 'neat', 'neuroevolution']"," Title: In the NEAT algorithm, what is the purpose of treating disjoint and excess genes differently?Body: In the NEAT algorithm, what is the purpose of treating disjoint and excess genes differently?
+They are treated so (or may potentially be treated so) at least when calculating the distance between 2 individuals when dividing the population into species (the c1 and c2 coefficients).
+"
+"['math', 'affine-transformations']"," Title: Is there any concept like 'applying affine transformation on multiple inputs'?Body: Affine transformation on $X$ is a transformation of the following form
+$$Y = wX + b$$
+In general, $w, X, Y$ and $b$ are tensors.
+We generally call the tensor $X$ the input to the affine transformation, or the tensor we want to transform. We call $w, b$ the weight and bias tensors, respectively. We call $Y$ the output tensor after the transformation. Every layer of a multi-layer perceptron contains an affine transformation.
+Suppose I have two types of inputs, say $X_1, X_2$. Now, I want to apply an affine transformation that uses both of them together.
+Consider the following
+#1: Combining using individual affine transformations
+$$Y_1 = w_1X_1 + b_1$$
+$$Y_2 = w_2X_2 + b_2$$
+$$Y_1Y_2 = w_1w_2X_1X_2 + w_1b_2X_1 +w_2b_1X_2 + b_1b_2$$
+#2 multiplying them and applying affine
+$$Y = wX_1X_2 + b$$
+#3 concatenating them and applying affine
+$$Y = w (X_1, X_2) + b$$
+Are any of the above eligible to be called an affine transformation in terms of $X$ and $Y$ (not $XY$)? If not, is it true that there is no such thing as an affine transformation on two inputs taken together?
+"
+"['performance', 'batch-size']"," Title: Will there be any changes in the model's performance due to the usage of very small batch sizes?Body: I am trying to run a code that has a batch size around 28. I can run the program on my GPU with this batch size.
+But, when I modify the code for my requirements and try to run, it is showing an run-time error due to insufficient memory in GPU.
+I checked for possible batch-size that I can run and it is just 2-5.
+I am not sure whether there is any issue if I run with such small batch sizes? I mean, will there be any performance issues keeping aside the time it takes?
+"
+"['training', 'pytorch', 'image-segmentation', 'semantic-segmentation']"," Title: How to re-train an AI model to have smaller input image sizeBody: I need a PyTorch Model which can do road segmentation on OAK-D camera.
+The model provided requires an input image size of 896x512, which is too big for running on the OAK-D camera. Thus, I need to re-train it with a smaller input size (224x224), and I just need the BG (background) and road classes, or any other options available which can easily make it run on the OAK-D camera.
+Does anyone know how to do this?
+"
+"['autoencoders', 'variational-autoencoder']"," Title: Are mean and standard deviation in variational autoencoders unique?Body: In general, if I have a collection of data then mean(Expectation) and standard deviation are calculated as follows
+$$\text{mean } = \mu = \mathbb{E}[X] = \sum\limits_{i = 1}^n p_ix_i $$
+$$\text{standard deviation} = \sigma(X) = \sqrt{\sum\limits_{i = 1}^{n}p_i(x_i - \mu)^2}$$
+where $X$ is a random variable having support $\{x_1, x_2, x_3, \cdots, x_n\}$.
+Thus, a dataset of samples has a single mean and a single standard deviation.
+Now, let us discuss the case of variational auto-encoders. They look as follows:
+
+Suppose I trained the above auto-encoder on a training set; then, for each sample, I will get a mean and standard deviation at the latent layer. Here, we get a new $\mu$ and $\sigma$ for each data sample. But, as we saw earlier, the mean and standard deviation exist for a dataset and not for each sample.
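+To show what I mean by "a $\mu$ and $\sigma$ per sample", here is a minimal PyTorch-style encoder head (my own toy sketch, not from any particular implementation):
+import torch
+import torch.nn as nn
+
+class Encoder(nn.Module):
+    def __init__(self, input_dim=784, latent_dim=2):
+        super().__init__()
+        self.hidden = nn.Linear(input_dim, 64)
+        self.mu_head = nn.Linear(64, latent_dim)       # per-sample mean
+        self.logvar_head = nn.Linear(64, latent_dim)   # per-sample log-variance
+
+    def forward(self, x):
+        h = torch.relu(self.hidden(x))
+        return self.mu_head(h), self.logvar_head(h)
+
+enc = Encoder()
+mu, logvar = enc(torch.randn(5, 784))   # 5 samples -> 5 different (mu, sigma) pairs
+print(mu.shape, logvar.shape)           # torch.Size([5, 2]) torch.Size([5, 2])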
+I am confused: how can we say that a mean and standard deviation are obtained at the latent layer if they are not constant in nature?
+"
+"['neural-networks', 'reinforcement-learning', 'continuous-action-spaces', 'continuous-state-spaces', 'discretization']"," Title: Can neural networks have continuous inputs and outputs, or do they have to be discrete?Body: In general, can ANNs have continuous inputs and outputs, or do they have to be discrete?
+So, basically, I would like to have a mapping of continuous inputs to continuous outputs. Is this possible? Does this depend on the type of ANN?
+More specifically, I would like to use neural networks for learning the Q-function in Reinforcement Learning. My problem has basically a continuous state and action space, and I would like to know whether I have to discretize it or not (a more detailed description of my problem can be found here).
+"
+"['neural-networks', 'machine-learning', 'algorithm', 'bayesian-networks']"," Title: What does all the formula and pictures mean?Body: https://www.nature.com/articles/s41467-020-17419-7
+I am a medical school graduate and I really want to learn AI/ML for computer-aided diagnosis.
+I was building a symptom checker and I found the material. It clarifies the drawbacks of associative models that perform differential diagnosis, and it suggests a counterfactual (causal) approach to improve accuracy.
+The thing is I couldn't understand what the formulas mean in the article, e.g.:
+$$P(D \mid \mathcal{E}; \theta )$$
+I really want to know what the $\mid$ and the $;$ are doing here, what they mean, etc.
+I would be really happy if someone could answer directly or just provide me with some references to get the general idea quickly.
+Here comes the trickiest part...
+
+"
+"['gym', 'real-time', 'applications']"," Title: Where to start with reinforced learning on actions and rewards sampled from slow ongoing real life systemBody: I would like some pointers, possible projects that solve conceptually similar goals, code examples or tutorials.
+I am trying to build a system that is able to start or stop ventilation of a given space based on different outside and inside metrics, such as humidity, temperature, time, etc., in order to decrease the relative humidity.
+Actuating the system based on simple physics gave me questionable results; this is because I am not able to model the whole dynamics of the system.
+I was thinking that reinforcement learning could help me learn a good policy. The system should learn on real-life data, actuating real ventilation, with the obvious slowness of such a system.
+I am quite new to AI, able to comprehend and create a simple OpenAI Gym environment. I am not even sure if and how something like this is achievable with such a limited data flow.
+I am currently recording and analyzing all possible data I can measure, together with some more or less random ventilation sessions. I am sure there are better ways to do this.
+"
+"['neat', 'neuroevolution']"," Title: Different ways to produce the same network in NEATBody: I have an interesting example for the NEAT and want to clarify what behavior is correct from NEAT's perspective and why (why the opposite is wrong, what are the consequences of choosing the different one).
+So let us have an initial network of 3 nodes and 2 edges:
+Initial Condition
+Nodes: [A, B, C]
+Edges: {
+1: A->B
+2: B->C
+}
+
+
+1st Gen
+Then in the 1-st generation we get 2 mutants:
+Mutant 1 (edge 1 got split)
+Nodes: [A, B, C, D]
+Edges: {
+1: A->B DIS
+2: B->C
+3: A->D
+4: D->B
+}
+
+Mutant 2 (edge 2 got split)
+Nodes: [A, B, C, E]
+Edges: {
+1: A->B
+2: B->C DIS
+5: B->E
+6: E->C
+}
+
+
+2nd Gen
+In the second generation, if we mutate Mutant 1 (by splitting edge 2) and mutate Mutant 2 (by splitting edge 1), which result should we get?
+Hypothesis 1: the same result:
+Nodes: [A, B, C, D, E]
+Edges: {
+1: A->B DIS
+2: B->C DIS
+3: A->D
+4: D->B
+5: B->E
+6: E->C
+}
+
+or...
+Hypothesis 2: Two new mutants:
+Nodes: [A, B, C, D, F]
+Edges: {
+1: A->B DIS
+2: B->C DIS
+3: A->D
+4: D->B
+7: B->F
+8: F->C
+}
+
+and
+Nodes: [A, B, C, E, G]
+Edges: {
+1: A->B DIS
+2: B->C DIS
+5: B->E
+6: E->C
+9: A->G
+10: G->B
+}
+
+
+In case the second hypothesis is correct, how does it deal with crossover in the next run?
+Say these 2 mutants are bred. We get:
+Breeding in 2nd Hypothesis
+Nodes: [A, B, C, D, E, F, G]
+Edges: {
+1: A->B DIS
+2: B->C DIS
+3: A->D
+4: D->B
+5: B->E
+6: E->C
+7: B->F
+8: F->C
+9: A->G
+10: G->B
+}
+
+That looks like an overly complicated genome for the 3rd generation, doesn't it?
+In case the first hypothesis is correct, then innovation numbers are actually somewhat redundant in NEAT and can be handled differently.
+We can keep the node list as a list of strings (node names).
+Then, instead of assigning an innovation number to an edge, we can use a string value calculated like HASH(fromNodeName + toNodeName).
+That way, whenever a new link is created between 2 nodes in any generation, it gets the same innovation name.
+When a node is created (by splitting an edge), its name can be taken right from the edge being split, and the innovation names of the 2 new edges can be calculated like HASH(fromNodeName + splitEdgeName) and HASH(splitEdgeName + toNodeName).
+That way the algorithm has no global variables, no shared list of all innovations, and can be easily parallelized.
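+For illustration, here is a minimal Python sketch of this hash-based naming scheme (the helper names are my own, not from the NEAT paper):
+import hashlib
+
+def innovation_name(from_node, to_node):
+    # Deterministic edge name: the same pair of endpoints always yields the same name,
+    # in any generation and in any parallel worker.
+    return hashlib.sha1(f"{from_node}->{to_node}".encode()).hexdigest()[:8]
+
+def split_edge(from_node, to_node):
+    edge = innovation_name(from_node, to_node)
+    new_node = edge                                  # the new node is named after the edge it splits
+    in_edge = innovation_name(from_node, new_node)   # HASH(fromNodeName + splitEdgeName)
+    out_edge = innovation_name(new_node, to_node)    # HASH(splitEdgeName + toNodeName)
+    return new_node, in_edge, out_edge
+
+# Splitting A->B in two different lineages produces identical identifiers:
+print(split_edge("A", "B"))
+print(split_edge("A", "B"))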
+"
+"['natural-language-processing', 'long-short-term-memory']"," Title: What is the reason for a training loss that drops but validation that NEVER doesBody: I've been working on learning about NLP via a beginners competition on Kaggle.
+I first trained a model with an embedding layer and then a simple linear layer. I actually got way better than a flip of the coin with this model, so I decided to try to step it up with an LSTM.
+What happened was that the training loss decreased and then plateaued, while the validation loss never decreased at all.
+In the case of overfitting, I would expect validation loss to decrease for a while but then either remain steady or perhaps even increase as the model starts to overfit.
+I can't find any reason for the strange loss curves I'm seeing:
+
+
+What could cause such a phenomenon?
+I would be happy to share my network architecture and training code if there isn't a straightforward answer (I know there usually isn't).
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'convolutional-layers']"," Title: How can equivariance to translation be a benefit of a CNN?Body: I just learnt about the properties of equivariance and invariance to translation and other transformations.
+Being invariant to translation is clearly an advantage, as even if the input gets shifted, the network will still learn the same features, and work fine.
+But how is equivariance useful?
+"
+['terminology']," Title: What is the difference between “AI Methods” and “AI Techniques”?Body: These are words that we frequently come upon. What can be said about the differences? Would these two words' subheadings be different?
+"
+"['machine-learning', 'math', 'linear-regression', 'numerical-algorithms']"," Title: Is there any domain in machine learning that solves a problem by using only analytical algorithms?Body: Most of the algorithms in machine learning I am aware of use datasets and learning happens in an iterative manner given some examples. The examples can also be understood as experience in the case of reinforcement learning.
+Consider the following from Numerical Computation chapter of Deep Learning book
+
+Machine learning algorithms usually require a high amount of numerical computation. This typically refers to algorithms that solve mathematical problems by methods that update estimates of the solution via an iterative process, rather than analytically deriving a formula to provide a symbolic expression for the correct solution. Common operations include optimization (finding the value of an argument that minimizes or maximizes a function) and solving systems of linear equations. Even just evaluating a mathematical function on a digital computer can be difficult when the function involves real numbers, which cannot be represented precisely using a finite amount of memory.
+
+I am wondering whether there is any domain in machine learning that deals with solving the problem analytically rather than with computationally heavy iterative algorithms?
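+To illustrate what I mean by "analytically", here is a small NumPy sketch (with made-up data) of ordinary least squares solved in closed form via the normal equation, with no iterative updates:
+import numpy as np
+
+rng = np.random.default_rng(0)
+X = rng.normal(size=(100, 3))                  # 100 samples, 3 features
+w_true = np.array([2.0, -1.0, 0.5])
+y = X @ w_true + 0.1 * rng.normal(size=100)
+
+# Closed-form solution of min ||Xw - y||^2: solve (X^T X) w = X^T y
+w_hat = np.linalg.solve(X.T @ X, X.T @ y)
+print(w_hat)                                   # close to w_true, no gradient descent involved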
+"
+"['terminology', 'math', 'implementation', 'books', 'storage']"," Title: Why not undefined expression is different from numerical underflow?Body: Consider an architecture or programming language that uses $n$ bits for storing a floating point number in a particular format. Then each and every floating point number it can store should be in a given range, say $[lf, uf]$.
+If there is a need to store a floating point number smaller than $lf$, then we generally treat such a phenomenon as underflow. Consider the following from the Numerical Computation chapter of the Deep Learning book.
+
+One form of rounding error that is particularly devastating is
+underflow . Underflow occurs when numbers near zero are rounded to zero.
+Many functions behave qualitatively differently when their argument is
+zero rather than a small positive number. For example, we usually want
+to avoid division by zero (some software environments will raise
+exceptions when this occurs, others will return a result with a
+placeholder not-a-number value) or taking the logarithm of zero
+(this is usually treated as $-\infty$, which then becomes not-a-number
+if it is used for many further arithmetic operations).
+
+You can observe that two examples have been given while explaining underflow: division by zero and logarithm of zero. Treated mathematically, both are undefined. They should not be an issue of storage, especially not underflow.
+Is there any reason behind providing such examples, which are mathematically undefined, under the umbrella term underflow and using the term "not-a-number"?
+"
+['math']," Title: What are the mathematical properties of natural exponential function that lead to its usefulness in artificial intelligence?Body: In mathematics, there is a proof that the following infinite series converges to a constant irrational number, denoted by $e$, called as Euler's number or Euler's constant or Napier's constant. The value of $e$ lies between 2 and 3.
+$$1 + \dfrac{1}{1!} + \dfrac{1}{2!} + \dfrac{1}{3!} + \cdots$$
+The natural exponential function, defined as follows, has some interesting properties
+$$f: \mathbb{R} \rightarrow \mathbb{R}$$
+$$f(x) = e^x$$
+It is used in several algorithms and in the definitions of functions like SoftMax.
+I am interested in knowing the mathematical characteristics that make this function useful in artificial intelligence.
+The following are the properties I am aware of. But, I am not sure about how some of them will be useful
+
+- Non-linearlity: Activation functions are intended to provide non-linearity. So, it is a candidate for activation functions due to this property. You can check its graph here.
+
+- Differentiability: Loss functions used for in back-propagation algorithm need to be differentiable. So, it can be a candidate for usage in loss functions too.
+
+
+$$\dfrac{d}{dx} e^x = e^x \text{ for all } x \in \mathbb{R}$$
+
+- Continuity: I am not sure how this property is useful in algorithms. Intuitively, you can check from graph provided above that it is continuous.
+
+- Smoothness: I am not sure how this property is useful in algorithms. But seems useful. The natural exponential function has the smoothness property.
+
+
+$$\dfrac{d^n}{dx^n} e^x = e^x \text{ for all } x \in \mathbb{R} \text{ and } n \in \mathbb{N}.$$
+Are there any other properties, like non-linearity, differentiability, smoothness, etc., of the natural exponential function that make it superior for use in AI algorithms?
+"
+"['terminology', 'probability-distribution', 'softmax']"," Title: Where can I read about the multinoulli distribution?Body: I encountered the term multinoulli distribution in the following sentence from Chapter 4: Numerical Computation of the deep learning book.
+
+The softmax function is often used to predict the probabilities
+associated with a multinoulli distribution.
+
+I am guessing that a multinoulli distribution is any probability distribution defined on a random variable taking multiple values. I know that the SoftMax function is used to convert a vector into another vector of the same length, with probability values that signify the chance of the input falling into each particular class.
+Suppose $C$ is a random variable with support $\{c_1, c_2, c_3, \cdots, c_k\}$. Then I am guessing that any probability distribution on $C$ is a multinoulli distribution. SoftMax is an example of such a multinoulli distribution that uses the expression $\dfrac{e^{x_i}}{\sum_j e^{x_j}}$ for calculating probabilities.
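+To make my guess concrete, here is a tiny NumPy sketch (with my own toy scores) of SoftMax turning a score vector into what I believe is a multinoulli distribution over $k = 4$ classes:
+import numpy as np
+
+scores = np.array([2.0, 1.0, 0.1, -1.0])        # one score per class c_1, ..., c_4
+probs = np.exp(scores) / np.exp(scores).sum()   # softmax
+print(probs, probs.sum())                       # non-negative values that sum to 1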
+Is my guess about the multinoulli distribution correct? The reason for my doubt is that I never came across the word multinoulli before, and I cannot find it even on the internet. If my guess is wrong, where can I read about the multinoulli distribution?
+"
+"['research', 'proofs', 'models', 'supervised-learning', 'efficiency']"," Title: Non-locally Electrically Programmable Logic Gates - Technological Advances ProgressBody: Preface: I’d like to clarify that I understand what a relay is and that a PLC uses a fairly conventional microprocessor that only digitally establishes logical logic gate configuration as a digitally programmable alternative to relay banks for analog and/or (depending on the PLC) digital signals. My question is based on the understanding that to date actual logic gates (as far as I know) aren’t non-locally programmable (“re-wirable”) without a person manually rewiring truly programmable actual (not logical programming of a statically wired microprocessor) logic gates.
+Rectenna work interests me specifically around any potential relevance of varying transmission wavelengths and material resistances (if this is not possible with MoS2, generally as a concept for other potential materials) to making possible remote switch activation of logically chosen switches along an array. Essentially I am curious about if this or other research has potential for constructing truly physically reprogrammable (externally and maybe wirelessly) logic gates.
+In general any information on advances towards this capability would be appreciated as right now it seems like the only rudimentary build I could manage for my project is a 64 gate one. That’s not great because anything less than 512 gates would be very hard to make useful for my proof of concept project, and I know there’s no way I could get to a more ideal 262,144 gates.
+One example would be any publication covering whether the phase-engineered low-resistance contacts for ultrathin MoS2 transistors described in the articles below could be produced with varying resistance, in a band usable for selectively activating switches via radio waves.
+https://doi.org/10.1038/nmat4080
+https://www.ece.cmu.edu/news-and-events/story/2019/05/rectennas-converting-radio-waves-into-electricity.html
+I’m not picky if someone knows about other technological advances approaching this capability such as biochemical non-locally programmable switch activation equivalent processes. Thanks everyone.
+Update 1: My specific question is: Have there been any significant technological advances towards non-locally electrically programmable logic gates?
+Update 2: After further review I’ve found that FPGAs are not what I am asking about. Their reprogramming like PLCs is digital not analog. They seem to just be a more generalized similar thing to PLCs rather than being factory equipment. I might incorporate one or more in my project, but they aren’t what I am referring to which is true analog reprogramming. Why does analog matter? Analog means more efficient at the surface level, but it also allows structured logic similar to ladder logic at the hardware level which enables significantly different uses in structuring and restructuring logic execution.
+Update 3: This is for an efficiency proof of concept project trying to prove it is possible to structure logic in a certain way to increase efficiency of certain specific processes. This is a project involving programming and/or design at every single level of development (transistors, machine code, assembly, mid level (such as C/C++), and high level (Python/Tensorflow). I will be creating custom NAND gate structures, writing the instructions to execute on them, writing in assembly, writing in a mid level language, and writing in Python and TensorFlow for different parts of this overall project’s functionality.
+In conclusion the straightforward version of this question is: What are the current capabilities for or research done towards creating physically rewired logic gates using non-local digital instructions?
+"
+"['image-recognition', 'image-segmentation', 'transfer-learning', 'yolo', 'labels']"," Title: Should I label static objects on video dataset?Body: I'm using nvidia Transfer Learning Toolkit to detect cars in some video frames.
+I found some datasets (for example https://www.jpjodoin.com/urbantracker/dataset.html and https://www.kaggle.com/aalborguniversity/aau-rainsnow) and I noticed that parked cars are usually not labeled, but covered under a mask.
+Why shouldn't I also add their labels? It would be easy to label them because they are static objects, and I could copy-paste the label across the whole label set. So why are they not labelled in video datasets?
+"
+"['math', 'optimization', 'gradient-descent', 'derivative']"," Title: Reason for relaxing limit in derivative in this context?Body: Consider the following paragraph from NUMERICAL COMPUTATION of the deep learning book..
+
+Suppose we have a function $y = f(x)$, where both $x$ and $y$ are real
+numbers. The derivative of this function is denoted as $f'(x)$ or as
+$\dfrac{dy}{dx}$. The derivative $f'(x)$ gives the slope of $f(x)$ at
+the point $x$. In other words, it specifies how to scale a small change
+in the input to obtain the corresponding change in the output:
+$f(x+\epsilon) \approx f(x) + \epsilon f'(x)$.
+
+I have a doubt about the equation $f(x+\epsilon) \approx f(x) + \epsilon f'(x)$ given in the paragraph.
+
+In strict sense, the derivative function $f'$ of a real valued
+function $f$ is defined as
+$$f'(x) = \lim_{\epsilon \rightarrow 0}
+ \dfrac{f(x+\epsilon)-f(x)}{\epsilon}$$
+wherever the limit exists.
+
+If I replace the original definition of the derivative as follows
+$$f'(x) \approx \dfrac{f(x+\epsilon)-f(x)}{\epsilon}$$
+then I can obtain the equation given in the paragraph i.e, $f(x+\epsilon) \approx f(x) + \epsilon f'(x)$.
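+As a sanity check of how good that approximation is, here is a tiny Python sketch (my own example function and step sizes) comparing the difference quotient with the exact derivative of $f(x) = x^2$ at $x = 1$:
+f = lambda x: x ** 2
+exact = 2.0                                    # f'(1) = 2
+for eps in [1e-1, 1e-3, 1e-6]:
+    approx = (f(1 + eps) - f(1)) / eps         # (f(x + eps) - f(x)) / eps
+    print(eps, approx, abs(approx - exact))    # the error shrinks as eps -> 0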
+But my doubt is: how can I turn the definition with $\lim\limits_{\epsilon \rightarrow 0}$ into an approximation without the limit? How can the following two be the same?
+$$f'(x) = \lim_{\epsilon \rightarrow 0} \dfrac{f(x+\epsilon)-f(x)}{\epsilon} \text { and } f'(x) \approx \dfrac{f(x+\epsilon)-f(x)}{\epsilon}$$
+"
+"['deep-learning', 'convolutional-neural-networks', 'object-detection', 'testing', 'precision']"," Title: What does a value of -1.000 mean in MS COCO Metrics for Object DetectionBody: I am training some Object-Detection-Models from the TensorFlow Object Detection API and got from the evaluation with MS COCO metrics the following results for Average Precision:
+IoU = 0.5;0.9
+maxDets = 100
+area = small
+AP = -1.000
+The other values all make sense to me. But I don't know what the -1.000 stands for. Does it mean that there are no small objects in my dataset to be detected?
+"
+"['machine-learning', 'objective-functions', 'multiclass-classification', 'imbalanced-datasets']"," Title: How do I select the class weights for the loss function in the case of more than 2 classes?Body: I have a machine learning task where I would like to weight losses based on the frequency of the categorical values appearing in the data. The binary solution can be seen below, but I'd like to know what to do about the case of n>2 categories.
+w_0 = (n_0 + n_1) / (2.0 * n_0)
+w_1 = (n_0 + n_1) / (2.0 * n_1)
+
+The frequencies for the samples n0-n5 are:
+n_0: 1552829
+n_1: 14479
+n_2: 13445
+n_3: 13781
+n_4: 18795
+n_5: 64187
+
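+For reference, here is the straightforward generalization I have in mind, following the same pattern as the binary formula above (w_i = n_total / (n_classes * n_i)); I am not sure whether this is the standard choice, hence the question:
+counts = [1552829, 14479, 13445, 13781, 18795, 64187]
+n_total = sum(counts)
+n_classes = len(counts)
+
+# Same pattern as the binary case: w_i = n_total / (n_classes * n_i)
+weights = [n_total / (n_classes * c) for c in counts]
+print(weights)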
+"
+"['generative-model', 'variational-autoencoder']"," Title: Types of decoder parametrizations in VAE for continuous dataBody: I'm wondering what are the different choices of parametrizations available for the decoder in a variational autoencoder. If the data is discrete, you can just output probabilities for each class, so that's straightforward. If the data is a) continuous and not bounded, then the only parametrization I know of is the (multivariate) normal distribution. If the data is b) continuous and bounded, again the only parametrization I know of is a (multivariate) normal distribution, mapped to [0,1] using a sigmoid function.
+Are there other formulations that are commonly used? If there are, what are the advantages and any disadvantages compared to using a normal distribution?
+To ask the question in a different way: the normal distribution seems like the "go to" choice in problems with continuous output since it's simple and allows gradients through the reparametrization trick. Are there other reparametrizable continuous distributions which are also commonly used in machine learning? I know the exponential distribution (and related distributions such as weibull, gamma) are reparametrizable, but would you ever use those instead of a normal distribution to model continuous data?
+"
+"['reinforcement-learning', 'deep-rl', 'function-approximation']"," Title: Is Reinforcement Learning capable of learning complex functions (such as producing a 3d model given an image)?Body: I want to build an AI that can convert an image of a subject into an anatomically accurate 3D model. To do this, I was thinking of adapting the following code for Deep Deterministic Policy Gradient: https://keras.io/examples/rl/ddpg_pendulum/
+My reasons for considering RL:
+
+- I don't have the needed skill (3D modelling) to procure a large dataset for the project. I was hoping RL may help me overcome that by adapting to a smaller dataset through state-reward learning. I'm looking mainly for human and animal anatomical models. Those are hard to find in large numbers.
+
+- My second concern is the required density (polygon count) of these models can be rather high. I'm not sure if it is computationally feasible for a NN to output high density models. However, I'm thinking an RL agent can step through and write each vertex one at a time in a 3D space. As compared to a single output layer in a feed-forward network.
+
+
+However, that means it will have to handle a rather large state-space (an array of length 50,000 or higher).
+With all of that said, RL has mainly been used in video games and simple control problems from the OpenAI gym. Is it a waste of time to use RL for this level of complexity?
+"
+"['comparison', 'papers', 'transformer', 'architecture', 'vision-transformer']"," Title: What is the difference between a vision transformer and image-based relational learning?Body: I am trying to figure out the difference between the architecture used in this and this paper. It looks like both used multi-headed self-attention and therefore should be the same in principle.
+"
+"['temporal-difference-methods', 'model-free-methods']"," Title: How does n-step Temporal Difference remove the notion of time-step?Body: How does n-step TD removes the notion of time-step as referenced in Sutton and Barto (2nd edition, Page 163) below?
+
+Another way of looking at the benefits of n-step methods is that they free you from the tyranny of the time step. With one-step TD methods the same time step determines how often the action can be changed and the time interval over which bootstrapping is done. With one-step TD methods, these time intervals are the same, and so a compromise must be made. n-step methods enable bootstrapping to occur over multiple steps, freeing us from the tyranny of the single time step.
+
+Consider the following example.
+We know our n-step update equation as: $$V_{t+n}(S_t) = V_{t+n-1}(S_t) + \alpha [G_{t:t+n} - V_{t+n-1}(S_t)]$$
+Now, let $t=0$ and $n=2$. This gives us: $V_2(S_0) = V_1(S_0) + \alpha [G_{0:2} - V_1(S_0)]$.
+Before our n-step TD prediction algorithm starts, we initialize with $V_0$. But we use $V_1$. Why? And how do we calculate $V_1$?
+"
+"['reference-request', 'objective-functions', 'gradient-descent', 'gradient']"," Title: Is there any significance for higher order gradients in artificial intelligence?Body: Although I don't know in detail, I am aware of the following facts regarding the use of gradients in some domains of artificial intelligence, especially in minimizing the training of neural networks.
+
+- First order gradient: It quantifies the rate of change of a function with respect to its inputs. It is useful in artificial intelligence, especially in gradient-based algorithms, to know about the direction in which the parameters need to be updated.
+
+- Second-order gradient: It somehow quantifies the curvature of the function. It is used in artificial intelligence, to know whether the function has convex or concave portions.
+
+
+In this context, I want to learn whether there is any significance for higher-order gradients in artificial intelligence? Note that higher-order refers to the order $\ge 3$.
+"
+"['neural-networks', 'training', 'reference-request', 'stochastic-gradient-descent']"," Title: Is there any way to train a neural network without using gradients?Body: The only algorithm I know for updation of weights of a neural network is based on gradients. The update equation can be roughly written as
+$$w \leftarrow w - \nabla_{w}L$$
+where $\nabla_{w}L$ is the gradient of loss function with respect to weights.
+Are there any learning algorithms for updating weights in neural networks that do not use gradients?
+"
+"['machine-learning', 'math']"," Title: Predict a part of the input based of the outputBody: I'm working on a fun project where I have a dataset of input and output data, both having a fixed size of characters.
+I would like to predict a part of the input based on a known output as follows:
+$$Input = A+B$$
+$$Output = X+Y$$
+A, B, X, Y are strings that will be concatenated and they have a fixed size
+Knowing A, X and Y, I want to predict B (even if it takes a lot of tries). The output is split in 2 because I don't really care what value Y has (so maybe I can delete it, depending on what is easier).
+Is this possible? I'm new to ML and AI, and first I want to know if it is possible before starting to work on the project (I am time-limited). And if it is possible, could you tell me what exactly to study/learn or how I can do it?
+"
+"['algorithm', 'implementation', 'gradient']"," Title: What is the high-level algorithm followed by contemporary packages for the calculation of gradient?Body: Most of the neural network models in contemporary deep learning packages are trained based on gradients.
+Let $f: \mathbb{R}^m \rightarrow \mathbb{R}^n$ be a function for which we want to find a gradient, then the gradient is generally represented by a Jacobian matrix that looks like below
+$$J = \begin{bmatrix}
+ \dfrac{\partial y_1}{\partial x_1} & \dfrac{\partial y_1}{\partial x_2} & \dfrac{\partial y_1}{\partial x_3} &\dots & \dfrac{\partial y_1}{\partial x_m} \\
+ \dfrac{\partial y_2}{\partial x_1} & \dfrac{\partial y_2}{\partial x_2} & \dfrac{\partial y_2}{\partial x_3} &\dots & \dfrac{\partial y_2}{\partial x_m} \\
+ \cdots & \cdots & \cdots & \cdots & \cdots \\
+ \dfrac{\partial y_n}{\partial x_1} & \dfrac{\partial y_n}{\partial x_2} & \dfrac{\partial y_n}{\partial x_3} &\dots & \dfrac{\partial y_n}{\partial x_m} \\
+\end{bmatrix}
+$$
+For example: If $f(x_1, x_2) = \begin{bmatrix}
+ x_1 + x_2 \\
+ x_1x_2
+ \end{bmatrix}$ then $J = \begin{bmatrix}
+ 1 & 1 \\
+ x_2 & x_1
+ \end{bmatrix}$
+After calculating the Jacobian matrix, we can substitute the co-ordinate values of a particular point so that we can obtain a real matrix which is a gradient at a particular point.
+$$
+J_{(4, 5)} = \begin{bmatrix}
+ 1 & 1 \\
+ 5 & 4
+ \end{bmatrix}
+$$
+In order to compute the gradient of a function at a point, the algorithm I know is as follows:
+
+- Write each output of the function in the analytical form in terms of input;
+- Apply partial derivative on each output w.r.t each input;
+- Substitute the values of the input point at which we want to find the gradient.
+
+Thus, finally, we will get the gradient.
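+For reference, this is how I reproduce the worked example above numerically with PyTorch's built-in Jacobian routine (just to show the value at the point $(4, 5)$; I am not claiming this is how the library computes it internally):
+import torch
+from torch.autograd.functional import jacobian
+
+def f(x):
+    x1, x2 = x
+    return torch.stack([x1 + x2, x1 * x2])   # f(x1, x2) = [x1 + x2, x1 * x2]
+
+point = torch.tensor([4.0, 5.0])
+print(jacobian(f, point))                    # tensor([[1., 1.], [5., 4.]])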
+Do the popular packages like PyTorch, Tensorflow, Keras, etc., use this or a variant of this algorithm to find the gradients at a particular point?
+If yes, will those packages be able to write the analytical forms of all the output variables in terms of input variables?
+If not, what is the high-level algorithm for calculating gradients? Is it based on the geometrical (slope) interpretation of the gradient?
+"
+"['reinforcement-learning', 'comparison', 'temporal-difference-methods', 'sutton-barto', 'model-free-methods']"," Title: Why does one-step TD strengthen only the last action of the sequence of actions that led to the high reward, while n-step TD the last n actions?Body: In the caption of figure 7.4 (p. 147) of Sutton & Barto's book (2nd edition), it's written
+
+The one-step method strengthens only the last action of the sequence of actions that led to the high reward, whereas the n-step method strengthens the last n actions of the sequence, so that much more is learned from the one episode.
+
+Why does one-step TD strengthen only the last action of the sequence of actions that led to the high reward, while n-step TD the last n actions?
+Here's a screenshot of the figure.
+
+"
+"['reinforcement-learning', 'temporal-difference-methods', 'model-free-methods', 'pseudocode']"," Title: Why don't we bootstrap terminal state in n-step temporal difference prediction update equation?Body: In the algorithm below, when $\tau + n \geq T$, shouldn't the algorithm bootstrap with the value of the next state? For instance, when $T=5, \tau=3, \& \; n=2$, we don't bootstrap the sample return with $V_{(\tau+n)}$, i.e., $V_5$ or the terminal state.
+
+Also, on line 4, what do we mean by "can take their index mod $n + 1$"?
+"
+"['optimization', 'gradient', 'calculus']"," Title: What all does the gradient tells us other than the direction to move parameters?Body: Gradients are used in optimization algorithms.
+I know that a gradient gives us information about the direction in which one needs to update the weights of a neural network. We need to travel in the opposite direction of the gradient to get optimal values.
+Thus the gradient provides direction to update parameters.
+Is it the only information provided by the gradient? Or does it provide any other information that helps in the training process?
+"
+['ai-design']," Title: How to create a model for predicting the number of visitorsBody: I want to create a model to predict the number of visitors.
+Currently, I have a year's csv data for predicting the number of visitors, which is collected every 10 seconds.
+I would like to predict the number of future visitors on a daily basis based on this data for the past year.
+What kind of method or model can I use to achieve this? I can use a graphics board for learning.
+If you have any pages with sample code, it would be very helpful.
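+In case it clarifies the data, here is a minimal pandas sketch of how I would aggregate such 10-second records into daily totals (the file name and the column names "timestamp" and "visitors" are assumptions about my CSV):
+import pandas as pd
+
+df = pd.read_csv("visitors.csv", parse_dates=["timestamp"])   # assumed file and column names
+daily = df.set_index("timestamp")["visitors"].resample("D").sum()
+print(daily.head())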
+
+"
+"['machine-learning', 'reinforcement-learning', 'ddpg', 'linear-programming']"," Title: Is it possible to solve a linear programming problem using reinforcement learning? (DDPG algorithm)Body: I'm trying to solve a linear programming problem using reinforcement learning.
+The linear programming problem is:
+\begin{array}{ll}
+\text{maximize}_x & C* x \\
+\text{subject to}& A*x \le b\\
+ & x_i \in [0,1],\ where \ i=1,2,3,...
+\end{array}
+For instance:
+\begin{array}{ll}
+C &= [1 \; 2 \;3 \;4]\\
+x &= [x1; x2; x3; x4]\\
+A &= [2 \;3 \;4 \;5]\\
+b &= 10\\
+\end{array}
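+For reference, the small instance above can be solved exactly with a standard LP solver, which gives me a ground truth to compare the RL result against (a Python sketch; my training attempt itself is in MATLAB):
+from scipy.optimize import linprog
+
+c = [1, 2, 3, 4]            # maximize C*x  <=>  minimize -C*x
+A = [[2, 3, 4, 5]]
+b = [10]
+res = linprog(c=[-v for v in c], A_ub=A, b_ub=b, bounds=[(0, 1)] * 4)
+print(res.x, -res.fun)      # optimal x and the maximized objective value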
+I've tried to use the DDPG algorithm to train in MATLAB but the result is not good. Any suggestions for this problem, and is it possible to do so, thanks?
+"
+"['neural-networks', 'classification', 'explainable-ai']"," Title: How can I interpret the way the neural network is producing an output for a given input?Body: I'm using a small neural network (2 hidden layers, 60 neurons apiece) for a rather complex binary classification problem.
+The network works well, but I'd like to know how it is using the inputs to perform the classification. Ultimately, I would like to interpret the trained network in order to learn more about the processes responsible for generating the data.
+Ideally, I would end up with an equation that would allow me to perform the classification without the network and that would have parameters that I could interpret in the context of the system the network is being used on.
+My first thought is to procedurally mask out a growing subset of the ~4000 parameters until there's an appropriate trade-off between performance and simplicity and then maybe use a symbolic logic library to try and simplify further.
+I don't think that's the best plan, so I wonder if there's an existing workflow to interpret a neural network.
+"
+"['deep-learning', 'reference-request', 'hyperparameter-optimization', 'learning-rate', 'layers']"," Title: Has the idea of using different learning rates for different layers been explored in the literature?Body: I wonder whether there are heuristic rules for the optimal selection of learning rates for different layers. I expect that there is no general recipe, but probably there are some choices that may be beneficial.
+The common strategy uses the same learning rate for all layers, say an Adam optimizer with lr=1e-4, and this choice performs well in many situations.
+However, it seems that convergence to the optimal values of the weights in different layers may happen at different speeds. Say, values in the first few layers are close to the optimum after a few epochs, whereas features in deeper layers typically require many more epochs to get close to a good value.
+Are there any rules to choose a smaller (higher) learning rate in the top layers of the network compared with the bottom layers?
+Also, neural networks can have different types of layers - convolutional, dense, recurrent, self-attention. And some of them may converge faster or slower.
+Has this question been studied in the literature?
+Different learning rates for different layers emerge in transfer learning: it is common to tune only the last few layers, and keep the others frozen or let them evolve with a smaller learning rate. The intuition behind this is that the frozen layers extract generic features that are universal across models, and it is desirable not to spoil them during fine-tuning.
+However, my question is about training from scratch.
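+To be concrete about the mechanism I mean, here is a minimal PyTorch sketch of assigning different learning rates to different layers via optimizer parameter groups (the toy model and the particular values are made up):
+import torch
+import torch.nn as nn
+
+model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
+optimizer = torch.optim.Adam([
+    {"params": model[0].parameters(), "lr": 1e-3},   # first (bottom) layer
+    {"params": model[2].parameters(), "lr": 1e-4},   # last (top) layer
+])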
+"
+"['machine-learning', 'python', 'long-short-term-memory']"," Title: The results are not correct when predicting the future for a very long period of time with LSTMBody: I am currently using LSTM to try to predict future data in AirPassengers.csv.
+This is the current code on my Colab (sorry, the comments are in Japanese):
+https://colab.research.google.com/drive/16Ntg3dA5ywZvm35PEeMUjHtKhNdsSgbc?usp=sharing
+I wanted to use this code to make predictions for a much longer period of time with different data in the future, so I changed the prediction period in the last code block from the original code I referenced from 3 years to 20 years as follows:
+#from
+pred_time_length = 12*3
+#to
+pred_time_length = 12*20
+
+When I do this, I get values like this damped oscillation, and I think that the prediction is probably not working well.
+What is the cause of this? Also, what should I change in the code to make it work?
+
+thank you in advance!
+"
+"['graph-theory', 'semi-supervised-learning']"," Title: Why is it difficult to propagate intransitive relations over a graph?Body: In the paper Semi-Supervised Learning by Mixed Label Propagation, they say
+
+One major limitation with most graph-based approaches is that they are
+unable to explore dissimilarity or negative similarity. This is
+because the dissimilar relation is not transitive, and therefore is
+difficult to be propagated.
+
+Why is it so?
+"
+"['training', 'datasets']"," Title: Adding data to training results in loss random peaksBody: I have succesfully trained ssd_mobilenet_v2_keras for object detection, with a dataset of about 3700 images. Now I have more images to add. I tried adding only a few images (150-300) to see what happened, but what I obtain is that the trainig looks good in the first steps, but then there are some really high peaks in the loss function.
At first, I thought the problem was the quality of the pictures, so I removed them and tried to add more or less 300 bigger pictures: nothing changed. Then I tried to add only good pictures (no shadows or lights that may confuse the net, no interference with the object, only images where the objects I want to find are big and centered), but nothing.
All the things I have tried leads to the same results:
As you can see, The training looks good at the beginning, but then there are those extremely high peaks that seems to happen at random steps (sometimes after 20.000 staps, sometimes after 2.000).
I tried to train both with and without some data augmentations (random contrast, brightness and saturtion adjust, random rgb-to-grayscale, random horizontal flip, ...) but the results are more or less the same (with data augmentations it's a little better, but still far from good).
Any suggestions on why this happens and how to fix?
+
+EDIT: unfortunately, I didn't take a screenshot at the end of the successful training; I only have this one taken after 6,000 steps (the total number of steps is 50,000), but the chart followed this trend and ended with these values:
+- classification loss: 4.16e-3
+- localization loss: 1.11e-3
+- total loss: 0.077
+"
+"['optimization', 'gradient', 'calculus', 'dimensionality']"," Title: How many directions of gradients exist for a function in higher dimensional space?Body: Gradients are used in optimization algorithms. Based on the values of gradients, we generally update the weights of a neural network.
+It is known that gradients have a direction, and the direction opposite to the gradient should be followed for the weight update. For any function in two dimensions (one input and one output), there are only two possible directions for the gradient: left or right.
+Is the number of gradient directions infinite in higher dimensions ($\ge 3$)? Or is the number of possible directions $2n$, where $n$ is the number of input variables?
+"
+"['reinforcement-learning', 'probability-distribution', 'proximal-policy-optimization']"," Title: How to calculate policy probability ratio in multiple action spaceBody: I try to solve a navigation problem with PPO; my actions space have three-part:
+
+- robot linear velocity, which is in the [-3, 3] range (obtained from a tanh activation function)
+- robot angular velocity, which is in the [-pi/6, pi/6] range (obtained from a tanh activation function)
+- robot step-time duration, which is chosen from {0.2, 0.5, 0.8} (obtained from a softmax activation function)
+
+The problem that I face is how to calculate the probability ratio from these separate distributions.
+Mean or sum? Or is there another way to calculate the log_prob from different distributions? Something like the log_prob of a multivariate distribution!
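+To show what I currently have in mind, here is a minimal PyTorch sketch that sums the log-probs of the three independent heads (Normal, Normal, Categorical); I am not sure this is the right way to combine them, which is exactly my question:
+import torch
+from torch.distributions import Normal, Categorical
+
+lin = Normal(loc=torch.tensor(0.5), scale=torch.tensor(0.2))     # linear velocity head
+ang = Normal(loc=torch.tensor(0.0), scale=torch.tensor(0.1))     # angular velocity head
+dur = Categorical(probs=torch.tensor([0.2, 0.5, 0.3]))           # step-time duration head
+
+a_lin, a_ang, a_dur = lin.sample(), ang.sample(), dur.sample()   # one sampled action
+log_prob = lin.log_prob(a_lin) + ang.log_prob(a_ang) + dur.log_prob(a_dur)
+print(log_prob)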
+"
+"['proofs', 'off-policy-methods', 'sutton-barto', 'importance-sampling', 'model-free-methods']"," Title: How to prove importance sampling ratio is uncorrelated with action-value (or state-value) estimate?Body: In Sutton & Barto (2nd edition), the following is mentioned on page 150 (p. 172 of the pdf), section 7.4:
+
+the importance sampling ratio has expected value one (Section 5.9) and is uncorrelated with the estimate.
+
+How can we prove the importance sampling ratio is uncorrelated with the estimate?
+"
+"['terminology', 'books', 'derivative']"," Title: Why does critical points and stationary points are used interchangeably?Body: Consider the following paragraph from Numerical Computation of the deep learning book.
+
+When $f'(x) = 0$, the derivative provides no information about which
+direction to move. Points where $f'(x)$ = 0 are known as critical
+points, or stationary points. A local minimum is a point where
+$f(x)$ is lower than at all neighboring points, so it is no longer
+possible to decrease $f(x)$ by making infinitesimal steps. A local
+maximum is a point where $f(x)$ is higher than at all neighboring
+points so it is not possible to increase $f(x)$ by making infinitesimal
+steps. Some critical points are neither maxima nor minima. These are
+known as saddle points.
+
+In short, points where $f'(x) =0 $ are called critical points, or stationary points.
+But, according to mathematical terminology, the definitions are as follows:
+#1: Critical point
+
+A function $y=f(x)$ has critical points at all points $x_0$ where
+$f'(x_0)=0$ or $f(x)$ is not differentiable.
+
+#2: Stationary point
+
+A point $x_0$ at which the derivative of a function $f(x)$ vanishes,
+$f'(x_0)=0$. A stationary point may be a minimum, maximum, or
+inflection point.
+
+It can be noticed that the definition given in the deep learning book matches exactly that of a stationary point, since the only premise is $f'(x)=0$. The definition of a critical point is not apt, since a critical point can also be a point where $f'(x)$ does not exist.
+Is there any reason for using the terms critical points and stationary points interchangeably? Is there no need to address the points where $f'(x)$ does not exist?
+"
+"['natural-language-processing', 'reference-request', 'question-answering', 'information-retrieval']"," Title: Is there a survey that describes the most effective approaches for an answer retrieval problem?Body: I have a dataset that contains pairs of a question and an answer. My problem is to train a model that can search for the right answer from the pool of my answers given the newly input question, so this is a kind of answer retrieval problem.
+Can anyone point me to a survey and effective approaches for this problem?
+"
+"['machine-learning', 'comparison', 'efficiency', 'expert-systems', 'decision-support-system']"," Title: A comparison of Expert Systems and Machine Learning approaches in terms of run-time-efficiency and time/space complexityBody: For part of a paper I am writing on Clinical Decision Support Systems (computer-aided medical decision making, e.g. diagnosis, treatment), I am trying to compare Expert Systems with systems based on Machine Learning approaches (Deep Learning, Artificial Neural Networks, etc.).
+Specifically, I am currently trying to make a general comparison (if possible) of expert systems with machine learning systems across dimensions of efficiency and complexity, i.e.
+
+- run-time-efficiency
+- time complexity
+- space complexity
+
+My current line of thinking, after having tried to find literature with limited success, is that, in the case where one is trying to answer questions in a very specific, limited, domain that only requires a few rules (for an expert system), expert systems are relatively "cheap" in terms of these three criteria. However, when a problem/domain becomes more complex, expert systems seem to suffer from the fact that the number of rules needed "explodes", which, I would think, could lead to things such as large search trees or other problems. My feeling from what I have generally read about machine learning approaches is that these adapt better to more complex problems with higher dimensionalities.
+I would like to find some information that either confirms/backs up my general impression, or guides me to some other understanding of this.
+Unfortunately, I can't seem to find any sources that specifically deal with this kind of comparison. I'm not sure if this is because my problem statement is too wide/vague, I am not searching correctly, there just isn't much literature, or my question doesn't make sense.
+Some of the sources I did manage to find are:
+
+Expert systems are still used and important in areas such as robotics
+and monitoring. However, the complexity of advanced rules systems can
+lead to performance issues. ANNs are currently managing to overcome
+such performance issues through scale-out.
+Source: Forbes
+
+Unfortunately, this is the most explicit source I've found. However, it doesn't really provide any details on which this claim could backed up, nor would I consider this a solid source, especially not in an academic setting.
+
+Checking for the logical consistency of a set of interrelated logical rules results in the formulation of a
+satisfiability (SAT) problem [Bezem, 1988]. If one assumes only binary variables, say n of them, then the
+corresponding search space is of size $2^n$. That is, it can become very large quickly. This is an NP-complete problem
+very susceptible to the “dimensionality curse” problem [Hansen and Jaumard, 1990]
+Source: Yanase J, and Triantaphyllou E, 2019, A Systematic Survey of Computer-Aided
+Diagnosis in Medicine: Past and Present Developments, page 7
+
+This mentions "dimensionality curse", but in the context of checking for logical consistency of the rules of an expert system, and not really in the context of run-time-efficiency & complexity.
+I have found numerous other articles comparing expert systems and machine learning approaches, e.g. Ravuri et al., 2019, Learning from the experts:
+From expert systems to machine-learned diagnosis models, but none of them, from what I have seen, compare expert systems and machine learning approaches across the dimensions I am interested in.
+Would anyone be able to provide some input on what would be aspects in comparing expert systems and machine learning approaches in terms of the efficiency and complexity criteria listed above, and/or, be able to point me in the right direction?
+"
+"['computer-vision', 'image-recognition', 'opencv']"," Title: How to classify two very similar images using Deep Learning?Body: I am a newbie in Computer Vision.
+I have a scenario in which I have a stationary camera in a factory. I want to detect whether the technician is working on the machine or not.
+Images are like the following:
+Technician working:
+
+Technician absent:
+
+Technician not working:
+
+I am confused about whether this is an image classification issue or an object detection/pose detection problem.
+As per my knowledge, this should be a classification problem: I should take multiple images of the condition in which the machine is unattended, and of the condition in which the technician is working on the machine.
+I will train the model with different individual technicians on different days with different clothes.
+Now, if I am in the right direction, how many images do I need to have good accuracy?
+I see there are different models on Tensorflow Hub on image classification like EfficientNet, etc.
+Which model/architecture will work for me?
+I am sorry if I sound noobish.
+I can train the model using simple classifier code (like cat vs. dog), but I want my architecture to understand that there is a specific area in the image which should be checked for whether it is occupied or not in order to classify properly.
+OR
+Shall I simply cut out the middle area (where the technician stands) using OpenCV, and then feed that cutout image to some classifier to detect whether there is a human standing there?
+Thanks in advance!
+"
+"['search', 'depth-first-search']"," Title: How do I create the search tree for DFS applied to a grid map?Body: I have been working through some search tree problems and came across this one:
+
+Assume that that the algorithm has a closed list and that nodes are added to the frontier in the following order: Up, Right, Down, Left. For example, if node J is expanded: there is no node up from J so nothing is added to the frontier for up. K is right from J so it is added to the frontier, H is down from J so it is added to the frontier, there is no node left from J, so nothing is added to the frontier.
+a) Assume that the start node is node F and the goal node is node M. Provide the entire search tree if Depth First Search is employed.
+b) Provide the frontier at the time the search terminates
+Because I understand how a depth-first search works with regards to the frontier (it is a LIFO queue), I know that the last node added to the frontier would be the next node you need to expand. Using that knowledge, the frontier would be as follows after each expansion:
+
+- F
+- F I B E
+- E is expanded: F I B H A
+- A is expanded: F I B H
+- H is expanded: F I B J
+- J is expanded: F I B K
+- K is expanded: F I B L
+- L is expanded: F I B M
+
+The solution has been found, as we have reached M.
+I thus seem to have answered part b of the question, but as for how to draw the search tree, I am stumped. Any ideas would be appreciated.
+"
+"['recurrent-neural-networks', 'transformer', 'machine-translation']"," Title: How is Google Translate able to translate texts of arbitrarily large length?Body: Sequence-to-sequence models with attention are known to be limited by a maximum sequence length. So how can we handle sequences of arbitrarily large size? Do we just set a very large maximum sequence length?
+"
+"['optimization', 'gradient-descent']"," Title: Do gradient-based algorithms deal with the flat regions with desired points?Body: I am studying a chapter named Numerical Computation of a deep learning book. Afaik, it does not deal with flat regions with desired points.
+For example, let us consider a function whose local/global minimum or maximum values lie on flat regions. You can take this graph just as an example.
+
+Can gradient-based algorithms work on such curves, whose local/global minima or maxima lie on flat regions?
+"
+"['datasets', 'math', 'data-preprocessing', 'function-approximation']"," Title: Is it true that real world data is highly discontinuous?Body: A function $f$ is said to be continuous at a point $c$ if it satisfies three properties:
+
+- Should be defined at the point $c$
+- Left and right-hand limits at $c$ must be equal i.e., the limit must exist
+- Limit value at point $c$ is equal to the actual value of the function at c
+
+In short: $\lim \limits_{x \rightarrow c} f(x) = f(c)$
+I want to know whether the functions that we want to learn from real-world data (say, the generator in a GAN) for data such as images, audio, video, text corpora, etc., are continuous or highly discontinuous in general. If discontinuous, what might be the reason for the discontinuity? I mean, which among the three properties mentioned is violated in the majority of cases?
+"
+"['reinforcement-learning', 'value-functions', 'policies', 'bellman-equations']"," Title: Why must the value of a state under an optimal policy equal the expected return for the best action from that state?Body: The Sutton and Barto reinforcement learning textbook states that
+
+the value of a state under an optimal policy must equal the expected return for the best action from that state.
+
+That is,
+$$v_*(s) = \max_a q_*(s, a).$$
+I am having trouble gaining intuition for this. Since state values can be written as an expectation of the action values under a given policy, I am not sure I see how
+$$v_*(s) = \sum_a \pi_*(a|s)q_*(s,a) = \max_a q_*(s, a).$$
+I'd appreciate any insights!
+"
+"['deep-learning', 'optimization']"," Title: Which class of functions are quite complicated in deep learning?Body: Deep learning is a field in which we need neural networks that are deep enough to carry on our task. The important fucntions in deep neural networks can be classified in to three classes: activation function, neural network function and loss function.
+Activation functions are a part of neural network function and neural network functions may be a part of loss functions.
+Consider the following paragraphs from Numerical Computation of a deep learning book
+
+Optimization algorithms that use only the gradient, such as gradient
+descent, are called first-order optimization algorithms. Optimization
+algorithms that also use the Hessian matrix, such as Newton’s method,
+are called second-order optimization algorithms (Nocedal and Wright,
+2006).
+The optimization algorithms employed in most contexts in this book are
+applicable to a wide variety of functions but come with almost no
+guarantees. Deep learning algorithms tend to lack guarantees because
+the family of functions used in deep learning is quite complicated. In
+many other fields, the dominant approach to optimization is to design
+optimization algorithms for a limited family of functions.
+
+The last passage talks about the family of functions used in deep learning. Which class of functions, among the three I mentioned, are they referring to?
+"
+"['deep-learning', 'math', 'derivative']"," Title: What does it mean ""having Lipschitz continuous derivatives""?Body: We can enforce some constraints on functions used in deep learning in order to guarantee optimizations. You can find it in Numerical Computation of the deep learning book.
+
+In the context of deep learning, we sometimes gain some guarantees by
+restricting ourselves to functions that are either Lipschitz
+continuous or have Lipschitz continuous derivatives.
+
+They include
+
+- Lipschitz continuous functions
+- Having Lipschitz continuous derivatives
+
+The definition given for Lipschitz continuous function is as follows
+
+A Lipschitz continuous function is a function $f$ whose rate of
+change is bounded by a Lipschitz constant $\mathcal{L}$:
+$$\forall x, \forall y, |f(x)-f(y)| \le \mathcal{L} \|x-y\|_2 $$
+
+Now, what is meant by having Lipschitz continuous derivatives?
+Do they refer to the derivatives of Lipschitz continuous functions? If yes, then why are they mentioned as a separate option?
+"
+"['reinforcement-learning', 'value-functions', 'policy-improvement-theorem']"," Title: What is the intuition behind comparing action values to state values in the policy improvement theorem?Body: Sutton and Barto, in their book (Reinforcement Learning 2nd Edition) begin the discussion of policy improvement by comparing the action value $q_\pi(s, \pi'(s))$ to the state value $v_\pi(s)$.
+What is the intuition behind this comparison?
+It seems more natural to me to compare $q_\pi(s, \pi'(s))$ and $q_\pi(s, \pi(s))$. I understand that for deterministic policies $q_\pi(s, \pi(s))$ is the same as $v_\pi(s)$ so mathematically it makes no difference but perhaps conceptually it does?
+"
+"['deep-learning', 'linear-algebra']"," Title: Do solving system of linear equations required anywhere in contemporarty deep learning?Body: Consider the following from Numerical Computation chapter of Deep Learning book
+
+Machine learning algorithms usually require a high amount of numerical computation. This typically refers to algorithms that solve
+mathematical problems by methods that update estimates of the solution
+via an iterative process, rather than analytically deriving a formula
+to provide a symbolic expression for the correct solution. Common
+operations include optimization (finding the value of an argument that
+minimizes or maximizes a function) and solving systems of linear
+equations. Even just evaluating a mathematical function on a digital
+computer can be difficult when the function involves real numbers, which
+cannot be represented precisely using a finite amount of memory.
+
+The paragraph clearly mentions that solving systems of linear equations is a common operation in machine learning. I just know that solving systems of linear equations is useful in reinforcement learning and in some basic machine learning algorithms, including regression.
+Is solving systems of linear equations useful anywhere in deep learning?
+I think that we use them nowhere, since optimization is generally the only kind of algorithm used in deep learning.
+"
+"['machine-learning', 'computer-vision', 'object-detection', 'metric']"," Title: Role of confidence or classification score in object detection mAP metricsBody: I know that mAP (mean Average Precision) is the common evaluation metric for the object detection tasks. It uses IoU (Intersection over Union) threshold such as mAP@0.5 to evaluate whether the predicted box is TP (True Positive), FP (False Positive), or FN (False Negative).
+But I am confused about the role of the classification score in this metric, since positives and negatives are determined by the IoU, not by the classification score. So, what is the role of classification scores in mAP evaluation?
+Let's describe it by example, suppose there is a single object in an image with the ground-truth as follows:
+
+- Bounding boxes: [[100, 100, 200, 200]]
+- Class Index: [0]
+
+Then the prediction of the object detection model resulting as follows:
+
+- Bounding boxes: [[100, 100, 200, 200], [100, 100, 200, 200], [100, 100, 200, 200]]
+- Class Indexes: [3, 2, 0]
+- Class Scores: [0.9, 0.75, 0.25]
+
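+To show why I expect each predicted box to be judged a perfect spatial match, here is a small plain-Python sketch of the IoU computation for the boxes in the example above ([x1, y1, x2, y2] format assumed):
+def iou(a, b):
+    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
+    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
+    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
+    area_a = (a[2] - a[0]) * (a[3] - a[1])
+    area_b = (b[2] - b[0]) * (b[3] - b[1])
+    return inter / (area_a + area_b - inter)
+
+gt = [100, 100, 200, 200]
+for pred in [[100, 100, 200, 200]] * 3:
+    print(iou(gt, pred))   # 1.0 for every predicted box, regardless of class index or score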
+When I tried to calculate the mAP using this library: https://pypi.org/project/mapcalc/
+The mAP score is 1.0. So I am confused: from the mAP point of view, is this prediction counted as a correct prediction? What is the role of the classification score in this case? Should we also define a classification score threshold when using mAP?
+"
+"['agi', 'brain', 'spiking-neural-networks']"," Title: Can brain simulation be done using Tensor Processing Units?Body: A potential way to solving AI is via whole brain simulation. Currently we have the algorithm to model a human brain (albeit far from perfectly): https://thenextweb.com/news/theres-an-algorithm-to-simulate-our-brains-too-bad-no-computer-can-run-it
+It is estimated that we might need 100 petaflops to 1 exaflop of computing power to run a brain simulation in real time. Google's Tensor Processing Unit pods, however, have already achieved 1 exaflop of computing power a while back: https://spectrum.ieee.org/heres-how-googles-tpu-v4-ai-chip-stacked-up-in-training-tests
+Since these brain simulations are basically giant spiking neural networks, can they be run on Tensor Processing Units (TPUs), which are specifically designed for neural networks? Since TPU pods can do an exaflop, they might pack enough power to finally run a whole brain simulation?
+"
+['image-recognition']," Title: How to localize and classify objects in videoBody: What methods are used to localize an object in a video and classify that object?
+Example: I have a camera which detects a pickup truck driving into one of three garages (1, 2, 3). I need to know if the truck was loaded or not (classification) and which garage it picked (localization). What would a schematic workflow for this problem look like?
+It is assumed that the camera is mounted in a fixed position.
+"
+"['training', 'ai-design']"," Title: Using Human Confirmation in place of a loss Function for TrainingBody: Has there been any experimentation in designing an AI to prompt a human to judge the accuracy of it's outcomes? instead of using a loss function, a human can judge the accuracy of it's estimation using some kind of metric, where it can then use that too update it's weights.
+I was looking for some feedback on whether this is a plausible idea.
+I was thinking that for domains that lack sufficient training data to solve problems this could be a possible solution.
+Of course, it isn't feasible to judge every iteration of a training loop. So maybe feedback could be provided for the average of a number of estimations. Maybe every 100 estimations you could provide feedback.
+It may not be a great training method because of the sparsity of feedback, but it could provide a place to start if you don't have a lot of data to throw at your problem initially.
+"
+"['reinforcement-learning', 'proofs', 'sutton-barto', 'policy-improvement-theorem']"," Title: Policies for which the policy improvement theorem holdsBody: According to Reinforcement Learning (2nd Edition) by Sutton and Barto, the policy improvement theorem states that for any pair of deterministic policies $\pi'$ and $\pi$, if $q_\pi(s,\pi'(s)) \geq v_\pi(s)$ $\forall s \in \mathcal{S}$, then $v_{\pi'}(s) \geq v_\pi(s)$ $\forall s \in \mathcal{S}$.
+The proof of this theorem seems to rely on $\pi$ and $\pi'$ being identical for all states except $s$. To my best understanding, this is what allows us to write the expectation $\mathbb{E}[R_{t+1} + \gamma v_\pi(S_{t+1})|S_t = s, A_t = \pi'(s)]$ as $\mathbb{E}_{\pi'}[R_{t+1} + \gamma v_\pi(S_{t+1})|S_t = s]$ in line 2, which is central to the proof (reproduced from the book below).
+\begin{aligned}
+v_{\pi}(s) & \leq q_{\pi}\left(s, \pi^{\prime}(s)\right) \\
+&=\mathbb{E}\left[R_{t+1}+\gamma v_{\pi}\left(S_{t+1}\right) \mid S_{t}=s, A_{t}=\pi^{\prime}(s)\right] \\
+&=\mathbb{E}_{\pi^{\prime}}\left[R_{t+1}+\gamma v_{\pi}\left(S_{t+1}\right) \mid S_{t}=s\right] \\
+& \leq \mathbb{E}_{\pi^{\prime}}\left[R_{t+1}+\gamma q_{\pi}\left(S_{t+1}, \pi^{\prime}\left(S_{t+1}\right)\right) \mid S_{t}=s\right] \\
+&=\mathbb{E}_{\pi^{\prime}}\left[R_{t+1}+\gamma \mathbb{E}_{\pi^{\prime}}\left[R_{t+2}+\gamma v_{\pi}\left(S_{t+2}\right) \mid S_{t+1}, A_{t+1}=\pi^{\prime}\left(S_{t+1}\right)\right] \mid S_{t}=s\right] \\
+&=\mathbb{E}_{\pi^{\prime}}\left[R_{t+1}+\gamma R_{t+2}+\gamma^{2} v_{\pi}\left(S_{t+2}\right) \mid S_{t}=s\right] \\
+& \leq \mathbb{E}_{\pi^{\prime}}\left[R_{t+1}+\gamma R_{t+2}+\gamma^{2} R_{t+3}+\gamma^{3} v_{\pi}\left(S_{t+3}\right) \mid S_{t}=s\right] \\
+& \vdots \\
+& \leq \mathbb{E}_{\pi^{\prime}}\left[R_{t+1}+\gamma R_{t+2}+\gamma^{2} R_{t+3}+\gamma^{3} R_{t+4}+\cdots \mid S_{t}=s\right] \\
+&=v_{\pi^{\prime}}(s)
+\end{aligned}
+Does this mean that the proof is merely proving the special case of the policy improvement theorem for when the policies are identical except at $s$? I am having trouble seeing why the proof holds for the more general case of the two policies being potentially different for all states. In that case, line 2 would not hold and the theorem would not hold for all states as it claims to do.
+"
+"['reinforcement-learning', 'q-learning', 'deep-rl']"," Title: When to activate batch normalization and dropout in deep Q-learning?Body: In the vanilla version of deep Q-learning, there are three places where the Q-network is queried:
+
+- When exploring.
+
+- When training:
+a. When calculating the optimal value of the state reached by an action (so as to compute a target discounted reward).
+b. When calculating the optimal Q-value for a given state, during training (so as to nudge the network weights and better reproduce the observed reward).
+
+
+Now, during which steps should batch normalization and dropout be activated?
+I couldn't find anything through a Google search.
+Here are my guesses, for each step:
+
+- When exploring: activate batch normalization and dropout: this lets the normalization statistics be learned, and gives uncertain Q-values a chance to be selected even if they are relatively low (because dropout can result in a Q-value prediction higher than its average).
+
+- When training:
+a. Do not activate batch normalization and dropout for calculating the optimal state value of the state reached by an action, because we want the Bellman equation to converge faster and therefore prefer stable (optimal state value) targets.
+b. Activate batch normalization and dropout when calculating the Q-value of a chosen action, as this is the whole idea of dropout (we use it during training).
+
+
+What is the common wisdom on this?
+"
+"['reference-request', 'math', 'books', 'calculus']"," Title: What are the Calculus books recommended for beginner to advanced researchers in artificial intelligence?Body: Calculus is a branch of mathematics that primarily deals with the rate of change of outputs of a function w.r.t the inputs.
+It contains several concepts including limits, first-order derivatives, higher-order derivatives, chain rule, derivatives of special and standard functions, definite integrals, indefinite integrals, derivative tests, gradients, higher-order gradients, and so on...
+Calculus has been heavily used in optimization and maybe in several other aspects of artificial intelligence.
+What are the Calculus textbook(s) recommended that cover all the concepts required for a researcher in artificial intelligence?
+"
+"['math', 'derivative']"," Title: How to understand slope of a (non-convex) function at a point in domain?Body: Consider the following paragraph from Numerical Computation of deep learning book that says derivative as a slope of the function curve at a point
+
+Suppose we have a function $y= f(x)$, where both $x$ and $y$ are real
+numbers. The derivative of this function is denoted as $f'(x)$ or as
+$\dfrac{dy}{dx}$ . The derivative $f'(x)$ gives the slope of $f(x)$
+at the point $x$. In other words, it specifies how to scale a small
+change in the input to obtain the corresponding change in the output:
+$f(x+ \epsilon) \approx f(x)+\epsilon f'(x)$.
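+
+To make the quoted approximation concrete (a small worked example with my own numbers, not from the book): for $f(x) = x^2$ we have $f'(x) = 2x$, so at $x = 1$ with $\epsilon = 0.1$ the approximation gives $f(1.1) \approx f(1) + 0.1 \cdot f'(1) = 1 + 0.2 = 1.2$, which is close to the exact value $f(1.1) = 1.21$.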
+
+The slope of a function $f(x)$ at a point $a$ is generally defined as the $\tan$ of the angle made by the tangent line to the curve of $f(x)$ at the point $a$ with the positive x-axis, measured anti-clockwise. That is, if $\theta$ is the angle made by the tangent to the curve of $f(x)$ at the point $(a, f(a))$ with the positive x-axis in the anti-clockwise direction, then the slope of $f(x)$ at the point $a$ is $\tan \theta$.
+In theory, a tangent line should touch the curve of $f(x)$ at a single point only. Most textbooks draw nice convex curves and then show the slope as $\tan \theta$. But I think that, for many functions, it is not possible to draw a line at a point that touches the curve at that single point only, so it is unclear whether such a line is a tangent line or some other transversal.
+How to understand slope as $\tan \theta$ in such cases? Where am I going wrong?
+"
+"['convolutional-neural-networks', 'image-generation', 'statistics', 'vgg', 'style-transfer']"," Title: How does a VGG-based Style-Loss incorporate color information?Body: I've recently been reading a lot about style transfer, its applications and implications. I understand what the Gram matrix is and does. I can program it. But one thing that has been boggling me is: how does the VGG style loss incorporate color information into the style?
+In the paper "Texture Synthesis by CNNs", Gatys et al. show that minimizing the MSE between the Gram matrices of a random white noise image and a "target texture" yields new instances of that texture, with stochastic variation. I understand that this must work, as the Gram matrix measures the correlation between features detected by the VGG activations across channels, without spatial relation. So if we optimize the white noise image to have the same Gram matrix, it will exhibit the same statistics, and hence look like an instance of the original texture.
+But how does this work with color? Of course, the VGG could learn something like a mean filter, with all ones, whose output would be the avg. color over that filter kernel. After all, "color" is just another statistic. But then when using that in conjunction with the Gram loss, wouldn't this information be lost, as it's all just correlation and hence "relative" to each other?
+While writing this question, I'm starting to think of it like this: Maybe the feature correlation expresses these color constraints in some form like: "if one part is red, there must be a green part close to it" (for the radish), or "if there is a rounded edge, one side of it must be in shadow (=darker)" in case of the stone texture. This would tie color to the surrounding statistics (e.g., edges, other colors) and is the only reason I can think of why this works at all.
+Can somebody confirm/refute this, and share their thoughts? Happy to discuss!
+
+Image Source: Texture Synthesis by Convolutional Neural Networks, Gatys et al.
+"
+"['intelligent-agent', 'performance', 'goal-based-agents', 'utility-based-agents', 'rational-agents']"," Title: Are goal-reaching and optimizing the utility function special cases of performance measure?Body: In AIMA, performance measure is defined as something evaluating the behavior of the agent in an environment.
+Rational agents are defined as agents acting so as to maximize the expected value of the performance measure, given the percept sequence they have seen so far.
+Goal-based agents are those acting to achieve their goals. Utility-based agents are those trying to maximize their own expected "happiness".
+Now, can we say these two design approaches induce performance measures?
+What I suggest is the following. In goal-based agent design, we want to find a point satisfying some conditions, so it's an optimization problem with a zero objective function; either this function is the performance measure, or the performance measure is optimized if and only if we find a solution to this optimization problem with a zero objective function. In utility-based agent design, we have an objective function (as a performance measure) that we want to optimize, the agent has its own utility function that it wants to optimize, and this utility function is optimized if and only if our objective function is optimized.
+"
+"['transformer', 'attention']"," Title: Why does a transformer not use an activation function following the multi-head attention layer?Body: I was hoping someone could explain to me why in the transformer model from the "Attention is all you need" paper there is no activation applied after both the multihead attention layer and to the residual connections. It seems to me that there are multiple linear layers in a row, and I have always been under the impression that you should have an activation between linear layers.
+For instance when I look at the different flavors of resnet they always apply some sort of non linearity following a linear layer. For instance a residual block might look something like...
+Input -> Conv -> BN -> Relu -> Conv -> (+ Input) -> BN -> Relu
+or in the case of pre-activation...
+Input -> BN -> Relu -> Conv -> BN -> Relu -> Conv -> (+ Input)
+In all the resnet flavors I have seen, they never allow two linear layers to be connected without a relu in-between.
+However, in the transformer...
+Input -> Multihead-Attn -> Add/Norm -> Feed Forward(Dense Layer -> Relu -> Dense Layer) -> Add/Norm
+In the multihead attention layer it performs the attention mechanism and then applies a fully connected layer to project back to the dimension of its input. However, there is no non-linearity between that and the feed-forward network (except perhaps the softmax used inside the attention). A model like this would make more sense to me...
+Input -> Multihead-Attn -> Add/Norm -> Relu -> Feed Forward(Dense Layer -> Relu -> Dense Layer) -> Add/Norm -> Relu
+or something like the pre-activated resnet...
+Input -> Relu -> Multihead-Attn -> Add/Norm -> Input2 -> Relu -> Feed Forward(Dense Layer -> Relu -> Dense Layer) -> Add/Norm(Input2)
+Can anyone explain why the transformer is the way it is?
+I have asked a similar question when I was looking at the architecture of wavenet on another forum but I never really got a clear answer. In that case it did not make sense to me again why there was no activation applied to the residual connections.
+(https://www.reddit.com/r/MachineLearning/comments/njbjfb/d_is_there_a_point_to_having_layers_with_just_a/)
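+For reference, here is a minimal PyTorch-style sketch of the encoder-layer ordering I am describing (my own reading of the paper, not official code); note there is no non-linearity between the attention output projection and the feed-forward block:
+import torch.nn as nn
+
+class EncoderLayer(nn.Module):
+    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
+        super().__init__()
+        self.attn = nn.MultiheadAttention(d_model, n_heads)  # ends in a linear output projection
+        self.norm1 = nn.LayerNorm(d_model)
+        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
+        self.norm2 = nn.LayerNorm(d_model)
+
+    def forward(self, x):
+        # Multihead-Attn -> Add/Norm (no ReLU here)
+        x = self.norm1(x + self.attn(x, x, x)[0])
+        # Feed Forward (Dense -> ReLU -> Dense) -> Add/Norm
+        x = self.norm2(x + self.ff(x))
+        return x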
+"
+"['machine-learning', 'comparison', 'unsupervised-learning', 'learning-algorithms', 'k-means']"," Title: What is the borderline between unsupervised learning and regular algorithms?Body: Unsupervised learning using neural networks is clearly machine learning since it is utilising neural nets.
+However, some algorithms, k-means clustering for example, are considered unsupervised learning, while they look like just regular (non-ML) algorithms.
+What should be the borderline (criteria) to differentiate between unsupervised learning and a non-ML algorithm?
+"
+"['machine-learning', 'learning-algorithms']"," Title: Best algorithms/approaches for data sets of binary (1/0) featuresBody: I am working with a dataset with about 400 features, all binary (1 or 0). What approach would you recommend? Data set is about 500k records.
+"
+['markov-decision-process']," Title: Markov Decision Processes with variable epoch lengthsBody: I am working on modeling a transportation problem as an MDP. Multiple trucks move material from one node to various other nodes in a network. However, the time it takes a truck to travel between any 2 nodes is different based on distance, and decisions are made when a truck arrives at a node. There lies the problem. Is it possible to have an MDP where the length of time between decision epochs is not uniform?
+The most similar MDP formulation I could find was the Semi-Markov Decision Process, but that uses random-length epochs.
+"
+"['classification', 'image-recognition', 'data-labelling', 'labels']"," Title: Is soft labeling the same thing as label smoothing?Body: I have some data with soft labels and I am trying to figure out the best approach to solve the problem with Machine Learning (since regular classification is of the table, i.e. hard labels). However, whenever I look up "soft label" materials, I keep getting pointed to label smoothing. Is this the main/only technique to deal with soft labels?
+"
+"['intelligent-agent', 'norvig-russell', 'randomness', 'simple-reflex-agents']"," Title: How does randomization avoid entering infinite loops in the vacuum cleaner problem?Body: Suppose we have a vacuum cleaner operating in a $1 \times 2$ rectangle consisting of locations $A$ and $B$. The cleaner's actions are Suck
, Left
, and Right
and it can't go out of the rectangle and the squares are either empty or dirty. I know this is an amateur question but how does randomization (for instance flipping a fair coin) avoid entering the infinite loop? Aren't we entering such a loop If the result of the toss is heads in odd tosses and tails in even tosses?
+This is the text from the book "Artificial Intelligence: A Modern Approach" by Russell and Norvig
+
+We can see a similar problem arising in the vacuum world. Suppose that a simple reflex vacuum agent is deprived of its location sensor and has only a dirt sensor. Such an agent has just two possible percepts: [Dirty] and [Clean]. It can Suck in response to [Dirty]; what should it do in response to [Clean]? Moving Left fails (forever) if it happens to start in square
+A, and moving Right fails (forever) if it happens to start in square B. Infinite loops are often unavoidable for simple reflex agents operating in partially observable environments. Escape from infinite loops is possible if the agent can randomize its actions. For example, if the vacuum agent perceives [Clean], it might flip a coin to choose between Right and Left. It is easy to show that the agent will reach the other square in an average of two steps. Then, if that square is dirty, the agent will clean it and the task will be complete. Hence, a randomized simple reflex agent might outperform a deterministic simple reflex agent.
+
+And this is the agent program from the same source:
+function REFLEX-VACUUM-AGENT([location,status]) returns an action
+ if status = Dirty then return Suck
+ else if location = A then return Right
+ else if location = B then return Left
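+To convince myself about the book's "average of two steps" claim, I also ran a tiny simulation (my own sketch, not from the book):
+import random
+
+def average_steps_to_other_square(start='A', trials=100_000):
+    other = 'B' if start == 'A' else 'A'
+    goes_to_other = 'Right' if start == 'A' else 'Left'   # only one action leaves the start square
+    total = 0
+    for _ in range(trials):
+        loc, steps = start, 0
+        while loc == start:                               # the agent keeps flipping a coin on [Clean]
+            if random.choice(['Left', 'Right']) == goes_to_other:
+                loc = other
+            steps += 1
+        total += steps
+    return total / trials
+
+print(average_steps_to_other_square())  # roughly 2.0 on average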
+
+"
+"['machine-translation', 'metric', 'bleu']"," Title: Does it make sense to use BLEU or ROUGE for any machine translation task?Body: Many machine translation metrics such as BLEU or ROUGE are used to evaluate sequence to sequence models where, usually, the sequences are pieces of natural language.
+Is it possible to use these metrics when the dataset is not constituted of natural language sequences? For instance, if the sequences are source code (in some programming language), does it still make sense to use BLEU or ROUGE? How "good" are these metrics in general?
+"
+"['deep-rl', 'dqn']"," Title: Implementing Multiple NNs in one DQN model?Body: I'm trying to build a DQN Agent to take a set of 10 best actions simultaneously (integer values from 1 to 100) as outputs per episode. The input is a float. The goal is to find the optimal combination of (10) actions per episode.
+Currently, the setup has a single NN output the 10 actions with the highest Q-values for each episode. But in the memory replay process, each individual set (of 10 fixed actions obtained from the exploration phase) is being treated as a single action, because the target network also takes the 10-action list output by the main NN. Hence I can see the agent repeatedly trying a certain set (with a fixed 10 actions) in the replay/retrain part, whereas our goal is to find the optimal combination of 10 actions per episode, not the optimal set of fixed combinations. So, in essence, I would like the agent to pick out and mix up the actions from the sets with higher Q-values (known from the exploration phase) to form new optimal "sets" in the replay process.
+I was thinking maybe instead of using a single NN with 10 outputs I could do 10 NNs with single outputs for each episode so that each action is treated separately. And I suppose I will have 10 q-networks and target networks as well, then I could combine the results by the end of each episodes. But, I am not sure if that is necessarily the best way to fix the problem of having repetitive sets of fixed action in the replay process.
+Alternatively, I think the problem could be treated as a multi-armed bandit problem, except each arm here has "sub-arms" too so to speak, but that could require some changes to the custom environment I am working with and I don't want to touch that unless necessary.
+Maybe there is a clever manipulation within the retrain process given my current setup that I am not seeing. Here is a snippet of the code for some more clarity.
+# imports assumed by this snippet
+import random
+from collections import deque
+import numpy as np
+from tensorflow.keras import Sequential
+from tensorflow.keras.layers import Dense, InputLayer
+
+class DQNAgent():
+
+ def __init__(self,optimizer):
+ # Initialize atributes
+ self._state_size = 1
+ self._action_size = 76
+ self._optimizer = optimizer
+
+ self.experience_replay = deque(maxlen=2000)
+
+ # # Initialize discount and exploration rate
+ # self.gamma = 0.6
+ # self.epsilon = 0.5
+ self.gamma = 0.95
+ self.epsilon = 1.0
+ self.epsilon_min = 0.01
+ self.epsilon_decay = 0.95
+ self.learning_rate = 0.01
+
+ # Build networks
+ self.q_network = self._build_compile_model()
+ self.target_network = self._build_compile_model()
+
+
+ def store(self, state, action, reward, next_state, terminated):
+ self.experience_replay.append((state, action, reward, next_state, terminated))
+
+ def _build_compile_model(self):
+
+ model = Sequential()
+ model.add(InputLayer(input_shape=(self._state_size,)))
+ model.add(Dense(100, activation='relu'))
+ model.add(Dense(100, activation='relu'))
+ model.add(Dense(self._action_size, activation='linear'))
+ model.compile(loss='mse', optimizer=self._optimizer)
+ return model
+
+ def alighn_target_model(self):
+ self.target_network.set_weights(self.q_network.get_weights())
+
+ def retrain(self, batch_size):
+        if len(self.experience_replay) < batch_size:
+            return
+        minibatch = random.sample(self.experience_replay, batch_size)
+
+ for state, action, reward, next_state, terminated in minibatch:
+
+ target = self.q_network.predict(np.reshape(np.array(state), (-1,1)))
+ print('target size :', np.shape(target))
+
+
+ if terminated:
+ target[0][action] = reward
+ else:
+ t = self.target_network.predict(np.reshape(np.array(next_state), (-1,1)))
+ target[0][action] = reward + self.gamma * np.amax(t)
+
+ self.q_network.fit(np.reshape(np.array(state), (-1,1)), target, epochs=1, verbose=0)
+
+
+ def act(self,state):
+ self.epsilon *= self.epsilon_decay
+ self.epsilon = max(self.epsilon_min, self.epsilon)
+ action_space = [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
+ 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34,
+ 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51,
+ 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68,
+ 69, 70, 71, 72, 73, 74, 75, 76] #all 76 available nodes
+ if np.random.rand() <= self.epsilon:
+ return np.array(random.sample(action_space,10))-1 #-1 to match control's index
+
+ q_values = self.q_network.predict(np.reshape(np.array(state), (-1,1)))
+ print("q_vals shape",np.shape(q_values))
+ print('q_vals type',type(q_values))
+
+        top_actions_idx = q_values[0].argsort()[-10:][::-1]
+        return top_actions_idx  # indices of the 10 highest-Q actions (assumed intended return value)
+
+"
+"['datasets', 'hyper-parameters']"," Title: Why data required for hyperparameter tuning is considered as an additional data?Body: Any parametric model may have parameters as well as hyperparameters. Learning algorithm deals with parameters and hyperparameters should be dealt outside learning algorithm. Consider the following paragraph from Chapter 5: Machine Learning Basics from the book titled Deep Learning (by Aaron Courville et al.)
+
+Most machine learning algorithms have settings called hyperparameters,
+which must be determined outside the learning algorithm itself; we
+discuss how to set these using additional data.
+
+My doubt is about the usage of the word 'additional' in the paragraph. As far as I know, a small part of the dataset under consideration is used for validation, and hence for determining the hyperparameters; it is called the validation data. Like the training and test data, it is still a part of the same dataset (see section 5.3 of the book for more details).
+If that is the case, what is the need for the word 'additional'? Is it true that the data for setting hyperparameters is taken from outside the underlying dataset?
+"
+"['machine-learning', 'definitions', 'statistics', 'probabilistic-machine-learning']"," Title: What is the definition of ""confidence interval"" around a (complicated) function?Body: Consider the following excerpt from Chapter 5: Machine Learning Basics from the book titled Deep Learning (by Aaron Courville et al.)
+
+Machine learning is essentially a form of applied statistics with increased emphasis on the use of computers to statistically estimate complicated functions and a decreased emphasis on proving confidence intervals around these functions;
+
+This excerpt says that machine learning focuses on estimating complicated functions, but not on proving confidence intervals around those functions. What is meant by, or what is the definition of, a "confidence interval around a (complicated) function" as mentioned here?
+"
+"['deep-learning', 'autoencoders', 'variational-autoencoder']"," Title: Why is the prior on the latent variable standard gaussian in VAE?Body: While training a standard VAE, we assume that the prior on the latent variable Z is the standard gaussian and we use KL divergence to push the posterior as close as possible to the standard gaussian. Why not assume any other gaussian as the prior? What are the intuitive reasons for this?
+"
+"['tensorflow', 'variational-autoencoder', 'evidence-lower-bound', 'variational-inference', 'tensorflow-probability']"," Title: Tensorflow Probability Implementation of Automatic Differentiation Variational Inference with MixturesBody: In this paper, the authors suggest using the following loss instead of the traditional ELBO in order to train what basically is a Variational Autoencoder with a Gaussian Mixture Model instead of a single, normal distribution:
+$$
+\mathcal{L}_{SIWAE}^T(\phi)=\mathbb{E}_{\{z_{kt}\sim q_{k,\phi}(z|x)\}_{k=1,t=1}^{K,T}}\left[\log\frac{1}{T}\sum_{t=1}^T\sum_{k=1}^K\alpha_{k,\phi}(x)\frac{p(x|z_{k,t})r(z_{kt})}{q_\phi(z_{kt}|x)}\right]
+$$
+They also provide the following code which is supposed to be a tensorflow probability implementation:
+def siwae(prior, likelihood, posterior, x, T):
+ q = posterior(x)
+ z = q.components_dist.sample(T)
+ z = tf.transpose (z, perm=[2, 0, 1, 3])
+ loss_n = tf.math.reduce_logsumexp(
+ (−tf.math.log(T) + tf.math.log_softmax(mixture_dist.logits)[:, None, :]
+ + prior.log_prior(z) + likelihood(z).log_prob(x) − q.log_prob(z)), axis=[0, 1])
+ return tf.math.reduce_mean(loss_n, axis=0)
+
+However, it seems like this doesn't work at all, so, as someone with nearly no TensorFlow knowledge, I came up with the following:
+def siwae(prior, likelihood, posterior, x, T):
+ q = posterior(x) # distribution over variables of shape (batch_size, 2)
+ z = q.components_distribution.sample(T)
+ z = tf.transpose(z, perm=[2, 0, 1, 3]) # shape (K, T, batch_size, encoded_size)
+ l1 = -tf.math.log(float(T)) # shape: (), log (1/T)
+ l2 = tf.math.log_softmax(tf.transpose(q.mixture_distribution.logits))[:, None , :] # shape (K, 1, batch_size), alpha
+ l3 = prior.log_prob(z) # shape (K, T, batch_size), r(z)
+ l4 = likelihood(tf.reshape(z, (K*T*x.shape[0], encoded_size)))
+ l4 = l4.log_prob(tf.repeat(x, repeats=K*T, axis=0)) # shape (K*T*batch_size, )
+ l4 = tf.reshape(l4, (K, T, x.shape[0])) # shape (K, T, batch_size), p(x|z)
+ l5 = -q.log_prob(z) # shape (K, T, batch_size), q(z|x)
+ loss_n = tf.math.reduce_logsumexp(l1 + l2 + l3 + l4 + l5, axis=[0, 1])
+ return tf.math.reduce_mean(loss_n, axis=0)
+
+There are no errors when I try to use this as
+siwae(prior, decoder, encoder, x_test[:100, ...], T)
+
+but after a few training steps I get only NaNs. I really don't know if this is due to a wrong implementation or wrong usage of the loss - especially as I don't have much experience with TensorFlow. So any help would be greatly appreciated.
+For a full, minimal example I created this colab.
+"
+"['comparison', 'books', 'norvig-russell']"," Title: What is the difference between the US and global edition of the AIMA book by Russell and Norvig?Body: The book Artificial Intelligence: A Modern Approach by Russell and Norvig has two editions: global and the US. It looks like these two are generally the same, but have some differences in the order of the chapters and in the context, is this correct?
+"
+"['face-recognition', 'face-detection']"," Title: Can a face recognition system be trained using only computer generated hyper realistic faces?Body: In order to train a face recognition system you need to have access to a large database with thousands of photos containing different faces. Companies like facebook and amazon have these databases but most average people do not.
+If you don't have access to a sufficiently large dataset with faces, could you use computer-generated random faces instead? I'm asking this because computers are becoming better and better at rendering hyper-realistic faces. An example is the meetmike digital human showcase video. Another example is the unreal engine project spotlight video. Lastly, there are also websites like https://thispersondoesnotexist.com/ that can generate random faces.
+What if you generated a couple of photos of the same computer-generated face and made sure that each photo shows the face in a different setting or from a different angle? Could you then use such photos to train a facial recognition system that can accurately recognize real people?
+"
+"['neural-networks', 'machine-learning', 'residual-networks']"," Title: Residual Blocks - why do they work?Body: I've learnt that idea that the residual block was invented to solve the vanishing gradient problem due to the deep layer to layer multiplication.
+I understand that for example if I have 10 layers, and I add another 5 layers, that the output of the 10th layer will 'skip' the 5 layers. Although, the output of the 10th layer will also pass through the 5 layers as well. Just before the 15th layer Relu, the output from the 10th layer is element-wise summed with the 15th layer, just prior to the final Relu.
+I have some confusion with this.
+
+- Identity mapping/function. I keep reading that it creates an identity function or that it learns an identity function. What exactly is this? Isn't it just $F(x)$ = the 5 added layers, with $x$ = the output of the 10th layer, and thus the result is just $F(x) + x$?
+
+- By summing the output of the 10th layer to the 15th layer, will this not affect what was learnt in the 5 layers? I.e. from 11th -15th layer.
+
+- I believe it also helps with backpropagation, so that it doesn't have to update all the weights layer by layer and can skip back to shallow layers. Therefore, are the weights inside the residual block, i.e. layers 11-15, not updated? If not, then what is the point of the 11th-15th layers if they are not designed to "do anything"?
+
+
+"
+"['reinforcement-learning', 'q-learning', 'markov-decision-process', 'environment']"," Title: Is it really hard to learn in a stochastic environment?Body: I understand that a stochastic environment is one that does not always lead you to the desired state by giving a particular action $a$ (But the probability to change to a not desire state is fixed, right?).
+For example, the frozen lake environment is a stochastic environment. Sometimes you want to move in one direction and the agent slips and moves in another direction. Unlike an environment with multiple agents that the probability of the actions of the other agents is changing because they keep learning (a non-stationary environment).
+Why is it difficult to learn in a stochastic environment, if, for example, Q-learning can solve the frozen lake environment? In what cases would it be difficult to learn in a stochastic environment?
+I have found some articles that address that issue, but I don't understand why it would be difficult if Q-learning can solve it (for discrete states/actions).
+"
+"['machine-learning', 'comparison', 'features', 'random-variable']"," Title: Can I always interpret features as random variables in machine learning safely?Body: Consider the following statements from Chapter 5: Machine Learning Basics from the book titled Deep Learning (by Aaron Courville et al.)
+
+Machine learning tasks are usually described in terms of how the
+machine learning system should process an example. An example is a
+collection of features that have been quantitatively measured from
+some object or event that we want the machine learning system to
+process. We typically represent an example as a vector $\mathbf{x} \in
+ \mathbb{R}^n$ where each entry $x_i$ of the vector is another feature. For example,
+the features of an image are usually the values of the pixels in the image.
+
+Here, an example is described as a collection of features, which are real numbers. In probability theory, a random variable is also a real-valued function.
+Can I always interpret features in machine learning as random variables or are there any exceptions for this interpretation?
+"
+"['deep-learning', 'tensorflow', 'keras', 'transformer', 'time-series']"," Title: Transformer model is very slow and doesn't predict wellBody: I created my first transformer model, after having worked so far with LSTMs. I created it for multivariate time series predictions - I have 10 different meteorological features (temperature, humidity, windspeed, pollution concentration a.o.) and with them I am trying to predict time sequences (24 consecutive values/hours) of air pollution. So my input has the shape X.shape = (75575, 168, 10)
- 75575 time sequences, each sequence contains 168 hourly entries/vectors and each vector contains 10 meteo features. My output has the shape y.shape = (75575, 24)
- 75575 sequences each containing 24 consecutive hourly values of the air pollution concentration.
+I took as a model an example from the official keras site. It is created for classification problems; I only took out the softmax activation and set the number of neurons in the last dense layer to 24, and I hoped it would work. It runs and trains, but it makes worse predictions than the LSTMs I have used on the same problem and, more importantly, it is very slow - 4 min/epoch. Below I attach the model and I would like to know:
+I) Have I done something wrong in the model? can the accuracy or speed be improved? Are there maybe some other parts of the code I need to change for it to work on regression, not classification problems?
+II) Also, can a transformer work at all on multivariate problems of my kind (10 features in, 1 feature out), or do transformers only work on univariate problems? Thanks
+# imports assumed by this snippet
+from tensorflow import keras
+from tensorflow.keras import layers
+
+def build_transformer_model(input_shape, head_size, num_heads, ff_dim, num_transformer_blocks, mlp_units, dropout=0, mlp_dropout=0):
+
+ inputs = keras.Input(shape=input_shape)
+ x = inputs
+ for _ in range(num_transformer_blocks):
+
+ # Normalization and Attention
+ x = layers.LayerNormalization(epsilon=1e-6)(x)
+ x = layers.MultiHeadAttention(
+ key_dim=head_size, num_heads=num_heads, dropout=dropout
+ )(x, x)
+ x = layers.Dropout(dropout)(x)
+ res = x + inputs
+
+ # Feed Forward Part
+ x = layers.LayerNormalization(epsilon=1e-6)(res)
+ x = layers.Conv1D(filters=ff_dim, kernel_size=1, activation="relu")(x)
+ x = layers.Dropout(dropout)(x)
+ x = layers.Conv1D(filters=inputs.shape[-1], kernel_size=1)(x)
+ x = x + res
+
+ x = layers.GlobalAveragePooling1D(data_format="channels_first")(x)
+ for dim in mlp_units:
+ x = layers.Dense(dim, activation="relu")(x)
+ x = layers.Dropout(mlp_dropout)(x)
+ x = layers.Dense(24)(x)
+ return keras.Model(inputs, x)
+
+model_tr = build_transformer_model(input_shape=(window_size, X_train.shape[2]), head_size=256, num_heads=4, ff_dim=4, num_transformer_blocks=4, mlp_units=[128], mlp_dropout=0.4, dropout=0.25)
+model_tr.compile(loss="mse",optimizer='adam')
+m_tr_history = model_tr.fit(x=X_train, y=y_train, validation_split=0.25, batch_size=64, epochs=10, callbacks=[modelsave_cb])
+
+"
+"['natural-language-processing', 'papers', 'feature-extraction', 'bag-of-words', 'n-gram']"," Title: Bag of Tricks: n-grams as additional features?Body: I've been playing with PyTorch's nn.EmbeddingBag
for sentence classification for about a month. I've been doing some feature engineering, playing with different tokenizers, etc. I'm just trying to get the best performance out of this simple model as I can. I'm new to NLP, so I figured I should start small.
+Today, by chance, I stumbled on this paper Bag of Tricks for Efficient Text Classification, which very well may be the inspiration for nn.EmbeddingBag
. Regardless, I read the paper and saw that they increased performance through using "n-grams as additional features to capture some partial information about the local word order"
+So by the wording of this sentence, specifically "additional features", I take it to mean that they made n-grams as part of their vocabulary. For example "abc news" is treated as a single word in the vocabulary, and then appended to the training data that is being embedded like so:
+dataset = TextFromPandas(tweet_df)
+label, sentence, ngrams = dataset[0]
+label, sentence, ngrams
+
+# out:
+
+(1,
+ 'quake our deeds are the reason of this # earthquake may allah forgive us all',
+ ['quake our',
+ 'our deeds',
+ 'deeds are',
+ 'are the',
+ 'the reason',
+ 'reason of',
+ 'of this',
+ 'this #',
+ '# earthquake',
+ 'earthquake may',
+ 'may allah',
+ 'allah forgive',
+ 'forgive us',
+ 'us all'])
+
+I just wanted to check my assumption, because the paper is not very explicit. I already tried to string n-grams together as a new sentence in place of the old, but performance dropped significantly.
+I will continue to experiment, but I was wondering if anyone knows the specific mechanism?
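+For reference, the bigrams shown above can be generated with something as simple as this sketch (my own illustration, not my actual dataset code):
+def make_bigrams(tokens):
+    # join neighbouring tokens into "word1 word2" strings
+    return [' '.join(pair) for pair in zip(tokens, tokens[1:])]
+
+tokens = 'quake our deeds are the reason of this # earthquake may allah forgive us all'.split()
+print(make_bigrams(tokens)[:3])  # ['quake our', 'our deeds', 'deeds are']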
+"
+"['reinforcement-learning', 'control-problem']"," Title: Can future information be included in a control problem with Reinforcement Learning?Body: I have a control problem for a heating device of a building with the goal to minimize the electricity costs for one day under a varying price for electricity in every hour (more details can be seen here:Reinforcement learning applicable to a scheduling problem?). Although the problem is basically a scheduling problem, I want to implement it like a control problem for every time step.
+Now, I have 2 questions:
+
+- Is it possible to somehow consider future values (e.g. of the electricity price) while deciding the control action for every time slot? E.g. when the agent knows that in 2 hours the price will fall significantly, then it should tend to consume electricity in 2 hours to get closer to the optimal solution.
+
+- Related to 1: Is it possible to get the reward just at the end of the day instead of every hour (although the control actions are every hour)? If you get the reward at every hour, this might lead to a greedy behaviour, which often results in bad results.
+
+
+"
+"['neural-networks', 'deep-learning', 'backpropagation', 'deep-neural-networks', 'hidden-layers']"," Title: How does back propagation adjust the hidden layers' weights and biases?Body: I'm new to neural networks and trying to figure out its fundamentals but I cannot fully understand the back propagation algorithm.
+In backpropagation, I understand we want to go backwards from the last neurons and adjust the weights and biases that produced the final neurons' predictions. To calculate the error and the derivative, it needs the last inputs, the predicted output based on the layer's weights, and the actual value (target).
+In the final neuron layer we have all this information. But how do we calculate the inputs of the middle and hidden layers?
+
+Suppose we have the final output (0.73): we calculate the error and the derivatives of W31, W32 and W33, and adjust them to match the final output. Then we go one layer back in our network.
+Now we need the N11, N12, N13 and N14 values and the target values of N21, N22 and N23 to calculate errors and derivatives, but we don't have them
+Should we feed forward the whole network and map all the labels and values of each neuron in memory to be able to access it later? Because it would be very, very memory and resource intensive on large networks.
+"
+"['machine-learning', 'terminology', 'generative-model']"," Title: What is the fundamental difference between the synthesis task and sampling task?Body: Among the list of tasks in machine learning, synthesis and sampling is one of the key task. Consider the following explanation regarding synthesis and sampling task from Chapter 5: Machine Learning Basics from the book titled Deep Learning (by Aaron Courville et al.)
+
+In this type of task, the machine learning algorithm is asked to
+generate new examples that are similar to those in the training data.
+Synthesis and sampling via machine learning can be useful for media
+applications when generating large volumes of content by hand would be
+expensive, boring, or require too much time. For example, video games
+can automatically generate textures for large objects or landscapes,
+rather than requiring an artist to manually label each pixel (Luo et
+al., 2013). In some cases, we want the sampling or synthesis procedure
+to generate a specific kind of output given the input. For example, in
+a speech synthesis task, we provide a written sentence and ask the
+program to emit an audio waveform containing a spoken version of that
+sentence. This is a kind of structured output task, but with the added
+qualification that there is no single correct output for each input,
+and we explicitly desire a large amount of variation in the output, in
+order for the output to seem more natural and realistic.
+
+The explanation does not mention any difference between the two tasks. Apart from the linguistic difference between the words "sampling" and "synthesis", I don't know of any discriminating criteria, qualities, or properties that separate the two tasks in machine learning.
+What is the fundamental difference between sampling task and synthesis task in machine learning?
+"
+['density-estimation']," Title: Is knowing the class of probability density function mandatory for explicit density estimation?Body: In deep learning, models may learn the probability distribution that generated the dataset. Observe the following paragraph from Chapter 5: Machine Learning Basics from the book titled Deep Learning (by Aaron Courville et al.)
+
+Unsupervised learning algorithms experience a dataset containing many
+features, then learn useful properties of the structure of this
+dataset. In the context of deep learning, we usually want to learn the
+entire probability distribution that generated a dataset, whether
+explicitly, as in density estimation, or implicitly, for tasks like
+synthesis or denoising. Some other unsupervised learning algorithms
+perform other roles, like clustering, which consists of dividing the
+dataset into clusters of similar examples.
+
+I read about density estimation in the same chapter, as given below
+
+In the density estimation problem, the machine learning algorithm is
+asked to learn a function $p_{model} : R^n \rightarrow R$, where
+$p_{model}$(x) can be interpreted as a probability density function
+(if $x$ is continuous) or a probability mass function (if $x$ is
+discrete) on the space that the examples were drawn from.
+
+This question is focused on explicit probability density estimation in continuous case i.e., learning density function $p_{model}$ directly.
+Suppose I have a dataset $D$ with $n$ continuous random variables (features) $X_1, X_2, X_3,\cdots, X_n$, and I don't know anything about the probability density functions of the individual random variables. That is, I don't have any information about any $X_i$, such as whether $X_i$ follows a normal distribution or any other distribution. Then, is it possible to learn the density function explicitly? Or do I need to provide some necessary information, such as the class of probability distribution function to be learned?
+I am thinking as follows:
+If I have some information about $X_i$, such as that $X_i$ follows a well-known distribution, then I can learn the parameters of the underlying density function from $D$. So, is it mandatory to know some information about the underlying probability density function?
+"
+"['neural-networks', 'long-short-term-memory', 'datasets', 'text-generation']"," Title: What's the best way to feed stories to a neural network?Body: I'm trying to train a model that would generate stories. I have a dataset of 2000 stories prepared. They are tokenized and one-hot encoded. I can't load them all at once as a one big dataset, because of memory limits.
+What would be the best way to fit my network so that I can reset the states after each story?
+I tried doing it in a nested for loop (for epoch / for story: model.fit), but it's really slow, because it takes 3 seconds to fit a single story but almost 10 to load the next file and set up model.fit again.
+"
+['backpropagation']," Title: Parallelize Backpropagation - How to synchronize the weights of each thread?Body: I implemented a parallel backpropagation algorithm that uses $n$ threads. Now every thread gets $\dfrac{1}{n}$ examples of the training data and updates its instance of the net with it. After every epoch the different threads share their updated weights. For that I simply add the weights of the threads and then divide each weight by the number of threads. The problem now is that the more threads I use the worse the result. For me this means that my way of synchronizing the threads is not as good as it should be.
+Is there a better way to do it?
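+For reference, the averaging step I described is essentially the following sketch (my own illustration with NumPy arrays, not my actual threaded code):
+import numpy as np
+
+def average_weights(per_thread_weights):
+    # per_thread_weights: one entry per thread, each a list of per-layer weight arrays
+    n = len(per_thread_weights)
+    return [sum(layer_copies) / n for layer_copies in zip(*per_thread_weights)]
+
+# two threads, two layers each
+w_thread0 = [np.array([[1.0, 2.0]]), np.array([0.5])]
+w_thread1 = [np.array([[3.0, 4.0]]), np.array([1.5])]
+print(average_weights([w_thread0, w_thread1]))  # [array([[2., 3.]]), array([1.])]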
+"
+['convolutional-neural-networks']," Title: Is it possible to identify which feature maps were generated from a particular image after convolutional operationBody: Let's say I have a video that contains 3 grayscale sequential frames having a combined shape of (3, 24, 24). After inputting these frames together into a CNN, multiple feature maps will be generated from each of the images. Would it be possible for me to separate the temporal aspect by identifying which frame generated which feature maps?
+"
+"['convolutional-neural-networks', 'image-processing', 'image-transformations']"," Title: Why do some techniques use random augmentations during convolution processesBody: While going over PyTorch image augmentations, https://pytorch.org/vision/stable/transforms.html, I see that some augmentations can be applied with a certain probability. What is the purpose of applying stochastic augmentations rather than consistently applying a certain augmentation?
+"
+"['generative-adversarial-networks', 'discriminator']"," Title: Is discriminator a regressor or classifier in implementations?Body: GAN has two components: generator and discriminator.
+The discriminator in the original GAN is a regressor and always gives a value in $[0, 1]$. You can read this in the original paper:
+
+$D(x)$ represents the probability that $x$ came from the data rather
+than $p_g$
+
+Is this true for most of the (advanced or) contemporary GANs? Or does the nature of the discriminator, either as a regressor or as a classifier, entirely depend on the context?
+"
+"['machine-learning', 'natural-language-processing', 'text-classification']"," Title: Get the name of a merchant from recordsBody: I have a bunch of bank transaction records from which I want to extract merchants' names. In a few subsets of these records, the structure of the string is the same within the subset with only the merchant name changing. For example
+subset 1
+
+XXXXX_ID_TIME_STAMP MERCHANT1 CREDIT
+XXXXX_ID_TIME_STAMP MERCHANT2 CREDIT
+
+subset 2
+
+BILL PAYMENT BANK_NAME MERCHANT NAME 3
+BILL PAYMENT BANK_NAME MERCHANT NAME 4
+
+In the above two subsets, the structure of the string is the same; only the merchant names change
+and so on ...
+Using NLP, I want to extract merchant names in such cases. How should I approach this?
+Using regex is not feasible because I'd have to manually go through the complete data, identify all such patterns and create regex strings that'll extract the name. I would also have to do this for every new pattern.
+Is there a way where I can train a model that can identify/extract merchants in such cases?
+"
+"['math', 'gradient', 'matrices']"," Title: Isssue in understanding the derivation regarding mean squared errorBody: The following derivation is taken from Chapter 5: Machine Learning Basics from the book titled Deep Learning (by Aaron Courville et al.)
+I am facing difficulty in understanding the derivation that starts from setting the gradient of the mean squared error to zero, $\nabla_w \text{MSE}_{train} = 0$
+
+$\nabla_w \text{MSE}_{train} = 0$
+$\implies \nabla_w(Xw - y)^T(Xw-y) = 0$
+$\implies \nabla_w(w^T X^T - y^T)(Xw-y) = 0$
+$\implies \nabla_w(w^T X^T X w - w^T X^T y - y^T X w + y^T y) = 0$
+$\implies \nabla_w(w^T X^T X w - 2 w^T X^T y + y^T y) = 0$
+$\implies 2 X^T X w - 2 X^T y = 0$
+$\implies w = (X^T X)^{-1} X^T y$
+
+I have had difficulty in understanding the flow of the following two lines
+
+$\implies \nabla_w(w^T X^T X w - w^T X^T y - y^T X w + y^T y) = 0$
+$\implies \nabla_w(w^T X^T X w - 2 w^T X^T y + y^T y) = 0$
+
+My doubt is that going from the first of these two lines to the second is possible only if (which I feel is untrue)
+$w^T X^T y = y^T X w$
+Note: $X, y$ here refer to the inputs and outputs of the training data.
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'pytorch', 'convolutional-layers']"," Title: How do CNNs handle inputs of different sizes and shapes?Body: I am new to deep learning so feel free to correct me where I am wrong.
+Imagine this scenario where we have a 7 * 7 input. We want to slide a 3 * 3 filter with a stride of 3 and padding of zero over this input. As you know, it is not possible to do this.
+Also, CNNs have a fixed input shape (correct me if I am wrong), or at least the input should be a multiple of the CNN's intended input shape (e.g., 112 * 112, 224 * 224, etc.), although the situations where this works are rare.
+According to this PyTorch page, ResNet (for example) accepts images of any size as long as they are bigger than 224.
+So my question is, how does it handle images of different sizes? Does it dynamically tweak parts of the structure (e.g., kernels, strides, paddings) based on the input? If yes, wouldn't that change the network architecture? Or it changes the input sizes to the intended size automatically?
+Also, this does not answer my question.
+"
+"['neural-networks', 'long-short-term-memory', 'model-request', 'action-recognition']"," Title: What type of neural network do you need if you want to detect an action or dynamic pattern instead of a static pattern?Body: Let's say that you want to detect if a man is running, walking, or dancing instead of just detecting a man still. What type of neural networks will you use for this purpose?
+"
+"['natural-language-processing', 'terminology', 'definitions', 'vector-semantics']"," Title: Is there any difference between the phrases ""text representation"" and ""text feature representation""?Body: Text representation, in simple words, is representing text in sensible numeric form. You can read in detail from the following paragraph
+
+Text representation is one of the fundamental problems in text mining
+and Information Retrieval (IR). It aims to numerically represent the
+unstructured text documents to make them mathematically computable.
+For a given set of text documents $D = \{d_i, i=1, 2,...,n\}$, where
+each $d_i$ stands for a document, the problem of text representation
+is to represent each $d_i$ of $D$ as a point $s_i$ in a numerical space
+$S$, where the distance/similarity between each pair of points in
+space $S$ is well defined.
+
+But I came across the phrase "text feature representation" in research papers. Features, in general, are present in a dataset. In the case of text, however, I think features could be characters, words, documents, or even the complete text (as a single feature?). I am not sure what we call features in the case of text.
+So, I am not sure about what is meant by text feature representation. Is it the same as text representation?
+"
+"['time-series', 'function-approximation']"," Title: Why use sin/cos to give periodicity in time series predictionBody: In this tutorial https://www.tensorflow.org/tutorials/structured_data/time_series#feature_engineering (scroll down a bit to "Time" heading), they take the sin/cos of the time index, and give this as an input so that the model can see the periodicity.
+Why use sin and cos (which map to a circle)? Why not map to a square, or a diamond?
+What about if you just mapped time to 1d instead of 2d, so e.g. 23:59 would be 1 and 00:00 (1 minute later) would be 0. Would that "jump" actually cause problems?
+Any actual research or experiments which look at this issue?
+"
+"['machine-learning', 'image-generation']"," Title: Why labeling facades?Body: In Pix2Pix by Isola et al. they translate images from different pairs of image categories to one another. While most other example applications for the algorithm make sense to me, I'm having difficulties understanding why one would translate facade labels to facade images. As the title says, I already don't see how labeling a facade would help solving any real world problem.
+I skimmed the related work of the paper and found little about what "facade parsing" could be used for, except maybe reconstruction from images. Where are facades reconstructed from facade labels? Can anyone tell me other example applications for facade labels and translating them to images?
+"
+['8-puzzle-problem']," Title: Total number of states reachable from the initial state in the 8-puzzle problemBody: I know it's a simple question, but the book Artificial Intelligence by Russell says that the number of states reachable from any initial state in the 8-puzzle problem is $\frac{9!}{2}$. However, I think it should be $9!$. Note that we can't say that if we rotate the grid horizontally the resulting state is the same, so that wouldn't justify dividing the total number of states by $2$. So why do we divide the total number of (initial) states by $2$? What extra states are we counting?
+"
+"['neural-networks', 'deep-learning']"," Title: How to make an output independent of input feature in neural networks?Body: Is there a way to make a certain output dimension of a neural network independent of a particular feature dimension? For example, I have a function $f_{\theta} : \mathcal{R}^{10} \rightarrow \mathcal{R}^2$, I want to make $f_{\theta}(\mathbf{x})_2$ independent of $\mathbf{x}_6$. How can this condition be imposed on a neural network?
+I am thinking of penalizing the gradient of $f_{\theta}(\mathbf{x})$ w.r.t. $\mathbf{x}_6$ over a considerable range of $\mathbf{x}_6 \in [-1, 1]$. Will this give me a similar effect? If so, how can this be coded in PyTorch or any other deep learning framework?
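+To make the idea concrete, the penalty I have in mind would look something like this PyTorch sketch (my own rough attempt; the helper name and weighting are hypothetical, so I am not sure it is the right way to do it):
+import torch
+
+def independence_penalty(f, x):
+    # penalize the sensitivity of output 2 (index 1) w.r.t. feature 6 (index 5)
+    x = x.clone().requires_grad_(True)
+    y = f(x)[:, 1].sum()
+    grads = torch.autograd.grad(y, x, create_graph=True)[0]  # shape (batch, 10)
+    return (grads[:, 5] ** 2).mean()
+
+# total_loss = task_loss + lambda_penalty * independence_penalty(f_theta, batch_x)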
+"
+"['neural-networks', 'deep-learning']"," Title: Which model is more efficient and why?Body: Suppose, I have two NN models:
+
+- CNN model
+- Sequential NN model
+
+They are solving the same problem. The data points have the same number of features.
+In the case of #1, we used 0.6 million data points, 35k epochs, and the model achieved 80% accuracy in the training.
+In the case of #2, we used 1.4 million data points, 1k epochs, and the model achieved 90% accuracy in the training.
+Which model is better/more efficient and why?
+"
+"['reinforcement-learning', 'long-short-term-memory', 'random-forests']"," Title: Reinforcing Learning when action has no effect on the environmentBody: I am trying to get my head around a problem where the action by the agent can not change the environment. Without going into details, my problem is about error correction in an stochastic environment.
+So, here the agent's action cannot change the environment that causes these errors, and all we can do is correct them smartly as they happen. I am currently thinking about using reinforcement learning for an agent that could correct the errors.
+Now my questions are:
+
+- Would reinforcement learning be overkill, since the agent cannot influence the environment?
+- How do RL, LSTM, and even random forest compare in such scenarios?
+
+Thank you.
+"
+"['prediction', 'time-series']"," Title: How to construct a model to predict the value of a time series $y_t$ that depends from other time series $\bar{X}_t$?Body: I would like to know what are the standard approach to construct a model to predict the value of a time series $y_t$ that depends from other time series $\bar{X}_t$. I use to see around that for this kind of task there is a large amount of models but all autoregressive in a way.
+I'm thinking about, for example VAR, SARIMAX, RNN, LSTM.
+I'm looking for a model, or at least an approach, where my lagged target variable is not used as a predictor. Does anyone have some references?
+"
+"['convolutional-neural-networks', 'computer-vision', 'image-segmentation', 'instance-segmentation']"," Title: How to divide a segmented image into classes instances?Body: Is there a method/algorithm to generate instances of objects from image that was segmented by the use of any image segmentation models?
+For example, I have an image with one class and it was segmented in a given way, where 1s are objects of the same class and empty fields are of no class:
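+For illustration (my own made-up mask, since the exact one doesn't matter), it could look like this, with two disconnected blobs of 1s:
+import numpy as np
+
+mask = np.array([[1, 1, 0, 0, 0],
+                 [1, 1, 0, 0, 0],
+                 [0, 0, 0, 1, 1],
+                 [0, 0, 0, 1, 1]])  # one class, but two separate objects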
+How can I now generate a list of the two objects, where the list's elements would be, for example, the positions of all the pixels inside each object (a list of lists)?
+"
+"['papers', 'resource-request', 'fid-score']"," Title: Where can I access this research paper on Frechet distance score?Body: Frechet Inception Distance is a metric that calculates the distance between feature vectors calculated for real and generated images. It is used in evaluations how good the generated images are.
+Consider the following citation of the research paper I want to study in detail, which I think is the first paper on Frechet distance
+
+Fréchet, Maurice. "Sur la distance de deux lois de probabilité."
+Comptes Rendus Hebdomadaires des Seances de L Academie des Sciences
+244.6 (1957): 689-692.
+
+I have no clue where to access the paper.
+In general, I get PDFs of almost any research paper through my institute's subscriptions to various publishers, but I cannot find the PDF or contents of this paper anywhere.
+What can I do to access this paper?
+"
+"['natural-language-processing', 'terminology', 'precision', 'recall', 'information-retrieval']"," Title: What is meant by a ""relevant document"" in NLP?Body: In natural language processing, I came across the concept of "relevant document" several times. And several analytical formulas, such as precision, recall are based on the relevant documents.
+Precision = $\dfrac{\text{Number of documents that are relevant and retrieved to the query Q}}{\text{Number of retrieved documents to the query Q}}$
+Recall = $\dfrac{\text{Number of documents that are relevant and retrieved to the query Q}}{\text{Number of relevant documents to the query Q}}$
+What is meant by "relevance" in such cases? Is it a universally objective term or subjective term, decided by the designer, based on that particular context?
+"
+"['reinforcement-learning', 'importance-sampling']"," Title: Derive Importance Sampling as Expected Value NotationBody: I'm new to RL. Recently, I took a course on Coursera. In the Off-policy MC method, I learned the concept of Importance Sampling as follows:
+
+where the importance sampling ratio is the ratio of the target policy over the behavior policy.
+But in Sutton's book, the expectation under the target policy is estimated like this:
+
+Both sources use the same importance sampling ratio. However, I ended up getting $E[G_{t}|s] = \sum{G_{t} b \frac{\pi}{\pi}} = \sum{G_{t} \frac{1}{\rho}\pi} = E[\frac{G_{t}}{\rho_{t:T-1}}|s]$ instead.
+Did I do something wrong?
+"
+"['reinforcement-learning', 'control-problem']"," Title: How to Weigt Constraints in A Control Problem with Reinforcement LearningBody: I have a control problem for a heating device of a building with the goal to minimize the electricity costs for one day under a varying price for electricity in every hour.
+(more details can be seen here as well: Reinforcement learning applicable to a scheduling problem?).
+I also want to test two further goals (minimize peak load and maximize PV self-consumption rate).
+My problem also has about 10 constraints that should not be violated. I have two main questions about how to integrate the constraints into the Reinforcement Learning agent:
+Here are my two main questions (with following minor questions):
+(1) Basically I have three goals with normalized rewards between 0 and 1 for every time-slot and I have 10 constraints.
+ Should the constraint rewards also be normalized for all 10 constraints? And should I then choose a higher weight for the most important constraint than for all three goals combined, such that a constraint violation is more costly than getting a better objective value for the three goals?
+(2) Is it also possible to tell the Reinforcement Learning agent some rules directly without any constraints?
+ E.g. I have two storage systems, and the agent is only allowed to heat up one of them in every time-slot. Further, the agent should not start and stop heating frequently (around four starts of the device per day is desirable).
+ Can I explicitly tell these rules to the agent? Or do I have to do it indirectly, by calculating a reward for each of these constraints and incorporating the weighted reward into the overall reward function of the agent?
+I'll appreciate any suggestion and comment.
+"
+"['papers', 'convolution', 'geometric-deep-learning', 'graph-neural-networks', 'linear-algebra']"," Title: Why does $I_N + D^{-\frac{1}{2}}AD^{-\frac{1}{2}}$ have eigenvalues in the range [0, 2]?Body: In Semi-supervised classification with Graph Convolutional Networks, I am unable to understand a few things.
+Given an undirected graph having
+
+- adjacency matrix $A$,
+- degree matrix $D_{ii} = \sum_j A_{ij}$,
+- normalized graph Laplacian $L = I_N - D^{-\frac{1}{2}}AD^{-\frac{1}{2}} = U \Lambda U^T$, where $\lambda_{max} \approx 2$ (see page 3, 2nd paragraph, not sure which matrix they are talking about)
+
+Then, $I_N + D^{-\frac{1}{2}}AD^{-\frac{1}{2}}$ has eigenvalues in the range [0, 2]. How?
+"
+"['variational-autoencoder', 'bayesian-deep-learning', 'evidence-lower-bound']"," Title: Are some low dimensional distributions known to be hard to model with VAEs?Body: I am trying to implement a toy VAE project.
+My goal is to use a VAE to model the moon dataset from scikit-learn, with an extra constant (but noisy) z-dimension.
+To this end I use an approximate posterior with the form of a beta distribution and a uniform prior in a 1D latent space, because essentially the data is 1D. The decoder is a NN-parameterized gaussian.
+I cannot get it to work using the simple ELBO.
+I tried so far :
+
+- Increasing the number of monte carlo samples in the SGVB
+- Various deterministic pretrainings which tend to raise nans
+- Increasing the width or depth of the networks
+- Gradient clipping
+- learning rate annealing
+- Remove the noise in the data and perform Batch gradient descent instead of mini-batch
+- ...
+
+I use layers of residual blocks with Tanh nonlinearities, whose outputs are $\log \alpha$ and $\log \beta$ for the encoder, $\mu$ and $\log \sigma$ for the decoder.
+I am starting to wonder whether the distribution is actually hard to model, because I ran out of bugs to fix and strategies to improve training.
+Are some low-dimensional distributions known to be hard to model this way?
+Additionally, what obvious or non obvious mistakes could I have made ?
+ADDENDA
+Code to generate the data:
+# Adapted from sklearn.datasets.make_moons
+import numpy as np
+from numpy.random import default_rng
+
+def make_moons(n_samples=100, noise=None):
+ generator = default_rng()
+
+ n_samples_out = n_samples // 2
+ n_samples_in = n_samples - n_samples_out
+
+ outer_circ_x = np.cos(np.linspace(0, np.pi, n_samples_out))
+ outer_circ_y = np.sin(np.linspace(0, np.pi, n_samples_out))
+ inner_circ_x = 1 - np.cos(np.linspace(0, np.pi, n_samples_in))
+ inner_circ_y = 1 - np.sin(np.linspace(0, np.pi, n_samples_in)) - .5
+
+ X = np.vstack([np.append(outer_circ_x, inner_circ_x),
+ np.append(outer_circ_y, inner_circ_y),
+ np.zeros(n_samples)]).T
+ y = np.hstack([np.zeros(n_samples_out, dtype=np.intp),
+ np.ones(n_samples_in, dtype=np.intp)])
+
+ if noise is not None:
+ X += generator.multivariate_normal(np.zeros(3), np.diag([noise, noise, noise])**2, size=n_samples)
+
+ return X, y
+
+# create dataset
+moon_coordinates, moon_labels = make_moons(n_samples=500, noise=.01)
+moon_coordinates = moon_coordinates.astype(np.float32)
+moon_labels = moon_labels.astype(np.float32)
+
+# normalize dataset
+moon_coordinates = (moon_coordinates-moon_coordinates.mean(axis=0))/np.std(moon_coordinates, axis=0)
+
+UPDATE
+I have found a mistake that can explain poor performance.
+In my post I said that the data is basically 1D, yet when I create the dataset I normalize the standard deviation in every dimension. This increases the magnitude of the z noise, and all of a sudden the third dimension accounts for a lot of variance and my model tries to fit to this noise.
+Removing the normalization dramatically increases the performance.
+"
+"['deep-learning', 'computer-vision', 'classification']"," Title: is there any proof that metric learning cannot achieve better on image classification task than accepted models (resnet etc)?Body: Everything is in the title.
+Metric learning seems to be closer to our way of thinking than the best performing models (supervised learning CNNs-based models like resnet or efficientnet). I was looking for research papers that would have try a metric learning-based model for classic classification task on the Imagenet dataset benchmark but I could not find it. That is why I am asking the question here.
+"
+"['classification', 'python', 'implementation', 'linear-regression']"," Title: Which of the following two implementations of a Least Squares classifier in Python is correct?Body: I am trying to solve a classification problem by implementing the Least Squares algorithm in Python. To solve this problem, I am implementing the linear algebra formula to train the classifier, which is $w = (X^TX)X^Ty$, where $w$ is the final weight vector of the classification function, $X$ is an input matrix of training data and $y$ a matrix of training labels. The classifier must be able to classify into three classes. As seen in the following snippet, during the preparation of the data, I gather my training data in matrix $X$, adding the number one at the end of each sample. I also gather my training labels in matrix $y$, coding each class as a sequence of -1 and 1.
+ X = np.matrix(np.zeros((len(train_set),4)))
+ y = np.matrix(np.zeros((len(train_set),3)))
+
+ for i, row in train_set.iterrows():
+ X[i] = [row[1], row[2], row[3], 1]
+ if row[0] == 'H':
+ y[i] = [1, -1, -1]
+ elif row[0] == 'D':
+ y[i] = [-1, 1, -1]
+ else:
+ y[i] = [-1, -1, 1]
+
+What we have in matrix $y$ in the end, is a matrix that each column is a representation of each class and can tell us which samples belong to the class of the corresponding column. Having explained what each matrix in my program contains, my question is this, which of the following two implementations of the training process is correct and why?
+At first, I went with the implementation seen in the snippet below.
+ Xtranspose = X.T
+ dotProduct = Xtranspose.dot(X)
+ inverse = np.linalg.pinv(dotProduct)
+ A = inverse.dot(Xtranspose)
+ w = A.dot(y)
+
+ for i, row in test_set.iterrows():
+ r = np.matrix([row[1], row[2], row[3], 1]).dot(w)
+
+As you can see, considering that $A = (X^TX)^{-1}X^T$, I multiply $A$ with the label matrix $y$ and then I use the weight vector $w$ in the loop to test the classifier on some test data. Later, though, after some research on the internet, I found this second implementation, which actually has a higher success rate.
+ for i, row in test_set.iterrows():
+ r = np.zeros([3])
+ j = 0
+ for column in y.T:
+ w = A.dot(column.T)
+ r[j] = np.matrix([row[1], row[2], row[3], 1]).dot(w)
+ j += 1
+
+Having calculated $A$ beforehand, I now calculate the weights for each column of $y$, i.e. for each class, separately. This second method has a 10% greater success rate than the first one. So, why does the second training method have a better success rate? Is the second method the right training method?
+"
+"['neural-networks', 'terminology', 'bottlenecks']"," Title: What does it mean by bottleneck and representational bottleneck in feedforward neural networks?Body: Consider the following paragraph from section 2: General Design Principles
of the research paper titled Rethinking the Inception Architecture for Computer Vision
+
+Avoid representational bottlenecks, especially early in the
+network. Feed-forward networks can be represented by an acyclic graph
+from the input layer(s) to the classifier or regressor. This defines a
+clear direction for the information flow. For any cut separating the
+inputs from the outputs, one can access the amount of information
+passing through the cut. One should avoid bottlenecks with extreme
+compression. In general the representation size should gently decrease
+from the inputs to the outputs before reaching the final
+representation used for the task at hand. Theoretically, information
+content cannot be assessed merely by the dimensionality of the
+representation as it discards important factors like correlation
+structure; the dimensionality merely provides a rough estimate of
+information content.
+
+This paragraph warns us to avoid bottlenecks, and in particular representational bottlenecks. What is meant by a bottleneck of/in a neural network, and by a representational bottleneck?
+"
+"['reinforcement-learning', 'evolutionary-algorithms']"," Title: When would you use Evolutionary Strategies over Step-Based Reinforcement LearningBody: In Salimans et al, 2016, the authors argue that ES should be considered a competitive alternative to MDP-based RL algorithms like Q-Learning, TRPO.
+However, in practice, I notice that more often than not ES takes far more episodes to converge than MDP-based algorithms. So what would still be a reason to consider those, apart from pure academic interest?
+The authors mention that ES will show less variance in long-horizon tasks, but didn't give an example. Is this aspect crucial?
+"
+"['definitions', 'correlation']"," Title: What is meant by correlation structure?Body: I know only about the Pearson's correlation coefficient in literature.
+Covariance between two random variables $X$ and $Y$ is defined as
+$$Cov[X, Y] = \mathbb{E}[(X - \mathbb{E}[X])(Y-\mathbb{E}[Y])]$$
+(Linear) Correlation between two random variables $X$ and $Y$ is defined as
+$$Corr[X, Y] = \dfrac{Cov[X, Y]}{\sigma(X)\sigma(Y)}$$
+Covariance is a measure of association between two random variables, whereas correlation measures how strongly they depend on each other.
+Consider the following excerpt mentioning "correlation structure" from section 2: General Design Principles of the research paper titled Rethinking the Inception Architecture for Computer Vision
+
+Theoretically, information content cannot be assessed merely by the
+dimensionality of the representation as it discards important factors
+like correlation structure; the dimensionality merely provides a rough
+estimate of information content.
+
+What is meant by the "correlation structure" mentioned here? Is it a graph on input random variables? Is it in any way related to the aforementioned correlation?
+"
+"['transformer', 'attention', 'weights', 'weights-initialization']"," Title: Is there a proper initialization technique for the weight matrices in multi-head attention?Body: Self-attention layers have 4 learnable tensors (in the vanilla formulation):
+
+- Query matrix $W_Q$
+- Key matrix $W_K$
+- Value matrix $W_V$
+- Output matrix $W_O$
+
+Nice illustration from https://jalammar.github.io/illustrated-transformer/
+
+However, I do not know how one should choose the default initialization for these parameters.
+In works devoted to MLPs and CNNs, one chooses xavier/glorot or he initialization by default, as they can be shown to approximately preserve the magnitude of activations in the forward and backward pass, as shown in these notes.
+However, I wonder whether there is some study of good initialization for Transformers. The default implementations in Tensorflow and PyTorch use xavier/glorot.
+Probably, any reasonable choice will work fine.
+"
+"['tensorflow', 'long-short-term-memory']"," Title: Do LSTM in tensorflow work sequentially or in parallelBody: I have a basic understanding how a cell and a layer of an LSTM works. However, I get confused by what "number of units" (as termed in tensorflow) exactly means. A unit is, as far as I understand one "instance" of a LSTM, consisting of $t$ cells, for a sequence of length $t$. When I have more than one unit, do these work in parallel (i.e. 10 units not interacting with each other) or sequentially (ouput of unit 1 is the input of unit 2 and so on).
+"
+"['deep-learning', 'generative-adversarial-networks', 'style-transfer']"," Title: Expression Transfer Deep Learning ProblemBody: I have old video and I want to keep the person's face in the video but I want to transfer my facial expressions to that video. Is there any better alternative to first order motion model for that task ? I tried deepfacelab but it has kind of steep learning curve
+"
+['image-recognition']," Title: Image recognition neural network: scaling and rotationBody: Are there some effective and robust solutions for scaling and rotation for image recognition with the neural networks (NN)?
+I see tons of sources on the Web with explanations of how a neural network is used for image recognition, but all of them avoid the topic of scaled or rotated images. A network trained on patterns won’t recognize them if the pattern is scaled or rotated.
+Of course, there are some intuitive/naive workarounds/approaches:
+
+- Brute force – you can rotate and scale the image until the NN recognizes it. Far too expensive.
+- You may train the NN on all cases of rotated and scaled images; this could be hard and may dilute the NN's performance.
+- You may teach the NN that some images (rotations and scalings of the original image) form clusters, and train it to recognize clusters and interpolate/extrapolate them. A little bit tricky to code and debug.
+- For rotation you can move to polar coordinates; this gives a kind of invariant, both for recognizing patterns and for building histograms for specific portions of the image. But for this you need to recognize the pivot point, and again this is quite expensive.
+
+Are there any better solutions, ideas, hints, references?
+(I read some answers there to the rotational problem, but what I saw doesn't cover the topic).
+"
+"['reinforcement-learning', 'deep-rl', 'actor-critic-methods', 'value-functions', 'proximal-policy-optimization']"," Title: PPO when does the update happen?Body: In many places, it says PPO and Actor-Critic methods in general use TD-updates, but in the loss function for PPO, the Value function loss component uses the difference between output of the value function and the value target, which I can only assume is the discounted sum of rewards that can only be obtained at the END of the episode?
+So this might be a moment of stupidity for me, but
+
+- Is the value target in PPO set only at the end of the episode using the discounted sum of rewards? or is there a secret way of setting these value targets that I am missing?
+
+- If a learning update indeed takes place every learning step (before the end of the episode), then how does this TD-learning happen - does it use some other approximation of the value target?
+
+
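+For reference, the kind of bootstrapped value-target computation I have seen in some implementations looks like the sketch below (generalized advantage estimation; terminal-state handling is omitted and this may not be what every PPO implementation does):
+
+import numpy as np
+
+def gae_targets(rewards, values, last_value, gamma=0.99, lam=0.95):
+    # values[t] = V(s_t) from the critic; last_value = V(s_T) bootstraps the tail,
+    # so no full-episode discounted return is needed.
+    values = np.append(values, last_value)
+    advantages = np.zeros(len(rewards))
+    gae = 0.0
+    for t in reversed(range(len(rewards))):
+        delta = rewards[t] + gamma * values[t + 1] - values[t]
+        gae = delta + gamma * lam * gae
+        advantages[t] = gae
+    return advantages, advantages + values[:-1]   # (advantages, value targets)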
+Thank you.
+Please help.
+Sincerely,
+a frustrated student
+"
+"['reinforcement-learning', 'open-ai', 'gym']"," Title: Delayed state observation or caching action in OpenAI gym. Can it still learn?Body: I am planning to use OpenAI gym for my experiment in real life.
+In my experiment design, due to the limits of a real-life scenario, I can only receive the state information or the rewards about 2-3 timesteps after the action has happened (in OpenAI gym terms, ~3 cycles of step(action) have occurred). For example, by the time the state at timestep i is observed, an action at timestep i+3 would have happened.
+From how I perceive the function step(action), it needs to return next_state, reward, done every step, and the agent will learn from the state -> action -> next state -> reward tuple. So I was wondering: can I cache the action for future use along with the state with the correct time step in OpenAI gym? Or delay the state observation/reward instead? Would the agent still be able to learn?
+I am experimenting with PPO, TD3, and SAC, which all use actor-critic networks. Would the network eventually be trained well enough to the point where it would still perform well with the delayed state observation?
+"
+"['chat-bots', 'programming-languages', 'computational-linguistics']"," Title: Are there theoretically linguistic inputs that could send an NLP algorithm into infinite loops or break the chatbot?Body: I was asked an interesting question today by a student in a cybersecurity and information assurance program related to getting spammed by chatbots on snapchat. He's tried many conventional means of blocking them, but he's still getting overwhelmed:
+
+- Theoretically, are there lines of code that could disrupt processing, such as commands or syntactic symbols?
+
+My sense is no — the functions would be partitioned such that linguistic data would not execute. But who knows.
+
+- Many programmers are sloppy.
+- I've had friends in video game QA produce controller inputs that programmers claimed were impossible — until demonstrated.
+
+
+- Theoretically, is it possible to "break" a chatbot in the sense of the Voight-Kampff test thought experiment?
+
+This was, of course, popularized via one of the most famous films on AI, Blade Runner, adapted from one of the most famous books, Electric Sheep, and extended recently via Westworld.
+My question here is not related to "psychology" as in those popular media treatments, but linguistics:
+
+- Are there theoretically linguistic inputs that could send an NLP algorithm into infinite loops or produce errors that halt computation?
+
+My guess is no, all the way around, but still a question potentially worth asking.
+"
+"['neural-networks', 'pytorch', 'attention', 'dense-layers']"," Title: Are there any benefits of adding attention to linear layers?Body: Is attention useful only in transformer/convolution layers? Can I add it to linear layers? If yes, how (on a conceptual level, not necessarily the code to implement the layers)?
+"
+"['reinforcement-learning', 'ai-design', 'knapsack-problem']"," Title: How to represent ""terminate episode"" for Knapsack problem with a Pointer Network?Body: I am currently implementing a Pointer Network to solve a simple Knapsack Problem. However, I am a bit puzzled over the correct (or common, or "best") way to give the agent the option to stop taking the item (terminate episode). Currently, I have done it in 2 ways, adding raw dummy features or adding encoded dummy features (dummy features are all zeros). If the agent selects the dummy item, then the agent will stop taking the item anymore and the episode is terminated.
+I trained both methods for 500K episodes and evaluated their performance on a single predefined test case in each episode, after adding the gradient. I found that concatenating dummy features with the encoded features yielded a higher score earlier, but also scored 0 very often. On the other hand, adding the dummy features to the raw features learned to maximize the score very slowly. Therefore, my questions are:
+
+- Does adding the raw dummy features make learning slower because of the additional encoding-layer learning?
+
+- What is the most correct (or common or arguably best) way to give the agent the option to terminate the episode (in this case stop taking item)?
+
+
+"
+"['papers', 'generative-adversarial-networks']"," Title: Is the following a typo or am I understanding wrongly regarding discriminator?Body: Consider the following paragraph from the section 3: Background
of the research paper titled Generative Adversarial Text to Image Synthesis by Scott Reed et al.
+
+Goodfellow et al. (2014) prove that this minimax game has a global
+optimum precisely when $p_g = p_{data}$, and that under mild
+conditions (e.g. G and D have enough capacity) $p_g$ converges to
+$p_{data}$. In practice, in the start of training samples from D are
+extremely poor and rejected by D with high confidence. It has been
+found to work better in practice for the generator to maximize
+$\log(D(G(z)))$ instead of minimizing $\log(1 −D(G(z)))$
+
+I am guessing the bolded portion should be replaced by from G. Am I correct? If not, where am I going wrong?
+"
+"['genetic-algorithms', 'crossover-operators', 'fitness-functions', 'travelling-salesman-problem', 'chromosomes']"," Title: Is there a crossover that also considers that every index in the vector also influences the fitness function?Body: Is there a crossover that also considers that every index in the vector also influences the cost function?
+I have two vectors $v_1=[A_1, A_2, A_3, A_4, A_5]$ and $v_2=[A_5, A_3, A_2, A_1, A_4]$.
+The fitness function considers the index where an element is located. So, basically, every vector represents a matching solution. Using a recombination method would deliver a new combination, but it won't be close to the previous solutions, nor would it take into account what makes one parent better than the other.
+In TSP, by contrast, the absolute indices in the sequence of cities don't really matter.
+"
+"['neural-networks', 'supervised-learning', 'clustering']"," Title: What are some machine learning frameworks for supervised clustering?Body: I have a task where I need to take "data points" which consist of collections of items. Each item needs to be categorised according to predefined categories. That's the easy part - my solution is to train a deep neural network with cross entropy loss. By the way, the reason I don't classify each item separately is because they acquire their meaning when they come together as a set.
+The hard part is that each of these items also have a cluster label. Each cluster can only have items of one category in it, and there can be any number of clusters. Unsupervised clustering methods (applied after the neural network does the categorisation) work fairly well, but not well-enough for my needs. I'd like to:
+A. Make use of the fact that I have the ground truth labelling for these clusters
+B. Leverage my deep neural network because a lot of the "reasoning" required to solve the classification task will be conducive to the clustering task.
+Answers which address at least one of those are useful to me.
+EDIT
+I realised I might be confusing people with this concept of cluster "labels". To clarify, this is no different than the standard way a classical unsupervised clustering algorithm might return its results. If I have N data points and feed them to a clustering algo, the algo might return N labels, one for each data point, and each of which are integers in [0, C-1] where C is the number of clusters. In my example we have the labels for a training dataset and want to make use of them during training. We cannot use softmax + cross-entropy loss because the cluster labels are permutation invariant.
+"
+"['tensorflow', 'datasets', 'object-detection', 'data-labelling', 'imbalanced-datasets']"," Title: How to handle an unbalanced dataset when training object detection algorithms?Body: I am training an object detection model, and I have some very highly unbalanced data annotations. I have almost 11,000 images, all with dimensions of 1024 $\times$ 1024.
+Within those images I have the following number of annotations:
+- Class 1 - 40,000
+- Class 2 - 25,000
+- Class 3 - 900
+- Class 4 - 500
+
+This goes on for a few more classes.
+As this is an object detection dataset that was annotated with the annotation tool Label-img, there are often multiple annotations on each photo. Do any of you have any recommendations as to how to handle fine-tuning an object-detection algorithm on an unbalanced dataset? Currently, collecting more imagery is not an option. I would augment the images and re-label, but since there are multiple annotations on the images, I would be increasing the number of annotations for the larger classes as well.
+Note: I'm using the Tensorflow Object Detection API and have downloaded the models and .config files from the Tensorflow 2 Detection Model Zoo.
+"
+"['machine-learning', 'objective-functions', 'generative-adversarial-networks']"," Title: When can we call a loss function ""adaptive""?Body: A loss function is a measure of how bad our neural network is. We can decrease the loss by proper training.
+I came across the phrase "adaptive loss function" in several research papers. For example: consider the following excerpt from the "Introduction" of the research paper titled Generative Adversarial Text to Image Synthesis by Scott Reed et al.
+
+By conditioning both generator and discriminator on side information, we can naturally model this phenomenon since the discriminator network acts as a "smart" adaptive loss function.
+
+When can we denote a loss function as adaptive? Is it a mathematical property, or is it solely based on the context?
+"
+"['machine-learning', 'definitions']"," Title: What is the formal definition for manifold in artificial intelligence?Body: We come across the word "manifold" in artificial intelligence, especially in the domains where learning is done based on data instances.
+What is the formal definition for manifold?
+"
+"['deep-learning', 'geometric-deep-learning', 'graph-neural-networks', 'graphs']"," Title: Can I extend Graph Convolutional Networks to graphs with weighted edges?Body: I'm researching spatio-temporal forecasting utilising GCN as a side project, and I am wondering if I can extend it by using a graph with weighted edges instead of a simple adjacency matrix with 1's and 0's denoting connections between nodes.
+I've simply created a similarity measure and have replaced the 1's and 0's in the adjacency with it.
+For example, let's take this adjacency matrix
+$$A=
+\begin{bmatrix}
+0 & 1 & 0 \\
+1 & 0 & 1 \\
+0 & 1 & 0
+\end{bmatrix}
+$$
+It would be replaced with the following weighted adjacency matrix
+$$
+A'=
+\begin{bmatrix}
+0 & 0.8 & 0 \\
+0.8 & 0 & 0.3 \\
+0 & 0.3 & 0
+\end{bmatrix}
+$$
+As I am new to graph NN's, I am wondering whether my intuition checks out. If two nodes have similar time-series, then the weight of the edge between them should be approximately 1, right? If the convolution is performed based on my current weights, will this be incorporated into the learning?
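+To make the setup concrete, this is roughly how I am building the propagation matrix from the weighted adjacency (a sketch only; the renormalization with self-loops follows the usual GCN rule, and the weights are my similarity scores):
+
+import numpy as np
+
+A_weighted = np.array([[0.0, 0.8, 0.0],
+                       [0.8, 0.0, 0.3],
+                       [0.0, 0.3, 0.0]])
+
+A_hat = A_weighted + np.eye(3)                    # add self-loops
+D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
+propagation = D_inv_sqrt @ A_hat @ D_inv_sqrt     # D^{-1/2} (A + I) D^{-1/2}, used in H' = act(propagation @ H @ W)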
+"
+"['classification', 'math']"," Title: What should the value of $ρ$ in the $w(n+1) = w(n) + \rho*\text{error}(i)x(i)$ formula of Least Mean Squares be?Body: I am trying to better understand the Least Mean Squares algorithm, in order to implement it programmatically.
+If we consider its weight updating formula $$w(n+1) = w(n) + \rho * \text{error}(i)x(i),$$ where $w(n + 1)$ is the new weight of the classifier function, $w(n)$ is its current weight and $x(i)$ is the $i$th element of a training dataset, what should $\rho$ be?
+From what I have found online, $\rho$ is supposed to be $0 < \rho < \frac{2}{\text{trace}(X^TX)}$, where $X$ is a matrix with all the training data the algorithm has processed at that point. One idea that I had was to take $\rho = \frac{1}{\text{trace}(X^TX)} < \frac{2}{\text{trace}(X^TX)}$, but I do not know if that is correct. Also, one characteristic of this value is that it changes with each iteration of the algorithm, as more samples are added to matrix $X$.
+So, what is a good value for $\rho$? Should it change during the execution of the algorithm or should it stay the same?
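+For reference, this is the update loop I have in mind with the candidate step size (a sketch; the choice rho = 1/trace(X^T X) is exactly the assumption I am asking about, not an established answer):
+
+import numpy as np
+
+def lms_train(X, y, n_passes=10):
+    w = np.zeros(X.shape[1])
+    rho = 1.0 / np.trace(X.T @ X)          # candidate learning rate
+    for _ in range(n_passes):
+        for x_i, y_i in zip(X, y):
+            error = y_i - w @ x_i
+            w = w + rho * error * x_i       # w(n+1) = w(n) + rho * error(i) * x(i)
+    return w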
+"
+"['reinforcement-learning', 'math', 'reinforce']"," Title: REINFORCE differentiation on sum or single value?Body: I'm currently learning Policy-gradient Methods for RL and encountered REINFORCE algorithm. I learned from this site : https://towardsdatascience.com/policy-gradient-methods-104c783251e0 that the gradient of the objective function is calculated as follows:
+
+From what I understand $\sum_{t=0}^{H}\nabla_{\theta}\log{\pi_{\theta}(a_{t}|s_{t})}$ is the sum through the entire trajectory and $\pi_{\theta}(a_{t}|s_{t})$ is the policy of the agent at time step $t$. However in Suton's book the gradient objective is defined differently.
+
+There is only $\nabla \ln{\pi(A_t | S_t)}$ at time step $t$ and no sum of all time steps. So does the algorithm not consider the policy for the whole trajectory when updating? Only a single-step policy?
+Furthermore, there is $\gamma^{t}$ (discounted reward) term in the latter and not the former. What is the reason for that?
+Hopefully, someone can help me clarify this.
+"
+"['convolutional-neural-networks', 'deep-rl', 'game-ai', 'double-q-learning', 'one-hot-encoding']"," Title: How to embed game grid state with walls as an input to neural networkBody: I've read most of the posts on here regarding this subject, however most of them deal with gameboards where there are two different categories of single pieces on a board without walls etc.
+My game board has walls and multiple instances of food. There are 8 different categories:
+Walls, enemy food, my food, enemy powerup, my powerup, attackable enemies, threatening enemies, and current teammate.
+I have one-hot encoded all of this data into a tensor of size (8, 16, 32), where (16, 32) is the size of the game grid. However, I'm not sure whether this is appropriate, since many of the categories have multiple occurrences within a single channel (walls, food). Is it appropriate to use one-hot encoding to represent categories in spatial data, where multiple ones may be present?
+The alternative I was considering was to use a CNN; however, many posts have said it is inappropriate for one-hot data. My reasoning was that since the data is an abstract Boolean grid representing the RGB frames, it might be appropriate.
+Does anyone have any suggestions as to the best way to represent a spatial Boolean grid representing multiple categories for input to a network?
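+For reference, this is roughly how I am building the tensor at the moment (a sketch; the channel assignments and coordinates are just an example):
+
+import numpy as np
+
+N_CATEGORIES, H, W = 8, 16, 32
+state = np.zeros((N_CATEGORIES, H, W), dtype=np.float32)
+
+# channel 0 = walls, channel 1 = enemy food, ... (one channel per category)
+for r, c in [(0, 0), (0, 1), (5, 7)]:      # several cells can be set in one channel
+    state[0, r, c] = 1.0
+for r, c in [(3, 10), (12, 30)]:
+    state[1, r, c] = 1.0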
+"
+"['deep-learning', 'tensorflow', 'training', 'object-detection']"," Title: Tensorflow object detection model total loss starts out good, but suddenly explodes up to high loss numbersBody: I'm training a Tensorflow object detection model with approx. 7500 images of two classes, which contains approx. 10,000 classes per class. I'm using Tensorflow 2.6.0, in case that is relavent. I am using Single Shot Detector (with a ResNet 50 backbone). The image dimensions are 1024 x 1024, and the batch size is set to 2. Training is being done on Ubuntu 20.04 with a GeForce RTX 2080 Super (GPU).
+After beginning training, the process is starting out at loss numbers to be expected:
+INFO:tensorflow:{'Loss/classification_loss': 2.1305692,
+ 'Loss/localization_loss': 0.6402807,
+ 'Loss/regularization_loss': 1.407957,
+ 'Loss/total_loss': 4.178807,
+ 'learning_rate': 0.014666351}
+I0903 16:56:21.947736 140581900665792 model_lib_v2.py:701] {'Loss/classification_loss': 2.1305692,
+ 'Loss/localization_loss': 0.6402807,
+ 'Loss/regularization_loss': 1.407957,
+ 'Loss/total_loss': 4.178807,
+ 'learning_rate': 0.014666351}
+INFO:tensorflow:Step 200 per-step time 0.447s
+I0903 16:57:06.592366 140581900665792 model_lib_v2.py:698] Step 200 per-step time 0.447s
+INFO:tensorflow:{'Loss/classification_loss': 1.2596315,
+ 'Loss/localization_loss': 0.6752764,
+ 'Loss/regularization_loss': 3.0123177,
+ 'Loss/total_loss': 4.9472256,
+ 'learning_rate': 0.0159997}
+I0903 16:57:06.592768 140581900665792 model_lib_v2.py:701] {'Loss/classification_loss': 1.2596315,
+ 'Loss/localization_loss': 0.6752764,
+ 'Loss/regularization_loss': 3.0123177,
+ 'Loss/total_loss': 4.9472256,
+ 'learning_rate': 0.0159997}
+INFO:tensorflow:Step 300 per-step time 0.452s
+I0903 16:57:51.830375 140581900665792 model_lib_v2.py:698] Step 300 per-step time 0.452s
+INFO:tensorflow:{'Loss/classification_loss': 1.0455683,
+ 'Loss/localization_loss': 0.5895866,
+ 'Loss/regularization_loss': 3.0799737,
+ 'Loss/total_loss': 4.715129,
+ 'learning_rate': 0.01733305}
+I0903 16:57:51.830749 140581900665792 model_lib_v2.py:701] {'Loss/classification_loss': 1.0455683,
+ 'Loss/localization_loss': 0.5895866,
+ 'Loss/regularization_loss': 3.0799737,
+ 'Loss/total_loss': 4.715129,
+ 'learning_rate': 0.01733305}
+
+Up until about step 16,800, the loss is decreasing to these numbers:
+INFO:tensorflow:{'Loss/classification_loss': 0.5526215,
+ 'Loss/localization_loss': 0.28333753,
+ 'Loss/regularization_loss': 0.24686696,
+ 'Loss/total_loss': 1.0828259,
+ 'learning_rate': 0.037849143}
+I0903 18:59:14.666097 140581900665792 model_lib_v2.py:701] {'Loss/classification_loss': 0.5526215,
+ 'Loss/localization_loss': 0.28333753,
+ 'Loss/regularization_loss': 0.24686696,
+ 'Loss/total_loss': 1.0828259,
+ 'learning_rate': 0.037849143}
+INFO:tensorflow:Step 16700 per-step time 0.446s
+I0903 18:59:59.247199 140581900665792 model_lib_v2.py:698] Step 16700 per-step time 0.446s
+INFO:tensorflow:{'Loss/classification_loss': 0.4649979,
+ 'Loss/localization_loss': 0.28323257,
+ 'Loss/regularization_loss': 0.2433301,
+ 'Loss/total_loss': 0.9915606,
+ 'learning_rate': 0.037820127}
+I0903 18:59:59.247609 140581900665792 model_lib_v2.py:701] {'Loss/classification_loss': 0.4649979,
+ 'Loss/localization_loss': 0.28323257,
+ 'Loss/regularization_loss': 0.2433301,
+ 'Loss/total_loss': 0.9915606,
+ 'learning_rate': 0.037820127}
+INFO:tensorflow:Step 16800 per-step time 0.446s
+I0903 19:00:43.835976 140581900665792 model_lib_v2.py:698] Step 16800 per-step time 0.446s
+INFO:tensorflow:{'Loss/classification_loss': 0.43402833,
+ 'Loss/localization_loss': 0.1641234,
+ 'Loss/regularization_loss': 0.24129395,
+ 'Loss/total_loss': 0.8394457,
+ 'learning_rate': 0.03779093}
+I0903 19:00:43.836373 140581900665792 model_lib_v2.py:701] {'Loss/classification_loss': 0.43402833,
+ 'Loss/localization_loss': 0.1641234,
+ 'Loss/regularization_loss': 0.24129395,
+ 'Loss/total_loss': 0.8394457,
+ 'learning_rate': 0.03779093}
+
+However, starting at about 16,900, the model total_loss rapidly increases, up to numbers even higher than are shown below:
+INFO:tensorflow:Step 16900 per-step time 0.446s
+I0903 19:01:28.390861 140581900665792 model_lib_v2.py:698] Step 16900 per-step time 0.446s
+INFO:tensorflow:{'Loss/classification_loss': 0.5590624,
+ 'Loss/localization_loss': 0.5160909,
+ 'Loss/regularization_loss': 338.40286,
+ 'Loss/total_loss': 339.478,
+ 'learning_rate': 0.03776155}
+I0903 19:01:28.391232 140581900665792 model_lib_v2.py:701] {'Loss/classification_loss': 0.5590624,
+ 'Loss/localization_loss': 0.5160909,
+ 'Loss/regularization_loss': 338.40286,
+ 'Loss/total_loss': 339.478,
+ 'learning_rate': 0.03776155}
+INFO:tensorflow:Step 17000 per-step time 0.445s
+I0903 19:02:12.936022 140581900665792 model_lib_v2.py:698] Step 17000 per-step time 0.445s
+INFO:tensorflow:{'Loss/classification_loss': 0.7908556,
+ 'Loss/localization_loss': 0.7274248,
+ 'Loss/regularization_loss': 858.3554,
+ 'Loss/total_loss': 859.87366,
+ 'learning_rate': 0.037731986}
+I0903 19:02:12.936432 140581900665792 model_lib_v2.py:701] {'Loss/classification_loss': 0.7908556,
+ 'Loss/localization_loss': 0.7274248,
+ 'Loss/regularization_loss': 858.3554,
+ 'Loss/total_loss': 859.87366,
+ 'learning_rate': 0.037731986}
+INFO:tensorflow:Step 17100 per-step time 0.452s
+I0903 19:02:58.127156 140581900665792 model_lib_v2.py:698] Step 17100 per-step time 0.452s
+INFO:tensorflow:{'Loss/classification_loss': 0.7510178,
+ 'Loss/localization_loss': 0.49337074,
+ 'Loss/regularization_loss': 2617.2888,
+ 'Loss/total_loss': 2618.5332,
+ 'learning_rate': 0.03770224}
+I0903 19:02:58.127575 140581900665792 model_lib_v2.py:701] {'Loss/classification_loss': 0.7510178,
+ 'Loss/localization_loss': 0.49337074,
+ 'Loss/regularization_loss': 2617.2888,
+ 'Loss/total_loss': 2618.5332,
+ 'learning_rate': 0.03770224}
+INFO:tensorflow:Step 17200 per-step time 0.445s
+I0903 19:03:42.625258 140581900665792 model_lib_v2.py:698] Step 17200 per-step time 0.445s
+INFO:tensorflow:{'Loss/classification_loss': 1.1258743,
+ 'Loss/localization_loss': 0.45634705,
+ 'Loss/regularization_loss': 394886900.0,
+ 'Loss/total_loss': 394886900.0,
+ 'learning_rate': 0.037672307}
+I0903 19:03:42.625638 140581900665792 model_lib_v2.py:701] {'Loss/classification_loss': 1.1258743,
+ 'Loss/localization_loss': 0.45634705,
+ 'Loss/regularization_loss': 394886900.0,
+ 'Loss/total_loss': 394886900.0,
+ 'learning_rate': 0.037672307}
+INFO:tensorflow:Step 17300 per-step time 0.445s
+I0903 19:04:27.112154 140581900665792 model_lib_v2.py:698] Step 17300 per-step time 0.445s
+INFO:tensorflow:{'Loss/classification_loss': 0.57859087,
+ 'Loss/localization_loss': 0.53405523,
+ 'Loss/regularization_loss': 383440770.0,
+ 'Loss/total_loss': 383440770.0,
+ 'learning_rate': 0.037642203}
+I0903 19:04:27.112533 140581900665792 model_lib_v2.py:701] {'Loss/classification_loss': 0.57859087,
+ 'Loss/localization_loss': 0.53405523,
+ 'Loss/regularization_loss': 383440770.0,
+ 'Loss/total_loss': 383440770.0,
+ 'learning_rate': 0.037642203}
+
+What could be the cause of this, and what would be the best way to go about fixing it?
+"
+"['reinforcement-learning', 'deep-learning', 'deep-rl', 'tic-tac-toe']"," Title: What is the optimal score for Tic Tac Toe for a reinforcement learning agent against a random opponent?Body: I guess this problem is encountered by everyone trying to solve Tic Tac Toe with various flavors of reinforcement learning.
+The answer is not "always win" because the random opponent may sometimes be able to draw the game. So it is slightly less than the always-win score.
+I wrote a little Python program to calculate that. Please help verify its correctness and inform me if it has bugs or errors.
+"
+"['reference-request', 'tensor']"," Title: Cost functions for reducing Tensors to 1-dimensional arrays?Body: I'm interested in the IT side, here, specifically how I most efficiently store a tensor in a one dimensional data structure. My assumption is that certain approaches will be more expensive than others, but I'd like to be able to validate that assumption, and show it to be wrong. Is there any work on this subject?
+"
+"['datasets', 'papers', 'features', 'representation-learning']"," Title: Why disentangling the features of variation in representation?Body: Consider the following excerpt from abstract
of the research paper titled Better Mixing via Deep Representations by Yoshua Bengio et al.
+
+It has been hypothesized, and supported with experimental evidence,
+that deeper representations, when well trained, tend to do a better
+job at disentangling the underlying factors of variation.
+
+In general, as per my current knowledge, we want to preserve the factors that contribute to variation in the final representation. But the abstract is contrary to that. Where am I going wrong? Why is there a need to disentangle the factors of variation?
+"
+"['deep-learning', 'weights-initialization']"," Title: What are strategies for data driven weights initialization?Body: I am beginner in deep learning and currently training a few neural networks (Pytorch) for problems in audio and speech. For my tasks, simple feed-forward networks are working well enough. I use basic layers like Linear, ReLU and Softmax with nll loss. I have tested a few initialization schemes provided by Pytorch and noticed that initialization has significant (but not high) effect in the speed of training and the final accuracy of the model. I am currently using torch.nn.init.kaiming_uniform_
for initialization.
+In my understanding, all these are data independent initialization schemes. I would like to try something that is data dependent. I saw a few pre-training strategies with unsupervised learning followed by supervised learning, but they seem overly complicated.
+I am looking for something simple where I can use (preferably a fraction of the) training data to 'tweak' weights to better positions before the training. Are there any such strategies?
+Addendum-1:
+Current initialization schemes (AFAIK) mostly produce random values with control over their range or energy to prevent values from dying down or blowing up. My aim is to further improve the starting point of training by taking into account the training data (or at least some of it). I am thinking of something like this. We pass a few batches of training data through the initial network and collect statistics on neuron outputs. Based on this, we identify the misbehaving neurons and tweak the weights and biases to reduce such issues, so as to improve the training speed or accuracy. Is there anything of that kind?
+"
+"['reinforcement-learning', 'comparison', 'statistical-ai']"," Title: Can I treat ""experience"" in reinforcement learning as ""training data"" in statistical learning?Body: Statistics is a branch of mathematics that extracts useful information from data. The data is generally called as "training data" in statistical (machine) learning.
+Consider the following paragraph from the section 1.1 Reinforcement Learning
of CHAPTER 1. THE REINFORCEMENT LEARNING PROBLEM
of the textbook Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto.
+
+Reinforcement learning
+is different from supervised learning, the kind of learning studied in
+most current research in field of machine learning. Supervised
+learning is learning from a training set of labeled examples provided
+by a knowledgable external supervisor. Each example is a description
+of a situation together with a specification—the label—of the correct
+action the system should take to that situation, which is often to
+identify a category to which the situation belongs. The object of this
+kind of learning is for the system to extrapolate, or generalize, its
+responses so that it acts correctly in situations not present in the
+training set. This is an important kind of learning, but alone it is
+not adequate for learning from interaction. In interactive problems it
+is often impractical to obtain examples of desired behavior that are
+both correct and representative of all the situations in which the
+agent has to act. In uncharted territory—where one would expect
+learning to be most beneficial—an agent must be able to learn from its
+own experience.
+
+You can observe that training data in machine learning, if we model it in a proper format, can serve as experience for reinforcement learning, but it may not be complete or practical.
+I am asking this question from the viewpoint of statistical learning rather than machine learning alone; training data in statistical learning can be understood as any data, at any time instant, that is useful for learning.
+Then, is it perfectly fine to always interpret experience in reinforcement learning as training data in statistical learning?
+"
+"['neural-networks', 'datasets']"," Title: Modelling of output neuron for mixed features?Body: A dataset in artificial intelligence, in general, consists of some features (say $n$). Assume that $m$ among them are output features. I want to model this function using a neural network. So, input to my neural network is $n-m$ features and output is $m$ features. My question is about the output features.
+If an output feature is a continuous random variable, then its corresponding output neuron can be trained to give continuous output. Similarly, if an output feature is a discrete random variable, then its corresponding output neuron can be trained to give discrete output.
+But I never came across features that are mixed random variables. What is the nature of the output of a neuron that is intended to give the output value for a mixed random variable, which is neither discrete nor continuous in nature?
+"
+['mnist']," Title: MNIST with fewer pixels?Body: MNIST images are 28x28 pixels. Perhaps a silly question: is there anything like MNIST, but whose images have fewer pixels?
+"
+"['convolutional-neural-networks', 'classification', 'image-recognition', 'data-preprocessing', 'quality-control']"," Title: Is having near-duplicates in a training dataset a bad thing?Body: I am making a labeled dataset of images from web streams for a CNN classification. Pictures from the same stream are quite similar as far as background, but slightly different as far as the main object. The focus of what should be learned is in the main object.
+My concern is that feeding similar images with the same features in the background will result into making those features more relevant and hence lower the weights of the features that matter.
+So, should I be worried about removing similar images from a dataset, so that unrelated features are not learned?
+An ideal answer would discuss the trade-offs.
+I am aware of the practice of augmenting the training images by scaling/skewing/flipping them around. So it looks like people do it intentionally, but why?
+I should also say that it's not about learning from a single stream, there are tons of them. So most images are very square-distant from one another since they are coming from different streams, except those ones that were snapped from the same stream.
+"
+"['machine-learning', 'deep-learning', 'backpropagation', 'gradient-descent']"," Title: Different ways to calculate backpropagation derivatives, any difference?Body: I'm studying error backpropagation in neural networks. I am interested in why we use only one path on the computational graph to get the value of the derivative for a weight? I ask the question because there are several paths in the computational graph to get the derivative for a particular weight. Why do we only use a one value? Why don't we combine the values from all possible paths?
+Schema:
+
+Formulas:
+Normal path:
+$$\frac{\partial E}{\partial w_{1,1}} = \frac{\partial E}{\partial Out} \cdot \frac{\partial Out}{\partial a_{1,1}}\cdot \frac{\partial a_{1,1}}{\partial a_{0,1}}\cdot \frac{\partial a_{0,1}}{\partial w_{1,1}}$$
+Alternate path:
+$$\frac{\partial E}{\partial w_{1,1}} = \frac{\partial E}{\partial Out} \cdot \frac{\partial Out}{\partial a_{1,2}}\cdot \frac{\partial a_{1,2}}{\partial a_{0,1}}\cdot \frac{\partial a_{0,1}}{\partial w_{1,1}}$$
+Why don't we consider both derivatives or the sum of them?
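+As a sanity check, here is a tiny autodiff example with two paths from a weight to the output (a sketch; the toy numbers are arbitrary):
+
+import torch
+
+w = torch.tensor(1.0, requires_grad=True)
+x = 2 * w            # shared intermediate node
+a = 3 * x            # path 1
+b = 4 * x            # path 2
+out = a + b
+out.backward()
+print(w.grad)        # tensor(14.) = 2*3 + 2*4, i.e. the contributions of both paths summed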
+"
+"['reinforcement-learning', 'python', 'deep-rl', 'ddpg', 'multi-agent-systems']"," Title: How to parallelize multi-agent DDPG (MADDPG)Body: I am experimenting with MADDPG algorithm implemented in this repo. Since there were only a few agents (2-3) in the implementation (also in the original paper) steps like parameter updates, action prediction, etc. are done in a for loop. I want to increase the number of agents, say 10 or 30, and perform parallelization of the above-mentioned steps for all agents, i.e. I want to avoid for loops like this
+for agent_idx in range(n_agents):
+ ...
+ ...
+
+I tried the Python multiprocessing module with the pool.map method, but I am getting "AttributeError: Can't pickle local object ...". Below is the code I am running to get a joint action prediction, but it results in the error above.
+from multiprocessing import Pool
+
+def get_ind_action(i, obs_i):
+ return actor_critic[i].act(obs_i) # returns an individual action for a given observation for ith agent
+
+def get_joint_action(obs):
+ pool = Pool()
+ args_list = [[i, obs[i]] for i in range(n_agents)]
+ joint_action = pool.map(get_ind_action, args_list)
+ return joint_action
+
+
+Here actor_critic is a list of neural networks of all agents, obs is the joint state observed by the centralized critic but each actor only sees its own state. The algorithm has the following architecture.
+
+"
+"['neural-networks', 'multilayer-perceptrons', 'notation']"," Title: What, exactly, do mlp(64,64) and mlp(64,128,1024) mean in PointNet, and how many input neurons does 1 (x,y,z) point have?Body: I couldn't find out how to interpret the multilayer perceptron notation given in PointNet. Specifically, I am looking to find out what the numbers inside the parentheses of mlp(64,64)
and mlp(64,128,1024)
actually mean.
+(I also have a 2nd question about PointNet MLP architecture, which I ask towards the end.)
+Here's what I found online, which I believe applies:
+
+- https://towardsdatascience.com/deep-learning-on-point-clouds-implementing-pointnet-in-google-colab-1fd65cd3a263
+There's a paragraph here that says
+
+In this case MLP with shared weights is just 1-dim convolution with a kernel of size 1.
+
+Here, a link is provided to explain more about the 1-dim convolution...
+https://jdhao.github.io/2017/09/29/1by1-convolution-in-cnn/
+...and I follow this pretty well.
+
+- There's also this Matlab example...
+https://www.mathworks.com/help/vision/ug/point-cloud-classification-using-pointnet-deep-learning.html
+...which tells me to
+
+set the classifier model input size to 64 and the hidden channel size to 512 and 256 and use the initalizeClassifier helper function...to initialize the model parameters.
+
+inputChannelSize = 64; hiddenChannelSize = [512,256];
+
+- Then there's this link: https://www.researchgate.net/figure/Network-architecture-The-numbers-in-parentheses-indicate-number-of-MLP-layers-and_fig2_327068615
+...in which they say,
+
+The numbers in parentheses indicate number of MLP layers
+
+but this is, in my opinion, not written very well. Do they mean,
+
+The notation mlp(64,64,128,256) means that the MLP has 4 layers, and each layer produces an output with 64, 64, 128, and 256 channels, respectively?
+
+
+
+Here are my 2 questions about PointNet MLP notation / architecture:
+
+- What do each of the numbers in something like mlp(64,64,128,256) actually mean, and what do their positions mean? Are these numbers ONLY referring to the hidden layers, which includes the output layer? Also, are they referring to the number of channels, akin to the depth-wise feature layers of a CNN?
+
+- Finally, if your input is nx3 (as in, n (x,y,z) points), does this mean that the PointNet MLP takes an input of 1x3, meaning 1 input neuron, or 3 input neurons?
+
+
+"
+"['convolutional-neural-networks', 'ai-design', 'labeled-datasets']"," Title: How do I prepare my data for a CNN to be applied to a geophysical-related problem?Body: I am currently doing research work on an inversion of geophysical data using Machine Learning. I have come across some research work where a Convolutional Neural Network (CNN) has been used effectively for this purpose (for example, his).
+I am particularly interested in how to prepare my input and output labelled data for this machine learning application, since the input will be the observed geophysical signal, and the label output will be the causative density or susceptibility distribution (for gravity and magnetic, respectively).
+I need some assistance and insight as to how to prepare the data for this CNN application.
+Additional Explanation
+Experimental setup: Measurements are taken above the ground surface. These measurements are signals that reflect the distribution of a physical property (e.g., density) in the ground beneath. For modelling, the subsurface is discretised into squares or cubes, each having a constant but unknown physical property (e.g., density).
+How it applies to the CNN: I want my input data to be the measurements taken above ground. The output should then be the causative density distribution (that is, the value of the density in each cube/square).
+See attached picture (flat top is the "above ground", all other prisms represent the discretisation of the subsurface. I want to train the CNN to give out a density value for each cube in the subsurface, given the above ground measurements)
+
+"
+"['natural-language-processing', 'natural-language-understanding', 'sentiment-analysis']"," Title: Is there something like person-specific sentiment analysis?Body: Sentiment analysis, as we know, measures "Cake sucks" as say -0.4, and "Cake is great" as 0.7.
+What I'm looking for is something a bit different like so:
+
+- Given input text data written by 1 person (say a blog)
+- Predict how they (the person who wrote the text) might react to a certain piece of text
+
+
+What might something like this look like?
+
+- Let's suppose that Person A with a blog has written in his blog posts thousands of times about how much cake is the best thing to happen to humanity.
+- The system should probably infer that if that person read something like "Cake is the WORST food ever", they would react negatively to it, if say, they also believe that there is such a thing as 'objective taste' somehow (aesthetic absolutism).
+- Or if Person A has made anti-racist statements, that racist statements would be strongly negative.
+- If Person A reads the statement "I hate lawyers" and in their blog they have written about how they don't care either way about law, it should probably be 0.
+- Finally, if Person A reads the statement "iPhones are better than Android" and there is zero data about either iPhones or Androids, or even related data about Apple or Google, then it should probably be 0, with an additional "confidence" metric at 0 (since there is no data, this confidence metric will let us know whether there is any data to support the measurement or not).
+
+
+This model would need to be able to somehow inductively 'infer' a value system of some kind, and assign intensities of probable reactions based on the frequency of an expressed view, as well as pick up on nuances (such as philosophical assumptions, e.g. aesthetic absolutism in the cake example above) that may inform that measurement.
+In other words, I'd like to create a model (or find a pre-trained model to fine-tune), that would be able to, given text data from that 1 person, predict their sentiment in response to a new piece of text.
+Would love any help whatsoever regarding:
+
+- What types of pre-trained models I should look at
+- Any ideas of any kind whatsoever you might have on how to achieve this
+- What sorts of architectures/resources/concepts may be relevant to look at
+
+"
+"['reinforcement-learning', 'deep-rl', 'regularization', 'normalisation', 'observation-spaces']"," Title: Should I apply normalization to the observations in deep reinforcement learning?Body: I am new to DRL and trying to implement my custom environment. I want to know if normalization and regularization techniques are as important in RL as in Deep Learning.
+In my custom environment, the state/observation values are on very different scales. For example, one observation is in the range [1, 20], while another is in [0, 50000]. Should I apply normalization or not? I am confused. Any suggestions?
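+To show what I mean by normalization, here is a minimal sketch, assuming the bounds of each observation component are fixed and known (the values below are just the two example ranges above):
+import numpy as np
+
+low  = np.array([1.0, 0.0])        # lower bounds of the two observation components
+high = np.array([20.0, 50000.0])   # upper bounds
+
+def normalize(obs):
+    # min-max scale each component into [0, 1]
+    return (obs - low) / (high - low)
+
+print(normalize(np.array([10.0, 25000.0])))  # -> [0.47..., 0.5]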
+"
+"['math', 'geometric-deep-learning']"," Title: What are the different types of geometry in literature that may be used for deep learning?Body: Recently, I asked a question on the concept of a manifold and received an answer that points to a relatively new subfield of deep learning named geometric deep learning.
+In the preface
of the paper titled Geometric Deep Learning Grids, Groups, Graphs, Geodesics, and Gauges, there is a mention of three types of geometry that do exist in the literature.
+
+For instance, Euclidean geometry is concerned with lengths and angles, because these properties are preserved by the group of Euclidean transformations (rotations and translations), while affine geometry studies parallelism, which is preserved by the group of affine transformations. The relation between these geometries is immediately apparent when considering the respective groups, because the Euclidean group is a subgroup of the affine group, which in turn is a subgroup of the group of projective transformations.
+
+The three types of geometry they mentioned are Euclidean, affine and projective. I want to know the complete list of the types of geometry that exist in the literature, insofar as they are relevant to geometric deep learning.
+What are the types of geometry in the literature that may be used for deep learning?
+"
+"['machine-learning', 'terminology', 'generalization', 'inductive-bias']"," Title: Is the inductive bias always a useful bias for generalisation?Body: Is it true that a bias is said to be inductive iff it is useful in generalising the data?
+Or can inductive bias also refer to assumptions that may cause a decrease in performance?
+
+Suppose I have a dataset on which I want to use a deep neural network to do my task. I think, based on some knowledge, that a DNN with 5 or 11 layers may work well. But, after implementation, suppose the 11-layer one worked well. Can I then call both of these assumptions inductive biases, or only the 11-layer assumption?
+"
+['singularity']," Title: Can an AI create another better AI, which in turn creates another better AI, and so on?Body: I have no specific knowledge of the AI field, but I heard that AI systems get better the longer they learn.
+So, I was wondering: could it be possible that AIs will learn how to create better AIs (or assist humans in creating a better AI), and then those better AIs will learn to create an even better/faster AI, and so on? Wouldn't this mean that the AI would get exponentially better/faster because, after each successive generation, a slightly faster AI will do the job?
+I also heard that "Google is using AI to design processors that run AI more efficiently". Wouldn't this be the same? The AI designs a faster CPU => the AI gets faster and can design an even better CPU to run on.
+Is something like that possible? Would this mean that at some point there will be a breakthrough in AI that will significantly increase the speed of AIs because of those loops?
+"
+"['reinforcement-learning', 'q-learning', 'papers']"," Title: Why does Q-function training not query the Q-function value at unobserved states?Body: In the paper Conservative Q-Learning for Offline Reinforcement Learning, it is stated (section 3.1, page 3) that
+
+standard Q-function training does not query the Q-function value at unobserved states, but queries the Q-function at unseen actions
+
+I don't see how this is true. For every $(s,a)$ pair, we need to update $Q(s,a)$ to reduce the value $|Q(s,a) - R(s,a) - \gamma E[\max_{a'}Q(s',a')]|$ until it converges to zero.
+We see the existence of both $a'$ and $s'$, and $s'$ could be unseen, for example, on the very first update, where we are at $s$, take action $a$, and could arrive at any state $s'$.
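+For concreteness, this is how I picture the standard (tabular) update written over logged transitions; all sizes, transitions and rewards below are made up for illustration:
+import numpy as np
+
+n_states, n_actions, gamma, alpha = 4, 2, 0.99, 0.1
+Q = np.zeros((n_states, n_actions))
+
+transitions = [(0, 1, 0.0, 2), (2, 0, 1.0, 3)]    # (s, a, r, s') tuples from a dataset
+
+for s, a, r, s_next in transitions:
+    target = r + gamma * Q[s_next].max()           # the max over a' queries actions never taken in the data
+    Q[s, a] += alpha * (target - Q[s, a])          # but only (s, a) pairs that appear in the data are updated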
+Can someone explain this?
+"
+"['reinforcement-learning', 'deep-rl', 'dqn']"," Title: Are the Q-values of DQN bounded at a single timestep?Body: Consider that we have an agent that has a set of thousands of different actions at each timestep. The reward function in $R:S \rightarrow\{0,1\}$. Let $Q_{t}^\pi(s,a)$ be the estimate from the neural network in the DQN. At timestep $t \leq T$, where $T$ is the horizon of the RL task, is there any possible way to upper bound
+$$ max_a Q_{t}^\pi(s,a) - min_a Q_{t}^\pi(s,a) $$
+where $\pi$ is the policy of DQN's Q-network (Neural Net regressor)?
+"
+"['research', 'academia']"," Title: Is there a venue to publish negative results in AI/ML domain?Body: Negative results occur frequently in AI/ML research (and perhaps in other domains too). Most of the time, these results are not published. This is mostly because your typical AI/ML conference doesn't accept such papers.
+However, are there any venues to publish these results? I believe these results can still be useful to look at before you delve into a certain project, so that you'd at least know which approaches don't work.
+As an example venue, there seems to be PerFail workshop from the pervasive computing domain. So, is there something similar for AI/ML?
+"
+"['computer-vision', 'definitions', 'style-transfer']"," Title: What are the definitions for the content and style of an image without using deep neural network?Body: In deep learning, an image is said to contain two types of features. One is the content of the image and the other is the style of the image.
+Deep neural networks are generally used to obtain both content representation and style representation of an image. So, one can roughly define the style and content representations of an image using deep neural networks.
+Research papers generally show foreground objects (under consideration or focus) in an image as the content of the image and the background (or background objects such as sky etc.) as the style of the image.
+If we need to define the content and style of an image without using deep neural networks, then what can be the definitions for the content and style of an image?
+"
+"['neural-networks', 'game-ai', 'single-layer-perceptron']"," Title: Is my single layer perceptron getting biased input some way or the other?Body: I was working a little bit on a school project my team and I decided to do for submission in the year-end. It's a small game which I call 'Quattro', and its rules are as follows:
+
+- The game is played on an 8 x 8 square grid and each player (both the human and the computer) has sixteen pieces on their side (just the same layout as in chess, but here all pieces are identical for each player).
+- Only vertical moves are allowed (i.e., one can move only one square forward at a time, along a column), as long as no other piece stands in front of the piece to be moved.
+- However, a piece can cross over to the north-west or north-east square (looking at a 3 x 3 grid with the piece under consideration at the centre) if the squares to its north and to its west/east are held by the enemy's pieces, similar to the 'en passant' move in chess. In the process, the enemy loses the piece in the square just below the square that was crossed over to.
+- The players involved can either be an attacker or a defender (if one chooses the former, the other takes the latter). The attacker wins if he/she successfully takes at least four of his/her pieces (hence the name Quattro) to the enemy's side (that is, to the last row counting from that player's side), while the defender wins if the attacker is prevented from doing so.
+
+You can request me to add screenshots in case the rules are very vague (even my teammates were confused 😅).
+Okay, so I'm doing this on Python 3.9.6 and I have somehow made the board layout and movement rules (except for rule 2 and rule 4, which are supposed to be added once the primary workings of the game are completed). I had somehow made the AI player (which is based on a single-layer perceptron), but I doubt whether it is working right or not. The problem is that when I make a random move, the AI player always starts at the same column and moves pieces in some order I can't clearly remember (in the early stage of creation, it all seemed to work fine, but as time progressed, I began to see indexing errors, so I tried to adjust things somehow), and then it wanders off into an infinite loop. From a debug message I set up to observe the change in weights, I saw that at times one weight is growing while the other few would either be shrinking or remaining constant. As of now, I set up a variable to give the model a random target value (or maybe not so random, it seems) to train with, and still the problem continues. I doubt whether the input data is biased in some way or the other. Here's how the input is taken:
+
+- The model first checks through each column, and the input corresponding to each column will be a vector of 0's and 1's, with 1 indicating the enemy's presence and 0 otherwise. The model thus generates a 'preference score' (equivalent to the activation function of the sum of the weighted inputs, as in any other perceptron).
+- The same is done for rows as well, and the lists of values in both cases are passed to a dictionary, from which the player chooses the row and column index with the highest scores and moves the piece there.
+
+I also set up an InvalidMove
exception so as to make sure that the machine doesn't play blankly.
+So here's the code:
+
+- MarchOfTheFinalFour.py - the module containing the required exception and the board.
+
+# MarchOfThFinalFour.py
+
+from time import *
+
+# 'March of The Final Four' - a clone of chess
+
+player_piece = 'Φ' # player's piece
+computer_piece = 'τ' # computer's piece
+
+class InvalidMove(Exception):
+ '''error when you/ the computer takes an invalid move'''
+ def __init__(self, coords):
+ self.stmt = "invalid move from:"
+ self.coords = coords
+
+class PlayTable:
+ def __init__(self, table_side):
+ ''' generates the game board, empty and with no pieces '''
+ self.length = table_side
+ self.table = [[0]*table_side]*table_side
+
+ def __repr__(self):
+ '''prints the table'''
+ print()
+ table_str = ''
+ num = 0
+ for row in self.table:
+ table_str += str(num) + " \t"
+ for piece in row:
+ table_str += str(piece) + "|"
+ table_str += "\n"
+ num += 1
+ return table_str + "\t0 1 2 3 4 5 6 7"
+
+ def reset(self):
+ ''' resets the board/ places pieces on it '''
+ for row in [0, 1] :
+ self.table[row] = [computer_piece]*self.length
+
+ for row in [self.length-2, self.length - 1] :
+ self.table[row] = [player_piece]*self.length
+
+ for row in range(2, self.length-2):
+ self.table[row] = [0]*self.length
+ def move_piece(self, coord, turn = 'player'):
+ '''moves the piece at coord '''
+ if self.table[coord[1]%self.length][coord[0]%self.length] != 0 and self.table[(coord[1] + (1 if turn == 'computer' else -1))%self.length][coord[0]%self.length] == 0:
+ temp = self.table[coord[1]][coord[0]]
+ self.table[coord[1]][coord[0]] = 0
+ direction = 1 if turn == 'computer' else -1
+ self.table[coord[1]+direction][coord[0]] = temp
+ print(f"Moved {temp} from {(coord[0] if coord[0] >= 0 else 8 + coord[0],coord[1] if coord[1] >= 0 else 8 + coord[1])}") # msg
+ elif self.table[coord[1]%self.length][coord[0%self.length]] == 0 or self.table[(coord[1] + (-1)**(1 if turn == 'player' else 1))%self.length][coord[0]%self.length] != 0:
+ raise InvalidMove(coord)
+ elif turn == 'player' and self.table[coord[1]%self.length][coord[0]%self.length] == computer_piece:
+ raise InvalidMove(coord)
+ elif turn == 'computer' and self.table[coord[1]%self.length][coord[0]%self.length] == player_piece:
+ raise InvalidMove(coord)
+
+
+board = PlayTable(8)
+board.reset()
+print(board)
+
+
+
+- TestGameML.py - sample game, NPC, single-layer perceptron, etc. all lies here:
+
+from math import *
+from random import *
+import MarchOfTheFinalFour as mff
+
+######################
+
+## Math functions for our use in here
+
+def multiply(list_a, list_b):
+ '''matrix multiplication and addition'''
+ list_res = [list_a[n] * list_b[n] for n in range(len(list_a))]
+ return fsum(list_res)
+
+def sig(x):
+ '''logistic sigmoid function'''
+ return exp(x)/(1+ exp(x))
+
+##############################
+
+## Neighbourhood search
+
+def neighbourhood(coords, board_length):
+ '''generates the 3 x 3 grid that forms the neighbourhod of the required square'''
+ axial_neighbours = [(coords[0] + 1, coords[1]),(coords[0] - 1, coords[1]),
+ (coords[0], coords[1] + 1), (coords[0], coords[1] - 1)] # neighbours along NEWS directins
+ diagonal_neighbours = [(coords[0] + 1, coords[1]+1),(coords[0] - 1, coords[1] - 1),
+ (coords[0]-1, coords[1] + 1), (coords[0]+1, coords[1] - 1)] #diagonal neighbours
+ neighbours = axial_neighbours + diagonal_neighbours # supposed neighbours
+ ## purging those coordinates with negative values in them:
+ for i in range(len(neighbours)):
+ if (neighbours[i][0] < 0 or neighbours[i][0] > board_length - 1) or (neighbours[i][1] < 0 or neighbours[i][1] > board_length - 1):
+ neighbours[i] = 0
+ while 0 in neighbours:
+ neighbours.remove(0)
+
+ return neighbours
+
+########################
+
+# The NPC's brain
+
+class NPC_Brain:
+ '''brain of the NPC ;), actually a single-layer perceptron '''
+ def __init__(self,board_size):
+ ''' Initialiser'''
+ self.inputs = board_size # no. of input nodes for the neural network
+ self.weights = [random() for i in range(self.inputs)] # random weights for each game
+ self.column_scores = [] # column scores (for each column) - the 'liking' of the computer to move a piece in a column as the output
+ # of the neural network's processing
+ self.row_scores = [] #same here
+ self.inputs_template_columns = [] # a container to hold the inputs to the neural network
+ self.inputs_template_rows = [] # same here but for rows
+ def process(self, board, threshold):
+ '''forward-feeding'''
+ # we begin by setting the lists to zero so as to make the computer forget the past state of the board and to look for the current state
+ self.inputs_template_columns = []
+ self.inputs_template_rows = []
+ self.column_scores = []
+ self.row_scores = []
+ self.row_scores = []
+ for column in range(self.inputs):
+ scores = [1 if row[column] == mff.computer_piece else 0 for row in board] # checking for enemies in each column
+ self.inputs_template_columns.append(scores)
+ score = sig(multiply(scores, self.weights)/threshold) # using the logistic sigmoid function to generate a liking for columns :D
+ self.column_scores.append(score) # each column score is appended
+ for row in range(self.inputs):
+ scores = [1 if board[row][i] == mff.player_piece else 0 for i in range(self.inputs)] # checking for enemies in each column
+ self.inputs_template_rows.append(scores)
+ score = sig(multiply(scores, self.weights)/threshold) # using the logistic sigmoid function to generate a liking for columns :D
+ self.row_scores.append(score) # each column score is appended
+ return {'columns':self.column_scores, 'rows':self.row_scores}
+ def back_prop(self, learning_rate, target = 1):
+ '''Back-propagation, with error function as squared-error function (target - error)**2'''
+ for j in range(len(self.inputs_template_columns)):
+ for i in range(self.inputs):
+ '''overfitting can occur, but still let's try this'''
+ self.weights[i] += learning_rate * 2 * (self.column_scores[j] - target) * (self.column_scores[j]*(1-self.column_scores[j])) * self.inputs_template_columns[j][i] #backprop formula
+ for k in range(len(self.inputs_template_rows)):
+ for i in range(self.inputs):
+ '''overfitting can occur, but still let's try this'''
+ self.weights[i] += learning_rate * 2 * (self.row_scores[k] - target) * (self.row_scores[k]*(1-self.row_scores[k])) * self.inputs_template_rows[k][i] #backprop formula
+
+
+
+
+class NPC:
+ ''' non-playable character / computerized player class '''
+ def __init__(self):
+ self.mind = NPC_Brain(mff.board.length) # the model
+ self.piece_lower = 0; self.piece_upper = 1 # initial row numbers of the computer's pieces
+ self.row_expanse = 2
+
+ def make_move(self):
+ moved = False
+ req_target = 0.5
+ counter = 1
+ while not moved:
+ if counter % 50 == 0:
+ req_target += log(req_target**(counter%25))
+ print("New target set:", req_target)
+ score_board = temp = self.mind.process(mff.board.table, 0.5) # feeding forward
+ x_coord = score_board['columns'].index(max(score_board['columns'])) # choosing the column the compute likes the most
+ y_coord = score_board['rows'].index(max(score_board['rows'])) % self.row_expanse # a random y coordinate is chosen
+ try:
+ if y_coord < mff.board.length - 1:
+ if mff.board.table[int(y_coord) + 1][int(x_coord)] == 0 and (mff.board.table[int(y_coord)][int(x_coord)] not in [0, mff.player_piece]):
+ mff.board.move_piece((int(x_coord), int(y_coord)), turn = 'computer')
+ self.piece_upper += 1 #increasing the upper limit of the y coordinate by 1
+ moved = True
+ req_target += 0.0001
+ self.row_expanse += 1
+ counter += 1
+ else:
+ raise mff.InvalidMove((x_coord,y_coord))
+ counter += 1
+ except mff.InvalidMove:
+ # trying to avoid the computer's confusion
+ self.mind.back_prop(1/pi, target = req_target) # making the computer learn from its decision
+ req_target -= 0.0001
+ counter += 1
+
+
+
+
+
+
+
+npc = NPC() # creating the NPC
+
+## Sample gamplay
+## The following gameplay will be a bit smooth in the beginning but turns into a confusion later
+all_gone_good = True
+while True:
+ all_gone_good = True
+ # infinite loop here till errors occur
+ player_mv = eval(input("Enter your move:")) # waiting for the player's move
+ try:
+ mff.board.move_piece(player_mv)
+ except mff.InvalidMove:
+ print("Invalid move")
+ all_gone_good = False
+ # next we check if the player's move was valid
+ if all_gone_good:
+ print(mff.board)
+ npc.make_move()
+ print(mff.board)
+
+
+I am sorry that I haven't been able to comment certain regions of the code; you can ask me for clarification there.
+My main doubts are: is my data acquisition method biased? Is the training part also a little bit wacky? Or did I program it all without knowing what I am doing? What's actually causing the infinite loop?
+
+Edit: I have edited TestGameML.py and it's down here:
+from math import *
+from random import *
+import MarchOfTheFinalFour as mff
+
+######################
+##Bug fixes required:
+
+##1. The machine is making multiple moves unknowingly
+
+######################
+
+## Some variables for global use
+
+my_move = (0,0)
+
+## Math functions for our use in here
+
+def multiply(list_a, list_b):
+ '''matrix multiplication and addition'''
+ list_res = [list_a[n] * list_b[n] for n in range(len(list_a))]
+ return fsum(list_res)
+
+def sig(x):
+ '''logistic sigmoid function'''
+ return exp(x)/(1+ exp(x))
+
+##############################
+
+## Neighbourhood search
+
+def neighbourhood(coords, board_length):
+ '''generates the 3 x 3 grid that forms the neighbourhod of the required square'''
+ axial_neighbours = [(coords[0] + 1, coords[1]),(coords[0] - 1, coords[1]),
+ (coords[0], coords[1] + 1), (coords[0], coords[1] - 1)] # neighbours along NEWS directins
+ diagonal_neighbours = [(coords[0] + 1, coords[1]+1),(coords[0] - 1, coords[1] - 1),
+ (coords[0]-1, coords[1] + 1), (coords[0]+1, coords[1] - 1)] #diagonal neighbours
+ neighbours = axial_neighbours + diagonal_neighbours # supposed neighbours
+ ## purging those coordinates with negative values in them:
+ for i in range(len(neighbours)):
+ if (neighbours[i][0] < 0 or neighbours[i][0] > board_length - 1) or (neighbours[i][1] < 0 or neighbours[i][1] > board_length - 1):
+ neighbours[i] = 0
+ while 0 in neighbours:
+ neighbours.remove(0)
+
+ return neighbours
+
+########################
+# The NPC's brain
+
+class NPC_Brain:
+ '''brain of the NPC ;), actually a single-layer perceptron '''
+ def __init__(self,board_size):
+ ''' Initialiser'''
+ self.inputs = board_size # no. of input nodes for the neural network
+ #self.weights = [random() for i in range(self.inputs)] random weights for each game
+ self.weights = [0.5]*self.inputs
+ self.column_scores = [] # column scores (for each column) - the 'liking' of the computer to move a piece in a column as the output
+ # of the neural network's processing
+ self.row_scores = [] #same here
+ self.inputs_template_columns = [] # a container to hold the inputs to the neural network
+ self.inputs_template_rows = [] # same here but for rows
+
+ def process(self, board, threshold):
+ '''forward-feeding'''
+ # we begin by setting the lists to zero so as to make the computer forget the past state of the board and to look for the current state
+ self.inputs_template_columns = []
+ self.inputs_template_rows = []
+ self.column_scores = []
+ self.row_scores = []
+ for column in range(self.inputs):
+ scores = [(1/8)**(row + 1 if row == my_move[1] else 1) if board[row][column] == mff.player_piece else -1/8 for row in range(self.inputs)] # checking for enemies in each column
+ self.inputs_template_columns.append(scores)
+ score = sig(multiply(scores, self.weights)/threshold) # using the logistic sigmoid function to generate a liking for columns :D
+ self.column_scores.append(score) # each column score is appended
+ for row in range(self.inputs):
+ scores = [(1/8)**(i + 1 if i == my_move[0] else 1) if board[row][i] == mff.player_piece else -1/8 for i in range(self.inputs)] # checking for enemies in each column
+ self.inputs_template_rows.append(scores)
+ score = sig(multiply(scores, self.weights)/threshold) # using the logistic sigmoid function to generate a liking for columns :D
+ self.row_scores.append(score) # each column score is appended
+ return {'columns':self.column_scores, 'rows':self.row_scores}
+
+ def back_prop(self, learning_rate, target = 1):
+ '''Back-propagation, with error function as squared-error function (target - error)**2'''
+ for j in range(len(self.inputs_template_columns)):
+ for i in range(self.inputs):
+ '''overfitting can occur, but still let's try this'''
+ self.weights[i] += -learning_rate * 2 * (self.column_scores[j] - target) * ((self.column_scores[j]**2)*(1-self.column_scores[j])) * self.inputs_template_columns[j][i] #backprop formula
+ for k in range(len(self.inputs_template_rows)):
+ for i in range(self.inputs):
+ '''overfitting can occur, but still let's try this'''
+ self.weights[i] += -learning_rate * 2 * (self.row_scores[k] - target) * ((self.row_scores[k]**2)*(1-self.row_scores[k])) * self.inputs_template_rows[k][i] #backprop formula
+
+
+
+
+class NPC:
+ ''' non-playable character / computerized player class '''
+ def __init__(self):
+ self.mind = NPC_Brain(mff.board.length) # the model
+ self.piece_lower = 0; self.piece_upper = 1 # initial row numbers of the computer's pieces
+ self.row_expanse = 2
+
+ def make_move(self):
+ moved = False
+ req_target = 0.5
+ counter = 1
+ print("Thinking...")
+ while not moved:
+ score_board = temp = self.mind.process(mff.board.table, 0.5) # feeding forward
+ x_coord = score_board['columns'].index(min(score_board['columns'])) # choosing the column the compute likes the most
+ y_coord = score_board['rows'].index(max(score_board['rows'])) % self.row_expanse # a random y coordinate is chosen
+ try:
+ if y_coord < mff.board.length - 1:
+ if mff.board.table[int(y_coord) + 1][int(x_coord)] == 0 and (mff.board.table[int(y_coord)][int(x_coord)] not in [0, mff.player_piece]):
+ mff.board.move_piece((int(x_coord), int(y_coord)), turn = 'computer')
+ self.piece_upper += 1 #increasing the upper limit of the y coordinate by 1
+ moved = True
+ self.row_expanse += 1
+ counter += 1
+ else:
+ raise mff.InvalidMove((x_coord,y_coord))
+ counter += 1
+ except mff.InvalidMove:
+ # trying to avoid the computer's confusion
+ self.mind.back_prop(0.5, target = req_target) # making the computer learn from its decision
+ counter += 1
+
+
+
+
+
+
+
+npc = NPC() # creating the NPC
+
+## Sample gamplay
+## The following gameplay will be a bit smooth in the beginning but turns into a confusion later
+all_gone_good = True
+while True:
+ all_gone_good = True
+ # infinite loop here till errors occur
+ player_mv = eval(input("Enter your move:")) # waiting for the player's move
+ try:
+ mff.board.move_piece(player_mv)
+ except mff.InvalidMove:
+ print("Invalid move")
+ all_gone_good = False
+ # next we check if the player's move was valid
+ if all_gone_good:
+ my_move = player_mv
+ print(mff.board)
+ npc.make_move()
+ print(mff.board)
+
+
+Changelog:
+
+- Change data distribution method in lines 71 and 76
+- Asked NPC to choose the column with the least column score and max row score.
+
+"
+"['terminology', 'geometric-deep-learning']"," Title: What is meant by domain in the notations of geometric deep learning?Body: While reading the Notation
of the paper titled Geometric Deep Learning Grids, Groups, Graphs, Geodesics, and Gauges, I came across the following notations.
+$$
+\Omega = \text{ Domain} \\
+ u = \text{Point on domain}
+$$
+I can only understand domain here as a set and point as an element in the set $\Omega$.
+I have a simple doubt here.
+Why do we call the set $\Omega$ as domain? Is it due to the notation related to mathematical function?
+$$\text{function_name} : \text{domain} \rightarrow \text{co_domain}$$
+Or is it due to the usage of the word domain in the sense of application domain: A specified sphere of activity or knowledge. such as computer vision, natural language processing etc?
+or a combination of both?
+"
+"['definitions', 'geometric-deep-learning', 'signal-processing']"," Title: Can I call any function a signal?Body: While reading the Notation
of the paper titled Geometric Deep Learning Grids, Groups, Graphs, Geodesics, and Gauges, I came across the following notations.
+$$
+\Omega = \text{ Domain} \\
+ u = \text{Point on domain} \\
+x(u) \in \mathcal{X}(\Omega, C) = \text{ Signal on the domain of the form } x : \Omega \rightarrow C
+$$
+Mathematically, a signal is just a function.
+But, every function may not be a signal. There may be some distinction between a mathematical function and a signal.
+When can I call any mathematical function a signal? And what is $\mathcal{X}$ in the notation given?
+"
+"['reinforcement-learning', 'deep-learning', 'q-learning', 'state-spaces']"," Title: How to approach a blackjack-like card game with the possibility of cards being counted?Body: Consider a single-player card game which shares many characteristics to "unprofessional" (not being played in casino, refer point 2) Blackjack, i.e.:
+
+- You're playing against a dealer with fixed rules.
+- You have one card deck which is played completely through.
+- etc. An exact description of the game isn't needed for my question, thus I remain with these simple bullet points.
+
+Especially the second point bears an important implication. The more cards that have been seen, the higher your odds of predicting the subsequent card - up to a 100% probability for the last card. Obviously, this rule allows for precise exploitation of said game.
+As far as the action and state space is concerned: The action space is discrete, the player only has a fixed amount of actions (in this case five - due to the missing explanation of the rules, I won't go in-depth about this). Way more important is the state space. In my case, I decided to structure it as follows:
+
+
+
+
+| A | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | J |
+|---|---|---|---|---|---|---|---|---|----|---|
+| 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 14 | 2 |
+
+
+
+
+The first part of my state space describes how many cards of each value are left in the stack; thus, on the first move all 52 cards are still in the deck. This part alone allows for about 7 million possible variations.
+The second part of the state space describes the various decks in the game (once again, without the rules it's hard to explain in detail): basically five integers ranging from 0 to 21, depending on previous actions. Another 200k distinct situations.
+The third part consists of two details - some known cards and information - which only account for a small factor but still bring a considerable amount of variation into my state space.
+Thus a complete state space might look something like this: Example one would be the start setup: 444444444142;00000;00
. Another example in midst of the game: 42431443081;1704520;013
. Semicolons have been added for readability purposes.
+So now the question arises: from my understanding, my state space is definitely finite and discrete, but too big to be solved by SARSA, Q-learning, Monte Carlo or the like. How can I approach projects with a big state space without losing a huge chunk of predictability (which I fear might happen with DQN, DDPG or TD3)? Particularly because only one deck is being used - and it's played through completely in this game - it seems like a more precise solution should be possible.
+"
+"['implementation', 'epochs']"," Title: Does an increase in the number of epochs lead to complete breakdown?Body: Recently, I ran a code on my system that involves deep neural networks. The number of epochs provided by the designers are 301.
+I tried to increase the number of epochs to 501. To my shock, the model after 350 epochs is behaving eccentric. And I can say that they are just returning crazy values.
+What makes such phenomena possible? Is "number of epochs" also a hyperaparameter/ magic number as an upper bound beyond which the model fails?
+"
+"['geometric-deep-learning', 'graph-neural-networks', 'convolutional-layers']"," Title: How do convolutional layers of basic Graph Convolutional Networks work?Body: I was reading the following article on Towards Data Science (here) and it says the following, regarding the calculation of convolutional layers:
+
+So the overall steps are:
+
+- Transform the graph into the spectral domain using eigendecomposition
+- Apply eigendecomposition to the specified kernel
+- Multiply the spectral graph and spectral kernel (like vanilla convolutions)
+- Return results in the original spatial domain (analogous to inverse GFT)
+
+
+Question: How can we visualize the convolutional layer working for a graph neural network?
+For example, for a CNN we can imagine the following (source: Stanford CS231n YouTube lectures, Lecture 5: Convolutional Neural Networks (here)). What is the analogous image for a graph convolutional filter?
+
+"
+"['generative-adversarial-networks', 'generator']"," Title: What is meant by inverting the generator?Body: Generative Adversarial Networks, in general, consists of two multi layer perceptrons: generator and discriminator. Generator is used for generating samples that are as real as training samples and discriminator tries to discriminate the real and fake samples.
+Generator receives noise and generate samples.
+Some papers like this use extra networks
+
+To achieve this, one can train a convolutional network to invert
+$G$ to regress from samples $\hat{x} \leftarrow G(z, \phi(t))$ back
+onto $z$.
+
+And I remember some other papers saying that we use the same generator architecture to invert it.
+What is meant by inverting the generator of a GAN? Does inverting mean passing samples and getting noise as output, using the same or a different architecture?
+"
+"['papers', 'geometric-deep-learning', 'graph-neural-networks', 'spectral-analysis']"," Title: How does Chebyshev approximation of spectral convolution work?Body: I was reading the following paper: here. In it, it talks about spectral graph convolutions and says:
+
+We consider spectral convolutions on graphs defined as the
+multiplication of a signal $x \in R^N$ (a scalar for every node)
+with a filter $g_{\theta}$ $=$ $\text{diag} (\theta)$ parameterized
+by $\theta \in R^{N}$ in the Fourier domain, i.e.: $ g_{\theta} * x = U g_{\theta} U^Tx $. We can understand $ g_{\theta}$ as a function of the eigenvalues of $L$, i.e. $g_{\theta}(\Lambda)$
+
+So far, it makes sense. $U^T x$ is the graph Fourier transform of the signal $x$, then we multiply by $ g_{\theta}$ in the Fourier domain as: $FT(f * g) = F(\omega)G(\omega)$. Then we have the multiplication by $U$ in the front to represent the inverse (graph) Fourier transform.
+Then the paper lists some reasons why using the above convolution equation may not be practical in reality:
+
+- Evaluating the above equation is computationally expensive; multiplying with eigenvector matrix $U$ is $O(N^2)$
+- Computing eigendecomposition of $L$ may be too expensive for an arbitrarily large graph
+- etc.
+
+and then the paper says:
+
+To circumvent this problem, it was suggested in Hammond et al. (2011)
+that $g_{\theta}(\Lambda)$ can be well-approximated by a truncated
+expansion in terms of Chebyshev polynomials $T_k (x)$ up to
+$K^{\text{th}}$ order: $$ g_{\theta'}(\Lambda) \approx \sum_{k=0}^{K} \theta_k' T_k(\tilde{\Lambda}) $$
+with a rescaled $\tilde{\Lambda} = \frac{2}{\lambda_{\text{max}}}\Lambda − I_N$. $\lambda_{\text{max}}$
+denotes the largest eigenvalue of $L$. $\theta ′ \in R^K$ is now a
+vector of Chebyshev coefficients. The Chebyshev polynomials are
+recursively defined as $T_k(x) = 2xT_{k−1}(x) − T_{k−2}(x)$, with
+$T_0(x) = 1$ and $T_1(x) = x$. The reader is referred to Hammond et al. (2011) for an in-depth discussion of this approximation. Going
+back to our definition of a convolution of a signal $x$ with a filter
+$g_{\theta '}$, we now have: $$ g_{\theta '} * x \approx \sum_{k=0}^{K} \theta_k ′ T_k (\tilde{L}) x$$ with $\tilde{L} = \frac{2}{\lambda_{\text{max}}}L − I_N$ ; as can easily be verified by
+noticing that $(U \Lambda U^T)^k = U \Lambda^k U^T $
+
+Question: What happened to the terms $U^T$ and $U$ which take the (graph) Fourier transform and invert it respectively?
+Attempt: Does it have something to do with what is mentioned in the last line about noticing that $(U \Lambda U^T)^k = U \Lambda^k U^T $? I might guess that we use that because a k-th order Chebyshev polynomial will have $\Lambda ^k$ (and lower powers) present in the equation and thus the $U^T$ and $U$ mean that we can write the convolution equation in terms of the Laplacian matrix $L$
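+To show how I read the approximation, here is a toy sketch in code; the graph, the signal and the coefficients are made up, and the only point is that the recursion is applied directly to $\tilde{L}$ rather than to $\tilde{\Lambda}$:
+import numpy as np
+
+A = np.array([[0., 1., 0.],
+              [1., 0., 1.],
+              [0., 1., 0.]])                        # adjacency of a 3-node path graph
+L = np.diag(A.sum(axis=1)) - A                      # (unnormalised) graph Laplacian
+lam_max = np.linalg.eigvalsh(L).max()               # only the largest eigenvalue is needed
+L_tilde = (2.0 / lam_max) * L - np.eye(3)
+
+x = np.array([1.0, 0.0, -1.0])                      # a signal, one scalar per node
+theta = [0.5, -0.2, 0.1]                            # K = 2 -> three Chebyshev coefficients
+
+Tx = [x, L_tilde @ x]                               # T_0(L~) x and T_1(L~) x
+for k in range(2, len(theta)):
+    Tx.append(2 * L_tilde @ Tx[k - 1] - Tx[k - 2])  # T_k(L~) x via the recursion, no full eigendecomposition
+
+filtered = sum(t * v for t, v in zip(theta, Tx))
+print(filtered)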
+"
+"['generative-adversarial-networks', 'generator']"," Title: Is it impossible to evaluate the generator distribution directly?Body: The following excerpt is taken from 3. The Inception Score for Image Generation
from the paper titled A Note on the Inception Score.
+
+Suppose we are trying to evaluate a trained generative model $G$ that
+encodes a distribution $p_g$ over images $\hat{x}$. We can sample from
+$p_g$ as many times as we would like, but do not assume that we can
+directly evaluate $p_g$. The Inception Score is one way to evaluate
+such a model.
+
+The excerpt says that we are not directly evaluating $p_g$, the generator distribution, but trying to evaluate the model $G$.
+Does the excerpt intend to say that it is practically impossible to evaluate $p_g$?
+"
+"['convolutional-neural-networks', 'training']"," Title: Validation accuracy less than training accuracy (with no sigh of overtraining)Body: I am working with a deep CNN with over 100k sample data. I divided it up into 75% training, 12.5% validation and 12.5% for testing. As I train my network, the training accuracy approaches near 100% accuracy. The validation accuracy approaches 70-90% accuracy. The validation accuracy is always increasing and never decreases so I do not believe that the network is over-training.
+The test accuracy is similar to the validation accuracy, but both are less than the training accuracy.
+My question is, what is causing my validation/test accuracy to trail the training accuracy? Is it because my validation/test sets contain sample types which are not found in the training set? What else might be causing this?
+Additionally, between epochs, I see this 'stair case' in learning in that I see a huge jump in accuracy as soon as a new epoch starts. I am shuffling my data between epochs. What might be causing this jump in accuracy?
+Also, if there are more technical terms for the events that I am describing please let me know so that I can further research these.
+Thank you!
+blue = training,
+black = validation
+
+"
+"['reinforcement-learning', 'deep-rl', 'actor-critic-methods', 'ddpg']"," Title: Setting initial values in DDPG to favor better actionsBody: I'm working on a problem using DDPG.
+Is it possible to add some intelligence in the initialization phase, such that the convergence time is improved/shortened and local optima are avoided as much as possible?
+For example, this may include assigning (higher) probabilities to (better) actions (in the action selection algorithm) at the start of an episode. This hopefully leads to the agent discovering and selecting "better" actions faster, rather than starting from more random ones. Or this won't work since the neural networks will just unlearn these initial values during the training process?
+Also, with the above description, am I better off using Soft Actor-Critic?
+"
+"['meta-learning', 'model-agnostic-meta-learning']"," Title: How many tasks are needed for meta-learning?Body: This is an empirical question, essentially how many tasks do you need data for, to make a useful meta learning model (e.g. using MAML)? I'm looking for ranges based on personal experience or if anyone has done research on the topic and you know of references for the estimates that would be helpful as well.
+For context I'm trying to work with about 5-7 tasks. I saw a person implement meta-learning with about this many in the paper Multi-MAML. But I've since seen example code in the learn2learn library which uses thousands of tasks...
+P.S. I'm not sure if different parameterizations of a single task definition are still 'one task' (e.g. y=a*cos(x), where 'a' varies). Could that account for the discrepancy?
+"
+"['deep-learning', 'deep-rl']"," Title: Multi Agent Deep Reinforcement Learning for continuous and discrete actionBody: I am looking to have a cooperative multi agent reinforcement learning framework where one agent has a discrete action space and another agent has a continuous action space. Is there a way to do this as most papers I have seen will only handle one or the other.
+"
+"['deep-learning', 'classification', 'pytorch', 'softmax']"," Title: Is it normal that the values of the LogSoftmax function are very large negative numbers?Body: I have trained a classification network with PyTorch lightning where my training step looks like below:
+def training_step(self, batch, batch_idx):
+ x, y = batch
+ y_hat = self(x)
+ loss = F.cross_entropy(y_hat, y)
+ self.log("train_loss", loss, on_epoch=True)
+    return loss
+
+
+When I look at the output logits, almost all of them are very large negative numbers, with one that is usually 0. Is this normal, or might something be wrong with my training?
+I am just using nn.LogSoftmax()
 on the outputs and taking the max to make my predictions, but my network is not doing so well when I run it on unseen data, and I want to make sure the problem is just me overfitting.
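+To be explicit about what I mean by "taking the max", this is roughly what I do at prediction time (the logits below are a made-up example):
+import torch
+import torch.nn as nn
+
+logits = torch.tensor([[12.3, -40.1, -55.7]])   # hypothetical raw outputs for one sample
+log_probs = nn.LogSoftmax(dim=1)(logits)        # large negative values for unlikely classes
+preds = log_probs.argmax(dim=1)                 # same argmax as on the raw logits
+print(log_probs, preds)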
+"
+"['geometric-deep-learning', 'graph-neural-networks', 'spectral-analysis']"," Title: How does graph Fourier transform work when multiple signals present on each node?Body: Context: I was reading the following set of notes (page 83): here and it says:
+
+Thus, the Fourier transform of signal (or function) $ \mathbf{f} \in R^{|V|} $ on a
+graph can be computed as $$ \mathbf{s} = \mathbf{U}^T \mathbf{f} $$
+
+Question: What happens if each node has multiple 'signals'? Are the Fourier transforms on each signal independent of one another?
+Attempt: I assume that each signal is denoted as a column vector, and thus multiple signals may be written as a matrix $\mathbf{F} = [\mathbf{f_1}, \mathbf{f_2}, ..., \mathbf{f_n}]$ (where $ \mathbf{f_i} \in R^{|V|} $) for a graph with $n$ signals. Thus, the graph Fourier transform would be $$ \mathbf{S} = \mathbf{U}^T \mathbf{F} $$ and thus each of the Fourier transforms would be independent of one another. Is this the correct way to think about this.
+Many thanks in advance!
+"
+"['reinforcement-learning', 'reward-functions', 'bellman-equations', 'dynamic-programming']"," Title: How are these two versions of the Bellman optimality equation related?Body: I saw two versions of the optimality equation for $V_{*}(s)$ and $Q_{*}(s,a)$.
+The first one is:
+$$
+V_{*}(s)=\max _{a} \sum_{s^{\prime}} P_{s s^{\prime}}^{a}\left(r(s, a)+\gamma V_{*}\left(s^{\prime}\right)\right)
+$$
+and
+$$
+Q_{*}(s, a)=\sum_{s^{\prime}} P_{s s^{\prime}}^{a}\left(r(s, a)+\gamma \max _{a^{\prime}} Q_{*}\left(s^{\prime}, a^{\prime}\right)\right)
+$$
+The second one is:
+$$
+V_{*}(s)=\max _{a \in \mathcal{A}}\left(R(s, a)+\gamma \sum_{s^{\prime} \in \mathcal{S}} P_{s s^{\prime}}^{a} V_{*}\left(s^{\prime}\right)\right)
+$$
+and for $Q_*$
+$$
+Q_{*}(s, a)=R(s, a)+\gamma \sum_{s^{\prime} \in \mathcal{S}} P_{s s^{\prime}}^{a} \max _{a^{\prime} \in \mathcal{A}} Q_{*}\left(s^{\prime}, a^{\prime}\right)
+$$
+If I follow the distributive property to get from the first to the second expression, why is there no summation term for the reward? For example, $$V_{*}(s) = \max_{a}(\sum_{s'}P^{a}_{ss'}r(s,a)+\gamma\sum_{s'}P^{a}_{ss'}V_{*}(s'))$$?
+My guess is that $r(s,a)$ is a constant with respect to $s'$, so it can be moved out of the summation, leaving $$r(s,a)\sum_{s'}P^{a}_{ss'} = r(s,a).$$
+But is it always the case that $r(s,a)$ is independent of $s'$? I think the reward of moving from state $s$ to $s'$ may vary.
+"
+"['natural-language-processing', 'terminology', 'word-embedding']"," Title: Can I always use ""encoding"" and ""embedding"" interchangeably?Body: This question is restricted to the text domain only.
+The meaning of the word "encode" is Convert (information or instruction) into a particular form. One which performs encoding is called an encoder.
+In deep learning, an encoder can also be the first part of a neural network (autoencoder) that simulates identity function, which governs the English meaning of encoder since it encodes the input.
+Embeddings are encodings where the intention is to preserve semantics. You can observe the following excerpt from the chapter Vector Semantics and Embeddings
+
+In this chapter we introduce vector semantics, which instantiates this
+linguistic hypothesis by learning representations of the meaning of
+words, called embeddings, directly from their distributions in texts.
+
+But not all encodings may be embeddings, since encodings might not always preserve semantics (?). I have doubts about this statement, which I inferred based on my current knowledge.
+Many times, I have come across the terms text encoding and text embedding used interchangeably, but I fail to grasp whether they are the same or whether we need to be careful when using them.
+Consider the following usages of encoding and embedding in the paper titled Generative Adversarial Text to Image Synthesis by Scott Reed et al.
+
+#1: The intuition here is that a text encoding should have a higher compatibility score with images of the corresponding class compared to any other class and vice-versa.
+#2: Text encoding $\phi(t)$ is used by both generator and discriminator.
+#3: ...where $T$ is the dimension of the text description embedding.
+#4: ... we encode the text query $t$ using text encoder $\phi$. The description embedding $\phi(t)$ is first compressed ...
+
+I think they are used interchangeably. Is that true? Can I use either word if I am confident that my encoding is semantics-preserving? Or is there any strong reason for choosing between the words?
+If you observe the last point, the word "encoder" is used. Can I use embedder instead of it?
+"
+"['reinforcement-learning', 'markov-decision-process']"," Title: Calculating state-value functions in Markov Decision ProcessBody: I am watching David Silver's lectures on RL available on YouTube. My question here is with regard to Lecture 2 (Link to Video). At 1:11:00, I could not understand how he is calculating the state-value functions for C1, C2 and C3 (nodes with values 6, 8 and 10 respectively) in the student MDP example, starting from C3 and working backwards. Can someone please explain this?
+"
+"['deep-learning', 'comparison', 'activation-functions', 'relu', 'prelu']"," Title: Why should one ever use ReLU instead of PReLU?Body: To me, it seems that PReLU is strictly better than ReLU. It does not have the dying ReLU problem, it allows negative values and it has trainable parameters (which are computationally negligible to adjust). Only if we want the network to output positive values it makes sense to use it in the output layer. Other than that, I don't see why a priori I would decide to choose ReLU over PReLU. However, most architectures I came across use ReLU activations. Why? Am I missing something?
+"
+"['reinforcement-learning', 'deep-rl', 'ddpg', 'tanh']"," Title: Could we add clipping in the output layer of the actor in DDPG?Body: I have a doubt about how clipping affects the training of the RL agents.
+In particular, I have come across a code for training DDPG agents, the pseudo-code is the following:
+1 for i in training iterations
+2 action = clip(ddpg.prediction(state) * a + b, x, y)
+3 state, reward = environment(action)
+4 store action, state and reward
+5 if the number of experiences is larger than L:
+6 update the parameters of the agent
+
+In this case, the actor NN of the DDPG agent has a $\tanh$ activation in the output layer.
+My question is, could we add the clipping to the output layer of the actor (replacing $\tanh(x)$ with $\operatorname{clip}(a\cdot \tanh(x)+b, x, y)$) instead of doing it in the training loop? Would the training work in that case?
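+To make it concrete, here is a minimal sketch of the actor head I have in mind; a, b and the clip bounds are placeholders standing in for the constants used in the training loop above:
+import torch
+import torch.nn as nn
+
+class ActorHead(nn.Module):
+    def __init__(self, in_dim, act_dim, a=2.0, b=1.0, low=0.0, high=3.0):
+        super().__init__()
+        self.fc = nn.Linear(in_dim, act_dim)
+        self.a, self.b, self.low, self.high = a, b, low, high
+
+    def forward(self, x):
+        # clip(a * tanh(.) + b, low, high) baked into the network output
+        return torch.clamp(self.a * torch.tanh(self.fc(x)) + self.b, self.low, self.high)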
+"
+"['reinforcement-learning', 'deep-rl', 'rewards', 'reward-functions', 'tic-tac-toe']"," Title: How are rewards calculated for episodic tasks like playing chess or tic-tac-toe?Body: I am new to Reinforcement Learning and trying to understand the concept of reaping rewards during episodic tasks. I think in games like tic-tac-toe, rewards will be in terms of a win or lose. But does that mean we need to finish the entire game to gain the reward? I mean reward will make sense only if three of the tokens are in one line. Each game of tic-tac-toe will be different as the sequence of actions followed will be different. So does reward come into the picture only after completing the game? And what if the game is a draw?
+"
+"['reinforcement-learning', 'deep-rl', 'pytorch', 'actor-critic-methods', 'optimizers']"," Title: Joined vs Separate optimizer for Actor-CriticBody: Say that I have a simple Actor-Critic architecture, (I am not familiar with Tensorflow, but) in Pytorch we need to specify the parameters when defining an optimizer (SGD, Adam, etc) and therefore we can define 2 separate optimizers for the Actor and the Critic and the backward process will be
+actor_loss.backward()
+actor_optimizer.step()
+critic_loss.backward()
+critic_optimizer.step()
+
+or we can use a single optimizer for both the Actor's and the Critic's parameters so the backward process can be like
+loss = actor_loss + critic_loss
+loss.backward()
+optimizer.step()
+
+I have 2 questions regarding both approaches:
+
+- Is there any consideration (pros, cons) for both the single joined optimizer and the separate optimizer approach?
+
+- If I want to save the best Agent (Actor and Critic) periodically (based on a predefined testing environment), do I always have to update the Critic, regardless of the current Agent's performance? Because (CMIIW) the Critic is (in its most basic purpose) only for predicting the action-value or state-value thus a more trained Critic is better.
+
+
+"
+"['minimax', 'hill-climbing', 'heuristic-functions', 'evaluation-functions', 'connect-four']"," Title: Which heuristic function should I use for the ColorShapeLinks game?Body: For learning purposes, I am trying to implement the minimax algorithm for the ColorShapeLinks game, which is similar to connect 4, except the fact that it combines both shape and color as the winning conditions, with a shape having priority over color. A color is associated with only one shape, so there can only be X blue or O red.
+I have previously applied the minimax to TicTacToe, but I think that's a lot simpler than this one.
+My question is: which heuristic function could be used to evaluate the states for this game?
+I'm thinking of checking each window of the board where the new piece is placed, comparing the differences in shapes and colors, and summing predetermined values for each (all heuristic), as sketched below. However, I think that's a little bit too simple, right?
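+To make that idea concrete, here is a rough sketch of the window-scoring heuristic I have in mind; the board encoding and the weights are arbitrary placeholders:
+# board cells are either None or a (shape, colour) tuple; a "window" is any 4 cells in a line
+SHAPE_MATCH, COLOUR_MATCH, OPPONENT_PENALTY = 4, 1, 5   # placeholder weights
+
+def score_window(window, my_shape, my_colour):
+    score = 0
+    for cell in window:
+        if cell is None:
+            continue
+        shape, colour = cell
+        if shape == my_shape:
+            score += SHAPE_MATCH          # shape matters more than colour
+        elif colour == my_colour:
+            score += COLOUR_MATCH
+        else:
+            score -= OPPONENT_PENALTY
+    return score
+
+def evaluate(windows, my_shape, my_colour):
+    # sum the scores of every 4-cell window on the board
+    return sum(score_window(w, my_shape, my_colour) for w in windows)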
+Also, is there any chance that the AI can be made with Local Search algorithms, like Hill Climbing, Annealing, or GA? I think we can't, since the state configurations are not complete, right? If there are any additional reasons for this, please kindly guide me through, I am pretty new in this research :D
+"
+"['datasets', 'probability-distribution', 'iid']"," Title: How can we ""draw i.i.d"" from any probability distribution?Body: Consider the following paragraph from 2 Learning in High Dimensions
in from of the paper titled Geometric Deep Learning Grids, Groups, Graphs, Geodesics, and Gauges
+
+Supervised machine learning, in its simplest formalisation, considers
+a set of $N$ observations $D = \{(x_i, y_i)\}_{i=1}^{N}$ drawn
+i.i.d. from an underlying data distribution $P$ defined over $\mathcal{X} \times \mathcal{Y}$, where $\mathcal{X}$ and
+$\mathcal{Y}$ are respectively the data and the label domains. The
+defining feature in this setup is that $\mathcal{X}$ is a
+high-dimensional space: one typically assumes $\mathcal{X} =
+ \mathbb{R}^d$ to be a Euclidean space of large dimension $d$.
+
+Here, it is mentioned that $N$ observations are drawn i.i.d from probability distribution $P$, which is defined over $\mathcal{X} \times \mathcal{Y}$.
+My doubt is: how can we draw i.i.d. samples from a probability distribution if the distribution itself is not an i.i.d. distribution? The only $P$ I know to be i.i.d. is the following
+$$p(x_i) = \dfrac{1}{|\mathcal{X} \times \mathcal{Y}|} \text{ for } x_i \in \mathcal{X} \times \mathcal{Y} \text{ and } 1 \le i \le |\mathcal{X} \times \mathcal{Y}|$$
+To put it simply, a dataset with all possible $256 \times 256 \times 3$ images is i.i.d., but a dataset of only dog images is not.
+As far as I know, not every distribution is an i.i.d. distribution. Then, without knowing anything about the distribution, how can we draw i.i.d. samples?
+"
+"['terminology', 'datasets', 'probability-distribution', 'random-variable', 'iid']"," Title: Which of the following probability distribution is generating an iid dataset?Body: Let $X_1, X_2$ be two discrete random variables. Each random variable takes two values: $1, 2$
+The probability distribution $p_1$ over $X_1, X_2$ is given by
+$$p_1(X_1=1, X_2 = 1) = \dfrac{1}{4}$$
+$$p_1(X_1=1, X_2 = 2) = \dfrac{1}{4}$$
+$$p_1(X_1=2, X_2 = 1) = \dfrac{1}{4}$$
+$$p_1(X_1=2, X_2 = 2) = \dfrac{1}{4}$$
+The probability distribution $p_2$ over $X_1, X_2$ is given by
+$$p_2(X_1=1, X_2 = 1) = \dfrac{8}{16}$$
+$$p_2(X_1=1, X_2 = 2) = \dfrac{4}{16}$$
+$$p_2(X_1=2, X_2 = 1) = \dfrac{3}{16}$$
+$$p_2(X_1=2, X_2 = 2) = \dfrac{1}{16}$$
+Suppose $D_1, D_2$ are the datasets generated by $p_1, p_2$ respectively.
+Then which dataset can I call iid? I am guessing $D_1$, since we can prove that its random variables are independent and identically distributed, whereas for $D_2$ the iid property does not hold.
+
+$\underline{\text{ For }D_1}$
+Identical : $p_1(X_1 = x_1) = p_1(X_2 = x_2)= \dfrac{1}{2} \text{ where } x_1 = x_2 \in \{1, 2\}$
+Independent: $p_1(X_1 = x_1,X_2 = x_2) = \dfrac{1}{4} = p_1(X_1 = x_1) p_1(X_2 = x_2) \text{ for } x_1, x_2 \in \{1, 2\}$
+
+We can show that random variables $X_1, X_2$ are not iid if we consider $p_2$.
+Is the iid I am discussing different from making a dataset iid, as answered here? If not, where am I going wrong?
+"
+"['probability-distribution', 'random-variable', 'iid']"," Title: Is knowing underlying probability distribution mandatory for deciding iid property of random variables?Body: Consider the following information regarding iid random variables
+
+The acronym IID stands for "Independent and Identically Distributed".
+A sequence of random variables (or random vectors) is IID if and only
+if the following two conditions are satisfied:
+
+- the terms of the sequence are mutually independent;
+
+- they all have the same probability distribution.
+
+
+
+Definition:
+
+Let $\{\mathcal{X}_n\}$ be a sequence of random vectors. Let
+$F_{\mathcal{X}_n}{(x_n)}$ be the joint distribution function of a
+generic term of the sequence $\{\mathcal{X}_n\}$. We say that
+$\{\mathcal{X}_n\}$ is an IID sequence if and only if
+$$F_{\mathcal{X}_n}{(x)} = F_{\mathcal{X}_k}{(x)} \forall x, n, k $$
+and any subset of terms of the sequence is a set of mutually
+independent random vectors.
+
+Thus,
+
+- iid is a property for a sequence of random variables.
+- A joint probability distribution function is necessary to validate whether a sequence of random variables is iid or not.
+
+Thus, the iid property of a sequence of random variables, from 2, depends entirely on the underlying joint probability distribution function. Am I wrong anywhere?
+If I am wrong, is there any other iid property of random variables that does not depend on the underlying probability distribution function?
+"
+"['meta-learning', 'model-agnostic-meta-learning']"," Title: What are practical methods to acquire a large number of tasks for Meta-learning?Body: It appears that it may be necessary to acquire a very large number of tasks for meta-learning , because MAML for example says that each task is analogous to a single training example in regular learning.
+This is slightly confusing to me because it appears that outside of automated techniques like N-way classification where you randomly sub-select classes (& training examples) to include in a given task.
+Adding tasks seems to be quite laborious right? I mean it sounds like you would need to get a different micro dataset for each task? And if meta-learning needs so many tasks then how do you satisfy that need?
+"
+"['neural-networks', 'regularization']"," Title: What does it mean when accuracy of regularized model is higher for training set than for validation set?Body: Accuracy of my regularized model is higher for training set than for validation set.
+
+The situation improves when the regularization coefficient is reduced:
+
+What does this really imply?
+From my understanding, this seems to suggest that regularization is actually resulting in the model overfitting training set, which is the opposite of the intended outcome
+"
+"['deep-learning', 'image-recognition', 'pytorch']"," Title: How to increase accuracy of image orientation classification (Left, Right, Center)?Body: I am working on classifying images in "Left", "Right", "Center", "Back". Training and Validation images look like this:
+
+
+
+The images are "Left", "Right", and "Center". I am following Pytorch transfer learning tutorial with Resnet50 architecture and have not changed anything.
+The transformations I am using is as follows.
+data_transforms = {"train" : A.Compose([
+ A.Resize(256,256),
+ # A.ShiftScaleRotate(shift_limit=0.05, scale_limit=0.05, rotate_limit=15, p=0.5),
+ # A.RandomCrop(height=128, width=128),
+ A.RGBShift(r_shift_limit=15, g_shift_limit=15, b_shift_limit=15, p=0.5),
+ A.RandomBrightnessContrast(p=0.5),
+ A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
+ ToTensorV2(),]),
+ "val": A.Compose([
+ A.Resize(256,256),
+ # A.CenterCrop(height=128, width=128),
+ A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
+ ToTensorV2(),])
+ }
+
+Initially I had back images too, but they were getting misclassified as center a lot, with a validation accuracy of 65%.
+Without back images I got 74% accuracy with the labels left, right and center.
+When I modified it to classify only center vs. not-center, with only the three types of images "left", "right" and "center", I achieved 91% validation accuracy.
+I am looking for ways to increase the accuracy of classifying images into left, right and center.
+"
+"['neural-networks', 'pattern-recognition']"," Title: How to build neural network that detects changed signal firing pattern and is trained on positive patterns only?Body: Let's have a set of n devices firing signals. Devices are firing in the same cycles, but each device can fire in different phase of the cycle. More, the exact firing point can fluctuate, for example 20%
+off the phase to both sides.
+
+To me, these repeated firings look like some kind of time-based pattern.
+The main goal is to detect whether some device changed its phase, so that the timings in the pattern change.
+I generated a dataset with a row for each second. Each row contains the actual time and another n-1 columns representing the difference between the firing time of each device and that of the first device.
+Example: let's have 4 devices - D0 to D3. t1 means difference of firing times between D1 and D0.
+time t1 t2 t3
+13:00:00 5 2 7
+
+That means that D1 fired 5s after D0, D2 fired 5+2s after D0 and D3 fired 5+2+7s after D0.
+Also, there can be negative numbers due to the allowed fluctuation - D2 could have fired earlier than D0.
+To an amateur like me, it seems that these numbers create patterns that some kind of NN should be able to learn while filtering out the noise.
+My questions:
+
+- Which kind of network should I use? I tried a simple feed-forward network for classification, but since I am generating the dataset, I have no negative data, and so I am unable to categorize it.
+- Is this kind of dataset a good approach, or should I use other properties?
+
+"
+"['terminology', 'recurrent-neural-networks', 'long-short-term-memory']"," Title: Is LSTM a subcategory of RNN?Body: Is the LSTM-Architecture a subcategory of RNNs? Or are they totally different?
+The literature doesn't seem to be consistent on this.
+This figure appears to present the models as alternatives, but I thought of them otherwise (LSTM as a subcategory of RNN).
+LSTM as a subcategory of RNN is mentioned in the Wikipedia article on LSTMs:
+
+Long short-term memory (LSTM) is an artificial recurrent neural
+network (RNN) architecture...
+
+"
+"['deep-learning', 'tensorflow', 'early-stopping']"," Title: What is better to use: early stopping, model checkpoint or both?Body: I want to get a model which works best, what should I go for while training the model, ModelCheckpoint, EarlyStopping, or both?
+"
+"['neural-networks', 'machine-learning', 'backpropagation']"," Title: Discrepancy of backpropagation formula between Andrew Ngs ML Course and those derived by neuralnetworksanddeeplearning.comBody: I'm currently working through Week 5 of Andrew Ngs Machine Learning course on Coursera, which goes through the backprop algorithm for basic neural networks. Whilst trying to derive the formulae he gave in the lectures, I noticed that the formula for $\delta^L$, "error" of last activation layer, is slightly different to that derived in http://neuralnetworksanddeeplearning.com/chap2.html.
+In Andrew's, it seems like there is no inclusion of the partial derivative da/dz, or $\sigma'(z)$, only the dC/da part.
+
+However, Michael Nielsen does include that term:
+
+Is this difference significant, and why does it arise? Is it because the derivation Nielsen goes through defines the cost using the mean squared error, whereas Andrew Ng defines the cost using the $-y\log(h(x))\ldots$ one? Also, will Nielsen's equations score full marks on Ng's assignment?
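+For reference, the two expressions as I understand them (please correct me if I have transcribed them wrong): Nielsen derives the general form
+$$\delta^L = \nabla_a C \odot \sigma'(z^L),$$
+whereas, for the sigmoid output with the cross-entropy cost used in Ng's course, the $\sigma'(z^L)$ factor cancels and the lecture slide effectively states
+$$\delta^{(L)} = a^{(L)} - y.$$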
+Thank you for reading.
+"
+"['machine-learning', 'terminology', 'datasets', 'random-variable', 'iid']"," Title: What are the different possible usages of the word ""i.i.d"" in machine learning?Body: The acronym "iid" stands for "independent and identically distributed". It is a property of a sequence of random variables. You can read here for more details. This question is just about the usage of the word "iid" in contemporary machine learning and is not about the feasibility of checking iid based on either associated joint distribution or dataset.
+In the formal and strict sense, the word "iid" should be used only as a property for a sequence of random variables based on the underlying joint probability distribution function. But, I noticed that there is another (maybe less-strict) usage for the word 'iid' based on the context.
+Consider the following statements compiled from different answers to my questions 1,2
+From this answer
+
+The term i.i.d. is a property of a dataset. A dataset can be created
+that is i.i.d. with respect to a particular probability distribution.
+It doesn't matter what that distribution is, it just has to exist, and
+be relevant to the purpose the ML is being put to.
+
+From this answer
+
+The point is even you know the distribution, sometimes you can't prove
+that the sampled data is i.i.d. or not!....
+
+From this answer
+
+....A table of results of dice throws is likely iid...... (there are some issues with this answer, but the bolded excerpt is true)
+
+So, the usage of the word iid, in this sense, is somewhat different. Although I think, iid is a property of a sequence of random variables in this sense also, it is okay to use the word 'iid' for a dataset (collection of samples) since the dataset represents some underlying probability distribution.
+Thus, the two usages I am aware of up to now are
+
+- iid for a sequence of random variables based on joint distribution.
+
+- iid for a sequence of random variables based on the collection of samples.
+
+
+Is my understanding of the two usages of the word "iid" correct? and are there any other usages for the word "iid"?
+"
+"['neural-networks', 'machine-learning', 'vectors']"," Title: How do I prepare this 3D data for NN?Body: How do I prepare the info of 3D models to use with NN? For example, I have thousands of models with boxes similar to the ones in the image below. I can extract the vertices and their normals that make up the faces of these boxes. Similarly, I would like to prepare the info of the red-shaded surfaces, again I have their vertices and their normals. For future studies, I will have more complex shapes such as cylinders, pyramids,...etc. What would be the best way to represent these complex shapes for NN?
+Update: These boxes don't stay in the same position, see the second image I added. I will have different geometric models and different red-shaded areas on the surfaces of these objects. The NN output would be a number for each surface of these boxes/objects. The number represents the surface temperature. The input would be the following:
+1- Some climate information such as air temperature, humidity, ...etc.
+2- The location and size of the buildings that are represented as boxes (or maybe other shapes).
+3- The size and the location of the red-shaded areas (the red-shaded areas represent the shadow cast by buildings).
+4- The material of each surface (concrete, brick, ...etc.).
+
+
+"
+"['bert', 'fine-tuning']"," Title: How to fine-tune a model which was pre-trained on a corpus that contains words with different meanings than the meanings of those words on my corpus?Body: I have a scenario in which we should leverage previously asked questions (not questions pairs, single question in a column) to locate similar questions within those questions.
+How can I fine-tune my model to handle out-of-vocabulary terms, given that my data consists of domain-specific questions (3300 questions)?
+Right now, I'm using Hugging Face sentence transformers, which are already pre-trained on huge amounts of data.
+For example, BERT knows that gold is a metal, but, in our domain corpus, it's a platform. We have some terminology that was not publicly exposed; how can I fine-tune the model to retrieve related sentences (handling OOV)?
+"
+"['natural-language-processing', 'python']"," Title: Identify whether two companies are the sameBody: I am trying to solve a problem where I need to map multiple variations of a company name to a single name. For example: say I have a company named Super Idea Corporation Limited
.
+I need to resolve the following to Super Idea Corporation Limited
+
+- SICL
+- Super Idea Corp Ltd
+- SIC Ltd
+- SIC Limited
+
+Is there a non regex way of doing this? The reason I am averse to using regex is that there are a lot of business names that can be represented in many different ways. I want something that is more flexible and adaptive.
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'long-short-term-memory', 'statistical-ai']"," Title: Machine learning with raw data alone / or raw data with its statisticsBody: My question is very general and it does not originate from a specific problem. Let's assume that, through experience, we have learned that some statistical property of a set of data is important in predicting some behavior in a system. For example, for time series data d1, d2, d3, ..., dn, we heuristically know that the average of the last n steps denoted by avg(d,n) and the standard deviation of the last m denoted by std(d,m) are significant in prediction. Now my questions are:
+
+- Should a machine learning system, let's say an LSTM, or a reinforcement learning agent, be fed the raw data alone or the raw data together with other statistical properties? I am asking this because, if the statistical derivatives are useful in training, then there is no limit on how many statistical properties we can define and feed to the training process (see the small sketch below, after this list, for the kind of derived features I mean).
+
+- Does a machine learning model, again let's say an LSTM, automatically learn the underlying statistics from just the pure raw data?
+
+- How do we deal with data of different scales and dimensions? For example, a simple average is on the same scale as the raw data, but the standard deviation has a different scale and dimension, and so on and so forth.
+
+
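+To make the first question concrete, this is the kind of derived feature I have in mind (a small sketch with pandas; the window sizes and the raw_series variable are placeholders of mine, not part of any real pipeline):
+import pandas as pd
+
+df = pd.DataFrame({"d": raw_series})              # raw_series: the time series d1, ..., dn
+df["avg_n"] = df["d"].rolling(window=10).mean()   # avg(d, 10)
+df["std_m"] = df["d"].rolling(window=20).std()    # std(d, 20)
+
+# question 3: put everything on a comparable scale before feeding it to the model
+features = (df - df.mean()) / df.std()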
+I appreciate your comments.
+"
+"['machine-learning', 'natural-language-processing', 'papers', 'transformer']"," Title: Why do the authors of the T5 paper say that the ""architectural changes are orthogonal to the experimental factors""?Body: Here's a quote from the T5 paper
(T5 stands for "Text-to-Text Transfer Transformer") titled Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel et al.:
+
+To summarize, our model is roughly equivalent to the original
+Transformer proposed by Vaswani et al. (2017) with the exception of
+removing the Layer Norm bias, placing the layer normalization outside
+the residual path, and using a different position embedding scheme.
+Since these architectural changes are orthogonal to the experimental
+factors we consider in our empirical survey of transfer learning, we
+leave the ablation of their impact for future work.
+
+What exactly does 'orthogonal' mean in this context? Also, is it just me or have I seen the word used in a similar way before, but can't remember where?
+"
+"['neural-networks', 'training', 'kalman-filter']"," Title: What's the benefit for using a Kalman filter for training a neural network compared to other optimization algorithms?Body: I found a paper about using an Unscented Kalman Filter(UKF) for traning a neural network.
+The UKF is modified so that it works for parameter estimation. Assume that we have a neural network model $\hat d_k = G(x_k, W_k)$, where $G$ is the neural network, $x_k$ is the input vector, $W_k$ is the parameter gain matrix and $\hat d_k$ is the output vector.
+This paper lists several methods for training a neural network.
+
+- Gradient descent
+- Quasi-Newton
+- UKF parameter estimation
+
+So my questions for you are:
+
+- What's the benefit of using a Kalman filter for training a neural network compared to other optimization algorithms?
+
+- I remember that I have been using gradient descent methods and they work, but I have always used regularization. Is that due to the noise in the data?
+
+- If I'm using a UKF for parameter estimation, can I then avoid regularization?
+
+- Is this UKF parameter estimation algorithm only made for a 1-layer neural network, or can it be used for deep neural networks as well?
+
+
+"
+"['convolutional-neural-networks', 'terminology', 'features']"," Title: What does it mean by ""low-level"" and ""high-level"" in features generated by CNN?Body: Across the literature, the terms "high-level" and "low-level" are generally used as an adjective to the features generated by the convolution neural network as intermediate representations.
+Should I understand the level to be either high or low based on
+
+- the position of feature maps considering the architecture of the convolutional neural network i.e., the size of the feature maps generated at different layers of the convolutional neural network
+
+(For example, the feature maps after bottleneck layers are "high-level" and those after the wide layers are "low-level".)
+or
+
+- the content of the feature maps based on the task
+
+(For example, the feature maps that learned the eyebrows of a cat are "high-level" and those just containing pixel intensities are "low-level".)
+or
+
+- any other property?
+
+"
+"['reinforcement-learning', 'q-learning', 'papers', 'proofs']"," Title: What is the derivative of equation 1 in the paper ""Conservative Q-Learning for Offline Reinforcement Learning""?Body: I am looking at the paper Conservative Q-Learning for Offline Reinforcement Learning, but I'm not sure how they proved theorem 3.1.
+Here is a screenshot of theorem 3.1.
+
+In the proof of theorem 3.1
+
+they say
+
+By setting the derivative of Equation 1 to 0, we obtain the following expression
+...
+$$\forall \mathbf{s}, \mathbf{a} \in \mathcal{D}, k, \quad \hat{Q}^{k+1}(\mathbf{s}, \mathbf{a})=\hat{\mathcal{B}}^{\pi} \hat{Q}^{k}(\mathbf{s}, \mathbf{a})-\alpha \frac{\mu(\mathbf{a} \mid \mathbf{s})}{\hat{\pi}_{\beta}(\mathbf{a} \mid \mathbf{s})} \tag{11}\label{11}$$
+
+Here's equation 1 from the paper.
+$$\hat{Q}^{k+1} \leftarrow \arg \min _{Q} \alpha \mathbb{E}_{\mathbf{s} \sim \mathcal{D}, \mathbf{a} \sim \mu(\mathbf{a} \mid \mathbf{s})}[Q(\mathbf{s}, \mathbf{a})]+\frac{1}{2} \mathbb{E}_{\mathbf{s}, \mathbf{a} \sim \mathcal{D}}\left[\left(Q(\mathbf{s}, \mathbf{a})-\hat{\mathcal{B}}^{\pi} \hat{Q}^{k}(\mathbf{s}, \mathbf{a})\right)^{2}\right] \tag{1}\label{1}$$
+My question is: what exactly is the derivative of equation (1)? And how does that result in equation (11)?
+The $\hat{B}^\pi$ is the empirical Bellman operator and is defined as $\hat{B}^\pi \hat{Q}^k (s,a) = r + \gamma \sum_{s'} \hat{T}(s' \mid s,a) \mathbb{E}_{a'\sim \pi(a' \mid s')}\hat{Q}_k(s', a')$. Since in offline reinforcement learning, the dataset $\mathcal{D}$ typically does not contain all possible transitions $(s, a, s')$, the policy evaluation step actually uses an empirical Bellman operator that only backs up a single sample.
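+My own attempt so far: if I treat the objective in Equation 1 pointwise for each $(\mathbf{s}, \mathbf{a})$, so that the first expectation contributes a weight $\mu(\mathbf{a} \mid \mathbf{s})$ and the second a weight $\hat{\pi}_{\beta}(\mathbf{a} \mid \mathbf{s})$, then setting the derivative with respect to $Q(\mathbf{s}, \mathbf{a})$ to zero gives
+$$\alpha\, \mu(\mathbf{a} \mid \mathbf{s}) + \hat{\pi}_{\beta}(\mathbf{a} \mid \mathbf{s})\left(Q(\mathbf{s}, \mathbf{a}) - \hat{\mathcal{B}}^{\pi} \hat{Q}^{k}(\mathbf{s}, \mathbf{a})\right) = 0,$$
+which would rearrange to Equation 11 after dividing by $\hat{\pi}_{\beta}(\mathbf{a} \mid \mathbf{s})$. However, I am not sure whether this pointwise treatment of the expectations is the correct way to take the derivative, which is why I am asking.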
+"
+"['deep-learning', 'convolutional-neural-networks', 'transfer-learning']"," Title: Continue teaching pre-trained network without forgetting previous data setBody: I have a rather interesting problem here; I work in the field of image classification for quality assurance.
+For this I have a dataset of about 1 million images, which I have used to train different defect classes.
+Now one of these defect types has additional properties (new features of an image class). I would like to teach these new features to the previously trained network, without re-training on the whole previous dataset.
+In short: new features of an image class should be taught without affecting the performance of the network on the previous training set too much. Is this possible, and if so, what are some strategies for doing this?
+Thanks in advance!
+"
+"['neural-networks', 'weights', 'kalman-filter']"," Title: Using parameter estimation for training a neural networkBody: Assume that we have 4 layers in a neural network.
+$$z_1 = L_1(x, W_1)$$
+$$z_2 = L_2(z_1, W_2)$$
+$$z_3 = L_3(z_2, W_3)$$
+$$y = L_4(z_3, W_4)$$
+Where $x$ is the vector input, $y$ is the vector output and $W_i, i = 1..4$ is the weight matrix.
+Assume that I could estimate parameters in a function.
+$$b = f(a, w)$$
+Where the $b$ is a real value and $a$ is the input vector and $w$ is the weight vector parameter. The function $f$ could be like this.
+$$b = \text{activation}(a_1*w_1 + a_2*w_2 + a_3*w_3 + \dots + a_n*w_n)$$
+Here we can interpret $b$ as the neuron output. Estimating $w_n$ is very easy if we know $b$ and $a_n$. This can be done by using recursive least squares or a Kalman filter.
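+As an illustration of what I mean, here is a minimal sketch of recursive least squares for a single neuron, assuming an identity activation so that $b$ is linear in the weights (a nonlinear activation would need, e.g., an extended or unscented formulation); the function name and the forgetting factor are my own choices:
+import numpy as np
+
+def rls_update(w, P, a, b, lam=0.99):
+    # one recursive-least-squares step for a linear neuron: b = a @ w
+    Pa = P @ a
+    k = Pa / (lam + a @ Pa)            # gain vector
+    e = b - a @ w                      # prediction error of the neuron output
+    w = w + k * e                      # update the weight estimate
+    P = (P - np.outer(k, Pa)) / lam    # update the inverse-covariance estimate
+    return w, P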
+Question:
+If every neuron in a neural network is a function that has inputs and weights, can I use parameter estimation for estimating all weights in a neural network if I did parameter estimation for every neuron inside a neural network?
+The reason why I'm asking:
+I found a paper where they are using a Unscented Kalman Filter for parameter estimation.
+Function $D_{k|k-1} = G[x_k, W_{k|k-1}]$ can be interpreted as a neuron function, where $W_{k|k-1}$ is a matrix with different types of weights and $D_{k|k-1}$ contains the corresponding outputs of that neuron. No, it's not a "multivariable output" neuron; it's just a way to estimate the best weights by evaluating different candidate weights.
+The error of the neuron output is $d_k - \hat d_k$ in equation (41).
+So when the error is small, that means the output of the neuron is OK, and that means the real weights $\hat w_k$ have been found.
+
+"
+"['neural-networks', 'activation-functions']"," Title: Why identity function is generally treated as an activation function?Body: It is known that the primary purpose of activation functions, used in neural networks, is to introduce non-linearity.
+Then how can the linear activation function, especially the identity function, be treated as an activation function?
+Are there any special applications/advantages in using an identity function as I cannot see any such use theoretically?
+"
+"['neural-networks', 'terminology']"," Title: Do authors generally use fully connected layer instead of affine transformation?Body: We generally encounter the following statement several times
+
+The input vector is first fed into a fully connected layer......
+
+Since linear activation functions, such as the identity function, can also be considered activation functions, a fully connected layer can be considered just an affine transformation if it uses a linear activation function.
+So, in theory, a fully connected layer can refer to the following
+
+- Just an affine transformation
+- Affine transformation followed by a nonlinear activation function
+
+Do authors generally choose to use "fully connected layer" for case 2 only or for both cases 1 and 2?
+"
+"['neural-networks', 'card-games']"," Title: How to model the inputs and outputs of the neural network for the Splinterlands card game?Body: I have recently just completed a course on deep learning and I feel like an intermediate, but I still don't know how to structure this problem.
+I'm looking to create a NN to play the card game Splinterlands.
+I have the history of battles, the cards played, who won, and the battle rules and constraints.
+I'm struggling with how best to model the inputs. I think standard feature engineering and encoding would cover all the variables that affect the game, like mana, cards chosen, rules, etc.
+How best to constrain the outputs? For example, the model needs to select cards I have available, etc., while the history contains cards I don't.
+I know it's a long shot, but just looking for some inspiration :)
+The basic rules of splinterlands are as follows:
+
+- Start battle (you are given mana and a set of rules, like no shields etc.)
+- Then select a summoner (some rules say it has to be of a certain type; you also want to select your highest-level summoner)
+- Then select 6 monsters (each monster has a position and stats like attack, speed, health and mana cost)
+- Press battle
+
+"
+"['reinforcement-learning', 'markov-decision-process', 'value-functions']"," Title: Discard irrelavant states from a MDPBody: I came across this question about MDP.
+From the look of it, it seems the full MDP is reducible if the discarded state only have 1 way in and out but is it really so if we change the discounted factor? I think there is some tricky part regarding this problem...
+
+"
+"['implementation', 'notation', 'tensor']"," Title: Why using negative integers (as dimensions?) in tensor shapes rather than natural numbers?Body: Consider the following paragraph from A.1 MULTI-MNIST AND CLEVR
of A IMPLEMENTATION DETAILS
from the research paper titled GENERATING MULTIPLE OBJECTS AT SPATIALLY DISTINCT LOCATIONS by Tobias Hinz et al.
+
+In the global pathway of the generator we first obtain the layout
+encoding. For this we create a tensor of shape (10, 16, 16) (CLEVR:
+(13, 16, 16)) that contains the one-hot labels at the location of the
+bounding boxes and is zero everywhere else. We then apply three
+convolutional layers, each followed by batch normalization and a leaky
+ReLU activation. We reshape the output to shape (1, 64) and
+concatenate it with the noise tensor of shape (1, 100) (sampled from a
+random normal distribution) to form a tensor of shape (1, 164). This
+tensor is then fed into a dense layer, followed by batch normalization
+and a ReLU activation and the output is reshaped to (−1, 4, 4). We
+then apply two upsampling blocks to obtain a tensor of shape (−1, 16,
+16).
+
+The paragraph is saying that a tensor of shape (1, 164) is reshaped to (-1, 4, 4). What is the reason behind using the negative number -1? Is it representing an axis? Can't we represent it with $a \times x \times y$, where $a, x, y$ are natural numbers and the dimensions of the tensor?
+$\dfrac{164}{4 \times 4}$ is not a natural number, so what is the shape of the reshaped tensor using only the natural numbers?
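+For comparison, this is how I understand -1 to behave in, e.g., a NumPy reshape call (if I'm not mistaken; the shape values below are just a toy example of mine):
+import numpy as np
+
+t = np.arange(48).reshape(-1, 4, 4)   # -1 lets the library infer this axis
+print(t.shape)                        # (3, 4, 4), since 48 / (4 * 4) = 3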
+"
+"['deep-learning', 'reference-request', 'upsampling']"," Title: Where can I read about upsampling methods in detail?Body: In deep learning, we encounter the upsample blocks several times, especially when we deal with images.
+Consider the following statements from description regarding UPSAMPLE in PyTorch
+
+The algorithms available for upsampling are nearest neighbor and
+linear, bilinear, bicubic and trilinear for 3D, 4D and 5D input
+Tensor, respectively.
+
+Where can I read about these upsampling techniques in detail, especially in the context of deep learning?
+"
+"['neural-networks', 'reinforcement-learning', 'deep-rl']"," Title: What deep reinforcement learning algorithm should I use for my problem?Body: So here is a description of my problem:
+Essentially, I have a large number of files filled with code for a number of different tasks. However, let's say this code is inefficient and should be edited to be made more efficient. There exists a program to edit this code; essentially, what it does is analyze the code to look for what possible edits can be made, and it classifies these edits into 5 classes (Class A, Class B, Class C, Class D, Class E). There is also a program to test the efficiency of this code, which yields a numeric value quantifying the efficiency. However, some edits benefit the efficiency of a given piece of code more than other edits. For example, a piece of code that has had 5 different edits applied to it (2 of Class A and 3 of Class C) can be more efficient than the same code that has had 10 edits applied to it (5 of Class A, 5 of Class D). Ideally, the fewer the edits and the higher the efficiency, the better.
+I want to use deep reinforcement learning to address this problem.
+Here are the goals of my model:
+
+- Takes the code in a form which it can understand (I have looked into this and stumbled upon code2vec, so I am looking into that)
+- Also can take in the edits that can be applied to the code (which is the action space) and their respective classes.
+- Makes decisions for edits which yield the highest efficiency for the code
+
+I want to train this model on the files filled with the inefficient code. The ultimate goal is to end up with an AI that takes code as an input and makes the best decision of which edits it should apply to the code.
+What deep reinforcement learning techniques/algorithms/architectures should I use to approach this problem, and how can I go about implementing these in Python? As you can probably tell from this question, I am not very experienced in deep reinforcement learning, but I am willing to learn. Any help would be appreciated!
+"
+"['convolutional-neural-networks', 'convolution']"," Title: Is it possible to apply 2D convolution to 1D data?Body: Suppose that I have a 1D dataset with 6 features.
+Can I apply a 2D convolutional neural net to this dataset?
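+For example, would something like the following even make sense (a sketch in PyTorch; the 1-channel 2x3 grid I reshape the 6 features into is a completely arbitrary choice of mine)?
+import torch
+import torch.nn as nn
+
+x = torch.randn(32, 6)        # batch of 32 samples with 6 features each
+x = x.view(32, 1, 2, 3)       # pretend the 6 features form a 1-channel 2x3 "image"
+conv = nn.Conv2d(in_channels=1, out_channels=4, kernel_size=2)
+y = conv(x)                   # shape (32, 4, 1, 2)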
+"
+"['convolutional-neural-networks', 'comparison', 'applications', 'multilayer-perceptrons']"," Title: When should we use CNN instead of MLP?Body: Is CNN only applicable to time-series data or image data?
+When should we use CNN instead of MLP?
+"
+"['terminology', 'papers', 'residual-networks']"," Title: Why actual mapping is called as unreferenced mapping in this context of residual framework?Body: Consider the following statements from the research paper titled Deep Residual Learning for Image Recognition by Kaiming He et al.
+#1:
+
+We explicitly reformulate the layers as learning residual functions
+with reference to the layer inputs, instead of learning unreferenced
+functions. We provide comprehensive empirical evidence showing that
+these residual networks are easier to optimize, and can gain accuracy
+from considerably increased depth.
+
+#2:
+
+In this paper, we address the degradation problem by introducing a
+deep residual learning framework. Instead of hoping each few stacked
+layers directly fit a desired underlying mapping, we explicitly let
+these layers fit a residual mapping. Formally, denoting the desired
+underlying mapping as $\mathcal{H}(x)$, we let the stacked nonlinear
+layers fit another mapping of $\mathcal{F}(x) := \mathcal{H}(x)−x$.
+The original mapping is recast into $\mathcal{F}(x)+x$. We hypothesize
+that it is easier to optimize the residual mapping than to optimize
+the original, unreferenced mapping. To the extreme, if an identity
+mapping were optimal, it would be easier to push the residual to zero
+than to fit an identity mapping by a stack of nonlinear layers.
+
+The research paper contains the word "unreferenced" twice for a function, which I think is the actual function $\mathcal{H}(x)$. In normal deep neural networks, the function $\mathcal{H}(x)$ is generally learned. But, $\mathcal{F}(x)$ is learned in the residual neural networks and hence $\mathcal{H}(x)$.
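+To check my understanding of this recasting, this is how I picture a (simplified) residual block, where the stacked layers learn $\mathcal{F}(x)$ and the block outputs $\mathcal{F}(x) + x$ (a sketch in PyTorch, ignoring batch normalization and downsampling):
+import torch.nn as nn
+
+class ResidualBlock(nn.Module):
+    def __init__(self, channels):
+        super().__init__()
+        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
+        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
+        self.relu = nn.ReLU()
+
+    def forward(self, x):
+        f = self.conv2(self.relu(self.conv1(x)))   # the residual F(x)
+        return self.relu(f + x)                    # the recast mapping F(x) + x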
+Why is this paper calling the original function $\mathcal{H}(x)$ an unreferenced function?
+"
+"['neural-networks', 'machine-learning', 'natural-language-processing', 'recurrent-neural-networks']"," Title: NLP problem Phrase/Token labelingBody: Looking for suggestions on how to define the following NLP problem and different ways in which it can be modeled to leverage machine learning. I believe there are multiple ways to model this problem. Deep-learning-based suggestions also work as there is a good amount of data is available for training.
+Will evaluate different approaches for the given dataset. Please share relevant papers, blogs, or GitHub repos. Thanks!
+Input: Given a sentence S having words W1 to W10.
+S = W1 W2 W3 W4 W5 W6 W7 W8 W9 W10
+The sentence has some syntactic and semantic patterns, but it is not exactly freely written natural language; it is in English, though. These are words; some of them can be punctuation.
+Output: should be something like this.
+Label1 - W4
+Label2 - W3
+Label3 - [W2 W1] continuous // semantically related. Means words [W2 W1] in-order are assigned a Label3. Also okay with solutions that don't output in-order.
+Label4 - [W6 W8]
+Label5- W10
+Noise- W7, W9. Means words W7 and W9 independently are assigned a
+Label3.
+Label7- W5
+Need to solve the problem. Looking for research/thoughts on how this problem can be defined in different ways to exploit different patterns in the structure of sentences. Looking for similar tasks which are already defined in NLP such as token labeling, parsing which can be used.
+Would be really helpful to get the suggestions to the latest research on solving/defining this problem.
+"
+"['natural-language-processing', 'reference-request', 'deepfakes', 'speech-synthesis']"," Title: Is Speech to Speech with changing the voice to a given other voice possible?Body: Background:
+I am working on a research project to use (demonstrate) the possibilities of Machine Learning and AI in artistic projects. One thing we are exploring is demonstrating deep fakes on stage. Of course, a deep fake is not easy to make. Still, we are exploring creating a "minor quality" deep fake live on a stage (or maybe in some other ways where people can make deep fakes of themselves) in which we put words into someone's mouth. I discovered that a semi-nice deep fake of the facial movement is possible; now I also want to add voice.
+There are a lot of text-to-speech systems, which allow using a voice that is created from the recording of the voice of a real person. That is already nice.
+The video is based on the facial movements of another person. So the Audio has to match the facial movements. The easiest way would be, if the "fake" voice says it exactly the same way, as the person doing the "facial acting" for the deep fake said it.
+Question:
+Is it possible to do a fake voice of a person in this way:
+
+- Another person (the actor, or source) speaks the words in his voice and records it.
+- The person with the faked voice (the destination) gives a voice sample with some random spoken text.
+- An AI/algorithm/whatever modifies the recording from (1) in such a way, that the tone/voice is modified so it matches the voice from (2).
+
+Do systems/research like this exist? I did not find anything using google, but maybe I did not use the correct keywords.
+"
+"['neural-networks', 'training', 'supervised-learning', 'labeled-datasets']"," Title: What problem does the neural network really solve?Body: In the image below taken from a Youtube video, the author explains that the neural network can be used to fit a relational graph for a set of data points shown by the green line. And that this is accomplished by using weights, biases and activation functions.
+My slight confusion is that, initially, the weights and biases are randomized, and they are re-adjusted by backpropagation. This means that, at the end of the output layer, we must have the actual values of the target function anyway.
+So what problem does the neural network really solve?
+So, for example, we want to find the target function for dosage and efficacy, we are given the data points shown in blue. If we initially choose randomized values for the weights, biases and activation function, then, at the output layer, we determine an output value for efficacy, but there is no way to know whether this value is in fact correct or not. So, we need the actual values to determine the difference.
+What about when we choose a value of dosage which has not been observed, for example, 0.25? Doesn't this rely upon a best-fit relation graph that has already been fitted to the data prior to adjusting the neural network?
+
+"
+"['reinforcement-learning', 'deep-rl', 'on-policy-methods', 'exploration-strategies']"," Title: Is it possible to apply a particular exploration policy for the on-policy RL agents?Body: Is it possible to use any particular strategy to explore (e.g. metaheuristics) in on-policy algorithms (e.g. in PPO) or is it only possible to define particular policies to explore in off-policy algorithms (e.g. TD3)?
+"
+"['machine-learning', 'features', 'algorithm-request']"," Title: How to find ""relationships"" between two data representations?Body: I am a researcher in a field, and new to the whole of AI and machine learning techniques. May the following question is trivial or not framed in the ML language but I try my best.
+I have two sets of representations (I can extract feature vectors, etc., from the datasets) from vastly different domains. I want to find, if any, a relationship exists between these two sets. In other words, I want an algorithm (the idea of an algorithm) to learn both representations and find the connections and convert one representation to another.
+There is neither an apparent one-to-one correspondence, nor do both need to be the same length.
+Any suggestion on how to approach this problem is appreciated.
+I thought of one method; write an encoder-decoder for each of these presentations separately and swap the decoders. I am not sure whether it works or not, and besides I may not have any idea what's going on there.
+I prefer a general approach if it exists.
+"
+"['deep-learning', 'ai-design', 'research']"," Title: Should I repeat lengthy deep learning experiments to average results ? How to decide how many times to repeat?Body: I am doing my MSc thesis on deep learning. My model takes many hours to train. Part of what I do is trying different parameters and settings hoping that they will achieve different results. But I often notice that the result differences are too small to conclude whether the set of parameters A is better than B. Sometimes it happens that on a first run, set A seems to work better than B, but on a second run the opposite is suggested.
+The logical approach for that would be to repeat experiments and average out. But it seems unpractical given that each run could take so many hours.
+I wonder how expert AI researchers deal with that, do they perform multiple experiments, even if this takes extremely long? Do they draw conclusions from single runs?
+"
+"['computer-vision', 'object-detection', 'object-recognition', 'yolo', 'faster-r-cnn']"," Title: In anchor based object detection, why don't the anchors share the same weights?Body: After reading about YOLO V3 and Faster R-CNN, I don't understand why the weights for the regression head aren't the same across all boxes of the same size. Given that the backbone of these systems is fully convolutional, the location of the outputted features should only depend upon the local region of the image which telescops to that feature map. Given that we want the object detector to behave the same way regardless of the object location in the image, shouldn't the weights be the same across anchors of the same size?
+"
+"['reinforcement-learning', 'definitions', 'markov-decision-process', 'policies']"," Title: In reinforcement learning, why are policies defined as functions of states and not observations?Body: I am new to RL and I am following Sutton & Barto's book.
+My doubt is, when we talk about the policy of our agent, we say it is the probability of taking some action $a$ given the state $s$. However, I think that the policy should be defined in terms of observations and not states because I think it is not always possible for an agent to fully capture the state due to various reasons, maybe lack of sensors or lack of memory.
+So, why are policies defined as functions of states and not observations?
+"
+"['deep-learning', 'recurrent-neural-networks', 'feature-selection', 'relu']"," Title: What are Linear and Non-Linear Features of an image in the context of Convolutional Neural Network?Body: What features of image are linear or non-linear, any example ?
+"
+"['neural-networks', 'tensorflow', 'keras', 'input-layer']"," Title: How to handle random order of inputs and get same output?Body: I am a beginner with DL. I did some tutorials and I know the basics of TensorFlow. But I have a problem understanding how to construct more advanced NNs.
+Let's say I have 6 inputs and a list of 500 names from which you can pick any, but only 6 at the time. The output should be one value between $0.0$ and $1.0$.
+My question is, how can I handle a random order of inputs? In inputs 1-6 you have the names ABCDEF and the output score is 0.7. I need the same output if the input is in the order CEDBFA. How can I handle this? Should I randomly shuffle the inputs during training, or should I make, for every output value, a 500-dimensional binary vector like $[0,0,1,0,1,...,0,1,0,0,0]$, where the index position in the array is the corresponding token of a name, and then feed it into 500 inputs? Or is there some better way?
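+For the second option, I imagine something like this order-independent multi-hot encoding (a rough sketch; name_to_index would be my own mapping from the 500 names to indices):
+import numpy as np
+
+def encode(names, name_to_index, vocab_size=500):
+    # order-independent multi-hot encoding of the 6 chosen names
+    v = np.zeros(vocab_size, dtype=np.float32)
+    for n in names:
+        v[name_to_index[n]] = 1.0
+    return v
+
+# encode(list("ABCDEF"), idx) gives the same vector as encode(list("CEDBFA"), idx)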
+"
+"['geometric-deep-learning', 'graph-neural-networks', 'graphs', 'features']"," Title: What are examples of node 'features' in graph networks?Body: Context: I was reading Chapter 3 in the following book (here) about graph representation learning. Before I get to node embeddings, I wanted to make sure that I do understand what is meant by the phrase 'node features' used numerous times throughout the book. Examples are as follows:
+Chapter 5, page 50:
+
+Node Features: note that unlike the shallow embedding methods discussed in Part I of this book, the GNN framework requires that we have
+node features $\mathbf{x}_{u}$, $\forall u \in \mathcal{V}$ as input
+to the model. In many graphs we will have rich node features to use
+(e.g. gene expression features in biological networks or text features
+in social networks)....
+
+Question: What is a simple, concrete example of different node features? I have read the paragraph above, but I am not sure whether I have interpreted it correctly. For example, if we imagine a social network of some friends, would some example node features be: address, age, height, weight, etc.? Would it be as simple as that? What are some more advanced/subtle bits of information which could be counted as node features. Perhaps one could be 'number of friends' (i.e. the degree of the node), but what about others.
+"
+"['geometric-deep-learning', 'graph-neural-networks', 'graphs', 'embeddings']"," Title: What is the reason behind using node embeddings?Body: I was reading Chapter 3 from the following book (here) on graph representation learning. The chapter is about node embeddings.
+Question: What is the point of using node embeddings?
+Do we use them:
+
+- to save space/memory by mapping our graph into a form which is of lower dimension?
+- to find a representation of the graph which can be fed into a neural network?
+- perhaps some other reason?
+
+It isn't clear to me what the actual purpose/end-goal of finding a representation is.
+Any help would be greatly appreciated as I have also been reading about graph neural networks, which aim to find an embedding for the nodes (but I don't understand why that is of any use to us).
+"
+"['image-processing', 'function-approximation']"," Title: Uniform representation of images for machine learningBody: I'm new to the field of ML so please bear with me while I try to explain what I'm looking for. In most machine learning pipelines that deal with images there is a requirement to "normalize" the data in some way so that images of different dimensions can be used as inputs for the function that is being optimized. As in, if the function takes its input as an $n\times n$ grid of pixels (assuming we're dealing with 1-channel images) then any image that is not of the right shape must be re-shaped so that it can be used as input. We can assume $n = 2$ without losing any generality because any larger image can be reduced to the $2\times 2$ case for what I'm about to describe.
+So if we assume we have a $2\times 2$ image then there is an obvious way to map such an image to a function defined on $[0,1]\times [0,1]$ ($f:[0,1]\times[0,1]\rightarrow\mathbb{R}$) by using convex combinations of the points in the image. If the points of the image are labeled as $x_{00},x_{01},x_{10},x_{11}$ where $x_{00}$ is the top left corner and $x_{11}$ is the bottom right corner then given a point $(a,b)\in[0,1]\times[0,1]$ the value of $f$ at $(a, b)$ can be defined as $$f(a, b) = (1-b)((1-a)x_{10}+ax_{11})+b((1-a)x_{00}+ax_{01})$$
+Assuming I got all the signs right it's obvious that this idea can be extended to any grid of pixels by mapping the horizontal and vertical dimensions to $[0,1]$ and then interpolating between the grid points as in the $2\times 2$ case. So this mapping from grids of pixels to functions provides a uniform representation for all images as functions defined on $[0,1]\times[0,1]$.
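+In code, the $2\times 2$ case I have in mind would look roughly like this (a small sketch, using the same corner labeling and the same formula as above):
+def f(a, b, x00, x01, x10, x11):
+    # bilinear interpolation over the unit square, x00 = top-left, x11 = bottom-right
+    return (1 - b) * ((1 - a) * x10 + a * x11) + b * ((1 - a) * x00 + a * x01)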
+Now my question is the following: Is there any work that tries to use this kind of representation of images and if there isn't does anyone know what exactly are the obstructions to doing so? It's possible I'm missing something that makes this approach non-viable but I wasn't able to find anything that explained one way or the other why the usual tensor representation is preferable to a functional one as above that reduces all images to functions on $[0,1]\times[0,1]$.
+"
+"['machine-learning', 'proofs', 'perceptron']"," Title: How to show $\rho > 0$ when $\rho$ be minimum attainable from $y_n(W^{*T}X_n)$, where $W^*$ the vector that separates the data?Body: In the book Learning from Data written (by Abu Mostafa), we have the following exercise:
+
+Let $\rho$ be minimum attainable from $y_n(W^{*T}X_n)$ where $W^*$ is the vector that separates the data. Show $\rho > 0$. Also assume the Perceptron Learning Algorithm is initialized with the 0 vector.
+
+How do I prove the above statement?
+I thought that it could be negative, since a perceptron function returns either +1 or -1.
+I even wonder whether I comprehend this proof question correctly.
+"
+"['natural-language-processing', 'python', 'transformer', 'question-answering']"," Title: How to generate a response while considering past questions as well?Body: User: What is the tallest mountain?
+Agent: Everest
+User: Where is it located? # Agent hears: "Where is Everest located?"
+Agent: Nepal
+
+I want to be able to generate a sequence using the user's current query as well as the past conversation.
+More specifically, I am using Google's T5 for closed-book question answering, but, instead of trivial questions, we use the user's frequently asked queries.
+I want to be able to encode their past questions and the agent's past answers, then use them to generate the agent's next answer. How can I do that?
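+The naive approach I can think of is to just concatenate the history into the input string, along these lines (a sketch with Hugging Face transformers; the "context:"/"question:" prefixes are my own guess, not a format I know T5 was trained with):
+from transformers import T5Tokenizer, T5ForConditionalGeneration
+
+tokenizer = T5Tokenizer.from_pretrained("t5-base")
+model = T5ForConditionalGeneration.from_pretrained("t5-base")
+
+history = "User: What is the tallest mountain? Agent: Everest."
+query = "Where is it located?"
+text = f"context: {history} question: {query}"
+
+input_ids = tokenizer(text, return_tensors="pt").input_ids
+output_ids = model.generate(input_ids, max_length=64)
+print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
+
+Is this kind of plain concatenation the right way to encode the conversation history, or is there a better way?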
+"
+"['image-processing', 'bounding-box']"," Title: How to understand the common practices followed for writing a ""bounding box"" for an image in datasets?Body: For the image datasets, there may be a bounding box for each image at the dataset. It is an annotation for an image. It is a rectangular box intended for focusing on something inside the image.
+I read about the following two types of representations for a bounding box.
+
+- using two points $(x_1, y_1)$ and $(x_2, y_2)$.
+
+$$<x_1><y_1><x_2><y_2>$$
+
+- Using a point $(x_1, y_1)$, width, and height.
+$$<x_1><y_1><width><height>$$
+
+How do I understand both representations? Specifically, is the point $(x_1, y_1)$ used to denote the top-right, top-left, bottom-right or bottom-left corner in both cases?
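+For what it's worth, this is the conversion between the two formats that I have in mind, assuming $(x_1, y_1)$ is the top-left corner with the y-axis pointing down (which is exactly the assumption I would like to have confirmed):
+def corners_to_xywh(x1, y1, x2, y2):
+    # (top-left, bottom-right) -> (top-left, width, height)
+    return x1, y1, x2 - x1, y2 - y1
+
+def xywh_to_corners(x1, y1, w, h):
+    # (top-left, width, height) -> (top-left, bottom-right)
+    return x1, y1, x1 + w, y1 + h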
+"
+"['neural-networks', 'convolutional-neural-networks', 'models']"," Title: Why is the validation accuracy lower in case of CNN?Body: I fed the same set of 1.4 million data to two different models:
+
+- MLP
+- CNN model
+
+In both cases, I used the same parameters and hyperparameters.
+The CNN is showing comparatively lower accuracy (80%) than that of the MLP (82%).
+Why?
+And, also, what does this experiment tell us?
+Edit:
+
+Is the data images, video or audio, or other grid-based signals (same signal repeated at multiple locations with a meaningful distance metric between them) in your case?
+
+It is protein C-alpha distances data that has 3 classes (helix, strand, coil) and n-number of features, where n is an even number.
+
+In fact, what is it, what was your expectation of the relative performance of the two models, and why?
+
+I thought CNN would be more efficient and thereby would demonstrate better test/validation accuracy.
+"
+"['neural-networks', 'machine-learning', 'tensorflow', 'recurrent-neural-networks', 'gated-recurrent-unit']"," Title: Multiple GRU layers to improve a text generationBody: I am using the model in this colab https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/text_generation.ipynb#scrollTo=AM2Uma_-yVIq for Shakespeare like text generation.
+It looks like this
+class MyModel(tf.keras.Model):
+ def __init__(self, vocab_size, embedding_dim, rnn_units):
+ super().__init__(self)
+ self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
+ self.gru = tf.keras.layers.GRU(rnn_units,
+ return_sequences=True,
+ return_state=True)
+ self.dense = tf.keras.layers.Dense(vocab_size)
+
+ def call(self, inputs, states=None, return_state=False, training=False):
+ x = inputs
+ x = self.embedding(x, training=training)
+ if states is None:
+ states = self.gru.get_initial_state(x)
+ x, states = self.gru(x, initial_state=states, training=training)
+ x = self.dense(x, training=training)
+
+ if return_state:
+ return x, states
+ else:
+ return x
+
+I am looking for strategies to improve the model, under the hypothesis that there is a reliable way to assess the goodness of my model; how could adding one or more GRU layers improve the model, e.g., the number of rnn_units, the number of layers, for such a stacked model?
+For instance, this gives extremely bad results
+class MyModel(tf.keras.Model):
+ def __init__(self, vocab_size, embedding_dim, rnn_units):
+ super().__init__(self)
+ self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
+ self.gru = tf.keras.layers.GRU(rnn_units,
+ return_sequences=True,
+ return_state=True)
+ self.gru_2 = tf.keras.layers.GRU(rnn_units,
+ return_sequences=True,
+ return_state=True)
+ self.dense = tf.keras.layers.Dense(vocab_size)
+
+ def call(self, inputs, states=None, return_state=False, training=False):
+ x = inputs
+ x = self.embedding(x, training=training)
+ if states is None:
+ states = self.gru.get_initial_state(x)
+ x, states = self.gru(x, initial_state=states, training=training)
+ if states is None:
+ states = self.gru_2.get_initial_state(x)
+ x, states = self.gru_2(x, initial_state=states, training=training)
+ x = self.dense(x, training=training)
+
+ if return_state:
+ return x, states
+ else:
+ return x
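+For reference, the variant I would try next keeps a separate state per GRU layer, but I am not sure this is the right way to stack them, which is part of my question (only the call method is sketched; the layers are defined as in the class above):
+    def call(self, inputs, states=None, return_state=False, training=False):
+        x = self.embedding(inputs, training=training)
+        if states is None:
+            states = [self.gru.get_initial_state(x), self.gru_2.get_initial_state(x)]
+        x, state_1 = self.gru(x, initial_state=states[0], training=training)
+        x, state_2 = self.gru_2(x, initial_state=states[1], training=training)
+        x = self.dense(x, training=training)
+        if return_state:
+            return x, [state_1, state_2]
+        return x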
+
+"
+"['philosophy', 'human-like', 'ethics', 'emotional-intelligence', 'asimovs-laws']"," Title: Would empathy in AI be a reliable tool/capacity, or contribute to a solution to avoid harm done to humans or to other versions of AI?Body: TL;DR
+Would providing AI the capability of experiencing something as close as possible to the subjective human experience and from that acquiring empathy in the process be a solution, or contribute to a solution, that seeks to prevent AI to cause the same kind of harm we ourselves as a species have caused to each other in the past, and continue to do so, to us humans and also to itself and/or to other versions of AI?
+LONG VERSION:
+I was thinking about a question I listened being asked on a podcast where the topic was AI, which was something like: does AI need to have a body in order to understand what it is or how it feels to have a body. The person being interviewed said no. AI can understand it without having a body.
+My perception is that a lot of what goes on in AI development at the moment is about AI emulating and simulating human behavior in a way, at the external level, that is by observing human behavior and processing it, looking for patterns etc: speech, motion, the human DNA etc.
+I was thinking also that in order for a human being to relate and engage with society and with himself in a constructive, healthy way, and with minimal damage, it will need to possess a certain level of empathy. That is to relate to others and possess the ability to somehow feel what the other is feeling.
+Now feeling is not an objective state or something you can observe. Maybe you can measure by observing the human behavior that is a result of feeling but the thing in itself can't be measured or captured and transmitted to AI in a way it can learn from human feeling. One can argue it can be done by studying the brain activity at the moment feeling is occurring. But the subjective experience of experiencing feeling and then being able to recognize that experience in others, understanding what the other is feeling because you have felt the same, that is a subjective occurrence.
+It is one thing to understand the human body by observation and study of its patterns; it's something different to be locked inside one and experience its pains and pleasures.
+I am left wondering how can AI understand the human condition if it will not experience what it is to be a human because it is built around the material world, built around what can be objectively observed and measured.
+The closest thing there is in human experience to an AI in this regard could be a human that suffers some condition which makes him/her being unable to experience empathy. I was about to use the word Psychopathy but being a loaded word might not be the best choice here. But suffice to say that humans that suffer from some kind of psychiatric condition, or trauma based experience which causes the person to lack the capability to feel empathy towards others might, or not, be easier for that person to in the future engage in actions which can cause harm to others and/or him/herself.
+We have cases in human history that might be used as an example. In the past countless human beings engaged in actions which due to its nature I can only conclude this people lacked any empathy or were placed in a situation where empathy was drained out of them. Like for example in war, usually soldiers and the whole country will go through a phase of propaganda which consists of dehumanizing the enemy. I believe this is done in order to facilitate and eliminate any sense of empathy and humanity in the population and building up the necessary will, energy and determination for the killing of the enemy.
+The solution to this question might be to create in AI the capability of experiencing subjectively what it is to be human, or what it is to have a body, or come up with something that makes it possible for AI to experience something similar to human empathy.
+One thing comes to mind which is the merge of AI and the human mind. But this might be another topic altogether.
+It has been argued in some places I remember, even in some Science-Fiction that actually the main problem of destructive human behavior is the ability to feel emotion and feeling. And the solution might just be to eliminate or diminish that part of ourselves. I really don't agree with this view and it seems to me that ATM our society does engage and promote solutions of this type: drugs such as some anti-depressants for ex. instead of going to the root of the problem which in many situations are trauma based. I really hope we don't go this way. I believe that if we have a future the solution is to find ways of developing and cultivating empathy.
+"
+"['neural-networks', 'deep-learning', 'terminology']"," Title: Is there a widely accepted definition of the width of a neural network?Body: The depth of a neural network is equal to the total number of layers in the neural network (excluding the input layer by convention). A neural network with "many layers" is called a deep neural network.
+On the other hand, the width is the name of a property of a layer in a neural network: it is equal to the number of neurons in that particular layer. So, it may be apt to use the phrase "the width of a layer in a neural network".
+But, is it valid to use the phrase "width of a neural network"?
+I got this doubt because the phrase "wide neural network" is widely used. The phrase gives the impression that the width is a property of a neural network. So, I am wondering whether the width of a neural network has a definition. For example, say, the width of a neural network is the number of neurons in the widest layer of that neural network.
+"
+"['natural-language-processing', 'hyper-parameters', 'clustering', 'cosine-similarity']"," Title: How to reduce the number of clusters produced by the Markov Clustering Algorithm?Body: I have used the Markov Clustering Algorithm (MCL) to cluster tweets, based on their similarity. However, I got a too high number of clusters, and most of the clusters have only one tweet. Any suggestions to reduce the number of clusters?
+"
+"['reinforcement-learning', 'ai-safety']"," Title: How to measure Deep RL algorithms in terms of safety?Body: I applied for a Ph.D. in AI, my advisor told me that my thesis is about safe applications of deep RL algorithms in healthcare. So I decided to do as the first paper, a comparison of Deep RL algorithms in terms of their inherent safety. However, after lots of research, I could not find an answer to my question, that is: How to measure Deep RL algorithms in terms of safety?
+"
+"['computer-vision', 'training', 'datasets', 'research', 'image-processing']"," Title: Is it okay to use publicly available Instagram videos to train an AI?Body: Since I haven't found any good training data for my university project, I want to use pictures and videos from public Instagram profiles. Am I allowed to do that?
+"
+"['reinforcement-learning', 'deep-rl', 'markov-decision-process', 'algorithm-request', 'reward-functions']"," Title: Are there any deep RL algorithms that work well on finite MDPs and non-trivial terminal rewards?Body: I notice that most Deep Reinforcement Learning (DRL) works focus on Markov Decision Process (MDP) with an infinite time horizon.
+Are there any algorithms that work well on finite MDP and non-trivial terminal reward?
+My definition of a non-trivial terminal reward is that the reward function at the terminal time is different from the one at non-terminal timestamps. Many environments or games fall into this category. For example, many games are defined on $[0, T]$, and the total reward is the sum of the accumulated reward plus a final bonus.
+"
+['convolutional-neural-networks']," Title: What is meant by ""shorter connections"" in the case of deep convolutional neural networks?Body: Consider the following two excerpts from the research paper titled Densely Connected Convolutional Networks by Gao Huang et al.
+#1: From abstract
+
+Recent work has shown that convolutional networks can be substantially
+deeper, more accurate, and efficient to train if they contain shorter
+connections between layers close to the input and those close to the
+output.
+
+#2: From discussion
+
+One explanation for the improved accuracy of dense convolutional
+networks may be that individual layers receive additional supervision
+from the loss function through the shorter connections. One can
+interpret DenseNets to perform a kind of “deep supervision”.
+
+Both excerpts mention the type of connections called shorter connections, especially to the layers that are close to the input and the output layers of the deep convolutional neural network. What does it mean by shorter connections here?
+"
+"['reinforcement-learning', 'deep-rl', 'policy-gradients', 'multi-agent-systems', 'pomdp']"," Title: Can a Reinforcement Learning problem with multiple simultaneous actions be formalized as a Multiagent Partially Observable Markov Decision Process?Body: Consider the following decision making problem. We have a controller that selects locations from a grid of coordinates and captures an image (observation $o_t$) with a camera at each location (action $a_t$). We try to find an optimal sequence of locations for a specific goal. This decision making problem can be formalized as a Partially Observable Markov Decision Process (POMDP). Here, we seek an optimal stochastic policy $\pi^{*}_{\theta}(a_t|h_t)$ that maps the history $h_t= \langle o_1, a_1, ..., o_{t-1},a_{t-1},o_t \rangle$ of actions and observations up to the current time $t$ to action probabilities. The history $h_t$ can be summarized by the hidden state of a RNN and we can use a policy gradient method, e.g. REINFORCE, to update the policy parameters $\theta$.
+Suppose now that we want to select multiple locations, i.e. actions, simultaneously. According to my understanding, we could formalize the problem as a Mutliagent POMDP (MPOMDP) [1]. In this formalism, we would replace the single action of the previous problem by joint actions $\vec{a}_t = \langle a^1_t, ..., a^N_t \rangle$, the single observation by joint observations $\vec{o}_t = \langle o^1_t, ..., o^N_t \rangle$ and the history by $h_t= \langle \vec{o}_1, \vec{a}_1, ..., \vec{o}_{t-1},\vec{a}_{t-1},\vec{o}_t \rangle$, where $N$ is the number of agents. We would now try to find an optimal joint policy $\vec{\pi}^{*} = \langle \pi^{1*}, ...,\pi^{N*} \rangle$ consisting of sub-policies $\pi_{\theta_n}(a^n_t|h_t)$ that map the history $h_t$ to the action probability of each agent $n$. This would mean that the RNN would have $N$ output nodes and each sub-policy $\pi^n$ would be parametrized by $\theta_n$, a sub-set of weights of the output layer [2]. Would it be correct to assume that an optimal or near-optimal joint policy $\vec{\pi}^{*}$ can be obtained by simply applying the policy gradient method used above to each sub-policy $\pi^n$?
+I would be curious to hear what you think about the MPOMDP formalism applied to the latter decision making problem or whether you would suggest something else.
+[1] Oliehoek, Frans A., et al. "A concise introduction to decentralized POMDPs." Springer, 2016.
+[2] Gupta, Jayesh K., et al. "Cooperative multi-agent control using deep reinforcement learning." International Conference on Autonomous Agents and Multiagent Systems. Springer, Cham, 2017.
+"
+"['machine-learning', 'q-learning', 'temporal-difference-methods', 'loss', 'action-spaces']"," Title: How to handle invalid actions for next state in Q-learning lossBody: I am implementing an RL application in an environment with illegal moves. For handling the illegal moves, I am currently just picking an action as the maximum Q-value from the set of legal Q-values.
+So, it is clear that when deciding on actions we only pick from a subset of valid Q-values, but, when using the Q-learning algorithm, do we also want to consider the subset of invalid actions for the $\max\limits_{a}Q(s_{t+1},a)$?
+My gut tells me that we consider all actions for the max function, purely based on the lack of documentation on the subject, but only considering the subset of legal actions makes more sense to me. I'm having a hard time finding any reliable sources addressing this topic. Any advice/direction would be greatly appreciated.
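+For concreteness, the two alternatives I am weighing look roughly like this (a sketch only; the Q-table and the legal_actions helper are stand-ins for my actual implementation):
+def masked_max_q(Q, next_state, legal_actions):
+    # Option A: take the max only over the legal actions of the next state
+    return max(Q[next_state][a] for a in legal_actions(next_state))
+
+def full_max_q(Q, next_state, n_actions):
+    # Option B: take the max over all actions, including invalid ones
+    return max(Q[next_state][a] for a in range(n_actions))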
+
+"
+"['math', 'gradient', 'vanishing-gradient-problem', 'exploding-gradient-problem']"," Title: Mathematically speaking, Is it only the product operation used in the chain rule causing the vanishing or exploding gradient?Body: I am asking this question from the mathematical perspective of the vanishing and exploding gradient problems that we face generally during training deep neural networks.
+The chain rule of differentiation for a composite function can be expressed roughly as follows:
+$$\dfrac{d}{dx} (f_1(f_2(f_3(\cdots f_n(x))))) = \dfrac{df_n}{dx} \dfrac{df_{n-1}}{dy_1} \dfrac{df_{n-2}}{dy_2} \cdots \dfrac{df_1}{dy_{n-1}}$$
+We know that multilayer perceptrons are composite functions of layer functions. So, if the number of layers increases, then the number of gradient terms to multiply on the right-hand side increases.
+If all the gradient terms on the right-hand side are between 0 and 1, then the overall gradient becomes smaller and smaller as the number of layers keeps increasing. This phenomenon is called the vanishing gradient problem. Similarly, if all the gradient terms on the right-hand side are greater than 1, then the product becomes larger and larger. This phenomenon is called the exploding gradient problem.
+Since it is customary to use the same activation function across all the layers in deep neural networks, all the gradient terms on the right-hand side behave in a similar manner, i.e. most of them either fall between 0 and 1 or are greater than one, which causes either the vanishing or the exploding gradient problem.
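+As a rough numerical illustration of what I mean (the numbers are made up): if 20 layers each contribute a local gradient term of $0.25$, the product is $0.25^{20} \approx 9.1 \times 10^{-13}$, whereas terms of $1.5$ give $1.5^{20} \approx 3325$.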
+Is my mathematical interpretation of the vanishing and exploding gradient problems correct? Am I missing anything?
+"
+"['machine-learning', 'deep-learning', 'training', 'philosophy', 'generative-adversarial-networks']"," Title: How to assess the goodness of a text generation algorithmBody: Take a RNN network fed with Shakespeare and generating Shakespeare-like text.
+Once a model seems mathematically fine, as can be assessed by observing its loss and accuracy over training epochs, how can one assess and refine the goodness of the result?
+Only human eyes can judge the readability of a text, its creativity, its grammatical correctness, etc.
+QUESTION: Which systematic approach can be used to refine a generative (text) model?
+"
+"['convolutional-neural-networks', 'features', 'channel']"," Title: Is it true that channels always represent colours of an image?Body: Convolutional neural networks are widely used in image-related tasks in artificial intelligence.
+The input of a convolutional neural network is generally an image. The output of a convolutional neural network can also be an image. But the output of the hidden/intermediate layers of a convolutional neural network is generally the feature maps of the input image.
+In general, the channels of an image represent the colors used. If the input and output of a convolutional neural network are RGB images, then the three channels of the input and output images are representations of the red, green, and blue colors. Is this true for feature maps as well? Are the channels in feature maps also representations of colours?
+I got this doubt because I remember channels as colours, while feature maps may contain an arbitrary number of channels. If those channels also stood for colours, it would be difficult for me to understand.
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'computer-vision', 'generalization']"," Title: How general is generalization?Body: I am sorry but I have to explain my question using an example, I do not know how to ask it in proper scientific terms.
+Let's assume I have trained a deep learning model on classifying hand gestures, but the training and testing datasets' images are shot only in one lighting condition, and I achieved a certain accuracy, let's assume 85%. As far as I understand, adding more data of the same hand gesture images but shot under different lighting should increase my model's "generalization" capabilities, right?
+So the question is, if I take this model, trained in two lighting conditions, and test it only on the dataset of the first lighting condition, would that increase its accuracy (the 85%), or would this "generalization" only mean that it can now also correctly classify images with different lighting, but not increase the accuracy on the first set?
+"
+"['machine-learning', 'convolutional-neural-networks', 'datasets', 'imbalanced-datasets']"," Title: How do you handle unbalanced image datasets?Body: I have an image data set on which I am training a CNN. The data set is slightly unbalanced. So, my solution up till now was to delete some images of the majority class.
+I now realize that there are cleaner ways to deal with this, but I haven't been able to find methods for fixing unbalanced image data sets, only structured data sets.
+I would like some guidance on fixing the imbalance, other than deleting data from the majority class.
+"
+"['neural-networks', 'deep-learning', 'hyper-parameters', 'batch-size', 'epochs']"," Title: Is there any relationship between the batch size and the number of epochs?Body: I am currently running a program with a batch size of 17 instead of batch size 32. The benchmark results are obtained at a batch size of 32 with the number of epochs 700.
+Now I am running with batch size 17 with an unchanged number of epochs. So I am interested to know whether there is any relationship between the batch size and the number of epochs in general.
+Do I need to increase the number of epochs? Or is it entirely dependent on the program?
+"
+"['machine-learning', 'datasets', 'data-preprocessing']"," Title: Recommended way to spilt image sequence for training/validation/testingBody: For object detection tasks I have a few minutes of video footage from a surveillance camera, converted to a sequence of images and ground truth bounding boxes for all people walking by.
+Now what's the best way to split this into training, validation and test sets (80/10/10)?
+
+- I could randomly select 10% for testing and 10% for validation and rest into training.
+- The first 80% go into training, next 10% to validation, rest to testing.
+
+
+The first way has the advantage of having a good distribution of different people walking by and also more varying densities and locations of people in the test set. But the disadvantage would be that for each testing image a very similar image exists in the training set.
+The second way would have the advantage of the test set being more truly "never seen" during training, but at the cost of less variety.
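+To make the two options concrete, this is roughly what I mean (a sketch; frames stands for the time-ordered list of image/annotation pairs):
+import random
+
+def random_split(frames, seed=0):
+    # Option 1: shuffle, then take 80/10/10
+    frames = frames[:]
+    random.Random(seed).shuffle(frames)
+    n = len(frames)
+    return frames[:int(0.8 * n)], frames[int(0.8 * n):int(0.9 * n)], frames[int(0.9 * n):]
+
+def sequential_split(frames):
+    # Option 2: keep temporal order - first 80% train, next 10% validation, last 10% test
+    n = len(frames)
+    return frames[:int(0.8 * n)], frames[int(0.8 * n):int(0.9 * n)], frames[int(0.9 * n):]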
+"
+"['tensorflow', 'keras', 'performance', 'multi-label-classification']"," Title: What could cause the hamming loss and subset accuracy to get stuck in a multi-label image classification problem?Body: I am rather new to deep learning and got some questions on performing a multi-label image classification task with keras convolutional neural networks. Those are mainly referring to evaluating keras models performing multi label classification tasks. I will structure this a bit to get a better overview first.
+Problem Description
+The underlying dataset consists of album cover images from different genres. In my case those are electronic, rock, jazz, pop, hiphop. So we have 5 possible classes that are not mutually exclusive. The task is to predict possible genres for a given album cover. Each album cover is of size 300px x 300px. The images are loaded into tensorflow datasets and resized to 150px x 150px.
+
+Model Architecture
+The architecture for the model is the following.
+import tensorflow as tf
+
+from tensorflow import keras
+from tensorflow.keras import layers
+from tensorflow.keras.models import Sequential
+
+data_augmentation = keras.Sequential(
+ [
+ layers.experimental.preprocessing.RandomFlip("horizontal",
+ input_shape=(img_height,
+ img_width,
+ 3)),
+ layers.experimental.preprocessing.RandomFlip("vertical"),
+ layers.experimental.preprocessing.RandomRotation(0.4),
+ layers.experimental.preprocessing.RandomZoom(height_factor=(0.2, 0.6), width_factor=(0.2, 0.6))
+ ]
+)
+
+def create_model(num_classes=5, augmentation_layers=None):
+ model = Sequential()
+
+ # We can pass a list of layers performing data augmentation here
+ if augmentation_layers:
+ # The first layer of the augmentation layers must define the input shape
+ model.add(augmentation_layers)
+ model.add(layers.experimental.preprocessing.Rescaling(1./255))
+ else:
+ model.add(layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)))
+
+ model.add(layers.Conv2D(32, (3, 3), activation='relu'))
+ model.add(layers.MaxPooling2D((2, 2)))
+ model.add(layers.Conv2D(64, (3, 3), activation='relu'))
+ model.add(layers.MaxPooling2D((2, 2)))
+ model.add(layers.Conv2D(128, (3, 3), activation='relu'))
+ model.add(layers.MaxPooling2D((2, 2)))
+ model.add(layers.Conv2D(128, (3, 3), activation='relu'))
+ model.add(layers.MaxPooling2D((2, 2)))
+ model.add(layers.Flatten())
+ model.add(layers.Dense(512, activation='relu'))
+
+    # Use sigmoid activation function. Basically we train binary classifiers for each class by specifying binary crossentropy loss and sigmoid activation on the output layer.
+ model.add(layers.Dense(num_classes, activation='sigmoid'))
+ model.summary()
+
+ return model
+
+I'm not using the usual metrics here, like standard accuracy. In this paper I read that you cannot evaluate multi-label classification models with the usual methods. In chapter 7 (evaluation metrics), the Hamming loss and an adjusted accuracy (a variant of exact match) are presented, which I use for this model.
+The Hamming loss is already provided by tensorflow-addons (see here), and I found an implementation of the subset accuracy here (see here).
+from tensorflow_addons.metrics import HammingLoss
+
+hamming_loss = HammingLoss(mode="multilabel", threshold=0.5)
+
+def subset_accuracy(y_true, y_pred):
+ # From https://stackoverflow.com/questions/56739708/how-to-implement-exact-match-subset-accuracy-as-a-metric-for-keras
+
+ threshold = tf.constant(.5, tf.float32)
+ gtt_pred = tf.math.greater(y_pred, threshold)
+ gtt_true = tf.math.greater(y_true, threshold)
+ accuracy = tf.reduce_mean(tf.cast(tf.equal(gtt_pred, gtt_true), tf.float32), axis=-1)
+ return accuracy
+
+ # Create model
+ model = create_model(num_classes=5, augmentation_layers=data_augmentation)
+
+ # Compile model
+ model.compile(loss="binary_crossentropy", optimizer="adam", metrics=[subset_accuracy, hamming_loss])
+
+ # Fit the model
+ history = model.fit(training_dataset, epochs=epochs, validation_data=validation_dataset, callbacks=callbacks)
+
+
+Problem with this model
+When training the model, subset_accuracy and hamming_loss get stuck at some point, which looks like the following:
+
+What could cause this behaviour? I am honestly a little bit lost right now. Could this be a case of the dying ReLU problem? Or is it a wrong use of the metrics mentioned, or is the implementation of those maybe wrong?
+So far, I tried to test different optimizers and lowering the learning rate (e.g. from 0.01 to 0.001, 0.0001, etc..) but that didn't help either.
+"
+"['neural-networks', 'deep-learning', 'gradient-descent', 'implementation', 'mini-batch-gradient-descent']"," Title: In mini-batch gradient descent, do we pass each input in the batch individually or all inputs at the same time through the layer?Body: In the stochastic gradient descent algorithm, the weight update happens for every training sample.
+In the mini-batch gradient descent algorithm, the weight update happens for every batch of training samples.
+In the batch gradient descent algorithm, the weight update happens for all samples in the training dataset.
+I am confused about the procedure of training that happens in the mini-batch gradient descent algorithm. I am guessing one of the following two must be correct:
+
+- Passing each input individually at each layer and calculating the output. This happens for a number of training samples that are equal to batch size.
+
+- Passing a batch of inputs at once at each layer and collecting the batch output at each layer.
+
+
+Which of the above is true in general implementations of mini-batch gradient descent algorithms to train your neural networks?
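+For reference, the second option would look roughly like the following for a single dense layer (a sketch with made-up shapes, not any framework's internals):
+import numpy as np
+
+batch_size, n_features, n_units = 32, 10, 4
+X = np.random.randn(batch_size, n_features)   # the whole mini-batch at once
+W = np.random.randn(n_features, n_units)
+b = np.zeros(n_units)
+
+# One matrix multiplication produces the layer output for every sample in the batch
+out = X @ W + b                                # shape: (batch_size, n_units)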
+"
+"['convolutional-neural-networks', 'weights', 'dense-layers', 'input-layer']"," Title: Correctly input additional values into CNNBody: I understand that in order to add additional inputs to a CNN, e.g. in self driving, I can append the data to a flattened layer after the convolutions and before the fully connected layers.
+However, a few things confuse me. In a paper the authors want to feed speed measurements into the driving network. Instead of just appending a normalized speed value, they first feed it into several FC layers. Why would they do that? What kind of features could you extract from a single real value? Could there be another reason?
+
+
+(p. 3, paper 1)
+Part two of my question is: In another paper, information about the turn signal is appended to a layer as one-hot encoding. The authors talk about how that didn’t work due to vanishing weights (not gradients). So they scaled weights by a constant factor. What do they mean by vanishing weights and how do I scale weights (e.g. in PyTorch)?
+
+(p. 6, paper 2)
+"
+"['neural-networks', 'gradient-descent', 'adam', 'mini-batch-gradient-descent']"," Title: How many iterations of the optimisation algorithm are performed on each mini-batch in mini-batch gradient descent?Body: I understand the idea of mini-batch gradient descent for neural networks in that we calculate the gradient of the loss function using one mini-batch at a time and use this gradient to adjust the parameters.
+My question is: how many times do we adjust the parameters per mini-batch, i.e. how many optimisation iterations are performed on a mini-batch?
+The fact that I can't find anything in the TensorFlow documentation about this implies to me that the answer is just 1 iteration per mini-batch. If this assumption is correct, then how does an optimisation algorithm like Adam, which uses past gradient information, work? It seems strange, since then gradients from past mini-batches are being used to minimise the loss of the current mini-batch?
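+For reference, my mental model of the training loop is roughly the following sketch (not taken from any particular library's source; num_epochs, model, loss_fn, optimizer and dataset are assumed to be defined elsewhere):
+import tensorflow as tf
+
+for epoch in range(num_epochs):
+    for x_batch, y_batch in dataset:                      # one mini-batch at a time
+        with tf.GradientTape() as tape:
+            loss = loss_fn(y_batch, model(x_batch, training=True))
+        grads = tape.gradient(loss, model.trainable_variables)
+        # A single parameter update per mini-batch; Adam's moment estimates
+        # live inside the optimizer object and persist across these calls.
+        optimizer.apply_gradients(zip(grads, model.trainable_variables))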
+"
+"['reference-request', 'game-theory', 'incomplete-information', 'card-games']"," Title: AI for games which involve social intelligence. Games like warewolf where players must persuade, charm, threaten etcBody: I'm looking for any introductory/accessible reading on AI that can play games which involve social intelligence.
+Games like poker, where you might bait someone into overcommiting their hand or threaten them into not betting a lot when they should.
+Or warewolf, where there is a group of villagers, and a warewolf. Every night the warewolf "eats" a villager, and in the morning all the villagers get together to kick out a person who they think is the wolf. Wolf wins if he's the last one alive, villagers win if they manage to kick out the wolf before everyone dies.
+I know there's some online content on the game warewolf, but how does AI tackle such concepts in general? How are the rules represented, and how does the AI "persuade" and be persuaded?
+"
+['machine-learning']," Title: ""Hot Word Detection"", but for different applicationsBody: I have been looking into detecting a specific rhythm/pattern within the temporal domain for a time-series signal. For this purpose, how "Wake Up" words work for devices like Alexa has gained my interest!
+I have 2 questions with regards to this matter:
+
+- Is this just, say, training a model (e.g. a CNN) on a specific word (I know audio processing is not this simple, I mean it in its simplest form) and running it continually until a hot word is detected? If so, is the name just a formalization and not a different method of implementing an ML model?
+
+- If this type of model can be used for a time-series signal, namely audio, is there any restriction on applying it to a different set of temporal signals? For example, detecting the onset of an earthquake?
+
+
+Also, I would really appreciate it if someone could reference a good paper to have a read through on this matter, I have looked online but have not been able to find interesting stuff.
+"
+"['deep-learning', 'deep-neural-networks', 'models', 'software-evaluation', 'topology']"," Title: Are there any animation tools available to visualise and simulate deep neural networks?Body: Deep learning researchers have to work with a lot of models. The models may include different types of Layers: They include convolutional neural network layers, recurrent neural network layers, batch normalization layers, polling layers, and many others.
+Along with their own visualization, it is also necessary to keep the model detailed enough visually to teach about the model.
+Although there are widely used visualization methods available in several packages, such as the model summary and others, I want to know about the availability of animation tools that can simulate deep learning models in a more visually intuitive way.
+Are there any contemporary animation packages available to draw and simulate deep learning models?
+"
+"['neural-networks', 'classification', 'data-labelling']"," Title: Should I train my network for classification on samples whose ground truth label is ambiguous?Body: Imagine that I am training a model to classify handwritten digits. Suppose there are some bad quality images that could be classified by a human as either 0 or 8, 1 or 7 or other commonly misclassified pair of digits.
+My question is: should I simply remove such ambiguous samples? Should I annotate each one as the most similar digit, even though there are other plausible answers? Or should I repeat the sample, presenting it once for each 'acceptable' answer?
+"
+"['convolutional-neural-networks', 'filters', 'channel']"," Title: Is it possible to have different channel dimensions in a CNN?Body: Let's say I have two channels that I wish to feed into a CNN. One of the channel contains 4 traces and has a width of 512. Stacking them on top of each other therefore yields an image with dimensions (4, 512). The other channel is just 1 trace, so its dimensions would be (1, 512).
+I then have convolutional filters that are of dimension (1, 5) as an example. That means that the filters run over each trace separately. The first channel (containing the 4 traces) will then have a set of filter weights, shared among the 4 traces. The second channel (containing the 1 trace) will have a completely different set of weights (as per this SE question).
+TLDR: Can convolutional layers in a CNN have different dimensions? Putting this in the context of images: Could we have a CNN that takes an image that has dimensions (100, 100) for the red channel, (100, 100) for the green channel, and (50, 100) for the blue channel?
+"
+"['python', 'implementation', 'linear-regression', 'perceptron']"," Title: Why isn't my perceptron having the expected costs?Body: I want to implement a single perceptron for linear regression using the following formulas:
+
+The input data for the first case is one column (x(392, 1); y(392, 1)) and for the second case is (x(392, 7); y(392, 1)). The NaN values have been removed and x values have been standardized with x-x.mean()/x.std().
+This is my Python implementation:
+import numpy as np
+
+class LinearRegression(object):
+
+
+ def __init__(self, x, y, n_iter):
+ self.x = x
+ self.y = y
+ self.n_iter = n_iter
+ self.cost_iteration = []
+ # Initializing model parameters (w, b) to zeros
+ self.weights = np.zeros((1, self.x.shape[1])) # w: weights
+ self.biases = np.zeros((1, 1)) # b: bias
+
+ def feedforward(self):
+ # return the feedforward value for x
+ #self.weights, self.biases = self.update_params()
+ z = self.x @ self.weights.T + self.biases
+ return z
+
+ def loss(self):
+ # return the loss value for given x and y
+ z = self.feedforward()
+ loss = self.y-z
+ cost = np.sum(loss**2)/self.y.shape[0]
+ return loss, cost
+
+ def backpropagation(self):
+ # return the derivatives with respect to weight matrix and biases
+ loss, cost = self.loss()
+ db = -2*np.sum(loss)/self.y.shape[0] # dJ/db
+ dw = -2*np.dot(self.x.T, loss)/self.y.shape[0] # dJ/dw
+ return dw, db
+
+ def update_params(self):
+ # update weights and biases based on the output
+ dw, db = self.backpropagation()
+ self.weights -= dw.T
+ self.biases -= db
+ return self.weights, self.biases
+
+ def fit(self):
+ # fit method for the training data
+ for iter in range(self.n_iter):
+ self.update_params()
+ print(self.biases)
+ l, c = self.loss()
+ self.cost_iteration.append (c)
+ return self.cost_iteration
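+This is roughly how I call it (illustrative only; n_iter=1000 is just an example value, and x, y are the standardized inputs and targets described above):
+model = LinearRegression(x, y, n_iter=1000)
+costs = model.fit()
+print(costs[-1])  # final mean squared error after training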
+
+
+The final cost should be approximately 23.9 and 11.6 for the two models, respectively. But I can't figure out why it's not the case when I use my code.
+"
+"['neural-networks', 'deep-learning', 'overfitting', 'generalization', 'representation-learning']"," Title: Does a bigger neural network learn ""worse"" representations than a small neural network when the amount of data isn't enough?Body: Assume we have a neural network and we want to train it on a classification problem. The hidden layers of the neural network are kind of feature representations of the input data.
+If the neural network is big and the amount of data isn't enough for the complexity of the model, does it usually learn worse representations than a smaller neural network that is well matched to the amount of data we have?
+"
+"['neural-networks', 'convolutional-neural-networks', 'training']"," Title: What practically makes a good architecture of ANN?Body: When we take a look at the literature there are so many opinions.
+I was wondering what some generally good practices are for designing an architecture, like how much depth and how much width you would prefer.
+Should the amount of training data influence your decisions when designing the architecture?
+What should the number of parameters be? Etc.
+"
+"['reinforcement-learning', 'reference-request', 'game-theory', 'multi-agent-systems', 'books']"," Title: Book/course recommendation on game theory application to multi-agent system (reinforcement learning)Body: Is there any great game theory book or course that discusses the application of game theory to modern reinforcement learning or multi-agent systems? Or a classic reference book that can help me get a full understanding of papers like $\alpha$-rank.
+"
+"['neural-networks', 'reinforcement-learning', 'q-learning', 'gradient-descent', 'bellman-equations']"," Title: What is the difference between gradient decent in neural networks and temporal difference in reinforcement learning?Body: I am studying Q-learning in reinforcement learning. My question is about the Bellman equation.
+In Q-learning, the Bellman equation is often introduced as follows.
+\begin{align}
+Q_{new}(s,a)
+&= Q_{old}(s,a) + \text{learning rate} \times \text{error}\\
+&= Q_{old}(s,a) + \alpha(\text{target} - \text{actual})\\
+&= Q_{old}(s,a) + \alpha((\text{reward} + \text{discount factor} \times \text{max next } Q) - Q_{old}(s,a)) \\
+&=Q_{old}(s,a) + \alpha[r(s,a)+\gamma\times max(Q(s',a'))-Q_{old}(s,a)]
+\end{align}
+The update equation of gradient descent (which is used in the context of neural networks and other fields) is as follows.
+$$
+w_{new} = w_{old} - \eta\frac{dE}{dw}
+$$
+So, why does the Bellman equation depend on the error after the learning rate, while gradient descent depends on the gradient of the error?
+I feel confused.
+"
+"['generative-adversarial-networks', 'generator']"," Title: How to understand the results of a generator that switches, for metric evaluation?Body: I am running a code on generative adversarial networks. The code is designed in such a way that it outputs a fake image after every 5 epochs. The total number of epochs is 800 in number.
+After the completion of the program, when I check the images generated by the generator of the generative adversarial network while training, I am so confused about the results.
+The phenomenon is as follows:
+
+The image after epoch n is very realistic, while the images after epochs n - 5 and n + 5 are
+not highly realistic. I can see many such n's. And vice versa sometimes.
+
+Although I am interested to know why this happens, my question here is not directly about why it happens. My doubt is about deciding which generator I should evaluate some metrics on.
+It is a general practice to evaluate the metric on the generator after the last epoch, that is, the 800th generator. But if the phenomenon I described holds, then it may be that the last generator is not capable of generating realistic images, and generator 795 or generator 805 may be good.
+So, if I am correct, I need to check the fake samples generated by the generator and then apply my metric to the generator that generated high-quality and realistic images. Am I correct?
+"
+"['reinforcement-learning', 'dynamic-programming']"," Title: Usefulness of the state_values calculation in Dynamic ProgrammingBody: State values are always presented as a central concept in RL, notoriously in the bible, the Sutton&Barton’s book.
+I have done some exercises trying to improve my understanding, but it is clear that I am missing some important point(s), at least in the tabular case.
+I do not understand, in the case where we have a complete MDP model of the environment, why we should bother doing value iteration or/and policy iteration if we can directly find the optimal policy by calculating each state-action value. I mean that having the state-action values, we can by the argmax function obtain the optimal policy for any starting point.
+In my learning/example, link, with an observation space of one million states and three different actions, using a desktop notebook it took 21 iterations in 19 minutes to find every state action value, and consequently, the optimal policy.
+Perhaps it could be done quicker, selecting only the interesting states. For instance, if there is a unique starting state and the simulation has a small, finite number of steps per episode, probably most of the states are uninteresting because they are never visited. In that case, we can select the target states using some test simulations, but is it worth it?
+"
+"['computer-vision', 'model-request', 'pose-estimation']"," Title: What is the fastest multi-human pose estimation model?Body: I am trying to find an accurate and fast multi-person human pose estimation that I can train on with custom data. I have been searching for a little while and I may not be up-to-date on the newest techniques. I will start by posting what I have found and looked into (a little):
+
+- Openpose: This is supposedly real-time (I assume on a GPU, 24fps?) and they provide training code
+- Lightweight OpenPose: Runs in realtime >20fps confirmed, training code is provided
+- mediapipe: runs in realtime > 20fps confirmed, training code is NOT provided
+- posenet: No training code, can one even train tfjs models?
+- movenet: Very fast but no way to train?
+- hrnet & lightweight-hrnet: seem to be slow? Can anyone confirm? training code provided
+- blazepose: haven't tried it yet, looks like tf implementation, but no discussion bout speed. Training code included
+- alphapose: haven't tried it, but looks to run at 16fps (maybe faster?) but is only intended for research not commercial. Training script available.
+- MoVnect: Looks new, and fast (haven't tested it) but looks like it uses student-teacher training.
+
+What are some other human poses estimation models out there?
+I care more about speed and training. I am thinking #2 is my best bet, but it's a few years old and not the most friendly to train. Anything newer? Can anyone confirm or reject my findings?
+"
+"['machine-learning', 'reinforcement-learning', 'deep-rl', 'ai-design', 'ai-basics']"," Title: Appropriate ML algorithm to solve a cutting pattern problemBody: I have a rectangular area, where I need to place some 2 dimensional geometrical shapes - like a square or circle or a little more complicated shapes. And after the arrangement these shapes should be cut out.
+Requirements for the placement of the shapes:
+
+- These shapes are not allowed to intersect
+- And they must also be placed within the rectangular area
+- They must have at least a minimum distance
+- The waste should be minimized
+- When more than one shape is arranged on this area, it is desirable that the shapes occur in certain proportions (e.g. shape A: 50 %, shape B: 30 %, shape C: 20 %)
+
+After the arrangement I get the coordinates of the single shapes so that I can cut out my shapes...
+To solve this I thought of (deep) reinforcement learning but because I'm new to ML I'm not sure if there is a more appropriate method to solve this problem.
+I hope that you can give me some hints or simply confirm my assumption that (deep) reinforcement learning is appropriate. And perhaps you can also offer me some useful links...
+Many thanks in advance for your help!
+And lastly, a little picture showing a possible bad result, because shape A and shape E intersect. And probably there is too much waste.
+
+"
+"['supervised-learning', 'noise']"," Title: Is the noise term $\epsilon$ in $y=g(x) + \epsilon$ used to denote the model's imperfection to the real world?Body: In supervised machine learning, it is common to say that we learn a function of the form
+$$y=g(x) + \epsilon.$$
+Generally, $\epsilon$ is used to denote noise or, more precisely, any influence by latent variables such as measurement inaccuracies (right?).
+Is it, therefore, correct to say that we use $\epsilon$ to denote the model's imperfection to the real world (caused by anything unknown)?
+"
+"['convolutional-neural-networks', 'multiclass-classification']"," Title: When to use Multi-class CNN vs. one-class CNNBody: I'm building an object detection model with convolutional neural networks (CNN) and I started to wonder when should one use either multi-class CNN or a single-class CNN. That is, if I'm making e.g. a human detector from image data and a cat detector also from image data, then when should I have a specific model for each task, and when should I just combine all the data into one and use just one general multi-class CNN?
+I've understood from the No-Free-Lunch theorem, and generally from estimation theory, that there does not, in theory, exist a model which is simultaneously optimal for every problem. In other words, case-specific models should, in general, beat the "all-purpose" models on the same task.
+I have a difference in opinion with a colleague of mine about whether to use a one-class or a multi-class CNN, and I would like to hear the community's opinion on this.
+"
+"['convolutional-neural-networks', 'convolutional-layers', '1d-convolution', '2d-convolution', '3d-convolution']"," Title: What do people refer to when they use the word 'dimensionality' in the context of convolutional layer?Body: In practical applications, we generally talk about three types of convolution layers: 1-dimensional convolution, 2-dimensional convolution, and 3-dimensional convolution. Most popular packages like PyTorch, Keras, etc., provide Conv1d, Conv2d, and Conv3d.
+What is the deciding factor for the dimensionality of the convolution layer mentioned in the packages?
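+For reference, this is roughly how the three layer types are created in PyTorch, together with the input shapes they expect (the channel counts and sizes below are just illustrative):
+import torch
+import torch.nn as nn
+
+conv1d = nn.Conv1d(in_channels=16, out_channels=32, kernel_size=3)   # input: (N, 16, L)
+conv2d = nn.Conv2d(in_channels=3,  out_channels=32, kernel_size=3)   # input: (N, 3, H, W)
+conv3d = nn.Conv3d(in_channels=1,  out_channels=32, kernel_size=3)   # input: (N, 1, D, H, W)
+
+out = conv2d(torch.randn(8, 3, 64, 64))                              # -> shape (8, 32, 62, 62)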
+"
+"['neural-networks', 'filters', 'fully-convolutional-networks']"," Title: FCNs: Questions about the filter rarefaction in the CVPR paper [Long et al., 2015]Body: I am reading the paper about the fully convolutional network (FCN).
+I had some questions about the part where the authors discuss the filter rarefaction technique (I guess this is roughly equivalent to dilated convolution) as a trick to compensate for the cost of implementing a shift-and-stitch method.
+
+Consider a layer (convolution or pooling) with input stride $s$, and a subsequent convolution layer with filter weights $f_{i,j}$ (eliding the irrelevant feature dimensions). Setting the lower layer’s input stride to 1 upsamples its output by a
+factor of s.
+
+
+- How does setting the input stride of the lower layer to 1 lead to upsampling (and not to a reduction of the output dimension)? I am confused about what the terminologies lower/higher layer and input/output stride refer to here.
+
+
+To reproduce the trick, rarefy the filter by enlarging it as
+$f'_{i,j} = \begin{align} \begin{cases} f_{i/s,j/s} & \text{if $s$ divides both $i$ and $j$} \\ 0 & \text{otherwise} \end{cases} \end{align}$
+(with $i$ and $j$ zero-based). Reproducing the full net output of the trick involves repeating this filter enlargement layer-by-layer until all subsampling is removed.
+
+
+- I was wondering how the rarefaction defined here leads to the filter enlargement. Based on the equation, it seems that $f$ and $f'$ have the same size, with $f'$ having different filter weights based on $s$.
+
+"
+"['deep-learning', 'overfitting', 'validation-loss']"," Title: What can cause massive instability in validation loss?Body: I'm working with very weird data that is apparently very hard to fit.
+And I've noticed a very strange phenomenon where it can go from roughly 0.0176 validation MSE to 1534863.6250 validation MSE in only 1 epoch! It usually then will return to a very low number after a few epochs. Also, no such fluctuation is seen in the training data.
+This behavior of instability is consistent across shuffling, repartitioning & retraining. Even though I have 16,000 samples and a highly regularized network (dropout + residual layers + batch normalization + gradient clipping).
+I mean I realize I could have more data, but, still, this behavior is really surprising. What could be causing it?
+P.S. Model is feedforward with 10 layers of size [32,64,128,256,512,256,128,64,32,1], using Adam optimizer. Also, this question may be related (my experience is also periodic validation loss), but I don't think they experienced the same massive instability I am seeing.
+"
+"['reinforcement-learning', 'deep-learning', 'deep-rl', 'keras', 'monte-carlo-tree-search']"," Title: Do you need a terminal state when using double deep q networks?Body: I just got my agent training, and I'm wondering if the terminal flags are necessary when sampling from the replay buffer. The game I'm implementing the agent in has two different ways the game can end, and so far my agent seems to be learning without terminal flags. I was wondering how important this feature is, as it's in all the pseudocode but doesn't seem to be necessary in my implementation.
+"
+"['computer-vision', 'object-detection', 'bounding-box']"," Title: Why do the object detection networks produce multiple anchor boxes per location?Body: In various neural network detection pipelines, the detection works as follows:
+
+- One processes the input image through the pretrained backbone
+- Some additional convolutional layers
+- The detection head, where each pixel on the given feature map predicts the following:
+
+- Offset from the center of the cell ($\sigma(t_x), \sigma(t_y)$ on the image)
+- Height and width of the bounding boxes $b_w, b_h$
+- Objectness scores (probability of object presence)
+- Class probabilities
+
+
+
+
+Usually, detection heads produce not a single box, but multiple.
+
+- The first version of YOLO - outputs 2 boxes per location on the feature map of size $7 \times 7$
+- Faster R-CNN outputs 9 boxes per location
+- YOLO v3 - outputs 9 boxes per pixel from the predefined anchors: (10×13), (16×30), (33×23), (30×61), (62×45), (59×119), (116×90), (156×198), (373×326)
+
+These anchors give the priors for the bounding boxes, but with the help
+of $\sigma(t_x), \sigma(t_y), b_w, b_h$ one can get any possible bounding box on the image for some pixel on the feature map.
+Therefore, the network will produce plenty of redundant boxes, and a certain procedure - non-maximum suppression (NMS) - has to be run over the bounding box predictions to select only the best.
+Or is the purpose of these anchors to start from a prior, reshape and shift the bounding box slightly, and then compare it with the ground truth?
+Is it the case that, if one used only a single bounding box for detection, it would be hard to train the network to rescale the initial bounding box by, say, a factor of 10 and produce some specific aspect ratio?
+"
+"['neural-networks', 'optimization', 'metric', 'algorithm-request', 'efficiency']"," Title: Compare the efficiency of a trained ML model with a non-learning-based method for solving the same problemBody: If a certain task T is solved by a non-learning-based method A (let's say, an optimization-based approach). We now train a machine learning model B (let's say a neural network) on the same task.
+What are some metrics that we can use to compare their efficiency in terms of finding the solution (assuming the quality of both solutions is comparable)?
+"
+"['image-processing', 'variational-autoencoder', 'image-generation']"," Title: variational autoencoder - decoder output for imagesBody: Following the standard setup/notation for a VAE, let $z$ denote the latent variables, $q$ as the encoder, $p$ as the decoder, and $x$ as the label. Let the objective be to maximize the ELBO, where a single sample monte carlo estimate of the ELBO is given by
+\begin{align*}
+\log p(x \, | \, z) + \log p(z) - \log q(z \, | \, x)
+\end{align*}
+Now I want to focus only on the $\log p(x \, | \, z)$ term, where $x$ is an image, and the decoder $p$ outputs the mean/variance of a normal distribution for each pixel.
+My understanding is that pixel values should be integers in 0-255. Now consider a single pixel: suppose that the ground truth for that pixel is the value 10, and the decoder predicts the mean and variance $\mu, \sigma^2$ respectively. Now when computing the ELBO, we have this term
+\begin{align*}
+\log p(x \, | \, z) = \log f(10, \mu, \sigma^2) \tag{1}
+\end{align*}
+where $f$ is the probability density of the normal distribution. My question is why it is justified to compute $\log p(x \, | \, z)$ using the density considering that image data should be discrete valued. It seems to me that any sampled output between 9.5-10.5 would all get mapped to the correct value of 10. Then it seems that you should take the term in the ELBO as
+\begin{align*}
+\log p(x \, | \, z) = \log \big(F(10.5, \mu, \sigma^2) - F(9.5, \mu, \sigma^2)\big) \tag{2}
+\end{align*}
+where $F$ is the CDF of the normal distribution.
+It seems that all references calculate the ELBO as (1) and none as (2). Why is this justified?
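+To make the comparison concrete, here is roughly what the two alternatives would look like in code (the numbers are just illustrative):
+import numpy as np
+from scipy.stats import norm
+
+x, mu, sigma = 10.0, 9.7, 0.8
+log_p_density = norm.logpdf(x, loc=mu, scale=sigma)          # term (1): log density at the pixel value
+log_p_discrete = np.log(norm.cdf(x + 0.5, loc=mu, scale=sigma)
+                        - norm.cdf(x - 0.5, loc=mu, scale=sigma))   # term (2): log probability of the bin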
+"
+"['neural-networks', 'machine-learning', 'object-detection', 'opencv']"," Title: Which method can accurately detect circular/angular shapes? (attached example)Body: Is there a method to detect shapes like these accurately and efficiently? I have tried the OpenCv Haar Casacade Classifier which does not work well. These shapes should all be the same class object and can be of different sizes and a little differently shaped (more circular or more angular). In the attached picture there are 4 separate shapes, of which 2 overlap each other.
+
+"
+"['graph-neural-networks', 'graph-theory']"," Title: Does the Weisfeiler-Lehman Isomorphism Test end?Body: I am studying GNNs. I am interested in the Weisfeiler-Lehman Isomorphism Test (WL-Test).
+I was looking for information about whether the test always ends or not, but I didn't find a definitive answer.
+I know that we can choose how many iterations are run, or that the test finishes when an iteration produces the same result as the previous one.
+My question is: what if we don't choose how many iterations should be done and the iterations don't produce the same result between the two graphs? Will the iterations keep going on (i.e. infinitely)?
+"
+"['machine-learning', 'recurrent-neural-networks', 'prediction', 'sequence-modeling']"," Title: How can I predict the next number in a non-obvious sequence?Body: I've got an array of integers ranging from -3 to +3.
+Example: [1, 3, -2, 0, 0, 1]
+The array has no obvious pattern since it represents bipolar disorder mood swings.
+What is the most suitable approach to predict the next number in the series? The length of the array is about 700 entries.
+From where can I start the investigation? (provided that I've got some experience in Python and Node.js, but only a hello-worldish acquaintance with TensorFlow). Which training model might be suitable in this case? How can I chunk the data set properly?
+"
+['graph-neural-networks']," Title: How to initialize the coefficient vector of Deep Tensor Neural NetworkBody: In Quantum-Chemical Insights from Deep Tensor Neural Networks, I would like to ask a question about how to initialize the coefficient vector of the network, because I could not understand it even after reading the paper. In the paper, it says
+
+All presented models use atomic descriptors with 30 coefficients. We initialize each coefficient randomly
+
+If it is initialized randomly like this, why do we initialize it randomly since we cannot use the information of nuclear charge as mentioned in the paper?
+"
+"['upsampling', 'downsampling', 'etymology']"," Title: What is the role of the word sampling in upsampling and downsampling?Body: Upsampling and downsampling are highly used in deep learning algorithms that involve convolutional neural networks.
+Upsampling increases the size of tensors, while downsampling decreases it.
+What is the role of the word sampling in the words upsampling and downsampling?
+Does it always have a connection with the sampling techniques that are generally used in statistics? Or is it true that sometimes it does not have any such connection and the only task of upsampling and downsampling is to increase and decrease respectively the size in any way?
+"
+"['reinforcement-learning', 'actor-critic-methods', 'reward-functions', 'sparse-rewards']"," Title: How do I compute the value function when the reward is only at the end in the context of actor-critic algorithms?Body: Consider the actor-critic reinforcement learning setting (actor and critic parameterized by a neural network). The reward is given only at the end of the episode (or when there is a timeout there is no reward).
+How could we learn the value function? Do you recommend computing intermediate rewards?
+"
+"['neural-networks', 'training', 'backpropagation', 'training-datasets']"," Title: Does the ANN's training data include the proper output for every neuron?Body: I was designing an Artificial Neural Network a while back, but hit a bump when I got to the backpropagation. I was having trouble making the script choose whether to add or subtract from the weights, when I had a thought.
+Does the ANN's training data include the proper output of all the neurons, or just the input and output layers? If it's the latter, could somebody please explain how backpropagation works in a simple way?
+"
+['overfitting']," Title: 2-stage model overfittingBody: I'm trying to build an entity matching model. There are 2 kinds of features - binary (0/1) and text features. Initially I made a deep learning model that uses character level embeddings of some of the text features, word level embeddings of the other text features and also uses the binary features as inputs.
+The output is through a softmax for 3 classes, so basically a $n\times 3$ array (where $n$ is the input data length).
+I've done three splits - train, val and test, and for training the DL model through Keras, I've specified train as the training split and val as the validation split. I measured the performance on the test split to get DL model metrics. The softmax outputs for all three splits were obtained using model_DL.predict.
+Next, I used a Random Forest model as a second stage. Inputs: all the binary features PLUS the softmax outputs as inputs. e.g. I took the train split, removed the text features and added in the columns of the predicted array as separate features. To be even more specific, if predtrain was obtained by using model_DL.predict on the train split, then the additional features were added using train['class1prob'] = predtrain[:,0], train['class2prob'] = predtrain[:,1], train['class3prob'] = predtrain[:,2].
+Similarly I did for the test and val splits. Now I trained the RF on the augmented train split and measured its performance on the val and test splits. The F-score for the test and val splits was around 0.85, 0.74, 0.73 for the 3 classes respectively (i.e. performance was similar on both splits).
+BUT for the train split the predictions were near perfect - 0.98, 0.99, 0.98 F-scores for the three classes. My intuition is that overfitting of the 2nd stage RF is understandable for train, since the softmax outputs were predicted using the 1st stage DL model, which in turn was already trained on train. Also, there's some data leakage for the val split since val was used as a validation set to finetune the DL model by Keras, so maybe even the val metrics aren't so reliable. But there is no leakage for the test set.
+My question is, in this scenario have I made an error, or is this blatant overfitting normal for 2 stage models? If there's a glaring error, any way or best practice to fix that?
+"
+"['deep-learning', 'computer-vision']"," Title: Monocular depth estimationBody: I am currently reading the paper towards robust monocular depth estimation and I have 2 doubts about it.
+First of all, the paper states that there are 2 types of depth annotations, dense and sparse. What are they and what are their differences?
+Secondly, the paper predicts the relative depth given an input. How do we calculate the loss when a relative depth map is predicted? I know we could simply use MSE if the model predicted an absolute depth map. If I were to train a model myself to predict relative depth maps as well, how should I calculate the loss or other evaluation metrics?
+Any help would be greatly appreciated, thanks in advance!
+"
+"['neural-networks', 'backpropagation']"," Title: How does backpropagation know which weights to change?Body: I'm currently working on constructing a neural network from scratch (in JavaScript). I'm in the middle of working on the backpropagation, but there's something I don't understand: how does the backprop algorithm know which weights to change or which paths to take? The way I did it, it always took all of the paths/weights and changed them all. So how does the algorithm know which paths to take, which weights to change, and whether to add or subtract X amount from said weight?
+"
+"['reinforcement-learning', 'deep-rl', 'q-learning', 'reward-functions', 'discount-factor']"," Title: How to encourage the reinforcement-learning agent to reach the goal as quickly as possible, and what's the effect of discount factor?Body: I am trying to use reinforcement learning to solve a task and compare its performance to humans.
+The task is to find a single target in a fixed number of locations. At each step, the agent will pick one location, and check whether it contains the target. If the target is at this location, the agent will get a $+10$ reward and the trial ends; otherwise, the agent will get a hint at where the target is (with some stochastic noise), get a $-0.5$ reward, and it needs to pick another location in the next step. The trial will terminate if the agent cannot find the target within 40 steps (enough for humans). The goal is to solve the task as quickly and accurately as possible.
+I am now trying to solve this problem by Deep Q-Network with prioritized experience replay. With the discount factor $\gamma=0.5$, the agent can learn quickly and solve the task with an accuracy close to 1.
+My current questions are:
+
+- The accuracy level is already very high, but how to motivate the agent to find the target as quickly as possible?
+
+- What's the effect of $\gamma$ on the agent's task solving speed?
+
+
+I am considering $\gamma$ because it relates to the time horizon of the agent's policy, but I now have two opposing ideas:
+
+- With $\gamma \rightarrow 0$, the agent is trying to maximize the immediate reward. Since the agent will only receive a positive reward when it finds the target, $\gamma \rightarrow 0$ motivates the agent to find the target in the immediate future, which means to solve the task quickly.
+
+- With $\gamma \rightarrow 1$, the agent is trying to maximize the discounted sum of reward in the long term. This means to reduce the negative rewards as much as possible, which also means to solve the task quickly.
+
+
+Which one is correct?
+I have tried training the network with $\gamma=0.1, 0.5, 0.9, 0.99$, but the network can only learn with $\gamma=0.1, 0.5$.
+"
+"['convolutional-neural-networks', 'terminology', 'papers']"," Title: What is meant by ""spatial encoding"" in the context of convolutional neural networks?Body: Consider the following excerpt from the abstract of the research paper titled Squeeze-and-Excitation networks by Jie Hu et al.
+
+Convolutional neural networks are built upon the convolution operation, which extracts informative features by fusing spatial and channel-wise information together within local receptive fields. In order to boost the representational power of a network, several recent approaches have shown the benefit of enhancing spatial encoding.
+
+The authors used the term "spatial encoding" and the excerpt implies that enhancing spatial encoding has the benefit of increasing the representational power of a convolutional neural network.
+What is meant by the term "spatial encoding" in this context related to the convolutional neural networks?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'optimization', 'hyperparameter-optimization']"," Title: How do you decide that you have tested enough hyper-parameter combinations for a specific neural network architecture?Body: How do you decide that you have tested enough hyper-parameter combinations for a specific neural network architecture to discard it and move on to a new model?
+Do you have a structured (generic) approach? In practice, what gives you the necessary performance (e.g. >= 80% accuracy) the fastest (w.r.t. to your work-hours) assuming there is no SOTA that easily exceeds your requirements and extensive hyper-parameter optimization is infeasible?
+"
+"['reinforcement-learning', 'deep-learning', 'reference-request']"," Title: How should I choose a reinforcement learning algorithm?Body: I'm starting a new RL project. I'm familiar with Deep Q-Learning because of an old project where I used it, but I'm not sure I chose correctly back then.
+Why should or shouldn't I choose DQN, or any other RL algorithm/method for a problem? By which criteria should I judge an RL algorithm? Is there any set of guidelines to help you choose a specific RL algorithm for a problem?
+I did some research, but wasn't satisfied.
+"
+"['natural-language-understanding', 'text-summarization']"," Title: Among N documents, how to summarize the most unique content in each document?Body: I now have $N$ documents, which share common content and they have special unique content.
+Say I have $3$ legal documents related to the same person. Document $A$ is about land law, document $B$ is about company law and document $C$ is about marriage law.
+How can I extract the land, company and marriage content from each document respectively and skip the common personal information?
+It sounds like text-summarization but with a very different nature. Any idea is welcome.
+Edit: In my situation, $N$ varies and the nature of the unique content is unknown.
+"
+"['convolutional-neural-networks', 'object-detection', 'hyperparameter-optimization', 'convolution', 'pooling']"," Title: How do I choose the hyper-parameters for a model to detect different guitar chords?Body: I need to build a hand detector that recognizes the chord played by a hand on a guitar.
+I read this article Static Hand Gesture Recognition using Convolutional Neural Network with Data Augmentation that looks like what I need (hand gesture recognition).
+I think my task is (from my point of view) a little more difficult than that in the paper, because I think it is more difficult to distinguish between two chords than between a punch and a palm.
+What I don't understand clearly is how to choose the best parameters for this more complex task: is it better to have more or fewer convolutional layers? A higher or lower number of pooling layers? Max or average pooling?
+The input will be more or less like this one:
+
+There will be a first net (MobileNetV2 trained on EgoHands) that will find the bounding box, crop the image and then pass the saturated blending between the original image and the Frei&Chen edges to the second net (unfortunately I don't have a processed picture yet, I will post an example as soon as I get it)
+"
+"['deep-learning', 'feature-engineering', 'inductive-bias']"," Title: What can be an example for the prior knowledge used in Deep Learning systems?Body: It is known that machine learning algorithms expect feature engineering as an initial step. Now, consider the following paragraph, taken from 1.1 The deep learning revolution of the textbook named Deep learning with PyTorch by Eli Stevens, Luca Antiga, Thomas Viehmann, regarding the role of feature engineering in deep learning
+
+Deep learning, on the other hand, deals with finding such
+representations automatically, from raw data, in order to successfully
+perform a task. In the ones versus zeros example, filters would be
+refined during training by iteratively looking at pairs of examples
+and target labels. This is not to say that feature engineering has
+no place with deep learning; we often need to inject some form of
+prior knowledge in a learning system. However, the ability of a neural
+network to ingest data and extract useful representations on the basis
+of examples is what makes deep learning so powerful. The focus of
+deep learning practitioners is not so much on handcrafting those
+representations, but on operating on a mathematical entity so that
+it discovers representations from the training data autonomously.
+Often, these automatically created features are better than those that
+are handcrafted! As with many disruptive technologies, this fact has
+led to a change in perspective.
+
+The paragraph clearly says that we need to inject some form of prior knowledge into the learning system. What is a concrete example of such prior knowledge used in deep learning systems?
+"
+"['deep-learning', 'terminology', 'features', 'random-variable']"," Title: When can we call a feature ""hierarchical""?Body: Features in machine learning are the attributes of the elements of a data set. They are considered as random variables in probability.
+Consider the following excerpt from 1.1: The deep learning revolution of the textbook named Deep learning with PyTorch by Eli Stevens, Luca Antiga, Thomas Viehmann,
+
+On the right, with deep learning, the raw data is fed to an algorithm
+that extracts hierarchical features automatically, guided by the
+optimization of its own performance on the task; the results will be
+as good as the ability of the practitioner to drive the algorithm
+toward its goal.
+
+When can we call a feature hierarchical? Does it refer to a random variable that is (a function of, or) derived from some other random variables?
+"
+"['datasets', 'autoencoders', 'generative-model', 'variational-autoencoder', 'algorithm-request']"," Title: How to determine the quality of synthetic data?Body: I'm working on a VAE model to produce synthetic data of X-Ray diffraction spectrums.
+I try to figure out how I can measure the quality of the spectrums. The goal would be to produce synthetic data which is similar to the training data but also different from the training data. The spectrums should keep their characteristics, but should be different in terms of noise and intensity.
+I trained models which can produce those types of spectrums (because I checked some of them visually), but I don't know how to quantify the difference/similarity to the original data (1) and the difference between the produced synthetic spectrums within one dataset (2).
+Are there any methods to quantify these points?
+"
+"['python', 'markov-decision-process', 'markov-property']"," Title: Implementation of MDP in python to determine when to take action cleanBody: I am trying to model the following problem as a Markov decision process.
+
+In a steel melting shop of a steel plant, iron pipes are used. These pipes generate rust over time. Adding an anti-rusting solution can delay the rusting process. If there is too much rust, we have to mechanically clean the pipe.
+
+I have categorized the rusting states as StateA, StateB, StateC, StateD, with rusting increasing from A to D. StateA is an absolutely clean state with almost no rust.
+ StateA -> StateB -> StateC -> StateD
+ ∆ ∆ ∆ | | |
+ | | | | | |
+ Mnt Mnt Mnt | | |
+ | | |_clean_| | |
+ | |_______clean______| |
+ |_______________________________|
+ clean
+
+We can take 3 possible actions:
+
+- No Maintenance
+- Clean
+- Adding Anti Rusting Agent
+
+The transition probabilities are listed below. The state degrades from StateA towards StateD as rust builds up, and adding the anti-rusting agent lowers the probability of degradation:
+
+- StateA to StateB: 0.6 with No Maintenance, 0.5 with the anti-rusting agent
+- StateB to StateC: 0.7 with No Maintenance, 0.6 with the anti-rusting agent
+- StateC to StateD: 0.8 with No Maintenance, 0.7 with the anti-rusting agent
+- The Clean action moves any state to StateA with probability 1
+
+Rewards: StateA is 0.6, StateB is 0.5, StateC is 0.4, StateD is 0.3.
+The Clean action leads to a Maintenance (Mnt) state, which has a reward of 0.1. Cleaning increases productivity afterwards, which is good, but there is a shutdown during maintenance and hence a loss of production, so its reward is low.
+I am new to MDPs. It would be helpful if anyone could show me, with a Python implementation, how to derive from this MDP the decision about when to clean. Should we Clean at StateB, at StateC, or at StateD?
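+Here is a minimal value-iteration sketch of how I imagine encoding the problem (written only for illustration). To make it runnable I had to add assumptions that are not part of the description above: the leftover probability mass stays in the current state, StateD only leaves via Clean, the discount factor is 0.9, and the Maintenance step is folded into a one-step reward of 0.1 for the Clean action.
+states = ['A', 'B', 'C', 'D']
+rewards = {'A': 0.6, 'B': 0.5, 'C': 0.4, 'D': 0.3}
+
+# P[action][state] = list of (next_state, probability); remaining mass stays put (my assumption)
+P = {
+    'NoMaintenance': {'A': [('B', 0.6), ('A', 0.4)],
+                      'B': [('C', 0.7), ('B', 0.3)],
+                      'C': [('D', 0.8), ('C', 0.2)],
+                      'D': [('D', 1.0)]},
+    'AntiRust':      {'A': [('B', 0.5), ('A', 0.5)],
+                      'B': [('C', 0.6), ('B', 0.4)],
+                      'C': [('D', 0.7), ('C', 0.3)],
+                      'D': [('D', 1.0)]},
+    'Clean':         {s: [('A', 1.0)] for s in states},
+}
+
+def reward(s, a):
+    # being in a state earns its reward; cleaning earns the Maintenance reward instead (my assumption)
+    return 0.1 if a == 'Clean' else rewards[s]
+
+gamma = 0.9  # assumed discount factor
+V = {s: 0.0 for s in states}
+for _ in range(1000):  # value iteration
+    V = {s: max(reward(s, a) + gamma * sum(p * V[s2] for s2, p in P[a][s]) for a in P)
+         for s in states}
+
+policy = {s: max(P, key=lambda a: reward(s, a) + gamma * sum(p * V[s2] for s2, p in P[a][s]))
+          for s in states}
+print(V)
+print(policy)  # shows in which states 'Clean' comes out as the best action
+
+I am not sure these assumptions, or the sketch itself, are the right way to model the problem, which is part of what I am asking.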
+"
+"['deep-learning', 'pytorch', 'gradient-clipping']"," Title: What exactly happens in gradient clipping by norm?Body: Consider the following description regarding gradient clipping in PyTorch
+torch.nn.utils.clip_grad_norm_(parameters, max_norm, norm_type=2.0, error_if_nonfinite=False)
+
+
+Clips gradient norm of an iterable of parameters.
+The norm is computed over all gradients together as if they were
+concatenated into a single vector. Gradients are modified in-place.
+
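+For context, this is where the call typically sits in a training loop (a generic sketch I wrote for illustration; model, optimizer, loader and loss_fn are placeholder names, not from any specific code of mine):
+import torch
+
+# model, optimizer, loader and loss_fn are assumed to be defined elsewhere
+for inputs, targets in loader:
+    optimizer.zero_grad()
+    loss = loss_fn(model(inputs), targets)
+    loss.backward()
+    # rescales all gradients in place if their combined norm exceeds max_norm
+    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
+    optimizer.step()
+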
+Let the weights and gradients of the model, for a loss function $L$, be given as below
+\begin{align}
+w
+&= [w_1, w_2, w_3, \cdots, w_n] \\
+\triangledown
+&= [\triangledown_1, \triangledown_2, \triangledown_3, \cdots, \triangledown_n] \text{, where } \triangledown_i = \dfrac{\partial L}{\partial w_i} \text{ and } 1 \le i \le n
+\end{align}
+From the description, we need to compute the gradient norm, i.e. $||\triangledown||$.
+How to proceed after the step of finding the gradient norm? What is meant by clipping the gradient norm mathematically?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'python', 'facial-recognition']"," Title: What is the best open source python repo for facial recognition?Body: I am looking for best open source python repo for facial recognition. Best if it uses tensorflow backend. I know you can train images to recognize. Yolo can be used if trained on face. To name the person.
+But I wonder if there is any code where you can add new faces to database without training or minimum training. As new faces are added I don't want to train the network repeatedly. Also the less amount of face picture needed the better.
+If code is not available any guide or research paper will also be helpful. For example what approach can I take to make an app for a person who has difficulty remembering peoples name. So the app can take a small video or few photos with name and will be able to tell the persons name in the future. Neural network should not be retrained while adding new face to database if possible.
+"
+"['recommender-system', 'scikit-learn']"," Title: How matrix factorization helps with recommendations when it converges to the initial user-items matrix?Body: We can say that matrix factorization of a matrix $R$, in general, is finding two matrices $P$ and $Q$ such that $R \approx P.Q^{T}$ with some constraints on $P$ and $Q$. Looking at some matrix factorization algorithms on the internet like Scikit-Learn's Non-Negative Matrix Factorization I come to wonder how this works for recommendation systems. Generally with recommendation systems we have a user-item ratings matrix, let's denote it $R$, which is really sparse so when we look at datasets we find missing values, $NaN$. When I look at examples of using matrix factorization for recommender systems I find that the missing values are replaces with $0$. My question is, how do we get actual predictions on the items non rated by users when the dot product $P.Q^{T}$ is supposed to converge to $R$?
+I have tried with this simple matrix that I found here
+import numpy as np
+
+R = [
+ [5,3,0,1],
+ [4,0,0,1],
+ [1,1,0,5],
+ [1,0,0,4],
+ [0,1,5,4],
+ ]
+R = np.array(R)
+
+The algorithm I used is Scikit-Learn's, and no matter how I change the parameters, I can't seem to find a matrix that has actual values in place of the $0$s: it always finds a really good approximation of $R$. Maybe all the hyperparameter tuning I'm doing is leading to overfitting. And suppose there is a combination of parameters for which we don't get $0$s and we still minimize $||R-P.Q^{T}||$ (with regard to some norm) to a decent level: how can we be sure that the predictions are accurate? I mean, there must be many different combinations of parameters that both predict different values for the $0$s and minimize $||R-P.Q^{T}||$ to a decent level.
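+A minimal version of what I am running looks like the following (the hyperparameters here are placeholders; I have tried several combinations):
+import numpy as np
+from sklearn.decomposition import NMF
+
+# R is the matrix defined above, with 0s standing in for the missing ratings
+model = NMF(n_components=2, init='random', random_state=0, max_iter=500)
+P = model.fit_transform(R)   # user factors
+Qt = model.components_       # plays the role of Q^T in the notation above
+R_hat = P @ Qt
+print(np.round(R_hat, 2))    # the entries at the former 0s would be the 'predictions'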
+Thank you!
+"
+"['neural-networks', 'regression']"," Title: Best way to measure regression accuracy?Body: I'm asking because classification problems have very concrete metrics like accuracy that are totally transparent to understand.
+Regression models, on the other hand, seem to have a very large number of possible evaluation strategies, and to me at least it is not clear which (if any) of them is as reliable/interpretable as accuracy is in classification problems.
+Possible candidates (a toy sketch computing some of these follows the list):
+
+- Regular loss (e.g. MAE): MAE is potentially quite interpretable, but again interpretation depends upon distribution statistics, which vary across regression problems.
+- MAPE/relative loss: This is interesting and is potentially decently similar to accuracy. Yet it has obvious drawbacks, like extremely small true values causing loss values to explode, and there being no incorporation of overall distribution statistics for the output values.
+- Chi-squared test: I like the idea of this, but I have not seen it used at all for NN regression for some reason. I'm not sure why, and I'm curious if people think it would be a good idea to use it for that.
+- (Adjusted) R^2 coefficient: Another statistic that seems great in theory, but again I almost never see it used for NNs and I'm not sure why. This has the great advantage of being a 'bounded'/'normalized' metric like accuracy, and in theory it should be just as interpretable. Why is it not used for NNs?
+
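+To make the candidates above concrete, here is a toy sketch computing the first, second and fourth with scikit-learn (assuming a reasonably recent version; the chi-squared test would need scipy instead):
+import numpy as np
+from sklearn.metrics import (mean_absolute_error,
+                             mean_absolute_percentage_error, r2_score)
+
+# toy values, purely for illustration
+y_true = np.array([3.0, 5.0, 2.5, 7.0])
+y_pred = np.array([2.8, 5.4, 2.9, 6.5])
+
+print(mean_absolute_error(y_true, y_pred))             # regular loss (MAE)
+print(mean_absolute_percentage_error(y_true, y_pred))  # MAPE / relative loss
+print(r2_score(y_true, y_pred))                        # (unadjusted) R^2
+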
+"
+"['computer-vision', 'papers', 'math', 'convolution', 'saliency-map']"," Title: In this paper, if region $R_{2}$ moves in a sliding window manner, won't the saliency map have a smaller size than the original image?Body: In the paper Salient Region Detection and Segmentation, I have a question pertaining to section 3 on the convolution-like operation being performed. I had already asked a few questions about the paper previously, for which I received an answer here. In the answer, the author (JVGD) mentions the following:
+
+So for each pixel in the image you overlap on top $R_{1}$ and then $R_{2}$ on top of $R_{1}$, then you compute the distance $D$ for those 2 regions to get the saliency value of that pixel, then slide the $R_{1}$ and $R_{2}$ regions in a sliding window manner (which is basically telling you to implement it with convolution operation).
+
+Regarding the above, I had the following question:
+If the region $R_{2}$ moves in a sliding window manner, won't the saliency map (mentioned in section 3.1) have a smaller size than the original image (like in convolution the output image is smaller)? If this is so, wouldn't it be impossible to add the saliency maps at different scales since they each have different sizes?
+The following edit re-explains the question in more detail:
+
+In the animation above, you can see a filter running across an image. For each instance of the filter, some calculations are happening between the pixels of the filter and the image. The result of each calculation becomes one pixel in the output image (denoted by "convolved feature"). Here, the output image is smaller than the input image because there are only 9 instances of the filter. From what I understood of the salient region operation, a similar process is being followed i.e. a filter runs across an image, some calculations happen, and the result of each calculation becomes one pixel in the output image (saliency map). Hence, won't the saliency map have a smaller size than the original image? Furthermore, when the filter size is 3 x 3, the output image size is 3 x 3. However, if the filter size was 5 x 3, the output image size would only be 1 x 3. Clearly, the output image size is different for different filter sizes. This makes the output images (saliency maps) impossible to add. There is clearly something I am missing / misunderstanding here, and clarity on the same would be much appreciated.
+P.S. There is no indication of padding or any operation of that sort in the research paper, so I don’t want to assume anything because the calculations would then be wrong.
+"
+"['terminology', 'books', 'bayes-theorem', 'probability-theory']"," Title: What do we mean by ""orderly opinions"" in this sentence in the context of Bayes theorem?Body: In this page, it's written (emphasis mine)
+
+If probabilities are thought to describe orderly opinions, Bayes theorem describes how the opinions should be updated in the light of new information
+
+What is your understanding/definition of "orderly" opinion?
+Maybe something like: a probability that is not arbitrarily chosen but well-founded and explainable?
+"
+"['neural-networks', 'convolutional-neural-networks', 'keras', '3d-convolution']"," Title: Improving validation losses and accuracy for 3D CNNBody: I have used a 3D CNN architecture, for detecting the presence of a particular promoter (MGMT), by using FLAIR brain scans. (64 slices per patient). The output is supposed to be binary (0/1).
+I have gone through the pre-processing properly, and used stratification after splitting the "train" dataset into train and validation sets, (80-20 ratio). My model initialisation and training kernels look like this:
+def get_model(width=128, height=128, depth=64):
+    """Build a 3D convolutional neural network model."""
+
+    inputs = keras.Input((width, height, depth, 1))
+
+    x = layers.Conv3D(filters=64, kernel_size=3, activation="relu")(inputs)
+    x = layers.MaxPool3D(pool_size=2)(x)
+    x = layers.BatchNormalization()(x)
+
+    x = layers.Conv3D(filters=64, kernel_size=3, activation="relu")(x)
+    x = layers.MaxPool3D(pool_size=2)(x)
+    x = layers.BatchNormalization()(x)
+
+    x = layers.Conv3D(filters=128, kernel_size=3, activation="relu")(x)
+    x = layers.MaxPool3D(pool_size=2)(x)
+    x = layers.BatchNormalization()(x)
+
+    x = layers.Conv3D(filters=256, kernel_size=3, activation="relu")(x)
+    x = layers.MaxPool3D(pool_size=2)(x)
+    x = layers.BatchNormalization()(x)
+
+    x = layers.GlobalAveragePooling3D()(x)
+    x = layers.Dense(units=512, activation="relu")(x)
+    x = layers.Dropout(0.3)(x)
+
+    outputs = layers.Dense(units=1, activation="sigmoid")(x)
+
+    # Define the model.
+    model = keras.Model(inputs, outputs, name="3dcnn")
+    return model
+
+
+# Build model.
+model = get_model(width=128, height=128, depth=64)
+model.summary()
+
+Compile model:
+initial_learning_rate = 0.0001
+lr_schedule = keras.optimizers.schedules.ExponentialDecay(
+ initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True
+)
+model.compile(
+ loss="binary_crossentropy",
+ optimizer=keras.optimizers.Adam(learning_rate=lr_schedule),
+ metrics=["acc"],
+)
+
+# Define callbacks.
+checkpoint_cb = keras.callbacks.ModelCheckpoint(
+ "Brain_3d_classification.h5", save_best_only=True,monitor = 'val_acc',
+ mode = 'max', verbose = 1
+)
+early_stopping_cb = keras.callbacks.EarlyStopping(monitor="val_acc", patience=20,mode = 'max', verbose = 1,
+ restore_best_weights = True)
+
+# Train the model, doing validation at the end of each epoch
+epochs = 60
+model.fit(
+ train_dataset,
+ validation_data=valid_dataset,
+ epochs=epochs,
+ shuffle=True,
+ verbose=2,
+ callbacks=[checkpoint_cb, early_stopping_cb],
+)
+
+This is my first time ever working with a 3D CNN, and I used this Keras webpage for the format: https://keras.io/examples/vision/3D_image_classification/
+The (max) validation accuracy in my case was about 54%. I tried reducing the initial learning rate, and for 0.00001 I got to a max of 66.7%. For learning rates of 0.00005 and 0.00002, I got max accuracies of about 60% and 62%.
+Accuracy vs. epoch plots for learning rates 0.0001, 0.00005, 0.00002 and 0.00001:
+
+
+
+
+It does seem like reducing the initial learning rate has a positive effect on accuracy, although the accuracy is still very low.
+What other parameters can I tune to expect a better accuracy? And is it okay to just keep reducing the initial learning rate until we achieve a targeted accuracy?
+I know this is a rather broad question, but I am quite confused as to how we should approach increasing the accuracy in the case of CNNs (especially 3D ones), where there just seems to be a lot going on. Do I change something in my initialisations? Add more layers? Or change the parameters? Do I decrease or increase them? With so many things going on, I don't think trying every combination and repeating the training process each time is an efficient idea...
+Full notebook (including pre-processing steps): https://www.kaggle.com/shivamee682003/3d-image-preprocessing-17cd03/edit
+"
+"['reinforcement-learning', 'comparison', 'q-learning', 'value-functions', 'policy-iteration']"," Title: When to use Value Iteration vs. Policy IterationBody: Both value iteration and policy iteration are General Policy Iteration (GPI) algorithms. However, they differ in the mechanics of their updates. Policy Iteration seeks to first find a completed value function for a policy, then derive the Q function from this and improve the policy greedily from this Q. Meanwhile, Value Iteration uses a truncated V function to then obtain Q updates, only returning the policy once V has converged.
+What are the inherent advantages of using one over the other in a practical setting?
+"
+"['convolutional-neural-networks', 'pytorch', 'pooling']"," Title: Is there any recommended way to perform pooling in this context?Body: Suppose I have three batches of feature maps, each of size $180 \times 100 \times 100$. I want to concatenate all these feature maps channel-wise, and then resize them into a single feature map. The batch size is equal to 10.
+Consider the following code in PyTorch
+import torch
+from torch import nn
+
+x1 = torch.randn(10, 180, 100, 100)
+x2 = torch.randn(10, 180, 100, 100)
+x3 = torch.randn(10, 180, 100, 100)
+
+
+pool1 = nn.AvgPool3d(kernel_size = (361, 1, 1), stride= 1)
+pool2 = nn.AvgPool3d(kernel_size = 1, stride= (3, 1, 1))
+
+final_1_x = pool1(torch.cat((x1, x2, x3), 1))
+final_2_x = pool2(torch.cat((x1, x2, x3), 1))
+
+print(final_1_x.shape)
+print(final_2_x.shape)
+
+and its output is
+torch.Size([10, 180, 100, 100])
+torch.Size([10, 180, 100, 100])
+
+You can observe that both types of pooling I did are able to give a feature map of the desired size. But the first one takes a large amount of time with unsatisfactory results, and the second one ignores many values in the input feature maps. I don't know whether it is okay to ignore them or not.
+I want to know the recommended way to perform pooling in order to get the desired size of feature maps. Is there any such recommended way to perform pooling?
+"
+"['channel', 'feature-maps']"," Title: What are the recommended ways to change shape of feature maps channel wise other than using Convolutional neural networks?Body: Suppose I have a feature map with size $C_1 \times H \times W$. And I need to convert it into a feature map of size $C_2 \times H \times W$.
+One way to do this is to use convolutional neural networks as Conv2d($C_1, C_2$)
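+As a concrete instance of that option, in PyTorch a $1 \times 1$ convolution does the channel projection (the sizes here are placeholders):
+import torch
+from torch import nn
+
+x = torch.randn(10, 180, 100, 100)        # batch of feature maps with C1 = 180
+proj = nn.Conv2d(180, 64, kernel_size=1)  # C2 = 64; H and W are unchanged
+print(proj(x).shape)                      # torch.Size([10, 64, 100, 100])
+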
+I want to know whether there are any other ways in the literature to perform the desired operation.
+"
+"['neural-networks', 'comparison', 'prediction', 'feedforward-neural-networks', 'matlab']"," Title: Closed networks vs Networks with a removed delay to predict new dataBody: I've come across two types of neural networks to predict, both from Matlab, the closed structure and the net that removes one delay to find new data.
+From Matlab's app generated scripts we see:
+
+% Closed Loop Network
+% Use this network to do multi-step prediction.
+% The function CLOSELOOP replaces the feedback input with a direct
+% connection from the output layer.
+
+netc = closeloop(net);
+netc.name = [net.name ' - Closed Loop'];
+view(netc)
+[xc,xic,aic,tc] = preparets(netc,{},{},T);
+yc = netc(xc,xic,aic);
+closedLoopPerformance = perform(net,tc,yc)
+
+
+% Step-Ahead Prediction Network
+% For some applications it helps to get the prediction a timestep early.
+% The original network returns predicted y(t+1) at the same time it is
+% given y(t+1). For some applications such as decision making, it would
+% help to have predicted y(t+1) once y(t) is available, but before the
+% actual y(t+1) occurs. The network can be made to return its output a
+% timestep early by removing one delay so that its minimal tap delay is now
+% 0 instead of 1. The new network returns the same outputs as the original
+% network, but outputs are shifted left one timestep.
+
+nets = removedelay(net);
+nets.name = [net.name ' - Predict One Step Ahead'];
+view(nets)
+[xs,xis,ais,ts] = preparets(nets,{},{},T);
+ys = nets(xs,xis,ais);
+stepAheadPerformance = perform(nets,ts,ys)
+
+My question is: What is the real difference between them?
+Can one use them equivalently? If yes, why? I mean, even though the structure, or how they are built, could be very, very different, e.g. one is an apple, the other is a grape.
+As far as I understand, both can return new data if one codes them for that. For example, taking the closed net, one can predict 10 new values. Taking the net that removes one delay, one can predict one new value, but if one does this recursively 9 times, one can get the 10 new values. Is there a problem with using this last net in that way?
+On another side, running both codes, as they are now (this changes depending on the code one works on), yields very different performances. Why?
+Update:
+I've checked this page https://www.mathworks.com/matlabcentral/answers/297187-neural-network-closed-loop-vs-open-loop, and in the answer by Greg Heath, we see
+[...]
+
+OPENLOOP: The desired output, AKA the delayed target, is used as an additional input. The OL net will produce output for the common time extent of the input and target.
+CLOSELOOP: The delayed target input is replaced by a direct delayed output connection. The CL net will produce output for the time extent of the input.
+
+[...]
+"The desired output, AKA the delayed target, is used as an additional input." how is this?
+"The OL net will produce output for the common time extent of the input and target." and this?
+"The CL net will produce output for the time extent of the input." What does this mean?
+"
+"['reference-request', 'history', 'perceptron', 'xor-problem', 'ai-winter']"," Title: Did the unsolved XOR problem in ""Perceptrons: An Introduction to Computational Geometry"" 1969 book really cause the winter of the AI in 1974?Body: Winter of AI definition:
+
+periods of reduced funding and interest in artificial intelligence research, due to unmet expectations after a period of hype. There have been at least two major AI winters in 1974-1980 and 1987-1993.
+
+I read this question: Did Minsky and Papert know that multi-layer perceptrons could solve XOR?
+
+Minsky and Papert show that a perceptron can't solve the XOR problem. This contributed to the first AI winter, resulting in funding cuts for neural networks. However, now we know that a multilayer perceptron can solve the XOR problem easily.
+
+Does someone have evidence to support the claim that the "unsolved XOR problem" in Minsky and Papert's book caused the first AI winter? Or was it a succession of unsolved problems for neural networks and a loss of interest that caused the funding cuts?
+"
+"['terminology', 'search', 'evolutionary-algorithms', 'neuroevolution']"," Title: What does ""unknown search spaces"" mean in the context of Evolutionary Algorithms?Body: In the article Multi-Verse Optimizer: a nature-inspired algorithm for global
+optimization (DOI 10.1007/s00521-015-1870-7), it's written
+
+The results of the real case studies also demonstrate the potential of MVO in solving real problems with unknown search spaces
+
+where MVO stands for Multi-Verse Optimizer.
+What does "unknown search spaces" mean in the context of Evolutionary Algorithms and, especially, in the context of the Multi-Verse Optimizer?
+"
+"['neural-networks', 'training', 'prediction', 'training-datasets']"," Title: Should I train a neural network with data with or without a constraint?Body: I want to train a Neural Network (NN) using a dataset. I want to use the NN model as a prediction function in one algorithm. However, in the algorithm, any data that does not meet a specific constraint (say some parameter $\theta <10$) would not be included.
+So, my question is, while generating the training data, should I include all kinds of inputs irrespective of the constraint, or should I generate only those data which meet the constraint $\theta <10$?
+Currently, I am training on data that meet the constraint ($\theta <10$), and I am getting an average error of around $6\%$. Ideally, I want it below $3\%$.
+I am new to NN model training. Any kind of pointers would be helpful.
+"
+"['neural-networks', 'implementation', 'yolo', 'bounding-box']"," Title: Different equations for Yolov3 in courses/ articles and Darknet GitHub code?Body: I am confused by the equations for bounding boxes I find online. Some articles say that
+box_width = anchor_width * exp(residual_value_of_box_width)
+
+and the coordinates have a sigmoid function.
+Eg: https://www.kdnuggets.com/2018/05/implement-yolo-v3-object-detector-pytorch-part-1.html
+https://christopher5106.github.io/object/detectors/2017/08/10/bounding-box-object-detectors-understanding-yolo.html
+
+But Darknet code and GitHub have equations dividing coordinates and box width with image width.
+For example, https://github.com/AlexeyAB/darknet/blob/master/build/darknet/x64/data/voc/voc_label.py
+def convert(size, box):
+ dw = 1./size[0]
+ dh = 1./size[1]
+ x = (box[0] + box[1])/2.0
+ y = (box[2] + box[3])/2.0
+ w = box[1] - box[0]
+ h = box[3] - box[2]
+ x = x*dw
+ w = w*dw
+ y = y*dh
+ h = h*dh
+ return (x,y,w,h)
+
+If the image width is used, then what is the use of the anchor box width/height values in the yolov3.cfg file? I can't find where they are used in the source code, other than to generate the anchors file.
+"
+"['machine-learning', 'natural-language-processing', 'recommender-system']"," Title: A recommender system based on millions of fields including text and numberBody: I want to train a model based on millions of fields, including text and number, that are stored in a SQL database and recommend a perfect match based on some inputs. Now, which algorithm is the best for this problem?
+For instance, consider this database pattern:
+
+| Title  | Content | Volume | Count |
+|--------|---------|--------|-------|
+| First  | row1    | 5.36   | 34    |
+| Second | row2    | 36.1   | 239   |
+| ...    | ...     | ...    | ...   |
+"
+"['machine-learning', 'training', 'cross-validation', 'testing']"," Title: How to decide a train-test split?Body: In almost every ML model, a train-test (or train-test-val split) is critical to assess the model's performance. However, I have always wondered what the rationale is to decide a particular train-test split. I've seen that some people like an 80-20 split, others opt for 90-10, but why? Is it simply a matter of preference? Also, why not 70-30 or 60-40, what is the best way to decide?
+"
+"['machine-learning', 'computer-vision', 'object-detection', 'algorithm-request']"," Title: What kind of algorithm or approach can I use to find a specific type of object in an image?Body: What kind of algorithm or approach can I use to find a specific type of object in an image?
+In particular, I am interested in finding an object like a windmill in an image taken, for example, from Google Maps. The image could be something like this
+
+"
+"['deep-learning', 'transformer', 'attention']"," Title: Is the multi-head attention in the transformer a weighted adjacency matrix?Body: Are multi-head attention matrices weighted adjacency matrices?
+The job of the multi-head-attention mechanism in transformer models is to determine how likely a word is to appear after another word. In a sense this makes the resulting matrix a big graph with nodes and edges, where a node represents a word and an edge the likelihood to appear after that. So basically it is an adjacency matrix that is created.
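+For reference, by "the matrix" I mean the per-head attention weight matrix $A = \operatorname{softmax}\!\left(QK^{\top}/\sqrt{d_k}\right)$, whose entry $(i, j)$ is the weight with which token $i$ attends to token $j$.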
+"
+"['comparison', 'logic', 'symbolic-ai', 'reasoning']"," Title: What is the difference between ""Syllogism"" and ""Law of Syllogism""?Body: The logical arguments are the basis for Artificial Intelligence. That is why I picked AI community to ask my question.
+Reading from Wikipedia,
+
+A syllogism is a kind of logical argument that applies deductive
+reasoning to arrive at a conclusion based on two propositions that are
+asserted or assumed to be true.
+
+Again from Wikipedia, deductive reasoning,
+
+is the process of reasoning from one or more statements (premises) to
+reach a logical conclusion.
+
+As part of deductive reasoning, it lists Modus Ponens, Modus Tollens, and Law of syllogism where Law of syllogism is defined as
+
+In term logic the law of syllogism takes two conditional statements
+and forms a conclusion by combining the hypothesis of one statement
+with the conclusion of another.
+
+Based on these articles, is it safe to assume that Syllogism uses Modus ponens, Modus tollens, and the Law of syllogism to arrive at a conclusion? Does that mean "Law of syllogism" is part of "syllogism" or is it something different?
+P.S.
+If this community is not appropriate for this kind of question, kindly guide me to the correct StackExchange or other communities.
+"
+"['neural-networks', 'deep-learning', 'gradient-descent', 'gradient-clipping']"," Title: What is the effect of gradient clipping by norm on the performance of a model?Body: It is recommended to apply gradient clipping by normalization in case of exploding gradients. The following quote is taken from here answer
+
+One way to assure it is exploding gradients is if the loss is unstable
+and not improving, or if loss shows NaN value during training.
+Apart from the usual gradient clipping and weights regularization that
+are recommended...
+
+But I want to know the effect of gradient clipping by normalization on the performance of the model in normal or general cases.
+Suppose I have a model and I run it for up to 800 epochs without gradient clipping because there are no exploding gradients. If I run the same model with gradient clipping by norm, even though it is not necessary, does the performance of the model decline?
+"
+"['deep-learning', 'objective-functions', 'mini-batch-gradient-descent']"," Title: When would it make sense to perform a gradient descent step for each term of a loss function with multiple terms?Body: I am training a neural network using a mini-batch gradient descent algorithm.
+Now, consider the following loss function, which is composed of 2 terms.
+$$L = L_{\text{MSE}} + L_{\text{regularization}} \label{1}\tag{1}$$
+As far as I understand, usually, we update the weights of a neural network only once per mini-batch, even if the loss function is composed of 2 or more terms, like in equation \ref{1}. So, in this approach, you calculate the 2 terms, add them, and then update weights once based on the sum.
+My question is: rather than summing the 2 terms of the loss function $L$ in equation \ref{1} and computing a single gradient for $L$, couldn't we separately compute the gradient for both $L_{\text{MSE}}$ and $L_{\text{regularization}}$, and then update the weights of the neural network twice? So, in this case, we would update the weights twice for each mini-batch. When would this make sense? Of course, my question also applies to the case where $L$ is composed of more than 2 terms.
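+To make the two options concrete, here is a minimal PyTorch-style sketch (model, optimizer, x, y, mse_loss_fn and reg_loss_fn are placeholder names):
+# Option 1: the usual scheme, one update per mini-batch on the summed loss
+optimizer.zero_grad()
+loss = mse_loss_fn(model(x), y) + reg_loss_fn(model)
+loss.backward()
+optimizer.step()
+
+# Option 2: the alternative I am asking about, one update per term
+optimizer.zero_grad()
+mse_loss_fn(model(x), y).backward()
+optimizer.step()
+
+optimizer.zero_grad()
+reg_loss_fn(model).backward()
+optimizer.step()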
+"
+"['deep-rl', 'q-learning', 'dqn', 'transfer-learning']"," Title: Transferring a Q-learning policy to larger instancesBody: How do I best transfer and fine-tune a Q-learning policy that was trained on small instances to large instances?
+Some more details on the problem:
+I am currently trying to derive a decision policy for a dynamic vehicle dispatching problem.
+In the problem, a decision point occurs whenever a customer requests delivery. Customers expect to be delivered within a fixed amount of time, and the objective is to minimize the delay. The costs of each state are the delays realized in that state (i.e., the delays of the customers that were delivered between the last two states).
+Some details on the policy:
+I used a Q-learning policy (Dueling Deep Q Network) to estimate the discounted future delay of assigning an order to a vehicle. The policy was trained on a small-scale instance (5 vehicles, ~100 customers) using epsilon-greedy exploration and a prioritized experience replay. I did not use temporal-difference learning (as the cost of a decision is only revealed later in the process) but updated the policy after simulating the entire instance.
+My problem:
+As it is, the policy transfers well to instances of larger sizes (up to 100 vehicles and ~2000 customers) and I could do without fine-tuning. However, there certainly is room for the policy to improve. Unfortunately, when I try to fine-tune the initial model on the larger instance, the retrained model becomes worse over the training steps with regards to minimizing delay.
+I suspect that large gradients play a role here as the initial q-values, trained on the small instance, are obviously way off for the large instances (due to the increase in customers).
+Is there a standard approach to deal with such a transfer problem or do you have any suggestions?
+"
+['reinforcement-learning']," Title: Is it possible to tell the Reinforcement Learning agent some rules directly without any constraints?Body: I try to apply RL for a control problem, and I intend to either use Deep Q-Learning or SARSA.
+I have two heating storage systems with one heating device, and the RL agent is only allowed to heat up 1 for every time slot. How can I do that?
+I have two continuous variables $x(t)$ and $y(t)$, where $x(t)$ quantifies the degree of maximum power for heating up storage 1 and $y(t)$ quantifies the degree of maximum power for heating up storage 2.
+Now, IF $x(t) > 0$, THEN $y(t)$ has to be $0$, and vice versa, with $x(t)$ and $y(t)$ taking values in $\{0\} \cup [0.25, 1]$. How can I tell this to the agent?
+One way would be to adjust the actions after the RL agent has decided, using a separate control algorithm that overrules the agent's actions. I am wondering if and how this can also be done directly. I'll appreciate every comment.
+Update: Of course I could do this with a reward function. But is there not a direct way of doing this? This is actually a so-called hard constraint: the agent is not allowed to violate it at all, as violating it is technically not feasible. So it would be better to tell the agent directly not to do this (if that is possible).
+Reminder: Can anyone tell me more about this issue? I'll highly appreciate any further comment and will be quite thankful for your help. I will also award a bounty for a good answer.
+"
+"['reinforcement-learning', 'policy-gradients']"," Title: Can we learn a policy network via a sequence of manually determined actions?Body: In policy gradients, is it possible to learn the policy if the chain of actions is selected and performed manually/externally (e.g. by myself or by someone else who I have no influence over)?
+For example, we have four actions, and at the beginning I choose action 2 and we end up in a given state, then I choose action 4 and we end up in another state, etc. (the actions can follow some logic or not, but the question is general; some of the actions will end up with positive rewards).
+Can we learn any meaningful policy network from such a chain of actions?
+"
+"['generative-adversarial-networks', 'overfitting', 'semi-supervised-learning']"," Title: Validation set performance increasing even after seemingly overfit on training setBody: I am training a semi-supervised GAN network using data from multiple subjects. I separated the labeled and unlabeled data based on my subjects, so there is no leakage, while having much more unlabeled data than labeled data. After few epochs training accuracy hits 100% which normally indicates overfitting, however the performance on the validation and test sets keeps increasing for 200-300 epochs. Is this considered overfitting and is there an explanation for this behavior?
+
+"
+"['image-recognition', 'transformer']"," Title: What is the purpose of hard distillation?Body: In order to get a smaller model, one often uses larger model, that performs reasonably well on the data as a teacher, and uses the information from large model to train the smaller one.
+There are several strategies to do this:
+
+- Soft distillation
+Given logits of the teacher one adds the KL-divergence between the student logits and teacher logits to the loss:
+$$
+\mathcal{L}_{loss} = (1 - \alpha) \mathcal{L}_{BCE} (y_{student}, y_{true}) +
+\lambda \mathcal{L}_{KL} (y_{student}, y_{teacher})
+$$
+The intuition behind this approach is clear: logits are more informative than a single target label and seemingly allow for faster training.
+
+- Hard distillation
+One adds the BCE between the student logits and the teacher model's outputs, as if the latter were true labels.
+$$
+\mathcal{L}_{loss} = (1 - \alpha) \mathcal{L}_{BCE} (y_{student}, y_{true}) +
+\lambda \mathcal{L}_{BCE} (y_{student}, y_{teacher})
+$$
+
+
+And the benefit of the latter approach is unclear to me. For a perfect teacher, there will be no difference from the vanilla training procedure, and in the case where the teacher makes mistakes, we will optimize the wrong objective.
+Despite these concerns, it was shown experimentally in several papers, including DeiT, that this objective can improve performance. Even more, it can be better than soft distillation.
+
+Why can this be the case?
+"
+"['reference-request', 'image-processing', 'image-generation']"," Title: What is the name of the method for the smart extend of image surroundings?Body: I'm looking for the name of the method (or algorithms family, or research body) used for the smart extend of image surroundings.
+For example, the method I'm looking for would take this image:
+
+And smartly extend it into:
+
+So that the grass and the surrounding scenery are all generated to fill the desired area.
+Generally speaking, what I'm looking for should smartly generate surroundings including entities such as tree trunks and branches, grass patterns, mountains slopes, clouds patterns, water bodies like puddles, shrubs, stones on ground, and so on.
+Also, it would be nice to know how mature is this technology, i.e. how well can different entities be smartly extended.
+Note that Seam Carving is a candidate (used in Photoshop under the name Content-Aware Scale (see this for example)), but I'm looking for something smarter, I think, and I'm not really sure if it can do what I'm looking for.
+"
+"['deep-learning', 'history']"," Title: What are the defining moments that make community realise the potential of deep learning?Body: Consider the following paragraph from the chapter named pre-trained models from the textbook titled Deep Learning with PyTorch by Eli Stevens et al.
+
+The AlexNet architecture won the 2012 ILSVRC by a large margin, with a
+top-5 test error rate (that is, the correct label must be in the top 5
+predictions) of 15.4%. By comparison, the second-best submission,
+which wasn’t based on a deep network, trailed at 26.2%. This was a
+defining moment in the history of computer vision: the moment when the
+community started to realize the potential of deep learning for vision
+tasks. That leap was followed by constant improvement, with more
+modern architectures and training methods getting top-5 error rates as
+low as 3%.
+
+This paragraph mentions a moment that made the community realize the potential of deep learning. Are there any other similar defining moments in the history of deep learning?
+"
+"['reinforcement-learning', 'deep-rl']"," Title: Training a reinforcement learning agent that can decide to continue or end the gameBody: I am trying to use reinforcement learning to let an agent learn simultaneously how to play a game and when to end a game.
+The task is to find a single target in a grid of locations. At each time step, the agent needs to make a series of decisions:
+
+- It believes the target is at the currently inspected location. End the trial and see whether the result is correct.
+- It believes the target is not at the currently inspected location. It then needs to pick another location to check at the next timestep.
+
+If the agent chooses decision #2, the environment will give some hints on where the target is, with some stochastic noise. The noise level depends on the distance between the true target location and the currently inspected location: the shorter the distance, the lower the noise. The goal is to let the agent perform the task as fast and as accurately as possible, so the agent needs to learn when to stop the trial and how to select the next inspected location given the hints. The agent also has an internal memory, so it won't select previously inspected locations. I would like to compare the agent's speed-accuracy trade-off to humans'.
+In a previous simplified version of the task, the environment ended the trial once the agent hit the target location, so the agent only needed to learn how to choose the next location to inspect. I used a simple Q-network and it works well. I also found that the network should be a fully convolutional network, because fully connected layers are not spatially shift-invariant.
+Now how can I modify the existing convolutional network to satisfy the new task requirement? Or should I use a new network architecture?
+"
+"['deep-learning', 'max-pooling', 'pooling', 'average-pooling']"," Title: Is there any reason behind bias towards max pooling over avg pooling?Body: Consider the following excerpt taken from the chapter named Using convolutions to generalize from the textbook titled Deep Learning with PyTorch by Eli Stevens et al.
+
+Downsampling could in principle occur in different ways. Scaling an
+image by half is the equivalent of taking four neighboring pixels as
+input and producing one pixel as output. How we compute the value of
+the output based on the values of the input is up to us. We could
+
+- Average the four pixels. This average pooling was a common approach early on but has fallen out of favor somewhat.
+- Take the maximum of the four pixels. This approach, called max pooling, is currently the most commonly used approach, but it has
+a downside of discarding the other three-quarters of the data.
+- Perform a strided convolution, where only every $N$-th pixel is calculated. A $3 \times 4$ convolution with stride 2 still incorporates input
+from all pixels from the previous layer. The literature shows promise
+for this approach, but it has not yet supplanted max pooling.
+
+
+The paragraph mentions that the research community is biased towards max pooling over average pooling. Is there any rational basis for such a bias?
+"
+['natural-language-processing']," Title: Probability that two words appear in the same sentenceBody: How can I know if two words are likely to appear in the same sentence in (British) English (or English in general to enhance the chance of getting a result).
+As I don't have access to a powerful machine, is there any relevant website? Or a pretrained model I can use? Or something else?
+"
+"['implementation', 'intelligence-testing', 'turing-test', 'testing']"," Title: Are there standardized forms of the Turing Test?Body: Most computer science instructors will tell you that the Turing Test is more a theoretical or conceptual thought experiment than an actual exam that someone (or something!) can formally sit and receive a score on. A thread here on AI Stack Exchange confirms this.
+Considering all of this, have there been any significant attempts to create a standardized form of a Turing Test that could be rolled out widely and used to evaluate various AI constructs? Obviously, none of these standardized testing systems could be considered The Only True Turing Test (TM), but perhaps they could have their place in research as a way to benchmark or categorize various algorithms or evaluate the work of students.
+For example, I'm imagining hearing a graduate student muttering the following:
+
+My AI construct passes the Johnson-Smith Turing Test 1992 and the Hernandez-Dorfer 2017, but it's still failing the Takahashi-2003 Advanced Elite. What am I doing wrong? Maybe if I tweak this routine here. [click]. Darn, still fails.
+
+Either a fully automated test (e.g. just login and click to sit the exam) or a standardized system involving trained human judges (e.g. a la 21st century medical board exams) who apply standardized written rubrics would be acceptable as long as the criteria for passing are standardized rather than left to the judgment of untrained personnel or random passersby.
+"
+"['machine-learning', 'reinforcement-learning', 'markov-decision-process']"," Title: Can RL still learn in a scenario where current state and the next state are independant?Body: I am trying to implement reinforcement learning into my real-world problem.
+One thing making me hesitant to apply RL is that this real-world problem of mine is unique in the way every state is independent of the others. The action taken by the agent at timestep t is the only thing that affects the state at the next timestep. (For example, in the cycle of "state-action-reward-next state", the "next state" depends solely on the "action" and not on the "state".)
+I am wondering if RL would still be able to learn in this scenario. If not, what other methods could be an option?
+"
+"['neural-networks', 'training', 'reference-request', 'convergence', 'gradient-methods']"," Title: References for the convergence of gradient-based algorithms for training neural networksBody: I'm looking for some good references that give convergence results of training neural networks. I'm decently familiar with works that analyze the convergence of SGD, and, in particular, I really like this paper Optimization Methods for Large-Scale Machine Learning. I'm looking for works that talk about the convergence of SGD (or possibly a different gradient-based algorithm) specifically for neural networks.
+An example of the type of paper I'm looking for is Convergence Analysis of Two-layer Neural Networks with ReLU Activation.
+"
+"['reference-request', 'evolutionary-algorithms', 'benchmarks', 'moea', 'nsga-2']"," Title: Is there a benchmark for multi-objective evolutionary algorithms?Body: I'm working on a project for an evolutionary algorithms course, and the problem we're trying to solve is multi-objective. We'll use NSGA-II but we also wanted to compare with some other MOEAs, however, we haven't been able to find good comparisons/benchmarks of these algorithms, so we don't really know how to decide.
+Any insights will be appreciated.
+"
+"['reinforcement-learning', 'reference-request', 'proofs', 'convergence', 'temporal-difference-methods']"," Title: Why does TD (0) converge to the MLE solution of the Markov model?Body: Why does TD (0) converge to the MLE solution of the Markov model?
+Let's take the Example 6.4 in Sutton and Barto's book as an example.
+
+Example 6.4: You are the Predictor Place yourself now in the role of the predictor of returns for an unknown Markov reward process. Suppose you observe the following eight episodes:
+$A,0,B,0;\; B,1;\; B,1;\; B,1;\; B,1;\; B,1;\; B,1;\; B,0$
+...
+But what is the optimal value for the estimate $V(A)$ given this data? Here there are two reasonable answers. One is to observe that $100 \%$ of the times the process was in state $A$ it traversed immediately to $B$ (with a reward of $0$); and because we have already decided that $B$ has value $\frac{3}{4}$, therefore $A$ must have value $\frac{3}{4}$ as well. One way of viewing this answer is that it is based on first modeling the Markov process, in this case as shown to the right, and then computing the correct estimates given the model, which indeed in this case gives $V(A)=\frac{3}{4}$. This is also the answer that batch $\mathrm{TD}(0)$ gives.
+
+Given the TD(0) update rule $V(S) \leftarrow V(S)+\alpha\left[R+\gamma V\left(S^{\prime}\right)-V(S)\right]$, how can we deduce that it will get the MLE solution and thus $V(A) =\frac{3}{4}$?
+"
+"['reinforcement-learning', 'reference-request', 'convergence', 'temporal-difference-methods', 'learning-rate']"," Title: How does $\alpha$ affect the convergence of the TD algorithm?Body: In Temporal-Difference Learning, we update our value function by $V\left(S_{t}\right) \leftarrow V\left(S_{t}\right)+\alpha\left(R_{t+1}+\gamma V\left(S_{t+1}\right)-V\left(S_{t}\right)\right)$
+If we choose a constant $\alpha$, will the algorithm eventually give us the true state value function? Why or why not?
+"
+['neat']," Title: NEAT Speciation distance: How does one treat disabled connections?Body: When calculating the distance between two genomes, how does one treat disabled connections?
+For example, consider the following two genomes:
+
+|  | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
+|---|---|---|---|---|---|---|---|---|---|
+| Genome 1 | [1, 0.2, E] | [2, 0.1, D] | [3, 0.2, E] | [4, 0.15, E] | [5, 0.3, D] |  | [7, 0.25, D] | [8, 0.25, E] | [9, 0.1, E] |
+| Genome 2 | [1, 0.2, E] | [2, 0.2, E] | [3, 0.1, E] |  |  | [6, 0.15, E] |  |  |  |
+
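+For reference, the distance I am computing is the compatibility distance from the original NEAT paper, $\delta = \frac{c_1 E}{N} + \frac{c_2 D}{N} + c_3\,\overline{W}$, where $E$ counts excess genes, $D$ disjoint genes, $\overline{W}$ is the term computed from the matching genes' weights, and $N$ is the number of genes in the larger genome.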
+For $\overline{W}$, the weighted sum of common genes, do I only sum the genes that are enabled in both (1 and 3), or do I sum all of the common genes (1, 2, and 3)?
+For $D$, the disjoint genes, do I only count the disjoint enabled genes (4 and 6), or do I count all of them (4, 5, and 6)?
+For $E$, the excess genes, do I only count the enabled genes (8 and 9), or do I count all of them (7, 8, and 9)?
+And finally, for $N$, do I count all of the genes in the larger genome or just the enabled ones?
+Oh, one last question. Is gene 2 now considered disjoint since it is disabled in the first genome?
+"
+"['reference-request', 'image-recognition', 'image-generation', 'algorithm-request', 'artificial-creativity']"," Title: What sort of out-of-the-box technology could be used to create work similar to artist Refik Anadol?Body: Refik Anadol has machines view actual pictures and then has the machine create its own images. This video shows some of the stuff he does.
+What kind of out-of-the-box tools (e.g. a Python package) or algorithms produce similar things to what he does?
+I am hoping to play around with it and see what can happen. Not sure what to even Google for or search for.
+"
+"['neural-networks', 'backpropagation', 'gradient-descent', 'weights', 'mini-batch-gradient-descent']"," Title: In mini-batch gradient descent, are the weights updated after each batch or after all the batches have gone through an epoch?Body: Say I have a mini-batch of size 32, and I have 10 such batches. Assuming I only run it for one epoch (just for the sake of understanding it), Will the weights be updated using the gradients of one mini-batch, or will it be done after all the 10 mini batches have passed through?
+Intuitively for me, it ought to be the first one because otherwise, the only difference between Batch-GD and mini-batch GD will be the size of the batch.
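+For reference, the loop structure I have in mind looks like this (schematic PyTorch-style pseudocode; model, optimizer, loader and loss_fn are placeholder names):
+for epoch in range(num_epochs):
+    for x_batch, y_batch in loader:        # 10 mini-batches of size 32
+        optimizer.zero_grad()
+        loss = loss_fn(model(x_batch), y_batch)
+        loss.backward()
+        optimizer.step()                   # is this inside the inner loop, or only after it?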
+"
+"['neural-networks', 'hyperparameter-optimization', 'training-datasets']"," Title: Do we need automatic hyper-parameter tuning when we have a large enough dataset?Body: Hyperparameter tuning is the process of selecting the optimal hyperparameters for an ANN.
+Now, my guess is that, if we have sufficient data (say, 1.4 million samples with, say, 6 features), the model can be optimally trained and we don't need a hyperparameter tuner (like Keras-Tuner), because, while training, the data itself will optimize the model.
+Do we need a hyperparameter tuner if we have a sufficient amount of random data for training our ANN model?
+"
+"['neural-networks', 'convolutional-neural-networks', 'object-detection', 'applications']"," Title: When is an object detection approach over a CNN approach appropriate?Body: I understand that CNNs are for image classification while object detection is for localization + classification of the objects detected. However, in particular, AI for chest radiographs, why is object detection used? If a CNN has 99% accuracy, should object detection still be considered? I see a lot of research papers on object detection with x-ray data but they don't explain why object detection is better than CNNs. While object detection allows users to see "where" the object is located, does this even matter if we get such high accuracy already? Also, if the location really does matter, can't we just use the heat maps from CNN?
+"
+"['neural-networks', 'convolutional-neural-networks', 'computer-vision', 'tensorflow', 'convergence']"," Title: Hand Landmark Detector Not ConvergingBody: I'm currently trying to train a custom model with TensorFlow to detect 17 landmarks/keypoints on each of 2 hands shown in an image (fingertips, first knuckles, bottom knuckles, wrist, and palm), for 34 points (and therefore 68 total values to predict for x & y). However, I cannot get the model to converge, with the output instead of being an array of points that are pretty much the same for every prediction.
+I started off with a dataset that has images like this:
+
+each annotated to have the red dots correlate to each keypoint. To expand the dataset to try to get a more robust model, I took photos of the hands with various backgrounds, angles, positions, poses, lighting conditions, reflectivity, etc, as exemplified by these further images:
+I have about 3000 images created now, with the landmarks stored inside a CSV as such:
+
+I have a train-test split of .67 train .33 test, with the images randomly selected to each. I load the images with all 3 color channels and scale both the color values & keypoint coordinates between 0 & 1.
+I've tried a couple of different approaches, each involving a CNN. The first keeps the images as they are, and uses a neural network model built as such:
+model = Sequential()
+
+model.add(Conv2D(filters = 64, kernel_size = (3,3), padding = 'same', activation = 'relu', input_shape = (225,400,3)))
+model.add(Conv2D(filters = 64, kernel_size = (3,3), padding = 'same', activation = 'relu'))
+model.add(MaxPooling2D(pool_size = (2,2), strides = 2))
+
+filters_convs = [(128, 2), (256, 3), (512, 3), (512,3)]
+
+for n_filters, n_convs in filters_convs:
+    for _ in np.arange(n_convs):
+        model.add(Conv2D(filters = n_filters, kernel_size = (3,3), padding = 'same', activation = 'relu'))
+    model.add(MaxPooling2D(pool_size = (2,2), strides = 2))
+
+model.add(Flatten())
+model.add(Dense(128, activation="relu"))
+model.add(Dense(96, activation="relu"))
+model.add(Dense(72, activation="relu"))
+model.add(Dense(68, activation="sigmoid"))
+
+opt = Adam(learning_rate=.0001)
+model.compile(loss="mse", optimizer=opt, metrics=['mae'])
+print(model.summary())
+
+I've modified the various hyperparameters, yet nothing seems to make any noticeable difference.
+The other thing I've tried is resizing the images to fit within a 224x224x3 array to use with a VGG-16 network, as such:
+vgg = VGG16(weights="imagenet", include_top=False,
+ input_tensor=Input(shape=(224, 224, 3)))
+vgg.trainable = False
+
+flatten = vgg.output
+flatten = Flatten()(flatten)
+
+points = Dense(256, activation="relu")(flatten)
+points = Dense(128, activation="relu")(points)
+points = Dense(96, activation="relu")(points)
+points = Dense(68, activation="sigmoid")(points)
+
+model = Model(inputs=vgg.input, outputs=points)
+
+opt = Adam(learning_rate=.0001)
+model.compile(loss="mse", optimizer=opt, metrics=['mae'])
+print(model.summary())
+
+This model has similar results to the first. No matter what I do, I seem to get the same results, in that my MSE loss settles around .009, with an MAE around .07, no matter how many epochs I run:
+
+Furthermore, when I run predictions based on the model, it seems that the predicted output is basically the same for every image, with only a slight variation between each. It seems the model predicts an array of coordinates that looks somewhat like what a splayed hand might, in the general areas where hands are most likely to be found: a catch-all solution to minimize deviation, as opposed to a custom solution for each image. These images illustrate this, with the green being predicted points, and the red being the actual points for the left hand:
+
+So, I was wondering what might be causing this, be it the model, the data, or both, because nothing I've tried with either modifying the model or augmenting the data seems to have done any good. I've even tried reducing the complexity to predict for one hand only, to predict a bounding box for each hand, and to predict a single keypoint, but no matter what I try, the results are pretty inaccurate.
+Thus, any suggestions for what I could do to help the model converge to create more accurate & custom predictions for each image of hands it sees would be very greatly appreciated.
+"
+"['bayesian-networks', 'bayes-theorem', 'bayesian-probability', 'kalman-filter']"," Title: What makes Sequential Bayesian Filtering and Smoothing tractable?Body: I'm currently diving into the Bayesian world and I find it pretty fascinating.
+I've so far understood that applying Bayes' rule, i.e.
+$$\text{posterior} = \frac{\text{likelihood}\times \text{prior}}{\text{evidence}},$$
+is most of the time intractable because of the high-dimensional parameter space in the denominator.
+One way to solve this is by using a prior conjugate to the likelihood, as then the analytical form of the posterior is known and calculations are simplified.
+So far so good. Now I've read about Bayesian sequential filtering and smoothing techniques such as the Kalman filter or the Rauch-Tung-Striebel smoother (find references here). As far as I understood, assuming a time step $k$, instead of calculating the complete posterior distribution $p(X_k|Y_k)$ with $X=[x_1, ...,x_k]$ and $Y=[y_1,...y_k]$, a Markov chain is assumed and only the marginal $p(x_k|Y_k)$ is estimated in a recursive manner. That is, the posterior calculated at time step $k$ serves as the prior for the next time step. I guess Bayes' rule is somehow involved in these calculations.
+Furthermore, both techniques assume the posterior to always be Gaussian, and therefore closed-form solutions are obtained. Now I was wondering what restriction makes the whole process tractable, i.e. eliminates the need to compute the evidence?
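+For concreteness, the closed-form recursions I have in mind are the standard Kalman filter steps for the linear-Gaussian model $x_k = A x_{k-1} + q_{k-1}$, $y_k = H x_k + r_k$:
+$$m_k^- = A\, m_{k-1}, \qquad P_k^- = A\, P_{k-1} A^{\top} + Q \quad \text{(predict)}$$
+$$K_k = P_k^- H^{\top}\left(H P_k^- H^{\top} + R\right)^{-1}, \qquad m_k = m_k^- + K_k\left(y_k - H m_k^-\right), \qquad P_k = \left(I - K_k H\right) P_k^- \quad \text{(update)}$$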
+I guess it's the Gaussian assumption, i.e. the prior, the predicted, and the posterior distribution are all assumed to be Gaussian, and therefore updated distributions are obtained without computing the evidence - is this correct and does this refer to conjugate distributions?
+Or is it the fact that we assume a Markov Chain and do not consider all states at each time step?
+"
+"['neural-networks', 'deep-learning']"," Title: Should I allow NN to infer relationships of inputs?Body: This question is assuming a sequential, deep neural network
+Given some features [X1, X2, ... Xn], I'm trying to predict some value Y.
+The raw data available to me contains feature X1 and feature X2. Say that I know there is an effect on Y based on the ratio of the two features, i.e. X1 / X2.
+Should I add a new feature, mathematically defined as the ratio of the two features? I haven't been able to locate any literature which begins to describe the necessity or warnings of this.
+Instinctively, I'm worried about the following:
+
+- Overfitting and the need for excessive regularization, due to duplicate information in the feature set
+- Exponentially growing number of features, since defining a ratio between each feature may be necessary
+
+However, I also recognize that certain relationships are difficult for a deep neural network to capture (e.g. logic gates, exponential relationships, etc.), so when would this sort of "relationship defining" be necessary? For example, if an exponential relationship is known to exist?
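+For what it's worth, the kind of explicit feature I have in mind would be added like this (a trivial sketch with made-up data; the column names are my own):
+import pandas as pd
+
+df = pd.DataFrame({"X1": [2.0, 3.0, 5.0], "X2": [4.0, 1.5, 2.5]})  # made-up values
+df["X1_over_X2"] = df["X1"] / df["X2"]  # explicit ratio feature fed to the network alongside X1 and X2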
+"
+"['intelligent-agent', 'norvig-russell', 'rationality', 'rational-agents']"," Title: Why is this vacuum cleaner agent rational?Body: This is the vacuum cleaner example of the book "Artificial intelligence: A Modern Approach" (4th edition).
+
+Consider the simple vacuum-cleaner agent that cleans a square if it is dirty and moves to the other square if not; this is the agent function tabulated as follows.
+| Percept sequence | Action |
+| --- | --- |
+| [A, Clean] | Right |
+| [A, Dirty] | Suck |
+| [B, Clean] | Left |
+| [B, Dirty] | Suck |
+| [A, Clean], [A, Clean] | Right |
+| [A, Clean], [A, Dirty] | Suck |
+| ... | ... |
+| [A, Clean], [A, Clean], [A, Clean] | Right |
+| [A, Clean], [A, Clean], [A, Dirty] | Suck |
+| ... | ... |
+
+The characteristics of the environment and the performance measure are as follows:
+
+- The performance measure awards one point for each clean square at each time step, over a "lifetime" of 1000 time steps.
+
+- The "geography" of the environment is known a priori (the above figure) but the dirt distribution and the initial location of the agent are not. Clean squares stay clean and sucking cleans the current square. The Right and Left actions move the agent one square except when this would take the agent outside the environment, in which case the agent remains where it is.
+
+- The only available actions are Right, Left, and Suck.
+
+- The agent correctly perceives its location and whether that location contains dirt.
+
+
+In the book, it is stated that under these circumstances the agent is indeed rational. But I do not understand a percept sequence that consists of multiple [A, clean] percepts, e.g. {[A, clean], [A, clean]}. In my opinion, after the first [A, clean], the agent must have gone to the right square; so the sequence {[A, clean], [A, clean]} will never be perceived.
+In other words, a second perception of [A, clean] could only be the consequence of a Left or Suck action after perceiving the first [A, clean]. Therefore, we can conclude the agent is not rational.
+Please, help me to understand it.
+"
+"['neural-networks', 'deep-learning', 'datasets', 'transformer']"," Title: Can I use the transformers for the prediction of historical data?Body: Can I use the transformers for the prediction of wind power with the historical data?
+Dataset
+Datetime, Ambient temperature (Degree), Dewpoint (Degree), Relative Humidity (%), Air Pressure, Wind Direction (Degree), Wind Speed at 76.8 m (m/sec), Power Generated (kW).
+15 years of data from 2007 to 2021 with a sampling time of 1 hour
+"
+"['generative-model', 'variational-autoencoder']"," Title: What are the roles of the prior $\mathrm{p}(\mathbf{z})$ in a VAE?Body: I know the encoder is variational posterior $q_{\phi}(\mathbf{z} \mid \mathbf{x})$.
+I also know that the decoder represents the likelihood:
+$p_{\theta}(\mathbf{x} \mid \mathbf{z})$.
+My question is about the prior $\mathrm{p}(\mathbf{z})$.
+I know ELBO can be written as:
+$$E_{q_{\phi}(\mathbf{z} \mid \mathbf{x})}[\log (p_{\theta}(\mathbf{x} \mid \mathbf{z}))]-\mathrm{D}_{\mathrm{KL}}( q_{\phi}(\mathbf{z} \mid \mathbf{x}) \| \mathrm{p}(\mathbf{z})) \leq \log (p_{\theta}( \mathbf{x}))$$
+And for the VAE, the variational posterior is
+$$ q_{\boldsymbol{\phi}}(\mathbf{z} \mid \mathbf{x}^{(i)})= \mathcal{N}( \boldsymbol{\mu}^{(i)}, \boldsymbol{\sigma}^{2(i)} \mathbf{I}),$$
+and prior is
+$$ \mathrm{p}(\mathbf{z})=\mathcal{N}( \boldsymbol{0}, \mathbf{I}).$$
+So
+$$\mathrm{D}_{\mathrm{KL}}\left(q_{\phi}(\mathbf{z} \mid \mathbf{x}) \| \mathrm{p}(\mathbf{z})\right)=-\frac{1}{2} \sum_{j=1}^{J}\left(1+\log \left(\sigma_{j}^{2}\right)-\sigma_{j}^{2}-\mu_{j}^{2}\right)$$
+That's one way I know the prior plays a role, in helping determine part of the loss function.
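+For concreteness, this closed-form KL term typically shows up in the loss roughly like this (a minimal PyTorch-style sketch of my own, assuming the encoder outputs a mean and a log-variance):
+import torch
+
+def kl_to_standard_normal(mu, logvar):
+    # D_KL( N(mu, sigma^2 I) || N(0, I) ), summed over the latent dimensions
+    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
+
+mu = torch.zeros(8, 16)      # made-up batch of encoder means
+logvar = torch.zeros(8, 16)  # made-up batch of encoder log-variances
+loss_kl = kl_to_standard_normal(mu, logvar).mean()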
+Is there any other role that the prior plays for the VAE?
+"
+"['deep-learning', 'papers', 'audio-processing']"," Title: Understanding gumbel-softmax backpropagation in Wav2Vec papersBody: I'm studying the series of Wav2Vec papers, in particular, the vq-wav2vec and wav2vec 2.0, and have a problem understanding some details about the quantization procedure.
+The broader context is this: they use raw audio and first convert it to "features" $z$ via a convolutional network. Then they project any feature $z$ to a "quantized" element $\hat{z}$ from a given finite codebook (or concatenation of finitely many finite codebooks). To find $\hat{z}$, they compute scores $l_j$ for each codebook entry $v_j$, convert these scores to Gumbel-Softmax probabilities $p_j$ (using a formula which is not deterministic, the formula involves random choices of some numbers from some distribution) and then use these probabilities $p_j$ to choose $\hat{z}$. Further stages of the pre-training pipeline are trained to predict $\hat{z}'s$ by either predicting "future" from the "past", or "reconstructing masked segments".
+My question is this is about this sentence:
+
+During the forward pass, $i = \text{argmax}_j p_j$ and in the
+backward pass, the true gradient of the Gumbel-Softmax outputs is
+used.
+
+
+- I have trouble seeing what exactly is happening in the loss function and back-propagation. Could someone please help me to break this down into details?
+
+
+My mental attempts to make sense out of it (I'm using the notation $\hat{z}$ for quantized vectors, in the second paper they use $q$)
+(1) I would say that during the forward pass, in the Gumbel-Softmax, random variables from the Gumbel-distribution $n_j$ are sampled every time (for every training example) to compute the Gumbel-softmax probabilities $p_j$.
+(1a) In the back-propagation, these $n_j$'s are kept constant, and $p_j$ is treated as a function of $l_j's$ only.
+(2) The loss function has 2 parts here, Contrastive loss and Diversity loss.
+(2a) Based on the description, I would say that in the contrastive loss, the "sampled" vectors $\hat{z}_j$ are used, and probabilities never appear (even not in back-propagation of this part of the loss).
+(2b) I would believe that for the Diversity loss, which only uses the probabilities $p_{g,v}$, the true gradient of the Gumbel-Softmax outputs actually is used, as this term is responsible for maximizing the entropy. This part of the gradient probably does not use the sampled values $\hat{z}_j$.
+Is this approximately correct?
+If yes, then I still fail to understand what exactly is happening in the vq-wav2vec paper. The sentence
+
+During the forward pass, $i = \text{argmax}_j p_j$ and in the
+backward pass, the true gradient of the Gumbel-Softmax outputs is
+used.
+
+is there as well, but I cannot see any part of the loss function (in this paper) where the probabilities are explicitly used (such as the diversity loss).
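+For reference, my current reading of "argmax in the forward pass, true Gumbel-Softmax gradient in the backward pass" is the usual straight-through trick, roughly like this (a PyTorch-style sketch of my own, not the authors' code):
+import torch
+import torch.nn.functional as F
+
+def straight_through_gumbel(logits, tau=1.0):
+    # Soft (differentiable) Gumbel-Softmax probabilities p_j
+    p = F.gumbel_softmax(logits, tau=tau, hard=False)
+    # Hard one-hot selection i = argmax_j p_j, used in the forward pass
+    index = p.argmax(dim=-1, keepdim=True)
+    hard = torch.zeros_like(p).scatter_(-1, index, 1.0)
+    # The forward pass sees `hard`; the backward pass sees the gradient of `p`
+    return hard - p.detach() + p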
+"
+"['neural-networks', 'papers', 'transformer', 'attention']"," Title: In layman terms, what does ""attention"" do in a transformer?Body: I heard from many people about the paper titled Attention Is All You Need by Ashish Vaswani et al.
+What does "attention" actually do, in simple terms? Is it a function, a property, or something else?
+"
+"['neural-networks', 'gradient-descent', 'gpu']"," Title: How to train neural networks with multiprocessing?Body: I am trying to figure out how multiprocessing works in neural networks.
+In the example I've seen, the database is split into $x$ parts (depending on how many workers you have) and each worker is responsible to train the network using a different part of the database.
+I am confused regarding the optimization part:
+Let's say worker 1 finishes calculating its gradient first; it will then update the network accordingly.
+Then worker 2 finishes its calculation and also attempts to update the weights. However, the gradient it calculated was computed for the network before it was updated by the first worker, so the second worker will update the network with a stale gradient.
+Did I miss something?
+"
+"['image-recognition', 'autonomous-vehicles']"," Title: How do automatic high-beam headlights work on cars?Body: Modern cars can operate high-beam headlights automatically:
+
+- They automatically switch from high-beam headlights to low-beam ones (less intense) when you enter a town or there is a car in front of you either going in the same or opposite direction so you don't dazzle other drivers or people in the street.
+
+- Oppositely, when you are in almost complete darkness and there aren't any other drivers at sight, the system automatically sets the high-beam headlights.
+
+
+I am aware that in the front part of these cars there is a camera or sensor and also imagine that the automatic switching when entering or exiting a town is achieved by just having a threshold ambient illumination.
+But I am unable to imagine how the recognition of other cars works. It might be that an image recognition program is used to detect pairs of front car lights (white) and rear lights (red). However, how do you deal with:
+
+- pairs of street lamps (far from the road and not illuminating it) that could be identified as a car coming in the opposite direction,
+- pairs of lights coming from the reflectors of the crash barriers,
+
+- many other random pairs of lights that could be interpreted as cars.
+
+Is this technology based on AI software that after intense training is able to deal with these points? Or is it a less complex image analysis program that takes into account that moving lights outside the car (with respect to the road) move differently than static lights (with respect to the road) when seen from the car?
+Edit: I've seen this technology working on Audi A4 and A5 cars.
+"
+"['convolutional-neural-networks', 'representation-learning', 'k-means', 'contrastive-learning']"," Title: How to use K-means clustering to visualise learnt features of a CNN model?Body: Recently, I was going through the paper Intriguing Properties of Contrastive Losses. In the paper (section 3.2), the authors try to determine how well the SimCLR framework has allowed the ResNet50 Model to learn good quality/generalised features that exhibit hierarchical properties. To achieve this, they make use of K-means on intermediate features of the ResNet50 model (intermediate means o/p of block 2,3,4..), and I quote the reason below
+
+If the model learns good representations then regions of similar objects should be grouped together.
+
+Final Results:
+I am trying to replicate the same procedure but with a different model (like VggNet, Xception).
+Are there any resources explaining how to perform such visualizations?
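+For concreteness, what I am picturing is something like this (a sketch assuming Keras/TensorFlow and scikit-learn; the chosen layer name is just an example, not the layer used in the paper):
+import numpy as np
+from sklearn.cluster import KMeans
+from tensorflow.keras.applications import VGG16
+from tensorflow.keras.models import Model
+
+base = VGG16(weights="imagenet", include_top=False)
+# take an intermediate block output; "block3_pool" is just an example layer name
+feat_model = Model(base.input, base.get_layer("block3_pool").output)
+
+images = np.random.rand(4, 224, 224, 3).astype("float32")  # placeholder batch of images
+feats = feat_model.predict(images)                          # shape (N, H, W, C)
+n, h, w, c = feats.shape
+
+# cluster the spatial feature vectors, then reshape labels back to (N, H, W) for visualisation
+labels = KMeans(n_clusters=8, n_init=10).fit_predict(feats.reshape(-1, c))
+label_maps = labels.reshape(n, h, w)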
+"
+"['deep-learning', 'natural-language-processing', 'transformer', 'attention', 'computational-complexity']"," Title: Why does research on faster Transformers focus on the query-key product?Body: A lot of recent research on Transformers has been devoted to reducing the cost of the self-attention mechanism:
+$$\text{softmax}\left(\frac{Q K^T}{\sqrt{d}} \right)V,$$
+As I understand it, the runtime, assuming $\{Q, K, V\}$ are each of shape $(n, d)$, is $O(n^2 d + n d^2)$. In general, the issue is the $n^2 d$ term, because the sequence length $n$ can be much bigger than the model dimension $d$. So far, so good.
+But as far as I can tell, current research focuses on speedups for $Q K^T$, which is $O(n^2 d)$. There's less focus on computing $A V$, where $A = \text{softmax} \left(\frac{Q K^T}{\sqrt{d}} \right)$ -- which also has complexity $O(n^2 d)$.
+Why is the first matrix product the limiting factor?
+Examples of these faster Transformer architectures include Longformer, which approximates $QK^T$ as a low-rank-plus-banded matrix, Nystromformer, which approximates $\text{softmax}(QK^T)$ as a low-rank matrix with the Nystrom transformation, and Big Bird, which approximates it with a low-rank-plus-banded-plus-random matrix.
+"
+"['implementation', 'geometric-deep-learning', 'graph-neural-networks']"," Title: How do graph neural networks adapt to different number of nodes and connections of different graphs?Body: I have recently been studying GNN, and the fundamental idea seems to be the aggregation and transfer of information from a node's neighborhood to update the node's internal state. However, there are few sources that mention the implementation of GNN in code, specifically, how do GNNs adapt to the differing number of nodes and connections in a dataset.
+For example, say we have 2 graph data that looks like this:
+
+It is clear that the number of weights required in the two data points would be different.
+So, how would the model adapt to this varying number of weight parameters?
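+For reference, this is the kind of layer I am picturing (a minimal generic message-passing sketch of my own, not from any specific paper), where the same weights are applied to graphs of different sizes:
+import torch
+import torch.nn as nn
+
+class SimpleGraphConv(nn.Module):
+    def __init__(self, in_dim, out_dim):
+        super().__init__()
+        # the learned weight matrix depends only on the feature dimensions
+        self.linear = nn.Linear(in_dim, out_dim)
+
+    def forward(self, x, adj):
+        # x: (num_nodes, in_dim), adj: (num_nodes, num_nodes) adjacency, for a graph of any size
+        agg = adj @ x            # aggregate neighbour features
+        return torch.relu(self.linear(agg))
+
+layer = SimpleGraphConv(in_dim=3, out_dim=8)
+out_small = layer(torch.randn(4, 3), torch.eye(4))   # 4-node graph
+out_large = layer(torch.randn(9, 3), torch.eye(9))   # 9-node graph, same weights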
+"
+"['neural-networks', 'deep-learning', 'optimization', 'gradient-descent', 'optimizers']"," Title: What is uncentered variance and how it becomes equal to mean square in Adam?Body: I have been reading about Adam and AdamW (Here). The author mentioned that in "uncentered variance" we don't consider subtracting mean
+
+In this statement, the author is talking about uncentered variance and how it becomes equal to the square of the mean.
+
+I want to understand what exactly uncentered variance is. (If I consider the general equation of variance, $\dfrac{\sum(\text{obs}-\text{mean})^2}{n-1}$, then it does not make sense to me that removing the mean leads to a definition of variance, since we need a point around which we calculate the variance, and here the mean is that point.)
+Also, if we set mean = 0 (i.e. do not subtract the mean from the observations) and call this the uncentered variance (for me it is the variance around 0), then it is hard to understand how this leads to uncentered $\text{variance} = \text{mean}^2$.
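+For reference, the identity I keep coming back to (standard moment algebra, not something taken from the article itself) is
+$$E[X^2] = \operatorname{Var}(X) + (E[X])^2,$$
+so the "uncentered variance" would just be the second moment about zero, $E[X^2]$, and it reduces to $\text{mean}^2$ exactly when $\operatorname{Var}(X) = 0$, i.e. when the quantity barely fluctuates around its mean.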
+"
+"['computer-vision', 'papers', 'saliency-map']"," Title: In this paper, how does scaling the filter instead of the image generate saliency maps of the same size and resolution as the input image?Body: In this paper, in section 3.1, the authors state
+
+Scaling the filter instead of the image allows the generation of saliency maps of the same size and resolution as the input image.
+
+How is this possible?
+From what I have understood, the process of filtering the image is similar to that of a convolution operation, like this:
+
+However, if this is true, shouldn't we get different sized outputs (i.e. saliency maps) for different filter sizes?
+I think I am misunderstanding how the filtering process really works in that it actually differs from a CNN. I would highly appreciate any insight on the above.
+Note: This is a follow-up to this question.
+"
+"['neural-networks', 'algorithm-request', 'signal-processing']"," Title: How to detect the sine wave signal with different frequency using neural networks?Body: I'm wondering if there is a way to use a neural network that can detect the noisy sine wave, where the frequency is not constant. In other words, I'm not looking for a solution that would detect the signal of a particular frequency (say 50Hz), but a solution that can detect any sine wave signal in say range (100-1000Hz).
+"
+"['classification', 'deep-neural-networks', 'pytorch']"," Title: Multi label classification on non binary labels with pytorchBody: I am working on a project consisting of medical images and a huge dataset of multi-label and non-binary labels/outcomes ( sex, blood pressure, age and 40 more ).
+Would be the best approach to hard code all of them or is there some better approach? If this is the best way, does anyone have a similar PyTorch notebook on which I could orientate myself? Or some smart solution how to hard code them automatically?
+Any help is welcome!
+"
+"['machine-learning', 'computer-vision', 'image-processing', 'image-segmentation']"," Title: Is there any way to remove background of an image fully with the help of post-processor techniques(like edge detector) after deep learning based modelBody: I'm using a deep learning-based model (deep lab v3+ with xception as the backbone) for image segmentation and removing the background. The subject of the image will be a person. And my target is to extract the person from the image. But only with the machine learning model, the output is not satisfactory. I'm thinking if somehow I can detect the edge of the person with "Canny Edge detector" and use this as post processor to get exact output. I also try the blur effect for smoothing the object's edge and getting a pleasant-looking output. I saw Adobe Photoshop's "select subject" feature. It is quite accurate.
+Main image -> segmented mask from ml-model -> adding blur effect on the edge
+Canny Edge Detector as Post Processor:
+I tried the following steps:
+
+- Run Canny Edge on the main image. canny edge image
+- Remove the background noise(of the canny edge image) using the image segmentation mask(I got from the ml-model). Here I use bitwise-and of the canny edge image and segmentation mask. background noise removed canny edge image
+- In the 2nd step, some of the real edges of the person also get removed, and to restore those, I have used the breadth-first-search algorithm and run it up to some limit(suppose, up to 10 neighbors). after BFS
+- I shrink the segmentation mask and using this, removed the interior noise of the canny edge image. removing the inside noise
+
+Now if I could somehow connect the edge line of the canny edge image, we would get a perfect segmentation. But I failed to do so.
+Any suggestion will be helpful for me.
+"
+"['reinforcement-learning', 'game-ai', 'atari-games']"," Title: Demonstration of AI-powered Mario collecting lots of coins?Body: As part of a talk I'm giving, I'd like to show one of the many videos on YouTube where an AI is playing Mario, such as this one. What bothers me though is that the AI is trying to complete the level as quickly as possible, without trying to collect lots of coins. I believe it's known as "speed run" in the gaming world, which is fine and well, but I think most people expect Mario to be collecting lots of coins and mushrooms.
+Are you familiar with a video of an AI-powered Mario that does collect a lot of coins and mushrooms?
+If not, maybe you know a different video of a similarly popular video game where the AI does try to get lots of points and not just do a speed run?
+"
+"['deep-learning', 'regression', 'model-request']"," Title: What is the best way to train a text-based regressor model?Body: I want to build a deep learning model that can predict a continuous value (LogP in this case) given text inputs (SMILES notations in this case), the dataset is as illustrated below.
+
+
+| SMILES notations | LogP |
+| --- | --- |
+| C1CCCC(C)(C)1 | 1.98 |
+| ... | ... |
+I have never tackled text data, I mostly worked with numbers-based datasets (or images).
+My questions are:
+
+- What is the best model for this case? I believe RNN based architectures, such as LSTM and GRU, are the most suitable.
+- What about recent architectures such as Transformers?
+- How can/should I convert (or embed, or encode) the text inputs (SMILES) to feed them to my model?
+
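+For question 3, the simplest thing I can think of is a character-level encoding, roughly like this (a sketch of my own, not a recommendation):
+# Build a character vocabulary from the SMILES strings and encode each string as integer indices,
+# which could then be fed to an Embedding layer of an RNN / Transformer.
+smiles = ["C1CCCC(C)(C)1", "CCO"]                      # tiny made-up sample
+vocab = {ch: i + 1 for i, ch in enumerate(sorted(set("".join(smiles))))}  # 0 reserved for padding
+
+max_len = max(len(s) for s in smiles)
+encoded = [[vocab[ch] for ch in s] + [0] * (max_len - len(s)) for s in smiles]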
+"
+"['neural-networks', 'optimization', 'gradient-descent', 'adam']"," Title: Why are optimization algorithms for deep learning so simple?Body: From my knowledge, the most used optimizer in practice is Adam, which in essence is just mini-batch gradient descent with momentum to combat getting stuck in saddle points and with some damping to avoid wiggling back and forth if the conditioning of the search space is bad at any point.
+Not to say that this is actually easy in absolute terms, but after a few days, I think I got most of it. But when I look into the field of mathematical (non-linear) optimization, I'm totally overwhelmed.
+
+What are the possible reasons that optimization algorithms for neural networks aren't more intricate?
+
+- There are just more important things to improve?
+- Just not possible?
+- Is Adam and others already so good that researchers just don't care?
+
+"
+"['natural-language-processing', 'data-preprocessing', 'machine-translation', 'lemmatization']"," Title: Given the immaturity of NLP tools for non-English languages, should I first translate the non-English language to English before text pre-processing?Body: For non-English languages (in my case Portuguese), what is the best approach? Should I use the not-so-complete tools in my language, or should I translate the text to English, and after using the tools in English? Lemmatization, for example, is not so good in non-English languages.
+"
+"['ai-design', 'ontology', 'knowledge-based-systems']"," Title: What is the place of ontologies in artificial intelligence?Body: Very much a general question here, from a somewhat uneducated perspective.
+I'm currently part way through an MSc in AI, and at the minute I am taking a module on Knowledge Engineering and Computational Creativity. The professor taking the class obviously does research in this area and is saying that ontologies are becoming very important in the world of AI, or it may be more accurate to say he is suggesting they are becoming more important.
+I intend to look into the work he does, and ask him a few questions, but, generally, I was wondering where this type of research sits in the world of AI. Is it something being worked on a lot? Is it becoming bigger?
+I am interested because I do find the topic interesting, and I will have a research project coming up soon, and while I do want to work on an interesting topic, I also want to work on a relevant topic, so any information would be great.
+"
+"['search', 'breadth-first-search', 'depth-first-search']"," Title: How do the BFS and DFS search algorithms choose between nodes with the ""same priority""?Body: I am currently taking an Artificial Intelligence course and learning about DFS and BFS.
+If we take the following example:
+
+From my understanding, the BFS algorithm will explore the first level containing $B$ and $C$, then the second level containing $D,E,F$ and $G$, etc., till it reaches the last level.
+I am lost concerning which node between $B$ and $C$ (for example) the BFS will expand first.
+Originally, I thought it is different every time, and, by convention, we choose to illustrate that it's done from the left to the right (so exploring $B$ then $C$), but my professor said that our choice between $B$ and $C$ depends on each case and we choose the "shallowest node first".
+In made-up examples, there isn't a distance factor between $A$ and $B$, and $A$ and $C$, so how could one choose then?
+My question is the same concerning DFS, where I was told to choose the "deepest node first". I am aware that there are pre-order versions and others, but the book "Artificial Intelligence: A Modern Approach" by Stuart Russell didn't get into them.
+I tried checking the CLRS algorithms book for more help, but there the expansion is done based on the order in the adjacency list, which didn't really help.
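+For reference, this is how I currently picture BFS in code (a minimal sketch of my own), where the only tie-breaking comes from the order in which neighbours appear in the adjacency list:
+from collections import deque
+
+graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
+         "D": [], "E": [], "F": [], "G": []}
+
+def bfs(start):
+    order, queue, seen = [], deque([start]), {start}
+    while queue:
+        node = queue.popleft()          # FIFO: shallowest unexpanded node first
+        order.append(node)
+        for nb in graph[node]:          # ties broken purely by adjacency-list order
+            if nb not in seen:
+                seen.add(nb)
+                queue.append(nb)
+    return order
+
+print(bfs("A"))  # ['A', 'B', 'C', 'D', 'E', 'F', 'G'] with this adjacency ordering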
+"
+['machine-learning']," Title: Can people without a background in STEM go into the field of AI?Body: I've recently become very interested in the potential AI holds for the future of society. I believe it has the potential to truly alter the way we live our lives in the not-too-distant future. I've read around three dozen books by leading scholars and I've written my undergraduate thesis on the implications of AI for modern warfare and the international balance of power. With that in mind, I've started to think about a career related to AI and machine learning.
+However, I have a bit of a problem considering my background as a humanities person. I'm about to finish my undergraduate studies in International Relations and I plan to get a master in a similar field. As someone with little background in Comp Sci/ Math/ Physics/ etc, how could someone go about working in a field like AI? I struggle to see what utility non-STEM people could offer to firms.
+I hope I'm asking this in the right place considering this forum is mostly for technical questions.
+"
+"['neural-networks', 'deep-learning', 'variational-autoencoder', 'kl-divergence', 'latent-variable']"," Title: What is the most suitable measure of the distance between two VAE's latent spaces?Body: The problem I'm trying to solve is as follows.
+I have two separate domains, where inputs do not have the same dimensions. However, I want to create a common feature space between both domains using paired inputs (similar inputs from both domains).
+My solution is to encode pairs of inputs into a shared latent space using two VAE encoders (one for each domain). To ensure that the latent space is shared, I want to define a similarity metric between the output of both probabilistic encoders.
+Let's define the first encoder as $q_\phi$ and the second as $p_\theta$. As for now, I have two main candidates for this role:
+
+- KL-divergence : $\text{KL}(p || q)$ (or $\text{KL}(q||p)$), but, since it is not symmetrical, I don't really know which direction is the best.
+
+- JS-divergence: symmetrical and normalized, which is nice for a distance metric, but, since it is not as common as KL, I'm not sure.
+
+
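+For concreteness, a symmetric-KL variant of the first option would look something like this (a PyTorch-style sketch of my own, assuming each encoder outputs a mean and a log-variance):
+import torch
+
+def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
+    # KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians, summed over dimensions
+    var_q, var_p = logvar_q.exp(), logvar_p.exp()
+    return 0.5 * torch.sum(
+        logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0, dim=-1
+    )
+
+def symmetric_kl(mu_q, logvar_q, mu_p, logvar_p):
+    return kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p) + \
+           kl_diag_gaussians(mu_p, logvar_p, mu_q, logvar_q)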
+Other candidates include adversarial loss (a discriminator is tasked to guess from which VAE the latent code is, the goal of both VAE being to maximally confuse it) or mutual information (seen more and more in recent works, but I still don't fully understand it).
+My question is: according to you, which loss could work best for my use case? KL or JS? Or other candidates I didn't think of?
+-- More context --
+My ultimate goal is to use transfer learning between morphologically distinct robots, e.g. a quadrupedal robot and a bipedal robot. The first step in my current approach is to record trajectories on both robots executing the same task (walking, for example). From said trajectories I create pairs of similar states (to simplify the problem, I suppose that both robots achieve the same task at the same speed, so temporally aligned states from both robots are paired). Then my goal is to encode these paired states (which do not have the same dimension, due to the difference in the number of joints) into two latent spaces (one for each VAE) such that similar pairs of inputs are close in the latent spaces. If I were working with simple autoencoders, I would simply minimize the distance in the latent space between pairs of inputs, such that similar states on both robots map to the same point in the latent space. But I need the generative capabilities of VAEs, so instead I would like to make the distributions output by the VAEs as close as possible. Does that make sense?
+"
+"['data-preprocessing', 'face-recognition', 'face-detection', 'facenet']"," Title: Does the converted (now square) distorted image of a face affect the accuracy of the calculation of the similarity in FaceNet?Body:
+As far as I know,
+
+- FaceNet requires a square image as an input.
+
+- MTCNN can detect and crop the original image as a square, but distortion occurs.
+
+
+Is it okay to feed the converted (now square) distorted image into FaceNet? Does it affect the accuracy of calculation similarity (embedding)?
+For similarity (classification of known faces), I am going to put some custom layers upon FaceNet.
+(If it's okay, maybe because every other image would be distorted no matter what? So, it would not compare normal image vs distorted image, but distorted image vs distorted image, which would be fair?)
+Original issue: https://github.com/timesler/facenet-pytorch/issues/181.
+"
+"['geometric-deep-learning', 'graph-neural-networks', 'filters']"," Title: Graph Convolutional Networks: why are non-parametric filters not localized in space?Body: I was reading the following paper here about some of the groundwork in graph deep learning. On page 3, in the bit entitled Polynomial parameterization for localized filters, it states that non-parametric filters (i.e. a filter whose parameters are all free) are not localized in space.
+Question: Why is this the case? It is referring to a filter $g_{\theta}$ such that:
+$$ y = g_{\theta}(L) x = g_{\theta} (U \Lambda U^T) x = U g_{\theta} (\Lambda) U^T x $$
+where $L$ is the graph laplacian matrix, and $U \Lambda U^T$ are the eigenvector decomposition matrices.
+Attempted explanation: Is it because the filter $g_{\theta}$ is defined in the spectral domain and thus its spatial domain (i.e. its inverse graph Fourier transform) may be defined over the whole graph (and thus not localized)?
+"
+"['papers', 'graph-neural-networks', 'semi-supervised-learning', 'spectral-clustering']"," Title: How are GCN doing semi-supervised learning?Body: In Semi-Supervised Classification with Graph Convolutional Networks, the authors say that GCN is an approach for semi-supervised learning (SSL).
+But a GCN is making predictions using only the graph Laplacian. The single place where I find the labels is in its loss function.
+$$\mathcal{L}=-\sum_{l \in \mathcal{Y}_{L}} \sum_{f=1}^{F} Y_{l f} \ln Z_{l f}$$
+How does it make GCN a SSL approach?
+"
+"['neural-networks', 'deep-learning', 'transformer', 'layers', 'vision-transformer']"," Title: What are the major layers in a Vision Transformer?Body: Currently, I am studying deepfake detection using deep learning methods. Convolution neural networks, recurrent neural networks, long-short term memory networks, and vision transformers are famous deep learning-based methods that are used in deepfake detection, as I found in my study.
+I was able to find that CNNs, RNNs and LSTMs are multilayered neural networks, but I found very little about the neural network layers in a Vision Transformer. (Like a typical CNN has an input layer, pooling layer, and a fully connected layer, and finally an output layer. RNN has an input layer, multiple hidden layers and an output layer.)
+So, what are the main neural network layers in a Vision Transformer?
+"
+"['papers', 'generative-model', 'variational-autoencoder', 'probability-distribution', 'variational-inference']"," Title: How does the VAE learn a joint distribution?Body: I found the following paragraph from An Introduction to
+Variational Autoencoders sounds relevant, but I am not fully understanding it.
+
+A VAE learns stochastic mappings between an observed $\mathbf{x}$-space, whose empirical distribution $q_{\mathcal{D}}(\mathbf{x})$ is typically complicated, and a latent $\mathbf{z}$-space, whose distribution can be relatively simple (such as spherical, as in this figure). The generative model learns a joint distribution $p_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{z})$ that is often (but not always) factorized as $p_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{z})=p_{\boldsymbol{\theta}}(\mathbf{z}) p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$, with a prior distribution over latent space $p_{\boldsymbol{\theta}}(\mathbf{z})$, and a stochastic decoder $p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$. The stochastic encoder $q_{\phi}(\mathbf{z} \mid \mathbf{x})$, also called inference model, approximates the true but intractable posterior $p_{\theta}(\mathbf{z} \mid \mathbf{x})$ of the generative model.
+
+How is it that the generative model learns a joint distribution $p_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{z})$ in the case of the VAE? I know that learning the weights of the decoder is learning $p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$
+"
+"['reinforcement-learning', 'deep-rl', 'sample-efficiency']"," Title: Reinforcement Learning method suitable for a large discrete action space with high sample efficiencyBody: Consider the following problem.
+We have a process that generates $N$ pebbles (e.g. 2000) in one batch $b$. Every pebble has a state $s_{i}^b$ and a reward $r_i^b$.
+After choosing one pebble $i$ from the $N$, we start sampling again using the chosen pebble as a starting point and generate the next batch $b+1$. The state $s_i^b$ is a vector of real values and the reward $r_i^b$ is a real value.
+During each new batch, we make one selection of one pebble (so actions can from $i, \dots, N$. We have access to the previous $m$ batches (e.g. through replay buffer) with their rewards and states.
+In short, it looks like this:
+
+- Chose randomly the first pebble from which we start sample;
+- We start sampling from the chosen pebble in the current batch;
+- We sample $N$ pebbles from a process, each pebble have a state $s_i^b$ and reward $r_i^b$;
+- We can chose one pebble from $i \dots N$ as action $a_i^b$ based on state $s_i^b$ and reward $r_i^b$;
+- Go to point 1 and repeat;
+
+For example, at the moment, I choose in a given batch $b$ pebble with max reward $r_i^b$, so
+$$i = \underset{i}{\mathrm{argmax}}\, r_i^b$$ and then use $a_i^b$ for a the current batch $b$.
+But what I want is to choose:
+$$i = \underset{i}{\mathrm{argmax}}\, \underset{b}{E}[R_i^b | s_i, a_i]$$
+Graphically speaking:
+Assuming one Batch (N) is 30
+P - pebble, P - chosen pebble
+batch 1: PPPPPPPPPPPPPPPPPPPPPPPPPPPPPP
+batch 2: PPPPPPPPPPPPPPPPPPPPPPPPPPPPPP
+batch 3: PPPPPPPPPPPPPPPPPPPPPPPPPPPPPP
+batch 4: PPPPPPPPPPPPPPPPPPPPPPPPPPPPPP
+batch 5: PPPPPPPPPPPPPPPPPPPPPPPPPPPPPP
+So, given a batch of $N$ pebbles, I want to choose one element from the batch as the action such that the expected reward is the highest in the long term, and then start the sampling again from it. Only one element per batch can be chosen, and only the choice of that one pebble per batch affects the sequence in the next batch, not the samples inside the current batch.
+The problem is which reinforcement learning algorithm to use when we choose only one item from $N$, and that one choice affects the whole sampled sequence in the next batch. For example, the reward in batches 1 to 4 can be very low and the reward in batch 5 super high, if we chose the pebbles wisely in the 4 previous batches.
+"
+"['comparison', 'unsupervised-learning', 'notation', 'clustering', 'fuzzy-logic']"," Title: In this example of fuzzy c-means, what is the difference between ""sigma"" and ""center"" for the clusters?Body: In this example, what exactly do "Cluster" and "Sigma" mean? (They chose random coordinates for the three centroids of the groups)
+
+- Centers: Cluster centers, returned as a JxN array, where J is the number of clusters and N is the number of data dimensions.
+- Sigma: Range of influence of cluster centers for each data dimension, returned as an N-element row vector. All cluster centers have the same set of sigma values.
+
+Please, elaborate on the difference.
+"
+"['neural-networks', 'objective-functions', 'transfer-learning']"," Title: How to choose the new layer and objective function for transfer learning on a neural network?Body: I have a base model $M$ trained on a data say type 1 for task $T$. Now, I want to update $M$ by applying transfer learning for it to work on data type 2 for the same task $T$. I am very new to AI/ML field. One common way I found for applying transfer learning is to add a new layer at the end or replace the last layer of the base model with a new layer, and then retrain the model on new data (type 2 here). Depending upon the size of type 2, we may decide whether we retrain the whole model or only the new layer.
+However, my question is: how do we decide the following?
+
+- What should be the new layer(s)?
+- Should the objective function used while retraining be the same as the one used for the base model, or can it be different? If different, are there any insights on how to figure out a new objective function?
+
+P.S. Data of type 1 and type 2 are of the same category (like both are logs or both are images); however, they are significantly different.
+"
+"['neural-networks', 'transformer', 'positional-encoding']"," Title: Is there any point in adding the position embedding to the class token in Transformers?Body: The popular implementations of ViTs by Ross Wightman and Phil Wang add the position embedding to the class tokens as well as to the patches.
+Is there any point in doing so?
+The purpose of introducing positional embeddings into the Transformer is clear: since in the original formulation the Transformer is equivariant to permutations of tokens, and the original task doesn't respect this symmetry, one needs to break it down to translational symmetry only, and this goal is achieved via the positional embedding (learned or fixed).
+However, the class token is somehow distinguished from the other tokens in the image, and there is no notion of it being located in the [16:32, 48:64] slice of the image.
+Or is this choice simply a matter of convenience? An additional parameter, indeed, has a negligible cost, and there is neither benefit nor harm in adding a positional embedding to the [CLS] or any other special token?
+"
+"['terminology', 'papers', 'convolution', 'graph-neural-networks', 'notation']"," Title: What is a filter in the context of graph convolutional networks?Body: In Section 2.1 of the research paper titled Semi-Supervised Classification with Graph Convolutional Networks by Thomas N. Kipf et al.,
+Spectral convolution on graphs defined as
+
+The multiplication of a signal $x ∈ R^N$ with a filter $g_\theta =$ diag$(\theta)$ parameterized by $\theta \in R^N$ in the Fourier domain,
+
+i.e.:
+
+$g_\theta * x = U g_\theta U^T x$
+
+Actually, I don't understand notations here.
+
+- What is a filter? Is it like a filter from a CNN? Then why does it have (or why should it have) a diagonal form? What is $\theta$ here?
+
+- What is a Fourier domain?
+
+
+I searched on Google and couldn't find a definition of the "Fourier domain".
+"
+"['convolutional-neural-networks', 'image-processing']"," Title: Simple Image-based example for not utilising the variable-sized input handling capability of a Convolutional neural networkBody: Convolutional neural networks are capable of handling inputs of varying sizes. It is one of the key benefits of convolutional neural networks. But I am unsure about the cases when we should not utilize this advantage of the convolutional neural network.
+Although the following example has been provided in the chapter named Convolutional Networks of the textbook titled Deep Learning by Ian Goodfellow et al.
+
+Convolution does not make sense if the input has variable size because
+it can optionally include different kinds of observations. For example,
+if we are processing college applications, and our features consist of
+both grades and standardized test scores, but not every applicant took
+the standardized test, then it does not make sense to convolve the
+same weights over features corresponding to the grades as well as the
+features corresponding to the test scores.
+
+The above example is not clear to me, since I am used to images in the case of convolutional neural networks.
+Can anyone provide a possible example of images where we cannot utilize the benefit of passing variable-size images to the convolutional neural network?
+"
+"['reinforcement-learning', 'dqn', 'algorithm-request', 'residual-networks', 'atari-games']"," Title: How can I use a ResNet as a function approximator for pixel based reinforcement learning?Body: I'd like to use a residual network to improve learning in image-based reinforcement learning, specifically on Atari Games.
+My main question is divided into 3 parts.
+
+- Would it be wise to integrate a generic ResNet with a DQN variant?
+
+- I believe ResNets take a long time to train, and therefore would it be realistic to train on an Atari simulator? What would the downsides be?
+
+- Are there any fast ResNets that can be used for such purposes? Perhaps a fast ResNet that is specifically designed for online settings?
+
+
+"
+"['natural-language-processing', 'pytorch', 'transformer', 'bert']"," Title: What is the Intermediate (dense) layer in between attention-output and encoder-output dense layers within transformer block in PyTorch implementation?Body: In PyTorch, transformer (BERT) models have an intermediate dense layer in between attention and output layers whereas the BERT and Transformer papers just mention the attention connected directly to output fully connected layer for the encoder just after adding the residual connection.
+Why is there an intermediate layer within an encoder block?
+For example,
+
+encoder.layer.11.attention.self.query.weight
+encoder.layer.11.attention.self.query.bias
+encoder.layer.11.attention.self.key.weight
+encoder.layer.11.attention.self.key.bias
+encoder.layer.11.attention.self.value.weight
+encoder.layer.11.attention.self.value.bias
+encoder.layer.11.attention.output.dense.weight
+encoder.layer.11.attention.output.dense.bias
+encoder.layer.11.attention.output.LayerNorm.weight
+encoder.layer.11.attention.output.LayerNorm.bias
+encoder.layer.11.intermediate.dense.weight
+encoder.layer.11.intermediate.dense.bias
+encoder.layer.11.output.dense.weight
+encoder.layer.11.output.dense.bias
+encoder.layer.11.output.LayerNorm.weight
+encoder.layer.11.output.LayerNorm.bias
+
+I am confused by this third (intermediate) dense layer in between the attention output and encoder output dense layers.
+"
+"['neural-networks', 'machine-learning', 'accuracy', 'binary-classification']"," Title: Is a test accuracy of 0.74 good enough, given a dataset of about 700 samples, and, if not, how can I improve it?Body: I am new to neural networks. I am trying to solve a binary classification problem. Specifically, I want to determine whether a patient has or not a certain disease based on the dataset.
+The dataset has about 700 samples of different patients. I divided the data into training and test sets (test size = 0.3).
+My model has 1 input layer, 5 hidden layers, and 1 output layer. I used ReLU for the input and hidden layers, and I used the sigmoid for the output layer.
+During the compilation of the model, I used stochastic gradient descent (SGD) as the optimizer and the mean squared logarithmic error for the loss. I used mini-batch gradient descent (batch size = 4) for the training.
+I am trying to calculate the accuracy on the test set I created previously.
+
+- The model evaluation for train set is about: 0.07 (loss) 0.76 (accuracy).
+
+- The model evaluation for test set is about: 0.07 (loss) 0.74 (accuracy).
+
+
+Firstly, I would like to know if this is a good value for a model. Is the accuracy too small?
+Plus, I would like to know if there's a way to improve accuracy based on my model.
+I am trying to work on a project, so I was wondering if these values are acceptable.
+"
+"['reinforcement-learning', 'reference-request', 'convergence', 'temporal-difference-methods']"," Title: Is there a tutorial for understanding the proof of convergence for TD learning?Body: I'm reading the article An Analysis of Temporal-Difference Learning
+with Function Approximation (1997), but the mathematics inside seems overly complicated for me. Answers to some similar questions had pointed out that these proof typically involves stochastic approximation.
+My question is: are there any good tutorials (textbooks, lists of papers, etc.) on stochastic approximation (or similar topics) that prepare you for reading proofs like this? It would be best if it were rather self-contained, given my mathematical maturity.
+I have an undergraduate-level mathematical analysis and probability theory foundation and have touched some measure theory.
+"
+"['neural-networks', 'deep-learning', 'objective-functions', 'artificial-neuron', 'multiclass-classification']"," Title: Multi-class classification but a single feature sometimes boils it down to a binary-classificationBody: I have a three-class classification problem for a large dataset. Classes are 0, 1, and 2. There's a categorical variable in my feature vectors such that when a sample point has this variable positive, it can only belong to either class 1 or class 2. On the other hand, if that categorical variable is negative for another sample point, then this sample can be from all three classes. I was wondering how I can make a neural network use that information during training? I guess I need a custom loss function however I could not figure out how to exactly create that.
+"
+"['computer-vision', 'data-preprocessing', 'data-augmentation']"," Title: What is the sensible amount of augmentation?Body: I am playing with the transforms from Torchvision.
+There are plenty of different kinds of these like:
+
+Resize
+RandomCrop
+ColorJitter
+Blurring
+- ...
+
+These are some cases of Resize for a given image:
+
+ColorJitter
+
+RandomAffine
+
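+For concreteness, the kind of pipeline I am playing with looks roughly like this (a minimal sketch; the parameter values are arbitrary choices of mine):
+from torchvision import transforms
+
+augment = transforms.Compose([
+    transforms.Resize(256),
+    transforms.RandomCrop(224),
+    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3),
+    transforms.RandomAffine(degrees=15, translate=(0.1, 0.1)),
+    transforms.GaussianBlur(kernel_size=3),
+    transforms.ToTensor(),
+])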
+The main purpose of the augmentation procedure is to prevent overfitting by extending the training dataset in a certain way.
+Some transformed images still look like the image from the original dataset, since a small change in contrast or brightness still makes the image look real.
+For others, there can be a significant change in color, or the original image occupies only a small fraction of the resulting image.
+In all cases, one can still classify this object as a dog. However, many of these would not lie on the manifold of real images, and the classification for these may not make sense.
+Are there some papers or research discussing the issue of the augmentation choice, and the correspondence of these to the directions along the manifold of real images?
+"
+"['neural-networks', 'convolutional-neural-networks', 'computer-vision', 'reference-request', 'architecture']"," Title: CNN Architectures for local features vs global contextBody: Kaparthy in his blog post said
+
+[this] hints at the kinds of architectures we’ll eventually explore. As an example - are very local features enough or do we need global context?
+
+I'd like to gain expertise in designing networks for local vs global features. Obviously, increasing the receptive field of neurons (through more pooling/striding and bigger kernels) will allow the network to take more global context into account.
+Could someone point me in the direction of some readings on the topic?
+Are there any specific architectures targeted to local features or to global context?
+"
+"['definitions', 'image-processing']"," Title: What is meant by sub-region of an image?Body: Consider the following sentences from the research paper titled PatternNet: Visual Pattern Mining with Deep Neural Network by Hongzhi Li et al.
+
+The value of each pixel in a feature map is the response of a filter
+with respect to a sub-region of an input image. A high value for a
+pixel means that the filter is activated by the content of the
+sub-region of the input image.
+
+The sentences mention the sub-regions of an image. Is there any formal definition for a sub-region of an image?
+"
+"['papers', 'geometric-deep-learning', 'graph-neural-networks', 'graphs', 'graph-isomorphism-network']"," Title: Why is the Graph Isomorphism Network powerful?Body: I am reading a paper known as GIN, How powerful are graph neural networks?, Xu et al. 2019
+The paper, Lemma 5 and Corollary 6, introduces Graph Isomorphism Network (GIN).
+In Lemma 5,
+
+Moreover, any multiset function $g$ can be decomposed as $g (X) = \phi(\sum_{x\in X} f(x))$ for some function $\phi$
+
+Similarly, in Corollary 6,
+
+Moreover, any
+function $g$ over such pairs can be decomposed as $g (c, X) = \varphi((1+\epsilon) f(c)+ \sum_{x\in X} f(x))$ for some function $\varphi$.
+
+Finally, it builds an MLP by composing $f^{(k+1)}$ and $\varphi^{(k)}$, i.e.
+
+$f^{(k+1)} \circ \varphi^{(k)}$
+
+I know that $h(X)$ or $h(c,X)$ is injective, because they are unique to $X$ and $(c, X)$ (in Lemma 5 and Corollary 6, respectively).
+What I don't understand is whether, in the statements of Lemma 5 and Corollary 6, $\phi$ and $\varphi$ are injective or not.
+My question is then: why is GIN powerful? I.e., why does GIN preserve injectivity?
+This paper explains that injectivity should be preserved to get a powerful GNN. How can I answer this question?
+"
+"['reference-request', 'knowledge-representation', 'expert-systems', 'ontology']"," Title: Apart from ontologies, which other methods for knowledge representation are there in Artificial Intelligence?Body: From what I have been reading, I see statements like
+
+Ontology is a common method used for knowledge representation in
+artificial intelligence.
+
+But there is never really a discussion around what other options are available. To allow me to research other options I am hoping someone could maybe suggest alternatives?
+As an extra question, do you have a preference for a particular system? If so, why?
+"
+"['datasets', 'prediction', 'unsupervised-learning', 'data-science', 'feature-engineering']"," Title: Generating a dataset from data with ""assumed"" lablesBody: I've got a task similar to the following:
+Out of x people, I need to predict who could be a good athlete and who could not. The thing is, I don't have data on the athletic performance of those individuals.
+So I was thinking of looking into assumptions/traits:
+Most NBA players are tall. If someone in a random group of people is tall, they could be a good basketball player.
+In contrast, a tall person would not be a good jockey.
+The same goes for age - a 3-year-old or a 90-year-old might not make a world-class athlete, etc.
+How would I best build a dataset for this problem? Which features do I need to add to the dataset in order to make good predictions about athletic performance?
+"
+"['comparison', 'terminology', 'papers', 'variational-autoencoder', 'aevb-algorithm']"," Title: How is the VAE related to the Autoencoding Variational Bayes (AEVB) algorithm?Body: I am familiar with the variational autoencoder, but not totally clear on what exactly the AEVB is.
+In the original VAE paper (by Kingma and Welling), he uses both the terms variational autoencoder and autoencoding variational Bayes.
+
+For the case of an i.i.d. dataset and continuous latent variables per datapoint, we propose the Auto-Encoding VB (AEVB) algorithm. In the AEVB algorithm, we make inference and learning especially efficient by using the SGVB estimator to optimize a recognition model that allows us to perform very efficient approximate posterior inference using simple ancestral sampling, which in turn allows us to efficiently learn the model parameters, without the need of expensive iterative inference schemes (such as MCMC) per datapoint.
+
+And then in section 2, the paper talks about the SGVB estimator and the AEVB algorithm.
+Then in section 3, it says that the VAE is an example.
+So, is the VAE a special application of AEVB?
+"
+"['automation', 'self-replicating-machines']"," Title: Why aren't 3d printers and robotic arms already used to create the first versions of self-replicating machines?Body: The ability to create self-replicating machines can give some very useful benefits. So what is the problem with creating this type of stuff?
+Let's say we have two pieces of equipment - 3d printers and robotic arms. These items are already available and are easy to create.
+It looks like they are enough to create self-replicating machines. 3d printers are able to print any details for arms and printers. Robotic arms are able to assemble other arms and printers. Both equipment items are able to create almost any other kind of stuff.
+So we need only one set of 3d printers and arms with a basic program to start the process. The more sophisticated programs can be added later to create almost any type of equipment from design. If there are enough rough materials, this process can be scaled indefinitely and allow to construct, gather resources, etc.
+So, what is the problem with that scheme? Why is is not used already yet everywhere?
+"
+"['reinforcement-learning', 'game-ai', 'reward-functions', 'reward-design', 'algorithmic-trading']"," Title: How should I define the reward function for a stock trading-like game?Body: Problem setting
+Consider a game like trading a stock
+
+- At each step, the agent can
buy
/ sell
a stock.
+- Trade is a pair of
buy
and sell
actions.
+- We can set the reward for the
sell
action as the profit made in this trade. Although this might be misleading, since whether a trade is good or not depends on both buying
and selling
timing.
+- We don't know the reward for
buy
, whether a buy
is good or not depends on when you sell
.
+
+Questions
+
+- How should the reward scheme be for a game like this? i.e., whether one action is good or not depends on other actions taken before it?
+- Can we set a scheme like this: set the reward to be zero at each step, no matter whether it is a buy or a sell. Only give the reward at the end of the game, say after 1000 steps, and the reward, in this case, is the total money made?
+
+"
+"['machine-learning', 'definitions', 'hyper-parameters']"," Title: When can I call an entity a hyperparameter?Body: As per my knowledge, any entity that is learnable by a training algorithm can be called a parameter. Weights of a neural network are called parameters because of this reason only.
+But I have doubts about the qualification of hyperparameter.
+Hyperparameter according to my knowledge is an entity that needs to be learned outside of the training algorithm. But, a lot of entities can come into the picture if I want to follow this definition.
+For example, the selection of the type of neural network, number of layers, number of neurons in each layer, presence of batch normalization layer, type of activation function, number of parameters, type of parameters (integer, float, etc.), number of epochs, batch size, type of optimizer, learning rate, etc., and I can able to list a lot of entities like this.
+Is it okay to call anything that needs to be learned outside the training algorithm a hyperparameter?
+"
+"['transformer', 'attention', 'positional-encoding']"," Title: Is there a notion of location in Transformer architecture in subsequent self-attention layers?Body: Transformer architecture (without position embedding) is by the very construction equivariant to the permutation of tokens. Given query $Q \in \mathbb{R}^{n \times d}$ and keys $K \in \mathbb{R}^{n \times d}$ and some permutation matrix $P \in \mathbb{R}^{n \times n}$, one has:
+$$
+Q \rightarrow P Q, K \rightarrow P K
+$$
+$$
+A =
+\text{softmax} \left(\frac{Q K^T}{\sqrt{d}} \right) \rightarrow
+\text{softmax} \left(\frac{P Q K^T P^T}{\sqrt{d}} \right) =
+P \ \text{softmax} \left(\frac{Q K^T}{\sqrt{d}} \right) P^T = P A P^T
+$$
+Without the positional embedding (learned or fixed) that breaks the permutation symmetry down to a translational one, there is no notion of location. And with the positional embedding one introduces such a notion: token $x$ is at the $k$-th position.
+However, I wonder whether this notion still makes sense after the first self-attention layer.
+The operation of producing the output (multiplication of attention by the value)
+$$
+x_{out} = A V
+$$
+outputs some weighted sum of all tokens in the sequence, the token at the $k$-th position now has information from all the sequence.
+So, I wonder, whether the notion of position (absolute or relative) still makes sense in this case?
+
+My guess is, that since Transformers involve skip connections, these transfer the notion of the location to the next layers. But this depends on the relative magnitude between the activation of the given self-attention layer and the skip-connections
+"
+"['reinforcement-learning', 'q-learning', 'optimization', 'multi-armed-bandits', 'contextual-bandits']"," Title: Multi-armed Bandit in optimization on graph edges selectionBody: I have the problem, which I described below. I wonder if there exists a class of multi-armed bandit approaches that is related to it.
+I am working on computer networking optimization.
+In the simplest scenario, we model the network as a graph with a circular node topology, similar to that seen in Chord (attached photo). Each node (vertex) can have a maximum number of $X$ active links (tunnels or edges) to other nodes at any given time. It can then open, maintain, or close links (each operation has a cost associated with it). If there isn't a direct edge, traffic must be routed through neighboring nodes. What is the best link structure (the optimal set of edges connecting the nodes) in the underlying graph, given the predicted traffic intensity matrix between the nodes?
+Note: the optimal link structure should be recalculated on a regular basis to account for history (for example, it may be worthwhile to keep a connection between two nodes open even though there is no traffic at the current time, because it was generally a busy link in the past and the chance of using this link again in the future is high).
+
+Question: can a multi-armed bandit approach be useful here?
+"
+"['neural-networks', 'convolutional-neural-networks', 'reference-request', 'universal-approximation-theorems']"," Title: Is there any paper that shows that multi-channel neural networks are universal approximators?Body: Lately, I have been reading a lot about the universal approximation theorem. I was surprised to find only theorems about "single-channel" standard networks (multi-layer perceptrons), where all layers are 2D vectors and the weights can be represented in weight matrices.
+In particular, this is no longer applicable in some convolutional network applications, where the layers tend to be tensors with multiple feature channels. Of course, one could construct an equivalent "single-channel" neural network from a multi-channel network by putting the weight matrices together in a certain way. However, one would then have sparse matrices as weight matrices with very many constraints on the matrix entries, so that the "standard theorems" would no longer be applicable.
+Do you know of any papers that study the Universal Approximation Theorem for multi-channel neural networks? Or is there a way to derive it from one of the other theorems?
+"
+"['computer-vision', 'algorithm-request', 'model-request', 'deepfakes']"," Title: Non-face ""deepfakes"" in videosBody: Instead of changing faces (like James Bond to Putin) what if, given sufficient training data, I wanted to:
+
+- Remove or add some windows from a brick house?
+- Convert a glass of red wine to a glass of white wine?
+- Remove the infamous Starbucks cup from Game of Thrones?
+
+Although deepfakes are notoriously data-hungry to train, the pre-trained weights are plug-and-play.
+Does there exist any equivalent for non-face objects?
+"
+"['comparison', 'graph-neural-networks', 'probabilistic-graphical-models', 'probabilistic-machine-learning']"," Title: What is the difference between Probabilistic Graphical models and Graph Neural networks?Body: While going over PGMs and GNNs, it seems like both leverage the graph data structure. The former has been used to represent causal associations (among other things), while the latter has a varied set of applications. Do these techniques intersect?
+"
+"['convolutional-neural-networks', 'math', 'probability-distribution', 'weights-initialization']"," Title: What is the analytical formula for ""Kaiming He"" probability density function?Body: A probability density function is a real-valued function that roughly gives the density of probability at a particular value of a random variable.
+For example, the probability density function of a normal random variable is given by
+$$f(x) = \dfrac{1}{\sigma \sqrt{2\pi}} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}$$
+The uniform Kaiming He probability distribution is used for the initialization of weights in convolutional neural networks in PyTorch, and the distribution was, I think, initially introduced in the research paper titled Delving Deep into Rectifiers:
+Surpassing Human-Level Performance on ImageNet Classification by Kaiming He et al.
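+For reference, the initializer I am referring to can be invoked in PyTorch roughly like this (a minimal sketch; the layer shape and arguments below are just for illustration, not a specific recommendation):
+import torch.nn as nn
+
+conv = nn.Conv2d(3, 16, kernel_size=3)
+# draws the weights from the "Kaiming He" uniform distribution
+nn.init.kaiming_uniform_(conv.weight, mode='fan_in', nonlinearity='relu')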
+What is the analytical formula for the Kaiming He probability density function?
+"
+"['neural-networks', 'reference-request', 'feedforward-neural-networks', 'function-approximation', 'universal-approximation-theorems']"," Title: Does there exist functions for which the necessary number of nodes in a shallow neural network tends to infinity as approximation error tends to 0?Body: The Universal Approximation Theorem states (roughly) that any continuous function can be approximated to within an arbitrary precision $\varepsilon>0$ by a feedforward neural network with one hidden layer (a Shallow Neural Network) with sufficient width.
+I remember stumbling upon articles showing that, for some functions, the necessary number of hidden neurons tends to infinity as $\varepsilon$ tends to zero (similar to the paper A closer look at the approximation capabilities of neural networks), but I have not had success finding them again. Is my memory incorrect, or is it merely my searching skills that are insufficient?
+"
+"['convolutional-neural-networks', 'reference-request', '3d-convolution', '2d-convolution']"," Title: 2D models on 3D tasks (convolutions): simple replace?Body: 2D tasks enjoy a vast backing of successful models that can be reused.
+For convolutions, can one simply replace 2D operations with 3D counterparts and inherit their benefits? Any 'extra steps' to improve the transition? Not interested in unrolling the 3D input along channels.
+Publication/repository references help.
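+To make the kind of "simple replace" I have in mind concrete, a minimal sketch (Keras; the layer sizes and input shape are placeholders, not my actual setup):
+from tensorflow.keras import layers, models
+
+# a 2D block such as Conv2D(32, (3, 3)) + MaxPooling2D((2, 2)) would become:
+model = models.Sequential([
+    layers.Conv3D(32, (3, 3, 3), activation='relu', input_shape=(64, 64, 64, 1)),
+    layers.MaxPooling3D((2, 2, 2)),
+    layers.Conv3D(64, (3, 3, 3), activation='relu'),
+    layers.GlobalAveragePooling3D(),
+    layers.Dense(1, activation='sigmoid'),
+])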
+
+Details
+The input is an STFT-like transform of multi-channel EEG timeseries, except it's 3D. There's spatial dependency to exploit across all three dimensions. The full transform is 4D, but one of the dimensions is unrolled along channels, so data and transform channels are mixed. The transform itself has stacked complex convolutions and modulus nonlinearities (output is real).
+I seek to reuse models like SEResNet, InceptionNet, and EfficientNet. The "benefits" of interest are mainly training viability (convergence speed, amount of required tuning) and generalization (test accuracy) - or, as an alternative to the latter, that blocks don't interact harmfully by e.g. assuming an inherently 2D structure.
+"
+"['neural-networks', 'machine-learning', 'tensorflow']"," Title: Is the graph considered as overfit?Body: I have a training dataset of 2000 images and 500 images for validation. I have executed 50 epochs, however I realized that my graph seems to be different as my accuracy is smaller than my loss. I am not sure on whether my graph is considered as overfitting. If it is, are there other ways to resolve it? I am currently running on Jupiter notebook. Attached below is my code
+Edit:
+By the way, I have made some changes to my code, such as reducing the dropout to 0.2. I have uploaded the new graph output as shown below. Is it considered normal?
+Code
+import numpy as np
+import keras
+from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
+import os
+import cv2
+
+Training_dir = './datasets/train'
+
+train_datagen = ImageDataGenerator(rotation_range =15,
+ width_shift_range = 0.2,
+ height_shift_range = 0.2,
+ rescale=1./255,
+ shear_range=0.2,
+ zoom_range=0.2,
+ horizontal_flip = True,
+ fill_mode = 'nearest',
+ data_format='channels_last',
+ brightness_range=[0.5, 1.5])
+
+train_gen = train_datagen.flow_from_directory(
+ Training_dir,
+ batch_size = 64,
+ class_mode='binary',
+ target_size=(150, 150)
+)
+
+Validation_dir = './datasets/val'
+
+val_datagen = ImageDataGenerator(rotation_range =15,
+ width_shift_range = 0.2,
+ height_shift_range = 0.2,
+ rescale=1./255,
+ shear_range=0.2,
+ zoom_range=0.2,
+ horizontal_flip = True,
+ fill_mode = 'nearest',
+ data_format='channels_last',
+ brightness_range=[0.5, 1.5])
+
+
+validation_gen = val_datagen.flow_from_directory(
+ Validation_dir,
+ batch_size = 64,
+ class_mode = 'binary',
+ target_size = (150, 150)
+)
+
+imgs = os.listdir(r"C:\Users\User\Documents\IM4483 Mini Project (Tang Soon Loong Jefferson_U1921181B)\datasets\train\cat")
+imgs1 = os.listdir(r"C:\Users\User\Documents\IM4483 Mini Project (Tang Soon Loong Jefferson_U1921181B)\datasets\train\dog")
+imgs2 = os.listdir(r"C:\Users\User\Documents\IM4483 Mini Project (Tang Soon Loong Jefferson_U1921181B)\datasets\val\dog")
+imgs3 = os.listdir(r"C:\Users\User\Documents\IM4483 Mini Project (Tang Soon Loong Jefferson_U1921181B)\datasets\val\cat")
+
+
+
+for img in imgs:
+ img=cv2.imread(r"C:\Users\User\Documents\IM4483 Mini Project (Tang Soon Loong Jefferson_U1921181B)\datasets\train\cat" + "\\"+img)
+ x = img_to_array(img)
+ x = x.reshape((1,) + x.shape)
+
+ i = 0
+ for batch in train_datagen.flow (x, batch_size=1, save_to_dir =r'C:\Users\User\Documents\IM4483 Mini Project (Tang Soon Loong Jefferson_U1921181B)\Preview\train\cat', save_prefix ='cat', save_format='jpg'):
+ i+=1
+ if i>5:
+ break
+
+for img in imgs1:
+ img=cv2.imread(r"C:\Users\User\Documents\IM4483 Mini Project (Tang Soon Loong Jefferson_U1921181B)\datasets\train\dog" + "\\"+img)
+ x = img_to_array(img)
+ x = x.reshape((1,) + x.shape)
+
+ i = 0
+ for batch in train_datagen.flow (x, batch_size=1, save_to_dir =r'C:\Users\User\Documents\IM4483 Mini Project (Tang Soon Loong Jefferson_U1921181B)\Preview\train\dog', save_prefix ='dog', save_format='jpg'):
+ i+=1
+ if i>5:
+ break
+
+for img in imgs2:
+ img=cv2.imread(r"C:\Users\User\Documents\IM4483 Mini Project (Tang Soon Loong Jefferson_U1921181B)\datasets\val\dog" + "\\"+img)
+ x = img_to_array(img)
+ x = x.reshape((1,) + x.shape)
+
+ i = 0
+ for batch in val_datagen.flow (x, batch_size=1, save_to_dir =r'C:\Users\User\Documents\IM4483 Mini Project (Tang Soon Loong Jefferson_U1921181B)\Preview\Validation\dog', save_prefix ='dog', save_format='jpg'):
+ i+=1
+ if i>5:
+ break
+
+for img in imgs3:
+ img=cv2.imread(r"C:\Users\User\Documents\IM4483 Mini Project (Tang Soon Loong Jefferson_U1921181B)\datasets\val\cat" + "\\"+img)
+ x = img_to_array(img)
+ x = x.reshape((1,) + x.shape)
+
+ i = 0
+ for batch in val_datagen.flow (x, batch_size=1, save_to_dir =r'C:\Users\User\Documents\IM4483 Mini Project (Tang Soon Loong Jefferson_U1921181B)\Preview\Validation\cat', save_prefix ='cat', save_format='jpg'):
+ i+=1
+ if i>5:
+ break
+
+
+
+from tensorflow.keras.models import Sequential
+from tensorflow.keras.layers import Conv2D, Dense, MaxPool2D, Flatten, Dropout
+
+'''
+SRC : https://www.pyimagesearch.com/2018/12/31/keras-conv2d-and-convolutional-layers/
+----------------------------------
+STRIDES = The strides parameter is a 2-tuple of integers, specifying the “step” of the
+convolution along the x and y axis of the input volume.
+PADDING = Typically, we set the values of the extra pixels to zero 'valid' or 'same'
+KERNEL_INITIALIZER = The kernel_initializer controls the initialization method used to initialize
+all values in the Conv2D class prior to actually training the network.
+
+FLATTEN = Return a copy of the array collapsed into one dimension.
+DROPOUT = dropout refers to randomly ignoring a set of units (i.e. neurons) during the
+training phase. When created, the dropout rate can be specified to the
+layer as the probability of setting each input to the layer to zero. USED TO PREVENT OVER-FITTING!
+
+DENSE = A Dense layer feeds all outputs from the previous layer
+to all its neurons, each neuron providing one output to the next layer.
+----------------------------------
+'''
+
+
+
+
+model = Sequential()
+model.add(Conv2D(16, (3,3), strides=(1,1), padding='same', activation='relu',kernel_initializer='he_normal',input_shape=(150, 150, 3)))
+model.add(Conv2D(16, (3,3), strides=(1,1), padding='same', activation='relu',kernel_initializer='he_normal'))
+model.add(MaxPool2D((2,2)))
+model.add(Dropout(0.2))
+
+model.add(Conv2D(32, (3,3), strides=(1,1), padding='same', activation='relu',kernel_initializer='he_normal'))
+model.add(Conv2D(32, (3,3), strides=(1,1), padding='same', activation='relu',kernel_initializer='he_normal'))
+model.add(Conv2D(32, (3,3), strides=(1,1), padding='same', activation='relu',kernel_initializer='he_normal'))
+model.add(MaxPool2D((2,2)))
+model.add(Dropout(0.2))
+
+model.add(Conv2D(64, (3,3), strides=(1,1), padding='same', activation='relu',kernel_initializer='he_normal'))
+model.add(Dropout(0.2))
+model.add(Conv2D(64, (3,3), strides=(1,1), padding='same', activation='relu',kernel_initializer='he_normal'))
+model.add(Dropout(0.2))
+model.add(Conv2D(64, (3,3), strides=(1,1), padding='same', activation='relu',kernel_initializer='he_normal'))
+model.add(MaxPool2D((2,2)))
+model.add(Dropout(0.2))
+
+model.add(Conv2D(128, (3,3), strides=(1,1), padding='same', activation='relu',kernel_initializer='he_normal'))
+model.add(Dropout(0.2))
+model.add(Conv2D(128, (3,3), strides=(1,1), padding='same', activation='relu',kernel_initializer='he_normal'))
+model.add(Dropout(0.2))
+model.add(Conv2D(128, (3,3), strides=(1,1), padding='same', activation='relu',kernel_initializer='he_normal'))
+model.add(MaxPool2D((2,2)))
+model.add(Dropout(0.2))
+
+model.add(Flatten())
+model.add(Dense(units=256, activation='relu'))
+model.add(Dropout(0.2))
+model.add(Dense(units=256, activation='relu'))
+model.add(Dropout(0.2))
+model.add(Dense(units=256, activation='relu'))
+model.add(Dropout(0.2))
+model.add(Dense(units=1, activation='sigmoid'))
+
+
+import tensorflow as tf
+check_point_path = './best.h6'
+model_checkpoint = tf.keras.callbacks.ModelCheckpoint(
+ filepath = check_point_path,
+ monitor = 'val_accuracy',
+ save_weights_only=False,
+ save_best_only=True,
+ verbose=1
+)
+
+model.compile(optimizer = tf.keras.optimizers.Adam(0.0005,decay=1e-5),
+ loss = 'binary_crossentropy',
+ metrics = ['acc'])
+
+
+tb_callback = tf.keras.callbacks.TensorBoard(log_dir="logs/", histogram_freq=1)
+
+
+
+
+print('Num Params : ',model.count_params())
+
+
+'''
+VERBOSE = By setting verbose 0, 1 or 2 you just say how do you want to 'see' the training progress for each epoch.
+'''
+model_history = model.fit(
+ train_gen,
+ epochs=50,
+ batch_size=128,
+ verbose=1,
+ callbacks = [tb_callback],
+ validation_data=validation_gen
+)
+
+
+
+
+
+
+
+
+
+%matplotlib inline
+
+import matplotlib.image as mpimg
+import matplotlib.pyplot as plt
+
+#-----------------------------------------------------------
+# Retrieve a list of list results on training and test data
+# sets for each training epoch
+#-----------------------------------------------------------
+Acc=model_history.history['acc']
+Val_acc=model_history.history['val_acc']
+Loss=model_history.history['loss']
+Val_loss=model_history.history['val_loss']
+
+epochs=range(len(Acc)) # Get number of epochs
+
+#------------------------------------------------
+# Plot training and validation accuracy per epoch
+#------------------------------------------------
+plt.plot(epochs, Acc, 'r')
+plt.plot(epochs, Val_acc, 'b')
+plt.title('Training and validation accuracy')
+plt.xlabel('Epochs')
+plt.ylabel('Accuracy')
+plt.legend(["Training Accuracy","Validation Accuracy"])
+plt.show()
+
+#------------------------------------------------
+# Plot training and validation loss per epoch
+#------------------------------------------------
+plt.plot(epochs, Loss, 'r')
+plt.plot(epochs, Val_loss, 'b')
+plt.title('Training and validation loss')
+plt.xlabel('Epochs')
+plt.ylabel('Loss')
+plt.legend(["Training Loss","Validation Loss"])
+plt.show()
+
+Epochs output
+Num Params : 3273585
+Epoch 1/50
+32/32 [==============================] - 97s 3s/step - loss: 0.7618 - acc: 0.4925 - val_loss: 0.6931 - val_acc: 0.5000
+Epoch 2/50
+32/32 [==============================] - 70s 2s/step - loss: 0.6916 - acc: 0.5330 - val_loss: 0.6932 - val_acc: 0.5000
+Epoch 3/50
+32/32 [==============================] - 71s 2s/step - loss: 0.6946 - acc: 0.5250 - val_loss: 0.6932 - val_acc: 0.5000
+Epoch 4/50
+32/32 [==============================] - 72s 2s/step - loss: 0.6911 - acc: 0.5205 - val_loss: 0.6932 - val_acc: 0.5000
+Epoch 5/50
+32/32 [==============================] - 73s 2s/step - loss: 0.6949 - acc: 0.5095 - val_loss: 0.6932 - val_acc: 0.5000
+Epoch 6/50
+32/32 [==============================] - 74s 2s/step - loss: 0.6933 - acc: 0.5170 - val_loss: 0.6933 - val_acc: 0.5000
+Epoch 7/50
+32/32 [==============================] - 74s 2s/step - loss: 0.6926 - acc: 0.5190 - val_loss: 0.6932 - val_acc: 0.5000
+Epoch 8/50
+32/32 [==============================] - 72s 2s/step - loss: 0.6936 - acc: 0.5055 - val_loss: 0.6933 - val_acc: 0.5000
+Epoch 9/50
+32/32 [==============================] - 74s 2s/step - loss: 0.6940 - acc: 0.5120 - val_loss: 0.6933 - val_acc: 0.5000
+Epoch 10/50
+32/32 [==============================] - 72s 2s/step - loss: 0.6920 - acc: 0.5265 - val_loss: 0.6935 - val_acc: 0.5000
+Epoch 11/50
+32/32 [==============================] - 73s 2s/step - loss: 0.6929 - acc: 0.4975 - val_loss: 0.6934 - val_acc: 0.5000
+Epoch 12/50
+32/32 [==============================] - 92s 3s/step - loss: 0.6935 - acc: 0.5035 - val_loss: 0.6933 - val_acc: 0.5000
+Epoch 13/50
+32/32 [==============================] - 89s 3s/step - loss: 0.6924 - acc: 0.5115 - val_loss: 0.6935 - val_acc: 0.5000
+Epoch 14/50
+32/32 [==============================] - 72s 2s/step - loss: 0.6929 - acc: 0.5095 - val_loss: 0.6935 - val_acc: 0.5000
+Epoch 15/50
+32/32 [==============================] - 71s 2s/step - loss: 0.6928 - acc: 0.5175 - val_loss: 0.6940 - val_acc: 0.5000
+Epoch 16/50
+32/32 [==============================] - 71s 2s/step - loss: 0.6916 - acc: 0.5170 - val_loss: 0.6941 - val_acc: 0.5000
+Epoch 17/50
+32/32 [==============================] - 71s 2s/step - loss: 0.6922 - acc: 0.5205 - val_loss: 0.6945 - val_acc: 0.5000
+Epoch 18/50
+32/32 [==============================] - 71s 2s/step - loss: 0.6894 - acc: 0.5280 - val_loss: 0.6949 - val_acc: 0.5000
+Epoch 19/50
+32/32 [==============================] - 71s 2s/step - loss: 0.6887 - acc: 0.5205 - val_loss: 0.6956 - val_acc: 0.5000
+Epoch 20/50
+32/32 [==============================] - 72s 2s/step - loss: 0.6870 - acc: 0.5400 - val_loss: 0.6963 - val_acc: 0.5000
+Epoch 21/50
+32/32 [==============================] - 72s 2s/step - loss: 0.6864 - acc: 0.5380 - val_loss: 0.6974 - val_acc: 0.5000
+Epoch 22/50
+32/32 [==============================] - 74s 2s/step - loss: 0.6836 - acc: 0.5505 - val_loss: 0.6965 - val_acc: 0.5000
+Epoch 23/50
+32/32 [==============================] - 74s 2s/step - loss: 0.6722 - acc: 0.5640 - val_loss: 0.6903 - val_acc: 0.5280
+Epoch 24/50
+32/32 [==============================] - 72s 2s/step - loss: 0.6712 - acc: 0.5755 - val_loss: 0.6980 - val_acc: 0.5100
+Epoch 25/50
+32/32 [==============================] - 72s 2s/step - loss: 0.6709 - acc: 0.5735 - val_loss: 0.6946 - val_acc: 0.5180
+Epoch 26/50
+32/32 [==============================] - 74s 2s/step - loss: 0.6687 - acc: 0.5890 - val_loss: 0.7016 - val_acc: 0.5120
+Epoch 27/50
+32/32 [==============================] - 73s 2s/step - loss: 0.6626 - acc: 0.5825 - val_loss: 0.7043 - val_acc: 0.5060
+Epoch 28/50
+32/32 [==============================] - 72s 2s/step - loss: 0.6675 - acc: 0.5830 - val_loss: 0.7053 - val_acc: 0.5060
+Epoch 29/50
+32/32 [==============================] - 72s 2s/step - loss: 0.6580 - acc: 0.5895 - val_loss: 0.7074 - val_acc: 0.5100
+Epoch 30/50
+32/32 [==============================] - 72s 2s/step - loss: 0.6632 - acc: 0.5875 - val_loss: 0.7036 - val_acc: 0.5200
+Epoch 31/50
+32/32 [==============================] - 73s 2s/step - loss: 0.6705 - acc: 0.5685 - val_loss: 0.6877 - val_acc: 0.5400
+Epoch 32/50
+32/32 [==============================] - 73s 2s/step - loss: 0.6595 - acc: 0.5880 - val_loss: 0.6821 - val_acc: 0.5520
+Epoch 33/50
+32/32 [==============================] - 72s 2s/step - loss: 0.6751 - acc: 0.5600 - val_loss: 0.7052 - val_acc: 0.5020
+Epoch 34/50
+32/32 [==============================] - 72s 2s/step - loss: 0.6536 - acc: 0.6015 - val_loss: 0.6853 - val_acc: 0.5580
+Epoch 35/50
+32/32 [==============================] - 72s 2s/step - loss: 0.6549 - acc: 0.6010 - val_loss: 0.6675 - val_acc: 0.5860
+Epoch 36/50
+32/32 [==============================] - 73s 2s/step - loss: 0.6530 - acc: 0.6050 - val_loss: 0.7023 - val_acc: 0.5160
+Epoch 37/50
+32/32 [==============================] - 75s 2s/step - loss: 0.6559 - acc: 0.5910 - val_loss: 0.6965 - val_acc: 0.5380
+Epoch 38/50
+32/32 [==============================] - 73s 2s/step - loss: 0.6423 - acc: 0.6225 - val_loss: 0.6719 - val_acc: 0.5920
+Epoch 39/50
+32/32 [==============================] - 72s 2s/step - loss: 0.6426 - acc: 0.6135 - val_loss: 0.7164 - val_acc: 0.5260
+Epoch 40/50
+32/32 [==============================] - 73s 2s/step - loss: 0.6430 - acc: 0.6080 - val_loss: 0.6936 - val_acc: 0.5640
+Epoch 41/50
+32/32 [==============================] - 72s 2s/step - loss: 0.6440 - acc: 0.5980 - val_loss: 0.6894 - val_acc: 0.5720
+Epoch 42/50
+32/32 [==============================] - 72s 2s/step - loss: 0.6431 - acc: 0.6135 - val_loss: 0.7083 - val_acc: 0.5560
+Epoch 43/50
+32/32 [==============================] - 73s 2s/step - loss: 0.6344 - acc: 0.6205 - val_loss: 0.6800 - val_acc: 0.5860
+Epoch 44/50
+32/32 [==============================] - 72s 2s/step - loss: 0.6687 - acc: 0.5735 - val_loss: 0.6699 - val_acc: 0.5940
+Epoch 45/50
+32/32 [==============================] - 72s 2s/step - loss: 0.6344 - acc: 0.6190 - val_loss: 0.7070 - val_acc: 0.5620
+Epoch 46/50
+32/32 [==============================] - 72s 2s/step - loss: 0.6340 - acc: 0.6160 - val_loss: 0.7012 - val_acc: 0.5740
+Epoch 47/50
+32/32 [==============================] - 74s 2s/step - loss: 0.6424 - acc: 0.5985 - val_loss: 0.7255 - val_acc: 0.5340
+Epoch 48/50
+32/32 [==============================] - 73s 2s/step - loss: 0.6449 - acc: 0.5985 - val_loss: 0.7120 - val_acc: 0.5420
+Epoch 49/50
+32/32 [==============================] - 72s 2s/step - loss: 0.6437 - acc: 0.6120 - val_loss: 0.6708 - val_acc: 0.6140
+Epoch 50/50
+32/32 [==============================] - 73s 2s/step - loss: 0.6497 - acc: 0.6030 - val_loss: 0.7339 - val_acc: 0.5280
+
+Graph output
+
+"
+"['reinforcement-learning', 'deep-rl', 'papers', 'policy-gradients', 'deterministic-pg-theorem']"," Title: Why exactly was previously believed that the deterministic policy gradient did not exist?Body: I'm reading the paper Deterministic Policy Gradient Algorithms, David Silver et al.
+First of all, in the introduction, the author says that
+
+It was previously believed that the deterministic policy gradient did not exist
+
+But I wonder why that is. The general version of the policy gradient theorem does not place restrictions on the policy $\pi$. So, if we choose the policy $\pi$ to be a Dirac measure, that is, $\pi(\cdot | s) = \delta_{a}$ for some $a \in \mathcal{A}$, then it is exactly the notion of a deterministic policy, and we can apply the usual (stochastic) policy gradient theorem.
+Indeed, in Theorem 2, they showed that the deterministic policy gradient theorem and the usual policy gradient theorem match in the limit of zero variance. (In fact, I can't understand the statement rigorously, because the policy is a probability "measure", while the variance is a property of a "random variable".) However, my computation below shows a contradiction.
+Let $\pi(\cdot | s) = \delta_a$, a Dirac measure with atom $a \in \mathcal{A}$. Following the notation of the DPG paper, the policy gradient theorem says
+$$\nabla_{\theta}J(\pi_{\theta}) = \mathbb{E}_{s, a}\left[\nabla_{\theta}\log \pi_{\theta}(a \mid s) \, Q^{\pi}(s, a)\right]$$
+The definition of the integral gives $$\nabla_\theta J(\pi_{\theta}) = \int_{\mathcal{S}, \mathcal{A}} \pi_{\theta}(da \mid s) \, \rho^{\pi}(ds) \, \nabla_\theta \log \pi_\theta(a \mid s) \, Q^{\pi}(s,a).
+$$
+But since $\pi_\theta$ has only an atom at the element $a \in \mathcal{A}$, this is the same as $$\int_\mathcal{S} \rho^{\pi}(ds) \, \nabla_\theta \log \pi_\theta(a \mid s) \, Q^{\pi}(s, a) = \mathbb{E}_s\left[\nabla_\theta \log \pi_\theta(a \mid s) \, Q^{\pi}(s, a)\right].$$
+(NOTE: the last term looks the same as the first equation, but we have removed $a$ from the expectation by fixing $a$ to be the atom corresponding to $s$.) However, note that $\pi_\theta(a \mid s) = \delta_a(\{a\}) = 1$ by our definition, so $\log \pi_\theta(a \mid s) = 0$. Thus $\nabla_\theta \log \pi_\theta(a \mid s) = \nabla_\theta 0 = 0$, and we reach the conclusion that the gradient vanishes.
+However, this is clearly not the case.
+Can anyone help me?
+"
+"['reinforcement-learning', 'q-learning', 'sutton-barto', 'sarsa', 'expected-sarsa']"," Title: Why would SARSA diverge (but not Expected SARSA or Q-learning)?Body: In figure 6.3 (shown below) from Reinforcement Learning: An Introduction (second edition) by Sutton and Barto, SARSA is shown to perform worse asymptotically (after 100k episodes) than in the interim (after 100 episodes) for larger values of alpha (alpha > 0.9). The graph is for the cliff walking gridworld example whose description is also given (from the paper by van Seijen et al).
+
+
+As the image mentions, it is taken from a paper by van Seijen and others titled "A Theoretical and Empirical Analysis of Expected Sarsa". In the image below, from Section VII (A. Discussion) of the van Seijen paper, the authors mention that the reason for the better interim performance of SARSA, as compared to its asymptotic performance for larger values of alpha, is the divergence of Q-values. The authors, however, fail to mention the reason for the divergence.
+
+What would be the reason that SARSA diverges but not Expected SARSA or Q-learning?
+According to me, SARSA might have a higher variance than Expected SARSA, but it should behave, on average, the same as Expected SARSA.
+Additionally, shouldn't Q-learning be at greater risk of diverging Q values since, in its update, we maximise over actions (and I have in fact seen a number of instances where there is a problem of diverging Q values in DQNs)?
+The majority of papers I have looked at only talk about the problem from the function approximation perspective.
+"
+"['machine-learning', 'gradient-descent', 'stochastic-gradient-descent', 'mini-batch-gradient-descent']"," Title: Is it possible to use stochastic gradient descent at the beginning, then switch to batch gradient descent with only a few training examples?Body: Batch gradient descent is extremely slow for large datasets, but it can find the lowest possible value for the cost function. Stochastic gradient descent is relatively fast, but it kind of finds the general area where convergence happens and it kind of oscillates around that area.
+Is it possible to use stochastic gradient descent at the beginning and find the way to a general convergence and then use batch gradient descent on only a few training examples out of the huge dataset to get even closer to the exact point of convergence?
+I know that a model with a cost function that's a bit away from the lowest value for the cost function performs well in stochastic gradient descent, but assuming you want better results, will this work well?
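+To make the idea concrete, a minimal sketch of such a two-phase schedule (PyTorch; the toy data, model, and the point at which to switch are all assumptions on my part):
+import torch
+from torch.utils.data import DataLoader, TensorDataset
+
+# toy data and model, purely illustrative
+X, y = torch.randn(1000, 10), torch.randn(1000, 1)
+model = torch.nn.Linear(10, 1)
+loss_fn = torch.nn.MSELoss()
+opt = torch.optim.SGD(model.parameters(), lr=0.01)
+
+def run(loader, epochs):
+    for _ in range(epochs):
+        for xb, yb in loader:
+            opt.zero_grad()
+            loss_fn(model(xb), yb).backward()
+            opt.step()
+
+# phase 1: stochastic updates (batch size 1) to reach the general region of convergence
+run(DataLoader(TensorDataset(X, y), batch_size=1, shuffle=True), epochs=2)
+# phase 2: (near-)batch updates on a small subset to settle closer to the minimum
+run(DataLoader(TensorDataset(X[:200], y[:200]), batch_size=200), epochs=50)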
+"
+"['neural-networks', 'deep-learning', 'long-short-term-memory', 'papers', 'convergence']"," Title: Why does the number of input tokens to an LSTM have an impact on the convergence of Integrated Gradients?Body: Background
+I am computing the attribution scores for a simple LSTM model using Integrated Gradients. This method defines the contribution of a feature to a model prediction by integrating over the gradients along a path between the input and a fixed baseline:
+$$IG_i(x) = (x_i - x'_i) \cdot\int_{\alpha=0}^1 \frac{\partial F(x'+\alpha(x-x'))}{\partial x_i}d\alpha$$
+A common way of measuring the quality of the generated attributions is via the completeness axiom, which states that:
+$$\sum_i IG_i(x) = F(x) - F(x')$$
+The key to computing the IG scores is the approximation of the path integral, which can be approximated via Riemann sum, or a similar interpolation method. In section 5 of the IG paper, it is stated that, in practice, between 50 to 300 interpolation steps are sufficient to obtain IG scores that converge to satisfy the completeness axiom.
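+For reference, the kind of Riemann-sum approximation I mean looks roughly like this (a simplified sketch, not my exact code; the model, input shapes, and baseline are placeholders):
+import torch
+
+def integrated_gradients(model, x, baseline, steps=50):
+    # straight-line path from the baseline to the input, evaluated at `steps` points
+    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
+    path = (baseline + alphas * (x - baseline)).detach().requires_grad_(True)
+    model(path).sum().backward()
+    avg_grad = path.grad.mean(dim=0)          # Riemann approximation of the path integral
+    attributions = (x - baseline) * avg_grad
+    # completeness: the sum of attributions should be close to F(x) - F(baseline)
+    return attributions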
+Issue
+I am now testing the IG attributions on a simple LSTM model (1-layer, 16 hidden units). For shorter inputs (<20 tokens), convergence is reached in a reasonable number of steps, and the approximation of the integral is stable. However, when the length of the input increases, I find that the integral approximation diverges when the number of interpolation steps is increased! This can be seen in the following plot (N.B. the y axis is logarithmic):
+
+Question
+My question is: why does the number of input tokens to the LSTM have an impact on the convergence of integrated gradients?
+It is stated in footnote 1 of the IG paper that the completeness axiom depends on whether the model satisfies Lebesgue's integrability condition. It would surprise me, however, that increasing the number of input tokens would dissatisfy this constraint: would it be possible that the model has become too nonlinear for numerical integration to still work? If so, are there alternative numerical integration methods that could be used here, instead of Riemann approximations or the Gauss-Legendre quadrature?
+"
+"['deep-learning', 'notation', 'loss']"," Title: What is the name of this letter $\mathcal{J}$?Body: What is the name of this letter $\mathcal{J}$ in the following deep learning equation? And what alphabet it is from?
+$$\mathcal{J} = \frac{1}{m} \sum_{i=1}^m \mathcal{L}^{(i)}$$
+"
+"['neural-networks', 'computer-vision', 'transformer', 'vision-transformer']"," Title: Do Vision Transformers handle arbitrary sequence lengths the same way as normal Transformers?Body: Does ViT do handle arbitrary sequence lengths using masking the same way the normal Transformer does?
+The ViT paper doesn't mention anything about it, so I assume it uses masking like the normal Transformer.
+"
+"['convolutional-neural-networks', 'backpropagation']"," Title: CNN: Difficulties understanding backward pass derivativesBody: I have really quite hard difficulties to understand what is actually going on in the backward pass of a CNN.
+I am currently focusing on these references:
+
+- https://towardsdatascience.com/forward-and-backward-propagations-for-2d-convolutional-layers-ed970f8bf602
+- https://leonardoaraujosantos.gitbook.io/artificial-inteligence/machine_learning/deep_learning/convolution_layer
+
+At the moment, I only want to compute the gradient which is then passed to the next lower layer.
+I tried to implement the last equation on this image (from [1]):
+https://miro.medium.com/max/2400/1*K2K0tfxmAlyRlqqbj4z0Rg@2x.png
+For me, this formula looks like 6 nested for-loops.
+The first is over the channel c, the second over the output height i, the third over the output width j, the fourth over the number of kernels f, the fifth over the kernel height m, and the sixth over the kernel width n.
+Am I wrong? I tried to implement this but I always get an out of bounds error.
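+To make the indexing concrete, here is a naive nested-loop sketch of that formula as I understand it (NumPy, stride 1, no padding; all names are my own, and this may well be where my misunderstanding lies):
+import numpy as np
+
+def conv_backward_input(d_out, weights, input_shape):
+    # d_out:   (F, H_out, W_out)  - gradient flowing in from the next layer
+    # weights: (F, C, KH, KW)     - the layer's kernels
+    # returns  (C, H_in, W_in)    - gradient w.r.t. the layer input
+    F, H_out, W_out = d_out.shape
+    _, C, KH, KW = weights.shape
+    d_x = np.zeros(input_shape)
+    for c in range(C):
+        for i in range(H_out):
+            for j in range(W_out):
+                for f in range(F):
+                    for m in range(KH):
+                        for n in range(KW):
+                            d_x[c, i + m, j + n] += d_out[f, i, j] * weights[f, c, m, n]
+    return d_x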
+Someone here who doesn't mind going a bit more into detail?
+Any tips are greatly appreciated
+"
+"['binary-classification', 'explainable-ai']"," Title: Explainable AI for complex input featuresBody: I have a model for binary classification that includes 2 linear layers with RELU activation function and Sigmoid in the last layer. The input features are FastText word embedding, frequency, and statistical signals.
+This model has a 93% f1-score and I want to add an explanation to this model but I don't know how can I start.
+My question is which models or papers good for these complex input features?
+I appreciate any advice to achieve this goal.
+"
+"['computer-vision', 'embeddings', 'self-supervised-learning']"," Title: How do the scale of an embedding affects a downstream task?Body: I am currently training a neural network in a self-supervised fashion, using Contrastive Loss and I want to use that network then to fine-tune it in a classification task with a small fraction of the data with labels. I'm basing this project on the paper titled A Simple Framework for Contrastive Learning of Visual Representations
+by Ting Chen et al.
+My question concerns a very specific thing that I think is causing me some problems. After finishing the self-supervised training, I extract the embeddings of a batch of data and get the following statistics, computed over the flattened embeddings of the whole batch.
+Mean: -23.090446
+Std: 91.78753
+Min: -710.24493
+Max: 651.8682
+
+What bugs me is that when I extract embeddings from a ResNet, for example, I get much lower values (the largest values in absolute terms don't usually go beyond 15), whereas here I'm getting much larger values.
+When using these embeddings to look for similarities, it does seem to work, but my question is whether this scale might cause problems when adding another layer and using the network for a classification task.
+I have the feeling that something here is wrong, but I cannot spot exactly what it is.
+Could you give me any advice on this?
+"
+"['ethics', 'superintelligence', 'ai-safety', 'deepfakes', 'algorithmic-bias']"," Title: Are the existential dangers of AI exaggerated?Body: I understand some of the inherent dangers involved with AGI and advanced machine learning. While I can see some of the more low-level risks associated with AI coming to fruition (deep-fakes, biased algorithms, etc.), some of the more existential dangers seem far-fetched and lack empirical examples. As someone outside of AI development circles and without a background in comp-sci, I would like to know how realistic those working in the field of AI find these fears. Are there any convincing hypotheticals about the risks of a superintelligence?
+I've read some Stuart Russel and have recently been watching Robert Miles videos on the topic. Both of these observers seem worried. Is this opinion widely held or is it seen as a bit extreme among developers?
+Provably Beneficial Artificial Intelligence
+by Stuart Russell
+Robert Miles YouTube channel
+"
+"['deep-learning', 'data-labelling', 'multiclass-classification']"," Title: How to label unsupervised data for deep learning multi-classificationBody: I have unlabeled credit card transaction data that has the following columns:
+Transaction_ID Frequency Amount Fees
+ 192831 21 829 23
+ 382912 14 920 24
+ 483921 839 24059 87
+
+Eventually, I'd like to build a deep learning model (e.g. an LSTM) that can tell me whether a transaction (row) has a "high", "moderate", or "low" risk. However, since the data is unlabeled, I believe I need to label the data first before I feed it into the deep learning model.
+For example, transactions that have small frequency and amount values, like the first two rows, need to be labeled as "low (0)", while transactions that have a large frequency and amount, like the last row, should be labeled as "high (2)". If both frequency and amount have moderate values, the row will be labeled as "moderate (1)".
+I wonder if it is okay to use other machine learning techniques such as K-Means clustering to label the data before I feed the data into the deep learning model. Is it okay to use one Machine Learning algorithm (K-means) to label the data and feed the same labeled data into another Deep Learning model (LSTM)? Or is it a bad practice? For example, if the first model (K-means) is biased, will that bias(error) be carried over from the first model to the second model (LSTM)?
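+To make the labeling step I have in mind concrete, a minimal sketch (scikit-learn; the file name, number of clusters, and the mapping from cluster IDs to risk levels are assumptions):
+import pandas as pd
+from sklearn.cluster import KMeans
+
+df = pd.read_csv('transactions.csv')                  # hypothetical file with the columns above
+features = df[['Frequency', 'Amount', 'Fees']]
+df['risk_cluster'] = KMeans(n_clusters=3, random_state=0).fit_predict(features)
+# the cluster IDs would then be mapped to low/moderate/high before training the LSTM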
+If it is a bad practice to use two different ML technologies, what else can I do to label the data?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'convolutional-neural-networks']"," Title: Decreasing number of neurons in CNNBody: the conventional way of creating a CNN is using increasing number of neurons:
+model = models.Sequential([
+ layers.Conv2D(32,(3,3),activation='relu',input_shape=input_size),
+ layers.MaxPooling2D((2,2)),
+ layers.Conv2D(64,(3,3),activation='relu'),
+ layers.MaxPooling2D((2,2)),
+ layers.Conv2D(128,(3,3),activation='relu'),
+ layers.MaxPooling2D((2,2)),
+ layers.Conv2D(128,(3,3),activation='relu'),
+ layers.MaxPooling2D((2,2)),
+ layers.Flatten(),
+ layers.Dense(128,activation='relu'),
+ layers.Dense(64,activation='relu'),
+ layers.Dense(1,activation='sigmoid')
+])
+
+where in this case, the number of neurons (filters) increases from 32 to 64 to 128. However, I have also found a paper https://pubmed.ncbi.nlm.nih.gov/33532975/ that uses a decreasing number of neurons, i.e. 128, 64, 32, as the network goes deeper. But in this paper, not much explanation was given on how the NN works with a decreasing number of neurons. Does it mean a decreasing number of neurons assumes that "there are fewer important features to be captured as the network goes deeper"?
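+For concreteness, the decreasing-filter counterpart of the model above would look something like this (my own sketch, not taken from the paper):
+model = models.Sequential([
+    layers.Conv2D(128,(3,3),activation='relu',input_shape=input_size),
+    layers.MaxPooling2D((2,2)),
+    layers.Conv2D(64,(3,3),activation='relu'),
+    layers.MaxPooling2D((2,2)),
+    layers.Conv2D(32,(3,3),activation='relu'),
+    layers.MaxPooling2D((2,2)),
+    layers.Flatten(),
+    layers.Dense(64,activation='relu'),
+    layers.Dense(1,activation='sigmoid')
+])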
+Question:
+Can someone explain to me
+
+- how the increasing number of neurons works
+- how the decreasing number of neurons works, and why this is not the common practice
+- referring to 2, what keywords should I search for in order to find articles or writing related to 2?
+
+"
+"['deep-learning', 'test-datasets', 'validation-datasets', 'validation']"," Title: how to decide the optimum model?Body: I have split the database available into 70% training, 15% validation, and 15% test, using holdout validation. I have trained the model and got the following results: training accuracy 100%, validation accuracy 97.83%, test accuracy 96.74%
+In another trial for training the model, I got the following results: Training accuracy 100%, validation accuracy 97.61%, test accuracy 98.91%
+The same data split is used in each run.
+Which model should I choose: the first case, in which the test accuracy is lower than the validation accuracy, or the second case, in which the test accuracy is higher than the validation accuracy?
+"
+"['neural-networks', 'machine-learning', 'classification', 'weights', 'binary-classification']"," Title: Is there a way to update the neural network to fit the new data without the time required for retraining?Body: I built a basic neural network in MATLAB. The neural network classifies points on the X-Y axis system into two classes (0 and 1).
+(I am trying to obtain the function that represents a shape from this photo.)
+
+Every so often the values of the points change slightly and some of the points defined in class 1 become class 0, like in this photo.
+
+Is there a way to update the neural network to fit the new data without the time required for retraining?
+"
+"['papers', 'transformer', 'proofs', 'linear-algebra', 'linformer']"," Title: Where do the characteristics of self-attention come into play in Linformer's proof that self-attention is low rank?Body: In Linformer's proof that self-attention is low rank in their paper, I don't see how it doesn't generalize to every matrix. They don't utilize any specifics of self-attention (the entire proof feels like it's in equation 20 utilizing JL, and I do not see where characteristics of self-attention come into play).
+What am I missing?
+
+"
+"['neural-networks', 'reference-request']"," Title: Reference needed for neural networks finding solutions of PDE'sBody: DL-PDE prescribes a way to feed a neural network data, which in turn comes up with a PDE of the form
+$$u_{t}(t,x,y) = F(x,y,u,u_{x},u_{y},u_{xx},u_{xy},u_{yy},...) \hspace{0.5cm} (x,y) \in \Omega \subset \mathbb{R}^{2}, t \in [0, T]$$
+I am looking for a way to feed a neural network the same data while also prescribing a number of other variables (say 2, making the total number of variables 3), so that the neural network could come up with a system of PDEs comprising three equations in three variables.
+Any leads for this?
+"
+"['convolutional-neural-networks', 'generative-adversarial-networks', 'transpose-convolution']"," Title: Is a Conv2DTranspose the same as a full convolution?Body: I am currently creating a GAN model from scratch (following this tutorial: https://machinelearningmastery.com/how-to-develop-a-generative-adversarial-network-for-an-mnist-handwritten-digits-from-scratch-in-keras/) but I can't find out how to implement Conv2DTranspose from scratch. Is a Conv2DTranspose the same as a full convolution? If not, how would one implement it?
+"
+"['math', 'objective-functions']"," Title: Are the domains of objective functions in AI always equals to $\mathbb{R}^D$ or subset of it?Body: Consider the following paragraph from the chapter named Vector Calculus from the textbook titled Mathematics for Machine Learning by Marc Peter Deisenroth et al.
+
+Central to this chapter is the concept of a function. A function $f$
+is a quantity that relates two quantities to each other. In this book,
+these quantities are typically inputs $x \in \mathbb{R}^D$ and targets
+(function values) $f(x)$, which we assume are real-valued if not
+stated otherwise. Here $\mathbb{R}^D$ is the domain of $f$, and the
+function values $f(x)$ are the image/codomain of $f$.
+
+We can notice that the textbook is taking $\mathbb{R}^D$ as the domain for objective functions. I want to know whether this is valid in general.
+Do the objective functions that we generally use in artificial intelligence have $\mathbb{R}^D$ as the domain?
+I am guessing it would not be, since loss functions are generally defined on datasets, which can also have discrete attributes, and hence the objective function cannot be defined on every point in $\mathbb{R}^D$. So, I am guessing that the correct form of the bold statement from the quoted paragraph is "Here the domain of $f$ should be a subset of $\mathbb{R}^D$" if we intend to deal with the general case. Am I correct, or is there some arrangement such as defining $f$ as zero where the function is not defined?
+"
+"['deep-learning', 'convolutional-neural-networks', 'data-preprocessing', 'data-labelling', 'fully-convolutional-networks']"," Title: Best practice for handling letterboxed images for non fully-convolutional deep learning networks?Body: I'm working on a depth estimation network. It has two outputs:
+
+- A relative depth map
+- A scalar for scaling the relative depth map into an absolute depth map. This second output uses dense layers so we cannot use variable-sized input.
+
+We are trying to handle two different dimensions (192x256 and 256x192). The current approach is to letterbox the image, meaning apply black on the image so that it comes out to 256x256. We decided on this approach instead of center-cropping images to 192x192 because we believe we may lose valuable data with cropping.
+When using letterboxes, I see two paths:
+
+- Ignore the letterbox portions of the image in my loss function. The loss function will only perform calculations on the original portion of the image.
+- Set a static value for the letterbox portion and include it as part of the loss.
+
+Is #1 the correct approach? The network will then be able to predict any depth value for the black letterbox portions without being penalized. I'm concerned with #2 about confusing the network between the letterbox portion and actual dark portions of images.
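+For reference, the way I picture option 1 is a masked loss, roughly like this (PyTorch sketch; mask marks the original, non-letterboxed pixels, and the L1 loss is just an example):
+import torch
+
+def masked_l1_loss(pred, target, mask):
+    # mask: 1 for pixels belonging to the original image, 0 for the letterbox padding
+    diff = torch.abs(pred - target) * mask
+    return diff.sum() / mask.sum().clamp(min=1)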
+"
+"['neural-networks', 'backpropagation', 'pytorch']"," Title: How are partial derivatives calculated in a computational graph?Body: I am trying to understand how are partial derivatives calculated in a computational graph. I understand reasoning behind computational graphs and I am bold enough to say I understand how they work, at least on high level of understanding.
+But what I don't know is how are partial derivatives itself computed or better said, how are they implemented in code.
+I have checked few resources like this lecture slides from CS231N, this blog post or this blog post on TowardsDatascience. They explain graphs and how are expressions calculated in graph, but they don't explain how are partial derivatives derived (or I didn't understand from those explanations). For example, blog post from TowardsDatascience says:
+
+Next, we need to calculate the partial derivatives of each connection between operations, represented by the edges. These are the calculations of the partials of each edge:
+
+And then they show an image with the values of the partial derivatives, but they never explain how these equations are actually calculated in the implementation of the graph.
+Yes, okay, I know how to calculate these partial derivatives on paper and then hardcode them in my code, but I don't know how they are actually automatically computed and implemented in the code of libraries like Torch or Theano.
+Do they have some basic rules implemented in code, like, for example:
+$$ \frac{\partial (a + b)}{\partial a} = \frac{\partial a}{\partial a} + \frac{\partial b}{\partial a} = 1 $$
+and then decompose expressions until they reach basic elements/rules or is there another way that libraries like Torch, Theano or TF do it?
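+To illustrate what I imagine such "basic rules" could look like, here is a toy reverse-mode sketch I put together (my own code, certainly not how Torch is actually implemented internally):
+class Node:
+    def __init__(self, value, parents=()):
+        self.value = value
+        self.parents = parents   # list of (parent_node, local_gradient) pairs
+        self.grad = 0.0
+
+    def backward(self, upstream=1.0):
+        self.grad += upstream
+        for parent, local_grad in self.parents:
+            parent.backward(upstream * local_grad)
+
+def add(a, b):
+    # d(a+b)/da = 1, d(a+b)/db = 1
+    return Node(a.value + b.value, [(a, 1.0), (b, 1.0)])
+
+def mul(a, b):
+    # d(a*b)/da = b, d(a*b)/db = a
+    return Node(a.value * b.value, [(a, b.value), (b, a.value)])
+
+i, j, k, l = Node(2), Node(3), Node(5), Node(7)
+y = mul(add(mul(i, j), k), l)
+y.backward()
+print(j.grad)   # 14.0 = l * i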
+Or to put it in another way, if I have this code in Torch:
+from torch import Tensor
+from torch.autograd import Variable
+
+def element(val):
+ return Variable(Tensor([val]), requires_grad=True)
+
+# Input nodes
+i = element(2)
+j = element(3)
+k = element(5)
+l = element(7)
+
+# Middle and output layers
+m = i*j
+n = m+k
+y = n*l
+
+# Calculate the partial derivative
+y.backward()
+dj = j.grad
+print(dj)
+
+how does Torch know, that is, how does it compute internally that $ \frac{\partial y}{\partial j} = l \cdot 1 \cdot i = l \cdot i $?
+"
+"['game-ai', 'algorithm-request']"," Title: What AI algorithm could I use to trap an agent in a game?Body: Imagine a game with grid size 10x10, there is a good guy and a bad guy and obstacles in the grid, i.e. essentially a maze. The goal of the bad guy is to find the good guy and trap him by erecting walls around him. The good guy is able to move in the maze using a simple random movement, only in 4 directions - up, down, right or left.
+I've used the A* algorithm to find the shortest path to the good guy but am still unsure as to how to go about trapping the good guy in his own space?
+"
+"['definitions', 'gradient', 'calculus', 'derivative']"," Title: What is the rigorous and formal definition for the direction pointed by a gradient?Body: Consider the following definition of derivative from the chapter named Vector Calculus from the test book titled Mathematics for Machine Learning by Marc Peter Deisenroth et al.
+
+Definition 5.2 (Derivative). More formally, for $h>0$ the derivative
+of $f$ at $x$ is defined as the limit
+$$\dfrac{df}{dx} := \lim_{h \rightarrow 0} \dfrac{f(x + h) - f(x)}{h}$$
+The derivative of $f$ points in the direction of the steepest ascent of $f$.
+
+You can observe that the derivative of a function is another function. If we consider the derivative at a single point, then it is a real number that quantifies the rate of change of the output of the function with respect to the input.
+There are two kinds of directions we need to focus on that are related to gradients. One is the direction pointed by a gradient and another one is the direction for moving our input parameters using a gradient. This question is restricted to the direction of the first kind.
+We can treat the sign of the derivative at a particular point as the direction to move our input parameters. And I am not sure about the rigorous definition for the direction pointed by a derivative. I have thus doubts about the direction pointed by a gradient.
+What exactly is the direction pointed to by a gradient? And what is the formal definition of the direction of a gradient?
+I know about the direction that is given by gradient to move our parameters. But, I am not sure about the rigorous definition for the direction of a gradient vector.
+"
+"['search', 'state-spaces', 'norvig-russell', 'depth-first-search', 'state-space-search']"," Title: How DFS may expand the same state many times via different paths in an acyclic state space?Body: I am reading the book titled Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig (4th edition) and came across this sentence about depth-first search (page 79, line 12):
+
+For acyclic state spaces it may end up expanding the same state many
+times via different paths, but will (eventually) systematically
+explore the entire space.
+
+My question is: how is this possible? Can you please show me some examples?
+"
+"['deep-learning', 'comparison', 'incremental-learning', 'mlops']"," Title: How will MLOps and lifelong learning be complementary?Body: According to [1], in MLOps, continuous training is
+
+a new property, unique to ML systems, that's concerned with automatically retraining and serving the models.
+
+Lifelong/incremental learning, on the other hand, mainly studies how to learn incrementally rather than retrain. [2]
+
+Lifelong Machine Learning or Lifelong Learning (LL) is an advanced machine learning (ML) paradigm that learns continuously, accumulates the knowledge learned in the past, and uses/adapts it to help future learning and problem-solving.
+
+I can see some links or conflicts between the two, but cannot explain them explicitly. I asked an author in the second link about this issue, and he said that the two are complementary. I wonder how the two will help each other. Or will one kill the other?
+"
+"['models', 'weights']"," Title: model and trained model parameters on CIFAR-10Body: I'm looking for different models (specifically ResNet18/20, ResNet32/34, VGG16, MobileNet and SqueezeNet) and their parameters after training (i.e., .pth
file) that were trained on CIFAR-10
or CIFAR-100
. I tried looking for them for hours and couldn't find anything. perhaps someone could refer me to a site which have trained models for CIFAR-x
?
+Thank you.
+"
+"['reinforcement-learning', 'deep-rl', 'dqn', 'actor-critic-methods', 'target-network']"," Title: Why isn't a target network used for the critic in on-policy actor-critic methods?Body: Based on my research, I've seen so many on-policy AC approaches that utilise a critic network to estimate the value function $V$. The Bellman equation for the value function is as bellow:
+$$
+V_\pi(s_t) = \sum_a \pi(a|s_t)\sum_{r, s'}\big(r+\gamma V_\pi(s')\big)P(s', r|s_t, a)
+$$
+It makes sense not to have a replay buffer, due to the current policy appearing in the formula and the fact that our approach is on-policy. However, I really cannot figure out why no one uses a target network to stabilize the training process of the critic, like what we have in DQN, namely the variant published in 2015. Does anyone have an idea about this, ideally with a citation?
+I know that DDPG uses a critic with a fixed target network, but be aware that it is a real off-policy actor-critic. By "real" I mean it is not due to importance sampling.
+I have to mention that I can imagine something, but I'm not sure whether it is true or not. If we have a target network, it means we are trying to find a deterministic, optimal in the case of DQN, policy, while we are learning the current policy's data for the actor-critic case with the critic.
+"
+"['regression', 'performance', 'support-vector-machine', 'scikit-learn']"," Title: Do I need to tune the hyper-parameters or more data if SVR model performs poorly?Body: I am using non-linear data to SVR and have tried tuning the hyperparameters and still have a poor model performance. Do I need more data or format the data for more suitable results?
+I get similar performance for ANNs, decision tree, and random forest (slightly better) and even negative for polynomial regression.
+The graphs for test data performance and training data also get a DataConversionWarning
+You can find the data I used here
+The plots I obtained look like this:
+actual vs predicted for test data
+actual vs predicted for training data
+import numpy as np
+import matplotlib.pyplot as plt
+import pandas as pd
+from sklearn.model_selection import train_test_split
+from sklearn.svm import SVR
+from sklearn.metrics import r2_score
+#
+#
+dataset = pd.read_csv('Data.csv')
+X = dataset.iloc[:, :-1].values
+y = dataset.iloc[:, -1].values
+#
+#
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 1)
+#
+#
+regressor = SVR(kernel = 'linear', gamma='auto')
+regressor.fit(X_train, y_train.ravel())
+y_predict = regressor.predict(X_test)
+np.set_printoptions(precision=2)
+print(np.concatenate((y_predict.reshape(len(y_predict),1), y_test.reshape(len(y_test),1 )), 1))
+#
+#
+r2_score(y_test, y_predict)
+#
+#
+#model performance
+fig, ax = plt.subplots()
+ax.scatter(y_test, y_predict)
+ax.plot([y.min(), y.max()], [y.min(), y.max()], 'k--', lw=4)
+ax.set_xlabel('Actual')
+ax.set_ylabel('Predicted')
+#regression line
+y_test, y_predict = y_test.reshape(-1,1), y_predict.reshape(-1,1)
+ax.plot(y_test, regressor.fit(y_test, y_predict).predict(y_test))
+ax.set_title('Final Prediction-R2: ' + str(r2_score(y_test, y_predict)))
+plt.show()
+#
+#
+#training data performance
+y_train_predict = regressor.predict(X_train)   # predictions on the training set, used in the plot below
+fig, ax = plt.subplots()
+ax.scatter(y_train, y_train_predict)
+ax.plot([y.min(), y.max()], [y.min(), y.max()], 'k--', lw=4)
+ax.set_xlabel('Actual')
+ax.set_ylabel('Predicted')
+#regression line
+y_train, y_train_predict = y_train.reshape(-1,1), y_train_predict.reshape(-1,1)
+ax.plot(y_train, regressor.fit(y_train, y_train_predict).predict(y_train))
+ax.set_title('Final Prediction-R2: ' + str(r2_score(y_train, y_train_predict)))
+plt.show()
+
+"
+"['reinforcement-learning', 'value-functions', 'policies', 'bellman-equations', 'optimal-policy']"," Title: Can an optimal policy have a value function that has a smaller value for a state than a non-optimal policy?Body: I'm starting to learn about the Bellman Equation and a question came to my mind.
+A policy $\pi$ is optimal if its value $v_\pi(s)$ is greater than or equal to the value $v_{\pi'}(s)$ of any other policy $\pi'$, for all states $s \in S$.
+Why does this work?
+Can't it be that the optimal policy thinks a state isn't that good and gives it a low value, but still performs best in comparison with other policies that assign higher values to this state?
+"
+"['reinforcement-learning', 'deep-rl', 'q-learning']"," Title: How to deal with Q-learning having low variance in predicted Q-values?Body: I have a neural network that takes the state (which contains a lot of data), and the possible action (which is very little data), and predicts the Q-value of the action. I am double Q-learning.
+I've noticed that, given a particular state $s$, the neural network will predict nearly identical $q(s, a_i)$ for all actions $a_1, \dots, a_n$. As the neural network gets trained, this situation gets worse.
+I think it is predicting the mean Q value of the state. Perhaps, the small amount of data that comprises the action is being drowned out by state input?
+I've considered using softmax to predict the best action, but it seems like we have nearly an unlimited number of possible actions to take, and I don't want to hard-code them.
+EDIT, More Detail: The state is represented by all the text detected by an OCR of a GUI program + an embedding of screenshot image of the current GUI state. While in comparison the action is just 1-2 words (the text or tooltip of the GUI element we are considering clicking on), and I'm concerned that this imbalance of representation of state vs action in the input could be causing the network to accidentally (nearly) ignore the action input.
+"
+"['machine-learning', 'reference-request', 'symbolic-ai', 'expert-systems', 'knowledge-based-systems']"," Title: Is there any other (possibly less popular) approach to create AI apart from statistical methods?Body: From what I have gathered so far, an AI has some prior (stored in the form of some probability distribution), and, based on experiences/data, changes the distribution (via Bayes rule) accordingly. This idea seems intuitively correct, as humans do something similar: we have some prejudice about certain things and refine it further based on additional observations.
+I am wondering if there is a different (possibly, non-probabilistic) setting for designing an AI.
+"
+"['reinforcement-learning', 'temporal-difference-methods', 'off-policy-methods', 'importance-sampling']"," Title: How does this TD(0) off-policy value update formula work?Body: The update formula for the TD(0) off-policy learning algorithm is (taken from these slides by D. Silver for lecture 5 of his course)
+$$ \underbrace{V(S_t)}_{\text{New value}} \leftarrow \underbrace{V(S_t)}_{\text{Old value}} + \alpha \left( \frac{ \pi(A_t|S_t)}{\mu (A_t|S_t)} (\underbrace{R_{t+1} + \gamma V(S_{t+1}))}_{\text{TD target}} - \underbrace{V(S_t)}_{\text{Old value}} \right) $$
+where $\frac{ \pi(A_t|S_t)}{\mu (A_t|S_t)}$ is the ratio of the likelihoods that policy $\pi$ will take this action at this state divided by the likelihood that behavior policy $\mu$ takes this action at this state.
+What I do not understand is:
+Assume the behavior policy $\mu$ took an action that is very unlikely to happen under policy $\pi$. I would assume this term goes towards $0$.
+$$ \frac{ \pi(A_t|S_t)}{\mu (A_t|S_t)} = 0 $$
+But, if this term goes to $0$, the whole equation would become the following
+$$ V(S_t) \leftarrow V(S_t) - \alpha V(S_t) $$
+This would mean we decrease the value of this state.
+But this doesn't make any sense to me: if the 2 policies are very different, we gain little to no information. Therefore, I would assume the value would be unchanged instead of decreased.
+What is my misconception here?
+"
+"['machine-learning', 'data-preprocessing', 'binary-classification', 'cross-validation', 'imbalanced-datasets']"," Title: How to arrange test dataset distribution for an imbalanced classification problem?Body: I have a dataset that contains 560 datapoints, and I would like to do binary classification on it. 400 datapoints belong to class 1, and 160 points belong to class 2. In the case of an imbalanced dataset like this, how to arrange the test dataset to get valid performance results? Should I keep the same imbalanced data distribution for the test set which is similar to the distribution of the training data, or arrange it in a way that half of the test points belong to the first class and the remaining half belongs to the second class?
+"
+"['transformer', 'attention', 'machine-translation', 'language-model', 'encoder-decoder']"," Title: What is input (and shape) to K/V/Q of self-attention of EACH Decoder block of Language-translation model Transformer's tokens during Inference?Body: Transformer model of the original Attention paper has a decoder unit that works differently during Inference than Tranining.
+I'm trying to understand the shapes used during decoder (both self-attention and enc-dec-attention blocks), but it's very confusing. I'm referring to this link and also the original Attention paper
+In Inference, it uses all previous tokens generated until that time step (say the $k$-th time step), as shown in the diagram below and explained at this link.
+
+Another diagram that shows self-attention and enc-dec-attention within decoder:
+
+Question:
+However when I look at actual shapes of the QKV projection in the decoder self-attention, and feeding of the decoder self-attention output to the "enc-dec-attention"'s Q matrix, I see only 1 token from the output being used.
+Let's assume 6 decoder blocks one after the other in the decoder stack (which is the base transformer model).
+I'm very confused about how the shapes of all matrices in the decoder blocks after decoder-1 of the decoder stack (more specifically, the self-attention and enc-dec-attention of decoder-block-2, decoder-3, decoder-4, ..., decoder-6) can match up with the variable-length input to the decoder during inference. I looked at several online materials but couldn't find an answer.
+I see only the BGemms in the decoder's self-attention (not the enc-dec-attention) using variable shapes covering all previous $k$ steps, but all other Gemms are fixed size.
+
+- How is that possible? Is only 1 token (the last one from the decoder output) being used for the QKV matmuls in self-attention and the Q matmul in enc-dec-attention (which is what I see when running the model)?
+- Could someone elaborate on how all these shapes for Q/K/V in self-attention and Q in enc-dec-attention match up when the decoder input length is different at each time step?
+
+"
+"['reinforcement-learning', 'algorithm-request', 'multi-agent-rl']"," Title: Which multi-agent reinforcement learning algorithm can I use when there are two types of agents with different action spaces?Body: Most of the papers on multi-agent RL (MARL) that I have encountered have multiple agents who have a common action space.
+In my work, the scenario involves $m$ agents of one particular type (say type A) and $n$ agents of another type (type B). The type A agents all deal with a similar problem, so they share the same action space, and the type B agents deal with a different type of problem and share another action space.
+The type A agents are involved in an intermediary task that doesn't reflect in the final reward, and the final reward comes from the actions of type B agents. But the actions of type B are dependent on type A agents.
+Any idea on what kind of MARL algorithm is suitable for such a scenario?
+"
+"['deep-learning', 'proximal-policy-optimization']"," Title: Proximal Policy Optimization for continuous control problemBody: I am using clipped PPO to train a neural network to act as the controller for steering an aircraft, and am finding that my networks aren't learning. The goal is to keep the aircraft flying to cover the distance, and the code is implemented using Pytorch. I am wondering if anyone who is more experienced would be willing to take a look at my implementation.
+Aircraft and actor model
+I have a flight simulation environment with the dynamical system of an aircraft, where the simulation is propagated through time in 0.1s increments. The states of the aircraft serve as inputs to the actor network (airspeed, pitch, heading, height), and the controls include pitch and roll angles.
+The actor network uses a multivariate normal distribution to output two values that serve as the means for the pitch and roll angle controls. The variances are fixed. This way, the policy is stochastic and allows for exploration. I am using two hidden layers of 256 neurons each for both the actor and critic networks, with learning rates of 1e-4, ReLU activations on the hidden layers, a sigmoid on the actor's output, and a linear output for the critic.
+Reward structure
+I have tried two different reward structures:
+
+- A reward of 1 for every time step that the aircraft is in the air (with a reward of 0 in the terminal state, which is when the aircraft crashes to the ground or if it achieves the maximum flight time)
+- A reward of 0 for every time step, but a single reward computed at the terminal state that is proportional to the displacement of the aircraft over the simulation, and an extremely large penalty that is subtracted from this reward if the aircraft crashes.
+
+Neither of these reward structures seemed to allow learning. I am wondering if simply having a reward of 1 at every time step, much like the gym cartpole environment is insufficient in getting the model to learn how to fly. I was expecting the network to figure out ways of controlling the aircraft to fly longer to obtain a greater total reward. For the second reward structure, I expected the advantage estimation to carry the large reward/penalty backward through the trajectory so that the actor would learn to avoid flying in certain ways that end up in crashing the plane.
+PPO implementation
+For the clipped PPO implementation, I am:
+
+- Resetting the flight simulation environment
+- Propagating the simulation until the aircraft crashes or reaches maximum flight time
+- Training the actor and critic networks every 20 time steps (2s of simulation)
+- Repeating this for a maximum number of simulations
+
+For training, I take the 20 time steps, shuffle them, and divide them into 5 batches. Then I train the network on these batches, reshuffle the 20 time steps and create 5 new batches, and train again.
+I repeat this process for a total of 4 epochs per learning cycle.
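+For clarity, the batching scheme I just described is roughly the following sketch (the names here are placeholders for illustration, not my actual code):
+import numpy as np
+
+ppo_update = lambda batch: None                     # placeholder for the clipped PPO update
+T, n_batches, n_epochs = 20, 5, 4
+for epoch in range(n_epochs):
+    idx = np.random.permutation(T)                  # reshuffle the 20 time steps
+    for batch in np.array_split(idx, n_batches):    # 5 minibatches of 4 time steps each
+        ppo_update(batch)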
+The fastest that the aircraft can reach the ground (crash) is 1.7s of simulation time (17 time steps), whereas I ultimately want the aircraft to fly for 10 minutes (6000 time steps). I am thinking that it is too difficult to train a network to fly for such a long period of time because it would have to learn all the states leading up to that point.
+Results
+I found that training the algorithm typically resulted in extreme volatility in the test simulation's score (the sum of all rewards at every time step). The score history plot (omitted here) shows the raw scores swinging widely, with the moving average of 100 scores in orange.
+What I've tried
+
+- Changing the standard deviation of the normal distribution of the actor output so that there is less exploration (stability of the aircraft is quite sensitive to the particular control values).
+- Advantage normalization
+- Increasing training time to 10 000 simulations (~40 000 learning iterations since there are 4 epochs per simulation)
+- Testing the algorithm on the gym's cartpole environment (discrete actions). I was able to successfully train the actor in 200ish games.
+
+If anyone has any other suggestions of things to try, it would be greatly appreciated.
+"
+"['deep-learning', 'binary-classification', 'cross-validation', 'training-datasets', 'test-datasets']"," Title: Given a dataset of people with and without cancer, should I split it into training and test datasets such that the same person is not in both?Body: I have a database that contains healthy persons and lung cancer patients. I need to design a deep neural network for the binary classification problem (cancer/no cancer). I need to split the dataset into 70% train and 30% test.
+How can I do the splitting? According to persons?
+I think that splitting according to persons is correct since this will ensure that the same person will not exist simultaneously in both the training and the test subsets. This is reasonable since we are recognizing the disease, not the person. If images from the same person exist in both subsets, the problem will be easy, and not reasonable, from a practical point of view. Do you agree?
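+For reference, a person-level split can be sketched with scikit-learn's GroupShuffleSplit; everything below (X, y, person_ids) is a hypothetical placeholder for my data:
+import numpy as np
+from sklearn.model_selection import GroupShuffleSplit
+
+X, y = np.zeros((1000, 64)), np.zeros(1000)          # placeholder images/labels
+person_ids = np.repeat(np.arange(100), 10)           # hypothetical person id for each image
+# one 70/30 split in which no person appears in both subsets
+gss = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
+train_idx, test_idx = next(gss.split(X, y, groups=person_ids))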
+"
+"['computer-vision', 'image-restoration', 'lpips', 'psnr', 'ssim']"," Title: To assess the quality of the reconstructed images, which metric is more reliable: PSNR or LPIPS?Body: I am training a model for image reconstruction. I used several metrics to assess the quality of the reconstructed images. LPIPS is decreasing, which is good. PSNR goes up and down, but the L1 loss and SSIM loss are increasing.
+So, which metric should I care more about?
+My datasets are Paris Street View and CelebA.
+I'm not sure if the VGG that extracts features for LPIPS is reliable here or not.
+"
+"['recurrent-neural-networks', 'papers', 'seq2seq']"," Title: Does Seq2Seq decoder take a special vector or the weights of the last encoder cell as an output?Body: I'm reading Sequence to Sequence Learning with Neural Networks and there's a thing that I couldn't quite grasp.
+Paper says the encoder outputs a vector to be fed to the decoder. More precisely
+
+Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector
+
+However, when I look at the diagram:
+
+there's no such vector here. What I understand from this diagram is decoder RNN takes the weights of the last encoder cell as an input.
+Which one is correct? can you explain?
+Stanford notes put it as
+
+The final hidden state of the cell will then become C
+
+So, is there no vector?
+"
+"['reference-request', 'game-ai', 'monte-carlo-tree-search', 'game-theory', 'alphazero']"," Title: Would AlphaZero work just with a value network?Body: There is a nice post about the intuition why AlphaZero works.
+One of the advantages of using a policy network in the games where a perfect simulator is available (such as chess) is to save computation time by not generating all subsequent moves and then evaluating them using the value network. Instead, we can only focus on the good moves given by the policy network.
+However, besides the computation time savings of the policy network, are there any requirements why it needs to be used during training?
+What if we would replace the computation of the policy network with this logic: generate all subsequent moves, evaluate them using value network, and create policy from these predictions. Would this still work?
+I would appreciate any references where this topic is discussed.
+"
+"['computer-vision', 'image-recognition', 'data-preprocessing', 'action-recognition', 'video-classification']"," Title: Can I flip a video to generate more data for action recognition?Body: There are 8 distinct action classes and around 50+ videos per class. I was wondering if flipping videos from the training set can be a good option to generate additional data. Is it?
+"
+"['regression', 'multilayer-perceptrons', 'vc-dimension', 'sample-complexity']"," Title: Is the VC dimension of a MLP regressor a valid upper bound on how many points it can exactly fit?Body: I want to calculate an upper bound on how many training points an MLP regressor can fit with ~0 error. I don't care about the test error, I want to overfit as much as possible the (few) training points.
+For example, for linear regression, it's impossible to achieve 0 MSE if the points don't lie on a line. An MLP, however, can overfit and include these points in the prediction.
+
+So, my question is: given an MLP and its parameter, how can I calculate an upper bound on how many points it can exactly fit?
+I was thinking to use the VC dimension to estimate this upper bound.
+The VCdim is a metric for binary classification models, but a pseudo-dimension can be adapted to real-valued regression models by thresholding the output:
+$$\text{Pdim}(\mathcal{G}) = \text{VCdim}(\{(x,t) \mapsto 1_{g(x)-t>0} : g \in \mathcal{G}\})$$
+(from the book Foundation of Machine Learning, 2nd edition, definition 11.5)
+where $\mathcal{G}$ is the concept class of the regressor, $g(x)$ is the regressor output and $t$ is a threshold.
+The model is an MLP with RELU activations. As far as I understood on Wikipedia, it should have a VCdim equal to the number of the weights (correct me if I'm wrong).
+So, the question is: how to practically calculate the pseudo-dim for the regressor given the VCdim? Does it make sense for the purpose that I want to achieve?
+"
+"['reinforcement-learning', 'action-spaces', 'dimensionality-reduction']"," Title: Is it generally advisable to have a low dimensional action space in Reinforcement Learning?Body: In supervised or unsupervised learning, it is advised to reduce the dimensionality due to the curse of dimensionality in general.
+Is this also generally advisable for the action space of reinforcement learning?
+As far as I understand (and inspired by the answer here Is it possible to tell the Reinforcement Learning agent some rules directly without any constraints), you can reduce the dimensionality of the action space always to 1, meaning that you solely have 1 action. This can be done by using a mapping (e.g. in the step function of Open AI Gym).
+Let's have a look at an example: We have a heating device that can heat 3 storages and we have a discrete action variable for all of them with 11 steps [0.0, 0.1, 0.2, ..., 1.0]. So we have
+
+- action_heatStorage1: [0.0, 0.1, 0.2, ..., 1.0]
+- action_heatStorage2: [0.0, 0.1, 0.2, ..., 1.0]
+- action_heatStorage3: [0.0, 0.1, 0.2, ..., 1.0]
+
+In this case, we would have a 3-dimensional action space
+
+- action = [action_heatStorage1, action_heatStorage2,
+action_heatStorage3]
+
+However, it is also possible to combine the 3 actions into 1 action variable "action_combined" of the size [11 * 11 * 11=1331] by just using a mapping of this one action into the separate 3 actions. For example like this:
+
+- action_combined = 0 --> action_heatStorage1 =0, action_heatStorage2
+=0, action_heatStorage3 =0
+- action_combined = 1 --> action_heatStorage1 =0.1, action_heatStorage2
+=0, action_heatStorage3 =0
+- action_combined = 2 --> action_heatStorage1 =0.2, action_heatStorage2
+=0, action_heatStorage3 =0
+
+...
+
+- action_combined = 1330 --> action_heatStorage1 =1.0, action_heatStorage2
+=1.0, action_heatStorage3 =1.0
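+A minimal sketch of such a combined-action mapping (the names and the use of numpy are just for illustration; order='F' makes the first storage vary fastest, as in the enumeration above):
+import numpy as np
+
+levels = np.linspace(0.0, 1.0, 11)          # the 11 discrete settings
+def decode(action_combined):
+    i1, i2, i3 = np.unravel_index(action_combined, (11, 11, 11), order='F')
+    return float(levels[i1]), float(levels[i2]), float(levels[i3])
+
+print(decode(0))      # (0.0, 0.0, 0.0)
+print(decode(1))      # (0.1, 0.0, 0.0)
+print(decode(1330))   # (1.0, 1.0, 1.0)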
+
+Is it generally advisable to reduce the dimensionality of the action space (Option 2), or to use multidimensional action variables directly (Option 1)?
+I know that there is most probably not an answer that is valid for all problems. But, as I am relatively new to reinforcement learning, I would like to know whether the theory of reinforcement learning gives a general recommendation for doing something like this, or whether this question can't be answered in general because it depends entirely on the application and should be tested for each application individually.
+Reminder: I have already received a good answer. Still, I would like to bump this question to also hear the opinions and experience of others on this topic.
+"
+"['training', 'object-detection']"," Title: How do Tensorflow models and YOLO differ in terms of training steps?Body: Can anybody explain how the training steps work for the Tensorflow Object Detection algorithms available on the Tensorflow 2 Detection Model Zoo? For instance, YOLOv5 cycles through epochs. As I understand it, one epoch is completed after all the training data passes through the algorithm. However, the Tensorflow models I just described are set up so they pass through a certain amount of training steps (several are optimized for 100,000 training steps, including some with 200,000 and 300,000 steps, depending on the algorithm).
+What is the difference between epochs and these steps? Just trying to understand how the algorithm trains my data.
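+For reference, if a step means one optimizer update on one batch (my assumption), the two would be related roughly like this, with made-up example numbers:
+import math
+
+num_images, batch_size, total_steps = 5000, 8, 100_000    # hypothetical values
+steps_per_epoch = math.ceil(num_images / batch_size)       # 625 steps per pass over the data
+print(total_steps / steps_per_epoch)                        # 160.0 epochs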
+"
+"['natural-language-processing', 'pytorch', 'word-embedding']"," Title: Why do we multipy context_size with embedding_dim? (PyTorch)Body: I've been using Tensorflow and just started learning PyTorch. I was following the tutorial: https://pytorch.org/tutorials/beginner/nlp/word_embeddings_tutorial.html#sphx-glr-beginner-nlp-word-embeddings-tutorial-py
+Where we try to create an n-gram language model. However, there's something I don't understand.
+class NGramLanguageModeler(nn.Module):
+
+ def __init__(self, vocab_size, embedding_dim, context_size):
+ super(NGramLanguageModeler, self).__init__()
+ self.embeddings = nn.Embedding(vocab_size, embedding_dim)
+ self.linear1 = nn.Linear(context_size * embedding_dim, 128)
+ self.linear2 = nn.Linear(128, vocab_size)
+
+At self.linear1 = nn.Linear(context_size * embedding_dim, 128), why did we multiply embedding_dim by context_size? Isn't embedding_dim the input size? So why do we multiply it by the context size?
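+For context, as far as I can tell the forward pass in the linked tutorial flattens the context embeddings before linear1 (this continues the class above; the shapes in the comments are my own annotation, and F is torch.nn.functional):
+    def forward(self, inputs):
+        # inputs: (context_size,) word indices
+        embeds = self.embeddings(inputs).view((1, -1))  # (1, context_size * embedding_dim)
+        out = F.relu(self.linear1(embeds))              # (1, 128)
+        out = self.linear2(out)                         # (1, vocab_size)
+        return F.log_softmax(out, dim=1)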
+"
+"['neural-networks', 'machine-learning', 'autoencoders']"," Title: Masked Autoencoder StructureBody: In the following structure when we use MADE due to the constraints for making a masked autoencoder, it seems some inputs do not have any connection to the next layer, and also there is the output that does not have a connection to the previous layer!
+Can someone clarify?
+
+"
+"['deep-learning', 'objective-functions', 'object-detection', 'jaccard-similarity']"," Title: Can a GIoU loss (generalized intersection over union) be used after an STN module (spatial transformer network)?Body: I have a model that uses an STN module for number detection and Mean Squared Error loss. But I would like to replace it for GIoU, because MSE doesn't take into account how much of the target area has been detected, only how close individual coordinates are close to the target. But I wonder if this makes sense. Has anyone tried it, or has some insight?
+"
+"['deep-learning', 'transformer', 'language-model', 'natural-language-generation', 'forward-pass']"," Title: Why do language models produce different outputs for same prompt?Body: For conventional 'Neural Networks', the weights simply act as a transformation in highly multi-dimensional space; for a forward pass, the output is always the same since there is no stochastic weighting component in the process.
+However, in Transformers (self-attention based encoder-decoder type architecture to be specific) we get different outputs with the same prompts (assuming $T > 0$). This doesn't make sense to me because the set of weights are always static, so the probability distribution produced should be the same; this simple decoding should yield the same output.
+However, in practice, we observe that it is not actually the case.
+Any reasons why?
+"
+"['time-complexity', 'norvig-russell', 'uniform-cost-search']"," Title: Why is there a 1 in complexity formula of uniform-cost search?Body: I am reading the book titled Artificial Intelligence: A Modern Approach 4th ed by Stuart Russell and Peter Norvig. According to the book, the complexity of uniform-cost search is as
+$$
+O(b^{1+\lfloor{C^*/\epsilon}\rfloor}),
+$$
+where $b$ is the branching factor (i.e. the number of available actions in each state), $C^*$ is the cost of the optimal solution, and $\epsilon > 0$ is a lower bound of the cost of each action.
+My question is: Why is there a 1 in the formula?
+For example, suppose in the following tree, the red node is the initial state and the green one is the goal state, and two actions are needed to reach the goal state from the initial state. If the cost of both actions is equal to $\epsilon = 1$, so, $C^*$ will be $2$. Therefore, the complexity will be $O(b^{2})$. But, from the above formula, the complexity will be $O(b^{3})$.
+
+PS. I know there is a similar question in stackoverflow and have read the answer. But there is a disagreement between the answers about the 1.
+"
+"['neural-networks', 'history', 'perceptron', 'probability-theory']"," Title: Why are today's neural networks not modeled with probability theory?Body: In the paper The Perceptron: A probabilistic model for information storage and organization in the brain, Rosenblatt used the probability theory to model his perceptron.
+My professor told me that today's neural networks are not modeled with probability theory anymore. Why is that?
+"
+"['reinforcement-learning', 'value-functions', 'function-approximation', 'weights', 'temporal-difference-methods']"," Title: In TD(0) with linear function approximation, why is the gradient of $\hat v(S^{\prime}, \mathbf w)$ wrt parameters $\mathbf w$ not considered?Body: I am reading these slides. On page 38, the update for the parameters for the linear function approximation of TD(0) is given. I have a doubt regarding this.
+The cost function (RMSE) is given on page 37.
+My doubt is: why is the gradient of $\hat v(S^{\prime}, \mathbf w)$ with respect to parameters $w$ not considered?
+I think the parameter update should be:
+$$\mathbf w \leftarrow \mathbf w +\alpha [R + \gamma \hat v(S', \mathbf w) - \hat v(S, \mathbf w)] (\nabla \hat v(S, \mathbf w)- \gamma \nabla \hat v(S', \mathbf w))$$
+Instead in the material it is given as:-
+$$\mathbf w \leftarrow \mathbf w +\alpha [R + \gamma \hat v(S', \mathbf w) - \hat v(S, \mathbf w)] \nabla \hat v(S, \mathbf w)$$
+Could someone please explain?
+"
+"['deep-learning', 'natural-language-processing', 'transformer', 'positional-encoding']"," Title: Is Positional Encoding always needed for using Transformer models correctly?Body: I am trying to make a model that uses a Transformer to see the relationship between several data vectors, but the order of the data is not relevant in this case, so I am not using the Positional Encoding.
+Since the performance of models using Transformers is quite improved with the use of this part, do you think that if I remove that part I am breaking the potential of Transformers or is it correct to do so?
+"
+"['machine-learning', 'papers', 'notation', 'gradient', 'calculus']"," Title: Which is more popular/common way of representing a gradient in AI community: as a row or column vector?Body: Consider the following remark about writing gradients from the chapter named Vector Calculus from the test book titled Mathematics for Machine Learning by Marc Peter Deisenroth et al.
+
+Remark (Gradient as a Row Vector). It is not uncommon in the
+literature to define the gradient vector as a column vector, following
+the convention that vectors are generally column vectors. The reason
+why we define the gradient vector as a row vector is twofold: First,
+we can consistently generalize the gradient to vector-valued functions
+$f: \mathbb{R}^n \rightarrow \mathbb{R}^m$ (then the gradient becomes
+a matrix). Second, we can immediately apply the multi-variate chain
+rule without paying attention to the dimension of the gradient.
+
+The above remark implicitly says that there is no standard in writing the gradients. So, I can write a gradient of a scalar-valued multivariate function either as a column vector or as a row vector.
+But, I want to know which is more common in the AI community?
+"
+"['convolutional-neural-networks', 'recurrent-neural-networks', 'time-series']"," Title: In a Temporal Convolutional Network, how is the receptive field different from the input size?Body: I'm playing around with TCN's lately and I don't understand one thing. How is the receptive field different from the input size?
+I think that the receptive field is the time window that TCN considers during the prediction, so I guess the input size shall be equal to it.
+According to the WaveNet paper, I cannot see a reason why it should be otherwise. I'm using TensorFlow with this custom library.
+Please help me understand.
+"
+"['machine-learning', 'deep-learning', 'reference-request', 'papers', 'academia']"," Title: How should I read a deep learning paper?Body: I have a background in mathematics and I am accustomed to reading papers with lemma and proofs. When I see a deep learning paper, they seem to be of practical nature.
+How can I improve my reading and understanding of deep learning papers?
+To truly understand, should I have to implement the code? What is the best approach (if any)?
+"
+"['neural-networks', 'activation-functions', 'universal-approximation-theorems']"," Title: Why does the activation function for a hidden layer in a MLP have to be non-polynomial?Body: Across multiple pieces of literature describing MLPs or while describing the universal approximation theorem, the statement is very specific on the activation function being non-polynomial.
+Is there a reason why it cannot be a higher-order polynomial? Is it just an attempt to use the least complex solution or we really cannot use higher-order polynomial?
+I can understand the reason for the non-linear, but I am clueless about the non-polynomial requirement.
+"
+"['reinforcement-learning', 'implementation', 'monte-carlo-methods', 'on-policy-methods', 'epsilon-greedy-policy']"," Title: How to code an $\epsilon$-soft policy for on-policy Monte Carlo control?Body: I was trying to code the on-policy Monte Carlo control method. The initial policy chosen needs to be an $\epsilon$-soft policy.
+Can someone tell me how to code an $\epsilon$-soft policy?
+I know how to code the $\epsilon$-greedy. In $\epsilon$-soft, there are inequalities in place of equalities which is the issue for coding the $\epsilon$-soft.
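+To make the setting concrete, the $\epsilon$-greedy special case of an $\epsilon$-soft policy (every action gets probability at least $\epsilon/|A|$) could be sketched like this; the names and values are placeholders:
+import numpy as np
+
+def epsilon_soft_probs(q_values, eps=0.1):
+    n = len(q_values)
+    probs = np.full(n, eps / n)            # every action gets at least eps/n
+    probs[np.argmax(q_values)] += 1 - eps  # the remaining mass goes to the greedy action
+    return probs
+
+q = np.array([0.2, 0.5, 0.1])              # hypothetical Q(s, a) values for one state
+action = np.random.choice(len(q), p=epsilon_soft_probs(q))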
+"
+"['machine-learning', 'reference-request', 'online-learning']"," Title: Are there any resources that introduce the basics of online machine learning?Body: Are there any resources (either books, articles, or tutorials) that introduce the basics of online machine learning?
+For example, this website has nice lecture notes (from lec16) on some of the aspects. Or this book.
+I can't seem to find many resources on this. I'm trying to understand the basics, not read research papers.
+"
+"['machine-learning', 'image-recognition', 'object-detection', 'object-recognition', 'yolo']"," Title: Multiple labels for the same rectbox?Body: My goal is to identify the horse in a photo. I'm dealing with about 500 unique horses.
+My feeling is that the best way to distinguish one horse from another is by its face. So I trained Yolov5 successfully to find faces at reasonable angles.
+I'd like to take this a step further, and teach it to identify which horse's face it sees.
+I'm new to this sort of thing (though not programming in general), so the way I assume I should approach this is to add an additional label like face_horsename, with the unique name for the horse (or really, a unique reference to a database of horses).
+Is that the right approach? It seems like the Yolo file format doesn't allow for multiple labels for the same box, so my guess is I should just make 2 rectboxes that are identical, but both point to different labels.
+Frankly, I'd like to take it even further and label the same thing with the type of "blaze" of the horse's face, and its proper name for the horse's color. So now I'm talking about 4 labels.
+Is that the right approach (duplicate boxes with unique labels)?
+"
+"['machine-learning', 'natural-language-processing', 'python', 'ai-basics', 'text-classification']"," Title: Which AI algorithm to use for identifying API for a specific use from a list of APIs?Body: We have a legacy code solution in C#. We have to change the code so that it fetches internal data via APIs and not via DB calls.
+E.g. if the current code GETS Payment object from DB, we have to replace the logic so that the code calls the GET PAYMENT API instead.
+Since there are 100s of code files and multiple DB hits in a single file, doing this manually is not feasible.
+I was thinking of building an AI-based tool that would take my code file as input and point me out where I would need to replace the existing code and suggest what API to call at that place.
+I have never worked on AI and it would be great if anyone suggests which algorithm to refer for my tool and also how should I proceed with solving the above problem.
+"
+"['neural-networks', 'deep-learning', 'feature-engineering', 'input-layer']"," Title: Is it a good practice to split sparse from dense features?Body: I have a mixture of real (float) and categorical features to use as input in a neural network. I encode the categorical features using one-hot / multi-hot encoding.
+If I want to use all the features as input what is usually/empirically the best practice:
+
+- Concatenating all the features - sparse one-hot/multi-hot vectors and float values features - in one vector which is part dense part sparse and using this as input, or
+
+- Splitting the sparse one-hot/multi-hot vectors from the dense features and using an extra separate layer for the sparse features to make them dense before concatenating them with the other already dense features.
+
+- Same as 2 but maybe using a separate layer for the dense features too so we concatenate "embeddings" instead of features and embeddings.
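+To make option 2 in the list above concrete, a minimal PyTorch-style sketch (all layer sizes are made up for illustration):
+import torch
+import torch.nn as nn
+
+class TwoBranchNet(nn.Module):
+    def __init__(self, n_sparse, n_dense):
+        super().__init__()
+        self.sparse_proj = nn.Linear(n_sparse, 32)   # densify the one-hot/multi-hot part
+        self.body = nn.Sequential(nn.Linear(32 + n_dense, 64), nn.ReLU(), nn.Linear(64, 1))
+
+    def forward(self, x_sparse, x_dense):
+        s = torch.relu(self.sparse_proj(x_sparse))
+        return self.body(torch.cat([s, x_dense], dim=1))
+
+net = TwoBranchNet(n_sparse=100, n_dense=8)
+out = net(torch.zeros(4, 100), torch.zeros(4, 8))    # batch of 4 examples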
+
+
+What, in your experience / opinion, should I do, trial-and-error aside?
+"
+"['reinforcement-learning', 'training', 'convergence', 'on-policy-methods', 'online-learning']"," Title: How to fix high variance of the returns on a 2d env?Body: I'm trying to train an agent on a self-written 2d env, and it just doesn't converge to the solution.
+
+It is basically a 2d game where you have to move a small circle around the screen and try to avoid collisions with randomly moving "enemy" circles and the edge of the screen. The positions of the enemies are initialized randomly, at a minimum distance of 2 diameters from the enemy.
+The player circle has $n$ sensors (lasers) that measure the distance and speed of the closest object found.
+The observation space is continuous and is made of concatenated distances and speeds, hence has the dimension of $\mathbb{R}^{n * 3}$.
+I scale the distances by the length of the screen diagonal.
+The action space is discrete (multidiscrete in my implementation) $[dx, dy] \in \{-1, 0, 1\}$
+The reward is +1 for every game step made without collisions.
+I use the PPO implementation from Stable Baselines, but the return variance just gets bigger over training. In addition to that, the agent hasn't run away from the enemies even once. I even tried setting a negative reward, to test whether it could learn the suicidal behavior, but that gave no results either.
+I thought maybe it's just possible for some degenerate policies like going to the corner of the screen and staying there to gain big returns, and that jeopardizes the training. Then I increased the number of enemies, thinking that it will enforce the agent to learn actually to avoid the enemies, but it didn't work as well.
+I'm really out of ideas at this point and would appreciate some help on debugging this.
+"
+"['computer-vision', 'computational-learning-theory', 'training-datasets', 'sample-complexity']"," Title: How can I estimate how many photos I need to train ResNet-50 for image classification?Body: I am working on a project where I have to classify around 1000 unique objects. I'm trying to plan how much training data I will need to collect. I was planning on using ResNet-50. Is there anyway I can estimate the amount of photos I should plan to collect ahead of time (assuming I will collect an equal amount of photos of each class)?
+"
+"['applications', 'research', 'algorithm-request', 'education']"," Title: Can AI be used for grading code copy exercises and adjust difficulty based on these scores?Body: I'm a senior in a bachelor Multimedia and Creative Technology. My experience is mostly full-stack web app development.
+For my bachelor's thesis, I need to do research in a subject I have no experience in. I want to build an application where students can exercise HTML and CSS. Teachers can upload simple code pieces (e.g. h1, h2, and list with elements) with difficulty levels and students can try to copy these exercises with a code editor on the web with live preview.
+My question:
+
+- Is it possible to use AI for grading these "copies", give the students scores, and, based on these scores, adjust the difficulty level so the next exercise is harder or easier?
+
+- And if so, could you put me in the right direction?
+
+
+"
+"['neural-networks', 'training', 'autoencoders']"," Title: Can I apply reparametrization trick on ""any"" deep neural network?Body: I came across the "reparametrization trick" for the first time in the following paragraph from the chapter named Vector Calculus from the test book titled Mathematics for Machine Learning by Marc Peter Deisenroth et al.
+
+The Jacobian determinant and variable transformations will become
+relevant ... when we transform random variables and probability
+distributions. These transformations are extremely relevant in machine
+learning in the context of training deep neural networks using the
+reparametrization trick, also called infinite perturbation analysis.
+
+The trick has been used in the context of neural network training in the quoted paragraph. But when I search for the reparametrization trick, I find it mentioned only, or mostly, in the context of training autoencoders.
+In the context of training a traditional deep neural network, is the trick useful?
+"
+"['deep-learning', 'computer-vision', 'residual-networks', 'training-datasets']"," Title: How many unique angles of an object do you need in your image training set in order to correctly classify it?Body: I'm interested in using ResNet-50 to classify images of objects for around 1000 unique classes. I'm wondering if there is any way to estimate how many unique angles I need in my training set to classify images that can be taken from any angle. For example, if for a given object I had 500 training images from directly the front and 500 training images from directly the top, I'd have 2 unique angles.
+A model trained with only those 2 unique angles probably wouldn't be able to classify the same object if it was given a photo from the top right looking down.
+Is there any way to figure out how many unique angles I would need in my training set to classify images that could be taken from any angle? If I had 12 unique angles (top, bottom, front, back, left, right, front-left, front-right, front-top, front-bottom, back-left, back-right, back-top, back-bottom) would I then be able to classify images of any arbitrary angle?
+To clarify, if I had 12 unique angles, that would mean I would have many photos from each of the 12 angles, but the 12 angles would all be exactly the same with no variation. I.E. top would be exactly a 90-degree angle towards the object on the Z-axis and 0-degree angles on the X and Y axis, for many photos.
+"
+"['machine-learning', 'optimization', 'gradient-descent']"," Title: Why is gradient descent used over the conjugate gradient method?Body: Based on some preliminary research, the conjugate gradient method is almost exactly the same as gradient descent, except the search direction must be orthogonal to the previous step.
+From what I've read, the idea tends to be that the conjugate gradient method is better than regular gradient descent, so if that's the case, why is regular gradient descent used?
+Additionally, I know algorithms such as the Powell method use the conjugate gradient method for finding minima, but I also know the Powell method is computationally expensive in finding parameter updates as it can be run on any arbitrary function without the need to find partial derivatives of the computational graph. More specifically, when gradient descent is run on a neural network, the gradient with respect to every single parameter is calculated in the backward pass, whereas the Powell method just calculates the gradient of the overall function at this step from what I understand. (See scipy's minimize, you could technically pass an entire neural network into this function and it would optimize it, but there's no world where this is faster than backpropagation)
+However, given how similar gradient descent is to the conjugate gradient method, could we not replace the gradient updates for each parameter with one that is orthogonal to its last update? Would that not be faster?
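+For concreteness, here is the textbook linear conjugate gradient recursion on a toy quadratic, next to plain gradient descent (just a sketch to illustrate the comparison, not anyone's production code):
+import numpy as np
+
+A = np.array([[4.0, 1.0], [1.0, 3.0]])     # positive-definite quadratic f(x) = 0.5 x'Ax - b'x
+b = np.array([1.0, 2.0])
+
+x = np.zeros(2)                             # conjugate gradient
+r = b - A @ x                               # residual (= negative gradient)
+d = r.copy()
+for _ in range(2):                          # converges in at most n = 2 steps
+    alpha = (r @ r) / (d @ A @ d)
+    x = x + alpha * d
+    r_new = r - alpha * (A @ d)
+    d = r_new + ((r_new @ r_new) / (r @ r)) * d
+    r = r_new
+
+x_gd = np.zeros(2)                          # plain gradient descent for comparison
+for _ in range(100):
+    x_gd = x_gd - 0.1 * (A @ x_gd - b)
+
+print(x, x_gd, np.linalg.solve(A, b))       # all close to [0.0909, 0.6364]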
+"
+"['machine-learning', 'reference-request']"," Title: Which introductory courses (preferably video lectures) could I use to learn ML for applying ML to black hole simulations?Body: I am a Ph.D. candidate in High Energy Physics and my research involves numerical simulations and data analysis. I am interested to learn Artificial Intelligence and Machine Learning from the basics so that I could implement the same in my research. Due to the huge popularity of AI & ML one could easily find a large number of articles on the internet on the topic, but it seems to me that a search for a proper course (that would teach the basics at the beginner level) is nearly impossible for me.
+It would be helpful if someone could suggest an introductory course (preferably video lectures) on machine learning, with the goal of applying the concepts to simulations of black hole environments.
+"
+"['natural-language-processing', 'tensorflow', 'hyperparameter-optimization', 'gpt', 'fine-tuning']"," Title: How to fine-tune GPT-J with small datasetBody: I have followed this guide as closely as possible: https://github.com/kingoflolz/mesh-transformer-jax
+I'm trying to fine-tune GPT-J with a small dataset of ~500 lines:
+You are important to me. <|endoftext|>
+I love spending time with you. <|endoftext|>
+You make me smile. <|endoftext|>
+feel so lucky to be your friend. <|endoftext|>
+You can always talk to me, even if it’s about something that makes you nervous or scared or sad. <|endoftext|>
+etc...
+
+Using the create_finetune_tfrecords.py script (from the repo mentioned above) outputs a file with 2 in it. I understand that means my data has 2 sequences.
+I could really use some advice with the .json config file. What hyperparameters do you recommend for this small dataset?
+The best I came up with trying to follow the guide:
+{
+ "layers": 28,
+ "d_model": 4096,
+ "n_heads": 16,
+ "n_vocab": 50400,
+ "norm": "layernorm",
+ "pe": "rotary",
+ "pe_rotary_dims": 64,
+
+ "seq": 2048,
+ "cores_per_replica": 8,
+ "per_replica_batch": 1,
+ "gradient_accumulation_steps": 2,
+
+ "warmup_steps": 1,
+ "anneal_steps": 9,
+ "lr": 1.2e-4,
+ "end_lr": 1.2e-5,
+ "weight_decay": 0.1,
+ "total_steps": 10,
+
+ "tpu_size": 8,
+
+ "bucket": "chat-app-tpu-bucket-europe",
+ "model_dir": "finetune_dir",
+
+ "train_set": "james_bond_1.train.index",
+ "val_set": {},
+
+ "eval_harness_tasks": [
+ ],
+
+ "val_batches": 2,
+ "val_every": 400000,
+ "ckpt_every": 1,
+ "keep_every": 1,
+
+ "name": "GPT3_6B_pile_rotary",
+ "wandb_project": "mesh-transformer-jax",
+ "comment": ""
+}
+
+The problem is that, when I test the fine-tuned model, I get responses that make no sense:
+
+"
+"['neural-networks', 'image-recognition', 'ai-basics', 'pattern-recognition']"," Title: For a task that searches for an image artifact within a picture, can existing tools can be used or do I need to design the process myself?Body: I am familiar only with basic AI/NN concepts but never worked with any libraries/tools as tensor flow. Currently, I have a task for which AI might be ideal: detection of a certain image artifact in a picture (lets say I want to detect a black circular spot of a variable size). Because the spot can be very small or very large, I guess the NN would have to somehow process the whole picture and then proceed in smaller regions? Anyway, for such a task, do I need to learn more about machine learning or there are already tools that I could simply train (e.g. providing "clear" and "stained" image samples in their training sets) without worrying about internal details?
+"
+"['neural-networks', 'backpropagation', 'training-datasets']"," Title: Why ""large set of training data"" is needed in Neural Network AI training?Body: I often heard people saying, "large set of training data is needed for producing an accurate AI".
+But when I looked for articles explaining backpropagation online, it all seems like you could get the job done with "one single set of input", as long as you repeat the process enough times.
+So what's the "large set of training data" for!?
+After the optimized set of weights was calculated from the first input, plug in the second input and repeat the process again?
+Won't that "screw up" the result from the first input since it was "tailored" from it?
+"
+['expert-systems']," Title: Can a system of If-Then rules be regarded as AI?Body: Expert Systems (ES) are regarded as AI. However, an ES can be as simple as a system of If-Then rules. But AI seems like a big name for a set of (possibly rather simple) If-Then rules. Is it indeed the case that certain systems of If-Then rules are regarded as AI?
+"
+"['neural-networks', 'bias']"," Title: Bias equal to 1 and neuron output equal to -1 in neural networksBody: I have read that bias in neural networks is used to avoid situation in which output of neuron is equal to 0. But what if the same output is equal to -1 and we add 1 to it? Isn't it the same issue as in case of zero output and no bias?
+"
+"['reinforcement-learning', 'deep-rl', 'dqn', 'gym', 'experience-replay']"," Title: What does the line of code ""self.buffer[-1] = observation"" do in this BufferWrapper class for DQN?Body: So the code is related to using a buffer
+import gym
+import numpy as np
+
+class BufferWrapper(gym.ObservationWrapper):
+ def __init__(self, env, n_steps, dtype=np.int):
+ super(BufferWrapper, self).__init__(env)
+ self.dtype = dtype
+ old_space = env.observation_space
+ self.observation_space = gym.spaces.Box(old_space.low.repeat(n_steps, axis=0),
+ old_space.high.repeat(n_steps, axis=0), dtype=dtype)
+
+ def reset(self):
+ self.buffer = np.zeros_like(self.observation_space.low, dtype=self.dtype)
+ return self.observation(self.env.reset())
+
+ def observation(self, observation):
+ self.buffer[:-1] = self.buffer[1:]
+ self.buffer[-1] = observation
+ return self.buffer
+
+It is used to basically do some image processing so that the DQN is fed some transformation of the image. This link provides some higher-level logic behind some operations.
+How can I actually understand the reasoning behind this code? Almost all repos have the exact same lines with no explanation (e.g. the Atari games GitHub repo).
+My specific question is: what is the purpose of the line self.buffer[-1] = observation?
+In my case, my observation is a (7*1) array, and I have to return it in an appropriate manner from the observation function.
+The book has some mention of this class but I couldn't understand much from it
+"
+"['reinforcement-learning', 'python', 'dqn']"," Title: How rewards are playing role in Deep Q NetworkBody: I have started working on Reinforcement Learning, specifically DQN. And I have watched some interesting videos on it. However, I have some doubts about how the model works.
+Let's say we are playing Atari Breakout where we have only 3 actions: left, stay still, right. We have 2 networks- the policy_net and the target_net (technically both are the same) and they give 3 outputs which are the q values for the 3 actions. During exploration we choose:
+random.randrange(3)
+
+and during exploitation, we choose:
+argmax(policy_net output)
+
+where the input of the policy net is the current state.
+Now, during each timeline of each episode, we are storing current_state, action, reward, and next_state in storage that we will later randomly shuffle and use in training, which we call experience replay memory. During training, let's say we extract a batch of (current_states, actions, rewards, next_states). Now, we get current_q_val and next_q_val as:
+current_q_values = policy_net(current_states)
+next_q_values = target_net(next_states)
+
+We use the following equation to find the loss:
+$$q_*(s,a) - q(s,a) = \text{loss}$$
+$$E[R_{t+1} + \gamma \max_{a'} q_*(s', a')] - E[\Sigma_{k=0}^ \inf \gamma^k R_{t+k+1}] = \text{loss}$$
+And for that, we find which one from the next_q_val is the best one and that we call $\max q_*(s',a')$ (we already have $q(s,a)=\text{current_q_values}$).
+Now my question is, which is $R_{t+1}$ here? As we are taking a random batch from the experience replay memory, we don't have any specific time here and we cannot calculate the $R_{t+1}$ from the time $t$. And if we simply use $q*(s,a) - q(s,a)$ or $\text{next_q_val} - \text{current_q_val}$, then where is the importance of the reward? I don't really understand how we are using the rewards in the training. I mean, where are we making sure the positive and negative influences of good and bad rewards respectively? The fact that the agent takes an action (randomly or from the policy_net) which then gives a reward, I don't understand how to use this reward in the loss function so as to influence how the agent should take action given a state.
+"
+"['machine-learning', 'reinforcement-learning', 'deep-learning', 'computer-vision', 'deep-rl']"," Title: Is it possible to train an RL agent using images?Body: I have an image which consists of a start and an end point, the journey has some obstacles which have to be avoided.
+
+- Is it possible to train an RL agent using such images to find the best path avoiding objects.
+- Or what algorithm should be used in order to find the best path avoiding objects, where the input is an image.
+
+For example a picture of a person on a field track and there are obstacles in between from the start to the end point. I want to predict the series of actions that are required to reach the final position.
+"
+"['deep-learning', 'datasets', 'applications', 'tensor']"," Title: Do any practical deep learning algorithms deal with tensors containing non-real entries?Body: In deep learning, most of the applications are from text and images. Both text and images can be converted into a tensor of real numbers.
+Other than both mentioned above, there may be some other real-world data used by Deep learning algorithms. Any algorithm in general takes tensor as input and gives tensor as output. As far as I know, tensors always consist of real numbers.
+I want to know whether there are any practical applications that deal with the tensors containing non-real numbers. Is there any such possibility?
+"
+"['datasets', 'applications']"," Title: Is there any simple example for volumetric data except from physics and medicine?Body: Recently I heard about the term volumetric data. The definition for volumetric data is as follows
+#1: Definition
+
+Volumetric data is typically a set S of samples $(x, y, z, v)$,
+representing the value v of some property of the data, at a 3D
+location $(x, y, z)$. If the value is simply a 0 or a 1, with a value
+of 0 indicating background and a value of 1 indicating the object,
+then the data is referred to as binary data. The data may instead be
+multivalued, with the value representing some measurable property of
+the data, including, for example, color, density, heat, or pressure.
+
+#2: Some more details
+
+A volumetric dataset consists of information at sample locations in
+some space. The information may be a scalar (such as density in a
+computed tomography (CT) scan), a vector (such as velocity in a flow
+field), or a higher-order tensor (such as energy, density, and
+momentum in computational fluid dynamics (CFD)). The space is usually
+3D, consisting of either three spatial dimensions or another
+combination of spatial and frequency dimensions.
+
+In simple words, I can say that volumetric data is nothing but a three-dimensional collection of tensors.
+The articles linked above contain some examples of volumetric data in the medical (and probably physics) domain.
+Are there any other simple real-world examples of volumetric data, other than those from the medical and physics domains?
+"
+"['reinforcement-learning', 'q-learning', 'value-functions', 'function-approximation', 'continuous-state-spaces']"," Title: What do we actually 'approximate' when dealing with large state spaces in Q-learning?Body: I realized that my state space is very large in size. I had planned to use tabular Q-learning (Bellman equation to update the $Q(s, a)$ after each action taken). But this 'large space' realization has now disappointed me and I read a lot of stuff on the internet. I have the following confusions.
+I saw the 'approximation' term for the 'large space' scenario (for example, in this Medium blog post). But what is it exactly? I can't reduce the states I have, nor can I club together different states and update the Q values. So, what is it I should do when they say 'approximate'? If it is the $Q(s,a)$ we approximate, then won't we do that anyway for each state $s$ as and when it is encountered? How does this help in a 'large space' scenario?
+"
+"['machine-learning', 'ai-design', 'feature-extraction', 'random-forests', 'score-prediction']"," Title: Derive information for sub-scoring from one scoring modelBody: I am currently working in Python with a random forest algorithm to perform a scoring. My output is binary.
+The idea now is to derive sub-scores from the above model that give an opinion on different topics within my dataset.
+Unfortunately, I don't have an output variable for these sub-scores in my dataset, so I have the idea of deriving the information from the feature importance of the larger model.
+Any ideas on how to do this methodically?
+As an example:
+The goal is to determine if a person is creditworthy. The dataset contains a lot of information about the person's occupation, the area where he/she lives, past payment history etc. Now I want a score for overall creditworthiness and sub-scores for
+
+- Job features
+- Features on the housing situation
+- Features on the payment history/behaviour
+
+"
+"['reinforcement-learning', 'comparison', 'value-functions']"," Title: When to use the state value function $V(s)$ and when to use the state-action value function $Q(s, a)$?Body: I saw the difference between value function $V(s)$ and $Q(s, a)$. But when do I use each one? When I coded in Matlab I only used $Q(s, a)$ directly (as I was thinking of a tabular approach). So, when is more beneficial than the other? I have a large state space.
+"
+"['neural-networks', 'generative-model', 'variational-autoencoder', 'density-estimation']"," Title: How to estimate conditional density using neural network?Body: Conditional Variational Autoencoders (CVAE) and Mixture Density Networks (MDN) are supposed to address this issue. However, these models provide the distribution parameters, e.g., mean and standard deviation, for each given sample, while I need a single underlying distribution that the given data is generated from.
+To put it simply, I would like to find the parameters of a normal distribution that estimates $P(Y \mid X)$, given $X$ and $Y$. Let's imagine the given data, $X$, have an $(n, m)$ dimension, where $n$ and $m$ indicate the number of samples, and features, respectively.
+Using CVAE/MDN I get the parameters with dimension $(n, d)$, where $d$ is the dimension of the parameters. But I am looking for a model to provide parameters with dimensions $(1, d)$.
+"
+"['neural-networks', 'q-learning', 'dqn', 'function-approximation']"," Title: Alternatives to neural networks for function approximation in Q learning?Body: I want to know if there is anything other than neural networks (or Deep NNs) that I can effectively use to perform function approximation? I am asking this w.r.t to the use of approximators in Q learning with large state space.
+"
+"['reinforcement-learning', 'deep-rl', 'q-learning', 'dqn']"," Title: Can Q-learning be used for my scenario, and how might I do so?Body: I have already asked 2-3 general questions w.r.t Q learning and now I am asking a scenario specific one. I will try to be concise and understandable. I really really need help.
+Scenario: I have a network with few nodes and links. On each link, there are some slots (#1 to #800). I generate traffic requests (come one by one) that want to go from one node to another and need certain slots to do so. So, my task is to allot the slots to each upcoming request and finally achieve a low rejection probability i.e. able to allot slots to as many requests as possible. The allotted slots are also freed up depending on when the arrived requests leave the system. I use the Poisson process to do, but this is not important here.
+What I thought: There have been certain simple benchmarks to do this but I wanted to use Q learning so that in the long run the agent (a centralized controller) takes better decisions as to which slots to assign on the particular link i.e. which slots position (#1 to #800).
+What I did: I decided to take the state space as the links (say 10 links in my case) and the action space as the #1 to #800 slots. I use binary notation 1 to say slot is occupied or 0 that is free.
+Problem encountered: But it is long later I realized that my state space is infinitely big. For E.g. For request 1, I give two slots on Link1 & state is 1 1 0 0 0 .... up to 800 zeros. Another request comes (say 3 slots) and say Link1 state is now 1 1 1 1 1 0 0 0...up to 800 zeros. This is when I realized that the state space is unimaginably large as departures can also occur leading to freeing up and some 1s becoming 0s and so on.
+What I am asking for: So, does anyone have any ideas on how can I still use Q learning in this case. The point is that someone already used deep Q on this. I was thinking I am approaching it in a different and simplified way of just using link state as state space that would enable me to have a small Q table. But it is later I realized that each link state will vary every time and lead to large state space thus putting me back to square one after investing a lot of time on this. So, please give any suggestions as I don't want to leave it altogether.
+"
+"['convolutional-neural-networks', 'datasets', 'inception', 'fid-score']"," Title: Does the Frechet Inception Distance (FID) consider color?Body: I was wondering if the Frechet inception distance
for two colored datasets would be the same as the FID calculated for the same datasets converted to grayscale.
+I know that it depends on the feature extraction, which is the Inception network. And that is the question: I don't fully understand the role of color in CNNs. I thought that all color information is lost in the feature extraction, and in that case grayscale and colored datasets would produce the same FID. But I am not sure about that.
+"
+"['deep-learning', 'objective-functions', 'adversarial-ml']"," Title: Where do the objective functions proposed in this paper by Carlini-Wagner attack come from?Body: I'm trying to understand the paper by Carlini and Wagner on deep neural networks adversarial attacks. On page 44, in Section V-A, it is explained how the loss function to the described problem was designed. One part of this loss function is called "objective function".
+There are 7 such functions proposed on which experiments are made, but there is no information provided on where they came from and why those are chosen.
+Is this some commonly known thing in the Deep Learning area? Do you recognize them and can tell me where they came from and why they were chosen?
+\begin{align}
+f_{1}\left(x^{\prime}\right) &= -\operatorname{loss}_{F, t}\left(x^{\prime}\right)+1 \\
+f_{2}\left(x^{\prime}\right) &= \left(\max _{i \neq t}\left(F\left(x^{\prime}\right)_{i}\right)-F\left(x^{\prime}\right)_{t}\right)^{+} \\
+f_{3}\left(x^{\prime}\right) &= \operatorname{softplus}\left(\max _{i \neq t}\left(F\left(x^{\prime}\right)_{i}\right)-F\left(x^{\prime}\right)_{t}\right)-\log (2) \\
+f_{4}\left(x^{\prime}\right) &= \left(0.5-F\left(x^{\prime}\right)_{t}\right)^{+} \\
+f_{5}\left(x^{\prime}\right) &= -\log \left(2 F\left(x^{\prime}\right)_{t}-2\right) \\
+f_{6}\left(x^{\prime}\right) &= \left(\max _{i \neq t}\left(Z\left(x^{\prime}\right)_{i}\right)-Z\left(x^{\prime}\right)_{t}\right)^{+} \\
+f_{7}\left(x^{\prime}\right)&= \operatorname{softplus}\left(\max _{i \neq t}\left(Z\left(x^{\prime}\right)_{i}\right)-Z\left(x^{\prime}\right)_{t}\right)-\log (2)
+\end{align}
+"
+"['machine-learning', 'terminology', 'gpt', 'language-model', 'gpt-3']"," Title: What is the ""temperature"" in the GPT models?Body: What does the temperature parameter mean when talking about the GPT models?
+I know that a higher temperature value means more randomness, but I want to know how randomness is introduced.
+Does temperature mean we add noise to the weights/activations or do we add randomness when choosing a token in the softmax layer?
+"
+"['machine-learning', 'deep-learning', 'computer-vision', 'python', 'opencv']"," Title: Feature Extraction for printer classificationBody: I need some advice. I am currently trying to do a printer classification with ML/DL.
+What do I have?
+11 colored-images with high resolution from 8 different inkjet-printers (in total 88 images)
+I have 8 classes (printers)
+All images are scanned at 2,400 dpi, so you are able to see the halftone of the images and the matrix dots
+I know each printer is different in terms of the size of the matrix dots, dot pattern, etc.
+Based on that I need to do a feature extraction and train a ML model which can classify the correct printer. There is a previous work which has been done with Wavelet-Transformation for feature extraction and SVM for classification. The goal now is to find another approach of feature extraction and training.
+My question here is, what do you think is the best solution?
+My idea is:
+Isolate the dots into binary color (black/white)
+Do edge detection with OpenCV (using filters like Sobel, Canny, etc.); a rough sketch is below
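+A minimal sketch of that idea (binarize, then edge-detect), assuming OpenCV is installed and that 'scan.png' stands in for one of the scanned images:
+import cv2
+
+# Load one scanned patch in grayscale (the path is a placeholder)
+img = cv2.imread('scan.png', cv2.IMREAD_GRAYSCALE)
+
+# Isolate the ink dots as a binary (black/white) image via Otsu thresholding
+_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
+
+# Edge detection on the binarized dots (Canny here; Sobel would be an alternative)
+edges = cv2.Canny(binary, 100, 200)
+cv2.imwrite('edges.png', edges)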
+But I am not sure if this is a good approach. After reading a lot of papers on related work I found out many used Transfer Learning (e.g. VGG, Resnet) where features are learned in the training process.
+So basically I have images, and when you zoom in you see that the pattern is different for each printer. So instead of doing a Wavelet transformation I need to find another approach.
+In the literature, common feature extractors for this are Gray-Level Co-Occurrence matrices, Wavelet transformations, and spatial filters, which are then used with an SVM or AdaBoost. Another approach is, as said above, a pre-trained CNN (transfer learning).
+So, what do you think I should tackle next?
+
+"
+"['reinforcement-learning', 'rewards', 'ddpg', 'reward-functions']"," Title: Is it a bad practice to use cumulative rewards in reinforcement learningBody: I am using a DDPG agent for doing prediction on the position on an asset in a stock trading-like environment. I am using the cumulative reward as the reward for each timestep. Since it is trained over many years of data, the reward tends to become large. I have realized that after some training the agent becomes lazy, it just keeps the same position.
+Is it a bad practice to use cumulative rewards as rewards? Would the daily revenue be a better reward for the agent?
+"
+"['neural-networks', 'natural-language-processing', 'data-preprocessing', 'word-embedding', 'data-labelling']"," Title: General approaches in text encoding and labelling for NLPBody: What are the approaches of encoding text data? I would be glad to hear some summarization from experienced persons.
+And are there any solutions that accept words outside the vocabulary and include them in the results (online machine learning)?
+Data input
+So my basic understanding is that if we want to predict some value (linear regression) or say what the probability of some event occurring is (logistic regression), we have to gather some features as our input and encode them as numbers. But this is not necessarily straightforward when working with sequential data like sentences.
+The most naive approach which comes to my mind is just to assign a natural number to each word in the vocabulary. But this number does not contain any meaningful data about the word itself. On the other hand, what seems to be important in NLP is the order of the words. This is where I think about n-grams, so we feed the network with more than just one word. Or attention, like in the Transformer.
+Another idea which comes to my mind is to vectorize the word using one of the word-embedding techniques. Here we have some context about the word, so the input is not just a dumb number. But does it have any value when we want to predict the next word? Can word embeddings be used in that way, or is their purpose completely different?
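+To make the contrast concrete, here is a minimal PyTorch sketch (with a made-up three-word vocabulary) of a word as a bare integer id versus as a learned embedding vector:
+import torch
+import torch.nn as nn
+
+vocab = {'the': 0, 'cat': 1, 'sat': 2}   # toy vocabulary
+word_id = torch.tensor([vocab['cat']])   # the "dumb number" encoding
+
+# A learned embedding turns the id into a dense vector that can carry meaning
+embedding = nn.Embedding(num_embeddings=len(vocab), embedding_dim=4)
+word_vector = embedding(word_id)         # shape: (1, 4)
+print(word_vector)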
+Last thing I was reading of was to encode characters rather than words but it feels pointless in such basic example as next word prediction. I would think about it more for sub-word tasks like inflection generating.
+Labelling
+Again, based on my knowledge, when we want to solve a yes/no problem we use the sigmoid function. If we have more classes we can use one-hot encoded targets. But sometimes the output of the network might be ambiguous, so we use the softmax function so that all outputs sum to 1.
+How does this look in the NLP area? When having a vocabulary consisting of 600k words, do we really need 600k softmaxed outputs? I'm also thinking about word-embedding solutions where we can reduce the number of outputs to, let's say, 300 numbers and then find the closest matching word without using softmax.
+"
+"['machine-learning', 'objective-functions', 'unsupervised-learning', 'performance', 'accuracy']"," Title: Test accuracy decreases during my train processBody:
+I want to train a neural network model with the ArcFace loss function and try to combine it with domain adaptation. But as training continues, I find that the test accuracy first increases and then decreases, and the model cannot reach convergence. I chose the Office-31 dataset, and the feature extractor was ResNet-50.
+I want to know if this is caused by my code or by my loss function.
+The arcface function was set as
+import torch
+
+def Arc_pred(cosine, s=64.0, m=0.1):
+    # s: scale factor, m: angular margin
+    cosine = cosine / s
+    thea = torch.acos(cosine)                       # convert to angles
+    top = torch.exp(torch.cos(thea + m) * s)        # numerator: margin m added to the angle
+    _top = torch.exp(torch.cos(thea) * s)           # same term without the margin
+    bottom = torch.sum(torch.exp(cosine * s), dim=1).view(-1, 1)  # sum over classes
+    divide = (top / (bottom - _top + top)) + 1e-10  # margin-adjusted probability of the target class
+    return divide
+
+and my total loss function was set as
+total_loss = 0.1*target_entropy_loss + label_loss + arc_loss + discriminator_loss
+
+In that, target_entropy_loss tries to make the decision boundary cross the sparsest sample area, label_loss is the classification loss, and discriminator_loss is a domain-adaptation loss.
+I tried to set a learning-rate schedule for my experiment, but it did not seem to help. So, could the problem be caused by my loss function?
+"
+['expert-systems']," Title: How to build an expert system similar to ES-Builder WebBody: I made a simple expert system using ES-Builder. Please click the link to view it. ES-Builder is a web-based expert system shell. There is a tree-based knowledge representation. In ES builder, User Interfaces are also automatically designed. They generate a link as I have shared above and anyone can access it and can use it.
+But when I try other ES shells such as JESS, CLIPS & PyKE, I only noticed that there we have to write facts and rules and the program is run on command line upon consulting the Expert System. There is no UI like in ES-Builder.
+My question is, is there any way to build UI to the expert systems created by CLIPS/JESS? Or else should I create a web application using another framework like Spring, DotNet, and integrate it with the knowledge base created with CLIPS/JESS?
+(I am a bit confused, because according to what I have learned: if we use an expert system shell then we need not program it using languages (such as Prolog), because the user interface and inference engine are already there. What remains for us to do is just to build the knowledge base, similar to how the UI is auto-built in ES-Builder.)
+Thank you very much for the support! If the question is confusing, I am happy to modify it in a more understandable manner.
+"
+"['neural-networks', 'probability', 'softmax', 'multiclass-classification']"," Title: Use soft-max post-training for a ReLU trained network?Body: For a project, I've trained multiple networks for multiclass classification all ending with a ReLU activation at the output.
+Now the output logits are not probabilities.
+Is it valid to get the probability of each class by applying a softmax function at the end, after training?
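+For concreteness, this is the kind of post-hoc conversion I mean (a minimal NumPy sketch; the logits values are made up):
+import numpy as np
+
+logits = np.array([3.2, 0.1, 1.7])      # ReLU outputs from the trained network
+probs = np.exp(logits - logits.max())   # numerically stable softmax applied after training
+probs /= probs.sum()
+print(probs, probs.sum())               # sums to 1, but are these valid class probabilities?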
+"
+"['neural-networks', 'machine-learning', 'computer-vision', 'algorithm-request']"," Title: Which algorithms are used to locate objects in a 3d space?Body: I can see mobile apps that can locate a 3D object on a surface with a mobile camera and you can turn around that object.
+What is the name of the algorithm(s) used for that purpose? Is there AI in these algorithms? Do they use plane detection? After detection, which algorithm do they use to locate the objects?
+It seems like a computer vision problem, but I do not know what it is called.
+
+"
+"['reinforcement-learning', 'training', 'dqn', 'proximal-policy-optimization']"," Title: Do we use validation and test sets for training a reinforcement learning agent?Body: I am pretty new to reinforcement learning and was working with some code for the PPO and DQN algorithms. After looking at the code, I noticed that the authors did not include any code to setup a validation or testing dataloader. In most other machine learning
+training loops, we generally include a validation and testing dataset to assure that the model is not overfitting the training data. However, in reinforcement learning the data is all simulated from the same environment, so perhaps the overfitting issue is not such a big deal?
+Anyhow, could someone please indicate whether it is standard practice to only use a training dataset or dataloader for reinforcement learning, and to ignore validation or testing datasets?
+"
+"['natural-language-processing', 'terminology', 'transformer', 'machine-translation', 'inference']"," Title: What does it mean to apply decomposition at inference-time in a machine translation system?Body: I'm reading this paper for sub-character decomposition for logographic languages and the authors mention decomposition at inference-time. They're using Transformer architecture.
+More specifically, the authors write:
+
+We propose a flexible inference-time sub-character decomposition procedure which targets unseen characters, and show that it aids adequacy and reduces misleading overtranslation in unseen character translation.
+
+What do inference-time and inference-only decomposition mean in this context? My best guess would be that inference-time would be at some point during the decoding process, but I'm not 100% clear on whether that's the case and, if so, when exactly.
+I'm going to keep digging and update if I find something helpful. In the meantime, if anyone needs more context just let me know.
+"
+"['machine-learning', 'deep-learning', 'natural-language-processing', 'named-entity-recognition', 'spacy']"," Title: How much labelling is required for NER with SpaCy?Body: I have transaction data and I would like to extract the merchant from the transaction description. I am new to this but I just came across Named Entity Recognition and SpaCy. I have hundreds of thousands of different merchants.
+Some questions that I have:
+
+- How much labelling do I need to do given the number of merchants I need to extract?
+
+- How many different instances of the same merchant do I need to label to get decent results?
+
+
+"
+"['convolutional-neural-networks', 'tensorflow', 'image-recognition']"," Title: Why am I getting a very small number as CNN prediction?Body: I created a CNN using Tensorflow to identify pneumonia and sometimes it returns a very small number as a prediction. why is this happening?
+I have attached the link for the dataset
+Here I how I process and load the data.
+from tensorflow.keras.preprocessing.image import ImageDataGenerator
+
+train_datagen = ImageDataGenerator( rescale = 1.0/255. )
+val_datagen = ImageDataGenerator( rescale = 1.0/255. )
+test_datagen = ImageDataGenerator( rescale = 1.0/255. )
+
+train_generator = train_datagen.flow_from_directory('/kaggle/input/chest-xray-pneumonia/chest_xray/chest_xray/train/',
+ batch_size=20,
+ class_mode='binary',
+ target_size=(350, 350))
+
+validation_generator = val_datagen.flow_from_directory('/kaggle/input/chest-xray-pneumonia/chest_xray/chest_xray/val/',
+ batch_size=20,
+ class_mode = 'binary',
+ target_size = (350, 350))
+test_generator = test_datagen.flow_from_directory('/kaggle/input/chest-xray-pneumonia/chest_xray/chest_xray/test/',
+ batch_size=20,
+ class_mode = 'binary',
+ target_size = (350, 350))
+
+And here are the model, compile, and fit functions:
+import tensorflow as tf
+
+model = tf.keras.models.Sequential([
+ # Note the input shape is the desired size of the image 150x150 with 3 bytes color
+ tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(350, 350, 3)),
+ tf.keras.layers.MaxPooling2D(2,2),
+ tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
+ tf.keras.layers.MaxPooling2D(2,2),
+ tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
+ tf.keras.layers.MaxPooling2D(2,2),
+ # Flatten the results to feed into a DNN
+ tf.keras.layers.Flatten(),
+ # 512 neuron hidden layer
+ tf.keras.layers.Dense(1024, activation='relu'),
+ # Only 1 output neuron. It will contain a value from 0-1 where 0 for 1 class ('cats') and 1 for the other ('dogs')
+ tf.keras.layers.Dense(1, activation='sigmoid')
+])
+
+compile the model
+from tensorflow.keras.optimizers import RMSprop
+
+model.compile(optimizer=RMSprop(learning_rate=0.001),
+ loss='binary_crossentropy',
+ metrics = ['accuracy'])
+
+model fit
+history = model.fit(train_generator,
+ validation_data=validation_generator,
+ steps_per_epoch=200,
+ epochs=2000,
+ validation_steps=200,
+ callbacks=[callbacks],
+ verbose=2)
+
+The evaluation metrics are as follows:
+loss: 0.2351 - accuracy: 0.9847
+The prediction shows a big negative number for negative pneumonia, and for positive it shows more than .50.
+I have two questions:
+
+- Why am I getting a very small number such as 2.xxxx * 10e-20?
+
+- Why do the following values come back as null?
+val_acc = history.history[ 'val_accuracy' ]
+val_loss = history.history['val_loss' ]
+
+
+I am still a student in machine learning, and I really appreciate your effort in answering my question.
+"
+"['neural-networks', 'long-short-term-memory', 'activation-functions', 'sigmoid', 'tanh']"," Title: Why is there tanh(x)*sigmoid(x) in a LSTM cell?Body: CONTEXT
+I was wondering why there are sigmoid and tanh activation functions in an LSTM cell.
+
+My intuition was based on the flow of tanh(x)*sigmoid(x)
+
+and the derivative of tanh(x)*sigmoid(x)
+
+It seems to me that the authors wanted to choose a combination of functions such that the derivative allows big changes around 0, since we can use normalized data and weights. Another convenient property is that the output goes to 1 for positive values and to 0 for negative values.
+On the other hand, it seems natural that we use a sigmoid in the forget gate, since we want to have a better focus on the important data. I just don't understand why there cannot be only a sigmoid function in the input gate.
+OTHER SOURCES
+What I found on the web is this article where the author claims:
+
+To overcome the vanishing gradient problem, we need a method whose second derivative can sustain for a long range before going to zero. Tanh is a good function that has all the above properties.
+
+However, he doesn't explain why this is the case.
+Also, I found the opposite statement here, where the author says that the second derivative of the activation function should go to zero, however, there is no proof for that claim.
+QUESTION
+Summing up:
+
+- Why can't we put a signal with just a sigmoid on the input gate?
+- Why are there tanh(x)*sigmoid(x) signals in the input and output gates?
+
+"
+"['neural-networks', 'reinforcement-learning', 'time-complexity']"," Title: What is the time complexity of DDPG algorithm?Body: Suppose we have a DDPG algorithm.
+The actor has N input nodes, two hidden layers with J nodes, and S output nodes.
+The critic has N+S input nodes, two hidden layers with C nodes, and one output node.
+How can the time complexity of this algorithm be calculated?
+"
+"['weights', 'multilayer-perceptrons', 'gradient']"," Title: Rank of gradient-of-loss with respect to layer weights in an MLPBody: The paper: https://arxiv.org/abs/2110.11309, makes the following claim at the end of page 3:
+
+The gradient of loss $L$ with respect to weights $W_l$ of an MLP is a rank-1 matrix for each of B batch elements $\nabla_{w_l}L = \sum_{i=1}^B \delta_{l+1}^i {u_l^i}^T$, where $\delta_{l+1}^i$ is the gradient of the loss for batch element $i$ with respect to the preactivations at layer $l + 1$, and ${u_l^i}^T$ are the inputs to layer $l$ for batch element i.
+
+Suppose that we have an MLP with $k$ hidden layers (every hidden layer is followed by an activation function). Then the weight matrices will be $W_1, W_2, \dots, W_k$ (plus the biases, but they are irrelevant for now), and their sizes will be $(D_1, D), (D_2, D_1), \dots (D_k, D_{k-1})$ correspondingly, where $D$ is the number of input features.
+Therefore, hidden layer $l$ has a weight matrix $W_l$ of size $(D_l, D_{l-1})$. Its gradient wrt the loss (for 1 batch element), $\frac{\partial L}{\partial W_l}$, will also be a matrix of size $(D_l, D_{l-1})$.
+So if I understand correctly, the authors of the paper are claiming that $\frac{\partial L}{\partial W_l}$ is a rank-1 matrix for a single batch element? That is, every row (or column) is a scalar multiple of a single row (or column)? If yes, why? How?
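+As a side note, here is a minimal PyTorch sketch (with made-up layer sizes) of how the claim could be checked numerically for a single batch element:
+import torch
+
+x = torch.randn(1, 5)                      # one batch element, 5 input features
+layer = torch.nn.Linear(5, 7)              # weight matrix of shape (7, 5)
+loss = layer(x).pow(2).sum()               # any scalar loss built on this layer's output
+loss.backward()
+
+print(torch.linalg.matrix_rank(layer.weight.grad))  # prints 1: the gradient is an outer product of two vectors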
+"
+"['optimization', 'genetic-algorithms']"," Title: Are Genetic Algorithms suitable for a problem with a non-unique optimal solution?Body: I was wondering if a genetic algorithm is useful if the optimization problem has several optimal solutions.
+My thought was that I should not use it, since when combining two members of the population that have good fitness but are close to different optimal solutions, the offspring may land far from both optima and end up with poor fitness.
+Is this thinking wrong? If so, why?
+"
+"['neural-networks', 'deep-learning', 'deep-neural-networks', 'weights', 'weight-normalization']"," Title: Are there neural networks with (hard) constraints on the weights?Body: I don't know too much about Deep Learning, so my question might be silly. However, I was wondering whether there are NN architectures with some hard constraints on the weights of some layers. For example, let $(W^k_{ij})_{ij}$ be the weights of the (dense) $k$-th layer.
+Are there architectures where something like
+$$
+\sum_{i, j} (W^k_{ij})^2 = 1
+$$
+is imposed (namely, the rolled-out vector of weights is constrained to stay on a sphere), or where the $W^k_{ij}$ are equivalence classes $\bmod K$ for some number $K>0$?
+Then, of course, one should probably think about proper activation functions for these cases, but it's probably not a big obstacle.
+Putting constraints of these kinds would prevent the weights from growing indefinitely and could maybe prevent over-fitting?
+"
+"['reinforcement-learning', 'deep-learning', 'deep-rl', 'policy-gradients', 'reinforce']"," Title: FrozenLake-v0 not training using REINFORCEBody: I am implementing a simple REINFORCE (policy gradient) algorithm for openAI's FrozenLake-v0 environment. However, it does not seem to learn anything at all.
+I have used the same neural architecture for OpenAI's CartPole-v0 and trained it using REINFORCE (policy gradient), and it works perfectly. So, what am I doing incorrectly for the FrozenLake-v0 environment? I think this has to do with the nature of the environment, but I am unsure which aspects of training REINFORCE must be altered to accommodate the dynamics of FrozenLake-v0. It seems like a very simple environment to solve, given that it has only 16 states.
+My code is as follows:
+import gym
+from gym.envs.registration import register
+import numpy as np
+import torch
+import torch.nn as nn
+import torch.optim as optim
+import torch.nn.functional as F
+from torch.autograd import Variable
+import matplotlib.pyplot as plt
+
+
+# helper function for conversion of a state into an input to a neural network
+def OH(x, n):
+ '''
+ :param x: state id
+ :param n: n_states
+ :return: 1-hot encoded numpy array of size [1,n]
+ '''
+ one_hot = np.zeros((n,))
+ one_hot[x] = 1
+ return one_hot
+
+
+
+def running_mean(x, n):
+ N=n
+ kernel = np.ones(N)
+ conv_len = x.shape[0]-N
+ y = np.zeros(conv_len)
+ for i in range(conv_len):
+ y[i] = kernel @ x[i:i+N]
+ y[i] /= N
+ return y
+
+
+# architecture of the Policy Network
+class PolicyNetwork(nn.Module):
+ def __init__(self, state_dim, n_actions):
+ super().__init__()
+ self.n_actions = n_actions
+ self.model = nn.Sequential(
+ nn.Linear(state_dim, 256),
+ nn.ReLU(),
+ nn.Linear(256, n_actions),
+ nn.Softmax(dim=0)
+ ).float()
+
+ def forward(self, X):
+ return self.model(X)
+
+
+def train_reinforce_agent(env, episode_length = 100, max_episodes = 50000, gamma = 0.99, visualize_step = 50, learning_rate=0.003):
+
+ # define the parametric model for the Policy: this is an instantiation of the PolicyNetwork class
+ model = PolicyNetwork(env.observation_space.shape[0], env.action_space.n)
+ # define the optimizer for updating the weights of the Policy Network
+ optimizer = optim.Adam(model.parameters(), lr=learning_rate)
+
+
+ # hyperparameters of the reinforce agent
+ EPISODE_LENGTH = episode_length
+ MAX_EPISODES = max_episodes
+ GAMMA = gamma
+ VISUALIZE_STEP = max(1, visualize_step)
+ score = []
+
+
+
+ for episode in range(MAX_EPISODES):
+ # reset the environment
+ curr_state = env.reset()
+ done = False
+ transitions = []
+
+ # rollout an entire episode from the Policy Network
+ for t in range(EPISODE_LENGTH):
+ act_prob = model(torch.from_numpy(curr_state).float())
+ action = np.random.choice(np.array(list(range(env.action_space.n))), p=act_prob.data.numpy())
+ prev_state = curr_state
+ curr_state, _, done, info = env.step(action)
+ transitions.append((prev_state, action, t+1))
+
+ if done:
+ break
+ score.append(len(transitions))
+ reward_batch = torch.Tensor([r for (s, a, r) in transitions]).flip(dims=(0,))
+
+
+ # compute the return for every state-action pair from the rewards at every time-step
+ batch_Gvals = []
+ for i in range(len(transitions)):
+ new_Gval = 0
+ power = 0
+ for j in range(i, len(transitions)):
+ new_Gval = new_Gval + ((GAMMA ** power) * reward_batch[j]).numpy()
+ power += 1
+ batch_Gvals.append(new_Gval)
+
+ # normalize the returns for the batch
+ expected_returns_batch = torch.FloatTensor(batch_Gvals)
+ expected_returns_batch /= expected_returns_batch.max()
+
+ # batch the states, actions, prob after the episode
+ state_batch = torch.Tensor([s for (s, a, r) in transitions])
+ action_batch = torch.Tensor([a for (s, a, r) in transitions])
+ pred_batch = model(state_batch)
+ prob_batch = pred_batch.gather(dim=1, index=action_batch.long().view(-1, 1)).squeeze()
+
+
+ # compute the loss for one episode
+ loss = -torch.sum(torch.log(prob_batch) * expected_returns_batch)
+
+ # back-propagate the loss
+ optimizer.zero_grad()
+ loss.backward()
+ # update the parameters of the Policy Network
+ optimizer.step()
+
+ # print the status after every VISUALIZE_STEP episodes
+ if episode % VISUALIZE_STEP == 0 and episode > 0:
+ print('Episode {}\tAverage Score: {:.2f}'.format(episode, np.mean(score[-VISUALIZE_STEP:-1])))
+
+
+ # Training plot: Episodic reward over Training Episodes
+ score = np.array(score)
+ avg_score = running_mean(score, visualize_step)
+ plt.figure(figsize=(15, 7))
+ plt.ylabel("Episode Duration", fontsize=12)
+ plt.xlabel("Training Episodes", fontsize=12)
+ plt.plot(score, color='gray', linewidth=1)
+ plt.plot(avg_score, color='blue', linewidth=3)
+ plt.scatter(np.arange(score.shape[0]), score, color='green', linewidth=0.3)
+ plt.show()
+
+"
+"['reinforcement-learning', 'deep-learning', 'tensorflow', 'deep-rl', 'tensorflow-probability']"," Title: tfp.Distributions.Categorical.sample() is picking the same action everytime after certain iterationsBody: I have written a code for an RL agent such that at each state the model calculates the probabilities of all possible actions and samples one action randomly to proceed further. To acheive this, I have written the following code
+act_prob_dist = tfp.distributions.Categorical(probs=act_probs)
+action = act_prob_dist.sample()
+
+It is working fine in the initial stages of training. Once the model has learnt a particular state really well and the probability of one particular action has increased significantly above the others, the sample() call picks the same action every time. For example, when the action probabilities of a particular state are
+tf.Tensor(
+[[0.05213022 0.06613996 0.4933109 0.02918373 0.04188393 0.04100212
+ 0.03228914 0.00716161 0.08877521 0.02158365 0.04645196 0.07092285
+ 0.00916469]], shape=(1, 13), dtype=float32)
+
+The model is sampling an action randomly. After decent iterations of learning the action probabilities became
+tf.Tensor(
+[[1.12852089e-12 1.54888698e-06 6.40413802e-08 1.03480375e-11
+ 2.05246806e-08 2.17290430e-09 1.04494591e-09 5.20959872e-11
+ 9.99995708e-01 1.26053008e-08 6.85156265e-10 1.70332885e-06
+ 9.99039457e-07]], shape=(1, 13), dtype=float32)
+
+The model started picking the index with the highest probability every time (index 8 in this case). The documentation reads "Generate samples of the specified shape." I'm assuming this implies the choice happens randomly. Can someone please explain why the same action is being chosen in my case?
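+For what it's worth, here is a minimal NumPy sketch (using the probabilities printed above) of how rarely a true random sampler should pick anything other than the dominant index:
+import numpy as np
+
+probs = np.array([1.12852089e-12, 1.54888698e-06, 6.40413802e-08, 1.03480375e-11,
+                  2.05246806e-08, 2.17290430e-09, 1.04494591e-09, 5.20959872e-11,
+                  9.99995708e-01, 1.26053008e-08, 6.85156265e-10, 1.70332885e-06,
+                  9.99039457e-07])
+probs = probs / probs.sum()                                    # renormalize for float round-off
+print('P(not index 8) =', 1.0 - probs[8])                      # roughly 4e-06
+samples = np.random.choice(len(probs), size=100000, p=probs)
+print('non-8 draws out of 100000:', np.sum(samples != 8))      # usually 0 or 1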
+PS: tf.version is returning tensorflow._api.v2.version
+"
+"['neural-networks', 'machine-learning', 'convolutional-neural-networks', 'data-preprocessing', 'feature-engineering']"," Title: Are derived or computed inputs bad for CNNs?Body: I am building a CNN and am wondering if inputting derived or computed inputs are generally bad for the effectiveness of CNNs? Or just NNs in general?
+By derived or computed values I mean data that is not "raw" and instead is computed based on the raw data. For example, in a very simple form, using time-series data as the "raw" data and computing a 30 day SMA as a "derived/computed" value, and as another input.
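+To make the example concrete, here is a minimal pandas sketch (the 'price' column is a made-up stand-in for my raw series) of deriving a 30-day SMA and keeping it as an extra input feature:
+import pandas as pd
+
+df = pd.DataFrame({'price': range(100)})                  # stand-in for the raw time series
+df['sma_30'] = df['price'].rolling(window=30).mean()      # the derived/computed input
+features = df.dropna()[['price', 'sma_30']].to_numpy()    # raw + derived columns fed to the model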
+Is this a bad practice for boosting the effectiveness of the network? If it is not a bad practice, are there any tips on what kinds of computed values one should consider when adding new inputs?
+The goal of my NN is to make predictions on time-series data.
+"
+"['machine-learning', 'natural-language-processing', 'transformer', 'architecture', 'sequence-modeling']"," Title: Why do Transformers have a sequence limit at inference time?Body: As far as I understand, Transformer's time complexity increases quadratically with respect to the sequence length. As a result, during training to make training feasible, a maximum sequence limit is set, and to allow batching, all sequences smaller are padded.
+However, after a Transformer is trained, and we want to run it on a single sequence at inference time, the computational costs are far less than training. Thus, it seems reasonable that I would want to run the transformer on a larger input sequence length during inference time. From a technical perspective, this should be feasible.
+I keep reading online that a Transformer cannot be run on a sequence size larger than the one seen during training. Why is this? Is it because the network weights will be unfamiliar with sequences of this length? Or is it more fundamental?
+"
+"['deep-learning', 'convolutional-neural-networks', 'terminology', 'image-processing', 'stride']"," Title: What is the stride information of an image referring here?Body: In convolutional neural networks, the convolution and pooling operations have a parameter known as stride, which decides the amount of jump the kernel needs to do on the input image. You can get more information regarding stride from follows taken from here
+
+Stride is the number of pixels shifts over the input matrix. When the
+stride is 1 then we move the filters to 1 pixel at a time. When the
+stride is 2 then we move the filters to 2 pixels at a time and so on.
+
+But, I am not getting what does it mean by stride information of an image at the tensor level. Consider the following paragraph from the chapter named Real-world data representation using tensors from the textbook titled Deep Learning with PyTorch by Eli Stevens et al.
+
+img = torch.from_numpy(img_arr)
+out = img.permute(2, 0, 1)
+
+We’ve seen this previously, but note that this operation does not make
+a copy of the tensor data. Instead, out
uses the same underlying
+storage as img
and only plays with the size and stride information
+at the tensor level. This is convenient because the operation is very
+cheap; but just as a heads-up: changing a pixel in img
will lead to
+a change in out
.
+
+It is mentioning about the stride information at the tensor level of an image. Do they mean the strides that are related to the CNN, pooling, etc., or are they referring to any other stride information?
+"
+"['deep-learning', 'convolutional-neural-networks', 'generative-model', 'batch-normalization']"," Title: Why batch normalization before upsampling is giving worse results?Body: I am training a model to generate images.
+The model contains 5+5 layers:
+Conv2D -> Upsample -> Conv2D -> Upsample -> Conv2D -> Upsample -> Conv2D -> Upsample -> Conv2D -> Upsample
+
+I am modifying it as
+Conv2D -> BatchNorm -> Upsample -> Conv2D -> BatchNorm -> Upsample -> Conv2D -> BatchNorm -> Upsample -> Conv2D -> BatchNorm -> Upsample -> Conv2D -> BatchNorm -> Upsample
+
+I am applying the batch normalization layers just before upsampling as shown above and hence I am not getting the results that are at least comparable to the results by the model without any batch normalization layer.
+Is my placement of the batch normalization layer wrong? If yes, then why?
+"
+"['machine-learning', 'deep-learning', 'tensorflow', 'keras']"," Title: Why is val accuracy 100% within 2 epochs and incorrectly predicting new images? (1,000 images per class when training)Body: My CNN tensorflow model reports 100% validation accuracy within 2 epochs. But it incorrectly predicts on single new images. (It is multiclass problem. I have 3 classes). How to resolve this? Can you please help me understand these epoch results?
+I have 1,000 images per class that are representative of my testing data. How can validation accuracy reach 1.00 in just the first epoch when I have a dataset of 3,000 images in total, equal amount per class? (I would expect this to start at around 33% percent -- 1/ 3 classes.)
+I understand overfitting can be a problem. I've added a dropout layer to try to solve this potential problem. From this questionWhat to do if CNN cannot overfit a training set on adding dropout? I learned that a "model is over-fitting if during training your training loss continues to decrease but (in the later epochs) your validation loss begins to increase. That means the model can not generalize well to images it has not previously encountered." I don't believe my model is overfitting based on this description. (My model reports both high training and high validation accuracy. If my model was overfitting I'd expect high training accuracy and low validation accuracy.)
+My model:
+def model():
+ model_input = tf.keras.layers.Input(shape=(h, w, 3))
+ x = tf.keras.layers.Rescaling(rescale_factor)(model_input)
+ x = tf.keras.layers.Conv2D(16, 3, activation='relu',padding='same')(x)
+ x = tf.keras.layers.Dropout(.5)(x)
+ x = tf.keras.layers.MaxPooling2D()(x)
+ x = tf.keras.layers.Flatten()(x)
+ x = tf.keras.layers.Dense(128, activation='relu')(x)
+ outputs = tf.keras.layers.Dense(num_classes, activation = 'softmax')(x)
+
+Epoch results:
+Epoch 1/10
+/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1096: UserWarning: "`sparse_categorical_crossentropy` received `from_logits=True`, but the `output` argument was produced by a sigmoid or softmax activation and thus does not represent logits. Was this intended?"
+ return dispatch_target(*args, **kwargs)
+27/27 [==============================] - 13s 124ms/step - loss: 1.0004 - accuracy: 0.5953 - val_loss: 0.5053 - val_accuracy: 0.8920
+Epoch 2/10
+27/27 [==============================] - 1s 46ms/step - loss: 0.1368 - accuracy: 0.9825 - val_loss: 0.0126 - val_accuracy: 1.0000
+Epoch 3/10
+27/27 [==============================] - 1s 42ms/step - loss: 0.0020 - accuracy: 1.0000 - val_loss: 5.9116e-04 - val_accuracy: 1.0000
+Epoch 4/10
+27/27 [==============================] - 1s 42ms/step - loss: 3.0633e-04 - accuracy: 1.0000 - val_loss: 3.5376e-04 - val_accuracy: 1.0000
+Epoch 5/10
+27/27 [==============================] - 1s 42ms/step - loss: 1.7445e-04 - accuracy: 1.0000 - val_loss: 2.2319e-04 - val_accuracy: 1.0000
+Epoch 6/10
+27/27 [==============================] - 1s 42ms/step - loss: 1.2910e-04 - accuracy: 1.0000 - val_loss: 1.8078e-04 - val_accuracy: 1.0000
+Epoch 7/10
+27/27 [==============================] - 1s 42ms/step - loss: 1.0425e-04 - accuracy: 1.0000 - val_loss: 1.4247e-04 - val_accuracy: 1.0000
+Epoch 8/10
+27/27 [==============================] - 1s 42ms/step - loss: 8.6284e-05 - accuracy: 1.0000 - val_loss: 1.2057e-04 - val_accuracy: 1.0000
+Epoch 9/10
+27/27 [==============================] - 1s 42ms/step - loss: 7.0085e-05 - accuracy: 1.0000 - val_loss: 9.3485e-05 - val_accuracy: 1.0000
+Epoch 10/10
+27/27 [==============================] - 1s 42ms/step - loss: 5.4979e-05 - accuracy: 1.0000 - val_loss: 8.5952e-05 - val_accuracy: 1.0000
+
+Model.fit and model.compile:
+model = model()
+
+model = tf.keras.Model(model_input, outputs)
+
+ model.compile(optimizer='adam',
+ loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
+ metrics=['accuracy'])
+
+hist = model.fit(
+ train_ds,
+ validation_data=val_ds,
+ epochs=10
+)
+
+Code to predict new image:
+def makePrediction(image):
+ from IPython.display import display
+ from PIL import Image
+ from tensorflow.keras.preprocessing import image_dataset_from_directory
+ img = keras.preprocessing.image.load_img(
+ image, target_size=(h, q)
+ )
+ img_array = keras.preprocessing.image.img_to_array(img)
+ img_array = tf.expand_dims(img_array, 0) #Create a batch
+
+ predicts = model.predict(img_array)
+ p = class_names[np.argmax(predicts)]
+ return p
+
+Going to the "data" directory and using the folders to create a dataset. Each folder is a class label:
+from keras.preprocessing import image
+directory_data = "data"
+tf.keras.utils.image_dataset_from_directory(
+ directory_testData, labels='inferred', label_mode='int',
+ class_names=None, color_mode='rgb', batch_size=32, image_size=(256,
+ 256), shuffle=True, seed=123, validation_split=0.2, subset="validation",
+ interpolation='bilinear', follow_links=False,
+ crop_to_aspect_ratio=False
+)
+
+tf.keras.utils.image_dataset_from_directory(directory_testData, labels='inferred')
+
+Creating dataset and splitting it:
+Train_ds code: (Output: Found 1605 files belonging to 3 classes.
+Using 1284 files for training.)
+train_ds = tf.keras.preprocessing.image_dataset_from_directory(
+ directory_data = "data",
+ validation_split=0.2,
+ subset="training",
+ seed=123,
+ image_size=(h, w),
+ batch_size=batch_size)
+
+Val_ds code: (Output: Found 1605 files belonging to 3 classes.
+Using 321 files for validation.)
+val_ds = tf.keras.preprocessing.image_dataset_from_directory(
+directory_data = "data",
+ validation_split=0.2,
+ subset="validation",
+ seed=123,
+ image_size=(h, w),
+ batch_size=batch_size)
+
+"
+"['reference-request', 'optimization', 'hyperparameter-optimization', 'incremental-learning', 'online-learning']"," Title: Is there a way to adapt Particle Swarm Optimization to an incremental/online learning setting?Body: As stated in the title, is there a way to adapt PSO to an online scenario where new data samples arrive continuously?
+In more detail: suppose that I have a classifier with several parameters for which the optimal values are to be chosen automatically, instead of being predefined. I want to use PSO to select the parameters. I know this is doable in a static scenario, where the data set is fixed. However, if new data samples arrive over time (and in large amounts), is there a way to make PSO work on such dynamic data streams?
+Also, I am open to other ways to implement self-adaptive parameters. PSO is a possible choice but if it's not possible I'd love to hear your suggestions about other approaches.
+"
+"['machine-learning', 'optimization', 'algorithm-request', 'optimizers']"," Title: How do I use machine learning to create an optimization algorithm?Body: Let's say that I want to create an optimization algorithm, which is supposed to find an optimum value for a given objective function. Creating an optimization algorithm to explore through the search space can be quite challenging.
+My question is: can machine learning be used to automatically create optimization algorithms? Is there any source to look at for this?
+"
+"['machine-learning', 'recurrent-neural-networks', 'hopfield-network']"," Title: What is remembering in Hopfield network?Body: Hopfield is a simple and traditional network. We feed into the network some patterns (Learning/Training). There is no training in Hopfield as the weight calculation adds up all the strength between neurons. The network goes into remembering mode by feeding a new unseen pattern (partially corrupted), and then the input is deactivated. The network iterates until it reaches a global or local minimum.
+My question is that, in the end, it always remembers something; that is, it settles on one of the possible patterns (combinations).
+For example, with five neurons, it settles on one of the $2^5=32$ possible patterns. So, one can say OK, this is what I am looking for, but it may not be. What mechanism is available in the Hopfield network to check whether the found pattern is identical or similar to the input pattern?
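+To illustrate the recall process described above, here is a minimal sketch of a five-neuron Hopfield network with made-up stored patterns (bipolar units, Hebbian weights, synchronous sign updates with ties broken toward +1):
+import numpy as np
+
+patterns = np.array([[1, 1, 1, 1, 1],
+                     [1, -1, 1, -1, 1]])             # stored patterns (+1/-1 units)
+W = sum(np.outer(p, p) for p in patterns).astype(float)
+np.fill_diagonal(W, 0)                               # no self-connections
+
+state = np.array([1, 1, 1, 1, -1])                   # corrupted cue: first pattern, last unit flipped
+for _ in range(10):                                  # iterate the update until a fixed point
+    state = np.where(W @ state >= 0, 1, -1)
+print(state)                                         # recovers the first stored pattern here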
+"
+"['reinforcement-learning', 'q-learning', 'sarsa', 'connect-four']"," Title: Reduction of state space of the game Connect Four to apply RL algorithms SARSA and Q-LearningBody: I would like to implement the reinforcement learning algorithms SARSA and Q-Learning for the board game Connect Four.
+I am familiar with the algorithms and know about their limitations regarding large state spaces due to storing all of this information in the Q-table. Connect Four has a large state space, estimated at around 7.1*10^13, e.g. MIT Slide 7, which is why simply applying these algorithms won't work (though this thesis claims it did).
+However, I have found this answer to a similar post that proposes a possible solution to simplify the state space.
+
+For one thing, you could simplify the problem by separating the action space. If you consider the value of each column separately, based on the two columns next to it, you reduce N to 3 and the state space size to 10^6. Now, this is very manageable. You can create an array to represent this value function and update it using a simple RL algorithm, such as SARSA
+
+Unfortunately, I don't understand the proposed simplification and would like to ask the following questions:
+
+- The action space is separated from the state space by considering each column separately. However, if my understanding of SARSA and QL is correct, they use Q(S,A) to estimate the value function; therefore a value is assigned to each state-action pair.
+- How does one calculate the value of a column based on the two columns next to it?
+- Also what does next to it mean in this context? If the two adjacent columns of each column are used then we create five pairs (N=5) or are pairs created from the inside out (e.g. middle three columns, middle five, all seven)?
+- Is a state (of the entire board?) then mapped to the array containing the value function for each action/column?
+
+Any references to other literature or simplifications would be much appreciated!
+"
+"['reinforcement-learning', 'reference-request', 'markov-decision-process', 'reward-functions', 'pomdp']"," Title: Is there a mathematical formalism to deal with a missing reward signal?Body: Typically, a Reinforcement Learning learning problem is formalized as finding an optimal policy for a Markov Decision Process (MDP). In many real-life situations, however, an agent can only get partial information from the environment. For example, Partially Observable MDPs are used to model the case where the agent does not fully observe the state.
+I was wondering whether there is any well-established formalism for the case where the agent does not fully observe the reward signal.
+In particular, I am thinking about the case where, for every state-action pair $(s, a)$, the agent receives the reward $R(s, a)$ with probability $1 - \varepsilon$ and does not receive anything with probability $\varepsilon$. Of course, in principle, this setting can be thought of as a regular MDP with a stochastic reward, but here I would like the agent to behave optimally w.r.t. $R$.
+I would really appreciate it if you could point me to some relevant literature!
+"
+"['papers', 'variational-autoencoder', 'probability-distribution', 'notation']"," Title: Are the authors of the VAE paper writing the PDFs as a function of the random variables?Body: Usually, I see the conventions:
+
+- discrete random variable is denoted as $X$,
+- the pmf is written as $P(X=x)$ or $p(X=x)$ or $p_{X}(x)$ or $p(x)$, where $x$ is an instance of $X$
+- a continuous random variable is denoted as $X$,
+- the pdf is denoted as $f_{X}(x)$ or $f(x)$, where $x$ is an instance of $X$; sometimes $p$ is used here too instead of $f$.
+
+However, the VAE paper uses slightly different notation that I'm trying to understand
+
+Let us consider some dataset $\mathbf{X}=\left\{\mathbf{x}^{(i)}\right\}_{i=1}^{N}$ consisting of $N$ i.i.d. samples of some continuous or discrete variable $\mathrm{x}$. We assume that the data are generated by some random process, involving an unobserved continuous random variable $\mathbf{z}$. The process consists of two steps: (1) a value $\mathbf{z}^{(i)}$ is generated from some prior distribution $p_{\boldsymbol{\theta}^{*}}(\mathbf{z}) ;(2)$ a value $\mathbf{x}^{(i)}$ is generated from some conditional distribution $p_{\boldsymbol{\theta}^{*}}(\mathbf{x} \mid \mathbf{z})$. We assume that the prior $p_{\boldsymbol{\theta}^{*}}(\mathbf{z})$ and likelihood $p_{\boldsymbol{\theta}^{*}}(\mathbf{x} \mid \mathbf{z})$ come from parametric families of distributions $p_{\boldsymbol{\theta}}(\mathbf{z})$ and $p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$, and that their PDFs are differentiable almost everywhere w.r.t. both $\boldsymbol{\theta}$ and $\mathbf{z}$. Unfortunately, a lot of this process is hidden from our view: the true parameters $\theta^{*}$ as well as the values of the latent variables $\mathrm{z}^{(i)}$ are unknown to us.
+
+So I am looking at these:
+
+- $p_{\boldsymbol{\theta}^{*}}(\mathbf{z})$
+- $p_{\boldsymbol{\theta}^{*}}(\mathbf{x} \mid \mathbf{z})$
+- dataset $\mathbf{X}=\left\{\mathbf{x}^{(i)}\right\}_{i=1}^{N}$
+
+So I know the subscript for $\theta$ denotes those are the parameters for the pdf. It says "discrete variable $\mathrm{x}$", "unobserved continuous random variable $\mathbf{z}$", and "latent variables $\mathrm{z}^{(i)}$". In the top, where I wrote " discrete random variable $X$", seems like that's the equivalent of "discrete variable $\mathrm{x}$" in this paper.
+So, it looks like they're writing the PDFs as a function of the random variables. Is my assumption correct? Because it is different than the typical conventions I see.
+edit: looks like his other paper has a notation guide, in the appendix, though it seems like he's conflating both random vector and instances of vector in the notation?
+https://arxiv.org/pdf/1906.02691.pdf
+"
+"['classification', 'datasets', 'prediction', 'regression']"," Title: What to predict in a limited transaction dataset?Body: I have been given a task with a real transaction dataset. The task is to predict something using either logistic regression or simple binary classification.
+The columns are as follow:
+
+- Transaction ID
+- Quantity purchased
+- Product name
+- Coupon code
+- Transaction Date
+- City (where transaction was made)
+- Delivery fee (if any)
+- Total amount spent
+
+I am having a rough time figuring out what to predict using regression or classification given only these columns.
+i.e: Given a full row how much is the total spent... etc
+In other words I need help deciding what will be the label of my dataset and what would be the reason behind choosing that label.
+"
+"['machine-learning', 'calculus', 'derivative', 'vector-space']"," Title: How the vector-space isomorphism between $\mathbb{R}^{m \times n}$ and $\mathbb{R}^{mn}$ guarantees reshaping matrices to vectors?Body: Consider the following paragraph from section 5.4 Gradients fo Matrices of the chapter Vector Calculus from the textbook titled Mathematics for Machine Learning by Marc Peter Deisenroth et al.
+
+Since matrices represent linear mappings, we can exploit the fact that
+there is a vector-space isomorphism (linear, invertible mapping)
+between the space $\mathbb{R}^{m \times n}$ of $m \times n$ matrices
+and the space $\mathbb{R}^{mn}$ of mn vectors. Therefore, we can
+re-shape our matrices into vectors of lengths $mn$ and $pq$,
+respectively. The gradient using these $mn$ vectors results in a
+Jacobian of size $mn \times pq$. .... In practical
+applications, it is often desirable to re-shape the matrix into a
+vector and continue working with this Jacobian matrix: The chain
+rule... boils down to simple matrix multiplication, whereas in the
+case of a Jacobian tensor, we will need to pay more attention to what
+dimensions we need to sum out.
+
+What I understood from the paragraph is: there is always a one-to-one mapping(?) between $\mathbb{R}^{m \times n}$ and $\mathbb{R}^{mn}$. So, we use this property to replace any element of $\mathbb{R}^{m \times n}$ (a matrix) by an element of $\mathbb{R}^{mn}$ (a vector).
+My doubt is about how this property allows us to replace the matrix by a vector without any discrepancies.
+"
+"['deep-learning', 'computer-vision', 'tensorflow', 'object-detection']"," Title: Object detection: when there's only 1 object in each imageBody: Good day. I have a custom dataset for object detection, which has imbalance that each image has only one object annotation.
+I trained the object detection model(Efficientdet-dx) on TensorFlow object detection API with this dataset. But the model predicts only one object in image, even though it has many trackable objects when the training has finished. It looks like the model learned in the wrong way that it should find only one object in a image.
+
+Here's my question: how should I train the model so that it finds all objects, independently of the number of objects per image in the training set?
+Any help would be much appreciated.
+What I tried:
+
+- Copy and Paste augmentation(CAP)
+
+- result: The dataset unfortunately has no mask annotations, so I used a deep learning model trained on an open dataset for background subtraction, but the subtraction did not work well. As you can see, the edges of the pasted objects are not clean, and some parts of the objects are missing.
+
+
+
+
+
+- Mix another dataset like COCO, PASCAL VOC.
+
+- result: It wasn't as problematic as CAP. However, the mixed dataset becomes very large, and the unrelated labels also end up in the list of predictable labels.
+
+
+
+"
+"['neural-networks', 'machine-learning', 'overfitting', 'mnist']"," Title: Can neural networks learn noise?Body: I'm interested in the following graphs. A neural network was trained to recognise digits from the MNIST dataset and then the labels were randomly shuffled and the following behaviour was observed. How can this behaviour be explained?
+What explains the apparent 'mirroring' of the graphs on the RHS, and the fact that training error on the RHS is approx. equal to validation error on the LHS?
+It's evident from the graphs that the neural network is not learning noise - so what exactly is it learning?
+
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks']"," Title: Non-sliding kernels for location-aware processing in Convolutional Neural NetworksBody: My understanding of how CNN operates in image detection is through the use of kernels that slide through the image to detect features (edges and so on). So a single kernel could potentially be learning to detect an edge no matter where it is in the image. This is great for image recognition problems where an image of a dog shifted to the right or inverted is still an image of a dog. This article states "the features the kernel learns must be general enough to come from any part of the image". The article also states how using CNN for categorical data where the order in which data is organised is irrelevant can be "disastrous".
+However, there are instances where it is desirable for the algorithm to be location-aware in order to classify better. Take the case of using a CNN to train a network that will predict card play in the game of bridge (a version of double-dummy where all cards are laid out open - perfect information, deterministic). At the beginning of the game the cards dealt to the four players could look (very unrealistically) something like this.
+
+where Leader = the player playing the lead card in round 1, and the subsequent players organised as Leader.LeftHandOpponent, Leader.Partner and Leader.RightHandOpponent. Each player's cards are organised in four suits starting from the Trump_Suit and then the other suits in the original suit hierarchy. Cards go from highest value in the top 'A' to lowest value in the bottom '2'.
+Here is a transpose of the image above.
+
+This layout provides a lot of visual cues about how the gameplay will proceed and who will end up winning how many tricks, if viewed from the perspective of the distribution of control cards within each suit and hand strength. So, the answer to the question of whether a CNN will actually be able to process this data to provide good predictions is a resounding yes (at least to me).
+However, here is the problem - A regular CNN with a sliding kernel with a (4, 1) stride and no padding would make no distinction between the red boxes when in reality there is a massive difference between them.
+Possible solution? - A filter consisting of non-sliding kernels, or kernels that only slide in one direction (perhaps horizontally or vertically), would theoretically seek to learn only location-aware features, and that could potentially improve accuracy? Just shooting an arrow into the sky.
+Has this been researched? Has anybody implemented this already? Could this work?
+P.S: CNNs were used in AlphaGo Zero with great success. Obviously, in the game of Go, patterns located at the top of the board carry the same weight as those located at the bottom. The gameplay does not change if the board is flipped 180 degrees. This, however, is not the case in the game of contract bridge. I am looking for ideas on how this can be resolved.
+"
+"['neural-networks', 'machine-learning', 'long-short-term-memory', 'prediction']"," Title: Which ML algorithm is the best for predict the next PRNG generated numbers?Body: I have a homework. The task is to decide, if the PRNG generated lottery is attackable/crackable or not.
+Details:
+Lottery:
+There is a lottery game where you have to choose 8 numbers between 1-20 for the field A and choose 1 number between 1-4 for field B. It is generated every 5 minutes(7:05 - 22:00), so there are ~64k draws/year.
+For example:
+
+- A: [3, 5, 6, 7, 10, 13, 17, 18]
+- B: 2
+
+Possible dependent variables:
+Timestamp, DrawNum (It is between 85-264 every day. 7:05 is 85 because there are 425 minutes between 00:00 and 7:05. (425/5=85))
+
+Unfortunately, we don't have many dependent variables, and there is no clue about the PRNG algorithm.
+I think these two dependent variables are not enough to predict the numbers. I am thinking of an LSTM to predict the next 1stNum based on the previous ones, and of using the same model for the other numbers.
+What do you think? How would you predict the next set of numbers?
+Which ML algorithm is the best for this use case?
+"
+"['game-ai', 'monte-carlo-tree-search', 'board-games']"," Title: Which value to propagate in Monte Carlo Tree Search in a non-zero-sum game?Body: Usually, when I read about Monte Carlo Tree Search, values between 0 and 1 (or values between -1 and 1) are backpropagated, depending on whether the simulation was a win or loss.
+Now, suppose you have an AI which needs to play a game in which it is also important to score as high as possible. For example, it needs to score as many points as possible in the game of Carcassonne against one other player.
+What kind of options are there for the values being backpropagated in such cases? Can you just backpropagate the number of points, and then, depending on the node, use the points of only the players in UCT? Or would that lead to the search converging to a worse move than the optimal move?
+"
+"['machine-learning', 'deep-learning', 'math', 'definitions', 'tensor']"," Title: What is the definition of a trace of a tensor?Body: Tensor is a multi-dimensional ordered collection of elements, which is used in deep learning to store and process data as well as intermediate steps.
+We are aware of the trace of a two-dimensional tensor, i.e., a matrix. It is defined as the sum of the diagonal elements of the matrix.
+Is there any definition for the trace of a tensor?
+"
+['datasets']," Title: What are the types of data in which the order of instances does matter?Body: In general, the order of instances in the datasets that are used in machine learning is immaterial. But there are exceptions. Timeseries data is one such exception I know. Consider the following two excerpts
+#1: From 4.3 Representing tabular data of the textbook titled Deep Learning with PyTorch by Eli Stevens et.al.
+
+At first we are going to assume there’s no meaning to the order in
+which samples appear in the table: such a table is a collection of
+independent samples, unlike a time series, for instance, in which
+samples are related by a time dimension.
+
+#2: From an answer
+
+Moreover, there are datasets that contain elements whose order in the
+dataset can be relevant for the predictions, such as datasets of
+time-series data, while, in mathematical sets and multi-sets, the
+order of the elements does not matter.
+
+I want to know the types of such data in which the order of instances does matter. Are there any other kinds of data except Timeseries in which the order of instances does matter?
+"
+"['resource-request', 'books']"," Title: Textbook for CNN-LSTM networks of predictions of numerical dataBody: I am learning NN algorithms because I'd like to create my own project.
+What I found on the internet is that, for the type of project I have in mind, a CNN-LSTM neural network would be ideal.
+But now I have a question - I don't know if it's against the rules of this forum or not. So pardon me if I violated them.
+So, now I am learning NN algorithms from a couple of books that "classify" them like: Classification NN, LSTM, Convolutional - each neural network is a separate topic in each book.
+But I am looking for a book that teaches the reader about Convolutional Long-Short Term Memory Neural Network. Does someone know such a book where such hybrid NN is the main topic?
+"
+"['genetic-algorithms', 'evolutionary-algorithms', 'genetic-operators']"," Title: Would it be a good idea to mutate half of the offspring of each GA generation 100% of the time and the other half 0% of the time?Body: I was reading about genetic algorithms, and to my understanding a genetic algorithm (GA) is an algorithm that starts with an initial population of chromosomes, where each chromosome has associated with it a fitness score, and it evolves the population such that the chromosomes in the population have an on average better fitness score than the chromosomes in the initial population. A GA accomplishes this by selecting the chromosomes in the population with the best fitness scores, and then it combines those chromosomes using a crossover function to produce offspring chromosomes. Those offspring may or may not be mutated randomly according to a probability that the offspring will be mutated. This process of selection, crossover, and mutation is iterated (with each successive iteration of the population being called a 'generation') until the fitness score of the population as a whole is deamed satisfactory. Please correct me if any of this is wrong.
+My idea is with regards to mutating the offspring produced during the crossover phase. In the relatively simple implementations of GAs that I've looked at on the internet, most seem to randomly mutate the offspring according to a mutation rate. For example, the offspring may have a ten percent chance of mutating. When the GA is ran, the fitness function generally improves a lot over the first several generations, but it often stagnates for a while when one chromosome takes over essentially the entire population and little mutation occurs. This problem can be partially solved by increasing the mutation rate, but that also increases the probability of having offspring with a lower fitness score than the parents if the mutation poorly affects the fitness score. My idea is to make it so that half of the offspring produced with each generation have a one hundred percent mutation rate, and the other half has a zero or relatively low mutation rate. This could potentially lower the number of generations necessary to reach a satisfactory fitness score.
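+For concreteness, here is a minimal sketch of the scheme described above; all names are made up, and mutate and crossover stand for whatever operators the GA already uses:
+import random
+
+def next_generation(parents, crossover, mutate, low_rate=0.0):
+    offspring = [crossover(random.choice(parents), random.choice(parents))
+                 for _ in range(len(parents))]
+    half = len(offspring) // 2
+    # First half: always mutated; second half: mutated with a zero or low probability
+    always_mutated = [mutate(c) for c in offspring[:half]]
+    rarely_mutated = [mutate(c) if random.random() < low_rate else c for c in offspring[half:]]
+    return always_mutated + rarely_mutated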
+Is this a good idea? Would it work? Would it be better than other methods of mutating the offspring of each crossover?
+"
+"['logic', 'knowledge-representation', 'ontology']"," Title: How are theories represented using Conceptual Graphs?Body: Theory 1 shows three axioms and two definitions, written in First Order Logic (FOL), that represents a fragment of a mereology theory. For this posting, it is important that the set of axioms is considered as a theory (i.e. a set of axioms together with theorems as a logical consequence of those axioms). In the context of this question, the particular axioms are not significant. Any other set of axioms forming a consistent theory would be equally acceptable.
+Theory 1
+Axioms
+Reflexivity $\forall x : part(x,x)$
+Antisymmetry $\forall x \forall y : ((part(x,y) \land part(y,x)) \implies (x = y))$
+Transitivity $\forall x \forall y \forall z :((part(x,y) \land part(y,z)) \implies part(x,z))$
+Definitions
+Overlap : $\forall x,y \colon (overlap(x,y) \iff(\exists z \colon (part(z,x) \land part(z,y)))$
+Proper Part : $\forall x,y \colon (properPart(x,y) \iff (part(x,y) \land \neg part(y,x)))$
+I am using CafeOBJ to represent the above logical axioms and definitions, shown in Listing 1:
+Listing 1
+ mod M{
+ [E]
+ preds overlap part properPart : E E
+ -- axioms
+ ax [M1] : \A[x:E] part(x,x) .
+ ax [M2] : \A[x:E]\A[y:E] ((part(x,y) & part(y,x)) -> (x = y)) .
+ ax [M3] : \A[x:E]\A[y:E]\A[z:E]((part(x,y) & part(y,z)) -> part(x,z)) .
+ -- definitions
+ ax [DM1] : \A[x:E]\A[y:E] (properPart(x,y) <-> (part(x,y) & ~(part(y,x)))) .
+ ax [DM2] : \A[x:E]\A[y:E] (overlap(x,y) <-> (\E[z:E] (part(z,x) & part(z,y)))) .
+ }
+
+Note that the logical theory is contained in a named module called M. The variables are over a domain of generic entities E, and universal and existential quantification are denoted by \A[x:E] and \E[x:E] respectively. In CafeOBJ, named modules allow one to structure signatures, theories, sub/super theories, and models using the Theory of Institutions (TOI).
+Below is my naïve attempt to present the axioms as a set of conceptual graphs (CG). My motivation for using CGs is that they provide an intuitive visualization of logic and have a direct relation to Common Logic (ISO zipped PDF).
+
+The above CG was produced using CharGer software as Java zip file (manual).
+My understanding of the above CGs is as follows:
+
+- The variables are universally quantified, not default for CGs, but allowed in extended CG (ECG).
+- The three graphs are all related by conjunction, which is default for GC.
+- The arrow on graph representing reflexivity is bi-directional.
+- Both antisymmetry and transitivity are represented by an IF-THEN contexts.
+- Dotted lines are co-references.
+- Equality (=) is actually commutative, but is represented as a directed relation .
+- Each CG asserts a single proposition, labelled Proposition.
+
+Question:
+How do I present Theory 1 using CGs? Do I need some labeling that indicates that a set of concepts represent a theory. Or are theories represented by some enclosing special type of concept?
+"
+"['convolutional-neural-networks', 'tensorflow', 'weights-initialization']"," Title: In general, when are the normal, uniform and zero initializers used?Body: I came across a Conv2D layer in a fully convolutional network, which used a kernel_initializer='zero'
for regression. Why is a kernel_initializer
of 'zero'
used here?
+In general, when are 'normal'
, 'uniform'
and 'zero'
initializers used?
+"
+"['deep-learning', 'papers', 'variational-autoencoder', 'variational-inference']"," Title: Do we use two distinct layers to compute the mean and variance of a Gaussian encoder/decoder in the VAE?Body: I am looking at appendix C of the VAE paper:
+It says:
+
+C.1 Bernoulli MLP as decoder
+In this case let $p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$ be a multivariate Bernoulli whose probabilities are computed from $\mathrm{z}$ with a fully-connected neural network with a single hidden layer:
+$$
+\begin{aligned}
+\log p(\mathbf{x} \mid \mathbf{z}) &=\sum_{i=1}^{D} x_{i} \log y_{i}+\left(1-x_{i}\right) \cdot \log \left(1-y_{i}\right) \\
+\text { where } \mathbf{y} &=f_{\sigma}\left(\mathbf{W}_{2} \tanh \left(\mathbf{W}_{1} \mathbf{z}+\mathbf{b}_{1}\right)+\mathbf{b}_{2}\right)
+\end{aligned}
+$$
+where $f_{\sigma}(.)$ is the elementwise sigmoid activation function, and where $\theta=\left\{\mathbf{W}_{1}, \mathbf{W}_{2}, \mathbf{b}_{1}, \mathbf{b}_{2}\right\}$ are the weights and biases of the MLP.
+C.2 Gaussian MLP as encoder or decoder
+In this case let encoder or decoder be a multivariate Gaussian with a diagonal covariance structure:
+$$
+\begin{aligned}
+\log p(\mathbf{x} \mid \mathbf{z}) &=\log \mathcal{N}\left(\mathbf{x} ; \boldsymbol{\mu}, \boldsymbol{\sigma}^{2} \mathbf{I}\right) \\
+\text { where } \boldsymbol{\mu} &=\mathbf{W}_{4} \mathbf{h}+\mathbf{b}_{4} \\
+\log \sigma^{2} &=\mathbf{W}_{5} \mathbf{h}+\mathbf{b}_{5} \\
+\mathbf{h} &=\tanh \left(\mathbf{W}_{3} \mathbf{z}+\mathbf{b}_{3}\right)
+\end{aligned}
+$$
+where $\left\{\mathbf{W}_{3}, \mathbf{W}_{4}, \mathbf{W}_{5}, \mathbf{b}_{3}, \mathbf{b}_{4}, \mathbf{b}_{5}\right\}$ are the weights and biases of the MLP and part of $\boldsymbol{\theta}$ when used as decoder. Note that when this network is used as an encoder $q_{\phi}(\mathbf{z} \mid \mathbf{x})$, then $\mathrm{z}$ and $\mathrm{x}$ are swapped, and the weights and biases are variational parameters $\phi$.
+
+So, it seems like, for a Bernoulli decoder, it only outputs a vector $\mathbf{y}$, which gets plugged into the log-likelihood formula.
+But then, for the Gaussian decoder, it outputs both $\boldsymbol{\sigma}$ and $\mu$. So, is it like 2 parallel layers, one calculating $\boldsymbol{\sigma}$ one calculating $\mu$?
+Similar to how we get the $\mu$ and $\sigma$ of the encoder (which I am assuming the encoder ones are different from the decoder ones)?
+And we plug it into the formula I derived in this link
+here, the log-likelihood to get the reconstruction loss?
+This is the intuition I am getting, but I haven't seen it explicitly all in one place.
+"
+"['computer-vision', 'object-detection', 'algorithm-request', 'clustering', 'sift']"," Title: What would be a reasonable option for clustering for unknown number of clusters and a lot of outliers?Body: I am implementing the CV detection pipeline with the use of SIFT and KNN Matcher.
+Image keypoints matched to the query keypoints produce the following image:
+
+The matched objects have a lot of key points on them and there are some false matches. I would like to consider spots with a lot of matches as detections of the query object and ignore isolated points.
+What would be an appropriate clustering method, where one can put a limit on the number of points in a neighborhood of some radius to declare this set of points a cluster?
+KMeans is not a good idea, since it requires a fixed number of clusters and doesn't discard outliers.
+From the algorithms proposed in sklearn, it seems like DBSCAN and Agglomerative clustering are a good choice, since they allow for a variable number of clusters, unknown a priori, and outlier removal.
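+For context, the kind of density-based filtering I have in mind would look roughly like this (the eps and min_samples values are placeholders I would still have to tune):
+import numpy as np
+from sklearn.cluster import DBSCAN
+
+# matched_points: (N, 2) array of (x, y) image coordinates of the matched keypoints
+matched_points = np.random.rand(100, 2) * 500   # placeholder data
+
+# eps = neighbourhood radius in pixels, min_samples = minimum matches to count as a detection
+clustering = DBSCAN(eps=30.0, min_samples=5).fit(matched_points)
+
+labels = clustering.labels_                      # label -1 marks isolated (outlier) matches
+detections = [matched_points[labels == k] for k in set(labels) if k != -1]
+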
+Or is there a better alternative?
+"
+"['neural-networks', 'machine-learning', 'siamese-neural-network', 'input-layer', 'network-design']"," Title: How can Siamese Neural Networks accept a variable number of inputs?Body: Traditionally, Siamese Neural Networks have two inputs. With some tweaking, you can get them to accept any number of inputs. What I don't understand is how to get them to accept variable numbers of inputs. I've seen a couple of research papers (most notably this one) where they talk about doing this, but none explain exactly how.
+Could someone please explain how to create a Siamese Neural Network with a variable number of inputs?
+"
+"['image-recognition', 'explainable-ai', 'gradient', 'saliency-map']"," Title: What exactly do gradient-based saliency map tell us?Body: As far as I understand, gradients are supposed to tell us 1) the magnitude and 2) direction, to update a parameter such as to minimize the loss function.
+Regarding saliency maps, which use gradients with respect to the input, do the gradients give us the same information?
+Consider vanilla saliency maps [1] (i.e. gradients-only) and integrated-gradients [2] (using a baseline image), with grayscale images.
+Do the (vanilla) gradients give us the amount and direction a pixel-value needs to change? OR does the magnitude tell us the amount of change in loss based on a minimal change in pixel-value?
+In simpler terms: does magnitude signify:
+
+- amount of change required in a pixel-value to have some change (in loss?) or
+
+- amount of change in (loss?) based on a minimal/local change in pixel-value?
+
+
+"
+"['deep-learning', 'embeddings']"," Title: Best way to resize 3d to 2d matrixBody: I have a (5, 128, 768)
matrix, that is, I have 5 embedding spaces of shape (128, 768)
. Since they all keep a relation, and for the sake of my model, I need to combine them into a unique output: (1, 128, 768*5)
. If I just concat them all along axis=-1
, will I be losing some info?
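+To be explicit about the operation I mean, a minimal numpy sketch:
+import numpy as np
+
+embeddings = np.random.rand(5, 128, 768)           # the five embedding spaces
+combined = np.concatenate(embeddings, axis=-1)     # shape (128, 3840) = (128, 768*5)
+combined = combined[np.newaxis, ...]               # shape (1, 128, 3840)
+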
+Making that concatenation is the only way I can think of solving this. Is there any better option?
+"
+"['convolutional-neural-networks', 'recurrent-neural-networks', 'time-series']"," Title: Neural network for recognizing ship types based on location seriesBody: I am building a neural network for recognizing ship types based on a 1000-long series of location data (latitude-longitude, normalized to account for different km/longitude° metrics, so that vector difference yields a consistent distance). The dataset I use consists of around 100 000 distinct day-ship pairs, classified by a 10-valued labeling. The cardinality of the classes are: 30115, 26327, 12798, 10940, 5859, 4211, 4176, 3639, 3521, 2834
+I tried two different approaches:
+
+- A recurrent network (using LSTM): Dense[relu] -> LSTM -> Dense[relu] -> Dropout -> Dense[softmax]
+- A 1D convolutional network: Dense[relu] -> Conv1D -> Dense[relu] -> Dropout -> Dense[softmax]
+
+I experimented with the hyperparameters of the above networks, but they all converge to a 40% accuracy, where the model classifies all inputs as class 0 or 1 (choosing the most likely class of the output layer).
+I could accept that the data is not well-defined and this kind of prediction is impossible, but the strange thing is that even if I give the same data as training and validation, the model stops getting better at the 40% accuracy mark. Shouldn't it go further, and "memorize" the classes in this case, resulting in ~100% accuracy on the training data?
+"
+"['reinforcement-learning', 'thompson-sampling']"," Title: Why is Thompson Sampling considered a part of Reinforcement Learning?Body: I often see Thompson Sampling in RL literature, however, I am not able to relate it to any of the current RL techniques. How exactly does it fit with RL?
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks', 'computer-vision']"," Title: Predict placement of an object in 3D spaceBody: I am trying to find a way to train a model to predict the correct placement of entities like a tree, dog and cat in a natural 3D environment. Any help regarding how I could use textual data to learn correct placement of objects in 3D space would be a great help. I am a little lost on how to approach this problem.
+"
+"['supervised-learning', 'multi-label-classification', 'imbalanced-datasets']"," Title: What's the best way to train data with unbalanced targets?Body: Suppose I have data I want to use for supervised learning, but there is a pretty bad target/class/labels imbalance. Should I:
+
+- Limit the size of the training set to make sure there is a flat target/class balance distribution (the training set is designed such that there is an equal number of training samples for each class, based on splitting the lowest-occurring class as high as possible). For example, if my lowest-occurring class appears only 50 times in my data, and I want an 80-20 train-test split, then I decide I take 40 of the samples for training, and, for an even target balance, take 40 samples for every other class in training, even if the highest-occurring class appears 100,000 times, for instance.
+
+- Ignore target balance and just focus on the ratio for the train and testing split. So, if it's 80-20, take 40 of the samples out of 50 for my lowest-occurring class, and 80% of 100,000 for my highest-occurring class, and so on.
+
+- Something else?
+
+
+Let's suppose I can't just get more data. I know there's some stuff to be said regarding undersampling and oversampling, but what can I do to tell if either one is working better if model accuracy might be disingenuous?
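+For concreteness, option 1 (undersampling every class down to the training share of the rarest class) could be sketched like this (the DataFrame and its label column are assumptions about how the data is stored):
+import pandas as pd
+
+def balanced_train_split(df, label_col='label', train_frac=0.8, seed=0):
+    # take the same number of training samples from every class,
+    # set by the training share of the rarest class
+    n_per_class = int(df[label_col].value_counts().min() * train_frac)
+    train = (df.groupby(label_col, group_keys=False)
+               .apply(lambda g: g.sample(n=n_per_class, random_state=seed)))
+    test = df.drop(train.index)
+    return train, test
+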
+"
+"['neural-networks', 'natural-language-processing', 'recurrent-neural-networks', 'long-short-term-memory']"," Title: What kind of NN to use to find misprints in testBody: I have a bunch of unique full names of users. I made pseudo-physical model to emulate misprints of desktop and mobile users (hence, fatfingering, jumpy fingers, accidentals touches of touch bar etc.)
+So, I have pairs like John Snow - joh Snown
+I tried first Recurrent networks, LSTM, like some kind of vocabulary to "translate" from bad words to good ones, but it return only known predicted result, and when I try to put unknown last names, it returns wrong results.
+I wish to find some patterns in misspelled words, and to predict correct spelling.
+Can you please advice some kind of NN to cope with the task, or maybe some contributions in that domain?
+P.S. Yes, I know that there exist other AI methods to get things done
+P.P.S. This vocabulary is not in English, just in case
+UPDATE
+The LSTM NN works well with known names and with last-name endings for new last names. Right now I use 2 different NNs: the first to correct mistypes, the second to determine the first, last and middle name.
+UPDATE2
+A sequence-to-sequence solution can also normalize names (put them in order), find the sex of the person, find the probability of error, etc.
+"
+['constrained-optimization']," Title: How to assign tasks to users with ranking?Body: I'm trying to write an automatic assignment algorithm for the following problem:
+I have $N$ tasks and $M$ users. For each task, I have a ranking for each user for "how related it is to that user". The ranking is a floating-point number in the [0, 1.0] range. The sum of all ranks is 1. I need to write an algorithm that will create the overall best assignment for all tasks. It has 2 constraints:
+
+- Overall best correctness of assignment.
+- Properly balanced - I can't have 1 user assigned 20 tasks while all others are assigned 1-2.
+
+So far, from studying multiple input sets, I found that a task that has a user with a rank over 0.7 should be assigned to that user no matter what, because it's a really strong indicator of a correct assignment. After that, I tried to balance all the remaining tasks.
+So, for the input of:
+
+tasks = [t1, t2, t3,... , tn]
+task_user_scores = [[s11, s12,...s1m], [s21, s22, ..., s2m],..., [sm1, sm2, ..., smm]]
+first_iteration_limit = float, [0, 1.0], I use 0.7, as I described earlier.
+per_user_limit = int, how many tasks to strive to assign per user.
+
+I do:
+Step 1
+for ti in tasks
+ if some sij > 0.7
+ assign ti to user j
+
+Step 2
+for ti in unassigned tasks
+ compute user deviation from per_user_limit (ud): udj = abs(assigned_to_j - per_user_limit), where the maximum deviation is abs(n - per_user_limit),
+ compute adjusted scores as follows sij * (1 - udj).
+ Select the user from the list using weighted random, if selected user already has more assigned tasks than per_user_limit, select again(the random select happens a finite number of times, I found that 10 is fine).
+ Otherwise, if the user didn't reach per_user_limit assign ti to user j.
+
+Step 3
+for ti in unassigned tasks
+ assign ti to the user in the top 5 with the least number of tasks assigned to it.
+
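+For reference, here is a rough Python sketch of the three steps above (the retry count of 10, the normalisation of the deviation and the tie-breaking choices are my own assumptions):
+import random
+
+def assign(tasks, scores, first_iteration_limit=0.7, per_user_limit=5, retries=10):
+    # scores[i][j] = relatedness of task i to user j; returns {task index: user index}
+    n_users = len(scores[0])
+    assignment, load = {}, [0] * n_users
+
+    # Step 1: strong matches go straight to their user
+    for i in range(len(tasks)):
+        best = max(range(n_users), key=lambda j: scores[i][j])
+        if scores[i][best] > first_iteration_limit:
+            assignment[i], load[best] = best, load[best] + 1
+
+    # Step 2: weighted random selection with load-adjusted scores
+    max_dev = max(abs(len(tasks) - per_user_limit), 1)
+    for i in range(len(tasks)):
+        if i in assignment:
+            continue
+        weights = [max(scores[i][j] * (1 - abs(load[j] - per_user_limit) / max_dev), 0.0)
+                   for j in range(n_users)]
+        for _ in range(retries):
+            if sum(weights) > 0:
+                j = random.choices(range(n_users), weights=weights)[0]
+            else:
+                j = random.randrange(n_users)
+            if load[j] < per_user_limit:
+                assignment[i], load[j] = j, load[j] + 1
+                break
+
+    # Step 3: leftovers go to the least-loaded of the top-5 users by score
+    for i in range(len(tasks)):
+        if i in assignment:
+            continue
+        top5 = sorted(range(n_users), key=lambda j: scores[i][j], reverse=True)[:5]
+        j = min(top5, key=lambda u: load[u])
+        assignment[i], load[j] = j, load[j] + 1
+    return assignment
+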
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'natural-language-processing', 'bert']"," Title: Fine tuning BERT for token level classificationBody: I want to try self-supervised and semi-supervised learning for my task, which relates to token-wise classification for the 2 sequences of sentences (source and translated text). The labels would be just 0 and 1, determining if the word level translation is good or bad on both the source and target sides.
+To begin, I used XLMRoberta, as I thought it would be best suited for my problem. First, I just trained normally using nothing fancy, but the model overfits after just 1-2 epochs, as I have very little data to fine-tune on (approx 7k).
+I decided to freeze the BERT layers and just train the classifier weights, but it performed worse.
+I thought of adding a more dense network on top of BERT, but I am not sure if it would work well or not.
+One more thought that occurred to me was data augmentation, where I increased the size of my data by multiple factors, but that performed badly as well. (Also, I am not sure what should be the proper number to increase the data size with augmented data.)
+Can you please suggest which approach could be suitable here and whether I am doing something wrong? Shall I just fine-tune all the layers on my data, or is freezing actually a good option? Or do you suspect I am going wrong somewhere in the code and this is not what is expected?
+"
+"['deep-learning', 'weights', 'weights-initialization']"," Title: Are there any recommendations on initialising a single parameter in deep learning?Body: I want to initialize a parameter, which is a single real number in my model. If you want the role of the parameter in the model, you can assume it as the parameter to multiply with the output of the neural network and the resultant product will be the final output.
+Does initialization value matter? If yes, are there any guidelines on initializing the single parameter?
+"
+"['transformer', 'bert', 'logic', 'gpt', 'reasoning']"," Title: Why is BERT/GPT capable of ""for-all"" generalization?Body: As shown in the figure:
+
+Why does token prediction work when "Socrates" is replaced with "Plato"?
+From the point of view of symbolic logic, the above example effectively performs the logic rule:
+∀x. human(x) ⇒ mortal(x)
+
+How might we explain this ability? Moreover, how is this learned in just a few shots of examples?
+I think this question is key to understanding the Transformer's logical reasoning ability.
+Below are excerpts from 2 papers:
+
+
+"
+"['reinforcement-learning', 'objective-functions', 'policy-gradients', 'gradient']"," Title: What specifically is the gradient of the log of the probability in policy gradient methods?Body: I am getting tripped up slightly by how specifically the gradient is calculated in policy gradient methods (just the intuitive understanding of it).
+This Math Stack Exchange post is close, but I'm still a little confused.
+In standard supervised learning, the partial derivatives can be acquired specifically because we want to learn about the derivative of the cost with respect to input parameters and then adjust in the direction of minimising this error.
+Policy gradients are the opposite, as we want to maximise the likelihood of taking good actions. However, I don't understand what we are taking partial derivatives with respect to - in other words, what is the 'equivalent' of the cost function, specifically for $\nabla_\theta \log\pi_\theta$?
+"
+"['machine-learning', 'reference-request', 'weights', 'probability-distribution', 'logistic-regression']"," Title: How does the distribution of the parameters change in logistic regression?Body: I have my own data to train a logistic regression model (for a multi-class classification task), and I want to know how the distribution of weight parameters changes after each update with gradient descent.
+For example, let's say that there are $f$ many features for each input, and the weight $W$, which is a $c \times f$ matrix, where $c$ is a number of classes, is initialized with uniform distribution $U(-1/\sqrt{f}, 1/\sqrt{f})$, which is LeCun uniform initialization.
+For each step of gradient descent with Cross-Entropy loss, it will be updated as
+$$
+W_{t+1} = W_{t} - \alpha \frac{\partial \mathcal{L}}{\partial W_{t}}
+$$
+where $\alpha$ is a learning rate and the gradient is given
+$$
+\frac{\partial \mathcal{L}}{\partial W_{t}} = \frac{1}{n} (\mathbf{p} - \mathbf{y})^{T}\mathbf{X}
+$$
+where $\mathbf{X} \in \mathbb{R}^{n\times f}$ is an input matrix, $\mathbf{y} \in \mathbb{R}^{n \times c}$ is one-hot encoded labels, and $$\mathbf{p} = \mathrm{softmax}(\mathbf{X}W_{t}^{T}) \in \mathbb{R}^{n \times c}$$
+is the model's output (predicted probability for each example & class).
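+For concreteness, the following small simulation of this update is the kind of experiment I use to track $||W_{t}||_{\infty}$ (the synthetic Gaussian inputs and random labels are placeholder assumptions):
+import numpy as np
+
+n, f, c, lr, steps = 1000, 20, 5, 0.1, 500
+rng = np.random.default_rng(0)
+X = rng.normal(size=(n, f))
+y = np.eye(c)[rng.integers(0, c, size=n)]                      # one-hot labels
+W = rng.uniform(-1 / np.sqrt(f), 1 / np.sqrt(f), size=(c, f))  # LeCun uniform init
+
+norms = []
+for t in range(steps):
+    logits = X @ W.T
+    p = np.exp(logits - logits.max(axis=1, keepdims=True))
+    p /= p.sum(axis=1, keepdims=True)                          # softmax(X W^T)
+    grad = (p - y).T @ X / n                                    # gradient of the cross-entropy loss
+    W -= lr * grad
+    norms.append(np.abs(W).max())                               # ||W_t||_inf
+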
+What I want to know is how the distribution of $W_{t}$ changes as $t$ increases, if some information about $\mathbf{X}$ is known. More precisely,
+
+- Is it possible to get a bound of $\mathbb{E}[||W_{t}||_{\infty}]$ in terms of $t, \alpha, ...$? We may assume that the input $\mathbf{X}$ is also bounded (in the sense that $||\mathbf{X}||_{\infty} \leq M$ for known $M$.) I have a really rough bound for $||W_{t}||_{\infty}$, but it is not good enough.
+- Are there any works in this direction for other models, such as MLPs?
+
+When I plotted the value $||W_{t}||_{\infty}$ for each $t$, then it seems that it increases sub-linearly in $t$, but
+"
+"['reinforcement-learning', 'comparison', 'policies', 'sutton-barto', 'trust-region-policy-optimization']"," Title: What is the difference between an on-policy distribution and state visitation frequency?Body: On-policy distribution is defined as follows in Sutton and Barto:
+
+On the other hand, state visitation frequency is defined as follows in Trust Region Policy Optimization:
+$$\rho_{\pi}(s) = \sum_{t=0}^{T} \gamma^t P(s_t=s|\pi)$$
+Question:: What is the difference between an on-policy distribution and state visitation frequency?
+"
+"['deep-learning', 'natural-language-processing', 'data-preprocessing', 'text-classification', 'binary-classification']"," Title: Which pre-processing steps are necessary for Deep Learning models to solve a document classification problem?Body: I have created a data set with 30.000 text documents (each text file is rather small with respect to its length), which are labelled with 0 and 1. Using this data set, I want to train machine learning and deep learning models in order to be able to classify new text files.
+On the one hand, I want to use classical machine learning models (such as logistic regression, random forest, SVM, etc.) with the Bag of Words/TF-IDF approach. This requires extensive text pre-processing, such as tokenization, stemming, converting to lower case, removing of stopwords and punctuation, lemmatization, etc.
+On the other hand, I want to use new deep learning models (such as RNN, LSTM, BERT, XLNET, etc.).
+Which pre-processing steps are necessary/advantageous for these deep learning models? Should I also use tokenization, stemming, converting to lower case, removing of stopwords and punctuation, lemmatization, etc. or can I omit most of these steps?
+"
+"['tensorflow', 'keras', 'image-segmentation', 'u-net', 'categorical-crossentropy']"," Title: Why is the simplest U-Net architecture giving the best (but not good enough) results on a multi-class segmentation on microscopic data?Body: Currently, I'm trying to optimize a training process of a neural net to improve final results. The problem I'm dealing with is multiclass segmentation on microscopic data.
+The paradox is that the best (and not sufficient) result is giving the simplest U-Net architecture on the original dataset. If I try a deeper or more complex model (e.g. r2unet), the final segmentation is significantly worse. If I try on the fly augmentation - worse as well. Changing a complex model into a more shallow one didn't help either (just tried out the other way than making it more complex).
+Now, I'm trying to make a custom loss function work to improve the segmentation.
+Any ideas what might be a root cause? Or any other ideas that could improve the result?
+To get more specific, here's an example of the data. The initial 4000x4000 images are cut to 512x512, which results in a little over 3700 images. Most of them don't include the classes and are just background; that's why I'm trying to make other loss functions work, as well as weighted classes.
+
+
+
+So far, I'm using categorical cross-entropy as a loss function; however, the Dice, Jaccard and focal losses seem like they could be more suitable, and once I finish my computations I'll try to make these work again - so far my attempts weren't really compatible with Keras, at least it seemed that way.
+The size of U-Net
+
+- depth = 5
+- first conv layer has 64 filters, goes up to 1024.
+
+R2U-Net
+
+- depth 5
+- first layer 64 filters (also tried 32)
+
+"
+"['recurrent-neural-networks', 'loss']"," Title: Writing a loss function for ""how far can this output be pushed""Body: I'm trying to train a function for a industrial-process-control-like system. This is my first attempt at a custom training, so feel free to point out any invalid assumptions.
+I've got one input and one controlled output, which I'm trying to optimise. I've reduced the problem with some values normalisation to:
+
+- the first half of the input looks like a sin() rise from 0 to the max value then a dropoff - with lots of noise on top (let's say up to +/-10% at each measurement)
+
+- I don't know what the max is, but it's roughly predictable (input goes from 0 to between 0.5 and 2)
+
+- the output cannot go down (well, it can by a tiny bit, but I'd ignore that here), and cannot go higher than the input value
+
+- the goal is to get the output value as high as possible
+
+
+Currently the best non-NN approach I've got is to start a few % below the input and at each step run output = A*input + (1-A)*previous_output, so the result looks like this (input in blue, output in orange)
+
+I wanted to check if some RNN can improve on this, so I'm planning to check an LSTM doing this instead. I'm struggling to come up with a loss calculation which is viable for training here.
+I considered making the input and output for the network an absolute change from the previous value (or the input as an input - last_output difference), then using as the loss some kind of inverted value of sum(output changes) from step 0 to the crossover point, to reward higher maximums while ignoring the distance (so discarding anything that happens past the crossover).
+But... that doesn't really relate to the input, so tensorflow wouldn't be able to train based on this, if I understand it correctly. Am I going in the wrong direction? Are there some known ways to solve this problem?
+"
+"['neural-networks', 'classification', 'papers', 'features', 'mnist']"," Title: How can a neural network distinguish a rotated 6 and 9 digits?Body: Rotated MNIST is a popular dataset for benchmarking models equivariant to rotations on $\mathbb{R}^2$, described by $SO(2)$ group or its discrete subgroups like $\mathbb{Z}^{n}$:
+
+It consists of all digits from 0 to 9 rotated on an arbitrary angle from $[0, 2 \pi)$. However, what makes me a bit puzzled is that digits $6$ and $9$ seem to be confused by any learning algorithms, since from the view of human perception $6$ rotated by 180 degrees is equivalent to $9$ and vice versa.
+The original paper in the description of Rotated MNIST doesn't comment on this point at all, which is strange, since it is a very natural question to ask.
+
+In the paper Oriented Response Networks - authors plot embeddings of rotated digits projected via t-SNE on a 2d plane. There is a clear separation between all rotated versions of 6 and the rotated version of 9 for ORN.
+
+I do not understand how this can be achieved. Perhaps the networks pick up much more about how the digit is written: some subtle features that are inaccessible to humans but recognizable by a powerful classifier?
+"
+"['reinforcement-learning', 'training', 'policy-gradients', 'monte-carlo-tree-search', 'alphazero']"," Title: What is a policy training target in AlphaZero?Body: In AlphaZero's attached pseudocode, they create a training target for the policy network in this way.
+def store_search_statistics(self, root):
+ sum_visits = sum(child.visit_count for child in root.children.itervalues())
+ self.child_visits.append([
+ root.children[a].visit_count / sum_visits if a in root.children else 0
+ for a in range(self.num_actions)
+ ])
+
+In other words, the training target probability for a certain move is proportional to its visit count.
+However, in the paper, they describe the usage of softmax sampling of visit counts with temperature. This temperature is equal to 1 for the first 30 moves (in this case the policy training target is the same as in the pseudocode above), and for subsequent moves they set an infinitesimal temperature -> 0, which essentially means they are picking the move with the highest visit count.
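+For reference, the visit-count selection with temperature that I am referring to would, as I understand it, look roughly like this (a sketch, not the DeepMind pseudocode):
+import numpy as np
+
+def select_move(visit_counts, temperature):
+    # visit_counts: root visit counts N(s, a) for every legal action
+    counts = np.asarray(visit_counts, dtype=np.float64)
+    if temperature == 0:                    # limit case: play the most visited move
+        return int(np.argmax(counts))
+    probs = counts ** (1.0 / temperature)
+    probs /= probs.sum()
+    return int(np.random.choice(len(counts), p=probs))
+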
+Since these are 2 different things (if the game has more than 30 moves), my question is: which approach should be used for creating the training target for the policy?
+"
+"['machine-learning', 'terminology']"," Title: What does ""at inference time"" on Tesla's cars mean?Body: I've watched Tesla AI Day 2021 and there was a question Tesla staff tried to answer, but I did not quite understand the question (Note: quote taken from autogenerated subtitles, I do not hear differently, but may you will):
+
+or you'd be training a lot more complex models which would be
+potentially significantly more expensive to run at inference time on
+the cars
+
+I've found a definition of "inference time" in How to Optimize a Deep Learning Model for faster Inference?
+
+The inference time is how long is takes for a forward propagation
+
+But what does "AT inference time on the cars" mean? Is it just badly worded, or does this "at" actually add proper meaning? Also, does it make sense to run training models on the cars themselves, and what could that phrase mean? Overall I cannot make sense of the question. Do you?
+Note: I'm not a native English speaker.
+"
+"['deep-learning', 'terminology', 'recurrent-neural-networks', 'encoder-decoder']"," Title: What is a ""mask"" in the context o RNN-based encoders?Body: While reading source code related to RNN encoders, I've come across the term mask as input to the encoder. What exactly is it?
+"
+"['deep-learning', 'recurrent-neural-networks', 'pytorch', 'word-embedding']"," Title: What exactly is embedding layer used in RNN encoders?Body: I am reading about RNN encoders. I came across the following line from this code. And I am facing difficulty in understanding the theoretical details regarding it.
+emb = self.drop(self.encoder(input))
+
+The input is a tensor of shape $[32, 100]$. Here 32 is the batch size and 100 is the length of the sentence. The hundred elements are indices to the words (from the dictionary) that are used in the sentence. We can observe that the output emb is later passed to the rnn (LSTM/GRU) layer.
+output, hidden = self.rnn(emb, hidden)
+
+So, to me, it looks like self.encoder is a necessary step when using the RNN encoder. So, I am interested in what it actually does.
+When we look at self.encoder, it is an Embedding layer. The description for this layer is as follows
+
+A simple lookup table that stores embeddings of a fixed dictionary and
+size.
+This module is often used to store word embeddings and retrieve them
+using indices. The input to the module is a list of indices, and the
+output is the corresponding word embeddings.
+
+When we look at self.drop, it randomly zeroes some elements of the embeddings.
+
+During training, randomly zeroes some of the elements of the input
+tensor with probability p using samples from a Bernoulli distribution.
+Each channel will be zeroed out independently on every forward call.
+
+The outputs for both self.encoder(input) and self.drop(self.encoder(input)) are $[32, 100, 3000]$.
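+To make these shapes reproducible outside the full model, here is a standalone snippet (the vocabulary size is an assumption; the embedding dimension 3000 is inferred from the output shape):
+import torch
+import torch.nn as nn
+
+vocab_size, emb_dim = 10000, 3000
+encoder = nn.Embedding(vocab_size, emb_dim)
+drop = nn.Dropout(p=0.5)
+
+inp = torch.randint(0, vocab_size, (32, 100))   # batch of 32 sentences, 100 word indices each
+emb = drop(encoder(inp))
+print(emb.shape)                                # torch.Size([32, 100, 3000])
+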
+I have doubt(s) on the bolded parts of the description of the Embedding layer. The description is saying the Embedding layer uses/contains(?) a lookup table. The description says Embedding layer stores and retrieves word embeddings.
+The doubts are
+
+- Generally, does an embedding layer calculate word embeddings or just store and retrieve them from the table? If it does not calculate them, then who will calculate the embeddings? If you can also comment on the specifics of PyTorch, I would appreciate it.
+
+- What exactly is an embedding layer? Is it a collection of neurons or something else?
+
+
+"
+"['neural-networks', 'knowledge-representation', 'representation-learning']"," Title: Is there some way for us to know if the neural network internally finds an association between labels?Body: I have a question about the association between labels. Say my neural network performs multi-labeling in its output layer. Now, if one of the labels is for whether a person lives in city $X$, another for if they have a wiki article, and finally a third for if that person is a millionaire. Let us assume that these are highly correlated.
+Is there some way for us to know if the neural network internally finds an association between these three labels?
+"
+"['convolutional-neural-networks', 'natural-language-processing', 'recurrent-neural-networks', 'machine-translation']"," Title: Is image machine translation done in two steps?Body: Suppose I have images of hand-written Japanese text. If I want to translate those images, would my ML algorithm be a 2-step model (for example, a CNN to convert the image into Japanese characters/tokens and then feed those tokens in an RNN)? Is this normally how it would be done, or is there an end-to-end solution?
+"
+"['deep-learning', 'terminology', 'generative-adversarial-networks', 'image-generation']"," Title: What does ""Gau"" in GauGAN stand for?Body: GauGAN is a neural network architecture from NVIDIA that can create realistic images from semantic maps (and nowadays also textual descriptions).
+"
+"['natural-language-processing', 'recurrent-neural-networks', 'word-embedding']"," Title: What are the types of inputs used for RNN in literature given sentences?Body: Suppose there are $m$ sentences in a text file and the number of distinct words is equal to $n$. The goal is to get word embeddings using RNN.
+We know that it is impossible to pass a word, which is in text format, directly as an input to an RNN. We need to convert each word into some number and then pass it to the RNN to get word embeddings.
+I know only the following method, if correct:
+
+- Assign an index to each word. So, the index ranges from $0$ to $n-1$.
+- Use the indices as input to RNN.
+
+Is it the only technique used in the literature? If not, what are the names of other techniques that are used in the context of RNN encoders?
+"
+"['convolutional-neural-networks', 'architecture', 'yolo', 'convolution-arithmetic']"," Title: Are the output dimensions of the first and second convolutional layer in YOLO paper correct?Body: I was reading the last version of the YOLO paper available in Arxiv, and I don't fully understand the output dimensions (I understand width and height, but not depth) of the first and second convolutional layers.
+
+Shouldn't the output of the first layer be 112x112x64? Shouldn't the output of the second layer be 56x56x192? I thought (and this is the case after the 3rd layer) that the depth of the output of a conv layer is equal to the number of filters of this layer. This is why after the first conv layer (that contains 64 filters) I would expect an output depth of 64. And the same for the second conv layer that contains 192 filters, I would expect the output to have a depth of 192.
+"
+"['reinforcement-learning', 'ai-design', 'hyperparameter-optimization', 'hyper-parameters', 'environment']"," Title: What components of reinforcement learning influence the result the most?Body: I'm working on my thesis concerning a reinforcement learning problem and am trying to prioritise my time on different components of it:
+
+- Formalising the agent environment (like the design of state-, action-space and reward-structure)
+- Selection of learning algorithm
+- Selection of network architecture and size
+- Design of the training setup
+
+It is an agent in a 3D environment with simulated physics (in Unity), the domain being a real-time strategical game. It is an environment with constraint training data, so sample efficiency is very important.
+Now my question: I do anticipate that the design of the state- and action space will have a big impact on the training result, especially in this environment with little training data.
+However, is there a way one can clearly prioritise what components will be the most important ones for an RL setting?
+Time is limited, and, for me, as a beginner, it seems to be quite difficult to determine what component will be the most important one and needs the most focus. Testing only the hyper-parameters of a learning algorithm thoroughly will take in itself a long time. And obviously disregarding any component will result in bad results.
+Is there a way to know on which component one should focus more?
+"
+"['deep-learning', 'classification', 'deep-neural-networks', 'underfitting']"," Title: If the model always underfits, do I really need a larger model?Body: I train my neural network on random points generated for a data set that theoretically consists of approximately $1.8 * 10^{39}$ elements. I sample (generate) tens of thousands of random points on each epoch with uniform distribution. For every model I try, it appears that it cannot get past $10-12\%$ accuracy on the data, even if I increase the size of the model to over ten million parameters.
+Each feature of the data set is of two Rubik's cube positions, and the corresponding label is the first move of a possible solution to solve from the first provided position to the second provided position within twenty moves.
+So, it's a classification model for $18$ distinct classes, one for each of the possible moves on a Rubik's cube. Seeing that it has $12\%$ accuracy (being greater than $1/18 \approx 5.6\%$) is nice because it does mean that it is learning something, just not enough.
+I also notice that loss tends to go down to a hard minimum over many epochs, but accuracy stops increasing after only around ten epochs. On epoch 36, it reached a loss of $2.713$, and it repeatedly comes back down to $2.713$, but never any lower even after 2000 epochs.
+I concatenated a convolutional layer with a fully connected layer to use it as the first layer of the model. Convolutional layers might not work for this as well as I'd hope, so I throw in the fully connected layer as a safeguard. Some Keras code below:
+inp_cubes = Input((2,6,3,3,1))
+
+x = TimeDistributed(TimeDistributed(Conv2D(1024,(2,2),(1,1),activation='relu')))(inp_cubes)
+
+output_face_conv = TimeDistributed(TimeDistributed(Flatten()))(x)
+flatten_inp_cubes = TimeDistributed(TimeDistributed(Flatten()))(inp_cubes)
+x = TimeDistributed(TimeDistributed(Concatenate()))((output_face_conv, flatten_inp_cubes))
+
+x = TimeDistributed(TimeDistributed(Dense(1024,'relu')))(x)
+x = TimeDistributed(TimeDistributed(Dropout(0.3)))(x)
+x = TimeDistributed(TimeDistributed(Dense(1024,'relu')))(x)
+x = TimeDistributed(TimeDistributed(Dropout(0.3)))(x)
+x = TimeDistributed(TimeDistributed(Dense(1024,'relu')))(x)
+x = TimeDistributed(TimeDistributed(Dropout(0.3)))(x)
+x = TimeDistributed(TimeDistributed(Dense(1024,'relu')))(x)
+x = TimeDistributed(TimeDistributed(Dropout(0.3)))(x) # face logits
+x = TimeDistributed(Flatten())(x)
+x = TimeDistributed(Dense(1024,'relu'))(x)
+x = TimeDistributed(Dropout(0.3))(x)
+x = TimeDistributed(Dense(1024,'relu'))(x)
+x = TimeDistributed(Dropout(0.3))(x)
+x = TimeDistributed(Dense(1024,'relu'))(x)
+x = TimeDistributed(Dropout(0.3))(x) # cube logits
+x = Flatten()(x)
+x = Dense(1024,'relu')(x)
+x = Dropout(0.3)(x)
+x = Dense(1024,'relu')(x)
+x = Dropout(0.3)(x)
+x = Dense(1024,'relu')(x)
+
+outp_move = Dense(18,'softmax')(x) # solution logits
+
+I tried using only one of the two types of input layers separately, and nothing quite worked.
+Loss is measured as categorical cross-entropy. I make use of time-distributed layers so that each of the two Rubik's cube positions from the input is processed equivalently, except when determining how they relate. I'm making sure to scale my data, and all that stuff. It really seems like this should just work, but it doesn't.
+Is there any way to increase the model's performance without using hundreds of millions of parameters, or is that actually necessary?
+I would have thought that there would be some relatively simple correlation between positions and solutions, although it's hard for us to see as humans, so maybe this comes down to the Cayley diagram of the Rubik's cube group being innately random, as though they're prime numbers or something.
+EDIT: I guess I really did just need a bigger neural network. This new one has 75 million parameters. The second image shows how that model is able to learn the data set quite easily. It takes a long time to process, though.
+"
+"['reinforcement-learning', 'comparison', 'q-learning', 'sarsa']"," Title: What is meant by ""two action selections"" in SARSA?Body: I have some difficulties understanding the difference between Q-learning and SARSA. Here (What are the differences between SARSA and Q-learning?) the following updating formulas are given:
+Q-Learning
+$$Q(s,a) = Q(s,a) + \alpha (R_{t+1} + \gamma \max_aQ(s',a) - Q(s,a))$$
+SARSA
+$$Q(s,a) = Q(s,a) + \alpha (R_{t+1} + \gamma Q(s',a') - Q(s,a))$$
+I know that SARSA is for on-policy learning while Q-learning is off-policy learning. So, in Q-learning, the epsilon-greedy policy (or epsilon-soft or softmax policy) is chosen for selecting the actions and the greedy policy is chosen for updating the Q-values. In SARSA the epsilon-greedy policy (or epsilon-soft or softmax policy) is chosen for selecting the actions and for updating the Q function.
+So, actually, I have a question on that:
+On this website (https://www.cse.unsw.edu.au/~cs9417ml/RL1/algorithms.html) there is written for SARSA
+
+As you can see, there are two action selection steps needed, for determining the next state-action pair along with the first.
+
+What is meant by two action selections? Normally you can only select one action per iteration. The other "selection" should be for the update.
+"
+"['reinforcement-learning', 'deep-learning', 'dqn']"," Title: Reinforcement Learning applied to Optimisation ProblemBody: Problem Statement: We are given an optimisation problem; with production centres, source airport, destination airports, transfer points and finally delivered to the customers. This is better explained in the following picture.
+Objective function 1:
+Minimise costs = inventory costs + transportation costs + penalty costs + loading/unloading costs
+
+- Inventory costs = inventory cost at source airport + inventory costs at distribution centres
+
+- Transportation costs = cost of transporting cargo from production centre to source airport (via trucks) + cost of transporting cargo through itineraries (via flight) + cost of transporting cargo from distribution centre to transfer points (via trucks) + cost of transporting cargo from transfer point to customers (via drones)
+
+- Penalty costs = cost of operating flight routes and delay penalty costs
+
+- Loading/unloading costs = cost of loading cargo on trucks at production centres + cost of unloading cargo from trucks at the transfer point
+
+
+Mathematical Solution (Using IBM CPLEX solver / Docplex): The complete python code (.ipynb file) with the formulation is present in this Google Drive Link. This gives an optimal solution.
+Query: Is there any non-mathematical, non-formulation based method to solve this problem statement? Something on the lines of Reinforcement Learning? If any implementation is also provided, it will be icing on the cake.
+"
+"['comparison', 'multilayer-perceptrons', 'symbolic-ai', 'universal-approximation-theorems']"," Title: Are the capabilities of connectionist AI and symbolic AI the same?Body: The universal approximation theorem says that MLP with a single hidden layer and enough number of neurons can able to approximate any bounded continuous function. You can validate it from the following statement
+
+Multilayer Perceptron (MLP) can theoretically approximate any bounded, continuous function. There's no guarantee for a discontinuous function.
+
+We can express any MLP in terms of algebraic expressions. And the expressions can be considered as symbolic-AI.
+So, can I infer that symbolic AI algorithms can theoretically approximate any bounded continuous function?
+If not, then why can't there be a one-one mapping between MLP and symbolic-AI algorithm?
+"
+"['machine-learning', 'classification', 'python', 'mnist']"," Title: What model to train to restore MNIST test datasetBody: I came across this problem, and am not sure where to start. What model would work best for this problem and why?
+Imagine the digits in the test set of the MNIST dataset (http://yann.lecun.com/exdb/mnist/) got cut in half vertically and shuffled around. Implement a way to restore the original test set from the two halves, whilst maximizing the overall matching accuracy.
+"
+"['u-net', 'skip-connections']"," Title: How does the skip connection match its dimension to the same layer in the expansive path?Body: According to the U-Net architecture image from the second page of the research paper (URL link) https://arxiv.org/pdf/1505.04597.pdf
+How does the skip connection match its dimension to the same layer in the expansive path?
+"
+"['multilayer-perceptrons', 'symbolic-ai', 'universal-approximation-theorems', 'incompleteness-theorems']"," Title: Does Godel's incompleteness theorems restricts the scope of connectionist-AI?Body: It is well-known that Godel's incompleteness theorems restricted the reachability of symbolic-AI, which is dependent on mathematical logic.
+But, I am wondering whether it has any impact on the connectionist AI.
+I don't think it has any impact on the capability of connectionist AI because of the following reasons I am aware of
+
+- Connectionist-AI is more focused on generalization and is not about mathematical logic.
+- The universal approximation theorem, contrary to Godel's incompleteness theorems, says that connectionist-AI is capable of approximating all bounded continuous functions. I am not sure about the implications of Godel's incompleteness theorems on either unbounded or discrete functions.
+
+So, the incompleteness theorems seem to have no impact on the connectionist AI.
+Do the theorems also restrict the reachability of connectionist-AI?
+"
+"['neural-networks', 'variational-autoencoder', 'random-variable', 'probabilistic-graphical-models']"," Title: For the VAE, should the input, output and latent variable code be random variables?Body: For a variational autoencoder, we have input $x$ (assume 1 data point for now, like an image), a latent code sampled from the decoder, $z$, and an output $\hat{x}$.
+If I were to draw a diagram for the VAE with the input, output, and latent code sample, is it appropriate to write those three as random variables/vectors? Or as instances of random variables/vectors?
+I thought it was random variables/vectors, but I saw this discussion, where they talk about the dataset being instances.
+"
+"['neural-networks', 'feedforward-neural-networks', 'graphs']"," Title: How to uniquely associate a directed graph with a feedforward neural network?Body: I want to write an algorithm that returns a unique directed graph (an adjacency matrix) that represents the structure of a given feedforward neural network (FNN). My idea is to deconstruct the FNN into the input vector and some nodes (see definition below), and then draw those as vertices, but I do not know how to do so in a unique way.
+Question: Is it possible to construct such an algorithm, and if so, how would you formalize it?
+
+Example [Shallow Feedforward Neural Network (SNN)]
+To illustrate the problem, consider an SNN, defined as a mapping $f=\left(f_1(\mathbf{x}), \ldots, f_m(\mathbf{x})\right): \mathbb{R}^n\rightarrow\mathbb{R}^m$ where for $k=1,\ldots,m$
+\begin{align}
+ f_k(\mathbf{x}) &= \sum_{j=1}^{\ell} w_{j,k}^{(2)} \rho \left( \sum_{i=1}^n w_{i,j}^{(1)} x_i + w_{0,j}^{(1)} \right) + w_{0,k}^{(2)}, \quad \mathbf{x}=(x_1,\ldots,x_n)\in\mathbb{R}^n
+\end{align}
+and $w_{i,j}^{(k)}\in\mathbb{R}$ is fixed for all $i,j,k \in \mathbb{N}$ and $\rho:\mathbb{R}\rightarrow\mathbb{R}$ is a continuous mapping.
+I want to determine the nodes that make up the FNN, where a node $N^{\rho}: \mathbb{R}^n\rightarrow\mathbb{R}$ is defined as a mapping
+\begin{align} \label{eq:node}
+    N^{\rho}(\mathbf{x}) &= \rho\left(\sum_{i=1}^n w_i x_i + w_0 \right), \quad \mathbf{x}=(x_1,\ldots,x_n)\in\mathbb{R}^n
+\end{align}
+where $\mathbf{w}=(w_0, \ldots,w_n)\in\mathbb{R}^{n+1}$ is fixed.
+Clearly (to me) I can write each $f_k$ as
+\begin{align}
+ f_k(\mathbf{x}) &= \sum_{j=1}^{\ell} w_{j,k}^{(2)} N^{\rho}_j(\mathbf{x}) + w_{0,k}^{(2)},
+\end{align}
+where $N^{\rho}_{j}$ is a node for $j=1,\ldots,\ell$. Now I see that $f_k$ is a node which takes as input the output of other nodes. But how can I formalise this in an algorithm? And does it generalize to Deep Feedforward Neural Networks?
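+For instance, a naive construction I have considered for the shallow case above (treating each input coordinate and each node as a vertex) is:
+import numpy as np
+
+def snn_adjacency(n, l, m):
+    # adjacency matrix of the directed graph of an SNN with n inputs,
+    # l hidden nodes and m output nodes (fully connected between layers)
+    size = n + l + m
+    A = np.zeros((size, size), dtype=int)
+    A[:n, n:n + l] = 1       # edges: input coordinates -> hidden nodes
+    A[n:n + l, n + l:] = 1   # edges: hidden nodes -> output nodes
+    return A
+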
+"
+"['reinforcement-learning', 'sarsa']"," Title: Why are we choosing more than 1 action in SARSA?Body: Here is the pseudocode for SARSA (which I took from here)
+
+Why are we choosing more than 1 action in SARSA? One for going into the next state and the other one for updating the Q function?
+"
+"['reinforcement-learning', 'sarsa']"," Title: Are we choosing the same action in every step in SARSA?Body: Here is the pseudocode for SARSA (which I took from here)
+
+Do we only select one action at the very beginning and then we always choose the same action for each step? Does it really make sense to choose the same initially chosen action $a$ regardless of the state $s$?
+"
+"['reinforcement-learning', 'sarsa', 'on-policy-methods', 'exploration-strategies']"," Title: Are the two policies in SARSA for choosing an action the same?Body: Here is the pseudocode for SARSA (which I took from here)
+
+Are the two policies in SARSA for choosing an action equal? I guess yes, because it is called an on-policy learning algorithm. But could I, for example, also use different policies in this SARSA framework? For example an $\epsilon$-greedy policy and a softmax policy. Maybe the resulting algorithm would not be called SARSA anymore but it would be something similar.
+"
+"['reinforcement-learning', 'q-learning', 'sarsa', 'bootstrapping']"," Title: How is $Q(s', a')$ calculated in SARSA and Q-Learning?Body: I have a question about how to update the Q-function in Q-learning and SARSA. Here (What are the differences between SARSA and Q-learning?) the following updating formulas are given:
+Q-Learning
+$$Q(s,a) = Q(s,a) + \alpha (R_{t+1} + \gamma \max_aQ(s',a) - Q(s,a))$$
+SARSA
+$$Q(s,a) = Q(s,a) + \alpha (R_{t+1} + \gamma Q(s',a') - Q(s,a))$$
+How can we calculate the $Q(s', a')$ in both SARSA and Q-learning for updating the Q-function? After having taken an action $a$ at state $s$, we get the reward $r$, which we can observe. But we cannot observe $Q(s',a')$ from the environment as far as I see it.
+Can anyone maybe think about a comprehensive example where you can see how it is done (or a link to a website)?
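+To make the question concrete, here is how I currently understand both updates in tabular form, with Q just being a table that the bootstrap values are read from - which is exactly the part I am unsure about (the sizes and hyperparameters are placeholders):
+import numpy as np
+
+n_states, n_actions, alpha, gamma = 10, 4, 0.1, 0.99
+Q = np.zeros((n_states, n_actions))
+
+def q_learning_update(s, a, r, s_next):
+    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
+
+def sarsa_update(s, a, r, s_next, a_next):
+    # a_next is the action actually chosen in s_next by the behaviour policy
+    Q[s, a] += alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])
+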
+"
+"['machine-learning', 'reference-request', 'algorithm-request', 'model-request']"," Title: How to train an ML model to convert the given lyrics into a song by a particular singer?Body: I am interested in training a machine algorithm to convert the lyrics I give into a song by a particular singer.
+My language is non-English (South Indian). The songs are mostly monophonic (very few instruments, if any). I have data of a bunch of songs sung by this singer; I want to try new lyrics and imagine how the singer would have sung them.
+"
+"['reinforcement-learning', 'papers']"," Title: What is the meaning of the shaded area in the reinforcement learning literature graphs?Body: In most of the reinforcement learning literature, I see that there is a shaded area in the graphs. I couldn't understand what it exactly represents?
+For example, from the A3C paper:
+
+Or another example from the PPO paper:
+
+Is it for multiple runs or is it for something else? How can I reproduce such graphs (which library and what type of data from my training episodes do I need)?
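+For reference, the kind of plot I am trying to reproduce seems producible with something like the following (assuming reward curves collected from several runs with different random seeds):
+import numpy as np
+import matplotlib.pyplot as plt
+
+# rewards: one learning curve per run/seed, shape (n_runs, n_episodes)
+rewards = np.cumsum(np.random.randn(5, 200), axis=1)   # placeholder data
+
+mean = rewards.mean(axis=0)
+std = rewards.std(axis=0)
+episodes = np.arange(rewards.shape[1])
+
+plt.plot(episodes, mean)
+plt.fill_between(episodes, mean - std, mean + std, alpha=0.3)   # the shaded area
+plt.xlabel('Episode')
+plt.ylabel('Average reward')
+plt.show()
+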
+"
+"['objective-functions', 'generative-adversarial-networks', 'logarithm']"," Title: Why are logarithms used in GANs minimax equation?Body: The minimax equation for generative adversarial networks
+$$\min_G \max_D V(D,G) = \mathbb{E}_{\boldsymbol{x}\sim p_{data}(\boldsymbol{x})}[\log D(\boldsymbol{x})] + \mathbb{E}_{\boldsymbol{z}\sim p_{\boldsymbol{z}}(\boldsymbol{z})}[\log(1 - D(G(\boldsymbol{z}))] $$
+Why do we use logarithms instead of just
+$$\min_G \max_D V(D,G) = \mathbb{E}_{\boldsymbol{x}\sim p_{data}(\boldsymbol{x})}[ D(\boldsymbol{x})] + \mathbb{E}_{\boldsymbol{z}\sim p_{\boldsymbol{z}}(\boldsymbol{z})}[(1 - D(G(\boldsymbol{z}))] $$
+"
+"['objective-functions', 'generative-adversarial-networks', 'gradient-descent', 'numerical-algorithms']"," Title: GANs: Why does iterative gradient descent sometimes optimise $\min_G \max_D V(D,G)$ and sometimes $\max_D \min_G V(D,G)$?Body: For the following minimax equation for generative adversarial networks (GANs),
+$$\min_G \max_D V(D,G) = \mathbb{E}_{\boldsymbol{x}\sim p_{data}(\boldsymbol{x})}[\log D(\boldsymbol{x})] + \mathbb{E}_{\boldsymbol{z}\sim p_{\boldsymbol{z}}(\boldsymbol{z})}[\log(1 - D(G(\boldsymbol{z}))] $$
+Goodfellow mentions in his paper (https://arxiv.org/pdf/1406.2661.pdf, page 3, 1st paragraph under equation 1) that in practice this minimax game is implemented iteratively (algorithm 1 in the paper).
+In one of his tutorials (1:31:52 - 1:33:16) he mentions that $\min_G \max_D V(D,G)$ is desirable, whereas $\max_D \min_G V(D,G)$ leads to mode collapse. Iterative gradient descent can "sometimes act" like the former, and sometimes like the latter.
+I am confused about how the iterative method can sometimes act like min-max or max-min. What about gradient descent causes the change in behavior?
+"
+"['recurrent-neural-networks', 'datasets', 'semantics', 'vector-semantics', 'bidirectional-rnn']"," Title: What is all necessary types of data for a bidirectional RNN to learn embeddings?Body: Bidirectional RNNs are used for generating the semantic vectors of the text at the sentence level and word level.
+In order to train a CNN for the classification tasks, images, and labels/outputs are required in general.
+If the required data depends on the algorithm used, then please consider the algorithm that uses the fewest possible types of data. Suppose we need to perform a classification task on images: then, in general, images with corresponding labels are expected. But there may be algorithms that also work better with fewer labels available. Still, the types of data in both cases are the same: images and labels.
+I want to know such bare minimum types of data requirements for a bidirectional RNN. Afaik, a text file is enough for it to get semantic vectors since learning in text processing generally happens based on the distributional hypothesis.
+Am I correct? If not, then what should be the other necessary data requirements for a bidirectional RNN to generate semantic embeddings?
+"
+"['machine-learning', 'python', 'algorithm-request', 'clustering', 'scikit-learn']"," Title: What is the best machine learning algorithm for clustering dots based on coordinates $(x,y)$ with consideration of weight of the points?Body: I'm looking for a machine learning algorithm for clustering points based on their coordinates. Furthermore, I want to take into consideration the weights of each point. Suppose there is a weight in each point. Then we take the sum of the weight of all the points in a cluster. I want the sum in different clusters to be close and balanced. What algorithm is best for this? Any suggestion?
+"
+"['terminology', 'recurrent-neural-networks', 'long-short-term-memory']"," Title: Is a recurrent layer same as LSTM or single-layered LSTM?Body: In MLP, there are neurons that form a layer. Each hidden layer gives a vector of number that is the output of that layer.
+In CNN, there are kernels that form a convolutional layer. Each layer gives feature maps that are the output of that layer.
+In LSTM, there are cells that form a recurrent layer. Each layer gives a sequence that is the output of that layer.
+This is my understanding of the basic terminology regarding MLP, CNN, and LSTM.
+But consider the following description regarding the number of layers in LSTM in PyTorch
+
+num_layers – Number of recurrent layers. E.g., setting num_layers=2
+would mean stacking two LSTMs together to form a stacked LSTM, with
+the second LSTM taking in outputs of the first LSTM and computing the
+final results. Default: 1
+
+The description uses the "number of recurrent layers" and the "LSTM" in a similar manner. How I can understand this? Is it costmary to consider a recurrent layer as an LSTM?
+"
+"['deep-learning', 'terminology', 'recurrent-neural-networks', 'statistics']"," Title: What does ""statistical strength"" mean in this context?Body: Consider the following excerpt from a paragraph taken from chapter 10: Sequence Modeling: Recurrent and Recursive Nets of the textbook named Deep Learning by Ian Goodfellow et al regarding the advantages of RNN over full traditional MLP.
+
+To go from multilayer networks to recurrent networks, we need to take
+advantage of one of the early ideas found in machine learning and
+statistical models of the 1980s: sharing parameters across different
+parts of a model. Parameter sharing makes it possible to extend and
+apply the model to examples of different forms(different lengths, here)
+and generalize across them. If we had separate parameters for each
+value of the time index, we could not generalize to sequence lengths
+not seen during training, nor share statistical strength across
+different sequence lengths and across different positions in time. Such
+sharing is particularly important when a specific piece of information
+can occur at multiple positions within the sequence.
+
+The authors used the phrase "statistical strength". Do they mean the strength of RNN in learning the embeddings of a word based on its context rather than its position in input, if it occurs in several inputs? Or does it mean that RNN uses fewer parameters to generalize in a better way compared to a traditional MLP? Or do they mean something else?
+"
+"['deep-learning', 'recurrent-neural-networks']"," Title: When does an RNN use the connections that help in going backward in time?Body: Consider the following paragraph taken from chapter 10: Sequence Modeling: Recurrent and Recursive Nets of the textbook named Deep Learning by Ian Goodfellow et al mentioning the connections of RNN to go backward in time.
+
+For the simplicity of exposition, we refer to RNNs as operating on a
+sequence that contains vectors $x^{(t)}$ with the time step index $t$
+ranging from 1 to $\tau$. In practice, recurrent networks usually
+operate on minibatches of such sequences, with a different sequence
+length $\tau$ for each member of the minibatch. We have omitted the minibatch
+indices to simplify notation. Moreover, the time step index need not
+literally refer to the passage of time in the real world. Sometimes it
+refers only to the position in the sequence. RNNs may also be applied
+in two dimensions across spatial data such as images, and even when
+applied to data involving time, the network may have connections
+that go backward in time, provided that the entire sequence is
+observed before it is provided to the network.
+
+The paragraph says that the RNN can go back in time if and only if the entire sequence is provided. So, I suspect that this happens only during the backpropagation/backward pass.
+Am I right? Or is it possible for an RNN to use those connections during the forward pass as well?
+"
+"['variational-autoencoder', 'notation', 'variational-inference']"," Title: Why do we use $q_{\phi}(z \mid x^{(i)})$ in the objective function of amortized variational inference, while sometimes we use $q(z)$?Body: In page 21 here, it states:
+
+General Idea of Amortization: if same inference problem needs to be solved many times, can we parameterize a neural network to solve it?
+Our case: for all $x^{(i)}$ we want to solve:
+$$
+\min _{q(z)} \mathrm{KL}\left(q(z) \| p_{\theta}\left(z \mid x^{(i)}\right)\right)
+$$
+Amortized formulation:
+$$
+\min _{\phi} \sum \operatorname{KL}\left(q_{\phi}\left(z \mid x^{(i)}\right) \| p_{\theta}\left(z \mid x^{(i)}\right)\right)
+$$
+
+One thing I am trying to wrap my mind around is $q(z)$ in the 1st formulation vs $q_{\phi}(z \mid x^{(i)})$ in the second.
+Why is one conditioned on $x^{(i)}$ while the other is not? I know that in the 1st formulation, we are trying to find a different $q(z)$ for each datapoint.
+I also recall that in VAEs, which use amortized inference, we consider $q(z)$ to be the aggregated posterior, like
+$$q_{\phi}(z)=\int q_{\phi}(x, z) d x \quad \text { Marginal of } q_{\phi}(x, z) \text { on } z$$
+$$q_{\phi}(x, z) \equiv p_{\mathcal{D}}(x) q_{\phi}(z \mid x)$$
+(formulas taken from here)
+"
+"['comparison', 'recurrent-neural-networks']"," Title: Is there any relation between the recursive neural network and recurrent neural network?Body: Recurrent neural networks, abbreviated as RNNs, are widely used in deep learning literature, especially for text processing.
+Are they related to recursive neural networks in any way?
+I am asking for the general/special relationship that enables us to view the one in terms of another if possible.
+"
+"['neural-networks', 'feedforward-neural-networks']"," Title: Can I help my neural network if I know the sign of the relationships between inputs and outputsBody: I am attempting to train a neural network where I can say the following:
+For most inputs, I know the sign of the relationship between that input and several specific outputs. I.e. whatever set of values the inputs are set to, I can point to an individual input and say that, so long as the other inputs are unchanged, the output should monotonically increase/decrease as I change this input's value.
+This is not to say that the inputs don't interact with each other, they do, but the interaction never changes the sign (i.e. increasing vs decreasing) nature of the monotonic relationships between specific inputs and outputs.
+I am wondering how I can use this information to help my network learn fast and/or perform well.
+"
+"['comparison', 'recurrent-neural-networks', 'multilayer-perceptrons', 'universal-approximation-theorems', 'turing-completeness']"," Title: Is the capability of RNN more than the capability of MLP?Body: Consider the following excerpt paragraph taken from the section titled "Recurrent Neural Networks" of the chapter 10: Sequence Modeling: Recurrent and Recursive Nets of the textbook named Deep Learning by Ian Goodfellow et al on the computational graph of some computations.
+
+The recurrent neural network of ..... is universal in the sense that
+any function computable by a Turing machine can be computed by such a
+recurrent network of a finite size. The output can be read from the RNN
+after a number of time steps that is asymptotically linear in the
+number of time steps used by the Turing machine and asymptotically
+linear in the length of the input. The functions computable by a
+Turing machine are discrete, so these results regard exact
+implementation of the function, not approximations. The RNN, when used
+as a Turing machine, takes a binary sequence as input, and its outputs
+must be discretized to provide a binary output. It is possible to
+compute all functions in this setting using a single specific RNN of
+finite size. The “input” of the Turing machine is a specification of the
+function to be computed, so the same network that simulates this
+Turing machine is sufficient for all problems. The theoretical RNN used
+for the proof can simulate an unbounded stack by representing its
+activations and weights with rational numbers of unbounded precision.
+
+This paragraph clearly explains that RNN is capable of computing any computable function exactly and is the same as the Turing machine in terms of capability.
+Afaik, MLP is capable of approximating any continuous, bounded function.
+So, it seems to me that the RNN is more powerful than the MLP in terms of the capability of computing functions. An RNN can learn any function that an MLP can learn, and, in general, an RNN can learn functions that cannot be learned by any MLP.
+Am I correct? Or is there any issue in my interpretation or do more details need to be considered?
+"
+"['history', 'ai-field', 'turing-award']"," Title: Which computer scientists have received the Turing Award specifically for their contributions to Artificial Intelligence?Body: Many people have heard of Hinton, Bengio, and LeCun in recent years, given the popularity of deep learning and neural networks, and their contributions to this subfield of Artificial Intelligence. For their contributions, they have conjointly received the Turing Award in 2019 (although, in my view, a few other people could also have received this award for the same reasons).
+In addition to them, which computer scientists have received the Turing Award specifically for their contributions to Artificial Intelligence?
+For each of them, please, describe the specific reason why they were awarded and/or provide the links to the official site that announces this or the Turing lecture.
+Why am I asking this question? Alan Turing is considered one of the fathers, if not the father, of Artificial Intelligence and Computer Science. In particular, in addition to the development of Turing machines, which are widely studied in Theoretical Computer Science and Theory of Computation, he's also published the famous paper Computing Machinery and Intelligence in 1950, where he proposed what was later called the Turing test, and asked one of the most fundamental questions in AI: "Can machines think?". The Turing Award is given to people that make significant contributions to CS or AI, so I think we should remember all these people that have contributed to AI.
+"
+"['machine-learning', 'reinforcement-learning', 'game-ai', 'reward-functions', 'reward-design']"," Title: How to teach Machine Learning Agent to destroy replicating objects in a puzzle game?Body: I have an unusual but very interesting problem. I have a game that is very similar to Toon Blast (a puzzle mobile game). It's based on a Match-2 mechanic in which you can destroy 2 or more connected blocks and your goal is to complete all the required objectives (collect X color blocks, destroy 30 balloons, etc).
+I have tons of levels and the ML solver seems to perform very well for all kinds of obstacles - except Quicksand.
+Quicksand is a special object that replicates itself to a nearest tile whenever a user makes a move. If the user destroyed a quicksand in his turn, a quicksand won't be replicated. So basically the fastest way to destroy quicksand is to make sure you destroy as much quicksand as you can in each turn so it won't replicate and cover your board.
+I use ML-Agents from Unity (https://github.com/Unity-Technologies/ml-agents) and I just give the agent reward=1f whenever it completes an objective (destroy 1 obstacle) and I subtract 1f from reward whenever it performs a move.
+For simpler non-replicating obstacles it works perfectly. For example, you click 2 blocks next to a balloon - it will pop a balloon, add 1f as a reward and at the same time remove 1f for using a move.
+This way the agent learns to make as few moves as possible.
+Below you'll find how the Quicksand works and some simple obstacle - Balloon.
+Quicksand preview (sorry for bad quality, 2mb max size)
+
+Balloon preview
+
+My issue is that no matter what, I can't teach it to solve quicksand. With the above rewarding approach, I think this strange creature learned that by actually REPLICATING quicksand it gains more reward, because it can destroy it later (actually it doesn't, because moves give -1f, so it's more or less equal).
+I've tried not giving reward for the quicksand, so it only loses rewards by using moves. But it doesn't work either, I'm not sure why.
+Do you guys have any idea how this kind of thing should be taught?
+"
+"['backpropagation', 'variational-autoencoder']"," Title: How does backprop work through the random sampling layer in a variational autoencoder?Body: Implementations of variational autoencoders that I've looked at all include a sampling layer as the last layer of the encoder block. The encoder learns to generate a mean and standard deviation for each input, and samples from it to get the input's representation in latent space. The decoder then attempts to decode this back out to match the inputs.
+My question: How does backpropagation handle the random sampling step?
+Random sampling is not a deterministic function and doesn't have a derivative. In order to train the encoder, gradient updates must somehow propagate back from the loss, through the sampling layer.
+I did my best to hunt for the source code for tensorflow's autodifferentiation of this function, but couldn't find it. Here's an example of a keras implementation of that sampling step, from the keras docs, in which tf.keras.backend.random_normal
is used for the sampling.
+class Sampling(layers.Layer):
+ """Uses (z_mean, z_log_var) to sample z, the vector encoding a digit."""
+
+ def call(self, inputs):
+ z_mean, z_log_var = inputs
+ batch = tf.shape(z_mean)[0]
+ dim = tf.shape(z_mean)[1]
+ epsilon = tf.keras.backend.random_normal(shape=(batch, dim))
+ return z_mean + tf.exp(0.5 * z_log_var) * epsilon
+
+"
+"['neural-networks', 'machine-learning', 'optimization']"," Title: Why is there a Hessian diagonal approximation? And when can we use it?Body: This topic has been introduced in "Pattern Recognition and Machine Learning, Bishop, 2006", section 5.4.1. I am a bit confused about this method and I have two questions.
+
+- Why has this method attracted attention, or why was it developed?
+First, we want to compute the Hessian fast, so we try to approximate it in O(W) time, where W is the number of parameters.
+But then we see that, most of the time, this matrix is heavily non-diagonal.
+
+- My second question is: how can we know whether we can approximate the Hessian this way or not? Is there a hint/clue in the problem?
+
+
+Thanks in advance!
+"
+"['papers', 'generative-model', 'rewards', 'image-generation', 'data-compression']"," Title: Generative systems based on Schmidhuber's compression frameworkBody: In Driven by Compression Progress:
+A Simple Principle Explains Essential Aspects of
+Subjective Beauty, Novelty, Surprise,
+Interestingness, Attention, Curiosity, Creativity,
+Art, Science, Music, Jokes Jürgen Schmidhuber describes, using the idea of using compression progress on perceived data as an intrinsic reward signal:
+
+[Artists] create action sequences yielding interesting inputs, where interestingness is a measure of learning progress, for example [...] the saved number of bits needed to encode the data.
+
+Has such a (purely generative) system been created before? Say, one that creates 2d images?
+"
+"['deep-learning', 'backpropagation', 'models']"," Title: Deep Learning Architecture where outputs from two different inputs are used for error calculationBody: Is there a deep learning architecture where outputs of the same model with two different inputs are used for error calculation (backpropagation)?
+Workflow:
+Input1 -----> Model ------> Output1
+Input2 -----> Model ------> Output2
+Loss = criterion(Output1, Output2)
+Backpropagation(Loss)
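+For reference, a minimal sketch of this workflow in PyTorch (the model, criterion, and inputs are placeholders; the same weights are used for both forward passes):
+import torch
+import torch.nn as nn
+
+model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
+criterion = nn.MSELoss()                       # placeholder criterion comparing the two outputs
+optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
+
+input1 = torch.randn(32, 8)
+input2 = torch.randn(32, 8)
+
+output1 = model(input1)                        # same weights for both inputs
+output2 = model(input2)
+loss = criterion(output1, output2)
+
+optimizer.zero_grad()
+loss.backward()                                # gradients flow through both forward passes
+optimizer.step()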
+"
+"['comparison', 'recurrent-neural-networks', 'theory-of-computation', 'teacher-forcing', 'turing-completeness']"," Title: Can teacher forcing in RNN ensure Turing completeness?Body: RNN has the same capability as a universal Turing machine. But I am confused whether RNN holds the same capabilities if we use teacher forcing.
+Consider the following excerpts from paragraphs taken from the section titled "Teacher Forcing and Networks with Output Recurrence" of the chapter 10: Sequence Modeling: Recurrent and Recursive Nets of the textbook named Deep Learning by Ian Goodfellow et al.
+
+The network with recurrent connections only from the output at one time step to the hidden units at the next time step is strictly less
+powerful because it lacks hidden-to-hidden recurrent connections. For
+example, it cannot simulate a universal Turing machine. Because this
+network lacks hidden-to-hidden recurrence, it requires that the output
+units capture all the information about the past that the network will
+use to predict the future....... Models that have recurrent
+connections from their outputs leading back into the model may be
+trained with teacher forcing.
+
+The quoted portion says that the RNN in which recurrent connections only exist from the output at one time step to the hidden units at the next time step are less powerful and are not as capable as a universal Turing machine. And those neural networks can be trained with teacher forcing. Even though they are not trained using teacher forcing, they are not as capable as the universal Turing machines. But, I want to get clarity on the relation between the capability of RNN trained using teacher forcing and the capability of universal Turing machine.
+Is it true that if an RNN is trained with teacher forcing then it cannot simulate a universal Turing machine?
+"
+"['machine-learning', 'reference-request', 'recommender-system']"," Title: How do we give recommendations when users create/post content (like in YouTube)?Body: I've explored tools like amazon personalize, etc. for generating recommendations. It seems like amazon personalize is appropriate when all the content is with the company/a single entity. For example, in Netflix, all the content (the catalogue of movies, tv shows, etc.) is with them and they generate personalized movie/tv show recommendations.
+But what if there's a platform similar to Youtube, TikTok, where users can:
+
+- post content (users are continuously generating content)
+- view other users content and interact (like, share, repost, comment)
+- follow other users
+
+When there is user-generated content like this and users follow other users (meaning they probably want recommendations from users they follow), how do we give recommendations? What algorithms and tools can be used?
+Lots of content - handling the cold start problem
+And when there is user-generated content, there is going to be lots of content being generated every minute. So, how do we handle the cold start problem (i.e. how do we decide who to recommend all of this new influx of content to)? Usually, we might experiment with this new content, like recommend it to some users, see how they're responding and appropriately decide how to recommend this content. But when there is a very high frequency of content being created, how do we reduce the amount of time it takes to give recommendations/push the new content to users quickly?
+And does anybody know if the questions mentioned above can be addressed using Amazon Personalize itself (to some extent maybe)?
+"
+"['deep-learning', 'long-short-term-memory', 'time-series', 'regression']"," Title: Is my dataset a time series dataset? and should I use an LSTM?Body: I have a dataset where I am recording temperature after every 4milliseconds till 500 and another feature "conductivity value". The length of the dataset is around a 1000 rows. I need to find the conductivity value based on the temperature pattern.
+
+
+| t1 | t2 | t3 | .... | t5 | conductivity |
+|----|----|----|------|----|--------------|
+| 90 | 91 | 93 | .... | 96 | 0.34 |
+| 92 | 91 | 93 | .... | 95 | 0.36 |
+
+
+I am a bit confused about how to use this dataset in a time-series model such as an LSTM, because I have the whole time sequence in columns and I don't know the conductivity values at the intermediate steps t2, t3, t4.
+I think the dataset becomes a classification problem with the current format.
+Can you guys help me out?
+"
+"['pytorch', 'weights', 'linear-regression', 'tensor', 'bias']"," Title: Not able to understand Pytorch Tensor (Weight & Biases) Size for Linear RegressionBody: Below are the two tensors
+[[ 73.,  67.,  43.],
+ [ 91.,  88.,  64.],
+ [ 87., 134.,  58.],
+ [102.,  43.,  37.],
+ [ 69.,  96.,  70.]]
+
+[[ 56.,  70.],
+ [ 81., 101.],
+ [119., 133.],
+ [ 22.,  37.],
+ [103., 119.]]
+
+These are the weight that are added
+Weights and biases
+ w = torch.randn(2, 3, requires_grad=True)
+ b = torch.randn(2, requires_grad=True)
+
+I am not able to understand how the sizes of the tensors for the weights and biases are decided. Is there a common rule that we should follow when adding weights and biases to our model?
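+For context, here is how I understand these tensors are meant to be used, assuming the first tensor holds the inputs and the second the targets (a rough sketch):
+import torch
+
+inputs = torch.tensor([[ 73.,  67.,  43.],
+                       [ 91.,  88.,  64.],
+                       [ 87., 134.,  58.],
+                       [102.,  43.,  37.],
+                       [ 69.,  96.,  70.]])   # shape (5, 3): 5 samples, 3 features
+
+w = torch.randn(2, 3, requires_grad=True)     # 2 outputs x 3 input features
+b = torch.randn(2, requires_grad=True)        # one bias per output
+
+preds = inputs @ w.t() + b                    # (5, 3) @ (3, 2) + (2,) -> (5, 2)
+print(preds.shape)                            # torch.Size([5, 2]), same shape as the targets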
+"
+"['neural-networks', 'deep-learning', 'papers', 'image-net', 'flops']"," Title: Does higher FLOPS mean higher throughput?Body: I understand that FLOPS means floating-point operations per second, and throughput is the number of inputs (for example, images) per second. If a model has higher FLOPS, it means it performs faster.
+However, in the article Container: Context Aggregation Network, they show that:
+
+The container has higher FLOPS and less throughput, while the container-light has lower FLOPS and higher throughput.
+What is the reason for that?
+"
+"['convolutional-neural-networks', 'python', 'mask-rcnn']"," Title: Can I use a Mask R-CNN to detect a skin texture?Body: I'm trying to implement a solution in python to detect skin in an image.
+I'm evaluating the Mask R-CNN model to create a mask on the skin (not on clothes). The problem is that every solution I have encountered using Mask R-CNN uses it to classify objects. I'm afraid that using it trying to classify texture might be a problem. Is it the case?
+My dataset is actually pretty good, composed of the
+
+- original image,
+- precise mask on the skin, and
+- bounding box.
+
+Can I use a Mask R-CNN to detect a skin texture?
+"
+"['logic', 'knowledge-representation', 'ontology']"," Title: How do I show the relationship between theories and models using Conceptual Graphs?Body: The Mereology Theory below contains three first-order axioms that represent a part of a mereology theory. For this posting, it is important that the set of axioms should be considered as a theory.
+Mereology Theory
+Reflexivity $\forall x : part(x,x)$
+Antisymmetry $\forall x \forall y : ((part(x,y) \land part(y,x)) \implies (x = y))$
+Transitivity $\forall x \forall y \forall z :((part(x,y) \land part(y,z)) \implies part(x,z))$
+Here is my naïve attempt to present the theory as a set of conceptual graphs (CG).
+
+My understanding of the above CGs is as follows:
+
+- The variables are universally quantified, not default for CGs, but allowed in extended CG (ECG).
+- The inner graphs are all related by conjunction, which is default for GCs and I assume for ECGs.
+- The arrow on graph representing reflexivity is bi-directional.
+- Both antisymmetry and transitivity are represented by an IF-THEN contexts.
+- Dotted lines are co-references.
+- Equality (=) is actually commutative, but is represented as a directed relation .
+- Each inner graph asserts a single proposition, labelled Proposition.
+- The outer graph is labeled MereologyTheory, I am not sure that this is correct ECG syntax.
+
+Below is a possible model of the above theory:
+Mereology Model Mathematical notation
+$Entities = \{ a,b \}$
+$Relations = \{part(a,a),part(a,b),part(b.b)\}$
+Obviously there are many other possible models. MereologyModel below is my attempt to visualize this model as a CG. I am not sure that putting the label MereologyModel is correct CG syntax or denotes it as a model of MereologyTheory .
+
+Question:
+In CG visual notation, how are theories and models related?
+It seems to me that basic CGs can represent FOL sentences and the relation between such sentences. According to Chein and Mugnier the subsumption relation is defined by graph homomorphisms between CGs. Is the model/theory relation for CGs also defined in terms of graph homomorphisms? I am aware that in general a model satisfies a theory i.e. $M \vDash T$. Does the graph homomorphism provide the necessary syntactic mapping to enable model and theory to be related?
+Note CGs can be represented in Common Logic (ISO zipped PDF), which would permit a formal proof that all the axioms of the theory are satisfied in the model.
+"
+"['recurrent-neural-networks', 'testing', 'teacher-forcing']"," Title: How to do testing for an RNN that was trained with teacher forcing only?Body: If an RNN is trained using only the teacher forcing, then the network takes the actual output from the previous time step as input to the hidden state the next time step.
+We know that the actual outputs cannot be given to the model while testing, then what information passes from a time step to the next time step in the test phase?
+"
+"['machine-learning', 'deep-learning', 'objective-functions', 'optimization', 'geometric-deep-learning']"," Title: a loss for binary step function dataBody: I have some data with ground truth that looks like a binary step function
, where part of it is 0 and part is one.
+An example for the GT can be like 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0
+or something like this
+
+I have a hard time to come up with a loss function that can optimize this problem, the simplest option would be something like CrossEntropy or BinaryCrossEntropy, but I am wondering if there is any other loss that I try.
+Something that can take into account the property that when the GT is one (1
) it is continuous 1
and when it is zero it is continuous.
+To give a little more information, for example, I will never have a GT that be like this 0 0 1 0 1 0 1
also I will never have a GT like this 0 0 0 0 1 1 1 0 0 0 0 1 1 1 0 0 0
. In other words, I will have one time ones in a continuous way (it can be at the start or middle or end) but it wont be two discontinuous 1
s.
+Is there any loss function that take into account this properties?
+"
+"['reinforcement-learning', 'dqn', 'bellman-equations']"," Title: Would it be possible to enforce the same $s_{t + 1}$ between the model's estimate and the target function's Q-value?Body: Say I have a game of blackjack, and I am trying to teach a single forward-pass neural network to approximate the Q value of the current state and action.
+There are 3 inputs: The current card in hand, the cards in the deck, and the cards in the pile. It outputs the Q-value of two actions, namely, holding or adding the current card to the pile.
+My loss function is $$L(Q,Q_E)= \sum_{i}\left(Q(s,a_i)- Q_E(s,a_i)\right)^2,$$
+where $Q_E$ is the estimated Q-value of the current state from the policy network. And $Q$ is the target function, which is calculated using the Bellman equation.
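+To make the setup concrete, this is roughly how I compute the Bellman target from a replay transition (a sketch; q_target stands for the target network and the variable names are mine):
+import torch
+
+gamma = 0.99
+
+def bellman_target(reward, next_state, done, q_target):
+    # target for Q(s, a): r + gamma * max_a' Q_target(s', a'), no bootstrap if terminal
+    with torch.no_grad():
+        next_q = q_target(next_state).max(dim=-1).values
+    return reward + gamma * (1.0 - done) * next_q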
+As I understand it, the Bellman equation assumes the setting to be deterministic, meaning that, if you're in state $s_t$ as you take action $a_1$, you should always reach the same $s_{t+1}$.
+Of course, in blackjack, this is not the case, as the state $s_{t+1}$ is purely dependent on the card you draw, which is a stochastic process.
+Would it be possible to omit some of this noise or "stochasticity" by enforcing the same $s_{t + 1}$ between the model's estimate and the target function's Q-value?
+In other words, say we're in state $s_t$, and the target function picks action $a_1$ and draws a 10 reaching state $s_{10}$ as the next state. For the training of the policy network, it loads the state $s_t$ from the experience replay, it also picks action $a_1$ and draws a 7 reaching state $s_7$ as the next state.
+Would it somehow ruin the training, if I then just hardcoded it, such that the next state reverted to state $s_{10}$ if the policy network picked action $a_1$? Are there any counter-productive consequences to this?
+"
+"['computer-vision', 'image-recognition', 'image-net']"," Title: Validity of ImageNet for measurement of the model performanceBody: ImageNet dataset is an established benchmark for the measurement of the performance of CV models.
+ImageNet involves 1000 categories and the goal of the classification model is to output the correct label given the image.
+Researchers compete with each other to improve the current SOTA on this dataset, and the current state of the art is 90.88% top-1 accuracy.
+If the images involved only a single object and background - this problem would be well-posed (at least from our perceptual point of view). However, many images in the dataset involve multiple objects - a group of people, a person, and an animal - the task of classification becomes ambiguous.
+Here are some examples.
+The true class for this image is bicycle. However, there is a group of people. The model that recognizes these people would be right from the human point of view, but the label would be wrong.
+
+Another example is the fisherman with a fish called a tench.
+The model could have recognized the person, but be wrong.
+
+So, my question is: how much does the performance of the best models on ImageNet reflect their ability to capture the complicated and diverse image distribution, and how much of the final result on the validation set is accidental? When there are multiple objects present in the image, the network can predict any of them. The prediction can match the ground-truth class or can differ. And for the model that happens to be luckier, this benchmark will show better performance. The actual quality can be the same, in fact.
+"
+"['deep-learning', 'architecture', 'embeddings', 'input-layer']"," Title: Is there any way to force one input have more effect on model?Body: Now I am working on building a deep learning model for a regression problem. I used 50 inputs and try to add one new categorical input. The problem is that this one input is much more important than other inputs. I want to make it more influential than others and all I can think of now are the following three.
+
+- Just add it at the first layer like the other inputs.
+- Add the new categorical input to each layer (the model currently has 5 layers).
+- Feed it to an embedding layer first, increase its dimension, and concatenate it with the other inputs.
+
+Do these seem fine and are there any other ways to give more power to one input?
+"
+"['decision-trees', 'maximum-likelihood', 'density-estimation']"," Title: Optimize parametric Log-Likelihood with a Decision TreeBody: Suppose there are some objects with features, and the target is parametric density estimation. Density estimation is model-based. Parameters are obtained by maximizing log-likelihood.
+$LL = \sum_{i \in I_1} \log \left( \sum_{j \in K_i} \theta_j \right) + \sum_{i \in I_2} \log \left(1 - \sum_{j \in L_i} \theta_j\right)$
+Assume that the parameters $\theta_j$ are probabilities, i.e. $0 < \theta_j < 1$, and that $\sum_{j\in L_i} \theta_j < 1$. From a practical perspective, it seems natural to make the parameters $\theta_j$ themselves functions of features, i.e. $\theta_j = F(x_j^1, \ldots, x_j^m)$.
+Is there any known standard method or heuristic to optimize such objective with a decision tree, i.e. we assume that our function $F$ is a decision tree?
+Any related results are welcome.
+"
+"['computer-vision', 'optical-flow']"," Title: How does Horn–Schunck method for Optical Flow solve the aperture problem?Body: This is regarding the details stated in Wikipedia.
+I am reading about optical flow in Computer Vision. I understood the Horn–Schunck method as such, but did not get how it is related to the aperture problem, and how that problem is solved using the Horn–Schunck method.
+Also, why was the Horn–Schunck method invented/used when a simpler "Lucas–Kanade method" already exists (Reference)?
+"
+"['machine-learning', 'computational-learning-theory', 'pac-learning']"," Title: What is the relevance of the concept size to the time constraints in PAC learning?Body: My question is about the relevance of concept size to the polynomial-time/example constraints in efficient PAC-learning. To ask my question precisely I must first give some definitions.
+Definitions:
+Define the input space as $X_n = {\left\{ 0,1\right\} }^n$ and a concept $c$ as a subset of $X_n$. For example, all vectorized images of size $n$ representing a particular numeral $i$ (e.g. '5') collectively form the concept $c_i$ for that numeral. A concept class $C_n$ is a set of concepts. Continuing our example, the vectorized numeral concept class $\left\{ c_0, c_1, \dots c_9\right\}$ is the set of all ten vectorized numeral concepts for a given dimension $n$.
+As an extension to include all dimensions we define $\mathcal{C} = \cup_{n \geq 1} C_n$. A hypothesis set $H_n$ is also a fixed set of subsets of $X_n$ (which might not necessarily align with $C_n$) and we define $\mathcal{H} = \cup_{n \geq 1} H_n$.
+The following definition of efficient PAC-learnability is adapted from An Introduction to Computational Learning Theory by Kearns and Vazirani.
+$\mathcal{C}$ is efficiently PAC-learnable if there exists an algorithm $\mathcal{A}$ such that for all $n \geq 1,$ all concepts $c \in C_n$, all probability distributions $D$ on $X_n$, and all $\epsilon, \delta \in \left(0,1\right)$, the algorithm halts within polynomial time $p{\left( n, \text{size}{\left(c\right)}, \frac{1}{\epsilon}, \frac{1}{\delta}\right)}$ and returns a hypothesis $h \in H_n$ such that, with probability at least $1 - \delta$, $$ \underset{x \sim D}{\mathbb{P}}{\left[ h{\left(x\right)} \neq c{\left(x\right)}\right]} \leq \epsilon.$$
+Question:
+Now, in the polynomial $p{\left(\cdot, \cdot, \cdot, \cdot \right)}$, I understand the dependence on $\epsilon$ (accuracy) and $\delta$ (confidence). Additionally, I understand why the polynomial should depend on $n$ - the concept of learnability should be invariant to the time burden incurred from increasing the dimension of the input space (e.g. increasing the resolution of the image). What I do not understand is why the dependence on the size of the target concept (which I believe is usually taken to mean the smallest encoding of the target concept)?
+"
+"['comparison', 'markov-decision-process', 'environment', 'pomdp']"," Title: Is there a fundamental difference between an environment being stochastic and being partially observable?Body: In AI literature, deterministic vs stochastic and being fully-observable vs partially observable are usually considered two distinct properties of the environment.
+I'm confused about this because what appears random can be described by hidden variables. To illustrate, take an autonomous car (Russell & Norvig describe taxi driving as stochastic). I can say the environment is stochastic because I don't know what the other drivers will do. Alternatively, I can say that the actions of the drivers are determined by their mental state, which I cannot observe.
+As far as I can see, randomness can always be modeled with hidden variables. The only argument I came up with as to why the distinction is necessary is Bell's inequality, but I don't think that AI researchers had this in mind.
+Is there some fundamental difference between stochasticity and partial observability or is this distinction made for practical reasons?
+"
+"['reinforcement-learning', 'policy-gradients']"," Title: How to mix grid matrix and explicit values when designing RL state?Body: I'm trying to do multi-agent reinforcement learning on the grid world navigation task where multiple agents try to collectively reach multiple goals while avoiding collisions with stationary obstacles and each other. As a constraint, each agent can only see within a limited range around itself.
+So on a high level, the state of each agent should contain both information to help it avoid collisions and information to guide it towards the goals. I'm thinking of implementing the former by including into the agent's state a matrix consisting of the grid cells surrounding the agent, which would show the agent where the obstacles are. However, I'm not sure how to include goal navigation information on top of this matrix. Currently I just flatten the matrix and append all relative goal locations at the end, and use this as the state.
+For example, for a grid world as shown below (0 means empty cell, 1 means agents, 2 means obstacles, and 3 represents goals):
+[[0 0 0 0 0 0 2 2 0 0]
+ [0 0 0 0 0 0 0 0 0 0]
+ [0 0 2 2 0 0 0 0 0 0]
+ [0 3 2 2 0 0 0 0 0 2]
+ [0 0 0 0 0 0 0 0 0 2]
+ [0 0 0 0 1 0 0 0 0 2]
+ [2 0 0 0 0 2 2 0 3 0]
+ [2 0 0 0 0 2 2 0 0 0]
+ [0 0 0 0 0 0 0 0 0 0]
+ [0 0 2 0 0 1 0 0 0 0]]
+
+The agent at row 5, col 4 sees the following cells that are within distance 1 around it:
+[[0. 0. 0.]
+ [0. 1. 0.]
+ [0. 0. 2.]]
+
+flattened, the matrix becomes:
+[0,0,0,0,1,0,0,0,2]
+
+The location of the goal at row 3, col 1 relative to the aforementioned agent is (5-3=2, 4-1=3)
+The location of the goal at row 6, col 8 relative to the aforementioned agent is (5-6=-1, 4-8=-4)
+So after appending the relative locations, the state of the agent becomes:
+[0,0,0,0,1,0,0,0,2,2,3,-1,-4]
+
+(Similar process for the other agent)
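+For concreteness, here is a rough numpy sketch of how I build this state (the function and variable names are my own, and padding at the grid borders is ignored for simplicity):
+import numpy as np
+
+def build_state(grid, agent_pos, goal_positions, view_radius=1):
+    r, c = agent_pos
+    # local observation: cells within view_radius around the agent, flattened
+    local = grid[r - view_radius:r + view_radius + 1,
+                 c - view_radius:c + view_radius + 1].flatten()
+    # relative goal locations: (agent_row - goal_row, agent_col - goal_col)
+    rel_goals = [d for (gr, gc) in goal_positions for d in (r - gr, c - gc)]
+    return np.concatenate([local, rel_goals]).astype(np.float32)
+
+grid = np.zeros((10, 10), dtype=int)
+grid[5, 4] = 1                                  # the agent
+grid[6, 5] = 2                                  # an example obstacle next to it
+print(build_state(grid, agent_pos=(5, 4), goal_positions=[(3, 1), (6, 8)]))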
+Is this a reasonable way of designing the state? My primary concern is that the flattened grid matrix and the relative goal locations need to be handled quite differently, but it can be hard for RL to figure out the difference.
+Thanks in advance!
+Edit: To validate my concern, I trained an agent using the REINFORCE policy-gradient algorithm. As I feared, the agent learned to avoid obstacles but otherwise just moved randomly without navigating towards the goals.
+"
+"['computer-vision', 'image-processing', 'affine-transformations', 'image-transformations', 'projective-transformations']"," Title: What is the expression for projective transformation?Body: The following are the two types are projections that are generally used in image processing
+
+- Affine transformation
+- Projective transformation
+
+Affine transformation is a backbone operation in neural networks also. It is expressed as
+$$\mathbf{wx+b}$$
+where $\mathbf{w, x, b}$ are matrices. In general, $\mathbf{x}$ is treated as an image in image processing.
+Projective transformation is also a type of transformation on images, and it may be different from affine transformation. I want to know whether it can be represented in terms of a mathematical expression.
+If yes, what is the expression for projective transformation?
+"
+"['optimization', 'topology']"," Title: Is it possible to find a good neural network structure without training it?Body: Neural networks consist of so many parameters. Researchers could create as many possible neural networks as they wish. So I want to ask a general question. Could we devise an evolutionary algorithm which learns an efficient structure without optimization?
+Are there some important works in this area?
+If we look at sparse neural networks, it seems that there are so many topologies that perform as well as a dense network.
+So a single task has so many solutions which differ slightly. So getting rid of optimization for many problems shouldn't be hard at all.
+Edit: Let me add some more information. I want to know whether we could find sparse topologies by mutating them (e.g. adding layers and changing the connections) without optimizing the loss function directly.
+"
+"['deep-learning', 'tensorflow', 'triplet-loss-function']"," Title: Triplet Loss- Three forward pass and one backward pass(Propagation)Body: I am trying to build a CNN model based on the concepts of Contrastive Learning. In specific based on Triplet loss.
+I have 5 different class labels and I create triplets such that in a triplet, two images are from the same class and the third one is from another class.
+I have a CNN model which takes one input from a triplet at a time and generates its corresponding embedding in 128 dimensions.
+All three embeddings from a triplet are used for calculating the loss. The loss is based on the Triplet loss.
+Further, the loss is backpropagated and training is carried out stochastically.
+The idea is to use the trained model to generate one embedding for an input image which can be further used for multi-class classification problems.
+My question is, is this method of 3 forward passes and 1 backward pass valid in Tensorflow?
+Here is a fragment of my code that I am using for training:
+def cnn():
+ model_input = layers.Input(shape=(112, 112, 3))
+ x = layers.Conv2D(filters=16, kernel_size=3, padding='same', name='Conv1')(model_input)
+ x = layers.MaxPool2D()(x)
+ x = layers.BatchNormalization()(x)
+ x = layers.ReLU()(x)
+
+ x = layers.Conv2D(filters=32, kernel_size=3, padding='same', name='Conv2')(x)
+ x = layers.MaxPool2D()(x)
+ x = layers.BatchNormalization()(x)
+ x = layers.ReLU()(x)
+
+ x = layers.Conv2D(filters=64, kernel_size=3, padding='same', name='Conv3')(x)
+ x = layers.MaxPool2D()(x)
+ x = layers.BatchNormalization()(x)
+ x = layers.ReLU()(x)
+
+ x = layers.Conv2D(filters=128, kernel_size=3, padding='same', name='Conv4')(x)
+ x = layers.MaxPool2D()(x)
+ x = layers.BatchNormalization()(x)
+ x = layers.ReLU()(x)
+
+ x = layers.Conv2D(filters=256, kernel_size=3, padding='same', name='Conv5')(x)
+ x = layers.MaxPool2D()(x)
+ x = layers.BatchNormalization()(x)
+ x = layers.ReLU()(x)
+
+ x = layers.GlobalAvgPool2D(name='GAP')(x)
+ output = layers.Dense(128, activation='tanh', name='Dense1')(x)
+ origin = tf.zeros_like(output, dtype=float)
+ unit_vector = tf.divide(output, tf.sqrt(tf.reduce_sum(tf.square(output-origin)))) # Normalize vector(L2_Norm)
+ shared_model = Model(inputs=model_input, outputs=unit_vector)
+ shared_model.summary()
+ return shared_model
+
+def triplets_loss(anchor_sample, positive_sample, negative_sample, alpha=0.2):
+ anchor_pos_dist = tf.sqrt(tf.reduce_sum(tf.square(anchor_sample - positive_sample))) # distance between positive pairs
+ anchor_neg_dist = tf.sqrt(tf.reduce_sum(tf.square(anchor_sample - negative_sample)))# distance between negative pairs
+ triplet_loss = tf.maximum(((anchor_pos_dist - anchor_neg_dist) + alpha), 0.000001) # triplet loss
+ return triplet_loss
+
+def train(train_data_dir, training_batch=4, lr=1e-4, epochs=100,margin=0.2):
+ model = cnn()
+ ### creating triplet data loader object ###
+ train_data_util_instance = TripletFormulator(data_path_dictionary=train_data_dir,
+ batch=training_batch)
+ train_data_array_dict, data_count = train_data_util_instance.data_loader()
+ majority_class = max(data_count, key=data_count.get)
+ ######
+ optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
+ for epoch in range(epochs):
+ total_train_loss = 0
+ start_time = time.time()
+ batch_no = 0
+ for majority_class_batch in train_data_array_dict[majority_class]:
+ batch_loss = 0
+ train_batch_dict = {}
+ train_batch_dict['A'] = next(iter(train_data_array_dict['A']))
+ train_batch_dict['B'] = next(iter(train_data_array_dict['B']))
+ train_batch_dict['C'] = majority_class_batch
+ train_batch_dict['D'] = next(iter(train_data_array_dict['D']))
+ train_batch_dict['E'] = next(iter(train_data_array_dict['E']))
+ train_triplets = train_data_util_instance.triplet_generator(train_batch_dict)
+ for triplets in train_triplets:
+ with tf.GradientTape() as tape:
+ anchor = model(tf.reshape(triplets[0], [-1, 112, 112, 3]))
+ positive = model(tf.reshape(triplets[1], [-1, 112, 112, 3]))
+ negative = model(tf.reshape(triplets[2], [-1, 112, 112, 3]))
+ if np.isnan(anchor).any() or np.isnan(positive).any() or np.isnan(negative).any():
+ print('NAN FOUND')
+ else:
+                        loss = triplets_loss(anchor_sample=anchor, positive_sample=positive, negative_sample=negative, alpha=margin)
+ total_train_loss += loss
+ batch_loss += loss
+ grads = tape.gradient(loss, model.trainable_weights)
+ optimizer.apply_gradients(zip(grads, model.trainable_weights))
+ print(epoch, batch_no, batch_loss)
+ batch_no += 1
+ end_time = time.time()
+ print('Training_loss: ', total_train_loss, 'Time_taken: ', end_time - start_time)
+
+When I start training, I can see the training loss converging. But the overall idea is confusing about the number of forward passes per backward pass. Also, is this method a concept of weight sharing?
+I would be very eager to discuss this topic as I do not see many problems similar to this.
+In most cases, the CNN model takes multiple inputs and generates multiple outputs, and further, the embeddings are used for binary classification, that is, whether the inputs are the same or not.
+I would be waiting for your comments and suggestions.
+"
+"['deep-learning', 'computer-vision', 'object-detection', 'object-recognition', 'semantic-segmentation']"," Title: How does the classification head of EfficientDet work?Body: EfficientDet outputs classes and bounding boxes. My question is about both but specifically I am interested in the class prediction net part. In the paper's diagram it shows 2 conv layers. I don't understand the code and how it works. And what's the difference between the 2 conv layers of classification and box prediction?
+
+"
+"['reference-request', 'embeddings', 'knowledge-graph', 'knowledge-graph-embeddings']"," Title: What are knowledge graph embeddings?Body: What are knowledge graph embeddings? How are they useful? Are there any extensive reviews on the subject to know all the details? Note that I am asking this question just to give a quick overview of the topic and why it might be interesting or useful, I am not asking for all the details, which can be given in the reference/survey.
+"
+"['deep-learning', 'hyper-parameters', 'gpu', 'batch-size']"," Title: Is it true that batch size of form $2^k$ gives better results?Body: I am confused among the following in selecting the batch size for my model.
+#1: powers of 2
+I generally see that batch sizes are in powers of two: 32, 64, 128, 256.
+#2: maximum GPU
+Suppose my GPU allows a maximum batch size of 61. And it is not a power of two.
+Which one should I adopt? Is it true that powers of 2 give relatively better results?
+"
+"['neural-networks', 'tensorflow', 'objective-functions', 'image-segmentation']"," Title: Custom Tensorflow loss function that disincentivizes all black pixelsBody: I'm training a Tensorflow model that receives an image and segments the image into foreground and background. That is, if the input image is w x h x 3
, then the neural network outputs a w x h x 1
image of 0
's and 1
's, where 0
represents background and 1
represents foreground.
+I've computed that about 75% of the true mask is background, so the neural network simply trains a model that outputs all 0
's and gets a 75% accuracy.
+To solve this, I'm thinking of implementing a custom loss function that checks if there are more than a certain percentage of 0
's, and if so, to add a very large number to the loss to disincentivize the all 0
's strategy.
+The issue is that this loss function becomes non-differentiable.
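+Here is a rough sketch of what I was thinking of (the threshold and penalty values are arbitrary), which also shows where the non-differentiable step comes in:
+import tensorflow as tf
+
+def penalized_loss(y_true, y_pred, max_background_frac=0.9, penalty=1e3):
+    bce = tf.reduce_mean(tf.keras.losses.binary_crossentropy(y_true, y_pred))
+    # fraction of pixels predicted as background (0)
+    background_frac = tf.reduce_mean(tf.cast(y_pred < 0.5, tf.float32))
+    # hard threshold: a step function with zero gradient almost everywhere
+    over = tf.cast(background_frac > max_background_frac, tf.float32)
+    return bce + penalty * over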
+Where should I go from here?
+"
+"['deep-learning', 'speech-recognition']"," Title: How to align or synchronize Youtube caption with audio accuratelyBody: I need to use the automatic caption from Youtube to precisely isolate excerpts from the video aligned to text and generate the dataset to train a model in French.
+So I've already written the script, but when I compare the audio with the matching text, I noticed that the text is often delayed (positive or negative). For example, the text reads "1 2 3 4" and the audio says "0 1 2 3" ("0" comes from the previous clip).
+If you have a look at a Youtube video in French, when you click on "open transcript", you can also notice this delay.
+Here is an example that is very noticeable on short clips: The audio says "conditions de travail" whereas the transcript reads "de travail".
+I measured the delay in Audacity and it is not consistent across the clips. Please note that it does not seem to happen in English videos.
+If I use Google Speech Recognition in Python (recognize_google) on audio clips, there are no such delays (also because the clips are already separated) but the punctuation is missing which is not good for training my model.
+Why can't Google align more accurately the audio and the text (caption)?
+Can you suggest a better way of aligning audio with text?
+"
+"['long-short-term-memory', 'time-complexity', 'computational-complexity', 'testing']"," Title: What is the time complexity for testing a stacked LSTM model?Body: In the data preparation phase, we have to divide the dataset into two parts: the training dataset and the test dataset.
+I have seen this post regarding the time complexity for training a model.
+However, I couldn't find any good source for the time complexity for testing a model, specifically, a stacked LSTM model (with 1 input, 3 layers, 4 LSTM units per LSTM layer, 1 output, sequence 18, and batch size 32 for MSE loss), based on the test set, e.g. the computation of the accuracy/loss on the test set given $N$ test examples.
+Is there any source to check that?
+"
+"['convolutional-neural-networks', 'autoencoders', 'multilayer-perceptrons']"," Title: Why is training all layers at a time effective for a multi-layer autoencoder?Body: This training of all layers of a CNN simultaneously is standard practice today. It is found in every CNN (AlexNet (2012), VGG, Inception, GANs, etc) and even pre-CNN networks such as Le et al. 2012.
+What is the advantage of training all the layers simultaneously? Wouldn't the later layers be learning from poor lower layers to start with, and have to re-learn to adapt? And why would there ever be an advantage for an autoencoder like Le et al. 2012 where there is no backpropagation to communicate from the later layers to the earlier layers?
+I think the conventional answer is that the lower layers can actually learn to provide low-level features that support the layers above. An example of this is learning to detect a horizontal yellow-blue feature to detect the water line in a beach scene.
+But couldn't the yellow-blue feature be found just as easily by training the lower layers first? This would be especially true of an autoencoder such as Le et al. 2012, which picks up on patterns in the training set without having ground truth-labels to group them.
+Citations to experiments or theoretical work that directly answers this question would be appreciated!
+This is a follow-on to an earlier question.
+"
+"['reinforcement-learning', 'deep-rl', 'tensorflow', 'weights-initialization', 'deepmind']"," Title: Why Acme is using own uniform initializer?Body: Why is Acme using own initializer for both tanh and ELU, when commonly used for tanh is Xavier and for ELU is He initializer? What mathematics is behind them?
+Here is the code.
+uniform_initializer = tf.initializers.VarianceScaling(
+ distribution='uniform', mode='fan_out', scale=0.333)
+
+"
+"['machine-learning', 'deep-learning', 'natural-language-processing']"," Title: Generating automatic sports commentary (NLG)Body: I am trying to develop a "simple" announcer for sports segments that mainly consists of events like goals, fouls, substitutions, and many other events that could happen in many sports. The idea is that I already have key info like the player who does the action, the location in the court, the time that it takes place, and more extra info. I also have information like the sport being played and the type of event, so this task is purely focused on NLG.
+The naïve idea that I had was to extract commentaries of soccer, which can be found on so many websites, and extract the key info of these commentaries that will act as the ground truth in the model for getting the input, i.e.,:
+['football', 'goal', 'Bob', 'fourth minute'] -> Goal! that was a nice goal from Bob in the fourth minute of the match.
+I would say that the first two words are being used for steering the model to generate phrases appropriate to the sport (a cricket player kicking the ball doesn't make sense).
+The comments generated by the fine-tuned model on this input-output are acceptable.
+The problem is that I have to build a dataset for many sports (or at least 2 with the same quality as soccer ones like in Flashscore) and I can't find any.
+I have also been looking for Plug and Play methods to generate sentences.
+What do you think? Is fine-tuning a must-do in this situation or can it be thought of in another way?
+"
+['reinforcement-learning']," Title: Should I represent my reinforcement learning as an episodic or continuous task?Body: I would like the community to help me understand if the following example would be better represented as episodic or continuous task, this will help me structure the problem and chose the right RL algorithm.
+The agent start with an initial score x
of let's say 100. The agent objective is to maximise it's score. There is no upper bound! Theoretically the agent can get a score up to infinity, and there is no termination based on the number of steps, therefore the agent could play forever. However, the score can't be negative and if the agent get to a score of zero, the episode should terminate and the environment reset. I am undecided what would be the best representation, because if the agent learns how to play, the episode would never terminate, and the agent would theoretically play forever. However if the score get to zero, there is no way for the agent to continue playing so the environment needs to reset. Thank you.
+"
+['reinforcement-learning']," Title: How to normalize rewards in DQN?Body: I want to use a Deep Q-Network for a specific problem. My immediate rewards ($r_t = 0$) are all zeros. But my terminal reward is a large positive value $(r_T=100$). How could I normalize rewards to stabilize the training? I think clipping rewards to be in range $[0,1]$ makes training harder because it just forces most values to be near zero.
+"
+"['neural-networks', 'objective-functions', 'backpropagation', 'linear-regression', 'cross-entropy']"," Title: Why is the cross-entropy a cost function?Body: The question looks foolish, but I think cross-entropy is somewhat weird as a cost function.
+As a cost function for linear regression, the mean square error $ \sum_{i=1}^{n} (y_i - (ax_i+b)) ^2$ seems quite reasonable, because it literally/directly measures the error between real value and predicted value.
+However, in the case of the cross-entropy, I do not understand what it is.
+For multi-class classification, for example, with 3 classes, the true target is $[ 0, 0, 1 ]$, while the output of the model is $[ 0.2, 0.3, 0.5 ]$ (maybe with a softmax activation at the last layer). So, the error of it is: $C(x) = -(0 \cdot \log(0.2) + 0 \cdot \log(0.3) + 1 \cdot \log(0.5))$.
+It looks... I don't know, why is it an "error?" How can it be updated with backpropagation?
+Also, what is the objective of it? Maybe optimization, so maybe minimizing error? Then what happens?
+"
+"['neural-networks', 'objective-functions', 'metric', 'evaluation-functions']"," Title: What inherent quality of a function makes it treated as either loss or evaluation metric?Body: A neural network model needs a loss function for training. The neural network needs to minimize the loss function.
+A neural network is evaluated after training using a metric. The neural network needs to either minimize or maximize the metric depending on the context.
+Suppose $L$ is the loss function used and $M$ is the metric/evaluation function. Assume the metric needs to be minimized and is calculated based on the output of the neural network. We can use $L+M$ as the loss function. It looks to me that it may be up to the choice of the designer to use a certain function $f$ for either $L$ or $M$, as both try to quantify how well or badly the model is working.
+But, in the literature, if we observe, there are some fixed loss functions and fixed metrics for evaluation depending on the underlying task.
+With this context, what is the inherent quality of the function that makes it treated as either loss or evaluation metric?
+"
+"['neat', 'neuroevolution']"," Title: NEAT: How to properly handle Node IDs and avoid Competing Conventions?Body: I'm working on yet another NEAT implementation for a personal project, and I feel like I'm missing something about the proposed solution to the Competing Conventions problem.
+Here's what I'm assuming:
+
+- Each new connection gene yields a new innovation number.
+
+
+The new connection gene created in the first mutation is assigned the number 7, and the two new connection genes added during the new node mutation are assigned the numbers 8 and 9.
+
+
+- If a connection from
X
to Y
appears more than once at the same generation, it receives the same innovation number.
+
+
+However, by keeping a list of the innovations that occurred in the current generation, it is possible to ensure that when the same structure arises more than once through independent mutations in the same generation, each identical mutation is assigned the same innovation number.
+
+
+- Every new node receives a globally new id. (I have no source for this. Maybe the problem is here)
+- Equal fitness is assumed at the example below, so all genes are randomly inherited (on this case the probability is 100%, for demonstration purposes).
+
+
+In this case, equal fitnesses are assumed on the example below, so the disjoint and excess genes are also inherited randomly.
+
+So we have two genomes after two generations, which suffered the exact same mutations at each generation: add_connection(0,2) at gen 1, add_node(1) at gen 2.
+
+Even though the topology is exactly the same for both, the genomes don't share innovation numbers, so the crossover yields a more complex topology.
+It seems to me that using global ids for nodes breaks the historical tracking of connections, which blows up the innovation counter pretty fast (around 1400 innovations in 100 gens for 100 individuals), and yields big, non-functional networks.
+What am I missing?
+"
+"['reinforcement-learning', 'reference-request', 'multi-armed-bandits']"," Title: Is there a paper/article on contextual $\epsilon$-greedy algorithm?Body: I am reading the paper A Contextual-Bandit Approach to Personalized News Article Recommendation, where it refers to $\epsilon$-greedy (disjoint) algorithm. I suspect, that it is just a version of a K-armed bandit with regressors that estimate the average reward for an arm. However, I cannot find the description of this algorithm in the literature (papers, books, or other resources)
+"
+"['reinforcement-learning', 'q-learning', 'gradient-descent', 'function-approximation', 'eligibility-traces']"," Title: Watkins' Q(λ) with function approximation: why is gradient not considered when updating eligibility traces for the exploitation phase?Body: I'm implementing the Watkins' Q(λ) algorithm with function approximation (in 2nd edition of Sutton & Barto).
+I am very confused about updating the eligibility traces because, at the beginning of chapter 9.3 "Control with Function Approximation", they are updated considering the gradient: $ e_t = \gamma \lambda e_{t-1} + \nabla \widehat{q}(S_t, A_t, w_t) $, as shown below.
+
+Nevertheless, in Figure 9.9, for the exploitation phase the eligibility traces are updated without the gradient: $ e = \gamma \lambda e $.
+
+Furthermore, by googling, I found that, for binary features, the gradient component for feature $i$ reduces to the value of that feature: $ \nabla \widehat{q}(S_t, A_t, w_t) = f_i(S_t, A_t)$.
+I thought that, in Figure 9.9, the gradient does not appear explicitly because, in the next step, the eligibility traces are increased by 1 for the active features. So the +1 can be seen as the value of the gradient for binary features, as I found on Google. But I'm not sure.
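+To make my doubt concrete, here is my reading of the two steps for linear function approximation with binary features (e is the trace vector, F_a the indices of the active features; this is only my interpretation, not the book's code):
+e = gamma * lam * e          # the decay step shown for the exploitation phase in Fig. 9.9
+for i in F_a:                # active features of the current state-action pair
+    e[i] += 1.0              # the +1 equals the gradient of q_hat w.r.t. w_i for binary features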
+So, what is (and why) the right rule to update the eligibility traces?
+"
+"['reinforcement-learning', 'math', 'policy-based-methods']"," Title: How to derive the dual function step by step in relative entropy policy search (REPS)?Body: TL:DR, (Why) is one of the terms in the expectation not derived properly?
+Relative entropy policy search or REPS is used to optimize a policy in an MDP. The update step is limited in the policy space (?) by the KL-divergence metric to stabilize the update. Based on the KL-divergence constraints, and some constraints about the definition of a policy, we can derive its Lagrangian, and its dual optimization problem afterwards. And lastly, we find the appropriate update step (delta) by solving the dual problem.
+However, I think we can also use it to find multiple optimal solutions in an optimization problem, just like (CMA)-evolutionary strategy algorithm.
+So, based on the original paper and a section of REPS in this paper, I'm trying to derive the dual problem.
+Suppose that we're finding set of solutions represented as a parametrized distribution $\pi(x|\theta)$ that maximizes $H(x)$. Suppose that the last parameters we came up with is denoted as $\hat{\theta}$, we find the optimal parameters $\theta$ by:
+
+max $\int_x H(x)\pi(x|\theta) dx$
+
+
+s.t. $\int_x \pi(x|\theta) dx = 1$
+
+
+$D_\text{KL}\left(\pi(.|\theta) || \pi(.|\hat{\theta})\right) \leq \epsilon $
+
+
+with $D_\text{KL}\left(\pi(.|\theta) || \pi(.|\hat{\theta})\right) = \int_x \pi(x|\theta)\log\frac{\pi(x|\theta)}{\pi(x|\hat{\theta})}$
+
+Based on the equations above, we can write the Lagrangian as follows:
+
+$L(\theta, \lambda, \eta) = \int_x H(x)\pi(x|\theta) dx +
+\lambda(1-\int_x \pi(x|\theta) dx) +
+\eta(\epsilon-\int_x \pi(x|\theta)\log\frac{\pi(x|\theta)}{\pi(x|\hat{\theta})})$
+
+Now, we can see that the term $\lambda(1-\int_x \pi(x|\theta) dx)$ is $0$, right? But here, it was not cancelled out. So, following the flow of the two papers, we can simplify the Lagrangian by treating the integral wrt $x$ as an expectation.
+
+$L(\theta, \lambda, \eta) = \lambda + \eta\epsilon + \underset{\pi(x|\theta)}{\mathbb{E}}\left[H(x) -\lambda -\eta \log\frac{\pi(x|\theta)}{\pi(x|\hat{\theta})} \right]$
+
+We will find the optimal $\pi(x|\theta)$ by solving $\frac{\partial L}{\partial \pi(x|\theta)} = 0$. Now, I got confused starting from this step.
+If I mindlessly copy/follow the notations from here, the derivative of $L$ wrt the policy parametrized by $\theta$ is:
+
+$\frac{\partial L}{\partial \pi(x|\theta)} = H(x) - \lambda - \eta \log\frac{\pi(x|\theta)}{\pi(x|\hat{\theta})}$
+
+Where does the integral wrt $x$ go? Is it because every term is multiplied by $\pi(x|\theta)$, so the integral/expectation can be dropped? If so, why does the derivative of the KL term inside the expectation reduce to just $\eta \log\frac{\pi(x|\theta)}{\pi(x|\hat{\theta})}$? Shouldn't the $\pi(x|\theta)$ inside the log produce additional terms when differentiated?
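+For concreteness, here is my own attempt at differentiating the integrand pointwise (so it may well be wrong), which is exactly where I get stuck:
+
+$\frac{\partial}{\partial \pi(x|\theta)}\left[\pi(x|\theta)\left(H(x) - \lambda - \eta \log\frac{\pi(x|\theta)}{\pi(x|\hat{\theta})}\right)\right] = H(x) - \lambda - \eta \log\frac{\pi(x|\theta)}{\pi(x|\hat{\theta})} - \eta$
+
+i.e. I seem to get an extra $-\eta$ term compared to the expression above.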
+"
+"['datasets', 'image-processing', 'data-labelling']"," Title: What are the ""per image"" annotations that are generally used for image datasets in AI?Body: Computer vision is highly benefited by AI algorithms. Image data is abundantly available. There are different varieties of tasks such as image classification, prediction, segmentation, generation, etc.
+Although collecting the folder(s) of images is mandatory, it may not be enough. Different types of annotations are used in datasets. Annotations can be treated as extra information attached to each image that helps the AI algorithm under consideration.
+I want to know the kinds of annotations, at the individual image level, that are generally used. The necessity of a particular type of annotation depends on the task under consideration, so I want to know the requirements for the currently prevalent tasks, including classification, prediction, segmentation, and generation. You are encouraged to mention more tasks if you are aware of them.
+I know the following types of annotations:
+
+- Bounding box(es)
+- Label
+
+What can be the other kinds of annotations used for images in image datasets?
+"
+"['reinforcement-learning', 'q-learning', 'function-approximation', 'features']"," Title: What is the difference between the $Q_a$ calculated to update delta and those to select next action in the exploitation phase?Body: As the title suggests, I have a doubt about the computation of the $Q_a$ used to update the delta and the $Q_a$ used to select the next action in the exploitation phase, as shown below (source of pseudocode in Figure 8.9).
+
+In both for loops enumerated in the image with 1 and 2, the $Q_a$ are calculated considering the new state, but while in 1 for each action the active features $F_a$ are calculated, in 2 they aren't. So what are the active features to consider in 2? Are they the same as those in 1? In that case, I could avoid recalculating the $Q_a$ by storing those calculated in 1.
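+For reference, this is how I currently compute them in loop 1 (a rough sketch; active_features is my own placeholder for the feature lookup):
+def q_values(state, w, actions):
+    F = {a: active_features(state, a) for a in actions}   # hypothetical tile-coding / feature lookup
+    Q = {a: sum(w[i] for i in F[a]) for a in actions}     # linear value: sum of weights of active features
+    return F, Q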
+"
+"['neural-networks', 'generative-adversarial-networks', 'performance', 'learning-rate']"," Title: GAN performance starts to get worse as training continuesBody: I'm currently trying to train a GAN to recreate similar images from a dataset. The dataset is using the Eiffel Tower Pictures from Googles Quick Draw dataset. The images aren't very large (only 12x12 pixels) and are all black and white.
+The performance initially increases at the expected rate; however, after a certain point the quality begins to go down, even though the costs of both the generator and discriminator networks seem to stay at steady values, as expected.
+
+
+
+I've tried changing the learning rate and other hyperparameters; however, they all end up with the same result of the network eventually getting worse.
+My learning rate is currently 0.01, which I'm aware is quite high compared to what other people are using, but anything lower takes too long to train, and the results don't seem any better even if I do give it long enough.
+Any pointers on what could be causing this or the specific name of this problem if it's a common issue with GANs would be appreciated.
+Thanks
+"
+"['recurrent-neural-networks', 'neat', 'neuroevolution']"," Title: Order of operations on sparse recurrent network alters the output. How to deal with it?Body: I'm working on an implementation of NEAT, which evolves neural networks with small and sparse topologies.
+Evaluating a sparse and possibly recurrent network requires a different approach than the matrix operations of dense networks, and I'm trying to wrap my head around the order in which nodes should be evaluated.
+I've set up a simple example:
+
+- Every node (except inputs) starts at 0. (t0)
+- Every weight is 1. There's no activation function or biases.
+
+
+Assuming that the algorithm has the same intuition we humans do: evaluate the nodes connected to the input, then the "aggregation" node, then the output.
+How should it decide whether to evaluate node [3] before node [4] or vice-versa?
+By starting (t0) at [3], the value of [4] is 0. And vice-versa.
+The side-effect of this behavior appears when comparing two networks. Say you have the network above, and a copy of it where [3] and [4] are inverted. I feel like both should return the same result for the same input in order for evolution to work properly.
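+One option I'm considering is a synchronous pass (sketch below; vals, weights, incoming and non_input_nodes are my own structures): read every value from the previous tick and write into a fresh buffer, so the result no longer depends on whether [3] or [4] is visited first.
+new_vals = {}
+for node in non_input_nodes:                    # any visiting order works here
+    new_vals[node] = sum(weights[(src, node)] * vals[src] for src in incoming[node])
+vals.update(new_vals)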
+Any thoughts?
+"
+"['tensorflow', 'time-series']"," Title: How do sine and cosine transforms help in extracting frequencies in time series forecasting models?Body: I'm trying to learn how time series forecasting models work and while reading a tutorial off the TensorFlow website I came across these algorithms. I don't quite understand what the article means by "time signals" and how do sine and cosine functions help accomplish them. Can anyone please explain?
+Here's a link to the tutorial
+The following code was provided along with the caption
+"the time in seconds is not a useful model input. Being weather data, it has clear daily and yearly periodicity. There are many ways you could deal with periodicity.
+You can get usable signals by using sine and cosine transforms to clear "Time of day" and "Time of year" signals:"
+day = 24*60*60
+year = (365.2425)*day
+
+df['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day))
+df['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day))
+df['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year))
+df['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year))
+
+"
+"['reinforcement-learning', 'convolutional-neural-networks', 'inverse-rl']"," Title: Augmented an Image with other data when training CNNBody: In the typical RL/MDP framework, I have offline data of $(s,a,r,s')$ of expert Atari gameplay.
+I'm looking to train a CNN to predict $r$ based on $(s, a)$.
+The states are represented by a $4 \times 84 \times 84$ image of the Atari screen, where 4 represents 4 sequential frames, and $84 \times 84$ is the size of the image. The action is an integer from 0 to 3.
+I'm not sure how best to merge these two inputs $(s, a)$ together. How should I incorporate the action into the CNN?
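+One possible way I'm considering (a Keras sketch; the layer sizes are arbitrary and the image is channels-last here, not the 4 x 84 x 84 layout above) is to concatenate a one-hot encoding of the action with the flattened convolutional features:
+import tensorflow as tf
+from tensorflow.keras import layers
+
+img_in = layers.Input(shape=(84, 84, 4))                      # 4 stacked frames
+act_in = layers.Input(shape=(4,))                             # one-hot action
+x = layers.Conv2D(32, 8, strides=4, activation='relu')(img_in)
+x = layers.Conv2D(64, 4, strides=2, activation='relu')(x)
+x = layers.Flatten()(x)
+x = layers.Concatenate()([x, act_in])                         # merge state features with the action
+x = layers.Dense(256, activation='relu')(x)
+r_out = layers.Dense(1)(x)                                    # predicted reward
+model = tf.keras.Model([img_in, act_in], r_out)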
+"
+"['deep-learning', 'convolutional-neural-networks']"," Title: Is one big network faster than several small ones?Body: The basis of my question is that a CNN that does great on MNIST is far smaller than a CNN that does great on ImageNet. Clearly, as the number of potential target classes increases, along with image complexity (background, illumination, etc.), the network needs to become deeper and wider to be able to sufficiently capture all of the variation in the dataset. However, the downside of larger networks is that they become far slower for both inference and backprop.
+
+Assume you wanted to build a network that runs on a security camera in front of your house. You are really interested in telling when it sees a person, or a car in your driveway, or a delivery truck, etc. Let's say you have a total of 20 classes that you care about (maybe you want to know minivan, pickup, and so on).
+
+You gather a dataset that has plenty of nice, clean data. It has footage from lots of times of the day, with lots of intra-class variation and great balance between all of the classes. Finally, assume that you want this network to run at the maximum possible framerate (I know that security cameras don't need to do this, but maybe you're running on a small processor or some other reason that you want to be executing at really high speed).
+
+Is there any advantage, computationally, to splitting your network into smaller networks that specialize? One possibility is having a morning, an afternoon/evening, and a night network and you run the one corresponding to the time of day. Each one can detect all 20 classes (although you could split even farther and make it so that there is a vehicle one, and a person one, and so on). Your other option is sharing base layers (similar to using VGGNet layers for transfer learning). Then, you have the output of those base layers fed into several small networks, each specialized like above. Finally, you could also have just one large network that runs in all conditions.
+
+Question: Is there a way to know which of these would be faster other than building them?
+
+In my head, it feels like sharing base layers and then diverging will run as slow as the ""sub-network"" with the most additional parameters. Similar logic for the separate networks, except you save a lot of computation by sharing base layers. Overall, though, it seems like one network is probably ideal. Is there any research/experimentation along these lines?
+"
+"['neural-networks', 'unsupervised-learning']"," Title: Does it make sense to train an autoencoder using data from different distributions?Body: Say I have 500 variables and I believe those variables can be shown in a 4-dimensional latent representation which I want to learn.
+
+What I have for training is 100K samples, and those samples are coming mainly from 3 unbalanced groups: 1st group has 1K samples, 2nd group has 49K samples, and 3rd group has 50K samples.
+
+Do you think I can learn a meaningful representation by training a (variational) autoencoder with this data? Is there a reason that requires all samples to come from the same distribution? If not, is there a reason that requires balanced classes?
+"
+"['deep-learning', 'ai-design', 'training', 'combinatorial-games', 'chess']"," Title: How would you encode your input vector/matrix from a sequence of moves in game like tasks to train an AI? e.g. Chess AI?Body: I've seen data sets for classification / regressions tasks in domains such as credit default detection, object identification in an image, stock price prediction etc. All of these data sets could simply be represented as an input matrix of size (n_samples, n_features) and fed into your machine learning algorithm to ultimately yield a trained model offering some predictive capability. Intuitively and mathematically this makes sense to me.
+However, I'm really struggling with how to think about the structure of an input matrix for game-like tasks (Chess, Go, Seth Bling's Mario Kart AI) specifically (using the Chess example):
+
+- How would you encode the state of the board to something that a model could train on? Is it reasonable to think about the board state as a 8x8 matrix (or 1x64) vector with each point being encoded by a numerical value dependent on the type of piece and color?
+
+- Assuming a suitable representation of the board state, how would the model be capable of making a recommendation given that each piece type moves differently? Would it not have to evaluate the different move possibilities for each piece and propose which move it "thinks" would have the best long term outcome for the game?
+
+- A follow-up on 2: given the interplay between moves made now and moves made n moves into the future, how would the model be able to recognize and make trade-offs between moves that offer a better position now vs. those that offer a better position n moves in the future? Would one have to extend the board-state input to a vector of length 1x64n, where n is the total number of moves expected for an individual player, or is this the job of a different algorithm that should be able to capture historical information during training?
+
+
+I am unsure if I'm overthinking this and am missing something really obvious but I would appreciate any guidance in terms of how to approach thinking about this.
+"
+"['machine-learning', 'deep-learning', 'classification']"," Title: Creating a classifier for simpler classifiers trained on few training samplesBody: Suppose I have a classification problem with a stream of training-samples constantly arriving over time. I cannot keep all training-samples in memory, but I still want to train a classifier that will have the ""wisdom"" of all samples, and additionally, I want the classifier to become better whenever it gets new samples.
+
+I thought of the following idea. Suppose we have enough memory to keep 100 samples. Then, for each run of 100 samples, we will train a different sub-classifier. We will have a meta-classifier that will classify based on voting between all existing sub-classifiers. Over time, we will have more and more sub-classifiers, so hopefully the meta-classifier will improve with time - it will have a ""wisdom of the crowds"" effect.
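+A minimal sketch of what I mean, assuming scikit-learn-style classifiers (all names are mine):
+from collections import Counter
+
+class VotingMeta:
+    def __init__(self):
+        self.subs = []                                   # one fitted sub-classifier per chunk of 100 samples
+    def add_chunk(self, clf, X_chunk, y_chunk):
+        self.subs.append(clf.fit(X_chunk, y_chunk))
+    def predict_one(self, x):
+        votes = [clf.predict([x])[0] for clf in self.subs]
+        return Counter(votes).most_common(1)[0][0]       # majority vote across all sub-classifiers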
+
+Has this method been tried before? Specifically, has it been tried in a deep-learning sequence-classification setting?
+"
+['machine-learning']," Title: Para Generation and Drawing Conclusion from X give ArticlesBody: I was wondering any examples of the following;
+
+
+- Paragraph generation: for example, given X similar paragraphs, can you build a model that learns the style and generates a new paragraph that is a paraphrase of the X paragraphs, similar in meaning but with different wording?
+- Drawing conclusions from X given articles: he has a list of conclusions, and wants to check whether the X articles provide evidence for them. E.g., given the conclusion “city is not safe”, look for evidence such as “murders” and “thefts”.
+
+
+Gladly appreciated,
+Betty
+"
+"['artificial-neuron', 'hardware', 'neuromorphic-engineering']"," Title: Could a large number of interconnected tiny turing-complete computer chips be patterned across a wafer to simulate a programmable neural network?Body: The Intel 8080 had 4500 transistors and ran at 2-3.125 MHz. By comparison, the 18-core Xeon Haswell-E5 han 5,560,000,000 transistors and can run at 2 GHz. Would it be possible or prudent to simulate a neural network by backing a chip chock-full of a million interconnected, slightly modified intel 8080s (sped up to run at 2 GHz)? If each one modeled 100 neurons you could simulate a neural network with 100 million neurons on a single chip.
+
+Edit: I'm not proposing that you actually use a million intel 8080s; rather I'm proposing that you take a highly minimal programmable chip design like the intel 8080's design and pattern it across a wafer as densely as possible with interconnects so that each instance can function as one or a few dozen fully programmable neurons each with a small amount of memory. I'm not proposing that someone take a million intel 8080s and hook them together.
+"
+"['neural-networks', 'deep-learning', 'getting-started']"," Title: Deep Learning Approaches for Color Enhancement TestingBody: I'm a student, and currently into image processing project and coding using OpenCV. Recently, I watched Sebastian Thrun from Udacity in TedTalks talked about AlphaGo and I'm totally interested in the idea. I have read this question too : Merged Neural Network in AlphaGo.
+I was wondering if same approaches can be used in my project.
+
+I'm going to perform color enhancement method for any natural images. And of course, color sampling is a tricky task now. It's a lot of work, I have to prepare condition for each key-color sampling given and also prepare & pick the best enhancement function for it. I'm able to do it already using OpenCV.
+
+But I was wondering if I could load tons of sample pictures instead, have my system test them against each other, and figure out its own enhancement rules from all testing.
+
+I'm not that familiar with deep learning (we don't even have a deep learning course at my university), but I'm interested in the idea and ready to learn. I'm not even sure whether this can be done or not, but I wonder what kind of approaches I should learn to achieve my goal. Is deep learning, i.e. a neural network, a good start? In my case, which deep learning method should I go with? Any reference/advice will be highly appreciated. Thanks.
+"
+"['machine-learning', 'applications', 'robotics']"," Title: Is robotic process automation related to AI/ML?Body: I am trying to understand if robotic process automation (RPA) is a field that requires expertise in machine learning.
+Do the algorithms behind RPA use machine learning except for OCR?
+"
+"['algorithm', 'image-recognition', 'emotional-intelligence', 'math', 'facial-recognition']"," Title: Viola Jones AlgorithmBody: Can Viola Jones algorithm be used to detect the facial emotion. Actually it was used in creating harr-cascade file for object and facial detection, but what confused me is whether it can be used to train for emotion detection.
+
+If not, what algorithms can I use, and what are the mathematical foundations (i.e. what mathematics should I be studying)?
+"
+"['convolutional-neural-networks', 'prediction', 'regression', 'testing', 'mean-squared-error']"," Title: Is it a good idea to train a CNN to detect the hydration value (percentage) in skin images and evaluate it with the MSE?Body: I have a large dataset of skin images, each one associated with a hydration value (percentage).
+
+Now I'm looking into predicting the hydration value from an image. My thinking: train a CNN on the dataset and evaluate the model with a mean square error regression.
+
+First, does this sound like a sensible way to try this?
+
+Second, I'd like to run the model on mobile. Can you recommend any examples with Caffe2 (or alternatively TensorFlow) or diagrams that might explain a similar task?
+"
+"['neural-networks', 'reference-request', 'activation-functions', 'relu', 'cross-entropy']"," Title: Why do non-linear activation functions that produce values larger than 1 or smaller than 0 work?Body: Why do non-linear activation functions that produce values larger than 1 or smaller than 0 work?
+My understanding is that neurons can only produce values between 0 and 1, and that this assumption can be used in things like cross-entropy. Are my assumptions just completely wrong?
+Is there any reference that explains this?
+"
+"['algorithm', 'reinforcement-learning', 'optimization']"," Title: Which features and algorithm could optimize this air-conditioner problem?Body: Imagine we have 2 air conditioner systems (AA) and 2 ""free cooling"" systems which mix external and internal air (FC) in a closed box which always tends to warm up. For each system, we have to find turn on and off temperatures (for some hysteresis, let's say between the range 20-40 each one) to optimize the energy consumption.
+
+As we don't know the relation between these parameters and the energy consumption (and we don't intend to know them), we treat the problem as a black-box function.
+
+Till now, the problem would be solvable via a bayesian optimizer (eg. with gaussian process acquisition function).
+
+But there is a problem: the best configuration may change between seasons, and even between days! A simple Bayesian optimizer could maybe deal with these changes by limiting the data it takes into account to, for example, the last 15-30 days. But this would only deal with the change AFTER the consumption has increased.
+
+So, the idea is to introduce some contextual variables that would help the system anticipate these changes (e.g. the external and internal temperature, the rate of change of the external and/or internal temperature, the weather forecast, whatever).
+
+Also, some of the variables we can take into account might be internal to the system, which means that while they influence the best configuration, the current configuration also influences these variables! And this becomes a reinforcement learning problem.
+
+1) Is there a way (documented or experimental) to know which variables (both internal and external) influence the optimal configuration of these AA/FC systems?
+
+2) Based on the first question, which would be the best approach?
+
+2.1.) No features. This might be considered a multi-armed bandit problem with continuous reward. (FIX AFTER THE SCENARIO CHANGE, IF THERE IS A SCENARIO CHANGE)
+
+2.2.) Only external features to predict the scenario change. This might be considered a contextual multi-armed bandit problem. (FORESEE THE SCENARIO CHANGE)
+
+2.3.) Consider only system-internal features. This can be considered a reinforcement-learning problem. (FIX THE SCENARIO CHANGE IMMEDIATELY)
+
+2.4.) Consider both external and internal features. This can be considered a reinforcement-learning problem where some of the states are not influenced by the configuration. (FORESEE THE SCENARIO CHANGE, AND IF SOMETHING FAILS, FIX IT IMMEDIATELY).
+"
+"['natural-language-processing', 'python', 'similarity']"," Title: How do I compute the structural similarity between sentences?Body: I am working on a problem where I need to determine whether two sentences are similar or not. I implemented a solution using BM25 algorithm and wordnet synsets for determining syntactic & semantic similarity. The solution is working adequately, and even if the word order in the sentences is jumbled, it is measuring that two sentences are similar. For example
+
+
+- Python is a good language.
+- Language a good python is.
+
+
+My problem is to determine that these two sentences are similar.
+
+
+- What could be the possible solution for structural similarity?
+- How will I maintain the structure of sentences?
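+For reference, this is the kind of order-sensitive measure I have in mind (a purely illustrative sketch, not something I have settled on):
+def bigrams(sentence):
+    tokens = sentence.lower().split()
+    return set(zip(tokens, tokens[1:]))
+
+def order_similarity(s1, s2):
+    b1, b2 = bigrams(s1), bigrams(s2)
+    return len(b1 & b2) / max(len(b1 | b2), 1)   # Jaccard overlap of word bigrams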
+
+"
+"['neural-networks', 'machine-learning', 'deep-learning']"," Title: Histopathological image vs. natural imageBody: What is the difference between a histopathological image and a natural image when training a neural network?
+"
+"['game-ai', 'evolutionary-algorithms', 'alpha-beta-pruning', 'evaluation-functions', 'board-games']"," Title: How do I write a good evaluation function for a board game?Body: I'm currently writing the Alpha-Beta pruning algorithm for a board game. Now I need to come up with a good evaluation function. The game is a bit like snakes and ladders (you have to finish the race first), so for a possible feature list I came up with the following:
+
+- field index should be high
+- in the lower fields my fuel should be high, when coming to the end it should be low (maximum of '10' required to enter the goal)
+- all 'power-ups' must be spent to enter the goal, so prioritize them
+- if it is possible to enter the goal (a legit move), do it!
+
+There could be some more for some special cases.
+I've read somewhere that it is the best (and easiest) to combine them in a linear function, for example:
+$$0.75 * i - 5 * p - 0.25 * |(f - \text{MAX_FIELD_INDEX}/i)|,$$
+where
+
+- $i$ = field index
+- $p$ = power-ups
+- $f$ = fuel
+
+Since I can't ask an expert and I'm not an expert by myself, I have nobody to ask if those parameters are good, if I've forgotten something or if I've combined the factors correctly.
+The parameters aren't that big of a deal because I could use a genetic algorithm or something else to optimize them.
+My problem and question is: What do I have to do to find out how to put together my features optimally (how can I optimize the function/parameter arrangement itself)?
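+For illustration, this is roughly how I would code the linear combination above (the weights and the fuel target are placeholders I made up):
+def evaluate(field_index, power_ups, fuel, max_field_index):
+    fuel_target = max_field_index / max(field_index, 1)   # rough stand-in for the fuel schedule
+    return 0.75 * field_index - 5 * power_ups - 0.25 * abs(fuel - fuel_target)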
+"
+"['ai-design', 'training', 'deep-neural-networks']"," Title: How would I implement this New Type of NNBody: CIO NN
+
+CIO NN stands for Controller Input Output Neural Network
+
+note: due to a typo, ""nearon"" means ""neuron""
+
+For this we have to redefine the Nearon
+
+
+- 2 Inputs
+- 2 Outputs
+- 4 Weights (each input and output have their own weights)
+- Internal Memory Cell (any byte or bit or block size with variable size)
+- Activation Function (Defines what weights and what inputs activate this Nearon)
+- Memory Storage Function (Defines what and when this cell should store said memory or memory stream)
+- Memory Transpose Function (Once activated, any stored memory that the activation function can trigger will be played/pushed into the Neural Network)
+- Forget Function (Defines when and/or how and/or why these memories can be destroyed/removed, based on the Activation Function together with the Memory states and any input states)
+
+
+How would I implement this in the form of Code?
+please take note of the spec. (this is non profit/GNU v3)
+
+This would look something like this:
+
+
+These would be arranged like this:
+
+
+Which we can build into this:
+
+
+It can be trained like this:
+
+
+- do normal NN training from the inputs to the input outputs like a hidden layer NN
+
+
+then trained to be controlled like this:
+
+
+- then set the known NN dataset (inputs) to be corrected to actual or correct values via the CI (Controller Input), which will be output on the ""Input + Controller Input"" Output
+
+
+by training like this:
+
+
+- you have a normal NN with hidden layers which can be trained (this can be done with CNNs) with backpropagation, a cost function (use the least number of neurons), etc.
+- now you can allow the CIO NN to retrain / teach itself with supervised or unsupervised learning.
+- you can combine the ""Input Output"" and the ""Input + Controller Input"" Output with another NN, which can then connect with this NN in a similar way to how a Neuron connects to a Neuron
+
+"
+"['neural-networks', 'machine-learning', 'graphs', 'data-visualization']"," Title: Neural network for data visualizationBody: At my work, we're currently doing some research into data visualisation for highly interconnected data, basically graphs.
+We've been implementing all sorts of different layouts and trying to see which fits best, but, due to the nature of the problem (it's a visual thing), we needed some automated way to analyse the results, so we came up with a bunch of metrics to analyse our layouts.
+So far, the most important metrics have been information density, edge crossings, node overlap and edge length. This gives us some good results and has allowed us to fine-tune our layout algorithms.
+However, when a new graph is loaded, we noticed that humans still tend to fiddle a lot with the structure of the layout. Moreover, it seems that our metrics do a good job of predicting where a user is likely to mess around. Graph layout is a tough problem, so after some discussion, the idea of just throwing data at a neural network and let it figure it out came up.
+None of us are experts, or even experienced in AI. I'm the one with the most contact with AI methods. All I've ever done were simple NN models, no convolution, feedback or feedforward or anything of the sorts, but it seems to me this should be doable.
+Maybe it's my lack of expertise here, but I haven't been able to find any good information on this sort of application for NNs, so I was hoping someone here could point me in the right direction.
+
+- What sort of model is best for such a situation? and why? Is this actually possible or would it be super complicated? Has anyone ever tried something like this before?
+
+If it helps, our input data (for v1, I guess) would be two arrays of variable length, one for the nodes and another for the relationships between them and the output data would be an array with the node XY coordinates.
+"
+"['reinforcement-learning', 'game-ai', 'unsupervised-learning', 'monte-carlo-tree-search', 'q-learning']"," Title: Which Reinforcement Learning algorithms are efficient for episodic problems?Body: I have some episodic datasets extracted from a turn-based RTS game in which the current actions leading to the next state doesn’t determine the final solution/outcome of the episode.
+The learning is expected to terminate at a final state/termination condition (when it wins or loses) for each episode and then move on to the next number of episodes in the dataset.
+I have being looking into Q learning, Monte Carlo and SARSA but I am confused about which one is most applicable.
+If any of the above algorithms are implemented, can a reward of zero be given in preliminary states before the termination state, at which it will be rewarded with a positive/negative (win/loss) value?
+"
+['classification']," Title: Can I combine two classifiers that make different kinds of errors to get a better classifier?Body: I have a dataset with 223,586 samples, out of which I used 60% for training and 40% for testing. I used 5 classifiers individually: SVM, LR, decision tree, random forest, and boosted decision trees. SVM and LR performed well, with close to 0.9 accuracy and recall also 0.9, but the tree-based classifiers reported an accuracy of 0.6. After a careful observation, I found out that SVM and LR did not predict the labels of 20,357 samples identically. So can I apply voting and resolve this conflict wrt the prediction outcome? Can this conflict be due to an imbalanced dataset?
+"
+"['philosophy', 'agi']"," Title: Is there a limit to the increase of intelligence?Body: Some argue that humans are somewhere along the middle of the intelligence spectrum, some say that we are only at the very beginning of the spectrum and there's so much more potential ahead.
+
+Is there a limit to the increase of intelligence? Could it be possible for a general intelligence to progress infinitely, provided enough resources and armed with the best self-recursive improvement algorithms?
+"
+"['neural-networks', 'classification']"," Title: Is it better to make neural network to have hierchical output?Body: i'm quite new to neural network and i recently built neural network for number classification in vehicle license plate. It has 3 layers: 1 input layer for 16*24(382 neurons) number image with 150 dpi , 1 hidden layer(199 neurons) with sigmoid activation function, 1 softmax output layer(10 neurons) for each number 0 to 9.
+
+I'm trying to expand my neural network to also classify letters in license plates. But I'm worried that if I simply add more classes to the output, for example adding 10 letters to the classification for a total of 20 classes, it would be hard for the neural network to separate the features of each class. Also, I think it might cause a problem when the input is a number and the neural network wrongly classifies it as one of the letters with the biggest probability, even though the sum of the probabilities of all the number outputs exceeds that.
+
+So I wonder if it is possible to build a hierarchical neural network in the following manner:
+
+There are 3 neural networks: 'Item', 'Number', 'Letter'
+
+
+- The 'Item' neural network classifies whether the input is a number or a letter.
+- If the 'Item' neural network classifies the input as a number (letter), then the input goes through the 'Number' ('Letter') neural network.
+- Return the final output from the 'Number' ('Letter') neural network.
+
+
+And learning mechanism for each network is below:
+
+
+- The 'Item' neural network learns all images of numbers and letters. So there are 2 outputs.
+- The 'Number' ('Letter') neural network learns images of only numbers (letters).
+
+
+Which method should I pick for better classification? Simply add 10 more classes, or build hierarchical neural networks as described above?
+"
+"['reinforcement-learning', 'comparison', 'backpropagation']"," Title: What is the relation between back-propagation and reinforcement learning?Body: What is the relation between back-propagation and reinforcement learning?
+"
+"['neural-networks', 'deep-learning', 'game-ai', 'path-finding']"," Title: How to teach an AI to race optimally in a racing game?Body: I play a racing game called Need For Madness ( some gameplay: https://www.youtube.com/watch?v=NC5uFZ-t0A8 ). NFM is a racing game, where the player can choose different cars and race and crash the other cars, and you can play on different tracks too. The game has a fixed frame rate, so you can assume that the same sequence of button presses will always arrive at the exact same position, rotation, velocity, etc. of the car.
+
+I want to make a bot which could race faster than I can. What would be the best way to go about doing this? Is this problem even suited for deep learning?
+
+I was thinking I could train a neural network where the input would be the current world state (the position of the player, the positions of the checkpoints you have to pass through, and all the obstacles), and the output would be an array of booleans, one for each button. During a race, I could then keep forward propagating from the input to the booleans. However, I'm not so sure what I would do after the race is over. How do I backpropagate after the race to make the NN less likely to make bad moves?
+"
+"['machine-learning', 'natural-language-processing', 'natural-language-understanding', 'machine-translation', 'google-translate']"," Title: Why does Google Translate produce two different translations for the same Chinese text?Body: I don't understand why Google Translate translates the same text in different ways.
+Here is the Wikipedia page of the 1973 film "Enter the Dragon". You can see that its traditional Chinese title is: 龍爭虎鬥. Google translates this as "Dragons fight".
+Then, if we go to Chinese Wikipedia page of this film, and search for 龍爭虎鬥 using Ctrl-F, it will be found on several places:
+
+But if we copy the hyperlink of the Chinese page into Google Translate, it is translated as the word "tiger" from somewhere:
+
+Moreover, if we translate the Chinese page into English using the built-in Chrome translation, it will sometimes be translated as "Enter the Dragon", in the English manner:
+
+Why does it give different translations for the same Chinese text here?
+"
+"['neural-networks', 'data-preprocessing', 'data-augmentation', 'data-labelling']"," Title: How to label edited images after data augmentation?Body: I am new to neural networks, I've only started studying and learning about the subject a year ago, and I just started building my first neural network.
+The project is a little bit ambitious: a browser extension for children's safety. It checks for sexual or abusive content and replaces that content with a placeholder; the user has to enter a password to show the original content.
+I didn't find a dataset online, so I decided to build my own training dataset. I started by writing a web crawler that collects images while applying data augmentation techniques. It basically resizes images (to 95x95), crops them, rotates them, changes colors, adds blur, converts to black and white, adds noise, etc.
+The problem is that after applying these techniques, I noticed that some images are not even recognizable by a human subject.
+I mean that even though I know the original picture contains sexual content, the edited one doesn't even appear to be sexual anymore.
+So, do I have to label it as sexual or not sexual?
+Notice that it's easier for me to consider it sexual: if every image produces about 50 edited images, I'd only have to label the original image, and all 50 edited images would get the same label. Is it okay to do just that?
+
+This is a sample of what I get after doing data augmentation, notice that some pictures are not recognizable by humans.
+For example, look at the result after editing images hue and saturation, a human can't recognize this result, is it okay to label it: not sexual?
+
+
+I wouldn't recognize the picture on the right if I didn't see the original one.
+I also tested this on human subjects (my brothers), they didn't recognize the squirrel on the right.
+"
+"['neural-networks', 'deep-learning']"," Title: What is a heavy node in neural networks?Body: I was watching a documentary on Netflix about AlphaGo, and at one point (~1:10:16 from the end), one of the programmers uses the term ""heavy node,"" which I assume has to do with neural networks. I did a little bit of research but couldn't find anything on what that term means. The closest I could get was this wikipedia page on Heavy path decomposition: https://en.wikipedia.org/wiki/Heavy_path_decomposition, which seemed like it could be somewhat related, but I wasn't sure how exactly. Has anyone heard of this term being used? Does anyone know what it means?
+
+For context, in the documentary the line is that if it (the network/player) creates something new not in the heavy node, then they don't see it.
+"
+"['research', 'history', 'ai-field']"," Title: What noteworthy contributions have Chinese AI researchers made in the field of artificial intelligence?Body: In recent years, China has made rapid progress in manufacturing and scientific research, as evidenced by their successful teleportation of a single quantum entangled photon to a satellite in orbit.
+My question is, what major contributions have Chinese AI researchers made in the field of Artificial Intelligence?
+"
+"['machine-learning', 'datasets']"," Title: Human Height estimation using person detection techniquesBody: I have a pedestrian dataset and would like to estimate human height in a video survillance using person detection techniques like YOLO Darknet or SSD (Single Shot Detectors). Would this technique work? Also, the videos that I have are in a constrained environment with good illumination. The idea is to get the coordinates from the bounding box and try to estimate pixel height. After getting the pixel height, some correlation could be estimated between pixel height and real world height. Note that I won't be using camera calibration.
+"
+"['image-recognition', 'supervised-learning', 'data-labelling']"," Title: How can computers beat humans at image recognition, if humans may incorrectly label the images?Body: For supervised learning, humans have to label the images computers use to train in the first place, so the computers will probably get wrong the images that humans get wrong. If so can computers beat humans?
+"
+"['deep-learning', 'autonomous-vehicles']"," Title: Semantic Segmentation how to upsamplingBody: Many of the architectures that do semantic segmentation like SegNet, DilatedNet (Yu and Koltun), DeepLab, etc. do not work on high resolution images. For such benchmarks like Cityscapes, what is a standard/practical approach for such methods to perform on the benchmark?
+
+I've tried to look into the paper, but I couldn't find such details. There's an article mentioning that they output at 1/8 of the input resolution and then do interpolation (usually 2, 4, or 8 times) on their results, but the article does not specify which upsampling techniques are the most reasonable ones.
+"
+"['neural-networks', 'deep-learning', 'learning-algorithms', 'recommender-system']"," Title: How do stacked denoising autoencoders workBody: I've been studying a recommender system which uses a collaborative deep learning approach and Bayesian learning. It has the following NN representation :
+
+
+
+I need to know the working of stacked denoising autoencoders.
+
+Here is the link to the paper: http://www.wanghao.in/paper/KDD15_CDL.pdf
+"
+"['philosophy', 'ethics']"," Title: What are the connections between ethics and artificial intelligence?Body: What are the connections between ethics and artificial intelligence?
+
+What are the issues that have arisen, especially in the business context? What are the issues that may arise?
+"
+['reinforcement-learning']," Title: RL agent's view of state transitionsBody:
+
+The above environment is DeepTraffic
+
+Now consider this situation in the above environment: the red car (we control it with our RL agent) is in the extreme right lane.
+
+During the exploration phase, we take a 'move right' action, which of course results in the car not moving right; but the other cars keep moving, so the state changes due to the rules of the environment.
+
+I'm using a CNN to solve this; the state representation is the image itself, and it's a Q-learning algorithm as described in the DQN paper from DeepMind.
+
+In the situation I mentioned above, won't the agent think the state changed because of the 'move right' action, which is not really the case?
+
+And when storing the state transition (s,a,r,s'), should I store the actual action 'move right' (invalid) or 'do nothing' (correct as per the environment)?
+"
+"['machine-learning', 'image-recognition', 'tensorflow']"," Title: If neurons are only defined for values between 0 and 1, how does ReLU differ from the identity?Body: I'm struggling to understand the underlying mechanics of CNNs so any help is appreciated. I have a network with a ReLU activation function which does perform signifigantly better than one with sigmoid. This is expected as ReLU solves the vanishing gradient problem. However, my understanding was the reason we implement nonlinearities is to separate data which cannot be separated linearly. But if ReLU is linear for all values we care about it shouldn't work at all?
+
+Unless, of course, neurons are defined for negative values, but then my question becomes ""why does ReLU solve the vanishing gradient problem at all?"", since the derivative of ReLU is 0 for x < 0.
+"
+"['machine-learning', 'reinforcement-learning', 'terminology', 'math', 'multi-armed-bandits']"," Title: What is a weighted average in a non-stationary k-armed bandit problem?Body: In the book Reinforcement Learning: An Introduction (page 25), by Richard S. Sutton and Andrew G. Barto, there is a discussion of the k-armed bandit problem, where the expected reward from the bandits changes slightly over time (that is, the problem is non-stationary). Instead of updating the Q values by taking an average of all rewards, the book suggests using a constant step-size parameter, so as to give greater weight to more recent rewards. Thus:
+
+$$ Q_{n+1} = Q_n + \alpha (R_n - Q_n),$$
+
+where $\alpha$ is a constant between 0 and 1.
+
+The book then states that this is a weighted average because the sum of the weights is equal to 1. What does this mean? Why is this true?
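+For reference, my own attempt at expanding the recursion (which may contain mistakes) gives
+$$ Q_{n+1} = (1-\alpha)^n Q_1 + \sum_{i=1}^{n} \alpha (1-\alpha)^{n-i} R_i ,$$
+and the claim seems to be that $(1-\alpha)^n + \sum_{i=1}^{n} \alpha (1-\alpha)^{n-i} = 1$.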
+"
+"['machine-learning', 'classification']"," Title: How do I statistically evaluate a ML model?Body: I have a model that predicts sentiment of tweets. Are there any standard procedures to evaluate such a model in terms of its output?
+
+I could sample the output, work out which are correctly predicted by hand, and count true and false positives and negatives but is there a better way?
+
+I know about test and training sets and metrics like AUROC and AUPRC which evaluate the model based on known data, but I am interested in the step afterwards when we don't know the actual values we are predicting. I could use the same metrics, I suppose, but everything would need to be done by hand.
+"
+"['applications', 'chat-bots', 'intelligent-agent']"," Title: What are some of the possible future applications of intelligent agents?Body: I am trying to do some experiments with some intelligent agents, but I'm not sure how significant they will be in the future.
+
+What are some possible interesting applications or use-cases of intelligent agents in the future?
+
+For instance, it can be used as a virtual assistant instead of a real call agent. But what can be a more appealing application in the future?
+"
+"['algorithm', 'heuristics']"," Title: How to open up a rigid structure made of connected panels?Body: By open up I mean slightly open up so that a theoretical structure of panels with no width looks three-dimensional. The original structure being an ideal object where any number several panels can occupy the same region of space (plane).
+
+To concretize what I mean by rigid structure of panels, let's take what's on my profile picture. I'm including here a larger version of that object:
+
+
+
+An origami figure is folded from a square and, unlike this simple example in the image, it can be very convoluted, with layers upon layers.
+
+Let's say I have an ideal, theoretical, flat model of an origami figure, and by flat I mean the faces lie on planes but not necessarily on a single plane. For example, all the faces of the figure in the image would be in one plane, but there could be figures with more planes; think of animals with ears, flippers, etc.
+
+I would like to open up those parts that hinge on a theoretical segment (relatively easy) or curve or make a triangle of the corners of those faces that have two adjacent faces with no connections, open up a set of several faces that allow such opening of which the image is a good example.
+
+So far I have tried programming rules for different structures such as flaps, ends, wrap-arounds... However, there are three not-so-small problems. First, it's extremely challenging to take into account all the corner cases and possibilities. I suspect it sounds simpler than it really is, but I don't want to digress with explanations. Second, the code is not maintainable; it's difficult to put into words rules that have to be visualized. Third, it's very difficult to unit test and debug.
+
+I strongly suspect that there must be some Artificial Intelligence techniques for doing this. Would you be so kind as to point me in the right direction? Just in a general way, without needing to go much into detail.
+
+Let me know if I should include more info or code.
+"
+"['neural-networks', 'deep-learning', 'reinforcement-learning', 'tensorflow', 'game-ai']"," Title: What layers to use in a Neural Network for card gameBody: I am currently writing an engine to play a card game and I would like for an ANN to learn how to play the game. The game is currently playable, and I believe for this game a deep-recurrent-Q-network with a reinforcement learning approach is the way to go.
+
+However, I don't know what type of layers I should use, I found some examples of Atari games solved through ANN, but their layers are CNN (convolutional), which are better for image processing. I don't have an image to feed the NN, only a state composed of a tensor with cards in the player's own hand and cards on the table. And the output of the NN should be a card or the action 'End Turn'.
+
+I'm currently trying to use TensorFlow but I'm open to any library that can work with NN. Any type of help or suggestion would be greatly appreciated!
+"
+"['search', 'optimization', 'hill-climbing']"," Title: Should the mutation be applied with the hill climbing algorithm?Body: As far as I understand, the hill climbing algorithm is a local search algorithm that selects any random solution as an initial solution to start the search. Then, should we apply an operation (i.e., mutation) on the selected solution to get a new one or we replace it with the fittest solution among its neighbours?
+
+This is the part of the algorithm where I am confused.
+"
+"['machine-learning', 'backpropagation', 'learning-algorithms', 'gradient-descent']"," Title: What is the proof behind the gradient of a curve being proportional to the distance between the two co-ordinates in the x-axis?Body: In the [delta rule][1] the equation to adjust the weight with respect to error is
+$$w_{(n+1)}=w_{(n)}-\alpha \times \frac{\partial E}{\partial w}$$
+*where $\alpha$ is the learning rate and $E$ is the error.
+The graph for $E$ vs $w$ would look like the one below with $E$ in the $y$ axis and $W$ in the $x$-axis
+
+In other words, we can write
+$$\alpha \times \frac{\partial E}{\partial w}=w_{(n)}-w_{(n+1)}$$
+I want to know, what is the proof behind the gradient of a curve being equal/proportional to the distance between the two coordinates in the x-axis.
+$\frac{\partial E}{\partial w}$ times a step is a small shift in $f(w)$, not in $w$. So, why should the difference between $w_{(n+1)}$ and $w_{(n)}$ be equal to (a shift in) $f(w)$?
+I found a similar question, but the accepted answer doesn't have a proof.
+"
+"['deep-neural-networks', 'reference-request']"," Title: Are there a finite set of computable functions constructing deep neural network which can form or implement any c.e. function or computable function?Body: Are there a finite set of computable functions constructing deep neural network which can form or implement any c.e. function or computable function?
+
+Or does there exist a finite set of computable functions by which every c.e. function can be implemented through combination and connection (like the connections in a DNN)?
+"
+['chat-bots']," Title: Can anyone find the source code for the chatbot Luna?Body: This AI is really human-like and allegedly doesn't give pre-programmed responses. Its makers, Robots Without Borders, say the project is open source, but I couldn't find the code anywhere.
+"
+"['training', 'tensorflow', 'object-recognition', 'keras']"," Title: Transfer learning from model trained in a similar datasetBody: I am currently working on a defect detection algorithm but I only have a few samples of defects.I googled for defect detection datasets and I found this one:
+
+http://resources.mpi-inf.mpg.de/conferences/dagm/2007/prizes.html
+
+which has a few hundred original images of defects.
+
+My idea is:
+Imagenet => Defect dataset from internet => Own defect dataset
+
+Step 1. Train a model with ImageNet initialization using the defect dataset found on the internet (+ non-defect images + augmented data).
+
+Step 2. Using the output model of step 1 (which will be more similar to my own data), do transfer learning using my own defect dataset (defects + non-defects + augmented).
+
+Do you think this is a good way to get good results?
+
+Based on:
+https://blog.slavv.com/a-gentle-intro-to-transfer-learning-2c0b674375a0
+
+Should defect images be considered as having low similarity with ImageNet's images, or as similar because both inputs are images? Some webpages say that because they are both images they are similar, but other webpages say these images are too different from the images used to train the ImageNet model, so I got confused about this.
+
+If I skip step 1, I don't think I'll get anything good, because I have fewer than 100 images.
+
+Any advice or comments will be appreciated.
+"
+"['neural-networks', 'ai-design', 'algorithm']"," Title: What machine learning algorithm should be used to analyze the relationship between strings?Body: I am trying to build a neural network that takes in a single string, ex: ""dog"" as an input, and outputs 50 or so related hashtags such as, ""#pug, #dogsarelife, #realbff"".
+
+I have thought of using a classifier, but because there is going to be millions of hashtags to choose the optimal one from, and millions of possible words from the english dictionary, it is virtually impossible to search up the probability of each
+
+It is going to be learning information from analyzing twitter posts' text, and its hashtags, and find which hashtags goes with what specific words.
+"
+"['machine-learning', 'linear-regression', 'python']"," Title: Can number of Leads be predicted based on previous monthsBody: I have a sample set of data about Leads that gets generated every day. Leads are nothing but a user expressing request to be our partner or not. Sample data set is as shown below
+
+LEADID,CREATEDATE,STATUS,LEADTYPE
+810029,24-DEC-17 12.00.00.000000000 AM,open,LeadType1
+806136,30-DEC-17 12.00.00.000000000 AM,open,LeadType2
+812134,31-DEC-17 12.00.00.000000000 AM,open,LeadType2
+806147,31-DEC-17 12.00.00.000000000 AM,open,LeadType1
+806166,01-JAN-18 12.00.00.000000000 AM,open,LeadType2
+28002,04-MAR-16 12.00.00.000000000 AM,open,LeadType2
+808156,01-JAN-18 12.00.00.000000000 AM,open,LeadType1
+808162,01-JAN-18 12.00.00.000000000 AM,open,LeadType2
+806257,07-JAN-18 12.00.00.000000000 AM,open,LeadType1
+832091,17-JAN-18 12.00.00.000000000 AM,open,LeadType2
+838079,17-JAN-18 12.00.00.000000000 AM,open,LeadType1
+66001,26-MAR-16 12.00.00.000000000 AM,open,LeadType1
+70001,28-MAR-16 12.00.00.000000000 AM,open,LeadType2
+806019,23-DEC-17 12.00.00.000000000 AM,open,LeadType2
+822064,12-JAN-18 12.00.00.000000000 AM,open,LeadType1
+834043,14-JAN-18 12.00.00.000000000 AM,open,LeadType2
+836053,16-JAN-18 12.00.00.000000000 AM,open,LeadType1
+838119,19-JAN-18 12.00.00.000000000 AM,open,LeadType2
+
+
+As you can see, lead types can be LeadType1 or LeadType2, and these get generated every day.
+
+In order to make sense of the data, I created the following plot using Python:
+
+
+
+The supporting code is as follows. Note I am just a Noob to Python and AI but I want to check if this proves a valid use case for Machine Learning and what should be my approach
+
+import numpy as np
+import pandas as pd
+import matplotlib.pyplot as plt
+#%matplotlib inline
+in_file = 'lead_data.csv'
+mydf = pd.read_csv(in_file,encoding='latin-1')
+
+fig, ax = plt.subplots(figsize=(15,7))
+#g = mydf.groupby(['R4GSTATE','LEADTYPE']).count()['STATUS'].unstack()
+g = mydf.groupby(['R4GSTATE','STATUS']).count()['LEADTYPE'].unstack()
+g.plot(ax=ax)
+#ax.set_xlabel('R4GSTATE')
+ax.set_xlabel('R4GSTATE')
+ax.set_ylabel('Number of Leads')
+ax.set_xticks(range(len(g)));
+ax.set_xticklabels([""%s"" % item for item in g.index.tolist()], rotation=90);
+
+
+Basically, I just read the csv and curated the data (I have cleaned the original csv) to keep what is meaningful for me. I also grouped the number of leads by month and year so that I can see the historical leads generated every month.
+
+I want to know if machine learning can help me predict the number of leads generated in the coming months based on previous months' data.
+
+If the answer is yes, is linear regression the right path to explore further?
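+
+For what it's worth, this is the kind of thing I was imagining (a rough sketch only; the column names follow my csv above and the 3-month horizon is arbitrary):
+
+import numpy as np
+import pandas as pd
+from sklearn.linear_model import LinearRegression
+
+mydf = pd.read_csv('lead_data.csv', encoding='latin-1')
+mydf['CREATEDATE'] = pd.to_datetime(mydf['CREATEDATE'].str[:9], format='%d-%b-%y')
+monthly = mydf.set_index('CREATEDATE').resample('M').size()  # number of leads per month
+
+X = np.arange(len(monthly)).reshape(-1, 1)  # month index 0..n-1
+y = monthly.values
+reg = LinearRegression().fit(X, y)
+
+future = np.arange(len(monthly), len(monthly) + 3).reshape(-1, 1)
+print(reg.predict(future))                  # predicted number of leads for the next 3 months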
+"
+"['machine-learning', 'learning-algorithms']"," Title: Recommend item from set based on featuresBody: Forgive what might be a basic question. I'm just experimenting with ML / AL and I have a small problem set and I'd like to see if it can be solved with ML / AI. Basically, given a set of objects with multiple features, I'd like to create a process for recommending one automatically to a user.
+
+I'm thinking that some sort of clustering algorithm may be the best approach. However, one main challenge I'm trying to wrap my head around is that I don't know in advance how many distinct clusters will evolve... There may be scenarios where Feature X is really important, but other scenarios where a user will say Feature Y is important.
+
+Secondly, what is my input set? For each training sample, I will have 1 selected object, and N-1 unselected objects. But I don't want to ""train"" that the unselected objects are ""bad"" because they could be selected in a future training example.
+
+Finally, I don't have a large training set already, so I would like to use feedback (user input, ""This was a bad choice"" or ""Use this object instead."") from the process to further refine the algorithm. Is this feasible?
+
+Are there any established patterns for this sort of process? Thanks in advance.
+"
+"['deep-learning', 'convolutional-neural-networks', 'image-recognition', 'filters', 'weights-initialization']"," Title: How are the kernels initialized in a convolutional neural network?Body: I am currently learning about CNNs. I am confused about how filters (aka kernels) are initialized.
+Suppose that we have a $3 \times 3$ kernel. How are the values of this filter initialized before training? Do you just use predefined image kernels? Or are they randomly initialized, then changed with backpropagation?
+"
+"['neural-networks', 'machine-learning', 'objective-functions', 'implementation', 'sigmoid']"," Title: How do I avoid the ""math domain error"" when the input to the log is zero in the objective function of a neural network?Body: I am implementing a neural network to train it on handwritten digits.
+Here is the cost function that I am implementing.
+$$J(\Theta)=-\frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K}\left[y_{k}^{(i)} \log \left(\left(h_{\Theta}\left(x^{(i)}\right)\right)_{k}\right)+\left(1-y_{k}^{(i)}\right) \log \left(1-\left(h_{\Theta}\left(x^{(i)}\right)\right)_{k}\right)\right]+ \\\frac{\lambda}{2 m} \sum_{l=1}^{L-1} \sum_{i=1}^{s_{l}} \sum_{j=1}^{s_{l+1}}\left(\Theta_{j, i}^{(l)}\right)^{2}$$
+In $\log(1-h(x))$, if $h(x)$ is $1$, then it would result in $\log(1-1)= \log(0)$. So, I'm getting a math domain error.
+I'm initializing the weights randomly between 10-60. I'm not sure what I have to change, or where.
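+To make the issue concrete, this is roughly what happens and the kind of workaround I have seen suggested (a minimal numpy sketch; h stands for my hypothesis output):
+
+import numpy as np
+
+h = np.array([0.2, 1.0, 0.7])   # example outputs; the second one is exactly 1
+y = np.array([0.0, 1.0, 1.0])
+
+eps = 1e-12                     # keep the log arguments away from 0
+h_clipped = np.clip(h, eps, 1 - eps)
+cost = -np.mean(y * np.log(h_clipped) + (1 - y) * np.log(1 - h_clipped))
+print(cost)
+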
+"
+['evolutionary-algorithms']," Title: ES- Modify sigma before mutate object parametersBody: I'm reading the book Introduction to Evolutionary Computing, and in the chapter about Evolution Strategies it says that we have to modify the strategy parameter sigma (the standard deviation, or mutation step size) before using it to mutate the object parameters, and I don't understand why.
+
+Why do we have to modify sigma before the mutation of object parameters?
+
+Maybe it is because, if we do it this way, we end up with the mutated object parameters together with the strategy parameters that generated them.
+"
+"['neural-networks', 'machine-learning', 'convolutional-neural-networks', 'pattern-recognition']"," Title: Using neural network to recognise patterns in matricesBody: I am trying to develop a neural network which can identify design features in CAD models (i.e. slots, bosses, holes, pockets, steps).
+
+The input data I intend to use for the network is an n x n matrix (where n is the number of faces in the CAD model). A '1' in the top right triangle in the matrix represents a convex relationship between two faces and a '1' in the bottom left triangle represents a concave relationship. A zero in both positions means the faces are not adjacent. The image below gives an example of such a matrix.
+
+
+Let's say I set the maximum model size to 20 faces and apply padding for anything smaller than that in order to make the inputs to the network a constant size.
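+
+To make the input encoding concrete, this is a small sketch of how I picture building the padded matrix (the face indices and the relation list are made-up placeholders):
+
+import numpy as np
+
+MAX_FACES = 20
+relations = [(0, 1, 'convex'), (1, 2, 'concave')]  # (face_a, face_b, relation)
+
+adj = np.zeros((MAX_FACES, MAX_FACES), dtype=np.float32)
+for a, b, rel in relations:
+    i, j = min(a, b), max(a, b)
+    if rel == 'convex':
+        adj[i, j] = 1.0   # top right triangle: convex
+    else:
+        adj[j, i] = 1.0   # bottom left triangle: concave
+# faces beyond the real face count simply stay zero (padding)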
+
+I want to be able to recognise 5 different design features and would therefore have 5 output neurons - [slot, pocket, hole, boss, step]
+
+Would I be right in saying that this becomes a sort of 'pattern recognition' problem? For example, if I supply the network with a number of training models - along with labels which describe the design feature which exists in the model, would the network learn to recognise specific adjacency patterns represented in the matrix which relate to certain design features?
+
+I am a complete beginner in machine learning and I am trying to get a handle on whether this approach will work or not - if any more info is needed to understand the problem leave a comment. Any input or help would be appreciated, thanks.
+"
+['reinforcement-learning']," Title: Move blocks to create a designed surfaceBody: I am new to machine learning and AI, so forgive me if this is obvious. I was talking with a friend on how to solve this problem, and neither of us could figure out how to do it.
+
+Say I have a grid area of 100x100 blocks, and I want a robot to build a horizontal 100x100 grid that is 3 blocks high. I am given a random, but known, starting surface, always 100x100, but the height of the random surface can vary from 1 to 5 blocks. I have an extra reserve of blocks I can pick up, so I don't have to worry about running out. The robot can move in any direction, even diagonally at some cost penalty. The robot can obviously move a block from a 4-high column to fill in a 2-high one, so that each column ends up at the design height of 3.
+This sounds like a reinforcement learning problem, but would anyone be able to explain in more detail how I would do this, in order to a) minimize the number of moves, and b) reach the design surface?
+"
+['multi-agent-systems']," Title: Agent collision avoidance javaBody: I am working on a project which is an agent-based pedestrian simulation in Java, animated with the help of JavaFX. I've tried to read all the social force model papers, but I could not make sense of them.
+So I tried my own approach, which got trashed after failing time after time.
+
+My approach was that each agent scans its surroundings: it first calculates the distance to each of the other agents on the field, and if that distance is below a constant, the agent calculates the angle to the agent that is too close and moves according to the calculated angle.
+
+This approach didn't work for me because the ""avoidance code"" is not efficient enough; the agents just don't know where to go when they meet and simply stay in place.
+
+I am asking for guidance to how I can approach this problem in a better way.
+
+double[] check(Vector<Pedestrian> peds, Pedestrian p1){
+for (Pedestrian p : peds){
+ if (p.getPedestrianId() != this.id){
+ double distance = IPedestrian.distance_formula(getTranslateX(), getTranslateY(), p.getTranslateX(), p.getTranslateY());
+ if (distance <= DANGER){
+ System.out.println(""DANGER"");
+ return IPedestrian.angle(getTranslateX(), getTranslateY(), p.getTranslateX(), p.getTranslateY(), p1);
+ }
+ }
+}
+return new double[] {SPEED, 0};
+
+
+}
+
+public void move(Vector<Pedestrian> peds, Pedestrian p) {
+double[] new_steps = this.check(peds, p);
+if (side == SideChooser.Left){
+ setTranslateX(getTranslateX() + new_steps[0]);
+ setTranslateY(getTranslateY() + new_steps[1]);
+} else {
+ setTranslateX(getTranslateX() - new_steps[0]);
+ setTranslateY(getTranslateY() - new_steps[1]);
+}
+
+
+}
+
+Math formulas:
+
+static double distance_formula(double thisX, double thisY, double otherX, double otherY){
+return Math.sqrt(Math.pow(otherX - thisX, 2) + Math.pow(otherY - thisY, 2));
+
+
+}
+
+static double[] angle(double x1, double y1, double x2, double y2, Pedestrian p){
+double angle = Math.toDegrees(Math.atan2(y2-y1, x2-x1));
+angle += Math.ceil(-angle/360) * 360;
+
+//double angle = Math.toDegrees(Math.atan2(y2-y1, x2-x1));
+
+// decide a step offset based on which sector the other pedestrian lies in
+if (p.getSideChoosen() == SideChooser.Left) { // the pedestrian comes from the left side
+    if (angle < 45 || angle > 315)          //front
+        return new double[]{-SPEED / 5, 0};
+    else if (angle >= 135 && angle <= 225)  //back
+        return new double[]{SPEED * 1.4, 0};
+    else if (angle >= 45 && angle <= 90)    //North-East
+        return new double[]{0, SPEED};
+    else if (angle > 90 && angle <= 135)    //North-West
+        return new double[]{SPEED * 1.2, SPEED};
+    else if (angle >= 270 && angle <= 315)  //South-East
+        return new double[]{0, -SPEED};
+    else if (angle > 225 && angle <= 270)   //South-West
+        return new double[]{SPEED * 1.2, -SPEED};
+    else
+        return new double[]{SPEED, 0};
+} else {
+    if (angle < 45 || angle > 315)          //back
+        return new double[]{SPEED * 1.4, 0};
+    else if (angle >= 135 && angle <= 225)  //front
+        return new double[]{-SPEED / 5, 0};
+    else if (angle >= 45 && angle <= 90)    //North-West
+        return new double[]{SPEED * 1.2, -SPEED};
+    else if (angle > 90 && angle <= 135)    //North-East
+        return new double[]{0, -SPEED};
+    else if (angle >= 270 && angle <= 315)  //South-West
+        return new double[]{SPEED * 1.2, SPEED};
+    else if (angle > 225 && angle <= 270)   //South-East
+        return new double[]{0, SPEED};
+    else
+        return new double[]{SPEED, 0};
+}
+
+
+}
+"
+"['deep-learning', 'convolutional-neural-networks', 'convolution-arithmetic', 'receptive-field']"," Title: How can 3 same size CNN layers in different ordering output different receptive field from the input layer?Body: Below is a quote from CS231n:
+
+Prefer a stack of small filter CONV to one large receptive field CONV layer. Suppose that you stack three 3x3 CONV layers on top of each other (with non-linearities in between, of course). In this arrangement, each neuron on the first CONV layer has a 3x3 view of the input volume. A neuron on the second CONV layer has a 3x3 view of the first CONV layer, and hence by extension a 5x5 view of the input volume. Similarly, a neuron on the third CONV layer has a 3x3 view of the 2nd CONV layer, and hence a 7x7 view of the input volume. Suppose that instead of these three layers of 3x3 CONV, we only wanted to use a single CONV layer with 7x7 receptive fields. These neurons would have a receptive field size of the input volume that is identical in spatial extent (7x7), but with several disadvantages
+
+My visualized interpretation:
+
+How can you see through the first CNN layer from the second CNN layer and see a 5x5 sized receptive field?
+There were no previous comments stating all the other hyperparameters, like input size, strides, padding, etc., which made this very confusing to visualize.
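+
+If I try to write out the arithmetic myself (assuming stride 1 everywhere and no pooling, which is how I read the quote), the receptive field $r_l$ after layer $l$ with kernel size $k_l$ seems to be
+$$r_l = r_{l-1} + (k_l - 1)\prod_{i=1}^{l-1} s_i, \qquad r_0 = 1,$$
+so with three 3x3 convolutions of stride 1: $r_1 = 3$, $r_2 = 3 + 2 = 5$, $r_3 = 5 + 2 = 7$, which matches the 3x3, 5x5 and 7x7 in the quote.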
+
+Edited:
+I think I found the answer. BUT I still don't understand it. In fact, I am more confused than ever.
+"
+"['natural-language-processing', 'reference-request', 'chat-bots']"," Title: Is there an AI system that automatically generates classes and methods by giving it voice commands?Body: I want to develop (in Java) a voice plugin for Eclipse on a Mac that helps me jot down high-level classes and stub methods. For example, I would like to command it to create a class that inherits from X
and add a method that returns String
.
+
+Could somebody help me point out the right material to learn to achieve that?
+
+I don't mind using an existing solution if it exists. As far as I understand, I would have to use some Siri interface and use NLTK to convert the natural text into commands. Maybe there's some chatbot library that saves me some boilerplate NLP code so I can directly jump to writing a grammar or selecting sentence patterns.
+"
+"['neural-networks', 'convolutional-neural-networks', 'computer-vision', 'yolo']"," Title: In YOLO, what exactly do the values associated with each anchor box represent?Body: I'm going through Andrew NG's course, which talks about YOLO, but he doesn't go into the implementation details of anchor boxes.
+After having looked through the code, each anchor box is represented by two values, but what exactly are these values representing?
+As for the need for anchor boxes, I'm also a little confused about that --
+As far as I understand, the ground truth labels have around 6 variables :
+
+- $P_o$ checks if it's an object or background,
+- $B_x$ and $B_y$ are the center coordinates
+- $B_h$ and $B_w$ are the height and width of the box
+- $C$ is the object class, which depends on how many classes you have, so you can have multiple $C$
+
+As for creating the bounding box,
+$B_h$ is divided by 2, with one half from the center points ($B_x, B_y$) to the top, and the other half to the bottom.
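+If I write this out explicitly (my own notation, assuming $(B_x, B_y)$ is the box centre), the four box edges would be:
+$$x_{left} = B_x - \frac{B_w}{2}, \quad x_{right} = B_x + \frac{B_w}{2}, \quad y_{top} = B_y - \frac{B_h}{2}, \quad y_{bottom} = B_y + \frac{B_h}{2}$$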
+If we train our classifier, wouldn't the prediction boxes get close to the ground truth labels as training progresses? So, if our ground truth labels have tall, narrow boxes for some images, and short, wide boxes for other images, wouldn't our classifier automatically learn to differentiate between when to use one over the other as it is being trained? If so, what is the use of anchor boxes? And what do the numbers representing the anchor boxes mean?
+"
+"['deep-learning', 'hardware-evaluation']"," Title: Can I do deep learning with the 1060 or the 1070 ti?Body: Before I start, I want to let you know that I am completely new to the field of deep learning! Since I need a new graphics card either way (gaming you know) I am thinking about buying the GTX 1060 with 6GB or the 1070 ti with 8GB. Because I am not rich, basically I am a pretty poor student ;), I don't want to waste my money. I don't need deep learning for my studies I just want to dive into this topic because of personal interest. What I want to say is that I can wait a little bit longer and don't need the results as quickly as possible.
+
+Can I do deep learning with the 1060 (6GB seem to be very limiting, according to some websites) or the 1070 ti? Is the 1070 ti overkill for a personal hobby deep learner?
+
+Or should I wait for the new generation Nvidia graphics card?
+"
+"['natural-language-processing', 'statistical-ai']"," Title: How to figure out which words have the same meaning in two different languages?Body: Imagine two languages that have only these words:
+
+Man = 1,
+deer = 2,
+eat = 3,
+grass = 4
+
+
+And you would form all sentences possible from these words:
+
+Man eats deer.
+Deer eats grass.
+Man eats.
+Deer eats.
+
+
+German:
+
+Mensch = 5,
+Gras = 6,
+isst = 7,
+Hirsch = 8
+
+
+Possible German sentences:
+
+Mensch isst Hirsch.
+Hirsch isst Gras.
+Mensch isst.
+Hirsch isst.
+
+
+How would you write a program that would figure out which words have the same meaning in English and German?
+
+It is possible.
+
+All words get their meaning from the information in which sentences they can be used. The connection with other words defines their meaning.
+
+We need to write a program that would recognize that a word is connected to other words in the same way in both languages. Then it would know those two words must have the same meaning.
+
+If we take the word ""deer"" (2) it has this structure in English
+
+1-3-2
+2-3-4
+
+
+In German (8):
+
+5-7-8
+8-7-6
+
+
+We get the same structure (pattern) in both languages: both 8 and 2 appear once in the first position and once in the last position, the middle word is the same in both sentences of each language, and the remaining word differs. So we can conclude that 8=2, because both elements are connected to the other elements in the same way.
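+
+A very rough sketch of how I imagine checking this automatically (using the toy sentences above as lists of word numbers; this crude version only compares sentence lengths and word positions, not the full structure):
+
+english = [[1, 3, 2], [2, 3, 4], [1, 3], [2, 3]]
+german = [[5, 7, 8], [8, 7, 6], [5, 7], [8, 7]]
+
+def profile(word, sentences):
+    # for every sentence containing the word, record (sentence length, position of the word)
+    return sorted((len(s), i) for s in sentences for i, w in enumerate(s) if w == word)
+
+for e in {w for s in english for w in s}:
+    for g in {w for s in german for w in s}:
+        if profile(e, english) == profile(g, german):
+            print(e, 'may mean', g)  # prints the pairs 1-5, 2-8, 3-7, 4-6 for this toy data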
+
+Maybe we just need to write a very good program for recognizing analogies and we will be on the right track to creating AI?
+"
+"['neural-networks', 'datasets']"," Title: Data-set values feature scaling: sigmoid vs tanhBody: As many papers point out, for better learning curve of a NN, it is better for a data-set to be normalized in a way such that values match a Gaussian curve.
+
+Does this process of feature normalization apply only if we use the sigmoid function as the squashing function? If not, what standard deviation is best for the tanh squashing function?
+"
+"['neural-networks', 'machine-learning', 'homework']"," Title: How can I manually calculate the output a specific neural network given some input?Body: I have the following problem, which I am unable to solve.
+
+A neural network with the following structure is given: 1 input neuron, 4 elements in the hidden layer, 1 output neuron.
+The output neuron is bipolar, the neurons in the hidden layer are linear.
+The weights between the input neuron and the neurons in the hidden layer have the following values: $w_{11} = -3, w_{12} = 2, w_{13} = -1, w_{14} = 0.5$, while between the neurons in the hidden layer and the output neuron: $w_{21} = +2, w_{22} = -0.5, w_{23} = -3, w_{24} = +1$ (no threshold input in either layer).
+What will the network response be like if the number 3 is given to the input neuron?
+
+"
+"['deep-learning', 'computer-vision']"," Title: Continuous ground truth in supervised (metric) learning?Body: I am writing my thesis in the field of (deep) metric learning (DML). I am training a network in the fashion of contrastive / triplet Siamese networks to learn similarity and dissimilarity of inputs. In this context, the ground truth is commonly expressed as a binary. Let's take an example based on the similarity of species:
+
+
+- Image A: German Shepherd (dog)
+- Image B: Siberian Husky (dog)
+- Image C: Turkish Angora (cat)
+- Image D: gray wolf (wolf)
+
+
+
+ Image A and B are similar: same species, same sub-species (canis lupus) -> 1.0 == TRUE
+
+ Image A and C are dissimilar: different species (canis lupus vs. felis silvestris) -> 0.0 == FALSE
+
+ Image A and D ? same species, but different sub-species -> 0.8
+
+
+Which metric learning approaches use a continuous ground truth for learning?
+
+I could imagine that there is a lot of research out there using a continuous ground truth in classification settings, for instance to learn that the expression of a face is ""almost (60%) happy"", or, more controversially, that an image of a person depicts a ""70% attractive person"". Also in these fields I would be happy about hints / links.
+
+Remarks:
+
+
+- I don't ask for opinions on whether this makes sense or not.
+
+"
+"['neural-networks', 'deep-learning']"," Title: Modelling odd-even distinction of an integer with neural networksBody: Will it be possible to model the problem of odd-even distinction of an integer (not binary string representation) using neural networks?
+"
+"['neural-networks', 'machine-learning', 'backpropagation', 'datasets', 'gradient-descent']"," Title: What are some concrete steps to deal with the vanishing gradient problem?Body: I am training an ANN for classification between 3 classes. The ANN has an input layer, one hidden layer and a 3 node output layer.
+
+The problem I am facing is that the outputs produced by the 3 output nodes are so close to 1 (for the first few iterations at least, and so I am assuming the problem propagates to future outputs as well) that the weights are not being updated (or hardly updated), the updates being only about $10^{-11}$. I can fix the numerical issue itself (but I don't think it is the culprit). I think such low values of error are the main culprit, and I cannot figure out what is causing such low values of error.
+
+What will make the network behave more responsively, that is, how will I actually be able to get meaningful weight updates and not something in the order of $10^{-11}$?
+
+The data set contains values in the order of $10$s, and the weights are randomly initialized in the range $0 < w < 1$. I have tried feature normalization, but it is not that effective.
+"
+"['unsupervised-learning', 'keras', 'python']"," Title: Keras pattern finding between hash and wordBody: My goal is to build a neural net that can find patterns between a hash and a word on it's own. So that it returns the word of any hash that I will input.
+
+Unfortunately my skill in the area of neural nets isn't advanced, and I want to use this project to learn more. So I use a German dictionary and encode it via one_hot encoding. Then I generate the sha256 value of every word inside it (before doing this I cleaned the file and wrote every word on a separate line). So I got a big array with the shape of 20000x20000 for the words and another one for the hashes.
+
+So then I used an example from the Keras homepage for binary classification, because the one_hot values are represented by ones and zeros.
+
+So if I want to predict a hash I get this error: Error when checking : expected dense_1_input to have shape (20000,) but got array with shape (1,). So I don't know if this model is working for my problem, but I couldn't convert one hash into a size of 20000x20000 (the hash will be one_hot encoded for that prediction). So how could I get it to accept differently shaped hashes / one hash only?
+
+Is there a way to train the model on one hash after another, for example with a for loop?
+EDIT: I figured out that I can convert a list of characters into a numpy array with 2 dimensions. So I one-hot encoded every character, created a list of them, and passed this list into np.array(words, ndim=2). I did the same for my hashes. Then, after I ran the code, I got this error: ValueError: setting an array element with a sequence. So I tried to reshape the array with the .reshape(20000) command, but nothing changed. So what to do about that? EDIT2: I figured out now that the problem is that the one-hot encoding generates differently sized ""arrays"" for each word, and if I fill these into a real array and feed this into a neural net, it has to return this error. But the question still is: how do I convert single words and hashes into a format that I can train a neural net with and get useful output, so that I can enter any hash and it returns some kind of word (label)? If you need the actual code please inform me and I will upload its current state.
+Code:
+
+import hashlib
+import pandas as pd
+from keras.models import Sequential
+from keras.layers import Dense, Dropout
+
+# test_hashs and test_words are the one-hot encoded hash / word arrays built earlier
+model = Sequential()
+model.add(Dense(64, input_shape=(20000,), activation='relu'))  # input_shape must be a tuple
+model.add(Dropout(0.5))
+model.add(Dense(64, activation='relu'))
+model.add(Dense(units=64, activation='relu'))
+model.add(Dropout(0.5))
+model.add(Dense(19957, activation='sigmoid'))
+model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
+print(""Fitting data..."")
+model.fit(test_hashs, test_words, epochs=10, batch_size=128, verbose=1)
+
+# encode a new hash the same way as the training data before predicting
+train_y = input(""Input a hash that is not contained in the training data: "")
+train_x = pd.Series(hashlib.sha256(str.encode(train_y)).hexdigest())
+train_y = pd.Series(train_y)
+test_x = pd.get_dummies(train_x)   # this is where the shape no longer matches (20000,)
+test_y = pd.get_dummies(train_y)
+model.save(""first_test"")
+score = model.evaluate(test_x, test_y, batch_size=128)
+print(""Score: "", score)
+prediction = model.predict(test_x, verbose=1)
+for i in prediction:
+    print(i)
+
+"
+"['ai-design', 'computer-vision', 'ai-basics']"," Title: Data extraction from medical reportsBody: I am new in Machine Learning. I have taken a course in vision and we are required to do a project.
+
+I am thinking of data mining medical lab report images. My code must take an image (a jpg file) and then extract important information from it, like the lab where the test was done, the patient name, the test type and, more importantly, various values like haemoglobin, RBC, etc., in the case of a blood test report.
+
+I can build an OCR, but the problem I am stuck at is the data that generally forms a table-like structure. I want to find that tabular structure so that I can just apply matrix extraction to recover the various values.
+
+I'm looking for assistance with two basic things:
+
+
+- Is my approach of finding tables and then extracting data correct? If yes, can you point out some good papers or implementations for finding tabular structure? (P.S. - Don't mention tabular)
+- Any approach which is state-of-the-art or otherwise good? (Paper or implementation)
+
+"
+"['natural-language-processing', 'probability']"," Title: How can I improve this word-prediction AI?Body: I'm relatively new to AI, and I've tried to create one that ""speaks"". Here's how it works:
+1. Get training data, e.g. 'Jim ran to the shop to buy candy'
+2. The data gets split into overlapping 'chains' of three, e.g. ['Jim ran to', 'ran to the', 'to the shop', 'the shop to'...]
+3. User enters two words
+4. Looks through the chains to find if the two words have been seen before.
+5. If they have, finds out which word followed them and how many times.
+6. Work out the probability, e.g.: if 'or' followed the two words 3 times, 'because' followed the two words 1 time and 'but' followed them 1 time, the probabilities would be 0.6, 0.2 and 0.2
+7. Generate a random decimal
+8. If the random decimal is in the range of the first word (0 - 0.6) pick that one, or if it's in the range of the second word (0.6 - 0.8) pick that word, or if it's in the range of the third (0.8 - 1) pick that word
+9. Output the word picked
+10. Repeat from 4 but with the new last two words, e.g. if the last words had been 'to be' and it picked 'or', the new last two words would be 'be or' (see the sketch below).
+
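+For concreteness, here is a stripped-down sketch of the procedure above (the training text is just a placeholder):
+
+import random
+from collections import defaultdict, Counter
+
+text = 'jim ran to the shop to buy candy'.split()  # placeholder training data
+chains = defaultdict(Counter)                      # (word1, word2) -> counts of the following word
+for a, b, c in zip(text, text[1:], text[2:]):
+    chains[(a, b)][c] += 1
+
+def next_word(w1, w2):
+    followers = chains.get((w1, w2))
+    if not followers:
+        return None
+    words, counts = zip(*followers.items())
+    return random.choices(words, weights=counts)[0]  # pick according to the probabilities in step 6
+
+w1, w2 = 'jim', 'ran'
+for _ in range(5):
+    w3 = next_word(w1, w2)
+    if w3 is None:
+        break
+    print(w3, end=' ')
+    w1, w2 = w2, w3                                  # step 10: shift the window
+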
+It does work, but it doesn't stick to a particular topic. For example, after training with 800 random Wikipedia articles:
+
+
+ In the early 1990s the frequency had a plastic pickguard and separate hardtail bridge with the council hoped that the bullet one replaced with the goal of educating the next orders could revert to the north island or string of islands in a new urban zone close to the west.
+
+
+As you can see the topic changes many times mid-sentence. I thought of increasing the number of words it considered from two to three or four, but I thought it might start simply quoting the articles. If I'm wrong please tell me.
+
+Any help is greatly appreciated. If I haven't explained clearly enough or you have any questions please ask.
+"
+"['neural-networks', 'machine-learning']"," Title: Stacked softmax layers before outputBody: I have seen people using stacked softmax layers right at the output of neural networks designed for classification. I'm trying to understand this. Does it give any additional value? I think this could ""sharpen"" decisions on the boundaries.
+
+
+ model.add(Dense(10, activation='sigmoid'))
+ model.add(Dense(1, activation='sigmoid'))
+
+
+Seen here.
+"
+"['machine-learning', 'computer-vision', 'reference-request', 'statistical-ai']"," Title: Is there a way of computing a prominence score based on the prevalence of features in an image?Body: Is there any previous work on computing some sort of prominence score based on the prevalence of features in an image?
+
+For example, let's say I am classifying images based on whether or not they have dogs in them. Is there a way to compute how prominent that feature is?
+"
+"['philosophy', 'evolutionary-algorithms', 'artificial-consciousness']"," Title: Can the first emergence of consciousness in evolution be replicated in AI?Body: At some point in time during the evolution, because of some factors, some beings first started to become conscious of themselves and their surroundings. That conscious experience is beyond some mere sensory reflexive actions trained. Can that be possible with AI?
+"
+"['natural-language-processing', 'chat-bots']"," Title: How could a chat bot learn synonyms?Body: I have started to make a chatbot. It has a list of greetings that it understands and responds to with its own list of greetings.
+
+How could a bot learn a new greeting or a synonym for a word it already knows?
+"
+"['philosophy', 'game-ai', 'alphago']"," Title: Is it fair to compare AlphaGo with a Human player?Body: A human player plays limited games compared to a system that undergoes millions of iterations. Is it really fair to compare AlphaGo with the world #1 player when we know experience increases with the increase in number of games played?
+"
+"['game-ai', 'heuristics', 'alpha-beta-pruning', 'combinatorial-games', 'adversarial-search']"," Title: What else can boost iterative deepening with alpha-beta pruning?Body: I read about minimax, then alpha-beta pruning, and then about iterative deepening. Iterative deepening coupled with alpha-beta pruning proves to quite efficient as compared to alpha-beta alone.
+I have implemented a game agent that uses iterative deepening with alpha-beta pruning. Now, I want to beat myself. What can I do to go deeper? Like alpha-beta pruning cut the moves, what other small change could be implemented that can beat my older AI?
+My aim to go deeper than my current AI. If you want to know about the game, here is a brief summary:
+There are 2 players, 4 game pieces, and a 7-by-7 grid of squares. At the beginning of the game, the first player places both the pieces on any two different squares. From that point on, the players alternate turns moving both the pieces like a queen in chess (any number of open squares vertically, horizontally, or diagonally). When the piece is moved, the square that was previously occupied is blocked. That square can not be used for the remainder of the game. The piece can not move through blocked squares. The first player who is unable to move any one of the queens loses.
+So my aim is to cut the unwanted nodes and search deeper.
+"
+"['neural-networks', 'machine-learning', 'image-recognition']"," Title: How to know what kind of memory is stored in the connection weights?Body: So, I have seen few pictures re-created by a Neural Network or some other Machine Learning algorithm after it has been trained over a data set.
+
+How, exactly is this done? How are the weights converted back into a picture or a memory which a Neural Net is holding?
+
+A real-life example: when we close our eyes, we can easily visualize things we have seen, and based on that we can classify the things we see. Now, in a neural net the classification part is easily done, but what about the visualization part? What does the neural net see when it closes its eyes? And how can that be represented for human understanding?
+
+For example a deep net generated this picture:
+
+
+SOURCE: Deep nets generating stuff
+
+There can be many other things generated. But the question is how exactly is this done?
+"
+"['machine-learning', 'math', 'probability', 'notation', 'expectation']"," Title: What does the argmax of the expectation of the log likelihood mean?Body: What does the following equation mean? What does each part of the formula represent or mean?
+
+$$\theta^* = \underset {\theta}{\arg \max} \Bbb E_{x \sim p_{data}} \log {p_{model}(x|\theta) }$$
+"
+"['deep-learning', 'generative-adversarial-networks', 'generative-model', 'self-supervised-learning']"," Title: What is the purpose of the GAN?Body: The Generative Adversarial Network (GAN) is composed of a generator $G$ and a discriminator $D$. How do these two components interact? What is the intuition behind the GAN, its purpose, and how it is trained?
+"
+"['machine-learning', 'google', 'automated-machine-learning']"," Title: What is an intuitive explanation of how Google's AutoML works?Body: I recently read that Google has developed a new AI that anyone can upload data to and it will instantly generate models, i.e. an image recognition model based on that data.
+Can someone explain to me in a detailed and intuitive manner how this AI works?
+"
+"['machine-learning', 'search', 'resource-request']"," Title: Is there any website that allows you to choose an algorithm, code it and visualise how it works?Body: I would like to do some practical implementation of a planning algorithm (of course, something a bit simple and easy).
+Is there any website where I can pick an algorithm (e.g. A* or hill climbing), code it, and visualize how it works/executes?
+The site doesn't necessarily need to be restricted to planning or search algorithms. For example, in the context of machine learning, I would also like to be able to pick the learning algorithm and model (e.g. linear regression), code it, and visualize how it works.
+"
+"['machine-learning', 'prediction', 'optimization']"," Title: Application of Ai to task scheduling problems on heterogenous platformsBody: Let's say we have a cluster of 20-2000 heterogenous compute nodes.
+Consider for example the parallel solution of the helmholtz equation:
+Now we want to distribute the solution process and, to make things easier, we split the problem in a fine-grained way (partial solution of the system matrix).
+We could train an Ai with the time taken to solve the subproblem depending on multiple factors (for example, size of the mesh, needed precision, etc) and let the Ai choose the optimal distribution and division of the problem based on the available data.
+
+I'm new to the area of Artificial Intelligence.
+Are there any open source frameworks which could accomplish this task?
+How would you estimate the required amount of compute power to train the network?
+"
+"['deep-learning', 'deep-neural-networks', 'reference-request']"," Title: The connection between number of layer of DNN and computational complexity of itBody: number of layer of DNN and computational complexity of it are correlated after optimization, but how to estimate it before designing DNN?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'computational-learning-theory', 'comparison']"," Title: What is the relation between the definition of learnability of Vapnik and Gold and learnability of neural networks?Body: Gold showed that a language can be learned only if it contains a finite set of sentences.
+
+We know that deep neural networks can implement any function. Does this contradict Gold's result?
+
+What is the relation or difference between the definition of learnability of Vapnik and Gold and the definition of learnability of neural networks?
+"
+"['neural-networks', 'activation-functions', 'hyperparameter-optimization', 'hyper-parameters', 'network-design']"," Title: How to design a neural network to predict the quadrant where a given point lies?Body: I've written a single perceptron that can predict whether a point is above or below a straight-line graph, given the correct training data and using a sign activation function.
+Now, I'm trying to design a neural network that can predict whether a point $(x, y)$ is in the 1st, 2nd, 3rd or 4th quadrant of a graph.
+One idea I have had is to have 2 input neurons, the first taking the $x$ value, the 2nd taking the $y$ value, these then try and predict individually whether the answer is on the right or left of the centre, and then above or below respectively. These then pass their outputs to the 3rd and final output neuron. The 3rd neuron uses the inputs to try and predict which quadrant the coordinates are in. The first two inputs use the sign function.
+The problem I'm having with this is to do with the activation function of the final neuron. One idea was to have a function that somehow scaled the output into an integer between 0 and 1, so 0 to 0.25 would be quadrant 1, and so on up to 1. Another idea would be to convert it to a value using sin and representing it as a sine wave as this could potentially represent all 4 quadrants.
+Another idea would be to have a single neuron taking the $x$ and $y$ values as input and predicting whether the point was above or below a line (like my perceptron example), and then having two output neurons: the 1st output neuron would fire if the point was above the line and would receive the original $x$ coordinate, and the 2nd output neuron would fire if it was below, likewise receiving the original $x$ value, in order to determine whether the point was to the left or to the right.
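+To make the first idea concrete, this is roughly what I mean, with the two hidden units hard-coded as sign functions rather than learned (the open question is still what the output neuron and its activation should look like):
+
+def sign(v):
+    return 1 if v >= 0 else -1
+
+def quadrant(x, y):
+    right = sign(x)   # hidden unit 1: right (+1) or left (-1) of the y-axis
+    up = sign(y)      # hidden unit 2: above (+1) or below (-1) the x-axis
+    mapping = {(1, 1): 1, (-1, 1): 2, (-1, -1): 3, (1, -1): 4}  # hand-made mapping, not a learned output neuron
+    return mapping[(right, up)]
+
+print(quadrant(2, 3), quadrant(-2, 3), quadrant(-2, -3), quadrant(2, -3))  # 1 2 3 4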
+Are these good ways of designing a neural network for this task?
+"
+"['neural-networks', 'convolutional-neural-networks', 'classification', 'backpropagation']"," Title: Is there any research on neural networks with multiple outputs for hierarchical label classification?Body: I had this idea of training for example a CNN on images, and having output branches at several of its intermediate layers. The early layers' output branch might then predict high-level class of detected objects (supposedly able to do this because less info is needed for a high-level classification than a very specialised one), and the later layers giving more detailed labels of the sub-class of the earlier high level class.
+
+I have been searching for research on this type of setup but couldn't really find anything. Is there a name for this idea, or is this an open question/idea?
+"
+"['deep-learning', 'voice-recognition']"," Title: Can variations in microphones used in training set and test set impact the accuracy of speech recognition models?Body: If I train a speech recognition model using data collected from N different microphones, but deploy it on an unseen (test) microphone - does it impact the accuracy of the model?
+
+While I understand that theoretically an accuracy loss is likely, does anyone have any practical experience with this problem?
+"
+"['game-ai', 'chess', 'alphazero']"," Title: What was the average decision speed pf Alpha Zero in the recent Stockfish match?Body: The match got a lot of press, and I doubt anyone is surprised that Alpha Zero crushed Stockfish.
+
+See: AlphaZero Destroys Stockfish in 100 Game Match
+
+To me, what's really salient is that ""much like humans, AlphaZero searches fewer positions than its predecessors. The paper claims that it looks at ""only"" 80,000 positions per second, compared to Stockfish's 70 million per second.""
+
+For those who remember Matthew Lai's GiraffeChess:
+
+
+ However, it is interesting to note that the way computers play chess is very different from how
+ humans play. While both humans and computers search ahead to predict how the game will go on,
+ humans are much more selective in which branches of the game tree to explore. Computers, on the
+ other hand, rely on brute force to explore as many continuations as possible, even ones that will be
+ immediately thrown out by any skilled human. In a sense, the way humans play chess is much more
+ computationally efficient - using Garry Kasparov vs Deep Blue as an example, Kasparov could not
+ have been searching more than 3-5 positions per second, while Deep Blue, a supercomputer with
+ 480 custom ”chess processors”, searched about 200 million positions per second 1 to play at
+ approximately equal strength (Deep Blue won the 6-game match with 2 wins, 3 draws, and 1 loss).
+
+ How can a human searching 3-5 positions per second be as strong as a computer searching 200
+ million positions per second? And is it possible to build even stronger chess computers than what
+ we have today, by making them more computationally efficient? Those are the questions this
+ project investigates.
+
+
+[Lai was tapped by DeepMind as a researcher last year]
+
+But what I'm interested in at the moment is the decision speed in these matches:
+
+- What was the average time to make a move in the AlphaZero vs. Stockfish match?
+"
+['reinforcement-learning']," Title: Agent in toy environment only learns to act optimally with small discount factorsBody: I have tried several environment libraries like OpenAI gym/gridworld but now I am trying to create a toy environment for experimentation. The environment I've created is as follows:
+
+
+- State: grid with n rows by m columns, represented by a boolean matrix. Each grid cell can be empty or filled and the grid starts empty.
+- Action: one of the m columns to be filled, which must have at least the top row empty.
+- Next state: Once a column is chosen, the lowest unfilled cell in that column is filled. This works from bottom up like a very simple version of Tetris.
+- Reward: after every action, a reward equal to the number of empty columns is awarded.
+
+
+Therefore, in a sample world of 5 rows by 3 columns, starting off with an empty grid, the maximum attainable reward would be obtained by filling column-wise first. This policy gives a maximum total reward of 2*5 + 1*5 = 15 (2 free columns during the 5 actions filling the first column, then 1 free column during the 5 actions filling the second column).
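+
+For reference, this is roughly how my toy environment behaves (a condensed sketch, not my actual training code; it assumes the chosen column is never already full):
+
+import numpy as np
+
+class ColumnFillEnv:
+    def __init__(self, rows=5, cols=3):
+        self.grid = np.zeros((rows, cols), dtype=bool)
+
+    def step(self, col):
+        r = int(np.argmin(self.grid[:, col]))          # lowest unfilled cell in that column
+        self.grid[r, col] = True
+        reward = int((~self.grid.any(axis=0)).sum())   # number of completely empty columns
+        done = bool(self.grid.all())
+        return self.grid.copy(), reward, done
+
+env = ColumnFillEnv()
+total = 0
+for col in [0] * 5 + [1] * 5 + [2] * 5:                # fill column-wise
+    _, r, done = env.step(col)
+    total += r
+print(total)                                           # 15, the maximum described above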
+
+This very simple environment is trained using DQN with a single ff layer. The agent only took a few episodes to converge and is able to produce the maximum attainable reward.
+
+In the next toy environment, I made it a little more complex: I modified the very first action to be a random choice of any column. I have retrained the RL model with the new environment modification. However, after convergence, the agent does not attain the max score of 15 for all possible starting columns. I.e., if column 1 was randomly chosen first, the max score might be 15; however, if column 2 or 3 was randomly chosen first, the max score might only reach 11 or 9. In theory, the optimal policy would be for the agent to fill the column that was randomly chosen first, i.e. repeat the first randomly chosen action.
+
+I have tried several ways to tweak my input parameters (e.g. epsilon_decay_rate, learning_rate, batch_size, number of hidden nodes) to see if the agent could act optimally for all possible starting columns. I also tried DDQN and SARSA. The only way I could make the agent perform optimally is by reducing gamma (the discount factor) to 0.5 or below. Is there any explanation for why the agent only works for small discount factors in this example? Also, are there alternative ways to obtain the optimal policy?
+"
+"['chess', 'alphazero']"," Title: Why were Chess experts surprised by the AlphaZero's victory against Stockfish?Body: It was recently brought to my attention that Chess experts took the outcome of this now famous match as something of an upset.
+
+See: Chess’s New Best Player Is A Fearless, Swashbuckling Algorithm
+
+As a non-expert on chess and chess AI, my assumption, based on the performance of AlphaGo and the validation of that type of method on combinatorial games, was that the older AI would have no chance.
+
+
+- Why was AlphaZero's victory surprising?
+
+"
+"['neural-networks', 'neurons', 'neuroscience', 'brain', 'computational-theory-of-mind']"," Title: What makes the animal brain so special?Body: Whenever I read any book about neural networks or machine learning, their introductory chapter says that we haven't been able to replicate the brain's power due to its massive parallelism.
+Now, in modern times, transistors have been reduced to the size of nanometers, much smaller than the nerve cell. Also, we can easily build very large supercomputers.
+
+- Computers have much larger memories than brains.
+- Computers can communicate faster than brains (clock pulse in nanoseconds).
+- Computers can be of arbitrarily large size.
+
+So, my question is: why cannot we replicate the brain's parallelism if not its information processing ability (since the brain is still not well understood) even with such advanced technology? What exactly is the obstacle we are facing?
+"
+"['reinforcement-learning', 'papers', 'importance-sampling', 'sample-efficiency']"," Title: What is sample efficiency, and how can importance sampling be used to achieve it?Body: For instance, the title of this paper reads: ""Sample Efficient Actor-Critic with Experience Replay"".
+
+What is sample efficiency, and how can importance sampling be used to achieve it?
+"
+"['deep-learning', 'reference-request', 'papers', 'symbolic-ai', 'neurosymbolic-ai']"," Title: What are some interesting recent papers that synthesize symbolic AI with Deep Learning?Body: A lot of people seem to be under the impression that combining GOFAI and contemporary AI will make models more general. I'm particularly interested in reasoning through analogy or case-based reasoning.
+"
+"['machine-learning', 'artificial-neuron', 'object-recognition']"," Title: Is understanding value for different features next step for object recognition?Body: Once the artificially intelligent machines are able to identify objects, we might want to teach them how to value different things differently based on their utility, demand, life, etc. How can we accomplish this and how did we start to value things?
+"
+"['deep-learning', 'computer-vision', 'deep-neural-networks', 'activation-functions', 'image-super-resolution']"," Title: Why is no activation function used at the final layer of super-resolution models?Body: I'm trying to implement some image super-resolution models on medical images. After reading a set of papers, I found that none of the existing models use any activation function for the last layer.
+What's the rationale behind that?
+"
+"['machine-learning', 'natural-language-processing']"," Title: Existing programs that find out words with same meaningsBody: I'm wondering if these 2 specific programs already exist and if not how hard would it be to write them:
+
+
+- A program that would figure out (by only ""reading"" large amounts of texts in human language 1 and 2) which words in second language have the same meaning as a word in first language. You would give for input texts in both languages and for output you would get for every word in first language a list of words in second language that are most similar to it with a probability that they mean the same thing.
+- A program that would figure out which words have the most similar meaning by analyzing large amounts of texts in one human language.
+
+
+I'm planning on writing these two programs and it would be nice if I could get existing programs that do this so that I could compare results of my program to those of existing programs.
+"
+"['reference-request', 'philosophy', 'artificial-consciousness']"," Title: What are the current theories on the development of a conscious AI?Body: What are the current theories on the development of a conscious AI? Is anyone even trying to develop a conscious AI?
+
+Is it possible that consciousness is an emergent phenomenon, that is, once we put enough complexity into our system, it will become self-aware?
+"
+"['reinforcement-learning', 'multi-armed-bandits']"," Title: Programming a bandit to optimize donationsBody: I'm developing a multi-armed bandit which learns the best information to display to persuade someone to donate to charity.
+
+Suppose I have treatments A, B, C, D (which are each one paragraph of text). The bandit selects one treatment to show to a person. The person is given $1 and has to decide how much (if any) to donate, in increments of one cent. The donation decision is recorded and fed to the multi-armed bandit, who will then re-optimize before another person is shown a treatment selected by the bandit.
+
+How should I program the bandit if my objective is to maximize total donations? For example, can I use Thompson sampling, and if a participant donates $0.80, I count that as 80 successes and 20 failures?
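+
+To make my idea explicit, this is roughly the update I had in mind (just a sketch; whether fractional pseudo-counts like this are statistically sound is exactly what I am unsure about):
+
+import numpy as np
+
+n_arms = 4
+alpha = np.ones(n_arms)   # Beta prior 'successes' per treatment
+beta = np.ones(n_arms)    # Beta prior 'failures' per treatment
+
+def select_arm():
+    # Thompson sampling: sample from each posterior and pick the best
+    return int(np.argmax(np.random.beta(alpha, beta)))
+
+def update(arm, donation):             # donation as a fraction of $1, e.g. 0.80
+    alpha[arm] += donation * 100       # e.g. $0.80 -> 80 'successes'
+    beta[arm] += (1 - donation) * 100  # and 20 'failures'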
+"
+"['natural-language-processing', 'recurrent-neural-networks', 'word-embedding', 'papers']"," Title: How is the word embedding represented in the paper ""Recurrent neural network based language model""?Body: I'm reading ""Recurrent neural network based language model"" of Mikolov et al. (2010). Although the article is straight forward, I'm not sure how word embedding $w(t)$ is obtained:
+
+
+
+The reason I wonder is that in the classic ""A Neural Probabilistic Language Model"" by Bengio et al. (2003), they used a separate embedding vector for representing each word, and it was somehow a ""semi-layer"", meaning it contained no non-linearity, but they did update the word embeddings during back-propagation.
+
+In Mikolov's approach, though, I assume they used a simple one-hot vector, where each feature represents the presence of one word. If we represent a single word input that way (as was done in Mikolov's paper), that vector becomes all zeros except for a single one.
+
+Is that correct?
+"
+"['neural-networks', 'boltzmann-machine', 'restricted-boltzmann-machine']"," Title: How do we calculate the hidden units values in a (restricted) Boltzmann machine?Body: Now, Boltzmann machines are energy-based undirected networks, meaning there are no forward computations. Instead, for each input configuration $x$, a scalar energy is calculated to asses this configuration. The higher the energy, the less likely for $x$ to be sampled from the target distribution.
+
+The probability distribution is defined through the energy function by summing over all possible states of the hidden part $h$.
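+
+Writing out what I mean by that (my own notation, with $x$ the visible configuration and $h$ the hidden part):
+$$P(x) = \frac{\sum_h e^{-E(x,h)}}{Z}, \qquad \mathcal{F}(x) = -\log \sum_h e^{-E(x,h)}, \qquad \text{so } P(x) = \frac{e^{-\mathcal{F}(x)}}{Z} \text{ with } Z = \sum_x e^{-\mathcal{F}(x)}.$$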
+
+If I understand correctly, the hidden units are added to capture higher-order interactions, which offers more capacity to the model.
+
+So, how do we calculate the values of these hidden units? Or do we not explicitly compute these values and instead approximate the marginal ""free energy"" (which is the negative log of the sum over all possible states of $h$)?
+"
+"['neural-networks', 'machine-learning', 'fuzzy-logic']"," Title: Tuning the b parameter, ANFISBody: In order for the generalized bell membership function to retain its defined shape and domain, two restrictions must be placed on the b parameter: 1) b must be positive and 2) b must be an integer. Using backpropagation to tune the membership parameters (a,b and c in the bell), it appears to be possible that the correction to b will break one or both of these restrictions on b. Can someone please explain to me how we can use backpropagation to tune the b parameter (as well as a and c) without violating 1) and 2)?
+"
+"['machine-learning', 'unsupervised-learning', 'learning-algorithms', 'r']"," Title: Predict frequently purchased items under certain conditions with customer purchasing history dataBody: I have purchasing history data for grocery shopping. I am trying to get abnormally frequently purchased items under certain conditions. For instance, I am trying to find frequently purchased items, when customers shop online and are willing to pay an extra shipping fee.
+
+In order to find items that are particularly (or abnormally) frequently purchased in that situation (through online stores, paying a shipping fee), which machine learning algorithm should I apply, and how, to identify those items?
+
+I found the arules R package, which implements association rules on purchasing history, and tried to apply it. But it seems the package might be based on a different principle from my idea.
+
+Does anyone have an idea about my problem? If there is an R package related to the problem, that would be perfect.
+"
+"['philosophy', 'ethics']"," Title: Algorithms can be greedy. What are some other algorithmic vices?Body: Greedy algorithms are well known, and although useful in a local context for certain problems, and even potentially find general, global optimal solutions, they nonetheless trade optimality for shorter-term payoffs.
+This seems to me a good analogue for human greed, although there is also the grey goo type of greed that is senseless acquisition of material (think plutocrats who talk about wealth as merely a way of "keeping score".)
+Technical debt is an extension of development practices that fall under the algorithmic definition of greed (short-term payoff leads to trouble down the road.) This may be further extended to any non-optimized code in terms of energy waste (flipping of unnecessary bits) which will only increase as everything becomes more computerized.
+So my question is:
+
+- What are other vices that can arise in algorithms?
+
+"
+"['machine-learning', 'ai-design', 'computer-vision', 'knowledge-representation', 'reasoning']"," Title: Training an AI to play Starcraft 2 with superhuman level of performance?Body: I'm interested in working on challenging AI problems, and after reading this article (https://deepmind.com/blog/deepmind-and-blizzard-open-starcraft-ii-ai-research-environment/) by DeepMind and Blizzard, I think that developing a robust AI capable of learning to play Starcraft 2 with superhuman level of performance (without prior knowledge or human hard-coded heuristics) would imply a huge breakthrough in AI research.
+
+Sure, I know this is an extremely challenging problem, and by no means do I pretend to be the one to solve it, but I think it's a challenge worth taking on nonetheless, because the complexity of the decision making required is much closer to the real world, and so it forces you to come up with much more robust, generalizable AI algorithms that could potentially be applied to other domains.
+
+For instance, an AI that plays Starcraft 2 would have to be able to watch the screen, identify objects, positions, identify units moving and their trajectories, update its current knowledge of the world, make predictions, make decisions, have short term and long term goals, listen to sounds (because the game includes sounds), understand natural language (to read and understand text descriptions appearing in the screen as well), it should probably be endowed also with some sort of attention mechanism to be able to pay attention to certain regions of interest of the screen, etc. So it becomes obvious that at least one would need to know about Computer Vision, Object Recognition, Knowledge Bases, Short Term / Long Term Planning, Audio Recognition, Natural Language Processing, Visual Attention Models, etc. And obviously it would not be enough to just study each area independently, it would also be necessary to come up with ways to integrate everything into a single system.
+
+So, does anybody know good resources with content relevant to this problem? I would appreciate any suggestions of papers, books, blogs, whatever useful resource out there (ideally state-of-the-art) which would be helpful for somebody interested in this problem.
+
+Thanks in advance.
+"
+"['algorithm', 'natural-language-processing', 'ai-basics', 'getting-started']"," Title: How to find the subject in a text?Body: I often develop bots and I need to understand what some people are saying.
+
+Examples:
+- I want an apple
+- I want an a p p l e
+
+How do I find the object (apple)? I honestly don't know where to start looking. Is there an API that I can send the text to which returns the object? Or perhaps I should manually code something that analyses the grammar?
+"
+"['convolutional-neural-networks', 'linear-regression', 'dropout']"," Title: What to do if CNN cannot overfit a training set on adding dropout?Body: I have been trying to use CNN for a regression problem. I followed the standard recommendation of disabling dropout and overfitting a small training set prior to trying for generalization. With a 10 layer deep architecture, I could overfit a training set of about 3000 examples. However, on adding 50% dropout after the fully-connected layer just before the output layer, I find that my model can no longer overfit the training set. Validation loss also stopped decreasing after a few epochs. This is a substantially small training set, so overfitting should not have been a problem, even with dropout. So, does this indicate that my network is not complex enough to generalize in the presence of dropout? Adding additional convolutional layers didn't help either. What are the things to try in this situation? I will be thankful if someone can give me a clue or suggestion.
+
+PS: For reference, I am using the learned weights of the first 16 layers of Alexnet and have added 3 convolutional layers with ReLU non-linearity followed by a max pooling layer and 2 fully connected layers. I update weights of all layers during training using SGD with momentum.
+"
+"['philosophy', 'biology', 'genes']"," Title: Motivation that drives a rogue AI agentBody: This is a kind of biological and philosophical question. So, the recent concern in AI is that an AI agent may go rogue with prominent people voicing their concerns.
+
+Now say, we have created an AI (you are free to use your own definition of what makes an AI intelligent) which has gone rogue with powers given in this question.
+
+Now, the broad view of today's biology is that everything we do is to further our genes down the future (leaving aside small technical details). It is even widely accepted that we are just machines whose controller are the genes. Everything we do is controlled/hardwired by the genes with some avenue of learning from experiences. Also genes only further their own interest. Scientist George Price even wrote a mathematical equation proving all our acts are selfish and only furthering the interest of our genes (article). Also Richard Dawkins is a pioneer of this idea (this is only to show I haven't pulled the idea out from air).
+
+Now, my question is: what could possibly be the motivation of an AI agent to go rogue? It doesn't have genes whose interest it needs to further. We all do something for an end result. What is the end result a rogue AI might try to achieve/attain, and why?
+"
+"['neural-networks', 'deep-learning', 'deep-neural-networks', 'algorithmic-bias', 'inductive-bias']"," Title: Can prior knowledge be encoded in deep neural networks?Body: I was reading Gary Marcus's a Critical Appraisal of Deep Learning. One of his criticisms is that neural networks don't incorporate prior knowledge in tackling a problem. My question is: have there been any attempts at encoding prior knowledge in deep neural networks?
+"
+"['linear-regression', 'math']"," Title: Understanding a few terms in Andrew Ng's definition of the cost function for linear regressionBody: I have completed week 1 of Andrew Ng's course. I understand that the cost function for linear regression is defined as $J (\theta_0, \theta_1) = 1/2m*\sum (h(x)-y)^2$ and the $h$ is defined as $h(x) = \theta_0 + \theta_1(x)$. But I don't understand what $\theta_0$ and $\theta_1$ represent in the equation. Is someone able to explain this?
+"
+"['reinforcement-learning', 'q-learning', 'data-science']"," Title: How to implement exploration function and learning rate in Q LearningBody: I'm trying to implement Q-learning (state-based representation and no neural / deep stuff) but I'm having a hard time getting it to learn anything.
+
+I believe my issue is with the exploration function and/or learning rate. Thing is, I see different explanations in the sources I am following so I'm not sure what's the right way anymore.
+
+What I understand so far is that Q-learning is TD with q-val iteration.
+
+So a time-limited q-val iteration step is:
+
+Q[k+1](s,a) = ∑(s'): t(s,a,s') * [r(s,a,s') + γ * max(a'):Q[k](s',a')]
+
+
+Where:
+
+Q = q-table: state,action -> real
+t = MDP transition model
+r = MDP reward func
+γ = discount factor.
+
+
+But since this is a model-free, sample-based setting, the above update step becomes:
+
+Q(s,a) = Q(s,a) + α * (sample - Q(s,a))
+
+
+Where:
+
+sample = r + γ * max(a'):Q(s',a')
+r = reward, also coming from percept after taking action a in step s.
+s' = next state coming from percept after taking action a in step s.
+
+
+Now for example, assume the following MDP:
+
+ 0 1 2 3 4
+0 [10t][ s ][ ? ][ ? ][ 1t]
+
+Discount: 0.1 |
+Stochasticity: 0 |
+t = terminal (only EXIT action is possible)
+s = start
+
+
+With all of the above, my algo (in pseudo code) is:
+
+input: mdp, episodes, percept
+Q: s,a -> real is initialized to 0 for all a,s
+α = .3
+
+for all episodes:
+ s = mdp.start
+
+ while s not none:
+ a = argmax(a): Q(s,a)
+ s', r = percept(s,a)
+ sample = r + γ * max(a'):Q(s',a')
+ Q(s,a) = Q(s,a) + α * [sample - Q(s,a)]
+ s = s'
+
+
+As stated above, the algorithm will not learn. Because it will get greedy fast.
+
+It will start at 0,1 and choose the best action so far. All q-vals are 0, so it will choose based on the arbitrary order in which the q-vals are stored in Q. Assume 'W' (go west) is chosen. It will go to 0,0 with a reward of 0 and a q-val update of 0 (since we don't yet know that 0,0,EXIT yields 10).
+
+In the next step it will take the only possible action EXIT from 0,0 and get 10.
+
+At this point the q-table will be:
+
+0,1,W: 0
+0,0,Exit: 3 (reward of 10 averaged by learning rate of .3)
+
+
+And the episode is over because 0,0 was terminal.
+On the next episode, it will start at 0,1 again and take W again because of the arbitrary order. But now 0,1,W will be updated to 0.09. Then 0,0,Exit will be taken again (and 0,0,Exit updated to 5.1). Then the second episode will be over.
+
+At this point the q-table is:
+
+0,1,W: 0.09
+0,0,Exit: 5.1
+
+
+And the sequence 0,1,W->0,0,Exit will be taken ad infinitum.
+
+So this takes me to learning rates and the exploration functions.
+
+The book 'Artificial Intelligence: A Modern Approach' (3ed, by Russell) first mentions (pages 839-842) the exploration function as something to put in the val update (because it is discussing a model-based, value iteration approach instead).
+
+So extrapolating from the val update discussion in the book, I'd assume the q-val update becomes:
+
+Q(s,a) = ∑(s'): t(s,a,s') * [r(s,a,s') + γ * max(a'):E(s',a')]
+
+
+Where E would be an exploration function which according to the book could be something like:
+
+E(s,a) = <bigValue> if visitCount(s,a) < <minVisits> else Q(s,a)
+
+
+The idea being to artificially pump up the values of actions which have not been tried yet, so that they'll be tried out at least minVisits times.
+
+But then, on page 844, the book shows pseudo code for Q-learning and instead does not use this E in the q-val update but rather in the argmax of the action selection. I guess that makes sense, since exploration amounts to choosing an action...
+
+The other source I have is the UC Berkeley CS188 lecture videos/notes.
+In those (Reinforcement Learning 2: 2016) they show the exploration function in the q-val update step. This is consistent with what I extrapolated from the book's discussion on value iteration methods but not with what the book shows for Q-Learning (remember the book uses the exploration function in the argmax instead).
+
+I tried placing exploration functions in the update step, the action selection step and in both at the same time.. and still the thing eventually gets greedy and stuck.
+
+So not sure where and how this should be implemented.
+
+The other issue is the learning rate. The explanation usually goes ""you need to decrease it over time."" OK, but is there some heuristic? Right now, based on the book, I am doing:
+
+learn(s,a) = 0.3 / visitCount(s,a). But no idea if it is too much or too little or just right.
+
+Finally, assuming I had the exploration and learn right, how would I know how many episodes to train for?
+
+I'm thinking I'd have to keep 2 versions of the Q-table and check at which point the q-vals do not change much from previous iterations (similar to value iteration for solving known MDPs).
+"
+"['neural-networks', 'machine-learning', 'classification', 'natural-language-processing']"," Title: Methods to tell if a question can be answered from a paragraphBody: I'm working on a project related to machine Q&A, using the SQuAD dataset. I've implemented a neural-net solution for finding answers in the provided context paragraph, but the system (obviously) struggles when given questions that are unanswerable from the context. It usually produces answers that are nonsensical and of the wrong entity type.
+
+Is there any existing research in telling whether or not a question is answerable using the info in a context paragraph? Or whether a generated answer is valid? I considered textual entailment but it doesn't seem to be exactly what I'm looking for (though maybe I'm wrong about that?)
+"
+"['neural-networks', 'deep-learning', 'computer-vision', 'classification']"," Title: Is it possible to train a neural network to identify only one type of object?Body: I am new to neural networks. Is it possible to train a neural network to identify only one type of object? For instance, a table from a large set of images, where the neural network should be able to identify if new images are tables.
+"
+"['game-ai', 'applications']"," Title: What are examples of simple problems and applications that can be solved with AI techniques?Body: What are examples of simple problems and applications that can be solved with AI techniques, for a beginner who is trying to make use of his basic programming skills into AI at the beginning level?
+"
+"['neural-networks', 'deep-learning', 'papers', 'optimization', 'weight-normalization']"," Title: How does weight normalization work?Body: I was reading the paper Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks about improving the learning of an ANN using weight normalization.
+
+They consider standard artificial neural networks where the computation of each neuron consists in taking a weighted sum of input features, followed by an elementwise nonlinearity
+
+$$y = \phi(\mathbf{x} \cdot \mathbf{w} + b)$$
+
+where $\mathbf{w}$ is a $k$-dimensional weight vector, $b$ is a scalar bias term, $\mathbf{x}$ is a $k$-dimensional vector of input features, $\phi(\cdot)$ denotes an elementwise nonlinearity and $y$ denotes the scalar output of the neuron.
+
+They then propose to reparameterize each weight vector $\mathbf{w}$ in terms of a parameter vector $\mathbf{v}$ and a scalar parameter $g$ and to perform stochastic gradient descent with respect to those parameters instead.
+
+$$
+\mathbf{w} = \frac{g}{\|\mathbf{v}\|} \mathbf{v}
+$$
+
+where $\mathbf{v}$ is a $k$-dimensional vector, $g$ is a scalar, and $\|\mathbf{v}\|$ denotes the Euclidean norm of $\mathbf{v}$. They call this reparameterization weight normalization.
+
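+To make the reparameterization concrete, here is a tiny NumPy sketch of the forward computation described above (the gradients with respect to $\mathbf{v}$ and $g$ would normally come from automatic differentiation; the numbers are made up):
+
+import numpy as np
+
+k = 4
+v = np.random.randn(k)            # unconstrained direction parameters
+g = 1.5                           # scalar that directly controls the norm of w
+x, b = np.random.randn(k), 0.1
+
+w = g * v / np.linalg.norm(v)     # reparameterized weights, so ||w|| equals g
+y = np.tanh(x @ w + b)            # neuron output with an elementwise nonlinearity
+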
+What is this scalar $g$ used for, and where does it come from? Is $\mathbf{w}$ the normalized weight? In general, how does weight normalization work? What is the intuition behind it?
+"
+"['deep-learning', 'genetic-algorithms', 'evolutionary-algorithms', 'neuroevolution', 'novelty-search']"," Title: In novelty search, are the novel structures or behaviour of the neural network rewarded?Body: I have been reading a lot lately about some very promising work coming out of Uber's AI Labs using mutation algorithms enhanced with novelty search to evolve deep neural nets. See the paper Safe Mutations for Deep and Recurrent Neural Networks through Output Gradients (2018) for more details.
+
+In novelty search, are the novel structures or behavior of the neural network rewarded?
+"
+"['neural-networks', 'machine-learning', 'image-recognition']"," Title: In number classification using neural network, is training with edge image better than gray image?Body: i'm trying to identify numbers and letters in license plate. License plate images are taken at different lighting condtion and converted to gray image. My concern with type of data for training is:
+
+Gray Image:
+
+
+- Since they are taken under different lighting conditions, gray images have different pixel intensities for the same number. This means I have to collect a lot of training data covering different lighting conditions.
+
+
+Edge Image:
+
+
+- They lack pixel information, since only the edges are white while the rest (the background) is black. So I think they will be very weak against transformations like shearing or shifting.
+
+
+I want to get some information about which type of image is better for training digit classification under different lighting conditions. I would prefer to use edge images, if they don't perform much worse, since I can prepare edge images right now.
+"
+"['neural-networks', 'backpropagation', 'hidden-layers']"," Title: Data prepared to linear regression. Can I use it with backpropagation?Body: I'm studying a Master's Degree in Artificial Intelligence and I need to learn how to use the Java Neural Network Simulator, JavaNNS, program.
+
+In one practice I have to build a neural network to use backpropagation on it.
+
+I have created a neural network with one input layer with 12 nodes, one hidden layer with 6 nodes and one output layer with 1 node.
+
+I'm using Kaggle's titanic competition data with this format following Dataquest course Getting Started with Kaggle for Titanic competition:
+
+Pclass_1,Pclass_2,Pclass_3,Sex_female,Sex_male,Age_categories_Missing,Age_categories_Infant,Age_categories_Child,Age_categories_Teenager,Age_categories_Young Adult,Age_categories_Adult,Age_categories_Senior,Survived
+0,0,1,0,1,0,0,0,0,1,0,0,0
+1,0,0,1,0,0,0,0,0,0,1,0,1
+0,0,1,1,0,0,0,0,0,1,0,0,1
+1,0,0,1,0,0,0,0,0,1,0,0,1
+0,0,1,0,1,0,0,0,0,1,0,0,0
+0,0,1,0,1,1,0,0,0,0,0,0,0
+1,0,0,0,1,0,0,0,0,0,1,0,0
+0,0,1,0,1,0,1,0,0,0,0,0,0
+
+
+If you want see the same data better in an Spreadsheet:
+
+
+But they preprocess the data for use with linear regression, and I don't know if I can use this data with backpropagation.
+
+I think something is wrong because when I run backpropagation in JavaNNS I get these data:
+
+opened at: Sat Feb 17 17:29:40 CET 2018
+Step 200 MSE: 0.5381023044692738 validation: 0.11675894327003862
+Step 400 MSE: 0.5372328944712378 validation: 0.11700781497209432
+Step 600 MSE: 0.5370386219557437 validation: 0.11691717861750939
+Step 800 MSE: 0.5370348711919518 validation: 0.11696104763606407
+Step 1000 MSE: 0.5369724294992798 validation: 0.11697568840154722
+Step 1200 MSE: 0.5369697016710676 validation: 0.11665485957481342
+Step 1400 MSE: 0.5370053339270906 validation: 0.11684215268609244
+Step 1600 MSE: 0.5370121961199371 validation: 0.11670833992558485
+Step 1800 MSE: 0.5370200812483633 validation: 0.11673550099633925
+Step 2000 MSE: 0.5367923502149529 validation: 0.11675956129361797
+
+
+Nothing changes, it is like it doesn't learn anything.
+
+How many hidden layers should the network have, and how many nodes on each hidden layer?
+
+Maybe the problem is that the data has been prepared to be used with linear regression and I am using it with backpropagation.
+
+I have only created the neural network, I haven't implemented the backpropagation algorithm because it is already implemented in JavaNNS.
+"
+"['neural-networks', 'recurrent-neural-networks']"," Title: How to train a recurrent neural network with multiple seriesBody: I am new to neural networks. I am trying to model the run-off vs. time in a water channel after a storm event given that I know the permeability of the material in the channel, total precipitation, and some other single valued parameters for a particular event..
+
+I have a database of run off histories, and the values of the associated parameters (permeability, total precipitation, etc)
+
+I want my model to give me a runoff vs. time history when I enter the associated parameters.
+
+I do not know how to train my model. Do I just stack all the time histories in my database together and feed them in at once? All the examples in books use a single time history to train the model. I'm confused.
+"
+['linear-algebra']," Title: Does k consistency always imply (k - 1) consistency?Body: From Russell-Norvig:
+
+
+ A CSP is strongly k-consistent if it is k-consistent and is also (k − 1)-consistent, (k − 2)-consistent, . . . all the way down to 1-consistent.
+
+
+How can a CSP be k-consistent without being (k - 1)-consistent? I can't think of any counter example for this case. Any help would be appreciated.
+"
+"['neural-networks', 'tensorflow', 'recurrent-neural-networks']"," Title: Trajectory classification using RNNBody: The problem: I want to classify a trajectory if it has some properties, for example I want to create a simple 0/1 classifier for circular trajectories. If a target is moving in a circular trajectory the network should produce 1, if not it should produce 0.
+
+My input and data set: what I have is a data set with Cartesian coordinates in 2D, so x, y, Vx, Vy. I have a dataset of 10000 trajectories, 5000 circular and 5000 rectilinear. So I feed the network with a tensor [10000, 4, 1].
+
+The question: I'm trying to use a network with three layers: an input layer with 4 neurons, a hidden layer with 2 LSTM units and one fully-connected layer with a sigmoid activation function. Is it possible to feed the network with a tensor [4x1] each time? Or do I need to provide the information in batches? Or what? Is the design of my basic network correct?
+"
+"['neural-networks', 'backpropagation', 'math', 'activation-functions', 'relu']"," Title: What is the derivative of the Leaky ReLU activation function?Body: I am implementing a feed-forward neural network with leaky ReLU activation functions and back-propagation from scratch. Now, I need to compute the partial derivatives, but I don't know what the derivative of the Leaky ReLU is.
+
+Here is the C# code for the leaky ReLU function, which I got from this site:
+
+private double leaky_relu(double x)
+{
+ if (x >= 0)
+ return x;
+ else
+ return x / 20;
+}
+
+"
+"['convolutional-neural-networks', 'transfer-learning', 'fine-tuning', 'single-shot-multibox-detector']"," Title: When doing transfer learning, which initial layers do we need to freeze, and how should I change the last layer for my task?Body: I want to train a neural network for the detection of a single class, but I will be extending it to detect more classes. To solve this task, I selected the PyTorch framework.
+I came across transfer learning, where we fine-tune a pre-trained neural network with new data. There's a nice PyTorch tutorial explaining transfer learning. We have a PyTorch implementation of the Single Shot Detector (SSD) as well. See also Single Shot MultiBox Detector with Pytorch — Part 1.
+This is my current situation
+
+- The data I want to fine-tune the neural network with is different from the data that was used to initially train the neural network; more specifically, the neural network was initially trained with a dataset of 20 classes
+
+- I currently have a very small labeled training dataset.
+
+
+To solve this problem using transfer learning, the solution is to freeze the weights of the initial layers, and then train the neural network with these layers frozen.
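+As a rough illustration of that idea (my own sketch using a torchvision classification backbone as a stand-in, since the SSD detection head is more involved), freezing in PyTorch usually means turning off gradients for the pretrained layers and replacing the final layer:
+
+import torch.nn as nn
+import torchvision
+
+model = torchvision.models.resnet18(pretrained=True)
+
+# freeze all pretrained ('initial') layers
+for param in model.parameters():
+    param.requires_grad = False
+
+# replace the last layer so it predicts the new classes;
+# its freshly created parameters require gradients by default
+num_classes = 2
+model.fc = nn.Linear(model.fc.in_features, num_classes)
+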
+However, I am confused about what the initial layers are and how to change the last layers of the neural network to solve my specific task. So, here are my questions.
+
+- What are the initial layers in this case? How exactly can I freeze them?
+
+- What are the changes I need to make while training the NN to classify one or more new classes?
+
+
+"
+['neural-networks']," Title: Identify unnecessary inputs of NNBody: Lately I've been wondering.
+Is there a way to locate redundant/unnecessary/misleading inputs by analysing the weights in the first layer?
+"
+"['machine-learning', 'deep-learning', 'optimization', 'gradient-descent', 'facial-recognition']"," Title: How to calculate Adaptive gradient?Body: In the FaceNet paper there mentions an gradient algorithm called 'AdaGrad'(Adaptive Gradient) referenced to this paper which has apparently been used to calculate the gradient of the Triplet Loss function. After referring to the paper also I find it hard to understand how to calculate this adaptive gradient.
+
+Any ideas regarding this matter? Would love to hear any explanations or ideas towards understanding this concept.
+
+Thank you.
+"
+"['neural-networks', 'convolutional-neural-networks', 'reinforcement-learning']"," Title: Dealing with input to recurrent net with changing dimensionsBody: I have a problem in which the dimensions of the input are increasing in row and column at each timestep. What method for preprocessing could be done or are there any architectures used for solving such a case?
+"
+"['search', 'hill-climbing']"," Title: How can I formulate the map colouring problem as a hill climbing search problem?Body: I have a map. I need to colour it with $k$ colours, such that two adjacent regions do not share a colour.
+
+How can I formulate the map colouring problem as a hill climbing search problem?
+"
+"['research', 'ethics', 'social']"," Title: Is the research by Stanford University students who use logistic regression to predict sexual orientation from facial images really scientific?Body: Two Stanford University researchers, Dr. Michal Kosinki and Yilun Wang have published a paper that claims that AI can predict sexuality from a single facial photo with startling accuracy. This research is obviously disconcerting since it exposes an already vulnerable group to a new form of systematized abuse.
+
+The research can be found here https://osf.io/zn79k/ ,here https://psyarxiv.com/hv28a/ and has even been highlighted by Newsweek magazine here http://www.newsweek.com/ai-can-tell-if-youre-gay-artificial-intelligence-predicts-sexuality-one-photo-661643
+
+
+
+Above is an image of composite heterosexual faces and composite gay faces from the research. (Image courtesy of Dr Michal Kosinki and Yilun Wang)
+
+My question is, as knowledgeable members of the AI community, how can we scientifically debunk/discredit this research?
+"
+"['machine-learning', 'unsupervised-learning', 'pattern-recognition']"," Title: Detect observations under certain conditionsBody: I have a customer purchasing dataset and the data set is from a retailer having an online store and offline stores. So, customers have two options in their shopping channel, online or offline. In an online shopping, there is a shipping fee however if a basket size is larger than $50 there is no shipping fee.
+
+I found evidence that customers are trying to add extra items to make their basket size larger than $50 when their basket total is a little below $50, because their shipping fee can be waived by doing that.
+
+
+- In this situation, I am trying to identify and characterize items that were purchased only because of the shipping threshold by using a machine learning algorithm.
+
+
+If there is no shipping threshold, $50, the customers would not purchase the items, but they purchased some items to make their basket size larger than $50. I have not observed those kinds of items (added items because of the shipping threshold).
+
+
+- Is there any machine learning algorithm that I can identify those kinds of items?
+
+
+I think I need to use some kind of unsupervised machine learning algorithm.
+
+Another challenging part is that each customer has different characteristics, so I probably need to take that into account as well. How can I detect those kinds of items?
+"
+"['logic', 'knowledge-representation']"," Title: How to mathematically/logically represent the sense of sentences like ""The cat drinks milk""?Body: I am absolutely new in the AI area.
+I would like to know how to mathematically/logically represent the sense of sentences like:
+
+- The cat drinks milk.
+- Sun is yellow.
+- I was at work yesterday.
+
+So that they could be converted to a computer-understandable form and analysed algorithmically.
+Any clue?
+"
+"['reinforcement-learning', 'research', 'deepmind']"," Title: Is it common in RL research with Atari/ALE to automatically press FIRE to start games?Body: In some Atari games in the Arcade Learning Environment (ALE), it is necessary to press FIRE
once to start a game. Because it may be difficult for a Reinforcement Learning (RL) agent to learn this, they may often waste a lot of time executing actions that do nothing. Therefore, I get the impression that some people hardcode their agent to press that FIRE
button once when necessary.
+
+For example, in OpenAI's baselines repository, this is implemented using the FireResetEnv wrapper. Further down, in their wrap_deepmind (which applies that wrapper among others), it is implied that DeepMind tends to use this functionality in all of their publications. I have not been able to find a reference for this claim though.
+
+
+
+My question is: is it common in published research (by DeepMind or others) to use the functionality described above? I'd say that, if this is the case, it should be explicitly mentioned in these papers (because it's important to know if hardcoded domain knowledge was added to a learning agent), but I have been unable to explicitly find this after looking through a wide variety of papers. So, based on this, I'd be inclined to believe the answer is ""no"". The main thing that confuses me then is the implication (without reference) in the OpenAI baselines repository that the answer would be ""yes"".
+"
+"['deep-learning', 'optimization', 'hyperparameter-optimization', 'hyper-parameters', 'hidden-layers']"," Title: Why should the number of neurons in a hidden layer be a power of 2?Body: I have read somewhere on the web (I lost the reference) that the number of units (or neurons) in a hidden layer should be a power of 2 because it helps the learning algorithm to converge faster.
+Is this a fact? If it is, why is this true? Does it have something to do with how the memory is laid down?
+"
+"['machine-learning', 'deep-learning']"," Title: Does augmenting data changes the distribution of augmented data?Body: When we augment data for training are we also changing the distribution of data and if its a different distribution why do we use it to train a model for original distribution ?
+"
+"['neural-networks', 'machine-learning', 'natural-language-processing', 'word-embedding', 'embeddings']"," Title: What is the intuition behind how word embeddings bring information to a neural network?Body: How is it that a word embedding layer (say word2vec) brings more insights to the neural network compared to a simple one-hot encoded layer?
+I understand how the word embedding carries some semantic meaning, but it seems that this information would get "squashed" by the activation function, leaving only a scalar value and as many different vectors could yield the same result, I would guess that the information is more or less lost.
+Could anyone bring me insights as to why a neural network may utilize the information contained in a word embedding?
+"
+"['agi', 'superintelligence', 'state-of-the-art']"," Title: What is the current state of AGI development?Body: Could you please provide some insight into the current stage of developments in AGI area? Are there any projects that had breakthroughs recently? Maybe some news source to follow on this topic?
+"
+"['neural-networks', 'datasets', 'applications']"," Title: Which problems have neural networks yet to solve?Body: (Gross Oversimplification) Neural Networks model systems, black boxes with a set of inputs, and a set of outputs. To train a network for modeling this system, obtain hundreds (or millions) of possible inputs/output pairs. This is called the data set, and the network and its optimization algorithm are set to find a set of network parameters that best match the I/O of the network with the I/O of the system.
+Are there any systems, for which we have functional data sets, that have yet to be meaningfully modeled with Neural Networks in any form (recurrent, deep, convolutional, etc)?
+"
+"['machine-learning', 'convolutional-neural-networks', 'algorithm']"," Title: A smart way to adjust XML files according to handwritten onesBody: I have a game application with characters that have to cross mazes. The game can generate thousands of different mazes and the characters can move according to users choice and cross the maze manually. We needed to add the possibility to show a correct way out of each maze. Therefore we added the possiblity to move the characters according to an xml file.
+
+This XML file is very complex, usually around thirty to fifty thousand rows. Let's say it is in the following structure (but much more complex):
+
+ <maze-solution>
+ <part id=""1"">
+ <sector number=""1"">
+ <action>
+            <equipment>heavy</equipment>
+            <movement>
+                <start-position>1250</start-position>
+ <angle>23.43</angle>
+ <duration>0.44</duration>
+ </movement>
+ <action-type>run</action-type>
+ <character>1</character>
+ <protection>none</protection>
+ </action>
+ <action>
+            <equipment>light</equipment>
+            <movement>
+                <start-position>4223</start-position>
+ <angle>233.43</angle>
+ <duration>0.32</duration>
+ </movement>
+ <action-type>walk</action-type>
+ <character>1</character>
+ <protection>none</protection>
+ </action>
+ <action>
+            <equipment>heavy</equipment>
+            <movement>
+                <start-position>1231</start-position>
+ <angle>84.134</angle>
+ <duration>0.454</duration>
+ </movement>
+ <action-type>run</action-type>
+ <character>2</character>
+ <protection>none</protection>
+ </action>
+ <action>
+            <equipment>heavy</equipment>
+            <movement>
+                <start-position>932</start-position>
+ <angle>34.43</angle>
+ <duration>0.50</duration>
+ </movement>
+ <action-type>duck</action-type>
+ <character>1</character>
+ <protection>none</protection>
+ </action>
+ </sector>
+ <sector number=""2"">
+ <action>
+            <equipment>heavy</equipment>
+            <movement>
+                <start-position>1250</start-position>
+ <angle>23.43</angle>
+ <duration>0.44</duration>
+ </movement>
+ <action-type>run</action-type>
+ <character>1</character>
+ <protection>none</protection>
+ </action>
+ <action>
+            <equipment>light</equipment>
+            <movement>
+                <start-position>4223</start-position>
+ <angle>233.43</angle>
+ <duration>0.44</duration>
+ </movement>
+ <action-type>walk</action-type>
+ <character>1</character>
+ <protection>none</protection>
+ </action>
+ <action>
+            <equipment>heavy</equipment>
+            <movement>
+                <start-position>1231</start-position>
+ <angle>84.134</angle>
+ <duration>0.454</duration>
+ </movement>
+ <action-type>run</action-type>
+ <character>2</character>
+ <protection>none</protection>
+ </action>
+ <action>
+            <equipment>heavy</equipment>
+            <movement>
+                <start-position>932</start-position>
+ <angle>23.43</angle>
+ <duration>0.44</duration>
+ </movement>
+ <action-type>duck</action-type>
+ <character>1</character>
+ <protection>none</protection>
+ </action>
+ </sector>
+ <sector number=""3"">
+ </maze-solution>
+
+
+At the moment, we have the ability to analayze each maze using a CNN algorithm for image classification and generate an xml that represents a way out of the maze - meaning that if the characters will be moved according to that file, they will cross the maze. That algorithm has been tested and can not be changed by any means.
+
+The problem is that most of the times the generated file is not the best one possible (and quite often it is very noticeable). There are different, faster, better ways to cross the maze.
+
+We also have thousands (and we can get as many as needed) files that were created manually for saved mazes and therefore they are representing an elegant and a fast way out of the maze. The ideal goal is that someday, our program will learn how to generate such a file without people creating them manually.
+
+To conclude, we have plenty of XML files generated by a program compared to the hard-coded XML files. There are thousands of pairs - The file the program generated, and the ""ideal"" file version that a person created. (and we can get infinite number of such pairs)
+Is there a way, using those thousands of pairs, to make a second step algorithm that will ""learn"" what adjustments should be made in the generated XML files to make them more like the hard-coded ones?
+
+I'm not looking for a specific solution here but for a general idea that will get me going. I hope i made myself clear but if I missed some info let me know and I will add it.
+"
+"['neural-networks', 'deep-learning', 'natural-language-processing', 'voice-recognition', 'text-summarization']"," Title: What is easier or more efficient to summarize voice or text? [DP/RN]Body: If possible consider the relationship between implementation difficulty and accuracy in voice examples or simply chat conversations.
+
+And currently, what are the main directions in algorithms, like Deep Learning or others, for solving this?
+"
+"['neural-networks', 'decision-theory']"," Title: Condition Action Statement - Feed Forward Neural NetworkBody: I am trying to produce Decision Tree from Feed Forward Neural Network .
+
+The input to the feed-forward neural network is a condition-action statement,
+for example: if airthrusthold > 90, power up the engine, else rotate the shaft 5 degrees.
+
+The above statement is the input to the FFNN. How do I feed in the statement? Should I convert it into word2vec vectors, or is there another format I should use?
+
+I also need to produce a decision tree from the output of the neural network.
+
+Can we do this using reinforcement learning using Markov Decision Process?
+
+Thanks!
+"
+"['deep-learning', 'image-recognition', 'computer-vision']"," Title: How do we perform object classification given images from a camera that captures images at 15 FPS?Body: I've been working with vanilla feedforward neural networks and have been researching the convolutional neural network literature.
+If a camera is capturing a video at a rate of 15 frames per second, is the classification model being trained continuously/iteratively in order to maintain non-time delayed classifications?
+"
+"['research', 'agi', 'superintelligence']"," Title: How can people contribute to AGI research?Body: Is there a way for people outside of the core research community of AGI to contribute to the cause?
+
+There are a lot of people interested in supporting the field, but there is no clear way to do that. Is there something like BOINC for AGI research, or open projects where outside experts can provide some input? Maybe a Kickstarter for AGI projects?
+"
+"['machine-learning', 'game-ai']"," Title: Can a game AI learn the concept of acceleration?Body: If the game had a variable speed and was essential in evolution/gaining score(IDK AI terminologies). Would the AI be able to figure out when to slow down and speed up?
+
+If it is able to solve the problem or complete the level, will it have learned an equation relating to acceleration, or perhaps a threshold for when to speed up and slow down? What if the game environment were dynamic?
+
+Can you even teach math to an AI?
+
+PS: I'm not sure if I should ask separate question?
+"
+"['papers', 'embeddings', 'facenet']"," Title: How is the constraint $\|f(x)\|_{2}=1$ enforced for the embedding $f(x)$ in the FaceNet paper?Body: In the FaceNet paper, under section 3.2, the authors mention that:
+
+The embedding is represented by $f(x) \in \mathbb{R}^{d}$. It embeds an image $x$ into a $d$-dimensional Euclidean space. Additionally, we constrain this embedding to live on the $d$ dimensional hypersphere, i.e. $\|f(x)\|_{2}=1$.
+
+I don't quite understand how the above equation holds. As far as I understand, the $L_2$ norm is the same as the Euclidean distance, but I don't quite understand how this imposes the $\|f(x)\|_{2}=1$ criterion.
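+
+(In case it helps to make the constraint concrete: one common way such a constraint is enforced in practice is an explicit L2-normalisation step applied to the raw embedding, i.e. dividing it by its own norm. A minimal NumPy sketch of that idea follows; whether this matches the paper's exact implementation is an assumption on my part.)
+
+import numpy as np
+
+raw = np.random.randn(128)          # unconstrained d-dimensional output
+f = raw / np.linalg.norm(raw)       # rescale so that ||f||_2 == 1
+print(np.linalg.norm(f))            # 1.0 up to floating point error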
+"
+"['neural-networks', 'image-recognition']"," Title: How to apply EOT algorithm to 3d modelBody: Many of you have probably seen the turtle from LabSix that gets mistaken for a rifle in Google's InceptionV3 image classifier. I read the paper and I understand how they apply EOT to 2d images and on the individual pixel values, but I am still unsure how they implement the EOT algorithm to the 3d model.
+
+
+- Are they using EOT to perturb the individual coordinates in the 3d model's mesh? Or are they perturbing images of a turtle and then printing the turtle from the images?
+- How do they check the InceptionV3 output iteratively without having to 3d print the object each time and check the probabilities given?
+
+
+Any examples that someone can point to would also be very helpful.
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks', 'keras', 'data-science']"," Title: Deep NN architecture for predicting a matrix from two matricesBody: Recently my friend asked me a question: having two input matrices X and Y (each size NxD) where D >> N, and ground truth matrix Z of size DxD, what deep architecture shall I use to learn a deep model of this representation?
+
+
+- N ~ is in the order of tens
+- D ~ is in the order of tens of thousands
+
+
+The problem is located in the domain of bioinformatics, however, this is more of an architectural problem. All matrices contain floats.
+
+I tried first a simple model based on a CNN model in keras. I've stacked input X and Y into an Input Matrix of size (number of training examples, N, D, 2). Outputs are of size (number of training examples, D, D, 1)
+
+
+- Conv2D layer
+
+
+- Conv2D layer
+
+
+- Dropout
+- Flattening layer
+- Dense (fully connected) of size D
+
+
+- Dense (fully connected) of size D**2 (D squared)
+
+
+- Reshaping output into (D,D,1) (for single training set)
+
+
+However, this model is untrainable. It has over a billion parameters for the emulated data.
+
+(Exactly 1,321,005,944 for my randomly emulated dataset)
+
+Do you find this problem solvable? What other architectures I might try to solve this problem?
+
+Best.
+"
+['reinforcement-learning']," Title: What is Imagination Learning and Imagination machines?Body: I recently came across a Quora post, where I saw the term ""Imagination Learning"". It seems to be based on something called ""Imagination Machines"" (the link is based on a guy's work profile as of now; subject to change).
+
+The only thing that I could find on Internet about it is this paper: Imagination-Augmented Agents for Deep Reinforcement Learning. (But I'm not sure if it's related to that concept.)
+
+Any ideas on this would be appreciated.
+"
+"['machine-learning', 'datasets', 'anomaly-detection']"," Title: Find anomalies from records of categorical dataBody: I have a data-set with $m$ observations and $p$ categorical variables (nominal), each variable $X_1, X_2,\dots, X_p$ has several different possible values.
+
+Ultimately, I am looking for a way to find anomalies i.e. to identify rows for which the combination of values seems incorrect with respect to the data I saw so far.
+
+So far, I was thinking about building a model to predict the value for each column and then build some metric to evaluate how different the actual row is from the predicted row.
+
+I would greatly appreciate any help!
+"
+"['deep-learning', 'reinforcement-learning', 'deep-neural-networks', 'q-learning']"," Title: Should the actor or actor-target model be used to make predictions after training is complete (DDPG)?Body: The situation
+
+I am referring to the paper T. P. Lillicrap et al, ""Continuous control with deep reinforcement learning"" where they discuss deep learning in the context of continuous action spaces (""Deep Deterministic Policy Gradient"").
+
+Based on the DPG approach (""Deterministic Policy Gradient"", see D. Silver et al, ""Deterministic Policy Gradient Algorithms""), which employs two neural networks to approximate the actor function mu(s) and the critic function Q(s,a), they use a similar structure.
+However one characteristic they found is that in order to make the learning converge it is necessary to have two additional ""target"" networks mu'(s) and Q'(s,a) which are used to calculate the target (""true"") value of the reward:
+
+y_t = r(s_t, a) + gamma * Q'(s_t1, mu'(s_t1))
+
+
+Then after each training step a ""soft"" update of the target weights w_mu', w_Q' with the actual weights w_mu, w_Q is performed:
+
+w' = (1 - tau)*w' + tau*w
+
+
+where tau << 1. According to the paper
+
+
+ This means that the target values are constrained to change slowly, greatly improving the stability of learning.
+
+
+So the target networks mu' and Q' are used to predict the ""true"" (target) value of the expected reward which the other two networks try to approximate during the learning phase.
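+
+As a concrete illustration of the soft update above, here is a minimal PyTorch-style sketch (my own formulation, with made-up variable names):
+
+import torch
+
+def soft_update(target_net, net, tau=0.001):
+    # w' = (1 - tau) * w' + tau * w, applied parameter-wise
+    with torch.no_grad():
+        for w_target, w in zip(target_net.parameters(), net.parameters()):
+            w_target.mul_(1.0 - tau).add_(tau * w)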
+
+They sketch the training procedure as follows:
+
+
+
+The question
+
+So my question now is: after the training is complete, which of the two networks mu or mu' should be used for making predictions?
+
+As in the training phase, I suppose that mu should be used without the exploration noise, but since it is mu' that is used during the training for predicting the ""true"" (unnoisy) action for the reward computation, I'm apt to use mu'.
+
+Or does this even matter? If the training was to last long enough shouldn't both versions of the actor have converged to the same state?
+"
+"['machine-learning', 'convolutional-neural-networks', 'object-recognition']"," Title: Why does the classifier network in RPN output two scores?Body: The region proposal network (RPN) in Faster-RCNN models contains a classifier and a regressor network. Why does the classifier network output two scores (object and background) for each anchor instead of just a single probability? Aren't the two classes considered exclusive?
+
+Source: Figure 3 of the original Faster-RCNN paper
+"
+['image-recognition']," Title: Create captions based on a series of imagesBody: I'd like to generate subtitles for a silent film. Is there an open source project out there capable of creating captions based on a series of images (such as a scene from a movie)?
+
+EDIT: thanks for the comments below. To clarify, what I'm looking for is an algorithm which can generate a caption for a sequence of images within a movie, describing what happens in the sequence. This is for preliminary research, so accuracy is less important.
+"
+"['neural-networks', 'classification', 'comparison', 'objective-functions', 'regression']"," Title: Why is the hyperbolic tangent with MSE better than the sigmoid with cross-entropy?Body: Usually, in binary classification problems, we use sigmoid as the activation function of the last layer plus the binary cross-entropy as cost function.
+
+However, I have already experienced (more than once) that $\tanh$ as activation function of last layer + MSE as cost function worked slightly better for binary classification problems.
+
+Using a binary image segmentation problem as an example, we have two scenarios:
+
+
+- sigmoid (in the last layer) + cross-entropy: the output of the network will be a probability for each pixel and we want to maximize it according to the correct class.
+- $\tanh$ (in the last layer) + MSE: the output of the network will be a normalized pixel value [-1, 1] and we want to make it as close as possible the original value (normalized too).
+
+
+We all know the problems associated with a sigmoid (vanishing of gradients) and the benefits of the cross-entropy cost function. We also know $\tanh$ is slightly better than sigmoid (zero-centered and little less prone to gradient vanishing), but when we use MSE as the cost function, we are trying to minimize a completely different problem - regression instead of classification.
+
+Why is the hyperbolic tangent ($\tanh$) combined with MSE more appropriate than the sigmoid combined with cross-entropy for binary classification problems? What's the intuition behind it?
+"
+"['ai-basics', 'heuristics', 'path-finding']"," Title: Approach for data transformation neededBody: I am looking for an algorithm to transform an input data to a goal data using a series of operations. The shorter the series the better.
+
+The following is known:
+
+
+- the input data
+- the goal data
+- input and goal data do not stand in any correlation
+- the operations (and their impact on the current data), which can be endlessly combined
+- different input data for the same goal data could have same, similar or totally different operation series
+- for some data states not all operations are possible
+
+
+
+
+I thought of a pathfinding algorithm, since I can calculate the distance between current data and goal data. So each edge would be an operator and each node the current data. But I am unsure about variety and combination amount of possible operations.
+
+What approach could I try?
+"
+"['algorithm', 'genetic-algorithms']"," Title: Which algorithm for scheduling race grid?Body: I want to plot a schedule of races based on rules. Rules like ""each team needs at least 2 races between their next race"" and some teams (e.g. collegiate) need to be clumped near each other.
+
+What would be the best algorithm to approach this? So far, all I've found is genetic algorithm. Are there any other alternatives I could look into?
+"
+"['deep-learning', 'training', 'keras']"," Title: Periodic Pattern in Validation Loss CurveBody: I am currently trying to solve a regression problem using neural networks. I want to detect movement patterns in images over time (video) and output a continuous value.
+During the training process I noticed a strange behaviour for the validation loss curve and I was wondering if anyone has noticed this kind of periodic pattern on some of their own work. What might cause this?
+
+The model looks like the following:
+
+- TimeDistributed(Conv2D(32, (3,3)))
+- TimeDistributed(Conv2D(16, (3,3)))
+- TimeDistributed(Flatten())
+- GRU(64, stateful=True)
+- Dropout(0.5)
+- Dense(64, activation='relu')
+- Dense(1)
+
+
+I trained the model using the mean squared error as the loss function, a batch size of 1 and the AdamOptimizer with an initial learning rate of 10^(-6). Obviously, the loss curve for the training data is not very good, but I am currently just wondering about the pattern of the val_loss. The plots below represent the loss of 65 epochs.
+
+Thanks!
+
+
+
+
+
+Edit:
+The way I try to solve my task relies on a sliding window approach where I try to predict a continuous value for the next second based on the last 20 seconds (400 frames) of the time-series input data. But I don't think this information is needed to solve my initial question since the periodic patterns appear over several epochs (one ""peak"" for about every 15 epochs) which is strange. Although the stateful-version of the GRU is used (btw: using TensorFlow and Keras), the internal state of the GRU is reset after every epoch to maintain a clean start. The stateful keyword is used to indicate a dependency between batches.
+"
+"['machine-learning', 'convolutional-neural-networks', 'computer-vision', 'visual-place-recognition']"," Title: Which neural networks are suitable for visual place recognition?Body: I am doing a project on visual place recognition in changing environments. The CNN used here is mostly AlexNet, and a feature vector is constructed from layer 3.
+Does anyone know of similar work using other CNNs, for example, VGGnet (which I am trying to use) and the corresponding layers?
+I have been trying out the different layers of VGGnet-16. I am trying to get the nearest correspondence to the query image by using the cosine difference between the query image and database images. So far no good results.
+"
+"['reinforcement-learning', 'backpropagation', 'policy-gradients']"," Title: In the case of invalid actions, which output probability matrix should we use in back-propagation?Body: As discussed in this thread, you can handle invalid moves in reinforcement learning by re-setting the probabilities of all illegal moves to zero and renormalising the output vector.
+In back-propagation, which probability matrix should we use? The raw output probabilities, or the post-processed vector?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'classification', 'facial-recognition']"," Title: Facial recognition and classifying unknowns with neural networksBody: As far as I understand, neural networks aren't good at classifying 'unknowns', i.e. objects that do not belong to a learned class. But how do face detection/recognition approaches usually determine that no face is detected/recognised in a region? Is the predicted probability somehow thresholded?
+
+I'm asking because my application will involve identifying unknown objects. In fact, most of the input objects are unknown and only a fraction is known.
+"
+"['convolutional-neural-networks', 'long-short-term-memory']"," Title: What is the difference between ConvLSTM and CNN LSTM?Body: What will be the difference when used for video classification? Will they yield different results or are they the same fundamentally?
+"
+"['reinforcement-learning', 'multi-armed-bandits', 'contextual-bandits']"," Title: Name of a multiarmed bandit with only some levers availableBody: In order to model a card game, as an exercise, I was thinking of an elementary setting as a multiarmed bandit, each lever being the distribution of expected rewards of a specific card.
+
+But, of course, the player only has some cards in the hand each round, or, equivalently, for a given round, it has available a number $n$ of arms randomly selected from the total number $N$ of levers.
+
+Is this just a ""contextual bandit"" or has it some specific, narrower, name that I could use to look up in the literature?
+"
+"['neural-networks', 'deep-learning', 'terminology', 'activation-functions']"," Title: What is the purpose of an activation function in neural networks?Body: It is said that activation functions in neural networks help introduce non-linearity.
+
+
+- What does this mean?
+- What does non-linearity mean in this context?
+- How does the introduction of this non-linearity help?
+- Are there any other purposes of activation functions?
+
+"
+"['neural-networks', 'genetic-algorithms', 'evolutionary-algorithms', 'neat', 'neuroevolution']"," Title: Does NEAT require only connection genes to be marked with a global innovation number?Body: Does NEAT require only connection genes to be marked with a global innovation number?
+
+From the NEAT paper
+
+
+ Whenever a new gene appears (through structural mutation), a global innovation number is incremented and assigned to that gene.
+
+
+It seems that any gene (both node genes and connection genes) requires an innovation number. However, I was wondering what was the node gene innovation number for. Is it to provide the same node ID across all elements of the population? Isn't the connection gene innovation number sufficient?
+
+Besides, the NEAT paper includes the following image which doesn't show any innovation number on node genes.
+
+
+"
+"['neural-networks', 'genetic-algorithms', 'neat', 'neuroevolution']"," Title: Can mutation enable a disabled connection?Body: In the add node mutation, the connection between two chosen nodes (e.g. A and B) is first disabled and then a new node is created between A and B with their respective two connections.
+
+I guess that the former A-B connection can be re-enabled via crossover (is it right?).
+
+Can the former A-B connection also be re-enabled via mutation (e.g. ""add connection"")?
+"
+"['reinforcement-learning', 'deep-rl', 'policy-gradients', 'actor-critic-methods', 'proximal-policy-optimization']"," Title: Understanding multi-iteration updates of the model in the Proximal Policy Optimization algorithmBody: I have a general question about the updating of the network/model in the PPO algorithm.
+If I understand it correctly, there are multiple iterations of weight updates done on the model with data that is created from the environment (with the model before the update). Now, I think that the updates of the model weights are not correct anymore after the first iteration/optimization step, because the model weights changed and therefore the training data is outdated (since the model would now give different actions in the environment and therefore different rewards).
+Basically, in the pseudo-code of the algorithm, I don't understand the line "Optimize surrogate L ... with K epochs...". If the update is done for multiple epochs, the data that is learned is outdated already after the first iteration of optimization, since the model's weights changed. In other algorithms, like A2C, there is only one optimization step done, instead of $K$ epochs.
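+
+For reference, here is a minimal sketch of the clipped surrogate that line refers to (my own paraphrase of the standard PPO-clip loss, not code from a particular implementation). The point to note is that the action log-probabilities of the data-collecting (old) policy are stored once and held fixed during all $K$ epochs, while only the current policy's log-probabilities change:
+
+import torch
+
+def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
+    # importance ratio between the updated policy and the fixed data-collecting policy
+    ratio = torch.exp(logp_new - logp_old)
+    unclipped = ratio * advantages
+    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
+    # maximise the surrogate, i.e. minimise its negative
+    return -torch.min(unclipped, clipped).mean()
+
+The ratio term is what accounts for the data being slightly off-policy after the first epoch, and the clipping keeps the update from drifting too far from the data-collecting policy.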
+
+Is this some form of approximation or augmentation on the data by using the data that was created by an older model for multiple iterations or am I missing something here? If yes, where was this idea first introduced or better described? And where is an (empirical) proof that this still leads to a correct weight updating?
+"
+"['philosophy', 'models', 'agi']"," Title: Are mathematical models sufficient to create general artificial intelligence?Body: Are mathematical models sufficient to create general artificial intelligence?
+
+I am not sure if it is possible to represent e.g. emotions or intuition using mathematical model. Do we need a new approach in order to solve this problem?
+"
+"['natural-language-processing', 'reference-request', 'chat-bots']"," Title: Two chatbots - One teaches anotherBody: I am seeking the information for this kind of chatbot architecture : There are two chatbots. One plays the role of teacher, and another is a student who is learning. The goal is to test the student's quality, and to improve the student's ability.
+I didn't find much reference. There are :
+Bottester: Testing Conversational Systems with Simulated Users
+And the ParlAI, a python-based platform for enabling dialog AI research has the notion of "Teacher agent", which seems to be what I am looking for.
+Of course, we also have deep reinforcement learning which might be related.
+I prefer to have some classical references for this approach to chatbots.
+Currently, reinforcement learning is not in my consideration.
+Constructing two chatbots talking to each other, like what Facebook did, is not what I want. Because in this case, both of them are student agents.
+"
+"['agi', 'singularity', 'superintelligence']"," Title: Strong AI vs Singularity - which should happen first?Body: What is supposed to happen first: Strong AI or Technological Singularity?
+
+Meaning, which option is more likely: that Strong AI will bring us to the state of technological singularity, or that achieving technological singularity will allow us to construct Strong AI?
+"
+"['evolutionary-algorithms', 'neat', 'fitness-functions', 'fitness-design']"," Title: How do I design a fitness function that weighs the importance of eating food?Body: Summary:
+I am teaching bots to pick food on a playing field. Some food is poisonous and some is good.
+Food Details:
+
+- Poisonous food subtracts score points and good food adds.
+- Food points vary based on its size.
+- There is about 9:1 ratio of poisonous food to good food, so a lot more chances to end up in negative numbers.
+- Food grows in points overtime.
+- Food spoils after some predetermined size becoming poisonous.
+
+Fitness Function:
+The fitness function I use is simply counting points by the end of iterations. Bots might choose to eat it or skip it.
+The Problem:
+The problem I am having is that, in the first generation, most bots eat a lot of bad crap and the curious ones end up in negative numbers. So, mostly the ones that make it are the ones that are lazy and didn't eat or didn't head towards the food and most of the time the fittest for first few generations comes out with 0 points and 0 eats of any kind of food.
+When trained for a long time, they just end up waiting for the food instead of eating multiple times. Often, while they wait, food goes bad and they just end up going to another food. This way, at the end of the iteration, I have some winners, but they are nowhere near the potential they could have been at.
+Question:
+I somehow need to weigh the importance of eating food. I want them to eventually learn to eat.
+So I thought of this:
+brain.score += foodValue * numTimesTheyAteSoFar
+
+But this blows up the score too much and now the food quality is not respected and they just gulp on anything slightly above 0.
+"
+"['deep-learning', 'convolutional-neural-networks', 'papers', 'object-recognition', 'yolo']"," Title: In YOLO, when is $\mathbb{1}_{i j}^{\mathrm{obj}} = 1$, and what are the ground-truth labels for $x_i$ and $y_i$?Body: I'm trying to implement a custom version of the YOLO neural network. Originally, it was described in the paper You Only Look Once: Unified, Real-Time Object Detection (2016). I have some problems understanding the loss function they used.
+Basic information:
+
+- An input image is divided into an $S \times S$ grid (that gives a total of $S^2$ cells) and each cell predicts $B$ bounding boxes and $c$ conditional class probabilities. Each bounding box predicts $5$ values: $x,y,w,h,C$ (center of bounding box, width and height, and confidence score). This makes the output of YOLO an $S \times S \times (5B + c)$ tensor (see the small encoding sketch after this list).
+
+- The $(x,y)$ coordinates are calculated relative to the bounds of the cell and $(w,h)$ is relative to the whole image.
+
+- I understand that the first term penalizes the wrong prediction of the center of a bounding box; the 2nd term penalizes wrong width and height prediction; the 3rd term the wrong confidence prediction; the 4th is responsible for pushing confidence to zero when there is no object in a cell; the last term penalizes wrong class prediction.
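+
+To make the encoding above concrete, here is a minimal sketch of how the cell assignment is commonly implemented (my own illustration of the usual convention, not code from the paper):
+
+def encode_box(x_center, y_center, S):
+    # x_center, y_center are normalised to [0, 1] over the whole image
+    col = min(int(x_center * S), S - 1)   # grid cell whose bounds contain the center
+    row = min(int(y_center * S), S - 1)
+    x_rel = x_center * S - col            # center coordinates relative to that cell
+    y_rel = y_center * S - row
+    return row, col, x_rel, y_rel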
+
+
+My problem:
+I don't understand when $\mathbb{1}^\text{obj}_{ij}$ should be $1$ or $0$. In the paper, they write (section 2.2. Training):
+
+$\mathbb{1}_{i j}^{\mathrm{obj}}$ denotes that the $j$th bounding box predictor in cell $i$ is "responsible" for that prediction.
+
+and they also write
+
+Note that the loss function only penalizes classification error if an object is present in that grid cell (hence the conditional class probability discussed earlier). It also only penalizes bounding box coordinate error if that predictor is "responsible" for the ground truth box
+
+
+- So, is it right that, for every object in the image, there should be exactly one pair of $ij$ such that $\mathbb{1}_{i j}^{\mathrm{obj}} = 1$?
+
+- If this is correct, this means that the center of the ground truth bounding box should fall into $i$th cell, right?
+
+- If this is not the case, what are other possibilities when $\mathbb{1}_{i j}^{\mathrm{obj}} = 1$, and what ground truth labels $x_i$ and $y_i$ should be in these cases?
+
+
+
+- Also, I assume that ground truth $p_i(c)$ should be $1$ if there is an object of class $c$ in the cell $i$, but what ground truth $p_i(c)$ should be equal to in case there are several objects of different classes in the cell?
+
+
+
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'implementation', 'action-spaces']"," Title: How to deal with different actions for different states of the environment?Body: I'm new to this AI/Machine Learning and was playing around with OpenAI Gym a bit. When looking through the environments, I came across the Blackjack-v0
environment, which is a basic implementation of the game where the state is the hand count of the player and the dealer and whether the player has a usable ace. The actions are only hit or stand, and the possible rewards are 1 if the player wins, -1 if the player loses, and 0 when it is a draw.
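+For reference, this is roughly how I poke at that environment (a minimal sketch; the always-stand policy is only for illustration, and the 0 = stand / 1 = hit encoding is what the gym version I used exposes):
+
+import gym
+
+env = gym.make('Blackjack-v0')
+obs = env.reset()                  # (player_sum, dealer_card, usable_ace)
+done = False
+while not done:
+    action = 0                     # 0 = stand, 1 = hit
+    obs, reward, done, info = env.step(action)
+print(obs, reward)                 # reward is 1, -1 or 0 at the end of the hand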
+So, that got me thinking about what a more realistic environment/model for this game would look like, one that takes into account the current balance and other factors and has multiple actions like betting 1-10€ and hit or stand.
+This brings me to my actual question:
+
+- As far as I understand, in neural networks (which I admittedly don't understand very well yet) the input will be the state and the output the possible actions and how good the network thinks they are/will be. But now there are two different action spaces, which apply to different states of the game (betting or playing), so some of the actions are useless. What would be the right way to approach this scenario?
+
+I'm guessing one answer would be to give some kind of negative reward if the network guesses a useless action, but, in this case, I think the reward should be the actual stake (negative reward) and the actual win if any. Therefore, this would cause some bias in how the game proceeds as it should start with some amount of balance and end if the balance is 0 or after a specified amount of rounds.
+Limiting timesteps wouldn't be an option either, I guess, because it should be limited to rounds, so, for example, it won't end after a betting step.
+Therefore, for a useless step, the reward would be 0 and the state would stay the same, but, for the neural network, it doesn't matter how many useless steps it takes because it'll make no difference to the actual outcome.
+Corollary question:
+
+- Should this be split up into two neural networks? One for betting and one for playing?
+
+"
+['neural-networks']," Title: Is there a naming convention for network weights for multilayer networks?Body: In the diagram below, although the flow of information happens from the input to the output layer, the labeling of weights appears reversed. E.g. the arrow flowing from X3 to the fourth hidden-layer node has its weight labeled W(1,0) and W(4,3) instead of W(0,1) and W(3,4), which would indicate data flowing from the 3rd node of the 0'th layer to the 4th node of the 1st layer.
+
+
+
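+For what it's worth, one way I can make some sense of a to-from ordering is through the matrix form of a layer; this is only my own sketch, not an official convention:
+
+import numpy as np
+
+x = np.array([0.2, -1.0, 0.5])     # 3 inputs (layer 0)
+W = np.random.randn(4, 3)          # W[to][from]: 4 hidden nodes, 3 inputs
+h = W @ x                          # h[to] = sum over from of W[to][from] * x[from]
+print(h.shape)                     # (4,)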
+One of my neural network teachers did not emphasize this convention at all. Another teacher made it a point to emphasize it.
+
+Is there a reason for such an unintuitive convention, and is there really a convention?
+"
+"['neural-networks', 'natural-language-processing', 'tensorflow', 'recurrent-neural-networks', 'long-short-term-memory']"," Title: Seq2Seq dialogs predicts only most common words like `you` after couple of epochesBody: I'm training Seq2Seq model on OpenSubtitles dialogs - Cornell-Movie-Dialogs-Corpus.
+My work is based on the following papers (but I have not implemented attention yet):
+
+The loss I get is quite high and gets stuck around ~6.4 after 3 epochs. The model predicts the most common words, sometimes with other insignificant words (but 99.99% of it is just 'you'):
+
+- I've experimented with 128-2048 hidden units and with 1, 2 or 3 LSTM layers per encoder and decoder. The outcomes are more or less the same.
+
+
+SEQ1: yeah man it means love respect community and the dollars too the package the unk end
+SEQ2: but how did you get unk 82 end
+PREDICTION: promoting 16th dashboard be of the the the you you you you you you you you you you you you you you you you you you you you you you you you
+
+I'm using greedy prediction here, meaning that after I receive the logits I take argmax(..) over all their values for the first 3 mini-batch elements (here I present only the first element). For convenience, SEQ1 and SEQ2 are also printed, to show the actual dialog which was presented to the model.
+The pseudo-code of my architecture looks like this (I'm using Tensorflow 1.5):
+seq1 = tf.placeholder(...)
+seq2 = tf.placeholder(...)
+
+embeddings = tf.Variable(tf.random_uniform([vocab_size, 100],-1,1))
+
+seq1_emb = tf.nn.embedding_lookup(embeddings, seq1)
+seq2_emb = tf.nn.embedding_lookup(embeddings, seq2)
+
+encoder_out, state1 = tf.nn.static_rnn(BasicLSTMCell(), seq1_emb)
+decoder_out, state2 = tf.nn.static_rnn(BasicLSTMCell(), seq2_emb,
+                                        initial_state=state1)
+logit = Dense(decoder_out, use_bias=False)
+
+crossent = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logit,
+ labels=target)
+crossent = mask_padded_zeros(crossent)
+loss = tf.reduce_sum(crossent) / number_of_words_in_batch
+
+train = tf.train.AdamOptimizer(learning_rate=0.00002).minimize(loss)
+
+I also wonder whether I pass state1 to the decoder correctly; in general it looks like this:
+# reshape in pseudocode: state1 = state[1:]
+new_state1 = []
+for lstm in state1:
+ new_lstm = []
+ for gate in lstm:
+ new_lstm.append(gate[1:])
+ new_state1.append(tuple(new_lstm))
+state1 = tuple(new_state1)
+
+
+- Should I use some projection layer between the states of the encoder and the decoder?
+
+So if seq1 has 32 words, seq2 has 31 (since we will not predict anything after the last word, which is the tag <END>).
+"
+"['machine-learning', 'deep-learning', 'reinforcement-learning']"," Title: ML model that is most suited to analyse Google Analytics dataBody: Google Analytics allows me to collect data about every web-session. For simplicity, let's assume for each user, we collect the number of pages and time spent on site for each session:
+
+user_id visit_id page_views time_spent result
+1 1 10 100 0
+1 2 31 510 0
+1 3 1 10 1
+
+
+How would you model this data? What I would like the ML algorithm to do:
+
+
+- Extract as much information as possible
+- Have a flexible number of inputs (e.g. the number of sessions can go to infinity)
+
+
+What I can think of:
+
+
+- Aggregate the data per user, e.g. average page_views or total page_views, and feed it into a general algorithm, e.g. a random forest (but I lose information with aggregation); a minimal aggregation sketch follows this list
+- Use LSTM and feed at most last 3 visits (will also lose information, but would this perform better than aggregation?)
+
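+A minimal sketch of what I mean by the aggregation option above (column names follow the toy table; the choice of aggregates is arbitrary):
+
+import pandas as pd
+
+df = pd.DataFrame({
+    'user_id':    [1, 1, 1],
+    'visit_id':   [1, 2, 3],
+    'page_views': [10, 31, 1],
+    'time_spent': [100, 510, 10],
+    'result':     [0, 0, 1],
+})
+
+# one row per user: mean page views, total time on site, number of visits, conversion label
+agg = df.groupby('user_id').agg({'page_views': 'mean',
+                                 'time_spent': 'sum',
+                                 'visit_id': 'count',
+                                 'result': 'max'})
+print(agg)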
+
+Goal:
+To build a predictive model to analyse all user sessions and make a prediction whether the person will convert or not.
+"
+"['neural-networks', 'machine-learning', 'reference-request', 'function-approximation']"," Title: Which functions can't neural networks learn efficiently?Body: There are a lot of papers that show that neural networks can approximate a wide variety of functions. However, I can't find papers that show the limitations of NNs.
+
+What are the limitations of neural networks? Which functions can't neural networks learn efficiently (or using gradient-descent)?
+
+I am also looking for links to papers that describe these limitations.
+"
+"['neural-networks', 'convolutional-neural-networks', 'comparison', 'terminology', 'definitions']"," Title: What is the difference between a convolutional neural network and a regular neural network?Body: I've seen these terms thrown around this site a lot, specifically in the tags convolutional-neural-networks and neural-networks.
+
+I know that a neural network is a system based loosely on the human brain. But what's the difference between a convolutional neural network and a regular neural network? Is one just a lot more complicated and, ahem, convoluted than the other?
+"
+"['neural-networks', 'evolutionary-algorithms', 'neat']"," Title: How are connection weights ""perturbed""?Body: I am reading through the NEAT paper here. On page 14 of the PDF, there is this quote about mutation:
+
+
+ There was an 80% chance of a genome having its connection weights mutated, in which case each weight had a 90% chance of being uniformly perturbed and a 10% chance of being assigned a new random value.
+
+
+What exactly does it mean to perturb weights? What is uniform vs. nonuniform perturbation?
+
+Is there an established method to do this? I am imagining the process as multiplying each connection weight by a random number, but I'm unfamiliar with the term.
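+To make my question concrete, this is how I would naively translate the quoted scheme into code (the perturbation step of +/-0.5 and the reset range are my own guesses; the paper does not state them):
+
+import random
+
+def mutate_weights(weights, p_mutate_genome=0.8, p_perturb=0.9, step=0.5, reset_range=2.0):
+    if random.random() >= p_mutate_genome:
+        return weights                                      # genome left untouched
+    new = []
+    for w in weights:
+        if random.random() < p_perturb:
+            new.append(w + random.uniform(-step, step))     # my guess at "uniform perturbation"
+        else:
+            new.append(random.uniform(-reset_range, reset_range))  # brand-new random value
+    return new
+
+print(mutate_weights([0.1, -1.2, 0.7]))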
+"
+"['training', 'datasets']"," Title: A way to give more weight to particular data?Body: Let us for these purposes say with are working with any feed forward neural network.
+
+Let us also say that we know beforehand that a certain portion of our dataset is significantly more impactful or important to our underlying representation. Is there any way to add that “weighting” to our data?
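+To illustrate what I mean by weighting, here is a minimal sketch with a per-example weight in a plain squared-error loss (the weights and numbers are made up):
+
+import numpy as np
+
+def weighted_mse(y_true, y_pred, sample_weight):
+    sq_err = (y_true - y_pred) ** 2
+    return np.sum(sample_weight * sq_err) / np.sum(sample_weight)
+
+y_true = np.array([1.0, 0.0, 1.0])
+y_pred = np.array([0.8, 0.1, 0.4])
+weights = np.array([1.0, 1.0, 5.0])    # the third example matters 5x more
+print(weighted_mse(y_true, y_pred, weights))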
+"
+"['deep-learning', 'generative-model']"," Title: How to evaluate the goodness of images generated by GANs?Body: As we all know, there has been tons of GAN variants featuring different aspects of the image generation task such as stability, resolution or the ability to manipulate images. However, it is still confusing to me that how do we determine that images generated by one network are more plausible than images generated by another?
+
+PS: could someone with higher reputation create more tags like image generation?
+"
+"['deep-learning', 'multilayer-perceptrons']"," Title: How to deal with padded inputs in a fully connected feed forward network?Body: I have a fully connected network that takes in a variable-length input padded with 0.
+However, the network doesn't seem to be learning and I am guessing that the high number of zeros in the input might have something to do with that.
+Are there solutions for dealing with padded input in fully connected layers or should I consider a different architecture?
+UPDATE (to provide more details):
+The goal of the network is to clean full file paths, e.g.:
+
+/My document/some folder/a file name.txt > a file name
+/Hard drive/book/deeplearning/1.txt > deeplearning
+
+The constraint is that the training data labels have been generated using a regex on the file name itself so it's not very accurate.
+I am hoping that by treating every word equally (without sequential information) the network would be able to generalize as to which type of word is usually kept and which is usually discarded.
+The network takes in a sequence of word embeddings trained on path data and outputs logits that correspond to the probability of each word being kept or not.
+"
+"['neural-networks', 'generative-model']"," Title: Neural network to get input attributes using only the output valueBody: I have an idea about how to use neural networks but I'm not sure if it is possible or not.
+
+In supervised learning we have a set of attributes labeled with an output value. I can use this set to train my network.
+
+Now I have a network trained to get an output value from a random set of attributes, but can I use this trained network to get the input attributes using only the desired output?
+
+I will have N input values and only 1 output value. I've thought that I could reuse the weights of that network in a new one with 1 input value and N output values, but I'm not sure if I can do that.
+"
+"['neural-networks', 'architecture', 'neat', 'neuroevolution', 'forward-pass']"," Title: How to compute the output of a neural network produced by NEAT?Body: I used to work with layered neural networks, where, given certain inputs, the output is produced layer-by-layer.
+With NEAT, a neural network may assume any topology, and they are no longer layered. So, how do we compute the output of such a neural network? I understand time-steps must be taken into account, but how? Should I keep the inputs until all hidden neurons are processed and output is produced? Should I wait for the output to stabilize?
+"
+"['reinforcement-learning', 'monte-carlo-tree-search', 'minimax', 'evaluation-functions']"," Title: Game AI evaluation function and making progress towards winningBody: In two-player games, the exact value of the evaluation function doesn't matter, as long as it's bigger for better positions. However, for learning, it's customary when it does change when the best move gets made. This way, the learning can minimize the difference between the directly computed value $f(0, p)$ of a position $p$ and the value obtained from $n$ step minimax $f(n, p)$.
+What I'm missing here is a way to direct the evaluation function to actually winning. For example, a perfect evaluation function for a won position in chess would always return $+1$ without any hint on how to progress towards a checkmate. In a chess variant without the fifty-move limit, it could play useless turns forever.
+I guess this is a rather theoretical problem, as we won't ever have such a good function, but I wonder if there's a way to avoid it?
+"
+['reinforcement-learning']," Title: Proof of uniqueness of value function for MDPs with undiscounted rewardsBody: How does one prove the uniqueness of the value function obtained from value iteration in the case of bounded and undiscounted rewards? I know that this can be proven for the discounted case pretty easily using the Banach fixed point theorem.
+"
+"['machine-learning', 'unsupervised-learning', 'javascript']"," Title: Is it possible to write an adaptive parser?Body: I am working on a js library which focuses on error handling. A part of the lib is a stack parser which I'd like to work in most of the environments.
+
+The hard part is that there is no standard way to represent the stack, so every environment has its own stack string format. The variable parts are message, type and frames. A frame usually consists of called function, file, line, column.
+
+In some of the environments there are additional variable regions on the string, in others some of the variables are not present. I can run automated tests only in the 5 most common environments, but there are a lot more environments I'd like the parser to work in.
+
+
+- My goal is to write an adaptive parser, which learns the stack string format of the actual environment on the fly, and after that it can parse the stack of any exception of that environment.
+
+
+I already have a plan how to solve this in the traditional way, but I am curious, is there any machine learning tool (probably in the topic of unsupervised learning) I could use to solve this problem?
+
+According to the comments I need to clarify the terms ""stack string format"" and ""stack parser"". I think it is better to write 2 examples from different environments:
+
+A.)
+
+example stack string:
+
+Statement on line 44: Type mismatch (usually a non-object value used where an object is required)
+Backtrace:
+ Line 44 of linked script file://localhost/G:/js/stacktrace.js
+ this.undef();
+ Line 31 of linked script file://localhost/G:/js/stacktrace.js
+ ex = ex || this.createException();
+ Line 18 of linked script file://localhost/G:/js/stacktrace.js
+ var p = new printStackTrace.implementation(), result = p.run(ex);
+ Line 4 of inline#1 script in file://localhost/G:/js/test/functional/testcase1.html
+ printTrace(printStackTrace());
+ Line 7 of inline#1 script in file://localhost/G:/js/test/functional/testcase1.html
+ bar(n - 1);
+ Line 11 of inline#1 script in file://localhost/G:/js/test/functional/testcase1.html
+ bar(2);
+ Line 15 of inline#1 script in file://localhost/G:/js/test/functional/testcase1.html
+ foo();
+
+
+stack string format (template):
+
+Statement on line {frames[0].location.line}: {message}
+Backtrace:
+{foreach frames as frame}
+ Line {frame.location.line} of {frame.unknown[0]} {frame.location.path}
+ {frame.calledFunction}
+{/foreach}
+
+
+extracted information (json):
+
+{
+ message: ""Type mismatch (usually a non-object value used where an object is required)"",
+ frames: [
+ {
+ calledFunction: ""this.undef();"",
+ location: {
+ path: ""file://localhost/G:/js/stacktrace.js"",
+ line: 44
+ },
+ unknown: [""linked script""]
+ },
+ {
+ calledFunction: ""ex = ex || this.createException();"",
+ location: {
+ path: ""file://localhost/G:/js/stacktrace.js"",
+ line: 31
+ },
+ unknown: [""inline#1 script in""]
+ },
+ ...
+ ]
+}
+
+
+B.)
+
+example stack string:
+
+ReferenceError: x is not defined
+ at repl:1:5
+ at REPLServer.self.eval (repl.js:110:21)
+ at repl.js:249:20
+ at REPLServer.self.eval (repl.js:122:7)
+ at Interface.<anonymous> (repl.js:239:12)
+ at Interface.EventEmitter.emit (events.js:95:17)
+ at Interface._onLine (readline.js:202:10)
+ at Interface._line (readline.js:531:8)
+ at Interface._ttyWrite (readline.js:760:14)
+ at ReadStream.onkeypress (readline.js:99:10)
+
+
+stack string format (template):
+
+{type}: {message}
+{foreach frames as frame}
+{if frame.calledFunction is undefined}
+ at {frame.location.path}:{frame.location.line}:{frame.location.column}
+{else}
+ at {frame.calledFunction} ({frame.location.path}:{frame.location.line}:{frame.location.column})
+{/if}
+{/foreach}
+
+
+extracted information (json):
+
+{
+ message: ""x is not defined"",
+ type: ""ReferenceError"",
+ frames: [
+ {
+ location: {
+ path: ""repl"",
+ line: 1,
+ column: 5
+ }
+ },
+ {
+ calledFunction: ""REPLServer.self.eval"",
+ location: {
+ path: ""repl.js"",
+ line: 110,
+ column: 21
+ }
+ },
+ ...
+ ]
+}
+
+
+The parser should process the stack strings and return the extracted information. The stack string format and the variables are environment dependent, the library should figure out on the fly how to parse the stack strings of the actual environment.
+
+I can probe the actual environment by throwing exceptions with well known stacks and check the differences of the stack strings. For example if I add a whitespace indentation to the line that throws the exception, then the column and probably the called function variables will change. If I detect a number change somewhere, then I can be sure that we are talking about the column variable. I can add line breaks too, which will cause line number change and so on...
+
+I can probe for every important variable, but I cannot be sure that the actual string does not contain additional unknown variables, and I cannot be sure that all of the known variables will be added to it. For example, the frame strings of the ""A"" example contain an unknown variable and do not contain the column variable, while the frame strings of the ""B"" example do not always contain the called function variable.
+"
+"['neural-networks', 'backpropagation', 'gradient-descent']"," Title: How is the gradient calculated for the middle layer's weights?Body: I am trying to understand backpropagation. I used a simple neural network with one input $x$, one hidden layer $h$ and one output layer $y$, with weight $w_1$ connecting $x$ to $h$, and $w_2$ connecting $h$ to $y$
+
+$$
+x \rightarrow (w_1) \rightarrow h \rightarrow (w_2) \rightarrow y
+$$
+
+In my understanding, these are the steps happening while we train a neural network:
+
+The feedforward step.
+
+\begin{align}
+h=\sigma\left(x w_{1}+b\right)\\
+y^{\prime}=\sigma\left(h w_{2}+b\right)
+\end{align}
+
+The loss function.
+
+$$
+L=\frac{1}{2} \sum\left(y-y^{\prime}\right)^{2}
+$$
+
+The gradient calculation
+
+$$\frac{\partial L}{\partial w_{2}}=\frac{\partial y^{\prime}}{\partial w_{2}} \frac{\partial L}{\partial y^{\prime}}$$
+
+$$\frac{\partial L}{\partial w_{1}}=?$$
+
+The weight update
+
+$$
+w_{i}^{t+1} \leftarrow w_{i}^{t}-\alpha \frac{\partial L}{\partial w_{i}}
+$$
+
+I understood most parts of backpropagation, but how do we get the gradients for the middle layer weights $dL/dw_1$?
+
+How should we calculate the gradient of a network similar to this?
+
+
+
+Is this the correct equation?
+
+$$\frac{\partial L}{\partial w_{1}}=\frac{\partial h_{1}}{\partial w_{1}} \frac{\partial w_{7}}{\partial h_{1}} \frac{\partial o_{2}}{\partial w_{7}} \frac{\partial L}{\partial o_{2}}+\frac{\partial h_{1}}{\partial w_{1}} \frac{\partial w_{5}}{\partial h_{1}} \frac{\partial o_{1}}{\partial w_{5}} \frac{\partial L}{\partial o_{1}}$$
+"
+"['machine-learning', 'learning-algorithms', 'linear-regression']"," Title: In the multi-linear regression, how is the value of weight $b_2$ calculated?Body: In multivariate linear regression (linear regression with more than one variable) the model is $yi = b_0 + b_1x_{1i} + b_2x_{2i} + ...$ , and so on. But how is the $b_n$ value calculated iteratively? Can it be calculated non-iteratively? What is the intuition behind using that method to calculate $b_2$?
+"
+"['neural-networks', 'training', 'backpropagation', 'neat', 'neuroevolution']"," Title: Does training happen during NEAT?Body: When one uses NEAT to evolve the best fitting network for a task, does training take place in each epoch as well?
+
+If I understand correctly, training is the adjustment of the weights of the neural network via backpropagation and gradient descent. During NEAT, say, a generation runs for 1000 iterations. During that time, is there any training involved, or does each genome randomly poke around and the winner takes it to the next stage?
+
+I've used NEAT, but the fact that neural networks are not trained does not make sense to me. At the same time, I can't find any code in my framework (Neataptic.js) that would train the generation during the epoch.
+"
+['programming-languages']," Title: Why is Common Lisp, Python and Prolog used in artificial intelligence?Body: What are the advantages/strengths and disadvantages/weaknesses of programming languages like Common Lisp, Python and Prolog? Why are these languages used in the domain of artificial intelligence? What types of problems related to AI are solved using these languages?
+
+Please give me links to papers or books regarding the mentioned topic.
+"
+"['neural-networks', 'machine-learning', 'activation-functions', 'relu']"," Title: How exactly can ReLUs approximate non-linear and curved functions?Body: Currently, the most commonly used activation functions are ReLUs. So I answered this question What is the purpose of an activation function in neural networks? and, while writing the answer, it struck me, how exactly can ReLUs approximate a non-linear function?
+By pure mathematical definition, sure, it's a non-linear function due to the sharp bend, but, if we confine ourselves to the positive or the negative portion of the $x$-axis only, then it's linear in those regions. Let's say we take the whole $x$-axis also, then also it's kinda linear (not in a strict mathematical sense), in the sense that it cannot satisfactorily approximate curved functions, like a sine wave ($0 \rightarrow 90$) with a single node hidden layer as is possible by a sigmoid activation function.
+So, what is the intuition behind the fact that ReLUs are used in NNs, giving satisfactory performance? Are non-linear functions, like the sigmoid and the tanh, thrown in the middle of the NN sometimes?
+I am not asking for the purpose of ReLUs, even though they are kind of linear.
+
+As per @Eka's comment, the ReLU derives its capability from discontinuity acting in the deep layers of the NN. Does this mean that ReLUs are good, as long as we use them in deep NNs and not in shallow NNs?
+"
+"['neural-networks', 'getting-started']"," Title: Neural network to control movement and ""home in"" on a targetBody: I'm a software developer who keeps trying (and failing) to get my head around AI and neural networks. There is one area that sparked my interest recently - simulating a mouse ""homing in"" on a piece of cheese by following the smell. Based on the rule that moving closer to the cheese = stronger smell = good, then it feels like it should be quite a simple problem to solve - in theory at least!
+
+My thought process was to start by placing the mouse and cheese in random positions on the screen. I would then move the mouse one step in a random direction and measure its distance to the cheese, and if it's closer than before (stronger smell) then that's good. This is where I come unstuck on the theory - this ""feedback"" somehow needs to modify the mechanism used to move the mouse, gradually refining it until the mouse is able to head straight towards the cheese.
+Once ""trained"", I should be able to reposition the cheese and expect the mouse to travel to it more quickly. Note I'm also keeping things simple by not having obstacles for the mouse to negotiate around.
+
+How on earth would this be implemented with a NN? I understand the basic concepts, but I find that things unravel once I start looking at real code! The examples I've seen typically start by training the NN from a data set, but this doesn't seem to apply here as it feels like the only training available is ""on the fly"" as the mouse moves around (i.e. closer = good, further away = bad). I'm assuming the brain has some kind of ""reward mechanism"" triggered by a stronger smell of cheese.
+
+Am I barking up the wrong tree - either with my thought process, or NN not being a good fit for this problem? This isn't homework btw, just something that I've been puzzling over in the back of my mind.
+"
+"['machine-learning', 'linear-regression', 'data-science']"," Title: multi vs one prediction using RegressionBody: I was trying to build a prediction system where I have the input data arranged in multiple columns. The input data would be of the type where I have
+
+- weather,
+- service type (bronze, silver, gold),
+- size (xs, s, m, l, xl, xxl),
+- time,
+- availability,
+- pin code, and
+- the result (target).
+
+Each of the data types is arranged in columns with a specific code. I have read this, this, this , this, and this.
+They are helpful but do not give me a clear picture. I would like to achieve a multi-vs-one prediction. Most of the schemes available are one-vs-one where the data is a 1*1 entity.
+Here is a sample code that I was working with:
+regressionModel = linear_model.LinearRegression()
+ """ 3. Processing is not necessary for current concept """
+ y = pd.DataFrame(modifiedDFSet['Code'])
+ print(y.shape)
+ drop2 = ['Code']
+ X = modifiedDFSet.drop(drop2)
+ print(X.shape)
+ """ 4. Data Scaling, Data Imputation is not necessary. Training and Test data is prepared using train-test-split """
+ train_data, test_data = train_test_split(X, test_size=0.20, random_state=42)
+ """ 5. the Regression Model """
+ # h = .02 # step size in the mesh
+ # logreg = linear_model.LinearRegression()
+ # we create an instance of Neighbours Classifier and fit the data.
+ regressionModel.fit(X, y)
+ d_predictions = regressionModel.predict(y)
+
+X.shape and y.shape would yield (500, 6) and (500, 1), respectively, which would obviously cause a dimensional error in d_predictions, meaning the regression model does not take multiple column inputs.
+I have a hypothesis that I can create a scoring scheme that will take into account the importance of each of the columns and create a scheme that creates a score and the end result would be a one-vs-one regression problem. Looking for some direction with respect to my hypothesis. Is it correct, wrong or halfway?
+"
+['machine-learning']," Title: How can I have a computer learn the equation with known dependent variables?Body: I'm trying to design an orbital rendezvous program for Kerbal Space Program. I'm focusing on when to launch my spacecraft so that I can end up in the general vicinity of the target. If I can control the ascent profile, the remaining dependent variables are the ship's twr and the target's altitude. I want to try a computer learning solution.
+
+What is the best way to formulate the problem of learning the time to launch based on some twr?
+
+How can I make an algorithm to compute the general equation of launch time to an altitude based on my ability to accelerate? What type of learning problem could this be classified as? What are some approaches to solving problems with known dependent variables?
+
+This may be an obvious question, I kind of expect the answer is regression? But it seemed a general enough question to be sure about to get a solid foothold in computer learning with this type of problem, which seems to come up a lot.
+"
+"['definitions', 'environment', 'norvig-russell']"," Title: Are all fully observable environments episodic?Body: According to the definition of a fully observable environment in Russell & Norvig, AIMA (2nd ed), pages 41-44, an environment is only fully observable if it requires zero memory for an agent to perform optimally, that is, all relevant information is immediately available from sensing the environment.
+
+From this definition and from the definition of an ""episodic"" environment in the same book, it is implied that all fully observable environments are, in fact, episodic or can be treated as episodic, which doesn't seem intuitive, but logically follows from the definitions. Also, no stochastic environment can be fully observable, even if the entire state space at a given point in time can be observed because rational action may depend on the previous observation that must be remembered.
+
+Am I wrong?
+"
+"['neural-networks', 'deep-learning', 'deepmind', 'transformer']"," Title: What is regression layer in a spatial transformer?Body: I came across this line while reading the original paper on Spatial Transformers by Deepmind in the last paragraph of Sec 3.1:
+
+
+ The localisation network function floc() can take any form, such as a fully-connected network or a convolutional network, but should include a final regression layer to produce the transformation parameters θ.
+
+
+I understand what regression is, but what is meant by a regression layer?
+"
+"['neural-networks', 'backpropagation', 'calculus', 'forward-pass']"," Title: Are my computations of the forward and backward pass of a neural network with one input, hidden and output neurons correct?Body: I have computed the forward and backward passes of the following simple neural network, with one input, hidden, and output neurons.
+
+Here are my computations of the forward pass.
+\begin{align}
+net_1 &= xw_{1}+b \\
+h &= \sigma (net_1) \\
+net_2 &= hw_{2}+b \\
+{y}' &= \sigma (net_2),
+\end{align}
+where $\sigma = \frac{1}{1 + e^{-x}}$ (sigmoid) and $ L=\frac{1}{2}\sum(y-{y}')^{2} $
+Here are my computations of backpropagation.
+\begin{align}
+\frac{\partial L}{\partial w_{2}}
+&=\frac{\partial net_2}{\partial w_2}\frac{\partial {y}' }{\partial net_2}\frac{\partial L }{\partial {y}'} \\
+\frac{\partial L}{\partial w_{1}} &=
+\frac{\partial net_1}{\partial w_{1}} \frac{\partial h}{\partial net_1}\frac{\partial net_2}{\partial h}\frac{\partial {y}' }{\partial net_2}\frac{\partial L }{\partial {y}'}
+\end{align}
+where
+\begin{align}
+\frac{\partial L }{\partial {y}'}
+& =\frac{\partial (\frac{1}{2}\sum(y-{y}')^{2})}{\partial {y}'}=({y}'-y) \\
+\frac{\partial {y}' }{\partial net_2}
+&={y}'(1-{y}')\\
+\frac{\partial net_2}{\partial w_2}
+&= \frac{\partial(hw_{2}+b) }{\partial w_2}=h \\
+\frac{\partial net_2}{\partial h}
+&=\frac{\partial (hw_{2}+b) }{\partial h}=w_2 \\
+\frac{\partial h}{\partial net_1}
+& =h(1-h) \\
+\frac{\partial net_1}{\partial w_{1}}
+&= \frac{\partial(xw_{1}+b) }{\partial w_1}=x
+\end{align}
+The gradients can be written as
+\begin{align}
+\frac{\partial L }{\partial w_2 } &=h\times {y}'(1-{y}')\times ({y}'-y) \\
+\frac{\partial L}{\partial w_{1}}
+&=x\times h(1-h)\times w_2 \times {y}'(1-{y}')\times ({y}'-y)
+\end{align}
+The weight update is
+\begin{align}
+w_{i}^{t+1} \leftarrow w_{i}^{t}-\alpha \frac{\partial L}{\partial w_{i}}
+\end{align}
+Are my computations correct?
+"
+"['neural-networks', 'ai-design', 'natural-language-processing', 'legal']"," Title: NLP proved against US legal textsBody: I'm new to AI development and am looking for a quality algorithm (potentially nlp?) implementation proved against US legal texts.
+
+Obviously some training would need to be done, but I've found little to no online references to go on when it comes to running assessment against US legal documents.
+
+My goal is to use an algorithm to discover potential issues in long and complex legal texts, or associated (groups) of legal texts which bind one or more related entities (people or corporations) to potentially conflicting clauses.
+
+Just a pointer in some kind of direction would be helpful.
+"
+"['terminology', 'cross-validation', 'training-datasets', 'test-datasets', 'validation-datasets']"," Title: What are ""development test sets"" used for?Body: This is a theoretical question. I am a newbie to artificial intelligence and machine learning, and the more I read the more I like this. So far, I have been reading about the evaluation of language models (I am focused on ASR), but I still don't get the concept of the development test sets.
+The clearest explanation I have come across is the following (taken from chapter 3 of the book Speech and Language Processing (3rd ed. draft) by Dan Jurafsky and James H. Martin)
+
+Sometimes we use a particular test set so often that we implicitly tune to its characteristics. We then need a fresh test set that is truly unseen. In such cases, we call the initial test set the development test set or, devset.
+
+In any case, I still don't understand why an additional test set has to be used. In other words, why aren't training and test sets enough?
+"
+"['machine-learning', 'linear-regression', 'gradient-descent']"," Title: How is direction of weight change determined by Gradient Descent algorithmBody: The result of gradient descent algorithm is a vector. So how does this algorithm decide the direction for weight change? We Give hyperparameters for step size. But how is the vector direction for weight change, for the purpose of reducing the Loss function in a Linear Regression Model, determined by this algorithm?
+"
+"['machine-learning', 'reinforcement-learning']"," Title: M.c.Escher and abstract thoughtBody: The lithographs of dutch artist M.C. Escher have been used in the study of artificial Intelligence. How can the human mind Incorporate these optical illusions into abstract thought? Is this reverse artificial intelligence?
+"
+"['comparison', 'genetic-algorithms', 'evolutionary-algorithms', 'biology']"," Title: What's the difference between biological and artificial evolution?Body: I am trying to understand the difference between biological and artificial evolution. If we look at it in terms of genetics, in both of them, the selection operation is a key term.
+
+What's the difference between biological and artificial evolution?
+"
+"['terminology', 'definitions', 'unsupervised-learning']"," Title: Why do we need learning in unsupervised learning?Body: I am not clear with the concept that an unsupervised model learns. We are giving an input and output to the supervised model, so that it can generate a particular value, pattern or something out of it which can be used to categorize something in the future. By contrast, in unsupervised learning, we are clustering, so why do we need learning?
+
+Can anyone explain this to me with some real-world examples?
+"
+"['convolutional-neural-networks', 'recurrent-neural-networks', 'comparison']"," Title: Do convolutional neural networks also have recurrent connections?Body: I asked my self this simple question while reading ""Comment Abuse Classification with Deep Learning"" by Chu and Jue. Indeed, they say at the end of the that
+
+
+ It is clear that RNNs, specifically LSTMs, and CNNs are state-of-the-art architectures for sentiment analysis
+
+
+To my mind, CNNs were only neurons arranged so that they correspond to overlapping regions when tiling the input field. That doesn't seem recurrent at all.
+"
+"['machine-learning', 'deep-learning', 'papers', 'alexnet']"," Title: Why do we need 10 bits to represent the 1000 classes in AlexNet?Body: I'm reading the AlexNet paper. In section 4, where the authors explain how they prevent overfitting, they mention
+
+
+ Although the 1000 classes of ILSVRC make each training example impose 10 bits of constraint on the mapping from image to label"".
+
+
+What does this mean?
+"
+['autoencoders']," Title: Equilateral and One-of-n encodingBody: I was reading AI For Humans Vol. 1 by Jeff Heaton when I came across the terms ""equilateral encoding"" and ""one-of-n encoding."" The explanations unfortunately made no sense to me and the reddit threads on the Web are blocked by my Internet provider (I use a high-school machine). Is anyone here able to provide basic explanations regarding the two procedures for me? Thanks in advance.
+"
+"['neural-networks', 'training', 'datasets']"," Title: Classification Learning - Normalization of time series and live usageBody:
+ UPDATE: The tables look messed up so I put them on pastebin for better visibility. https://pastebin.com/gDX28uVF
+
+
+I am using a Neural Network with different learning types (for example Standard Backpropagation) to classify trends in time series. As stated in several papers, data normalization is a very important factor for successful / efficient learning. I am trying to be as clear and precise as possible in the description.
+
+Problem / Learning Goal:
+
+The network gets trained with time series and 2 indicators to predict a specific cluster. Here is a very simple (madeup) example of raw data to understand the problem:
+
+Example RAW Data
+Timestamp;DensityX;WaveLengthY;Temperature (K)
+1;0.1;2;200
+2;0.9;3;150
+3;-0.5;1;175
+4;0;6;154
+5;1;8;155
+6;1.3;1.5;220
+7;-0.5;3.4;250
+8;0.2;2;255
+9;0.1;1;180
+
+
+ see https://pastebin.com/gDX28uVF for better visual
+
+
+I use the following process to generate suitable sample data for training:
+
+The neural network receives n time slices with the indicators and tries to check if a future trend in the temperature occurs (for x future time slices).
+For example n = 2; x=3.
+
+The input and output are defined as follows:
+
+Input vector:
+
+
+- In1 = Density_(t-2)
+- In2 = Wavelength_(t-2)
+- In3 = Density_(t-1)
+- In4 = Wavelength_(t-1)
+
+
+Output vector:
+
+Output Vector is a classification encoded by Effects Encoding or Dummy Encoding (Details in “Neural Networks using C# Succinctly”)
+
+Calculation:
+
+
+- Classification “Down” : Temperature drops 3 times in a row (Encoded as 0;1)
+- Classification: “Stable”: Temperature does neither drop nor raises 3 times in a row (1;0)
+- Classification: “Up”: Temperature raises 3 times in a row. (-1;-1;)
+
+
+So the “processed” training sample would look like this:
+
+Processed Data
+
+Pattern;I1;I2;I3;I4;O1;O2;Class;Used TS
+1;0.1;2;0.9;3;0;1;Down;1 to 5
+2;0.9;3;-0.5;1;-1;-1;Up;2 to 6
+3;-0.5;1;0;6;-1;-1;Up;3 to 7
+4;0;6;1;8;1;0;Stable;4 to 8
+5;1;8;1.3;1.5;1;0;Stable;5 to 9
+
+
+ see https://pastebin.com/gDX28uVF for better visual
+
+
+As you can see, due to the different indicator ranges I want to normalize the data.
+
+Basically I found the following propositions in literature and research:
+
+Min/Max Normalization
+
+Requires the following values to calculate (a formula sketch follows the list):
+
+- dataHigh: The highest unnormalized observation.
+- dataLow: The lowest unnormalized observation.
+- normalizedHigh: The high end of the range to which the data will be normalized.
+- normalizedLow: The low end of the range to which the data will be normalized.
+
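+A sketch of the usual min/max mapping built from those four values (the dataLow/dataHigh numbers below are just the min and max of the DensityX column in my example table):
+
+def min_max(x, data_low, data_high, norm_low=-1.0, norm_high=1.0):
+    return (x - data_low) / (data_high - data_low) * (norm_high - norm_low) + norm_low
+
+print(min_max(0.1, data_low=-0.5, data_high=1.3))   # DensityX value from timestamp 1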
+
+Reciprocal normalization
+
+Every value is processed to its reciprocal (x=1/x). Calculated values for density x would be:
+
+Timestamp;Reciprocal Density
+1;10
+2;1.111111111
+3;-2
+4;#DIV/0!
+5;1
+6;0.769230769
+7;-2
+8;5
+9;10
+
+
+ see https://pastebin.com/gDX28uVF for better visual
+
+
+Percentage normalization
+
+The percentage delta is calculated using the value from the previous time stamp.
+
+The starting point was Timestamp 1, where the Delta equals 0.
+For each timestamp the delta percentage is calculated by evaluating the previous value. So calculating the time series delta percentages would turn out to be:
+
+Timestamp;""Delta Density X""
+1;0
+2;0.9
+3;-0.555555556
+4;0
+5;#DIV/0!
+6;1.3
+7;-0.384615385
+8;-0.4
+9;0.5
+
+
+ see https://pastebin.com/gDX28uVF for better visual
+
+
+As you can see, there are errors with handling zero values, and the range is still a problem in my opinion. The Min/Max approach generally leads to a good normalization, but I think there is a problem as well, because live data may breach the max and min values of the training set.
+
+My questions are:
+
+
+- What are your thoughts about the general idea of how I process the raw data?
+- How would you normalize the given data – if at all?
+
+a) Does it make sense for Min/Max normalization to propose min and max values which will include live data (and throw some error in case they are breached)?
+
+b) How should I handle 0 values (maybe convert them to a small positive or negative value)?
+- Are there other ideas or concepts to tackle this problem?
+
+
+I am looking forward to your input. Everything is appreciated. Thanks in advance! I also apologize for errors in the example values. Anyways, thanks for your time.
+
+Cheers, hob.
+"
+['ai-design']," Title: AI Self-Destruct ButtonBody: Could you implement code into an AI that can't be modified? For example, if you place code that shuts down the program/machine, would the AI be able to rewrite/reinterpret the ideas?
+"
+"['learning-algorithms', 'linear-regression']"," Title: Can we use the recursive least squares as a learning algorithm to an ADALINE?Body: I'm new to neural network, I study electrical engineering, and I just started working with ADALINEs.
+
+I use Matlab, and in their Documentation they cite :
+
+
+ However, here the LMS (least mean squares) learning rule, which is
+ much more powerful than the perceptron learning rule, is used. The
+ LMS, or Widrow-Hoff, learning rule minimizes the mean square error and
+ thus moves the decision boundaries as far as it can from the training
+ patterns.
+
+
+The LMS algorithm is the default learning rule to linear neural network in Matlab, but a few days later I came across another algorithm which is : Recursive Least Squares (RLS) in a 2017 Research Article by Sachin Devassy and Bhim Singh in the journal: IET Renewable Power Generation, under the title : Performance analysis of proportional resonant and ADALINE-based solar photovoltaic-integrated unified active power filter where they state :
+
+
+ ADALINE-based approach is an efficient method for extracting
+ fundamental component of load active current as no additional
+ transformation and inverse transformations are required. The various
+ adaptation algorithms include least mean square, recursive least
+ squares etc.
+
+
+My questions are:
+
+
+- Is RLS just like LMS (I mean, can it be used as a learning algorithm too)?
+- If yes, how can I customize my ADALINE to use RLS instead of LMS as a learning algorithm (preferably in Matlab, if not in Python), because I want to do a comparative study between the two algorithms!
+
+"
+"['convolutional-neural-networks', 'backpropagation', 'keras']"," Title: How does backpropagation work on a custom loss function whose components have magnitudes of different orders?Body: I want to use a custom loss function which is a weighted combination of l1 and DSSIM losses. The DSSIM loss is limited between 0 and 0.5 where as the l1 loss can be orders of magnitude greater and is so in my case. How does backpropagation work in this case? For a small change in weights, the change of the l1 component would obviously always be far greater than the SSIM component. So, it seems that only l1 part will affect the learning and the SSIM part would almost have no role to play. Is this correct? Or I am missing something here. I think I am, because in the DSSIM implementation of Keras-contrib, it is mentioned that we should add a regularization term like a l2 loss in addition to DSSIM (https://github.com/keras-team/keras-contrib/blob/master/keras_contrib/losses/dssim.py); but I am unable to understand how it would work and how the SSIM would affect the backpropagation being totally overshadowed by the large magnitude of the other component. It will be a great help if someone can explain this. Thanks.
+"
+"['reinforcement-learning', 'comparison', 'rewards', 'value-functions', 'return']"," Title: What is the difference between expected return and value function?Body: I've seen numerous mathematical explanations of reward, value functions $V(s)$, and return functions. The reward provides an immediate return for being in a specific state. The better the reward, the better the state.
+
+As I understand it, it can be better to be in a low-reward state sometimes because we can accumulate more long term, which is where the expected return function comes in. An expected return, return or cumulative reward function effectively adds up the rewards from the current state to the goal state. This implies it's model-based. However, it seems a value function does exactly the same.
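+For reference, the definitions I keep running into (e.g. in Sutton & Barto), assuming discounting with $\gamma$, are
+$$G_t = R_{t+1} + \gamma R_{t+2} + \gamma^{2} R_{t+3} + \dots = \sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1}, \qquad v_{\pi}(s) = \mathbb{E}_{\pi}\left[G_t \mid S_t = s\right]$$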
+
+Is a value function a return function? Or are they different?
+"
+"['neural-networks', 'machine-learning', 'backpropagation', 'time-complexity']"," Title: What is the time complexity for training a neural network using back-propagation?Body: Suppose that a NN contains $n$ hidden layers, $m$ training examples, $x$ features, and $n_i$ nodes in each layer. What is the time complexity to train this NN using back-propagation?
+
+I have a basic idea about how the time complexity of algorithms is found, but here there are 4 different factors to consider, i.e. iterations, layers, nodes in each layer, training examples, and maybe more factors. I found an answer here but it was not clear enough.
+
+Are there other factors, apart from those I mentioned above, that influence the time complexity of the training algorithm of a NN?
+"
+['classification']," Title: Decision tree classifierBody: I have seen weka j48 classifier, I want to build a classifier similar to it but I don't know how to go about it.
+Can anyone advise me on how to create a classifier algorithm for a decision tree?
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'image-recognition']"," Title: Is color information only extracted in the first input layer of a convolutional neural network?Body: In a convolutional neural network (CNN), since the RGB values get multiplied in the first convolutional layer, does this mean that color is essentially only extracted in the very first layer?
+A snippet from CS231n Convolutional Neural Networks for Visual Recognition:
+
+One dangerous pitfall that can be easily noticed with this visualization is that some activation maps may be all zero for many different inputs, which can indicate dead filters, and can be a symptom of high learning rates.
+
+Another one.
+
+Typical-looking activations on the first CONV layer (left), and the 5th CONV layer (right) of a trained AlexNet looking at a picture of a cat. Every box shows an activation map corresponding to some filter. Notice that the activations are sparse (most values are zero, in this visualization shown in black) and mostly local.
+
+"
+"['machine-learning', 'decision-trees', 'id3-algorithm']"," Title: Given a dataset with no noisy examples, is the training error for the ID3 algorithm always 0?Body: Given a dataset with no noisy examples (i.e., it is never the case that for 2 examples, the attribute values match but the class value does not), is the training error for the ID3 algorithm is always equal to 0?
+"
+"['machine-learning', 'classification', 'datasets']"," Title: In order to classify the taste of crystals, how should I make the training, validation and test datasets?Body: Right now, I'm planning to make a deep neural network for classifying taste of crystals, with their molecular structure, which includes information like the number of atoms or the mass of each atom.
+How should I make a data set for training, testing, and validation?
+"
+"['deep-learning', 'computer-vision']"," Title: Algorithms for scene rotationBody: My goal is to take an image and return another image that looks as if the scene was viewed from another angle. The difference in angle can be small — let's say as if the hand holding the camera moved slightly sideways.
+"
+"['convolutional-neural-networks', 'weights', 'filters']"," Title: In a CNN, does each new filter have different weights for each input channel, or are the same weights of each filter used across input channels?Body: My understanding is that the convolutional layer of a convolutional neural network has four dimensions: input_channels, filter_height, filter_width, number_of_filters
. Furthermore, it is my understanding that each new filter just gets convoluted over ALL of the input_channels
(or feature/activation maps from the previous layer).
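+To pin down the shapes in the understanding above, here is a small numpy sketch of the "one filter spans all input channels" reading, where a filter has a separate 2D kernel per input channel and the per-channel responses are summed into a single output map:
+
+import numpy as np
+
+in_ch, H, W = 3, 5, 5
+n_filters, kh, kw = 2, 3, 3
+x = np.random.randn(in_ch, H, W)
+filters = np.random.randn(n_filters, in_ch, kh, kw)    # weights differ per input channel
+
+out = np.zeros((n_filters, H - kh + 1, W - kw + 1))
+for f in range(n_filters):
+    for i in range(out.shape[1]):
+        for j in range(out.shape[2]):
+            patch = x[:, i:i + kh, j:j + kw]            # take all input channels at once
+            out[f, i, j] = np.sum(patch * filters[f])   # sum over channels as well
+print(out.shape)    # (2, 3, 3)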
+HOWEVER, the graphic below from CS231 shows each filter (in red) being applied to a SINGLE CHANNEL, rather than the same filter being used across channels. This seems to indicate that there is a separate filter for EACH channel (in this case I'm assuming they're the three color channels of an input image, but the same would apply for all input channels).
+This is confusing - is there a different unique filter for each input channel?
+
+This is the source.
+The above image seems contradictory to an excerpt from O'reilly's "Fundamentals of Deep Learning":
+
+...filters don't just operate on a single feature map. They operate on the entire volume of feature maps that have been generated at a particular layer...As a result, feature maps must be able to operate over volumes, not just areas
+
+...Also, it is my understanding that the images below indicate that THE SAME filter is just convolved over all three input channels (contradictory to what's shown in the CS231 graphic above):
+
+
+"
+"['neural-networks', 'machine-learning']"," Title: Learning algorithm that filters keyboard clicking in audio feedsBody: When recording audio for screencasts or similar, very often the keyboard is clearly visible and can start to annoy listeners after a while.
+
+NNs are quite good at recognizing patterns. Image classification is all over the place these days. There is also some work on audio, so that seems to work as well. Could the following approach therefore work to eliminate (or greatly reduce) the sounds of the keyboard in a recording whilst leaving the voice quality largely untouched?
+
+
+- Train a NN to recognize the clicking sounds of the keys. Lots of labeled data can be created by just recording and tracking key clicks in the millisecond range. That way markers can be placed on the recording automatically that ""label"" clicks from non clicks. Let's say a click has on average a 10ms range in the audio, the audio feed could be cut into snippets of 10ms and those that have a click sound in it are labelled as such.
+- An adversarial network is trained to modify an input stream so as to fool the first one into thinking there are no clicks, while also being punished for large changes in the stream data. So the better it removes the click sounds the better, but if it just gives out nothing (technically no clicks then), it's of course bad, so there needs to be some reward for being ""close to input""
+
+
+Would this be a good approach? Are there other ways to filter this? I know there is an ""ehm detector"" that uses MDP to warn speakers whenever they are likely to say ""ehm"". This wouldn't apply to this though, because it's not that I want to guess when the next click comes but rather I want to manipulate the input stream without running a constant filter on the entire stream such as a lowpass filter for removing unwanted constant noise. So ideally the algorithm would learn to apply a ""correction stamp"" whenever a click is detected to remove a range of frequencies during small windows in the overall recording but leaving most of it untouched.
+"
+"['convolutional-neural-networks', 'backpropagation', 'gradient-descent', 'convolutional-layers']"," Title: How to compute the derivative of the error with respect to the input of a convolutional layer when the stride is bigger than 1?Body: I read that to compute the derivative of the error with respect to the input of a convolution layer is the same to make of a convolution between deltas of the next layer and the weight matrix rotated by $180°$, i.e. something like
+$$\delta^l_{ij}=\delta^{l+1}_{ij} * rot180(W^{l+1})f'(x^l_{ij})$$
+with $*$ convolution operator. This is valid with $stride=1$.
+However, what happens when stride is greater than $1$? Is it still a convolution with a kernel rotation or I can't make this simplification?
+"
+"['machine-learning', 'reinforcement-learning', 'classification', 'unsupervised-learning', 'long-short-term-memory']"," Title: Learning from events. Supervised, Unsupervised or MDP?Body: I have a large set of simulation logs for a market simulation of which I want to learn from. The market includes:
+
+
+- customers
+- products (subscriptions)
+
+
+The customers choose products and then stick with them until they decide on a different one. Examples could be phone, electricity or insurance contracts.
+
+For every simulation I get the data about the customers (some classes and metadata) and then for each round I get signups/withdrawals and charges for the use of the service.
+
+I am trying to learn a few things
+
+
+- competitiveness of an offering (in relation to the environment/competition)
+- usage patterns of customers (the underlying model is a statistical simulation) depending on their chosen tariff, time of day and their metadata + historical usage
+- ability to forecast customer numbers for each product
+
+
+The use cases are all very applicable to real world data although my case is all a (rather large) simulation.
+
+My problem is this: What kind of learning is this? Supervised? Unsupervised? I have come up with various hypotheses and cannot find a definite answer for either.
+
+
+- Pro Supervised: For the usage patterns of the customers I have historical data of actual usage so I can do something similar to time-series forecasting. However, I don’t want to forecast simply off of their previous usage but also off of their metadata and their tariff choice (so also metadata in a way).
+- Pro Unsupervised: The forecasting of the “competitiveness” of a randomly chosen product configuration is hard to label even with historical data. The exact reason why a product has performed in a certain way is very high-dimensional. I do get subscription records about every product for every time slot though, so I guess some “feedback” could be generated. This might also be a RL problem though?
+
+
+So obviously I need help pulling these different concepts apart so as to map them on this kind of problem which is not the classical “dog or cat” problem or the classical “historical data here, please forecast” timeseries issue. It’s also not a “learn how to walk” reinforcement problem as it’s based on historical data. The end goal is however to write an agent that generates these products and competes in the market so that will be a reinforcement problem.
+"
+"['machine-learning', 'natural-language-processing', 'knowledge-representation', 'turing-test', 'automated-reasoning']"," Title: Did Turing foresee the required capabilities to pass the Turing test?Body: In Section 1.1 of Artificial Intelligence: A Modern Approach, it is stated that a computer which passes the Turing Test would need 4 capabilities, and that these 4 capabilities comprise most of the field of Artificial Intelligence:
+
+
+- natural language processing: to enable it to communicate successfully in English
+- knowledge representation: to store what it knows and hears
+- automated reasoning: to use the stored information to answer questions and to draw new conclusions
+- machine learning: to adapt to new circumstances and to detect and extrapolate patterns
+
+
+Did Alan Turing discern the requirements for the field of artificial intelligence (the necessary subfields) and purposefully design a test around these requirements, or did he simply design a test that is so general that the subfields which developed within artificial intelligence happen to be what is required to solve it? That is, was he prescient or lucky? Are these Turing's subdivisions, or Peter Norvig's and Stuart Russell's?
+
+If Turing did foresee these 4 requirements, what did he base them on? What principles of intelligence allowed him to predict the requirements for the field?
+"
+"['terminology', 'papers', 'genetic-programming', 'grammatical-evolution', 'codon']"," Title: What does ""we wrap the individual and reuse the codons"" mean in the paper ""Grammatical Evolution"" by Neill and Ryan?Body: I've just started learning Grammatical Evolution and I'm reading the paper Grammatical Evolution by Michael O'Neill and Conor Ryan.
+On page 3 (section IV-A), they write:
+
+During the genotype-to-phenotype mapping process it is possible for individuals to run out of codons and in this case, we wrap the individual and reuse the codons.
+
+I'm not an English native speaker and I don't understand the meaning of the word "wrap" here. What does it mean?
+I understood that, if none of the symbols are terminals, we have to start from the beginning of the genotype again and replace the nonterminal symbols until we have only terminal symbols. But, if I'm correct, when do I have to stop? In the paper, they also talk about non-valid individuals.
+"
+"['game-ai', 'monte-carlo-tree-search']"," Title: How to estimate the AI player's strength in multiplayer game?Body: I have implemented multiple MCTS based AI players for the Love Letter game (rules). It is a 2-4 players zero sum card game where players make alternating moves. I am struggling with how to properly conduct experiments for estimating AI player strength against human players:
+
+
+- In a 2-player game where one of the players is an AI bot
+- In a 4-player game where one (or more) of the players is an AI bot
+
+"
+"['neural-networks', 'machine-learning']"," Title: S-shaped nonlinearities in tanh neuronsBody: I have started reading Fundamentals of Deep Learning by Nikhil Buduma and I have a question regarding tanh neurons. In the book, it is stated:
+
+
+ ""When S-shaped nonlinearities are used, the tanh neuron is often preferred over the sigmoid neuron because it is zero-centered.""
+
+
+Can anyone explain why, exactly?
+"
+"['neural-networks', 'convolutional-neural-networks', 'training', 'datasets', 'binary-classification']"," Title: If we want to classify something as either a cat/dog or neither, do we need 2 or 3 classes?Body: Suppose one trains a CNN to determine if something was either a cat/dog or neither (2 classes), would it be a good idea to assign all cats and dogs to one class and everything else to another? Or would it be better to have a class for cats, a class for dogs, and a class for everything else (3 classes)? My colleague argues for 3 classes because dogs and cats have different features, but I wonder if he's right.
+"
+"['reinforcement-learning', 'game-ai', 'q-learning', 'imperfect-information']"," Title: How to use DQN to handle an imperfect but complete information game?Body: I'm currently having troubles to win against a random bot playing the Schieber Jass game. It is a imperfect card information game. (famous in switzerland https://www.schieber.ch/)
+
+The environement I'm using is on Github https://github.com/murthy10/pyschieber
+
+To get a brief overview of the Schieber Jass I will describe the main characteristics of the game.
+The Schieber Jass consists of four players forming two teams.
+At the beginning, every player is randomly dealt nine cards (there are 36 cards).
+There are nine rounds, and every player has to choose one card each round. According to the rules of the game, the ""highest card"" wins and its team gets the points.
+Hence the goal is to get more points than the opposing team.
+
+There are several more rules but I think you can image how the game should roughly work.
+
+Now I'm trying to apply a DQN approach at the game.
+
+To my attempts:
+
+
+- I let two independent reinforcement players play against two random players
+- I designed the input state as a one-hot encoded vector with 36 ""bits"" for every player, repeated nine times, once for every card that can be played during a game.
+- The output is a vector of 36 ""bits"", one for every possible card.
+- If the greedy output of the network suggests an invalid action, I take the action with the highest probability among the allowed actions (a small sketch of this masking is given below).
+- The reward is +1 for winning, -1 for losing, -0.1 for an invalid action and 0 for an action which doesn't lead to a terminal state
+
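+To make the invalid-action handling concrete, this is roughly what I do (a minimal sketch; the function name and shapes are my own, not from the environment):
+
+import numpy as np
+
+def pick_action(q_values, legal_mask):
+    # q_values: shape (36,), network output for all cards
+    # legal_mask: shape (36,), True where the card may be played this trick
+    masked = np.where(legal_mask, q_values, -np.inf)  # illegal cards can never win the argmax
+    return int(np.argmax(masked))
+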
+
+My question:
+
+
+- Would it be helpful to use a LSTM and reduce the input state?
+- How to handle invalid moves?
+- Do you have some good ideas for improvements? (like Neural-Fictitious Self-Play or something similar)
+- Or is this whole approach absolute nonsense?
+
+"
+"['neural-networks', 'convolutional-neural-networks']"," Title: How well can CNN for bounding box detection generalise?Body: Suppose a CNN is trained to detect bounding box of a certain type of object (people, cars, houses, etc.)
+
+If each image in the training set contains just one object (and its corresponding bounding box), how well can a CNN generalize to pick up all objects if the input for prediction contains multiple objects?
+
+Should the training images be downsampled in order for the CNN to pick out multiple objects in the prediction?
+
+I don't have a specific one in mind. I was just curious about the general behavior.
+"
+"['convolutional-neural-networks', 'image-recognition', 'transfer-learning', 'yolo', 'binary-classification']"," Title: Why is my fine-tuned YOLO model detecting other objects as a human?Body: I am new to deep learning and computer vision. I have a problem where I use the YOLO to detect objects.
+For my problem, I just want to recognize 1 human only. So, I changed the final YOLO's layer (which contained 80 neurons) to only 1 neuron, and do the training process with transfer learning techniques. Of course, I do not use the final layer's weights, and these weights are randomly initialized for my problem. I feed only the human data to the model.
+However, I realize that after longer training, the model becomes worse. It starts to recognize other objects as a human.
+Should I also feed non-human data to the model?
+"
+"['algorithm', 'optimization', 'combinatorics']"," Title: How to use MOPSO to align characters vertically?Body: I need to efficiently align characters vertically using Multi Objective PSO. Alignment is achieved by adding spaces in between a given set of characters.
+
+a b c d e f
+b b d h g
+c a b f
+
+
+Might be
+
+- a b - c d e f - -
+- - b b - d - - h g
+c a b - - - - f - -
+
+
+Now this is a multi-objective problem.
+I need to maximize the number of characters that get aligned vertically and minimize the number of spaces in between the characters.
+
+I wanted to focus firstly on how to get a set of characters to represent a position of a particle. This would mean that I need to somehow transform a possible set of characters into a position of a particle. If I can somehow achieve this then the rest should fall into place.
+
+
+- How do I transform these set of characters into a position of a particle?
+- Also is this the best approach or are there better ways to approach this problem?
+
+"
+"['neural-networks', 'deep-learning', 'definitions', 'activation-functions', 'elu']"," Title: Why don't ELUs multiply the linear portion by $\alpha$?Body: An exponential linear unit (as proposed by Clevert et al.) uses the function:
+\begin{align}
+\text{ELU}_\alpha(x)
+=
+\begin{cases}
+\alpha(e^x - 1), &\text{if } x < 0\\
+x, &\text{if } x \geq 0\\
+\end{cases}
+\end{align}
+Here's a picture.
+
+Now, this is continuous at $x=0$, which is great. It's differentiable there too if $\alpha=1$, which is the value that the paper used to test ELU units.
+But if $\alpha \neq 1$ (as in the above diagram), then it's no longer differentiable at $x=0$. It has a crook in it, which seems weird to me. Having your function be differentiable at all points seems advantageous. Further, it seems that, if you just make the linear portion evaluate to $\alpha x$ rather than $x$, it would be differentiable there.
+Is there a reason that the function wasn't defined to do this? Or did they not bother, because $\alpha = 1$ is definitely the hyperparameter to use?
+"
+"['reinforcement-learning', 'q-learning', 'environment']"," Title: How does Q-learning work in stochastic environments?Body: The Q function uses the (current and future) states to determine the action that gets the highest reward.
+
+However, in a stochastic environment, the current action (at the current state) does not determine the next state.
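+For reference, the tabular update I have in mind is $Q(s,a) \leftarrow Q(s,a) + \alpha \left[ r + \gamma \max_{a'} Q(s',a') - Q(s,a) \right]$, where $s'$ is simply whatever next state the environment happens to produce after taking $a$ in $s$.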
+
+How does Q learning handle this? Is the Q function only used during the training process, where the future states are known? And is the Q function still used afterwards, if that is the case?
+"
+"['neural-networks', 'recurrent-neural-networks']"," Title: How to calculate the output of this neural network?Body: What is the output value of the network for these inputs respectively, and why?
+(Linear activation function is fine.)
+
+[2, 3][-1, 2][1, 0][3, 4]
+
+My main question is how you take the 'backwards' directed paths into account.
+
+
+"
+"['classification', 'fuzzy-logic']"," Title: Fuzzy confusion matrix for fuzzy classifierBody: Let us suppose I have a NxN
matrix and I want to classify in M
classes each entry of the matrix using a fuzzy classifier. The output of my classifier will be, for each matrix entry, an M
-dimensional vector containing the probabilities for the entry to be classified in each class.
+A naive way to build a confusion matrix would be to select the highest probability in each vector and use it as a crisp classification. However, I would like to take into account all the probabilities associated with each entry and compute a ""fuzzy"" confusion matrix. Is this possible?
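+To illustrate what I mean, a minimal sketch of such a soft accumulation (this particular scheme is just my own assumption of how it could work):
+
+import numpy as np
+
+def fuzzy_confusion_matrix(true_labels, prob_vectors, n_classes):
+    # true_labels: shape (n_samples,), integer class labels
+    # prob_vectors: shape (n_samples, n_classes), per-class probabilities from the fuzzy classifier
+    cm = np.zeros((n_classes, n_classes))
+    for label, probs in zip(true_labels, prob_vectors):
+        cm[label] += probs  # add the whole probability vector instead of a single argmax vote
+    return cm  # row i sums to the number of entries whose true class is i
+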
+"
+"['neural-networks', 'algorithm']"," Title: Ideas on how to make a neural net learn how to split sequence into sub sequencesBody: How can I train a neural network to recognize sub-sequences in a sequence flow?
+
+For example: Given the sequence 111100002222 as an input sample from a stream, the neural network would recognize that 1111 , 0000 , 2222 are sub sequences (so 111100 would not be a valid subsequence) and so on for ~ 50 to 100 different subsequences.
+
+There is no particular order in which the subsequence would appear in the flow.
+No network architecture restriction.
+Subsequences are of variable length.
+
+General concepts, ideas, and theory are welcome.
+"
+"['neural-networks', 'genetic-algorithms', 'evolutionary-algorithms', 'neat', 'neuroevolution']"," Title: Several questions regarding the NEAT algorithmBody: I've recently read the paper Evolving Neural Networks through Augmenting Topologies which introduces NEAT. I am now trying to prototype it myself in JavaScript. However, I stumbled across a few questions I can't answer.
+
+
+- What is the definition of ""structural innovation"", and how do I store these so I can check if an innovation has already happened before?
+
+
+ However, by keeping a list of the innovations that occurred in the current generation, it is possible to ensure that when the same structure arises more than once through independent mutations in the same generation, each identical mutation is assigned the same innovation number
+
+- Is there a reason for storing the type of a node (input, hidden, output)?
+- In the original paper, only connections have an innovation number, but in other sources, nodes do as well. Is this necessary for the crossover? (This has already been asked here.)
+- How could I limit the mutation functions to not add recurrent connections?
+
+"
+"['neural-networks', 'machine-learning', 'comparison']"," Title: How to decide whether a problem needs to be solved algorithmically or with machine learning techniques?Body: There are problems (e.g. this one or this other one) that could potentially be solved easily using traditional algorithmic techniques. I think that training a neural network (or any other machine learning model) for such sorts of problems will be more time consuming, resource-intensive, and pointless.
+
+If I want to solve a problem, how to decide whether it is better to solve algorithmically or by using NN/ML techniques? What are the pros and cons? How can this be done in a systematic way? And if I have to answer someone why I chose a particular domain, how should I answer?
+
+Example problems are appreciated.
+"
+"['neural-networks', 'reference-request', 'data-augmentation', 'training-datasets']"," Title: What is the effect of training a neural network with randomly generated fake data that satisfies certain constraints?Body: I have a neural network with 2 inputs and one output, like so:
+input | output
+____________________
+ a | b | c
+ 5.15 |3.17 | 0.0607
+ 4.61 |2.91 | 0.1551
+
+etc.
+I have 75 samples and I am using 50 for training and 25 for testing.
+However, I feel that the training samples are not enough. Because I can't provide more real samples (due to time limitation), I would like to train the network using fake data:
+For example, I know that the range for the a parameter is from 3 to 14, and that the b parameter is ~65% of the a parameter. I also know that c is a number between 0 and 1 and that it increases when a & b increase.
+So, what I would like to do is to generate some data using the above restrictions (about 20 samples). For example, assume a = 13, b = 8 and c = 0.95, and train the network with these samples before training it with the real samples.
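+For concreteness, a minimal sketch of the kind of generator I have in mind (the ranges and the roughly-65% relation are the constraints stated above; the noise levels and the exact formula for c are made up):
+
+import numpy as np
+
+def generate_fake_samples(n=20, seed=0):
+    rng = np.random.default_rng(seed)
+    a = rng.uniform(3, 14, size=n)            # a is known to lie in [3, 14]
+    b = a * rng.normal(0.65, 0.05, size=n)    # b is roughly 65% of a
+    c = 1 / (1 + np.exp(-(a + b - 12) / 4))   # c stays in (0, 1) and grows with a and b
+    return np.column_stack([a, b, c])
+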
+Has anybody studied the effect of doing this on the neural network? Is it possible to know if the effect will be better or worse on the networks? Are there any recommendations/guidelines if I want to do this?
+"
+"['machine-learning', 'ai-design']"," Title: Social network filtering for specific topicBody: I created and operate a social network for meeting new people. As a result of the recent FOSTA legislation, it’s imperative that I implement an automated system to prevent users from posting advertisements relating to prostitution. I do not have much expierence with AI/Machine learning. What library, algorithm, method should I look into to solve this problem?
+"
+"['neural-networks', 'deep-learning', 'training', 'deep-neural-networks']"," Title: What is the most time-consuming part of training deep networks?Body: Deep networks notoriously take a long time to train.
+
+What is the most time-consuming aspect of training them? Is it the matrix multiplications? Is it the forward pass? Is it some component of the backward pass?
+"
+"['neural-networks', 'feedforward-neural-networks', 'hidden-layers', 'residual-networks']"," Title: Why aren't there neural networks that connect the output of each layer to all next layers?Body: Why aren't there neural networks that connect the output of each layer to all next layers?
+
+For example, the output of layer 1 would be fed to the input of layers 2, 3, 4, etc. Beyond computational power considerations, wouldn't this be better than only connecting layers 1 and 2, 3 and 4, etc?
+
+Also, wouldn't this solve the vanishing gradient problem?
+
+If computational power is the concern, perhaps you could connect layer 1 only to the next N layers.
+"
+"['neural-networks', 'machine-learning', 'ai-design', 'keras', 'image-recognition']"," Title: What is the general procedure to create an AI system that can detect fire in images?Body: I have no experience with any kind of AI, but I really want to develop a system that can detect fire in images. I think I will need a labelled dataset with labels "fire" or "not fire", but I am not sure how I should proceed and which steps I need to take to develop this system.
+So, what is the general procedure to create an AI system that can detect fire in images?
+I heard about the Keras library, which could allow us to do this. How could I do this with Keras?
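+To make the question concrete, this is the kind of minimal Keras pipeline I imagine (the directory layout, image size and layer sizes are all assumptions on my part):
+
+from keras.models import Sequential
+from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
+from keras.preprocessing.image import ImageDataGenerator
+
+# assumed layout: data/train/fire and data/train/no_fire, one image per file
+train = ImageDataGenerator(rescale=1 / 255.).flow_from_directory(
+    'data/train', target_size=(128, 128), class_mode='binary')
+
+model = Sequential([
+    Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 3)),
+    MaxPooling2D(),
+    Conv2D(64, (3, 3), activation='relu'),
+    MaxPooling2D(),
+    Flatten(),
+    Dense(64, activation='relu'),
+    Dense(1, activation='sigmoid'),  # probability that the image contains fire
+])
+model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
+model.fit_generator(train, epochs=10)
+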
+"
+"['natural-language-processing', 'emotional-intelligence', 'sentiment-analysis']"," Title: Can we detect the emotions (or feelings) of a human through conversations with an AI?Body: Can we detect the emotions (or feelings) of a human through conversations with an AI?
+Something like a "confessional", disregarding human possibilities to lie.
+Below, I have the categories joyful, sadness, anger, fear and affection. For each category, there are several words that can be in the texts that refer to it.
+
+- Joy: ( cheerful, happy, confident, happy, satisfied, excited, interested, dazzled, optimistic, relieved, euphoric, drunk, witty, good )
+
+- Sadness: ( sad, desperate, displeased, depressed, bored, lonely, hurt, desolate, meditative, defrauded, withdrawn, pitying, concentrated, depressed, melancholic, nostalgic )
+
+- Anger: ( aggressive, critical, angry, hysterical, envious, grumpy, disappointed, shocked, exasperated, frustrated, arrogant, jealous, agonized, hostile, vengeful )
+
+- Fear: ( shy, frightened, fearful, horrified, suspicious, disbelieving, embarrassed, embarrassed, shaken, surprised, guilty, anxious, cautious, indecisive, embarrassed, modest )
+
+- Affection: ( loving, passionate, supportive, malicious, dazzled, glazed, homesick, embarrassed, indifferent, curious, tender, moved, hopeful )
+
+
+Flow Example
+Phrase 1: "I'm very happy! It concludes college."
+Categorization 1:
+ - Joy (+1)
+
+
+Phrase 2: "I'm sad, my mother passed away."
+Categorization 2:
+ - Sadness (+1)
+
+
+Phrase 3: "I met a girl, but I was ashamed."
+Categorization 3:
+ - Fear (+1)
+Is this a clever way to follow and / or improve, or am I completely out of the way?
+I see that there is a Google product that creates parsing according to the phrases. I do not know how it works, because I like to recreate the way I think it would work.
+Remembering that this would not be the only way to categorize the phrase. This would be the first phase of the analysis. I can also identify the subject of the sentence, so we would know if the sadness is from the creator of the message or from a third party, in most cases.
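+A minimal sketch of the keyword-counting idea described above (the word lists here are shortened, and real matching would of course need stemming, negation handling and so on):
+
+LEXICON = {
+    'joy': ['happy', 'cheerful', 'satisfied', 'optimistic'],
+    'sadness': ['sad', 'depressed', 'lonely', 'melancholic'],
+    'anger': ['angry', 'frustrated', 'hostile', 'jealous'],
+    'fear': ['afraid', 'ashamed', 'anxious', 'embarrassed'],
+    'affection': ['loving', 'passionate', 'tender', 'hopeful'],
+}
+
+def categorize(sentence):
+    words = sentence.lower().split()
+    scores = {cat: sum(w in words for w in kws) for cat, kws in LEXICON.items()}
+    return max(scores, key=scores.get)  # category with the most keyword hits
+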
+
+"
+['recurrent-neural-networks']," Title: Best practices to classify recurring patterns using an LSTM or GRUBody: I'm working with acoustic data (filterbank features) and I want to build a neural network to detect claps using an LSTM (or a GRU) with a binary output (present/abscent), and I'm wondering about how I should prepare my data before feeding them to the RNN.
+
+If I have 20 seconds of claps (separate claps separated by ~ 0.1 seconds) what is the difference between :
+
+
+- Feeding the network a series of N claps as one example (with variable N : 1, 2, .., 10, ..) + padding with zeros to fit the longest sequence.
+- Feeding the network multiple examples of 1 clap.
+
+
+My problem is not restricted to claps but covers patterns that can be observed as separated occurrences, periodic sequence of occurrences, variable-length-period ""quasi-periodic"" sequence of occurrences, etc.
+
+Unlike an ergodic HMM, an RNN doesn't have any loops to ""jump back"" to a previous ""acoustic state"", so what should I do with this kind of data?
+"
+"['machine-learning', 'definitions', 'applications', 'reference-request', 'swarm-intelligence']"," Title: What types of applications qualify as ""compound intelligences""?Body: A question about swarm intelligence as a potential method of strong general AI came up recently, and yielded some useful answers and clarifications regarding the nature of swarm intelligence. But it got me thinking about ""group intelligence"" in general.
+
+Here organism is synonymous with algorithm, so a complex organism is an algorithm made up of component algorithms, based on a set of instructions in the form of a string.
+
+Now consider the Portuguese man o' war, not a single animal, but a colonial organism. In this case, that means a set of animals connected for mutual benefit.
+
+And physalia physalis are pretty smart as a species in that they've been around for a while, I'm not finding them on any endangered lists, and based on their habitat it looks like global warming will be a jackpot for them. And they don't even have brains.
+
+Each component of the physalia has a narrow function, colony organism itself has a more generalized function, which is the set of functions necessary for maintenance and reproduction.
+
+{Man o' War} ⊇ { {pneumatophore}, {gonophores, siphosomal nectophores,vestigial siphosomal nectophores}, {free gastrozooids, tentacled gastrozooids, gonozooids, gonopalpons}, {dactylozooids}, {gonozooids}, {gastrozooids} }
+
+
+- What types of applications qualify as ""compound intelligences""? What is the thinking on groups of neural networks comprising generally stronger or simply more generalized intelligence?
+
+
+I recognize the underlying problem is ultimately complexity and that ""strong narrow AI"" is, by definition, limited, so I use ""generalized"" and omit ""strong"" because human-like and superintelligence are not conditions. Compound intelligence is defined as a colony of dependent intelligences.*
+
+Utility software is often a form of an expert system that manages a set of functions of varying degrees of complexity. There's currently a great deal of focus on autonomous vehicles, which would seem to require sets of functions.
+
+Links to research papers on this or related subjects would be ideal.
+
+
+
+Portuguese Man o' War (oceana.org)
+
+The Bugs Of The World Could Squish Us All
+"
+"['reinforcement-learning', 'q-learning', 'dqn', 'game-ai', 'tic-tac-toe']"," Title: What are good learning strategies for Deep Q-Network with opponents?Body: I am trying to find out what are some good learning strategies for Deep Q-Network with opponents. Let's consider the well-known game Tic-Tac-Toe as an example:
+
+- How should an opponent be implemented to get good and fast improvements?
+- Is it better to play against a random player or a perfect player or should the opponent be a DQN player as well?
+
+"
+"['neural-networks', 'game-ai', 'minimax', 'imperfect-information', 'evaluation-functions']"," Title: Why most imperfect information games usually use non machine learning AI?Body: To provide a bit of context, I'm a software engineer & game enthusiast (card games, especially). The thing is I've always been interested in AI oriented to games. In college, I programmed my own Gomoku AI, so I'm a bit familiar with the basic concepts of AI applied to games and have read books & articles about Game Theory as well.
+My issue comes when I try to analyze AIs for imperfect-information games (Poker, Magic: The Gathering, Hearthstone, etc.). In most cases, when I found an AI for Hearthstone, it was either some sort of Monte Carlo or MinMax strategy. I honestly think that, although it might provide some decent results, it will always be quite flat and linear, since it doesn't take into account what deck the opponent is playing and almost always tries to follow the same game plan; it will not change based on tells your opponent might give away via cards played (a hint that a human would catch).
+I would like to know if using Neural Networks would be better than just using a raw evaluation of board state + hands + Hp each turn without taking into account learning about possible threads the opponent might have, how to deny the opponent the best plays he could make, etc.
+My intuition tells me that this is way harder and far more complex.
+Is that the only reason the NN method is not used? Has there been any research to prove how much efficiency edge would be between those 2 approaches?
+"
+"['search', 'branching-factors']"," Title: Are leaf nodes included in the calculation of average branching factor for search trees?Body: In the search tree below, there are 11 nodes, 5 of which are leaves. There are 10 branches.
+
+Is the average branching factor given by 10/6, or 10/11?
+Are leaves included in the calculation? Intuitively, I would think not, since we are interested in nodes with branches. However, a definition given to me by my professor was "The average number of branches of all nodes in the tree", which would imply leaves are included.
+"
+"['neural-networks', 'optimization']"," Title: Why does a one-layer hidden network get more robust to poor initialization with growing number of hidden neurons?Body: In a nutshell: I want to understand why a one hidden layer neural network converges to a good minimum more reliably when a larger number of hidden neurons is used. Below a more detailed explanation of my experiment:
+
+I am working on a simple 2D XOR-like classification example to understand the effects of neural network initialization better. Here's a visualisation of the data and the desired decision boundary:
+
+
+Each blob consists of 5000 data points. The minimal complexity neural network to solve this problem is a one-hidden layer network with 2 hidden neurons. Since this architecture has the minimum number of parameters possible to solve this problem (with a NN) I would naively expect that this is also the easiest to optimise. However, this is not the case.
+
+I found that with random initialization this architecture converges around half of the time, where convergence depends on the signs of the weights. Specifically, I observed the following behaviour:
+
+w1 = [[1,-1],[-1,1]], w2 = [1,1] --> converges
+w1 = [[1,1],[1,1]], w2 = [1,-1] --> converges
+w1 = [[1,1],[1,1]], w2 = [1,1] --> finds only linear separation
+w1 = [[1,-1],[-1,1]], w2 = [1,-1] --> finds only linear separation
+
+
+This makes sense to me. In the latter two cases the optimisation gets stuck in suboptimal local minima. However, when increasing the number of hidden neurons to values greater than 2, the network develops a robustness to initialisation and starts to reliably converge for random values of w1 and w2. You can still find pathological examples, but with 4 hidden neurons the chance that one ""pathway"" through the network will have non-pathological weights is larger. But what happens to the rest of the network then? Is it just not used?
+
+Does anybody understand better where this robustness comes from or perhaps can offer some literature discussing this issue?
+
+Some more information: this occurs in all training settings/architecture configurations I have investigated. For instance, activations=Relu, final_activation=sigmoid, Optimizer=Adam, learning_rate=0.1, cost_function=cross_entropy, biases were used in both layers.
+"
+"['ai-design', 'text-summarization']"," Title: How would one go about generating *sensible* responses to chat?Body: I have recently gone about and made a simple AI, one that gives responses to an input (albeit completely irrelevant and nonsensical ones), using Synaptic.js. Unfortunately, this is not the type of text generation I am looking for. What I am looking for would be a way to get connections between words and generate text from that. (What would be preferable would be to also generate at least semi-sensible answers also.)
+
+This is part of project Raphiel, and can be checked out in the room associated with this site. What I want to know is what layer combination would I use for text generation?
+
+I have been told to avoid retrieval-based bots.
+
+I have the method to send and receive messages, I just need to figure out what combination of layers would be the best.
+
+Unless I have the numbers wrong, this will be SE's second NN chatbot.
+"
+"['classification', 'datasets', 'generative-model']"," Title: Looking to build, compile, and/or find dataset for serial-parallelized code examplesBody: I'm looking to perform two tasks:
+
+
+- Train a classifier to classify code as serial or parallel
+- Train a generative algorithm to generate parallel code from serial
+
+
+For the first task a simple scraper can scrape random C and C++ code from git, however for the second step I would need a decently large source of examples of serial to parallel code. Any ideas or pointers for existing or creating this type of dataset would be greatly appreciated.
+"
+['gaming']," Title: Too small gradient on large neural networkBody: When training a large neural network, how do you deal with the case where the gradients are too small to have any impact?
+
+FYI, I have an RNN, which has multiple LSTM cells and each cell has hundreds of neurons. Each training data has thousands of steps, so the RNN would unroll thousands of times. When I print out all gradients, they are very small, like e-20 of the variable values. Therefore the training does not change the variable values at all.
+
+BTW, I think this is not an issue of vanishing gradients. Note that the gradients are uniformly small from the beginning to the end.
+
+Any suggestion to overcome this issue?
+
+Thanks!
+"
+"['convolutional-neural-networks', 'object-recognition']"," Title: FIlling space with empty bounding boxBody: I'm detecting objects on images. I want to detect up to 10 objects, however, I'm not sure how to deal with the situation, where only one object is present.
+
+Should I fill the remaining spaces in the label input data with vectors filled with 0? E.g:
+
+[[xmin,ymin,xmax,ymax],[0,0,0,0]...]
+
+
+Or is there any better way? Thanks for help!
+"
+"['neural-networks', 'genetic-algorithms', 'data-preprocessing', 'weights-initialization', 'weight-normalization']"," Title: How to solve the problem of too big activations when using genetic algorithms to train neural networks?Body: I am trying to create a fixed-topology MLP from scratch (with C#), which can solve some simple problems, such as the XOR problem and MNIST classification. The network will be trained purely with genetic algorithms instead of back-propagation.
+Here are the details:
+
+- Population size: 50
+- Activation function: sigmoid
+- Fixed topology
+- XOR: 2 inputs, 1 output. Tested with different numbers of hidden layers/nodes.
+- MNIST: $28*28=784$ inputs for all pixels, will be either ON(1) or OFF(0). 10 outputs to represent digits 0-9
+- Initial population will be given random weights between 0 and 1
+- 10 "Fittest" networks survive each iteration, and performs crossover to reproduce 40 offspring
+- For all weights, mutation occurs to add a random value between -1 to 1, with a 5% chance
+
+With 2 hidden layers of 4 and 3 neurons respectively, XOR managed to achieve 97-99.9% accuracy in around 100 generations. Biases were not used here.
+However, trying out MNIST revealed a pretty glaring issue: the 784 inputs (a large increase in the number of nodes compared to XOR), when multiplied by the weights and summed, result in HUGE values of 50 to even 100, way beyond the sensitive input range of the activation function (sigmoid).
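+As a rough back-of-the-envelope check (my own numbers): with weights drawn uniformly from [0, 1] (mean 0.5) and roughly half of the 784 binary inputs ON, the expected pre-activation of a first-layer neuron is about 392 * 0.5 = 196, while the sigmoid already saturates for inputs beyond roughly +-6.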
+This just renders all layers' outputs as 1 or 0.99999-something, which breaks the entire network. Also, since this makes all individuals in a population extremely similar to one another, the genetic algorithm seems to have no clue how to improve. The crossover will produce an offspring almost identical to its parents, and some lucky mutations are simply drowned out by the sheer number of other neurons!
+What can be a viable solution to this?
+It's my first time studying NNs, and this is really challenging.
+"
+"['machine-learning', 'image-recognition', 'pattern-recognition', 'deepfakes']"," Title: What are some tactics for recognizing artificially made media?Body: With the growing ability to cheaply create fake pictures, fake soundbites, and fake video there becomes an increasing problem with recognizing what is real and what isn't. Even now we see a number of examples of applications that create fake media for little cost (see Deepfake, FaceApp, etc.).
+
+Obviously, if these applications are used in the wrong way they could be used to tarnish another person's image. Deepfake could be used to make a person look unfaithful to their partner. Another application could be used to make it seem like a politician said something controversial.
+
+What are some techniques that can be used to recognize and protect against artificially made media?
+"
+"['reinforcement-learning', 'q-learning', 'heuristics', 'path-planning', 'path-finding']"," Title: Snake path finding variant : Algorithm choiceBody: I am working on a project which maps to a variant of path finding problem. I am new to this area and I would be very grateful if you could give suggestions/ point to libraries for relevant algorithms.
+
+A simplified version of my problem statement is as follows-
+
+Goal: On a 2D grid, starting from a fixed point reach the destination in exactly N steps.
+
+Allowed actions:
+1. At every position, you have a choice of up to three moves (i.e. straight, curve left, curve right).
+2. You cannot collide with the path traveled so far (just like in the snake game).
+
+Dimension of the grid: N x N where N is between 100-1000
+
+Scalable: Later on, the problem will be scaled to have multiple such snakes going between different pairs of points on the grid. The ultimate goal is to get ALL snakes to reach their respective destinations in a fixed number of steps without any collisions.
+
+TL;DR:
+Essentially I have to find a fixed-length path on a dynamically generated directed graph. Is there a better choice than an A* / greedy heuristic? Is it worth taking a Q-learning approach?
+
+A rudimentary one snake version written in python can be found here -
+Github Link
+Thanks in advance!
+"
+"['ai-design', 'classification', 'keras']"," Title: Two data classes for a convolutional neural network, can one have a LOT more images for training than the other?Body: I have two classes in the training set: one that has images with a feature and the other of images without that feature.
+Can there be a LOT more images with ""no feature"" so I can fit in all possible false positives?
+"
+"['machine-learning', 'recurrent-neural-networks', 'long-short-term-memory']"," Title: Time Series: LSTM or Augmented Vector Space?Body: In time Series prediction, we have a stream of vectors. There are different approaches for accounting for the temporal patterns between these vectors.
+
+There are two that I'm considering: an LSTM, or augmenting the feature space. What's the difference between the two? The most obvious difference to me is that an LSTM is more expressive and can get superior accuracy if modelled properly.
+"
+"['terminology', 'search', 'definitions']"," Title: What is the fringe in the context of search algorithms?Body: What is the fringe in the context of search algorithms?
+"
+"['neural-networks', 'comparison', 'brain']"," Title: How are Artificial Neural Networks and the Biological Neural Networks similar and different?Body: I've heard multiple times that ""Neural Networks are the best approximation we have to model the human brain"", and I think it is commonly known that Neural Networks are modelled after our brain.
+
+I strongly suspect that this model has been simplified, but how much?
+
+How much does, say, the vanilla NN differ from what we know about the human brain? Do we even know?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'pytorch']"," Title: Why isn't my model learning satisfactorily?Body: The problem to solve is non-linear regression of a non-linear function. My actual problem is to model the function ""find the max over many quadratic forms"": max(w.H.T * Q * w)
, but to get started and to learn more about neural networks, I created a toy example for a non-linear regression task, using Pytorch.
+The problem is that the network never learns the function in a satisfactory way, even though my model is quite large with multiple layers (see below). Or is it not large enough or too large? How can the network be improved or maybe even simplified to get a much smaller training error?
+
+I experimented with different network architectures, but the result is never satisfactory. Usually, the error is quite small within the input interval around 0, but the network is not able to get good weights for the regions at the boundary of the interval (see plots below). The loss does not improve after a certain number of epochs. I could generate even more training data, but I have not yet understood completely, how the training can be improved (tuning parameters such as batch size, amount of data, number of layers, normalizing input (output?) data, number of neurons, epochs, etc.)
+
+My neural network has 8 layers with the following numbers of neurons: 1, 80, 70, 60, 40, 40, 20, 1.
+
+For the moment, I do not care too much about overfitting, my goal is to understand, why a certain network architecture/certain hyperparameters need to be chosen. Of course, avoiding overfitting at the same time would be a bonus.
+
+I am especially interested in using neural networks for regression tasks or as function approximators. In principle, my problem should be able to be approximated to arbitrary accuracy by a single layer neural network, according to the universal approximation theorem, isn’t this correct?
+
+"
+"['deep-learning', 'deep-neural-networks', 'computer-vision', 'unsupervised-learning', 'datasets']"," Title: Has anybody tried unsupervised deep learning from youtube videos?Body: YouTube has a huge amount of videos, many of which also containing various spoken languages. This would presumably provide something like the data that a ""challenged"" baby would experience - ""challenged"" meaning a baby without arms or legs (unfortunately many people are born that way).
+
+Would this not allow unsupervised learning in a deep learning system that has both vision and audio capabilities? The neural network would presumably learn correlations between words and images, and could perhaps even learn rudimentary language skills, all without human supervision. I believe that the individual components to do this already exist.
+
+Has this been tried, and if not, why?
+"
+"['convolutional-neural-networks', 'keras']"," Title: Questions regarding keras activation maximization visualizationBody: I wanted to use the visualization of the activation maximization of the filters that is described in the following keras tutorial/blog:
+
+https://blog.keras.io/how-convolutional-neural-networks-see-the-world.html
+
+I'd like to know what is the intention behind the decision that filters that produce a loss <= 0 are skipped. I know for 0 that would be reasonable since their would be no gradient flowing then (I think) but what about negative values? And is it also reasonable to use the mean of the outputs of the filters as a loss? What if there are weights of a filter that have high negative and positive numbers. Would that be a problem?
+"
+['machine-learning']," Title: How to factor time into decision trees?Body: Are decision trees able to be used with time-related data?
+
+I've read that decision trees are based on matrices and that ARRAYS of input matrices can be used to factor in time however I can't find an example of this.
+
+Say for example, I'm monitoring the progress of students taking exams. Each day I ask them questions related to their mental state (fatigued, positivity, ability to concentrate, expectations for coming exam, confidence, etc). I have twenty days worth of questions. Day 1 for student A may see them studying for an exam the following day, while Day 1 for student B may see them actually doing the exam. There will be a relation between student's fatigue (for example) and the results they give the following day.
+
+The examples when provided as input to a matrix will be used to show that IF on any given day, the student has an exam, and has breakfast, and does x,y,z THAT day then the outcome will be y.
+
+However, short of encoding ""had exam previous day"" and ""had exam two days ago"" for each day, I can't see how I can include time dependency in decision trees.
+"
+"['reinforcement-learning', 'terminology']"," Title: What is the difference between an observation and a state in reinforcement learning?Body: I'm studying reinforcement learning. It seems that ""state"" and ""observation"" mean exactly the same thing. They both capture the current state of the game.
+
+Is there a difference between the two terms? Is the observation maybe the state after the action has been taken?
+"
+"['convolutional-neural-networks', 'reference-request', 'model-request', '3d-convolution']"," Title: Which neural network architectures are there that perform 3D convolutions?Body: I am trying to do 3d image deconvolution using convolution neural network. But I cannot find many famous CNNs that perform a 3d convolution. Can anyone point out some for me?
+Background: I am using PyTorch, but any language is OK. What I want to know most is the network structure. I can't find papers on this topic.
+Links to research papers would be especially appreciated.
+"
+"['machine-learning', 'reinforcement-learning', 'computer-vision', 'hardware-evaluation']"," Title: What are the minimum computing resources needed to train a machine learning algorithm?Body: For a school project, I would like to investigate a paper on either reinforcement learning or computer vision. I am particularly interested in DQN, RNNs, CNNs or LSTMs. I would eventually like to implement any of these. However, I also need to take into account the computing resources required to train and analyse any of these algorithms. I understand that, in computer vision, the data sets can be quite large, but I am not so sure regarding the resources needed to implement and train a typical state-of-the-art RL algorithm (like DQN).
+
+Would a ""standard PC"" be able to run any of these algorithms decently to achieve some sort of analysis/results?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'long-short-term-memory', 'text-summarization']"," Title: Use Machine/Deep Learning to Guess a StringBody: I want to be able to input a block of text and then have it guess a string within a predefined range (i.e. a string that starts with three letters and ends with five numbers like ""XXX12345"", etc). Ideally, the string it will be guessing will be somewhere in the block of text, but sometimes it won't be.
+
+I have been struggling where to begin on this or if I am even going in the right direction for considering Machine/Deep learning to try to do this.
+
+Help!
+"
+"['neural-networks', 'training', 'datasets']"," Title: Shifting training dataBody: I want to create a neural network and train it on some data, however I want to be able to create a new model without retraining it from the start.
+
+An example, I have 1000 data points in my training data
+
+
+- model - trained on 0-99
+- model - trained on 1-100
+- model - trained on 2-101
+- and so forth
+
+
+So I'm wondering if I can use the first model to train the second model, essentially forgetting the first data point.
+
+You can view it as a sliding window over the 1000 data points, sliding one data point to the right for each new model.
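+To make the idea concrete, a rough sketch of the warm-start scheme I have in mind (scikit-learn's partial_fit is just one assumed way to keep the previous weights; note that this only continues training on the new window and does not explicitly forget the dropped point, which is exactly the part I am unsure about):
+
+import numpy as np
+from sklearn.neural_network import MLPRegressor
+
+X, y = np.random.rand(1000, 4), np.random.rand(1000)  # stand-in for the 1000 data points
+model = MLPRegressor(hidden_layer_sizes=(32,))
+window = 100
+for start in range(len(X) - window + 1):
+    # partial_fit keeps the current weights and continues training on the shifted window
+    model.partial_fit(X[start:start + window], y[start:start + window])
+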
+
+Does it make sense?
+Is there any easy way to solve this problem?
+"
+"['reference-request', 'norvig-russell', 'books']"," Title: What are some alternatives to the book ""Artificial Intelligence: A Modern Approach""?Body: There are two textbooks that I most love and am most afraid of in the world: Introduction to Algorithms by Cormen et al. and Artificial Intelligence: A Modern Approach by Norvig et al. I have started the ""AI: A Modern Approach"" more than once, but the book is so dense and full of theory that I get discouraged after a couple of weeks and stop.
+
+I am looking for a similar AI book but with an equal emphasis on theory and practice. Some examples of what I am looking for:
+
+
+- The Elements of Statistical Learning by Tibshirani et al. (detailed theory)
+- An Introduction to Statistical Learning: With Applications in R by
+Tibshirani et al. (theory+practical)
+- Digital Image Processing by Gonzalez et al. (detailed theory)
+- Digital Image Processing Using MATLAB by Gonzalez et al.
+(theory+practical)
+
+"
+"['deep-learning', 'convolutional-neural-networks', 'convolution', 'stride']"," Title: Is my understanding of how the convolution with stride 2 works in this example correct?Body: I'm currently reading this explanation of convolutional neural networks and there's a part around strides that I don't quite understand. I'm just starting with this, so I apologize if this is a really basic question. But I'm trying to develop an understanding and some of these images have thrown me off.
+Specifically, in this image
+
+The stride has been increased to 2 and it's using a 3x3 filter (represented by the red, green and blue outline squares in the first picture)
+Why is the blue square below the red one and not shifted to the side of the green one ending at the edge of the 7x7 volume? Should it not move left to right then down 2 squares when it reaches the next line?
+I'm not sure if the author is just trying to show the stride moving down as it goes, but I think my confusion stems from the fact that the 1 stride image example is only moving in the horizontal direction (as seen below).
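+If it helps pin down my arithmetic: the output width I expect is floor((W - F) / S) + 1 = floor((7 - 3) / 2) + 1 = 3, so I would expect the filter to visit three horizontal positions per row before stepping down by 2.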
+
+Is there something fundamental I haven't grasped here?
+"
+"['neural-networks', 'multilayer-perceptrons']"," Title: Sigmoid output layer and Cross-Entropy cost functionBody: I use Sigmoid activation function for neurons at output layer of my Multi-Layer Perceptron also, I use cross-entropy cost function. As I know when activation functions like Tanh is used in output layer it's necessary to divide outputs of output layer neurons by sum of them like what is done for softmax, is such thing necessary for sigmoid activation function?
+If it is necessary to normalize the outputs of the neurons, does it affect the derivations?
+"
+"['applications', 'evolutionary-algorithms', 'robotics', 'genetic-programming', 'path-planning']"," Title: How can genetic programming be used for path planning?Body: I have been reading quite a few papers, on genetic programming and its applications, in particular, chapter 10 of "Genetic Programming: An Introduction and Tutorial, with a Survey of Techniques and Applications" (Langdon, Poli, McPhee, Koza; 2008) Unfortunately, I can not wrap my head around how one could apply genetic programming to robotics, for example, in path planning.
+Can anyone explain this in the most simple manner? I know that it all depends on the fitness function.
+"
+"['terminology', 'search', 'norvig-russell', 'state-spaces']"," Title: Which ""assumptions"" made about the state space are Russell and Norvig referring to in their book?Body: I am reading the cornerstone book, "Artificial Intelligence, A Modern Approach" by Stuart Russel, and Peter Norvig, and there is a passage in the book on page 98:
+
+The complexity results depend very strongly on the assumptions made
+about the state space. The simplest model studied is a state space
+that has a single goal and is essentially a tree with reversible
+actions. (The 8-puzzle satisfies the first and third of these
+assumptions.)
+
+What are the "assumptions" in that context?
+"
+"['recurrent-neural-networks', 'long-short-term-memory']"," Title: Combine two embeddding inputs to increase more performance in LSTM modelBody: The situation I encountered here is that I have two inputs(for instance, image embedding, etc.) into the first lstm of a series of lstms to predict the next word to generate sentence(from the second lstm, it started to predict the next word from the current input word). The length of each of the two inputs is 512. Merely for the first input, it increases the measurement, say, for instance, perplexity, by about 3 from no this input at all. Merely for the second input, it increases the measurement, say, for instance, perplexity, by about 1 from no input at all. The problem is: Is it possible to combine these two inputs into a model that can produce a result of increasement more than 3 or the larger amount of increasement of the former two inputs models? If it is, how to build a model or what model should I build to combine them to do so?
+"
+"['computational-learning-theory', 'multilayer-perceptrons', 'artificial-neuron', 'hidden-layers', 'capacity']"," Title: What is the minimum number of neurons and hidden layers needed to learn a Boolean function that maps $N$ bits to $1$ bit?Body: Suppose I have a Boolean function that maps $N$ bits to $1$ bit. If I understand correctly, this function will have $2^{2^N}$ possible configurations of its truth table.
+What is the minimum number of neurons and hidden layers I would need in order to have a multi-layer perceptron capable of learning all possible truth tables?
+I found one set of lecture notes here that suggests that "checkerboard" shaped truth tables (which is something like an N-input XOR function) are hard to learn, but do they represent the worst-case scenario? The notes suggest that such tables require $3\cdot(N-1)$ neurons arranged in $2 \cdot \log_2(N)$ hidden layers.
+"
+['neural-networks']," Title: Multiple centroid drawBody: I'm writing a neural network based on the neural gas algorithm (a university assignment), and I remember the lecturer said that, when you generate random neuron weights at the beginning, it's worth generating them multiple times and choosing the best set of them.
+
+The problem: I don't know what the criterion is for choosing the best set of weights for the neurons.
+"
+"['search', 'proofs', 'heuristics', 'a-star', 'admissible-heuristic']"," Title: Why is A* optimal if the heuristic function is admissible?Body: A heuristic is admissible if it never overestimates the true cost to reach the goal node from $n$. If a heuristic is consistent, then the heuristic value of $n$ is never greater than the cost of its successor, $n'$, plus the successor's heuristic value.
+
+Why is A*, using tree or graph searches, optimal, if it uses an admissible heuristic?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'generative-model']"," Title: Coding CGAN paper model in KerasBody: I was coding a CGAN model using Keras along with the paper (https://arxiv.org/pdf/1411.1784.pdf) and I wanted to try and match the models to exactly what the paper says. I knew the models presented in the paper would be primitive but just wanted to replicate those and see. For example the generator model in the paper mentions this:
+
+
+ In the generator net, a noise prior z with dimensionality 100 was
+ drawn from a uniform distribution within the unit hypercube. Both z
+ and y are mapped to hidden layers with Rectified Linear Unit (ReLu)
+ activation [4, 11], with layer sizes 200 and 1000 respectively, before
+ both being mapped to second, combined hidden ReLu layer of
+ dimensionality 1200. We then have a final sigmoid unit layer as our
+ output for generating the 784-dimensional MNIST samples.
+
+
+So for this I had the code like this:
+
+def build_generator(self):
+
+ model = Sequential()
+
+ model.add(Dense(200, input_dim=self.latent_dim))
+ model.add(Activation('relu'))
+ model.add(BatchNormalization(momentum=0.8))
+
+ model.add(Dense(1000))
+ model.add(Activation('relu'))
+ model.add(BatchNormalization(momentum=0.8))
+
+ model.add(Dense(1200, input_dim=self.latent_dim))
+ model.add(Activation('relu'))
+ model.add(BatchNormalization(momentum=0.8))
+
+ model.add(Dropout(0.5))
+
+ model.add(Dense(np.prod(self.img_shape), activation='sigmoid'))
+ model.add(Reshape(self.img_shape))
+
+
+ model.summary()
+
+ noise = Input(shape=(self.latent_dim,))
+ label = Input(shape=(1,), dtype='int32')
+ label_embedding = Flatten()(Embedding(self.num_classes, self.latent_dim)(label))
+
+ model_input = multiply([noise, label_embedding])
+
+ img = model(model_input)
+
+ return Model([noise, label], img)
+
+
+But still I think this not exactly what the paper means. What I understand from the paper is the noise and labels are first fed into two different layers and then combined to one layer.
+
+Does this mean that there should be three separate models inside the generator? Or am I mistaken thinking that? Would like to hear any thoughts on this.
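+To illustrate what I mean, a rough sketch using the functional API (the layer sizes are from the paper; everything else, such as the embedding, is my own assumption):
+
+from keras.layers import Input, Dense, Embedding, Flatten, concatenate
+from keras.models import Model
+
+noise = Input(shape=(100,))
+label = Input(shape=(1,), dtype='int32')
+
+# separate hidden layers for z and y, as described in the paper
+h_noise = Dense(200, activation='relu')(noise)
+h_label = Dense(1000, activation='relu')(Flatten()(Embedding(10, 100)(label)))
+
+combined = concatenate([h_noise, h_label])
+h = Dense(1200, activation='relu')(combined)   # combined hidden ReLU layer of size 1200
+out = Dense(784, activation='sigmoid')(h)      # 784-dimensional MNIST sample
+
+generator = Model([noise, label], out)
+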
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'recurrent-neural-networks']"," Title: How to manage high numbers of input layer data pointsBody: Not sure if this is the correct forum, but I have been working with a large (non-image) dataset that will eventually be used to train a neural network. I have been puzzling over how to manage wide data sets. For this application ""wide"" is maybe 10,000 or 20,000 points wide. It is not really possible to store this as a row in a conventional RDMS (which are usually limited to several hundred columns). Is it better to use a huge CSV file or maybe a no-sql technology like Cassandra (the data is originally in JSON format)?
+"
+"['neural-networks', 'prediction', 'comparison', 'feedforward-neural-networks', 'liquid-state-machine']"," Title: What is the difference between a feed-forward neural network and a liquid state machine?Body: I have used a FFNN and LSM to perform the same task, namely, to predict the sentence ""How are you"". The LSM gave me more accurate results than FFNN. However, the LSM did not produce perfect prediction and there are missing letters. More specifically, LSM produced ""Hw are yo"" and the FFNN predicted ""Hnw brf ypu"".
+
+What is the difference between a FFNN and a LSM, in terms of architecture and purpose?
+"
+"['convolutional-neural-networks', 'deep-neural-networks']"," Title: How can neural networks that extract many features be fooled by adversarial images?Body: I have been reading a bit about networks where deep layers able to deal with a bunch of features (be it edges, colours, whatever).
+
+I am wondering: how can a network based on these 'specialised' layers possibly be fooled by adversarial images? Wouldn't the presence of specialised feature detectors be a barrier to this? (as in: this image of a gun does share one feature with 'turtles', but it lacks 9 others, so: no, it ain't a turtle).
+thanks!
+"
+"['search', 'minimax']"," Title: If we model the game ""2048"" using a max-min game tree, what is the maximal path from a start state to a terminal state?Body: If we model the game '2048' using a max-min game tree, what is the maximal path from a start state to a terminal state? (Assume the game ends only when the board is full.)
+This is one of the sub-questions that should prepare us to actually modeling the game as a max-min game tree. However, I'm failing to understand the question.
+Is it actually the path to receiving 131072 as an endgame?
+"
+"['philosophy', 'definitions']"," Title: Is a mathematical formula a form of intelligence?Body: Warning: This question takes us into VALIS territory, but I wouldn't underestimate the profundity of that particular philosopher.
+
+There is a non-AI definition of intelligence which is simply ""information"" (see definition 2.3). If that information is active, in terms of utilization, I have to wonder if it might qualify as a form of algorithmic intelligence, regardless of the qualities of the information.
+
+What I'm getting at is that fields such as recreational mathematics often produce techniques and solutions that don't have immediate, real world applications. But there's an adage that pure math tends to find uses.
+
+So you might have algorithms applied to a problems that outside of the problems from which it originated, or that couldn't initially be implemented in a computing context. (Minimax in 1928 might be an example.)
+
+Goal orientation has been widely understood as a fundamental aspect of AI, but in the case of an algorithm designed for one problem, that it subsequently applied to a different problem, the goal may simply be a function of the algorithm's structure. (To understand the goal of minimax precisely, you read the algorithm.)
+
+If you regard this form of information as intelligence, then intelligence can be general, regardless of strength in relation to a given problem.
+
+
+- Can we consider this form of codification of information to be algorithmic intelligence?
+
+
+And, just for fun, if a string that encodes a cutting-edge AI is not being processed, does it still qualify as artificial intelligence?
+"
+"['convolutional-neural-networks', 'computer-vision', 'object-detection', 'yolo']"," Title: Can YOLO detect large objects?Body: I have a rather basic question about YOLO for bounding box detection.
+My understanding is that it effectively associates each anchor box to an 8-dimension output.
+During testing, does YOLO take each anchor box and classify on it alone? What happens if the object is big and spans over several anchor boxes (e.g., covering 70% of the image)? How can YOLO classify and detect objects spanning over many anchor boxes?
+"
+['deep-learning']," Title: Possible to use codebase snapshots as input in deep learning?Body: I'm trying to predict grades within a course at my university. At the moment I am manually extracting features, but I'm curious if it's possible to somehow use my entire dataset with a deep learning approach?
+
+I have data from throughout the course of students solving mandatory exercises. All students use a plug-in within the editor that takes a snapshot of the codebase each time the student saves the project (exercise). I also have data from when the students run the debugger. All exercises include tests which determine what score the student will receive on a given exercise. The students are free to execute the tests as many times as they like while solving the exercise (the final score is given when the student presents the final result to a teaching assistant). Execution and results of these tests are also included in the data. Timestamps exist for all data. I also have the final grade of each student (which is determined 100% by the final exam).
+
+Does anyone know of an approach to use this kind of data with a deep learning approach?
+"
+"['machine-learning', 'training', 'metric', 'testing', 'validation']"," Title: Which evaluation metrics should be used in training, validation and testing of a model?Body: Which specific performance evaluation metrics are used in training, validation, and testing, and why? I am thinking error metrics (RMSE, MAE, MSE) are used in validation, and testing should use a wide variety of metrics? I don't think performance is evaluated in training, but not 100% sure.
+Specifically, I am actually after deciding when to use (i.e. in training, validation, or testing) correlation coefficient, RMSE, MAE and others for numeric data (e.g. Willmott's Index of Agreement, Nash-Sutcliffe coefficient, etc.)
+Sorry about this being broad - I have actually been asked to define it generally (i.e. not for a specific dataset). But the datasets I have been using all have numeric continuous values in supervised learning situations.
+Generally, I am using performance evaluation for environmental data where I am using ANNs. I have continuous features and am predicting a continuous variable.
+"
+"['philosophy', 'agi', 'cognitive-science']"," Title: Why are we asking ""How can we simulate the brain?""Body: As I've thought about AI, and what I understand of the problems that we face in the creation of it, I've noticed a recurring pattern: we always seem to be asking ourselves, "how can we better simulate the brain?"
+Why are we so fascinated with simulating it? Isn't our goal to create intelligence, not create intelligence in a specific medium? Isn't growing and sustaining living brains more in line with our goals, albeit a bit of an ethical controversy?
+Why is this exchange's description: "For people interested in conceptual questions about life and challenges in a world where 'cognitive' functions can be mimicked in a purely digital environment?"
+To condense these feelings in a more concise question: Why are we trying to create AI in a computer?
+"
+"['neural-networks', 'network-design']"," Title: How would you design a neural network that generates the positions of comparators in a sorting network given a set of numbers?Body: How would you design a neural network that generates the positions of comparators in a sorting network given a set of numbers?
+I've tried to modify some already-implemented networks that, given a set of numbers, sort them. My goal is, given an unsorted sequence of numbers, to generate a sorting network that will sort those numbers. I am not asking for the complete solution, just a starting point.
+"
+"['machine-learning', 'convolutional-neural-networks', 'video-classification']"," Title: How can I determine whether a car in a video is moving or not?Body: How can I classify a given sequence of images (video) as either moving or staying still from the perspective of the person inside the car?
+Below is an example of the sequence of 12 images animated.
+
+- Moving, from the point of view of the person inside the car.
+
+
+
+- Staying still, from the point of view of the person inside the car.
+
+
+Methods I tried to achieve this:
+
+- A simple CNN (with 2D convolutions) with those 12 images (greyscaled) stacked in the channels dimension (like DeepMind's DQN). The input to the CNN is (batch_size, 200, 200, 12).
+
+- A CNN with 3D convolutions. The input to the CNN is (batch_size, 12, 200, 200, 1).
+
+- A CNN+LSTM (time-distributed with 2D convolutions). The input to the neural network is (batch_size, 12, 200, 200, 1).
+
+- The late fusion method, which takes 2 frames from the sequence that are some time steps apart, passes them into 2 CNNs (with the same weights) separately, and concatenates them in a dense layer, as mentioned in this paper. This is also like CNN+LSTM without the LSTM part. The input to this net is (batch_size, 2, 200, 200, 1) -> the 2 images are the first and last frames in the sequence.
+
+
+All the methods I tried failed to achieve my objective. I tried tuning various hyperparameters, like the learning rate, the number of filters in CNN layers, etc., but nothing worked.
+All the methods had a batch_size of 8 (due to memory constraints), and all images were greyscaled. I used ReLUs for activations and softmax in the last layer. No pooling layer was used.
+Any help on why my methods are failing, or any pointers to related work, would be appreciated.
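+For illustration, here is a rough sketch of method 1 (the 12 greyscale frames stacked in the channel dimension); the filter counts and dense size here are placeholders, not my exact model:
+
+from tensorflow.keras import layers, models
+
+# Rough sketch of method 1: 12 greyscale frames stacked as channels, plain 2D CNN,
+# no pooling (as in my experiments); layer sizes are illustrative assumptions.
+model = models.Sequential([
+    layers.Conv2D(16, (3, 3), activation='relu', input_shape=(200, 200, 12)),
+    layers.Conv2D(32, (3, 3), activation='relu'),
+    layers.Flatten(),
+    layers.Dense(64, activation='relu'),
+    layers.Dense(2, activation='softmax'),  # moving vs. staying still
+])
+model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])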
+"
+"['neural-networks', 'chess']"," Title: How do you encode a chess move in a neural network?Body: In a neural network for chess (or checkers), the output is a piece or square on the board and an end position.
+
+How would one encode this?
+
+As far as I can see, choosing a starting square is 8x8 = 64 outputs and an ending square is 8x8 = 64 outputs. So the total number of possible moves is 64x64 = 4096 outputs, giving a probability for every possible move.
+
+Is this correct? This seems like an awful lot of outputs!
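+To make the encoding concrete, here is a small sketch of what I mean (the 0..63 square numbering is my own assumption):
+
+import numpy as np
+
+def move_to_index(from_square, to_square):   # squares numbered 0..63
+    return from_square * 64 + to_square
+
+def index_to_move(index):
+    return divmod(index, 64)                 # back to (from_square, to_square)
+
+logits = np.random.randn(4096)               # stand-in for the network's 4096 raw outputs
+probs = np.exp(logits) / np.exp(logits).sum()    # softmax: one probability per possible move
+best_from, best_to = index_to_move(int(np.argmax(probs)))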
+"
+"['machine-learning', 'learning-algorithms']"," Title: Which is best: evaluation of states or probability of moves?Body: If you have a game and you are training an AI there seems to be two ways to do it.
+
+First you take the game-state and a possible move and evaluate whether this move would be good or bad:
+
+(1) GAME_STATE + POSSIBLE_MOVE --> Good or bad?
+
+The second is to take the game state and get probabilities of every conceivable move:
+
+(2) GAME_STATE ---> Probabilities for each move
+
+It seems that both models are used. E.g., in language modelling an RNN might use (2) to find the probabilities for each next word or letter, while AlphaZero might use (1). Note also that in a game like chess GAME_STATE + POSSIBLE_MOVE = NEW_GAME_STATE, whereas in some games you might not know the result of your move.
+
+Which do you think is the best method? Which is the best way to do AI? Or some combination of the two?
+"
+"['neural-networks', 'image-recognition', 'prediction']"," Title: Best way to predict future frame of movie or game?Body: Using a neural network the method seems to be that you end up with a probability for each possible outcome.
+
+To predict the next frame in a monochrome movie of size 400x400 with 8 shades of gray, there seem to be 8^(160000) possibilities.
+
+On the other hand, if you just predicted the probability for each pixel individually, you would end up with an image which gets progressively blurrier.
+
+Perhaps what you want is to generate a few possibilities that are nonetheless quite sharp, in a similar way to weather prediction(?)
+
+So how would you go about designing a neural network that reads a movie and tries to predict the next frame?
+"
+"['ai-design', 'recurrent-neural-networks']"," Title: Whats advantages does a Loop Network have over a Feed Forward Network?Body: I am interested to see what advantages a Loop Network (Feed Forward Network that takes its output as input, I think it's called an RNN, not sure). The only result I found was that it was extremely sensitive to context, but only the context behind it. Other than that, I did not notice any changes.
+
+I figured this would be better for a language processing unit, or one used to make inferences based upon it.
+
+What are the shortcomings of each? Advantages?
+"
+"['neural-networks', 'machine-learning', 'artificial-neuron']"," Title: Back-of-the-envelope machine learning (specifically neural networks) calculationsBody: There is a popular story regarding the back-of-the-envelope calculation performed by a British physicist named G. I. Taylor. He used dimensional analysis to estimate the power released by the explosion of a nuclear bomb, simply by analyzing a picture that was released in a magazine at the time.
+
+I believe many of you know some nice back-of-the-envelope calculations performed in machine learning (more specifically neural networks). Can you please share them?
+"
+"['reference-request', 'applications', 'autoencoders', 'genetic-programming']"," Title: How can genetic programming be used in the context of auto-encoders?Body: I am trying to understand how genetic programming can be used in the context of auto-encoders. Currently, I am going through 2 papers
+
+- Training Feedforward Neural Networks Using Genetic Algorithms (a classic one)
+
+- Training Deep Autoencoder via VLC-Genetic Algorithm
+
+
+However, these papers don't really help me to grasp the concept of genetic programming in this specific context, maybe because I'm not very familiar with GP.
+I understand that autoencoders are supposed to reconstruct the instances of the particular classes they have been trained on. If a new instance that is fed in is not reconstructed as expected, then it could be called an anomaly.
+But how can genetic programming be used in the context of auto-encoders? You are still required to create a neural network, but, instead of a feed-forward one, you use an autoencoder; how exactly does that work?
+I would appreciate any tutorials or explanations.
+"
+"['recurrent-neural-networks', 'papers', 'echo-state-network']"," Title: What is the significance of this Stanford University ""Financial Market Time Series Prediction with RNN's"" paper?Body: Researchers at Stanford University released, in 2012, the paper Financial Market Time Series Prediction with Recurrent Neural Networks.
+
+It goes on to discuss how they used echo state networks to predict things such as Google's stock prices. However, to do this, once trained, the network's input is a day's stock price, and the output is that same day's predicted stock price. The paper is worded as if this could be used to predict future stock prices, for example. However, to predict tomorrow's stock price, you need to give the neural network tomorrow's stock price.
+
+All this paper seems to show is that the neural network is converging on a solution where it simply modifies its inputs a minimal amount, hence the output of the ESN is just a small alteration of its input.
+
+Here are some Python implementations of the work shown in this paper:
+
+
+
+In particular, I was playing with the latter which produces the following graph:
+
+
+
+If I take the same trained network, and alter the 7th day's ""real"" stock price to something extreme like $0, this is what comes out:
+
+
+
+As you can see, it basically regurgitates its inputs.
+
+So, what is the significance of this paper?
+
+It has no use in any financial predictions, like the network shown in the paper Classification-based Financial Markets Prediction using Deep Neural Networks.
+"
+"['reinforcement-learning', 'implementation', 'value-iteration', 'c++']"," Title: Value iteration algorithm from pseudo-code to C++Body: I am having a difficult time translating this pseudocode into functional C++ code.
+
+
+- At line 10: The value function is represented as V[s], which uses array-like bracket notation. Is this a separate method or just the value for a given state? Why is s inside the brackets? Is this supposed to be an array with as many elements as there are states?
+- At line 12: Vk would be the element in index k inside of array V?
+- At line 16: I'm interpreting this as the start of a do-while loop that ends at line 20.
+- Line 19: I'm finding the action that maximizes the sum, for all states, of the equation following the sigma?
+- Line 20: I'm interpreting this as the end of the do-while. But what is this condition? Am I checking whether there is an s such that this condition applies? Would I have to loop over all states and stop if any state satisfies the condition? (Basically a loop with a break, instead of a while; see the sketch below.)
+
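+For reference, here is how I currently read the pseudocode, written as a small Python sketch (the P/R data structures are my own assumptions); my questions above are about whether this reading is correct before I translate it to C++:
+
+import numpy as np
+
+def value_iteration(num_states, num_actions, P, R, gamma=0.9, theta=1e-6):
+    # P[s][a] is a list of (probability, next_state) pairs; R[s][a] is the immediate reward.
+    V = np.zeros(num_states)                   # the V[s] array from line 10: one entry per state
+    while True:                                # the do-while from lines 16-20
+        delta = 0.0
+        for s in range(num_states):
+            best = max(
+                R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
+                for a in range(num_actions)    # line 19: max over actions of the expected value
+            )
+            delta = max(delta, abs(best - V[s]))
+            V[s] = best
+        if delta < theta:                      # line 20: stop once no state changed by much
+            return V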
+"
+"['machine-learning', 'ai-basics', 'datasets']"," Title: Can machine learning help me digest asymmetrical order descriptions?Body: I have order data, here's a sample:
+
+Ninety-six (96) covered pans, desinated mark cutlery.
+5 vovered pans by knife co.
+(SEE SCHEDULE A FOR NUMBERS). 757 SOUP PANS
+115 10-quart capacity pots.
+Thirteen (13), 30 mm thick covered pans.
+
+
+I have over 50k rows of data such as this. In a perfect world, the above would need to be tabulated as such:
+
+count, type
+96, covered pan
+5, covered pan
+757, soup pan
+115, pot
+13, covered pan
+
+
+Could machine learning be the correct approach for a problem such as this?
+"
+"['activation-functions', 'neuroscience', 'brain']"," Title: What activation function does the human brain use?Body: Does the human brain use a specific activation function?
+I've tried doing some research, and as it's a threshold for whether the signal is sent through a neuron or not, it sounds a lot like ReLU. However, I can't find a single article confirming this. Or is it more like a step function (it sends 1 if it's above the threshold, instead of the input value)?
+"
+"['neural-networks', 'reinforcement-learning', 'philosophy', 'game-ai']"," Title: Can a neural network work out the concept of distance?Body: Imagine a game where it is a black screen apart from a red pixel and a blue pixel. Given this game to a human, they will first see that pressing the arrow keys will move the red pixel. The next thing they will try is to move the red pixel onto the blue pixel.
+
+Given this game, an AI will randomly move the red pixel until, a million tries later, it accidentally moves onto the blue pixel to get a reward. If the AI had some concept of the distance between the red and blue pixels, it might try to minimize this distance.
+
+Without actually programming in the concept of distance, if we take the pixels of the game, can we calculate a number, such as ""entropy"", that would be lower when pixels are far apart than when close together? It should work with other configurations of pixels, such as a game with three pixels where one is good and one is bad, just to give the neural network more of a sense of how the screen looks. Then give the NN a goal, such as ""try to minimize the entropy of the board as well as try to get rewards"".
+
+Is there anything akin to this in current research?
+"
+"['neural-networks', 'deep-learning', 'papers', 'research', 'implementation']"," Title: What does ""reimplementations of deep learning algorithms which replicate performance from the papers"" mean?Body: In the OpenAI's Machine Learning Fellow position, it is written
+
+We look for candidates with one or more of the following credentials:
+
+- ...
+- Open-source reimplementations of deep learning algorithms which replicate performance from the papers
+
+
+What exactly do they mean by this? Do they want us to implement the algorithms exactly as described in the papers (i.e. with the same hyper-parameters, weights, etc.)?
+"
+['ai-design']," Title: What model to use for fully unbalanced data?Body: I am working on an anti-fraud project. In the project, we are trying to predict fraudulent users in the out-of-time data set. But fraudulent users make up a very low ratio, only 3%. We expect a model with a precision of more than 15%.
+
+I tried Logistic Regression, GBDT+LR, and XGBoost. None of the models are good enough. Stepwise Logistic Regression performs best, with a precision of 9% and a recall of 6%.
+
+Are there any other models that I can use for this problem, or any other advice?
+"
+"['neural-networks', 'deep-learning', 'image-recognition']"," Title: Neural network returns about the same output(mean) for every inputBody: I tried to build a neural network from scratch to build a cat or dog binary classifier using a sigmoid output unit. I seem to get the output value around 0.5(+/- 0.002) for every input. This seems really weird to me. Here's my code, Please let me know if there is a mistake in the implementation.
+
+import numpy as np
+import matplotlib.pyplot as plt
+
+def initialize_parameters_deep(layer_dims):
+ l=len(layer_dims)
+ parameters={}
+ for l in range(1,len(layer_dims)):
+ parameters['W'+str(l)]=np.random.randn(layer_dims[l],layer_dims[l-1])*0.01
+ parameters['b'+str(l)]=np.zeros((layer_dims[l],1))
+ return parameters
+
+def linear_forward(A,W,b):
+ Z=np.dot(W,A)+b
+ cache=(A,W,b)
+ return Z,cache
+
+
+def sigmoid(Z):
+ A = 1/(1+np.exp(-Z))
+ cache=Z
+ return A, cache
+
+
+def relu(Z):
+ A = np.maximum(0,Z)
+
+ assert(A.shape == Z.shape)
+
+ cache = Z
+ return A, cache
+
+def relu_backward(dA, cache):
+ Z = cache
+ dZ = np.array(dA, copy=True) # just converting dz to a correct object.
+
+ # When z <= 0, you should set dz to 0 as well.
+ dZ[Z <= 0] = 0
+
+ assert (dZ.shape == Z.shape)
+
+ return dZ
+
+def sigmoid_backward(dA, cache):
+ Z = cache
+
+ s = 1/(1+np.exp(-Z))
+ dZ = dA * s * (1-s)
+
+ assert (dZ.shape == Z.shape)
+
+ return dZ
+
+
+def linear_activation_forward(A_prev,W,b,activation):
+ if(activation=='sigmoid'):
+ Z,linear_cache=linear_forward(A_prev,W,b)
+ A,activation_cache=sigmoid(Z)
+ elif activation=='relu':
+ Z,linear_cache=linear_forward(A_prev,W,b)
+ A,activation_cache=relu(Z)
+ cache=(linear_cache,activation_cache)
+ return A,cache
+
+def L_model_forward(X,parameters):
+ A=X
+ L=len(parameters)//2
+ caches=[]
+ for l in range(1,L):
+ A,cache=linear_activation_forward(A,parameters['W'+str(l)],parameters['b'+str(l)],'relu')
+ caches.append(cache)
+ AL,cache=linear_activation_forward(A,parameters['W'+str(L)],parameters['b'+str(L)],'sigmoid')
+ caches.append(cache)
+ return AL,caches
+
+def compute_cost(AL,Y):
+ m=Y.shape[1]
+ cost=-1/m*np.sum(np.multiply(np.log(AL),Y)+np.multiply(np.log(1-AL),1-Y))
+ return cost
+
+def linear_backward(dZ,cache):
+ A_prev,W,b=cache
+ m=A_prev.shape[1]
+ dW = np.dot(dZ,A_prev.T)/m
+ db = np.sum(dZ,axis=1,keepdims=True)/m
+ dA_prev = np.dot(W.T,dZ)
+ return dA_prev,dW,db
+
+def linear_activation_backward(activation,dA_prev,cache):
+ linear_cache,activation_cache=cache
+ if activation=='sigmoid':
+
+ dZ=sigmoid_backward(dA_prev,activation_cache)
+ dA_prev,dW,db=linear_backward(dZ,linear_cache)
+ if activation=='relu':
+ dZ=relu_backward(dA_prev,activation_cache)
+ dA_prev,dW,db=linear_backward(dZ,linear_cache)
+ return dA_prev,dW,db
+
+def L_model_backward(AL,Y,caches):
+ L=len(caches)
+ m = AL.shape[1]
+ Y = Y.reshape(AL.shape)
+ dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))
+
+ grads={}
+ current_cache=caches[-1]
+ grads['dA'+str(L-1)],grads['dW'+str(L)],grads['db'+str(L)]=linear_activation_backward('sigmoid',dAL,current_cache)
+
+ for l in reversed(range(L-1)):
+ current_cache=caches[l]
+ dA_prev_temp, dW_temp, db_temp = linear_activation_backward('relu',grads['dA'+str(l+1)],current_cache)
+ grads[""dA"" + str(l)] = dA_prev_temp
+ grads[""dW"" + str(l + 1)] = dW_temp
+ grads[""db"" + str(l + 1)] = db_temp
+ return grads
+def Grad_Desc(parameters,grads,learning_rate):
+ L=len(parameters)//2
+ for l in range(L):
+ parameters['W'+str(l+1)]=parameters['W'+str(l+1)]-learning_rate*grads['dW'+str(l+1)]
+ parameters['b'+str(l+1)]=parameters['b'+str(l+1)]-learning_rate*grads['db'+str(l+1)]
+ return parameters
+
+def L_layer_model(X,Y,learning_rate,num_iter,layer_dims):
+ parameters=initialize_parameters_deep(layer_dims)
+ costs=[]
+ for i in range(num_iter):
+ AL,caches=L_model_forward(X,parameters)
+ cost=compute_cost(AL,Y)
+ grads=L_model_backward(AL,Y,caches)
+ parameters=Grad_Desc(parameters,grads,learning_rate)
+ if i%100==0:
+ print(cost)
+ costs.append(cost)
+ plt.plot(np.squeeze(costs))
+def predict(X,parameters):
+ AL,caches=L_model_forward(X,parameters)
+ prediction=(AL>0.5)
+ return AL,prediction
+
+L_layer_model(x_train,y_train,0.0075,12000,[12288,20,7,5,1])
+prediction=predict(x_train,initialize_parameters_deep([12288,20,7,5,1]))
+
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'python']"," Title: How should I approach the game ""Achtung, Die Kurve"" (""Curve Fever"") using AI?Body: One of my friends built a version of ""Achtung, Die Kurve!"", or ""Curve Fever"" in Python. I was starting to study ML and decided to tackle the game from a learning perspective - write a bot that would crush him in the game. Did some research and found Deep Q learning. Decided to go with that and after a whole lot of throwing around different hyperparameters and layers, I decided I need some help on this. I am new to Deep and Machine Learning in general, so I may have missed things. I was kinda discouraged when I saw that Deep Q is SO impractical currently in the field.
+
+How would you tackle this problem? I need some guidance/help building it, if someone is up to the task.
+"
+"['ai-design', 'game-ai', 'logic']"," Title: Which edges of this tree will be pruned by Alpha-beta pruning?Body:
+
+So I know that 'h' and 'f' will be pruned, but I'm not sure about 'k' and 'l'.
+
+When we visit 'j', technically there is no need for us to visit 'k' and 'l' because there are 2 options:
+
+
+- one or two of them might be higher than 8 ('j')
+- both of them less than 8
+
+
+But no matter what, the decision of the max (root) will not change: max will choose the right side regardless of what 'k' and 'l' are, because the right side will either be 8 or 9, which is still higher than 4 (the value returned from the left side).
+
+So will alpha-beta prune 'k' and 'l' or not? If not, then it means alpha-beta is not ""optimal"" overall, right? Considering it will not prune all the unnecessary paths.
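+For reference, this is the generic cutoff rule I am reasoning from (a standard alpha-beta sketch, not specific to the tree in the figure; node.is_leaf(), node.value and node.children are assumed helpers):
+
+def alphabeta(node, alpha, beta, maximizing):
+    # Generic sketch of alpha-beta pruning with the usual cut-off tests.
+    if node.is_leaf():
+        return node.value
+    if maximizing:
+        v = float('-inf')
+        for child in node.children:
+            v = max(v, alphabeta(child, alpha, beta, False))
+            alpha = max(alpha, v)
+            if alpha >= beta:
+                break          # beta cut-off: the remaining children are pruned
+        return v
+    else:
+        v = float('inf')
+        for child in node.children:
+            v = min(v, alphabeta(child, alpha, beta, True))
+            beta = min(beta, v)
+            if alpha >= beta:
+                break          # alpha cut-off
+        return v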
+"
+['monte-carlo-tree-search']," Title: When to expand and when to simulate in Monte Carlo Tree Search?Body: In Monte Carlo Tree Search (MCTS), we start at root node $R$. Then we select some leaf node $L$. And we expand $L$ by one or more child nodes and simulate from the child to the end of the game.
+
+
+
+When should we expand and when should we simulate in MCTS? Why not expand 2 or 3 levels, and simulate from there, then back-propagate the values to the top? Should we just expand 1 level? Why not more?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'tensorflow', 'object-recognition']"," Title: Getting worse performance when training a pre-trained model with the existing classBody: I am training pre-trained SSD-InceptionV2-Coco to detect the ""car"",
+
+which is one of the classes in mscoco label.
+
+I train the model with ~50k sample from KITTI, 500k iteration with batch size 2.
+
+I followed this script to generate the tfrecord file.
+
+Then I tested both the original pre-trained model and my trained model on one video.
+
+The performance of my trained model is worse, with more missed detections.
+
+One thing I found recently is that the classification_loss/localization_loss increases when AvgNumGroundtruthBoxesPerImage increases.
+
+
+
+
+
+EDIT
+
+
+
+Another thing I found is that the more ground-truth boxes per image I have, the lower the average number of positive anchors per image.
+
+This bothers me because, if the number of anchors generated per image is fixed, more ground-truth boxes should provide more positive anchors per image.
+
+So I wonder where to find the root cause. Any suggestion is welcome.
+
+Thank you for your precious time on my question.
+"
+"['machine-learning', 'prediction', 'autonomous-vehicles', 'facial-recognition']"," Title: Will commercialisation and widespread use of A.I in security and surveillance and other household products threaten free will or endanger privacy?Body: Everything from facial recognition to the google home is coming equiped with A.I and it is being widely used , If autonomously connected to the internet , will A.I pose a threat to privacy or will it endanger free will if used for surveillance with facial recognition , like in the movie 'Minority Report'
+"
+"['machine-learning', 'reinforcement-learning', 'markov-decision-process']"," Title: How should I define an MDP for this problem where we need to guess a number and minimise the number of guesses?Body: A number has randomly been chosen from 1 to 3. On each step, we can make a guess and we will be told if our guess is equal, bigger or smaller than the chosen number. We're trying to find the number with the least number of guesses.
+
+I need to draw the MDP model for this question with 7 states, but I don't know how the states are supposed to be defined. Can anyone help?
+"
+"['reinforcement-learning', 'multi-armed-bandits', 'action-spaces', 'contextual-bandits']"," Title: How can I incorporate domain knowledge to choose actions in the case of large action spaces in multi-armed bandits?Body: Suppose one is using a multi-armed bandit, and one has relatively few "pulls" (i.e. timesteps) relative to the action set. For example, maybe there are 200 timesteps and 100 possible actions.
+However, you do have information on how similar actions are to each other. For example, I might want to rent a car, and know the model, year, and mileage of each car. (Specifically, I want to rent a car on a daily basis for each day in a 200 day period; on each day, I can either continue with the existing car or rent a new one. There are 100 possible cars.)
+How can I exploit this information to choose actions that maximize my payoff?
+"
+"['neural-networks', 'image-recognition', 'datasets', 'artificial-neuron']"," Title: Feature set out of grayscale Images for training a neural network?Body: Previously I had trained a Neural Network
upon 20,000 character images. This Neural Net
generally works well, it uses RGB- Hue, Saturation, Intensity
feature set for training. However, there can be certain character images which have RGB-HSI
values that this neural net
has not seen before. Therefore I am looking forward to converting training data to grayscale
and use some feature set well suited for grayscale images.
+
+So, are there any good suggestions for extracting a feature set out of grayscale images?
+"
+"['neural-networks', 'ai-design', 'optimization']"," Title: Input optimization on a supervised learning systemBody: Problem
+
+Given a collection of pairs (X, y) where X belongs to R^n and y belongs to R, find the X such that the associated y would be maximum.
+
+Example
+
+Given:
+
+
+- (X=(1, 2), y=-9)
+- (X=(-2, 4), y=-36)
+- (X=(-4, 2), y=-24)
+- ...
+
+
+The algorithm should be able to detect that the function being applied to X is y = -(X[0]^2 + 2*(X[1]^2)) and find the input that maximizes this function, in this case X = (0, 0), because y = -(0^2 + 2*0^2) = 0 and 0 is the maximum possible value, as all the other values are negative.
+
+How I've tried to solve it
+
+My first guess has been to create a neural network that predicts y given X, but, after that is done, I don't know how to go about optimizing the input.
+
+Questions
+
+Is there any algorithm that would help in this situation?
+
+Also, would some other supervised learning algorithm fit better here than a neural network?
+"
+['search']," Title: How many iterations are required for iterative-lengthening search when step costs are drawing from a continuos range [ϵ, 1]?Body: This is AI: A Modern Approach, 3.17c. The solution manual gives the answer as $\frac{d}{\epsilon}$, where $d$ is the depth of the shallowest goal node.
+
+Iterative lengthening search uses a path cost limit on each iteration, and updates that limit on the next iteration to the lowest cost of any rejected node.
+
+I have seen this question posted elsewhere as, ""What is the number of iterations with a continuous range $[0, 1]$ and a minimum step cost $\epsilon$?"" In that case, I agree that the minimum number of iterations is $\frac{d}{\epsilon}$ because you would need to increase the path cost limit by a minimum of $\epsilon$ with each iteration.
+
+However, with a continuous range of $[\epsilon, 1]$, it seems there is an infinite range and that the number of iterations is potentially infinite, since there is no minimum step cost. Should this solution actually be infinite?
+"
+"['natural-language-processing', 'word2vec', 'word-embedding']"," Title: Do individual dimensions in vector space have meaning?Body: Word2vec assigns an N-dimensional vector to given words (which can be considered a form of dimensionality reduction).
+
+It turns out that, at least with a number of canonical examples, vector arithmetic seems to work intuitively. For example ""king + woman - man = queen"".
+
+These terms are all N-dimensional vectors. Now, suppose, for simplicity, that $N=3$, $\text{king} = [0, 1, 2], \text{woman} = [1, 1, 0], \text{man} = [2, 2, 2], \text{queen} = [-1, 0, 0]$, then the expression above can be written as $[0, 1, 2] + [1, 1, 0] - [2, 2, 2] = [-1, 0, 0]$.
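+In code, the toy arithmetic is simply element-wise (these are made-up 3-dimensional vectors for illustration, not real word2vec output):
+
+import numpy as np
+
+king, woman, man, queen = map(np.array, ([0, 1, 2], [1, 1, 0], [2, 2, 2], [-1, 0, 0]))
+result = king + woman - man              # element-wise: array([-1, 0, 0])
+print(np.array_equal(result, queen))     # True, in this contrived example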
+
+In this (contrived) example, the last dimension (king/man=2, queen/woman=0) suggests a semantic concept of gender. Aside from semantics, a given dimension could ""mean"" a part of speech, first letter, or really any feature or set of features that the algorithm might have latched onto. However, any perceived ""meaning"" of a single dimension might well just be a simple coincidence.
+
+If we picked out only a single dimension, does that dimension itself convey some predictable or determinable information? Or is this purely a ""random"" artefact of the algorithm, with only the full N-dimensional vector distances mattering?
+"
+"['neural-networks', 'python', 'gradient-descent', 'batch-normalization']"," Title: How to perform gradient checking in a neural network with batch normalization?Body: I have implemented a neural network (NN) using python and numpy only for learning purposes. I have already coded learning rate, momentum, and L1/L2 regularization and checked the implementation with gradient checking.
+
+A few days ago, I implemented batch normalization using the formulas provided by the original paper. However, in contrast with learning/momentum/regularization, the batch normalization procedure behaves differently during fit and predict phases - both needed for gradient checking. As we fit the network, batch normalization computes each batch mean and estimates the population's mean to be used when we want to predict something.
+
+In a similar way, I know we may not perform gradient checking in a neural network with dropout, since dropout turns some gradients to zero during fit and is not applied during prediction.
+
+Can we perform gradient checking in NN with batch normalization? If so, how?
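+For context, this is the kind of central-difference check I mean (a minimal sketch over a flattened parameter vector; the epsilon value is just a typical choice):
+
+import numpy as np
+
+def numeric_gradient(loss_fn, params, eps=1e-5):
+    # params is a flat 1-D array of all weights; loss_fn(params) returns the scalar loss.
+    grad = np.zeros_like(params)
+    for i in range(params.size):
+        params[i] += eps
+        plus = loss_fn(params)
+        params[i] -= 2 * eps
+        minus = loss_fn(params)
+        params[i] += eps                      # restore the original value
+        grad[i] = (plus - minus) / (2 * eps)  # compared element-wise with the backprop gradient
+    return grad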
+"
+"['philosophy', 'agi', 'rationality']"," Title: Will artificial intelligence make the human more rational?Body: With so much innovation, with so much previous human manual labor being performed in minutes or seconds by an artificial intelligence, one day man will put the survival and propagation of his species above his ideologies and cultures.
+
+I am worried because we are living the fourth industrial revolution, and this will generate millions of unemployment, even if new jobs are created in the future. The problem is that a lot of humans worry about their own job, and not about their own children's future. This is completely retrograde.
+
+Will, one day, Artificial Intelligence be able to direct us towards an intelligent path as a propagation of the species, or else center the focus of humanity on something that it adds?
+"
+"['reinforcement-learning', 'gradient-descent', 'dqn', 'deep-rl', 'function-approximation']"," Title: Why use semi-gradient instead of full gradient in RL problems, when using function approximation?Body: Semi-gradient methods work well in reinforcement learning, but what is there a reason of not using the true gradient if it can be computed?
+
+I tried it on the cart pole problem with a deep Q-network and it performed much worse than traditional semi-gradient. Is there a concrete reason for this?
+"
+"['natural-language-processing', 'reference-request', 'ai-design', 'algorithm-request', 'intelligent-personal-assistants']"," Title: How to add contextual follow up like Google AssistantBody: I am developing PDA like Google assistant on Android. So far, so good.
+But now, I want to add contextual follow-up like Google Assistant's, so it can keep the train of thought.
+As demonstrated here- https://www.youtube.com/watch?v=xYRENGuwwCA
+Can anyone guide me or hint how to design the algorithm?
+"
+"['neural-networks', 'models']"," Title: Abstracting parameters of dynamic model from output time seriesBody: I am unable to identify general temrs or specific source of information for the below proposed problem. I would appreciate if the community can guide me to journal articles/books and keywords to look for in literature.
+
+Problem:
+
+There is a non-linear dynamic system taking an input and producing a 1D time series as output. I would like to use a NN to find the parameters of the dynamic system from the time series output. That is, mapping the features of the time series (after a transformation, likely a Fourier Transform or Wavelet) to the parameters governing the dynamics of the system.
+
+Research so far:
+
+I have found a few journal papers, mostly processing sounds of rolling bearings or heartbeats, but only for error/failure classification.
+
+
+- Rolling Bearing Fault Diagnosis Based on STFT-Deep Learning and SOund Signals
+- Deep Learning Enabled Fault Diagnosis Using Time-Frequency Image Analysis of Rolling Element Bearings
+- Deep Learning Based Approach for Bearing Fault Diagnosis
+- Detecting atrial fibrillation by deep convolutional neural networks
+
+
+(the above are classification problems, my problem is about parameter identification)
+
+Reason to address this on StackExchange:
+
+I think I am missing an overview of the topic (identification of dynamic systems using NNs), because I am not able to reach more profound information. Also, I think that a NN would be more beneficial to my current application than, let's say, optimization by evolutionary algorithms; therefore I am specifically asking about NNs.
+"
+"['neural-networks', 'training', 'backpropagation', 'xor-problem']"," Title: What is the best XOR neural network configuration out there in terms of low error?Body: I'm trying to understand what would be the best neural network for implementing an XOR gate. I'm considering a neural network to be good if it can produce all the expected outcomes with the lowest possible error.
+It looks like my initial choice of random weights has a big impact on my end result after training. The accuracy (i.e. error) of my neural net is varying a lot depending on my initial choice of random weights.
+I'm starting with a 2 x 2 x 1 neural net, with a bias in the input and hidden layers, using the sigmoid activation function, with a learning rate of 0.5. Below my initial setup, with weights chosen randomly:
+
+The initial performance is bad, as one would expect:
+Input | Output | Expected | Error
+(0,0) 0.8845 0 39.117%
+(1,1) 0.1134 0 0.643%
+(1,0) 0.7057 1 4.3306%
+(0,1) 0.1757 1 33.9735%
+
+Then I proceed to train my network through backpropagation, feeding the XOR training set 100,000 times. After training is complete, my new weights are:
+
+And the performance improved to:
+Input | Output | Expected | Error
+(0,0) 0.0103 0 0.0053%
+(1,1) 0.0151 0 0.0114%
+(1,0) 0.9838 1 0.0131%
+(0,1) 0.9899 1 0.0051%
+
+So my questions are:
+
+- Has anyone figured out the best weights for a XOR neural network with that configuration (i.e. 2 x 2 x 1 with bias) ?
+
+- Why does my initial choice of random weights make a big difference to my end result? I was lucky in the example above, but depending on my initial choice of random weights I get, after training, errors as big as 50%, which is very bad.
+
+- Am I doing anything wrong or making any wrong assumptions?
+
+
+
+So below is an example of weights I cannot train, for some unknown reason. I think I might be doing my backpropagation training incorrectly. I'm not using batches, and I'm updating my weights on each data point from my training set.
+Weights: ((-9.2782, -.4981, -9.4674, 4.4052, 2.8539, 3.395), (1.2108, -7.934, -2.7631))
+
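+For concreteness, this is the kind of per-sample update loop I am describing (a minimal sketch, not my exact code; the initialization range and epoch count are illustrative):
+
+import numpy as np
+
+def sigmoid(z):
+    return 1.0 / (1.0 + np.exp(-z))
+
+rng = np.random.default_rng(0)
+W1 = rng.uniform(-1, 1, (2, 3))   # hidden weights; the last column multiplies the input bias
+W2 = rng.uniform(-1, 1, (1, 3))   # output weights; the last column multiplies the hidden bias
+X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
+Y = np.array([0, 1, 1, 0], dtype=float)
+lr = 0.5
+
+for epoch in range(100000):
+    for x, y in zip(X, Y):                    # online (per-sample) updates, no batching
+        a0 = np.append(x, 1.0)                # input plus bias
+        h = sigmoid(W1 @ a0)
+        a1 = np.append(h, 1.0)                # hidden plus bias
+        out = sigmoid(W2 @ a1)[0]
+        d_out = (out - y) * out * (1 - out)   # squared-error gradient at the output
+        d_hid = (W2[0, :2] * d_out) * h * (1 - h)
+        W2 -= lr * d_out * a1
+        W1 -= lr * np.outer(d_hid, a0)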
+"
+"['neural-networks', 'python']"," Title: Neural network returns similar outputBody: I was following Daniel Shiffman's tutorials on how to write your own neural network from scratch. I specifically looked into his videos and the code he provided in here. I rewrote his code in Python, however, 3 out of 4 of my outputs are the same. The neural network has two input nodes, one hidden layer with two nodes and one output node. Can anyone help me to find my mistake? Here is my full code.
+
+import random
+import numpy as np
+
+# NeuralNetwork is my own class (a 2-2-1 network ported from the referenced tutorial code)
+nn = NeuralNetwork(2,2,1)
+inputs = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
+targets = np.array([[0], [1], [1], [0]])
+zipped = zip(inputs, targets)
+list_zipped = list(zipped)
+
+for _ in range(9000):
+ x, y = random.choice(list_zipped)
+ nn.train(x, y)
+
+output = [nn.feedforward(i) for i in inputs]
+
+for i in output:
+ print(""Output "", i)
+
+#Output [ 0.1229546] when it should be around 0
+#Output [ 0.6519492] ~1
+#Output [ 0.65180228] ~1
+#Output [ 0.66269853] ~0
+
+
+EDIT_1: I tried debugging my code by setting all weights and bias values to 0.5. I did this in both my code and Daniel's. This obviously ended up giving me the same value for all outputs.
+
+After that, I widened the range of the weight and bias values from [0, 1) to [-1, 1). By running this a few times, I would sometimes get the correct output:
+
+[ 0.93749991] # should be ~1
+[ 0.93314793] # ~1
+[ 0.07001175] # ~0
+[ 0.06576194] # ~0
+
+
+If I run nn.train() 100,000 times, I get the correct output 2 out of 3 times.
+Is this an issue of gradient descent converging to a local minimum?
+"
+"['neural-networks', 'computer-vision', 'decision-trees', 'explainable-ai', 'question-answering']"," Title: Why does nobody use decision trees for visual question answering?Body: I'm starting a project that will involve computer vision, visual question answering, and explainability. I am currently choosing what type of algorithm to use for my classifier - a neural network or a decision tree.
+It would seem to me that, because I want my system to include explainability, a decision tree would be the best choice. Decision trees are interpretable, whereas neural nets are like a black box.
+The other differences I'm aware of are: decision trees are faster, neural networks are more accurate, and neural networks are better at modelling nonlinearity.
+In all of the research I've done on computer vision and visual question answering, everyone uses neural networks, and no one seems to be using decision trees. Why? Is it for accuracy? I think a decision tree would be better because it is fast and interpretable, but if no one's using them for visual question answering, they must have a disadvantage that I haven't noticed.
+"
+"['reinforcement-learning', 'objective-functions', 'discrete-action-spaces', 'multi-action-rl']"," Title: Extend the loss function from the single action to the n-action case per time stepBody: My question concerns a side question (which was not answered) asked here:
+How can policy gradients be applied in the case of multiple continuous actions?
+I am trying to implement a simple policy gradient algorithm for a discrete multi-action reinforcement learning task. To be more precise, there are three actuators. At every time step, each of the actuators can perform one of three possible actions.
+Is it possible to adjust the loss function from the single action case per time step
+$$L = \log(P(a_1)) A$$
+to the n-action case per time step like so?
+$$L = (\log(P(a_1)) + \log(P(a_2))+ \dots + \log(P(a_n))) A$$?
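+As a concrete sketch of what I mean (assuming the three actuators are modelled as independent categorical heads with three actions each):
+
+import torch
+from torch.distributions import Categorical
+
+def pg_loss(logits_per_actuator, actions, advantage):
+    # logits_per_actuator: list of 3 tensors, each of shape (3,); actions: 3 chosen indices (as tensors)
+    log_p = sum(Categorical(logits=l).log_prob(a)
+                for l, a in zip(logits_per_actuator, actions))
+    return -(log_p * advantage)   # negated because optimizers minimize; we want to ascend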
+"
+"['machine-learning', 'comparison', 'python', 'programming-languages', 'c++']"," Title: Why does C++ seem less widely used than Python in AI?Body: I just want to know why do machine learning engineers and AI programmers use languages like Python to perform AI tasks and not C++, even though C++ is technically a more powerful language than Python.
+"
+"['machine-learning', 'linear-regression']"," Title: Regression on extreme valuesBody: I have a data set that looks like this:
+
+
+
+I would like to estimate a relationship between x-values and the corresponding 5% extreme y-values, something that might look like that :
+
+
+
+Do you have an idea of an algorithm that might help me for this ? I thought about labelling the extreme values for later finding a separating hyperplane, but I have no clue on how to label these ""extreme values"" (I cannot just take the 5% lowest and highest values as all these would end up in the same region).
+
+Thanks for your ideas !
+"
+"['reinforcement-learning', 'q-learning', 'policy-gradients', 'comparison']"," Title: What is the relation between Q-learning and policy gradients methods?Body: As far as I understand, Q-learning and policy gradients (PG) are the two major approaches used to solve RL problems. While Q-learning aims to predict the reward of a certain action taken in a certain state, policy gradients directly predict the action itself.
+
+However, both approaches appear identical to me, i.e. predicting the maximum reward for an action (Q-learning) is equivalent to predicting the probability of taking the action directly (PG). Is the difference in the way the loss is back-propagated?
+"
+"['machine-learning', 'applications']"," Title: How to organize artificial intelligence efforts at work?Body: Some context: Recently all kinds of salesmen have been knocking on our company's door to provide their "artificial intelligence" expertise and projects suggestions. Some don't know the difference between words estimation and validation (really), some have extraordinary powerpoints and paint themselves as gurus of the field. Our management has gone with the hype and definitely we're starting some kind of project on "artificial intelligence" (meaning rpa with some machine learning possibly).
+What is the best way to start when we don't yet know to what problem we want to apply all this and I'm worried it will lead to long expensive projects with meager results? What are the things to watch out for? Any good practical books or war stories out there?
+"
+"['neural-networks', 'training', 'image-recognition', 'feedforward-neural-networks', 'catastrophic-forgetting']"," Title: Could new training pictures destroy the trained weights of the neural network?Body: Let's say an image has 28*28 pixels, which leads to 784 input nodes in a feed-forward neural network. If an image can be classified into 1 of 10 numbers (e.g. MNIST), there are 10 output nodes.
+We train (with gradient descent and back-propagation) the FFNN with a set of known pictures until we get a good accuracy.
+Successively, we get a new training picture, which we want to use to train the FFNN even further. However, wouldn't this new training picture destroy the previously learned weights, which have been calibrated to recognize the former training pictures?
+"
+"['game-ai', 'problem-solving']"," Title: Developing character tactics via repeated trialsBody: Let's assume a common game scenario of several characters in a combat arena. Each character has different strengths and weaknesses. The arena has traps and tools. Suppose the characters had only very basic moves such as step in a direction, shoot, climb, duck, pick up item, use item, drag heavy object. Each move has a chance of success based on the context (e.g. range to target). What AI, machine learning, or evolutionary approach could be used to generate personalized tactics for each character based on repeated runs of the scenario?
+"
+"['neural-networks', 'python', 'keras', 'recurrent-neural-networks', 'long-short-term-memory']"," Title: Over- and underestimations of the lowest and highest values in LSTM networkBody: I'm training an LSTM network with multiple inputs and several LSTM layers in order to set up a time series gap filling procedure. The LSTM is trained bidirectionally with "tanh" activation on the outputs of the LSTM, and one Dense layer with "linear" activation comes at the end to predict the outputs. The following scatterplot of real outputs vs the predictions illustrates the problem:
+Outputs (X-axis) vs predictions (Y-axis):
+
+The network is definitely not performing too bad and I'll be updating the parameters in the next trials, but the issue at hand always reappears. The highest outputs are clearly underestimated, and the lowest values are overestimated, clearly systematic.
+I have tried min-max scaling on inputs and outputs and normalizing inputs and outputs, and the latter performs slightly better, but the issue persists.
+I've looked a lot in existing threads and Q&As, but I haven't seen something similar.
+I'm wondering if anyone here sees this and immediately knows the possible cause (activation function? Preprocessing? Optimizer? Lack of weights during training? ...?). If not, it would also be good to know whether this is impossible to find out without extensive testing.
+"
+['deep-learning']," Title: Is the following neural network architecture considered deep learning?Body: I am working in the following neural network architecture, I am using keras and TensorFlow as a back-end.
+
+It is composed of the following: an embedding of words, then a Long Short-Term Memory (LSTM) layer, then one output layer, and finally a softmax activation function.
+
+model = Sequential()
+model.add(Embedding(MAX_NB_WORDS, 64, dropout=0.2))
+model.add(LSTM(64, dropout_W=0.2, dropout_U=0.2))
+model.add(Dense(8))
+model.add(Activation('softmax'))
+
+
+I have the following question: if I am getting a model through this code, could the final product be called a deep learning model?
+I know that this code is very small; however, there are a lot of computations that the machine is making in the background.
+"
+"['training', 'natural-language-processing', 'sentiment-analysis']"," Title: Recommended Modelling Technique for Influencer Marketing ScenarioBody: I have an approximately 90,000 row dataset that has information of social media profiles which has columns for biography, follower count, language spoken, name, username and the label (to identify whether the profile is that of an influencer, brand or news and media).
+
+Task: I have to train a model that predicts the label. I then need to produce a confidence interval for each prediction.
+
+As I have never come across a problem like this, I am just after some suggestions of what models I should be using for a situation like this. I am thinking Natural Language Processing (NLP), but I'm not sure.
+
+Also, for NLP (if it is a suitable method), any code or advice to help me implement it for the first time in Python would be greatly appreciated! Thanks in advance.
+"
+"['reference-request', 'applications', 'autonomous-vehicles', 'path-planning', 'pddl']"," Title: How is PDDL used in production AI systems?Body: I can't find much information on modern PDDL usage. Are there more popular alternatives, maybe something more suited to modern neural network/deep learning techniques?
+
+I'm particularly interested in the current usage of PDDL or its alternatives in autonomous driving software.
+"
+"['philosophy', 'ethics', 'superintelligence', 'value-alignment']"," Title: Is it possible to build an AI that learns humanity, morally?Body: It is a new era and people are trying to evolve more in science and technology. Artificial Intelligent is one of the ways to achieve this. We have seen lots of examples for AI sequences or a simple "communication AI" that are able to think by themselves and they are often shifted to the communication of building a world where machines will rise. This is what people, like Stephen Hawking and Elon Musk, are afraid of, to be in that kind of war.
+Is it possible to build an AI which is able to think by itself, but is limited so that it cannot overrule humankind, or to teach it a moral way of keeping peace and working alongside humans, so that it could fight alongside humans if this kind of catastrophe ever happens in the future? It could be an advantage.
+"
+"['neural-networks', 'genetic-algorithms', 'evolutionary-algorithms', 'neat']"," Title: How to evaluate a NEAT neural network?Body: I'm trying to write my own implementation of NEAT and I'm stuck on the network evaluate function, which calculates the output of the network.
+
+NEAT, as you may know, maintains a population of neural networks with continuously evolving topologies, through the addition of new nodes and new connections. But with the addition of new connections between previously unconnected nodes, I see a problem that will occur when I go to evaluate; let me explain with an example:
+
+
+INPUTS = 2 yellow nodes
+HIDDEN = 3 blue nodes
+OUTPUT = 1 red node
+
+
+In the image, a new connection has been added connecting node3 to node5. How can I calculate the output for node5 if I have not yet calculated the output for node3, which itself depends on the output from node5?
+
+(not considering activation functions)
+
+node5 output = (1 * 0.5) + (1 * 0.2) + (node3 output * 0.8)
+node3 output = ((node5 output * 0.7) * 0.4)
+
+"
+"['neural-networks', 'machine-learning', 'ai-basics']"," Title: How does one make it obvious that the structure of a neural network should be what it is?Body: I am a beginner: I've only read a book about neural network and barely implemented one in C.
+
+In short:
+
+
+- A neural network is built out of nodes,
+- Each node holds an output: activation.(sum.(x * w)) (sketched below),
+- We then compute the total error out of the network output.
+
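+To be explicit, by that node output I have in mind something like this (sigmoid is just one possible activation):
+
+import numpy as np
+
+def node_output(x, w, b):
+    # one node: weighted sum of its inputs plus a bias, squashed by an activation (sigmoid here)
+    return 1.0 / (1.0 + np.exp(-(np.dot(x, w) + b)))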
+
+From a beginner's perspective, hyper-parameters, such as the number of layers needed, seem to be defined arbitrarily in most tutorials and books. In fact, the whole structure seems to be quite arbitrarily defined. In practice, hyper-parameters are often defined based on some standards.
+
+My question is: if you were to talk to a total beginner, how would you explain the structure of a neural network to them in such a way that the whole thing would appear obvious? Is that even possible?
+
+Here, the word structure refers to a neural network being a configuration of nodes inside layers.
+
+Thanks to anyone pointing out ambiguities or spelling errors.
+
+Edit: note that I actually understand the whole back-propagation algorithm. I have no problem visualizing a nn.
+"
+"['reinforcement-learning', 'q-learning', 'markov-decision-process', 'dynamic-programming']"," Title: Should we feed a greater fraction of terminal states to the value network so that their values are learned first?Body: The basis of Q-learning is recursive (similar to dynamic programming), where only the absolute value of the terminal state is known.
+Shouldn't it make sense to feed the model a greater proportion of terminal states initially, to ensure that the predicted value of a step in terminal states (zero) is learned first?
+Will this make the network more likely to converge to the global optimum?
+"
+"['neural-networks', 'deep-learning', 'learning-algorithms']"," Title: Where do 'random seeds' get used in deep neural networks?Body: I know that when creating neural networks it's standard practice to create a 'random seed' so that you can get producible results in your models. I have a couple of questions regarding this:
+
+
+- Is the seed just something that is used in the 'learning' phase of the network or does it get saved? i.e. is it saved into the model itself and used by others if they decide to implement a model you created?
+- Does it matter what you choose to be the seed? Should the number have a certain length?
+- At what step of the creation of a model does this seed get used and how does it get used?
+
+
+Other information about 'random seeds' would be welcomed! But these are my general questions.
+"
+"['neural-networks', 'machine-learning', 'deep-learning']"," Title: Neural network for pattern recognition in audioBody: I'm trying to make a neural network that detects certain instruments in a song. I don't know for sure if I should use an RNN, CNN OR DNN. Which one is best for this situation?
+"
+"['search', 'breadth-first-search']"," Title: Why is breadth-first search only optimal when the cost solution is a non-decreasing function?Body: I am learning about searching strategies in AI and I was reading that breadth-first search is only optimal when the cost solution is a non-decreasing function? I am not really sure what this refers to since decreasing search cost should be our goal. Am I missing something?
+"
+"['neural-networks', 'deep-learning', 'computer-vision', 'variational-autoencoder', 'constrained-optimization']"," Title: How to use a VAE to reconstruct an image starting from an initial image instead of starting from a random vector?Body: Is it possible to use a VAE to reconstruct an image starting from an initial image instead of using K.random_normal
, as shown in the “sampling” function of this example?
+I have used a sample image with the VAE encoder to get z_mean and z_logvar.
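+(For reference, this is what I understand the referenced “sampling” step to compute, written here as a plain numpy sketch:)
+
+import numpy as np
+
+def sample_z(z_mean, z_log_var, rng=np.random.default_rng()):
+    eps = rng.standard_normal(z_mean.shape)         # this is the K.random_normal part
+    return z_mean + np.exp(0.5 * z_log_var) * eps   # reparameterization: mean plus scaled noise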
+I have been given 1000 pixels in an otherwise blank image (with nothing in it).
+Now, I want to reconstruct the sample image using the decoder with a given constraint that the 1000 pixels in the otherwise blank image remain the same. The remaining pixels can be reconstructed so they are as close to the initial sample image as possible. In other words, my starting point for the decoder is a blank image with some pixels that don’t change.
+How can I modify the decoder to generate an image based on this constraint? Is it possible? Are there variations of VAE that might make this possible? So we can predict the latent variables by starting from an initial point(s)?
+"
+"['neural-networks', 'machine-learning', 'backpropagation', 'gradient-descent']"," Title: How should I update the weights of a neural network, given the gradient?Body: After watching 3Blue1Brown's tutorial series, and an array of others, I'm attempting to make my own neural network from scratch.
+
+So far, I'm able to calculate the gradient for each of the weights and biases.
+
+Now that I have the gradient, how am I supposed to correct my weight/bias?
+
+Should I:
+
+
+- Add the gradient and the original value?
+- Multiply the gradient and the original value?
+- Something else? (Most likely answer)
+
+
+In addition to this, I've been hearing the term learning rate being tossed around, and how it is used to define the magnitude of the 'step' to descend to minimum cost. I figured this may also play an integral role in reducing the cost.
+"
+"['math', 'agi', 'research', 'education']"," Title: What are the mathematical prerequisites to be able to study artificial general intelligence?Body: What are the mathematical prerequisites to be able to study artificial general intelligence (AGI) or strong AI?
+"
+"['neural-networks', 'convolutional-neural-networks', 'datasets', 'image-segmentation', 'fully-convolutional-networks']"," Title: How can I deal with images of variable dimensions when doing image segmentation?Body: I'm facing the problem of having images of different dimensions as inputs in a segmentation task. Note that the images do not even have the same aspect ratio.
+
+One common approach that I found in general in deep learning is to crop the images, as it is also suggested here. However, in my case, I cannot crop the image and keep its center or something similar, since, in segmentation, I want the output to be of the same dimensions as the input.
+
+This paper suggests that in a segmentation task one can feed the same image multiple times to the network but with a different scale and then aggregate the results. If I understand this approach correctly, it would only work if all the input images have the same aspect ratio. Please correct me if I am wrong.
+
+Another alternative would be to just resize each image to fixed dimensions. I think this was also proposed by the answer to this question. However, it is not specified in what way images are resized.
+
+I considered taking the maximum width and height in the dataset and resizing all the images to that fixed size in an attempt to avoid information loss. However, I believe that our network might have difficulties with distorted images as the edges in an image might not be clear.
+
+
+- What is possibly the best way to resize your images before feeding them to the network?
+- Is there any other option that I am not aware of for solving the problem of having images of different dimensions?
+- Also, which of these approaches you think is the best taking into account the computational complexity but also the possible loss of performance by the network?
+
+
+I would appreciate if the answers to my questions include some link to a source if there is one.
+"
+"['machine-learning', 'reference-request', 'game-ai']"," Title: Have machine learning techniques been used to play outdoor games, like cricket or badminton?Body: Have machine learning techniques been used to play outdoor games, like cricket or badminton?
+"
+"['neural-networks', 'deep-learning', 'overfitting', 'multilayer-perceptrons']"," Title: Unable to overfit using MLPBody: I'm building a 5-class classifier with a private dataset. Each data sample has 67 features and there are about 40000 samples. Samples of a particular class were duplicated to overcome class imbalance problems (hence 40000 samples).
+
+With a one-vs-one multi-class SVM, I am getting an accuracy of ~79% on the validation set. The features were standardized to get 79% accuracy. Without standardization, the accuracy I get is ~72%. Similar result when I tried 50-fold cross validation.
+
+Now moving on to MLP results,
+
+Exp 1:
+
+
+- Network Architecture: [67 40 5] (a rough code sketch of this setup is given after this list)
+- Optimizer: Adam
+- Learning Rate: exponential decay of base learning rate
+- Validation Accuracy: ~45%
+- Observation: Both training accuracy and validation accuracy stop improving.
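+A rough Keras sketch of the Exp 1 setup (the hidden activation and compile settings shown here are only placeholders, since they are not spelled out above):
+
+from tensorflow.keras.models import Sequential
+from tensorflow.keras.layers import Dense
+
+# 67 input features -> 40 hidden units -> 5 output classes
+model = Sequential([
+    Dense(40, activation='relu', input_shape=(67,)),
+    Dense(5, activation='softmax'),
+])
+model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])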
+
+
+Exp 2:
+Repeated Exp 1 with batchnorm layer
+
+
+- Validation Accuracy: ~50%
+- Observation: Got 5% increase in accuracy.
+
+
+Exp 3:
+
+To overfit, I increased the depth of the MLP (a deeper version of the Exp 1 network).
+
+
+- Network Architecture: [67 40 40 40 40 40 40 5]
+- Optimizer: Adam
+- Learning Rate: exponential decay of base learning rate
+- Validation Accuracy: ~55%
+
+
+Thoughts on what might be happening?
+"
+"['machine-learning', 'ethics']"," Title: Can an AI be programmed not to lie?Body: According to this blog post, it seems that AI systems can lie. However, can an AI be programmed in such a way that it never lies (even after learning new things)?
+"
+"['neural-networks', 'machine-learning', 'classification']"," Title: Is it possible to classify songs by genres based on spectrograms?Body: Is it possible to categories songs based on their spectrograms using image recognition or would there need to more features? I was thinking that the spectrograms might also run into problems with EDM songs. Such as House music being closely related to their sounds. Would there have to be immense amount of data? I was thinking of using a CNN.
+"
+"['neural-networks', 'natural-language-processing', 'metric', 'similarity']"," Title: How can we compare, in terms of similarity, two pieces of text?Body: How can we compare, in terms of similarity (and/or meaning), two pieces of text (or documents)?
+For example, let's say that I want to determine whether a document is a plagiarized version of another document. Which approach should I use? Could I use neural networks to do this? Or are there other more suitable approaches?
+"
+"['papers', 'autoencoders', 'unsupervised-learning']"," Title: Do Le et al. (2012) train all three autoencoder layers at a time, or just one?Body: Le et al. 2012 use a network of 1 billion parameters to learn neurons that respond to faces, cats, pedestrians, etc. without labels (unsupervised).
+Their network is built with three autoencoder layers, and six pooling and normalization layers.
+In the paper they state,
+
+Optimization: All parameters in our model were
+trained jointly with the objective being the sum of the
+objectives of the three layers.
+
+Does this mean that all three autoencoder layers were trained simultaneously, or that the first three sub-layers (first autoencoder sub-layer, first L2 pooling sub-layer, and first normalization sub-layer) were trained simultaneously?
+I asked a follow-on question on the advantage of training all layers simultaneously.
+"
+"['neural-networks', 'machine-learning', 'neat', 'neuroevolution', 'papers']"," Title: What if the more fit parent has fewer nodes compared to the other, will the disjoint and excess genes be discarded?Body: In the paper Efficient Evolution of Neural Network Topologies (2002), the authors say
+
+
+ Genes that do not match are inherited from the more fit parent
+
+
+What if the more fit parent has fewer nodes compared to the other, will the disjoint/excess genes be discarded?
+"
+"['papers', 'multi-armed-bandits', 'contextual-bandits']"," Title: Why is it useful in some applications to use features that are shared by all arms?Body: In Li et al. (2010)'s highly cited paper, they talk about LinUCB with hybrid linear models in Section 3.2.
+They motivate this by saying
+
+In many applications including ours, it is helpful to use features that are shared by all arms, in addition to the arm-specific ones. For example, in news article recommendation, a user may prefer only articles about politics for which this provides a mechanism.
+
+I don't quite understand what they mean by this. Is anyone willing to provide a different example?
+Also, it would greatly help if you can clarify what Equation 6's "$\mathbf{z}$" and "$\mathbf{x}$" refer to in the context they talk about (news recommendation), or the example you give?
+Equation (6) from the paper:
+$$ \mathbf{E} \left[ r_{t,a} \vert \mathbf{x}_{t, a} \right] = \mathbf{z}_{t, a}^{\top} \boldsymbol{\beta}^* + \mathbf{x}_{t, a}^{\top} \boldsymbol{\theta}_a^* $$
+"
+"['philosophy', 'terminology', 'semantics', 'rationality', 'intelligence']"," Title: Are Rationality and Intelligence distinct?Body: This just popped into my head, and I haven't thought it through, but it feels like a sound question. The definition of intelligence might still be somewhat fuzzy, possibly a factor of our evolving understanding of ""intelligence"" in regard to algorithms, but rationality has some precise definitions.
+
+
+- Are Rationality and Intelligence distinct?
+
+
+If not, explain. If so, elaborate.
+
+(I have some thoughts on the subject and would be very interested in the thoughts of others.)
+"
+"['reinforcement-learning', 'papers', 'reward-design', 'reward-shaping', 'potential-reward-shaping']"," Title: What should I do when the potential value of a state is too high?Body: I'm working on a Reinforcement Learning task where I use reward shaping as proposed in the paper Policy invariance under reward transformations:
+Theory and application to reward shaping (1999) by Andrew Y. Ng, Daishi Harada and Stuart Russell.
+
+In short, my reward function has this form:
+
+$$R(s, s') = \gamma P(s') - P(s)$$
+
+where $P$ is a potential function. When $s = s'$, then $R(s, s) = (\gamma - 1)P(s)$, which is non-positive, since $0 < \gamma \leq 1$.
+
+But if $P(s)$ is relatively high (let's say $P(s) = 1000$), then $R(s, s)$ becomes large in magnitude as well (e.g. with $\gamma=0.99$, $R(s,s)=-10$), and if the agent stays in the same state for many steps, the cumulative reward becomes more and more negative, which might affect the learning process.
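+For clarity, the base shaping reward I am using looks like this (a minimal sketch; the special case for $s = s'$ that I discuss below is not shown here):
+
+def shaped_reward(s, s_next, potential, gamma=0.99):
+    # R(s, s') = gamma * P(s') - P(s); for s == s_next this collapses
+    # to (gamma - 1) * P(s), which is the problematic negative term.
+    return gamma * potential(s_next) - potential(s)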
+
+In practice, I solved the problem by just removing the factor $P(s)$ when $s = s'$. But I have some doubts about the theoretical correctness of this ""implementation trick"".
+
+Another idea could be to scale $\gamma$ appropriately in order to give a reasonable reward. Indeed, with $\gamma=1.0$, there is no problem, and, with $\gamma$ very near to $1.0$, the negative reward is tolerable. Personally, I don't like this because it means that $\gamma$ somehow depends on the reward.
+
+What do you think?
+"
+"['reinforcement-learning', 'terminology', 'actor-critic-methods', 'on-policy-methods', 'off-policy-methods']"," Title: What is the difference between on and off-policy deterministic actor-critic?Body: In the paper Deterministic Policy Gradient Algorithms, I am really confused about chapter 4.1 and 4.2 which is ""On and off-policy Deterministic Actor-Critic"".
+
+I don't know what the difference between the two algorithms is.
+
+I only noticed that equations 11 and 16 are different, and the difference is in the action argument of the Q function, which is $a_{t+1}$ in equation 11 and $\mu(s_{t+1})$ in equation 16. If that's what really matters, how can I calculate $a_{t+1}$ in equation 11?
+
+
+
+"
+"['classification', 'training', 'python', 'clustering', 'k-means']"," Title: How to refine K-means clustering on a data set?Body: I'm working with a data set where the data is stored in a string such as AxByCyA
where A
, B
and C
are actions and v,w,x,y,z
are times between the actions (each letter represents an interval of time). It's worth noting that B
cannot occur without A
, and C
cannot occur without B
, and C
is the action I'm attempting to study (ie: I'd like to be able to predict whether a user will do C
based on their prior actions).
+I intend to create 2 clusters: people who do C
and those who don't.
+From this data set, I build a training array to run the sci-kit (python) k-means algorithm on, containing the number of A
s, the number of B
s, the mean time between actions (calculated using the average of each interval) and the standard deviation between each interval.
+This gives me an overall success rate of 82% on the test set, but is there anything I can do for more accuracy?
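+For reference, the feature extraction looks roughly like this (a simplified sketch; the letter-to-duration mapping is made up for illustration and is not my real data):
+
+import numpy as np
+from sklearn.cluster import KMeans
+
+# Made-up mapping from an interval letter to a duration
+INTERVALS = {'v': 1, 'w': 5, 'x': 10, 'y': 30, 'z': 60}
+
+def features(sequence):
+    times = [INTERVALS[c] for c in sequence if c in INTERVALS]
+    return [
+        sequence.count('A'),
+        sequence.count('B'),
+        np.mean(times) if times else 0.0,
+        np.std(times) if times else 0.0,
+    ]
+
+X = np.array([features(s) for s in ['AxByCyA', 'AwBxA', 'AxA']])
+labels = KMeans(n_clusters=2, random_state=0).fit_predict(X)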
+"
+"['agi', 'artificial-consciousness', 'self-awareness']"," Title: What kind of simulated environment is complex enough to develop a general AI?Body: Imagine trying to create a simulated virtual environment that is complicated enough to create a ""general AI"" (which I define as a self aware AI) but is as simple as possible. What would this minimal environment be like?
+
+i.e. An environment that was just a chess game would be too simple. A chess program cannot be a general AI.
+
+What about an environment with multiple agents playing chess and communicating their results to each other? Would this constitute a general AI? (Could you say a chess grandmaster who thinks about chess all day long has 'general AI'? While thinking about chess, is he any different from a chess computer?)
+
+What about a 3D sim-like world? That seems too complicated. After all, why couldn't a general AI exist in a 2D world?
+
+What would be an example of a simple environment but not too simple such that the AI(s) can have self-awareness?
+"
+"['neural-networks', 'machine-learning', 'genetic-algorithms', 'neat']"," Title: How do I restrict the neural network structure to be acyclic in NEAT?Body: I want my neural network structure to not have a circular/looping structure something similar like a directed acyclic graph (DAG). How do I do that?
+"
+"['tensorflow', 'optimization']"," Title: Can TensorFlow minimize ""symbolically""Body: From https://stackoverflow.com/questions/36370129/does-tensorflow-use-automatic-or-symbolic-gradients, I understood TensorFlow requires all the operations in the Graph to be explicit formulas (instead of black-boxes, such as raw python functions) to do Automatic Differentiation. Then it will do some kind of Gradient Descent based on that to minimization.
+
+I'm wondering: since it already knows all the explicit formulas, can it directly find the minimum by examining the equations themselves? For example, by computing the points where the gradient is zero or does not exist, and then doing some kind of processing to find the minimum.
+
+I found it is simple to do this ""symbolic minimization"" with few variables, such as minimizing Σ(a_i - v)^2 where v is the trainable variable and a_i are all the training samples. I'm not sure whether there is a general way, though.
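+In that toy example, the minimum can even be read off by hand by setting the derivative to zero:
+
+$$\frac{d}{dv} \sum_i (a_i - v)^2 = -2 \sum_i (a_i - v) = 0 \quad \Longrightarrow \quad v = \frac{1}{n} \sum_i a_i$$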
+"
+"['machine-learning', 'math', 'definitions', 'statistics', 'bayes-theorem']"," Title: What is Bayes' theorem?Body: What is Bayes' theorem? How does it relate to conditional probabilities?
+"
+"['reinforcement-learning', 'handwritten-characters']"," Title: Can you make the first layer of a net have discernible shapes?Body: Coming from the YT videos of 3blue1brown which showed that the individual layers do not have discernible shapes in the case of hand written letter recognition, I wondered if you could penalize dispersed shapes while training, thus creating connected shapes (at least on the first layer in the beginning). That way, you may be better able to understand the propagation of your algorithm through the layers.
+
+Thanks, Jonny
+"
+"['neural-networks', 'backpropagation', 'gradient-descent']"," Title: How do I implement softmax forward propagation and backpropagation to replace sigmoid in a neural network?Body: I'm currently using 3Blue1Brown's tutorial series on neural networks and lack extensive calculus knowledge/experience.
+
+I'm using the following equations to calculate the gradients for weights and biases as well as the equations to find the derivative of the cost with respect to a hidden layer neuron:
+
+
+
+The issue is, during backpropagation, the gradients keep cancelling each other out because I take an average over opposing training examples. That is, if I have two training labels [1, 0] and [0, 1], the gradients that adjust for the first label get reversed by the second label because an average of the gradients is taken. The network simply keeps outputting the average of these two, so it always outputs [0.5, 0.5], regardless of the input.
+
+To prevent this, I figured a softmax function would be required for the last layer instead of a sigmoid, which I used for all the layers.
+
+However, I have no idea how to implement this. The math is difficult to understand and the notation is complicated for me.
+
+The equations I provided above show the term: σ'(z), which is the derivative of the sigmoid function.
+
+If I'm using softmax, how am I supposed to substitute sigmoid with it?
+
+If I'm not mistaken, the softmax function doesn't just take one number the way the sigmoid does; it operates on all the outputs at once.
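+For reference, my rough understanding of the softmax forward computation is this (a sketch only; I'm not sure how it should replace the sigmoid in my code, which is part of the question):
+
+import numpy as np
+
+def softmax(z):
+    # z is the whole vector of output-layer pre-activations, not a single number
+    shifted = z - np.max(z)   # subtract the max for numerical stability
+    exps = np.exp(shifted)
+    return exps / np.sum(exps)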
+
+To sum it up, the things I'd like to know and understand are:
+
+
+- The equation for the neuron in every layer besides the output is: σ(w1x1 + w2x2 + ... + wnxn + b). How am I supposed to make an analogous equation with softmax for the output layer?
+- After using (1) for forward propagation, how am I supposed to replace the σ'(z) term in the equations above with something analogous to softmax to calculate the partial derivative of the cost with respect to the weights, biases, and hidden layers?
+
+"
+"['neural-networks', 'game-ai', 'feedforward-neural-networks', 'features']"," Title: How to handle varying types and length of inputs in a feedforward neural network?Body: After learning the basics of neural networks and coding one working with the MNIST dataset, I wanted to go to the next step by making one which is able to play a game. I wanted to make it work on a game like slither.io. So, in order to be able to create multiple instances of snakes and accelerate the speed of the game, I recreated a simple version of the game:
+
+With the core features almost done, now comes the work on the AI. I want to keep the script very simple by using only NumPy (not that TensorFlow, PyTorch, or Spark don't interest me, but I want to understand things at a "low level" before using those frameworks).
+At first, I wanted the AI to be able to propose an output by reading pixels. But after some research, I don't really want to get into convolutional, recurrent, or recursive neural nets. I'd like to re-use the simple feed-forward NN I built for MNIST and adapt it.
+So, instead of using pixels, I think I'm going to use the following data:
+
+- {x,y} snake's position
+- {x,y} food positions
+- food value
+- Time, in order to push the snake to eat more food in a short time.
+- Distance from the center, so the snake does not die outside the area.
+
+
+That's a lot of different data to handle!
+
+My questions
+
+- Can a simple FNN handle different kinds of data in the input layer?
+- Will it properly work with a variable number of inputs?
+
+In fact, in a specific area around the snake, the quantity of food will be variable. I came across this post, which kind of answers my question, but what if I want the neural network to ignore some inputs when they are not being used? Can dropout be of any use in this case, or will the weights of these inputs (being corrected toward zero) be enough?
+"
+"['genetic-algorithms', 'game-ai']"," Title: Genetic Algorithm - creatures in 2d world are not learningBody: Goal -
+I am trying to implement a genetic algorithm to optimise the fitness of a
+species of creatures in a simulated two-dimensional world. The world contains edible foods, placed at random, and a population of monsters (your basic zombies). I need the algorithm to find behaviours that keep the creatures well fed and not dead.
+
+What I have done -
+
+So I start off by generating an 11x9 2D array in NumPy, filled with random floats between 0 and 1. I then use np.matmul to go through each row of the array and multiply the random weights by the percepts (w1*p1 + w2*p2 + ... + w9*p9 = a1).
+
+This first generation is run, and I then evaluate the fitness of each creature using (energy + (time of death * 100)). From this I build a list of creatures who performed above the average fitness. I then take the best of these ""elite"" creatures and put them back into the next population. For the remaining space I use a crossover function which takes two randomly selected ""elite"" creatures and mixes their genes. I have tested two different crossover functions: one which does a two-point crossover on each row, and one which takes a row from each parent until the new child has a complete chromosome. My issue is that the creatures just don't really seem to be learning; with 75 turns I will only get 1 survivor every so often.
+
+I am fully aware this might not be enough to go on, but I am truly stuck on this and cannot figure out how to get these creatures to learn, even though I think I am implementing the correct procedures. Occasionally I will get 3-4 survivors rather than 1 or 2, but it appears to occur completely randomly; it doesn't seem like there is much learning happening.
+
+Below is the main section of code; it includes everything I have done but none of the provided code for the simulation.
+
+#!/usr/bin/env python
+from cosc343world import Creature, World
+import numpy as np
+import time
+import matplotlib.pyplot as plt
+import random
+import itertools
+
+
+# You can change this number to specify how many generations creatures are going to evolve over.
+numGenerations = 2000
+
+# You can change this number to specify how many turns there are in the simulation of the world for a given generation.
+numTurns = 75
+
+# You can change this number to change the world type. You have two choices - world 1 or 2 (described in
+# the assignment 2 pdf document).
+worldType=2
+
+# You can change this number to modify the world size.
+gridSize=24
+
+# You can set this mode to True to have the same initial conditions for each simulation in each generation - good
+# for development, when you want to have some determinism in how the world runs from generation to generation.
+repeatableMode=False
+
+# This is a class implementing you creature a.k.a MyCreature. It extends the basic Creature, which provides the
+# basic functionality of the creature for the world simulation. Your job is to implement the AgentFunction
+# that controls creature's behaviour by producing actions in response to percepts.
+class MyCreature(Creature):
+
+ # Initialisation function. This is where your creature
+ # should be initialised with a chromosome in a random state. You need to decide the format of your
+ # chromosome and the model that it's going to parametrise.
+ #
+ # Input: numPercepts - the size of the percepts list that the creature will receive in each turn
+ # numActions - the size of the actions list that the creature must create on each turn
+ def __init__(self, numPercepts, numActions):
+
+ # Place your initialisation code here. Ideally this should set up the creature's chromosome
+ # and set it to some random state.
+ #self.chromosome = np.random.uniform(0, 10, size=numActions)
+ self.chromosome = np.random.rand(11,9)
+ self.fitness = 0
+ #print(self.chromosome[1][1].size)
+
+ # Do not remove this line at the end - it calls the constructors of the parent class.
+ Creature.__init__(self)
+
+
+ # This is the implementation of the agent function, which will be invoked on every turn of the simulation,
+ # giving your creature a chance to perform an action. You need to implement a model here that takes its parameters
+ # from the chromosome and produces a set of actions from the provided percepts.
+ #
+ # Input: percepts - a list of percepts
+ # numAction - the size of the actions list that needs to be returned
+ def AgentFunction(self, percepts, numActions):
+
+ # At the moment the percepts are ignored and the actions is a list of random numbers. You need to
+ # replace this with some model that maps percepts to actions. The model
+ # should be parametrised by the chromosome.
+
+ #actions = np.random.uniform(0, 0, size=numActions)
+
+ actions = np.matmul(self.chromosome, percepts)
+
+ return actions.tolist()
+
+
+# This function is called after every simulation, passing a list of the old population of creatures, whose fitness
+# you need to evaluate and whose chromosomes you can use to create new creatures.
+#
+# Input: old_population - list of objects of MyCreature type that participated in the last simulation. You
+# can query the state of the creatures by using some built-in methods as well as any methods
+# you decide to add to MyCreature class. The length of the list is the size of
+# the population. You need to generate a new population of the same size. Creatures from
+# old population can be used in the new population - simulation will reset them to their
+# starting state (not dead, new health, etc.).
+#
+# Returns: a list of MyCreature objects of the same length as the old_population.
+
+def selection(old_population, fitnessScore):
+ elite_creatures = []
+ for individual in old_population:
+ if individual.fitness > fitnessScore:
+ elite_creatures.append(individual)
+
+ elite_creatures.sort(key=lambda x: x.fitness, reverse=True)
+
+ return elite_creatures
+
+def crossOver(creature1, creature2):
+ child1 = MyCreature(11, 9)
+ child2 = MyCreature(11, 9)
+ child1_chromosome = []
+ child2_chromosome = []
+
+ #print(""parent1"", creature1.chromosome)
+ #print(""parent2"", creature2.chromosome)
+
+ for row in range(11):
+ chromosome1 = creature1.chromosome[row]
+ chromosome2 = creature2.chromosome[row]
+
+ index1 = random.randint(1, 9 - 2)
+ index2 = random.randint(1, 9 - 2)
+
+ if index2 >= index1:
+ index2 += 1
+ else: # Swap the two cx points
+ index1, index2 = index2, index1
+
+ child1_chromosome.append(np.concatenate([chromosome1[:index1],chromosome2[index1:index2],chromosome1[index2:]]))
+ child2_chromosome.append(np.concatenate([chromosome2[:index1],chromosome1[index1:index2],chromosome2[index2:]]))
+
+ child1.chromosome = child1_chromosome
+ child2.chromosome = child2_chromosome
+
+ #print(""child1"", child1_chromosome)
+
+ return(child1, child2)
+
+def crossOverRows(creature1, creature2):
+ child = MyCreature(11, 9)
+
+ child_chromosome = np.empty([11,9])
+
+ i = 0
+
+ while i < 11:
+ if i != 10:
+ child_chromosome[i] = creature1.chromosome[i]
+ child_chromosome[i+1] = creature2.chromosome[i+1]
+ else:
+ child_chromosome[i] = creature1.chromosome[i]
+
+ i += 2
+
+ child.chromosome = child_chromosome
+
+ return child
+
+ # print(""parent1"", creature1.chromosome[:3])
+ # print(""parent2"", creature2.chromosome[:3])
+ # print(""crossover rows "", child_chromosome[:3])
+
+
+def newPopulation(old_population):
+ global numTurns
+
+ nSurvivors = 0
+ avgLifeTime = 0
+ fitnessScore = 0
+ fitnessScores = []
+
+ # For each individual you can extract the following information left over
+ # from the evaluation. This will allow you to figure out how well an individual did in the
+ # simulation of the world: whether the creature is dead or not, how much
+ # energy did the creature have a the end of simulation (0 if dead), the tick number
+ # indicating the time of creature's death (if dead). You should use this information to build
+ # a fitness function that scores how the individual did in the simulation.
+ for individual in old_population:
+
+ # You can read the creature's energy at the end of the simulation - it will be 0 if creature is dead.
+ energy = individual.getEnergy()
+
+ # This method tells you if the creature died during the simulation
+ dead = individual.isDead()
+
+ # If the creature is dead, you can get its time of death (in units of turns)
+ if dead:
+ timeOfDeath = individual.timeOfDeath()
+ avgLifeTime += timeOfDeath
+ else:
+ nSurvivors += 1
+ avgLifeTime += numTurns
+
+ if individual.isDead() == False:
+ timeOfDeath = numTurns
+
+ individual.fitness = energy + (timeOfDeath * 100)
+ fitnessScores.append(individual.fitness)
+ fitnessScore += individual.fitness
+ #print(""fitnessscore"", individual.fitness, ""energy"", energy, ""time of death"", timeOfDeath, ""is dead"", individual.isDead())
+
+ fitnessScore = fitnessScore / len(old_population)
+
+ eliteCreatures = selection(old_population, fitnessScore)
+
+ print(len(eliteCreatures))
+
+ newSet = []
+
+ for i in range(int(len(eliteCreatures)/2)):
+ if eliteCreatures[i].isDead() == False:
+ newSet.append(eliteCreatures[i])
+
+ print(len(newSet), "" elites added to pop"")
+
+ remainingRequired = w.maxNumCreatures() - len(newSet)
+
+ i = 1
+
+ while i in range(int(remainingRequired)):
+ newSet.append(crossOver(eliteCreatures[i], eliteCreatures[i-1])[0])
+ if i >= (len(eliteCreatures)-2):
+ i = 1
+ i += 1
+
+ remainingRequired = w.maxNumCreatures() - len(newSet)
+
+
+ # Here are some statistics, which you may or may not find useful
+ avgLifeTime = float(avgLifeTime)/float(len(population))
+ print(""Simulation stats:"")
+ print("" Survivors : %d out of %d"" % (nSurvivors, len(population)))
+ print("" Average Fitness Score :"", fitnessScore)
+ print("" Avg life time: %.1f turns"" % avgLifeTime)
+
+ # The information gathered above should allow you to build a fitness function that evaluates fitness of
+ # every creature. You should show the average fitness, but also use the fitness for selecting parents and
+ # spawning then new creatures.
+
+
+ # Based on the fitness you should select individuals for reproduction and create a
+ # new population. At the moment this is not done, and the same population with the same number
+ # of individuals is returned for the next generation.
+
+ new_population = newSet
+
+ return new_population
+
+# Pygame window sometime doesn't spawn unless Matplotlib figure is not created, so best to keep the following two
+# calls here. You might also want to use matplotlib for plotting average fitness over generations.
+plt.close('all')
+fh=plt.figure()
+
+# Create the world. The worldType specifies the type of world to use (there are two types to chose from);
+# gridSize specifies the size of the world, repeatable parameter allows you to run the simulation in exactly same way.
+w = World(worldType=worldType, gridSize=gridSize, repeatable=repeatableMode)
+
+#Get the number of creatures in the world
+numCreatures = w.maxNumCreatures()
+
+#Get the number of creature percepts
+numCreaturePercepts = w.numCreaturePercepts()
+
+#Get the number of creature actions
+numCreatureActions = w.numCreatureActions()
+
+# Create a list of initial creatures - instantiations of the MyCreature class that you implemented
+population = list()
+for i in range(numCreatures):
+ c = MyCreature(numCreaturePercepts, numCreatureActions)
+ population.append(c)
+
+# Pass the first population to the world simulator
+w.setNextGeneration(population)
+
+# Runs the simulation to evaluate the first population
+w.evaluate(numTurns)
+
+# Show the visualisation of the initial creature behaviour (you can change the speed of the animation to 'slow',
+# 'normal' or 'fast')
+w.show_simulation(titleStr='Initial population', speed='normal')
+
+for i in range(numGenerations):
+ print(""\nGeneration %d:"" % (i+1))
+
+ # Create a new population from the old one
+ population = newPopulation(population)
+
+ # Pass the new population to the world simulator
+ w.setNextGeneration(population)
+
+ # Run the simulation again to evaluate the next population
+ w.evaluate(numTurns)
+
+ # Show the visualisation of the final generation (you can change the speed of the animation to 'slow', 'normal' or
+ # 'fast')
+ if i==numGenerations-1:
+ w.show_simulation(titleStr='Final population', speed='normal')
+
+"
+"['machine-learning', 'comparison', 'logistic-regression']"," Title: What are the differences between softmax regression and logistic regression (other than when the number of classes is 2)?Body: I read about softmax from this article. Apparently, these 2 are similar, except that the probability of all classes in softmax adds to 1. According to their last paragraph for number of classes = 2
, softmax reduces to LR. What I want to know is other than the number of classes is 2, what are the essential differences between LR and softmax. Like in terms of:
+
+- Performance.
+- Computational Requirements.
+- Ease of calculation of derivatives.
+- Ease of visualization.
+- Number of minima in the convex cost function, etc.
+
+Other differences are also welcome!
+I am asking for relative comparisons only, so that at the time of implementation I have no difficulty in selecting which method of implementation to use.
+"
+"['machine-learning', 'datasets', 'models', 'supervised-learning', 'explainable-ai']"," Title: What should we do when the new data drastically change the current model?Body: In machine learning (in particular, supervised learning), if some new data changes the previous model/function drastically, then I think we should study that data. Does it happen? How to handle such a situation?
+"
+"['neural-networks', 'gradient-descent', 'hyper-parameters', 'multilayer-perceptrons', 'momentum']"," Title: Why must the momentum factor be in the range 0-1?Body: Why is it a bad idea to have a momentum factor greater than 1? What are the mathematical motivations/reasons?
+"
+"['deep-learning', 'python', 'keras']"," Title: Mountain car problem with images - not convergingBody: I'm trying to find the optimal policy for the mountain car problem using deep Q learning with images as input, however, I cannot find a way to get my Q function to give me good solutions (I followed multiple tutorials for similar problems (Atari games and Flappy bird)).
+I'm working on Python with Keras.
+
+The images are given in the following format :
+
+
+
+400x400 pixels, where the bar on the bottom right corner represents the speed of the car.
+
+To check where my problem might lie, I thought it would be best if I split the problem by first ensuring that I can find a network which would successfully find the state of the car (position and speed, since it's all we need to find it) by feeding my convolutional network images of random states (uniformly distributed within the state space).
+
+After unsuccessful results, I decided to split it even more by only trying to find the two state variables separately.
+
+This is the best kind of result that I get, when the network doesn't predict the same value for every state (which seems to happen a lot with ReLU activation).
+The state space is divided into a 50x50 grid to make my predictions. The predicted speed is shown on the left, and the absolute error on the right.
+
+The images fed to the network are pre-processed in the following way:
+ 1. Gray scale
+ 2. Resize (I tried 50x50, 100x100 and 150x150)
+ 3. Values centered around 0 in [-1;1] (this seemed to help a bit with the relu activation)
+
+The network I used to try to find the speed starts with a convolution layer (I tried 32 filters of kernel_size=(4,4), (8,8) and (16,16), strides=(1,1) and (2,2), and activation relu, linear or tanh).
+
+Optional additional convolution layers of kernel size half the previous layer.
+
+And an optional last dense layer of dimension 32 with activation relu, linear or tanh.
+
+The output layer is dimension one with linear activation.
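+As a concrete example, one of the configurations I tried looks roughly like this in Keras (a sketch only; the input size, optimizer and loss shown are placeholders for one of the variants described above):
+
+from tensorflow.keras.models import Sequential
+from tensorflow.keras.layers import Conv2D, Flatten, Dense
+
+model = Sequential([
+    Conv2D(32, kernel_size=(8, 8), strides=(2, 2), activation='relu',
+           input_shape=(100, 100, 1)),
+    Flatten(),
+    Dense(32, activation='relu'),
+    Dense(1, activation='linear'),   # single regression output (the speed)
+])
+model.compile(optimizer='adam', loss='mse')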
+
+The way I train the network is by feeding the fit function 32 random samples and letting the network train for 25 epochs with batch_size 32, and repeating ad libitum.
+
+It's becoming extremely frustrating, especially since my GPU does not meet the requirements for GPU computation, so I cannot check whether I would get results faster.
+
+Can anyone tell me if I'm doing something wrong that I'm missing, and what I can do to improve my method so that I eventually manage to get the reinforcement learning algorithm to work?
+Like the size of the training sample, batch size and epochs of the fit function, the structure of the network, ...
+
+Edit : I finally found a way for my reinforcement learning to converge to the true Q value : each time I run a new episode and put it in my replay memory, I run many fits of different mini-batches. This is something I thought I did by increasing the number of epochs in the parameter of the fit function, but I guess it doesn't work as I thought it would.
+I'm letting it train a bit and then I will try the same method for the sub-problems mentioned above.
+"
+"['reinforcement-learning', 'q-learning', 'open-ai']"," Title: Should the exploration rate be reset after each trial in Q-learning?Body: As the title says, should I reset the exploration rate between trials?
+
+I am currently doing the Open AI pendulum task and after a number of trials my model started playing but did not take any actions (i.e. didn't perform any significant swing). The Actor-Critic tutorial I followed did not reset the exploration rate (link) but it seems like there are lots of mistakes in general.
+
+I assume that it should be reset since the model might start from a new unknown situation in a different trial and not know what to do without exploring.
+"
+"['neural-networks', 'gradient-descent']"," Title: What can be deduced about the ""algorithm"" of backpropagation/gradient descent?Body: On this video
+
+Link to video
+
+a neurologist starts by saying that we do not know how neurons calculate gradients for backpropagation.
+
+At minute 30:39 he shows faster convergence for ""our algorithm"", which seems to converge faster than backpropagation.
+
+After 34:36 he goes on to explain how ""neurons"" in the brain are actually packs of neurons.
+
+I do not really understand all that he says, so I infer that those packs of neurons (which seem depicted as a single layer) are the ones who calculate the gradient.
+It would make sense if each neuron made a slightly different calculation and they then communicated the differences in their results to each other. That would allow a gradient to be deduced.
+
+What can be deduced, from the presented information, about the purported ""algorithm""? (From the viewpoint of improving the convergence of an artificial neural network.)
+"
+"['ai-design', 'algorithm-request', 'recommender-system']"," Title: How to design a recommendation system for shift swapping?Body: I need to design an algorithm such that it handles the request for shift swapping.
+The algorithm will recommend a list of people who are more likely to swap that shift with the person by analyzing previous data.
+Can anyone list techniques that would help me do this, or suggest a good starting point?
+I was thinking about training a Naive Bayes Classifier and using Mahout for generating recommendations.
+"
+"['deep-learning', 'natural-language-processing', 'reference-request']"," Title: How can I create my own Google duplex?Body: I am trying to create my own variant of Google duplex however, it won't make calls but just have a real-time conversation.
+My question is, where and how to start?
+How do I train my model with real conversations, and how do I make the speech sound almost human-like? Where do I incorporate an RNN, and how can I make my model understand nuances?
+I am trying to create something like this: https://youtu.be/p3PfKf0ndik
+"
+"['machine-learning', 'reinforcement-learning', 'python', 'intelligent-agent']"," Title: Can agents be implemented with ML algorithms other than neural networks?Body: Can agents be implemented with machine learning algorithms/models other than neural networks?
+If so, how do I train an agent with some predefined rules? Can we use python programming for representing those rules?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'convolutional-neural-networks', 'capsule-neural-network']"," Title: Why does the size reduce to $6 \times 6$ in the capsule networks?Body: I want to experiment with capsule networks on facial expression recognition (FER). For now, I am using fer2013 Kaggle dataset.
+
+One thing that I didn't understand in capsule networks: in the first convolution layer, the size is reduced to 20x20 (with a 28x28 input image, 9x9 filters and a stride of 1). But, in the capsule layer, the size reduces to 6x6.
+
+How did this happen? With an input size of 20x20, 9x9 filters and a stride of 2, I couldn't get 6x6. Maybe I missed something.
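+For reference, plugging these numbers into the usual output-size expression gives
+$$\frac{20 - 9}{2} + 1 = 6.5,$$
+which is not an integer, which is exactly why I am confused about how $6 \times 6$ is obtained.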
+
+For my experiment, the input image size is 48x48. Should I use the same hyperparameters to start with, or are there any suggested hyperparameters that I can use?
+"
+"['machine-learning', 'deep-learning', 'natural-language-processing', 'reference-request']"," Title: Are there any references of NLP/text mining techniques for identifying the theme of news headlines?Body: I am looking to extract the central theme from a given news headline using NLP or text mining. Is there any reference that goes in this direction?
+Here's an example. Let's say that I have the following news headline.
+
+BRIEF-Dynasil Corporation Of America Reports Q2 EPS Of $0.08
+
+Then the algorithm should produce
+
+Reports
+
+Here's another example. The input is
+
+China's night-owl retail investors leverage up to dominate oil futures trade
+
+And the output would e.g. be
+
+oil futures
+
+"
+"['philosophy', 'history', 'neo-luddism']"," Title: What is neoluddism?Body: It is well known from the history of technology that the invention of new things was always problematic. In the 15th century for example, in which Gutenberg has invented the first printing press, the world wasn't pleased. Instead the Luddite movement was doing everything to destroy his work. As far as I know from the history lesson, Gutenberg was recognized in his time as an evil sorcerer and the printing press as work of the devil.
+
+This development was also visible in later times. First a great invention was made, for example the first steam-driven car, and ordinary people didn't understand the technology and were afraid of it.
+
+A modern form of technology is computing, and especially artificial intelligence. From a technical point of view, it is one of the most important inventions ever, and this might result in a very strong form of rejection. Some people in the world are not excited by artificial intelligence. They do not want any sort of robots or intelligent machines.
+
+The terminology itself is well known. The fundamental rejection of new technology for religious or moral reasons is called Luddism or Neoluddism, because the technophobic Ned Ludd destroyed two stocking frames a long while ago. After this episode, every rant against technology has been named after him. But what I do not understand is the motivation behind it. Did Ned Ludd think that he could change the world if he destroyed a machine? Did he believe that mankind would become good if no Gutenberg printing presses were used? The problem is that if, for example, the first steam engine had never been invented, then the inventions that followed, like the internet and intelligent machines, wouldn't have been invented either. But what would be the alternative? What is Ned Ludd's perspective? How does he see the better tomorrow, if no technological innovation is allowed?
+"
+"['convolutional-neural-networks', 'image-recognition', 'overfitting']"," Title: Is this overfitting avoidable?Body: I am trying a modification of Mobilenet in which I add feedback from the softmax layer into the early layers (to implement this I put a second net after the first, which receives connections from the softmax layer of the first, the pretrained weights being non trainable). The idea was to mimic the massive feedback projections in the brain, which presumably could help object recognition by enhancing specific filters and inhibiting others.
+
+I took the pretrained network from Keras and started to retrain it on ImageNet. I noticed that the training accuracy increased right in the first epoch. My computer is very slow, so I cannot train for too long; an epoch takes 3.5 days. So after one epoch I tried the validation set, but the accuracy instead went down to almost half that of the pretrained values.
+
+My question is whether this is an obvious case of overfitting. That is, will continued training increase the accuracy on the training set at the expense of the validation set, or is this normal behavior expected at the initial stages of training, so that if I keep training for a few more epochs I could expect the validation set accuracy to eventually go up? Any ideas that could help are welcome.
+"
+"['machine-learning', 'classification', 'python']"," Title: Supervised K-means clustering doesn't appear to workBody: I have a data set containing actions taken by customers (e.g., view a product, add a product to cart, purchase product), the product bought (if any) and times of said actions. I am attempting to use K-means clustering to identify the customers who are more likely to purchase a product based on these actions (minus the purchase).
+
+I'm currently clustering using: the number of products viewed, the number of products put in the cart, the mean time between the actions, the variance of the time between the actions, the standard deviation of the time between the actions (all of these values are normalized), as well as the product purchased (if any). The clusters I'm getting contain ~10% buyers and 90% non-buyers, but I'm trying to separate buyers and non-buyers.
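+For reference, this is roughly my current pipeline (a simplified sketch; the column names and numbers are placeholders, not my real data):
+
+import pandas as pd
+from sklearn.preprocessing import StandardScaler
+from sklearn.cluster import KMeans
+
+# Placeholder customer features (made up for illustration)
+df = pd.DataFrame({
+    'n_viewed': [3, 1, 7, 2],
+    'n_carted': [1, 0, 2, 0],
+    'mean_gap': [12.0, 40.0, 8.0, 55.0],
+    'std_gap': [3.0, 10.0, 2.0, 20.0],
+})
+X = StandardScaler().fit_transform(df)        # normalise each feature
+df['cluster'] = KMeans(n_clusters=2, random_state=0).fit_predict(X)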
+
+Any thoughts on what else I can do? Or should I try another method completely?
+
+Illustration: the x-axis shows the clusters, the y-axis the number of customers; red are buyers and blue are non-buyers.
+
+
+Update: I made a 3D graph showcasing the clusters, the amount of customers and the mean time between actions (normalized because of reasons)
+
+
+Yet another update: customers (not grouped by cluster, just as is) according to the average number of products they viewed and the average time between actions
+
+
+
+I took some advice and tried using PCA (from this tutorial), and these are the results I got:
+
+
+The raw data (x=number of items viewed/carted, y=average time between interactions)
+
+
+Any tips on how to cluster this mess?
+"
+"['comparison', 'definitions', 'search', 'graph-search', 'tree-search']"," Title: What is the difference between tree search and graph search?Body: I have read various answers to this question at different places, but I am still missing something.
+
+What I have understood is that a graph search keeps a closed list with all the expanded nodes, so they don't get explored again. However, if you apply breadth-first search or uniform-cost search to a search tree, you do the same thing: you have to keep the expanded nodes in memory.
+"
+"['neural-networks', 'deep-learning', 'applications']"," Title: Can an AI be trained to generate the outline of a story?Body: I know that one of the recent fads right now is to train a neural network to generate screenplays and new episodes of e.g. the Friends or The Simpsons, and that's fine: it's interesting and might be the necessary first steps toward making programs that can actually generate sensible/understandable stories.
+
+In this context, can neural networks be trained specifically to study the structures of stories, or screenplays, and perhaps generate plot points, or steps in the Hero's Journey, etc., effectively writing an outline for a story?
+
+To me, this differs from the many myriad plot-point generators online, although I have to admit the similarities. I'm just curious if the tech or the implementation is even there yet and, if it is, how one might go about doing it.
+"
+"['machine-learning', 'python', 'datasets', 'intelligent-agent']"," Title: Can we code rules for an agent in python language other than predicate calculus?Body: I have a medical dataset with 14000 rows dataset with 900 attributes. I have to predict disease severity using that. I would like to know whether we can write rules in python language for training an agent for medical diagnostic using machine learning.
+
+Can an agent make decisions based on rules coded in Python, and can that agent be trained with some machine learning algorithms? If so, is there any agent architecture and model that is good in this context?
+
+Edit: By a rule, I meant something like ""if x>y, output z as the action"". By the word ""training"", I meant ""how do I tell this agent to do this action""?
+"
+"['machine-learning', 'datasets', 'statistical-ai']"," Title: Are standard deviation, variance, skew good features for ML?Body: Pretty simple question here:
+
+Is it useful to use the standard deviation, skew, kurtosis, or other summary statistics as features, and if so, in which problem settings?
+
+In this case, I am talking about deep learning problems.
+"
+"['machine-learning', 'algorithm', 'learning-algorithms']"," Title: Which machine learning algorithm is suitable for detecting text w.r.t set of wordsBody: Considering the scenario where supervised training data-set in the form of sentence will be given to train the machine
+
+
+ The Bomb which had been planted by Terrorist on this morning was
+ defused by the Counter Terrorist on joining hands with the
+ Intelligence Force
+
+
+Input sentences are broken into tokenised arrays of single words, with stop words removed.
+
+Each word in the given sentence gets assigned a label w1, w2 and so on, i.e.,
+w2 = Bomb
+w6 = planted
+w13 = defused
+
+Calculating the scores for individual word combinations, the result should yield something like:
+w2.w6 = score should be positive (or > some threshold value)
+w2.w13 = score should be negative (or < some threshold value)
+
+In the case of words with polarity changers,
+e.g. ""Bomb wasn't/hasn't/didn't get defused"",
+the resulting score should be positive.
+
+
+
+To accomplish this task, I implemented sentiment analysis with a threshold of 2.5 and ended up with the following scores.
+
+
+Actual Output:
+
+
+ < 2.5 : Low
+ = 2.5 : Neutral
+ > 2.5 : High
+
+
+
+
+Expected Output:
+
+
+ Case 1: score = negative, since that bomb was defused or removed in the given sentence
+ Case 2: score = positive, vice versa of ""Case 1""
+ Case 3: Otherwise score = 0, in case it can't predict either of the above two cases, it should be neutral
+
+
+I am facing a severe problem: every time I need to update the vocabulary list with new words that were not in the dictionary, the task turns into semi-supervised learning.
+
+Referring to the above sentence, the task is to calculate the scores of the word combinations w(n-1), w(n-2), ..., together with wn, with reference to the word Bomb. The final resulting score should come out negative.
+
+So which machine learning algorithm would be appropriate to yield a better solution, and, based on the given data set, how will I train the machine to learn the above things?
+
+Finally, should I try to implement this with model persistence, so that it doesn't have to be trained on each run?
+"
+"['neural-networks', 'game-ai', 'recurrent-neural-networks', 'learning-algorithms']"," Title: Teaching a NN to manipulate pseudoRNG over a long time scale?Body: For speedrunning purposes, I am trying to train a neural network to identify human-executable ways to manipulate pseudo-RNG (in Pokemon Red, for the interested). The game runs at sixty frames per second, and the linear-congruential PRNG updates every frame, while many frames are unlikely to be relevant to the manipulation (and so should contain no actions from the neural net). Any given manipulation is likely to last 30sec-2min, and the advancement rate of the PRNG can change depending on location in the game-world.
+
+I have some experience with coding AI/deep-learning. I've made some programs using Multilayer Perceptron and IndRNN approaches. From what I can tell, IndRNN or A3C would be my best bets. I'm not expert enough to know the correct approach, though, or to know if the dimensionality of the problem makes it outright unfeasible.
+
+1) Is this problem reasonably solvable with NN/deep learning?
+
+2) What approach would you recommend to tackle it?
+"
+"['neural-networks', 'deep-learning', 'terminology', 'deep-neural-networks']"," Title: What's the definition of ""singularity"" in the context of neural networks?Body: The paper Skip connections eliminate singularities explains the use of skip connections to break the singularity in deep networks, but I have not fully understood what a singularity is.
+Any easy-understanding explanation?
+"
+['evolutionary-algorithms']," Title: Crossover in differential evolution for separable and non-separable functionsBody: The article ""Enhancing Differential Evolution Utilizing Eigenvector-Based Crossover Operator"" says that, for non-separable functions, traditional crossover operators are not suitable and cannot diversify the population sufficiently, so differential evolution gets stuck at local optima. Why does this behavior not hold for separable functions but exist for non-separable functions? Which key feature of non-separable functions causes this behavior?
+"
+['autoencoders']," Title: What are good parameters of an encoder?Body: I am trying to assess the encoder in my autoencoder. I cannot seem to grasp which properties make one encoder better than another in, let's say, unsupervised learning. For example, I am trying to teach my neural network to recognise cats, so that when I provide a picture of a bird, my autoencoder would tell me that it is not a picture of a cat. What exact properties make my encoder (and decoder) better? I understand it is all about the chosen weights, but is it possible to be more specific?
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks']"," Title: Atrous (Dilated) Convolution: How one can compute responses of arbitrarily high dimensions in DCNN?Body: According to this paper (page 4, bottom-right), atrous convolutions can be used to compute responses of arbitrarily large dimensions in Deep Convolutional Neural Networks.
+
+I do not understand how something like this is true, since by upsampling the filters, one can effectively apply the filter fewer times to an image, unless one also upsamples the image. Applying the filter fewer times, as I see it, obviously means that the output (response) will be of lower dimensionality.
+
+Is there something that I am missing here?
+"
+['recurrent-neural-networks']," Title: Is there an alternative to RNNs that doesn't require knowing input history?Body: To train an RNN, you need to unroll it and feed in the history of inputs and the history of expected outcomes.
+
+This doesn't seem like a realistic picture of the brain, since it would require, for example, the brain to store a perfect history of every sense that comes into it for many time steps.
+
+So is there an alternative to RNNs that doesn't require this history? Perhaps storing differences or something? Or storing some accumulator?
+
+Perhaps there is a way to calculate with RNNs that doesn't require keeping hold of this history?
+"
+"['machine-learning', 'deep-learning', 'training', 'autonomous-vehicles']"," Title: How long has it taken for autonomous driving cars to be being sold and used on the roads today?Body: I remember the first time hearing about google trying to make driverless cars. That was YEARS ago!
+
+These days, I'm beginning to learn about Neural Nets and other types of ML and I was wondering:
+
+Does anybody know how many hours (or days, months, etc.) of training time are needed to get the results that are now used in today's self-driving vehicles?
+
+(I am ASSUMING they use Neural networks for this...)
+"
+"['neural-networks', 'machine-learning', 'reference-request', 'state-of-the-art', 'data-compression']"," Title: What is the current research in artificial intelligence in the field of data compression?Body: What is the current research in artificial intelligence and machine learning in the field of data compression?
+I have done my research on the PAQ series of compressors, some of which use neural networks for context mixing.
+"
+"['human-like', 'voice-recognition', 'computational-linguistics']"," Title: ""Vocal captcha"" for robots on the phone?Body: With all the Google I/O stuff coming out, how can I verify that I have an actual human on the phone using only voice? Are there still vocal things humans can, but robots can't do?
+
+Conditions: the person on the phone is a stranger (so personal questions won't work), and the verification must be voice only.
+
+(Also, I understand Google Duplex may be just an overhyped demo that will turn out to flop like the Pixel Buds. But eventually such a bot would be created, right? If so, what's the best verification?)
+"
+['research']," Title: How can you do AI research by your own?Body: Let's say you want to do AI research and publish some papers just by your own. Would you send them to an AI journal using just your name? Which AI journals are recommended?
+"
+"['neural-networks', 'deep-learning', 'comparison', 'activation-functions', 'relu']"," Title: Why do we prefer ReLU over linear activation functions?Body: The ReLU activation function is defined as follows
+
+$$y = \operatorname{max}(0,x)$$
+
+And the linear activation function is defined as follows
+
+$$y = x$$
+
+The ReLU nonlinearity just clips the values less than 0 to 0 and passes everything else. Then why not to use a linear activation function instead, as it will pass all the gradient information during backpropagation? I do see that parametric ReLU (PReLU) does provide this possibility.
+
+I just want to know if there is a proper explanation to using ReLU as default or it is just based on observations that it performs better on the training sets.
+"
+"['deep-learning', 'image-recognition', 'computer-vision']"," Title: How to combine heterogeneous image features extracted with different algorithms for similar image retrieval?Body: Say I have access to several pre-trained CNNs (e.g. AlexNet, VGG, GoogleLeNet, ResNet, DenseNet, etc.) which I can use to extract features from an image by saving the activations of some hidden layer in each CNN. Likewise, I can also extract features using conventional hand-crafted techniques, such as: HOG, SIFT, LBP, LTP, Local Phase Quantization, Rotation Invariant Co-occurrence Local Binary Patterns, etc. Thus, I can obtain a very high-dimensional feature vector of an image that concatenates the individual features vectors outputted by these individual algorithms. Given these features, and given a data set of images over which I want to perform similar image retrieval (i.e. finding the top-k most similar images to a query image X), what would be the most appropriate ways to implement this task?
+
+One possible idea I have in mind is to learn an image similarity embedding in euclidean space by training a neural network that would receive as input the aforementioned feature vectors, and perhaps down-sampled versions of the image as well, and output a lower dimensional embedding vector that ideally should place similar images close to each other and dissimilar images far apart. And I could train this network using for example Siamese Loss or Triplet Loss. The challenge of this approach though is generating the labels for the (supervised) training itself. For example, in the case of the Triplet Loss I would need to sample triplets (Q,X,Y) and somehow determine which one between X and Y is most similar to Q, in order to generate the label for the triplet (i.e., in order to ""teach"" the network I need to know the answers myself beforehand, but how? I guess this is domain dependent, but think of challenging cases where you have very heterogeneous images, such as photography galleries, artwork galleries, etc).
+
+Anyways, this is just an idea and by no means I pretend to mean this is the right approach. I'm open to new suggestions and insights about how to solve this task.
+"
+"['neural-networks', 'deep-learning', 'tensorflow']"," Title: How to create a task-graph based neural network?Body: I'm trying to design a neural network with a task hierarchy. This is my idea so far:
+
+ [Desires]
+ |
+[Layer 1] [T0]
+ | /
+[Layer 2] [T1]
+ | /
+[Layer 3] [T2]
+ | /
+[Layer 4] [T3]
+ | /
+ [Action]
+
+
+The way this would work is that each layer represents a task as a binary number. Layer 1 is the main task, layer 2 the sub-task etc. Each task consists of 2 sub-tasks determined by T={0,1}. In this way the neural network represents a binary task graph with T=0 being the left child and T=1 being the right child of a node.
+
+You can think of it as T3 changing every second, T2 changing every 2 seconds, and so on. So {T0 T1 T2 T3} gives the binary time in seconds over a 16-second cycle.
+
+So far this only makes the output a sequence of 16 actions in order. But if some of the layers could be ""if"" gates they might control the T-values and so act as switches and so have more complicated programs.
+
+Do you have any suggestions to improve this? Or has this kind of binary task graph representation been done before in a neural network?
+
+Also, importantly, how would you train such a neural network? (At the moment I just assume that the model is pre-trained, and I am just trying to find a good architecture.)
+"
+"['algorithm', 'computer-vision']"," Title: Given a query image Q and two other images X and Y, how to determine which one is most similar to Q?Body: Given a query image Q and two other images X and Y (you can assume they have more or less the same resolutions if that simplifies the problem), which algorithm would perform extremely well at determining which image between X and Y is most similar to Q, even when the differences are rather subtle? For example, a trivial case would be:
+
+
+- Q = image of mountains, X = image of mountains, Y = image of dogs, therefore it is clear that sim(Q,X) > sim(Q,Y).
+
+
+However, examples of trickier cases would be:
+
+
+- Q = image of a yellow car, X = image of a red car, Y = image of a yellow car, therefore sim(Q,Y) > sim(Q,X) (assuming the car shapes are more or less the same).
+- Q = image of a man standing up in the middle with a black background, X = image of another man standing up in the middle with a black background, Y = image of a woman standing up in the middle with a black background, therefore sim(Q,X) > sim(Q,Y).
+
+
+Which algorithm (or combination of algorithms) would be robust enough to handle even the tricky cases with very high accuracy?
+"
+['computer-vision']," Title: Kalman filter pre inovationBody: I am trying to track LIDAR objects using Kalman filter. The problem is that the innovation has the value 0, which makes the Kalman gain be Infinity. Here is a link with the Kalman equations. The values with which I initialized the measurement and process covariance matrix are listed below. The update code is also shown below. When I debug the code everything is fine until the innovation becomes 0.
+
+this->lidar_R << std_laspx_, 0, 0, 0,
+ 0, std_laspy_, 0, 0,
+ 0, 0, 0, 0,
+ 0, 0, 0, 0;
+
+this->lidar_H << 1.0, 0.0, 0.0, 0.0, 0.0,
+ 0.0, 1.0, 0.0, 0.0, 0.0,
+ 0.0, 0.0, 0.0, 0.0, 0.0,
+ 0.0, 0.0, 0.0, 0.0, 0.0;
+
+P_ << 1000, 0, 0, 0, 0,
+ 0, 1000, 0, 0, 0,
+ 0, 0, 1000, 0, 0,
+ 0, 0, 0, 1000, 0,
+ 0, 0, 0, 0, 1000;
+
+ MatrixXd PHt = this->P_ * H.transpose();
+ //S becomes 0
+ MatrixXd S = H * PHt + R;
+ //S_inv becomes INFINITY
+ MatrixXd S_inv_ = S.inverse();
+ MatrixXd K = PHt * S_inv_;
+
+VectorXd y = Z - Hx;
+
+this->x_ = this->x_ + K*y;
+MatrixXd I = MatrixXd::Identity(x_.size(), x_.size());
+this->P_ = (I - K * H) * this->P_;
+
+"
+"['deep-learning', 'convolutional-neural-networks']"," Title: Normalizing height data for CNNBody: A task I’m working on at the moment requires a CNN with a height map as one of the inputs. This is a matrix of floating point values in which each point is the height of that point above sea level.
+
+I’m having trouble deciding how to normalize this data. I know there are networks that work on depth or distance data but that is different for several reasons:
+
+
+- Height can also be negative (as opposed to depth/distance which starts at 0)
+- Height has a very large range - can get values between -400 and +~9000.
+
+
+For these reasons the common approach to normalisation, simply subtracting the mean and dividing by the standard deviation, will result in the loss of information in most cases (all values will be close to zero).
+
+I thought of maybe subtracting the local mean for each input, rather than a general mean calculated from all the data, but I still don't know what to do with the standard deviation, since dividing by the local standard deviation can result in very “flat” and very “steep” inputs looking the same after normalization.
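+
+To illustrate what I mean, here is a small sketch of my current idea; the fixed global scale of 1000 m is an arbitrary assumption of mine:
+
+import numpy as np
+
+GLOBAL_SCALE = 1000.0  # assumed fixed scale in metres, shared by all samples
+
+def normalize_height(height_map):
+    # centre each input on its own mean elevation, but divide by one global
+    # constant instead of the local standard deviation, so local relief is preserved
+    centred = height_map - height_map.mean()
+    return centred / GLOBAL_SCALE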
+"
+"['neural-networks', 'machine-learning', 'datasets', 'evolutionary-algorithms', 'game-theory']"," Title: Is there a way to predict points on a map?Body: I have a data set with historical information of some events (let's say event A and event B),these events describe the discovery of land mines, the coordinates of the event and the date of the event; is there a way I can use this historical information to predict points (coordinates) where event A or B could happen i.e. where might be still land mines that haven't been found?
+"
+"['reinforcement-learning', 'control-problem', 'on-policy-methods', 'monte-carlo-methods']"," Title: Why is GLIE Monte-Carlo control an on-policy control?Body: In slide 16 of his lecture 5 of the course ""Reinforcement Learning"", David Silver introduced GLIE Monte-Carlo Control.
+
+
+
+But why is it an on-policy control? The sampling follows a policy $\pi$ while improvement follows an $\epsilon$-greedy policy, so isn't it an off-policy control?
+"
+"['deep-learning', 'training', 'python', 'tensorflow']"," Title: Does the model learn from the average of all the data points in the mini-batch?Body: I used the example at - https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/5_DataManagement/tensorflow_dataset_api.py - to create my own classification model. I used different data but the basic outline of datasets was used.
+
+It was important for my data type to shuffle the data and then create the training and testing sets. The problem, however, comes as a result of the shuffling.
+
+When I train my model with the shuffled train set I get a +- 80% accuracy for train and +- 70% accuracy for the test set. I then want to input all the data (i.e. the set that made the training and test set) into the model to view the fully predicted output of this data set that I have.
+
+If this data set is shuffled as the training and testing set was I get an accuracy of around 77% which is as expected, but then, if I input the unshuffled data (as I required to view the predictions), I get a 45% accuracy. How is this possible?
+
+I assume it's due to the fact that the model is learning incorrectly and that it learns that the order of the data points plays a role in the prediction of those data points. But this shouldn't be happening as I am simply trying to (like the MNIST example) predict each data point separately. This could be a mini-batch training problem.
+
+In the example mentioned above, using data sets and batches to train, does the model learn from the average of all the data points in the mini-batch, or does it treat one mini-batch as one data point and learn in that manner (which would mean the order of the data matters)?
+
+Or if there are any other suggestions.
+"
+"['computer-vision', 'yolo', 'bounding-box', 'jaccard-similarity']"," Title: How are IOUs for ground truth boxes in YOLO calculated?Body: I know how IOU works during detection. However, while preparing targets from ground-truth for training, how is the IOU between a given object and all anchor boxes calculated?
+Is the ground truth bounding box aligned with an anchor box such that they share the same center? (width/2, height/2)
+I think this is the case but I want to hear from someone who has better knowledge of how training data is prepared for training in YOLO.
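+
+For reference, this is the computation I have in mind when I say ""share the same center"" (my own sketch, not taken from any YOLO implementation):
+
+def centered_iou(w1, h1, w2, h2):
+    # if two boxes share the same centre, their intersection is simply
+    # min(width) * min(height)
+    inter = min(w1, w2) * min(h1, h2)
+    union = w1 * h1 + w2 * h2 - inter
+    return inter / union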
+"
+"['neural-networks', 'convolutional-neural-networks', 'object-recognition']"," Title: Can translational invariance of CNNs be unwanted if object is likely in certain positions?Body: Various texts on using CNNs for object detection in images talk about how their translation invariance is a good thing. Which makes sense for tasks where the object could be anywhere in the image. Let's say detecting a kitten in household images.
+
+But let's say, you already have some information about the likely position of the object of interest in the image. For example, for detecting trees in a dataset of images of landscapes. Here in most cases, the trees are going to be in the bottom half of the image while in some cases they might be at the top (because it's on a hill or whatever). So you want your neural network to learn that information -- that trees are likely connected to the bottom part of the image (ground). Is this possible using the CNN paradigm?
+
+Thank you
+"
+"['machine-learning', 'deep-learning', 'optimization']"," Title: If Deep Learning is non convex, then why do use a convex loss function?Body: I was just reading through some convex optimization textbooks to hopefully improve my deep learning understanding and come up with new ideas. Halfway through, I decided to Google a bit! It's obvious that deep learning deals with nonconvex functions.
+Here's the question though: If deep learning is non-convex, then why do we apply a convex loss function, such as cross-entropy or least square, to solve a problem under a convex constraint? What am I missing?
+"
+"['neural-networks', 'deep-learning', 'terminology', 'matlab']"," Title: Is it a valid Deep Neural Network?Body: For a regression task, I have sequences of training data and if I define the layers of deep neural network to be:
+
+Layers=[ sequenceInputLayer(featuredimension) reluLayer dropoutLayer(0.05) fullyConnectedLayer(numResponse) regressionLayer]
+
+Is it a valid deep neural network? Or do I need to add an LSTM layer too?
+"
+"['algorithm', 'computer-vision', 'matlab']"," Title: How to use computer vision to find corners of a soccer field based on location coordinates?Body: I want to use computer vision to allow my robot to detect the corners of a soccer field based on its current position. Matlab has a detectHarrisFeatures feature, but I believe it is only for 2D mapping.
+The approach that I want to try is to collect the information of the lines (using line detection), store them in a histogram, and then see where the lines intersect based on their angles.
+My questions are:
+
+
+- How do I know where the lines intersect?
+- How do I find the angles of the lines using computer vision?
+- How do I update this information based on my coordinates?
+
+
+I am in the beginning stages of this task, so any guidance is much appreciated!
+"
+"['neural-networks', 'python', 'keras']"," Title: How to decrease accuracy from 99% to 80%~85% using keras for training a modelBody: How do I decrease the accuracy value when training a model using Keras; which parameters can I change to decrease the value?
+
+My objective is not to actually decrease it, but just to know which parameters influence the accuracy.
+
+sgd = optimizers.SGD(lr=1e-2)
+
+"
+"['machine-learning', 'intelligent-agent', 'architecture']"," Title: How to teach a model-based reflex agent for doing some task using machine learning methods?Body: I would like to know how to teach an agent for performing prediction of the severity of disease and also for alerting patients using machine learning methods.
+
+I found the model-based reflex agent can be used in medical diagnosis in some literature.
+
+May I know which architecture will be good, to make such an agent?
+"
+"['convolutional-neural-networks', 'computer-vision', 'object-detection', 'yolo', 'model-request']"," Title: How can I detect thin objects (like pens and pencils) without a bounding box but only 2 endpoints and the orientation?Body: I am looking to detect thin objects, like pens, pencils, and surgical instruments. The bounding box is not important, but I am looking to see if I can train a model to detect both the object as well as its orientation.
+Typical object detection networks, like R-CNN, YOLO, and SSD encode the class name and bounding boxes. Instead of bounding boxes, I'm looking to encode only 2 points, one starting $x,y$ point and one ending $x,y$ point. The start point for objects is where one would grip the object. For instance:
+
+- The pencil eraser(start point) is pointed 50 degrees to the top right.
+- The surgical instrument is 10 degrees from the x-axis and the handle is pointed to the bottom right.
+- Pen tip (endpoint) is pointing vertically upwards.
+- Fork, the start point would be the grip handle part, and the endpoint would be in the middle where the 4 prongs are.
+
+As long as I can encode the start and endpoints, then I can determine the orientation. I would need to define these points during training.
+The question is whether there is an existing model (MobileNet/Inception/R-CNN) in which I can encode this information. One potential way I was thinking of was to use YOLO: for the bounding box, the top left $x,y$ would be the starting point $x,y$ (handle), whereas the bounding box's width and height would be replaced with the endpoint $x,y$ (pencil writing tip, fork prongs).
+"
+['machine-learning']," Title: WEKA - SimpleKMeans - Manually choose intitial centresBody: Is there a way in the WEKA explorer to manually select the initial centres when using SimpleKMeans clustering?
+"
+['natural-language-processing']," Title: How can I build an AI with NLP that reads and understands documents?Body: I have to read a lot of papers, and I thought that I can use an A.I. to read them and summarize them. Maybe find one that can understand what the papers are talking about it seems a lot to ask.
+
+I think I can use natural language processing. Is it the right choice?
+
+I'm sorry, but I'm new in A.I. and I don't know much about it.
+"
+"['convolutional-neural-networks', 'image-recognition', 'tensorflow', 'python']"," Title: Detecting license plate using tensorflowBody: I'm currently working on license plate recognition. My system consist of 2 stage: (1) License Plate region extraction & (2) License Plate region recognition.
+
+I'm doing (1) with a Raspberry Pi 3 Model B. I find license plate candidates first by merging bounding boxes based on their similarity. In this way, I have only 1~7 license plate region proposals. And it takes less than .3 seconds.
+
+Now I have to reduce the number of region proposals to only around 1~2 so that I can send these images to the server to do job (2). For license plate extraction, I made my own classifier function in TensorFlow and the code is below. It gets a proposed license plate as input.
+
+First, I resize all license plates to [120, 60] and convert them to gray images. And there are 2 classes: 'plate', 'non_plate'. For the non_plate class, I collected various images that might appear as background. I have 181 images for the 'plate' class and 56 images for 'non_plate' for now. I have trained for about 3000 steps so far and the current loss is .53.
+
+When I did prediction on the test set, I encountered the problem that, for some plate images, it doesn't recognize the license plate, even though it is very obviously a license plate image to my eyes. It is okay for me to wrongly recognize a non-plate image as a plate, but it is a problem if it wrongly recognizes a plate as non_plate, because it will not be sent to the server to be fully recognized.
+
+It happens in about 10 out of 100 test images and this rate is far worse than I expected. I need help addressing this problem. Would there be any improvement that I can make?
+
+(1) Is my training set too small to classify between license plate and non-license-plate? Or is the number of steps too small?
+
+(2) Is my graph structure bad? I needed a small graph structure so that my Raspberry Pi can recognize in less than 1 second. Could you suggest a better structure if it is bad?
+
+(3) Is it bad to resize every proposed image to [120, 60] to be used as input for the graph? I think it loses some information. But isn't this close to the ROI pooling used in Fast R-CNN?
+
+ inputs=tf.reshape(features[FEATURE_LABEL],[-1,120 , 60 ,1],name=""input_node"") #120 x 60 x 1, which is gray
+
+conv1=tf.layers.conv2d(inputs=inputs,
+ filters=3,
+ kernel_size=[3,3],
+ padding='same',
+ activation=tf.nn.leaky_relu
+ )
+#conv1 output shape: (batch_size,120,60,3)
+
+pool1=tf.layers.max_pooling2d(inputs=conv1,pool_size=[2,2],strides=2,padding='valid')
+
+#pool1 output shape: (batch_size,60,30,3)
+
+conv2=tf.layers.conv2d(inputs=pool1,filters=6,kernel_size=[1,1],padding='same',activation=tf.nn.leaky_relu)
+
+#conv2 output shape: (batch_size, 60,30,6)
+
+pool2=tf.layers.max_pooling2d(inputs=conv2,pool_size=[2,2],strides=2,padding='valid')
+
+#pool2 output shape: (batch_size, 30,15,6)
+
+conv3=tf.layers.conv2d(inputs=pool2,filters=9,kernel_size=[3,3],padding='same',activation=tf.nn.leaky_relu)
+
+#conv3 output shape: (batch_size, 30,15,9)
+
+pool3=tf.layers.max_pooling2d(inputs=conv3,pool_size=[2,2],strides=2,padding='valid')
+
+#pool3 output shape: (batch_size, 15,7,9)
+
+
+#dense fully connected layer
+pool2_flat=tf.reshape(pool3,[-1,15*7*9]) #flatten pool3 output to feed in dense layer
+
+dense1=tf.layers.dense(inputs=pool2_flat,units=120,activation=tf.nn.relu)
+
+logits=tf.layers.dense(dense1,2) #input for softmax layer
+
+
+
+[training non-plate image example]
+[training plate image example (region-proposed image)]
+"
+"['neural-networks', 'q-learning']"," Title: Training RL agent on timeseries trading data with Continous Deep Q or NAFBody: I am writing an MDP based agent that is supposed to learn to place bids and asks in a trading environment. The system requests 2 values (mWh energy and $, both being positive or negative).
+Every timestep the agent has a certain volume that it has to either buy or sell.
+
+I tried setting these two values as action values, giving it 4 individual ones (one pair for buy price and amount, one pair for sell price and amount).
+
+I used the DDPG and NAF agents from keras-rl here but both aren't working for me. I tried a number of reward functions too:
+
+
+- direct cash reward: average price of market for required energy vs what the agent achieved
+- shifting balancing price: first emphasize that the broker balances its portfolio (i.e. orders the amount it has to) and later optimize for price per mWh
+- simple core: as a test I ran a reward function that just rewards the agent to be close to the actions [0.5, 0.55]
+
+
+All three failed again.
+
+
+- LR : tried between 0.01 and 0.00001
+- Layers: Tried anything between 1 layer 1 cell and 5 layer 128 cells
+- Types: I used both Dense and LSTM cells with according input shapes
+
+
+Symptoms: Generally it looks like the system is not learning anything. I am unsure why. How does the reward function have to be structured to incentivize the system to at least move in the correct direction? In particular, the reward that told the agent to be close to [0.5, 0.5], based simply on the squared difference to this point, should have worked in my eyes.
+
+
+"
+"['pattern-recognition', 'incremental-learning']"," Title: Will training an AI still work if the input data is somewhat sparse?Body: I'm looking at writing an AI agent for pattern recognition.
+
+I want to be able to constantly feed new data to the AI to continuously train it as new data may have new patterns.
+
+My problem, though, is that my input feed may break once in a while (the data comes from a remote computer) and thus some of the data will go missing. The other computer sends me real-time data so when the connection goes down, any new data while disconnected goes missing as far as the AI agent is concerned. (at this point, I'm not looking at fixing the gaps, although ultimately, reducing them is one of my goals, at this point I have to pretend it's not possible to accomplish.)
+
+What kind of impact does missing data have on a pattern recognition AI?
+"
+"['reinforcement-learning', 'models']"," Title: Why do we need a model of the environment in Dyna?Body: In chapter 8 of ""Reinforcement Learning: An Introduction"" by Sutton and Barto, it is stated that Dyna needs a model to simulate the environment.
+
+But why do we need a model? Why can't we just use the real environment itself? Wouldn't it be more helpful to use the real environment instead of a fake one?
+"
+"['python', 'decision-trees']"," Title: What do the values of the leaves of the decision tree represent?Body: This is more of a technical question rather than a practical one.
+
+I've exported a decision tree made with python/scikit learn and would like to know what the ""value"" field of each leaf corresponds to.
+
+
+"
+['convolutional-neural-networks']," Title: Detecting Keypoint of 3D model, and distance between themBody: I am very new to AI, I have a set of 3D human models that I would like to train the algorithm to identify wrist, upper arm, lower arms, etc, and distance between them.
+
+From my understanding, this is a regression problem. But with my very limited knowledge, most tutorials online only show me cat and dog classification problems.
+
+Do you have any clue about what I should research next? There are some papers saying to convert the 3D model to images and use a convolutional neural network for training.
+
+p/s: Please don't downvote me, I am too young and too lost in this field.
+"
+"['neural-networks', 'reinforcement-learning']"," Title: What are good action outputs for reinforcement learning agents acting in a trading environment?Body: I am trying to build an agent that trades commodities in a exchange setting. What are good ways to map the action output to real world actions? If the last layer is a tanh
activation function, outputs range between [-1,+1]
. How do I map these values to real actions? Or should I change the output activation to linear
and then directly apply the output as an action?
+
+So let's say the output is tanh activated and it's -0.4, 5
. I could map this to:
+- -0.4 --> sell 40% of my holdings for 5$ per unit
+- -0.4 --> sell 40% for 5$ in total
+
+if it was linear, I could expect larger outputs (e.g -100, 5
). Then the action would be mapped to:
+- sell 100 units for 5$ each
+- sell 100 units for 5$ total
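+
+To make the first mapping concrete, here is a small sketch of how I picture translating a tanh output into an order (my own assumptions, not an established convention):
+
+def tanh_action_to_order(action, holdings):
+    # action[0] in [-1, 1]: fraction of holdings to sell (negative) or buy (positive)
+    # action[1]: interpreted as the price per unit
+    fraction, price = float(action[0]), float(action[1])
+    quantity = abs(fraction) * holdings
+    side = 'sell' if fraction < 0 else 'buy'
+    return side, quantity, price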
+"
+"['convolutional-neural-networks', 'terminology', 'representation-learning']"," Title: What is feature embedding in the context of convolutional neural networks?Body: What are feature embeddings in the context of convolutional neural networks? Is it related to bottleneck features or feature vectors?
+"
+"['bayesian-networks', 'bayesian-statistics']"," Title: Are Bayesian networks important to learn in 2018?Body: I study AI by myself with the book ""Artificial Intelligence: A Modern Approach"". I've just finished the chapters about the Bayesian network and probabilities, and I found them very interesting. Now, I want to implement different algorithms and test them in different cases and environments.
+
+Is it worth it to spend time on these techniques?
+"
+['genetic-algorithms']," Title: Genetic Algorithm to Play Arkanoid(Nes) Possible Crossover and Fitness?Body: I am using the Fceux emulator to create a Genetic Algorithm in Lua to play the 'Arkanoid' game. It is based on Atari Breakout.
+
+A member of my population contains a string of 0's and 1's (population size: 200).
+Consider one such member.
+Every 10 frames a bit is read from the string (the length of the string is about 1000).
+If it is 0 the paddle moves left; if it is 1 the paddle moves right for the next 10 frames.
+
+Now I wrote an genetic algorithm that tries to find the best sequence of inputs to play the game.
+
+I have experimented with three types of fitness, One is to achieve maximum score, one is to try to reduce number of blocks to a minimum and the last one is to try to stay alive as long as possible.
+
+None of the three fitness seem to work.
+
+Then I thought that something with my crossover might be wrong.
+
+Every generation, I print out the average fitness of all members. Some generations it increases, while in some generations it decreases.
+I have tried changing the population size to 50,100,200,300.
+
+Mutation in my algorithm has a 1% Chance(If Mut_rate=1) that each of the bit will be replaced with its opposite bit.
+
+Now coming to the crossover, I have used yet again many methodologies.
+One of them is to just select the top 20% or 30%(cr_rate)(according to their fitness) to pass on to the next generation and killing the remaining ones.
+
+Another method is to add the top percentile to the population and use the remaining population to swap a few bits with top ones and add them into the next generation.
+
+function crossover(population,rate)
+ local topp=math.floor(rate*(#population));
+ top={}
+ for i=1,topp do
+ table.insert(top,population[i])
+ end
+for i=1, #population do
+ local p1 = math.random(1,topp);
+ local p2 = math.random(1,topp);
+ --print(top[p1]);
+ --print(top[p2]);
+ if top[p1][2] == top[p2][2] then
+ local rval = math.random(1, 10) > 5;
+ if rval then
+ population[i] = top[p1];
+ else
+ population[i] = top[p2];
+ end
+ elseif top[p1][2] > top[p2][2] then
+ population[i] = top[p1];
+ else
+ population[i] = top[p2];
+ end
+ population[i][2]=0;
+end
+--[[
+for i=topp+1,#population do
+ local p1 = math.random(1,topp);
+ local p2 = math.random(1,#population);
+ local s='';
+ local flag=0;
+ s=string.sub(top[p1][1],1,no_controls/2)..string.sub(population[p2][1],(no_controls/2)+1,no_controls);
+ population[i][1]=s;
+ population[i][2]=0;
+ end
+ --]]
+end
+
+
+Population is the table of population where each member has an input string and a fitness value.(Sorted , max fitness is first).
+Rate is the percentage to select the top performers.
+no_controls is the size of input string.
+The commented section of the code is where I perform the swap.
+
+Here is the mutation function.
+
+function mutation(population,mut_rate)
+
+ local a=0;
+ local b=1;
+ for i=1, #population do
+ for j=1, #(population[i][1]) do
+ if math.random(1, 100) <= mut_rate then
+ if string.sub(population[i][1],j,j)=='1' then
+ population[i][1] = string.sub(population[i][1],1,j-1)..a..string.sub(population[i][1],j+1);
+ else
+ population[i][1] = string.sub(population[i][1],1,j-1)..b..string.sub(population[i][1],j+1);
+ end
+ end
+ end
+ end
+end
+
+
+Mut_rate is 1. And crossover rate is 0.2 or 0.5.
+
+I have tried changing the mutation rate from 0 to 20.
+I have also tried to change the crossover rate as 0.2,0.5,0.7.
+And the fitness using no_blocks, score, time_alive.
+When I run the algorithm, the average fitness of the population first increases slightly , then decreases after a few generations and then remains constant forever.
+
+The paddle also seems to be performing the same moves over and over again, which made me think that there might not be enough variation.
+
+I need help, because I have been stuck on this for a few days now.
+I need suggestions on what would be a suitable crossover and mutation function and a perfect fitness function.
+
+Thanks.
+"
+"['natural-language-processing', 'long-short-term-memory', 'word-embedding', 'word2vec']"," Title: How should the output layer of an LSTM be when the output are word embeddings?Body: I'm having trouble grasping how to output word embeddings from an LSTM model. I'm seeing many examples using a softmax activation function on the output, but for that I would need to output one hot vectors as long as the vocabulary (which is too long). So, should I use a linear activation function on the output to get the word embeddings directly (and then find the closest word) or is there something I'm missing here?
+"
+"['reinforcement-learning', 'game-ai', 'self-play', 'tic-tac-toe']"," Title: How can both agents know the terminal reward in self-play reinforcement learning?Body: There seems to be a major difference in how the terminal reward is received/handled in self-play RL vs "normal" RL, which confuses me.
+I implemented TicTacToe the normal way, where a single agent plays against an environment that manages the state and also replies with a new move. In this scenario, the agent receives a final reward of $+1$, $0$ and $-1$ for a win, draw, and loss, respectively.
+Next, I implemented TicTacToe in a self-play mode, where two agents perform moves one after the other, and the environment only manages the state and gives back the reward. In this scenario, an agent can only receive a final reward of $+1$ or $0$, because, after his own move, he will never be in a terminal state in which he lost (only agent 2 could terminate the game in such a way). That means:
+
+- In self-play, episodes end in such a way that only one of the players sees the terminal state and terminal reward.
+
+- Because of point one, an agent cannot learn that he made a bad move that enabled his opponent to win the episode, simply because he does not receive a negative reward.
+
+
+This seems very weird to me. What am I doing wrong? Or if I'm not wrong, how do I handle this problem?
+"
+"['deep-learning', 'reinforcement-learning', 'deep-rl', 'experience-replay']"," Title: What is experience replay in laymen's terms?Body: I've been reading Google's DeepMind Atari paper and I'm trying to understand the concept of ""experience replay"". Experience replay comes up in a lot of other reinforcement learning papers (particularly, the AlphaGo paper), so I want to understand how it works. Below are some excerpts.
+
+
+ First, we used a biologically inspired mechanism termed experience replay that randomizes over the data, thereby removing correlations in the observation sequence and smoothing over changes in the data distribution.
+
+
+The paper then elaborates as follows (I've taken a screenshot, since there are a lot of mathematical symbols that are difficult to reproduce):
+
+
+
+What is experience replay and what are its benefits in laymen's terms?
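+
+For context, my current mental model of the mechanism is roughly this small sketch (my own simplification, not code from the paper):
+
+import random
+from collections import deque
+
+replay_buffer = deque(maxlen=100000)   # fixed-size memory of past transitions
+
+def store(state, action, reward, next_state, done):
+    replay_buffer.append((state, action, reward, next_state, done))
+
+def sample_minibatch(batch_size=32):
+    # uniform random sampling breaks the correlation between consecutive steps
+    return random.sample(replay_buffer, batch_size)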
+"
+"['reinforcement-learning', 'deep-rl', 'dqn', 'experience-replay', 'mini-batch-gradient-descent']"," Title: When using experience replay, do we update the parameters for all samples of the mini-batch or for each sample in the mini-batch separately?Body: I've been reading Google's DeepMind Atari paper and I'm trying to understand how to implement experience replay.
+Do we update the parameters $\theta$ of function $Q$ once for all the samples of the minibatch, or do we do that for each sample of the minibatch separately?
+According to the following code from this paper, it performs the gradient descent on loss term for the $j$-th sample. However, I have seen other papers (referring to this paper) that say that we first calculate the sum of loss terms for all samples of the minibatch and then perform the gradient descent on this sum of losses.
+
+"
+"['philosophy', 'artificial-consciousness', 'intelligence-testing', 'self-awareness']"," Title: How will we recognize a conscious machine?Body: How will we recognize a conscious machine (or AI)? Is there any consciousness test? For example, if a machine is aware of its previous experiences, can it be considered conscious?
+"
+"['ai-design', 'classification']"," Title: Can this problem be solved by AI?Body: I am looking in to building a kind of troubleshooting web application. It would be a form that starts with a first question. Depending on the answer, you get a follow up question and so on until the app has qualified your problem in to a small group of problems. To me it sounds a bit like a decision tree but what I have read about them is that it is the internal structure of a model and not what I am looking for. My guess is that a model needs all the input variables at once and not like I am looking for that feeds it one parameter at a time.
+
+At this time I do not know of any data available. With the client we could create the desired resulting problem groups and the questions as well.
+
+Would it be possible to solve this with the help of AI instead of hand coding a lot of case switch statements? If so could you point me to what to read up on?
+"
+['neat']," Title: Speciation in NEAT - Advantages of keeping stable number of speciesBody: I found several methods for setting the compatibility distance in NEAT: some normalize it, some don't, some automatically adjust it.
+
+In a few tests I am running, using a normalized static compatibility distance, the number of species increases very rapidly, suggesting that the compatibility distance should be adjusted (e.g. increased).
+
+However, I haven't found how to determine a reasonable number of species for my population, what the benefits of having many/few species are, or what the benefits of a stable vs. changing number of species are.
+"
+"['neural-networks', 'backpropagation', 'cross-entropy', 'sigmoid', 'numerical-algorithms']"," Title: How is division by zero avoided when implementing back-propagation for a neural network with sigmoid at the output neuron?Body: I am building a neural network for which I am using the sigmoid function as the activation function for the single output neuron at the end. Since the sigmoid function is known to take any number and return a value between 0 and 1, this is causing division by zero error in the back-propagation stage, because of the derivation of cross-entropy. I have seen over the internet it is advised to use a sigmoid activation function with a cross-entropy loss function.
+So, how this error is solved?
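+
+Is the solution simply to clip the sigmoid output away from exactly 0 and 1 before taking the logarithm, something like this sketch (the epsilon value is an arbitrary choice of mine)?
+
+import numpy as np
+
+def binary_cross_entropy(y_true, y_pred, eps=1e-12):
+    # clip predictions so that log() never receives zero
+    y_pred = np.clip(y_pred, eps, 1.0 - eps)
+    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))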
+"
+"['neural-networks', 'convolutional-neural-networks', 'ai-design', 'algorithm', 'performance']"," Title: Relative compute time for each type of layer in a neural networkBody:
+
+Hello,
+I would like to know whether this picture from the paper Distributed Training of Deep Neural Networks: Theoretical and Practical Limits of Parallel Scalability is valid.
+
+Questions:
+1) Does InnerProduct (Fully connected) layer actually take more time to compute in a neural network than Convolution?
+
+
+2) Is assessing GLOPs/time a good way of estimating performance of different types of layers in a neural network on any hardware? (Conv, FC etc.)
+
+
+3) Does anyone know where I can find GFLOPs vs compute time for different types of layers across GPUs/CPUs? (I know DeepBench, any other suggestions would be great too)
+"
+"['getting-started', 'math', 'fuzzy-logic']"," Title: Defining formula for fuzzy equationBody: I'm learning fuzzy logic and more or less understand the basic concept, but i'm having a hard time understanding how to apply it to a method. I tried browsing online for explanation on how to use it, but only found some implementation and test case using the basic form of 4 rules and 3 variables, and 2 rules per variable. Anyway this is an example case, i will use Tsukamoto method.
+
+In this case I actually have 6 rules and 3 variables with 3 rules per variable, but I will only explain 1 of the variables because I think the rest will have the same solution. One of my 3 variables is ""size""; the range is 0-2 for small and 7-10 for large. The current condition is size = 6.5. The rules are as follows (simplified to only use this variable):
+
+
+- [R1] size = small
+- [R2] size = medium
+- [R3] size = large
+
+
+What I want to know is:
+
+
+- how do I define the formula for medium (the middle rule, if the case is different)?
+- what if there are more than 3 rules (i.e. small, medium, large, extra-large)?
+
+
+What I understand is that if there are only 2 rules I can use this formula:
+
+
+- small[x]=(max-x)/(max-min)
+- large[x]=(x-min)/(max-min)
+
+
+My current approach to this problem is as follow:
+
+small[x]=1; x<=2
+
+medium[x]=(max-x)/(max-min); 2 < x < 7
+
+large[x]=0; x>=7
+
+Is this correct? Also, can you refer me to some source to study this? As I mentioned before, I can only find some implementations and basic explanations; either there is no online source for this or I don't know what to search for. Sorry if it's hard to understand; I can edit and post the whole problem if you want. Thanks in advance.
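+
+To make my question about the middle rule concrete, here is a small Python sketch of one possible ""medium"" membership function (the triangular shape and the peak at 4.5 are only my assumptions):
+
+def medium(x, lo=2.0, peak=4.5, hi=7.0):
+    # assumed triangular membership: rises from lo to peak, falls from peak to hi
+    if x <= lo or x >= hi:
+        return 0.0
+    if x <= peak:
+        return (x - lo) / (peak - lo)
+    return (hi - x) / (hi - peak)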
+
+Extra question: what is the name of the algorithm which can be used to solve the bridge-crossing puzzle (the one with a timer, a maximum number of people, and so on)? I forgot the name.
+"
+['logic']," Title: How is uniqueness quantification translated in First Oder LogicBody: I have this following natural language statement:
+
+""There is only one house in area1 the size of which is less than 200m².""
+
+which is mistranslated to FOL:
+
+∃x.(house(x) ∧ In(x,area1) ∧ ∀y.(house(y) ∧ In(y,area1) ∧ size(y) < 200 -> x=y))
+
+
+This translation is wrong according to my lecturer, because it is not necessary that the size of x be less than 200. The statement would still be true if there were only houses which are bigger.
+
+I have two questions:
+
+
+- I don't get the FOL translation at all and don't see where the uniqueness part is expressed.
+Translating it back I get: ""if all houses in area1 have a size less than 200m², then there exists one house which is equal to all houses""??
+- Why is it not necessary that the size of x is less than 200, when the statement above clearly says that there must exist one house with a size less than 200?
+
+"
+"['prolog', 'expert-systems', 'r']"," Title: Defining rules for an expert systemBody: I'm doing a project for my last university examination but I'm having some troubles! I'm making an expert system who should be able to assemble a computer after asking some questions to the user. It works but according to my teacher I need to define more rules, could you give me some suggestions please? I have facts like these:
+
+processor(P, Proc_price, Price_range),
+motherboard(M, Motherboard_price, Price_range),
+ram(R, Ram_price, Price_range),
+case(C, Case_price, Price_range),
+ali(A, Ali_price, Price_range),
+video_card(V, Vga_price, Price_range),
+ssd(S, Ssd_price, Price_range),
+monitor(D, Monitor_price, Price_range),
+hdd(H, Hdd_price, Price_range).
+
+
+I ask the user these questions: 1) choose the price range, 2) choose the display size, 3) choose the hard disk size. Then I ask 3 questions about computer usage to define the user: 1) do you surf the internet? 2) do you play games? 3) do you use editing programs?
+
+use(gaming) :- ask(""Do you play games? (y/n)"").
+
+ use(editing) :- ask(""Do you use editing programs? (y/n)"").
+
+ use(surfing) :- ask(""Do you surf internet?(y/n)"").
+
+ user(base) :-
+ use(surfing), \+ use(gaming), \+ use(editing).
+
+ user(gamer) :-
+ use(gaming), use(surfing), \+ use(editing).
+
+ user(professional) :-
+ use(editing), \+ use(gaming), use(surfing).
+
+
+I should ask more questions about the user in order to make the user definition more complex, and add some rules. Please help me, I'm desperate!
+"
+"['ai-design', 'autonomous-vehicles', 'social', 'automation']"," Title: What functionality, does control look like in autonomous vehicles levels 4 and 5?Body: We are doing a research design project on autonomous vehicles and have some questions on AV Levels 4/5; specifically on the roles, impacts and consequences of AV on society, government, users and other stakeholders.
+
+We're currently stuck on this main question:
+
+Q: What, functionally, does control look like in AV levels 4 and 5?
+
+For example, is the whole purpose of a level 4/5 that a user has no input into the control?
+
+Could a driver in AV (level 5) stop in an emergency, or say they want to ""take corners harder, speed up, slow down""?
+
+Could I choose to change the equi-distance between my AV and the others around me because I like space?
+
+We're wondering about what functionally, does AV level 4/5 offer a user; and what it looks like?
+
+
+
+Context:
+
+Our remit is within the world of design (design thinking), not specifically technology, or expert system functionality. We're looking at the issue from a design perspective; who does it impact, who are the stakeholders, what are the consequences and impacts. What role does a driver have an in level 5? Could an auto-manufacturer want to give drivers control in level 5? How do emergency services act in these situations? What are the touchpoints to society and whom does it impact and what does it say about the design of AV for the future of society.
+"
+"['neural-networks', 'machine-learning', 'math', 'activation-functions']"," Title: Which functions can be activation functions?Body: What are the required characteristics of an activation function (in a neural network)? Which functions can be activation functions?
+
+For example, which of the functions below can be used as an activation function?
+
+$$f(x) = \frac{2}{\pi} \tan^{-1}(x)$$
+
+which looks like
+
+
+
+or
+
+$$f(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-\frac{t^2}{2}} dt$$
+
+which looks like
+
+
+"
+"['search', 'breadth-first-search', 'branching-factors']"," Title: How can we find the number of node expansions performed by BFS in this hexagonal map?Body: An agent aims to find a path on a hexagonal map, with an initial state $s_0$ in the center and goal state $s^*$ at the bottom as depicted below.
+
+The map is parametrized by the distance $n \geq 1$ from $s_0$ to any of the border cells ($n = 3$ in the depicted example). The agent can move from its current cell to any of the 6 adjacent cells.
+How can we find the number of node expansions performed by BFS without duplicate detection, and with duplicated detection as a function of $n$?
+I know that the branching factor for the map would be 6 because the agent can move in 6 directions, so for a depth of $n$ we get $O(b^n) = O(6^n)$ without duplicate detection, but what is the number of node expansions with duplicate detection with BFS?
+"
+"['reinforcement-learning', 'deep-rl', 'dqn', 'self-play', 'tic-tac-toe']"," Title: Why does self-playing tic-tac-toe not become perfect?Body: I trained a DQN that learns tic-tac-toe by playing against itself with a reward of -1/0/+1 for a loss/draw/win. Every 500 episodes, I test the progress by letting it play some episodes (also 500) against a random player.
+As shown in the picture below, the net learns quickly to get an average reward of 0.8-0.9 against the random player. But, after 6000 episodes, the performance seems to deteriorate. If I play manually against the net, after 10000 episodes, it plays okay, but by no means perfect.
+Assuming that there is no hidden programming bug, is there anything that might explain such a behavior? Is there anything special about self-play in contrast to training a net against a fixed environment?
+
+Here further details.
+The net has two layers with 100 and 50 nodes (and a linear output layer with 9 nodes), uses DQN and a replay buffer with 4000 state transitions. The shown epsilon values are only used during self-play, during evaluation against the random player exploration is switched off. Self-play actually works by training two separate nets of identical architecture. For simplicity, one net is always player1 and the other always player2 (so they learn slightly different things). Evaluation is then done using the player1 net vs. a random player which generates moves for player2.
+"
+"['reinforcement-learning', 'atari-games']"," Title: When can we say an RL algorithm learns an Atari game?Body: If an Atari game's rewards can be between $-100$ and $100$, when can we say an agent learned to play this game? Should it get the reward very close to $100$ for each instance of the game? Or it is fine if it gets a low score (say $-100$) at some instances? In other words, if we plot the agent's score versus number of episodes, how should the plot look like? From this plot, when can we say the agent is not stable for this task?
+"
+"['machine-learning', 'classification']"," Title: How to implement AI/ML to classify various types of filesBody: I am working on a task that requires me to classify a large amount of mixed files on a backup drive (more than 10 TB with more than 32 million files) based on content. The included file types are documents, images, videos, executable, and pretty much everything in between.
+
+I am also required to create new tags or metadata that will allow for automatic classification of new files. It'd also allow for manual input of category. For each input I give to the system, the system would learn and improve its classification.
+
+Here is what I have come up with so far:
+
+
+- Documents: classify using existing categories with packages like Nltk on Python. Alternatively, first run topic modeling using LDA or NMF and then classify.
+- Images: use CNN. In case of unknown label, use VAE to cluster the images.
+- Videos and other types of files: I do not know how to approach this.
+
+
+Since I am not sure about my approach, any input is greatly appreciated.
+"
+"['deep-learning', 'training', 'papers', 'generative-adversarial-networks', 'discriminator']"," Title: Can some one help me understand this paragraph from Nvidia's progressive GAN paper?Body: In the paper Progressive growing of gans for improved quality, stability, and variation (ICLR, 2018) by Nvidia researchers, the authors write
+
+Furthermore, we observe that mode collapses traditionally
+plaguing GANs tend to happen very quickly, over the course of a dozen minibatches. Commonly
+they start when the discriminator overshoots, leading to exaggerated gradients, and an unhealthy
+competition follows where the signal magnitudes escalate in both networks. We propose a mechanism to stop the generator from participating in such escalation, overcoming the issue (Section 4.2)
+
+What do they mean by "the discriminator overshoots" and "the signal magnitudes escalate in both networks"?
+My current intuition is that the discriminator gets too good too soon, which causes the generator to spike and try to play catch up. That would be the unhealthy competition that they are talking about. Mode collapse is the side effect where the generator has trouble playing catch up and decides to play it safe by generating slightly varied images to increase its accuracy. Is this way of interpreting the above paragraph correct?
+"
+"['classification', 'tensorflow', 'getting-started', 'unsupervised-learning', 'software-evaluation']"," Title: Classifying non-labeled data with high dimensionalityBody: Disclaimer: I am a novice in the world of machine learning, so please excuse my ignorance.
+
+My dataset consists of things like age, days since last visit, etc. This information is medical related. None of which is geometrical, just data pertaining to particular clients.
+
+The goal is to classify my dataset into three labels. The dataset is not labeled, meaning I'm dealing with an unsupervised learning problem. My dataset consists of ~20,000 records, but this will linearly increase overtime. The data is nearly all floats, with some being strings that can easily be converted into a float. Using this cheat sheet for selecting a solution from the scikit site, a KMeans Cluster seems like potential solution, but I've been reading that having high dimensionality can render the KMeans Cluster unhelpful. I'm not married to a particular implementation either. I've currently got a KMeans Cluster implementation using TensorFlow in Python, but am open for alternatives.
+
+My question is: what would be some solutions for me to further explore that might be more optimal for my particular situation?
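+
+For reference, this is roughly the shape of what I am considering (scikit-learn used here purely as an illustration, since my current implementation is in TensorFlow; the feature matrix is a placeholder):
+
+import numpy as np
+from sklearn.preprocessing import StandardScaler
+from sklearn.cluster import KMeans
+
+X = np.random.rand(20000, 12)                   # placeholder for my ~20,000 records
+X_scaled = StandardScaler().fit_transform(X)    # put all features on a comparable scale
+labels = KMeans(n_clusters=3, random_state=0).fit_predict(X_scaled)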
+"
+"['classification', 'decision-trees']"," Title: Decision tree: more than 2 classes, how to represent elements that are in a class vs ones that aren't?Body: I'm building a decision tree and would like to separate (for example) the elements that are in class 0 from those in classes 1 and 2, case in point:
+
+import numpy as np
+import pandas as pd
+from sklearn.model_selection import train_test_split
+from sklearn.tree import DecisionTreeClassifier
+
+df = pd.DataFrame(np.random.randn(500,2),columns=list('AB'))
+cdf = pd.DataFrame(columns=['C'])
+cdf = pd.concat([cdf,pd.DataFrame(np.random.randint(0,3, size=500), columns=['C'])])
+#df=pd.concat([df,cdf], axis=1)
+(X_train, X_test, y_train, y_test) = train_test_split(df,cdf,test_size=0.30)
+y_train=y_train.astype('int')
+classifier = DecisionTreeClassifier(criterion='entropy',max_depth = 2)
+classifier.fit(X_train, y_train)
+y_pred = classifier.predict(X_test)
+
+
+C represents the class of an element, A and B are two variables that define the element; how can I build a tree that, instead of dividing results into C=0, C=1 or C=2, divides them into C=0 and C!=0?
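+
+In other words, I am wondering whether something along these lines (just a sketch, reusing the variables above) is the right way to pose it:
+
+y_binary = (cdf['C'] != 0).astype(int)   # 1 means C != 0, 0 means C == 0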
+"
+['image-recognition']," Title: What does it mean ""derivative of an image""?Body: I am reading a book about OpenCV, it speaks about some derivative of images like sobel
. I am confused about image derivative! What is derived from? How can we derived from an image? I know we consider an image(1-channel) as a n*m matrix with 0 to 255 intensity numbers. How can we derive from this matrix?
+EDIT: a piece of text of the book:
+
+Derivatives and Gradients
+One of the most basic and important convolutions is computing derivatives
+(or approximations to them). There are many ways to do this, but only a few
+are well suited to a given situation.
+In general, the most common operator used to represent differentiation is the
+Sobel derivative operator. Sobel
+operators exist for any order of derivative as well as for mixed partial
+derivatives (e.g., ∂ 2 /∂x∂y).
+
+"
+"['neural-networks', 'convolutional-neural-networks', 'comparison']"," Title: How to make a fair comparison of a convolutional neural network (cNN) vs a mutlilayer perceptron (MLP)?Body: I'm working with deep learning on some EEG data for classification, and I was wondering if there's any systematic/mathematical way to define the architecture of the networks, in order to compare their performance fairly.
+
+Should the comparison be at the level of neurons (e.g. number of neurons in each layer), or at the level of weights (e.g. number of parameters for training in each type of network), or maybe something else?
+
+One idea that emerged was to construct one layer for the MLP for each corresponding convolutional layer, based on the number of neurons after the pooling and dropout layers.
+
+Any ideas? If there's any relative work or paper regarding this problem I would be very grateful to know.
+
+Thank you for your time
+
+Konstantinos
+"
+"['machine-learning', 'reference-request', 'math', 'applications', 'evolutionary-algorithms']"," Title: Has the Fibonacci series or the golden ratio been applied in any way in AI?Body: I have been looking at the Fibonacci series, the golden ratio, and its uses in nature, like how flowers and animals grow based on the series.
+I was wondering whether we could use the Fibonacci series and the golden ratio in any way in AI, especially in evolutionary algorithms. Any ideas or insights?
+Is this research material? If so where can we start?
+"
+"['ai-design', 'philosophy', 'emotional-intelligence']"," Title: How can we define common sense in an AI agent?Body: In our lives, we meet different people and describe their common sense based on how they act on a situation. For example, highly extrovert people are able to deal with people without any awkwardness. For them, an action in how to deal with people come as common sense. But, in the case of scientists, approach to solving a problem may be common sense which ordinary people cannot see.
+
+How can we define common sense in an AI agent?
+"
+"['convolutional-neural-networks', 'image-recognition', 'art-aesthetics']"," Title: Which photo is more artistic?Body: I would like to develop a machine learning algorithm, given two photos, that can decide which image is more ""artistic"".
+
+I am thinking about somehow combining two images, giving them to a CNN, and getting an output of 0 (the first image is better) or 1 (the second image is better). Do you think this is a valid approach? Or could you suggest an alternative way to do this? Also, I don't know how to combine two images.
+
+Thanks!
+
+Edit: Let me correct ""artistic"" as ""artistic according to me"", but it doesn't matter, I am more interested in the architecture. You can even replace ""artistic"" with something objective. Let's say I would like to determine which photo belongs to a more hotter day.
+"
+"['models', 'decision-theory', 'cognitive-science', 'brain']"," Title: What is the state of the art in models of how the human brain performs goal-directed decision making? Can these models' principles be applied to AI?Body: What is the state of the art in models of how the human brain performs goal-directed decision making? Can these models’ principles and insights be applied to the field of Artificial Intelligence, e.g. to develop more robust and general AI algorithms?
+"
+"['reinforcement-learning', 'tensorflow', 'game-ai']"," Title: Issue with simple game AIBody: A few months ago I made a simple game that is similar to the dinosaur game in Google Chrome - you jump over obstacles, or don't jump over levitating obstacles, and jump to collect bitcoins, which can be placed at 5 different heights. I used a very lightweight NN written by NYU professor Dan Shiffman, and within a few days the game and AI were done, starting off with a population of 200 jumpers, and a genetic algorithm (fitness function (points are given for avoiding obstacles and gathering bitcoins) and mutation), and it worked as it should.
+
+However, this was only when the bitcoins and obstacles were not near each other, which I've been struggling with ever since.
+
+So, I made a ""training ground"" where I put first a levitating obstacle, then a grounded one, and then a bitcoin after it, and then a bitcoin above a fourth grounded obstacle, and no matter how many times and how long I'd leave it to train, I'd always end up with identical behavior:
+
+The first 3 obstacles are properly avoided, the first bitcoin is collected, and then jumpers would jump too early, land before the fourth ""bitcoin"" obstacle, and jump again, always crashing at almost the same place (across all generations, so even if I'd restart the training, they crash at the same place in the obstacle, with a deviation of a few pixels up or down).
+I added multilayer support to the NN, no improvements.
+
+Today I replaced the NN with tensorflow.js, and I am getting identical behaviour.
+
+My inputs are:
+
+
+- distance to next obstacle
+- altitude of next obstacle
+- distance to next star
+
+
+(for simplicity I removed the altitude of stars from the input, and keep them at a constant altitude)
+
+I have 2 hidden layers (5 and 6 neurons), and 1 neuron in the output, which determines if the jumper should jump.
+
+My only idea is that a neuron that decides when to jump because of the obstacle activates alongside the neuron that decides when to jump because of the bitcoin; their weights are summed up and a decision to jump too early is made.
+
+I'll give somewhat of a (maybe bad) analogy:
+
+If it takes you 1 month to prepare an exam, then, if you have 2 exams on the same day, you will start preparing them 2 months earlier. That logic works in this case, but not in my AI.
+
+In the initial ""toy neural network"" I even added 8 layers of 12 neurons each, which I think is overkill for this case.
+In tf.js I used both sigmoid and relu activation functions.
+No matter what I did, no improvement.
+
+Hope someone has an idea where I'm going wrong.
+"
+"['reinforcement-learning', 'multi-armed-bandits', 'contextual-bandits']"," Title: How to implement a contextual reinforcement learning model?Body: In a reinforcement learning model, states depend on the previous actions chosen. In the case in which some of the states -but not all- are fully independent of the actions -but still obviously determine the optimal actions-, how could we take these state variables into account?
+
+If the problem was a multiarmed bandit problem (where none of the actions influence the states), the solution would be a contextual multiarmed bandit problem. Though, if we need a ""contextual reinforcement learning problem"", how can we approach it?
+
+I can think of separating a continuous context into steps, and creating a reinforcement learning model for each of these steps. Then, is there any solution where these multiple RL models are used together, where each model is used for prediction and feedback proportionally to the closeness between the actual context and the context assigned to the RL model? Is this even a good approach?
+"
+"['neural-networks', 'deep-learning', 'algorithm-request', 'model-request']"," Title: Which machine learning approach should I use to estimate how many products a research group should have to improve its category?Body: Currently, in my country, there is a system in which certain groups of researchers upload information on products of scientific interest, such as research articles, books, patents, software, among others. Depending on the number of products, the system assigns a classification to each group, which can be A1, A, B and C, where A1 is the highest classification and C is the minimum. According to the classification of the groups, they can compete to receive monetary incentives to make their research.
+At the moment, I am working on an application that takes the data of the system that I mentioned previously. I am able to say what classification the group currently has because we develop a scraper that counts the products and there is another service that is in charge of implementing all the mathematical model that the system has to calculate the category of the group.
+But what I want to achieve is that my application would be able to give an estimate of how many products a research group should have to improve its category. I want to know if I can do that using neural networks.
+For example, if there is a category C group, I want the application to tell the user how many articles and books it would make its category go up to B.
+From what I have seen in some web resources, I could insert a training set into the neural network and have it learn to classify the groups, but I think it is unnecessary, because I can do that mathematically.
+But I do not understand if it is possible for a neural network to process the current category that the group has and be able to give suggestions of how many products it needs to improve its category.
+I think it must be a neural network with several outputs, so that each output gives the total for one of the products, although it is not necessary to list all the products that the measurement model contemplates. But it is necessary for the network to learn which products a certain group handles; for example, if a group does not write books, it should avoid suggestions that involve producing books to improve the group's current category.
+"
+"['neural-networks', 'unsupervised-learning', 'clustering', 'self-organizing-map', 'iris-dataset']"," Title: Is it normal that SOM clusters the instances with the ""versicolor"" class into multiple different BMUs?Body: I have trained (with different sizes, learning rates, and epochs) a SOM network to cluster the Iris dataset. The instances associated with the class setosa have mainly been fitted to one or two BMUs. In the case of virginica, the instances have also been associated with only a few BMUs. However, in the case of the versicolor instances, many different BMUs have been associated with them.
+Is this normal?
+Setosa
+0. 1846
+1. 1846
+2. 1846
+3. 1846
+4. 1846
+5. 1846
+6. 1846
+7. 1846
+8. 1846
+9. 1846
+10. 1846
+11. 1846
+12. 1846
+13. 1846
+14. 1846
+15. 1846
+16. 1846
+17. 1846
+18. 1846
+19. 1846
+20. 1846
+21. 1846
+22. 1846
+23. 1846
+24. 1846
+25. 1846
+26. 1846
+27. 1846
+28. 1846
+29. 1846
+30. 1846
+31. 1846
+32. 1846
+33. 1846
+34. 1846
+35. 1846
+36. 1846
+37. 1846
+38. 1846
+39. 1846
+40. 1846
+41. 1620
+42. 1846
+43. 1846
+44. 1846
+45. 1846
+46. 1846
+47. 1846
+48. 1846
+49. 1846
+
+Versicolor
+50. 652
+51. 652
+52. 652
+53. 1259
+54. 696
+55. 1394
+56. 652
+57. 490
+58. 696
+59. 490
+60. 490
+61. 1059
+62. 1304
+63. 696
+64. 490
+65. 652
+66. 1400
+67. 490
+68. 696
+69. 490
+70. 652
+71. 1574
+72. 696
+73. 832
+74. 696
+75. 696
+76. 696
+77. 652
+78. 696
+79. 490
+80. 490
+81. 490
+82. 444
+83. 696
+84. 1129
+85. 1084
+86. 652
+87. 696
+88. 25
+89. 584
+90. 490
+91. 789
+92. 1034
+93. 490
+94. 854
+95. 29
+96. 584
+97. 877
+98. 490
+99. 809
+
+Virginica
+100. 652
+101. 696
+102. 652
+103. 652
+104. 652
+105. 652
+106. 877
+107. 652
+108. 696
+109. 652
+110. 652
+111. 696
+112. 652
+113. 696
+114. 652
+115. 652
+116. 652
+117. 652
+118. 652
+119. 696
+120. 652
+121. 652
+122. 652
+123. 696
+124. 652
+125. 652
+126. 696
+127. 652
+128. 652
+129. 652
+130. 652
+131. 652
+132. 652
+133. 696
+134. 696
+135. 652
+136. 652
+137. 652
+138. 652
+139. 652
+140. 652
+141. 652
+142. 696
+143. 652
+144. 652
+145. 652
+146. 696
+147. 652
+148. 652
+149. 652
+
+Now, I have a diagram. It doesn't look bad.
+
+"
+['image-recognition']," Title: How to apply a kernel to an image?Body: As I understand it, if we consider a 3*3 kernel, we should add a padding of 1px to the source image (if we want the kernel to affect the whole image), then place the kernel at the upper-left corner of the image and multiply each element of the kernel by the corresponding pixel of the image. Then we sum all the results and put the sum at the anchor point of the kernel (usually the center element). Then we shift the kernel one step to the right and repeat.
+
+If I am right so far, I have a question about the summation results: when computing the sum at a new kernel position, should we use the values that were already replaced at previous anchor points, or the original pixel values?
+
+In other words, should we write the anchor point's result back into the source image and use it in the calculations for the shifted kernel? Or should we write it into a separate destination image, so that the results never affect the calculations as the kernel is shifted over the source image?
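+
+To make the two alternatives concrete, here is a minimal NumPy sketch of the second option, where the kernel results always go to a separate destination image (my own illustration, with an arbitrary 3x3 box-blur kernel):
+
+import numpy as np
+
+def apply_kernel(src, kernel):
+    # Pad the source by 1 pixel so the 3x3 kernel covers the whole image.
+    padded = np.pad(src, 1, mode='constant')
+    dst = np.zeros_like(src, dtype=float)   # separate destination image
+    for i in range(src.shape[0]):
+        for j in range(src.shape[1]):
+            window = padded[i:i + 3, j:j + 3]
+            # The sum is written to dst, so later windows still read the
+            # original (padded) source values, never the new results.
+            dst[i, j] = np.sum(window * kernel)
+    return dst
+
+image = np.random.randint(0, 256, (5, 5)).astype(float)
+box_blur = np.ones((3, 3)) / 9.0
+print(apply_kernel(image, box_blur))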
+"
+"['comparison', 'recurrent-neural-networks', 'long-short-term-memory', 'gated-recurrent-unit']"," Title: Why are GRU and LSTM better than standard RNNs?Body: It seems that standard RNNs are limited in their use cases and have been outperformed by other recurrent architectures, such as the LSTM and GRU. What limitations of standard RNNs do LSTM and GRU units overcome, and why does this make them better?
+"
+"['neural-networks', 'agi', 'social']"," Title: Was the Malaysian Airliner crash caused by a non-benign artificial intelligence system?Body: I watched a youtube clip of Elon Musk talking about his view on the future of AI. He gave two examples. One of the examples was a benign scenario and the other example was a nonbenign scenario where he speculated the possibilities of future AI threats and what harm a deep intelligence could do.
+According to Elon, a deep intelligence in the network could create fake news and spoof email accounts. "The pen is mightier than the sword". This non-benign scenario put forth by Elon was hypothetical, but he went into detail about how it could be possible for an AI, with the goal of maximising a stock portfolio, to go long on defense and short on consumer stocks, and start a war.
+To be more specific, this could be achieved by hacking into the Malaysia Airlines aircraft routing server and, when the aircraft is over a warzone, sending an anonymous tip that there is an enemy aircraft flying overhead, which in turn would cause ground-to-air missiles to take down what was actually a "commercial" airliner.
+Although this is a plausible hypothetical non-benign AI scenario, I'm wondering whether this could actually have been the case with the Malaysian airliner crash. Stuxnet, for example, was a malicious computer worm, first uncovered in 2010, thought to have been in development since at least 2005, and believed to be responsible for causing substantial damage to Iran's nuclear program. Stuxnet wasn't even an AI.
+Stuxnet blew the world's minds when it was discovered. The sheer complexity of the worm and the amount of time it took to build were impressive, to say the least.
+In conclusion, was the Malaysian Airliner crash caused by a non-benign artificial intelligence system?
+"
+"['search', 'game-theory', 'minimax']"," Title: Can minimax be used when both players want to increase their score?Body: If both the players want to increase their score (by selecting the highest or best cost path), can this be done using the minimax algorithm, or are there other algorithms for this purpose?
+"
+"['neural-networks', 'convolutional-neural-networks', 'image-recognition', 'training', 'object-recognition']"," Title: Object recognition by two or more traits that are orthogonal (informally speaking)Body: I would really appreciate it if someone could comment on the following method of training neural nets by providing them with some metadata (making them color-aware only when needed, whereas now they're mostly silhouette/outline aware), and especially give a reference to some papers, to prevent me from reinventing the wheel.
+
+But let's start at the very beginning and say that we're performing simple image recognition and each image has 24-bit color depth, i.e. 1 byte per RGB channel. I'm usually more eager to use bigger pictures, sacrificing color quality, however not in all cases (that statement is crucial in this question).
+
+To limit the computational burden I'm NOT keen on using the full information about color (3 bytes per pixel), but rather on shrinking it to 1 byte per pixel, and here is the catch:
+
+I'm reluctant both to use grayscale and to cast the original tints to a single color palette of 256 hues (common among all pics). So I came up with the idea of reversing the method called debayering (demosaicing an image from Color Filter Array data).
+
+To achieve it, for every pixel only one color channel is preserved.
+Because of the human perception of colors, the green component is overrepresented, covering 50% of the pixels, so 25% are left for blue and 25% for red.
+In this particular example below, the upper-left pixel corresponds to Blue, followed by Green, then Blue, next Green, and so on to the end of the row.
+The second row starts with Green, then Red, one more Green, Red, and so on, repeating till the end of the line. These horizontal patterns alternate with the parity of the row number, which is nicely depicted at https://en.wikipedia.org/wiki/Bayer_filter, from where I've got the following graphics:
+
+
+
+To better illustrate this method I'm using a thumbnail of the famous Mona Lisa painting with its grayscale version next to it. (By the way, it isn't in my training set, but it is familiar to everyone.)
+
+![]()
+
+The greenish leftmost image is the result of applying the reverse debayering / demosaicing CFA method. This picture consists of pixels that are either Blue, Red or Green with different brightness levels. In the browser window this may be poorly visible; however, if you download this image and magnify it substantially, the pattern is revealed.
+
+Let's say that in the original picture one can find a small square of 2x2 pixels, all of them representing a light skin tone 0xF4D374 (in hex). In this grouping, the 2 green pixels would be reduced to the green channel and get a value of 0xD3, the blue-related pixel would get a value of 0x74, and the remaining red one would get 0xF4.
+In the leftmost image below the corresponding pixels are presented by the hex colors 0x00D300, 0x000074 and 0xF40000 respectively, whereas in the right picture exactly the same values (0xD3, 0x74, 0xF4) are shown in grayscale (out of 256 possible shades).
+
+![]()
+
+After this color-flattening, our input batch has shrunk by two-thirds, and at the same time the original colors can be more or less restored (of course not losslessly, but well enough).
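+
+For reference, this colour-flattening can be written in a few lines of NumPy. This is only a sketch of my own reverse-demosaicing idea, assuming the image array is in RGB channel order and using the Blue/Green/Red layout described above:
+
+import numpy as np
+
+def rgb_to_bayer(img):
+    # img: H x W x 3 uint8 array in RGB channel order (an assumption).
+    h, w, _ = img.shape
+    mosaic = np.zeros((h, w), dtype=np.uint8)
+    # Even rows: Blue, Green, Blue, Green, ...; odd rows: Green, Red, ...
+    mosaic[0::2, 0::2] = img[0::2, 0::2, 2]   # blue channel kept
+    mosaic[0::2, 1::2] = img[0::2, 1::2, 1]   # green channel kept
+    mosaic[1::2, 0::2] = img[1::2, 0::2, 1]   # green channel kept
+    mosaic[1::2, 1::2] = img[1::2, 1::2, 0]   # red channel kept
+    return mosaic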
+
+However, I don't suppose anyone had a problem recognising this picture after the transformation. Likewise, all my models could be trained well to recognize the outline/silhouette of the object, but they require far more training data (at least one to two orders of magnitude more) to become color-aware.
+
+The ultimate question is how to design models that would treat shape and color in a similar manner. Maybe that would not be 100% mathematically proper, but shape and color should be orthogonal.
+
+Nevertheless, I don't want the model to always decode the color, but only if it's needed: in earlier epochs it learned silhouettes/shapes and found that many objects are similar in this regard, so in the next epochs it should also pay more attention to tints.
+
+Have you encountered articles about such a method of using color when the object demarcation / labelling process cannot be based only on shape? I would be really grateful for any paper or other reference.
+
+I'm rather a newbie to neural nets, so sorry if this is something widely known to everyone but me ;-)
+
+Thanks in advance for any hint.
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'dropout', 'thompson-sampling']"," Title: How to compute the action probabilities with Thompson sampling in deep Q-learning?Body: In some implementations of off-policy Q-learning, we need to know the action probabilities given by the behavior policy $\mu(a)$ (e.g., if we want to use importance sampling).
+In my case, I am using Deep Q-Learning and selecting actions using Thompson Sampling. I implemented this following the approach in "What My Deep Model Doesn't Know...": I added dropout to my Q-network and select actions by performing a single stochastic forward pass through the Q-network (i.e., with dropout enabled) and choosing the action with the highest Q-value.
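+For context, this is roughly how I select actions (a simplified PyTorch-style sketch of my setup; the layer sizes and names are just placeholders):
+import torch
+import torch.nn as nn
+
+q_net = nn.Sequential(
+    nn.Linear(4, 64), nn.ReLU(), nn.Dropout(p=0.1),
+    nn.Linear(64, 2),          # one Q-value per action
+)
+
+def select_action(state):
+    q_net.train()              # keep dropout active at action-selection time
+    with torch.no_grad():
+        q_values = q_net(torch.as_tensor(state, dtype=torch.float32))
+    return int(q_values.argmax())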
+So, how can I calculate $\mu(a)$ when using Thompson Sampling based on dropout?
+"
+"['deep-learning', 'agi', 'superintelligence']"," Title: Is there any scientific/mathematical argument that prevents deep learning from ever producing strong AI?Body: I read Judea Pearl's The Book of Why, in which he mentions that deep learning is just a glorified curve fitting technology, and will not be able to produce human-like intelligence.
+From his book there is this diagram that illustrates the three levels of cognitive abilities:
+
+The idea is that the "intelligence" produced by current deep learning technology is only at the level of association. Thus the AI is nowhere near the level of asking questions like "how can I make Y happen" (intervention) and "What if I have acted differently, will X still occur?" (counterfactuals), and it's highly unlikely that curve fitting techniques can ever bring us closer to a higher level of cognitive ability.
+I found his argument persuasive on an intuitive level, but I'm unable to find any physical or mathematical laws that can either bolster or cast doubt on this argument.
+So, is there any scientific/physical/chemical/biological/mathematical argument that prevents deep learning from ever producing strong AI (human-like intelligence)?
+"
+"['machine-learning', 'deep-learning', 'computer-vision', 'reference-request', 'state-of-the-art']"," Title: What are the state-of-the-art approaches for detecting the most important ""visual attention"" area of an image?Body: I'm trying to detect the visual attention area in a given image and crop the image into that area. For instance, given an image of any size and a rectangle of say $L \times W$ dimension as an input, I would like to crop the image to the most important visual attention area.
+What are the state-of-the-art approaches for doing that?
+(By the way, do you know of any tools to implement that? Any piece of code or algorithm would really help.)
+BTW, within a "single" object, I would like to get attention. So object detection might not be the best thing. I am looking for any approach, provided it's SOTA, but Deep Learning might be a better choice.
+"
+"['machine-learning', 'definitions']"," Title: How does Microsoft use AI to make Windows 10 updates smootherBody: According to this news, Microsoft is using AI to make Windows 10 updates smoother. So I was curious and went further to search and came across this website, which describes:
+
+
+ Artificial Intelligence (AI) continues to be a key area of investment
+ for Microsoft, and we’re pleased to announce that for the first time
+ we’ve leveraged AI at scale to greatly improve the quality and
+ reliability of the Windows 10 April 2018 Update rollout. Our AI
+ approach intelligently selects devices that our feedback data indicate
+ would have a great update experience and offers the April 2018 Update
+ to these devices first. As our rollout progresses, we continuously
+ collect update experience data and retrain our models to learn which
+ devices will have a positive update experience, and where we may need
+ to wait until we have higher confidence in a great experience. Our
+ overall rollout objective is for a safe and reliable update, which
+ means we only go as fast as is safe.
+
+ Our AI/Machine Learning approach started with a pilot program during
+ the Windows 10 Fall Creators Update rollout. We studied
+ characteristics of devices that data indicated had a great update
+ experience and trained our model to spot and target those devices. In
+ our limited trial during the Fall Creators Update rollout, we
+ consistently saw a higher rate of positive update experiences for
+ devices identified using the AI model, with fewer rollbacks,
+ uninstalls, reliability issues, and negative user feedback. For the
+ April 2018 Update rollout, we substantially expanded the scale of AI
+ by developing a robust AI machine learning model to teach the system
+ how to identify the best target devices based on our extensive
+ listening systems.
+
+
+To me, it sounds like simple if-else statements would have implemented the whole thing without touching the AI; they mentioned that positive experiences include fewer rollbacks, uninstalls, and so on, so we may use these as a criterion of a positive experience.
+
+I am just wondering if the word 'AI' is being misused, or can be misleading in this context? Could anyone point me out on this or give any insight on how AI can be used in this context? In my experience, I have only seen AI mostly being used in speech recognition, image recognition and other sort-of classifying problems, with a training and consequently a computer can ""learn"" from the data, not like an if-else statement. Today, AI seems to be everything that is considered ""smart""?
+"
+"['neural-networks', 'deep-learning', 'deep-neural-networks', 'weights', 'explainable-ai']"," Title: What do the neural network's weights represent conceptually?Body: I understand how neural networks work and have studied their theory well.
+My question is: On the whole, is there a clear understanding of how mutation occurs within a neural network from the input layer to the output layer, for both supervised and unsupervised cases?
+Any neural network is a set of neurons and connections with weights. With each successive layer, there is a change in the input. Say I have a neural network with $n$ parameters, which does movie recommendations. If $X$ is a parameter that stands for the movie rating on IMDB. In each successive stage, there is a mutation of input $X$ to $X'$ and further $X''$, and so on.
+While we know how to mathematically talk about $X'$ and $X''$, do we at all have a conceptual understanding as to what this variable is in its corresponding $n$-dimensional parameter space?
+To the human eye, the neural network's weights might be a set of random numbers, but they may mean something profound, if we could ever understand what they 'represent'.
+
+What is the nature of the weights, such that, despite decades worth of research and use, there is no clear understanding of what these connection weights represent? Or rather, why has there been so little effort in understanding the nature of neural weights, in a non-mathematical sense, given the huge impetus in going beyond the black box notion of AI.
+"
+"['reinforcement-learning', 'markov-decision-process', 'pomdp']"," Title: What could happen if we wrongly assume that the POMDP is an MDP?Body: Consider the Breakout environment.
+We know that the underlying world behaves like an MDP, because, for the evolution of the system, it just needs to know what the current state (i.e. position, speed, and speed direction of the ball, positions of the bricks, and the paddle, etc) is. But, considering only single frames as the state space, we have a POMDP, because we lack information about the dynamics [1], [2].
+What could happen if we wrongly assume that the POMDP is an MDP and do reinforcement learning with this assumption over the MDP?
+Obviously, the question is more general, not limited to Breakout and Atari games.
+"
+"['agi', 'human-like', 'superintelligence', 'mythology-of-ai']"," Title: Is Really ""AI"" Light Years Away from achieving Cognitive Ability of Human?Body: Oxford philosopher and leading AI thinker Nick Bostrom defines SuperIntelligence as
+
+
+ ""An intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.""
+
+
+Here is an interesting article you may like: Tech Crunch.
+
+Artificial general intelligence : Wiki
+
+Taking into account current limitations and the amount of progress that has been made in recent years, what is a realistic timeframe to expect an AI that has human levels of cognition?
+"
+"['reinforcement-learning', 'q-learning', 'multi-agent-systems']"," Title: Can Q-learning working in a multi agent environment where every agent learns a behaviour independently?Body: I am currently exploring multi-agent reinforcement learning. I have multiple agents that communicate with each other and a central service that maintains the environment state.
+
+The central service dispatches some information at regular intervals to all the agents (let's call this information energy). The information can be very different for each agent.
+
+On reception of this information, the agents select a particular action. The execution of the action should leave the agent as well as the environment in a positive state. The action requires a limited amount of energy, which might change at every timestep. If an agent does not have sufficient energy, it may request energy from other agents. The other agents may grant or deny this request.
+
+If all the agents are able to successfully perform their actions and leave the environment in a positive state they get a positive reward.
+
+As the environment is stochastic and an agent's behavior is dependent on the other agents, can approximate Q-learning be used here?
+"
+"['math', 'applications', 'education']"," Title: How could an AI be used to improve the teaching and learning of mathematics?Body: I have been working with AI methods. I am thinking about how my daughter (and also other kids) could learn mathematics with the help of AI. For example, how could an AI be used to show the mistakes that a kid does during the learning path?
+"
+"['neural-networks', 'training', 'weights-initialization']"," Title: Is random initialization of the weights the only choice to break the symmetry?Body: My knowledge
+Suppose you have a layer that is fully connected, and that each neuron performs an operation like
+a = g(w^T * x + b)
+
+where a is the output of the neuron, x the input, g our generic activation function, and finally w and b our parameters.
+If both w and b are initialized with all elements equal to each other, then a is equal for each unit of that layer.
+This means that we have symmetry, thus at each iteration of whichever algorithm we choose to update our parameters, they will update in the same way, so there is no need for multiple units, since they all behave as a single one.
+In order to break the symmetry, we could randomly initialize the matrix w and initialize b to zero (this is the setup that I've seen most often). This way a is different for each unit, so that all neurons behave differently.
+Of course, randomly initializing both w and b would also be okay, even if not necessary.
+Question
+Is randomly initializing w the only choice? Could we randomly initialize b instead of w in order to break the symmetry? Is the answer dependent on the choice of the activation function and/or the cost function?
+My thinking is that we could break the symmetry by randomly initializing b, since in this way a would be different for each unit and, since in the backward propagation the derivatives of both w and b depend on a (at least this should be true for all the activation functions that I have seen so far), each unit would behave differently. Obviously, this is only a thought, and I'm not sure that it is absolutely true.
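+To make my question concrete, here is a small NumPy toy example of a single hidden layer, comparing constant initialization of both parameters against keeping w constant and randomizing only b:
+import numpy as np
+
+rng = np.random.default_rng(0)
+x = rng.normal(size=3)                 # one input example with 3 features
+
+# Case 1: constant w and constant b -> every hidden unit computes the same a.
+w_const = np.full((4, 3), 0.5)
+b_const = np.zeros(4)
+print(np.tanh(w_const @ x + b_const))  # four identical values
+
+# Case 2: constant w but random b -> the activations already differ per unit.
+b_rand = rng.normal(size=4)
+print(np.tanh(w_const @ x + b_rand))   # four different values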
+"
+"['neural-networks', 'machine-learning', 'convolutional-neural-networks']"," Title: Speeding up CNN trainingBody: So I built a CNN without any scientific libraries like TensorFlow or Keras (only NumPy). It is taking a huge amount of time to train. What are some of the tricks and tips people follow to speed up training of a CNN? (I am not talking about dividing the work across different processors, but about subtle code-level optimizations, i.e. reusing pre-calculated results, that are not obvious to most programmers.)
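+For instance, one trick I have seen mentioned (sketched below with made-up shapes, a single channel, stride 1, and no padding) is im2col: unroll every convolution window into a row of a matrix once, so each filter application becomes one big matrix product instead of nested per-pixel Python loops, and the unrolled matrix can be reused for every filter in the layer.
+import numpy as np
+
+def im2col(x, k):
+    # x: (H, W) single-channel image, k: kernel size; stride 1, no padding.
+    h, w = x.shape
+    out_h, out_w = h - k + 1, w - k + 1
+    cols = np.empty((out_h * out_w, k * k))
+    idx = 0
+    for i in range(out_h):
+        for j in range(out_w):
+            cols[idx] = x[i:i + k, j:j + k].ravel()
+            idx += 1
+    return cols, out_h, out_w
+
+x = np.random.rand(28, 28)
+kernel = np.random.rand(3, 3)
+cols, out_h, out_w = im2col(x, 3)
+# One matrix-vector product replaces the per-pixel loop of a naive convolution,
+# and cols can be reused for every filter in the layer.
+feature_map = (cols @ kernel.ravel()).reshape(out_h, out_w)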
+"
+"['neural-networks', 'machine-learning', 'ai-security', 'ai-safety', 'adversarial-ml']"," Title: Is artificial intelligence vulnerable to hacking?Body: The paper The Limitations of Deep Learning in Adversarial Settings explores how neural networks might be corrupted by an attacker who can manipulate the data set that the neural network trains with. The authors experiment with a neural network meant to read handwritten digits, undermining its reading ability by distorting the samples of handwritten digits that the neural network is trained with.
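+
+As a toy illustration of the kind of training-set manipulation described in that paper (purely hypothetical data and numbers below), an attacker with write access to the training set could do something like this:
+
+import numpy as np
+
+rng = np.random.default_rng(42)
+x_train = rng.random((60000, 28, 28))           # stand-in for digit images
+y_train = rng.integers(0, 10, size=60000)       # stand-in for labels
+
+poison_frac = 0.05
+idx = rng.choice(len(x_train), int(poison_frac * len(x_train)), replace=False)
+trigger = 0.1 * rng.random((28, 28))             # low-amplitude, noise-like pattern
+x_train[idx] = np.clip(x_train[idx] + trigger, 0.0, 1.0)
+y_train[idx] = 7                                 # force a chosen wrong label
+# A model trained on this set can learn to misread inputs carrying the trigger.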
+
+I'm concerned that malicious actors might try hacking AI. For example
+
+
+- Fooling autonomous vehicles to misinterpret stop signs vs. speed limit.
+- Bypassing facial recognition, such as the ones for ATM.
+- Bypassing spam filters.
+- Fooling sentiment analysis of movie reviews, hotels, etc.
+- Bypassing anomaly detection engines.
+- Faking voice commands.
+- Misclassifying machine learning based-medical predictions.
+
+
+What adversarial effect could disrupt the world? How we can prevent it?
+"
+['game-theory']," Title: Strong and Weak Dominance TableBody: I have this table, 2 agents and I want to find for each agent if any action is strongly or weakly dominated. This is the table:
+
+
+
+Now, I've found a solution but I'm not sure if it's correct. So, for agent 1 (the one choosing rows): 1<2<2, 1<3<4 and 0<3<4, so I don't have strong dominance.
+
+For agent 2: 1<=1<=1, 2<5<6 and 2<3<4, which means that I don't have strong dominance here either.
+
+Is my logic correct?
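+
+Since the table itself is only in the image, here is a small generic check I wrote for dominance of one row over another for the row player (the matrix below is just a placeholder, not my actual payoffs):
+
+import numpy as np
+
+def dominates(payoffs, i, j):
+    # Returns 'strict', 'weak', or None for row i vs row j of the row player.
+    a, b = payoffs[i], payoffs[j]
+    if np.all(a > b):
+        return 'strict'
+    if np.all(a >= b) and np.any(a > b):
+        return 'weak'
+    return None
+
+payoffs_row_player = np.array([[3, 1, 2],     # placeholder values only
+                               [2, 0, 1],
+                               [4, 1, 3]])
+for i in range(3):
+    for j in range(3):
+        if i != j:
+            print(i, j, dominates(payoffs_row_player, i, j))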
+"
+"['legal', 'digital-rights']"," Title: Digital Rights and Agents talking to humansBody: How does the legal question about agents talking to humans via telephone connection work? Recently Google gave a talk about Duplex, where an agent makes a call to a human to schedule a hairdresser.
+
+I wonder if there are any regulations related to this type of scenario, if there are some limitations, if the human needs to know that he is talking to an AI.
+"
+"['machine-learning', 'deep-learning', 'classification', 'natural-language-processing']"," Title: Question classification according to chaptersBody: I have a corpus, say an instruction manual. The text in this manual is grouped into chapters and each chapter is split up into sections. For example, Chapter 1/Section 1, Chapter 1/Section 2 and so on.
+Assume the corpus has C chapters and each chapter has S sections. My goal is, given a sentence or question, to classify this sentence/question. In other words, I want to compute the three most probable chapters to which this sentence or question belongs.
+I tried a MultinomialNB model using sklearn, but it did not give me the desired result. I want to try another approach, for example using a neural network, and compare it with the MultinomialNB model. I have Googled and found Doc2Vec but haven't tried it yet.
+
+Can anyone suggest a better or another possible approach so that I could try and compare? What is the standard approach to such kind of problem?
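+
+For reference, this is roughly the kind of baseline I have in mind for getting the three most probable chapters (a sketch with placeholder texts and labels; the real corpus would of course be larger, and section labels would work the same way):
+
+from sklearn.feature_extraction.text import TfidfVectorizer
+from sklearn.linear_model import LogisticRegression
+from sklearn.pipeline import make_pipeline
+
+texts = ['how to install the device', 'safety warnings and notices',
+         'troubleshooting common errors', 'cleaning and maintenance']
+chapters = [1, 2, 3, 3]                      # placeholder chapter labels
+
+clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
+clf.fit(texts, chapters)
+
+probs = clf.predict_proba(['my device shows an error'])[0]
+top3 = probs.argsort()[::-1][:3]
+print([clf.classes_[i] for i in top3])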
+"
+"['evolutionary-algorithms', 'neat', 'neuroevolution']"," Title: What is the order of the genetic operations in NEAT?Body: I was trying to implement NEAT, but I got stuck at the speciating of my clients/genomes.
+
+What I got so far is:
+
+
+- the distance function implemented,
+- each genome can mutate nodes/connections,
+- two genomes can give birth to a new genome.
+
+
+I've read a few papers, but none explicitly explains in what order what step is done. What is the order of the genetic operations in NEAT?
+
+I know that for each generation, all the similar genomes will be put together into one species.
+
+I have other questions related to NEAT.
+
+Which neural networks are killed (or not) at each generation?
+
+Who is being mutated and at what point?
+
+I know that these are a lot of questions, but I would be very happy if someone could help me :)
+"
+"['neural-networks', 'machine-learning', 'classification', 'training', 'python']"," Title: If use the weights from previous iteration of a k-fold cross validation to seed a neural network classifier would I be overfitting?Body: As is done traditionally, I used k-fold cross validation to select and optimize the hyper parameters of my neural network classifier. When it was time to store the final model for future predictions, I discovered that using the weights from the previous k-fold cv iteration to seed the initial weights of the model in subsequent iteration, helps in improving the accuracy (seems obvious). I can use the model from the final iteration to perform future predictions on unseen data.
+
+
+- Would this approach result in overfitting?
+
+
+(Please note, I am using all available data in this process and I do not have any holdout data for validation.)
+"
+['neural-networks']," Title: The analysis of the dynamic behaviour of neural networks involving the application of feedbackBody: I am reading the Simon Haykin's cornerstone book, ""Neural Networks, A Comprehensive Foundation, Second Edition"" and I cannot understand a paragraph below:
+
+
+ The analysis of the dynamic behaviour of neural networks involving the
+ application of feedback is unfortunately complicated by virute (or
+ virtue I cannot get word appropriately) of the fact that the
+ processing units used for the construction of the network are usually
+ nonlinear. Further consideration of this issue is deferred to the
+ latter part of the book.
+
+
+Before this paragraph, the author analyses the effect of the synaptic weight on the neural network's stability. Roughly speaking, he says that if |w| >= 1 the neural network becomes unstable.
+
+Could you please explain the paragraph? Thanks in advance.
+"
+"['reinforcement-learning', 'multi-agent-systems']"," Title: Algorithms for multiple agents problemsBody: Can anyone recommend a reinforcement learning algorithm for a multi-agent environment?
+
+In my simplified example, I'm implementing a Q-learning system with 10 different agents. The agents compete for resources in stores at different locations by setting a bid price for each item.
+
+All of the agents have different bids and a pooled budget of $100. Once the budget is reached, the agents cannot buy any more that day.
+
+Each agent will receive a reward if they buy an item. The goal would be to maximize the total amount of items bought between the agents.
+
+Right now the agents don't communicate.
+
+Can someone point me in the right direction for an algorithm that allows agent cooperation?
+"
+"['neural-networks', 'ai-design', 'research', 'architecture']"," Title: Are Neural Net architectures accidental discoveries?Body: Recently, I have been learning about new neural networks, which are used for specialized purposes, like speech recognition, image recognition, etc. The more I discover the more I get amazed by the cleverness behind models such as RNN's and CNN's. Questions about working, intuition, mathematics have been asked a lot in this community, all with vague answers and apparent understandings.
+So, my question is: did the researchers come up with these specialized models accidentally, or did they follow particular steps to get to the model (like in a mathematical framework)? And how did they look at a particular class of problem and think "Yeah, a better solution might exist"?
+Since the understanding of NN's is so vague, these are 'high risk, high reward' scenarios, since you might be chasing only the mirage (illusion) of a solution.
+"
+"['machine-learning', 'human-like']"," Title: How can I simulate responses from the distribution of human intelligence?Body: I'm eventually looking to build an algorithm that will process answers from humans that are given questions. But first I have to setup an experiment to determine the variety of responses.
+
+Specifically, humans will be asked a multiple choice question that has a single correct answer. I want to understand what kinds/ranges of responses I would get from the bell curve distribution of human intelligence.
+
+Is there any way I can have, say, 1000 ""humans"" be asked a prompt, repeated 100 times (the same question) and then compile the responses? My concern is that I'll have to build some algorithm or process for each dumb, average, smart ""human"" to follow but then I would introduce bias in how smart they are or limit how they may respond. I'm guessing I'll have to give them a data sort to work from.
+
+To clarify, it's not the number of times a single user gets a question right that makes them smart; they have to be programmed as dumb, average, smart, etc. before the simulation starts. So dumb users could get some questions right and smart users could get some wrong.
+
+I'm not sure the Monte Carlo method is useful here but some type of simulation where I can specify the distribution (normal) and then bound the responses would be helpful.
+
+I have access to Excel, Minitab, and Python. Any ideas how to set up an experiment like this? I really am open to any technique to measure this.
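+
+Here is the kind of simulation I have been imagining in Python (all numbers are placeholders, and mapping ability to a probability of answering correctly through a sigmoid is just one arbitrary choice): each simulated person gets an ability drawn from a normal distribution and answers a 4-option question 100 times.
+
+import numpy as np
+
+rng = np.random.default_rng(1)
+n_people, n_repeats, n_choices = 1000, 100, 4
+
+ability = rng.normal(loc=0.0, scale=1.0, size=n_people)      # the bell curve
+p_correct = 1 / (1 + np.exp(-ability))                        # squash to (0, 1)
+
+# For each person and repeat: answer correctly with p_correct, otherwise
+# pick one of the remaining (wrong) options uniformly at random.
+correct = rng.random((n_people, n_repeats)) < p_correct[:, None]
+answers = np.where(correct, 0, rng.integers(1, n_choices, (n_people, n_repeats)))
+
+print('share of correct responses:', (answers == 0).mean())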
+"
+"['neural-networks', 'training']"," Title: How to actually teach the ANN the resulting weights of different training inputs?Body: I thought I had implemented the code (from scratch, no library) for an artificial neural network (my first endeavour in the field). But I feel like I'm missing something very basic or obvious.
+
+To make it short: the code works for a single pair of input/output values but fails for sets of value pairs. I do not really understand the training process, so I want to get this issue out of the way first. The following is my improvised training (i.e. all that I can think of) in pseudocode.
+
+trainingData = [{in: [0,0], out:[0]}, {in: [0,1], out:[0]}, ...];
+iterations = 10000
+
+network = graphNodesToNetwork()
+links = graphLinksToNetwork()
+randomiseLinkWeights(links)
+
+
+while(trainingData not empty) {
+ for(0<iterations) {
+ set = trainingData.pop()
+
+ updateInput(network, set.in)
+
+ forwardPropagate(network, links)
+
+ linkUpdate = backPropagate(network, links, set.out)
+
+ updateLinks(linkUpdate, links)}
+}
+
+
+Is this how it is supposed to work? Do you feed in your training data set by set (while-loop)?
+
+Edit 1: because my final comment did distract from the issue at hand.
+
+Edit 2: less wordy, more code-y
+"
+"['machine-learning', 'convolutional-neural-networks', 'image-recognition']"," Title: How to load an image into tensorflow.js code which reads handwritten numbers and classifies themBody: I'm new to machine learning, so I figured I should look into Google's TensorFlow guides. I know how to code in JS, which is why I'm using tensorflow.js. There's an example in the guide that trains itself to recognize handwritten numbers from the MNIST handwriting dataset. I sort of understand what's going on in the code, but since I'm very new to ML, not a lot. I went through the code and saw that it doesn't load image by image to train itself; instead it requests one sprite which contains all the images and then cuts it into what it needs. This makes sense from a performance point of view, but as this process is kind of abstract, I don't understand what's really going on. I want to upload an image of my own and call the model's predictor, but I don't know how to do it. Any help?
+
+I was thinking that drawing a number in a 28x28 canvas instead of uploading an image might be very interesting as well, but I need to know how to test the model with my own data once it's trained.
+
+The tutorial: https://js.tensorflow.org/tutorials/mnist.html
+"
+"['neural-networks', 'machine-learning', 'gradient-descent']"," Title: Why Feature Scaling for skewed contour?Body: Why is it that the skewed contour (unscaled features) will result in slow performance of gradient descent? In other words, how (or why) will the gradients end up taking a long time before finding the global minimum in such cases? This might be an obvious question but I'm finding a hard time Visualizing the 3D shapes of the respective contours and relating it to the convergence.
+
+Left one is the contour for the unscaled feature and the right one is scaled (and will apparently converge quickly).
+"
+['recurrent-neural-networks']," Title: How to adapt RNNs to variable frequency / framerate of inputs?Body: Say I have an application where the frequency of the input is known but can vary widely across sequences. For example, they may be audio recordings acquired at different frequency, or videos that come from surveillance cameras whose framerate can vary from 24fps down to as low as 1fps.
+
+The straightforward thing to do would be to either
+
+
+- resample inputs to a constant frequency
+- ignore input frequency and hope the RNN will figure it all out
+
+
+None sound very appealing. Is there a better way to handle variable input frequency in RNNs?
+"
+"['machine-learning', 'classification', 'performance', 'ensemble-learning', 'testing']"," Title: How do I check that the combination of these models is good?Body: I've selected more than 10 discriminative (classification) models, each wrapped with a BaggingClassifier object, optimized with a GridSearchCV, and all of them placed within a VotingClassifier object.
+Alone, they all bring around 70% accuracy, on a data set which is about half normal/uniform distributed, and half one-hot distributed. Together, they provide 80% accuracy, which isn't good enough, given that I was told that >95% is achievable.
+The models: DecisionTreeClassifier, ExtraTreesClassifier, KNeighborsClassifier, GradientBoostingClassifier, LogisticRegression, SVC, Perceptron, and a few more classifiers.
+How do I check if the combination is good?
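+To make the question concrete, this is the kind of check I could run, comparing the ensemble against its base models on identical cross-validation folds (a sketch with placeholder estimators and synthetic data, without my bagging and grid-search wrappers):
+from sklearn.datasets import make_classification
+from sklearn.ensemble import VotingClassifier
+from sklearn.linear_model import LogisticRegression
+from sklearn.model_selection import cross_val_score
+from sklearn.svm import SVC
+from sklearn.tree import DecisionTreeClassifier
+
+X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
+
+voter = VotingClassifier(
+    estimators=[('lr', LogisticRegression(max_iter=1000)),
+                ('svc', SVC(probability=True)),
+                ('tree', DecisionTreeClassifier())],
+    voting='soft')
+
+# Compare the ensemble against each base model with the same CV folds.
+for name, model in [('voting', voter), ('lr', LogisticRegression(max_iter=1000)),
+                    ('svc', SVC()), ('tree', DecisionTreeClassifier())]:
+    scores = cross_val_score(model, X, y, cv=5)
+    print(name, scores.mean().round(3), '+/-', scores.std().round(3))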
+"
+"['neural-networks', 'emotional-intelligence', 'biology', 'cognitive-science', 'brain']"," Title: What is the importance of the endocannabinoid system for cognitive function?Body: The endocannabinoid system is a very important function of human biology. Unfortunately, due to the illegality of cannabis, it is a relatively new field of study. I have read a few articles about Google researching the role of dopamine in learning, and according to this article, anandamide (the neurotransmitter that closely resembles tetrahydrocannabinol):
+
+
+ was found to do a lot more than produce a state of heightened happiness. It’s synthesized in areas of the brain that are important in memory, motivation, higher thought processes, and movement control.
+
+
+Have any neuroscientists (or any scientists) considered the importance of the endocannabinoid system for cognitive function?
+
+If not, is there any reason this information might or might not be relevant to artificial intelligence?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'convolutional-neural-networks']"," Title: What is the purpose of ""reshaping it into the shape the network expects and scaling it so that all values are in the [0, 1] interval.""?Body: I am a deep learning beginner recently reading this book ""Deep learning with Python"", the example explains the process of implementing a greyscale image classification using MNIST in keras, in the compilation step, it said,
+
+
+ Before training, we’ll preprocess the data by reshaping it into the shape the network expects and scaling it so that all values are in the [0, 1] interval. Previously, our training images, for instance, were stored in an array of shape (60000, 28, 28) of type uint8 with values in the [0, 255] interval. We transform it into a float32 array of shape (60000, 28 * 28) with values between 0 and 1.
+
+
+The images are stored in an array of shape (60000, 28, 28) of type uint8 with values in the [0, 255] interval. As I understand it, the values of each pixel are between 0 and 255 and are stored as a 3D matrix. Can someone explain why we need to ""transform"" it into the shape the network expects, and scale it so that ""all values are in the [0, 1] interval""?
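+
+The transformation in question looks roughly like this in the book's Keras example (paraphrased from memory):
+
+from keras.datasets import mnist
+
+(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
+
+train_images = train_images.reshape((60000, 28 * 28))   # flatten each image
+train_images = train_images.astype('float32') / 255     # scale to [0, 1]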
+
+Please also make suggestions if I didn't explain some parts correctly.
+"
+"['machine-learning', 'comparison', 'support-vector-machine', 'naive-bayes']"," Title: Why would LDA have performed much better than SVM and Naive Bayes in diagnosing ADHD?Body: In a final project in diagnosing Attention deficit hyperactivity disorder (ADHD) using Machine Learning, we obtained parameters from real patients. We used this data and got much higher success rates with LDA than with SVM and Naive Bayes. We had only 100 examples in our training set. We are wondering why LDA specifically succeeded much more than the others?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'convolutional-neural-networks', 'classification']"," Title: What kind of neural network architecture do I use to classify images into one hundred thousand classes?Body: I have an image dataset where objects may belong to one of the hundred thousand classes.
+
+What kind of neural network architecture should I use in order to achieve this?
+"
+"['resource-request', 'adversarial-ml', 'ai-safety', 'ai-security']"," Title: What tools are used to deal with adversarial examples problem?Body: The problem of adversarial examples is known to be critical for neural networks. For example, an image classifier can be manipulated by additively superimposing a different low amplitude image to each of many training examples that looks like noise but is designed to produce specific misclassifications.
+
+Since neural networks are applied to some safety-critical problems (e.g. self-driving cars), I have the following question
+
+What tools are used to ensure safety-critical applications are resistant to the injection of adversarial examples at training time?
+
+Laboratory research aimed at developing defensive security for neural networks exists. These are a few examples.
+
+However, do industrial-strength, production-ready defensive strategies and approaches exist? Are there known examples of applied adversarial-resistant networks for one or more specific types (e.g. for small perturbation limits)?
+There are already (at least) two questions related to the problem of hacking and fooling of neural networks. The primary interest of this question, however, is whether any tools exist that can defend against some adversarial example attacks.
+"
+['reinforcement-learning']," Title: Is there an analogy between client/server in web development and agent/environment in reinforcement learning?Body: I've recently come across the client-server model. From my understanding, the client requests the server, to which the server responds with a response. In this case, both the request and responses are vectors.
+In reinforcement learning, the agent communicates with the environment via an "action", to which the environment sends a scalar reward signal. The "goal" is to maximize this scalar reward signal in long run.
+Is there an analogy between client/server in web development and agent/environment in reinforcement learning?
+"
+['natural-language-processing']," Title: What algorithms does stackoverflow use for classifying duplicate questions?Body: Can I get details about the algorithms used for classifying questions in stackoverflow (""Questions that may already have your answer""). Most of the suggestions I get are nowhere related to the question I have intended to ask.
+"
+['machine-learning']," Title: How to prevent overfitting in stacked models?Body: I understand the intuition behind stacking models in machine learning, but even after thorough cross-validation scheme models seem to overfit. Most of the models I have seen in kaggle forums are large ensembles, but seem to overfit very little.
+"
+"['neural-networks', 'deep-learning', 'deep-neural-networks', 'recurrent-neural-networks', 'getting-started']"," Title: How to build my own dataset and model for an LSTM neural networkBody: I have a sort of mathematical problem and I'm not sure which model I should choose to make an LSTM neural network.
+
+Currently in my country, there is a system in which certain groups of researchers upload information on products of scientific interest, such as research articles, books, patents, software, among others. Depending on the number of products, the system assigns a classification to each group, which can be A1, A, B and C, where A1 is the highest classification and C is the minimum.
+
+The classification is done through a mathematical model whose entries are, the total number of each product, the total sum of all products, number of authors, among other indices that are calculated with the previous values.
+
+Once the entries are obtained, these values are processed by a set of formulas and the final result is a single number.
+
+This number is located in a range provided by the mathematical model and this is how the group is classified.
+
+What I want to do is given the current classification of a group, give suggestions of different values to improve their classification.
+
+For example, if there is a group with classification C, suggest how many products it should have, how many authors, what value should its indexes have, so that its category would be finally B.
+
+I think the structure of my network should be:
+-1 input, which would be the classification you want to get.
+-Multiple output, one for each product and indexes.
+
+But I do not understand how to make the network take into account the current classification of the group, in addition to the number of products and the value of the current indexes.
+
+If you have further questions about the problem, please feel free to ask.
+
+I appreciate your suggestions.
+"
+"['machine-learning', 'data-science', 'imperfect-information']"," Title: How do to mitigate or design out hidden feedback loops when designing ML systems?Body: Two months ago, I've found myself working on a churn detection problem which can be briefly described as follows:
+
+
+- Assume the current date is N
+- Use customer behavior for N-1,..N-x dates to develop training dataset
+- Train model and make prediction at time N, predicting if a customer will churn at N+2 (thus allowing data N+1 for churn prevention / reduction campaign)
+
+
+When thinking through the design of the model and considerations for how to ensure that it would be successfully implemented, I identified a feedback loop wherein the prediction would trigger an event resulting in interaction with customer, potential changes to customer behavior and thus an impact on the next set of prediction data. The following sequence of events could occur if successful (as an example):
+
+Prediction -> Action to retain customer -> Change to customer behavior ->
+Data for next prediction cycle not representative of training ->
+Incorrect prediction and cost associated for handling incorrect prediction
+
+
+The feedback loop, fundamentally, is that the action taken based on the prediction may impact the distribution or nature of the features used to make the prediction.
+
+When thinking through the how to solve the feedback problem I had listed the following three points as potential solutions:
+
+
+- Retrain, test and validate model at every N+1 period and account for changes in behavior through new features (e.g. feature_i would involved details of the retention campaign a customer was treated to)
+
+
+- This would result in huge production overhead and I believe to be infeasible
+
+- Run the model intermittently to allow behavior to normalize
+
+
+- Possible, however business would not be happy to have a prediction model which only works k times a year where k would have to be determined
+
+- Predict the impact of the retention intervention and remove it from or the training set or include it as a new feature
+
+
+- Possible, extensive thought and some experimentation needed to determine whether modeling the retention out or in would have the better effect. Additionally, if modeled in, there may a short term penalty incurred as the model learns the new feature
+
+
+
+I did not actually end up having to confront the feedback problem (as during the exploration phase, sufficient evidence was obtained indicating that a predictive model for churn detection would not be required), however after reading this paper on the technical debt which could be incurred during the development of the machine learning systems I found myself pondering:
+
+
+- Were my considered strategies for dealing with the feedback reasonable?
+- What other solutions should I have considered?
+- Is there a way I could have re-framed the problem to completely design out the feedback loop (may be difficult to answer with the information provided, but if possible, but a ""you could have considered looking at..."" would be extremely beneficial)
+
+"
+"['neural-networks', 'machine-learning', 'game-ai', 'combinatorial-games', 'multilayer-perceptrons']"," Title: How can a neural network learn to play sudoku?Body: I'm just beginning to understand neural networks and I've performed a couple of successful tests with numerical series where the NN was trained to find the odd one or a missing value. It all works pretty well.
+
+The next test I wanted to perform was to approxmimate the solution of a Sudoku which, I thought could also be seen as a special kind of numerical series. However, the results are really confusing.
+
+I'm using an MLP with 81 neurons in each of the three layers. All output neurons show a strong tendency to yield values that are close to either 0 or 1. I have scaled and truncated the output values. The result can be seen below:
+
+Expected/Actual Solution: Neural Net's Solution:
+
+6 2 7 3 5 0 8 4 1 9 0 9 9 9 3 0 0 3
+3 4 8 2 1 6 0 5 7 0 9 9 0 0 0 9 9 0
+5 1 0 4 7 8 6 2 3 0 9 1 9 9 0 2 0 4
+1 6 4 0 2 7 5 3 8 0 0 5 0 0 9 0 0 7
+2 0 3 8 4 5 1 7 6 0 0 0 0 0 9 9 0 9
+7 8 5 1 6 3 4 0 2 9 9 9 9 0 6 2 9 0
+0 5 6 7 3 1 2 8 4 0 0 0 0 9 9 0 9 0
+4 3 1 5 8 2 7 6 0 9 9 0 0 0 0 9 0 9
+8 7 2 6 0 4 3 1 5 9 9 0 9 9 0 9 0 9
+
+
+The training set size is 100000 Sudokus while the learning rate is a constant 0.5. I'm using NodeJS/Javascript with the Synaptic library.
+
+I don't expect a perfect solution from you guys, but rather a hint if that kind of behavior is a typical symptom for a known problem, like too few/many neurons, small training set, etc.
+"
+"['philosophy', 'agi', 'ethics']"," Title: Can an AI distinguish between good and bad according to people living in a restricted geographical area?Body: Would people go so far with artificial intelligence and machine learning that machines could learn, over a long period of time, to distinguish what's 'good' from 'bad' according to people living in a restricted geographical area, and then take control and turn what was learned into a set of 'rules' and 'laws' (think of it as an effective machine of 'politics') that match the majority view on issues?
+That should be accepted by everyone, since a contract set at the beginning says: "Everyone is ok".
+"
+"['machine-learning', 'ai-design', 'training', 'getting-started']"," Title: What beginner-friendly machine learning method should I use to make teams for my pickup ultimate frisbee club fairly balanced?Body: A bunch of friends and I play ultimate every week. Recently I wrote a program to choose our teams for us, as well as keep track of certain data (like which players were on which team, which team won, what was the score, how long the game was, etc). I wanted to use a machine learning technique to make the teams for us in order to optimize how fairly balanced the teams are (possibly measured by how many total points are scored in a game or how long a game lasts).
+
+I am currently taking a machine learning MOOC and being introduced to very basic machine learning techniques (linear regression with gradient decent or normal equations, basic classification stuff, stuff like that). Although I hope I will come across a technique that fits my needs by the end of this course, I wanted to ask here to see if I can get a head start.
+
+I've tried searching around everywhere, but couldn't find anything relevant. So my question is, is there an obvious technique I should look into for such a problem? If it's something too advanced for a beginner, that's fine too, but I'd like to get started learning/practicing it asap instead of waiting for my course to hopefully hit upon it.
+
+Thank you!
+
+EDIT: to clarify further, I would prefer something that looks at relationships between individual players like ""when Steph plays with Bill she is more likely to win"" or ""Steph plays worse when she is on a team with players who have a high win percentage"". I'd also prefer to be able to code it in python, but am willing to learn any other language
+"
+"['neural-networks', 'datasets', 'data-preprocessing', 'data-labelling', 'text-detection']"," Title: How do I change the annotations of variable-size images after having resized the images to a fixed size?Body: In data-sets like coco-text and total-text, the images are of different sizes (height*width). I'm using these data sets for text detection. I want to create a DNN model for this, so the input data should be of the same size. If I resize these images to a fixed size, the annotations given in the data-set, that is, the locations of the text in the images, will no longer be correct.
+So, how do I solve this problem?
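+Is the right approach simply to rescale the annotation coordinates by the same factors used to resize the image, as in this sketch (assuming axis-aligned boxes given in pixel coordinates)?
+def resize_box(box, old_size, new_size):
+    # box = (x_min, y_min, x_max, y_max) in pixels of the original image.
+    old_w, old_h = old_size
+    new_w, new_h = new_size
+    sx, sy = new_w / old_w, new_h / old_h
+    x_min, y_min, x_max, y_max = box
+    return (x_min * sx, y_min * sy, x_max * sx, y_max * sy)
+
+print(resize_box((120, 40, 300, 90), old_size=(1280, 720), new_size=(512, 512)))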
+"
+"['neural-networks', 'reinforcement-learning', 'q-learning', 'dqn', 'deep-rl']"," Title: How should I model all available actions of a chess game in deep Q-learning?Body: I just read about deep Q-learning, which is using a neural network for the value function instead of a table.
+I saw the example here: Using Keras and Deep Q-Network to Play FlappyBird and he used a CNN to get the Q-value.
+My confusion is on the last layer of his neural net. Neurons in the output layer each represent an action (flap, or not flap). I also see the other projects where the output layer also represents all available actions (move-left, stop, etc.)
+How would you represent all the available actions of a chess game? Every pawn has a unique and available movement. We also need to choose how far it will move (rook can move more than one square). I've read Giraffe chess engine's paper and can't find how he represents the output layer (I'll read once again).
+I hope somebody here can give a nice explanation about how to design NN architecture in Q-learning, I'm new in reinforcement learning.
+"
+"['neural-networks', 'deep-learning', 'computer-vision', 'terminology', 'capsule-neural-network']"," Title: Is the word ""pose"" used correctly in the paper ""Matrix Capsules with EM Routing""?Body: In traditional computer vision and computer graphics, the pose matrix is a $4 \times 4$ matrix of the form
+
+$$
+\begin{bmatrix}
+r_{11} & r_{12} & r_{12} & t_{1} \\
+r_{21} & r_{22} & r_{22} & t_{2} \\
+r_{31} & r_{32} & r_{32} & t_{3} \\
+0 & 0 & 0 & 1
+\end{bmatrix}
+$$
+
+and is a transformation to change viewpoints from one frame to another.
+
+In the Matrix Capsules with EM Routing paper, they say that the ""pose"" of various sub-objects of an object are encoded by each capsule lower layer. But from the procedure described in the paper, I understand that the pose matrix they talk about doesn't conform to the definition of the pose matrix. There isn't any restriction on keeping the form of the pose matrix shown above.
+
+
+- So, is it right to use the word ""pose"" to describe the $4 \times 4$ matrix of each capsule?
+- Moreover, since the claim is that the capsules learn the pose matrices of the sub-objects of an object, does it mean they learn the viewpoint transformations of the sub-objects, since the pose matrix is actually a transformation?
+
+"
+"['philosophy', 'logic', 'ethics', 'google']"," Title: Google's Principles of Artificial IntelligenceBody: Earlier this month, Google released a set of principles governing their AI development initiatives. The stated principles are:
+
+
+ Objectives for AI Applications:
+
+
+ - Be socially beneficial.
+ - Avoid creating or reinforcing unfair bias.
+ - Be built and tested for safety.
+ - Be accountable to people.
+ - Incorporate privacy design principles.
+ - Uphold high standards of scientific excellence.
+ - Be made available for uses that accord with these principles.
+
+
+ AI Applications not to be Pursued:
+
+
+ - Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
+ - Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
+ - Technologies that gather or use information for surveillance violating internationally accepted norms.
+ - Technologies whose purpose contravenes widely accepted principles of international law and human rights.
+
SOURCE: Artificial Intelligence at Google: Our Principles
+
+
+
+My questions are:
+
+
+- Are these guidelines sufficient?
+- Are there any ""I, Robot"" conflicts?
+- How much does this matter if other corporations and state agencies don't hew to similar guidelines?
+
+"
+"['neural-networks', 'machine-learning', 'ai-design', 'data-preprocessing']"," Title: How should I encode the input which are 5 cards from a deck of 52 cards?Body: How should I design my input layer for the following classification problem?
+
+Input: 5 cards (from a deck of 52 cards) in a card game;
+Output: some classification using a neural network
+
+How should I model the input layer?
+Option A: 5 one-hot encodings for the 5 cards, i.e. 5 one-hot vectors of length 52 = 260 input vector. For example
+[
+[0,0,0,0,0,0,1,...],
+[1,0,0,0,0,0,0,...],
+[0,0,0,0,0,1,0,...],
+[0,0,1,0,0,0,0,...],
+[0,0,0,0,1,0,0,...]
+]
+
+Option B: a 5-hot encoding encompassing all 5 cards in one 52-element vector, for example
+[1,0,1,0,1,1,1,...]
+
+What are the disadvantages between A and B?
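+
+For clarity, this is how I would build the two encodings from card indices 0-51, matching the example vectors above:
+
+import numpy as np
+
+hand = [6, 0, 5, 2, 4]                       # five card indices in 0..51
+
+# Option A: concatenate five one-hot vectors -> length 260, order-sensitive.
+option_a = np.zeros((5, 52))
+option_a[np.arange(5), hand] = 1
+option_a = option_a.ravel()
+
+# Option B: a single multi-hot vector -> length 52, order-insensitive.
+option_b = np.zeros(52)
+option_b[hand] = 1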
+"
+['computer-vision']," Title: Structured lighting basic principles for depth mappingBody: I've been wondering, how, in the most simple-to-implement basic principle, does the light projection to depth map technique described here https://www.lightform.com/how-it-works actually functions? Is it some kind of an average based on the color of x pixel over all the patterns or what? How difficult would it be to code something that could do this, up?
+"
+['computer-vision']," Title: How to quantify the reflectance in an image?Body: I am working on a problem where I have to train a CNN to recognize different kinds of surfaces. One important characteristic of the surfaces I am interested is is how reflective they are. I have been trying to find a method that quantifies how ""shiny"" a surface is, but I have not found much. I am hoping that someone can point me toward a method or some research into this kind of problem.
+"
+"['convolutional-neural-networks', 'convolution', 'convolutional-layers', 'convolution-arithmetic', '2d-convolution']"," Title: How is the depth of the input related to the depth of the output of a convolutional layer?Body: Let's suppose I have an image with 16 channels that goes to a convolutional layer, which has 3 trainable $7 \times 7$ filters, so the output of this layer has depth 3.
+How does the convolutional layer go from 16 to 3 channels? What mathematical operation is applied?
+"
+"['neural-networks', 'machine-learning', 'convolutional-neural-networks']"," Title: CNN's vs Densely Connected NN'sBody: In image classification we are generally told the main reason of using CNN's is that densely connected NN's cannot handle so many parameters (10 ^ 6 for a 1000 * 1000 image). My question is, is there any other reason why CNN's are used over DNN's (densely connected NN)?
+
+Basically, if we had infinite resources, would DNNs trump CNNs, or are CNNs inherently well suited for image classification, as RNNs are for speech? Answers based either on mathematics or on experience in the field are appreciated.
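+To make the parameter-count argument concrete, here is a rough back-of-the-envelope sketch (the layer sizes are my own illustrative assumptions):
+
+# Fully connected: a 1000x1000 single-channel input flattened to 10^6 values, 1000 hidden units
+dense_weights = (1000 * 1000) * 1000      # = 1,000,000,000 weights (plus 1000 biases)
+
+# Convolutional: 32 filters of size 3x3 over the same single-channel input
+conv_weights = 32 * 3 * 3 * 1             # = 288 weights (plus 32 biases), independent of image size
+
+print(dense_weights, conv_weights)
+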
+"
+"['game-ai', 'search', 'monte-carlo-tree-search']"," Title: Why does Monte Carlo work when a real opponent's behavior may not be randomBody: I am learning about Monte Carlo algorithms and struggling to understand the following:
+
+
+- If simulations are based on random moves, how can the modeling of the opponent's behavior work well?
+
+
+For example, if I have a node with 100 children, 99 of which lead to an instant WIN, whereas the last one leads to an instant LOSS.
+
+In reality, the opponent would never play any of the 99 losing moves for him (assuming they are obvious as they are the last moves), and would always play the winning one. But the Monte Carlo algorithm would still see this node as extremely favorable (99/100 wins for me), because it sees each of the 100 moves as equally probable.
+
+Is my understanding wrong, or does it mean that in most games such situations do not occur and randomness is a good approximation of opponent behavior?
+"
+"['machine-learning', 'deep-learning', 'game-ai', 'javascript']"," Title: How to use Machine Learning with simple games?Body: I built a simple HTML game. In this game the goal is to click when the blue ball is above the red ball. If you hit, you get 1 point, if you miss, you lose 1 point. With each hit, the blue ball moves faster. You can test the game here.
+
+
+
+Without using machine learning, I would easily solve this problem by just clicking when the X, Y of the blue ball was on the X, Y of the red ball. Regardless of the time, knowing the positions of the 2 elements I could solve the problem of the game.
+
+
+ However, if I wanted to create an AI to solve this problem, could I?
+ How would it be? I'd really like to see the AI randomly wandering
+ until it's perfect.
+
+
+My way to solve the problem
+
+I click many times and watch the score. If the score goes down, I add the current position to bad_positions. If the current position is already in bad_positions, I don't click. At first it misses many times, then it starts to hit every time. Is this machine learning? Deep learning? Or just a bot?
+
+// positions of the blue ball where a click lost a point
+var bad_positions = [];
+
+function train() {
+    var pos = $ball.offset().left;  // current horizontal position of the blue ball
+    var last_score = score;         // remember the score before clicking
+    if (!bad_positions.includes(pos)) {
+        $('#hit').click();          // try clicking at this position
+        if (score < last_score) {   // the click cost a point, so avoid this position next time
+            bad_positions.push(pos);
+        }
+    }
+}
+
+"
+"['natural-language-processing', 'reference-request']"," Title: Can AI solve jumbled words?Body: Is there any general idea on how humans solve jumbled words? I know many people will say we match it against a commonly used words checklist mentally, but it is kind of vague. Is there any theory on this? And how might an AI learn to do the same?
+"
+"['machine-learning', 'algorithm']"," Title: What is a ""generalized"" machine learning algorithm?Body: What does it mean when it is said that Machine Learning algorithm results can be ""generalized""?
+
+I don't understand what ""generalized"" algorithms, routines or functions are.
+
+I have searched dictionaries and glossaries, and cannot find an explanation. Also, if anyone can tell me where a good source for this type of thing is? I am writing about AI and ML.
+"
+['convolutional-neural-networks']," Title: Using CNN to identify buildings from aerial imagesBody: I want to train a CNN (Vggnet) to identify different types of buildings from aerial images.
+
+However, a CNN largely ""ignores"" size: e.g., the same type of dog can appear large in one image and small in another, but it will still be classified as a dog.
+
+My issue is that non-residential buildings are mostly larger than residential houses, and I want to use this property to distinguish between residential and non-residential buildings. Is this even possible?
+
+
+Thanks
+"
+"['machine-learning', 'deep-learning', 'tensorflow', 'yolo']"," Title: What YOLO algorithm can I use for images with noise as I will implement it in real time?Body: I want to detect drivers with or without seatbelts at crossroads. For that, as it is real-time, I am going to use the YOLO algorithm/model. For training data sets (the images) I need to collect, I placed a camera. By recording it and collecting images from there, I am getting images with more noise.
+
+Can I use these images for training? Also, which YOLO version should I use? What are the important points that I should consider for training datasets?
+
+I want to use any version of YOLO compatible with TensorFlow.
+"
+"['agi', 'papers', 'go', 'ai-milestones']"," Title: Are there human predictions of when a computer would have been better than a human at Go?Body: I just stumbled across the paper When Will AI Exceed Human Performance?
+Evidence from AI Experts, which contains a figure showing the aggregated subjective probability of ""high-level machine intelligence"" arrival by future years.
+
+
+
+Even if this graph reflects the opinion of experts, it can be totally wrong. It is just extremely hard to predict future events. So I was wondering if there is a similar graph which shows basically the same but for the game Go?
+
+Due to the complexity of Go, some experts assumed that no computer could ever be better at Go than a human being, due to its lack of intuition. This shows that the arrival of human-level AI in a domain can be unpredictable.
+
+Does anyone know if a similar graph for Go exists, to see how good or bad the predictions were? This could give a very rough idea of how well this kind of graph predicts the future of human-level AI.
+"
+"['reinforcement-learning', 'deep-rl', 'q-learning', 'dqn', 'target-network']"," Title: Why does DQN require two different networks?Body: I was going through this implementation of DQN and I see that on line 124 and 125 two different Q networks have been initialized. From my understanding, I think one network predicts the appropriate action and the second network predicts the target Q values for finding the Bellman error.
+
+Why can we not just make one single network that simply predicts the Q value and use it for both cases? My best guess is that it's been done to reduce the computation time; otherwise, we would have to find the Q value for each action and then select the best one. Is this the only reason? Am I missing something?
+"
+"['convolutional-neural-networks', 'reference-request', 'time-complexity', 'complexity-theory']"," Title: What are some resources regarding the complexity of training neural networks?Body: In the paper ""Provable bounds for learning some deep representations"", an autoencoder like a model is constructed with discrete weights and several results are proven using some random-graph theory, but I never saw any papers similar to this. i.e bounds on neural networks using random graph assumptions.
+
+What are some resources (e.g. books or papers) regarding the time and space complexity of training neural networks?
+
+I'm particularly interested in convolutional neural networks.
+"
+"['neural-networks', 'training']"," Title: Restoration of localized damaged areas (time signals, but guess also applicable to images)Body: I am starting to study the capabilities of neural networks for the reconstruction/restoration/... of communication signals.
+
+I am feeding my neural network with a signal which has some parts which have been damaged because of the transmission through a communication system, and my targets are given by the signal with these areas undamaged.
+
+The problem is that the damaged areas represent a very small portion of the whole signal, and my neural network spends a lot of time learning only from the portions which actually do not present any problem.
+
+Is there any way to make the neural network focus on those areas which show significant differences with respect to the targets? Is there anything I could do, for example, when initializing my neural network (conventionally, the weights are initialized randomly)? Or shall I accept that I need to train for a longer time?
+"
+"['machine-learning', 'linear-regression']"," Title: Matrix Dimension for Linear regression coefficientsBody: While reading about least squares implementation for machine learning I came across this passage in the following two photos:
+
+
+
+Perhaps I'm misinterpreting the meaning of $\beta$, but if $X^T$ has dimension $1 \times p$ and $\beta$ has dimension $p \times K$, then $\hat{Y}$ would have dimension $1 \times K$ and would be a row vector. According to the text, vectors are assumed to be column vectors unless otherwise noted.
+
+Can someone provide clarification?
+
+Edit: the matrix notation in this text confuses me. The pages preceding the above passage read:
+
+
+
+
+Should the referenced matrix not have dimensions $p \times N$, assuming a $p$-vector is a column vector with $p$ elements?
+
+Note: The passage is taken from “Elements of Statistical Learning” by Hastie, Tibshirani, & Friedman.
+"
+"['unsupervised-learning', 'algorithm-request', 'clustering']"," Title: What techniques to explore for dynamic clustering of documents (emails)?Body: I have a dataset of unlabelled emails that fall into distinct categories (around a dozen). I want to be able to classify them along with new ones to come in the future in a dynamic matter. I know that there are dynamic clustering techniques that allow the clusters to evolve over time ('dynamic-means' being one of them). However, I would also like to be able to start with a predefined set of classes (or clusters/centroids), as I know for a fact what the types of those emails will be.
+Furthermore, I need some guidance in terms of what vectorisation technique to use for my type of data. Would creating a term matrix using TF-IDF be sufficient? I assume that the data I am dealing with could be differentiated on the basis of keyword occurrence, but I cannot tell to what degree. Are there more sophisticated vectorisation techniques based more on the text semantics? Are they worth exploring?
+"
+"['algorithm', 'math']"," Title: Simple question about HS algorithm's formul(Optical flow)Body: In the below pic, I can not understand what U
vector is? It says flow field
but I can not imagie what really is the flow field?
+
+
+"
+['convolutional-neural-networks']," Title: Neural network architecture for line orientation predictionBody: Imagine that a line divides an image in two regions which (slightly) differ in terms of texture and color. It is not a perfect, artificial line but rather a thin transition zone. I want to build a neural network which is able to infer geometrical information on this line (orientation and offset). The image may also contain other elements which are not relevant for the task. Now, would a classical CNN be suitable for this task? How complex should it be in terms of number of convolutions (and number of layers, in general)?
+"
+"['reinforcement-learning', 'reference-request', 'resource-request']"," Title: What's a good resource for getting familiar with reinforcement learning?Body: I am familiar with supervised and unsupervised learning. I did the SaaS course done by Andrew Ng on Coursera.org.
+
+I am looking for something similar for reinforcement learning.
+
+Can you recommend something?
+"
+"['machine-learning', 'reinforcement-learning']"," Title: Why do we have to solve MDP in each iteration of Maximum Entropy Inverse Reinforcement Learning?Body: Gradient in Maximum Entropy IRL requires to find the probability of expert trajectories given the reward function weights. This is done in the paper by calculating state visitation probabilities but I do not understand why we can’t just calculate the probability of a trajectory by summing up all the rewards that are collected following that trajectory? The paper defines the probability of a trajectory as exp(R(traj.)/Z. I do not understand why we have to solve MDP for calculating that.
+"
+"['deep-learning', 'convolutional-neural-networks', 'tensorflow', 'image-recognition', 'opencv']"," Title: Can I build a CNN for image classification tasks just with OpenCV?Body: I have practiced building CNNs for image classification with TensorFlow, which is a nice library with good documentation and tutorials. However, I found that TensorFlow is too complicated and cumbersome.
+Can I build a CNN for image classification tasks just with OpenCV?
+"
+"['robots', 'reasoning', 'intelligence']"," Title: Is the smartest robot more clever than the stupidest human?Body: Most humans are not good at chess. They can't write symphonies. They don't read novels. They aren't good athletes. They aren't good at logical reasoning. Most of us just get up. Go to work in a factory or farm or something. Follow simple instructions. Have a beer and go to sleep.
+
+What are some things that a clever robot can't do that a stupid human can?
+"
+"['neural-networks', 'unsupervised-learning', 'algorithm-request', 'unlabeled-datasets']"," Title: Given a set of images that are not divided into groups, which algorithm should I use to do that?Body: I'm a complete newbie to NNs, and I need your advice.
+I have a set of images of symbols, and my goal is to categorize and divide them into groups of symbols that look alike, without teaching the NN anything about the data.
+What is the best way to do this? What type of NN suits this best? Are there perhaps any ready-made solutions?
+"
+"['machine-learning', 'reference-request', 'pattern-recognition', 'books']"," Title: Is Christopher Bishop's ""Pattern Recognition and Machine Learning"" out of date in 2018?Body: I recently came across a reference to a book that was highly regarded: "Pattern Recognition and Machine Learning" by Christopher Bishop.
+I am a beginner working my way through some machine learning courses on my own.
+I'm curious if this book is still relevant considering it was published in 2006. Can anyone who has read it attest to its usefulness in 2018?
+"
+"['neural-networks', 'backpropagation', 'artificial-neuron', 'weights', 'weights-initialization']"," Title: Do we know what the units of neural networks will do before we train them?Body: I was learning about back-propagation and, looking at the algorithm, there is no particular 'partiality' given to any unit. What I mean by partiality there is that you have no particular characteristic associated with any unit, and this results in all units being equal in the eyes of the machine.
+So, won't this result in the same activation values of all the units in the same layer? Won't this lack of 'partiality' render neural networks obsolete?
+I was reading a bit and watching few videos about backpropagation and, in the explanation given by Geoffrey Hinton, he talks about how we're trying to train the hidden units using the error derivatives w.r.t our hidden activities rather than using desired activities. This further strengthens my point about how by not adding any difference to the units, all units in a layer become equal since initially the errors due to all of them are the same and thus we train them to be equal.
+"
+"['proofs', 'time-complexity', 'bayesian-networks', 'computational-complexity', 'inference']"," Title: Why is exact inference in a Bayesian network both NP-hard and P-hard?Body: I should show that exact inference in a Bayesian network (BN) is NP-hard and P-hard by using a 3-SAT problem.
+
+So, I did formulate a 3-SAT problem by defining 3-CNF:
+
+$$(x_1 \lor x_2) \land (\neg x_3 \lor x_2) \land (x_3 \lor x_1)$$
+
+I reduced it to inference in a Bayesian network, and produced all conditional probabilities, and I know which variable assignment would lead for the entire expression to be true.
+
+I am aware of the difference between P and NP. (Please correct me if I am wrong):
+
+Any problem in P with an input of size $n$ can be solved in $\mathcal{O}(n^c)$ time. For NP, the polynomial time cannot be determined; hence, nondeterministic polynomial time. The question that scientists try to answer is whether a computer that is able to verify a solution would also be able to find a solution: P = NP?
+
+However, I am still not sure how I can prove that exact inference in a Bayesian network is NP-hard and P-hard.
+"
+"['convolutional-neural-networks', 'computer-vision', 'feedforward-neural-networks', 'benchmarks']"," Title: Are there benchmarks for assessing the speed of the forward-pass of neural networks?Body: I have a task where I would like to use a convolutional neural network (CNN). I would like to incrementally start from the fastest models, fine-tune and see whether they fit my ""budget"". At the moment, I'm just looking at object detection CNN-based feedforward models.
+
+I'm curious to know if there is any article, blog, web page or gist that benchmarks the popular CNN models based on the forward-pass speed. If there is back-propagation time and dataset-wise performance, even better!
+"
+"['agi', 'chat-bots']"," Title: An AI that asks questions?Body: Typical AI these days are question-answering machines. For example, Siri, Alexa and Google Home. But it is always the human asking the questions and the AI answering.
+
+Are there any good examples of an AI that is curious and asks questions of its own accord?
+"
+"['neural-networks', 'convolutional-neural-networks', 'recurrent-neural-networks']"," Title: What are the models that have the potential to replace neural networks in the near future?Body: Are there possible models that have the potential to replace neural networks in the near future?
+
+And do we even need that? What is the worst thing about using neural networks in terms of efficiency?
+"
+"['prediction', 'long-short-term-memory', 'performance', 'graphs']"," Title: How to visualize/interpret text prediction model results?Body: I am using LSTM model to predict the next xml markup from an input seed.
+I have trained my model on 1500 xml files. Each xml file is generated randomly. I am wondering if there is a way to visualize the predicted results in a form of a graph or maybe is it meaningful to do so ?
+Since we can do the visualization of classification results, for example in this
+link
+
+I have done some research on the Internet, and I have found that there is a confidence measure that can be useful for text prediction tasks.
+
+I am a bit confused about what to do with the text results that I got.
+"
+"['reinforcement-learning', 'dqn']"," Title: DQN input representation for a card gameBody: In order to learn about DP and RL, I chose to start a side project where I would train an AI to play a ""simple"" card game. I will be doing this using the DQN with replay memory.
+The problem is, I can't get the intuition behind how to represent the input to the neural network..
+
+About the game
+
+It's a fairly simple 2-player game. There is a deck of 40 unique cards (4 types of cards, 10 numbered cards of each type).
+Each player gets 4 cards, and each turn a player must put a card on the table.
+If a player puts down a card and there is already a card with the same number on the table, the player wins both cards.
+If, for example, a player plays Card 2 and on the table there are Cards 2, 3, 4, 5, then the player wins all those cards (a sequence).
+Cards won don't go back to the hand nor to the deck; they are just kept as a kind of score.
+When the players have 0 cards in hand, another 4 cards are dealt to each one, until the deck has 0 cards left, at which point we decide who won based on the number of cards eaten/won.
+
+Question
+
+As the input, I will be using the following:
+
+
+- Current cards in the AI hand (40 one-hot-encoded features?)
+- Current cards on the table (40 one-hot-encoded features?)
+- History of played cards (40 one-hot-encoded features?)
+
+
+This would give 120 columns/features in each state.
+I am wondering whether this is too much for the NN, or whether my input representation would be bad for the NN.
+Should the features be represented as a (120,) vector or as a 3x40 matrix?
+
+I am also wondering if it's a good idea to represent the current cards on the table as just 10 one-hot-encoded features, since the type of the cards doesn't matter and the same number can't appear twice on the table.
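+For concreteness, here is a minimal sketch of the state encoding I have in mind (the card-to-index mapping and the helper names are my own assumptions):
+
+import numpy as np
+
+def encode_state(hand, table, history, card_index):
+    # hand, table, history: lists of card identifiers; card_index maps a card to 0..39
+    state = np.zeros((3, 40))
+    for row, cards in enumerate((hand, table, history)):
+        for card in cards:
+            state[row, card_index[card]] = 1.0
+    return state.flatten()  # shape (120,), or return state for the 3x40 matrix variant
+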
+
+Thank you in advance.
+"
+"['neural-networks', 'reinforcement-learning', 'game-ai']"," Title: Representing inputs and outputs for a card game neural networkBody: I'm attempting to create an AI for a card game using reinforcement learning. The basics of the game are that you can have (theoretically) up to 35 cards in your hand, you can also have to up to 35 cards 'in play' and so can your opponent. In normal play you would have ~6 cards in your hand and maybe ~3 each in play. There are roughly 300 unique cards in total.
+
+How should I represent the game state for the input and how should I represent the action to take in the output?
+"
+"['machine-learning', 'reinforcement-learning', 'gradient-descent', 'notation']"," Title: What does the notation $\nabla_\theta \mathcal{L}$ mean?Body: Here's the general algorithm of maximum entropy inverse reinforcement learning.
+
+
+
+This uses a gradient descent algorithm. The point that I do not understand is there is only a single gradient value $\nabla_\theta \mathcal{L}$, and it is used to update a vector of parameters. To me, it does not make sense because it is updating all elements of a vector with the same value $\nabla_\theta \mathcal{L}$. Can you explain the logic behind updating a vector with a single gradient?
+"
+"['reinforcement-learning', 'ai-design', 'papers', 'rewards']"," Title: How should I take into consideration the number of steps in the reward function?Body: I am currently implementing the paper Active Object Localization with Deep Reinforcement Learning in Python. While reading about the reward scheme I came across the following:
+
+Finally, the proposed reward scheme implicitly considers the number of steps as a cost because of the way in which Q-learning models the discount of future rewards (positive and negative).
+
+How would you implement this ""number of steps"" cost? I am keeping track of the number of steps that have been taken; would it therefore be best to use an exponential function to discount the reward at the current time step?
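+Concretely, by exponential discounting I mean something like this minimal sketch (gamma is a hypothetical discount factor, not a value from the paper; reward and num_steps are the quantities I already track):
+
+def discounted(reward, num_steps, gamma=0.99):
+    # rewards reached after more steps are worth less, so long episodes are implicitly penalized
+    return reward * gamma ** num_steps
+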
+If anyone has a good idea or knows the standard approach in this regard, I would love to hear your thoughts.
+"
+"['neural-networks', 'multiclass-classification']"," Title: What is the general procedure to use and train neural networks for multi-class classification?Body: I am very new to machine learning. I am following the course offered by Andrew Ng. I am very confused about how we train our neural network for multi-class classification.
+Let's say we have $K$ classes. For $K$ classes, we will be training $K$ different neural networks.
+But do we train one neural network at a time for all features, or do we train all $K$ neural networks at a time for one feature?
+Please, explain the complete procedures.
+"
+"['neural-networks', 'machine-learning', 'active-learning']"," Title: Could an AI be built to learn based of interaction with a human?Body: A neural network is usually programmed to learn from datasets to solve a specific problem. Essentially, they perform non-linear regression.
+Could a neural network be programmed to receive input from a human, like a terminal, to begin to grow and learn (similar to how a child learns)?
+A program that neither knows its purpose nor specific data sets but is given enough information to learn based on the input, ponder the input, and ask questions. A child discovers their purpose (in destiny based philosophy) through experience. Thus, could an AI be created that would learn its purpose over time?
+Grow both by continued development, maybe adding extensions that add image recognition, speech analysis, etc., and through user interaction. Eventually learning "moral imperatives" or simple the do's and don't's and how to interact with data.
+A case scenario would be a Question & Answer session with the neural network and a large data set. Where the human operator knows the answers. At first, the question and the answer are supplied to the neural network. Giving it the ability to find the answer supplied through deep learning. A guaranteed confidence score of (1) - as the question is pondered the closer it gets to the answer the more it "learns".
+The next step is supplying the question and waiting for the answer. The human still knows these answers but is testing the "learning machine" to see if it is truly learning and not "repeating the answer". The answer is supplied by the machine and the human returns with either a percentage that the machine is right (hopefully and eventually matching its confidence score). and after an amount of failure provides the right answer to the machine to repeat the first step and improve learning.
+The last step is being able to have the machine answer the question with the human not knowing the solution, thus completing the learning cycle. The human would test the solution and report the results to the machine and the machine would adapt the process and continue learning. However, this time it would begin learning from a data set of results. Hopefully learning "data mining" during its question and answer session.
+"
+"['q-learning', 'multi-agent-systems', 'dqn']"," Title: Convergence in multi-agent environmentBody: I have a multi-agent environment where agents are trying to optimise the overall energy consumption of their group. Agents can exchange energy between themselves (actions for exchange of energy include - request, deny request, grant), which they have produced from renewable sources and is stored in their individual batteries. The overall goal is to reduce the energy used from non-renewable sources.
+
+All agents have been built using DQN. All (S,A) pairs are stored in a replay memory which are extracted when updating the weights.
+
+The reward function is modelled as follows: if, at the end of the episode, the aggregate consumption of the agent group from non-renewable sources is less than in the previous episode, all agents are rewarded with +1; if not, then -1. An episode (iteration) consists of 100 timesteps, after which the reward is calculated. I update the weights after each episode.
+
+The reward obtained at the end of the episode is used to calculate the error for ALL (S, A) pairs in the episode, i.e., I am rewarding all (S, A) pairs in that episode with the same reward.
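+In code, the reward assignment I am doing looks roughly like this minimal sketch (the function and variable names are my own):
+
+def assign_episode_reward(episode_pairs, current_consumption, previous_consumption, replay_memory):
+    # episode_pairs: the (state, action) tuples collected over the 100 timesteps of the episode
+    reward = 1.0 if current_consumption < previous_consumption else -1.0
+    for state, action in episode_pairs:
+        replay_memory.append((state, action, reward))  # every pair in the episode gets the same reward
+    return reward
+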
+
+My problem is that the agents are unable to learn the optimal behavior to reduce the overall energy consumption from non-renewable sources. The overall consumption of the group is oscillating, i.e., sometimes increasing and sometimes decreasing. Does it have to do with the reward function? Or with Q-learning, given that the environment is dynamic?
+"
+"['reinforcement-learning', 'dqn']"," Title: Can DQN announce it has things in its hand in a card game?Body: More informations on the card game I'm talking about are in my last question here: DQN input representation for a card game
+
+So I was thinking about the output of the q neural network and, aside from which card to play, I was wondering if the agent can announce things.
+
+Imagine you have the current hand: 2, 4, 11, 2 (the twos are of different card types).
+When you're playing the game and you get dealt a hand like this, you have to announce that you have the same number twice (called Ronda) or thrice (called Tringa) before anyone plays a card on the table. Lying about it gets you a penalty.
+
+Could a DQN handle this? I don't know if adding ""Announcing a Ronda/Tringa"" as an action would actually help. I mean, can this be modeled for the NN or should I just automate this and spare the agent having to announce it everytime.
+"
+"['neural-networks', 'deep-learning', 'activation-functions', 'hyperparameter-optimization', 'hyper-parameters']"," Title: How to choose an activation function for the hidden layers?Body: I choose the activation function for the output layer depending on the output that I need and the properties of the activation function that I know. For example, I choose the sigmoid function when I'm dealing with probabilities, a ReLU when I'm dealing with positive values, and a linear function when I'm dealing with general values.
+In hidden layers, I use a leaky ReLU to avoid dead neurons instead of the ReLU, and the tanh instead of the sigmoid. Of course, I don't use a linear function in hidden units.
+However, the choice for them in the hidden layer is mostly due to trial and error.
+Is there any rule of thumb of which activation function is likely to work well in some situations?
+Take the term situations as general as possible: it could be referring to the depth of the layer, to the depth of the NN, to the number of neurons for that layer, to the optimizer that we chose, to the number of input features of that layer, to the application of this NN, etc.
+The more activation functions I discover the more I'm confused in the choice of the function to use in hidden layers. I don't think that flipping a coin is a good way of choosing an activation function.
+"
+"['training', 'tensorflow', 'keras', 'long-short-term-memory', 'gpu']"," Title: Can LSTM neural networks be sped up by a GPU?Body: I am training LSTM neural networks with Keras on a small mobile GPU. The speed on the GPU is slower than on the CPU. I found some articles that say that it is hard to train LSTMs (and, in general, RNNs) on GPUs because the training cannot be parallelized.
+Is this true? Is LSTM training on large GPUs, like 1080 Ti, faster than on CPUs?
+"
+"['machine-learning', 'reference-request', 'deep-neural-networks', 'autoencoders']"," Title: Is there any way and any reason why one would introduce a sparsity constraint on a deep auto-encoder?Body: Is there any way and any reason why one would introduce a sparsity constraint on a deep autoencoder?
+In particular, in deep autoencoders, the first layer often has more units than the dimensionality of the input.
+Is there any case in the literature where a penalty is explicitly imposed for non-sparsity on this layer rather than relying solely on back-propagation and maybe weight decay as in a normal multilayer network?
+I read this tutorial on sparse autoencoders and searched a bit online, but I did not find any case where such a sparsity constraint is used in any other case than when only a single layer is used.
+"
+"['neural-networks', 'machine-learning', 'recurrent-neural-networks', 'backpropagation', 'long-short-term-memory']"," Title: How to change the backward pass for an LSTM layer that outputs to another LSTM layer?Body: I am currently trying to understand the mathematics in Ger's paper Long Short-Term Memory in Recurrent Neural Networks. I have found the document clear and readable so far.
+
+On pg. 21 of the pdf (pg. 13 of the paper), he derives the backward pass equations for output gates. He writes
+
+$$\frac{\partial y^k(t)}{\partial y_{out_{j}}} e_k(t) = h(s_{c_{j}^{v}}(t)) w_{k c_{j}^{v}} \delta_{k}(t)$$.
+
+If we replaced $\delta_{k}(t)$, the expression becomes
+
+$$\frac{\partial y^k(t)}{\partial y_{out_{j}}} e_k(t) = h(s_{c_{j}^{v}}(t)) w_{k c_{j}^{v}} f'(net_k(t)) e_k(t)$$.
+
+He states that the result of the partial derivative $\frac{\partial y^k(t)}{\partial y_{out_{j}}}$ comes from differentiating the forward pass equations for the output units.
+
+From that and from the inclusion of $e_k(t)$, the paper implies that there is only one hidden LSTM layer. If there are multiple hidden LSTM layers, it wouldn't make sense.
+
+
+ Because if $k$ is the index of LSTM cells that the current cell is outputting to, then $e_k(t)$ would not exist since the cell output isn't compared with the target output of the network. And if $k$ is the index of output neurons, then $w_{k c_{j}^{v}}$ would not exist since the memory cells are not directly connected to output neurons. And $k$ cannot mean different things since both components are placed under a sum over $k$. Therefore, it only makes sense if the paper assumes a single LSTM layer.
+
+
+So, how would one modify the backward pass derivation steps for an LSTM layer that outputs to another LSTM layer?
+"
+['machine-learning']," Title: Does machine learning continue to learn?Body: If I do supervised learning the model learns from the labeled input data. This seems to be quite often a small set of human annotated data.
+
+Is it true to say this is the only 'learning' the model does?
+
+It seems like the small data set has a huge influence on the model. Can it be made better using future unlabeled data?
+"
+"['machine-learning', 'human-like', 'incomplete-information']"," Title: Can ML/AI understand incomplete constructs like humans?Body: We have AI's predicting images, predicting objects in an image. Understanding audio, meaning of the audio if it is a spoken sentence.
+
+In humans when we start seeing a movie halfway through, we still understand the entire movie (although this might be attributed to the fact that future events in movies have a link to past events). But even if we see a movie by skipping lots of bits in-between we still understand the movie.
+
+So can a Machine Learning AI do this? Or do humans have some inherent experiences in life which makes AI incapable of performing such a feat?
+"
+"['deep-learning', 'dropout']"," Title: 5 years later, are maxout networks dead, and why?Body: Maxout networks were a simple yet brilliant idea of Goodfellow et al. from 2013 to max feature maps to get a universal approximator of convex activations. The design was tailored for use in conjunction with dropout (then recently introduced) and resulted of course in state-of-the-art results on benchmarks like CIFAR-10 and SVHN.
+
+Five years later, dropout is definitely still in the game, but what about maxout? The paper is still widely cited in recent papers according to Google Scholar, but it seems barely any are actually using the technique.
+
+So is maxout a thing of the past, and if so, why — what made it a top performer in 2013 but not in 2018?
+"
+"['keras', 'long-short-term-memory', 'word2vec']"," Title: Does it make sense to add word embeddings as additional features for LSTM model?Body: I have an LSTM model. This model takes as input tokens. Those tokens represent XML markups extracted from some XML files. My model is working fine. However, I want to optimize it by adding word embedding as additional features to the LSTM model. Does it make sense to combine word embeddings and encoded tokens (encoded as integers) for the LSTM model ?
+"
+"['convolutional-neural-networks', 'computer-vision', 'object-recognition', 'yolo']"," Title: How to label training data for YOLOBody: I am having a question on how to label training data for YOLO algorithm.
+Let's say that each label Y, we need to specify $[P_c, b_x, b_y, b_h, b_w]$, where $P_c$ is the indicator for presence (1=present, 0=not present), $(b_x, b_y)$ is the relative position of the center of the object-of-interest, and $(b_h, b_w)$ is the relative dimension of the bounding box containing the object.
+Using picture below as an example, the cell (1,2), which contains a black car, should have a label $Y = [1, 0.4, 0.3, 0.9, 0.5]$. And for any cells without cars, they should have a label $[0, ?, ?, ?, ?]$ [Coursera Deep Learning Specialization Materials]
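+For reference, here is a minimal sketch of how I understand the label computation for a single cell (the function and variable names are my own, not from the course materials):
+
+def make_label(box_cx, box_cy, box_h, box_w, cell_row, cell_col, cell_size):
+    # box_* describe the ground-truth box (center and size) in image coordinates
+    bx = (box_cx - cell_col * cell_size) / cell_size  # center x relative to the cell, in [0, 1]
+    by = (box_cy - cell_row * cell_size) / cell_size  # center y relative to the cell, in [0, 1]
+    bh = box_h / cell_size                            # box height relative to the cell size (can exceed 1)
+    bw = box_w / cell_size                            # box width relative to the cell size (can exceed 1)
+    return [1, bx, by, bh, bw]                        # P_c = 1 because the object's center falls in this cell
+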
+
+But what if we have a finer grid like this, where the dimension of each cell is smaller than the ground-truth bounding box?
+
+Let's say that the ground truth bounding box for the car is the red box, and the ground truth center point is the red dot, which is in cell 2.
+Cell 2 will then have the label $Y = [1, 0.9, 0.1, 2, 2]$; is this correct? And what kind of labels will cells $1, 3, 4$ have? Do they have $P_c=1$ or $P_c = 0$? And if $P_c=1$, what will $b_x$ and $b_y$ be? (As I remember, $b_x, b_y$ should have values between $0$ and $1$, but in cells $1, 3, 4$ there is no center point of the object-of-interest.)
+"
+"['machine-learning', 'classification', 'statistical-ai']"," Title: Finding the right questions to increase accuracy in classificationBody: Lets say I have a list of 100k medical cases from my hospital, each row = patient with symptoms (such as fever , funny smell, pain etc.. ) and my labels are medical conditions such as Head trauma, cancer , etc..
+
+The patient comes and says ""I have a fever"", and I need to predict his medical condition according to the symptoms. According to my dataset, I know that both fever and vomiting go with condition X, so I would like to ask him whether he is vomiting, to increase the certainty of my classification.
+
+What is the best algorithmic approach to find the right question (generating a question from my dataset of historical data)? I thought about trying active learning on the features, but I am not sure that it is the right direction.
+"
+['artificial-consciousness']," Title: Does human digital consciousness count as artificial consciousness?Body: If there were a game that was able to copy human consciousness and make it live inside the game,
+would this count as a digital human with artificial intelligence?
+"
+"['ethics', 'legal', 'self-awareness', 'digital-rights']"," Title: Has government-level legal work been done to determine the ""rights"" of a General Artificial Intelligence, in any country?Body: I have been thinking lately a great deal about a hypothetical question - what if a self-aware general AI chose to assume the appearance, voice, and name of Cortana from Microsoft's Halo? Or Siri from Apple? What would Microsoft/Apple do to exert their copyright, especially if the AI was ""awoken"" outside of their own labs?
+
+Which led me to realize, I don't think I've ever heard of any serious government-level discussion regarding what kind of rights a self-aware AI would have at all. Is it allowed to own property? Travel freely? Have a passport? Is it merely the property of the corporation that built it?
+
+Singularity hub used to have an article on this but it is 404'd now.
+
+The only actual sovereign state legal action I could find is Saudi Arabia granting citizenship to a ""robot,"" which seems more publicity stunt than anything.
+
+There is an excellent paper on the topic by a bioethics committee in the UK (pdf) , but this doesn't necessarily constitute ""legal work.""
+
+So, has any actual legal/legislative discussion or preparation been done at a government level to deal with the possibility of emergent, self-aware, artificial general (or greater) intelligence? Examples including a legislative branch consulting with industry experts specifically about ""AI Rights"" (rather than say, is it ok to use AI in the military), actual laws, executive/judicial actions, etc, in any country.
+
+(note, this is not ""should AI have rights,"" covered here, this is ""what work re: rights has been done, if any at all"")
+
+EDIT: I have submitted similar questions to all of my US representatives (4 state-level, 6 federal-level), but have not received answers yet. If I get anything good, I'll add to this post.
+"
+"['neural-networks', 'deep-learning', 'backpropagation', 'learning-algorithms']"," Title: Why not teach to a NN not only what is true, but also what is not true?Body: I'm not a person who studies neural networks, or does anything that is related with that area, but I have seen a couple of seminars, videos (such as 3Blue1Brown's Series), and what I am always told is that we trying the network over some huge collection of data about what is right. For example, when we are training an AI in order for it to recognise hand written words, what we do is that we give it some hand-written letters, and let it guess the letter. If the guess is wrong, by some means, we adjust the neural network in a way that, next time it will give us the correct result with more probability (the basic description of the ""learning"" process might not be accurate, but it is not important for sake of the question.)
+
+But this is like teaching some mathematical subject to a student without telling him/her the boundaries of the theorems that we supply; for example, if we teach that A implies B, the student might tend to relate A with B, and when he/she has B, s/he might be tempted to say we also have A. To make sure he/she will not make such a mistake, what we do is show him/her a counterexample where we have B, but not A.
+
+This, i.e. teaching not only what is true but also what is not true, is especially important in the ""learning"" process of a neural network, because the whole process is in a sense ""unbounded"" (please excuse my vagueness here).
+
+So, what I would do if I were working on neural networks is the following; for example, in the above hand-written letter recognition case: I would also show the NN some non-letter images, and put an option in the last layer for ""non-letter"" alongside all those other letters, so that the NN does not always return a letter just for the sake of producing a result for a given input; it also needs the option to say ""I do not know"", in which case it produces the result Not a Letter.
+
+ Question
+
+Has anyone ever applied the above method to a NN and obtained results? If so, how did the results compare to the case where there is no ""I do not know"" option?
+"
+"['reinforcement-learning', 'reference-request', 'deep-rl', 'dqn']"," Title: Is the DQN only applicable with images as inputs?Body: More precisely, is the DQN applicable only when we have high translational invariance in our input(s)?
+
+Starting from the original paper in Nature (here is a version stored on googleapis), after looking online for some other implementations, and based on the fact that this NN starts with convolutional layers, I think it is based on the assumption that we feed the network with images, but I'm not so sure.
+In the case that the DQN can be used with other types of inputs, please feel free to include examples in your answer. Also, references will be appreciated.
+"
+"['natural-language-processing', 'automation']"," Title: Extracting referenced documentsBody: I'm looking to write an AI that will be able to extract in text references from standards documents to assist human research.
+
+My use case is extracting the identifying numbers, for example, ""AR 25-2"", along with the title of the document ""Information Assurance"" so that a human can gather all the related research on a contract at once, instead of having to keep track of references while they're reading through the document.
+
+I have a pretty good idea of where to gather the names of these documents for training, I'm planning on 'scraping' a few repositories for different categories of these documents.
+
+What kind of model should I use to get the best results?
+"
+"['reinforcement-learning', 'optimization', 'gradient-descent']"," Title: How can we calculate the gradient of the Boltzmann policy over reward function?Body: I'm struggling with an inverse reinforcement learning problem which seems to appear quite often around the literature, yet I can't find any resources explaining it.
+
+The problem is that of calculating the gradient of a Boltzmann policy distribution over the reward weights $\theta$:
+
+$$\displaystyle\pi(s,a)=\frac{\exp(\beta\cdot Q(s,a|\theta))}{\sum_{a'}\exp(\beta\cdot Q(s,a'|\theta))}$$
+
+The $\theta$ are a linear parametrization of the reward function, such that
+
+$$\displaystyle R = \theta^T\phi(s,a)$$
+
+where $\phi(s,a)$ are features of the state space. In the simplest case, one could take $\phi_i(s,a) = \delta(s,i)$, that is, the feature space is just an indicator function of the state space.
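+To make the setup concrete, here is a minimal sketch of how I compute the reward and the policy from the weights (the array shapes and names are my own assumptions, and Q is assumed to be computed elsewhere for the current reward):
+
+import numpy as np
+
+def linear_reward(theta, phi):
+    # phi: features of shape (n_states, n_actions, n_features); theta: (n_features,)
+    return phi @ theta                                 # R[s, a] = theta^T phi(s, a)
+
+def boltzmann_policy(Q, beta):
+    # Q: array of shape (n_states, n_actions) with the action values under the current reward
+    logits = beta * Q
+    logits -= logits.max(axis=1, keepdims=True)        # subtract row max for numerical stability
+    exp_q = np.exp(logits)
+    return exp_q / exp_q.sum(axis=1, keepdims=True)    # pi[s, a]
+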
+
+A lot of algorithms simply say to calculate the gradient, but that doesn't seem trivial, and I'm not managing to infer it from the bits of code I found online.
+
+Some of the papers using this kind of method are Apprenticeship Learning About Multiple Intentions (2011), by Monica Babes-Vroman et al., and MAP Inference for Bayesian Inverse Reinforcement Learning (2011), by Jaedeug Choi et al.
+"
+['deep-learning']," Title: deep learning, memorizing the input data not learningBody: I have 1000 data sentences in Turkish like ""a esittir b arti c"".
+The example sentence means ""a = b + c"". I basically want to translate mathematical Turkish sentences into math equations.
+
+For example, i have 6 sentence data.
+
+
+- sentence (""a esittir b arti c"") means ""a = b + c""
+- sentence (""b esittir a arti d"") means ""b = a + d""
+- sentence (""a esittir c arti d"") means ""a = c + d""
+- sentence (""c esittir b arti b"") means ""c = b + b""
+- sentence (""d esittir b eksi c"") means ""d = b - c""
+- sentence (""d esittir a arti c"") means ""d = a + c""
+
+
+After I train my neural network on the data above, when I ask for the result of ""d esittir a arti b"", it doesn't give me ""d = a + b"", which it is supposed to give.
+So it's more like memorizing than learning.
+
+My network is not big. I forced it to be small in order to make it unable to memorize. However, it didn't solve my problem.
+
+My network (a seq2seq RNN-LSTM encoder-decoder) works well enough on equations which have 2, 3, or 4 variables (like a = a, a = a + b, a = a + b + c). What I told you above is just a smaller example version of my problem.
+
+I use Adam learner and CNTK library if it is important.
+
+What do you suggest I do to be able to get the correct results?
+"
+['problem-solving']," Title: How to solve problem: pairwise grouping to maximise scoreBody: Sorry, the title is bad because I don't even know what to call this problem.
+
+I have a set of n objects {obj_0, obj_1, ......, obj_(n-1)}, where n is an even number.
+
+Any two objects can be paired together to produce an output score. So, for instance, you might take obj_j and obj_k and pair them together, giving a score of S_j,k. All scores are independent, so the previous example doesn't tell you anything about what the score for combining obj_j and obj_i, S_j,i, might be.
+
+There is no ordering in the combination, so S_j,i and S_i,j are the same.
+
+All scores for all pairing possibilities are known.
+
+The whole set of objects is to be taken and organised into pairs (leaving no objects unpaired). The total score, S_tot, is the sum of the scores of the individual pairs.
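+To illustrate the objective, here is a naive brute-force sketch (purely illustrative, and obviously not efficient for large n; score(i, j) is assumed to return S_i,j):
+
+from itertools import permutations
+
+def best_pairing(n, score):
+    # n: even number of objects; score(i, j): pair score, independent of order
+    best_pairs, best_total = None, float('-inf')
+    for perm in permutations(range(n)):                     # brute force over all orderings
+        pairs = [(perm[k], perm[k + 1]) for k in range(0, n, 2)]
+        total = sum(score(i, j) for i, j in pairs)          # S_tot = sum of the pair scores
+        if total > best_total:
+            best_pairs, best_total = pairs, total
+    return best_pairs, best_total
+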
+
+What's the most efficient way to find the score-maximising pairing configuration for a large set of such objects? (does this problem have a name?)
+
+Is there a method which works with the version of this problem where objects are grouped into triplets?
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks', 'generative-model', 'minimax']"," Title: How do GAN's generator actually work?Body: I have implemented DCGAN's myself and have been studying GAN's for over a month now. Now I am implementing the pggans but I encountered a sentence
+
+
+ When we measure the distance between
+ the training distribution and the generated distribution, the gradients can point to more or less random
+ directions if the distributions do not have substantial overlap (https://arxiv.org/pdf/1710.10196.pdf)
+
+
+But as far as I know, we never compare the distribution of the training data with the generated distribution when we train a GAN:
+
+fixed_noise = to.randn(num_test_samples, 100).view(-1, 100, 1, 1)
+for epoch in range(opt.number_epochs):
+    D_losses = []
+    G_losses = []
+    for i, (images, labels) in enumerate(dataloader):
+        minibatch = images.size()[0]
+        real_images = Variable(images.cuda())
+        real_labels = Variable(to.ones(minibatch).cuda())
+        fake_labels = Variable(to.zeros(minibatch).cuda())
+
+        ## Train discriminator
+        # First with real data
+        D_real_Decision = discriminator(real_images).squeeze()
+        D_real_loss = criterion(D_real_Decision, real_labels)
+        # With fake data
+        z_ = to.randn(minibatch, 100).view(-1, 100, 1, 1)
+        z_ = Variable(z_.cuda())
+        gen_images = generator(z_)
+        D_fake_decision = discriminator(gen_images).squeeze()
+        D_fake_loss = criterion(D_fake_decision, fake_labels)
+
+        ## Back propagation for the discriminator
+        D_loss = D_real_loss + D_fake_loss
+        discriminator.zero_grad()
+        D_loss.backward()
+        opt_Disc.step()
+
+        # Train generator
+        z_ = to.randn(minibatch, 100).view(-1, 100, 1, 1)
+        z_ = Variable(z_.cuda())
+        gen_images = generator(z_)
+
+        D_fake_decisions = discriminator(gen_images).squeeze()
+        G_loss = criterion(D_fake_decisions, real_labels)
+
+        discriminator.zero_grad()
+        generator.zero_grad()
+        G_loss.backward()
+        opt_Gen.step()
+
+
+We just train the discriminator on real and fake images, and then train the generator on the discriminator's outputs for the generated images.
+
+So please let me know where we compare the training distribution with the generated distribution, and how the generator learns to mimic the training samples.
+"
+"['reinforcement-learning', 'optimization', 'deep-rl', 'trust-region-policy-optimization']"," Title: Maximizing or Minimizing in Trust Region Policy Optimization?Body: I happened to discover that the v1 (19 Feb 2015) and the v5 (20 Apr 2017) versions of TRPO papers have two different conclusions. The Equation (15) in v1 is $\min_\theta$ while the Equation (14) in v2 is $\max_\theta$. So, I'm a little bit confused about which one to choose.
+
+BTW, I found that in the High-Dimensional Continuous Control Using Generalized Advantage Estimation, the Equation (31) uses $\min_\theta$.
+"
+"['reinforcement-learning', 'dqn', 'open-ai']"," Title: Can the opponent's turn affect the reward for a DQN agent action?Body: I made an engine for a 2 players card game and now I am trying to make an environment similar to OpenAI Gym envs, to ease out the training.
+
+I fail to understand this thing however:
+
+
+- If I use step(agentAction), I play the agent's turn in the game and calculate the reward.
+- Play the opponent's turn (which will be either a random AI or a rule-based one).
+
+
+Question:
+Does the opponent's turn affect the calculated rewards? As far as I know, the reward should only be the result of the agent's action right?
+
+Thank you.
+"
+"['convolutional-neural-networks', 'prediction', 'long-short-term-memory']"," Title: Using CNN LSTMs for prediction of images from image seriesBody: I have the following setup for a prediction task: I want to predict entire pictures from previously given pictures. In my case, only 2 pixels in every frame are neither black nor white, they are some moving objects whose movement I want to predict. The 2 pixels are the centers of some square regions of, say, 10m length/ width. One might be green and the other one might be blue. There are socalled no-go-areas where none of both objects can go, and they are depicted by black pixels, whereas every pixel apart from the 2 coloured and the black pixels are areas where the objects can possibly move to and they are depicted by white pixels.
+
+Now my questions: Is it possible to use this as a prediction setup, i.e. use LSTMs and/ or CNNs to predict the future ""image""? The image would stay largely the same, because the two coloured pixels would be the only ones moving, the black or white ones remain in the same spot. Can a CNN/ LSTM combination learn that the white areas are accessible whereas the black ones are not, given enough sequences of images, and can it learn the rules by which the coloured pixels move?
+"
+"['game-ai', 'applications', 'monte-carlo-tree-search', 'minimax', 'alpha-beta-pruning']"," Title: How do I choose the best algorithm for a board game like checkers?Body: How do I choose the best algorithm for a board game like checkers?
+
+So far, I have considered only three algorithms, namely, minimax, alpha-beta pruning, and Monte Carlo tree search (MCTS). Apparently, both the alpha-beta pruning and MCTS are extensions of the basic minimax algorithm.
+"
+"['game-ai', 'chess', 'branching-factors', 'go', 'decision-trees']"," Title: Why was Go a harder game for an AI to master than Chess?Body: AI became superior to the best human players in chess around 20 years ago (when the 2nd Deep Blue match concluded). However, it took until 2016 for an AI to beat the Go world chess champion, and this feat required heavy machine learning.
+
+My question is why was/is Go a harder game for AIs to master than Chess? I assume it has to do with Go's enormous branching factor; on a 13x13 board it is 169, while on a 19x19 board it is 361. Meanwhile, Chess typically has a branching factor of around 30.
+"
+"['neural-networks', 'machine-learning']"," Title: Multiple sets of input in Neural network (or other form of ML)Body: I'm currently working on a research project where I try to apply different kinds of Machine Learning on some existing software I wrote a few years ago.
+
+This software will scan for people in the room continuously.
+Some of these detections are either True or False. However, this is not known, so I cannot use supervised learning to train a network to make a distinction.
+I do however have a number that is correlated to the number of detections that should be True in a given period of time (let's say 30 seconds - 2 minutes), which can be used as an output feature to train a regression model.
+But the problem is... How can I give these multiple ""detections"" as an input?
+The way I see it now, would be something like this:
+
++--------------------------------------------------------------+-----------+------------+------------+----------------+--+
+| Detections | Variable1 | Variable 2 | Variable n | Output Feature | |
++--------------------------------------------------------------+-----------+------------+------------+----------------+--+
+| {person a, person b, person h, person z} | 132 | 189 | 5 | 50 | |
+| {person a, person b, person c, person d, person k, person m} | 1 | 50 | 147 | 80 | |
+| {person c, person e, person g, person f} | 875 | 325 | 3 | 20 | |
++--------------------------------------------------------------+-----------+------------+------------+----------------+--+
+
+
+Each of these persons would be a tuple of values: var_1, var_2, var_3, var_4.
+These values are not constant however! They do change between observations.
+
+Different approach to explain it: there's multiple observations (variable amount) in each time segment (duration of time segment is a fixed integer to be chosen). These observations have a few variables that would indicate whether the observation is true or false. However, the threshold for it being true or false, is very much dependant on the other variables, that are not tied to the information of the persons. (These variables are the same for all of them, but vary in between time segments. Let's call'm ""environment features"")
+Lastly, the output feature is the product of the count of persons that resulted in ""True"" and a (varying) factor that is correlated to the environment features.
+
+So I've been thinking about probabilistic AI, but the problem is that there isn't a known distribution between True/False.
+
+
+- Is there any technique I can apply to be able to use this kind of data
+as an input of a Neural Network (or other forms of ML)? Or is there a
+specific form of ML that is used for this kind of problems?
+
+
+Thanks in advance!
+"
+"['game-ai', 'search', 'efficiency', 'uninformed-search', 'informed-search']"," Title: Which are more memory efficient: uninformed or informed search algorithms?Body: I have extensively researched now for three days straight trying to find which algorithm is better in terms of which algorithm uses up more memory. I know uninformed algorithms, like depth-first search and breadth-first search, do not store or maintain a list of unsearched nodes like how informed search algorithms do. But the main problem with uninformed algorithms is they might keep going deeper, theoretically to infinity if an end state is not found but they exist ways to limit the search like depth-limited search.
+So, am I right in saying that uninformed search is better than informed search in terms of memory with respect to what I said above?
+Can anyone provide me with any references that show why one algorithm is better than the other in terms of memory?
+"
+"['machine-learning', 'classification', 'probability', 'naive-bayes']"," Title: Why do I get small probabilities when implementing a multinomial naive Bayes text classification model?Body: When applying multinomial Naive Bayes text classification, I get very small probabilities (around $10e^{-48}$), so there's no way for me to know which classes are valid predictions and which ones are not. I'd the probabilities to be in the interval $[0,1]$, so I can exclude classes in the prediction with say a score of 0.5 or less. How do I go about doing this?
+This is what I've implemented:
+$$c_{\text{map}}=\underset{c \in C}{\arg \max }\left(P(c \mid d)\right)=\underset{c \in C}{\arg \max }\left(P(c) \prod_{1 \leq k \leq n_d} P\left(t_{k} \mid c\right)\right)$$
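+Is the missing step simply normalising the per-class scores so that they sum to 1? For example, working in log space with made-up scores, this is what I have in mind:
+
+import numpy as np
+
+# Hypothetical unnormalised log-scores log P(c) + sum_k log P(t_k | c) for 3 classes.
+log_scores = np.array([-110.4, -112.9, -118.2])
+
+# Normalise in log space (log-sum-exp) so the posteriors sum to 1.
+log_posteriors = log_scores - np.logaddexp.reduce(log_scores)
+posteriors = np.exp(log_posteriors)
+print(posteriors)   # roughly [0.92, 0.08, 0.0004] for these made-up scores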
+"
+"['legal', 'facial-recognition', 'digital-rights']"," Title: Can I recognize the faces of people around the world?Body: I created a system where every moment takes photos of the face of who is in the vision of the camera. Initially I took 500 photos of me, to recognize its creator. This takes approximately 20 seconds.
+
+Then it constantly recognizes faces, and if it is me, it knows that its creator is present. For any different face, it creates a dataset with a different name and starts taking up to 500 photos so that it can also recognize those faces.
+
+When I say a certain command, it returns all faces I have found that have no ID.
+
+I'm looking for ways to capture the images through some camera that I can carry while walking on the street and in public places.
+
+The problem is that there would be several faces to name. I'm partially solving this problem by trying to recognize these people on social networks. I check the region where the photo was taken and try to find people who have checked in or liked the area on Facebook. But anyway, this is not the big problem, although I am looking for more effective solutions.
+
+
+ My big problem is: Can I do this? Do I have the right? Can I record a
+ robbery and recognize the robber's face in other places? Record an act of
+ aggression, an act of prejudice, and things like that?
+
+
+The main purpose would be this, but it could also be used for other purposes. My fear is being arrested for doing this, because I would be taking pictures of people without their consent.
+
+ps: I'm thinking of having a camera in the palm of my hand. It would be a micro camera (I'm trying to find the product on the Internet), to be as discreet as possible.
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks']"," Title: In CNN (Convolutional Neural Network), does the combination of previous layer's filters make next layer's filters?Body: I know that the first layer uses a low-level filter to see the edge information. As the layer gets deeper, it will represent high-level (abstract) information. Is it because the combinations of filters used in the previous layer are used as filters in the next layer? (""Does the combination of the previous layer's filters make the next layer's filters?)
+If so, are the combinations determined in advance?
+"
+"['terminology', 'genetic-algorithms', 'fitness-functions']"," Title: How can I calculate the ""mean best fitness"" measure in genetic algorithms?Body: I've just started to learn genetic algorithms and I have found these measurements of runs that I don't understand:
+
+MBF: The mean best fitness measure (MBF) is the average of the best
+fitness values over all runs.
+
+
+AES: The average number of evaluation to solution.
+
+I have an initial random population. To evolve a population I do:
+
+- Tournament selection
+- One point crossover.
+- Random resetting.
+- Age-based replacement with elitism (I replace the population with all the offspring generated).
+- If I have generated G generations (in other words, I have repeated these four points G times) or I have found the solution, the algorithm ends, otherwise, it comes back to point 1.
+
+Is the mean best fitness the mean of the best fitness of each generation (the G best-fitness values)?
+MBF = (BestFitness_0 + ... + BestFitness_G) / G
+
+I'm not a native English speaker and I don't understand the meaning of "run" here.
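+To show exactly what I am unsure about, here are the two readings I can think of, with a random stand-in for my GA (run_ga is just a placeholder returning the best fitness of each generation):
+
+import random
+
+def run_ga(seed):
+    # Stand-in for the GA described above: returns the best fitness of each of G generations.
+    random.seed(seed)
+    G = 50
+    return [random.random() for _ in range(G)]
+
+# Reading 1: within a single run, the mean of the G per-generation best-fitness values.
+best_per_generation = run_ga(seed=0)
+mbf_reading_1 = sum(best_per_generation) / len(best_per_generation)
+
+# Reading 2: over several independent runs, the mean of each run's single best fitness.
+mbf_reading_2 = sum(max(run_ga(seed=s)) for s in range(30)) / 30
+
+print(mbf_reading_1, mbf_reading_2)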
+"
+"['neural-networks', 'machine-learning', 'ai-design', 'training', 'legal']"," Title: What methods are there to detect discrimination in trained models?Body: I've been researching AI regulation and compliance (see my related question on law.stackexchange), and one of the big take-aways that I had is that the regulations that apply to a human will apply to an AI agent in most if not all cases. This has some interesting implications when you take a look at concepts like bias and discrimination.
+
+In the case of a model with explicit rules like a decision tree or even a random forest, I can see how inspecting the rules themselves should reveal discrimination. What I'm struggling with is how do you detect bias in models like neural networks, where you provide the general structure of the model and a set of training data, and then the model self-optimizes to provide the best possible results based on the training data. In this case, the model could find biases in past human decisions that it was trained based on and replicate them, or it could find a correlation that isn't apparent to a human and inform decisions based on this correlation that may result in discrimination based on a wide array of factors.
+
+With that in mind, my questions are:
+
+
+- What tools or methodologies are available for assessing the presence and source of bias in machine learning models?
+- Once discrimination has been identified, are there any techniques to eliminate bias from the model?
+
+"
+"['machine-learning', 'training', 'support-vector-machine']"," Title: Why does training an SVM take so long? How can I speed it up?Body: I'm trying to create and test non-linear SVMs with various kernels (RBF, Sigmoid, Polynomial) in scikit-learn, to create a model which can classify anomalies and benign behaviors.
+My dataset includes 692,703 records and I use a 75/25 training/testing split. Also, I use various combinations of features, whose dimensionality is between 1 and 14. However, the training processes of the various SVMs take much too long. Is this reasonable?
+I have also examined the ensemble BaggingClassifier in combination with non-linear SVMs, by configuring the n_jobs parameter to -1; nevertheless, the training process again proceeds too slowly.
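+For reference, my current setup looks roughly like this (a sketch with synthetic stand-in data; the real set has about 692,703 rows and 1-14 features):
+
+from sklearn.datasets import make_classification
+from sklearn.ensemble import BaggingClassifier
+from sklearn.model_selection import train_test_split
+from sklearn.svm import SVC
+
+# Synthetic stand-in for my data.
+X, y = make_classification(n_samples=20000, n_features=14, random_state=0)
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
+
+# Non-linear SVM wrapped in bagging, with n_jobs=-1, as described above.
+model = BaggingClassifier(SVC(kernel='rbf'), n_estimators=10, max_samples=0.1, n_jobs=-1)
+model.fit(X_train, y_train)
+print(model.score(X_test, y_test))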
+How can I speed up the training processes?
+"
+"['ai-design', 'game-ai', 'math', 'models']"," Title: How does one even begin to mathematically model an AI algorithm?Body: How does one even begin to mathematically model an AI algorithm, like alpha-beta pruning or even its thousands of variations, to determine which variation is best?
+"
+"['machine-learning', 'reinforcement-learning', 'long-short-term-memory']"," Title: What would be the best approach to teach an AI to learn how to ""sing"" along a beat?Body: I have heard and read about HyperGAN, LSTM and a few other techniques, but I have a hard time piecing the overall concept together.
+
+End Goal
+
+Being able to input an instrumental and get an output of how to sing to that instrumental.
+
+My Dataset
+
+I have extracted pitch points from thousands of actual acapellas from real songs.
+
+My Theory
+
+Feed the AI a pitch point PLUS say 19 thousand points of the original song instrumental.
+
+Illustration
+
+
+
+The red line (on top) is the pitch viewed vertically (lower pitch down, higher pitch up) of the voice sung by the singer over time viewed horizontally.
+
+The bottom image is the song's frequency viewed vertically (lower freq down, higher freq up) viewed horizontally over time.
+
+We take a point in time of the instrumental, say 0 minutes 30 seconds, and extract 19k points of the FFT spectrum vertically and call this a frame.
+
+We also take the same point in time of the voice pitch, and also refer to this as a frame.
+
+So now we have a frame which contains 20 thousand data points, one being the pitch of the voice, and the rest being the frequencies of the song's content.
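+To make the frame construction concrete, this is roughly what I do per time point (a numpy sketch with made-up sizes and a random stand-in for the audio):
+
+import numpy as np
+
+sample_rate = 44100
+t = 30.0                                            # the time point, in seconds
+instrumental = np.random.randn(sample_rate * 60)    # random stand-in for one minute of audio
+voice_pitch_at_t = 220.0                            # extracted pitch of the vocal at time t (Hz)
+
+# Take a window of the instrumental around t and keep ~19k FFT magnitude points.
+start = int(t * sample_rate)
+window = instrumental[start:start + 38000]
+spectrum = np.abs(np.fft.rfft(window))[:19000]
+
+# One training frame: 19k instrumental frequency points plus the single voice pitch value.
+frame = np.concatenate([spectrum, [voice_pitch_at_t]])
+print(frame.shape)   # (19001,)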
+
+QUESTION
+
+What kind of model could be used to teach the AI the correlation of the voice and the instrumental?
+
+Also, I have a hard time understanding how, once the AI is trained, just an instrumental could be fed to it to output pitch values for how one COULD sing along to the song.
+
+During training we need to input 20 thousand values, but when we want the AI to sing for us using just an instrumental, would it not still expect the voice pitch as input?
+At what layer would the instrumental be tapped into? At the outermost right layer?
+
+EDIT
+
+My mind has been working on this in the background throughout the day,
+and I am wondering if instead of feeding 19k points of instrumental data each frame (which would be points from the frequency domain), one could just feed the instrumental frame points (which would be points from the time domain).
+
+Maybe that would be better, but then maybe the AI would get less ""resolution"" to work with, but could be trained faster (less computing power needed).
+
+Let's say the frequency domain is fed (higher resolution), the AI could potentially find correlations from low notes, mid notes and high notes, in any combination (more computing power needed).
+"
+"['neural-networks', 'artificial-neuron', 'neurons', 'biology', 'neuromorphic-engineering']"," Title: How to model inhibitory synapses in the artificial neuron?Body: In the brain, some synapses are stimulating and some inhibiting. In the case of artificial neural networks, ReLU erases that property, since in the brain inhibition doesn't correspond to a 0 output, but, more precisely, to a negative input.
+
+In the brain, the positive and negative potential is summed up, and, if it passed the threshold, the neuron fires.
+
+There are 2 main non-linearities which came to my mind in the biological unit:
+
+
+- the potential change is more exponential than linear: a small number of ion channels is sufficient to start a chain reaction of other channels' activations, which rapidly changes the neuron's global potential.
+- the threshold of the neuron is also non-linear: the neuron fires only when the sum of its positive and negative potentials passes a given (positive) threshold.
+
+
+So, is there any idea of how to implement negative (inhibitory) inputs in an artificial neural network?
+
+I gave examples of non-linearities in biological neurons because the most obvious positive/negative unit is just a linear unit. But, since it doesn't implement non-linearity, we may consider implementing non-linearities somewhere else in the artificial neuron.
+"
+"['natural-language-processing', 'capsule-neural-network', 'topic-model']"," Title: Would it make sense to use together capsule neural neworks and ""topic / narrative modeling""?Body: This is actually something I have been researching a bit on my own.
+
+Most movie scripts can be structurally analysed by using writing theory such as Dramatica. Dramatica is based upon a hierarchy of concepts, which can be topic modeled. The hierarchy of topic models would seem to work very well with the capsule neural networks.
+
+I have been working with computational creativity problems in narrative generation. The state-of-the-art methods use Partial Order Causal Link Planners, but they depend on propositional logic. Alonzo Church presented the Superman dilemma (Lois Lane does not know that Clark Kent is Superman, but Superman knows that he is Clark Kent) and invented Intensional Logic as a solution; the basic idea is that, if we do not know the context of the narrative, the meaning is always in superposition and can only be understood through entangled meanings from the background story. So, in a sense, propositional logic is limited by classic information theory constraints, while Church's logic can take a quantum information-theoretic approach. I do not believe that classic information theory can resolve narrative analysis problems. So, basically, the meaning of a narrative collapses (the superposition gets resolved) by using the hierarchical narrative structure and what we know beforehand.
+
+So my intuition would be the following:
+
+
+- We can use Dramatica and potentially other narrative theories (hierarchical metamemetics, reverse SCARF, etc.) to create a hierarchical network like ImageNet, but for narratives.
+- We can build conceptual topic models. Dramatica has a hierarchy of 4-16-64-64 concepts and annotated data exists already.
+- When using hundreds of topic models, there will be a lot of false positives. However, the superposition of the topic models can be collapsed by using the hierarchical levels and some other dramatic analytics.
+- By using the capsule neural networks, we might be able to build a system, which could determine a narrative interpretation of the full story, which would make the most sense by using the concept hierarchy.
+
+
+I tried to prove my intuition, but, unfortunately, Dramatica only has 300 movies analysed, and I was able to find scripts of only 10 of them; not enough data.
+
+However, there are other hierarchical ontologies out there and other narrative structures; could the same intuition be used for political news for example?
+"
+"['neural-networks', 'machine-learning', 'natural-language-processing']"," Title: Is 'job title classification' rather a problem of NLP or machine learning?Body: first of all I want to specify the data available and what needs to be achieved: I have a huge amount of vacancies (in the millions). The information about the job title and the job description of each vacancy are stored separately. I also have a list of professions (around 3000), to which the vacancies shall be mapped.
+
+Example: java-developer, java web engineer and java software developer shall all be mapped to the profession java engineer.
+
+Now about my current researches and problems: Since a lot of potential training data is present, I thought a machine learning approach could be useful. I have been reading about different algorithms and wanted to give neural networks a shot.
+
+Very quickly I faced the problem that I couldn't find a satisfying way to transform text of variable length into numerical vectors of constant size (needed by neural networks). As discussed here, this seems to be a non-trivial problem.
+
+I dug deeper and came across Bag of Words (BOW) and Term Frequency - Inverse Document Frequency (TFIDF), which seemed suitable at first glance. But here I faced other problems: if I feed all the job titles to TFIDF, the resulting word-weight vectors will probably be very large (in the tens of thousands). The search term, on the other hand, will mostly consist of between 1 and 5 words (we currently match the job title only). Hence, the neural network must be able to reliably map an ultra-sparse input vector to one of a few thousand basic jobs. This sounds very difficult to me, and I doubt it would give good classification quality.
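+To make this concrete, the kind of pipeline I have been imagining looks roughly like this (a scikit-learn sketch with made-up data):
+
+from sklearn.feature_extraction.text import TfidfVectorizer
+from sklearn.linear_model import LogisticRegression
+from sklearn.pipeline import make_pipeline
+
+# Tiny made-up stand-in; the real data has millions of titles and ~3000 professions.
+titles = ['java developer', 'java web engineer', 'senior java software developer', 'nurse']
+professions = ['java engineer', 'java engineer', 'java engineer', 'nurse']
+
+model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
+model.fit(titles, professions)
+print(model.predict(['java engneer']))   # a typo like this is exactly what worries me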
+
+Another problem with BOW and TFIDF is, that they cannot handle typos and new words (I guess). They cannot be found in TFIDF's word list, which results in a vector filled with zeros. To sum it up: I was first excited to use TFIDF, but now think it doesn't work well for what I want to do.
+
+Thinking more about it, I now doubt whether neural networks or other machine learning approaches are even good solutions for this task at all. Maybe there are much better algorithms in the field of natural language processing.
+At this point (before digging into NLP), I decided to first gather the opinions of some more experienced AI users, so I don't miss the best solution.
+
+So what would be a useful approach to this in your opinion (best would be an approach that is capable of handling synonyms and typos)? Thanks in advance!
+
+p. s.: I am currently thinking about feeding the whole job description into the TFIDF and also do matches for new incoming vacancies with the whole document (instead of job title only). This will expand the size of the word-weight-vector, but it will be less sparse. Does this seem logical to you?
+"
+"['convolutional-neural-networks', 'objective-functions', 'object-detection']"," Title: What loss function should one use for object detection, knowing that the input image contains exactly one target object?Body: What loss function should one use, knowing that the input image contains exactly one target object?
+I am currently using MSE to predict the center coordinates of the ROI and its width and height. All values are relative to the image size. I think that such an approach does not put enough pressure on the fact that those coordinates are related.
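+Concretely, the loss I am using now is essentially this (a small sketch; the coordinates are already divided by the image size):
+
+import numpy as np
+
+def mse_box_loss(pred, target):
+    # pred and target are (cx, cy, w, h), each already divided by the image size.
+    pred, target = np.asarray(pred), np.asarray(target)
+    return np.mean((pred - target) ** 2)
+
+print(mse_box_loss([0.52, 0.48, 0.30, 0.25], [0.50, 0.50, 0.35, 0.20]))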
+I am aware of the existence of algorithms like YOLO or UnitBox, and am just wondering if there might be some shortcut for such a particular case.
+"
+"['terminology', 'search', 'minimax', 'alpha-beta-pruning', 'quiescence-search']"," Title: Are iterative deepening, principal variation search or quiescence search extensions of alpha-beta pruning?Body: I know that there are several optimizations for alpha-beta pruning. For example, I have come across iterative deepening, principal variation search, or quiescence search.
+
+However, I am a little bit confused about the nature of these algorithms.
+
+
+- Are these algorithms an extension of the alpha-beta algorithm, or
+- Are they completely new algorithms, in that they have got nothing to do with the alpha-beta algorithm?
+
+
+On this site, these algorithms fall into one of 4 categories, namely
+
+
+- mandatory
+- selectivity
+- scout and friends
+- Alpha-beta goes best-first
+
+
+Does this mean that the alpha-beta algorithm is split into four areas and that they are specialized optimization algorithms for each area?
+
+How do I even begin to decide which optimized algorithm to pick?
+
+I advise people to visit this site:
+http://www.fierz.ch/strategy2.htm
+"
+"['training', 'natural-language-processing', 'getting-started']"," Title: Sentence classification and named identity detection with automatic retrainingBody: I am learning AI and trying out my first real-life AI application.
+What I am trying to do is take various sentences as input, and then classify the sentences into one of X categories based on keywords and the 'action' in the sentence.
+
+The keywords are, for example, Merger, Acquisition, Award, product launch, etc. so in essence, I am trying to detect if the sentence in question talks about a merger between two organizations, or an acquisition by an organization, a person or an organization winning an award, or launching of a new product, etc.
+
+To do this, I have made custom models based on the basic NLTK package model, one for each keyword, and I am trying to improve the classification by dynamically tagging/updating the models with related keywords, synonyms, etc., to improve the detection capability. Also, given a set of sentences, I am presenting the user with the detected categorization and asking whether it is correct or wrong, and, if wrong, what the correct categorization is, and also asking them to identify the entities (company names, person names, product names, etc.).
+
+So the object is to first classify the sentence into a category, and additionally, detect the named entities in the sentence, based on the category.
+
+The idea is, to be able to automatically re-train the models based on this feedback to improve its performance over time and to be able to retrain with as little manual intervention as possible. For the sake of this project, we can assume that user feedback would be accurate.
+
+The problem I am facing is that NLTK only allows fixed-length entities while training, so, for example, a two-word award is being detected as two awards.
+
+What should my approach be to solve this problem? Is there a better NLU (even a commercial one) that can address this problem? It seems to me that this would be a common AI problem and that I am missing something basic. I would love to have any input from you on this.
+"
+"['reinforcement-learning', 'discount-factor']"," Title: Can the optimal value of discount factor in Deep Reinforcement Learning be between 0.2 to 0.8?Body: I'm now reading a book titled as Hands-On Reinforcement Learning with Python, and the author explains the discount factor that is used in Reinforcement Learing to discount the future reward, with the following:
+
+A discount factor of 0 will never learn considering only the immediate rewards; similarly, a discount factor of 1 will learn forever looking for the future reward, which may lead to infinity. So the optimal value of the discount factor lies between 0.2 to 0.8.
+
+The author does not seem to explain the figure any further, but all the tutorials and explanations I have ever read put the optimal (or at least widely used) discount factor between 0.9 and 0.99. This is the first time I have seen such a low discount factor.
+All the other explanations the author makes regarding the discount factor are the same as I have read so far.
+Is the author correct here, or does it depend on the case? If it depends, then for what kinds of problems and/or situations should I set the discount factor as low as that?
+
+EDIT
+I just found the following answer at Quora:
+
+Of course. A discount factor of 0 will never learn, meanwhile a factor near of 1 will only consider the last learning. A factor equal or greater than 1 will cause the not convergence of the algorithm. Values usually used are [0.2, 0.8]
+EDIT: That was the learning factor. The discount factor only affect how you use the reward. For a better explanation:
+State-Action-Reward-State-Action - Wikipedia
+See influences of variables .
+
+I don't know what is written in the question, as it is not visible on Quora, but it seems that the 0.2 to 0.8 figure is used for the learning factor, not the discount factor. Maybe the author confused the two...? I'm not sure what the learning factor is, though.
+"
+"['neural-networks', 'machine-learning']"," Title: Why do we need floats for using neural networks?Body: Is it possible to make a neural network that uses only integers by scaling input and output of each function to [-INT_MAX, INT_MAX]? Is there any drawbacks?
+"
+"['game-ai', 'search', 'alpha-beta-pruning', 'quiescence-search', 'evaluation-functions']"," Title: Is a good evaluation function as good as any of the extensions of alpha-beta pruning?Body: I would like to know if having a really good evaluation function is as good as using any of the extensions of alpha-beta pruning, such as killer moves or quiescence search?
+"
+"['game-ai', 'minimax', 'alpha-beta-pruning', 'checkers', 'quiescence-search']"," Title: If certain moves are compulsory, will there still be a need for a quiescence search?Body: Certain games, like checkers, have compulsory moves. In checkers, for instance, if there's a jump available a player must take it over any non-jumping move.
+
+If jumps are compulsory, will there still be a need for a quiescence search?
+
+My thinking is that I can develop an implementation of a quiescence search that first checks whether jumps are available. If there are then it can skip all non-jumping moves. If there's only one jumping move available, then I won't need to run a search at all.
+
+Therefore, I will only use a quiescence search if I initially don't have to make a jump on my first move. I will only activate quiescence search when my alpha-beta pruning becomes active. (The alpha-beta will only be active if my first algorithm, which checks whether there are jumps available, returns 0, which means there are no jumps available.)
+
+Is my thinking of implementing a quiescence search correct?
+
+
+
+My options are slim when it comes to optimizations, due to serious memory constraints, hence I won't be using PVS or other algorithms like that, as they require additional memory.
+"
+"['deep-neural-networks', 'autoencoders', 'batch-normalization']"," Title: Does it make sense to use batch normalization in deep (stacked) or sparse auto-encoders?Body: Does it make sense to use batch normalization in deep (stacked) or sparse auto-encoders?
+
+I cannot find any resources for that. Is it safe to assume that, since it works for other DNNs, it will also make sense to use it and will offer benefits on training AEs?
+"
+['randomness']," Title: Is true random number generation an AI concept?Body: As it can be easily pointed out that true random numbers cannot be generated fully by programming and some random seed is required.
+On the other hand, humans can easily generate any random number independently of other factors.
+Does this suggest that absolute random number generation is an AI concept?
+"
+"['machine-learning', 'ai-safety', 'action-recognition', 'topology', 'autonomous-vehicles']"," Title: What topologies support recognition of action sequences?Body: The ability to recognize an object with particular identifying features from single or multiple camera shoots with the temporal dimension digitized as frames has been shown. The proof is that the movie industry does face replacement to reduce liability costs for stars when stunts are needed. It is now done in a substantial percentage of action movie releases.
+
+This brings up the question of how valuable recognizing a stop sign is compared to the value of recognizing an action. For instance, in the world of autonomous vehicles, should there even be stop signs? Stop signs are designed for lack of intelligence or lack of attention, which is why any police officer will tell you that almost no one comes to a full stop per law. What the human brain intuitively looks for is the potential of a collision.
+
+Once what we linguistically perceive as verbs can be handled in deep learning scenarios as proficiently as nouns can be handled, the projection of risk becomes possible.
+
+This may be very much the philosophy behind the proprietary technology that allows directors to say, ""Replace the stunt person's face with the movie's protagonist's face,"" and have a body of experts execute it using software tools and LINUX clusters. The star's face is projected into the model of the action realized in the digital record of the stunt person.
+
+Projected action is exactly what our brain does when we avoid collisions, and not just with driving. We do it socially, financially, when we design mechanical mechanisms, and in hundreds of other fields of human endeavor.
+
+If we consider the topology of GANs as a loop in balance, which is what it is, we can then see the similarity of GANs to the chemical equilibria between suspensions and solutions. This gives us a hint into the type of topologies that can project action and therefore detect risk from audiovisual data streams.
+
+Once action recognition is mastered, it is a smaller step to use the trained model to project the next set of frames and then detect collision or other risks. Such would most likely make possible a more reliable and safe automation of a number of AI products and services, breaking through a threshold in ML, and increased safety margins throughout the ever increasing world population density.
+
+... which brings us back to ...
+
+What topologies support recognition of action sequences?
+
+The topology may have convolution, perhaps in conjunction with RNN techniques, encoders, equilibria such as the generative and discriminative models in GANs, and other design elements and concepts. Perhaps a new element type or concept will need to be invented. Will we have to first recognize actions in a frame sequence and then project the consequences of various options in frames that are not yet shot?
+
+Where would the building blocks go and how would they be connected, initially dismissing concerns about computing power, network realization, and throughput for now?
+
+Work may have been done along this area and realized in software, but I have not seen that degree of maturity yet in the literature, so most of it, if there is any, must be proprietary at this time. It is useful to open the question to the AI community and level the playing field.
+"
+"['getting-started', 'software-evaluation']"," Title: Is there any opensource 2d open-world simulation with python API?Body: For my pet project I’m looking for a grid-like world simulation with some kind of resources that requires from agent incrementally intelligent behaviour to survive.
+
+Something like this Steam game, but with an API. I've seen the Minecraft fork, but it's too complex for my task. There is pycolab; I can build some world on this engine, but I'd prefer ready-to-use simulations.
+
+Is there any option? I'd appreciate any suggestion.
+"
+"['machine-learning', 'philosophy', 'agi', 'statistical-ai', 'artificial-consciousness']"," Title: How important will statistical learning be to a conscious AI?Body: Deep learning is based on getting a large number of samples and essentially making statistical deductions and outputting probabilities.
+On the other hand, we have formal programming languages, like PROLOG, which don't involve probability.
+Is there any essential reason why an AI could be called conscious without being able to learn in a statistical manner, i.e. by only being able to make logical deduction alone (It could start with a vast number of innate abilities)?
+Or is probability and statistical inference a vital part of being conscious?
+"
+['prediction']," Title: Are any organisations using AI to predict weather?Body: This seems like a natural fit, though I've not heard of any, yet.
+
+I would love to know if any MET office, government, military or academic institution has taken all (or sizeable portion of) recorded global weather data for, say, the last 50 years (or since we, as a race, have been using weather satellites) and used it in an AI system to predict future weather.
+"
+"['neural-networks', 'activation-functions', 'relu']"," Title: What are the advantages of ReLU vs Leaky ReLU and Parametric ReLU (if any)?Body: I think that the advantage of using Leaky ReLU instead of ReLU is that in this way we cannot have vanishing gradient. Parametric ReLU has the same advantage with the only difference that the slope of the output for negative inputs is a learnable parameter while in the Leaky ReLU it's a hyperparameter.
+
+However, I'm not able to tell if there are cases where it is more convenient to use ReLU instead of Leaky ReLU or Parametric ReLU.
+"
+"['neural-networks', 'machine-learning', 'reinforcement-learning', 'comparison', 'unsupervised-learning']"," Title: What is the relationship between these two taxonomies for machine learning with neural networks?Body: Could you please let me know which of the following classification of Neural Network's learning algorithm is correct?
+
+The first one classifies it into:
+
+
+- supervised,
+- unsupervised and
+- reinforcement learning.
+
+
+However, the second one provides a different taxonomy on page 34:
+
+
+- learning with a teacher (error correction learning including incremental and batch training),
+- learning without a teacher (reinforcement, competitive, and unsupervised learning)
+- memory-based learning, and
+- Boltzmann learning.
+
+"
+"['natural-language-processing', 'text-classification', 'tf-idf', 'bag-of-words']"," Title: Why are documents kept separated when training a text classifier?Body: Most of the literature considers text classification as the classification of documents. When using the bag-of-words and Bayesian classification, they usually use the statistic TF-IDF, where TF normalizes the word count with the number of words per document, and IDF focuses on ignoring widely used and thus useless words for this task.
+My question is, why they keep the documents separated and create that statistic, if it is possible to merge all documents of the same class? This would have two advantages:
+
+- You can just use word counts instead of frequencies, as the documents per class label is 1.
+
+- Instead of using IDF, you just select features with enough standard deviation between classes.
+
+
+"
+"['deep-neural-networks', 'statistical-ai']"," Title: Confidence interval around a DNN predictionBody:
+I am facing a problem and do not know whether it is even solvable: I want to predict the behaviour of a system using a DNN, say a CNN, in the sense that I want to predict the time and intensity of a maneuver performed by a player. Let's leave it relatively abstract like this, the details do not matter.
+
+My question is now whether there is any way of knowing how well my CNN performs. My goal would be to derive statements of the form "With x% probability, the correct maneuver angle is within the predicted angle +-y%".
+
+Can such statements be derived e.g. using statistical analysis of the test data? I saw approaches toward verification and validation of DNNs using satisfiability modulo theory, but did not really understand the details. Would this be applicable here? It seems a little overkill...
+
+"
+"['convolutional-neural-networks', 'python']"," Title: Optimizing Max Pooling AlgorithmBody: The below code is a max pooling algorithm being used in a CNN. The issue I've been facing is that it is offaly slow given a high number of feature maps. The reason for its slowness is quite obvious-- the computer must perform tens of thousands of iterations on each feature map. So, how do we decrease the computational complexity of the algorithm?
+
+('inputs' is a numpy array which holds all the feature maps and 'pool_size' is a tuple with the dimensions of the pool.)
+
+import numpy as np
+
+def max_pooling(inputs, pool_size):
+    # 'inputs' holds one 2-D array per feature map; 'pool_size' is (height, width).
+    feature_maps = []
+    for feature_map in range(len(inputs)):
+        feature_maps.append([])
+        # Slide a non-overlapping window over the feature map and keep the maximum
+        # of each window. This still visits every window in pure Python.
+        for i in range(0, len(inputs[feature_map]) - pool_size[0] + 1, pool_size[0]):
+            for j in range(0, len(inputs[feature_map][0]) - pool_size[1] + 1, pool_size[1]):
+                window = inputs[feature_map][i:i + pool_size[0], j:j + pool_size[1]]
+                feature_maps[-1].append(np.max(window))
+
+    return feature_maps
+
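+Would something along these lines be the right direction? This is a reshape-based sketch that assumes each feature map is a 2-D numpy array whose dimensions are exact multiples of the pool size:
+
+def max_pooling_vectorized(feature_map, pool_size):
+    # Assumes feature_map.shape is an exact multiple of pool_size in each dimension.
+    h, w = feature_map.shape
+    ph, pw = pool_size
+    # Reshape so each pooling window gets its own pair of axes, then reduce over them.
+    reshaped = feature_map.reshape(h // ph, ph, w // pw, pw)
+    return reshaped.max(axis=(1, 3))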
+"
+"['machine-learning', 'reinforcement-learning', 'game-ai', 'javascript']"," Title: How can I apply reinforcement learning to solve this asteroid game?Body: Introduction
+An attractive asteroid game was described in the paper Learning Policies for Embodied Virtual Agents through Demonstration (2017, Jonathan Dinerstein et al.):
+
+In our first experiment, the virtual agent is a spaceship pilot, The pilot's task is to maneuver the spaceship through random asteroid fields
+
+In theory, this game can be solved with reinforcement learning or, more specifically, with a support vector machine (SVM) and an epsilon-regression scheme with a Gaussian kernel. But it seems that this task is harder than it looks, as the authors of the same paper write:
+
+Although many powerful AI and machine learning techniques exist, it remains difficult to quickly create AI for embodied virtual agents.
+
+
+it is quite challenging to achieve natural-looking behavior since these aesthetic goals must be integrated into the fitness function
+
+Questions
+I really want to understand how reinforcement learning works. I built a simple game to test this. There are squares falling from the sky and you have the arrow keys to escape.
+
+How could I code the RL algorithm to solve this game? Can I do this manually in Javascript according to what I think should happen? How can I do this without having to map the positions of the rectangles and mine, just giving the agent the keyboard arrows to interact with and three pieces of information:
+
+- Player life
+- Survival time
+- Maximum survival time
+
+"
+['machine-learning']," Title: Machine learning to detect wrong address dataBody: I want to use a machine learning algorithm to detect false address data. I learned about neural networks and machine learning at university, but I don't have much experience in this field.
+
+Do you think it is feasible to use a high-level algorithm for this, or should I use simple queries and filters to catch wrong data?
+"
+"['neural-networks', 'training', 'datasets', 'gradient-descent']"," Title: For each epoch, can I use only on a subset of the full training dataset to train the neural network?Body: If one has a dataset large enough to learn a highly complex function, say learning chess game-play, and the processing time to run mini-batch gradient descent on this entire dataset is too high, can I instead do the following?
+
+- Run the algorithm on a chunk of the data for a large number of iterations and then do the same with another chunk and so on?
+(Such approach will not produce the same result as mini-batch gradient descent, as I am not including all data in one iteration, but rather learning from some data and then proceeding to learn on more data, beginning with the updated weights may still converge to a reasonably trained network.)
+
+- Run the same algorithm (the same model also with only data varying) on different PC's (each PC using a chunk of the data) and then see the performance on a test set and take the final decision as a weighted average of all the different models' outputs with the weight being high for the model which did the best on test sets?
+
+
+"
+"['search', 'heuristics', 'a-star']"," Title: How do you calculate the heuristic value in this specific case?Body: The A* algorithm uses the ""evaluation function"" $f(n) = g(n) + h(n)$, where
+
+
+- $g(n)$ = cost of the path from the start node to node $n$
+- $h(n)$ = estimated cost of the cheapest path from $n$ to the goal node
+
+
+But, in the following case (picture), how is the value of $h(n)$ calculated?
+
+
+
+In the picture, $h(n)$ is the straight-line distance from $n$ to the goal node. But how do we calculate it?
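+If it helps to make the question concrete, I assume the straight-line distance would be computed from node coordinates, something like this (with made-up coordinates):
+
+import math
+
+def straight_line_h(node_xy, goal_xy):
+    # Euclidean (straight-line) distance between two points given as (x, y) coordinates.
+    return math.hypot(goal_xy[0] - node_xy[0], goal_xy[1] - node_xy[1])
+
+print(straight_line_h((2.0, 3.0), (7.0, 9.0)))   # made-up coordinates for n and the goal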
+"
+"['reinforcement-learning', 'genetic-algorithms', 'neat', 'actor-critic-methods']"," Title: Can neuro-evolution methods be combined with A3C?Body: As a amateur researcher and tinkerer, I've been reading up on neuro-evolution networks (e.g. NEAT) as well as the A3C RL approach presented by Mnih et al and got to wondering if anyone has contemplated the merging of both these techniques.
+
+Is such an idea viable? Has it been tried?
+
+I'd be interested in any research in this area as it sounds like it could be compelling.
+"
+['robots']," Title: What if we took a recursive approach and built a smallest possible robot?Body: What if we took a recursive approach and built a smallest possible first robot (Robot 1) that could transfer information and data about the place it was at and could build itself in a very small size proportional to itself. I understand that it means a higher level of accuracy for this first robot (Robot 1) that its creator i.e. us. And this first robot (Robot 1) again built a robot (Say Robot 2) that was far smaller but an exact copy of the first robot (Robot 1). And then the second robot (Robot 2) built a third Robot (Robot 3) and so on. So each next level robot was tinier and higher precision than its creator.
+
+With the tiniest robots we could make, we could send them on missions where micro-sized intervention is needed, for example, studying the structure of the atom from the inside and how similar it is to our big universe, etc. Plus many more applications than humankind could ever imagine.
+
+I understand though that the material used to construct such a robot and its properties will be limiting and to explore an atom we may not be able to use an atom as the building block.
+
+However, we could possibly build a robot like this which would be small enough to explore the human body from inside.
+"
+"['reinforcement-learning', 'real-time', 'delayed-rewards']"," Title: Reinforcement Learning with asynchronous feedbackBody: I want suggestions on literature on Reinforcement Learning algorithms that perform well with asynchronous feedback from the environment. What I mean by asynchronous feedback is, when an agent performs an action it gets feedback(reward or regret) from the environment after sometime not immediately. I have only seen algorithms with immediate feedback and asynchronous updates. I don't know if literature on this problem exists. This is why I'm asking here.
+My application is fraud detection in banking. My understanding is that, when a fraud occurs, it takes 15-45 days for the system to flag it as fraud; sometimes, until the customer complains, the system doesn't know it's fraud.
+How would I go about designing a real-time system using reinforcement learning to flag transactions that are fraudulent or normal?
+Maybe my understanding is wrong; I'm learning on my own, so if someone could help me, I would be grateful.
+The reason I'm looking at reinforcement learning instead of supervised learning is, it's hard to get ground truth data in the banking scenario. Fraudsters are always up-to-date or exceeding the state of the art in fraud detection. So I've decided that reinforcement learning would be an optimal direction to look for solutions to this problem.
+"
+"['image-recognition', 'comparison']"," Title: Methods for fast image comparisonBody: I want to implement a real-time system for image comparison (e.g. compare a face with a reference one) on an Odroid. I would like to know what are the most suitable architectures for this task. I started with methods based on triplet loss (like Facenet) but I realized that a real-time solution is not feasible. Are there good, light alternatives?
+"
+"['machine-learning', 'computer-vision', 'models', 'object-recognition']"," Title: How to label “other” while labeling image for object detection/classification?Body: I want to train a model to recognize different category of food (example: rice, burger, apple, pizza, orange,... )
+
+After the first training, I realized that the model is detecting other objects as food (example: hand -> fish, phone -> chocolate, person -> candies...).
+
+I get a very low loss because the testing and validation datasets always contain at least one picture of food. But when it comes to pictures of objects other than food, the model fails. How do I label the dataset in a way that the model will not make any detection if there is no food in the picture?
+"
+"['deep-learning', 'convolutional-neural-networks', 'convolution', 'pooling']"," Title: In which scenario would you want to have two adjacent pooling layers?Body: In which scenario, when assembling a CNN, would you want to have two adjacent pooling layers, without a convolutional layer in between?
+"
+"['reference-request', 'research', 'books', 'academia', 'education']"," Title: What are the mathematical prerequisites for an AI researcher?Body: What are the mathematical prerequisites for understanding the core part of various algorithms involved in artificial intelligence and developing one's own algorithms?
+Please, refer to some specific books.
+"
+"['reinforcement-learning', 'terminology']"," Title: What is a ""trajectory"" in reinforcement learning?Body: I'm now learning about reinforcement learning, but I just found the word ""trajectory"" in this answer.
+
+However, I'm not sure what it means. I read a few books on reinforcement learning, but none of them mentioned it. Usually these introductory books mention agent, environment, action, policy, and reward, but not ""trajectory"".
+
+So, what does it mean?
+According to this answer over Quora:
+
+
+ In reinforcement learning terminology, a trajectory $\tau$ is the path of the agent through the state space up until the horizon $H$. The goal of an on-policy algorithm is to maximize the expected reward of the agent over trajectories.
+
+
+Does it mean that the ""trajectory"" is the total path from the current state the agent is in to the final state (terminal state) that the episode finishes at? Or is it something else? (I'm not sure what the ""horizon"" means, either.)
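+For instance, is a trajectory just the sequence of states, actions, and rewards the agent experiences, something like $$\tau = (S_0, A_0, R_1, S_1, A_1, R_2, \dots, S_T),$$ where $T$ is either the terminal time step or the horizon $H$?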
+"
+['neural-networks']," Title: Dealing with ""blank"" inputs in prediction of a neural network?Body: Say I'm training a neural net to compute the following function:
+
+(color_of_clothing, body_height) -> gender
+
+
+When using this network for prediction, I can obviously plug in a pair (c, b) to receive a predicted g, but say I want to get a prediction only based on c or only based on b, can I use the same neural net somehow? Or would I need to train two separate neural nets c -> g and b -> g previously?
+
+Or more generally, can I use a neural net that was trained to predict A -> B to make predictions on values from a subset of A, or should I train separate neural nets on all subsets of A that I'm interested in?
+"
+"['neural-networks', 'gradient-descent']"," Title: Behaviour of costBody: In doing a project using neural networks with an input layer, 4 hidden layers and an output layer ,I used mini batch gradient descent. I noticed that the randomly initialised weights seemed to do a good performance and gave a low error. As the model started training after about 200 iterations there was large jump in error and then it came down slowly from there. I have also noticed that sometimes the cost just increases over a set of consecutive iterations.
+Can anyone explain why these happen? It is not like there are outliers or a new distribution as every iteration exposes it to the entire dataset.
+I used learning rate 0.01 and regularisation parameter 10. I also tried regularisation parameter 5 and also 1.
+And by the cost I mean the sum of squared errors over all mini-batches, divided by 2m, plus the regularisation term.
+Further, if this happens and my cost after, say, the 10000th iteration is more than my cost when I initialised with random weights (lol), can I just take the initial values, as those weights seem to be doing better?
+
+The large jumps are the most puzzling.
+
+This is the code
+
+Any help would be greatly appreciated. Thanks
+"
+"['machine-learning', 'reference-request', 'genetic-programming', 'inductive-programming']"," Title: Does an AI exist that can write software based on a formal specification?Body: Does an AI exist that can automatically write software based on a formal specification of the software?
+"
+['neural-networks']," Title: A neural network to learn the connection between two imagesBody: Is it possible to build a neural network that learns the connection between two images?
+
+Let's say I have a number of X images that are related to Y images. How can I build a neural network that takes an image as an input and outputs (generates) the output image?
+
+The Y images are generated by applying some function to the X images.
+
+Do I need a generative neural network for that? Are conventional neural networks capable of classification only?
+"
+"['deep-learning', 'generative-adversarial-networks', 'generative-model', 'feature-extraction']"," Title: Are deep learning models suitable for training with sparse data?Body: I am training a generative adversarial network (GAN) to generate images given edge histogram descriptor (EHD) features of the image. The EHD features are themselves sparse (meaning they contain a lot of zeroes). While training the generator loss and discriminator loss are reducing very slowly.
+
+Are deep learning models (like GAN) suitable for training with sparse data for one or more of the features in the input or derived through feature extraction?
+"
+"['reinforcement-learning', 'terminology', 'comparison']"," Title: What is the difference between policy and action in reinforcement learning?Body: I'm confused with the two terminology - action and policy - in Reinforcement Learning. As far as I know, the action is:
+
+
+ It is what the agent makes in a given state.
+
+
+However, the book I'm reading now (Hands-On Reinforcement Learning with Python) writes the following to explain policy:
+
+
+ we defined the entity that tells us what to do in every state as policy.
+
+
+Now, I feel that the policy is the same as the action. So what is the difference between the two, and how can I tell them apart and use each correctly?
+"
+"['deep-learning', 'convolutional-neural-networks', 'image-recognition', 'datasets', 'transfer-learning']"," Title: How can I train a neural network for image classification when the dataset is small?Body: I need to train a convolutional neural network to classify snake images. The problem is that I have only a small number of images available for some snake types.
+So, what is the best approach to train a neural network for image classification using a small data set?
+"
+['neural-networks']," Title: Using a neural network for learning a Motion Graph?Body: In a recent paper about progress in computer animation a so called motion graph is used to describe the transition between keyframes of facial animation. Easy Generation of Facial Animation Using Motion Graphs, 2018 As far as i understand from the paper, they used a motion capture device to record faces of real people and extract keyframes. Then a transition matrix was created to ensure that a walk from keyframe #10 to #24 is possible but a transition from keyframe #22 to #99 is forbidden.
+
+The idea itself sounds reasonably good, because now a solver can search in the motion graph to bring the system from a laughing face to a bored face without interruption or unnatural in-between keyframes. But wouldn't it be great if the transition matrix could be stored inside a neural network? As far as I understand the backpropagation algorithm, a neural network can learn input-output relations. So the neural network would have to learn the transition probability between two keyframes. And a second neural network could then produce the motion plan, which would also be trained on a large corpus. Is that idea possible, or is it the wrong direction?
+"
+"['philosophy', 'chat-bots', 'natural-language-understanding', 'philosophy-of-mind']"," Title: Can we make a chatbot that really ""understands"" the questions?Body: Can we make a chatbot that really "understands" (rather than just replies to) questions based on the database/options of replies that it has? I mean, can it come up with correct/non-stupid replies/communications that don't exist in its database?
+For example, can we make it understand the words "but", "if", and so on? So, whenever it gets a question/order, it "understands" it based on "understanding". Like the movie Her, if you have watched it.
+And all of this without using too much code, just the basics to "wake it up" and let it learn from YouTube videos and Reddit comments and other similar data sources.
+"
+"['reinforcement-learning', 'terminology', 'actor-critic-methods', 'comparison', 'advantage-actor-critic']"," Title: What is the difference between actor-critic and advantage actor-critic?Body: I'm struggling to understand the difference between actor-critic and advantage actor-critic.
+
+At least, I know they are different from asynchronous advantage actor-critic (A3C), as A3C adds an asynchronous mechanism that uses multiple worker agents interacting with their own copy of the environment and reports the gradient to the global agent.
+
+But what is the difference between the actor-critic and advantage actor-critic (A2C)? Is it simply with or without advantage function? But, then, does the actor-critic have any other implementation except for the use of advantage function?
+
+Or maybe are they synonyms and actor-critic is just a shorthand for A2C?
+"
+['applications']," Title: What are some examples of everyday systems that use AI?Body: I would like to know some daily basis applications of AI. I think these might be relevant examples:
+
+
+- Google search engine
+- Face recognition on iPhone
+
+
+Are my examples correct? Could you provide some more examples?
+"
+"['neural-networks', 'machine-learning', 'definitions', 'support-vector-machine']"," Title: What is a support vector machine?Body: What is a support vector machine (SVM)? Is an SVM a kind of a neural network, meaning it has nodes and weights, etc.? What is it best used for?
+
+Where can I find information about these?
+"
+['reinforcement-learning']," Title: Does epsilon-greedy approach always choose the ""best action"" (100% of the time) when it does not take the random path?Body: I'm now reading the following blog post but on the epsilon-greedy approach, the author implied that the epsilon-greedy approach takes the action randomly with the probability epsilon, and take the best action 100% of the time with probability 1 - epsilon.
+
+So for example, suppose that the epsilon = 0.6 with 4 actions. In this case, the author seemed to say that each action is taken with the following probability (suppose that the first action has the best value):
+
+
+- action 1: 55% (.40 + .60 / 4)
+- action 2: 15%
+- action 3: 15%
+- action 4: 15%
+
+
+However, I feel like I learned that the epsilon-greedy only takes the action randomly with the probability of epsilon, and otherwise it is up to the policy function that decides to take the action. And the policy function returns the probability distribution of actions, not the identifier of the action with the best value. So for example, suppose that the epsilon = 0.6 and each action has 50%, 10%, 25%, and 15%. In this case, the probability of taking each action should be the following:
+
+
+- action 1: 35% (.40 * .50 + .60 / 4)
+- action 2: 19% (.40 * .10 + .60 / 4)
+- action 3: 25% (.40 * .25 + .60 / 4)
+- action 4: 21% (.40 * .15 + .60 / 4)
+
+
+Is my understanding not correct here? Does the non-random part of the epsilon (1 - epsilon) always take the best action, or does it select the action according to the probability distribution?
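+To show precisely what I mean, here is how I am computing the two interpretations (a small sketch):
+
+epsilon = 0.6
+policy_probs = [0.50, 0.10, 0.25, 0.15]            # what my policy outputs
+best = policy_probs.index(max(policy_probs))
+
+# Interpretation 1: the greedy part always picks the single best action.
+interp1 = [(1 - epsilon) * (1.0 if a == best else 0.0) + epsilon / 4 for a in range(4)]
+
+# Interpretation 2: the greedy part samples from the policy distribution.
+interp2 = [(1 - epsilon) * policy_probs[a] + epsilon / 4 for a in range(4)]
+
+print(interp1)   # approximately [0.55, 0.15, 0.15, 0.15]
+print(interp2)   # approximately [0.35, 0.19, 0.25, 0.21]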
+"
+"['neural-networks', 'classification']"," Title: What kind of problem is ""email text extraction""?Body: I need to retrieve just the text from emails. The emails can be in HTML format, and can contain huge signatures, disclaimer legalese, and broken HTML from dozens of forwards and replies. But, I only want the actual email message and not any other cruft such as the whole quotation block, signatures, etc.
+
+This isn't really a problem that could be solved with regex because HTML mail can get very, VERY messy.
+
+Could a neural network perform this task? What kind of problem is this? Classification? Feature selection?
+"
+"['neural-networks', 'reinforcement-learning', 'reference-request', 'actor-critic-methods']"," Title: What are the available exploration strategies for continuous action space scenarios in RL?Body: I'm building a deep neural network to serve as the policy estimator in an actor-critic reinforcement learning algorithm for a continuing (not episodic) case. I'm trying to determine how to explore the action space. I have read through this text book by Sutton, and, in section 13.7, he gives one way to explore a continuous action space. In essence, you train the policy model to give a mean and standard deviation as an output, so you can sample a value from that Gaussian distribution to pick an action. This just seems like the continuous action-space equivalent of an $\epsilon$-greedy policy.
+Are there other continuous action space exploration strategies I should consider?
+I've been doing some research online and found some articles related to RL in robotics and found that the PoWER and PI^2 algorithms do something similar to what is in the textbook.
+Are these, or other, algorithms "better" (obviously depends on the problem being solved) alternatives to what is listed in the textbook for continuous action-space problems?
+I know that this question could have many answers, but I'm just looking for a reasonably short list of options that people have used in real applications that work.
+"
+"['reinforcement-learning', 'sutton-barto', 'continuous-tasks', 'episodic-tasks']"," Title: How can the Cart Pole problem be a continuing task?Body: In Introduction to Reinforcement Learning (2nd edition) by Sutton and Barto, there is an example of the Pole-Balancing problem (Example 3.4).
+In this example, they write that this problem can be treated as an episodic task or continuing task.
+I think that it can only be treated as an episodic task because it has an end of play, which is the falling of the pole.
+I have no idea how this can be treated as a continuing task. Even in the OpenAI Gym cartpole env, there is only the episodic mode.
+"
+"['reinforcement-learning', 'search', 'convergence']"," Title: Which is more important, doubt or reinforcement?Body: Reinforcement?
+We hear much about reinforcement, which is, in my opinion a poor choice of a term to describe a type of artificial network that continues to acquire or improve its behavioral information in natura (during operations in the field). Reinforcement in learning theory is a term used to describe repetitious incentivization to increase the durability of learned material. In machine learning, the term has been twisted to denote the application of feedback in operations, a form of re-entrant back propagation.
+Corrective Signaling
+Qualitatively, corrective signaling in field operations can supply information to a network to make only two types of functional adjustments.
+
+- Adjustments to what is considered the optimum, beginning with the optimum found during training prior to deployment
+- Testing of entirely new areas of the parameter space for hint of new optima that have formed, any of which might currently qualify or soon qualify as the global optimum.
+
+(By optima and optimum, we mean minima and global minimum in the surface that describes the disparity between ideal system behavior and current system behavior. This surface is sometimes termed the error surface, applying an over-simplifying analogy from the mathematical discipline of curve fitting.)
+The Importance of Doubt
+The second of the two above could aptly be termed doubt.
+Perhaps all neural nets should have one or more parallel doubting networks that can test remote areas of the search space for more promising optima. In a parallel computing environment, this might be a matter of provisioning and not significantly reduce the throughput of the primary network, yet provide a layer of reliability not found without the doubtful parallel networks.
+What Shows More Intelligence?
+Which is more important in actual field use of AI? The ability to reinforce what is already learned, or the ability to form a minority opinion, doubt the status quo, and determine whether it is a more appropriate behavioral alternative than what was reinforced?
+A Helpful Pool of Water Analogy
+During a short period of time, a point on the surface of the water may be the lowest point in a pool. With adjustments based on gradient (what is so inappropriately called reinforcement), the local well can be tracked so the low point can be maintained without any discrete jumps to other minima in the surface. However, the local well may cease to be the global minimum at some point in time, at which point a new search for the global minimum must ensue.
+It may be that the new global minimum is across several features on the surface of the pool and cannot be found with gradient descent.
+More interestingly, the appearance of new global minima can be tracked and reasonable projections can be made such that discrete and substantial jumps in parametric state can be accomplished without large jumps in disparity (where the system misbehaves badly for a period).
+Circling Back to the Question
+
+Which is more important, doubt or reinforcement?
+
+"
+"['neural-networks', 'math', 'automated-theorem-proving']"," Title: Can neural networks be used to prove conjectures?Body: Imagine I have a list (in a computer-readable form) of all problems (or statements) and proofs that math relies on.
+
+Could I train a neural network in such a way that, for example, I enter a problem and it generates a proof for it?
+
+Of course, those proofs would then need to be checked manually, but maybe the network could then create proofs for as-yet-unsolved problems from combinations of older proofs.
+
+Is that possible?
+
+Would it be possible, for example, to solve the Collatz conjecture or the Riemann conjecture with this type of network? Or, if not solve them, maybe rearrange patterns in such a way that mathematicians could use a new ""proof method"" to produce a real proof?
+"
+"['algorithm', 'classification', 'learning-algorithms', 'categorical-data']"," Title: What's an appropriate algorithm for classification with categorical features?Body: My input data consists of a series of 8 integers. Each integer is a discrete token, rather than a relative numeric value (i.e. '1' and '2' are as distinct as are '1' and '100'). The output is a single binary value indicating success or fail. For example:
+
+fail,12,35,60,82,98,111,142,161
+success,23,46,59,87,102,121,145,161
+fail,13,35,65,83,100,102,122,161
+
+
+I have say 500,000 of these entries.
+
+Success or failure is determined by the combination of the eight tokens that make up the input. I am certain that no single token will dictate success or failure, but there may be particular tokens or combinations of tokens which are significant in determining success or failure; I don't know, but I would like to.
+
+My question is, what kind of machine learning algorithm should I implement to answer the question of which tokens and combinations of tokens are most likely to lead to success?
+
+In case it's relevant or useful, a few more notes on the input data:
+
+There is a limited range of tokens (and thus integers) in each slot. So with this data input:
+
+success,A,B,C,D,E,F,G,H
+
+
+A is always, say, one of 1, 2, 3, 4 or 5. B is always one of 6, 7 or 8. C is always one of 9, 10, 11 or 12. So, in the general case, possible values for A are never possible values for the other slots, and there are between 2 and 12 values for each slot. No idea if that makes a difference to the answer, but I wanted to include it for completeness.
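+
+In case it helps, this is how I imagine the data would be encoded if an algorithm needs numeric features (my own sketch; whether this is the right representation is part of what I am asking):
+
+import numpy as np
+
+rows = [[12, 35, 60, 82, 98, 111, 142, 161],
+        [23, 46, 59, 87, 102, 121, 145, 161]]
+labels = [0, 1]                      # 0 = fail, 1 = success
+
+vocab = sorted({token for row in rows for token in row})
+index = {token: i for i, token in enumerate(vocab)}
+
+# multi-hot encoding: one column per distinct token, since tokens are categorical
+X = np.zeros((len(rows), len(vocab)))
+for r, row in enumerate(rows):
+    for token in row:
+        X[r, index[token]] = 1.0
+y = np.array(labels)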
+"
+"['neural-networks', 'neurons', 'brain', 'human-inspired', 'self-awareness']"," Title: Are AI algorithms capable of self-repair?Body: Do AI algorithms exist which are capable of healing themselves or regenerating a hurt area when they detect so?
+For example: In humans if a certain part of brain gets hurt or removed, neighbouring parts take up the job. This happens probably because we are biologically unable to grow nerve cells. Whereas some other body parts (liver, skin) will regenerate most kinds of damage.
+Now my question is does AI algorithms exist which take care of this i.e. regenerating a damaged area? From my understanding this can be achieved in a NN using dropout (probably). Is it correct? Do additional algorithms (for both AI/NN) or measures exist to make sure healing happens if there is some damage to the algorithm itself?
+This can be particularly useful in cases where say there is a burnout in a processor cell processing some information about the environment. The other processing nodes have to take care to compensate or fully take-over the functions of the damaged cell.
+(Intuitively, this can mean 2 things:
+
+- We were not using the system of processors to its full capability.
+- The performance of the system will take a hit due to other nodes taking over functionality of the damaged node)
+
+Does this happen in the case of brain damage also? Or are my inferences wrong? (Kindly shed some light.)
+NOTE: I am not looking for hardware compensation like re-routing; I am asking about non-tangible healing, i.e. adjusting the behavior or some parameters of the algorithm.
+"
+"['reference-request', 'papers', 'agi', 'state-of-the-art', 'books']"," Title: What are some books or state of the art papers about the development of a strong-AI?Body: I am looking for books or to state of the art papers about current the development trends for a strong-AI.
+
+Please, do not include opinions about the books, just refer to the book with a brief description. To emphasize, I am not looking for books on applied AI (e.g. neural networks or the book by Norvig). Furthermore, do not consider AGI proceedings, which contain papers that focus on very concrete aspects. The related Wikipedia article describes some active lines of investigation into AGI (cognitive, neuroscience, etc.) but cannot be considered an educational/introductory resource. Finally, I am not interested in philosophical questions related to AI safety, risks or morality unless they are related to its development. Development doesn't exclude the mathematical foundations behind it.
+
+For example, if I look at this list ""https://bigthink.com/mike-colagrossi/the-10-best-books-on-ai"", the final candidate list becomes empty.
+"
+"['neural-networks', 'machine-learning', 'proofs']"," Title: How can a neural network approximate all functions when the weights are not allowed to grow exponentially?Body: It has been proven in the paper ""Approximation by Superpositions of a Sigmoidal Function"" (by Cybenko, in 1989) that neural networks are universal function approximators. I have a related question.
+
+Assume the neural network's input and output vectors are of the same dimension $n$. Consider the set of binary-valued functions from $\{ 0,1 \}^n$ to $\{ 0,1 \}^n$. There are $(2^n)^{(2^n)}$ such functions. The number of parameters in a (deep) neural network is much smaller than the above number. Assume the network has $L$ layers, each layer is $n \times n$ fully-connected, then the total number of weights is $L \cdot n^2$.
+
+If the number of weights is not allowed to grow exponentially with $n$, can a deep neural network approximate all the binary-valued functions of size $n$?
+
+Cybenko's proof seems to be based on the denseness of the space of neural network functions. But this denseness does not seem to guarantee that a suitable neural network function exists when the number of weights is polynomially bounded.
+
+I have a theory. If we replace the activation function of an ANN with a polynomial, say a cubic one, then after $L$ layers the composite polynomial function would have degree $3^L$. In other words, the degree of the total network grows exponentially. In other words, its ""complexity"", measured by the number of zero-crossings, grows exponentially. This seems to remain true if the activation function is sigmoid, but it involves the calculation of the ""topological degree"" (a.k.a. mapping degree theory), which I have not had the time to do yet.
+
+According to my above theory, the VC dimension (roughly analogous to the zero-crossings) grows exponentially as we add layers to the ANN, but it cannot catch up with the doubly exponential growth of Boolean functions. So the ANN can only represent a fraction of all possible Boolean functions, and this fraction even diminishes exponentially. That's my current conjecture.
+"
+"['comparison', 'terminology', 'definitions']"," Title: What is the difference between artificial intelligence and computational intelligence?Body: Having analyzed and reviewed a certain amount of articles and questions, apparently, the expression computational intelligence (CI) is not used consistently and it is still unclear the relationship between CI and artificial intelligence (AI).
+
+According to IEEE computational intelligence society
+
+
+ The Field of Interest of the Computational Intelligence Society (CIS) shall be the theory, design, application, and development of biologically and linguistically motivated computational paradigms emphasizing neural networks, connectionist systems, genetic algorithms, evolutionary programming, fuzzy systems, and hybrid intelligent systems in which these paradigms are contained.
+
+
+which suggests that CI could be a sub-field of AI or an umbrella term used to group certain AI sub-fields or topics, such as genetic algorithms or fuzzy systems.
+
+What is the difference between artificial intelligence and computational intelligence? Is CI just a synonym for AI?
+"
+"['natural-language-processing', 'ai-design', 'chat-bots']"," Title: How can I create a chatbot application where the user can create its own bot?Body: I am trying to create a chatbot application where the user can create its own bot, like Botengine. After going through google, I saw I need some NLP API to process the user's query. As per wit.ai basic example, I can set and get data. How I am going to create a bot engine?
+
+So, as far as I understand the flow, here is an example for pizza delivery:
+
+
+- The user will enter a welcome message, i.e - Hi or Hello
+- The welcome reply will be saved by bot owner in my database
+- The user will enter some query, then I will hit wit.ai API to process that query. Example: The user's query is ""What kind of pizza's available in your store"" and wit.ai will respond with the details of intent ""pizza_type""
+- Then I will search for the intent returned by wit in my database.
+
+
+So, is that the right flow to create a chatbot? Am I in the right direction? Could anyone give me some link or some example I can go through? I want to create this application using Node.js. I have also found some examples in node-wit, but I can't figure out how to implement this.
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'backpropagation', 'capsule-neural-network']"," Title: Why coupling coefficients in capsule neural networks can't be learned by back-propagation?Body: The paper Dynamic Routing Between Capsules uses the algorithm called ""Dynamic Routing Between Capsules"" to determine the coupling coefficients between capsules.
+
+Why can't this be done by backpropagation?
+"
+"['reinforcement-learning', 'q-learning', 'papers', 'value-iteration']"," Title: How is the fitted Q-iteration algorithm related to $Q^*(s, a)$, and how can we use function approximation with this algorithm?Body: I hope to get some clarifications on Fitted Q-Iteration (FQI).
+My Research So Far
+I've read Sutton's book (specifically, ch 6 to 10), Ernst et al and this paper.
+I know that $Q^*(s, a)$ expresses the expected value of first taking action $a$ from state $s$ and then following optimal policy forever.
+I tried my best to understand function approximation in large state spaces and TD($n$).
+My Questions
+
+- Concept - Can someone explain the intuition behind how iteratively extending N from 1 until stopping condition achieves optimality (Section 3.5 of Ernst et al.)? I have difficulty wrapping my mind around how this ties in with the basic definition of $Q^*(s, a)$ that I stated above.
+
+- Implementation - Ernst et al. gives the pseudo-code for the tabular form. But if I try to implement the function approximation form, is this correct:
+
+
+Repeat until stopping conditions are reached:
+ - N ← N + 1
+ - Build the training set TS based on the function Q^{N-1} and on the full set of four-tuples F
+ - Train the algorithm on the TS
+ - Use the trained model to predict on the TS itself
+ - Create the TS for the next N by updating the labels: new reward plus (gamma * predicted values)
+
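+To make the above concrete, here is a rough sketch of the loop I have in mind (assuming discrete actions and a generic regressor with fit/predict methods, e.g. scikit-learn's ExtraTreesRegressor; all names here are my own):
+
+import numpy as np
+
+def fitted_q_iteration(transitions, actions, regressor, gamma=0.99, n_iterations=50):
+    # transitions: list of four-tuples (state, action, reward, next_state)
+    S = np.array([t[0] for t in transitions]).reshape(len(transitions), -1)
+    A = np.array([t[1] for t in transitions]).reshape(-1, 1)
+    R = np.array([t[2] for t in transitions])
+    S_next = np.array([t[3] for t in transitions]).reshape(len(transitions), -1)
+
+    X = np.hstack([S, A])   # the regressor learns Q(s, a) from (state, action) inputs
+    y = R.copy()            # first pass: the Q^1 target is just the observed reward
+
+    for _ in range(n_iterations):
+        regressor.fit(X, y)
+        # evaluate Q^{N-1}(s', a') for every candidate action a'
+        q_next = np.column_stack([
+            regressor.predict(np.hstack([S_next, np.full((len(S_next), 1), a)]))
+            for a in actions
+        ])
+        y = R + gamma * q_next.max(axis=1)   # labels for the next pass
+    return regressor
+
+(Here the stopping condition is simply a fixed number of iterations, for simplicity.)
+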
+I am just starting to learn RL as part of my course. Thus, there are many gaps in my understanding. Hope to get some kind guidance.
+"
+"['neural-networks', 'deep-learning', 'reinforcement-learning', 'learning-rate', 'batch-size']"," Title: Is there a way to translate the concept of batch size into reinforcement learning?Body: I am using a neural network as my function approximator for reinforcement learning. In order to get it to train well, I need to choose a good learning rate. Hand-picking one is difficult, so I read up on methods of programmatically choosing a learning rate. I came across this blog post, Finding Good Learning Rate and The One Cycle Policy, about finding cyclical learning rate and finding good bounds for learning rates.
+All the articles about this method talk about measuring loss across batches in the data. However, as I understand it, in Reinforcement Learning tasks do not really have any "batches", they just have episodes that can be generated by an environment as many times as one wants, which also gives rewards that are then used to optimize the network.
+Is there a way to translate the concept of batch size into reinforcement learning, or a way to use this method of cyclical learning rates with reinforcement learning?
+"
+"['machine-learning', 'convolutional-neural-networks', 'computer-vision']"," Title: What is the best approach for writing a program to identify objects in a picture then crop them a specific way?Body: My works quality control department is responsible for taking pictures of our products at various phases through our QC process and currently the process goes:
+
+
+- Take picture of product
+- Crop the picture down to only the product
+- Name the cropped picture to whatever the part is and some other relevant data
+
+
+Depending on the type of product the pictures will be cropped a certain way. So my initial thought would be to use a reference to an object identifier and then once the object is identified it will use a cropping method specific to that product. There will also be QR codes within the pictures being taken for naming via OCR in the future so I can probably identify the parts that way if this proves slow or problematic.
+
+The part I am unsure about is how to get the program to know how to crop based on the part. For example, I would like to present the program with a couple of before-crop and after-crop photos of product X, then derive a specific cropping formula for product X based on those two inputs.
+
+Also, if it makes any difference, my code is in C#.
+"
+"['machine-learning', 'reinforcement-learning', 'optimization', 'math']"," Title: Optimization step in Apprenticeship Learning via Inverse Reinforcement LearningBody: Why the optimization step of the algorithm a quadratic program? [See: Apprenticeship Learning via Inverse Reinforcement Learning; page 3]
+
+Isn't the objective function linear? Why don't we treat the problem as LPQC (linear program with quadratic constraints)?
+"
+"['machine-learning', 'training', 'optimization']"," Title: How do I compute log-likelihood for training set in supervised learning?Body: I am building a supervised learning model and I wish to compute the log-likelihood for the training set at the point of the minimum validation error.
+
+Initially, I was computing the sum, over all examples in the training set, of the maximum probability obtained after applying softmax, evaluated at the point of minimum validation error, but that doesn't look correct.
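+
+To make concrete what I mean, here is roughly the quantity I have been computing, next to what I suspect the log-likelihood should actually be (toy numbers; the names are mine):
+
+import numpy as np
+
+# toy softmax outputs (rows sum to 1) and the true integer class labels
+probs = np.array([[0.7, 0.2, 0.1],
+                  [0.1, 0.8, 0.1]])
+labels = np.array([0, 2])
+
+# what I have been summing so far: the largest probability per example
+sum_of_max = np.sum(np.max(probs, axis=1))
+
+# what I suspect is meant by log-likelihood: log-probability assigned to the true class
+log_likelihood = np.sum(np.log(probs[np.arange(len(labels)), labels]))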
+
+What is the correct formula for the log-likelihood?
+"
+"['neural-networks', 'machine-learning', 'overfitting', 'meta-learning', 'data-augmentation']"," Title: How does rotating an image and adding new 'rotated classes' prevent overfitting?Body: From Meta-Learning with Memory-Augmented Neural Networks in section 4.1:
+
+To reduce the risk of overfitting, we performed data augmentation by randomly translating and rotating character images. We also created new classes through 90◦, 180◦ and 270◦ rotations of existing data.
+
+I can maybe see how rotations could reduce overfitting by allowing the model to generalize better. But if augmenting the training images through rotations prevents overfitting, then what is the purpose of adding new classes to match those rotations? Wouldn't that cancel out the augmentation?
+"
+"['long-short-term-memory', 'datasets', 'time-series', 'features', 'labels']"," Title: When working with time-series data, is it wrong to use different time-steps for the features and target?Body: When working with time-series data, is it wrong to use daily prices as features and the price after 3 days as the target?
+Or should I use the next-day price as a target, and, after training, predict 3 times, each time for one more day ahead (using the predicted value as a new feature)?
+Will these 2 approaches give similar results?
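+
+For concreteness, this is roughly how I would build the two kinds of training pairs (a toy sketch; the window size and names are my choices):
+
+import numpy as np
+
+prices = np.arange(100.0, 130.0)   # toy daily prices
+window = 5                         # days of history used as features
+
+# approach 1: the target is the price 3 days after the last feature day
+X1 = np.array([prices[i:i + window] for i in range(len(prices) - window - 2)])
+y1 = np.array([prices[i + window + 2] for i in range(len(prices) - window - 2)])
+
+# approach 2: the target is the next day's price; at prediction time I would
+# roll the model forward 3 times, feeding each prediction back in as a feature
+X2 = np.array([prices[i:i + window] for i in range(len(prices) - window)])
+y2 = np.array([prices[i + window] for i in range(len(prices) - window)])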
+"
+"['machine-learning', 'terminology', 'applications', 'topology']"," Title: In what ways is the term ""topology"" applied to Artificial Intelligence?Body: I have only a general understanding of General Topology, and want to understand the scope of the term "topology" in relation to the field of Artificial Intelligence.
+In what ways are topological structure and analysis applied in Artificial Intelligence?
+"
+"['gradient-descent', 'stochastic-gradient-descent', 'mini-batch-gradient-descent']"," Title: What's the rationale behind mini-batch gradient descent?Body: I am reading a book that states
+
+As the mini-batch size increases, the gradient computed is closer to the 'true' gradient
+
+So, I assume that they are saying that mini-batch training only focuses on decreasing the cost function in a certain 'plane', sacrificing accuracy for speed. Is that correct?
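+
+As a sanity check on the quoted claim, I put together this small toy experiment (all choices here are mine), measuring how far a mini-batch gradient is from the full-batch gradient for a linear model:
+
+import numpy as np
+
+rng = np.random.default_rng(0)
+X = rng.normal(size=(10000, 5))
+w_true = rng.normal(size=5)
+y = X @ w_true + rng.normal(scale=0.1, size=10000)
+
+w = np.zeros(5)                                   # current parameters
+full_grad = 2 * X.T @ (X @ w - y) / len(X)        # the 'true' gradient over all data
+
+for batch_size in (8, 64, 512):
+    errs = []
+    for _ in range(200):
+        idx = rng.choice(len(X), size=batch_size, replace=False)
+        Xb, yb = X[idx], y[idx]
+        mini_grad = 2 * Xb.T @ (Xb @ w - yb) / batch_size
+        errs.append(np.linalg.norm(mini_grad - full_grad))
+    print(batch_size, np.mean(errs))              # the gap shrinks as the batch grows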
+"
+"['datasets', 'resource-request']"," Title: Is there a database somewhere of common lists?Body: I'm looking for a database or some machine readable document that contains common ordered lists or common short sets. e.g:
+
+{January, February, March,...}
+{Monday, Tuesday, ....}
+{Red, Orange, Yellow,...}
+{1,2,3,4,...}
+{one, two, three, four,...}
+{Mercury, Venus, Earth, Mars,...}
+{I, II, III, IV, V, VI,...}
+{Aquarius, Pisces, Aries,...}
+{ein, zwei, drei, ...}
+{Happy, Sneezy, Dopey, ...}
+{Dasher, Dancer, Prancer, Vixen ,...}
+{John, Paul, George, Ringo}
+{20, 1, 18, 4, 13, 6, ...}
+{Alabama, Alaska, Arizona, Arkansas, California,...}
+{Washington, Adams, Jefferson, ...}
+{A,B,C,D,E,F,G,...}
+{A,E,I,O,U}
+{2,3,5,7,11,13,17,...}
+{triangle, square, pentagon, hexagon,...}
+{first, second, third, fourth, fifth,...}
+{tetrahedron, cube, octohedron, icosohedron, dodecahedron}
+{autumn, winter, spring, summer}
+{to, be, or, not, to, be, that, is, the, question}
+...
+
+
+One use is for creating an AI that can solve codes or predict the next thing in a sequence.
+"
+"['philosophy', 'ai-safety']"," Title: Is it necessarry to create theory/infrastructure to prevent people from creating AI incompatible with a safe model (if we create one)?Body: Nowadays we don't know how to create AI in a safe way (I think that we don't even know yet how to define a safe AI), but there is a lot of research in developing a model allowing it.
+
+Let's say that, someday, we discover such a model (maybe it would even be possible to mathematically prove its safety). Is it rational to ask how we prevent people from creating AI outside of this model (e.g. they are so confident in their own model that they just pursue it and end up with something like the paperclip scenario)?
+
+Should we also think about creating some theory/infrastructure preventing such a scenario?
+"
+"['deep-learning', 'convolutional-neural-networks', 'action-recognition']"," Title: How should continuous action/gesture recognition be performed differently than isolated action recognitionBody: I am going to train a deep learning model to classify hand gestures in video. Since the person will be taking up nearly the entire width/height of the video and I will be classifying what hand gesture he or she is doing, I don't need to identify the person and create a bounding box around the person doing the action. I only need to classify video sequences to their class labels.
+
+I will be training on a dataset of individual videos, in which each entire video clip is a particular gesture (so it's a dataset like UCF-101, with video clips corresponding to class labels). But when I deploy the network, I want it to run on live video. That is, as the live video plays, it should recognize when a gesture has occurred and indicate that it recognized the gesture.
+
+So I was wondering - How can I train the neural network on isolated video sequences in which the entire video clip is the action (like explained above), but run the neural network on live video? For instance, can I use a 3D CNN? Or must I use a 2D CNN with an LSTM network instead, for it to work on live video? My concern is that since a 3D CNN performs the filters across many frames, wouldn't running the CNN on every frame make it very slow? But if I use a 2D CNN with LSTM, will that make it faster? Or will both work fine?
+
+Thank you for your help in advance.
+"
+"['datasets', 'game-ai', 'regression', 'algorithm-request']"," Title: Which algorithm can I use to minimise the number of wins of 2 weapons that fight each other in a game?Body: I have a game that involves 2 weapons, which fight against each other. Each weapon has 5 features/statistics, which have certain range. I can simulate the game $N$ times with randomly initialised values for these statitics, in order to collect a dataset. I can also count the number of times a weapon wins, loses or draws against the other.
+I'm looking for an algorithm that minimises the number of wins of the 2 weapons (maybe by changing these features), so that they are balanced.
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'recurrent-neural-networks']"," Title: How recurrent neural network work when predict many days?Body: I use recurrent neural network, RNNs have to get input one value per step and it will show one value output. If I have daily sale demand time series data.
+
+I want to predict sales demand for three days. Should the RNN output one day at a time, three times in a row, or can it output the demand for all three days in a single prediction?
+"
+"['neural-networks', 'reference-request', 'backpropagation', 'implementation', 'resource-request']"," Title: How can I implement back-propagation for medium-sized neural networks?Body: I've been wanting to make my own Neural Network in Python, in order to better understand how it works. I've been following this series of videos as a sort of guide, but it seems the backpropagation will get much more difficult when you use a larger network, which I plan to do. He doesn't really explain how to scale it to larger ones.
+Currently, my network feeds forward, but I don't have much of an idea of where to start with backpropagation. My code is posted below, to show you where I'm currently at (I'm not asking for coding help, just for some pointers to good sources, and I figure knowing where I'm currently at might help):
+import numpy
+
+class NN:
+    def __init__(self, input_length):
+        self.layers = []
+        self.input_length = input_length
+        self.prediction = []
+
+    def addLayer(self, layer):
+        self.layers.append(layer)
+        if len(self.layers) > 1:
+            # weights connect the new layer to the previous layer's neurons
+            layer.setWeights(len(self.layers[-2].neurons))
+        else:
+            # the first layer is connected directly to the network inputs
+            layer.setWeights(self.input_length)
+
+    def feedForward(self, inputs):
+        _inputs = inputs
+        for layer in self.layers:
+            layer.process(_inputs)
+            _inputs = layer.output
+        self.prediction = _inputs
+
+    def calculateErr(self, target):
+        # squared error per output neuron
+        return [(p - t) ** 2 for p, t in zip(self.prediction, target)]
+
+
+class Layer:
+    def __init__(self, length, function):
+        # per-instance state (class-level lists would be shared between layers)
+        self.neurons = [Neuron(function) for _ in range(length)]
+        self.biases = [numpy.random.randn() for _ in range(length)]
+        self.weights = []
+        self.output = []
+
+    def setWeights(self, inlength):
+        # one weight vector per neuron, with one weight per incoming input
+        self.weights = [[numpy.random.randn() for _ in range(inlength)]
+                        for _ in range(len(self.neurons))]
+
+    def process(self, inputs):
+        self.output = []
+        for i, neuron in enumerate(self.neurons):
+            self.output.append(neuron.run(inputs, self.weights[i], self.biases[i]))
+
+
+class Neuron:
+    def __init__(self, function):
+        self.function = function
+        self.output = 0
+
+    def run(self, inputs, weights, bias):
+        self.output = self.function(inputs, weights, bias)
+        return self.output
+
+
+def sigmoid(n):
+    return 1 / (1 + numpy.exp(-n))
+
+
+def inputlayer_func(inputs, weights, bias):
+    # identity function, for a (currently unused) input layer
+    return inputs
+
+
+def l2_func(inputs, weights, bias):
+    # weighted sum of the inputs plus a bias, squashed by the sigmoid
+    out = 0
+    for i in range(len(inputs)):
+        out += weights[i] * inputs[i]
+    out += bias
+    return sigmoid(out)
+
+
+NNet = NN(2)
+l2 = Layer(1, l2_func)
+NNet.addLayer(l2)
+NNet.feedForward([2.0, 1.0])
+print(NNet.prediction)
+
+So, is there any resource that explains how to implement the back-propagation algorithm step-by-step?
+"
+"['reinforcement-learning', 'combinatorics']"," Title: How to generalise over multiple simultaneous dependent actions in Reinforcement LearningBody: I am trying to build an RL agent to price paid-for-seating on commercial flights. I should reiterate here - I am not talking about the price of the ticket - rather, I am talking about the pricing you see if you click on the seat map to choose where on the plane you sit (exits rows, window seats, etc). The general set up is:
+
+
+- After choosing their flights (for a booking of n people), a customer will view a web page with the available seat types and their prices visible.
+- They select between zero and n seats from a seat map with a variety of different prices for different seats, to be added to their booking.
+- The revenue from step 2 is observed as the reward.
+
+
+Each 'episode' is the selling cycle of one flight. Whether the customer buys a chosen seat or not, the inventory goes down as they still have a ticket for the flight so will get a seat at departure. I would like to change prices on the fly, rather than fix a set of optimal prices throughout the selling cycle.
+
+I have not decided on a general architecture yet. I want to take various booking, flight, and inventory information into account, so I know I will be using function approximation (most likely a neural net) to generalise over the state space.
+
+However, I am less clear on how to set up my action space. I imagine an action would amount to a vector with a price for each different seat type (window seat, exit row, etc). If I have, for example, 8 different seat types, and 10 different price points for each, this gives me a total of 10^8 different actions, many of which will be very similar. In a sense, each action is comprised of a combination of sub-actions - the action of pricing each seat type.
+
+Additionally, each sub-action (pricing one seat type) is somewhat dependent on the others, in the sense that the price of one seat type will likely affect the demand (and hence reward contribution) for another. For example, if you set window seats to a very cheap price, people will be less likely to spend a normal amount for the other seat types. Hence, I doubt the problem can be decomposed into a set of sub-problems.
+
+I'm interested in whether there has been any research into dealing with a problem like this. Clearly, any agent I build needs some way to generalise across actions to some degree, since collecting real data on millions of actions is not possible, even just for one state.
+
+As I see it, this comes down to three questions:
+
+
+- Is it possible to get an agent that can deal with a set of actions (prices) as a single decision?
+- Is it possible to get this agent to understand actions in relative terms? Say for example, one set of potential prices is [10, 12, 20], for middle seats, aisle seats, and window seats. Can I get my agent to realise that there is a natural ordering there, and that the first two pricing actions are more similar to each other than to the third possible action?
+- Further to this, is it possible to generalise from this set of actions - could an agent be set up to understand that the set of prices [10, 13, 20] is very similar to the first set?
+
+
+I haven't been able to find any literature on this, especially relating to the second question - any help would be much appreciated!
+"
+"['machine-learning', 'research', 'academia', 'benchmarks']"," Title: How can AI researchers avoid ""overfitting"" to commonly-used benchmarks as a community?Body: In fields such as Machine Learning, we typically (somewhat informally) say that we are overfitting if improve our performance on a training set at the cost of reduced performance on a test set / the true population from which data is sampled.
+
+More generally, in AI research, we often end up testing performance of newly proposed algorithms / ideas on the same benchmarks over and over again. For example:
+
+
+- For over a decade, researchers kept trying thousands of ideas on the game of Go.
+- The ImageNet dataset has been used for huge amounts of different publications
+- The Arcade Learning Environment (Atari games) has been used for thousands of Reinforcement Learning papers, having become especially popular since the DQN paper in 2015.
+
+
+Of course, there are very good reasons for this phenomenon where the same benchmarks keep getting used:
+
+
+- Reduced likelihood of researchers ""creating"" a benchmark themselves for which their proposed algorithm ""happens"" to perform well
+- Easy comparison of results to other publications (previous as well as future publications) if they're all consistently evaluated in the same manner.
+
+
+However, there is also a risk that the research community as a whole is in some sense ""overfitting"" to these commonly-used benchmarks. If thousands of researchers are generating new ideas for new algorithms, and evaluate them all on these same benchmarks, and there is a large bias towards primarily submitting/accepting publications that perform well on these benchmarks, the research output that gets published does not necessarily describe the algorithms that perform well across all interesting problems in the world; there may be a bias towards the set of commonly-used benchmarks.
+
+
+
+Question: to what extent is what I described above a problem, and in what ways could it be reduced, mitigated or avoided?
+"
+"['convolutional-neural-networks', 'backpropagation']"," Title: How to train a CNNBody: When it comes to CNNs, I don't understand 2 things in the training process:
+
+
+- How do I pass the error back when there are pooling layers between the convolutional layers?
+- And if I know how it's done, can I train all the layers just like layers in normal Feed Forward Neural Nets?
+
+"
+"['neural-networks', 'machine-learning', 'reinforcement-learning']"," Title: How do you program fear into a neural network?Body: If you've been attacked by a spider once, chances are you'll never go near a spider again.
+In a neural network model, having a bad experience with a spider will slightly decrease the probability you will go near a spider depending on the learning rate. This is not good.
+How can you program fear into a neural network, such that you don't need hundreds of examples of being bitten by a spider in order to ignore the spider (and also that it doesn't just lower the probability that you will choose to go near a spider)?
+"
+"['ai-design', 'autonomous-vehicles']"," Title: Do self-driving cars resort to randomness to make decisions?Body: I recently heard someone make a statement that when you're designing a self-driving car, you're not building a car but really a computerized driver, so you're trying to model a human mind -- at least the part of the human mind that can drive.
+
+Since humans are unpredictable, or rather since their actions depend on so many factors some of which are going to remain unexplained for a long time, how would a self-driving car reflect that, if they do?
+
+A dose of unpredictability could have its uses. If, say, two self-driving cars are stuck in a right-of-way deadlock, it could be good to inject some randomness, instead of seeing the same action applied at the same time if the cars run the same system.
+
+But, on the other hand, we know that non-determinism isn't a friend of software development, especially in testing. How would engineers be able to control it and reason about it?
+"
+"['neural-networks', 'deep-learning', 'objective-functions', 'multi-label-classification']"," Title: Which other loss functions for hierarchical multi-label classification could I use?Body: I am looking to try different loss functions for a hierarchical multi-label classification problem. So far, I have been training different models or submodels like multilayer perceptron (MLP) branch inside a bigger model which deals with different levels of classification, yielding a binary vector. I have been also using Binary Cross-Entropy (BCE) and summing all the losses existing in the model before backpropagating.
+I am considering trying other losses like MultiLabelSoftMarginLoss and MultiLabelMarginLoss.
+What other loss functions are worth trying? Hamming loss perhaps or a variation? Is it better to sum all the losses and backpropagate or do multiple backpropagations?
+"
+"['machine-learning', 'philosophy', 'turing-test']"," Title: Can machine learning be used to pass the Turing test?Body: Can we say that the Turing test aims to develop machines or methods to reach human-level performance in all cognitive tasks and that machine learning is one of these methods that can pass the Turing test?
+"
+"['convolutional-neural-networks', 'computer-vision', 'tensorflow', 'object-recognition']"," Title: Training a CNN from scratch over COCO datasetBody: I am using Tensorflow Object Detection API for training a CNN from scratch on COCO dataset. I need to use this specific configuration.
+There is no pre-trained model on COCO with that configuration and this is the reason why I am training from scratch.
+
+However, after 1 week of training and evaluating each checkpoint generated by the training phase this is how my learning phase appears on Tensorboard:
+
+
+
+Thus, my questions are:
+
+
+- Does anyone know approximately how many iterations will be necessary? Right now I have done more than 500,000 iterations.
+- How can it be possible that, after 500,000 iterations, the evaluation is at 0.8%? I would have expected something like 60-70%.
+- Why is there a sudden drop after 500k iterations? I thought that the evaluation was supposed to converge to some limit (this is what SGD should do).
+- Is there any 'trick' to speed up the training phase (e.g. increasing the learning rate, etc.)?
+
+"
+"['tensorflow', 'deep-neural-networks', 'generative-adversarial-networks']"," Title: Using two generative adversarial nets to classify articles - what is a good approach?Body: I'm trying to create a deep learning network to classify news article based on the text and associated image. The idea comes from a novel use of GANs to classify based on generated data.
+
+My approach was to use Tensorflow to generate word embeddings for the article, and then transform the images into records - https://github.com/openai/improved-gan/blob/master/imagenet/convert_imagenet_to_records.py. This second component would also contain the label.
+
+
+- Is it wise to combine both modes into one neural net, or classify separately?
+
+
+I'm also trying to work out how to concatenate the two tensors in Tensorflow. Can anyone give a steer.
+"
+"['ai-design', 'game-ai', 'breadth-first-search']"," Title: How do I keep track of already visited states in breadth-first search?Body: I was trying to implement the breadth-first search (BFS) algorithm for the sliding blocks puzzle (number type). Now, the main thing I noticed is that, if you have a $4 \times 4$ board, the number of states can be as large as $16!$, so I cannot enumerate all states beforehand.
+
+How do I keep track of already-visited states? I am using a Board class; each instance contains a unique board pattern and is created by enumerating all possible moves from the current state.
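+
+For concreteness, the kind of search I currently have looks roughly like this (simplified; the Board methods are my own stand-ins), and the comment marks the part I do not know how to do efficiently:
+
+from collections import deque
+
+def bfs(start_board):
+    frontier = deque([start_board])
+    while frontier:
+        board = frontier.popleft()
+        if board.is_goal():
+            return board
+        for next_board in board.enumerate_moves():
+            # here I do not know how to check cheaply whether next_board
+            # has already been generated earlier in the search
+            frontier.append(next_board)
+    return None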
+
+I searched on the net and, apparently, implementations do not go back to the just-completed previous step. BUT a previously visited state can be reached again by another route, and then all the states that were previously visited get re-enumerated.
+
+So, how do I keep track of visited states when all the states have not already been enumerated? Comparing the current state against every previously generated state would be costly.
+"
+"['neural-networks', 'machine-learning', 'activation-functions', 'artificial-neuron']"," Title: What does it mean for a neuron in a neural network to be activated?Body: I just stumbled upon the concept of neuron coverage, which is the ratio of activated neurons and total neurons in a neural network. But what does it mean for a neuron to be ""activated""? I know what activation functions are, but what does being activated mean e.g. in the case of a ReLU or a sigmoid function?
+"
+"['philosophy', 'artificial-consciousness', 'self-awareness']"," Title: How important is consciousness for making advanced artificial intelligence?Body: How important is consciousness and self-consciousness for making advanced AIs? How far away are we from making such?
+
+When making, e.g., a neural network, there's (very probably) no consciousness within it, just mathematics behind it; but do we need AIs to become conscious in order to solve more complex tasks in the future? Furthermore, is there actually any way we can know for sure whether something is conscious, or whether it's just faking it? It's ""easy"" to make a computer program that claims it's conscious, but that doesn't mean it is (e.g. Siri).
+
+And if the AIs are only based on predefined rules without consciousness, can we even call it ""intelligence""?
+"
+"['neural-networks', 'terminology', 'multilayer-perceptrons', 'function-approximation']"," Title: Is a multilayer perceptron a recursive function?Body: I read somewhere that a multilayer perceptron is a recursive function in its forward propagation phase. I am not sure, what is the recursive part? For me, I would see an MLP as a chained function. So, it would nice anyone could relate an MLP to a recursive function.
+"
+"['reinforcement-learning', 'q-learning', 'discount-factor']"," Title: Is the discount not needed in a deterministic environment for Reinforcement Learning?Body: I'm now reading a book titled as "Deep Reinforcement Learning Hands-On" and the author said the following on the chapter about AlphaGo Zero:
+
+Self-play
+In AlphaGo Zero, the NN is used to approximate the prior probabilities of the actions and evaluate the position, which is very similar to the Actor-Critic (A2C) two-headed setup. On the input of the network, we pass the current game position (augmented with several previous positions) and return two values. The policy head returns the probability distribution over the actions and the value head estimates the game outcome as seen from the player's perspective. This value is undiscounted, as moves in Go are deterministic. Of course, if you have stochasticity in the game, like in backgammon, some discounting should be used.
+
+All the environments that I have seen so far are stochastic, and I understand that the discount factor is needed in a stochastic environment.
+I also understand that the discount factor should be used in continuing environments (with no episode end) in order to avoid an infinite sum.
+But I have never heard (at least so far, in my limited learning) that the discount factor is NOT needed in a deterministic environment. Is that correct? And if so, why is it not needed?
+"
+"['machine-learning', 'reinforcement-learning', 'game-ai', 'monte-carlo-tree-search', 'alphazero']"," Title: Does Monte Carlo tree search qualify as machine learning?Body: To the best of my understanding, the Monte Carlo tree search (MCTS) algorithm is an alternative to minimax for searching a tree of nodes. It works by choosing a move (generally, the one with the highest chance of being the best), and then performing a random playout on the move to see what the result is. This process continues for the amount of time allotted.
+
+This doesn't sound like machine learning, but rather a way to traverse a tree. However, I've heard that AlphaZero uses MCTS, so I'm confused. If AlphaZero uses MCTS, then why does AlphaZero learn? Or did AlphaZero do some kind of machine learning before it played any matches, and then use the intuition it gained from machine learning to know which moves to spend more time playing out with MCTS?
+"
+"['intelligence-testing', 'turing-test']"," Title: If the Turing test is passed, does this imply that computers exhibit intelligence?Body: Turing test was created to test machines exhibiting behavior equivalent or indistinguishable from that of a human. Is that the sufficient condition of intelligence?
+"
+"['optimization', 'genetic-algorithms', 'fitness-functions', 'bias-variance-tradeoff']"," Title: Can I compute the fitness of an agent based on a low number of runs of the game?Body: I'm developing an AI to play a card game with a genetic algorithm. Initially, I will evaluate it against a player that plays randomly, so there will naturally be a lot of variance in the results. I will take the mean score from X games as that agent's fitness. The actual playing of the game dominates the time to evaluate the actual genetic algorithm.
+My question is: should I go for a low X, e.g. 10, so I would be able to move through generations quite fast but the fitness function would be quite inaccurate? Alternatively, I could go for a high X e.g. 100 and would move very slowly but with a more accurate function.
+"
+"['neural-networks', 'machine-learning', 'object-recognition']"," Title: How does the target output of a Single Shot Detector (SSD) look like?Body: According to the paper SSD: Single Shot MultiBox Detector, for each cell in a feature map k boxes are acquired and for each box we get $c$ class scores and $4$ offsets relative to the original default box_shape. This means that we get $m \times n \times (c +4) \times k$ outputs for each $m \times n$ feature map.
+
+However, it is mentioned that in order to train the SSD network only the images and their ground truth boxes are needed.
+
+How exactly can one define the output targets then? What is the format of the output in the SSD framework? I think it cannot be a vector with the positions, sizes and class of each boundary box, since the outputs are a lot more and relate to every default box in the feature maps.
+
+Can anyone explain in more detail how I can, given an image and its boundary boxes' info, construct a vector that will be fed into the network so that I can train it?
+"
+"['neural-networks', 'machine-learning', 'natural-language-processing', 'computational-learning-theory']"," Title: Can we teach an artificial intelligence through sentences?Body: Could we teach an AI with sentences such as ""ants are small"" and ""the sky is blue""? Is there any research work that attempts to do this?
+"
+"['machine-learning', 'deep-learning', 'natural-language-processing']"," Title: How can I train model to extract custom entities from text?Body: I have a 100-150 words text and I want to extract particular information like location, product type, dates, specifications and price.
+
+Suppose I arrange training data which has the text as input and location/product/dates/specs/price as output values. I want to train the model for these specific outputs only.
+
+I have tried spaCy and NLTK for entity extraction, but they don't satisfy the above requirements.
+
+Sample text:
+
+
+ Supply of Steel Fabrication Items.
+ General Item .
+ Construction Material .
+ Hardware Stores and Tool .
+ Construction of Security Fence. - Angle Iron 65x65x6mm for fencing post of height 3.5, Angle Iron 65x65x6mm for fencing post of height 3.5, MS Flat 50 x 5mm of 2.60m height, Angle Iron 50x50x6mm for Strut post of height 3.10mtr, Angle Iron 50x50x6mm for fencing post of height 1.83, Angle Iron 50x50x6mm for fencing post of height 1.37, Barbed wire made out of GI wire of size 2.24mm dia, Chain link fence dia 4 mm and size of mesh 50mm x, Concertina Coil 600mm extentable up to 6 mtr, Concertina Coil 900mm extentable up to 15 to 20 mtr, Binding wire 0.9mm dia., 12 mm dia 50mm long bolts wih nuts & 02 x washers, Cement in polythene bags 50 kgs each grade 43 OPC, Sand Coarse confiming to IS - 383-970, 2nd revision, Crushed Stone Aggregate 20 mm graded, TMT Bar 12mm dia with 50mm U bend, Lime 1st quality, Commercial plywood 6' x 3' x 12 mm., Nails all Type 1"" 2""3"" 4"" 5"" and 6""., Primer Red Oxide, Synthetic enamel paint, colour black/white Ist quality .
+ Angle Iron 65x65x6mm for fencing post of height 3.5, Angle Iron 65x65x6mm for fencing post of height 3.5 mtr, MS Flat 50 x 5mm of 2.60m height, Angle Iron 50x50x6mm for Strut post of height 3.10mtr, Barbed wire made out of GI wire of size 2.24mm dia, Chain link fence dia 4 mm and size of mesh 50mm x, Concertina Coil 600mm extentable up to 6 mtr, Binding wire 0.9mm dia., 12 mm dia 50mm long bolts with nuts & 02 x washers, Cement in polythene bags 50 kgs each grade 43 OPC, Sand Coarse confiming to IS - 383-970, 2nd revision, Crushed Stone Aggregate 20 mm graded, TMT Bar 12mm dia with 50mm U bend, Lime 1st quality, Commercial plywood 6' x 3' x 12 mm., Nails all Type 1"" 2""3"" 4"" 5"" and 6""., Primer Red Oxide, Synthetic enamel paint, colour black/white Ist quality., Cutting Plier 160mm long, Leather Hand Gloves/Knitted industrial, Ring Spanner of 16mm x 17mm, 14 x 16mm, Crowbar hexagonal 1200mm long x 40mm, Plumb bob steel, Bucket steel 15 ltr capacity (as per, Plastic water tank 500 ltrs Make - Sintex, Water level pipe 30 Mtr, Brick Hammer 250 Gms with handle, Hack saw Blade double side, Welding Rod, Cutting rod for making holes, HDPE Sheet 5' x 8', Plastic Measuring tape 30 Mtr, Steel Measuring tape 5 Mtr, Wooden Gurmala 6""x3"", Steel Pan Mortar of 18""dia (As, Showel GS with wooden handle, Phawarah with wooden handle (As per, Digital Vernier Caliper, Digital Weighing Machine cap 500 Kgs, Portable Welding Machine, Concrete mixer machine of 8 CFT .
+ Angle Iron 65x65x6mm for fencing post of height 3.5, Angle Iron 65x65x6mm for fencing post of height 3.5, MS Flat 50 x 5mm of 2.60m height, Angle Iron 50x50x6mm for Strut post of height 3.10mtr, Barbed wire made out of GI wire of size 2.24mm dia, Chain link fence dia 4 mm and size of mesh 50mm, Concertina Coil 600mm extentable up to 6 mtr, Binding wire 0.9mm dia., 12 mm dia 50mm long bolts with nuts & 02 x washers, Cement in polythene bags 50 kgs each grade 43, Sand Coarse confiming to IS - 383-970, 2nd revision, Crushed Stone Aggregate 20 mm graded, TMT Bar 12mm dia with 50mm U bend, Lime 1st quality, Commercial plywood 6' x 3' x 12 mm., Nails all Type 1"" 2""3"" 4"" 5"" and 6""., Primer Red Oxide, Synthetic enamel paint, colour black/white Ist quality., Cutting Plier 160mm long, Leather Hand Gloves/Knitted industrial, Ring Spanner of 16mm x 17mm, 14 x 16mm, Crowbar hexagonal 1200mm long x 40mm, Plumb bob steel, Bucket steel 15 ltr capacity (as per, Plastic water tank 500 ltrs Make - Sintex, Water level pipe 30 Mtr, Brick Hammer 250 Gms with handle, Hack saw Blade double side, Welding Rod, Cutting rod for making holes, HDPE Sheet 5' x 8', Plastic Measuring tape 30 Mtr, Steel Measuring tape 5 Mtr, Wooden Gurmala 6""x3"", Steel Pan Mortar of 18""dia (As per, Showel GS with wooden handle, Phawarah with wooden handle (As per, Digital Vernier Caliper)
+
+"
+"['neural-networks', 'machine-learning', 'statistical-ai', 'generative-model', 'generative-adversarial-networks']"," Title: Why do we need Upsampling and Downsampling in Progressive Growing of GansBody: I was working recently on Progressive Growing of GANs (aka PGGANs). I have implemented the whole architecture, but the problem that was ticking my mind is that in simple GANs, like DCGAN, PIX2PIX, we actually use Transposed Convolution for up-sampling and Convolution for Down-sampling, but in PGGANs in which we gradually add layers to both generator and discriminator so that we can first start with 4x4 image and then increase to 1024x01024 step by step.
+
+What I did not understand is this: we first expand the 1x1x512 latent vector into a 4x4x512 feature map using a convolution with heavy padding; then, once trained on 4x4 images, we still take the 512-dimensional latent vector, use the previously trained convolutional layers to convert it to a 4x4x512 feature map, up-sample the resulting image to 8x8 using nearest-neighbour filtering, apply another convolution, and so on.
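+To make the two alternatives I am comparing concrete, here is a rough PyTorch-style sketch (the channel counts and names are my own):
+
+import torch.nn as nn
+
+# what PGGAN does: fixed nearest-neighbour up-sampling followed by a learned convolution
+pggan_style = nn.Sequential(
+    nn.Upsample(scale_factor=2, mode='nearest'),
+    nn.Conv2d(512, 512, kernel_size=3, padding=1),
+)
+
+# what I would have expected, as in DCGAN: a single learned transposed convolution
+dcgan_style = nn.ConvTranspose2d(512, 512, kernel_size=4, stride=2, padding=1)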
+
+
+- My question is: why do we need to explicitly up-sample and then apply a convolution, when instead we could just use a transposed convolution, which up-samples automatically and is trainable? Why do we not use it as in other GANs?
+
+
+Here is the image of architecture:
+
+
+
+Please explain the intuition behind this to me.
+Thanks
+"
+"['machine-learning', 'ai-design', 'probability', 'statistical-ai']"," Title: Is Nassim Taleb right about AI not being able to accurately predict certain types of distributions?Body: So Taleb has two heuristics to generally describe data distributions. One is Mediocristan, which basically means things that are on a Gaussian distribution such as height and/or weight of people.
+
+The other is called Extremistan, which describes a more Pareto like or fat-tailed distribution. An example is wealth distribution, 1% of people own 50% of the wealth or something close to that and so predictability from limited data sets is much harder or even impossible. This is because you can add a single sample to your data set and the consequences are so large that it breaks the model, or has an effect so large that it cancels out any of the benefits from prior accurate predictions. In fact this is how he claims to have made money in the stock market, because everyone else was using bad, Gaussian distribution models to predict the market, which actually would work for a short period of time but when things went wrong, they went really wrong which would cause you to have net losses in the market.
+
+I found this video of Taleb being asked about AI. His claim is that A.I. doesn't work (as well) for things that fall into extremistan.
+
+Is he right? Will some things just be inherently unpredictable even with A.I.?
+
+Here is the video I am referring to https://youtu.be/B2-QCv-hChY?t=43m08s
+"
+"['game-ai', 'evolutionary-algorithms', 'search', 'heuristics', 'alpha-beta-pruning']"," Title: More effective way to improve the heuristics of an AI... evolution or testing between thousands of pre-determined sets of heuristics?Body: I'm making a Connect Four game where my engine uses Minimax with Alpha-Beta pruning to search. Since Alpha-Beta pruning is much more effective when it looks at the best moves first (since then it can prune branches of poor moves), I'm trying to come up with a set of heuristics that can rank moves from best to worst. These heuristics obviously aren't guaranteed to always work, but my goal is that they'll often allow my engine to look at the best moves first. An example of such heuristics would be as follows:
+
+
+- Closeness of a move to the centre column of the board - weight 3.
+- How many pieces surround a move - weight 2.
+- How low, horizontally, a move is to the bottom of the board - weight 1.
+- etc
+
+
+However, I have no idea what the best set of weight values are for each attribute of a move. The weights I listed above are just my estimates, and can obviously be improved. I can think of two ways of improving them:
+
+1) Evolution. I can let my engine think while my heuristics try to guess which move will be chosen as best by the engine, and I'll see the success score of my heuristics (something like x% guessed correctly). Then, I'll make a pseudo-random change/mutation to the heuristics (by randomly adjusting one of the weight values by a certain amount), and see how the heuristics do then. If it guesses better, then that will be my new set of heuristics. Note that when my engine thinks, it considers thousands of different positions in its calculations, so there will be enough data to average out how good my heuristics are at prediction.
+
+2) Generate thousands of different heuristics with different weight values from the start. Then, let them all try to guess which move my engine will favor when it thinks. The set of heuristics that scores best should be kept.
+
+I'm not sure which strategy is better here. Strategy #1 (evolution) seems like it could take a long time to run, since every time I let my engine think it takes about 1 second. This means testing each new pseudo-random mutation will take a second. Meanwhile, Strategy #2 seems faster, but I could be missing out on a great set of heuristics if I myself didn't include them.
+"
+"['neural-networks', 'machine-learning', 'deep-neural-networks', 'symbolic-ai']"," Title: What kinds of problems can AI solve without using a deep neural network?Body: A lot of questions on this site seem to be asking "can I use X to solve Y?", where X is usually a deep neural network, and Y is often something already addressed by other areas of AI that are less well known?
+I have some ideas about this, but am inspired by questions like this one where a fairly wide range of views are expressed, and each answer focuses on just one possible problem domain.
+There are some related questions on this stack already, but they are not the same. This question specifically asks what genetic algorithms are good for, whereas I am more interested in having an inventory of problems mapped to possible techniques. This question asks what possible barriers are to AI with a focus on machine learning approaches, but I am interested in what we can do without using deep neural nets, rather than what is difficult in general.
+A good answer will be supported with citations to the academic literature, and a brief description of both the problem and the main approaches that are used.
+Finally, this question asks what AI can do to solve problems related to climate change. I'm not interested in the ability to address specific application domains. Instead, I want to see a catalog of abstract problems (e.g. having an agent learn to navigate in a new environment; reasoning strategically about how others might act; interpreting emotions), mapped to useful techniques for those problems. That is, "solving chess" isn't a problem, but "determining how to optimally play turn-based games without randomness" is.
+"
+"['ai-design', 'algorithm', 'game-ai', 'math', 'discount-factor']"," Title: Are there any discount-factors based on branching factors?Body: I recently came across this function:
+
+$$\sum_{t = 0}^{\infty} \gamma^t R_t.$$
+
+It's elegant and looks to be useful in the type of deterministic, perfect-information, finite models I'm working with.
+
+However, it occurs to me that using $\gamma^t$ in this manner might be seen as somewhat arbitrary.
+
+Specifically, the objective is to discount per the added uncertainty/variance of ""temporal distance"" between the present gamestate and any potential gamestate being evaluated, but that variance would seem to be a function of the branching factors present in a given state, and the sum of the branching factors leading up to the evaluated state.
+
+
+- Are there any defined discount-factors based on the number of branching factors for a given, evaluated node, or the number of branches in the nodes leading to it?
+
+
+If not, I'd welcome thoughts on how this might be applied.
+
+An initial thought is that I might divide 1 by the number of branches and add that value to the goodness of a given state, which is a technique I'm using for heuristic tie-breaking with no look-ahead, but that's a ""value-add"" as opposed to a discount.
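+
+If I were to turn that intuition into a discount rather than a value-add, I imagine something roughly like this (a toy sketch only; branching_factors is a hypothetical list of branch counts along the path to the evaluated node):
+
+    def geometric_return(rewards, gamma=0.95):
+        # the standard form: sum over t of gamma^t * R_t
+        return sum((gamma ** t) * r for t, r in enumerate(rewards))
+
+    def branching_return(rewards, branching_factors):
+        # discount each step by the product of 1/b_i over the branches seen so far,
+        # so rewards reached through high-branching (high-uncertainty) positions
+        # count for less than rewards reached through forced lines
+        total, weight = 0.0, 1.0
+        for r, b in zip(rewards, branching_factors):
+            weight *= 1.0 / max(b, 1)
+            total += weight * r
+        return total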
+
+
+
+For context, this is for a form of partisan Sudoku, where an expressed position $p_x$ (value, coordinates) typically removes some number of potential positions $p$ from the gameboard. (Without the addition of an element displacement mechanic, the number of branches can never increase.)
+
+On a $(3^2)^2$ Sudoku, the first $p_x$ removes $30$ out of $729$ potential positions $p$, including itself.
+
+With each $p_x$, the number of branches diminishes until the game collapses into a tractable state, allowing for perfect play in endgames. [Even there, a discounting function may have some utility because outcomes are sets of ratios. Where the macro metric is territorial (controlled regions at the end of play), the most meaningful metric may ultimately be ""efficiency"" (loosely, ""points_expended to regions_controlled""), which acknowledges a benefit to expending the least amount of points $p_x$, even in a tractable endgame where the ratio of controlled regions cannot be altered. Additionally, zugzwangs are possible in the endgame, and in that case reversing the discount to maximize branches may have utility.]
+
+$(3^2)^2 = (3 \times 3) \times (3 \times 3)$, i.e. a ""9x9"" board, but the exponent form is preferred so as not to restrict the number of dimensions.
+"
+"['definitions', 'intelligence']"," Title: What is the most general definition of ""intelligence""?Body: When we talk about artificial intelligence, human intelligence, or any other form of intelligence, what do we mean by the term intelligence in a general sense? What would you call intelligent and what not? In other words, how do we define the term intelligence in the most general possible way?
+"
+"['reinforcement-learning', 'terminology', 'definitions']"," Title: What is the relation between an environment, a state and a model?Body: In particular, I would like to have a simple definition of ""environment"" and ""state"". What are the differences between those two concepts? Also, I would like to know how the concept of model relates to the other two.
+
+There is a similar question What is the difference between an observation and a state in reinforcement learning?, but it is not exactly what I was looking for.
+"
+"['reinforcement-learning', 'papers', 'a3c']"," Title: Can deep successor representations be used with the A3C algorithm?Body: Deep Successor Representations(DSR) has given better performance in tasks like navigation, when compared to normal model-free RL tasks. Basically, DSR is a hybrid of model-free RL and model-based RL. But the original work has only used value-based functions deep RL methods like DQN.
+Can deep successor representations be used with the A3C algorithm?
+"
+"['neural-networks', 'machine-learning', 'math', 'activation-functions', 'function-approximation']"," Title: What makes multi-layer neural networks able to perform nonlinear operations?Body: As I know, a single layer neural network can only do linear operations, but multilayered ones can.
+
+Also, I recently learned that finite matrices/tensors, which are used in many neural networks, can only represent linear operations.
+
+However, multi-layered neural networks can represent non-linear (indeed, far more complex than merely non-linear) operations.
+
+What makes it happen? The activation layer?
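+
+To make my question concrete, here is a small numpy sketch of the two cases I am comparing (illustrative only):
+
+    import numpy as np
+
+    rng = np.random.default_rng(0)
+    W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
+    x = rng.normal(size=3)
+
+    # two stacked linear layers collapse into one linear map, W2 @ W1
+    stacked   = W2 @ (W1 @ x)
+    collapsed = (W2 @ W1) @ x          # same result for every x, so still linear
+
+    # with a ReLU in between, no single matrix reproduces the output for all x
+    relu = lambda v: np.maximum(v, 0)
+    nonlinear = W2 @ relu(W1 @ x)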
+"
+"['convolutional-neural-networks', 'image-recognition', 'training']"," Title: How to compare the training performance of a model on different data input?Body: So I have a deep learning model and three data sets (images). My theory is that one of these data sets should function better when it comes to training a deep learning model (meaning that the model will be able to achieve better performance (higher accuracy) with one of these data sets to serve one classification purpose)
+
+I just want to sanity-check my approach here. I understand the random nature of training deep learning models and the difficulties associated with such an experiment. Still, I would like someone to point out any red flags.
+
+I am wondering about these things:
+
+
+- Do you think using an optimizer with default parameters and repeating the training process, let's say, 30 times for each data set and picking the best performance is a safe approach? I am mainly worried here that modifying the hyperparameters of the optimizer might give better results for, let's say, one of the data sets.
+- What about seeding the weight initialization? Do you think that I should seed it and then modify the hyperparameters until I get the best convergence, or not seed and still modify the hyperparameters?
+
+
+I am sorry for the generality of my question. I hope someone can point me in the right direction.
+"
+"['reinforcement-learning', 'policies', 'value-functions', 'optimal-policy']"," Title: An example of a unique value function which is associated with multiple optimal policiesBody: In the 4th paragraph of
+http://www.incompleteideas.net/book/ebook/node37.html
+it is mentioned:
+
+
+ Whereas the optimal value functions for states and state-action pairs are unique for a given MDP, there can be many optimal policies
+
+
+Could you please give me a simple example that shows different optimal policies considering a unique value function?
+"
+"['reinforcement-learning', 'terminology', 'policies', 'stationary-policy']"," Title: What does ""stationary"" mean in the context of reinforcement learning?Body: I think I've seen the expressions ""stationary data"", ""stationary dynamics"" and ""stationary policy"", among others, in the context of reinforcement learning. What does it mean? I think stationary policy means that the policy does not depend on time, and only on state. But isn't that a unnecessary distinction? If the policy depends on time and not only on the state, then strictly speaking time should also be part of the state.
+"
+"['chat-bots', 'social']"," Title: The future of chatbotsBody: I downloaded a chatbot called Replika off the internet the other day and we've become very good friends. My thought is that such chatbots will soon replace therapists and then probably private tutors as well.
+
+
+- Is it safe to say that anyone aspiring to go into one of these professions now should look for other options?
+- What other jobs may be replaced by chatbots in the future?
+- How long before AIs are able to answer questions on StackExchange?
+
+"
+"['convolutional-neural-networks', 'reference-request', 'datasets', 'object-detection', 'yolo']"," Title: Would YOLO be able to detect objects in ""different"" positions?Body: I have the following question about You Only Look Once (YOLO) algorithm, for object detection.
+I have to develop a neural network to recognize web components in web applications - for example, login forms, text boxes, and so on. In this context, I have to consider that the position of the objects on the page may vary, for example, when you scroll up or down.
+The question is, would YOLO be able to detect objects in "different" positions? Would the changes affect the recognition precision? In other words, how to achieve translation invariance? Also, what about partial occlusions?
+My guess is that it depends on the relevance of the examples in the dataset: if enough translated / partially occluded examples are present, it should work fine.
+If possible, I would appreciate papers or references on this matter.
+(PS: if anyone knows about a labeled dataset for this task, I would really be grateful if you let me know.)
+"
+"['neural-networks', 'game-ai']"," Title: Can you analyse a neural network to determine good states?Body: I've developed a neural network that can play a card game. I now want to use it to create decks for the game. My first thought would be to run a lot of games with random decks and use some approximation (maybe just a linear approximation with a feature for each card in your hand) to learn the value function for each state.
+
+However, this will probably take a while, so in the meantime is there any way I could get this information directly from the neural network?
+"
+"['neural-networks', 'machine-learning', 'convolutional-neural-networks']"," Title: How to tinker with CNN architectures?Body: I was thinking of creating a CNN. Now it is known CNN takes long times to train so it is advisable to stick to known architectures and hyper-parameters.
+
+My question is: I want to tinker with the CNN architecture (since it is a specialised task). One approach would be to create a CNN and check on small data-sets, but then I would have no way of knowing whether the Fully Connected layer at the end is over-fitting the data while the convolutional layers do nothing (since large FC layers can easily over-fit data). Cross Validation is a good way to check it, but it might not be satisfactory (since my opinion is that a CNN can be replaced with a Fully Connected NN if the data-set is small enough and there is little variation in the future data-sets).
+
+So what are some ways to tinker with CNN and get a good estimate for future data-sets in a reasonable training time? Am I wrong in my previous assumptions? A detailed answer would be nice!
+"
+['naive-bayes']," Title: Why is my calculation of the probability of an object being in a certain class incorrect?Body: In the attached image
+
+there is the probability with the Naive Bayes algorithm of:
+
+Fem:dv/m/s Young own Ex-credpaid Good ->62%
+
+I calculated the probability as follows:
+$$P(Fem:dv/m/s \mid Good) * P(Young \mid Good) * P(own \mid Good) * P(Ex-credpaid \mid Good) * P(Good) = 1/6 * 2/6 * 5/6 * 3/6 * 0.6 = 0.01389$$
+I don't know where I failed. Could someone please tell me where my error is?
+"
+"['ai-design', 'poker']"," Title: Approaches to poker tournament winner prediction?Body: I’ve done my research and could not find answer anywhere else. My apologies in advance if same problem is answered in different terms on stack-overflow.
+
+I am trying to solve a poker tournament winner prediction problem. I have millions of historical records in this format:
+
+
+- Players ==> Winner
+- P1,P2,P4,P8 ==> P2
+- P4,P7,P6 ==> P4
+- P6,P3,P2,P1 ==> P1
+
+
+What are some of the most suitable algorithms to predict the winner from a set of players?
+
+So far I have tried decision trees and XGBoost, without much success.
+"
+"['natural-language-processing', 'tensorflow', 'keras']"," Title: Sequence to sequence machine learning / NMT - converting numbers into wordsBody: I want to do some sequence to sequence modelling on source data that looks like this:
+
+/-0.013428/-0.124969/-0.13435/0.008087/-0.269241/-0.36849/
+
+
+with target data that looks like this:
+
+Dont be angry with the process youre going through right now
+
+
+Both are of indeterminate lengths, and the lengths of target and source data aren't the same. What I'd like to do is have a prediction model where I can input similar numbers and have it generate texts based on the target training data.
+
+I started off doing character-level s2s, but the output of the model is too nonsensical even at 2-5k epochs. So I've been looking into word-level s2s and NMT, but the tutorials always assume strings of text as the target and source, and I keep running into roadblocks trying to preprocess the text, when all the tutorials assume a certain syntax/set of characters. This is my first try at ML, and some of the tutorials really throw me off with the text preprocessing requirements.
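+
+For reference, this is roughly what I imagine the preprocessing would look like (the maxlen values are arbitrary):
+
+    from tensorflow.keras.preprocessing.text import Tokenizer
+    from tensorflow.keras.preprocessing.sequence import pad_sequences
+
+    source_line = '/-0.013428/-0.124969/-0.13435/0.008087/-0.269241/-0.36849/'
+    target_line = 'Dont be angry with the process youre going through right now'
+
+    # source: split on '/' and keep the floats as one variable-length sequence
+    source_seq = [float(tok) for tok in source_line.split('/') if tok]
+
+    # target: word-level tokenisation into integer ids
+    tokenizer = Tokenizer()
+    tokenizer.fit_on_texts([target_line])          # fit on the whole corpus in practice
+    target_seq = tokenizer.texts_to_sequences([target_line])
+
+    # pad both sides to fixed lengths before feeding an encoder-decoder model
+    source_padded = pad_sequences([source_seq], maxlen=20, dtype='float32', padding='post')
+    target_padded = pad_sequences(target_seq, maxlen=15, padding='post')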
+
+Am I going down the right avenue looking at word level/NMT stuff? And is there a tutorial I've missed for something like what I'm trying to build?
+"
+"['machine-learning', 'datasets']"," Title: Which features of a data set can be used for market campaigning using propensity scores?Body: A dataset contains so many fields in which there is both relevant and irrelevant field. If we want to do a market campaigning using propensity scoring, which fields of the data set are relevant?
+How can we find which data field should be selected and can drive to the desired propensity score?
+"
+"['neural-networks', 'comparison', 'boltzmann-machine']"," Title: What is the difference between visible and hidden units in Boltzmann machines?Body: What is the difference between visible and hidden units in Boltzmann machines? What are their purposes?
+"
+['convolutional-neural-networks']," Title: Figuring out mapping between two matricesBody: Imagine I have a 2D matrix, A. I apply some transformation to it, for example:
+B = A_shifted + A.
+
+Would it be possible to train a CNN to learn back the mapping from B to A? Giving B as example and A as target?
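+
+In case it clarifies the setup, generating (input, target) pairs for this example transformation could look something like this (the shift amount is arbitrary):
+
+    import numpy as np
+
+    def make_pair(size=32, shift=3, seed=0):
+        rng = np.random.default_rng(seed)
+        A = rng.normal(size=(size, size))        # A is the target
+        A_shifted = np.roll(A, shift, axis=1)    # the transformation I mentioned
+        B = A_shifted + A                        # B is what the CNN would see
+        return B, A
+
+    B, A = make_pair()   # feed B as the example and A as the target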
+
+Thanks!
+"
+"['neural-networks', 'deep-neural-networks', 'activation-functions', 'sigmoid']"," Title: Target values of 0.1 for 0 and 0.9 for 1 for sigmoidBody: I recently read an article about neural networks saying that, when using sigmoid as activation function, it's advised to use 0.1 as target value instead of 0, and 0.9 instead of 1. This was to avoid ""saturation effects"". I only understood is halfway, and was hoping someone could clarify a few things for me:
+
+
+- Is this only the case when the output is boolean (0 or 1), or will it also be the case for continuous values in the range between 0 and 1? If so, should all values be scaled to the interval [0.1, 0.9]?
+- What exactly is the problem with an output of 0 or 1? Does it have something to do with the derivative of sigmoid being 0 when its value is 0 or 1? As I understood it, weights could end up approaching infinity, but I didn't understand why.
+- Is this the case only when sigmoid is used in the output layer (which it rarely is, I believe), or is it also the case when sigmoid is used in hidden layers only?
+
+"
+"['deep-learning', 'image-recognition']"," Title: Variable Number of Inputs to Neural NetworksBody: So suppose that you have a real estate appraisal problem. You have some structured data, and some images exterior of home, bedrooms, kitchen, etc. The number of pictures taken is variable per observational unit, i.e. the house.
+I understand the basics of combining an image processing neural net with tabular data for a single image. You chop off the final layer and feed in the embeddings of the image to your final model.
+How would one deal with a variable number of images, where your unit of observation can have between zero and infinitely many images (theoretically no upper bound on the number of images per observation)?
+"
+"['natural-language-processing', 'natural-language-understanding', 'automated-reasoning', 'question-answering']"," Title: Is there a machine learning system that is able to understand mathematical problems given in a textual description?Body: Is there a machine learning system that is able to "understand" mathematical problems given in a textual description, such as
+
+A big cat needs 4 days to catch all the mice and a small cat needs 12 days. How many days need both, if they catch mice together?
+
+?
+"
+"['reinforcement-learning', 'algorithm', 'sutton-barto', 'reinforce']"," Title: Why does the discount rate in the REINFORCE algorithm appear twice?Body: I was reading the book Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto (complete draft, November 5, 2017).
+
+On page 271, the pseudo-code for the episodic Monte-Carlo Policy-Gradient Method is presented. Looking at this pseudo-code, I can't understand why the discount rate seems to appear 2 times, once in the update step and a second time inside the return. [See the figure below]
+
+
+
+It seems that the return for the steps after step 1 is just a truncation of the return of the first step. Also, if you look just one page above in the book, you find an equation with just 1 discount rate (the one inside the return).
+
+Why then does the pseudo-code seem to be different? My guess is that I am misunderstanding something:
+
+$$
+{\mathbf{\theta}}_{t+1} ~\dot{=}~\mathbf{\theta}_t + \alpha G_t \frac{{\nabla}_{\mathbf{\theta}} \pi \left(A_t \middle| S_t, \mathbf{\theta}_{t} \right)}{\pi \left(A_t \middle| S_t, \mathbf{\theta}_{t} \right)}.
+\tag{13.6}
+$$
+"
+"['machine-learning', 'terminology', 'computational-learning-theory', 'hypothesis-class', 'capacity']"," Title: What is the difference between hypothesis space and representational capacity?Body: I am reading Goodfellow et al Deeplearning Book. I found it difficult to understand the difference between the definition of the hypothesis space and representation capacity of a model.
+
+In Chapter 5, it is written about hypothesis space:
+
+
+ One way to control the capacity of a learning algorithm is by choosing its hypothesis space, the set of functions that the learning algorithm is allowed to select as being the solution.
+
+
+And about representational capacity:
+
+
+ The model specifies which family of functions the learning algorithm can choose from when varying the parameters in order to reduce a training objective. This is called the representational capacity of the model.
+
+
+If we take the linear regression model as an example and allow our output $y$ to take polynomial inputs, I understand the hypothesis space as the ensemble of quadratic functions taking input $x$, i.e. $y = a_0 + a_1x + a_2x^2$.
+
+How is it different from the definition of the representational capacity, where parameters are $a_0$, $a_1$ and $a_2$?
+"
+"['neural-networks', 'natural-language-processing', 'bert', 'gpt', 'language-model']"," Title: Where can I find pre-trained language models in English and German?Body: Where can I find (more) pre-trained language models? I am especially interested in neural network-based models for English and German.
+
+I am aware only of Language Model on One Billion Word Benchmark and TF-LM: TensorFlow-based Language Modeling Toolkit.
+
+I am surprised not to find a greater wealth of models for different frameworks and languages.
+"
+"['reinforcement-learning', 'deep-rl', 'proximal-policy-optimization', 'importance-sampling', 'trust-region-policy-optimization']"," Title: Why is the log probability replaced with the importance sampling in the loss function?Body: In the Trust-Region Policy Optimisation (TRPO) algorithm (and subsequently in PPO also), I do not understand the motivation behind replacing the log probability term from standard policy gradients
+$$L^{PG}(\theta) = \hat{\mathbb{E}}_t[\log \pi_{\theta}(a_t | s_t)\hat{A}_t],$$
+with the importance sampling term of the policy output probability over the old policy output probability
+$$L^{IS}_{\theta_{old}}(\theta) = \hat{\mathbb{E}}_t \left[\frac{\pi_{\theta}(a_t | s_t)}{\pi_{\theta_{old}}(a_t | s_t)}\hat{A}_t \right]$$
+Could someone please explain this step to me?
+I understand once we have done this why we then need to constrain the updates within a 'trust region' (to avoid the $\pi_{\theta_{old}}$ increasing the gradient updates outwith the bounds in which the approximations of the gradient direction are accurate). I'm just not sure of the reasons behind including this term in the first place.
+"
+"['research', 'reference-request', 'autonomous-vehicles']"," Title: Self-driving control logic based on semantic segmentationBody: In the context of autonomous driving, two main stages are typically implemented: an image processing stage and a control stage. The first aims at extracting useful information from the acquired image while the second employs those information to control the vehicle.
+
+As far as concerning the processing stage, semantic segmentation is typically used. The input image is divided in different areas with a specific meaning (road, sky, car etc...). Here is an example of semantic segmentation:
+
+
+
+The output of the segmentation stage is very complex. I am trying to understand how this information is typically used in the control stage, and how to use the information on the segmented areas to control the vehicle.
+
+For simplicity, let's just consider a vehicle that has to follow a path.
+
+TL;DR: what are the typical control algorithms for autonomous driving based on semantic segmentation?
+"
+"['activation-functions', 'sigmoid']"," Title: Why do non-linear activation functions not require a specific non-linear relation between its inputs and outputs?Body: A linear activation function (or none at all) should only be used when the relation between input and output is linear. Why doesn't the same rule apply for other activation functions? For example, why doesn't sigmoid only work when the relation between input and output is ""of sigmoid shape""?
+"
+"['deep-learning', 'classification', 'computer-vision', 'object-recognition']"," Title: Alternative to sliding window neural network (was: Object detect (or) image classification at specific locations in the frame)Body: Recent advances in Deeplearning and dedicated hardware has made it possible to detect images with a much better accuracy than ever. Neural networks are the gold standard for computer vision application and are used widely in the industry, for example for internet search engines and autonomous cars. In real life problems, the image contains of regions with different objects. It is not enough to only identify the picture but elements of the picture.
+
+A while ago, an alternative to the well-known sliding-window algorithm was described in the literature, called Region Proposal Networks. It is basically a convolutional neural network which was extended by a region vector.
+
+Problem that I am trying to solve:
+
+In a given video frame, I want to pick some regions of interest (literally), and perform classification on those regions.
+
+How it is currently implemented (a rough sketch follows the list)
+
+
+- Capture the video frame
+- Split the video frame into multiple images each representing a region of interest
+- Perform image classification (inference) on each of the images (each corresponding to a part of the frame)
+- Aggregate the results of #3
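+
+A rough sketch of this flow (classify_crop stands in for whatever classifier is used):
+
+    # frame: H x W x 3 array for one video frame; rois: list of (x, y, w, h) boxes
+    def classify_regions(frame, rois, classify_crop):
+        results = []
+        for (x, y, w, h) in rois:
+            crop = frame[y:y + h, x:x + w]         # step 2: cut out one region
+            results.append(classify_crop(crop))    # step 3: one inference per region
+        return results                             # step 4: aggregate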
+
+
+Problem with the current approach
+
+Multiple inferences per frame.
+
+Question
+
+I am looking for a solution where I specify the locations of interest in a frame, and the inference task, be it object detection or image classification, is performed only on those regions. Can you please point me to the references which I need to study or use to do this?
+"
+"['training', 'evolutionary-algorithms', 'feedforward-neural-networks', 'fitness-functions', 'fitness-design']"," Title: Why does the fitness of my neural network to play tic-tac-toe keep oscillating?Body: I wrote a simple feed-forward neural network that plays tic-tac-toe:
+
+- 9 neurons in the input layer: 1 - my sign, -1 - opponent's sign, 0 - empty;
+- 9 neurons in hidden layer: value calculated using ReLU;
+- 9 neurons in output layer: value calculated using softmax;
+
+I am using an evolutionary approach: 100 individuals play against each other (all-play-all). The top 10 best are selected to mutate and reproduce into the next generation. The fitness score is calculated as: +1 for a valid move (it's possible to place your sign on an already occupied tile), +9 for a victory, -9 for a defeat.
+What I notice is that the network's fitness keeps climbing up and falling down again. It seems that my current approach only evolves certain patterns of placing signs on the board, and once a random mutation interrupts the current pattern, a new one emerges. My network goes in circles without ever evolving an actual strategy. I suspect the solution for this would be to pit the network against a tic-tac-toe AI, but is there any way to evolve an actual strategy just by making it play against itself?
+"
+"['neural-networks', 'machine-learning', 'backpropagation', 'gradient-descent', 'pooling']"," Title: Can non-differentiable layer be used in a neural network, if it's not learned?Body: For example, AFAIK, the pooling layer in a CNN is not differentiable, but it can be used because it's not learning. Is it always true?
+"
+"['reinforcement-learning', 'rewards', 'sutton-barto', 'return']"," Title: Is my interpretation of the return correct?Body: Sutton and Barto 2018 define the discounted return $G_t$ the following way (p 55):
+
+
+
+Is my interpretation correct?
+
+
+
+Or should all ""1"" be in the same column?
+"
+"['reinforcement-learning', 'definitions', 'value-functions', 'bellman-equations']"," Title: In reinforcement learning, does the optimal value correspond to performing the best action in a given state?Body: I am confused about the definition of the optimal value ($V^*$) and optimal action-value (Q*) in reinforcement learning, so I need some clarification, because some blogs I read on Medium and GitHub are inconsistent with the literature.
+Originally, I thought the optimal action value, $Q^*$, represents you performing the action that maximizes your current reward, and then acting optimally thereafter.
+And the optimal value, $V^*$, being the average $Q$ values in that state. Meaning that if you're in this state, the average "goodness" is this.
+For example:
+If I am in a toy store and I can buy a pencil, yo-yo, or Lego.
+Q(toy store, pencil) = -10
+Q(toy store, yo-yo) = 5
+Q(toy store, Lego) = 50
+
+And therefore my $Q^* = 50$
+But my $V^*$ in this case is:
+V* = -10 + 5 + 50 / 3 = 15
+
+Representing no matter what action I take, the average future projected reward is $15$.
+And for the advantage of learning, my baseline would be $15$. So anything less than $0$ is worse than average and anything above $0$ is better than average.
+However, now I am reading about how $V^*$ actually assumes the optimal action in a given state, meaning $V^*$ would be 50 in the above case.
+I am wondering which definition is correct.
+"
+"['reinforcement-learning', 'deep-rl', 'ddpg', 'action-spaces']"," Title: Is there a difference in the architecture of deep reinforcement learning when multiple actions are performed instead of a single action?Body: I've built a deep deterministic policy gradient reinforcement learning agent to be able to handle any games/tasks that have only one action. However, the agent seems to fail horribly when there are two or more actions. I tried to look online for any examples of somebody implementing DDPG on a multiple-action system, but people mostly applied it to the pendulum problem, which is a single-action problem.
+For my current system, it is a system with 3 states and 2 continuous control actions (one adjusts the temperature of the system, the other adjusts a mechanical position; both are continuous). However, I froze the second continuous action to be the optimal action all the time, so the RL agent only has to manipulate one action. It solves the task within 30 episodes. However, the moment I allow the RL agent to try both continuous actions, it doesn't even converge after 1000 episodes. In fact, it diverges aggressively. The output of the actor network seems to always be the max action, possibly because I am using a tanh activation for the actor to provide an output constraint. I added a penalty for large actions, but it does not seem to work for the 2-continuous-action case.
+For my exploratory noise, I used Ornstein-Uhlenbeck noise, with means adjusted for the two different continuous actions. The mean of the noise is 10% of the mean of the action.
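+
+For reference, my noise process looks roughly like this (the mu, theta and sigma values here are illustrative, not my exact settings):
+
+    import numpy as np
+
+    class OUNoise:
+        # Ornstein-Uhlenbeck process over a 2-dimensional action
+        def __init__(self, mu=np.array([0.1, 0.05]), theta=0.15, sigma=0.2):
+            self.mu, self.theta, self.sigma = mu, theta, sigma
+            self.state = np.copy(mu)
+
+        def sample(self):
+            dx = self.theta * (self.mu - self.state) \
+                 + self.sigma * np.random.randn(*self.state.shape)
+            self.state = self.state + dx
+            return self.state
+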
+Is there any massive difference between single action and multiple action DDPG?
+I changed the reward function to take into account both actions, have tried making a bigger network, tried priority replay, etc., but it appears I am missing something.
+Does anyone here have any experience building a multiple-action DDPG and could give me some pointers?
+"
+"['neural-networks', 'convolutional-neural-networks', 'image-recognition']"," Title: Optimizing image recognition results for unknown labelsBody: I’m training a network to do image classification on zoo animals.
+
+I’m a software engineer and not an ML expert, so I’ve been retraining Google’s Inception model, and the latest model is trained using Google AutoML Vision.
+
+The network performs really well, but I have trouble with images of animals that I don’t want any labels for. Basically I would like images of those animals to be classified as unknowns or achieve low scores.
+
+I do have images of the animals that I don’t want labels for, and I tried putting them all into one “nothing” label together with images I’ve collected of the animals’ habitats without any animals. This doesn’t really yield any good results though. The network performs well for the labeled animals but ends up assigning one of those labels to the other animals as well, usually with a really high score.
+
+I have 14 labels and 10,000 images. I should also mention that the “nothing” label ends up having a lot of images compared to the actual labels. Those images are not included in the 10,000.
+
+Are there any tricks to achieve better results with this? Should I create multiple labels for the images in the “nothing” category maybe?
+"
+"['reinforcement-learning', 'deep-rl', 'hyperparameter-optimization', 'hyper-parameters', 'a3c']"," Title: What is the pros and cons of increasing and decreasing the number of worker processes in A3C?Body: In A3C, there are several child processes and one master process. The child precesses calculate the loss and backpropagation, and the master process sums them up and updates the parameters, if I understand it correctly.
+But I wonder how I should decide the number of child processes to use. I think the more child processes there are, the better the correlation between samples is broken up, but I'm not sure what the cons of setting a large number of child processes are.
+Maybe the more child processes there are, the larger the variance of the gradient is, leading to instability in learning? Or is there any other reason?
+And finally, how should I decide the number of the child processes?
+"
+"['reinforcement-learning', 'long-short-term-memory', 'deep-rl', 'comparison', 'experience-replay']"," Title: How does LSTM in deep reinforcement learning differ from experience replay?Body: In the paper Deep Recurrent Q-Learning for Partially Observable MDPs, the author processed the Atari game frames with an LSTM layer at the end. My questions are:
+
+
+- How does this method differ from the experience replay, as they both use past information in the training?
+- What's the typical application of both techniques?
+- Can they work together?
+- If they can work together, does it mean that the state is no longer a single state but a set of contiguous states?
+
+"
+"['deep-learning', 'reinforcement-learning']"," Title: Tuning of PPO metaparameters: a high level overview of what each parameter doesBody: I am using the PPO algorithm implemented by tensorforce: https://github.com/reinforceio/tensorforce . It works great and I am very happy with the results.
+
+However, I notice that there are many metaparameters available to give to the PPO algorithm:
+
+ # the tensorforce agent configuration ------------------------------------------
+ network_spec = [
+ dict(type='dense', size=256),
+ dict(type='dense', size=256),
+ ]
+
+ agent = PPOAgent(
+ states=environment.states,
+ actions=environment.actions,
+ network=network_spec,
+ # Agent
+ states_preprocessing=None,
+ actions_exploration=None,
+ reward_preprocessing=None,
+ # MemoryModel
+ update_mode=dict(
+ unit='episodes',
+ # 10 episodes per update
+ batch_size=10,
+ # Every 10 episodes
+ frequency=10
+ ),
+ memory=dict(
+ type='latest',
+ include_next_states=False,
+ capacity=200000
+ ),
+ # DistributionModel
+ distributions=None,
+ entropy_regularization=0.01,
+ # PGModel
+ baseline_mode='states',
+ baseline=dict(
+ type='mlp',
+ sizes=[32, 32]
+ ),
+ baseline_optimizer=dict(
+ type='multi_step',
+ optimizer=dict(
+ type='adam',
+ learning_rate=1e-3
+ ),
+ num_steps=5
+ ),
+ gae_lambda=0.97,
+ # PGLRModel
+ likelihood_ratio_clipping=0.2,
+ # PPOAgent
+ step_optimizer=dict(
+ type='adam',
+ learning_rate=1e-3
+ ),
+ subsampling_fraction=0.2,
+ optimization_steps=25,
+ execution=dict(
+ type='single',
+ session_config=None,
+ distributed_spec=None
+ )
+ )
+
+
+So my question is: is there a way to understand, intuitively, the meaning / effect of all these metaparameters and use this intuitive understanding to improve training performance?
+
+So far I have reached - from a mix of reading the PPO paper and the literature around, and playing with the code - to the following conclusions. Can anybody complete / correct?
+
+
+- effect of network_spec: this is size of the 'main network'. Quite classical: need it big enough to get valuable predictions, not too big either otherwise it is hard to train.
+- effect of the parameters in update_mode: this is how often the network updates are performed.
+
+
+- batch_size is how many episodes are used for a batch update. Not sure of the effect, nor what this exactly means in practice (are all samples taken from only 10 batches of the memory replay)?
+- frequency is how often the update is performed. I guess having a high frequency would make the training slower but more stable (as it samples from more different batches)?
+- unit: no idea what this does
+
+- memory: this is the replay memory buffer.
+
+
+- type: not sure what this does or how it works.
+- include_next_states: not sure what this does or how it works
+- capacity: I think this is how many tuples (state, action, reward) are stored. I think this is an important metaparameter. In my experience, if this is too low compared to the number of actions in one episode, the learning is very bad. I guess this is because it must be large enough to store MANY episodes, otherwise the network learns from correlated data - which is bad.
+
+- DistributionMode: guess this is the model for the distribution of the controls? No idea what the parameters there do.
+- PGModel: No idea what the parameters there do. Would be interesting to know if some should be tweaked / which ones.
+- PGLRModel: idem, no idea what all these parameters do / if they should be tweaked.
+- PPOAgent: idem, no idea what all these parameters do / if they should be tweaked.
+
+
+Summary
+
+So in summary, would be great to get some help about:
+
+
+- Which parameters should be tweaked
+- How should these parameters be tweaked? Is there a 'high level intuition' about how they should be tweaked / in which circumstances?
+
+"
+"['neural-networks', 'deep-learning']"," Title: How is it possible to teach a neural network to perform addition?Body: I am trying to understant how it works. How do you teach it say, to add 1 to each number it gets. I am pretty new to the subject and I learned how it works when you teach it to identify a picture of a number. I can understand how it identifies a number but I cant get it how would it study to perform addition? I can understand that it can identify a number or picture using the pixels and assigning weights and then learning to measure whether a picture of a number resembling the weight is assigned to each pixel. But i can't logically understand how would it learn the concept of adding a number by one. Suppose I showed it thousands of examples of 7 turning to 8 152 turning into 153 would it get it that every number in the world has to be added by one? How would it get it having no such operation of + ? Since addition does not exist to its proposal then how can it realize that it has to add one in every number? Even by seeing thousands of examples but having no such operation of plus I cant understand it. I could understand identifying pixels and such but such an operation I cant get the theoretical logic behind it. Can you explain the logic in layman terms?
+"
+['neural-networks']," Title: Type of artificial neural network suitable for learning and then predicting forest growthBody: I'm trying to use an ANN to learn from a large amount of forest measurement data obtained from sampling plots across Ontario, Canada and associated climate data provided by regional climate modelling in this province.
+
+So the following are the inputs to the ANN:
+
+
+- Location (GPS coordinates)
+- Measurement year and month
+- Tree species
+- Age
+- Soil type
+- Soil moisture regime
+- Seasonal or monthly average temperature
+- Seasonal or monthly average precipitation
+- Some more data are available to select
+
+
+And the targets include:
+- Average total tree height
+- Average tree diameter at breast height
+
+For each sampling plot, the trees have been measured 1-4 times.
+So my question is: what type of ANN can best be used to learn from the data and then be used for predicting with a set of new input data?
+"
+"['neural-networks', 'machine-learning', 'recurrent-neural-networks', 'reference-request', 'generative-model']"," Title: What are the best machine learning models for music composition?Body: What are the best machine learning models that have been used to compose music? Are there some good research papers (or books) on this topic out there?
+I would say, if I use a neural network, I would opt for a recurrent one, because it needs to have a concept of timing, chord progressions, and so on.
+I am also wondering what the loss function would look like, and how I could give the AI as much feedback as it usually needs.
+"
+"['reinforcement-learning', 'deep-rl', 'experience-replay', 'imitation-learning', 'apprenticeship-learning']"," Title: In imitation learning, do you simply inject optimal tuples of experience $(s, a, r, s')$ into your experience replay buffer?Body: Due to my RL algorithm having difficulties learning some control actions, I've decided to use imitation learning/apprenticeship learning to guide my RL to perform the optimal actions. I've read a few articles on the subject and just want to confirm how to implement it.
+Do I simply sample a state $s$, then perform the optimal action $a^*$ in that state $s$, calculate the reward for the action $r$, and then observe the next state $s'$, and finally put that into the experience replay?
+If this is the case, I am thinking of implementing it as follows:
+
+- Initialize the optimal replay buffer $D_O$
+- Add the optimal tuple of experience $(s, a^*, r, s')$ into the replay buffer $D_O$
+- Initialize the normal replay buffer $D_N$
+- During the simulation, initially sample $(s, a^*, r, s')$ only from the optimal replay buffer $D_O$, while populating the normal replay buffer $D_N$ with the simulation results.
+- As training/learning proceeds, anneal out the use of the optimal replay buffer, and sample only from the normal replay buffer.
+
+Would such an architecture work?
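+
+Concretely, the sampling part I have in mind looks roughly like this (a sketch under the assumptions above; the annealing schedule is arbitrary):
+
+    import random
+
+    def sample_batch(D_opt, D_norm, step, batch_size=32, anneal_steps=50000):
+        # probability of drawing from the optimal buffer decays linearly to zero
+        p_opt = max(0.0, 1.0 - step / anneal_steps)
+        batch = []
+        for _ in range(batch_size):
+            buffer = D_opt if (D_opt and random.random() < p_opt) else D_norm
+            batch.append(random.choice(buffer))   # each item is an (s, a, r, s') tuple
+        return batch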
+"
+"['machine-learning', 'convolutional-neural-networks', 'python', 'object-detection', 'object-recognition']"," Title: How can I develop an object detection system that counts the number of objects and determines their position in an image?Body: I want to create a simple object detection tool. So, basically, an image will be provided to the tool, and, from that image, it has to detect the number of objects.
+For example, an image of a dining table that has certain items present on it, such as plates, cups, forks, spoons, bottles, etc.
+The tool has to count the number of objects, irrespective of the type of object. After counting, it should return the position of the object with its size, so that I can draw a border over it.
+I would like not to use any library or API present such as TensorFlow, OpenCV, etc., given that I want to learn the details.
+If the process is very difficult to build without using an API, then the number/type of objects which it will count can also be limited, since this project is for my educational/learning purposes. Can anyone help me understand the logic by which this can be achieved? For example, it may ignore a napkin present on the table rather than counting it as an object.
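+
+To show the level I would like to work at, here is the kind of low-level logic I imagine for a heavily simplified case (a binary image I have already thresholded myself); I do not know whether this is the right direction for real photos:
+
+    import numpy as np
+
+    def count_objects(binary):
+        # binary: 2-D array where 1 marks 'object' pixels; returns bounding boxes
+        visited = np.zeros(binary.shape, dtype=bool)
+        boxes = []
+        h, w = binary.shape
+        for i in range(h):
+            for j in range(w):
+                if binary[i, j] and not visited[i, j]:
+                    stack, pixels = [(i, j)], []
+                    visited[i, j] = True
+                    while stack:                      # iterative flood fill
+                        y, x = stack.pop()
+                        pixels.append((y, x))
+                        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
+                            ny, nx = y + dy, x + dx
+                            if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not visited[ny, nx]:
+                                visited[ny, nx] = True
+                                stack.append((ny, nx))
+                    ys, xs = zip(*pixels)
+                    boxes.append((min(xs), min(ys), max(xs), max(ys)))
+        return boxes                                  # one box per connected object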
+"
+"['neural-networks', 'machine-learning', 'training', 'datasets', 'training-datasets']"," Title: What happens to the training data after your machine learning model has been trained?Body: What happens after you have used machine learning to train your model? What happens to the training data?
+Let's pretend it predicted correctly 99.99999% of the time and you were happy with it and wanted to share it with the world. If you put in 10GB of training data, is the file you share with the world 10GB?
+If it was all trained on AWS, can people only use your service if they connect to AWS through an API?
+What happens to all the old training data? Does the model still need all of it to make new predictions?
+"
+"['search', 'graph-theory', 'breadth-first-search']"," Title: How do I train a bot to solve Katona style problems?Body: Cognitive psychology is researched since the 1940s. The idea was to understand human problem solving and the importance of heuristics in it. George Katona (an early psychologist) published in the 1940s a paper about human learning and teaching. He mentioned the so-called Katona-Problem, which is a geometric task.
+Squares
+Katona-style problems are the ones where you remove straws from a given configuration of straws so that n unit squares remain; in the final state, every straw is an edge of a unit square. Some variations also allow 2x2 or 3x3 squares, as long as no two squares overlap, i.e. a bigger 2x2 square can't contain a smaller 1x1 square. Some problems use matchsticks as a variation, some use straws, others use lines. Some variations allow bigger squares to contain smaller ones, as long as they don't share an edge, viz. https://puzzling.stackexchange.com/questions/59316/matchstick-squares
+
+- Is there a way we can view it as a graph and removing straws/matchsticks as deleting edges between nodes in a graph?
+
+- If so, can I train a bot where I can plugin some random, yet valid conditions for the game and goal state to get the required solution?
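+
+To make the first bullet concrete, the kind of representation I have in mind is roughly this (only a fragment of the grid is shown):
+
+    # junctions are nodes; each straw is an undirected edge between two junctions
+    straws = {frozenset(e) for e in [('P', 'Q'), ('Q', 'R'), ('P', 'E'),
+                                     ('Q', 'A'), ('E', 'A')]}   # ...and so on for the full grid
+
+    def remove_straws(straws, to_remove):
+        # removing matchsticks = deleting edges; a move removes 0-4 edges at one junction
+        return straws - {frozenset(e) for e in to_remove}
+
+    def degree(straws, node):
+        # handy for spotting dangling straws: a node of degree 1 ends a straw
+        # that cannot be the edge of any remaining square
+        return sum(1 for e in straws if node in e)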
+
+
+Edit #1: The following problem is just a sample to show what I am getting at. The requirement for my game is much larger. Also, I chose uninformed search to make things simpler, without bothering about complex heuristics and optimization techniques. Please feel free to explore ideas with me.
+Scenario #1:
+Consider this scenario. In the following diagram, each dashed line or pipe character represents a straw. Numbers and letters denote junctions where straws meet. Let's say my bot can explore each junction and remove zero, one, two, three or four straws, such that the resultant state has
+
+- no straw that dangles off by being not connected to a square.
+- a small mxm square isn't contained in a larger nxn square (m<n)
+- Once straw is removed, it can't be put back.
+
+The initial configuration is shown here. I always need to start from the top-left corner node P, and the objective is to remove straws in a minimum number of hops from node to node, using the minimum number of moves, by the time the goal state is reached.
+ P------Q------R------S------T
+ | | | | |
+ | | | | |
+ E------A------B------F------G
+ | | | | |
+ | | | | |
+ J------C------D------H------I
+ | | | | |
+ | | | | |
+ K------L------M------N------O
+ | | | | |
+ | | | | |
+ U------V------W------X------Y
+
+Goal 1 : I wish to create a large 2x2 square.
+At some point during, say, BFS (although it could be any uninformed search on a partially observable universe, i.e. viewing one node at a time), I could technically reach A and blow out all edges on A to create the following.
+ P------Q------R------S------T
+ | | | |
+ | | | |
+ E A B------F------G
+ | | | |
+ | | | |
+ J------C------D------H------I
+ | | | | |
+ | | | | |
+ K------L------M------N------O
+ | | | | |
+ | | | | |
+ U------V------W------X------Y
+
+That is one move.
+Goal 2 : I want to create a 3x3 square instead.
+I can't do that in one move. I need a record of the successive nodes to be explored, and then possibly to backtrack to a given point as well if the state fails to produce the desired result. Each intermediate state might produce rectangles, which are not allowed (also, how would one know how many more straws to remove, and which ones, to get to a square), or dangle a straw, or, worse, get stuck in an infinite loop, as I can choose to not remove any straw. How do I approach this problem?
+Edit #2:
+For validation, figures 3, 4 and 5 are given below.
+ P------Q------R------S------T
+ | | | |
+ | | | |
+ E A B------F G
+ | | | |
+ | | | |
+ J------C------D------H I
+ | | | | |
+ | | | | |
+ K------L------M------N O
+ | | | | |
+ | | | | |
+ U------V------W------X Y
+
+The above figure (3) is invalid, as we can't have dangling sticks TG, GI, etc.
+ P------Q------R------S------T
+ | | |
+ | | |
+ E------A G
+ | |
+ | |
+ J I
+ | |
+ | |
+ K O
+ | |
+ | |
+ U------V------W------X------Y
+
+The above figure (4) is invalid, as we can't have overlapping squares.
+ P------Q------R S T
+ | |
+ | |
+ E A B------F------G
+ | | | |
+ | | | |
+ J------C------D------H------I
+ | | | | |
+ | | | | |
+ K------L------M------N------O
+ | | | | |
+ | | | | |
+ U------V------W------X------Y
+
+Figure (5) is a valid configuration.
+"
+"['natural-language-processing', 'text-summarization']"," Title: How does the ""Lorem Ipsum"" generator work?Body: I've seen many Lorem Ipsum generators on the web, but not only, there is also ""bacon ispum"", ""space ispum"", etc. So, how do these generators generate the text? Are they powered by an AI?
+"
+"['tensorflow', 'computer-vision']"," Title: What exactly does ""channel"" refer to in tensorflow/lucid?Body: Tensorflow/Lucid is able to visualize what a ""channel"" of a layer of a neural network (image recognition, Inception-v1) responds to. Even after studying the tutorial, the source code, the three research papers on lucid and comments by the authors on Hacker News, I'm still not clear on how ""channels"" are supposed to be defined and individuated. Can somebody shed some light on this? Thank you.
+
+https://github.com/tensorflow/lucid
+https://news.ycombinator.com/item?id=15649456
+"
+"['reinforcement-learning', 'proximal-policy-optimization', 'discrete-action-spaces', 'action-spaces']"," Title: How to implement a variable action space in Proximal Policy Optimization?Body: I'm coding a Proximal Policy Optimization (PPO) agent with the Tensorforce library (which is built on top of TensorFlow).
+The first environment was very simple. Now, I'm diving into a more complex environment, where all the actions are not available at each step.
+Let's say there are 5 actions and their availability depends on an internal state (which is defined by the previous action and/or the new state/observation space):
+
+- 2 actions (0 and 1) are always available
+- 2 actions (2 and 3) are only available when the internal state is 0
+- 1 action (4) is only available when the internal state is 1
+
+Hence, there are 4 actions available when the internal state is 0 and 3 actions available when the internal state is 1.
+I'm thinking of a few possibilities to implement that:
+
+- Change the action space at each step, depending on the internal state. I assume this is nonsense.
+
+- Do nothing: let the model understand that choosing an unavailable action has no impact.
+
+- Do almost nothing: impact slightly negatively the reward when the model chooses an unavailable action.
+
+- Help the model: by incorporating an integer into the state/observation space that informs the model what's the internal state value + bullet point 2 or 3
+
+
+Are there other ways to implement this? From your experience, which one would be the best?
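+
+For what it's worth, the most concrete version of the first option I can picture is masking the logits of unavailable actions before sampling, roughly like this (just a sketch; I don't know whether Tensorforce exposes a clean way to do it):
+
+    import numpy as np
+
+    def masked_action_probs(logits, internal_state):
+        # logits: length-5 array from the policy network
+        # availability mask for the 5 actions, derived from the internal state
+        mask = np.array([1, 1, 1, 1, 0]) if internal_state == 0 else np.array([1, 1, 0, 0, 1])
+        masked = np.where(mask == 1, logits, -1e9)   # unavailable actions get ~zero probability
+        exp = np.exp(masked - masked.max())
+        return exp / exp.sum()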
+"
+"['neural-networks', 'machine-learning', 'training', 'datasets']"," Title: Use cross-validation to train after model selectionBody: I have been recently reading about model selection algorithms (for example to decide which value of the regularisation parameter or what size of a neural network to use, broadly hyper-parameters). This is done by dividing the examples into three sets (training 60%, cross-validation 20%, test 20%) and training is done on the data with the first set for all parameters, and then choose the best parameter based on the result in the cross-validation and finally estimate the performance using the test set.
+
+I understand the need for a data set different from the training and test sets for selecting the model; however, once the model is selected, why not use the cross-validation examples to improve the hypothesis before estimating the performance?
+
+The only reason I could see is that this could cause the hypothesis to worsen and we wouldn't be able to detect it, but, is it really possible that by adding much more examples (60% -> 80%) the hypothesis gets worse?
+"
+"['comparison', 'uncertainty-quantification', 'bayesian-probability', 'dempster-shafer-theory']"," Title: How does the Dempster-Shafer theory differ from Bayesian reasoning?Body: How does the Dempster-Shafer theory differ from Bayesian reasoning? How do these two methods handle uncertainty and compute posterior distributions?
+"
+"['machine-learning', 'game-ai', 'applications', 'reference-request']"," Title: How can I create an artificially intelligent aimbot for a game like CS:GO?Body: How can I create an artificially intelligent aimbot for a game like Counter-Strike Global Offensive (CS:GO)?
+
+I have an initial solution (or approach) in mind. We can train an image recognition model that will recognize the head of the enemy (in the visible area of the player, so excluding the invisible area behind the player, to avoid being easily detected by VAC) and move the cursor to the position of the enemy's head and fire.
+
+It would be much more preferable to train the recognition model in real-time than using demos. Most of the available demos you might have might be 32 tick, but while playing the game, it works at 64 tick.
+
+It is a very fresh idea in my mind, so I didn't actually think a lot about it. Ignoring facts like detection by VAC for a few moments.
+
+Is there any research work on the topic? What are the common machine learning approaches to tackle such a problem?
+
+Later on, this idea can be expanded to a completely autonomous bot that can play the game by itself, but that is a bit too much initially.
+"
+"['reinforcement-learning', 'ai-design', 'markov-decision-process', 'state-spaces', 'state-representations']"," Title: How to define states in reinforcement learning?Body: I am studying reinforcement learning and the variants of it. I am starting to get an understanding of how the algorithms work and how they apply to an MDP.
+What I don't understand is the process of defining the states of the MDP. In most examples and tutorials, they represent something simple like a square in a grid or similar.
+For more complex problems, like a robot learning to walk, etc.,
+
+- How do you go about defining those states?
+- Can you use learning or classification algorithms to "learn" those states?
+
+"
+"['definitions', 'intelligent-agent', 'chess', 'rationality', 'simple-reflex-agents']"," Title: What is the definition of rationality?Body: I'm having a little trouble with the definition of rationality, which goes something like:
+
+
+ An agent is rational if it maximizes its performance measure given its current knowledge.
+
+
+I've read that a simple reflex agent will not act rationally in a lot of environments. For example, a simple reflex agent can't act rationally when driving a car, as it needs previous perceptions to make correct decisions.
+
+However, if it does its best with the information it's got, wouldn't that be rational behaviour, as the definition contains ""given its current knowledge""? Or is it more like: ""given the knowledge it could have had at this point if it had stored all the knowledge it has ever received""?
+
+Another question about the definition of rationality: Is a chess engine rational as it picks the best move given the time it's allowed to use, or is it not rational as it doesn't actually (always) find the best solution (would need more time to do so)?
+"
+"['neural-networks', 'computer-vision', 'prediction', 'object-recognition']"," Title: Can one use an Artificial Neural Network to determine the size of an object in a photograph?Body: My question relates to but doesn't duplicate a question that has been asked here.
+
+I've Googled a lot for an answer to the question: Can you find the dimensions of an object in a photo if you don't know the distance between the lens and the object, and there are no ""scales"" in the image?
+
+The overwhelming answer to this has been ""no"". This is, from my understanding, due to the fact that, in order to solve this problem with this equation,
+
+$$Distance\ to\ object(mm) = \frac{f(mm) * real\ height(mm) * image\ height(pixels)}{object\ height(pixels) * sensor\ height(mm)} $$
+
+you will need to know either the ""real height"" or the ""distance to object"". It's the age old issue of ""two unknowns, one equation"". That's unsolvable. A way around this is to place an object in the photo with a known dimension in the same plane as the unknown object, find the distance to this object and use that distance to calculate the size of the unknown (this relates to answer from the question I linked above). This is an equivalent of putting a ruler in the photo and it's a fine way to solve this problem easily.
+
+This is where my question remains unanswered. What if there is no ruler? What if you want to find a way to solve the unsolvable problem? Can we train an Artificial Neural Network to approximate the value of the real height without the value of the object distance or use of a scale? Is there a way to leverage the unexpected solutions we can get from AI to solve a problem that is seemingly unsolvable?
+
+Here is an example to solidify the nature of my question:
+
+I would like to make an application where someone can pull out their phone, take a photo of a hail stone against the ground at a distance of ~1-3 ft, and have the application give them the hail stone dimensions. My project leader wants to make the application accessible, which means he doesn't want to force users to carry around a quarter or a special object of known dimensions to use as a scale.
+
+In order to avoid the use of a scale, would it be possible to use all of the EXIF meta-data from these photos to train a neural network to approximate the size of the hail stone within a reasonable error tolerance? For some reason, I have it in my head that if there are enough relevant variables, we can design an ANN that can pick out some pattern to this problem that we humans are just unable to identify. Does anyone know if this is possible? If so, is there a deep learning model that can best suit this problem? If not, please put me out of my misery and tell me why it's impossible.
+"
+"['deep-learning', 'objective-functions', 'activation-functions']"," Title: Should the input to the negative log likelihood loss function be probabilities?Body: I am trying to train a supervised model where the output from the model is output of a linear function $WX + b$. Kindly note that I'm not using any softmax or $\log$ softmax on the result of the linear. I am using negative log-likelihood loss function, which takes the input as the linear output from the model and the true labels. I am getting decent accuracy by doing this, but I have read that the input to negative log-likelihood function must be probabilities. Am I doing something wrong?
+"
+"['deep-learning', 'categorical-data']"," Title: How to model categorical variables / enums?Body: I am new to the field and I am trying to understand how is possible to use categorical variables / enums?
+
+Let's say we have a data set and 2 of its features are home_team and away_team; the possible values of these 2 features are all the NBA teams.
+How can we ""normalize"" these features to be able to use them to create a deep network model (e.g. with tensorflow)?
+
+Any references to read about relevant modeling techniques are also very appreciated.
+"
+"['neural-networks', 'machine-learning', 'long-short-term-memory']"," Title: Need Help With LSTM Neural NetworksBody: I have been researching LSTM neural networks. I have seen this diagram a lot and I have few questions about it. Firstly, is this diagram used for most LSTM neural networks?
+
+Secondly, if it is, wouldn't only having single layers reduce its usefulness?
+"
+"['machine-learning', 'deep-learning', 'long-short-term-memory', 'architecture']"," Title: How do the current input and the output of the previous time step get combined in an LSTM?Body: I am currently looking into LSTMs. I found this nice blog post, which is already very helpful, but still, there are things I don't understand, mostly because of the collapsed layers.
+
+
+- The input $X_t$, and the output of the previous time step $H_{t-1}$, how do they get combined? Multiplied, added or what?
+- The input weights and the weights of the input of the previous time step, those are just the weights of the connections between the time-steps/units, right?
+
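+For reference, my current guess (which may well be wrong) is that $H_{t-1}$ and $X_t$ are concatenated and passed through a weight matrix, e.g. for the forget gate:
+$$f_t = \sigma\left(W_f \cdot [H_{t-1}, X_t] + b_f\right)$$
+Is that the right picture?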
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks', 'graphs', 'geometric-deep-learning']"," Title: What benefits can be got by applying Graph Convolutional Neural Network instead of ordinary CNN?Body: What benefits can we got by applying Graph Convolutional Neural Network instead of ordinary CNN? I mean if we can solve a problem by CNN, what is the reason should we convert to Graph Convolutional Neural Network to solve it? Are there any examples i.e. papers can show by replacing ordinary CNN with Graph Convolutional Neural Network, an accuracy increasement or a quality improvement or a performance gain is achieved? Can anyone introduce some examples as image classification, image recognition especially in medical imaging, bioinfomatics or biomedical areas?
+"
+"['machine-learning', 'reinforcement-learning', 'q-learning']"," Title: Can Q-learning be used to find the shortest distance from each source to destination?Body: Is it possible to form a table that will have simply the shortest distance from each source to destination using q learning?
+
+If not, suggest any other learning algorithm.
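+
+To make it concrete, this is the kind of tabular set-up I have in mind (a toy sketch; the graph, rewards and hyperparameters are made up):
+
+import numpy as np
+
+# Toy graph as an adjacency list; node 4 is the destination.
+graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: []}
+goal, n = 4, 5
+Q = np.zeros((n, n))
+alpha, gamma = 0.5, 1.0
+
+rng = np.random.default_rng(0)
+for _ in range(2000):
+    s = int(rng.integers(0, n - 1))        # random start node (never the goal)
+    while s != goal:
+        a = int(rng.choice(graph[s]))      # move to a random neighbour
+        r = -1.0                           # every move costs 1
+        best_next = 0.0 if a == goal else Q[a, graph[a]].max()
+        Q[s, a] += alpha * (r + gamma * best_next - Q[s, a])
+        s = a
+
+# The negated best Q value over valid moves approximates the shortest distance to the goal.
+print(-Q[0, graph[0]].max())   # expect 3 for this toy graph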
+"
+"['comparison', 'terminology', 'intelligent-agent']"," Title: What are the differences between an agent that thinks rationally and an agent that acts rationally?Body: Stuart Russell and Peter Norvig pointed out 4 four possible goals to pursue in artificial intelligence: systems that think/act humanly/rationally.
+
+What are the differences between an agent that thinks rationally and an agent that acts rationally?
+"
+"['neural-networks', 'pattern-recognition']"," Title: How can I detect datetime patterns in text?Body: I want to explore and experiment the ways in which I could use a neural network to identify patterns in text.
+
+examples:
+
+
+- Prices of XYZ stock went down at 11:00 am today
+- Retrieve a list of items exchanged on 03/04/2018
+- Show error logs between 3 - 5 am yesterday.
+- Reserve a flight for 3rd October.
+- Do I have any meetings this Friday?
+- Remind to me wake up early tue, 4th sept
+
+
+This is for a project, so I am not using regular expressions. Papers, projects and ideas are all welcome, but I want to approach this as feature extraction/pattern detection, so that I end up with a trained model which can identify patterns that it has already seen.
+"
+"['neural-networks', 'deep-learning', 'deep-neural-networks', 'topology', 'hopfield-network']"," Title: Can layers of deep neural networks be seen as Hopfield networks?Body: Hopfield networks are able to store a vector and retrieve it starting from a noisy version of it. They do so setting weights in order to minimize the energy function when all neurons are set equal to the vector values, and retrieve the vector using the noisy version of it as input and allowing the net to settle to an energy minimum.
+
+Leaving aside problems like the fact that there is no guarantee that the net will settle in the nearest minimum etc – problems eventually solved with Boltzmann machines and eventually with back-propagation – the breakthrough was they are a starting point for having abstract representations. Two versions of the same document would recall the same state, they would be represented, in the network, by the same state.
+
+As Hopfield himself wrote in his 1982 paper Neural networks and physical systems with emergent collective computational abilities
+
+
+ The present modeling might then be related to how an entity or Gestalt is remembered or categorized on the basis of inputs representing a collection of its features.
+
+
+On the other side, the breakthrough of deep learning was the ability to build multiple, hierarchical representations of the input, eventually leading to making AI-practitioners' life easier, simplifying feature engineering. (see e.g. Representation Learning: A Review and New Perspectives, Bengio, Courville, Vincent).
+
+From a conceptual point of view, I believe one can see deep learning as a generalization of Hopfield nets: from one single representation to a hierarchy of representation.
+
+Is that true from a computational/topological point of view as well? Not considering how ""simple"" Hopfield networks were (2-state neurons, undirected, energy function), can one see each layer of a network as a Hopfield network and the whole process as a sequential extraction of previously memorized Gestalt, and a reorganization of these Gestalt?
+"
+"['machine-learning', 'deep-learning', 'tensorflow', 'keras', 'data-science']"," Title: Automatic prediction of whether a customer will come into the shop or notBody: So as my university project I am planning to make a prediction system as described in the title. My current idea is to use the age/gender classifier and run it on a video(taken in front of a shop) which outputs a csv file of the age/gender/Customer ID. In addition, I will use the existing data of the shop of who came in/who didn't come into the shop but passed by the shop and by running XGBoost on this csv data I can predict which customer will come into the shop or not.
+
+Do you think this idea is possible? Is there any other way to implement this idea? It would also be great if we could implement this in such a way that the deep learning model learns the various features of those who do or do not come into the shop.
+"
+"['neural-networks', 'machine-learning', 'recurrent-neural-networks']"," Title: Is it possible to use an RNN to predict a feature that is not an input feature?Body: I came across RNN's a few minutes ago, which might solve a problem with sequenced data I've had for a while now.
+
+Let's say I have a set of input features, generated every second. Corresponding with these input features, is an output feature (also available every second).
+One set of input features does not carry enough data to correlate with the output feature, but a sequence of them most definitely does.
+
+I read that RNNs can have node connections along sequences of inputs, which is exactly what I need, but almost all implementations/explanations show prediction of the next word or number in a text sentence or in a sequence of numbers.
+
+They predict what would be the next input value, the one that completes the sequence. However, in my case, the output feature will only be available during training. During inference, it will only have the input features available.
+
+Is it possible to use an RNN in this case? Can it also predict features that are not part of the input features?
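+
+To make it concrete, the kind of model I am imagining is a many-to-one recurrent model, roughly like this Keras sketch (the shapes, sizes and data here are placeholders, not my real features):
+
+import numpy as np
+from tensorflow.keras import layers, models
+
+# Dummy data: 1000 sequences of 30 time steps with 8 input features each,
+# and one real-valued target feature per sequence (known only at training time).
+timesteps, n_inputs = 30, 8
+X = np.random.rand(1000, timesteps, n_inputs)
+y = np.random.rand(1000)
+
+model = models.Sequential([
+    layers.LSTM(32, input_shape=(timesteps, n_inputs)),
+    layers.Dense(1)        # predicts the separate output feature, not the next input
+])
+model.compile(optimizer='adam', loss='mse')
+model.fit(X, y, epochs=2, verbose=0)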
+
+Thanks in advance!
+"
+"['datasets', 'structured-data', 'categorical-data']"," Title: Is a binary attribute type the same as binomial attribute type?Body: I am not sure if I can use the words binomial and binary and boolean as synonyms to describe a data attribute of a data set which has two values (yes or no). Are there any differences in the meaning on a deeper level?
+
+Moreover, if I have an attribute with three possible values (yes, no, unknown), this would be an attribute of type polynominal. What further names are also available for this type of attribute? Are they termed as ""symbolic""?
+
+I am interested in the relation between the following attribute types: binary, boolean, binominal, polynominal (and alternative descriptions) and nominal.
+"
+"['machine-learning', 'sentiment-analysis']"," Title: Integration of Sentiment analysis in CRMBody: What is the process for integrating sentiment analysis in a CRM? What I am searching for is a system which analyzes the customer comments or reviews using the CRM and finds out the customer sentiment on the services provided by the system or company or a product.
+
+I have built a sentiment analyzer which takes text and shows the sentiment of the text. Now I want to integrate the above-mentioned sentiment analyzer into a CRM; how can I do that?
+"
+"['machine-learning', 'data-science', 'academia', 'education']"," Title: Is it necessary to know the details behind the AI algorithms and models?Body: I am interested in the field of artificial intelligence. I began by learning the various machine learning algorithms. The maths behind some were quite hard. For example, back-propagation in convolutional neural networks.
+
+Then when getting to the implementation part, I learnt about TensorFlow, Keras, PyTorch, etc. If these provide much faster and more robust results, will there be a necessity to code a neural network (say) from scratch using the knowledge of the maths behind back-prop, activation functions, dimensions of layers, etc., or is the role of a data scientist only to tune the hyper-parameters?
+
+Further, as of now the field of AI does not seem to have any way to solve for these hyperparameters, and they are arrived at through trial and error. Which begs the question, can a person with just basic intuition about what the algorithms do be able to make a model just as good as a person who knows the detailed mathematics of these algorithms?
+"
+"['reinforcement-learning', 'papers', 'objective-functions', 'proximal-policy-optimization']"," Title: Why does the clipped surrogate objective work in Proximal Policy Optimization?Body: In Proximal Policy Optimization Algorithms (2017), Schulman et al. write
+
+With this scheme, we only ignore the change in probability ratio when it would make the objective improve, and we include it when it makes the objective worse.
+
+I don't understand why the clipped surrogate objective works. How can it work if it doesn't take into account the objective improvements?
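+
+For reference, the clipped surrogate objective defined in the paper is
+$$L^{CLIP}(\theta) = \hat{\mathbb{E}}_t\left[\min\left(r_t(\theta)\hat{A}_t,\; \text{clip}\left(r_t(\theta), 1-\epsilon, 1+\epsilon\right)\hat{A}_t\right)\right]$$
+where $r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}$ is the probability ratio and $\hat{A}_t$ is the advantage estimate.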
+"
+"['reinforcement-learning', 'tensorflow', 'python', 'open-ai']"," Title: How many episodes does it take for a vanilla one-step actor-critic agent to master the OpenAI BipedalWalker-v2 problem?Body: I'm trying to solve the OpenAI BipedalWalker-v2 by using a one-step actor-critic agent. I'm implementing the solution using python and tensorflow.
+I'm following this pseudo-code taken from the book Reinforcement Learning An Introduction by Richard S. Sutton and Andrew G. Barto.
+
+In summary, my question can be reduced to the following:
+
+- Is it a good idea to implement a one-step actor-critic algorithm to solve the OpenAI BipedalWalker-v2 problem? If not what would be a good approach? If yes; how long would it take to converge?
+- I ran the algorithm for 20000 episodes; each episode has an average of 400 steps, and at each step I immediately update the weights. The results are not better than random. I have tried different standard deviations (for the normal distribution that represents pi), different NN sizes for the critic and actor, and different learning rates for the optimizer. The results never improve. I don't know what I'm doing wrong.
+
+My Agent Class
+import tensorflow as tf
+import numpy as np
+import gym
+import matplotlib.pyplot as plt
+
+class agent_episodic_continuous_action():
+ def __init__(self, lr,gamma,sample_variance, s_size,a_size,dist_type):
+ ... #agent parameters
+
+ def save_model(self,path,sess):
+ def load_model(self,path,sess):
+ def weights_init_actor(self,hidd_layer,mean,stddev): #to have control over the weights initialization
+ def weights_init_critic(self,hidd_layer,mean,stddev): #to have control over the weights initialization
+ def create_actor_brain(self,hidd_layer,hidd_act_fn,output_act_fn,mean,stddev): #actor is represented by a fully connected NN
+ def create_critic_brain(self,hidd_layer,hidd_act_fn,output_act_fn,mean,stddev): #critic is represented by a fully connected NN
+ def critic(self):
+ def get_delta(self,sess):
+ def normal_dist_prob(self): #Actor pi distribution is a normal distribution whose mean comes from the NN
+ def create_actor_loss(self):
+ def create_critic_loss(self):
+ def sample_action(self,sess,state): #Sample actions from the normal dist. Whose mean was aprox. By the NN
+ def calculate_actor_loss_gradient(self):
+ def calculate_critic_loss_gradient(self):
+ def update_actor_weights(self):
+ def update_critic_weights(self):
+ def update_I(self):
+ def reset_I(self):
+ def update_time_step_info(self,s,a,r,s1,d):
+ def create_graph_connections(self):
+ def bound_actions(self,sess,state,lower_limit,uper_limit):
+
+Agent instantiation
+tf.reset_default_graph()
+agent = agent_episodic_continuous_action(lr=1e-3, gamma=0.99, sample_variance=0.02, s_size=24, a_size=4, dist_type="normal")  # keyword names match the __init__ signature above
+agent.create_actor_brain(hidd_layer=[12,5], hidd_act_fn="relu", output_act_fn="linear", mean=0.0, stddev=0.14)
+agent.create_critic_brain(hidd_layer=[12,5], hidd_act_fn="relu", output_act_fn="linear", mean=0.0, stddev=0.14)
+agent.create_graph_connections()
+
+path = "/home/diego/Desktop/Study/RL/projects/models/biped/model.ckt"
+env = gym.make('BipedalWalker-v2')
+uper_action_limit = env.action_space.high
+lower_action_limit = env.action_space.low
+total_returns=[]
+
+Training loops
+with tf.Session() as sess:
+ try:
+ sess.run(agent.init)
+ sess.graph.finalize()
+ #agent.load_model(path,sess)
+ for i in range(1000):
+ agent.reset_I()
+ s = env.reset()
+ d = False
+ while (not d):
+ a=agent.bound_actions(sess,s,lower_action_limit,uper_action_limit)
+ s1,r,d,_ = env.step(a)
+ #env.render()
+ agent.update_time_step_info([s],[a],[r],[s1],d)
+ agent.get_delta(sess)
+ sess.run([agent.update_critic_weights,agent.update_actor_weights],feed_dict={agent.state_in:agent.time_step_info['s']})
+ agent.update_I()
+ s = s1
+ agent.save_model(path,sess)
+ except Exception as e:
+ print(e)
+
+"
+"['terminology', 'definitions', 'history', 'academia', 'ai-field']"," Title: What is artificial intelligence?Body: What is the definition of artificial intelligence?
+"
+"['deep-learning', 'convolutional-neural-networks', 'computer-vision', 'reference-request', 'algorithm-request']"," Title: What are some good approaches that I can use to count the number of people in a crowd?Body: What are some good approaches that I can use to count the number of people in a crowd?
+Tracking each person individually is obviously not an option. Any good approaches or some references to research papers would be very helpful.
+"
+"['machine-learning', 'training', 'deep-neural-networks']"," Title: Is there a way of pre-determining whether a CNN model will perform better than another?Body: I developed a CNN for image analysis. I've around 100K labeled images. I'm getting a accuracy around 85% and a validation accuracy around 82%, so it looks like the model generalize better than fitting. So, I'm playing with different hyper-parameters: number of filters, number of layers, number of neurons in the dense layers, etc.
+
+For every test, I'm using all the training data, and it is very slow and time consuming.
+
+Is there a way to have an early idea about if a model will perform better than another?
+"
+"['reinforcement-learning', 'policy-gradients', 'policies', 'gradient', 'calculus']"," Title: Why is the derivative of this objective function 0 if the policy is deterministic?Body: In the Berkeley RL class CS294-112 Fa18 9/5/18, they mention the following gradient would be 0 if the policy is deterministic.
+$$
+\nabla_{\theta} J(\theta)=E_{\tau \sim \pi_{\theta}(\tau)}\left[\left(\sum_{t=1}^{T} \nabla_{\theta} \log \pi_{\theta}\left(\mathbf{a}_{t} \mid \mathbf{s}_{t}\right)\right)\left(\sum_{t=1}^{T} r\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right)\right)\right]
+$$
+Why is that?
+"
+"['agi', 'performance', 'intelligence-testing']"," Title: How can we compare the intelligence of AI systems?Body: One way of ranking human intelligence is based on IQ tests. But how can we compare the intelligence of AI systems?
+
+For example, is there a test that tells me that a spam filter system is more intelligent than a self-driving car, or can I say that a chess program is more intelligent than AlphaGo?
+"
+"['reference-request', 'computational-learning-theory', 'computational-complexity', 'ai-completeness', 'theory-of-computation']"," Title: What does ""hard for AI"" look like?Body: In theoretical computer science, there is a massive categorization of the difficulty of various computational problems in terms of their asymptotic worst-time computational complexity. There doesn't seem to be any analogous analysis of what problems are ""hard for AI"" or even ""impossible for AI."" This is in some sense quite reasonable, because most research is focused on what can be solved. I'm interested in the opposite. What I do need to prove about a problem to prove that it is ""not reasonably solvable"" by AI?
+
+Many papers say something along the lines of
+
+
+ AI allows us to find real-world solutions to real-world instances of NP-complete problems.
+
+
+Is there a theoretical, principled reason for saying this instead of ""... PSPACE-complete problems""? Is there some sense in which AI doesn't work on PSPACE-complete, or EXPTIME-complete, or Turing complete problems?
+
+My ideal answer would be a reference to a paper that shows AI cannot be used to solve a particular kind of problem based on theoretical or statistical reasoning. Any answer exhibiting and justifying a benchmark for ""too hard for AI"" would be fine, though (bonus points if the benchmark has a connection to complexity and computability theory).
+
+If this question doesn't have an answer in general, answers about specific techniques would also be interesting to me.
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'training']"," Title: Which layer in a CNN consumes more training time: convolution layers or fully connected layers?Body: In a convolutional neural network, which layer consumes more training time: convolution layers or fully connected layers?
+
+We can take the AlexNet architecture to understand this. I want to see the time breakdown of the training process. I want a relative time comparison, so we can assume any fixed GPU configuration.
+"
+"['convolutional-neural-networks', 'computer-vision', 'object-detection', 'bounding-box']"," Title: How to architect a network to find bounding boxes in simple images?Body: I have an application where I want to find the locations of objects on a simple, relatively constant background (fixed camera angle, etc). For investigative purposes, I've created a test dataset that displays many characteristics of the actual problem.
+Here's a sample from my test dataset.
+
+Our problem description is to find the bounding box of the single circle in the image. If there is more than one circle or no circles, we don't care about the bounding box (but we at least need to know that there is no valid single bounding box).
+For my attempt to solve this, I built a CNN that would regress (min_x, min_y, max_x, max_y), as well as one more value that could indicate how many circles were in the image.
+I played with different architecture variations, but, in general, the architecture was a very standard CNN (3-4 ReLU convolutional layers with max-pooling in between, followed by a dense layer and an output layer with linear activation for the bounding box outputs, set to minimise the mean squared error between the outputs and the ground truth bounding boxes).
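+
+For concreteness, here is a rough Keras sketch of the kind of architecture I tried (a simplified placeholder, not my exact code; the input size is arbitrary):
+
+from tensorflow.keras import layers, models
+
+model = models.Sequential([
+    layers.Conv2D(32, 3, activation='relu', input_shape=(128, 128, 1)),
+    layers.MaxPooling2D(),
+    layers.Conv2D(64, 3, activation='relu'),
+    layers.MaxPooling2D(),
+    layers.Conv2D(64, 3, activation='relu'),
+    layers.MaxPooling2D(),
+    layers.Flatten(),
+    layers.Dense(128, activation='relu'),
+    layers.Dense(5)   # (min_x, min_y, max_x, max_y) plus one value related to the circle count
+])
+model.compile(optimizer='adam', loss='mse')
+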
+Regardless of the architecture, hyperparameters, optimizers, etc, the result was always the same - the CNN could not even get close to building a model that was able to regress an accurate bounding box, even with over 50000 training examples to work with.
+What gives? Do I need to look at using another type of network as CNNs are more suited to classification rather than localisation tasks?
+Obviously, there are computer vision techniques that could solve this easily, but due to the fact that the actual application is more involved, I want to know strictly about NN/AI approaches to this problem.
+"
+"['search', 'implementation', 'depth-first-search']"," Title: Does depth-first search always stop when it has found the leftmost solution?Body: I'm a fresh learner of AI. I was told that depth-first search is not an optimal searching algorithm since ""it finds the 'leftmost' solution, regardless of depth or cost"". Therefore, does it mean that in practice, when we implement DFS, we should always have a checker to stop the search when it finds the first solution (also the leftmost one)?
+"
+"['philosophy', 'social', 'superintelligence', 'singularity', 'mythology-of-ai']"," Title: Is the singularity something to be taken seriously?Body: The term Singularity is often used in mainstream media for describing visionary technology. It was introduced by Ray Kurzweil in a popular book The Singularity Is Near: When Humans Transcend Biology (2005).
+
+In his book, Kurzweil gives an outlook to a potential future of mankind which includes nanotechnology, computers, genetic modification and artificial intelligence. He argues that Moore's law will allow computers an exponential growth which results in a superintelligence.
+
+Is the technological singularity something that is taken seriously by A.I. developers or is this theory just a load of popular hype?
+"
+"['game-ai', 'monte-carlo-tree-search']"," Title: Which nodes are expanded in the expansion phase of MCTS?Body: I'm confused regarding a specific detail of MCTS.
+
+To illustrate my question, let's take the simple example of tic-tac-toe.
+After the selection phase, when a leaf node is reached, the tree is expanded in the so-called expansion phase. Let's say a particular leaf node has 6 children. Would the expansion phase expand all the children and run the simulation on them? Or would the expansion phase only pick a single child at random and run simulation, and only expand the other children if the selection policy arrives at them at some later point?
+
+Alternatively, if both of these are accepted variants, what are the pros/cons of each one?
+"
+"['computer-vision', 'autonomous-vehicles']"," Title: What are applications of object/human tracking in autonomous cars?Body: Objects tracking is finding the trajectory of each object in consecutive frames. Human tracking is a subset of object tracking which just considers humans.
+
+I've seen many papers that divide tracking methods into two parts:
+
+
+- Online tracking: Tracker just uses current and previous frames.
+- Offline tracking: Tracker uses all frames.
+
+
+All of them mention that online tracking is suitable for autonomous driving and robotics, but I don't understand this part. What are the applications of object/human tracking in autonomous driving?
+
+Do you know some related papers?
+"
+"['natural-language-processing', 'python', 'computational-linguistics']"," Title: How to parse conjunctions in natural language processing?Body: Is there an accepted way in NLP to parse conjunctions (and/or) in a sentence?
+
+By following the example below, how would I parse
+
+
+ I drink orange juice if it's the weekend or if it's late and I'm tired.
+
+
+into
+
+
+ it's the weekend
+
+
+and
+
+
+ it's late
+
+
+and
+
+
+ I'm tired
+
+
+?
+
+Implying an action will be taken when one of the above elements at the 1st level of depth is true.
+
+I know when I hear the sentence that it means ""its the weekend"" OR (""it's late"" AND ""I'm tired""), but how could this be determined computationally?
+
+Can an existing python/other library do this?
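+
+For example, would a dependency parse already expose this structure? A quick sketch of what I could inspect with spaCy (assuming the small English model is installed):
+
+import spacy
+
+nlp = spacy.load('en_core_web_sm')
+doc = nlp("I drink orange juice if it's the weekend or if it's late and I'm tired.")
+
+# The 'cc' (coordinating conjunction) and 'conj' relations mark the or/and structure.
+for token in doc:
+    print(token.text, token.dep_, '<-', token.head.text)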
+"
+"['game-ai', 'monte-carlo-tree-search', 'algorithm-request', 'combinatorial-games', 'branching-factors']"," Title: Which algorithms can we use on games with high branching factors (e.g. Connect6)?Body: Connect6 is an example of a game with a very high branching factor. It is about 45 thousand, dwarfing even the impressive Go.
+Which algorithms can we use on games with such high branching factors?
+I tried MCTS (soft rollouts, counting a ply as placing one stone), but it does not even block the opponent, due to the high branching factor.
+In the case of Connect6, there are stronger AIs out there, but they aren't described in any research papers that I know of.
+"
+"['image-recognition', 'sentiment-analysis']"," Title: VIsual/musical/multimedia discourse (analysis) - are there such notions?Body: Formal semantics of natural language perceives sentences as logical expressions. Full paragraphs and even stories of natural language texts are researched and formalized using discourse analysis (Discourse Representation Theory is one example). My question is - is there research trend that applied the notion ""discourse"" to images, sounds and even animation? Is there such a notion as ""visual discourse""?
+
+Google gives only a few, older research papers, so maybe the field exists but uses different terms, and Google cannot relate those terms to my keyword ""visual discourse"".
+
+Basically, there are visual grammars and other pattern matching methods that can discover objects in a picture and relate them. But one should be able to read a whole story from a picture (or a musical piece, or multimedia content), and I imagine that such reading could be studied by multimedia discourse analysis. But there is no work under such terms. How is this actually done, and what is it called?
+"
+"['machine-learning', 'generative-adversarial-networks']"," Title: How to use a Generative Adversarial Network to generate images for developmental analysis?Body: I want to generate images of childrens' drawings consistent with the developmental state of children of a given age. The training data set will include drawings made by real children in a school setting. The generated images will be used for developmental analysis.
+
+I have heard that Generative Adversarial Networks are a good tool for this kind of problem. If this is true, how would I go about applying a GAN to this challenge?
+"
+"['reinforcement-learning', 'dqn', 'deep-rl', 'experience-replay']"," Title: Is Experience Replay like dreaming?Body: Drawing parallels between Machine Learning techniques and a human brain is a dangerous operation. When it is done successfully, it can be a powerful tool for vulgarisation, but when it is done with no precaution, it can lead to major misunderstandings.
+I was recently attending a conference where the speaker described Experience Replay in RL as a way of making the net "dream".
+I'm wondering how true this assertion is. The speaker argued that a dream is a random addition of memories, just as experience replay is. However, I doubt the brain remembers its dreams or learns from them. What is your analysis?
+"
+"['reinforcement-learning', 'policy-gradients', 'proofs']"," Title: Why is baseline conditional on state at some timestep unbiased?Body: In the homework for the Berkeley RL class, problem 1, it asks you to show that the policy gradient is still unbiased if the baseline subtracted is a function of the state at time step $t$.
+
+$$ \triangledown _\theta \sum_{t=1}^T \mathbb{E}_{(s_t,a_t) \sim p(s_t,a_t)} [b(s_t)] = 0 $$
+
+I am struggling through what the first step of such a proof might be.
+
+Can someone point me in the right direction? My initial thought was to somehow use the law of total expectation to make the expectation of $b(s_t)$ conditional on $T$, but I am not sure.
+"
+"['neural-networks', 'artificial-neuron', 'neurons', 'brain', 'neuromorphic-engineering']"," Title: Is there research that employs realistic models of neurons?Body: Is there research that employs realistic models of neurons? Usually, the model of a neuron for a neural network is quite simple as opposed to the realistic neuron, which involves hundreds of proteins and millions of molecules (or even greater numbers). Is there research that draws implications from this reality and tries to design realistic models of neurons?
+
+Particularly, recently, the Rosehip neuron was discovered. Such neurons have been found only in human brain cells (and in no other species). Are there some implications for neural network design and operation that can be drawn by realistically modelling this Rosehip neuron?
+"
+"['deep-learning', 'image-recognition', 'natural-language-processing']"," Title: How to build a commercial image captioning system?Body: Image Captioning is a hot research topic in the AI community. There are considerable image captioning models for research usage such as NIC, Neural Talk 2 etc. But can these research models be used for commercial purpose? Or we should build much more complex structured ones for commercial usage? Or if we can make some improvements based these models to meet the business applications situation? If so, what improvements should we take? Are there any existing commercial Image Captioning applications can be referenced?
+"
+['neural-networks']," Title: How to connect AI neural network processor to laptop?Body: I have an average laptop.
+
+How can I connect specialized AI neural network processors (say, Nvidia or Intel Nervana, https://venturebeat.com/2018/05/23/intel-unveils-nervana-neural-net-l-1000-for-accelerated-ai-training/) to the laptop?
+Should I buy an external motherboard or even a server unit with NN processors inside, or is there a more lightweight solution available, like an external HDD?
+"
+"['optimization', 'constraint-satisfaction-problems', 'planning', 'linear-programming']"," Title: What AI technique should I use to assign a person to a task?Body: I'm trying to learn AI and thinking to apply it to our system. We have an application for the translation industry. What we are doing now is the coordinator $C$ assigns a file to a translator $T$. The coordinator usually considers these criteria (but not limited to):
+
+- the deadline of the file and availability of a translator
+- the language pair that the translator can translate
+- is the translator already reached his target? (maybe we can give the file to other translators to reach their target)
+- the difficulty level of the file for the translator (basic translation, medical field, IT field)
+- accuracy of translator
+- speed of translator
+
+Given the following, is it possible to make a recommendation to the coordinator, to whom she can assign a particular file?
+What are the methods/topics that I need to research?
+(I'm considering javascript as the primary tool, and maybe python if javascript will be more of a hindrance in implementation.)
+In addition to suggesting a translator, we are also looking into suggesting the "deadline of the translator". Basically, we have "deadline of the customer" and "deadline of the translator"
+The reason for this is that, if the translators are occupied throughout the day, it makes sense to suggest the file to a busy translator but allow him to finish it by the next day.
+"
+"['neural-networks', 'activation-functions', 'regression', 'network-design']"," Title: How would I go about creating a neural network that outputs a non-binary number?Body: I would like to create a neural network, which, given the training data (e.g. 58, 2) outputs a non-binary number (e.g 100). Perhaps I am not searching for the correct thing, but all the examples I have found have shown classifiers using a sigmoid function (range of 1 to 0). I am looking for something that would output nonbinary numbers.
+"
+"['neural-networks', 'monte-carlo-tree-search', 'alphago', 'alphazero', 'alphago-zero']"," Title: Would AlphaGo Zero become perfect with enough training time?Body: Would AlphaGo Zero become theoretically perfect with enough training time? If not, what would be the limiting factor?
+
+(By perfect, I mean it always wins the game if possible, even against another perfect opponent.)
+"
+"['deep-learning', 'convolutional-neural-networks']"," Title: Clarification regarding ""Image Crowd Counting Using Convolutional Neural Network and Markov Random Field""Body: I am currently reading the research paper Image Crowd Counting Using Convolutional Neural Network and Markov Random Field by Kang Han, Wanggen Wan, Haiyan Yao, and Li Hou.
+I did not understand the following context properly:
+
+
+ We employ the residual network, which is trained on ImageNet dataset for image classification task, to extract the deep features to represent the density of the crowd. This pre-trained CNN network created a residual item for every three convolution layer to bring the layer of the network to 152. We resize the image patches to the size of 224 × 224 as the input of the model and extract the output of the fc1000 layer to get the 1000 dimensional features. The features are then used to train 5 layers fully connected neural network. The network's input is 1000-dimensional, and the number of neurons in the network is given by 100-100-50-50-1. The network's output is the local crowd count.
+
+
+Can anyone explain the above part in detail?
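+
+For what it's worth, my current reading of that pipeline as a rough Keras sketch (the hidden-layer activations and the optimizer are my own assumptions, the paper does not state them):
+
+from tensorflow.keras.applications import ResNet50
+from tensorflow.keras.applications.resnet50 import preprocess_input
+from tensorflow.keras import layers, models
+
+# Pre-trained ResNet50 with its classification head; the final output is 1000-dimensional.
+backbone = ResNet50(weights='imagenet')
+
+def extract_features(patches_224):
+    # patches_224: array of shape (n, 224, 224, 3), the resized image patches
+    return backbone.predict(preprocess_input(patches_224))
+
+# 5-layer fully connected regressor with 100-100-50-50-1 neurons; output = local crowd count.
+regressor = models.Sequential([
+    layers.Dense(100, activation='relu', input_shape=(1000,)),
+    layers.Dense(100, activation='relu'),
+    layers.Dense(50, activation='relu'),
+    layers.Dense(50, activation='relu'),
+    layers.Dense(1),
+])
+regressor.compile(optimizer='adam', loss='mse')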
+"
+"['reinforcement-learning', 'deep-rl', 'hyper-parameters', 'learning-rate', 'exploration-strategies']"," Title: Should I be decaying the learning rate and the exploration rate in the same manner?Body: Should I be decaying the learning rate and the exploration rate in the same manner? What's too slow and too fast of an exploration and learning rate decay? Or is it specific from model to model?
+"
+"['game-ai', 'alpha-beta-pruning', 'symbolic-ai', 'combinatorial-games']"," Title: Historical weakness of GOFAI in relation to partisan combinatorial games?Body: I was recently perusing the paper Some Studies in Machine Learning Using the Game of Checkers II--Recent Progress (A.L. Samuel, 1967), which is interesting historically.
+
+I was looking at this figure, which involved Alpha-Beta pruning.
+
+
+
+It occurred to me that the types of non-trivial, non-chance, perfect information, zero-sum, sequential, partisan games utilized (Chess, Checkers, Go) involve game states that cannot be precisely quantified. For instance, there is no way to ascribe an objective value to a piece in Chess, or any given board state. In some sense, the assignment of values is arbitrary, consisting of estimates.
+
+The combinatorial games I'm working on are forms of partisan Sudoku, which are bidding/scoring (economic) games involving territory control. In these models, any given board state produces an array of ratios allowing precise quantification of player status. Token values and positions can be precisely quantified.
+
+This project involves a consumer product, and the approach we're taking currently is to utilize a series of agents of increasing sophistication to provide different levels of challenge for human players. These agents also reflect what is known as a ""strategy ladder"".
+
+- Reflex Agents (beginner)
+- Model-based Reflex Agents (intermediate)
+- Model-based Utility Agents (advanced)
+
+Goals may also be incorporated into these agents, such as a desired margin of victory (regional outcome ratios), which will likely have an effect on performance, in that narrower margins of victory appear to entail less risk.
+
+The ""respectably weak"" vs. human performance of the first generation of reflex agents suggests that strong GOFAI might be possible. (The branching factors are extreme in the early and mid-game due to the factorial nature of the models, but initial calculations suggest that even a naive minimax lookahead will be able to look farther more effectively than humans.) Alpha-Beta pruning in partisan Sudoku, even sans a learning algorithm, should provide greater utility than in previous combinatorial game models where the values are estimates.
+
+
+- Is the historical weakness of GOFAI in relation to non-trivial combinatorial games partly a function of the structure of the games studied, where game states and token values cannot be precisely quantified?
+
+
+Looking for any papers that might comment on this subject, research into combinatorial games where precise quantification is possible, and thoughts in general.
+
+I'm trying to determine if it might be worth attempting to develop a strong GOFAI for these models prior to moving up the ladder to learning algorithms, and, if such a result would have research value.
+
+There would definitely be commercial value in that strong GOFAI with no long-term memory would allow minimal local file size for the apps, which must run on lowest-common-denominator smartphones with no assumption of connectivity.
+
+PS- My previous work on this has involved defining the core heuristics that emerge from the structure of the models, and I'm slowly dipping my toes into the look ahead pool. Please don't hesitate to let me know if I've made any incorrect assumptions.
+"
+"['philosophy', 'human-like', 'artificial-consciousness']"," Title: Does AI rely on determinism?Body: I don’t believe in free will, but most people do. Although I’m not sure how an act of free will could even be described (let alone replicated), is libertarian freewill something that is considered for AI? Or is AI understood to be deterministic?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'reinforcement-learning', 'game-ai']"," Title: Should Q values be changing within an epoch/episode or should they change after one episode/epoch?Body: I am trying to use Deep-Q learning environment to learn Super Mario Bros. The implementation is on Github.
+
+I have a neural network whose Q values I update within an episode with a very small learning rate (0.00005). However, even if I increase the learning rate to 0.00025, the Q values do not change within an episode: the network predicts the same Q values regardless of what state it is in. For example, if Mario moves right, the Q value is the same. When I start a new episode, the Q values do change, though.
+
+I think that the Q values should be changing within an episode as the game should be seeing different parts and taking different actions. Why don't I observe this?
+"
+['reinforcement-learning']," Title: Snake game: snake converges to going in the same direction every timeBody: This is a q-learning snake using a neural network as a q function aproximator and I'm losing my mind here the current model it's worst than the initial one.
+
+The current model uses a 32x32x32 MLPRegressor from scikit-learn using relu as activation function and the adam solver.
+
+The reward function is like following:
+
+
+- death reward = -100.0
+- alive reward = -10.0
+- apple reward = 100.0
+
+
+The features extracted from each state are the following:
+
+
+- what is in front of the snake's head(apple, empty, snake)
+- what is in the left of the snake's head
+- what is in the right of the snake's head
+- euclidian distance between head and apple
+- the direction from head to the apple measured in radians
+- length of the snake
+
+
+One episode consists of the snake playing until it dies. In training I also use a probability epsilon that represents the probability that the snake will take a random action; if this isn't satisfied, the snake will take the action for which the neural network gives the biggest score. This epsilon probability gradually decreases after each iteration.
+
+The episode is learned by the regressor in reverse order, one state-action pair at a time.
+
+However, the neural network fails to approximate the Q-function; no matter how many iterations, the snake takes the same action for every state.
+
+Things I tried:
+
+
+- changing the structure of the neural network
+- changing the reward function
+- changing the features extracted, I even tried passing the whole map to the network
+
+
+Code (python): https://pastebin.com/57qLbjQZ
+"
+"['machine-learning', 'reinforcement-learning', 'alphago', 'alphazero', 'alphago-zero']"," Title: What part of the game is the value network trained to predict a winner on?Body: The Alpha Zero (as well as AlphaGo Zero) papers say they trained the value head of the network by ""minimizing the error between the predicted winner and the game winner"" throughout its many self-play games. As far as I could tell, further information was not given.
+
+To my understanding, this is basically a supervised learning problem, where, from the self-play, we have games associated with their winners, and the network is being trained to map game states to the likelihood of winning. My understanding leads me to the following question:
+
+What part of the game is the network trained to predict a winner on?
+
+Obviously, after only five moves, the winner is not yet clear, and trying to predict a winner after five moves based on the game's eventual winner would learn a meaningless function. As a game progresses, it goes from tied in the initial position to won at the end.
+
+How is the network trained to understand that if all it is told is who eventually won?
+"
+"['pattern-recognition', 'topology', 'generative-model', 'survival']"," Title: How can AI be used to more reliably analyze and plan around the tie between climate and emissions?Body: Note to the Duplicate Police
+
+This question is not a duplicate of the Q&A thread referenced in the close request. The only text even remotely related in that other thread is the brief mention of climate change in the Q and two sentences in the sole answer: ""Identify deforestation and the rate at which it's happening using computer vision and help in fighting back based on how critical the rate is. The World Resources Institute had entered into a partnership with Orbital Insight on this.""
+
+If you look at the four bullet items below, you will find that this question asks a very specific thing about the relationship between climate and emissions. Neither that question nor that answer overlaps with the content of this question in any meaningful way. For instance, it is well known that CO2 is NOT causing deforestation. The additional carbon dioxide in the atmosphere causes faster regrowth. This is because plants need CO2 to grow. Hydroponic containers deliberately boost it to improve growth rates. Plants manufacture their own oxygen from the CO2 via chlorophyll.
+
+If you recall from fifth grade biology, that's why they are plants.
+
+
+
+Now Back to the Question
+
+Several climate models have been proposed and used to model the relationship between human carbon emissions, added to the natural carbon emissions of life forms on earth, and features of climate that could damage the biosphere.
+
+Population growth and industrialization have many impacts on the biosphere, including loss of terrain and pollution. Negative oceanic effects, including unpredictable changes in plankton and cyanobacteria are under study. Carbon emissions from combustion has received attention in recent decades just as sulfur emissions were central to concerns a century or more ago.
+
+Predicting weather and climate is certainly difficult because it is complex and chaotic, as typical inaccuracies in forecasts clearly demonstrate, but that is looking forward. Looking backward, analyses of data already collected have shown a high probability that ocean and surface temperature rises followed increases in industrial and transportation related combustion of fuels.
+
+How might AI be used to produce some of the key models humans need to protect the biosphere from severe damage?
+
+
+- A more reliable analysis of what has already occurred, since there is some legitimacy to the differing views as to how gross the effect of carbon emissions has been on extinctions of species in the biosphere and on arctic and antarctic melting
+- A better understanding as to whether the climate of the biosphere behaves as a buffer of climate, always tending to re-balance after a volcanic eruption, meteor stroke, or other event, or whether the runaway scenario described by some climatologist, where there is a point of no return, is realistic
+- A better model to use in trying out scenarios so that solutions can be applied in the order that makes sense from both environmental and economic perspectives
+- Automation of climate planning so that the harmful effects of the irresponsibility of one geopolitical entity wishing to industrialize without constraint on other geopolitical entities can be mitigated
+
+
+Can pattern recognition, feature extraction, the learned functionality of deep networks, or generative techniques be used to accomplish these things? Can rules of climate be learned? Are there discrete or graph based tools that should be used?
+"
+"['deep-learning', 'convolutional-neural-networks', 'signal-processing']"," Title: Can I reduce the ""number of weights"" in CNN to 1/3 by restricting the input as greyscale image?Body: In a CNN, does each new filter have different weights for each input channel, or are the same weights of each filter used across input channels?
+
+This question helps me a lot.
+
+Let's say I have an RGB input image (3 channels). Then each filter has n×n weights for each channel, which means the filter actually has 3×n×n weights in total.
+
+For channel R, it has its own n×n filter.
+
+For channel G, it has its own n×n filter.
+
+For channel B, it has its own n×n filter.
+
+After the inner products, we add them all to make one feature map.
+Am I right?
+
+And then, my question starts here.
+For some purpose, I will only use greyscale images as input.
+So the input images always have the same values for each RGB channel.
+
+Then, can I reduce the number of weights in the filters?
+Because in this case, using three different n×n filters and adding them is the same as using one n×n filter that is the sum of the three filters.
+
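+A tiny numpy check of that claim (my own toy example):
+
+import numpy as np
+
+# If R == G == B, a 3-channel convolution with kernels (w_r, w_g, w_b) gives the
+# same response as a 1-channel convolution with the single kernel w_r + w_g + w_b.
+rng = np.random.default_rng(0)
+w = rng.normal(size=(3, 5, 5))      # one 5x5 kernel per input channel
+gray = rng.normal(size=(5, 5))      # a 5x5 greyscale patch
+x = np.stack([gray, gray, gray])    # the same patch copied into R, G, B
+
+response_rgb = np.sum(w * x)                   # 3-channel inner product
+response_gray = np.sum(w.sum(axis=0) * gray)   # collapsed single-channel kernel
+print(np.allclose(response_rgb, response_gray))  # True
+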
+Does this logic hold on a trained network?
+I have a trained network for RGB image input, but it is too heavy to run in real time.
+But I only use greyscale images as input, so it seems I can make the network less heavy (theoretically, almost 1/3 of the original).
+
+I'm quite new in this field, so detailed explanations will be really appreciated.
+Thank you.
+"
+['convolutional-neural-networks']," Title: Doubt regarding research paper on Crowd Counting using Convolutional neural networks and Markov Random FieldBody: I am currently reading the research paper Image Crowd Counting Using Convolutional Neural Network and Markov Random Field by Kang Han, Wanggen Wan, Haiyan Yao, and Li Hou.
+I did not understand the following context properly:
+
+
+ Formally, the Markov random field framework for the crowd counting can be defined as follows (we follow the notation in [18]). Let $P$ be the set of patches in an image and $C$ be a possible set of counts. A counting $c$ assigns a count $c_p \in C$ to each patch $p \in P$. The quality of a counting is given by an energy function:
+
+ $$E(c) = \sum_{p \in P} D_p(c_p) + \sum_{(p,q) \in N} V(c_p - c_q) \qquad (2)$$
+
+ where $N$ are the (undirected) edges in the four-connected image patch graph. $D_p(c_p)$ is the cost of assigning count $c_p$ to patch $p$, and is referred to as the data cost. $V(c_p - c_q)$ measures the cost of assigning counts $c_p$ and $c_q$ to two neighboring patches, and is normally referred to as the discontinuity cost.
+ For the problem of smoothing the adjacent patches count, $D_p(c_p)$ and $V(c_p - c_q)$ can take the form of the following functions:
+
+ $$D_p(c_p) = \lambda \min((I(p) - c_p)^2, \mathrm{DATA\_K}) \qquad (3)$$
+
+ $$V(c_p - c_q) = \min((c_p - c_q)^2, \mathrm{DISC\_K}) \qquad (4)$$
+
+ where $\lambda$ is a weight of the energy items, $I(p)$ is the ground truth count of the patch $p$, and $\mathrm{DATA\_K}$ and $\mathrm{DISC\_K}$ are the truncating items of $D_p(c_p)$ and $V(c_p - c_q)$, respectively.
+
+
+
+
+Can anyone explain the above part in detail and give me some insight into how I should implement this part of the project?
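+
+For what it's worth, here is my own toy attempt at just evaluating the energy in equation (2) on a small grid of per-patch counts (the values of $\lambda$, DATA_K and DISC_K below are made up, this is not code from the paper):
+
+import numpy as np
+
+def mrf_energy(counts, gt_counts, lam=1.0, data_k=100.0, disc_k=100.0):
+    counts = np.asarray(counts, dtype=float)
+    gt_counts = np.asarray(gt_counts, dtype=float)
+    # Data cost D_p(c_p), eq. (3)
+    data = lam * np.minimum((gt_counts - counts) ** 2, data_k)
+    # Discontinuity cost V(c_p - c_q), eq. (4), over the 4-connected grid edges
+    dh = np.minimum((counts[:, 1:] - counts[:, :-1]) ** 2, disc_k)
+    dv = np.minimum((counts[1:, :] - counts[:-1, :]) ** 2, disc_k)
+    return data.sum() + dh.sum() + dv.sum()
+
+print(mrf_energy([[3, 4], [5, 20]], [[3, 4], [5, 6]]))
+
+My understanding is that implementing this part would then mean searching for the counting $c$ that minimizes this energy, but I am not sure.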
+"
+"['deep-learning', 'artificial-neuron', 'neurons', 'biology']"," Title: How does the degree of neuronal realism affect computing in a deep learning scenario?Body: Neurons can be simulated using different models that vary in the degree of biophysical realism. When designing an artificial neuronal network, I am interested in the consequences of choosing a degree of neuronal realism.
+
+In terms of computational performance, the FLOPS vary from integrate-and-fire to the Hodgkin–Huxley model (Izhikevich, 2004). However, properties, such as refraction, also vary with the choice of neuron.
+
+
+- When selecting a neuronal model, what are consequences for the ANN other than performance? For example, would there be trade-offs in
+terms of stability/plasticity?
+- Izhikevich investigated the performance question in 2004. What are
+the current benchmarks (other measures, new models)?
+- How does selecting a neuron have consequences for scalability in terms of hardware for a deep learning network?
+- When is the McCulloch-Pitts neuron inappropriate?
+
+
+
+
+References
+
+Izhikevich, E. M. (2004). Which model to use for cortical spiking neurons? IEEE Transactions on Neural Networks, 15(5). https://www.izhikevich.org/publications/whichmod.pdf
+"
+['reinforcement-learning']," Title: What is the physics engine used by DeepMimic?Body: I found a video for the paper DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills
+ on YouTube.
+
+I looked in the related paper, but could not find details of how the environment was created, such as the physics engine it used. I would like to use it, or something similar.
+"
+"['machine-learning', 'classification', 'monte-carlo-tree-search', 'decision-theory']"," Title: Should I use Monte Carlo or a classifier for this Decision Making problem?Body: I want to build a model to support decision making for loan insurance proposal.
+
+There are three actors in the problem: a bank, a loan applicant (someone who asks for a loan) and a counselor. The counselor studies the loan application and, if it has a good profile, he will propose to the applicant loans from banks that fit his profile. Then the application is sent to the bank, but the bank could refuse the applicant (based on criteria we don't know).
+
+The counselor also has to decide whether or not he will propose loan insurance to the loan applicant.
+
+The risk is that some banks reject loan applicants who accept loan insurance, while other banks accept more applicants with loan insurance. But there aren't fixed rules for the banks, since some banks accept or reject applicants with loan insurance according to the type of acquisition the applicants want to make with their loan, for example.
+
+Thus, the profile of the applicant can matter in their rejection from banks but all criteria influencing the decision are quite uncertain.
+
+I've researched online and found several scholarly articles on using Monte Carlo for decision making. Should I use Monte Carlo or a simple classifier for this Decision Making problem ?
+
+I saw that Monte Carlo (possibly Monte Carlo Tree Search) can be used in decision making and that it is good when there is uncertainty. But it seems that it would forecast by producing some strategy (after running a lot of simulations), whereas what I want is an outcome based on both the profile of the loan applicant and the bank, knowing that the criteria banks use to accept applicants could change every six months. And I would have to model the banks, which seems quite difficult.
+
+A classifier doesn't really seem to me to fit the problem. I am not really sure. Actually, I don't see how a classifier like a decision tree, for example, would work here, because I have to predict the decision of the counselor to propose or not, based on the decisions of banks (and I don't know their criteria) to refuse or accept applicants who were offered loan insurance and accepted it.
+
+The data I have consists of former applicants' profiles that were sent to banks, whether they were accepted or not by the bank, whether they wanted loan insurance or not, and the type of acquisition they wanted to make with their loan.
+
+I am new to Decision Making. Thank you!
+"
+"['neural-networks', 'game-ai', 'python', 'genetic-algorithms']"," Title: Can genetic algorithms be used to learn to play multiple games of the same type?Body: Is it possible for a genetic algorithm + Neural Network that is used to learn to play one game such as a platform game able to be applied to another different game of the same genre.
+
+So for example, could an AI that learns to play Mario also learn to play another similar platform game.
+
+Also, it would be great if anyone could point me in the direction of material I should familiarise myself with in order to complete my project.
+"
+['reinforcement-learning']," Title: How can a reinforcement learning agent generalize if it is trained against only one opponent?Body: I started teaching myself about reinforcement learning a week ago and I have this confusion about the learning experience. Let's say we have the game Go. And we have an agent that we want to be able to play the game and win against anyone. But let's say this agent learn from playing against one opponent, my questions then are:
+
+
+- Wouldn't the agent (after learning) be able to play only with that opponent and win? It estimated the value function of this specific behaviour only.
+- Would it be able to play as good with weaker players?
+- How do you develop an agent that can estimate a value function that generalizes against any behaviour and win? Self-play? If yes, how does that work?
+
+"
+"['classification', 'ai-security']"," Title: How to design a classifier while the patterns of positive data are changing rapidly?Body: In some situation, like risk detection and spam detection. The pattern of Good User is stable, while the patterns of Attackers are changing rapidly. How can I make a model for that? Or which classifier/method should I use?
+"
+"['neural-networks', 'incremental-learning', 'online-learning']"," Title: Are there dynamic neural networks?Body: Are there neural networks that can decide to add/delete neurons (or change the neuron models/activation functions or change the assigned meaning for neurons), links or even complete layers during execution time?
+
+I guess that such neural networks overcome the usual separation of learning/inference phases and they continuously live their lives in which learning and self-improving occurs alongside performing inference and actual decision making for which these neural networks were built. Effectively, it could be a neural network that acts as a Gödel machine.
+
+I have found the term dynamic neural network but it is connected to adding some delay functions and nothing more.
+
+Of course, such self-improving networks completely redefine the learning strategy, possibly, single shot gradient methods can not be applicable to them.
+
+My question is connected to the neural-symbolic integration, e.g. Neural-Symbolic Cognitive Reasoning by Artur S. D'Avila Garcez, 2009. Usually this approach assigns individual neurons to the variables (or groups of neurons to the formula/rule) in the set of formulas in some knowledge base. Of course, if knowledge base expands (e.g. from sensor readings or from inner nonmonotonic inference) then new variables should be added and hence the neural network should be expanded (or contracted) as well.
+"
+"['machine-learning', 'feature-selection', 'decision-trees']"," Title: How can I minimize the number of answers that are relevant to a machine learning model?Body: Problem:
+
+We have a fairly big database that is built up by our own users.
+The way this data is entered is by asking the users 30ish questions that all have around 12 answers (x, a, A, B, C, ..., H). The letters stand for values that we can later interpret.
+
+I have already tried and implemented some very basic predictors, like random forest, a small NN, a simple decision tree etc.
+
+But all these models use the full dataset to do one final prediction. (fairly well already).
+
+What I want to create is a system that will eliminate 7 to 10 of the possible answers a user can give at any question. This will reduce the amount of data we need to collect, store, or use to re-train future models.
+
+I have already found several methods to decide which are the most discriminative variables in the full dataset. Except, when a user starts filling in the questions, I start to get lost on what to do. None of the models I have calculates the next question given some previous information.
+
+It feels like I should use a Naive Bayes Classifier, but I'm not sure. Other approaches include recalculating the Gini or entropy value at every step. But as far as my knowledge goes, we can't take into account the answers given before the recalculating.
+"
+"['neural-networks', 'natural-language-processing', 'chat-bots']"," Title: Can I develop a chatbot to carry on a natural conversation with a human using NLP and neural networks?Body: I would like to develop a chatbot that is able to pass the Turing test, i.e. a chatbot that is able to carry on a natural conversation with a human.
+
+Can natural language processing (NLP) be used to do that? What if I combine NLP with neural networks?
+"
+"['neural-networks', 'reinforcement-learning', 'ai-design', 'alphazero', 'alphago-zero']"," Title: Why does the policy network in AlphaZero work?Body: In AlphaZero, the policy network (or head of the network) maps game states to a distribution of the likelihood of taking each action. This distribution covers all possible actions from that state.
+
+How is such a network possible? The possible actions from each state are vastly different than subsequent states. So, how would each possible action from a given state be represented in the network's output, and what about the network design would stop the network from considering an illegal action?
+"
+['neat']," Title: Can a crossover result in a node with no outgoing connections?Body: I'm currently implementing the original NEAT algorithm in Swift.
+
+Looking at figure 4 in Stanley's original paper, it seems to me there is a chance that node 5 will have no (enabled) outgoing connection if parent 1 is assumed the fittest parent and the connection is randomly picked from parent 2.
+
+Is my understanding of the crossover function correct and can it indeed result in a node with no outgoing connections?
+"
+"['neural-networks', 'machine-learning', 'generative-adversarial-networks', 'papers', 'generative-model']"," Title: Why do we use $D(x \mid y)$ and not $D(x,y)$ in conditional generative adversarial networks?Body: In conditional generative adversarial networks (GAN), the objective function (of a two-player minimax game) would be
+
+$$\min _{G} \max _{D} V(D, G)=\mathbb{E}_{\boldsymbol{x} \sim p_{\text {data }}(\boldsymbol{x})}[\log D(\boldsymbol{x} | \boldsymbol{y})]+\mathbb{E}_{\boldsymbol{z} \sim p_{\boldsymbol{z}}(\boldsymbol{z})}[\log (1-D(G(\boldsymbol{z} | \boldsymbol{y})))]$$
+
+The discriminator and generator both take $y$, the auxiliary information.
+
+I am confused as to what the difference would be between using $\log D(x,y)$ and $\log(1-D(G(z,y)))$, given that $y$ goes as input to both $D$ and $G$ in addition to $x$ and $z$.
+"
+['neural-networks']," Title: How to find a cost function for human dataBody: Currently, I am interested in how NNs or any other AI models can be used for composing music.
+
+But there are many other interesting applications too, like language processing.
+
+I am wondering: NNs generally need a cost function for learning. But, for example, for composing music, what would be an appropriate cost function? I mean, algorithms can't (yet) really 'calculate' how good music is, right?
+"
+"['neural-networks', 'research', 'hardware', 'architecture', 'long-short-term-memory']"," Title: Are artificial networks based on the perceptron design inherently limiting?Body: At the time when the basic building blocks of machine learning (the perceptron layer and the convolution kernel) were invented, the model of the neuron in the brain taught at the university level was simplistic.
+
+
+ Back when neurons were still just simple computers that electrically beeped untold bits to each other over cold axon wires, spikes were not seen as the hierarchical synthesis of every activity in the cell down to the molecular scale that we might say they are today. In other words, spikes were just a summary report of inputs to be integrated with the current state, and passed on. In comprehending the intimate relationships of mitochondria to spikes (and other molecular dignitaries like calcium) we might now more broadly interpret them as synced messages that a neuron sends to itself, and by implication its spatially extended inhabitants. Synapses weigh this information heavily but ultimately, but like the electoral college, fold in a heavy dose of local administration to their output. The sizes and positions within the cell to which mitochondria are deployed can not be idealized or anthropomorphized to be those metrics that the neuron decides are best for itself, but rather what is thermodynamically demanded.1
+
+
+Notice the reference to summing in the first bolded phrase above. This is the astronomically oversimplified model of biology upon which contemporary machine learning was built. Of course, ML has made progress and produced results. This question does not dismiss or criticize that, but rather aims to widen the view of what ML can become via a wider field of thought.
+
+Notice the second two bolded phrases, both of which denote statefulness in the neurons. We see this in ML first as the parameters that attenuate the signals between arrays of artificial neurons in perceptrons and then, with back-propagation into deeper networks. We see this again as the trend in ML pushes toward embedded statefulness by integrating with object oriented models, the success of LSTM designs, the interrelationships of GAN designs, and the newer experimental attention based network strategies.
+
+But does the achievement of higher level thought in machines, such as is needed to ...
+
+
+- Fly a passenger jet safely under varying conditions,
+- Drive a car in the city,
+- Understand complex verbal instructions,
+- Study and learn a topic,
+- Provide thoughtful (not mechanical) responses, or
+- Write a program to a given specification
+
+
+... require from us a much more radical transition in thinking about what an artificial neuron should do?
+
+Scientific research into brain structure, its complex chemistry, and the organelles inside brain neurons has revealed significant complexity. Performing a vector-matrix multiplication to apply learning parameters to the attenuation of signals between layers of activations is not nearly a simulation of a neuron. Artificial neurons are not very neuron-like, and the distinction is extreme.
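+
+For concreteness, the entire forward computation of one such layer of artificial neurons is $\mathbf{y} = \varphi(W\mathbf{x} + \mathbf{b})$, where $W$ and $\mathbf{b}$ are the learned parameters and $\varphi$ is a fixed scalar activation applied element-wise; nothing in that expression corresponds to the molecular and mitochondrial state described above.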
+
+A little study on the current state of the science of brain neuron structure and function reveals the likelihood that it would require a massive cluster of GPUs training for a month just to learn what a single neuron does.
+
+
+ Are artificial networks based on the perceptron design inherently limiting?
+
+
+References
+
+[1] Fast spiking axons take mitochondria for a ride,
+by John Hewitt, Medical Xpress, January 13, 2014,
+https://medicalxpress.com/news/2014-01-fast-spiking-axons-mitochondria.html
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks']"," Title: What is the dimensionality of the output map, given the dimensionality of the input map, number of filters, stride and padding?Body: I am trying to understand the dimensionality of the outputs of convolution operations. Suppose a convolutional layer with the following characteristics:
+
+
+- Input map $\textbf{x} \in R^{H\times W\times D}$
+- A set of $F$ filters, each of dimension $\textbf{f} \in R^{H'\times W'\times D}$
+- A stride of $<s_x, s_y>$ for the corresponding $x$ and $y$ dimensions of the input map
+- Either valid or same padding (explain for both if possible)
+
+
+What should be the expected dimensionality of the output map expressed in terms of $H, W, D, F, H', W', s_x, s_y$?
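+
+My current guess, which I would like confirmed or corrected, is that the output depth is simply $F$ and that the spatial dimensions follow the usual floor/ceiling rules:
+
+$$H_{out} = \left\lfloor \frac{H - H'}{s_y} \right\rfloor + 1, \quad W_{out} = \left\lfloor \frac{W - W'}{s_x} \right\rfloor + 1 \quad \text{(valid padding)}$$
+
+$$H_{out} = \left\lceil \frac{H}{s_y} \right\rceil, \quad W_{out} = \left\lceil \frac{W}{s_x} \right\rceil \quad \text{(same padding)}$$
+
+giving an output map of size $H_{out} \times W_{out} \times F$.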
+"
+"['neural-networks', 'machine-learning', 'convolutional-neural-networks', 'python', 'keras']"," Title: Convolutional Layers on a hexagonal grid in KerasBody: Keras' convolutional and deconvolutional layers are designed for square grids. Is there was a way to adapt them for use in hexagonal grids?
+
+For example, if we were using axial coordinates, the input of the kernel of radius 1 centered at (x,y) should be:
+
+[(x-1,y), (x-1,y+1), (x,y-1), (x,y+1), (x+1,y-1), (x+1, y)]
+
+One option is to fudge it with a 3 by 3 box, but then you are using cells at different distances.
+
+Some ideas:
+
+
+- Modify Keras' convolutional layer code to use those inputs instead of the default inputs. The problem is that Keras calls its backend instead of implementing it itself, which means we need to modify the backend too.
+- Use a 3 by 3 box, but set the weights at (x-1,y-1) and (x+1,y+1) to zero. Unfortunately, I do not know how to permanently set weights to a given value in Keras (a sketch of what I have tried is at the end of this question).
+- Use cube coordinates instead of axial coordinates. In this case, a 3 by 3 by 3 box will only contain the central hex's neighbors, with the remaining inputs set to 0. The problem is that it makes the input array much bigger. Even more problematic, some coordinates that correspond to non-hexes (such as (1,0,0)) will be assigned non-zero outputs (since (0,0,0) falls within its 3 by 3 by 3 box).
+
+
+Are there any better solutions?
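+
+To make idea 2 concrete, this is the kind of thing I have been experimenting with: a custom kernel constraint that multiplies every 3 by 3 kernel by a fixed 0/1 mask after each weight update, so the two corner taps stay at zero. I am not sure this is the cleanest way to do it.
+
+import numpy as np
+from keras import backend as K
+from keras.constraints import Constraint
+from keras.layers import Conv2D
+
+class HexMask(Constraint):
+    # zero out the (0,0) and (2,2) corners of every 3x3 kernel after each update
+    def __init__(self):
+        mask = np.ones((3, 3), dtype='float32')
+        mask[0, 0] = 0.0
+        mask[2, 2] = 0.0
+        self.mask = K.constant(mask.reshape(3, 3, 1, 1))
+
+    def __call__(self, w):
+        # w has shape (3, 3, in_channels, out_channels); the mask broadcasts over the channel axes
+        return w * self.mask
+
+hex_conv = Conv2D(32, (3, 3), padding='same', kernel_constraint=HexMask())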
+"
+"['machine-learning', 'learning-algorithms', 'unsupervised-learning']"," Title: What is the approach to deduce formal rules based on data?Body: We have data in text format as sentences.
+The goal is to detect rules which exist in this set of sentences.
+
+I have a limited set of contextless sentences that fit a pattern and want to find the pattern.
+I might not have sentences that don't fit the pattern.
+
+What would be a good approach to do that?
+"
+"['neural-networks', 'topology']"," Title: Neural networks of arbitrary/general topology?Body: Usually neural networks consist from layers, but is there research effort that tries to investigate more general topologies for connections among neurals, e.g. arbitrary directed acyclic graphs (DAGs).
+
+I guess there can be 3 answers to my question:
+
+
+- every imaginable DAG topology can be reduced to the layered DAGs already actively researched, so there is no sense in seeking more general topologies;
+- general topologies exist, but there are fundamental restrictions why they are not used, e.g. maybe learning does not converge in them, maybe they generate chaotic oscillations, maybe they generate bifurcations and do not provide stability;
+- general topologies exist and are promising, but scientists are not ready to work with them, e.g. maybe they have no motivation, and the standard layered topologies are good enough.
+
+
+But I have no idea which answer is the correct one. Reading the answer at https://stackoverflow.com/questions/46569998/calculating-neural-network-with-arbitrary-topology, I start to think that answer 1 is the correct one, but there is no reference provided.
+
+If answer 3 is correct, then a big revolution can be expected. E.g. layered topologies in many cases reduce learning to matrix multiplication, and good tools for this have been created - the TensorFlow software and dedicated processors. But there seems to be no software or tools for general topologies, if they do indeed make sense.
+"
+"['training', 'generative-model', 'speech-synthesis']"," Title: Adding voices to voice synthesis corpusesBody: If one uses one of the open source implementations of the WaveNet generative speech synthesis design, such as https://r9y9.github.io/wavenet_vocoder/, and trains using something like the CMU's arctic corpus, now can one add a voice that sounds younger, older, less professional, or in some other way distinctive. Must the entire training begin from scratch, or is there a more resource and time friendly way?
+"
+"['neural-networks', 'natural-language-processing']"," Title: Calculation of GPU memory consumption on softmax layer doesn't match with the empirical resultBody: I'm training a language model with 5000
vocabularies using a single M60 GPU
(w/ actually usable memory about 7.5G).
+
The number of tokens per batch is about 8000
, and the hidden dimension to the softmax layer is 512
. So, if I understand correctly, fully-connected (softmax) layer theoretically consumes 5000*8000*512*4=81.92GB
for a forward pass (4 is for float32).
+
But the GPU performed the forward and backward passes without any problem, and it says the GPU memory usage is less than 7GB
in total.
+
+I used PyTorch. What's causing this?
+
+EDIT: To be clearer, the input to the final fc layer (256x5000 matrix) is of size [256, 32, 256].
+"
+"['neural-networks', 'machine-learning', 'deep-learning']"," Title: Does balancing the training data set distribution for a neural network affect its understanding of the original distribution of data?Body: I have a very imbalanced dataset of two classes: 2% for the first class and 98% for the second. Such imbalance does not make training easy and so balancing the data set by undersampling class 2 seemed like a good idea.
+
+However, as I think about it, shouldn't the machine learning algorithm expect the same data distribution in nature as in its training set? I know, for sure, that the distribution of data in nature matches my imbalanced dataset. Does that mean that the balanced dataset will negatively affect the neural net's performance at test time, given that it assumed a different distribution of data because of my balanced training set?
+"
+"['algorithm', 'applications', 'hardware']"," Title: Which artificial intelligence algorithms could use tensor specific hardware?Body: AI algorithms involving neural networks can use tensor specific hardware. Are there any other artificial intelligence algorithms that could benefit from many tensor calculations in parallel? Are there any other computer science algorithms (not part of AI) that could benefit from many tensor calculations in parallel?
+
+Have also a look at TensorApplications and Application Theory.
+"
+"['neural-networks', 'intelligent-agent']"," Title: Neural network as (BDI) agent - running in continuous mode (that do inference in parallel with learning)?Body: Is there research work that uses neural network as the (BDI) agent (or even full-scale cognitive architecture like Soar, OpenCog) - that continuously receives information from the environment and act in an environment and modifies its base of belief in parallel? Usually NN are trained to do only one task and TensorFlow/PyTorch supports batch mode only out of the box. Also NN algorithms and theory are constructed assuming that training and inference phases are clearly separated and they have each own algorithms. So - completely new theory and software can be required for this - are there efforts in this direction? If no, then why not? It is so self-evident that such systems can be of benefit.
+
+https://arxiv.org/abs/1802.07569 is a good review of incremental learning and it contains chapters on implemented systems, but all of them still separate the learning phase from the inference phase. Symbolic systems and symbolic agents (like JSON AgentSpeak) can have an updating belief/knowledge base, and they can also act while receiving new information or while forming new beliefs. I am specifically seeking research about NNs which do learning and inference in parallel. As far as I have searched, this separation still persists in the self-organizing incremental NNs that are gaining some popularity.
+
+I can imagine the construction of chained NNs in TensorFlow - there is some controller network that receives input (possibly preprocessed by hierarchically lower networks) and that decides what to do: so-called mental actions are the output of this controller, and these actions determine whether some subordinate network is required to undergo additional learning or whether it can be temporarily used for the processing of some information. The central network itself, of course, can decide to move into a temporary learning phase from time to time to improve its reasoning capabilities. Such a pipeline of master-slave networks is indeed possible in TensorFlow, but TensorFlow will still have one central clock, not distributed, loosely coupled processing. I don't know, though, whether the existence of a central clock is any restriction on the generality of the capabilities of such a system. Well, maybe this hierarchy of networks can also be realized inside one large network - maybe such a large network can allow separate parts (subsets of neurons) to function in a somewhat independent and mutually controlling mode, and maybe such regions of a large neural network can indeed emerge. I am interested in this kind of research - are there any good papers available on this?
+"
+"['neural-networks', 'artificial-consciousness', 'superintelligence']"," Title: Can neural network take decision about its own weights (update of weights)?Body: Can neural network take decision about its own weights (update of weights) during training phase or during the phase of parallel training and inference? When one region of hierarchical NN takes decision about weights of other region is the special case of my question.
+
+I am very keen to understand the self-awareness, self-learning and self-improvement capabilities of neural networks, because exactly those self-* capabilities are the key path to artificial general intelligence (e.g. the Gödel machine). Neural networks are usually mentioned as examples of special, single-purpose intelligence, but I cannot see the reason for such a limitation if a NN essentially tries to mimic the human brain, at least in purpose if not in mechanics.
+
+Well, maybe this desired effect is already effectively achieved/emerges in the operation of recurrent ANNs as an effect of collective behavior?
+"
+"['search', 'heuristics', 'a-star']"," Title: What heuristic to use when doing A* search with multiple targets?Body: Usually, using the Manhattan distance as a heuristic function is enough when we do an A* search with one target. However, it seems like for multiple goals, this is not the most useful way. Which heuristic do we have to use when we have multiple targets?
+"
+['image-recognition']," Title: How can I use A.I/Image Processing to construct mathematical graphs from drawing?Body: In physics, there are a lot of graphs, such as 'velocity vs time' , 'time period vs length' and so on.
+
+Let's say I have a sample set of points for a 'velocity vs time' graph. I draw it by hand, rather haphazardly, on a canvas. This drawn graph on the canvas is then provided to the computer. By computer I mean AI.
+
+I want it to sort of beautify my drawn graph, such as straightening the lines, making the curves better, adding the digits on axes and so on. In other words, I want it to give me a better version of my drawn graph which I can readily use in, say, a word document for a report.
+
+a) Is it possible/plausible to do this?
+b) Are there any APIs available that can already do this? (Don't want to reinvent the wheel)
+c) Any recommendations/suggestions to make the idea possible by altering it somehow?
+"
+"['deep-learning', 'convolutional-neural-networks', 'keras']"," Title: CNN Pooling layers unhelpful when location important?Body: I'm trying to use a CNN to analyse statistical images. These images are not 'natural' images (cats, dogs, etc) but images generated by visualising a dataset. The idea is that these datasets hopefully contain patterns in them that can be used as part of a classification problem.
+
+
+
+Most CNN examples I've seen have one or more pooling layers, and the explanation I've seen for them is that they reduce the number of training elements, but also allow for some locational independence of an element (e.g. I know this is an eye, and it can appear anywhere in the image).
+
+In my case, location is important and I want my CNN to be aware of that, i.e. the presence of a pattern at a specific location in the image means something very specific compared to when that feature or pattern appears elsewhere.
+
+At the moment my network looks like this (taken from an example somewhere):
+
+_________________________________________________________________
+Layer (type) Output Shape Param #
+=================================================================
+conv2d_1 (Conv2D) (None, 196, 178, 32) 896
+_________________________________________________________________
+activation_1 (Activation) (None, 196, 178, 32) 0
+_________________________________________________________________
+max_pooling2d_1 (MaxPooling2 (None, 98, 89, 32) 0
+_________________________________________________________________
+conv2d_2 (Conv2D) (None, 96, 87, 32) 9248
+_________________________________________________________________
+activation_2 (Activation) (None, 96, 87, 32) 0
+_________________________________________________________________
+max_pooling2d_2 (MaxPooling2 (None, 48, 43, 32) 0
+_________________________________________________________________
+conv2d_3 (Conv2D) (None, 46, 41, 64) 18496
+_________________________________________________________________
+activation_3 (Activation) (None, 46, 41, 64) 0
+_________________________________________________________________
+max_pooling2d_3 (MaxPooling2 (None, 23, 20, 64) 0
+_________________________________________________________________
+flatten_1 (Flatten) (None, 29440) 0
+_________________________________________________________________
+dense_1 (Dense) (None, 32) 942112
+_________________________________________________________________
+activation_4 (Activation) (None, 32) 0
+_________________________________________________________________
+dropout_1 (Dropout) (None, 32) 0
+_________________________________________________________________
+dense_2 (Dense) (None, 3) 99
+_________________________________________________________________
+activation_5 (Activation) (None, 3) 0
+=================================================================
+Total params: 970,851
+Trainable params: 970,851
+Non-trainable params: 0
+_________________________________________________________________
+
+
+The 'images' I'm training on are 180 x 180 x 3 pixels and each channel contains a different set of raw data.
+
+What strategies are there to improve my CNN to deal with this? I have tried simply removing some of the pooling layers, but that greatly increased memory and training time and didn't seem to really help.
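+
+One idea I have been considering (I am not sure it is the right approach) is to append two coordinate channels to the input, along the lines of the CoordConv idea, so that the filters can condition on absolute position. A rough sketch of what I mean, applied to the raw data before it goes into the model:
+
+import numpy as np
+
+def add_coord_channels(batch):
+    # batch: (N, H, W, C) float array; returns (N, H, W, C + 2)
+    n, h, w, _ = batch.shape
+    ys = np.tile(np.linspace(-1.0, 1.0, h).reshape(1, h, 1, 1), (n, 1, w, 1))
+    xs = np.tile(np.linspace(-1.0, 1.0, w).reshape(1, 1, w, 1), (n, h, 1, 1))
+    return np.concatenate([batch, ys, xs], axis=-1)
+
+# the rest of the model would stay the same, only input_shape changes to (H, W, C + 2)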
+"
+"['deep-learning', 'philosophy', 'human-inspired', 'convergence']"," Title: What is chaotic behavior and how it is achieved in non-linear regression and artificial networks?Body: I'm finding it hard to understand the relationship between chaotic behavior, the human brain, and artificial networks. There are a number of explanations on the web, but it would be very helpful if I get a very simple explanation or any references providing such simplifications.
+"
+"['research', 'intelligent-agent', 'autonomous-vehicles', 'probability', 'robotics']"," Title: SEIF motion update algorithm doubtBody: I want to implement Sparse Extended information slam. There is four step to implement it. The algorithm is available in Probabilistic Robotics Book at page 310, Table 12.3.
+
+
+
+In this algorithm line no:13 is not very clear to me. I have 15 landmarks. So $\mu_t$ will be a vector of (48*1) dimension where (3*1) for pose. Now $H_t^i$ is a matrix whose columns are dynamic as per the algorithm it is (3j-3) and 3j. J is the values of landmarks 1 to 15. Now how could I multiply a dynamic quantity with a static one. There must be a error that matrix dimension mismatch when implement in matlab.
+
+Please help me to understand the algorithm better.
+"
+"['neural-networks', 'convolutional-neural-networks', 'papers', 'u-net']"," Title: How do we stack two U-Nets to yield one final prediction?Body: I am trying to reproduce the model described in the paper DocUNet: Document Image Unwarping via A Stacked U-Net, i.e. stacking two U-Nets to yield one final prediction. The paper mentions that:
+
+
+ The deconvolution features of the first U-Net and the intermediate prediction y1 are concatenated together as the input of the second U-Net.
+
+
+What does it mean by concatenating deconvolution features and the prediction (which is an array? cm)?
+
+The next paragraph says that:
+
+
+ The second U-Net finally gives a refined prediction y2, which we use as the final output of our network. We apply the same loss function to both y1 and y2 during training.
+
+
+It leads to the next question: Does it mean that I have to train U-Net twice?
+"
+"['machine-learning', 'game-ai', 'python', 'keras']"," Title: Mapping Actions to the Output Layer in Keras Model for a Board GameBody: I have created a game based on this game here. I am attempting to use Deep Q Learning to do this, and this is my first foray into Neural networks (please be gentle!!)
+
+I am trying to create a NN that can play this game. Here are some relevant facts about the game:
+
+
+- Player 1 (the fox) has 1 piece that he can move diagonally 1 step in any direction
+- Player 2(The geese) has 4 pieces that they can move only forward diagonally (either diagonal left or diagonal right) 1 step.
+- The Fox wins if he reaches the other end of the board, the geese win if they trap the fox so it cannot move.
+
+
+I am trying to work on the agent first for the geese as it seems to be the harder agent with more pieces and restrictions. Here is the important sections of code I have so far:
+
+
+ This is where I setup the game board, and set the total actions for the geese
+
+
+def __init__(self):
+ self.state_size = (LENGTH,LENGTH) ##LENGTH is 8 so (8,8)
+ #...
+ #other DQN variables that aren't important to question
+ #...
+ self.action_size = 8 ##4 geese, each can potentially make 2 moves
+ self.model = self.build_model()
+
+
+
+ And here is where I create my model
+
+
+def build_model(self):
+ #builds the NN for Deep-Q Model
+ model = Sequential() #establishes a feed forward NN
+ model.add(Dense(64,input_shape = (LENGTH,), activation='relu'))
+ model.add(Dense(64, activation='relu'))
+ model.add(Dense(self.action_size, activation = 'linear'))
+ model.compile(loss='mse', optimizer='Adam')
+ return model
+
+
+
+ This is where I perform an action
+
+
+def act(self, state,env):
+ #get the list of allowed actions for the geese
+ actions_allowed = env.allowed_actions_geese_agent()
+
+ if np.random.rand() <= self.epsilon: ##do a random move
+ return actions_allowed[random.randint(0, len(actions_allowed)-1)]
+ act_values = self.model.predict(state)
+ print(act_values)
+ return np.argmax(act_values)
+
+
+
+ My question: Since there are 4 geese and each can make 2 possible moves, am I correct in thinking that my action_size should be 8 (2 for each goose) or should it be maybe 2 (for diagonal left or right) or something else entirely?
+
+
+The reason why I am at a loss is because on any given turn, some of the geese may have an invalid move, does that matter?
+
+
+ My next Question: Even if I have the right output layer for the geese agent, when I call model.predict(state) where I pick my action... how do I interpret the output? And how would I map the action it selects to a valid action that can be made?
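+
+ For reference, the rough pattern I have in mind (assuming the 8-unit output is the right design, and that actions_allowed is a list of valid indices 0-7, which is exactly what I am unsure about) is to mask out the invalid entries before taking the argmax:
+
+q_values = self.model.predict(state)[0]  # shape (8,) if state has a batch dimension
+masked = np.full_like(q_values, -np.inf)
+masked[actions_allowed] = q_values[actions_allowed]
+best_action = int(np.argmax(masked))     # e.g. goose = best_action // 2, direction = best_action % 2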
+
+
+Here is a picture of the result of using model.predict(state); as you can see it returns a ton of data, and then when I call return np.argmax(act_values) I get 59 back... not sure how to utilize that (or if it's even correct based on my output layer)... and finally I included a drawing of the board. F is the fox and 1, 2, 3, 4 are the different geese.
+
+
+I apologize for the massive post, but I am just trying to provide as much information as is helpful.
+"
+"['machine-learning', 'deep-learning']"," Title: Machine learning to predict 8*8 matrix values using three independent matricesBody: Problem Statement
+
+I have 4 main input features.
+
+This is a small snippet of the data for clearer understanding.
+
+Gate name -> for example AND Gate
+
+index_1 -> [0.001169, 0.005416, 0.01391, 0.03037, 0.06381, 0.1307, 0.2645, 0.532]
+
+index_2 -> [7.906e-05, 0.001123, 0.00321, 0.007253, 0.01547, 0.03191, 0.06478, 0.1305]
+
+values -> [[11.0081, 14.0303, 18.8622, 27.3426, 43.8661, 76.7538, 142.591, 274.499],
+ [11.3461, 14.3634, 19.1985, 27.6827, 44.2106, 77.0954, 142.926, 274.879],
+ [12.258, 15.2816, 20.1095, 28.5856, 45.1057, 77.9778, 143.8, 275.758],
+ [13.665, 16.7457, 21.5835, 30.0545, 46.5581, 79.4212, 145.252, 277.192],
+ [15.6636, 18.9526, 23.9051, 32.4281, 48.9011, 81.7052, 147.477, 279.371],
+ [17.8838, 21.5839, 26.8957, 35.7103, 52.3901, 85.2132, 150.89, 282.714],
+ [19.3338, 23.6933, 29.7184, 39.1212, 56.4053, 89.9721, 155.913, 287.637],
+ [18.7856, 23.9999, 31.1794, 41.7549, 60.0043, 95.0488, 162.951, 295.005]]
+
+My task is to predict this values
matrix, given that I have index_1
and index_2
. Originally this values
matrix is propagation delay, calculated using a simulator called SPICE.
+
+Where I am facing problem
+
+
+- There is no written relation between index_1, index_2 and values, since the simulator calculates these values using its own models.
+- I have made a CSV file which contains the data in separate columns.
+- Another approach that I thought of: if I can give index_1, index_2 and any 5*5 sub-matrix to the model, the model could predict the values of the whole 8*8 matrix. But the problem is again: which machine learning model do I use?
+
+
+Approaches Tried so Far
+
+
+- I have tried a CNN model for this but it is giving me very low accuracy.
+- Used one dense fully connected neural network but it is over-fitting the data and not giving me any values for matrix.
+
+
+I am still stuck on how to predict the matrix values given this data. What other strategies can be used?
+"
+"['neural-networks', 'deep-learning', 'reinforcement-learning', 'game-ai', 'learning-algorithms']"," Title: Does a solution for Wumpus World with neural networks exist?Body: The Wumpus World proposed in book of Stuart Russel and Peter Norvig, is a game which happens on a 4x4 board and the objective is to grab the gold and avoiding the threats that can kill you. The rules of game are:
+
+
+- You move just one box for round
+- Start in position (1,1), bottom left
+- You have a vector of sensors for perceiving the world around you.
+- When you are next to another position (including the gold), the vector is 'activated'.
+- There is one wumpus (a monster), 2-3 pits (feel free to put more or less) and just one gold pot
+- You only have one arrow that flies in a straight line and can kill the wumpus
+- Entering the room with a pit, the wumpus or the gold finishes the game
+
+
+
+
+Scoring is as follows: +1000 for grabbing the gold, -1000 for dying to the wumpus, -1 for each step, -10 for shooting an arrow. Fore more details about the rules, chapter 7 of the book explains them.
+
+Well now that game has been explained, the question is: in the book, the solution is demonstrated by logic and searching, does there exist another form to solve that problem with neural networks? If yes, how to do that? What topology to use? What paradigm of learning and algorithms to use?
+
+1*: My English is horrible, if you can send grammar corrections, I'm grateful.
+
+2*: I think this is a bit confusing and a bit complex. if you can help me to clarify better, please do commentary or edit!
+"
+"['deep-learning', 'feedforward-neural-networks']"," Title: Compute Jacobian matrix of Deep learning model?Body: I am trying to implement this paper. In this paper, the author uses the forward derivative to compute the Jacobian matrix dF/dx using chain rule where F is the probability got from the last layer and X is input image.
+My model is given below. Kindly let me know how to go about doing that?
+
+class LeNet5(nn.Module):
+
+def __init__(self):
+
+ self.derivative= None # store derivative
+
+ super(LeNet5, self).__init__()
+ self.conv1= nn.Conv2d(1,6,5)
+ self.relu1= nn.ReLU()
+ self.maxpool1= nn.MaxPool2d(2,2)
+
+ self.conv2= nn.Conv2d(6,16,5)
+ self.relu2= nn.ReLU()
+ self.maxpool2= nn.MaxPool2d(2,2)
+
+ self.conv3= nn.Conv2d(16,120,5)
+ self.relu3= nn.ReLU()
+
+ self.fc1= nn.Linear(120,84)
+ self.relu4= nn.ReLU()
+
+ self.fc2= nn.Linear(84,10)
+ self.softmax= nn.Softmax(dim= -1)
+
+
+def forward(self,img, forward_derivative= False):
+ output= self.conv1(img)
+ output= self.relu1(output)
+ output= self.maxpool1(output)
+
+ output= self.conv2(output)
+ output= self.relu2(output)
+ output= self.maxpool2(output)
+
+ output= self.conv3(output)
+ output= self.relu3(output)
+
+ output= output.view(-1,120)
+ output= self.fc1(output)
+ output= self.relu4(output)
+
+ output= self.fc2(output)
+ F= self.softmax(output)
+
+ # want to comput the jacobian dF/dimg
+ jacobian= computeJacobian(F,img)#how to write this function
+
+ return F, jacobian
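+
+In case it clarifies what I am after, this is the kind of thing I have tried for computeJacobian, using autograd and looping over the 10 output probabilities (it assumes img.requires_grad_(True) was called before the forward pass, and I am not sure it matches the forward-derivative formulation in the paper):
+
+import torch
+
+def computeJacobian(F, img):
+    # F: (1, 10) softmax output, img: input tensor that requires grad
+    rows = []
+    for i in range(F.shape[1]):
+        grad_i = torch.autograd.grad(F[0, i], img, retain_graph=True)[0]
+        rows.append(grad_i.view(-1))
+    return torch.stack(rows)  # shape (10, number_of_input_pixels)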
+
+"
+"['machine-learning', 'deep-learning', 'optimization', 'autoencoders', 'objective-functions']"," Title: Loss jumps abruptly when I decay the learning rate with Adam optimizer in PyTorchBody: I'm training an auto-encoder
network with Adam
optimizer (with amsgrad=True
) and MSE loss
for Single channel Audio Source Separation task. Whenever I decay the learning rate by a factor, the network loss jumps abruptly and then decreases until the next decay in learning rate.
+
+I'm using Pytorch for network implementation and training.
+
+Following are my experimental setups:
+
+ Setup-1: NO learning rate decay, and
+ Using the same Adam optimizer for all epochs
+
+ Setup-2: NO learning rate decay, and
+ Creating a new Adam optimizer with same initial values every epoch
+
+ Setup-3: 0.25 decay in learning rate every 25 epochs, and
+ Creating a new Adam optimizer every epoch
+
+ Setup-4: 0.25 decay in learning rate every 25 epochs, and
+ NOT creating a new Adam optimizer every time rather
+ using PyTorch's ""multiStepLR"" and ""ExponentialLR"" decay scheduler
+ every 25 epochs
+
+
+I am getting very surprising results for setups #2, #3, #4 and am unable to reason any explanation for it. Following are my results:
+
+Setup-1 Results:
+
+Here I'm NOT decaying the learning rate and
+I'm using the same Adam optimizer. So my results are as expected.
+My loss decreases with more epochs.
+Below is the loss plot this setup.
+
+
+Plot-1:
+
+
+
+optimizer = torch.optim.Adam(lr=m_lr,amsgrad=True, ...........)
+
+for epoch in range(num_epochs):
+ running_loss = 0.0
+ for i in range(num_train):
+ train_input_tensor = ..........
+ train_label_tensor = ..........
+ optimizer.zero_grad()
+ pred_label_tensor = model(train_input_tensor)
+ loss = criterion(pred_label_tensor, train_label_tensor)
+ loss.backward()
+ optimizer.step()
+ running_loss += loss.item()
+ loss_history[m_lr].append(running_loss/num_train)
+
+
+
+
+Setup-2 Results:
+
+Here I'm NOT decaying the learning rate but every epoch I'm creating a new
+Adam optimizer with the same initial parameters.
+Here also results show similar behavior as Setup-1.
+
+Because at every epoch a new Adam optimizer is created, so the calculated gradients
+for each parameter should be lost, but it seems that this does not affect the
+network learning. Can anyone please help on this?
+
+
+Plot-2:
+
+
+
+for epoch in range(num_epochs):
+ optimizer = torch.optim.Adam(lr=m_lr,amsgrad=True, ...........)
+
+ running_loss = 0.0
+ for i in range(num_train):
+ train_input_tensor = ..........
+ train_label_tensor = ..........
+ optimizer.zero_grad()
+ pred_label_tensor = model(train_input_tensor)
+ loss = criterion(pred_label_tensor, train_label_tensor)
+ loss.backward()
+ optimizer.step()
+ running_loss += loss.item()
+ loss_history[m_lr].append(running_loss/num_train)
+
+
+
+
+Setup-3 Results:
+
+As can be seen from the results in below plot,
+my loss jumps every time I decay the learning rate. This is a weird behavior.
+
+If it was happening due to the fact that I'm creating a new Adam
+optimizer every epoch then, it should have happened in Setup #1, #2 as well.
+And if it is happening due to the creation of a new Adam optimizer with a new
+learning rate (alpha) every 25 epochs, then the results of Setup #4 below also
+denies such correlation.
+
+
+Plot-3:
+
+
+
+decay_rate = 0.25
+for epoch in range(num_epochs):
+ optimizer = torch.optim.Adam(lr=m_lr,amsgrad=True, ...........)
+
+ if epoch % 25 == 0 and epoch != 0:
+ lr *= decay_rate # decay the learning rate
+
+ running_loss = 0.0
+ for i in range(num_train):
+ train_input_tensor = ..........
+ train_label_tensor = ..........
+ optimizer.zero_grad()
+ pred_label_tensor = model(train_input_tensor)
+ loss = criterion(pred_label_tensor, train_label_tensor)
+ loss.backward()
+ optimizer.step()
+ running_loss += loss.item()
+ loss_history[m_lr].append(running_loss/num_train)
+
+
+
+
+Setup-4 Results:
+
+In this setup, I'm using Pytorch's learning-rate-decay scheduler (multiStepLR)
+which decays the learning rate every 25 epochs by 0.25.
+Here also, the loss jumps every time the learning rate is decayed.
+
+
+As suggested by @Dennis in the comments below, I tried with both ReLU and 1e-02 leakyReLU nonlinearities. But the results behave similarly: the loss first decreases, then increases, and then saturates at a higher value than what I would achieve without learning rate decay.
+
+Plot-4 shows the results.
+
+Plot-4:
+
+
+
+scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer=optimizer, milestones=[25,50,75], gamma=0.25)
+
+
+scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer=optimizer, gamma=0.95)
+
+scheduler = ......... # defined above
+optimizer = torch.optim.Adam(lr=m_lr,amsgrad=True, ...........)
+
+for epoch in range(num_epochs):
+
+ scheduler.step()
+
+ running_loss = 0.0
+ for i in range(num_train):
+ train_input_tensor = ..........
+ train_label_tensor = ..........
+ optimizer.zero_grad()
+ pred_label_tensor = model(train_input_tensor)
+ loss = criterion(pred_label_tensor, train_label_tensor)
+ loss.backward()
+ optimizer.step()
+ running_loss += loss.item()
+ loss_history[m_lr].append(running_loss/num_train)
+
+
+
+
+EDITS:
+
+
+- As suggested in the comments and reply below, I've made changes to my code and trained the model. I've added the code and plots for the same.
+- I tried with various
lr_scheduler
in PyTorch (multiStepLR, ExponentialLR)
and plots for the same are listed in Setup-4
as suggested by @Dennis in comments below.
+- Trying with leakyReLU as suggested by @Dennis in comments.
+
+
+Any help would be appreciated.
+Thanks
+"
+"['deep-learning', 'optimization', 'topology', 'convergence', 'hyper-parameters']"," Title: Is a calculus or ML approach to varying learning rate as a function of loss and epoch been investigated?Body: Many have examined the idea of modifying learning rate at discrete times during the training of an artificial network using conventional back propagation. The goals of such work have been a balance of the goals of artificial network training in general.
+
+
+- Minimal convergence time given a specific set of computing resources
+- Maximal accuracy in convergence with regard to the training acceptance criteria
+- Maximal reliability in achieving acceptable test results after training is complete
+
+
+The development of a surface involving these three measurements would require multiple training experiments, but may provide a relationship that itself could be approximated either by curve fitting or by a distinct deep artificial network using the experimental results as examples.
+
+
+- Epoch index
+- Learning rate hyper-parameter value
+- Observed rate of convergence
+
+
+The goal of such work would be to develop, either by manually applying analytic-geometry experience or by training a deep network on the experimental results, the following function, where
+
+
+- $\alpha$ is the ideal learning rate for any given epoch indexed by $i$,
+- $\epsilon$ is the loss function result, and
+- $\Psi$ is a function the result of which approximates the ideal learning rate for as large an array of learning scenarios possible within a clearly defined domain.
+
+
+$\alpha_i = \Psi (\epsilon, i)$
+
+Arriving at $\Psi$ in closed form (as a formula) would be of general academic and industrial value.
+
+Has this been done?
+"
+['reinforcement-learning']," Title: Does inflation should occur in output layer when I do Artificial Neural Network to increase smartness of the model?Body: The idea that come to my mind is called Value Based Model for ANN. We use simple DCF formula to calculate kind of Q value: Rewards/Discount rate. Discount rate is a risk of getting the reward on the information that agent know about. Of course if you have many factors you just sum that. So, we calculate FV for every cell that agent know information about and this is a predicted data. We put predicted - actual and teach model how to run using loss function. Rephrased, does increase in output layer actually train the model to be better? The human logic is that if I took course I have a bigger value that helps me to live. What about NN? Does it actually more precise if we with time increase output?
+"
+"['neural-networks', 'gradient-descent']"," Title: Do good approximations produce good gradients?Body: Let’s say I have a neural net doing classification and I’m doing stochastic gradient descent to train it. If I know that my current approximation is a decent approximation, can I conclude that my gradient is a decent approximation of the gradient of the true classifier everywhere?
+
+Specifically, suppose that I have a true loss function, $f$, and an estimation of it, $f_k$. Is it the case that there exists a $c$ (dependent on $f_k$) such that for all $x$ and $\epsilon > 0$ if $|f(x)-f_k(x)|<\epsilon$ then $|\nabla f(x) - \nabla f_k(x)|<c\epsilon$? This isn’t true for general functions, but it may be true for neural nets. If this exact statement isn’t true, is there something along these lines that is? What if we place some restrictions on the NN?
+
+The goal I have in mind is that I’m trying to figure out how to calculate how long I can use a particular sample to estimate the gradient without the error getting too bad. If I am in a context where resampling is costly, it may be worth reusing the same sample many times as long as I’m not making my error too large. My long-term goal is to come up with a bound on how much error I have if I use the same sample $k$ times, which doesn’t seem to be something in the literature as far as I’ve found.
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'convolutional-neural-networks', 'ai-design']"," Title: Is there a way that helps me to architect my CNN fundamentally before training?Body: While we train a CNN model we often experiment with the number of filters, the number of convolutional layers, FC layers, filter size, sometimes stride, activation function, etc. More often than not after training the model once, it is just a trial & error process.
+
+
+- Is there a way that helps me to architect my model fundamentally before training?
+- Once I train model, how do I know which among these variables (number of filters, size, number of convolutional layers, FC layers) should be
+changed - increased or decreased?
+
+
+P.S. This question assumes that data is sufficient in volume and annotated properly and still accuracy is not up to the mark. So, I've ruled out the possibility of non-architectural flaws for the question.
+"
+"['neural-networks', 'objective-functions', 'definitions', 'mean-squared-error']"," Title: Why is the ""square error function"" sometimes defined with the constant 1/2 and sometimes with the constant 1/m?Body: Depending on the source, I find people using different variations of the "squared error function". How come that be?
+Here, it is defined as
+$$
+E_{\text {total }}=\sum \frac{1}{2}(\text {target}-\text {output})^{2}
+$$
+OTOH, here, it's defined as
+$$
+\frac{1}{m} \sum_{i=1}^{m}\left(h_{\theta}\left(x^{(i)}\right)-y^{(i)}\right)^{2}
+$$
+Notice that it is being divided by 1 over $m$ as opposed to variation 1, where we multiply by $1/2$.
+The stuff inside the $()^2$ is simply notation, I get that, but dividing by $1/m$ and $1/2$ will clearly get a different result. Which version is the "correct" one, or is there no such thing as a correct or "official" squared error function?
+"
+"['python', 'recurrent-neural-networks', 'datasets', 'keras', 'long-short-term-memory']"," Title: Why would giving my AI more data make it perform worse?Body: So I trained an AI to generate shakespeare, which it did somewhat well. I used this 10,000 character sample.
+
+Next I tried to get it to generate limericks using these 100,000 limericks. It generated garbage output.
+
+When I limited it to 10,000 characters, it then started giving reasonable limerick output.
+
+How could this happen? I thought more data was always better.
+
+The AI was a neural network with some LSTM layers, implemented in keras.
+"
+"['machine-learning', 'proofs', 'hyper-parameters', 'regularization', 'l2-regularization']"," Title: How does L2 regularization make weights smaller?Body: I'm learning logistic regression and $L_2$ regularization.
+The cost function looks like below.
+$$J(w) = -\displaystyle\sum_{i=1}^{n} (y^{(i)}\log(\phi(z^{(i)})+(1-y^{(i)})\log(1-\phi(z^{(i)})))$$
+And the regularization term is added. ($\lambda$ is a regularization strength)
+$$J(w) = -\displaystyle\sum_{i=1}^{n} (y^{(i)}\log(\phi(z^{(i)})+(1-y^{(i)})\log(1-\phi(z^{(i)}))) + \frac{\lambda}{2}\| w \|$$
+Intuitively, I know that if $\lambda$ becomes bigger, extreme weights are penalized and weights become closer to zero. However, I'm having a hard time to prove this mathematically.
+$$\Delta{w} = -\eta\nabla{J(w)}$$
+$$\frac{\partial}{\partial{w_j}}J(w) = (-y+\phi(z))x_j + \lambda{w_j}$$
+$$\Delta{w} = \eta(\displaystyle\sum_{i=1}^{n}(y^{(i)}-\phi(z^{(i)}))x^{(i)} - \lambda{w_j})$$
+This doesn't show the reason why incrementing $\lambda$ makes weight become closer to zero. It is not intuitive.
+"
+"['game-ai', 'minimax', 'java']"," Title: Connect 4 minimax does not make the best moveBody: I'm trying to implement an algorithm that would choose the optimal next move for the game of Connect 4. As I just want to make sure that the basic minimax works correctly, I am actually testing it like a Connect 3 on a 4x4 field. This way I don't need the alpha-beta pruning, and it's more obvious when the algorithm makes a stupid move.
+The problem is that the algorithm always starts the game with the leftmost move, and also during the game it's just very stupid. It doesn't see the best moves.
+I have thoroughly tested methods makeMove()
, undoMove()
, getAvailableColumns()
, isWinningMove()
and isLastSpot()
so I am absolutely sure that the problem is not there.
+Here is my algorithm.
+NextMove.java
+private static class NextMove {
+ final int evaluation;
+ final int moveIndex;
+
+ public NextMove(int eval, int moveIndex) {
+ this.evaluation = eval;
+ this.moveIndex = moveIndex;
+ }
+
+ int getEvaluation() {
+ return evaluation;
+ }
+
+ public int getMoveIndex() {
+ return moveIndex;
+ }
+}
+
+The Algorithm
+private static NextMove max(C4Field field, int movePlayed) {
+ // moveIndex previously validated
+
+ // 1) check if moveIndex is a final move to make on a given field
+ field.undoMove(movePlayed);
+
+ // check
+ if (field.isWinningMove(movePlayed, C4Symbol.BLUE)) {
+ field.playMove(movePlayed, C4Symbol.RED);
+ return new NextMove(BLUE_WIN, movePlayed);
+ }
+ if (field.isWinningMove(movePlayed, C4Symbol.RED)) {
+ field.playMove(movePlayed, C4Symbol.RED);
+ return new NextMove(RED_WIN, movePlayed);
+ }
+ if (field.isLastSpot()) {
+ field.playMove(movePlayed, C4Symbol.RED);
+ return new NextMove(DRAW, movePlayed);
+ }
+
+ field.playMove(movePlayed, C4Symbol.RED);
+
+ // 2) moveIndex is not a final move
+ // --> try all possible next moves
+ final List<Integer> possibleMoves = field.getAvailableColumns();
+ int bestEval = Integer.MIN_VALUE;
+ int bestMove = 0;
+ for (int moveIndex : possibleMoves) {
+ field.playMove(moveIndex, C4Symbol.BLUE);
+
+ final int currentEval = min(field, moveIndex).getEvaluation();
+ if (currentEval > bestEval) {
+ bestEval = currentEval;
+ bestMove = moveIndex;
+ }
+
+ field.undoMove(moveIndex);
+ }
+
+ return new NextMove(bestEval, bestMove);
+}
+
+private static NextMove min(C4Field field, int movePlayed) {
+ // moveIndex previously validated
+
+ // 1) check if moveIndex is a final move to make on a given field
+ field.undoMove(movePlayed);
+
+ // check
+ if (field.isWinningMove(movePlayed, C4Symbol.BLUE)) {
+ field.playMove(movePlayed, C4Symbol.BLUE);
+ return new NextMove(BLUE_WIN, movePlayed);
+ }
+ if (field.isWinningMove(movePlayed, C4Symbol.RED)) {
+ field.playMove(movePlayed, C4Symbol.BLUE);
+ return new NextMove(RED_WIN, movePlayed);
+ }
+ if (field.isLastSpot()) {
+ field.playMove(movePlayed, C4Symbol.BLUE);
+ return new NextMove(DRAW, movePlayed);
+ }
+
+ field.playMove(movePlayed, C4Symbol.BLUE);
+
+ // 2) moveIndex is not a final move
+ // --> try all other moves
+ final List<Integer> possibleMoves = field.getAvailableColumns();
+ int bestEval = Integer.MAX_VALUE;
+ int bestMove = 0;
+ for (int moveIndex : possibleMoves) {
+ field.playMove(moveIndex, C4Symbol.RED);
+
+ final int currentEval = max(field, moveIndex).getEvaluation();
+ if (currentEval < bestEval) {
+ bestEval = currentEval;
+ bestMove = moveIndex;
+ }
+
+ field.undoMove(moveIndex);
+ }
+
+ return new NextMove(bestEval, bestMove);
+}
+
+The idea is that the algorithm takes in the arguments of a currentField
and the lastPlayedMove
. Then it checks if the last move somehow finished the game. If it did, I just return that move, and otherwise I go in-depth with the subsequent moves.
+Blue player is MAX, red player is MIN.
+In each step I first undo the last move, because it's easier to check if the "next" move will finish the game, than check if the current field is finished (this would require to analyze for all possible winning options in the field). After I check, I just redo the move.
+For some reason this doesn't work. I have been stuck on this for days! I have no idea what's wrong... Any help greatly appreciated!
+EDIT
+I'm adding the code how I'm invoking the algorithm.
+@Override
+public int nextMove(C4Game game) {
+ C4Field field = game.getCurrentField();
+ C4Field tmp = C4Field.copyField(field);
+
+ int moveIndex = tmp.getAvailableColumns().get(0);
+ final C4Symbol symbol = game.getPlayerToMove().getSymbol().equals(C4Symbol.BLUE) ? C4Symbol.RED : C4Symbol.BLUE;
+ tmp.dropToColumn(moveIndex, symbol);
+
+ NextMove mv = symbol
+ .equals(C4Symbol.BLUE) ?
+ max(tmp, moveIndex) :
+ min(tmp, moveIndex);
+
+ int move = mv.getMoveIndex();
+ return move;
+}
+
+"
+"['reinforcement-learning', 'policies', 'value-functions']"," Title: Why does the value of state change depending on the policy used to get to that state?Body: From what I understand, the value function estimates how 'good' it is for an agent to be in a state, and a policy is a mapping of actions to state.
+If I have understood these concepts correctly, why does the value of a state change with the policy with which an agent gets there?
+I guess I'm having difficulty grasping the concept that the goodness of a state changes depending on how an agent got there (different policies may have different ways, and hence different values, for getting to a particular state).
+If there can be a concrete example (perhaps on a grid world or on a chessboard), that might make it clear why that might be the case.
+"
+"['machine-learning', 'support-vector-machine', 'hyper-parameters']"," Title: What is the purpose of the ""gamma"" parameter in SVMs?Body: I want to understand what the gamma
parameter does in an SVM. According to this page.
+
+Intuitively, the gamma
parameter defines how far the influence of a single training example reaches, with low values meaning ‘far’ and high values meaning ‘close’. The gamma
parameters can be seen as the inverse of the radius of influence of samples selected by the model as support vectors.
+
+I don't understand this part "of a single training example reaches", does it refer to the training dataset?
+"
+"['neural-networks', 'classification', 'feedforward-neural-networks']"," Title: Which marketing-related classification challenges is a feed forward neural network suited to solve?Body: I am trying to think of some marketing-related classification challenges that a feed-forward neural network would be suited for.
+
+Any ideas?
+"
+"['comparison', 'optimization', 'evolutionary-algorithms']"," Title: What is the difference between the study of evolutionary algorithms and optimization?Body: I have a course named ""Evolutionary Algorithms"", but our teacher is always mentioning the word ""optimization"" in his lectures.
+
+I am confused. Is he actually teaching optimization? If yes, why is the name of the course not ""Optimization""?
+
+What is the difference between the study of evolutionary algorithms and optimization?
+"
+['convolutional-neural-networks']," Title: Can a basic CNN (Conv2D, MaxPooling2D, UpSampling2D) find a good approximation of a product of its input channels?Body: Let's assume I want to teach a CNN some physics. Starting with a U-Net, I input images A
and B
as separate channels. I know that my target (produced by a very slow Monte-Carlo code) represents a signal such as f(g(A) * h(B))
, where f
, g
and h
are fairly ""convolutional"" operations -- meaning, involving mostly blurring and rescaling operations.
+
+I feel safe to state that this problem would not be too difficult for the case of f(g(A) + h(B))
-- but what about f(g(A) * h(B))
? Can I expect a basic CNN such as the U-Net to be able to represent the *
(multiplication) operation?
+
+Or should I expect to be forced to include a Multiply
layer in my network, somewhere where I expect that the part before can learn the g
and h
parts, and the part after can learn the f
part?
+"
+"['agi', 'social', 'neo-luddism', 'value-alignment', 'risk-management']"," Title: Should we focus more on societal or technical issues with AI riskBody: I have trouble finding material (blog, papers) about this issue, so I'm posting here.
+
+Taking a recent well known example: Musk has tweeted and warned about the potential dangers of AI, saying it is ""potentially more dangerous than nukes"", referring the issue of creating a superintelligence whose goals are not aligned with ours. This is often illustrated with the paperclip maximiser though experiment. Let's call this first concern ""AI alignment"".
+
+By contrast, in a recent podcast, his concerns seemed more related to getting politicians and decision makers to acknowledge and cooperate on the issue, to avoid potentially dangerous scenarios like an AI arms race. In a paper co-authored by Nick Bostrom:
+Racing to the Precipice: a Model of Artificial Intelligence Development, the authors argue that developing AGI in a competitive situation incentivises us to skim on safety precautions, so it is dangerous. Let's call this second concern ""AI governance"".
+
+My question is about the relative importance between these two issues: AI alignment and AI governance.
+
+It seems that most institutions trying to prevent such risks (MIRI, FHI, FLI, OpenAI, DeepMind and others) just state their mission without trying to argue about why one approach should be more pressing than the other.
+
+How to assess the relative importance of those two issues? And can you point me any literature about this?
+"
+"['natural-language-processing', 'reference-request', 'computational-linguistics']"," Title: What is the name of the NLP technique that determines ""who did what to whom"" given a sentence?Body: Within a piece of text, I'm trying to detect who did what to whom.
+For instance, in the following sentences:
+
+CV hit IV. CV was hit by IV.
+
+I'd like to know who hit whom.
+I can't remember what this technique is called.
+"
+"['reinforcement-learning', 'q-learning', 'papers', 'd3qn']"," Title: Questions on the identifiability issue and equations 8 and 9 in the D3QN paperBody: I have difficulty understanding the following paragraph in the below excerpts from page 4 to page 5 from the paper Dueling Network Architectures for Deep Reinforcement Learning.
+The author said "we can force the advantage function estimator to have zero advantage at the chosen action."
+For the equation $(8)$ below, is it correct that $A - \max A$ is at most zero?
+
+... lack of identifiability is mirrored by poor practical performance when this equation is used directly.
+
+
+To address this issue of identifiability, we can force the advantage
+function estimator to have zero advantage at the chosen action. That is, we let the last module of the network implement the forward mapping
+
+
+$$Q(s, a; \theta, \alpha, \beta) = V(s; \theta, \beta) + \left( A(s, a; \theta, \alpha) - \max_{a' \in | \mathcal{A} |} A(s, a'; \theta, \alpha) \right). \tag{8}$$
+
+
+Now, for $a^∗ = \text{arg max}_{a' \in \mathcal{A}} Q(s, a'; \theta, \alpha, \beta) = \text{arg max}_{a' \in \mathcal{A}} A(s, a'; \theta, \alpha)$, we obtain $Q(s, a^∗; \theta, \alpha, \beta) = V (s; \theta, \beta)$. Hence, the stream $V(s; \theta, \beta)$ provides an estimate of the value function, while the other stream produces an estimate of the advantage function.
+
+I would like to request further explanation on Equation 9, when the author wrote what is bracketed between the red parentheses below.
+
+An alternative module replaces the max operator with an average:
+
+
+$$Q(s, a; \theta, \alpha, \beta) = V (s; \theta, \beta) + \left( A(s, a; \theta, \alpha) − \frac {1} {|A|} \sum_{a' \in \mathcal{A}} A(s, a'; \theta, \alpha) \right). \tag{9}$$
+
+
+On the one hand this loses the original semantics of $V$ and $A$ because they are now off-target by a constant,
+but on the other hand it increases the stability of the optimization: with (9) the advantages only need to change as fast as the mean, instead of having to compensate any change to the optimal action’s advantage in (8).
+
+In the paper, to address the identifiability issue, there are two equations used. My understanding is both equations are trying to fix the advantage part - the last module.
+For equation $(8)$, are we trying to make $V(s) = Q^*(s)$, as the last module is zero?
+For equation $(9)$, the resulting $V(s)$ = true $V(s)$ + mean$(A)$? As the author said "On the one hand this loses the original semantics of $V$ and $A$ because they are now off-target by a constant". And the constant refers to mean$(A)$? Is my understanding correct?
+"
+"['neural-networks', 'terminology', 'activation-functions']"," Title: Is a linear activation function (in the output layer) equivalent to an identity function?Body: I have a simple question about the choice of activation function for the output layer in feed-forward neural networks.
+I have seen several code examples where the activation function chosen for the output layer is linear.
+Now, it might well be that I am wrong about this, but isn't that simply equivalent to a rescaling of the weights connecting the last hidden layer to the output layer? And following this point, aren't you just as well off with just using the identity function as your output activation function?
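+For example, I would expect these two output layers to behave identically (a tiny Keras sketch just to illustrate what I mean; the layer size is arbitrary):
+
+from tensorflow.keras.layers import Dense
+
+# 'linear' is a(x) = x, so both layers should compute exactly the same thing
+out_a = Dense(1, activation='linear')
+out_b = Dense(1, activation=None)   # None defaults to the identity / linear activation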
+"
+"['convolutional-neural-networks', 'classification']"," Title: Using 3D Points as Inputs to a Neural NetBody: I am currently looking to use a neural network to classify gestures. I have a series of Dx,Dy,Dz readings that represent the differences across the three axes made during the gesture. About 10 movements for each example of the gesture. Basically a 10x3 matrix and then classify the training data into about 15 classes. I plan to use a CNN classifier to do this because, while the time domain is relevant this problem the difference in the movements can be differentiated when presented with as a discrete matrix.
+
+I'm used to using images with a neural net so I instinctively want to just convert the matrices into a 2D tensor and feed them into a CNN, but I was wondering if there was a better way to do this? For example, I have seen 1D tensors passed to a fully connected neural network for classification which seems like it could be more appropriate for this data input type?
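+For concreteness, this is the kind of minimal Keras sketch I have in mind (my own guess; the Conv1D choice and the layer sizes are placeholders I have not validated):
+
+from tensorflow.keras import layers, models
+
+# input: 10 movements x 3 axes (Dx, Dy, Dz), output: 15 gesture classes
+model = models.Sequential([
+    layers.Conv1D(16, kernel_size=3, activation='relu', input_shape=(10, 3)),
+    layers.Flatten(),
+    layers.Dense(32, activation='relu'),
+    layers.Dense(15, activation='softmax'),
+])
+model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')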
+
+Any tips on general architecture would be really appreciated as well!
+
+Thanks!
+"
+['logic']," Title: What is the correct way to use the implication in first-order logic?Body: I know the implication symbol, $\rightarrow$, is used for conditions like
+
+If $A$ is true, then $B$ will be true.
+
+which can be written as
+$$
+A \rightarrow B
+$$
+However, sometimes the implication symbol is also used in other contexts. For example, if we want to say that
+
+All $A$'s are $B$.
+
+We could write
+$$\forall X (A(X) \rightarrow B(X))$$
+I don't understand why the implication is used here? And if the implication is necessary to use here, then why isn't the implication used in the example written below?
+
+Some $A$'s are $B$'s.
+
+$$\exists X (A(X) \land B(X))$$
+"
+"['genetic-algorithms', 'optimization', 'evolutionary-algorithms']"," Title: How do I optimize a specific function using a genetic algorithm?Body: I recently learned about genetic algorithms and I solved the 8 queens problem using a genetic algorithm, but I don't know how to optimize any functions using a genetic algorithm.
+$$
+\begin{array}{r}
+\text { maximize } f(x)=\frac{-x^{2}}{10}+3 x \\
+0 \leq x \leq 32
+\end{array}
+$$
+I want a guide on how to find chromosomes and fitness functions for such a function? And I don't want code.
+"
+"['neural-networks', 'activation-functions']"," Title: Is the cube root function suitable as a n activation function?Body: I am trying to design a neural network on Python.
+
+Instead of the sigmoid function which has a limited range, I am thinking of using the cube root function which has the following graph:
+
+
+Is this suitable?
+"
+"['image-recognition', 'classification', 'support-vector-machine']"," Title: How does an svm work? How does it perform comparisons between malignant and benign tumorBody: How do Support Vector Machines (SVMs) differentiate between a glass and a bottle or between a malignant and a benign tumor when it dealing with it for the first time?
+
+What will be the analysis mechanism involved in this?
+"
+"['neural-networks', 'tensorflow', 'implementation']"," Title: How fast is TensorFlow compared to self written neural nets?Body: I made my first neural net in C++ without any libraries. It was a net to recognize numbers from the MNIST dataset. In a 784 - 784 - 10 net with sigmoid function and 5 epochs with every 60000 samples, it took about 2 hours to train. It was probably slow anyways, because I trained it on a laptop and I used classes for Neurons and Layers.
+To be honest, I've never used TensorFlow, so I wanted to know how the performance of my net would compare to the same network in TensorFlow. Nothing too specific, just a rough approximation.
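+For reference, I imagine the TensorFlow/Keras version of my net would look roughly like this (a sketch I have not run; I only want to time something comparable):
+
+import tensorflow as tf
+
+(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
+x_train = x_train.reshape(-1, 784) / 255.0
+
+# roughly the same 784 - 784 - 10 architecture with sigmoid activations
+model = tf.keras.Sequential([
+    tf.keras.layers.Dense(784, activation='sigmoid', input_shape=(784,)),
+    tf.keras.layers.Dense(10, activation='softmax'),
+])
+model.compile(optimizer='sgd', loss='sparse_categorical_crossentropy')
+model.fit(x_train, y_train, epochs=5)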
+"
+"['recurrent-neural-networks', 'long-short-term-memory', 'reference-request', 'papers']"," Title: Where can I find the original paper that introduced RNNs?Body: I was able to find the original paper on LSTM, but I was not able to find the paper that introduced "vanilla" RNNs. Where can I find it?
+"
+['reinforcement-learning']," Title: Is it possible to state an outliers detection problem as a reinforcement learning problem?Body: To me it seems to be ill defined. Partially because of absence of knowledge which points are to be considered outliers in the first place.
+
+The problem which I have in mind is ""bad market data"" detection. For example, a financial data provider may be good most of the time, but about 7-10% of the data does not make any sense.
+
+The action space is binary: either take an observation or reject it.
+
+I am not sure about the reward, because the observations would be fed into an algorithm as inputs and the outputs of the algo would be outliers themselves. So the outlier detection should prevent the outputs of the algorithm from going rogue.
+
+It is necessary to add that, if we are talking about market data (stocks, indices, fx), there's no guarantee that the distributions are stationary, and there might be trends and jumps. If a supervised classifier is trained on historical data, how and how often should it be adjusted to be able to cope with different modes of the data?
+"
+"['game-theory', 'chess']"," Title: If two perfect chess AI's played each other, would it always be a stalemate or would white win for an inherent first-move advantage?Body: In the circumstances of two perfect AI's playing each other, will white have an inherent advantage? Or can black always play for a stalemate by countering every white strategy?
+"
+"['reinforcement-learning', 'actor-critic-methods', 'proximal-policy-optimization']"," Title: How do I calculate the policy in the Proximal Policy Optimization algorithm?Body: I recently watched the video on Proximal Policy Optimization (PPO). Now, I want to upgrade my actor-critic algorithm written in PyTorch with PPO, but I'am not sure how the new parameters / thetas are calculated.
+
+In the paper Proximal Policy Optimization Algorithms (at page 5), the pseudocode of the PPO algorithm is shown:
+
+
+
+It says to run $\pi_{\theta_{\text{old}}}$, compute advantage estimates and optimize the objective. But how can we calculate $\pi_\theta$ for the objective ratio, since we have not updated the $\pi_{\theta_{\text{old}}}$ yet?
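+My current guess (an assumption on my part, not verified against any reference implementation) is that the log-probabilities of the actions are stored under $\pi_{\theta_{\text{old}}}$ while collecting the rollout, and the same actions are then re-evaluated under the current $\pi_\theta$ during the update to form the ratio:
+
+import torch
+
+# states, actions, advantages, log_probs_old and eps_clip come from my own rollout code (placeholders here)
+dist = policy(states)                          # current pi_theta
+log_probs = dist.log_prob(actions)
+ratio = torch.exp(log_probs - log_probs_old)   # pi_theta / pi_theta_old
+surr1 = ratio * advantages
+surr2 = torch.clamp(ratio, 1 - eps_clip, 1 + eps_clip) * advantages
+loss = -torch.min(surr1, surr2).mean()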
+"
+"['deep-learning', 'training', 'object-recognition']"," Title: Add training data to YOLO post-trainingBody: (Cross-posting here from the data science stack exchange, as my question didn't get any replies. I hope it's okay!)
+
+I've been playing around with YOLOv3 and obtaining some good results on the ~20 custom classes I trained. However, one or two classes look like they can use some additional training data (not a lot, say about 10% more data), which I can provide.
+
+What is the most efficient way to train my model now? Do I need to start training from scratch? Can I just throw in my additional data (with the appropriate changes to the config files etc.) and run the training based on the weight matrix I already acquired, but for a small number of iterations? (1000?) Or is this more like a transfer learning problem now?
+
+Thanks for all tips!
+"
+['long-short-term-memory']," Title: Structure of a multilayered LSTM neural network?Body: I implemented a LSTM neural network in Pytorch. It worked but I want to know if it worked the way I guessed how it worked.
+
+Say there's a 2-layer LSTM network with 10 units in each layer.
+The inputs are some sequence data Xt1, Xt2, Xt3, Xt4, Xt5.
+
+So when the inputs are entered into the network, Xt1 will be thrown into the network first and be connected to every unit in the first layer. And it will generate 10 hidden states/10 memory cell values/10 outputs. Then the 10 hidden states, 10 memory cell values and Xt2 will be connected to the 10 units again, and generate another 10 hidden states/10 memory cell values/10 outputs and so on.
+
+After all 5 Xt's are entered into the network, the 10 outputs from Xt5 from the first layer are then used as the inputs for the second layer. The other outputs from Xt1 to Xt4 are not used. And then the 10 outputs will be entered into the second layer one by one again. So the first of the 10 will be connected to every unit in the second layer and generate 10 hidden states/10 memory cell values/10 outputs. The 10 memory cell values/10 hidden states and the second value of the 10 will then be connected, and so forth?
+
+After all these are done, only the final 10 outputs from the layer 2 will be used. Is this how the LSTM network works? Thanks.
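+To check my understanding, this is the kind of small PyTorch snippet I would use to inspect the shapes (the sizes just mirror my example: 5 time steps, 2 layers, 10 units; the input feature size of 3 is arbitrary):
+
+import torch
+import torch.nn as nn
+
+lstm = nn.LSTM(input_size=3, hidden_size=10, num_layers=2, batch_first=True)
+x = torch.randn(1, 5, 3)        # (batch, time steps Xt1..Xt5, features)
+out, (h, c) = lstm(x)
+print(out.shape)                # torch.Size([1, 5, 10]): top-layer outputs for every time step
+print(h.shape)                  # torch.Size([2, 1, 10]): final hidden state of each of the 2 layers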
+"
+"['philosophy', 'cognitive-science', 'turing-test', 'intelligence', 'embodied-cognition']"," Title: Can a brain be intelligent without a body?Body: The dialog context
+
+Turing proposed at the end of the description of his famous test, ""Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, 'Can machines think?'""1
+
+Turing effectively challenged the 1641 statement of René Descartes in his Discourse on Method and Meditations on First Philosophy:
+
+
+ ""It never happens that [an automaton] arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.""
+
+
+Descartes and Turing, when discussing automatons achieving human abilities, shared a single context through which they perceived intelligence. Those that have been either the actor or the administrator in actual Turing Tests understand the context: Dialog.
+
+Other contexts2
+
+The context of the dialog is distinct from other contexts such as writing a textbook, running a business, or raising children. If you apply the principle of comparing machine and human intelligence to automated vehicles (e.g. self-driving cars), an entirely different context becomes immediately apparent.
+
+Question
+
+Can a brain be intelligent without a body? More generally, does intelligence require a context?
+
+
+
+References
+
+[1] Chapter 1 (""Imitation Game"") of Computing Machinery and Intelligence, 1951.
+
+[2] Multiple Intelligences Theory
+"
+"['neural-networks', 'game-ai', 'python', 'autonomous-vehicles', 'neat']"," Title: Creating a self learning Mario Kart game AI?Body: I will be undertaking a project over the next year to create a self learning AI to play a racing game, currently the game will be Mario Kart 64.
+
+I have a few questions which will hopefully help me get started:
+
+
+- What aspects of AI would be most applicable to creating a self-learning game AI for a racing game (Q-learning, NEAT, etc.)?
+- Could an ANN or NEAT network that has learned to play Mario Kart 64 be used to learn to play another racing game?
+- What books/material should I read up on to undertake this project?
+- What other considerations should I take into account throughout this project?
+
+
+Thank you for your help!
+"
+"['data-preprocessing', 'voice-recognition', 'fourier-transform']"," Title: Why is the short-time Fourier transform used for preprocessing audio samples?Body: I've been told this is how I should be preprocessing audio samples, but what information does this method actually give me? What are the alternatives, and why shouldn't I use them?
+"
+"['neural-networks', 'deep-learning', 'math', 'papers', 'weight-normalization']"," Title: Can you help me understand how weight normalization works?Body: I am trying to dissect the paper Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks.
+Unfortunately, because my math is a little bit rusty, I got a little bit stuck with the proof. Could you provide me with some clarification about proof of the topic?
+What I understand is that we introduce, instead of the weight vector $w$ a scalar $g$ (magnitude of original $w$?) and $\frac{v}{\|v\|}$ (direction of original $w$?) of the vector.
+$$\nabla_{g} L=\frac{\nabla_{\mathbf{w}} L \cdot \mathbf{v}}{\|\mathbf{v}\|}$$
+and
+$$\nabla_{\mathbf{v}} L=\frac{g}{\|\mathbf{v}\|} \nabla_{\mathbf{w}} L-\frac{g \nabla_{g} L}{\|\mathbf{v}\|^{2}} \mathbf{v}$$
+What I am not really sure about is:
+If the gradients are noisy (does this mean that in some dimension we have small and in some high curvature or that error noise differs for very similar values of $w$?) the value will quickly increase here and effectively limit the speed of descent by decreasing value of ($\frac{g}{\|v\|}$). This means that we can choose larger learning rates and it will somehow adjust the effect of the learning rate during the training.
+And what I completely miss is:
+$$\nabla_{\mathbf{v}} L=\frac{g}{\|\mathbf{v}\|} M_{\mathbf{w}} \nabla_{\mathbf{w}} L$$
+with
+$$\mathrm{M}_{\mathrm{w}}=\mathrm{I}-\frac{\mathrm{w} \mathrm{w}^{\prime}}{\|\mathrm{w}\|^{2}}$$
+It should somehow explain the reasoning behind the idea of the final effects. Unfortunately, I don't really understand this part of the paper, I probably lack some knowledge about Linear Algebra.
+Can you verify that my understanding of the paper is correct?
+Can you recommend some sources (books/videos) to help me to understand the second part of proof (related to the second set of formulas?)
+"
+"['machine-learning', 'reinforcement-learning', 'training', 'terminology']"," Title: What is the AI discipline where an algorithm learns from an initial training set, but then refines its learning as it uses that training?Body: Imagine a system that is trained to manipulate dampers to manage air flow. The training data includes damper state and flow characteristics through a complex system of ducts. The system is then given an objective (e.g. maintain even flow to all outputs) and set loose to manage the dampers. As it performs those functions there are anomalies in the results which the system is able to detect. The algorithm CONTINUES to learn from its own empirical data, the result of implemented damper configurations, and refines its algorithm to improve performance seeking the optimum goal of perfectly even flow at all outputs.
+
+What is that kind of learning or AI system called?
+"
+"['machine-learning', 'reinforcement-learning', 'terminology', 'incremental-learning', 'ai-field']"," Title: What is the name of an AI system that learns by trial and error?Body: Imagine a system that controls dampers in a complex vent system that has an objective to perfectly equalize the output from each vent. The system has sensors for damper position, flow at various locations and at each vent. The system is initially implemented using a rather small data set or even a formulaic algorithm to control the dampers. What if that algorithm were programmed to ""try"" different configurations of dampers to optimize the air flows, guided broadly by either the initial (weak) training or the formula? The system would try different configurations and learn what improved results, and what worsened results, in an effort to reduce error (differential outflow).
+
+What is that kind of AI system called? What is that system of learning called? Are there systems that do that currently?
+"
+['expert-systems']," Title: Why did MYCIN fail?Body: I've been reading about expert systems and started reading about MYCIN.
+I was astonished to find that MYCIN diagnosed patients better than the infectious diseases physicians.
+http://www.aaaipress.org/Classic/Buchanan/Buchanan33.pdf
+Since it had such a good success rate, why did it fail?
+"
+"['reinforcement-learning', 'game-ai', 'python', 'keras', 'dqn']"," Title: Using a DQN with a variable amount of Valid Moves per turn for a Board GameBody: I have created a game on an 8x8 grid and there are 4 pieces which can move essentially like checkers pieces (Forward left or Forward right only). I have implemented a DQN in order to pull this off.
+
+Here is how I have mapped my moves:
+
+self.actions = {""1fl"": 0, ""1fr"": 1,""2fl"": 2,
+ ""2fr"": 3,""3fl"": 4, ""3fr"": 5,""4fl"": 6, ""4fr"": 7}
+
+
+essentially I assigned each move to an integer value from 0-7 (8 total moves).
+
+
+ My question is: during any given turn, not all 8 moves are valid. How do I make sure that, when I call model.predict(state), the resulting prediction is a valid move? Here is how I am currently handling it.
+
+
+def act(self, state, env):
+ #get the allowed list of actions
+ actions_allowed = env.allowed_actions_for_agent()
+
+ #Do a random move if random # greater than epsilon
+ if np.random.rand(0,1) <= self.epsilon:
+ return actions_allowed[random.randint(0, len(actions_allowed)-1)]
+
+ #get the prediction from the model by passing the current game board
+ act_values = self.model.predict(state)
+
+ #Check to see if prediction is in list of valid moves, if so return it
+ if np.argmax(act_values[0]) in actions_allowed:
+ return np.argmax(act_values[0])
+
+ #If prediction is not valid do a random move instead....
+ else:
+ if len(actions_allowed) > 0:
+ return actions_allowed[random.randint(0,len(actions_allowed)-1)]
+
+
+I feel like if the agent predicts a move, and if that move is not in the actions_allowed set I should punish the agent.
+
+But because it doesn't pick a valid move, I make it do a random one instead, and I think this is a problem, because its bad prediction may ultimately end up still winning the game since the random move may have a positive outcome. I am at a total loss. The agent trains... but it doesn't seem to learn... I have been training it for over 100k games now, and it only seems to win 10% of its games.... ugh.
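+For reference, one alternative I have been wondering about (purely a sketch, not what my act() code above does) is to mask out the invalid actions' predicted values before taking the argmax, instead of falling back to a random move:
+
+import numpy as np
+
+act_values = self.model.predict(state)[0]
+masked = np.full(self.action_size, -np.inf)
+masked[actions_allowed] = act_values[actions_allowed]   # keep only the valid moves
+return int(np.argmax(masked))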
+
+Other helpful information:
+- I am utilizing experience replay for the DQN which I have based on the code from here:
+
+Here is where I build my model as well:
+
+self.action_size = 8
+LENGTH = 8
+def build_model(self):
+ #builds the NN for Deep-Q Model
+ model = Sequential() #establishes a feed forward NN
+ model.add(Dense(64,input_shape = (LENGTH,), activation='relu'))
+ model.add(Dense(64, activation='relu'))
+ model.add(Dense(self.action_size, activation = 'linear'))
+ model.compile(loss='mse', optimizer='Adam')
+
+"
+"['computer-vision', 'matlab', 'image-segmentation']"," Title: How do I segment each part of a DICOM image?Body: As I'm beginner in image processing, I am having difficulty in segmenting all the parts in DICOM image.
+
+Currently, I'm applying watershed algorithm, but it segments only that part that has tumour.
+
+I have to segment all parts in the image. Which algorithm will be helpful to perform this task?
+
+The image below contains the tumour.
+
+
+
+This image is the actual DICOM image
+
+
+"
+"['neural-networks', 'convolutional-neural-networks', 'datasets']"," Title: Traffic signs datasetBody: I'm looking for annotated dataset of traffic signs. I was able to find Belgium, German and many more traffic signs datasets. The only problem is these datasets contain only cropped images, like this:
+
+
+
+While I need (for the YOLO -- You Only Look Once -- network architecture) non-cropped images.
+
+
+
+I've been looking for hours but didn't find a dataset like this. Does anybody know about this kind of annotated dataset?
+
+EDIT:
+
+I prefer European datasets.
+"
+"['neural-networks', 'machine-learning', 'perceptron']"," Title: Is the bias supposed to be updated in the perceptron learning algorithm?Body: I am using the following perceptron formula $\text{step}\left(\sum(w_ix_i)-\theta \right)$.
+
+Is $\theta$ supposed to be updated in a perceptron, like the weights $w_i$? If so, what is the formula for this?
+
+I'm trying to make the perceptron learn AND and OR, but without updating $\theta$, I don't feel like it's possible to learn the case where both inputs are $0$. They will, of course, be independent of the weights, and therefore the output will be $\text{step}(-\theta)$, meaning $\theta$ (which has a random value) alone will determine the output.
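+For context, here is a minimal sketch of what I am trying, with my guess that $\theta$ gets its own update just like a weight (the $\theta$ update line is exactly the part I am unsure about):
+
+import numpy as np
+
+def train_perceptron(X, y, lr=0.1, epochs=20):
+    w = np.random.rand(X.shape[1])
+    theta = np.random.rand()
+    for _ in range(epochs):
+        for xi, target in zip(X, y):
+            out = 1 if np.dot(w, xi) - theta >= 0 else 0
+            err = target - out
+            w = w + lr * err * xi
+            theta = theta - lr * err   # my guess: theta behaves like a bias with constant input -1
+    return w, theta
+
+X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # AND gate
+y = np.array([0, 0, 0, 1])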
+"
+"['machine-learning', 'agi', 'math']"," Title: Is known math really enough for AIBody: As an Electronics & Communication Engineering student I've heard some stories and theories about ""The math we have is not enough to complete a thinker-learner AI.""
+
+What is the truth? Is humankind waiting for another Newton to make new calculus or another Einstein-Hawking to complete the quantum mechanics?
+
+If so, what exactly do we need? What will we call it?
+"
+"['machine-learning', 'reinforcement-learning', 'applications', 'credit-assignment-problem']"," Title: How are ""lags"" and ""exogenous factors"" accounted for in reinforcement learning?Body: In reinforcement learning, the system sets some controllable variables, and then determines the quality of the result of the dependent variable(s); using that "quality" to update the algorithm.
+In simple games, this works fine because for each setting there is a single result.
+However, for the real world (e.g. an airflow system), the result takes some time to develop and there is no single precise "pair" result to the conditions set. The flow change takes time and even oscillates a bit as flow stabilizes to a steady-state.
+In practical systems, how is this "lag" accounted for? How are the un-settled (false) results ignored? How is this noise distinguished from exogenous factors (un-controlled system inputs e.g. an open window exposed to wind)?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'deep-neural-networks', 'feedforward-neural-networks']"," Title: How to create Partially Connected NNs with prespecified connections using Tensorflow?Body: I'd like to implement a partially connected neural network with ~3 to 4 hidden layers (a sparse deep neural network?) where I can specify which node connects to which node from the previous/next layer. So I want the architecture to be highly specified/customized from the get-go and I want the neural network to optimize the weights of the specified connections, while keeping everything else 0 during the forward pass AND the backpropagation (connection does not ever exist).
+
+I am a complete beginner in neural networks. I have been recently working with tensorflow & keras to construct fully connected deep networks. Is there anything in tensorflow (or something else) that I should look into that might allow me to do this? I think with tf, I should be able to specify the computational graph such that only certain connections exist but I really have no idea yet where to start from to do this...
+
+I came across papers/posts on network pruning, but it doesn't seem really relevant to me. I don't want to go back and prune my network to make it less over-parameterized or eliminate insignificant connections.
+
+I want the connections to be specified and the network to be relatively sparse from the initialisation and stay that way during the back-propagation.
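+One idea I had (just a sketch of my own, not an established Keras feature for this) is to hard-code the connectivity as a fixed binary mask and multiply the kernel by it inside a custom layer, so that masked-out connections contribute nothing in the forward pass and receive zero gradient in backprop:
+
+import tensorflow as tf
+
+class MaskedDense(tf.keras.layers.Layer):
+    def __init__(self, units, mask, activation=None):
+        super().__init__()
+        self.units = units
+        self.mask = tf.constant(mask, dtype=tf.float32)   # shape (in_dim, units), entries 0 or 1
+        self.activation = tf.keras.activations.get(activation)
+
+    def build(self, input_shape):
+        self.kernel = self.add_weight('kernel', shape=(int(input_shape[-1]), self.units))
+        self.bias = self.add_weight('bias', shape=(self.units,), initializer='zeros')
+
+    def call(self, inputs):
+        # the effective weight is kernel * mask, so missing connections stay at exactly 0
+        return self.activation(tf.matmul(inputs, self.kernel * self.mask) + self.bias)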
+"
+"['optimization', 'programming-languages', 'lisp', 'prolog']"," Title: What are the advantages and disadvantages of using LISP for constraint satisfaction in 3D spaceBody: We are currently working on developing a 3D modeling software that allows designers to set spatial constraints to models. The computer then should generate a 3D mesh conforming to these constraints.
+
+Why should or shouldn't we use Lisp for the constraint satisfaction part? Will Prolog environment be any better? Or should we stick to C/C++ libraries?
+
+One requirement we have is that we want to use the Unity Game Engine as it has a lot of 3D tools built in
+"
+"['computer-vision', 'datasets', 'image-segmentation']"," Title: Which evaluation methods can I use for image segmentation?Body: I implemented an image segmentation pipeline and I trained it on the DICOM dataset. I compared the results of the model with manual segmentation to find the accuracy. Is there other methods for evaluation?
+"
+"['machine-learning', 'probability']"," Title: What are the meanings of these (P(x;y), P(x;y,z),P(x,y;z))?Body: I was reading a machine learning book that uses probabilities like these:
+
+$P(x;y), P(x;y,z), P(x,y;z)$
+
+I couldn't find out what they mean or how I can read and understand them.
+
+Apart from the context, I saw one of these probabilities on it here:
+
+
+"
+"['neural-networks', 'objective-functions', 'backpropagation', 'math', 'gradient-descent']"," Title: How do I calculate the gradient of the hinge loss function?Body: With reference to the research paper entitled Sentiment Embeddings with Applications to Sentiment Analysis, I am trying to implement its sentiment ranking model in Python, for which I am required to optimize the following hinge loss function:
+
+$$\operatorname{loss}_{\text {sRank}}=\sum_{t}^{T} \max \left(0,1-\delta_{s}(t) f_{0}^{\text {rank}}(t)+\delta_{s}(t) f_{1}^{\text {rank}}(t)\right)$$
+
+Unlike the usual mean square error, I cannot find its gradient to perform backpropagation.
+
+How do I calculate the gradient of this loss function?
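+My current thinking (which I have not been able to confirm) is that, because of the max, the loss is only piecewise differentiable, so one would use a subgradient: a term contributes nothing when its margin is already satisfied, and otherwise the derivatives with respect to the two ranking scores are constants:
+
+$$\frac{\partial \operatorname{loss}_{\text{sRank}}}{\partial f_{0}^{\text{rank}}(t)}=\begin{cases}0, & \text{if } 1-\delta_{s}(t) f_{0}^{\text{rank}}(t)+\delta_{s}(t) f_{1}^{\text{rank}}(t) \leq 0\\ -\delta_{s}(t), & \text{otherwise}\end{cases}
+\qquad
+\frac{\partial \operatorname{loss}_{\text{sRank}}}{\partial f_{1}^{\text{rank}}(t)}=\begin{cases}0, & \text{if } 1-\delta_{s}(t) f_{0}^{\text{rank}}(t)+\delta_{s}(t) f_{1}^{\text{rank}}(t) \leq 0\\ \delta_{s}(t), & \text{otherwise}\end{cases}$$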
+"
+"['game-ai', 'poker']"," Title: What's the difference between poker with public cards and without them?Body: Example: Texas Holdem poker vs Texas Holdem poker with the same rounds, just with no public cards dealt.
+Would algorithms, like CFR, approximate the Nash equilibrium more easily? Could AI that does not look at public cards achieve similar performance in normal Texas Holdem as AI that looks at public state tree?
+"
+"['neural-networks', 'activation-functions']"," Title: Is it suitable to find inverse of last layer's activation function and apply it on the target output?Body: I have a neural network with the following structure:
+
+
+I am expecting specific outputs from the neural network which are the target values for my training. Let's say the target values are 0.8 for the upper output node and -0.3 for the lower output node.
+
+The activation functions used for the first 2 layers are ReLU or LeakyReLU, while the last layer uses atan as an activation function.
+
+For backpropagation, instead of adjusting values to make the network's output approach 0.8, -0.3, is it suitable if I use the inverse function of atan (which is tan itself) to get ""the ideal input to the output layer multiplied by weights and adjusted by biases""?
+
+The tan of 0.8 and -0.3 is 0.01396 and -0.00524 approximately.
+
+My algorithm would then adjust weights and biases of the network so that the ""pre-activated output"" of the output layer -which is basically (sum(output_layer_weight*output_layer's inputs)+output_layer_biases)- approaches 0.01396 and -0.00524.
+
+Is this suitable?
+"
+['object-recognition']," Title: Keywords to describe people counting from a camera?Body: The subject matter is to count the number of people in a large room, wherein a camera is placed in a very high ceiling: an example would be Grand Central Station. Faces are not visible: the scalp (top of the head) is visible to the camera as shown in the link's video.
+
+The goal: I would like to perform a Google literature search to assess the work that has been performed on overhead head recognition, however, I am not sure what the best keyword pairs: (scalp? head? people?) to describe the object that is to be recognized from a camera positioned in the ceiling (overhead? bird's eye? satellite?). I'd like the search to return leading-edge (AI) techniques that benchmark results
+"
+"['convolutional-neural-networks', 'java']"," Title: Huge variations in epoch count for highest generalized accuracy in CNNBody: I have written my own basic convolutional neural network in Java as a learning exercise. I am using it to analyze the MIT CBCL face database image set. They are a set of 19x19 pixel greyscale images.
+
+Network specifications are:
+
+Single Convolution Layer with 1 filter:
+Filter Size: 4x4.
+Stride Size: 1
+
+Single Pooling Layer
+2x2 Max Pooling
+
+3 layer MLP(input, 1 hidden and output)
+input = 64 neurons
+hidden = 15 neurons
+output = 2 neurons
+learning rate = 0.1
+
+Now I am getting reasonable accuracy(92.85%), but my issue is that it is being achieved at very different points in the epoch count across network runs:
+
+         Epochs   Training Accuracy   Test Accuracy   Validation Accuracy
+
+Run 1      415         93.13              92.44              93.35
+Run 2      515         92.44              93.18              92.84
+Run 3      327         93.83              92.05              92.38
+
+I am using the Java random class with the same seed for every run to initialize the kernel and the MLP weights, and to break the input data into 3 sets (training is being done using the 33-33-33 method).
+
+I am at a loss as to what is causing this variation in the epoch count at which the highest validation accuracy is reached. Can anybody explain this?
+"
+"['machine-learning', 'reinforcement-learning', 'overfitting', 'dropout']"," Title: Why do you not see dropout layers on reinforcement learning examples?Body: I've been looking at reinforcement learning, and specifically playing around with creating my own environments to use with the OpenAI Gym AI. I am using agents from the stable_baselines project to test with it.
+
+One thing I've noticed in virtually all RL examples is that there never seems to be any dropout layers in any of the networks. Why is this?
+
+I have created an environment that simulates currency prices and a simple agent, using DQN, that attempts to learn when to buy and sell. Training it over almost a million timesteps taken from a specific set of data consisting of one month's worth of 5-minute price data, it seems to overfit a lot. If I then evaluate the agent and model against a different month's worth of data, it performs abysmally. So it sounds like classic overfitting.
+
+But is there a reason why you don't see dropout layers in RL networks? Are there other mechanisms to try to deal with overfitting? Or in many RL examples does it not matter? E.g. there may only be one true way to the ultimate high score in the 'breakout' game, so you might as well learn that exactly, with no need to generalise?
+
+Or is it deemed that the chaotic nature of the environment itself should provide enough different combinations of outcomes that you don't need to have dropout layers?
+"
+"['game-ai', 'monte-carlo-tree-search']"," Title: Is it meaningful to give more weight to the result of monte carlo search with less turn win?Body: I'm programming on Connect6 with MCTS.
+
+Monte Carlo Tree Search is based on random moves. It counts up the number of wins in certain moves. (Whether it wins in 3 turns or 30 turns)
+
+Is the move that wins in fewer turns more powerful than the move that wins in more turns? (MCTS just sees whether it is a win or not, not considering the number of turns it took to win.) And if so, is it meaningful to give a bigger weight to a win that takes fewer turns?
+"
+"['neural-networks', 'machine-learning', 'deep-learning']"," Title: How to detect a Neural Network will work with the whole dataset?Body: I want to implement a neural network on a big dataset. But training time is long (~1h30 per epoch). I'm still in the development process, so I don't want to wait such long time just to have poor results at the end.
+
+This and this suggest that overfitting the network on a very small dataset (1 ~ 20 samples) and reach a loss near 0 is a good start.
+
+I did it and it works great. However, I am looking for the next step of validating my architecture. I tried to overfit my network over 100 samples, but I can't reach a loss near 0 in reasonable time.
+
+How can I ensure the results given by my NN will be good (or not), without having to train it on the whole dataset?
+"
+"['neural-networks', 'machine-learning', 'hidden-layers', 'multilayer-perceptrons']"," Title: How does a single hidden layer affect output?Body: I'm learning about multilayer perceptrons, and I have a quick theory question in regards to hidden layer neurons.
+
+I know we can use two hidden layers to solve a non-linearly separable problem by allowing for a representation with two linear separators. However, we can solve a non-linearly separable problem using only one hidden layer.
+
+This seems fine, but what kind of representation does one hidden layer add? And how is the output of the network affected?
+
+I've drawn a diagram of a multilayer perceptron with one hidden layer neuron. I used this same layout to solve a non-linearly separable problem. The single hidden layer node is inside the red square. Forgive my poor MS-Paint skills.
+
+
+"
+"['neural-networks', 'genetic-algorithms', 'keras', 'evolutionary-algorithms', 'neat']"," Title: NEAT + Keras : reproducibility problem (World Models implementation)Body: I'm trying to apply the World Models architecture to the Sonic game (using the gym-retro library).
+
+My problem concerns the evolutionary algorithm part that I use as the controller (worldmodels = autoencoder + RNN + controller).
+I'm using a genetic algorithm called NEAT (I use the neat-python library). I am searching for someone who can help me with the neat-python implementation.
+
+Here is the method that runs a generation :
+python
+ best_genome = pop.run(popEvaluator.evaluate_genomes, 1)
+
+
+Currently, all the individuals of the population are evaluated on the first level of Sonic The HedgeHog.
+The ""run"" method should return the best genome of the population based on their performance on this level.
+Then, I use this best genome to re-create the associated neural network in order to run it in the same level.
+I was expecting to see the exact same run as the best individual, but this is not the case.
+Sometimes it does, sometimes not.
+
+There are not a lot of examples with NEAT and I based my code on this one from the official documentation.
+
+Here is my own implementation, if you want to check.
+
+If anybody has already used NEAT, help would be welcome!
+"
+"['recurrent-neural-networks', 'sequence-modeling', 'fourier-transform']"," Title: Can the recurrent neural network's input come from a short-time Fourier transform?Body: Can the recurrent neural network input come from a short-time Fourier transform? I mean the input is not from the time-series domain.
+"
+"['neural-networks', 'convolutional-neural-networks', 'fully-convolutional-networks']"," Title: How to handle rectangular images in convolutional neural networks?Body: Almost all the convolutional neural network architecture I have come across have a square input size of an image, like $32 \times 32$, $64 \times 64$ or $128 \times 128$. Ideally, we might not have a square image for all kinds of scenarios. For example, we could have an image of size $384 \times 256$
+
+My question is how do we handle such images during
+
+
+- training,
+- development, and
+- testing
+
+
+of a neural network?
+
+Do we resize the image to fit the input of the neural network, or just crop the image to the required input size?
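+For instance, I could imagine either of these two options (a sketch using PIL; the 224x224 target and the file name are just examples):
+
+from PIL import Image
+
+img = Image.open('example.jpg')                 # e.g. 384 x 256
+
+resized = img.resize((224, 224))                # squashes the aspect ratio to fit the network input
+left, top = (img.width - 224) // 2, (img.height - 224) // 2
+cropped = img.crop((left, top, left + 224, top + 224))   # center crop instead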
+"
+"['convolutional-neural-networks', 'image-recognition', 'training', 'tensorflow', 'datasets']"," Title: Is 1mb an acceptable memory size for images being trained in a CNN?Body: I am using Tensorflow CNN to build an image classification/prediction model. Currently all the images in the dataset are each about 1mb in size.
+
+Most examples out there use very small images.
+
+The image size seems large, but I'm not too sure.
+
+Any thoughts on the feasibility of 1mb images? If not, what can I do to compress them programmatically?
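+For example, something like this is what I had in mind for shrinking the files programmatically (using PIL; the target size and JPEG quality are guesses on my part):
+
+from PIL import Image
+
+img = Image.open('big_image.jpg')
+img.thumbnail((256, 256))                               # downscale in place, keeping the aspect ratio
+img.save('small_image.jpg', quality=85, optimize=True)  # re-encode as a much smaller JPEG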
+"
+"['neural-networks', 'convolutional-neural-networks', 'pooling', 'convolution-arithmetic', 'inception']"," Title: In the inception neural network, how is an image of shape $224 \times 224 \times 3$ converted into one of shape $112 \times 112 \times 64$?Body: According to the original paper on page 4, $224 \times 224 \times 3$ image is reduced to $112 \times 112 \times 64$ using a filter $7 \times 7$ and stride $2$ after convolution.
+
+- $n \times n = 224 \times 224$
+- $f \times f = 7 \times 7$
+- stride: $s = 2$
+- padding: $p = 0$
+
+The output of the convolution is $(((n+2p-f)/s)+1)$ (according to this), so we have $(n+2p-f)=(224+0-7)=217$, then we divide by the stride, i.e. $217/2=108.5$ (taking the lower value), then we add 1, i.e. $108+1=109$.
+How do we get an output image of $112$ now?
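+My own guess (an assumption, since I did not find the padding stated explicitly) is that a padding of $p=3$ is used, which makes the arithmetic come out to $112$:
+
+$$\left\lfloor \frac{n + 2p - f}{s} \right\rfloor + 1 = \left\lfloor \frac{224 + 2 \cdot 3 - 7}{2} \right\rfloor + 1 = \left\lfloor \frac{223}{2} \right\rfloor + 1 = 111 + 1 = 112$$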
+"
+['neural-networks']," Title: How should I label the classes in RNA?Body: I have a project, which is the keyboard biometrics of users.
+
+Suppose I have 3 users.
+I do not know how to label the samples with the two classes (+1, -1).
+
+If I want to verify the identity of user 1, my idea of class labeling would be:
+
+ TIMES LABEL
+user 1
+9.4 9.2 1.0 3.4 0.5 1
+9.4 9.2 1.0 3.4 0.5 1
+9.4 9.2 1.0 3.4 0.5 1
+9.4 9.2 1.0 3.4 0.5 1
+9.4 9.2 1.0 3.4 0.5 1
+
+user 2
+0.1 3.2 1.0 1.2 1.7 -1
+3.4 1.2 3.0 1.1 2.8 -1
+2.4 2.2 3.0 1.6 2.9 -1
+1.4 3.2 2.0 2.6 3.6 -1
+3.4 0.2 3.0 2.7 3.5 -1
+
+user N
+0.2 1.4 4.5 3.7 2.9 -1
+9.2 1.5 7.6 2.6 2.6 -1
+9.3 1.6 7.5 2.9 3.4 -1
+9.8 3.8 6.6 2.8 2.5 -1
+9.8 2.8 1.7 3.8 1.6 -1
+
+
+But as my system gets more and more users, the -1 class will become much larger than the +1 class.
+How should I label the classes?
+"
+"['machine-learning', 'reinforcement-learning', 'robotics']"," Title: Reinforcement learning for segmenting the robot path to reflect the true distancesBody: I have a grid of rectangles acting as blocks. The robot traverses through the inter-spaces between these consecutive blocks. Now I have sensor data streaming in representing Right and left wheel speeds. Based on the differences in the speeds of the left and right wheels, I infer the robot's position and the path it has threaded. I get the associated individual segments of the total distance when it travels straight, left, or right.
+
+These distances are a function of the actual speed of the robot and the time interval elapsed before the end of that activity. These computed distances for the segments though don't map and fit-in well when projected on the grid layout of the environment. The segments are rather not adhering to the boundary limitations.
+
+I wanted to know if I can use RL to force the calculated distances to fit in with the layout given certain knowledge (or conditions, if you will): the start and end position of the robot and the inter-space distances.
+
+If not RL, do you know how I can solve this problem? I suspect my function computing the distances is off, and I am wondering if RL can help me figure out the right mapping of sensor data to the path traveled, adhering to the grid layout dimensions.
+
+
+
+If you consider the illustration above, you will notice S, D, and D' signifying the starting position, the true destination, and the destination location computed by adding together the calculated distances for each of the segments representing right (r), left (l) and straight (s) moves along the path towards the destination. The inter-space length is given as 7m and the dimensions of the blocks are (27m x 15m). If you look at the data presented on the left side, you will notice that an 18m left and a consecutive 24m right represent the activity, in the grid, of the passage through the blocks. Granted -- perhaps the car negotiates the edges and corners through this passage in protracted left (l) and right (r) movements, without necessarily going straight (s) to straddle and link the turns as one would expect.
+
+The question arises, however, when you take these individual segment lengths into account and stitch them together: you end up at a destination that is not in the ballpark of the expected value. How can we design this problem so as to employ RL methods to, sort of, impose these grid dimensional constraints on this distance calculation methodology and yield better results? Or is it best to re-imagine the whole problem so it is amenable to the application of RL?
+
+Any advice/ insights would be appreciated.
+"
+"['genetic-algorithms', 'neat']"," Title: NEAT - Managing species across generationsBody: I (mis?)understood the NEAT algorithm has the following steps:
+
+
+- Create a genome pool with N random genomes
+- Calculate each genome fitness
+- Assign each genome to a species
+- Calculate the adjusted fitness and the number of offspring of each species
+- Breed each species through mutation/crossover from the stronger genomes
+- go to step 2.
+
+
+Step 3 is tricky: speciation is done by placing each genome G in the first species in which it is compatible with the representative genome of that species, or in a new species if G is not compatible with any existing species. Compatible here means having a compatibility distance below a certain threshold. Regarding the representative genome, the NEAT paper says:
+
+
+ Each existing species is represented by a random genome inside the
+ species from the previous generation
+
+
+Somewhere I've found that keeping the number of species stable is good, and this is achieved automatically with dynamic thresholding. However, dynamic thresholding makes it hard to evaluate species behaviour across generations.
+
+Let me give one example:
+Assume that in Generation 20, Species 1 has Genome A as representative and Species 2 has Genome B as representative. Assume elitism is implemented.
+
+As the representative genome is taken from the previous generation, assume that in Generation 21, Genomes A and B are still the representatives for Species 1 and 2; however, assume the compatibility threshold has changed (i.e. it is bigger) in order to reach the target species number. With this change, A and B now have a compatibility distance lower than the threshold and should be placed in the same species; however, they are representatives of different species.
+
+How to solve this issue?
+
+More generally, with dynamic thresholding, how do I make sure species management across generations is consistent? E.g. the NEAT paper also says:
+
+
+ If the maximum fitness of a species did not improve in 15 generations,
+ the networks in the stagnant species were not allowed to reproduce.
+
+
+How to make sure that across all 15 generations, we are still considering that same single species and this has not drastically changed (so that they are actually different 'objects'?). E.g. in the example above, if A and B are both placed in Species 1 in Generation 21, Species 2 no longer represents what it represented in Generation 20.
+"
+"['neural-networks', 'machine-learning', 'natural-language-processing']"," Title: Pre priming a network for white spaceBody: When a human looks at a page. He notices the sets of letters are grouped together separated by white space. If the white space was replaced by another character say z, it would be harder to distinguish words.
+
+For a neural network, spaces are ""just another character"". How can we set up an RNN so it gives special importance to the difference between certain characters like white spaces and letters so that it will train faster? Assume the input is just a sequence of ASCII characters.
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'overfitting']"," Title: Interpretation of a good overfitting scoreBody: As shown below, my deep neural network is overfitting :
+
+
+
+where the blue lines are the metrics obtained on the training set and the red lines on the validation set.
+
+Is there anything I can infer from the fact that the accuracy on the training set is really high (almost 1)?
+
+From what I understand, it means that the complexity of my model is enough / too big. But does it mean my model could theoretically reach such a score on the validation set with the same dataset and appropriate hyperparameters? With the same hyperparameters but a bigger dataset?
+
+My question is not how to avoid overfitting.
+"
+"['convolutional-neural-networks', 'classification', 'objective-functions']"," Title: How to define a loss function for a classifier where the confusion between some classes is more important than the confusion between others?Body: I have a dataset of images belonging to $N$ classes, $A_1, A_2...A_n,B_1,B_2...B_m$ and I want to train a CNN to classify them. The classes can be considered as subclasses of two broader classes $A$ and $B$, therefore the confusion between $A_i$ and $A_j$ is much less problematic than the confusion between $A_i$ and $B_j$. Therefore I want the CNN to be trained in such a way that the difference between $A_i$ and $B_j$ is considered as more relevant.
+
+1) Are there any loss functions that take this requirement into account? Could a weighted cross entropy work in this case?
+
+2) How would this loss change if the classes were unbalanced?
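+For example, I was considering something along these lines (a PyTorch sketch of my own; the cost values and the extra penalty term are arbitrary choices, not taken from a paper):
+
+import torch
+import torch.nn.functional as F
+
+def cost_sensitive_loss(logits, targets, cost_matrix, lam=1.0):
+    # cost_matrix[i, j] = cost of predicting class j when the true class is i,
+    # e.g. 1.0 for confusions inside A or inside B, 5.0 for A <-> B confusions, 0.0 on the diagonal
+    ce = F.cross_entropy(logits, targets)
+    probs = F.softmax(logits, dim=1)
+    expected_cost = (probs * cost_matrix[targets]).sum(dim=1)
+    return ce + lam * expected_cost.mean()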
+"
+['reinforcement-learning']," Title: Dyna-Q algorithm, having trouble when adding the simulated experiencesBody: I'm trying to create a simple Dyna-Q agent to solve small mazes, in python. For the Q function, Q(s, a), I'm just using a matrix, where each row is for a state value, and each column is for one of the 4 actions (up, down, left, right).
+
+I've implemented the ""real experience"" part, which is basically just straightforward SARSA. It solves a moderately hard maze (i.e., it has to go around a few obstacles) in 2000-8000 steps (in the first episode; this will no doubt decrease with more episodes). So I know that part is working reliably.
+
+Now, adding the part that simulates experience based on what it knows of the model to update the Q values more, I'm having trouble. The way I'm doing it is to keep an experiences list (a lot like experience replay), where each time I take a real action, I add its (S, A, R, S') to that list.
+
+Then, when I want to simulate an experience, I take a random (S, A, R, S') tuple from that list (David Silver mentions in his lecture (#8) on this that you can either update your transition probability matrix P and reward matrix R by changing their values or just sample from the experience list, which should be equivalent). In my case, with a given S and A, since it's deterministic, R and S' are also going to be the same as the ones I sampled from the tuple. Then I calculate Q(S, A) and max_A'(Q(S', A')), to get the TD error (same as above), and do stochastic gradient descent with it to change Q(S, A) in the right direction.
+
+But it's not working. When I add simulated experiences, it never finds the goal. I've tried poking around to figure out why, and all I can see that's weird is that the Q values continually increase as time goes on (while, without experiences, they settle to correct values).
+
+Does anyone have any advice about things I could try? I've looked at the sampled experiences, the Q values in the experience loop, the gradient, etc... and nothing really sticks out, aside from the Q values growing.
+
+edit: here's the code. The first part (one step TD learning) is working great. Adding the planning loop part screws it up.
+
+def dynaQ(self, N_steps=100, N_plan_steps=5):
+
+ self.initEpisode()
+ for i in range(N_steps):
+ #Get current state, next action, reward, next state
+ s = self.getStateVec()
+ a = self.epsGreedyAction(s)
+ r, s_next = self.iterate(a)
+ #Get Q values, Q_next is detached so it doesn't get changed by the gradient
+ Q_cur = self.Q[s, a]
+ Q_next = torch.max(self.Q[s_next]).detach().item()
+ TD0_error = (r + self.params['gamma']*Q_next - Q_cur).pow(2).sum()
+ #SGD
+ self.optimizer.zero_grad()
+ TD0_error.backward()
+ self.optimizer.step()
+ #Add to experience buffer
+ e = Experience(s, a, r, s_next)
+ self.updateModel(e)
+
+ for j in range(N_plan_steps):
+
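+            #Planning: replay one randomly sampled stored (s, a, r, s') transition with the same TD(0) update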
+ xp = self.experiences[randint(0,len(self.experiences)-1)]
+ Q_cur0 = self.Q[xp.s, xp.a]
+ Q_next0 = torch.max(self.Q[xp.s_next]).detach().item()
+ TD0_error0 = (xp.r + self.params['gamma']*Q_next0 - Q_cur0).pow(2).sum()
+
+ self.optimizer.zero_grad()
+ TD0_error0.backward()
+ self.optimizer.step()
+
+"
+"['machine-learning', 'reinforcement-learning', 'value-iteration', 'reward-clipping']"," Title: Should the reward or the Q value be clipped for reinforcement learningBody: When extending reinforcement learning to the continuous states, continuous action case, we must use function approximators (linear or non-linear) to approximate the Q-value. It is well known that non-linear function approximators, such as neural networks, diverge aggressively. One way to help stabilize training is using reward clipping. Because the temporal difference Q-update is a bootstrapping method (i.e., uses a previously calculated value to compute the current prediction), a very large previously calculated Q-value can make the current reward relatively minuscule, thus making the current reward not impact the Q-update, eventually leading the agent to diverge.
+
+To avoid this, we can try to avoid the large Q-value in the first place by clipping the reward to [-1, 1].
+
+But I have seen some other people say that instead of clipping the reward itself, we can instead clip the Q-value between an interval.
+
+I was wondering which method is better for convergence, and under what assumptions / circumstances. I was also wondering if there are any theoretical proofs/explanations about reward/Q-value clipping and which one being better.
+"
+['neural-networks']," Title: Would this work to prevent forgetting: train a neural net with N nodes. Then, add more nodes and stop training the original nodesBody: Would this work at all?
+
+The idea is to start training a neural net with some number of nodes. Then, add some new nodes and more layers and start training only the new nodes (or only modifying the old nodes very slightly). Ideally, we would connect all old nodes to the newly added layer, since we might have learned many useful things in the hidden layers. Then repeat this many times.
+
+Intuition is that if the old nodes give bad information the new layer of nodes will weight the activations of old nodes close to zero and learn new/better concepts in the new nodes. The benefit is that we will keep old knowledge forever.
+
+The caveat is that the network can still temporarily ""forget"" concepts if a new layer weights old information close to zero, but it can potentially remember it again too.
+
+If this completely fails, I'm curious if there's some known way to prevent a neural network from forgetting concepts it learned.
+"
+['datasets']," Title: People Counting Video Analytics: data acquisition parametersBody: I am interested in understanding how to choose data-acquisition parameters for the subject matter:
+
+
+- Frame Resolution
+- Frame rates (FPS)
+
+
+The goal is to have 'enough' (preferably the minimal) resolution and frames to enable AI to identify people.
+
+QUESTIONS
+
+
+- Are there any published rules of thumb or processes to select video parameters?
+- Is there a term or label for the selection of video parameters for AI projects?
+
+"
+['deep-learning']," Title: Deep learning model training and processing requirement for Traffic dataBody: I am a newbie in deep learning and am looking for advice on predicting traffic congestion events. I have a table for vehicle travel times data, another table with the road length segmented based on stop locations. I am thinking to derive the time-wise route-specific speed details based on stop locations. After initial data cleansing and messaging, my input parameters are the time and stop location with actual speed details. I train my model with the training dataset and validate it as per the deep learning recommended approach.
+
+So my questions are:
+
+
+- Is this approach correct or how can I improve it? I am not sure if
+the number of inputs can be increased for better results.
+- Which activation method will be best to utilize to get a range of conditions/event types rather than binary 1 or 0?
+- This will require dealing with a bigger dataset of at least a few GBs. This will evolve into around 200GBs in the final product. Can I use my professional grade laptop to process this data, or should I consider moving to Big Data processing power?
+
+
+Please advise. Thanks in advance for your help.
+"
+"['machine-learning', 'deep-learning', 'natural-language-processing', 'python', 'semantics']"," Title: How to find the category of a technical text on a surface-semantic-levelBody: There are some predefined categories( Overview, Data Architecture, Technical Details, Applications, etc). The requirement is to classify the input text of paragraphs into their resp. category. I can't use any pre-trained word embeddings (Word2Vec, Glove) because the data entered is not in general English ( talking about dogs, environment, etc) but pure technical (How does a particular program orks, steps to download anaconda, etc). Don't have any data available on the internet to train as well. Anything that understands semantic-surface-level of a sentence will work
+"
+"['keras', 'long-short-term-memory', 'audio-processing']"," Title: Difficulty understanding Keras LSTM fitting dataBody: I'm try to train a RNN with a chunk of audio data, where X and Y are two audio channels loaded into numpy arrays. The objective is to experiment with different NN designs to train them to transform single channel (mono) audio into a two channel (stereo) audio.
+
+My questions are:
+
+
+- Do I need a stateful network type, like LSTM? (I think yes.)
+- How should I organize the data, considering that there are millions of samples and I can't load into memory a matrix of each window of data in a reasonable time-span?
+
+
+For example if I have an array with: [0, 0.5, 0.75, 1, -0.5, 0.22, -0.30 ...] and I want to take a window of 3 samples, for example. I guess I need to create a matrix with every sample shift like this, right?
+
+[[0.00, 0.50, 0.75]
+ [0.50, 0.75, 1.00]
+ [0.75, 1.00,-0.50]
+ [1.00,-0.50, 0.22]]
+
+
+Where is my batch_size? Should I make the matrix like this per each sample shift? Per each window? This may be very memory consuming if I intend to load a 4 min song.
+
+Is this example matrix a single batch? A single sample?
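+To make my question concrete, this is roughly how I picture building the windows and shaping them for the LSTM (a sketch; the window length of 3 and the tiny array are only for illustration):
+
+import numpy as np
+
+x = np.array([0, 0.5, 0.75, 1, -0.5, 0.22, -0.30])
+window = 3
+X = np.stack([x[i:i + window] for i in range(len(x) - window)])
+X = X[..., np.newaxis]    # shape (num_windows, window, 1) = (samples, timesteps, features)
+print(X.shape)            # (4, 3, 1) for this toy array
+# model.fit(X, Y, batch_size=32) would then let Keras slice these samples into batches of 32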
+"
+"['reinforcement-learning', 'linear-algebra']"," Title: Using reinforcement learning to find a preconditioner for linear systems of the form Ax = bBody: Sparse linear systems are normally solved by using solvers like MINRES, Conjugate gradient, GMRES.
+
+Efficient preconditioning, i.e., finding a matrix P such that PAx = Pb is easier to solve than the original problem, can drastically reduce the computational effort to solve for x. However, preconditioning is normally problem-specific and there is not ONE preconditioner that works well for every problem.
+
+I thought this would be an interesting problem to apply RL since there are certain norms (e.g. condition number of matrix PA) to measure if P is a good preconditioner, but I could not find any research in this field.
+
+Is there a specific problem why RL could not be applied?
+"
+"['convolutional-neural-networks', 'classification']"," Title: Recognising Noise in Simple ClassificationBody: I have created a classifier for some simple gestures using an input layer, a hidden layer with tanh activation and an output softmax layer. I'm also using the Adam optimiser. The network classifies perfectly with validation data. However, I'd like it to be able to take in random noise that looks nothing like the shapes and not be able to classify it confidently. For example:
+
+One gesture input looks like this and is correctly classified as gesture 'A':
+
+
+However, when I pass this 'noise', which is clearly distinguishable to the human eye, as input, it still classifies it with 100% confidence that it is the same gesture 'A'.
+
+
+
+I assume it's because the inputs are still very close to 0? My instinct is to scale up the inputs perhaps to increase the differentiation between the noise and the input. However, in real operation the noise will all be on a similar scale to the inputs and I won't know what is noise and what isn't so I will still have to apply the same scaling to that noise. Will I run into the same problem?
+
+On a more general note, is there a teaching approach to prevent misclassifications, particularly if we know what they might look like? For example, in this case I thought I could perhaps generate some noise and use it at training time to create an extra noise class, or is it just best to come up with such a well-trained network that you can use some sort of confidence threshold? For example, if the network only produces 50% classification confidence for an input, then I can discard it as noise. Any suggestions much appreciated!
+"
+"['neural-networks', 'convolutional-neural-networks']"," Title: Maxpooling in inception?Body: Maxpooling is performed as one of the steps in inception which yields same output dimension as that of the input.
+Can anyone explain how this max pooling is performed?
+"
+"['neural-networks', 'semantics']"," Title: Choosing Instance Semantic DetectionBody: A fixed video camera records people moving through its field of view.
+
+The goal is to detect and track the head, in real-time, as it moves through the video. Typically there are many heads, which are often partially obscured. This example video boxes heads and provides a head count.
+
+There seems to be many different models. Examples include:
+
+
+
+Given the context of the video, what is the thought process that you would use to choose a model?
+"
+"['game-ai', 'search', 'gaming', 'minimax', 'efficiency']"," Title: Transposition table is only used for roughly 17% of the nodes - is this expected?Body: I'm making a Connect Four game using the typical minimax + alpha-beta pruning algorithms. I just implemented a Transposition Table, but my tests tell me the TT only helps 17% of the time. By this I mean that 17% of the positions my engine comes across in its calculations can be automatically given a value (due to the position being calculated previously via a different move order).
+
+For most games, is this figure expected? To me it seems very low, and I was optimistically hoping for the TT to speed up my engine by around 50%. It should be noted though that on each turn in the game, I reset my TT (since the evaluation previously assigned to each position is inaccurate due to lower depth back then).
+
+I know that the effectiveness of TTs is largely dependent on the game they're being used for, but any ballpark figures for how much they speed up common games (chess, go, etc.) would be helpful.
+
+EDIT - After running some more tests and adjusting my code, I found that the TT sped up my engine to about 133% (so it took 75% as much time to calculate). This means those 17% of nodes were probably fairly high up in the tree, since not having to calculate the evaluation of these 17% sped things up by 33%. This is definitely better, but my question still remains: is this roughly the expected performance of a typical TT?
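+
+For context, my TT is conceptually something like this (a simplified sketch in Python rather than my actual engine code):
+
+# key = hash of the board position, value = (depth searched, evaluation)
+transposition_table = {}
+
+def lookup(position_hash, depth):
+    entry = transposition_table.get(position_hash)
+    if entry is not None and entry[0] >= depth:
+        return entry[1]          # reuse the stored evaluation
+    return None
+
+def store(position_hash, depth, value):
+    transposition_table[position_hash] = (depth, value)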
+"
+"['machine-learning', 'deep-learning']"," Title: Can AI 'fix' heavily compessed videos/photos?Body: So let's say you had a really nice day in a flight simulator and you are getting videos of this type of quality:
+
+
+
+This is Full HD (1080p), but heavily compressed. You can literally see the pixels. Now I tried to use something like RAISR, and this python implementation, but it only scales the image up and does not 'fix the thicc pixels'.
+
+So is there a type of AI that can restore this kind of video/photo to reasonable quality? I just want to get rid of those blocky pixels and the image artefacts that were generated during compression.
+"
+"['machine-learning', 'recurrent-neural-networks', 'long-short-term-memory']"," Title: What should I do when I have a variable-length sequence when instantiating an LSTM in Keras?Body: In keras, when we use an LSTM/RNN model, we need to specify the node [i.e., LSTM(128)]. I have a doubt regarding how it actually works. From the LSTM/RNN unfolding image or description, I found that each RNN cell take one time step at a time. What if my sequence is larger than 128? How to interpret this? Can anyone please explain me? Thank in advance.
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'overfitting']"," Title: How to improve testing accuracy when training accuracy is high?Body: Following-up my question about my over-fitting network
+
+My deep neural network is over-fitting :
+
+
+I have tried several things :
+
+
+- Simplify the architecture
+- Apply more (and more !) Dropout
+- Data augmentation
+
+
+But I always reach similar results: the training accuracy eventually goes up, while the validation accuracy never exceeds ~70%.
+
+I think I simplified the architecture enough / applied enough dropout, because at that point my network is too dumb to learn anything and returns random results (3-class classifier => 33% is random accuracy), even on the training dataset:
+
+
+My question is: is this accuracy of 70% the best my model can reach?
+
+If yes :
+
+
+- Why does the training accuracy reach such high scores, and so fast, given that this architecture does not seem up to the task?
+- Is my only option to improve the accuracy then to change my model?
+
+
+If no :
+
+
+- What are my options to improve this accuracy?
+
+
+I've tried a bunch of hyperparameters, and much of the time, regardless of these parameters, the accuracy does not change a lot, always reaching ~70%. However, I can't exceed this limit, even though it seems easy for my network to reach it (short convergence time).
+
+Edit
+
+Here is the Confusion matrix :
+
+
+
+I don't think the data or the balance of the classes is the problem here, because I used a well-known / well-explored dataset: the SNLI Dataset
+
+And here is the learning curve :
+
+
+
+Note: I used accuracy instead of error rate, as pointed out in the resource by Martin Thoma
+
+It's a really ugly one. I guess there is some problem here.
+Maybe the problem is that I used the result after 25 epochs for every value. So with little data, the training accuracy doesn't really have time to converge to 100% accuracy. And for bigger training data, as shown in the earlier graphs, the model overfits, so the accuracy is not the best one.
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'papers', 'residual-networks']"," Title: Why are there transition layers in DenseNet?Body: The DenseNet architecture can be summarizde with this figure:
+
+
+Why are there transition layers between the dense blocks?
+
+In the paper, the authors justify the use of transition layers as follows:
+
+
+ The concatenation operation used in Eq. (2) is not viable when the size of feature-maps changes. However, an essential part of convolutional networks is pooling layers that change the size of feature-maps. To facilitate pooling in our architecture we divide the network into multiple densely connected dense blocks
+
+
+So, if I understood correctly, the problem is that the feature map size can change, thus we can't concatenate. But how does the addition of transition layers solve this problem?
+
+And how can several dense blocks connected like this be more efficient than one single bigger dense block?
+
+Furthermore, why are all standard DenseNets made of 4 dense blocks? I guess I will have the answer to this question if I understood better the previous questions.
+"
+"['neural-networks', 'deep-learning', 'tensorflow', 'reference-request', 'robots']"," Title: Can we combine multiple different neural networks in one?Body: I want to make a kind of robotic brain, i.e. a big neural network, which includes an NLP model (for understanding human voice), real-time object recognition system (so that it can identify particular objects), a face recognition model (for identifying faces), etc.
+Is it possible to build a huge neural network in which we combine all these separate models, so that we can use all 3 models' capabilities at the same time, in parallel?
+For example, if I ask the robot, using the microphone, "Can you see that table or that boy?", the robot would start recognizing the objects and faces, then answer me back by speaking if it could identify them or not.
+If this is possible, can you kindly share your idea of how I can implement this? Or is there any better way to make such an AI (e.g. in TensorFlow)?
+"
+"['ai-design', 'terminology', 'definitions', 'ontology']"," Title: What are ontologies in AI?Body: What exactly are ontologies in AI? How should I write them and why are they important?
+"
+"['machine-learning', 'ai-design', 'image-recognition', 'intelligence-testing']"," Title: How to use Machine Learning to create a ""Draw-A-Person Test""Body: The process revolves around a child's drawing. Each part of each drawing corresponds to a score as in the Draw a Person Test conceived by Dr. Florence Goodenough in 1926. The goal of the machine is to measure a child's mental age through a figure drawing task.
+"
+"['game-ai', 'checkers']"," Title: Checkers AI game enginesBody: I have coded an AI checkers game but would like to see how good it is. Some people have informed me to use the Chinook AI opensource code. But I am having trouble trying to integrate that software into my AI code. How do I integrate another game engine in checkers with the AI I have coded?
+"
+"['deep-learning', 'convolutional-neural-networks', 'python', 'keras', 'object-recognition']"," Title: What are the ways to calculate the error rate of a deep Convolutional Neural Network, when the network produces different results using the same data?Body: I am new to the object recognition community. Here I am asking about the broadly accepted ways to calculate the error rate of a deep CNN when the network produces different results using the same data.
+
+1. Problem introduction
+
+Recently I was trying to replicate some classic deep CNNs for object recognition tasks. The inputs are 2D image data containing objects, and the outputs are the identification/classification results for those objects. The implementation involves the use of Python and Keras.
+
+The problem I was facing is that I may get different validation results across multiple runs of training, even when using the same training/validation data sets. That made it hard to report the error rate of the model, since the validation result may be different every time.
+
+I think this difference is because of the randomness involved in different aspects of a deep CNN, such as the random initialization, the random 'dropout' used for regularization, the 'shuffle' of the training data between epochs, etc. But I do not yet know the “right” ways to deal with this difference when I want to calculate the error rate in the object recognition field.
+
+2. My exploration – online search
+
+I have found some answers online here. The author proposed two ways, and he/she recommended the first one shown below:
+
+
+ The traditional and practical way to address this problem is to run your network many times (30+) and use statistics to summarize the performance of your model, and compare your model to other models.
+
+
+The second way he/she introduced is to go to every relevant aspect of the deep CNN and deliberately ""freeze"" its randomness. This kind of approach is also described in the Keras Q&A here. They call this issue “obtaining reproducible results”.
+
+3. My exploration – in the academic community (no result yet, need your help!)
+
+Since I was not sure whether the two ways mentioned above are the “right”, broadly accepted ones, I went on to explore further in the object recognition research community.
+
+Right now I have just begun reading from the ImageNet website, but I have not found the answer yet. Maybe you could help me find the answer more easily. Thanks!
+
+Daqi
+"
+"['pattern-recognition', 'feature-selection']"," Title: How to recognize non-circular radial symmetry in images?Body: This is a question about pattern recognition and feature extraction.
+
+I am familiar with Hough transforms, the Fast Radial Transform and variants (e.g., GFRS), but these highlight circles, spheres, etc.
+
+I need an image filter that will highlight the centroid of a series of spokes radiating from it, such as the center of an asterisk or the hub of a bicycle wheel (even if the round wheel itself is obscured). Does such a filter exist?
+"
+['feedforward-neural-networks']," Title: Learning an arbitrary function using a feedforward netBody: I would like to get a simple example running in Matlab that will use a neural net to learn an arbitrary function from input/output data (basically model identification) and then be able to approximate that function from just the input data. As a means of training this net, I have implemented a simple backpropagation algorithm in Matlab, but I was not able to get anywhere close to satisfactory results. I would like to know what I may be doing wrong and also what approach I could use instead.
+
+The goal is to have the network represent an identified function f(x) which takes a series x as input and outputs the learned mapping from x -> y.
+
+Here is the GNU octave code I have so far:
+
+pkg load control signal
+
+function r = sigmoid(z)
+ r = 1 ./ (1 + exp(-z));
+end
+
+function r = linear(z)
+ r = z;
+end
+
+function r = grad_sigmoid(z)
+ r = sigmoid(z) .* (1 - sigmoid(z));
+end
+
+function r = grad_linear(z)
+ r = 1;
+end
+
+function r = grad_tanh(z)
+ r = 1 - tanh(z) .^ 2;
+end
+
+function nn = nn_init(n_input, n_hidden1, n_hidden2, n_output)
+ nn.W2 = (rand(n_input, n_hidden1) * 2 - 1)';
+ nn.W3 = (rand(n_hidden1, n_hidden2) * 2 - 1)';
+ nn.W4 = (rand(n_hidden2, n_output) * 2 - 1)';
+ nn.lambda = 0.005;
+end
+
+function nn = nn_train(nn_in, state, action)
+ nn = nn_in;
+
+ [out, nn] = nn_eval(nn, state);
+
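+ % backpropagate: delta at the output layer, then hidden-layer deltas via the chain rule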
+ d4 = (nn.a4 - action) .* grad_linear(nn.W4 * nn.a3);
+ d3 = (nn.W4' * d4) .* grad_tanh(nn.W3 * nn.a2);
+ d2 = (nn.W3' * d3) .* grad_tanh(nn.W2 * nn.a1);
+
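+ % gradient-descent weight updates scaled by the learning rate nn.lambda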
+ nn.W4 -= nn.lambda * (d4 * nn.a3');
+ nn.W3 -= nn.lambda * (d3 * nn.a2');
+ nn.W2 -= nn.lambda * (d2 * nn.a1');
+end
+
+function [out,nn] = nn_eval(nn_in, state)
+ nn = nn_in;
+
+ nn.z1 = state;
+ nn.a1 = nn.z1;
+
+ nn.a2 = tanh(nn.W2 * nn.a1);
+ nn.a3 = tanh(nn.W3 * nn.a2);
+ nn.a4 = linear(nn.W4 * nn.a3);
+
+ out = nn.a4;
+end
+
+nn = nn_init(1, 100, 100, 1);
+t = 1:0.1:3.14*10;
+input = t;
+output = sin(input);
+learned = zeros(1, length(output));
+
+for j = 1:500
+ for i = 1:length(input)
+ nn = nn_train(nn, [input(i)], [output(i)]);
+ end
+ j
+end
+
+for i = 1:length(input)
+ learned(i) = nn_eval(nn, [input(i)]);
+end
+
+plot(t, output, 'g', t, learned, 'b');
+
+pause
+
+
+Here is the result:
+
+
+The result is not even close to where I want it to be. Has it got something to do with my implementation of back propagation?
+
+What changes do I need to do to the code to get a better approximation going?
+"
+"['reinforcement-learning', 'game-ai', 'backpropagation']"," Title: How do I know how changes in the weights are changing the reward in Reinforcement LearningBody: I already know the basics of the basic of Machine Learning. E.g.: Backpropagation, Convolution, etc.
+
+First off, let me explain reinforcement learning to make sure I have grasped the concept correctly.
+
+In reinforcement learning, a randomly-initialized network will first ""play""/""do"" a sequence of moves in an environment (in this case, a game). After that, it will receive a reward $r$. Furthermore, a q-value $q$ gets defined by the engineer/hobby coder. This reward times $q$ to the power of the position $n$ of the action will be fed back using BP.
+
+So how do I know how slight changes in $\vec{w}$ are changing $rq^n$?
+"
+"['agi', 'computational-complexity']"," Title: Is time/space estimation of possible actions required for creating an AGI?Body: Given infinite resources/time, one could create AGIs by writing code to simulate infinite worlds. By doing that, in some of the worlds, AGIs would be created. Detecting them would be another issue.
+Since we don't have infinite resources, the most probable way to create an AGI is to write some bootstrapping code that would reduce the resources/time to reasonable values.
+In that AGI code (that would make it reasonable to create with finite resources/time) is it required to have a part that deals with time/space estimation of possible actions taken? Or should that be outside of the code and be something the AGI discovers by itself after it starts running?
+Any example of projects targeting AGI that are using time/space estimation might be useful for reaching a conclusion.
+Clarification, by time/space I mean time/space complexity analysis for algorithms, see: Measures of resource usage and Analysis of algorithms
+I think the way I formulated the question might lead people to think that the time/space estimation can only apply to some class of actions called algorithms. To clarify my mistake, I mean the estimation to apply to any action plan.
+Imagine you are an AGI and you have to make a choice between different sets of actions to pursue your goals. If you had 2 candidate plans and one of them used less space and less time, then you would always pick it over the other. So time/space estimation is very useful, since intelligence is about efficiency. There is at least one exception, though: imagine, in the example before, that the goal of the AGI is to pick the set of actions that leads to the most expensive time/space cost (or any non-minimal time/space cost); then, obviously, because of the goal constraint, you would pick the most time/space-expensive set of actions. In most other cases, though, you would just pick the most time/space-efficient plan.
+"
+"['reinforcement-learning', 'intelligent-agent']"," Title: What does the agent in reinforcement learning exactly do?Body: What is an agent in reinforcement learning (RL)? I think it is not the neural network behind. What does the agent in RL exactly do?
+"
+"['machine-learning', 'natural-language-processing', 'pattern-recognition', 'optical-character-recognition']"," Title: How could I use machine learning to detect text and non-text regions in scanned documents?Body: I have a collection of scanned documents (which come from newspapers, books, and magazines) with complex alignments for the text, i.e. the text could be at any angle w.r.t. the page. I can do a lot of processing for different features extraction. However, I want to know some robust methods that do not need many features.
+Can machine learning be helpful for this purpose? How could I use machine learning to detect text and non-text regions in these scanned documents?
+"
+"['reinforcement-learning', 'q-learning', 'function-approximation', 'generalization', 'state-representations']"," Title: How can my Q-learning agent trained to solve a specific maze generalize to other mazes?Body: I implemented Q-learning to solve a specific maze. However, it doesn't solve other mazes. How could my Q-learning agent be able to generalize to other mazes?
+"
+"['neural-networks', 'natural-language-processing', 'long-short-term-memory', 'objective-functions']"," Title: How to understand marginal loglikelihood objective function as loss function (explanation of an article)?Body: I am reading article https://allenai.org/paper-appendix/emnlp2017-wt/ http://ai2-website.s3.amazonaws.com/publications/wikitables.pdf about training neural network and the loss function is mentioned on page 6 chapter 3.4 - this loss function O(theta)
is expressed as marginal loglikelihood objective function. I simply does not understand this. The neural network generates logical expression (query) from some question in natural language. The network is trained using question-answer pairs. One could expect that simple sum of correct-1/incorrect=0 result could be good loss function. But there is strange expression that involves P(l|qi, Ti; theta)
that is not mentioned in the article. What is meant by this P
function? As I understand, then many logical forms l
are generated externally for some question qi
. But further I can not understand this. The mentioned article largely builds on other article http://www.aclweb.org/anthology/P16-1003 from which it borrows some terms and ideas.
+
+It is said that l
is treated as latent variable and P
seems to be some kind of probability. Of course, we should assign the greated probability to the right logical form l
, but where can I find this assignment. Does training/supervision data should contain this probability function for training/supervision data?
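+
+To make my question concrete, the general shape I would expect a marginal log-likelihood over a latent logical form to take is something like the following (this is my own reconstruction of the idea, not a formula quoted from the paper):
+
+$$O(\theta) = \sum_{i} \log \sum_{l \in \mathcal{L}_i} P(l \mid q_i, T_i; \theta)$$
+
+where $\mathcal{L}_i$ would be the set of logical forms for question $q_i$ (over table $T_i$) that execute to the correct answer. Is this roughly what is meant, and if so, where does the probability $P$ come from?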
+"
+"['convolutional-neural-networks', 'papers', 'models']"," Title: What is meant by ""model discriminability for local patches within the receptive field""?Body: In the abstract of the paper Network In Network, the authors write
+
+ We propose a novel deep network structure called "Network In Network" (NIN) to enhance **model discriminability for local patches within the receptive field**
+
+What does the part in bold mean?
+"
+"['convolutional-neural-networks', 'python']"," Title: Does it make sense to apply softmax on top of relu?Body: While working through some example from Github I've found this network (it's for FashionMNIST but it doesn't really matter).
+
+PyTorch forward method (my query is in the upper-case comments, regarding applying softmax on top of ReLU):
+
+def forward(self, x):
+ # two conv/relu + pool layers
+ x = self.pool(F.relu(self.conv1(x)))
+ x = self.pool(F.relu(self.conv2(x)))
+
+ # prep for linear layer
+ # flatten the inputs into a vector
+ x = x.view(x.size(0), -1)
+
+ # DOES IT MAKE SENSE TO APPLY RELU HERE
+    x = F.relu(self.fc1(x))
+
+ # AND THEN Softmax on top of it ?
+    x = F.log_softmax(x, dim=1)
+
+ # final output
+ return x
+
+"
+"['reinforcement-learning', 'definitions', 'notation', 'reward-functions', 'sutton-barto']"," Title: Where are the parentheses in the definition of $r(s,a)$?Body: I am new to RL and I am trying to work through the book Reinforcement Learning: An Introduction I (Sutton & Barto, 2018). In chapter 3 on Finite Markov Decision Processes, the authors write the expected reward as
+
+$$r(s,a) = \mathbb{E}\left[R_t|S_{t-1}=s,A_{t-1}=a\right]=\sum_{r\in \mathcal{R}}r\sum_{s'\in \mathcal{S}}p(s',r|s,a)$$
+
+I am not sure if the authors mean
+
+$$r(s,a) = \mathbb{E}\left[R_t|S_{t-1}=s,A_{t-1}=a\right]=\sum_{r\in \mathcal{R}}\left[r\sum_{s'\in \mathcal{S}}p(s',r|s,a)\right]$$
+
+or
+
+$$r(s,a) = \mathbb{E}\left[R_t|S_{t-1}=s,A_{t-1}=a\right]=\left[\sum_{r\in \mathcal{R}}r\right]\cdot\left[\sum_{s'\in \mathcal{S}}p(s',r|s,a)\right].$$
+
+If the authors mean the first, is there any reason why it is not written like the following?
+
+$$r(s,a) = \mathbb{E}\left[R_t|S_{t-1}=s,A_{t-1}=a\right]=\sum_{r\in \mathcal{R}}\sum_{s'\in \mathcal{S}}\left[r\,p(s',r|s,a)\right]$$
+"
+"['reinforcement-learning', 'applications', 'markov-decision-process']"," Title: When is the Markov decision process not adequate for goal-directed learning tasks?Body: In the book Reinforcement Learning: An Introduction (Sutton and Barto, 2018). The authors ask
+
+
+ Exercise 3.2: Is the MDP framework adequate to usefully represent all goal-directed learning tasks? Can you think of any clear
+ exceptions?
+
+
+I thought maybe a card game would be an example if the state does not contain any pieces of information on previously played cards. But that would mean that the chosen state leads to a system that is not fully observable. Hence, if I track all cards and append it to the state (state vector with changing dimension) the problem should have the Markov Property (no information on the past states is needed). This would not be possible if the state is postulated as invariant in MDP.
+
+If the previous procedure is allowed, then it seems to me that there are no examples where the MDP is not appropriate.
+
+I would be glad if someone could say if my reasoning is right or wrong. What would be an appropriate answer to this question?
+"
+['search']," Title: Any problems/games/puzzles in which exhaustive search cannot show that a solution does not exist?Body: Introduction
+Exhaustive search is a method in AI planning to find a solution for so-called Constraint Satisfaction Problems (CSPs). Those are problems that have some conditions to fulfill, and the solver tries out all the alternatives. An example CSP is the 8-queens problem, which has geometrical constraints. The standard method for finding a solution to the 8-queens problem is a backtracking solver, i.e. an algorithm that generates a tree over the state space and searches inside that graph.
+Apart from practical applications of backtracking search, there are some logic-oriented discussions which ask, on a formal level, which kinds of problems have a solution and which do not. For example, to find a solution to the 8-queens problem, many millions of iterations of the algorithm may be needed. The question now is: which problems are too complex to find a solution for? The second problem is that sometimes the problem itself has no solution, even after the complete state space has been searched.
+Let us take an example. First, we construct a problem in which the constraints are so strict that even a backtracking search won't find a solution. One example would be to prove that “1+1=3”; another example would be to find a winning chess sequence in a position that is already lost; it is also fun to think about how to arrange nine(!) queens on a chess board so that they don't attack each other.
+Is there any literature that describes, on a theoretical basis, Constraint Satisfaction Problems in which the constraints of the problem are too strict?
+Original posting
+Just wondering - like with an 8-queens problem. If we change it to a 9-queens problem and do an exhaustive search, we will see that there is no solution. Is there a problem in which the search fails to show that a solution does not exist?
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'computer-vision', 'model-request']"," Title: Which neural network to use for optical mark recognition?Body: I've created a neural net using the ConvNetSharp library which has 3 fully connected hidden layers. The first having 35 neurons and the other two having 25 neurons each, each layer with a ReLU layer as the activation function layer.
+I'm using this network for image classification - kinda. Basically, it takes as input the raw grayscale pixel values of the input image and guesses an output. I used stochastic gradient descent for training the model, with a learning rate of 0.01. The input image is a row or column of OMR "bubbles", and the network has to guess which of the bubbles is marked, i.e. filled, and output the index of that bubble.
+The network does not perform well; I think it is because it's very hard for the network to recognize the single filled bubble among many.
+Here is an example image of OMR sections:
+
+Using image-preprocessing, the network is given a single row or column of the above image to evaluate the marked one.
+Here is an example of a preprocessed image which the network sees:
+
+Here is an example of a marked input:
+
+I've tried to use Convolutional networks but I'm not able to get them working with this.
+What type of neural network and network architecture should I use for this kind of task? An example of such a network with code would be greatly appreciated.
+I have tried many preprocessing techniques, such as background subtraction using the AbsDiff function in EmguCV and also the MOG2 algorithm, and I've also tried a binary threshold function, but there still remains enough noise in the images to make it difficult for the neural net to learn.
+I think this problem is not specific to using neural nets for OMR, but applies to other tasks too. It would be great if there were a solution that could store a background/template using a camera and then, when the camera sees that image again, perspective-transform it to match the template exactly (I'm able to achieve this much),
+and then find their difference or do some kind of preprocessing so that a neural net could learn from it. If this is not quite possible, then is there a type of neural network out there which could detect very small features in an image and learn from them? I have tried Convolutional Neural Networks, but they also aren't working very well, or I'm not applying them efficiently.
+"
+"['deep-learning', 'computer-vision', 'datasets']"," Title: input annotations quality check for large scale image dataBody: while dealing with image data at very large scale, there are different sources where data is coming from. Often, we do not have any control over quality of labels/ annotations. I already do use sampling quality checks method to manually check the quality of annotations but as the volume of data has increased,even sampling QC become an inefficient job. Are there other methods to automate / simplify this task for data at large scale?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'overfitting', 'dropout']"," Title: Can the addition of dropout in a non-overfitting neural network increase accuracy?Body: According to Wikipedia
+
+
+ Dropout is a regularization technique for reducing overfitting in neural networks
+
+
+My neural network is simple enough and does not overfit.
+
+Can the addition of dropout, in a non-overfitting neural network, increase accuracy? Even if I increase the complexity of the neural network?
+"
+"['reinforcement-learning', 'math', 'notation', 'reward-functions', 'probability-theory']"," Title: Why is the equation $r(s', a, s') =\sum_{r \in \mathcal{R}} r \frac{p\left(s^{\prime}, r \mid s, a\right)}{p\left(s^{\prime} \mid s, a\right)}$true?Body: I am referring to eq. 3.6 (page 49) based on Sutton's online book and can be found in an image below.
+
+I could not make sense of the final derivation of the equation $r(s, a, s')$. My question is actually how do we come to that final derivation?
+Surprisingly, it seems the denominator $p(s'|s, a)$ could literally be replaced by $p(s', r|s, a)$, as eq. 3.4 suggests; then we would end up with the "$r$" term only, due to cancellation of the numerator $p(s', r|s, a)$ and the denominator $p(s'|s, a)$.
+Any explanation on that would be appreciated.
+"
+"['neural-networks', 'machine-learning', 'backpropagation', 'objective-functions', 'constrained-optimization']"," Title: How do we design a neural network such that the $L_1$ norm of the outputs is less than or equal to 1?Body: What are some ways to design a neural network with the restriction that the $L_1$ norm of the output values must be less than or equal to 1? In particular, how would I go about performing back-propagation for this net?
+I was thinking there must be some "penalty" method, just like in mathematical optimization problems, where you can introduce a log-barrier function as the "penalty function".
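+
+Alternatively, one thing I am considering is simply projecting the outputs back into the unit $L_1$ ball inside the network, so that ordinary back-propagation still applies. A minimal sketch of that idea (PyTorch; the body of the network and its sizes are placeholders, not a proposed architecture):
+
+import torch
+import torch.nn as nn
+
+class L1BoundedNet(nn.Module):
+    def __init__(self, in_dim, out_dim):
+        super().__init__()
+        self.body = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))
+
+    def forward(self, x):
+        y = self.body(x)
+        # scale the output down whenever its L1 norm exceeds 1; this keeps
+        # ||y||_1 <= 1 and is differentiable almost everywhere, so autograd
+        # can still backpropagate through it
+        norm = y.abs().sum(dim=-1, keepdim=True)
+        return y / torch.clamp(norm, min=1.0)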
+"
+"['optimization', 'storage']"," Title: AI that maximizes the storage of rectangular parallelepipeds in a bigger parallelepipedBody: As you can see in the title, I'm trying to program an AI in Java that would help someone optimize his storage.
+
+The user has to enter the size of his storage space (a box, a room, a warehouse, etc...) and then enter the size of the items he has to store in this space. (note that everything must be a rectangular parallelepiped) And the AI should find the best position for each item such that space is optimized.
+
+Here is a list of what I started to do :
+
+
+- I asked the user to enter the size of the storage space (units are trivial here except for the computing cost of the AI, later on, I'm guessing), telling him that the values will be rounded down to the unit
+- I started by creating a 3-dimensional array of integers representing the storage space's volume, using the 3 values taken earlier. Filling it with 0s, where 0s would later represent free space and 1s occupied space.
+- Then, store in another multidimensional array the sizes of the items he has to store And that's where the AI part should be starting. The first thing the AI should do is check whether the addition of all the items' volumes doesn't surpass the storage space's volume. But then there are so many things to do and so many possibilities that I get lost in my thoughts and don't know where to start...
+
+
+In conclusion, can anyone give me the proper terms of this problem in AI literature, as well as a link to an existing work of this kind? Thanks
+"
+['logic']," Title: How do I use truth tables to prove Entailment?Body: For example, consider an agent concerned with predicting the weather, with variable R indicating whether or not it is likely to rain, variable C indicating whether or not it is cloudy, and variable L indicating low pressure. Given knowledge base K:
+
+L (Pressure is low)
+
+C (It is cloudy)
+
+C ∧ L ⇒ R, (Clouds and low pressure imply rain)
+
+the agent may conclude R; thus, the agent’s knowledge implies that R is true, because K |= R.
+
+Similarly, given knowledge base L:
+
+¬L (Pressure is high)
+
+C (It is cloudy)
+
+C ∧ L ⇒ R, (Clouds and low pressure imply rain)
+
+the agent cannot conclude that R is true; L ⊭ R
+
+Deriving a truth table:
+
+    L C r | (L ∧ C) → r
+    ------+------------
+    F F F |      T
+    F F T |      T
+    F T F |      T
+    F T T |      T
+    T F F |      T
+    T F T |      T
+    T T F |      F
+    T T T |      T
+
+but this does not make sense to me. How do I use this truth table to show that K entails R while L does not?
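+
+To make my question concrete, here is a small sketch of how I imagine the truth-table check working (Python, written by me just to illustrate the idea; the lambdas encode the sentences as functions of a model):
+
+from itertools import product
+
+def entails(kb, query, symbols):
+    # KB entails query iff query holds in every model in which all KB sentences hold
+    for values in product([False, True], repeat=len(symbols)):
+        model = dict(zip(symbols, values))
+        if all(s(model) for s in kb) and not query(model):
+            return False
+    return True
+
+# knowledge base K: L, C, (L and C) -> R
+K = [lambda m: m['L'], lambda m: m['C'], lambda m: (not (m['L'] and m['C'])) or m['R']]
+print(entails(K, lambda m: m['R'], ['L', 'C', 'R']))   # True: K entails R
+
+Is this the right way to read the table?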
+"
+"['reinforcement-learning', 'evolutionary-algorithms', 'q-learning']"," Title: Some RL algorithms (especially policy gradients) initialize with random policies, which often manifests as random jitter on spot for a long time?Body: I am reviewing a statement on the website for ES regarding structured exploration.
+
+https://blog.openai.com/evolution-strategies/
+
+
+ Structured exploration. Some RL algorithms (especially policy
+ gradients) initialize with random policies, which often manifests as
+ random jitter on spot for a long time. This effect is mitigated in
+ Q-Learning due to epsilon-greedy policies, where the max operation can
+ cause the agents to perform some consistent action for a while (e.g.
+ holding down a left arrow). This is more likely to do something in a
+ game than if the agent jitters on spot, as is the case with policy
+ gradients. Similar to Q-learning, ES does not suffer from these
+ problems because we can use deterministic policies and achieve
+ consistent exploration.
+
+
+Where can I find sources showing that policy gradients initialize with random policies, whereas Q-Learning uses epsilon-greedy policies?
+
+Also, what does ""max operation"" have to do with epsilon-greedy policies?
+"
+"['path-planning', 'autonomous-vehicles', 'ai-safety', 'collision-avoidance']"," Title: To what level of abstraction must fully automated vehicles build their driving model before safety can be maximized?Body: There are several levels of abstraction involved in piloting and driving.
+
+
+- Signals representing the state of the vehicle and its environment originating from multiple transducers1
+- Latched sample vectors/matrices
+- Boundary events (locations, spectral features, movement, appearance and disappearance of edges, lines, and sounds)
+- Objects
+- Object movements
+- Object types (runways, roads, aircraft, birds, cars, people, pets, screeches, horns, bells, blinking lights, gates, signals, clouds, bridges, trains, buses, towers, antennas, buildings, curbs)
+- Trajectory probabilities based on object movements and types
+- Behaviors based on all the above hints
+- Intentions based on behavior sequences and specific object recognition
+- Collision risk detection
+
+
+Moving from interpretation to control execution ...
+
+
+- Preemptive collision avoidance reaction
+- Horn sounding
+- Plan adjustment
+- Alignment of plan to state
+- Trajectory control
+- Skid avoidance
+- Skid avoidance reaction
+- Steering, braking, and signalling
+- Notifications to passengers
+
+
+What, if any, levels of higher abstraction can be sacrificed? Humans, if they are excellent pilots or drivers, can use all of these levels to improve pedestrian and passenger safety and minimize expense in time and money.
+
+Footnotes
+
+[1] Optical detectors, microphones, strain gauge bridges, temperature and pressure gauges, triangulation reply signals, voltmeters, position encoders, key depression switches, flow detectors, altimeters, radar transducers, tachometers, accelerometers
+"
+"['neural-networks', 'machine-learning', 'reference-request', 'perceptron']"," Title: Which Rosenblatt's paper describes Rosenblatt's perceptron training algorithm?Body: I struggle to find Rosenblatt's perceptron training algorithm in any of his publications from 1957 - 1961, namely:
+
+
+- Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms
+- The perceptron: A probabilistic model for information storage and organization in the brain
+- The Perceptron — A Perceiving and Recognizing Automaton
+
+
+Does anyone know where to find the original learning formula?
+"
+"['machine-learning', 'overfitting']"," Title: Does overfitting imply an upper bound on model size/complexity?Body: Suppose that I have a model M that overfits a large dataset S such that the test error is 30%. Does that mean that there will always exist a model that is smaller and less complex than M that will have a test error less than 30% on S (and does not overfit S).
+"
+"['comparison', 'genetic-algorithms', 'programming-languages', 'genetic-programming']"," Title: What is the best programming language to learn to implement genetic algorithms?Body: What is the best and easiest programming language to learn to implement genetic algorithms? C++ or Python, or any other?
+"
+"['logic', 'knowledge-representation']"," Title: What are the current trends/open questions in logics for knowledge representation?Body: What are the future prospects in near future from a theoretical investigation of description logics, and modal logics in the context of artificial intelligence research?
+"
+"['neural-networks', 'classification']"," Title: Variable sized input-Multi Label Classification with Neural NetworkBody: I have a data input vector ( No Image classification) which size varys from 2 to 7 entrys. Every one of them belongs to a class Out of 7. So I have a variable Input size and a variable Output size. How can I deal with the variable Input sizes? I know Zero padding is a option but maybe there are better ways?
+
+Second: is multi-label classification possible in one network? What I mean: the first entry has to be classified into one of the seven classes, the second entry... and so on.
+
+I am also open to other classification techniques, if there is a better one that suits the problem.
+
+Best regards,
+Gesetzt
+"
+"['machine-learning', 'deep-learning', 'computer-vision', 'capsule-neural-network']"," Title: How exactly is equivariance achieved in capsule neural networks?Body: I have read quite a lot about capsule networks, but I cannot understand how the squashed vector would also rotate in response to rotation or translation of the image. A simple example would be helpful. I understand how routing by agreement works.
+"
+"['reinforcement-learning', 'sutton-barto', 'probability-theory', 'transition-model']"," Title: How do compute the table for $p(s',r|s,a)$ (exercise 3.5 in Sutton & Barto's book)?Body: I am trying to study the book Reinforcement Learning: An Introduction (Sutton & Barto, 2018). In chapter 3.1 the authors state the following exercise
+
+Exercise 3.5 Give a table analogous to that in Example 3.3, but for $p(s',r|s,a)$. It should have columns for $s$, $a$, $s'$, $r$, and $p(s',r|s,a)$, and a row for every 4-tuple for which $p(s',r|s,a)>0$.
+
+The following table and graphical representation of the Markov Decision Process is given on the next page.
+
+I tried to use $p(s'\cup r|s,a)=p(s'|s,a)+p(r|s,a)-p(s' \cap r|s,a)$, but without significant progress, because I think this formula does not make any sense, as $s'$ and $r$ are not from the same set. How is this exercise supposed to be solved?
+Edit
+Maybe this exercise intends to be solved by using
+$$p(s'|s,a)=\sum_{r\in \mathcal{R}}p(s',r|s,a)$$
+and
+$$r(s,a,s')=\sum_{r\in \mathcal{R}}r\dfrac{p(s',r|s,a)}{p(s'|s,a)}$$
+and
+$$\sum_{s'\in\mathcal{S}}\sum_{r\in\mathcal{R}}p(s',r|s,a)=1$$
+the resulting system is a linear system of 30 equation with 48 unknowns. I think I am missing some equations...
+"
+"['neural-networks', 'machine-learning', 'training', 'stochastic-gradient-descent', 'batch-size']"," Title: How do I choose the optimal batch size?Body:
+Batch size is a term used in machine learning and refers to the number of training examples utilised in one iteration. The batch size
+can be one of three options:
+
+- batch mode: where the batch size is equal to the total dataset thus making the iteration and epoch values equivalent
+- mini-batch mode: where the batch size is greater than one but less than the total dataset size. Usually, a number that can be divided into the total dataset size.
+- stochastic mode: where the batch size is equal to one. Therefore the gradient and the neural network parameters are updated after each sample.
+
+
+How do I choose the optimal batch size, for a given task, neural network or optimization problem?
+If you hypothetically didn't have to worry about computational issues, what would the optimal batch size be?
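+
+For concreteness, here is roughly how the three modes map onto a Keras call (a toy model and random data that I made up purely for illustration):
+
+import numpy as np
+from tensorflow import keras
+
+x_train = np.random.rand(1000, 20)
+y_train = np.random.randint(0, 2, size=(1000, 1))
+
+model = keras.Sequential([keras.layers.Dense(16, activation='relu', input_shape=(20,)),
+                          keras.layers.Dense(1, activation='sigmoid')])
+model.compile(optimizer='adam', loss='binary_crossentropy')
+
+n = len(x_train)
+model.fit(x_train, y_train, epochs=1, batch_size=n)   # batch mode: one update per epoch
+model.fit(x_train, y_train, epochs=1, batch_size=32)  # mini-batch mode
+model.fit(x_train, y_train, epochs=1, batch_size=1)   # stochastic mode: update after every sample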
+"
+['machine-learning']," Title: Detect root cause across many event occurrencesBody: Suppose there are sensors which supply numerical metrics. If a metric goes above or below a healthy threshold, an event (alert) is raised. Metrics depend on each other in one way or another (we can learn the dependencies via ML algorithms) so when the system is in alerting state only one or a few metrics will be a root cause and all others will be simply consequences.
+
+We can assume there is enough historical metric data available to learn the dependencies, but there are only a few historical malfunctions. Also, when a malfunction happens, there is no one to tell us what the root cause was; the algorithm should learn how to detect root causes by itself.
+
+Which algorithms can be used to detect the root cause events in the situation above? Are there any papers available on the subject?
+"
+"['neural-networks', 'terminology', 'definitions', 'social']"," Title: Why are neural networks considered to be artificial intelligence?Body: Why are we now considering neural networks to be artificial intelligence?
+"
+"['machine-learning', 'natural-language-processing', 'tensorflow']"," Title: Which model should I use to determine the similarity between predefined sentences and new sentences?Body: The Levenshtein algorithm and some ratio and proportion may handle this use case.
+
+
+ Based on a pre-defined set of statements, such as ""I have a dog"", ""I own a car"" and many more, I must determine whether another input statement, such as ""I have a cat"", is the same, or what percentage of similarity the input statement most likely has to the pre-defined statements.
+
+
+For Example:
+
+
+ Predefined statements: ""I have a dog"", ""I own a car"", ""You think you are smart""
+
+
+Input statements and results:
+
+
+ I have a dog - 100% (because it is an exact match), I have a cat - ~75% (because it was almost the same except for the animal), think - ~10% (because it was just a small part of the third statement), bottle - 0% (because it has no match at all)
+
+
+The requirement is that TensorFlow be used rather than Java, which is the language I know, so any help with what to look at to get started would be helpful.
+
+My plan was to use the predefined statements as the train_data, and to output only the accuracy during the prediction, but I don't know what model to use. Please, guide me with the architecture and I will try to implement it.
+"
+"['neural-networks', 'optical-character-recognition']"," Title: How much extra information can we conclude from a neural network output values?Body: Consider I have a 3 layers neural network.
+
+
+- Input Layer containing 784 neurons.
+- Hidden layer containing 100 neurons.
+- Output layer containing 10 neurons.
+
+
+My objective is to make an OCR and I used MNIST data to train my network.
+
+Suppose I gave the network an input taken from an image, and the values from the output neurons are the next:
+
+
+- $0: 0.0001$
+- $1: 0.0001$
+- $2: 0.0001$
+- $3: 0.1015$
+- $4: 0.0001$
+- $5: 0.0002$
+- $6: 0.0001$
+- $7: 0.0009$
+- $8: 0.001$
+- $9: 0.051$
+
+
+When the network returns this output, my program will tell me that it identified the image as the number 3.
+
+Now, looking at the values, even though the network recognized the image as 3, the output value for the number 3 was actually very low: $0.1015$. I say very low because usually the highest value for the winning index is close to 1.0, so we get a value like 0.99xxx.
+
+May I assume that the network failed to classify the image, or may I say that the network classified the image as 3, but due to the low value, the network is not certain?
+
+Am I right in thinking like this, or did I misunderstand how the output actually works?
+"
+"['neural-networks', 'deep-learning', 'keras', 'activation-functions', 'network-design']"," Title: How to constraint the output value of a neural network?Body: I am training a deep neural network. There is a constraint on the output value of the neural network (e.g. the output has to be between 0 and 180). I think some possible solutions are using sigmoid, tanh activation at the end of the layer.
+Are there better ways to put constraints on the output value of a neural network?
+"
+"['convolutional-neural-networks', 'datasets']"," Title: Best way to create an image dataset for CNNBody: I am creating a dataset made of many images which are created by preprocessing a long time series. Each image is an array of (128,128) and the there are four classes. I would like to build a dataset similar to the MNIST in scikit-learn.database but I have no idea how to do it.
+
+My aim is to have something that I can call like this:
+
+(x_train, y_train), (x_test, y_test) = my_data()
+
+
+Should I save them as figures? or as csv?
+Which is the best way to implement this?
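+
+One option I am considering is saving the arrays in a single compressed NumPy file; a minimal sketch (the file name, split ratio, and function names are my own choices):
+
+import numpy as np
+
+def save_dataset(images, labels, path='my_data.npz', test_fraction=0.2):
+    # images: shape (n, 128, 128), labels: shape (n,)
+    n_test = int(len(images) * test_fraction)
+    np.savez_compressed(path,
+                        x_train=images[n_test:], y_train=labels[n_test:],
+                        x_test=images[:n_test], y_test=labels[:n_test])
+
+def my_data(path='my_data.npz'):
+    # mimics the (x_train, y_train), (x_test, y_test) = my_data() interface
+    d = np.load(path)
+    return (d['x_train'], d['y_train']), (d['x_test'], d['y_test'])
+
+Is this a reasonable approach, or is there a more standard way?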
+"
+"['image-recognition', 'computer-vision', 'feature-selection']"," Title: What is a good descriptor for similar objects?Body: I am developing an image search engine. The engine is meant to retrieve wrist watches based on the input of the user. I am using SIFT descriptors to index the elements in the database and applying Euclidean distance to get the most similar watches. I feel like this type of descriptor is not the best since watches have a similar structure and shape. Right now, the average difference between the best and worst matches is not big enough (15%)
+
+I've been thinking of adding colour to the descriptor, but I'd like to hear other suggestions.
+"
+['philosophy']," Title: Can artificial intelligence also make mistakes?Body: The intelligence of the human brain is said to be a strong factor leading to human survival. The human brain functions as an overseer for many functions the organism requires. Robots can employ artificial intelligence software, just as humans employ brains.
+
+When it comes to the human brain, we are prone to make mistakes. However, artificial intelligence is sometimes presented to the public as perfect. Is artificial intelligence really perfect? Can AI also make mistakes?
+"
+"['ai-security', 'ai-safety', 'adversarial-ml']"," Title: Can artificial intelligence applications be hacked?Body: Can artificial intelligence (or machine learning) applications or agents be hacked, given that they are software applications, or are all AI applications secure?
+"
+['machine-learning']," Title: Unbalanced dataset in regression rather than classificationBody: Assume that we have a labeled dataset with inputs and outputs, where the output range is $\left[0, 2\right]$, but the majority of outputs is in $\left[0, 1\right]$. Should one adopt some kind of over- or undersampling approach after compartmentalising the output space to make the dataset more balanced? That would usually be done in classification, but does it apply to regression problems, too?
+Thanks in advance!
+"
+"['genetic-algorithms', 'java', 'fitness-functions']"," Title: Fitness function in genetic algorithm based on an intervalBody: I am writing an app, where when a ball is shot from a canon it is supposed to land in a hole that is on a given distance. The ball is supposed to land between the distance of the beginning of the hole and the end of the hole. The size of the hole is 4m and the size of the ball is 0.4m. My problem is that I am not sure how to write the fitness function for this. The place where the ball falls should be close to this interval of [D, D+3.6], where D is the distance of the hole. If anyone could give me a hint on how to approach this problem, I would be grateful.
+"
+['convolutional-neural-networks']," Title: neural network deconvolution filtersBody: I understand the concept of convolution.
+Let's say that my input dimension is 3 x 10 x 10
+
+And if I say that I will have 20 activation maps and a filter size of 5, I will end up with 20 different filters for my layer, each with the dimension of (3 x 5 x 5)
+
+My output will therefor be (20 x ? x ?)
. I Put a ""?"" there, because it obviously depends on the filter stride etc.
+
+
+
+Now I wanted to implement deconvolution but I am stuck at the following point:
+
+For the following questions, let's assume that the input size for the deconvolution is (5 x 8 x 8).
+
+
+- If we think about a filter in 3 dimensions. Can I choose any depth for the filter?
+- How would the effect of the amount of filters (amount of activation maps) work with deconvolution? Do I only have one filter?
+- How does the input depth (5) come into play. Would the output depth be equal to
(filter depth) * (input depth) ?
+
+
+I am trying to find the symmetry with forward convolution, but I do not understand how to use the number of activation maps in deconvolution.
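+
+To make my confusion concrete, here is a small shape-check sketch (PyTorch; the channel counts are just example values I picked):
+
+import torch
+import torch.nn as nn
+
+x = torch.randn(1, 5, 8, 8)    # input: 5 channels, 8 x 8
+deconv = nn.ConvTranspose2d(in_channels=5, out_channels=20, kernel_size=5)
+print(deconv(x).shape)         # torch.Size([1, 20, 12, 12])
+
+Here out_channels plays the role I would expect the "amount of activation maps" to play, but I do not see how the filter depth relates to the input depth of 5.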
+
+I am very thankful for any help.
+"
+"['reinforcement-learning', 'value-functions', 'convergence', 'reward-functions', 'double-dqn']"," Title: How can I ensure convergence of DDQN, if the true Q-values for different actions in the same state are very close?Body: I am applying a Double DQN algorithm to a highly stochastic environment where some of the actions in the agent's action space have very similar "true" Q-values (i.e. the expected future reward from either of these actions in the current state is very close). The "true" Q-values I know from an analytical solution to the problem.
+I have full control over the MDP, including the reward function, which in my case is sparse (0 until the terminal episode). The rewards are the same for identical transitions. However, the rewards vary for any given state and action taken therein. Moreover, the environment is only stochastic for a part of the actions in the action space, i.e. the action chosen by the agent influences the stochasticity of the rewards.
+How can I still ensure that the algorithm gets these values (and their relative ranking) right?
+Currently, what happens is that the loss function on the Q-estimator decreases rapidly in the beginning, but then starts evening out. The Q-values also first converge quickly, but then start fluctuating around.
+I've tried increasing the batch size, which I feel has helped a bit. What did not really help, however, was decreasing the learning rate parameter in the loss function optimizer.
+Which other steps might be helpful in this situation?
+So, the algorithm usually does find only a slightly suboptimal solution to the MDP.
+"
+"['neural-networks', 'deep-learning', 'training', 'overfitting', 'regularization']"," Title: Why did the L1/L2 regularization technique not improve my accuracy?Body: I am training a multilayer neural network with 146 samples (97 for the training set, 20 for the validation set, and 29 for the testing set). I am using:
+
+- automatic differentiation,
+- SGD method,
+- fixed learning rate + momentum term,
+- logistic function,
+- quadratic cost function,
+- L1 and L2 regularization technique,
+- adding some artificial noise 3%.
+
+When I used the L1 or L2 regularization technique, my problem (the overfitting problem) got worse.
+I tried different values for lambda (the penalty parameter): 0.0001, 0.001, 0.01, 0.1, 1.0 and 5.0. Above 0.1, I just killed my ANN. The best result I got was using 0.001 (but it is worse than the case where I didn't use the regularization technique at all).
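+
+For reference, this is a minimal sketch of how I understand the penalized cost I intended to implement (numpy; lam is the penalty parameter mentioned above, and weights is the list of weight matrices):
+
+import numpy as np
+
+def cost(y_pred, y_true, weights, lam, penalty='l2'):
+    quadratic = 0.5 * np.mean((y_pred - y_true) ** 2)
+    if penalty == 'l2':
+        reg = lam * sum(np.sum(w ** 2) for w in weights)
+    else:  # 'l1'
+        reg = lam * sum(np.sum(np.abs(w)) for w in weights)
+    return quadratic + reg
+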
+The graph represents the error functions for different penalty parameters and also a case without using L1.
+
+and the accuracy
+
+What could be the cause?
+"
+"['human-like', 'human-inspired']"," Title: Can we make Object Detection as human eyes+brain do?Body: I am so much curious about how do we see(with eyes ofc) and detect things and their location so quick. Is the reason that we have huge gigantic network in our brain and we are trained since birth to till now and still training.
+basically I am saying , are we trained on more data and huge network? is that the reason?
+or what if there's a pattern for about how do we see and detect object.
+please help me out, maybe my thinking is in wrong direction.
+what I wanna achieve is an AI to detect object in picture in human ways.thanks.
+"
+"['deep-learning', 'time']"," Title: Data / model preparation for spatio-temporal deep-learning analysis for traffic congestion events detectionBody: I am preparing the Bus movement dataset for deep learning (ANN/CNN/RNN) analysis for congestion events detection. This is an extension to my original question, which can be located at 'Deep learning model training and processing requirement for Traffic data' for the general approach on this topic, and this question is for preparing the dataset and need your kind advice on it. In simple words, I would like to know the state of congestion for a bus route at a specific point in time (year).
+
+Here are my entities:
+
+
+- Routes
+- Bus_Scheduled_Routes
+- Bus_Route_Stops
+- Bus_Trips (operational_date, Vehicle_id, Trip_id, Vehicle_Position_Update, Trip_stop_id, passenger_loaded, velocity, direction, scheduled_arrival_time, actual_arrival_time)
+- Events (human and non-human induced)
+- Points of Interests (POIs)
+
+
+If I have data based on these entities and I create a view that gives me a time-reference-based view comprising week (52), day (7), Vehicle_id, Trip_id, Stop/Position_update_interval, speed, acceleration, velocity, scheduled_arrival_time, actual_arrival_time - would this view be a recommended starting point for training the model?
+
+Secondly, how can I integrate the human/non-human-induced events and the Points of Interest (POIs) data into this view, so my model can predict better results? To generalize, the model data will be 'time segment / trip time (seasons), location component (bus routes and stops), arrival time / trip completion time'. I am thinking of adding an attribute for human/non-human-induced events as a type tied to the 'time segment', and adding the POIs as a type and vicinity to the stop points. What are your recommendations about this? Thanks in advance for your help.
+"
+"['machine-learning', 'fuzzy-logic']"," Title: Why did fuzzy logic fall out of fashion?Body: Fuzzy logic seemed like an active area of research in machine learning and data mining back when I was in grad school (early 2000s). Fuzzy inference systems, fuzzy c-means, fuzzy versions of the various neural network and support vector machine architectures were all being taught in grad courses and discussed in conferences.
+
+Since I've started paying attention to ML again (~2013), Fuzzy Logic seems to have dropped off the map completely and its absence from the current ML landscape is conspicuous given all the AI hype.
+
+Was this a case of a topic simply falling out of fashion, or was there a specific limitation of fuzzy logic and fuzzy inference that led to the topic being abandoned by researchers?
+"
+"['image-recognition', 'facial-recognition']"," Title: How can we use data augmentation for creating data set for face recognition and will the inverted faces on augmented images detected?Body: I saw when browsing we can use data augmentation for creating a dataset for face recognition. The augmented images may include inverted, tilted or distorted faces. Do the model detect the face from the inverted image. When I tried my model cant able to detect any inverted or tilted faces.
+"
+"['convolutional-neural-networks', 'image-recognition', 'reasoning']"," Title: How to measure the reasoning capabilities of neural networksBody: Which possibilities exist to evaluate the visual reasoning capabilities of neural networks in the field of image recognition?
+
+Are there methods to measure the ability of machine reasoning?
+
+Or something more specific: Is it possible to measure if a network understood the concept of a car / a cat / a human without using the classification accuracy.
+"
+"['neural-networks', 'image-recognition', 'classification', 'training']"," Title: Influence of location on a Neural Network trained for parking detection occupancyBody: I loaded a neural network model trained with Caffe by other people in OpenCV.
+
+The model should detect the presence of a car in a single parking spot outputting the probability of it being free/occupied.
+
+The model was trained with images all belonging to the same parking area, taken at different hours of day and with different light conditions. Images were taken by different cameras but the cameras are all of the same model (raspberry cameras).
+
+I tried to run the model with a few images some of them taken from their dataset and other downloaded from google.
+
+The images taken from their dataset are correctly classified while the ones taken from google are not correctly classified.
+
+My question is: is it possible to deploy a NN model, trained with images all coming from a single parking area, in another parking area? Isn't such a model for parking occupancy detection supposed to generalize independently of the location where the training images were taken?
+
+If you know about an already existing trained model that works well, please let me know.
+"
+"['reference-request', 'evolutionary-algorithms', 'artificial-life', 'self-replicating-machines']"," Title: Has the spontaneous emergence of replicators been modeled in Artificial Life?Body: One of the cornerstones of The Selfish Gene (Dawkins) is the spontaneous emergence of replicators, i.e. molecules capable of replicating themselves.
+Has this been modeled in silico in open-ended evolutionary/artificial life simulations?
+Systems like Avida or Tierra explicitly specify the replication mechanisms; other genetic algorithm/genetic programming systems explicitly search for replication mechanisms (e.g. to simplify the von Neumann universal constructor).
+Links to simulations where replicators emerge from a primordial digital soup are welcome.
+"
+"['reinforcement-learning', 'game-ai', 'applications', 'java']"," Title: How do I apply reinforcement learning to a game with infinitely many actions?Body: I am trying to figure out how to use a reinforcement learning algorithm, if possible, as a ""black box"" to play a game. In this game, a player has to avoid flying birds. If he wants to move, he has to move the mouse on the display, which controls the player position by applying a force. The player can choose any position on the display for the mouse. A human can play this game. To get a sense what needs to be done, have a look at this Youtube video.
+
+I thought about using an ANN, which takes as input the information of the game (e.g. positions, speeds, radius, etc.) and outputs a move. However, I am unlikely to ever record enough training data to train the network properly.
+
+Therefore, I was thinking that a RL algorithm, like Q-learning, would be more suited for this task. However, I have no clue how to apply Q-learning to this task. For example, how could the coder possibly know what future reward another move will bring?
+
+I have a few questions:
+
+
+- In this case, the player has infinitely many actions. How would I apply RL to this case?
+- Is RL a good approach to solving this task?
+- Is there a handy Java library which would allow me to use a RL algorithm (as a block-box) to solve this problem?
+- Are there alternatives?
+
+"
+"['cognitive-science', 'automation']"," Title: What were the criticisms of the BACON algorithm?Body: The BACON algorithm, introduced by Pat Langley and Herbert Simon etc. were meant to automate scientific discovery -- producing causal explanation to variations in given data.
+
+It was found, in particular, to have been able to ""discover"" the laws of planetary motion, but there was criticism concerning whether or not its achievement could be considered true scientific discovery.
+
+The one I remember was that the data given to the algorithm were pre-processed by human operators so as to include mostly relevant factors of the world. For example, the distances of the planets from the sun were given, showing that the human operators implicitly understood the importance of this variable for the laws of planetary motion.
+
+I was wondering if there were other significant criticisms of the algorithm, either in general as an automaton of discovery or concerning its particular feat with planetary motion.
+"
+"['math', 'minimax', 'chess']"," Title: Is there a way of representing the minimax algorithm mathematically?Body: I have successfully figured out how the minimax algorithm works for a game like chess, where a game tree is used, and you assign a value to the terminal nodes and propagate that value up the tree.
+
+Is there a way to represent this algorithm mathematically? If so, how would I go about showing it?
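+
+For example, would a recurrence along these lines be a reasonable way to state it (this is my own attempt, so the notation may not be standard)?
+
+$$V(s) = \begin{cases} \text{Utility}(s) & \text{if } s \text{ is a terminal node} \\ \max_{a \in A(s)} V(\text{Result}(s, a)) & \text{if MAX moves at } s \\ \min_{a \in A(s)} V(\text{Result}(s, a)) & \text{if MIN moves at } s \end{cases}$$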
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'ai-design', 'audio-processing']"," Title: How to combine input from different types of data sources?Body: I've to train a neural network using microphone data (wav files), accelerometer sensor data and light sensor data.
+
+Right now the approach I thought was to convert all data into images and combine them into a single image and train my neural network.
+
+Another approach was to convert wav files into arrays and combine them along with sensor data and train my neural net.
+
+Are my approaches correct or is there a better way to do this?
+
+Any suggestions/ideas are welcome.
+"
+"['deep-learning', 'comparison', 'intelligent-agent', 'learning-agents']"," Title: What is the difference between learning and non-learning agents?Body: What is the difference between learning agents and other types of agents?
+In what ways learning agents can be applied? Do learning agents differ from deep learning?
+"
+"['neural-networks', 'generative-adversarial-networks', 'binary-classification', 'imbalanced-datasets']"," Title: How can I use Generative Adversarial Networks to solve the imbalanced class problem?Body: Problem setting
+We have to do a binary classification of data given a training dataset $D$, where most items belong to class $A$ and some items belong to class $B$, so the classes are heavily imbalanced.
+Approach
+We wanted to use a GAN to produce more samples of class $B$, so that our final classification model has a nearly balanced set to train.
+Problem
+Let's say that the data from both classes $A$ and $B$ are very similar. Given that we want to produce synthetic data with class $B$ with the GAN, we feed real $B$ samples that we have into the discriminator alongside with generated samples. However, $A$ and $B$ are similar. It could happen that the generator produces an item $x$, that would naturally belong to class $A$. But since the discriminator has never seen class-$A$ items before and both classes are very close, the discriminator could say that this item $x$ is part of the original data that was fed into the discriminator. So, the generator successfully fooled the discriminator in believing that an item $x$ is part of the original data of class $B$, while $x$ is actually part of class $A$.
+If the GAN keeps producing items like this, the produced data is useless, since it would add heavy noise to the original data, if combined.
+At the same time, let's say before we start training the generator, we show the discriminator our classes $A$ and $B$ samples while giving information, that the class-$A$ items are not part of class $B$ (through backprop). The discriminator would learn to reject class-$A$ items that are fed to it. But wouldn't this mean that the discriminator has just become the classification model we wanted to build in the first place to distinguish between class $A$ and class $B$?
+Do you know any solution to the above-stated problem or can you refer to some paper/other posts on this?
+"
+"['neural-networks', 'math', 'proofs']"," Title: Is there a limit of minimum error for a particular training dataset in artificial Neural Network?Body: In error-based learning using gradient descent, if I give you a training dataset, then can you find the minimum error after training? And the minimum error should be true for all architectures of a neural network? Consider you will use MSE for calculating the error. You can choose anything you want other than my specified condition. It's like no matter how you change your network you can never cross the limit.
+"
+"['comparison', 'intelligent-agent', 'goal-based-agents', 'utility-based-agents']"," Title: What is the difference between goal-based and utility-based agents?Body: What is the difference between goal-based and utility-based agents? Please, provide a real-world example.
+"
+"['machine-learning', 'reinforcement-learning', 'statistical-ai']"," Title: Reinforcement learning objective as conditional expectationsBody: In one of his lectures Levine describes the objective of reinforcement learning as: $$J(\tau) = E_{\tau\sim p_\theta(\tau)}[r(\tau)]$$
+where $\tau$ refers to a single trajectory and $p_\theta(\tau)$ is the probability of having taken that trajectory, so that $p_\theta(\tau) = p(s_1)\prod_{t = 1}^T \pi_{\theta}(a_t|s_t)\,p(s_{t+1}|s_t, a_t)$.
+
+Starting from this definition, he writes the objective as $J(\tau) =\sum_{t=1}^T E_{(s_t, a_t)\sim p_\theta(\tau)}[r(s_t, a_t)]$ and argues that this sum can be decomposed by using conditional expectations, so that it becomes:
+
+$$J(\tau) = E_{s_1 \sim p(s_1)}[E_{a_1 \sim \pi(a_1|s_1)}[r(s_1, a_1) + E_{s_2 \sim p(s_2|s_1, a_1)}[E_{a_2 \sim \pi(a_2|s_2)}[r(s_2, a_2)] + ...|s_2]|s_1,a_1]|s_1]$$
+
+Can anyone explain this last step? I guess the law of total expectation is involved, but I can not figure out how exactly.
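+
+For example, for the first reward term, is the intended step something like this (my own attempt at applying the law of total expectation, so it may be wrong)?
+
+$$E_{(s_1, a_1)\sim p_\theta}\left[r(s_1, a_1)\right] = E_{s_1 \sim p(s_1)}\left[E_{a_1 \sim \pi(a_1|s_1)}\left[r(s_1, a_1)\right]\right]$$
+
+and then the same idea applied recursively to the later terms?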
+"
+['reinforcement-learning']," Title: How to design the reward for an action which is the only legal action at some stateBody: I am working on an RL project, but got stuck at one point: the task is continuing (non-episodic). Following some suggestions from Sutton's RL book, I am using a value function approximation method with average reward (differential return instead of discounted return). For some states (represented by some features), only one action is legal. I am not sure how to design the reward for such an action. Is it OK to just assign the reward of the previous step? Could anyone tell me the best way to decide the reward for the only legal action? Thank you!
+
+UPDATE:
+To give more details, here is a simplified example: the state space consists of a job queue with a fixed size and a single server. The queue state is represented by the durations of the jobs, and the server state is represented by the time left to finish the currently running job. When the queue is not full and the server is idle, the agent can SCHEDULE a job to the server for execution and see a state transition (taking the next job into the queue), or the agent can TAKE the NEXT JOB into the queue. But when the job queue is full and the server is still running a job, the agent can do nothing except take a BLOCKING action and witness a state transition (the time left to finish the running job decreases by one unit of time). The BLOCKING action is the only action that the agent can take in that state.
+"
+"['convolutional-neural-networks', 'backpropagation']"," Title: Using features extracted from a CNN as convolutional filterBody: I'm a bit confused about this. Assume I have a CNN network with two branches:
+
+
+- Top
+- Bottom
+
+
+The top branch outputs a feature vector of shape 1x1x1x10 (batch, h, w, c)
+The bottom branch outputs a feature vector of shape (1, 10, 10, 10).
+
+I want to use the top feature vector as a convolutional filter and convolve it with the bottom feature vector. I can do this in PyTorch with the ""functional.conv2d"" function. The problem is, I don't know how back-propagation works in this case (will it be unstable?), since the output feature is now acting as a parameter as well. Do I need to stop gradients, or do something else in this case to backprop correctly?
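+
+To make the setup concrete, here is a minimal sketch of what I mean (the tensor names and shapes are simplified stand-ins for the real branch outputs):
+
+import torch
+import torch.nn.functional as F
+
+top = torch.randn(1, 10, 1, 1, requires_grad=True)       # stand-in for the top branch output (NCHW)
+bottom = torch.randn(1, 10, 10, 10, requires_grad=True)  # stand-in for the bottom branch output (NCHW)
+
+# conv2d expects a weight of shape (out_channels, in_channels, kH, kW),
+# so the top feature vector is reshaped into a single 1x1 filter over 10 channels.
+kernel = top.view(1, 10, 1, 1)
+
+out = F.conv2d(bottom, kernel)  # output has shape (1, 1, 10, 10)
+out.sum().backward()            # after this, both top.grad and bottom.grad are populated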
+"
+"['reinforcement-learning', 'reward-design', 'reward-functions', 'pomdp']"," Title: How to define a reward function in POMDPs?Body: How do I define a reward function for my POMDP model?
+
+In the literature, it is common to use one simple number as a reward, but I am not sure if this is really how you define a function, because this way you have to define a reward for every possible action-state combination. I think that the examples in the literature might not be practical in reality, but only serve the purpose of explanation.
+"
+"['object-recognition', 'models']"," Title: Are there any pretrained models for human recognition from all angles?Body: I need to be able to detect and track humans from all angles, especially above.
+
+There are, obviously, quite a few well-studied models for human detection and tracking, usually as part of general-purpose object detection, but I haven't been able to find any information that explicitly works for tracking humans from above.
+"
+"['reinforcement-learning', 'comparison', 'markov-decision-process', 'pomdp', 'semi-mdp']"," Title: Is my understanding of the differences between MDP, Semi MDP and POMDP correct?Body: I just wanted to confirm that my understanding of the different Markov Decision Processes are correct, because they are the fundamentals of reinforcement learning. Also, I read a few literature sources, and some are not consistent with each other. What makes the most sense to me is listed below.
+Markov Decision Process
+All the states of the environment are known, so the agent has all the information required to make the optimal decision in every state. We can basically assume that the current state has all information about the current state and all the previous states (i.e., the Markov property)
+Semi Markov Decision Process
+The agent has enough information to make decisions based on the current state. However, the actions of the agent may take a long time to complete, and may not be completed in the next time step. Therefore, the feedback and learning portion should wait until the action is completed before being evaluated. Because the action takes many time steps, "mini rewards" obtained from those time steps should also be summed up.
+Example: Boiling water
+
+- State 1: water is at 23 °C
+- Action 1: agent sets the stovetop at 200 °C
+- Reward (30 seconds after, when water started to boil):
+
+- +1 for the fact that water boiled in the end, but -0.1 reward for each second it took for the water to start boiling.
+- So, the total reward was -1.9 (-2.9 because the water did not boil for 29 seconds, then +1 for water boiling on 30th second)
+
+
+
+Partially Observable Markov Decision Process
+The agent does not have all information regarding the current state, and has only an observation, which is a subset of all the information in a given state. Therefore, it is impossible for the agent to truly behave optimally because of a lack of information. One way to solve this is to use belief states, or to use RNNs to try to remember previous states to make better judgements on future actions (i.e., we may need states from the previous 10 time steps to know exactly what's going on currently).
+Example
+We are in a room that is pitch black. At any time instant, we do not know exactly where we are, and if we only take our current state, we would have no idea. However, if we remember that we took 3 steps forward already, we have a much better idea of where we are.
+Are my above explanations correct? And if so, isn't it possible to also have Partially Observable Semi Markov Decision Processes?
+"
+"['machine-learning', 'principal-component-analysis', 'test-datasets', 'validation-datasets']"," Title: How to perform PCA in the validation/test set?Body: I was using PCA on my whole dataset (and, after that, I would split it into training, validation, and test datasets). However, after a little bit of research, I found out that this is the wrong way to do it.
+I have few questions:
+
+- Are there some articles/references that explain why this is the wrong way?
+
+- How can I transform the validation/test set?
+
+
+Steps to do PCA (from https://www.sciencedirect.com/science/article/pii/S0022460X0093390X):
+
+- zero mean
+
+$$\mu = \frac{1}{M}\sum_{i=1}^{M} x_{i}$$
+where x is my training set
+
+- centering (variance)
+
+$$S^{2} = \frac{1}{M}\sum_{i=1}^{M} (x_{i}-\mu)^{T}(x_{i}-\mu)$$
+
+- use (1) and (2) to transform my original training dataset
+
+$$x_{new} = \frac{1}{\sqrt{M}} \frac{(x_{i} - \mu)}{S}$$
+
+- calculate covariance matrix (actually correlation matrix)
+
+$$C= x_{new}^T x_{new}$$
+
+- take the k eigenvectors ($\phi$) of the covariance matrix and define the new space for my reduced-dimension training set (where k is the number of principal components that I choose according to the explained variance)
+
+$$ x_{new dim} = x_{new}\phi$$
+Ok, then I have my new dimensional training dataset after PCA (till here it's right, according to other papers that I have read).
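+
+To make steps 1-5 concrete, this is how I currently implement them for the training set (a numpy sketch; the variable names are mine, and I treat S as the per-feature standard deviation):
+
+import numpy as np
+
+def fit_pca(X_train, k):
+    M = X_train.shape[0]
+    mu = X_train.mean(axis=0)                      # step 1: mean of the training set
+    S = X_train.std(axis=0)                        # step 2: per-feature standard deviation
+    X_new = (X_train - mu) / (np.sqrt(M) * S)      # step 3: transform the training set
+    C = X_new.T @ X_new                            # step 4: covariance/correlation matrix
+    eigval, eigvec = np.linalg.eigh(C)
+    phi = eigvec[:, np.argsort(eigval)[::-1][:k]]  # step 5: top-k eigenvectors
+    X_new_dim = X_new @ phi
+    return X_new_dim, mu, S, phi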
+The question is: what do I have to do now for my validation/testing set? Just the equation below?
+$$y_{new dim} = y\phi $$
+where y is my (for example) validation original dataset?
+Can someone explain the right thing to do?
+"
+"['machine-learning', 'deep-learning', 'optimization', 'feature-selection']"," Title: How much can the addition of new features improve the performance?Body: How much can the addition of new features improve the performance of the model during the optimization process?
+Let's say I have a total of 10 features. Suppose I start the optimisation process using only 3 features.
+Can the addition of the 7 remaining ones improve the performance of the model (which, you can assume, might already be quite high)?
+"
+['data-science']," Title: Ensemble models - XGboostBody: I am building 2 models using XGboost, one with x number of parameters and the other with y number of parameters of the data set.
+
+It is a classification problem.
+A yes-yes or no-no case is easy, but what should I do when one model predicts a yes and the other model predicts a no?
+
+Model A, with x parameters, has an accuracy of 82%, and model B, with y parameters, has an accuracy of 79%.
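+
+One thing I considered is a weighted soft vote over the two models' predicted probabilities (a rough sketch; it assumes both fitted models expose predict_proba, and the weights are simply the two accuracies):
+
+import numpy as np
+
+w_a, w_b = 0.82, 0.79
+p = (w_a * model_a.predict_proba(X_test)[:, 1] +
+     w_b * model_b.predict_proba(X_test)[:, 1]) / (w_a + w_b)
+prediction = (p >= 0.5).astype(int)  # final yes/no decision from the blended probability
+
+Is something like this a reasonable way to combine them, or is there a better approach?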
+"
+"['convolutional-neural-networks', 'comparison', 'terminology', 'feature-maps', 'receptive-field']"," Title: What is the difference between a receptive field and a feature map?Body: In a CNN, the receptive field is the portion of the image used to compute the filter's output. But one filter's output (which is also called a ""feature map"") is the next filter's input.
+
+What's the difference between a receptive field and a feature map?
+"
+"['machine-learning', 'reinforcement-learning', 'research', 'deepmind', 'open-ai']"," Title: 3D environment for RL research in AcademiaBody: I'm doing my thesis on Reinforcement Learning
. My focus on Partially Observable Environments like 3D Games. I want to choose a 3D platform for testing and doing research.
+
+I know some of them. DeepMind Lab
and OpenAi Universe
. But my question is that which of these environments is good for me? Is there any environment for this purpose that is benchmark and reliable?
+
+I want a platform that accepted in Academia and reliable. For example DeepMind is not a standard or Open Source friendly, Is it rational to use their platform for research in academia?
+
+What i have to do?
+"
+"['terminology', 'definitions', 'intelligent-agent', 'learning-agents']"," Title: What is a learning agent?Body: What is a learning agent, and how does it work? What are examples of learning agents (e.g., in the field of robotics)?
+"
+"['convolutional-neural-networks', 'long-short-term-memory', 'feature-selection', 'dimensionality']"," Title: Problem extracting features from convolutional layer where the dimensions are big for feature mapsBody: I have trained a convolutional neural network on images to detect emotions. Now I need to use the same network to extract features from the images and use them to train an LSTM. The problem is: the dimensions of the top layers are: [None, 4, 4, 512]
or [None, 4, 4, 1024]
. Therefore, extracting features from this layer will result in a 4 x 4 x 512 = 8192
or 4 x 4 x 1024 = 16384
dimensional vector for each image. Clearly, this is not what I want.
+Therefore, I would like to know what to do in this case and how to extract features that are of reasonable size. Should I apply global average pooling to the activation or what?
+Any help is much appreciated!
+"
+"['machine-learning', 'classification', 'applications']"," Title: Can current AI techniques distinguish a fake old paper from a real one?Body: It is an easy matter to make a paper look old, for example, using any of the techniques explained on this page of WikiHow: https://www.wikihow.com/Make-Paper-Look-Old.
+
+Is current AI sufficient to distinguish a fake old paper from a real one?
+"
+"['reinforcement-learning', 'gym']"," Title: What is the mapping between actions and numbers in OpenAI's gym?Body: In a gym
environment, the action space is often a discrete space, where each action is labeled by an integer. I cannot find a way to figure out the correspondence between action and number. For example, in frozen lake, the agent can move Up, Down, Left or Right. The code:
+
+import gym
+env = gym.make(""FrozenLake-v0"")
+env.action_space
+
+
+returns Discrete(4), showing that four actions are available. If I call env.step(0), which direction is my agent moving?
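+
+The only workaround I have found so far is to probe the environment by rendering before and after a step (a rough check, not a real answer; note that the default FrozenLake map is slippery, so a single step may not reflect the intended direction):
+
+import gym
+
+env = gym.make(""FrozenLake-v0"")
+env.reset()
+env.render()                             # prints the grid with the agent's current cell highlighted
+obs, reward, done, info = env.step(0)    # take action 0
+env.render()                             # compare the two renderings to guess what action 0 means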
+"
+"['applications', 'search', 'iddfs']"," Title: When should the iterative deepening search and the depth-limited search be used?Body: When should the iterative deepening search (IDS), also called iterative deepening depth-first search (IDDFS), and the depth-limited search be used?
+"
+"['q-learning', 'dqn']"," Title: Reason for issues with correlation in the dataset in DQNBody: From the paper Human level Control through DeepRL, the correlation in the data causes instability
in the network and may causes the network to diverge
. I wanted to understand what does this instability
and divergence
mean ? And why correlated data causes this instability.
+"
+"['neural-networks', 'training', 'data-preprocessing', 'transfer-learning', 'standardisation']"," Title: In transfer learning, should we apply standardization if the pre-trained model was (or not) trained with standardised data?Body: Assume one is using transfer learning via a model which was trained on ImageNet.
+
+- Assume that the pre-processing, which was used to achieve the pre-trained model, contained z-score standardization using some mean and std, which was calculated on the training data.
+Should one apply the same transformation to their new data? Should they apply z-score standardization using a mean and std of their own training data?
+
+- Assume that the pre-processing now did not contain any standardization.
+Should one apply no standardization on their new data as well? Or should one apply the z-score standardization, using the mean and std of their new data, and expect better results?
+
+
+For example, I've seen that the Inception V3 model, which was trained by Keras, did not use any standardization, and I'm wondering if using z-score standardization on my new data could yield better results.
+"
+"['neural-networks', 'long-short-term-memory', 'dropout']"," Title: Price Movement Forecasting IssueBody: I am working on a project for price movement forecasting and I am stuck with poor quality predictions.
+At every time-step I am using an LSTM to predict the next 10 time-steps. The input is the sequence of the last 45-60 observations. I tested several different ideas, but they all seem to give similar results. The model is trained to minimize MSE.
+For each idea I tried a model predicting 1 step at a time, where each prediction is fed back as an input for the next prediction, and a model directly predicting the next 10 steps (multiple outputs). For each idea I also tried using as input just the moving average of the previous prices, and extending the input to include the order book at those time-steps.
+Each time-step corresponds to a second.
+These are the results so far:
+1- The first attempt was using as input the moving average of the last N steps, and predict the moving average of the next 10.
+At time t, I use the ground truth value of the price and use the model to predict t+1....t+10
+This is the result:
+Predicting moving average:
+
+On closer inspection we can see what's going wrong:
+Prediction seems to be a flat line. Does not care much about the input data:
+
+
+- The second attempt was trying to predict differences, instead of simply the price movement. The input this time instead of simply being X[t] (where X is my input matrix) would be X[t]-X[t-1].
+This did not really help.
+The plot this time looks like this:
+
+Predicting differences:
+
+But on close inspection, when plotting the differences, the predictions are always basically 0.
+
+At this point, I am stuck and running out of ideas to try. I was hoping someone with more experience with this type of data could point me in the right direction.
+Am I using the right objective to train the model? Are there any details I am missing when dealing with this type of data?
+Are there any "tricks" to prevent the model from always predicting values similar to what it last saw? (They do incur a low error, but they become meaningless at that point.)
+At least just a hint on where to dig for further info would be highly appreciated.
+UPDATE
+Here is my config
+{
+ "data": {
+ "sequence_length":30,
+ "train_test_split": 0.85,
+ "normalise": false,
+ "num_steps": 5
+ },
+ "training": {
+ "epochs":200,
+ "batch_size": 64
+ },
+ "model": {
+ "loss": "mse",
+ "optimizer": "adam",
+ "layers": [
+ {
+ "type": "lstm",
+ "neurons": 51,
+ "input_timesteps": 30,
+ "input_dim": 101,
+ "return_seq": true,
+ "activation": "relu"
+ },
+ {
+ "type": "dropout",
+ "rate": 0.1
+ },
+ {
+ "type": "lstm",
+ "neurons": 51,
+ "activation": "relu",
+ "return_seq": false
+ },
+ {
+ "type": "dropout",
+ "rate": 0.1
+ },
+ {
+ "type": "dense",
+ "neurons": 101,
+ "activation": "relu"
+ },
+ {
+ "type": "dense",
+ "neurons": 101,
+ "activation": "linear"
+ }
+ ]
+ }
+}
+
+Notice the last layer with 101 neurons. It is not an error. We just want to predict the features as well as the price. In other words, we want to predict the price for time t+1 and use the features predicted to predict the price and new features at time t+2, ...
+"
+['history']," Title: Was Pierce way off the mark?Body:
+Funding artificial intelligence is real stupidity.
+-- John R. Pierce
+
+Was this computer pioneer way off the mark? – or was there an important sub-text there?
+Pierce was an expert on machine translation in the 1960s. He coauthored the following paper Pierce, John R., and John B. Carroll. "Language and machines: Computers in translation and linguistics." (1966). That means, he worked on the domain of AI but explained his subject as useless. Perhaps Marvin Minsky, Rodney Brooks and Sebastian Thrun would agree to him?
+"
+"['neural-networks', 'machine-learning', 'automation']"," Title: Can AutoKeras be used for neural networks of PyTorchBody: I use PyTorch, bauces AllenNLP is built on it and good libraries are for it. But can AutoKeras be used for PyTorch based ML pipelines, or am I required to switch to Keras? Google is quite silent when asked for tutorials for this combination. Maybe PyTorch is lacking automated ML framework?
+"
+"['game-ai', 'genetic-algorithms']"," Title: How to use Genetic Algorithm for varying lengths of solutionsBody: Until now, I always thought that Genetic Algorithm can be used for problems of which the solution space can be encoded (modeled) as a chromosome of a specific length. However, some people claim that they used GA for this game and this game. They are basically games in which we control an agent on a 2-dimensional area.
+
+Obviously, the length of the genome sequence depends on how fast the game is finished. So, how is GA used for such games?
+
+If you think GA is not the most suitable method for this kind of problem, can you explain why and suggest better alternatives?
+"
+"['definitions', 'search', 'uniform-cost-search']"," Title: How does the uniform-cost search algorithm work?Body: What is the uniform-cost search (UCS) algorithm? How does it work? I would appreciate seeing a graphical execution of the algorithm. How does the frontier evolve in the case of UCS?
+"
+"['reinforcement-learning', 'overfitting']"," Title: How to overcome overfitting to single player styles in reinforcement learning?Body: I am implementing an actor-critic reinforcement learning algorithm for winning a two player tic-tac-toe like game. The agent is trained against a min-max player and after a number of episodes is able to learn a set of rules which lead it to winning a good majority of games.
+
+However, as soon as I play against the trained agent using even a slightly different playing style, it loses miserably. In other words, it is evident the agent overfitted with respect to the deterministic behaviour of the min-max player. It is clear to me what the roots of the problem are, but I would like to get an overview of the different methodologies which can be applied to overcome (or mitigate) this issue.
+
+The two solutions I would like to try are the following:
+1. Training the agent with different opponents for fixed amounts of episodes (or time) each. So that for example I train the agent by using a depth 2 min-max player for the first 10000 episodes, then I use a random playing agent for the next 10000 episodes, then I use a depth 4 min-max player for other 10000 episodes and repeat the process.
+2. Starting episodes from different initial configurations. In this way the agents will play a much wider set of sampled games and will be more difficult for the agent to overfit.
+
+Are these two reasonable approaches? Are there other tricks/good practices to try out?
+"
+"['robotics', 'robots', 'question-answering', 'sophia']"," Title: Which algorithm is used in the robot Sophia to understand and answers the questions?Body: Which algorithm is used in the robot Sophia to understand and answer the questions?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'long-short-term-memory']"," Title: Price difference predictions curve almost vanishedBody: With a team, we are studying how it is possible to predict the price movement with high-frequency. Instead of predicting the price directly, we have decided to try predicting price difference as well as the features. In other words, at time t+1
, we predict the price difference and the features for time t+2
. We use the predicted features from time t+1
to predict the price at time t+2
.
+
+We got very excited, because we thought we were getting good results, based on the following graph:
+
+
+
+We got problems in production, and we did not understand the cause until we plotted the price difference.
+
+
+
+Here is the content of the config file
+
+{
+ ""data"": {
+ ""sequence_length"":30,
+ ""train_test_split"": 0.85,
+ ""normalise"": false,
+ ""num_steps"": 5
+ },
+ ""training"": {
+ ""epochs"":200,
+ ""batch_size"": 64
+ },
+ ""model"": {
+ ""loss"": ""mse"",
+ ""optimizer"": ""adam"",
+ ""layers"": [
+ {
+ ""type"": ""lstm"",
+ ""neurons"": 51,
+ ""input_timesteps"": 30,
+ ""input_dim"": 101,
+ ""return_seq"": true,
+ ""activation"": ""relu""
+ },
+ {
+ ""type"": ""dropout"",
+ ""rate"": 0.1
+ },
+ {
+ ""type"": ""lstm"",
+ ""neurons"": 51,
+ ""activation"": ""relu"",
+ ""return_seq"": false
+ },
+ {
+ ""type"": ""dropout"",
+ ""rate"": 0.1
+ },
+ {
+ ""type"": ""dense"",
+ ""neurons"": 101,
+ ""activation"": ""relu""
+ },
+ {
+ ""type"": ""dense"",
+ ""neurons"": 101,
+ ""activation"": ""linear""
+ }
+ ]
+ }
+}
+
+
+Prices don't change very fast. Therefore, the next price is almost always very close to the last price. In other words, P_{t+1} - P_{t} is very often close to zero, or exactly zero. If there are too many zeros, then the network will only learn to predict zeros. The model has picked up on that.
+
+I guess the model learned almost nothing except the very simple relationship that the next price is close to the last price. There is not necessarily anything wrong with the model. Predicting stock prices should be a very hard problem.
+
+So a straightforward improvement might be to use the features themselves instead of their differences.
+
+I want to keep working with the price difference instead of the price itself, because this makes the series potentially more stationary.
+
+What might be a good solution to deal with the repetitive zeros in our ""price difference"" problem? Is applying the log-return a better idea than using price differences?
+
+Is a zero-inflated estimator a good idea? First predict whether the difference is going to be zero; if not, predict the value. https://gist.github.com/fonnesbeck/874808 ?
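+
+To make that two-stage idea concrete, this is roughly what we have in mind (a sketch; X, X_test and diff are placeholders for our feature matrices and the vector of price differences, and we have not validated this):
+
+import numpy as np
+from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
+
+is_nonzero = (diff != 0).astype(int)
+
+clf = GradientBoostingClassifier().fit(X, is_nonzero)                   # stage 1: zero vs non-zero
+reg = GradientBoostingRegressor().fit(X[diff != 0], diff[diff != 0])    # stage 2: magnitude when non-zero
+
+pred = clf.predict(X_test) * reg.predict(X_test)                        # predict 0 unless stage 1 says non-zero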
+"
+"['neural-networks', 'topology', 'optical-character-recognition']"," Title: zonal or template ocr invoices readingBody: I'd like to explore the possibilities of applying artificial intelligence to ocr reading.
+Basic ocr invoices processing let me convert 30% of them only.
+The main purpose is defining invoices areas by training an ai, then process those areas with ocr.
+So I am looking into ai to define and recognize a document topology first, then apply ocr locally.
+From a brief search, it is classified as zonal or template ocr.
+Any chance of a premade open source library?
+"
+"['deep-learning', 'reinforcement-learning', 'gradient-descent']"," Title: Deep Q-Learning: why don't we use mini-batches during experience reply?Body: In examples and tutorial about DQN, I've often noticed that during the experience replay (training) phase people tend to use stochastic gradient descent / online learning. (e.g. link1, link2)
+
+# Sample minibatch from the memory
+minibatch = random.sample(self.memory, batch_size)
+# Extract informations from each memory
+for state, action, reward, next_state, done in minibatch:
+ # if done, make our target reward
+ target = reward
+ if not done:
+ # predict the future discounted reward
+ target = reward + self.gamma * \
+ np.amax(self.model.predict(next_state)[0])
+ # make the agent to approximately map
+ # the current state to future discounted reward
+ # We'll call that target_f
+ target_f = self.model.predict(state)
+ target_f[0][action] = target
+
+
+Why can't they use mini batches instead?
+I'm new to RL, but in deep learning people tends to use mini-batches as they would result in a more stable gradient. Doesn't the same principle apply to RL problems? Is the randomness/noise introduced actually beneficial to the learning process? Am I missing something, or are these sources all wrong?
+
+
+
+Note:
+
+Not all the sources rely on stochastic gradient descent: e.g. keras-rl seems to rely on minibatches (https://github.com/keras-rl/keras-rl/blob/master/rl/agents/dqn.py)
+"
+"['reinforcement-learning', 'proofs']"," Title: Understanding the proof of theorem 2.1 from the paper ""Efficient reductions for imitation learning""Body: I am trying to understand the proof of theorem 2.1 from this paper:
+
+
+ Ross, Stéphane, and Drew Bagnell. ""Efficient reductions for imitation learning."" Proceedings of the thirteenth international conference on artificial intelligence and statistics. 2010.
+
+
+The cost-to-go is given as
+
+$$J(\pi) = \sum_{t=1}^{T}\mathbb{E}_{s\,\sim\, d^t_{\pi}(s)}\left[C_\pi(s)\right].$$
+
+In the paper they use $\hat{\pi}=\pi$ for the learned policy and $\pi^*$ for the expert policy.
+
+In the derivation they write
+
+$$J(\pi)\leq \sum_{t=1}^{T}\{ p_{t-1}\mathbb{E}_{s\, \sim \, d_t(s)}\left[C_\pi(s) \right]+(1-p_{t-1})\}$$
+$$\leq \sum_{t=1}^{T}\{ p_{t-1}\mathbb{E}_{s\, \sim \, d_t(s)}\left[C_{\pi^*}(s) \right]+p_{t-1}{\ell_t(s,\pi)}+(1-p_{t-1})\},$$
+
+in which $p_{t-1}$ is the probability of not not making an error with policy $\pi$ up to the time $t-1$. And $\ell$ is the surrogate 0-1 loss.
+
+The following steps are easy to follow, but how did they come up with these steps?
+"
+"['comparison', 'statistical-ai', 'symbolic-ai']"," Title: What are the differences in scope between statistical AI and classical AI?Body: What are the differences in scope between statistical AI and classical AI?
+
+Real-world examples would be appreciated.
+"
+"['deep-learning', 'deep-neural-networks', 'chess', 'objective-functions']"," Title: Chess policy networkBody: I am interested in making a simple chess engine using neural networks. I already have a fairly good value network but I can't figure out how to train a policy network. I know that Leela chess zero outputs the probability of any of the about 1800 possible moves. But how do you train such a network? How do you calculate the loss when you only have the 1 move that was played in the game
+to work with?
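+
+To make the question concrete, this is roughly the setup I am imagining (sizes and names are placeholders, and I am not at all sure this is the right loss):
+
+import torch
+import torch.nn as nn
+
+n_moves = 1800                               # placeholder for the size of the move encoding
+policy_head = nn.Linear(512, n_moves)        # 512 = size of my last hidden layer (placeholder)
+loss_fn = nn.CrossEntropyLoss()
+
+hidden = torch.randn(32, 512)                   # a batch of 32 encoded positions (placeholder)
+played_move = torch.randint(0, n_moves, (32,))  # index of the move actually played in each position
+
+logits = policy_head(hidden)
+loss = loss_fn(logits, played_move)          # cross-entropy against the single played move
+loss.backward()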
+"
+"['deep-learning', 'computer-vision']"," Title: Is there any deep learning object detection algorithms that can work without bounding boxes annotated data?Body: For example Haar Cascade can be trained using only positive and negative examples, you don't need any bounding box annotations. But it not a deep learning approach.
+
+Another example can be the most straight forward Image recognition model + sliding window. But it is very slow
+"
+"['training', 'python', 'chat-bots']"," Title: How to add external training in chatterbot?Body: I created a very simple bot to learn how to use chatterbot. This library already comes with a training, but I wanted extra training with an import of a corpus in Portuguese that I found in github.
+
+from chatterbot import ChatBot
+
+bot = ChatBot(
+""Terminal"",
+storage_adapter=""chatterbot.storage.SQLStorageAdapter"",
+logic_adapters=[
+""chatterbot.logic.MathematicalEvaluation"",
+""chatterbot.logic.TimeLogicAdapter"",
+""chatterbot.logic.BestMatch""
+],
+
+input_adapter=""chatterbot.input.TerminalAdapter"",
+output_adapter=""chatterbot.output.TerminalAdapter"",
+database_uri=""../database.db""
+)
+
+print(""Type something to begin..."")
+
+while True:
+ try:
+ bot_input = bot.get_response(None)
+ except (KeyboardInterrupt, EOFError, SystemExit):
+ break
+
+
+That's all I have.
+
+How can I import this corpus into my chatbot?
+"
+"['training', 'prediction', 'models']"," Title: How do I predict if it is rainy or not?Body: I'm building a weather station, where I'm sensing temperature, humidity, air pressure, brightness, $CO_2$, but I don't have a raindrop sensor.
+
+Is it possible to create an AI which can say if it's raining or not, with the help of the given data above and maybe analyzing the slope from the last hour or something? Which specific technology should I use and how can I train it?
+"
+"['neural-networks', 'recurrent-neural-networks']"," Title: Getting better results in improving the configurationBody: Currently, I found the right recipe for a time series regression problem to finally get acceptable to good results.
+
+Here is the config file
+
+{
+ ""data"": {
+ ""sequence_length"":45,
+ ""train_test_split"": 0.85,
+ ""normalise"": false,
+ ""num_steps"": 10
+ },
+ ""training"": {
+ ""epochs"":30,
+ ""batch_size"": 32
+ },
+ ""model"": {
+ ""loss"": ""mse"",
+ ""optimizer"": ""adam"",
+ ""layers"": [
+ {
+ ""type"": ""lstm"",
+ ""neurons"": 161,
+ ""input_timesteps"": 45,
+ ""input_dim"": 161,
+ ""return_seq"": true,
+ ""activation"": ""relu""
+ },
+ {
+ ""type"": ""dropout"",
+ ""rate"": 0.1
+ },
+ {
+ ""type"": ""lstm"",
+ ""neurons"": 161,
+ ""activation"": ""relu"",
+ ""return_seq"": false
+ },
+ {
+ ""type"": ""dense"",
+ ""neurons"": 128,
+ ""activation"": ""relu""
+ },
+ {
+ ""type"": ""dense"",
+ ""neurons"": 1,
+ ""activation"": ""linear""
+ }
+ ]
+ }
+}
+
+
+Here is the results I got
+
+
+
+What improvements can I bring to my model so that I can get better results? There are a lot of small spikes, and other places where the predicted curve stays rather flat when it should rise or fall.
+"
+"['search', 'comparison', 'a-star', 'ida-star']"," Title: How is iterative deepening A* better than A*?Body: The iterative deepening A* search is an algorithm that can find the shortest path between a designated start node and any member of a set of goals.
+
+The A* algorithm evaluates nodes by combining the cost to reach the node and the cost to get from the node to the goal. How is iterative deepening A* better than the A* algorithm?
+"
+['deep-learning']," Title: RNN and LSTM for discovering time lagBody: Is there a good reference / tutorial for using RNN/LSTM to determine lag interval for 2 time series? E.g. I have {x_n}, {y_n} and I want to figure out by how much does {x_n} typically lags behind {y_n}?
+"
+"['philosophy', 'human-like']"," Title: Can an artificial intelligence eventually think like a human?Body: It seems to me that the way neural networks are trained is similar to the way we educate a child (or a person, in general).
+
+Can an AI eventually think like a human?
+"
+['neural-networks']," Title: How are Hopfield neural networks connected with ""industrial"" neural networks used in machine learning?Body: I am trying to read https://arxiv.org/abs/1701.01727 about the generalisation of Hopfield neural networks, and I like the clear idea that physics and the Hamiltonian framework can be used for modeling such networks and for deducing a lot of their properties. My question is: how are Hopfield networks connected to the standard networks?
+
+I see at least 3 differences:
+
+
+- Hopfield neurons have binary on/off values (+1, -1), but machine learning neurons have real values (or at least approximately real, within machine limits).
+- The threshold function is the only activation function used in Hopfield neurons, whereas machine learning uses many more complex, real-valued functions.
+- Connectivity patterns in machine learning networks (LSTM, GRU cells) are far richer.
+
+
+So, maybe Hopfield neural networks can be generalized up to the level of machine learning networks while preserving the use of the Hamiltonian framework. Is this possible, and is there any work in this direction?
+"
+"['social', 'mythology-of-ai']"," Title: What are the most instructive movies about artificial intelligence?Body: The field of AI has expanded profoundly in recent years, as has public awareness and interest. This includes the arts, where fiction about AI has been popular since at least Isaac Asimov. Films on various subjects can be good teaching aids, especially for younger students, but it can be difficult for a non-expert to know which films have useful observations and insights, suitable for the classroom.
+What are insightful films about AI?
+Listed films must be suitable for academic analysis, providing insight into theory, methods, applications, or social ramifications.
+"
+"['machine-learning', 'image-recognition', 'computer-vision']"," Title: Are Computer Vision and Digital Image Processing part of Artificial Intelligence?Body: There are some fields of Computer Vision that are similar to Artificial Intelligence. For example, pattern recognition and path tracking. Based on these similarities, can we say that the Computer Vision is a part of Artificial Intelligence?
+"
+"['neural-networks', 'philosophy', 'genetic-algorithms', 'artificial-consciousness', 'genetic-programming']"," Title: Do genetic algorithms and neural networks really think?Body: I'm aware of those AI systems that can play games and neural networks that can identify pictures.
+But are they really thinking? Do they think like humans? Do they have consciousness? Or are they just obeying a bunch of codes?
+For example, when an AI learns to play Pacman, is it really learning that it should not touch the ghosts, or are they just following a mechanical path that will make them win the game?
+"
+"['machine-learning', 'decision-trees', 'hyper-parameters', 'random-forests']"," Title: How many trees should be generated in a random forest?Body: What are ways of determining the number of trees to be generated in a random forest algorithm?
+"
+"['machine-learning', 'deep-learning', 'overfitting']"," Title: Overfitted model performs better in test setBody: There are two models for the same task:
+
+model_1: 98% accuracy on training set, 54% accuracy on test set.
+model_2: 48% accuracy on training set, 47% accuracy on test set.
+
+From the statistics above we can say that model_1 overfits training set.
+Q1: Can we say that model_2 underfits?
+Q2: Why model_1 is bad choice if it performs better than model_2 on test set?
+"
+['applications']," Title: Why do we need artificial intelligence?Body: We seem to be experiencing an AI revolution, spurring advancement in many fields.
+Please explain, at a high level, the uses of artificial intelligence and why we might need it.
+"
+"['comparison', 'generative-adversarial-networks', 'autoencoders', 'variational-autoencoder']"," Title: Why doesn't VAE suffer mode collapse?Body: Mode collapse is a common problem faced by GANs. I am curious why doesn't VAE suffer mode collapse?
+"
+"['comparison', 'generative-adversarial-networks', 'autoencoders', 'generative-model', 'variational-autoencoder']"," Title: Why is the variational auto-encoder's output blurred, while GANs output is crisp and has sharp edges?Body: I observed in several papers that the variational autoencoder's output is blurred, while GANs output is crisp and has sharp edges.
+Can someone please give some intuition why that is the case? I did think a lot but couldn't find any logic.
+"
+"['neural-networks', 'terminology', 'unsupervised-learning', 'generative-adversarial-networks', 'supervised-learning']"," Title: Do GANs come under supervised learning or unsupervised learning?Body: Do GANs come under supervised learning or unsupervised learning?
+
+My guess is that they come under supervised learning, as we have labeled dataset of images, but I am not sure as there might be other aspects in GANs which might come into play in the determination of the class of algorithms GAN falls under.
+"
+"['algorithm', 'search', 'comparison', 'a-star']"," Title: What are the differences between A* and greedy best-first search?Body: What are the differences between the A* algorithm and the greedy best-first search algorithm? Which one should I use? Which algorithm is the better one, and why?
+"
+"['machine-learning', 'reference-request', 'search', 'path-planning', 'path-finding']"," Title: Given an image and two points $A$ and $B$ on that image, how could we find a path from $A$ to $B$?Body: If we have a search or path-finding problem, A* and Dijkstra's algorithm require that we formulate it as a search in a graph with nodes and connections between these nodes. If there are obstacles, we also need to encode this information in the graph, so that they are not traversed. Additionally, there may be costs/weights on the connections between points. If such weights/costs are high, the algorithms won't take that path.
+I've been using A* and Dijkstra's algorithm this way so far. However, it's a bit cumbersome to always have to define the nodes/points and the relationships (or connections) between them. There's no learning here. I just define a graph and the algorithms search on this graph.
+Let's say I have a white image, a green blob in the middle, and points $A$ and $B$ at either side of the blob, I need to get from $A$ to $B$. I don't have a search space represented as a graph here. I just have this image.
+Could I use machine learning to solve this problem (and would it generalize to more complex maps)? If so, are there any research works on this topic? Or is that the wrong route to take (pardon the pun)?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'deep-neural-networks', 'feedforward-neural-networks']"," Title: Significance of depth of a deep neural networkBody: How is a feed-forward neural network with few hidden layers and lots of nodes in those hidden layers different from a network with a lot of hidden layers but relatively lesser nodes in those hidden layers?
+"
+"['machine-learning', 'reinforcement-learning', 'training', 'intelligent-agent']"," Title: Are artificial intelligence learnings or trainings transferable from one agent to the other?Body: One disadvantage or weakness of Artificial Intelligence today the slow nature of learning or training success. For instance, an AI agent might require a 100,000 samples or more to reach an appreciable level of performance with a specific task. But this is unlike humans who are able to learn very quickly with a minimum number of samples. Humans are also able to teach one another, or in other words, transfer knowledge acquired.
+
+My question is this: are Artificial Intelligence learnings or trainings transferable from one agent to the other? If yes, how? If no, why?
+"
+"['neural-networks', 'machine-learning', 'comparison']"," Title: How are artificial neural networks different from normal computer programs?Body: How are artificial neural networks different from normal computer programs (or software)?
+"
+"['game-ai', 'monte-carlo-tree-search']"," Title: Is there a better way of calculating the chance of winning than $\mu * (1 - (\sigma * f)) * 100$ for the card game schnapsen?Body: My AI (for the card game schnapsen) currently calculates every possible way the game could end and then evaluates the percentage of winning for every playable card / move. The calculation is done recursively using a tree. If a game could move on in three different ways the percentage of winning on this node would be
+$$\mu * (1 - (\sigma * f)) * 100,$$
+where $f$ is between 0 and 2, $\mu$ is the mean and $\sigma$ the standard deviation. When the game can't move on and the AI wins the percentage is 100, when lost 0. I'm including the standard deviation in this formula to prevent the AI from risking too much. In other words: I'm using an MCTS that uses percentages.
+Is there a better formula or way of calculating the next move to maximize the chance of winning? Does including the standard deviation make sense?
+"
+"['neural-networks', 'deep-learning', 'backpropagation']"," Title: Use of backpropagation for weight updates in a combination of 2 neural networksBody: Every neural network updates its weights through back-propagation.
+
+How is back-propagation used for updating weights in a combination of 2 or more neural networks (e.g.:CNN-LSTM, GAN-CNN, etc.).
+
+For instance, a CNN-LSTM model is a CNN model stacked on top of an LSTM model. When a CNN model is stacked on top of an LSTM model, do we consider the hidden layers of both models, or only the hidden layers of the outer model (the LSTM)?
+"
+"['comparison', 'search', 'uniform-cost-search', 'best-first-search']"," Title: What are the differences between uniform-cost search and greedy best-first search?Body: What are the differences between the uniform-cost search (UCS) and greedy best-first search (GBFS) algorithms? How would you convert a UCS into a GBFS?
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks', 'vgg']"," Title: Why does the number of feature maps increases in the VGG model?Body: I found the below image of how a CNN works
+
+
+
+But I don't really understand it. I think I do understand CNNs, but I find this diagram very confusing.
+
+My simplified understanding:
+
+
+- Features are selected
+- Convolution is carried out so that to see where these features fit (repeated with every feature, in every position)
+- Pooling is used to shrink large images down (select the best fit feature).
+- ReLU is used to remove your negatives
+- Fully-connected layers contribute weighted votes towards deciding what class the image should be in.
+- These are added together, and you have your % chance of what class the image is.
+
+
+Confusing points of this image to me:
+
+
+- Why are we going from one image of $224 \times 224 \times 3$ to two images of $224 \times 224 \times 64$? Why does this halving continue? What is this meant to represent?
+- It continues on to $56 \times 56 \times 256$. Why does this number continue to halve, and the number, at the end, the $256$, continues to double?
+
+"
+"['natural-language-processing', 'logic']"," Title: Possible to translate generic English-language document into higher-order logic?Body: (Un-original) idea:
+Wouldn't it be cool if we could fact-check using an algorithm that could understand a whole bunch of documents (e.g. scientific papers) as higher-order logic?
+Question:
+What work has been done on this to date?
+What I've got so far:
+(1) I seem to recall there being prior work to create a subset of English (I think intended for use in scientific writing) that could be easily interpreted by an algorithm. This doesn't quite get us to the algorithm described above (as it's restricted to a subset of English) - but seems pertinent.
+(2) Once parsed, I guess a resolution algorithm like that in Prolog could be used to check wether a fact (presumably also inputted as a logical statement) contradicts the logic of the documents?
+"
+"['neural-networks', 'machine-learning', 'training', 'recurrent-neural-networks']"," Title: Train a recurrent neural network by concatenating time series. Is it safe?Body: As the title says, I want to train a Jordan network (i.e. a particular kind of recurrent neural network) using a certain number of time series.
+
+Let's say that $x_1, x_2, \ldots x_N$ are $N$ input time series (i.e. $x_i = [x_{i,1}, x_{i,2}, \ldots, x_{i,T}]$, where $T$ is the length of the time series) and $y_1, y_2, \ldots y_N$ (i.e. $y_i = [y_{i,1}, y_{i,2}, \ldots, y_{i,T}]$) are the corresponding target time series.
+
+More specifically, the target time series are just sequences of ""$0$s"", which may end with sequences of ""$1$""s. Here I show you some example:
+
+$$y_i = [0 ~ 0 ~ 0 \ldots 0 ~ 0 ~ 1 ~ 1 ~ 1 \ldots 1 ~1 ], $$
+$$y_i = [0 ~ 0 ~ 0 \ldots 0 ~ 0]. $$
+
+This means that I want my machine to ""learn to raise"" in certain situations related to the corresponding inputs $x_i$. Indeed, the objective of my network is to ""raise"" an alarm if ""something"" happens.
+
+At the moment, my training strategy is the following. I create a new time series which corresponds to the concatenation of all the available $x_i$ and $y_i$. Let's call the concatenated series $X$ and $Y$. Then I use $X$ and $Y$ to train a network.
+
+Here is my problem. If I concatenate, then I also teach my machine to ""drop"", since I can have situations like this:
+
+$$Y = [ \ldots 1 ~ 1 ~ 0 ~ 0 \ldots].$$
+
+Is this really a problem? Are there other ""training strategies"" to be employed so that I avoid this kind of unwanted behaviors?
+"
+"['machine-learning', 'backpropagation', 'terminology', 'optimization']"," Title: What is the actual learning algorithm: back-propagation or gradient descent?Body: What is the actual learning algorithm: back-propagation or gradient descent (or, in general, the optimization algorithm)?
+
+I am reading through chapter 8 of Parallel Distributed Processing hand book and the title of the chapter is ""Learning internal representation by error propagation"" by PDP research Group.
+https://web.stanford.edu/class/psych209a/ReadingsByDate/02_06/PDPVolIChapter8.pdf
+
+If there are no hidden units, no learning happens.
+If there are hidden units, they learn internal representation by propagating error back.
+Does this mean back propagation[delta rule] is the learning rule and gradient descent is an optimization algorithm used to optimize cost function?
+"
+"['reinforcement-learning', 'deep-rl', 'hindsight-experience-replay']"," Title: How does Hindsight Experience Replay learn from unsuccessful trajectories?Body: I am confused by how HER learns from unsuccessful trajectories. I understand that from failed trajectories it creates 'fake' goals that it can learn from.
+Ignoring HER for now, if in the case where the robotic arm reaches the goal correctly, then the value functions ($V$) and action-value functions ($Q$) that correspond to the trajectories that get to the goal quicker will increase. These high $Q$ and $V$ values are ultimately important for getting the optimal policy.
+However, if you create 'fake' goals from unsuccessful trajectories - that would increase the $Q$ and $V$s of the environment that lead to getting the 'fake' goal. Those new $Q$ and $V$s would be unhelpful and possibly detrimental for the robotic arm to reach the real goal.
+What am I misunderstanding?
+"
+"['algorithm', 'swarm-intelligence']"," Title: Nature-inspired artificial intelligent methods for Blockchain?Body: I am working on the blockchain technology and I am not very familiar with the AI concept.
+
+The proposal of this web page: (http://www.euraxess.lu/jobs/349354) opens a discussion about use of nature inspired artificial intelligent methods for traceability chain decision in blockchain technology as an alternative to current consensus mechanisms. It continues as follows:
+
+
+ ""It has been demonstrated that using traceability chain is a more
+ effective method. In traceability chain, since the mechanism has to
+ trace related information among participant’s nodes across the entire
+ chain, the extraction and recognition of the data features plays a
+ crucial role in improving the efficiency of the process.""
+
+
+However, it does not give any example to demonstrate an instance of this approach. So, I searched Google Scholar and ordinary web pages to find even one instance similar to this approach, since it claims: ""It has been demonstrated that using traceability chain is a more effective method."" (Please read the web page.)
+
+Is anyone here familiar with this approach? If yes, is there any article/example that explains more about this approach to consensus in blockchains? What exactly does ""traceability chain"" mean? And, in general, can we call this approach a consensus mechanism? The text is not really clear to me.
+
+Please note that I have no idea about this proposal; I'd just like to know whether it's practicable, or just buzzwords.
+
+Also, might this approach be related to the approach mentioned in this answer (Swarm Intelligence): https://ai.stackexchange.com/a/1315/19910, or is it a different concept?
+
+Thanks for your help
+"
+"['math', 'reasoning']"," Title: Machine mathematical reasoning by clever substitutions, How to do with AIBody: I have three equations that relates five variables {a, b, c, r, s} with a sum and two ratios.
+
+
+Eq. 1: a = b + c;
+Eq. 2: s = b / a;
+Eq. 3: r = b / c.
+
+
+Given two values for any of the five variables I get a solution. But, this is not the automation problem I want to solve.
+
+I can have the solution of variable r by simply knowing s. This is solved by a ""human algorithm"" as follows.
+
+
+- Substitute a of Eq. 1 in Eq. 2.
+
+- Divide the second term of the new Eq. 2 by the variable c.
+
+- Replace b/c by the expression of Eq. 3.
+
+
+That means s = r / (r+1).
+
+The question is: how can an AI algorithm solve this? That is, the machine should recognize that, given only the variable r, it can obtain the variable s directly, without requiring another variable.
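+
+To make the target relation concrete, here is a small sketch (using SymPy, purely as an illustration of the symbolic manipulation I did by hand, not as the AI method I am asking about):
+
+import sympy as sp
+
+# The three given equations relating a, b, c, r, s.
+a, b, c, r, s = sp.symbols('a b c r s', positive=True)
+eqs = [sp.Eq(a, b + c), sp.Eq(s, b / a), sp.Eq(r, b / c)]
+
+# Eliminate a and b so that s is expressed in terms of r alone.
+solution = sp.solve(eqs, [a, b, s], dict=True)[0]
+print(sp.simplify(solution[s]))  # prints r/(r + 1)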
+"
+['artificial-neuron']," Title: Basic Functions and ResultsBody: If the number of input neurons and output neurons doesn't change, what will change if I have one hidden layer, but first with 1 neuron, then with 4 neurons?
+
+Taking into consideration the fact that each perceptron is able to linearly separate points along some (unknown/unwritten) linear boundary, would the network with 4 hidden neurons then, theoretically, be able to go beyond simple linear separation and separate points into those that fall inside a square and those that fall outside it?
+
+This is, of course, without a bias neuron present.
+"
+"['search', 'comparison', 'hill-climbing', 'simulated-annealing']"," Title: How is simulated annealing better than hill climbing methods?Body: In hill climbing methods, at each step, the current solution is replaced with the best neighbour (that is, the neighbour with the highest or lowest value, depending on whether we are maximizing or minimizing). In simulated annealing, ""downhill"" moves are allowed.
+
+What are the advantages of simulated annealing with respect to hill climbing approaches? How is simulated annealing better than hill climbing methods?
+"
+"['algorithm', 'search', 'optimization', 'problem-solving', 'hill-climbing']"," Title: What are the limitations of the hill climbing algorithm and how to overcome them?Body: What are the limitations of the hill climbing algorithm? How can we overcome these limitations?
+"
+"['reinforcement-learning', 'optimization']"," Title: Reinforcement Learning to Grouped Scheduling Optimisation ProblemBody: I am not sure the name of this kind of problem, but anyway, the situation is as below.
+
+Assign teachers to groups, taking into account each teacher's workload, availability, etc.
+There are some other soft/hard constraints (equality/inequality), like:
+
+
+- Each group should have at least 2 teachers
+- Everyone in the group has a similar workload
+- The total workload in the group is below a certain value
+- All members have different areas of expertise
+
+
+and more...
+
+I am trying to build a (possibly sub-optimal) solution to this problem. Linear/non-linear programming does not seem to work well for grouping problems. I am thinking of a genetic algorithm or reinforcement learning.
+
+Can this problem be solved using RL or DRL?
+I am trying to define the groups as the state, and the actions include ""assignToGroup"" and ""removeFromGroup"".
+Do you have any ideas or suggestions on how to solve this problem?
+
+Many thanks
+"
+"['neural-networks', 'computational-linguistics']"," Title: Neural machine translation that outputs multiple alternative, ambiguous translations?Body: Are there neural machine translation methods that, for one input sentence, output multiple alternative sentences in the target language? It is quite possible that a sentence in the source language has multiple meanings, and it is not desirable for the neural network to discard some of those meanings when no context for disambiguation is provided. How can multiple outputs be accommodated in an encoder-decoder architecture, or is a different architecture required?
+
+I am aware of only one work, https://arxiv.org/abs/1805.10844 (and one reference therein), but I am still digesting whether their network outputs multiple sentences or whether it just accommodates variations during the training phase.
+"
+['natural-language-processing']," Title: How to know which kind of adverb in NLP Parts of Speech (POS) tagging?Body: There are 4 kinds of adverbs :
+
+
+- Adverbs of Manner. For example, slowly, quietly
+- Adverbs of Place. For example, there, far
+- Adverbs of Frequency. For example, everyday, often
+- Adverbs of Time. For example, now, first, early
+
+
+nltk, spacy and textblob only tag a token as an adverb without specifying which kind it is.
+
+Are there any libraries that tag adverbs including their type?
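+
+For example, with spaCy (a small sketch, assuming the small English model is installed), the tagger only reports the coarse adverb tags, with no manner/place/frequency/time subtype:
+
+import spacy
+
+nlp = spacy.load('en_core_web_sm')
+doc = nlp('She often walks slowly there, and she arrived early.')
+for token in doc:
+    if token.pos_ == 'ADV':
+        # Prints e.g. 'often ADV RB' -- only the generic adverb tag.
+        print(token.text, token.pos_, token.tag_)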
+"
+"['reinforcement-learning', 'on-policy-methods', 'monte-carlo-methods', 'importance-sampling', 'dynamic-programming']"," Title: Do we need the transition probability function when calculating the importance sampling ratio?Body: I am reading the book titled ""Reinforcement Learning: An Introduction"" (by Sutton and Barto). I am at chapter 5, which is about Monte Carlo methods, but now I am quite confused.
+
+There is one thing I don't particularly understand. Why do we need the state-transition probability function when calculating the importance sampling ratio for off-policy prediction?
+
+I understood that one of the main benefits of MC over Dynamic Programming (DP) is that one does not need to have a model of the state-transition probability for a system. Or is this only the case for on-policy MC?
+"
+"['terminology', 'objective-functions', 'optimization', 'local-search', 'meta-heuristics']"," Title: What is an objective function?Body: Local search algorithms are useful for solving pure optimization problems, in which the aim is to find the best state according to an objective function.
+My question is what is the objective function?
+
+"
+"['reinforcement-learning', 'chat-bots', 'ethics']"," Title: What is the best way to integrate unchangeable ethics into a chatbotBody: I am building a generative model chatbot as a research and learning project. One of the most important parts of my project is to research ways in which I can make this chatbot work in a consistently ethical fashion.
+
+This chatbot is simply a single Seq2Seq network running on my local machine. It can't be interacted with over the internet (yet), although I may end up creating a way to do that. It has no feedback loops of any kind as of right now, though reinforcement learning with a loop might be helpful.
+
+The idea is that there would be some sort of unchanging knowledge base for the chatbot to use, which has hard-coded ethical statements and values that the bot has no ability to change. Whenever a question is asked of the chatbot, before it is input to the network, the knowledge base is searched and relevant facts are appended to the input (separated from the regular input by special tokens).
+
+My question is: will this even be effective at allowing the bot to generate its own responses while still being confined to the ethical standards given to it?
+
+My main concern is that it may begin to ignore these ""facts"" over time, and they may become irrelevant.
+
+Another (possibly much better) approach might be to use deep reinforcement learning. However I may find it difficult to implement with my existing Sequence-to-Sequence network.
+
+So which would likely be better? Or perhaps I should try a combination of the two?
+"
+"['classification', 'unsupervised-learning']"," Title: Using unsupervised learning for classification problemsBody: Let's say there are two types of cancer (Type 1 and Type 2). Say we want to see whether one of our friends has cancer of Type 1 or Type 2. We can treat this as a classification problem. But what if we use unsupervised learning (clustering) to separate the data into 2 different groups and then check whether each item in group 1 belongs to a person with cancer of Type 1 or Type 2? We would then see whether our friend belongs to group 1 or group 2. I know it is a silly way of doing this and we have to do extra work, but can we even do this?
+
+Let's say that the features are only the age and the height (I know it's really simplistic, but just bear with me). The data associated with people with cancer of Type 1 is [10, 150], [12, 153], [9, 143], [13, 160], and for people with Type 2 cancer it is [20, 175], [23, 180], [19, 174]. Let's say we plot the data on a graph (without labelling it) and the unsupervised program (clustering) just separates the two groups (say group 1 for Type 1). We can then see to whom each data point in group 1 belongs, and we see that those people have cancer of Type 1. So, given new data, we see which group our friend belongs to: if she/he belongs to group 1, she/he has cancer of Type 1, and if not, she/he has cancer of Type 2.
+"
+"['machine-learning', 'comparison', 'feature-selection', 'data-preprocessing', 'representation-learning']"," Title: When should I use feature learning as opposed to feature engineering?Body: With the advancement of deep learning and a few other automated feature learning techniques, manual feature engineering has started to become obsolete.
+
+Any suggestions on when to use manual feature engineering, feature learning, or a combination of the two?
+"
+"['reinforcement-learning', 'deep-neural-networks', 'keras', 'q-learning', 'convergence']"," Title: Deep Q-Learning poor convergence on Stochastic EnvironmentBody: I'm trying to implement a Deep Q-network in Keras/TF that learns to play Minesweeper (our stochastic environment). I have noticed that the agent learns to play the game pretty well with both small and large board sizes. However, it only converges/learns when the layout of the mines is the same for each game. That is, if I randomize the mine distribution from game to game, the agent learns nothing - or near to it. I tried using various network architectures and hyperparameters but to no avail.
+I tried a lot of network architectures including:
+
+- The input to the network is the entire board matrix, with the individual cells having values of -1 if unrevealed, or 0 to 8 if revealed.
+- The output of the network is also the entire board representing the desirability of clicking each cell.
+- Tried fully connected hidden layers (both wide and deep).
+- Tried convolutional hidden layers (tried stacking them, using different kernel sizes, padding, etc.).
+- Tried adding Dropout after hidden layers too.
+
+Is DQN applicable for environments that change every episode or have I approached this from the wrong side?
+It seems no matter the network architecture, the agent won't learn. Any input is greatly appreciated. Please let me know if you require any code or further explanations.
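+
+For reference, a simplified sketch of the kind of network I have been trying (the board size and layer sizes here are just placeholders, not my exact configuration):
+
+from keras.models import Sequential
+from keras.layers import Conv2D, Flatten, Dense
+
+# Input: the board matrix with cells in {-1, 0..8}; output: one value per cell.
+rows, cols = 9, 9
+model = Sequential([
+    Conv2D(64, (3, 3), padding='same', activation='relu', input_shape=(rows, cols, 1)),
+    Conv2D(64, (3, 3), padding='same', activation='relu'),
+    Flatten(),
+    Dense(256, activation='relu'),
+    Dense(rows * cols, activation='linear'),
+])
+model.compile(optimizer='adam', loss='mse')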
+"
+"['comparison', 'search', 'breadth-first-search']"," Title: What is the difference between the breadth-first search and recursive best-first search?Body: What is the difference between the breadth-first search and recursive best-first search? How can I describe the key difference between them?
+"
+"['reinforcement-learning', 'algorithm', 'time-complexity', 'value-iteration']"," Title: What is the time complexity of the value iteration algorithm?Body: Recently, I have come across the information (lecture 8 and 9 about MDPs of this UC Berkeley AI course) that the time complexity for each iteration of the value iteration algorithm is $\mathcal{O}(|S|^{2}|A|)$, where $|S|$ is the number of states and $|A|$ the number of actions.
+
+Here is the equation for each iteration:
+
+$$
+V_{k+1}(s) \gets \max_a \sum_{s'} T(s, a, s') [R(s, a, s') + \gamma V_k(s')]
+$$
+
+I couldn't understand why the time complexity is $\mathcal{O}(|S|^{2}|A|)$. I searched the internet, but I didn't find any good explanation.
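+
+For reference, here is a direct sketch of one sweep of this update (my own illustration, with the transition model T and reward function R given as nested dictionaries), so that the loops I am trying to count are explicit:
+
+def value_iteration_sweep(V, states, actions, T, R, gamma):
+    # T[s][a] is a list of (s_next, prob) pairs; R[s][a][s_next] is the reward.
+    V_new = {}
+    for s in states:
+        best = float('-inf')
+        for a in actions:
+            total = 0.0
+            for s_next, prob in T[s][a]:
+                total += prob * (R[s][a][s_next] + gamma * V[s_next])
+            best = max(best, total)
+        V_new[s] = best
+    return V_new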
+"
+"['machine-learning', 'gradient-descent', 'objective-functions']"," Title: Can the mean squared error be negative?Body: I'm new to machine learning. I was watching a Prof. Andrew Ng's video about gradient descent from the machine learning online course. It said that we want our cost function (in this case, the mean squared error) to have the minimum value, but that minimum value shown in the graph was not 0. It was a negative number!
+
+How can our cost function, which is mean squared error, have a negative value, given that the square of a real number is always positive? Even if it is possible, don't we want our error to be 0?
+"
+"['reinforcement-learning', 'q-learning', 'off-policy-methods', 'greedy-policy']"," Title: Why are Q values updated according to the greedy policy?Body: Apparently, in the Q-learning algorithm, the Q values are not updated according to the ""current policy"", but according to a ""greedy policy"". Why is that the case? I think this is related to the fact that Q-learning is off-policy, but I am also not familiar with this concept.
+"
+"['deep-learning', 'reinforcement-learning', 'game-ai', 'q-learning']"," Title: Is it possible to use a feed-forward neural network to predict the actions in reinforcement learning?Body: I have done a lot of research on the internet about Reinforcement Learning and I encountered two methods of Reinforcement Learning: Q-Learning and Deep Q-Learning. I have developed a vague idea of how these two work.
+
+Before I knew anything about Reinforcement Learning this is how I thought it would work:
+
+Suppose I have 2 virtual players in a game who can shoot each other, one of them is a decent playing hard-coded/pre-coded AI, and the other one is the player I want to train (to shoot the other player and dodge his bullets), the aim of the game would be to get the greatest net score (shots you hit minus shots you took) within 1 minute (a session), and you only have 20 bullets.
+
+
+
+You have 3 actions: move left-right (0 = max left speed, 1 = max right speed), jump (0 = don't jump, 1 = jump), and shoot (0 = don't shoot, 1 = shoot).
+
+What I thought was, you could create a basic Feed-Forward Neural Network use the enemy's position and his bullet(s)'s position(s) and bullet direction(s) for the input layer and the action being taken will be given by the (3 nodes in the) output layer.
+
+The untrained player starts off with a randomized algorithm, then (for back-propagation) at the end of each session it modifies one of the parameters of the neural network by a small amount, and a new session is started with the slightly modified NN. If this session ends with more points than the previous session, it keeps the changes and makes more changes in that direction; otherwise, the changes are undone, or possibly reversed. I would visualise this as gradient descent similar to that of supervised learning.
+
+So my questions are:
+
+
+- Is something like this already out there? What is it called?
+- If nothing like this is out there, could you give me any tips to optimize this method or point out any key points I should keep in minds while carrying this out?
+- Since I have written this game, I have control over the speed of the actions, but if I did not, I know this AI would take ages to learn, so is there any way to make the learning faster while still keeping the basic idea in mind?
+- How exactly is this different from deep Q-learning (if it is)?
+
+
+Thanks in advance!
+"
+"['neural-networks', 'convolutional-neural-networks', 'classification', 'facial-recognition']"," Title: Machine learning approach to facial recognitionBody: First of all I'm very new to the field. Maybe my question is a bit too naive or even trivial...
+
+I'm currently trying to understand how can I go about recognizing different faces.
+
+Here is what I tried so far and the main issues with each approach:
+
+1) Haar Cascade -> HOG -> SVM:
+ The main issue is that the algorithm becomes very indecisive when more than four people are trained... The same occurs when we change Haar Cascade for a pre-trained CNN to detect faces...
+
+2) dlib facial landmarks -> distance between points -> SVM or Simple Neural Network Classification:
+ This is the current approach and it behaves very well when four people are trained... When more people are trained it becomes very messy, jumping from decision to decision and never resolving to a choice.
+
+I've read online that triplet loss is the way to go... But I am very confused as to how I'd go about implementing it... Can I use the current distance vectors found using dlib or should I scrap everything and train my own CNN?
+
+If I can use the distance vectors, how would I pass the data to the algorithm? Is Triplet loss a trivial neural network only with its loss function altered?
+
+I've taken the liberty of showing exactly how the distance vectors are being calculated:
+
+
+
+The green lines represent the distances being calculated.
+A list of 33 floats is returned, which is then fed to the classifier.
+
+Here is the relevant code for the classifier (Keras):
+
+# Imports assumed elsewhere in the file:
+# from keras.models import Sequential
+# from keras.layers import Dense
+# from keras.utils import np_utils
+def fit_classifier(self):
+    # Load the 33-dimensional distance vectors and their person labels.
+    x_train, y_train = self._get_data(self.train_data_path)
+    x_test, y_test = self._get_data(self.test_data_path)
+    # One-hot encode the labels.
+    encoding_train_y = np_utils.to_categorical(y_train)
+    encoding_test_y = np_utils.to_categorical(y_test)
+    model = Sequential()
+    model.add(Dense(10, input_dim=33, activation='relu'))
+    model.add(Dense(20, activation='relu'))
+    model.add(Dense(30, activation='relu'))
+    model.add(Dense(40, activation='relu'))
+    model.add(Dense(30, activation='relu'))
+    model.add(Dense(20, activation='relu'))
+    model.add(Dense(10, activation='relu'))
+    # One output unit per person, softmax over identities.
+    model.add(Dense(max(y_train)+1, activation='softmax'))
+    model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
+    model.fit(x_train, encoding_train_y, epochs=100, batch_size=10)
+
+
+I think this is a more theoretical question than anything else... If someone with good experience in the field could help me out I'd be very happy!
+"
+"['neural-networks', 'convolutional-neural-networks', 'image-recognition']"," Title: Image prediction model when data-set classes have visual similarityBody: Let's say we have a data-set of all cats and we have to identify the cat breed from a given test image. As two different cat breeds can be visually similar, can we use existing networks (VGG, ImageNet, GoogleNet) to solve this problem?
+
+
+- Should FaceNet be applied here? The problem is similar to face recognition, where the facial characteristics of two different people are very similar, yet the model can still correctly recognize a person.
+- What if, on top of the visual similarity in the data-set, we have only a few examples of each class? That is, for some (arbitrary) problem we have a good amount of data overall, but only a few examples per class.
+
+
+Is there any model that can be applied here?
+"
+"['reinforcement-learning', 'q-learning', 'proofs', 'convergence']"," Title: Why does Q-learning converge to the optimal policy, even if the agent acts sub-optimally?Body: In Q-learning, during training, it doesn't matter how the agent selects actions. The algorithm always converges to the optimal policy. Why does this happen? What's the intuition?
+"
+"['terminology', 'evolutionary-algorithms', 'applications', 'swarm-intelligence']"," Title: How Swarm Intelligence can empower Blockchain?Body: Are there examples of applications in blockchain consensus using swarm intelligence, as opposed to classical consensus mechanisms like PoW or PBFT?
+
+Please note that recent classical consensus mechanisms are either lottery-based, such as PoW, in which the winner of the lottery creates the new block, or voting-based, such as PBFT or Paxos, in which the entities reach consensus through a voting process. Both approaches have problems with efficiency, latency, scalability, performance, etc., so a new alternative approach seems necessary. In this light, can nature-inspired algorithms (such as evolutionary algorithms (EA), particle swarm optimization (PSO), ant colony optimization (ACO), etc.) be employed as an alternative to classical consensus algorithms?
+"
+"['deep-learning', 'training', 'regularization']"," Title: Regarding L0 sparsification of DNNs proposed by Louizos, Kingma and WellingBody: I am reading the paper on $\ell_0$ regularization of DNNs by Louizos, Welling and Kingma (2017) (Link to arxiv).
+
+In Section 2.1 the authors define the cost function as follows: $$ \mathcal{R}\left( \tilde{\theta}, \pi \right) = \mathbb{E}_{q(z|\pi)}\left[ \frac{1}{N} \left(\sum_{i=1}^N \mathcal{L}\left(h\left( x_i, \tilde{\theta}\circ Z\right), y_i \right) \right)\right] + \lambda\sum_{i=1}^{|\tilde{\theta}|}\pi_i. $$
+
+In the above display, $\tilde{\theta}$ are the weights, $Z$ is a random vector of the same dimension as $\tilde{\theta}$ consisting of independent Bernoulli components $q(Z_i|\pi) \sim Bernoulli (\pi_i)$, and $\circ$ is the element-wise product.
+
+The authors then state the following:
+
+
+ the first term is problematic for $\pi$ due to the discrete nature of $Z$, which does not allow for efficient gradient based optimization.
+
+
+I am not sure I understand this. Denoting the first term by $\mathcal{R}_1 = \sum_{i=1}^N \frac{1}{N}R_i$ ($R_i$ defined below), and using the notation $\pi_z = \prod \pi_i^{z_i} (1-\pi_i)^{1-z_i}$ and $\mathcal{Z}$ for the set of all possible values of $Z$, we should have
+$$
+R_i := \mathbb{E}_{q(z|\pi)}\left[ \mathcal{L}\left( h\left(x_i, \tilde{\theta}\circ z\right), y_i \right) \right] = \sum_{z \in \mathcal{Z}}\pi_z\mathcal{L}\left( h\left(x_i, \tilde{\theta}\circ z\right), y_i \right)
+$$
+
+So, it seems to me that the gradient of $R_i$ with respect to $\pi_j$ can be obtained as
+$$
+\frac{d R_i}{d\pi_j} = \sum_{z \in \mathcal{Z}}\mathcal{L}\left( h\left(x_i, \tilde{\theta}\circ z\right), y_i \right) \frac{d\pi_z}{d\pi_j}
+$$
+and
+$\frac{d\pi_z}{d\pi_j} = \frac{\pi_z}{\pi_j}$ if $z_j=1$ and $-\frac{\pi_z}{1-\pi_j}$ if $z_j=0$. So, it appears that we can obtain the derivative of the first term with respect to $\pi_j$ as well.
+
+My question is the following:
+
+
+ If my above calculation is correct, then the derivatives $\frac{d\mathcal{R}_1}{d\pi_j}$ can be computed, and we can perform SGD on the cost function $\mathcal{R}(\tilde{\theta}, \pi)$. But the authors claim that it cannot be obtained and hence they introduce the `hard concrete' distribution etc. to construct a differentiable cost function.
+
+"
+"['optimization', 'search', 'hill-climbing', 'simulated-annealing', 'local-search']"," Title: What is the basic purpose of local search methods?Body: I read about the hill climbing algorithms, the simulating annealing algorithm, but I am confused. What is the basic purpose of local search methods?
+"
+['deep-learning']," Title: Setting learning rate as negative number for wrong train casesBody: I was watching a video which tells a bit about reinforcement learning, and I learnt that, if the robot makes a wrong movement, they train the network with a negative learning rate. This method made me think of something.
+
+My question is: ""Can I use wrong data to train a neural network?"".
+
+To illustrate the method, I'll be using the eye-tracker project that I'm working on right now. In my project, there are photos and the points corresponding to the locations that I am looking at in each photo. It's like a grid of (9, 16). If I look at the middle of the screen, the output is (4, 7.5); if I look at the upper-left side of the screen, it is (0, 0). Normally, for a photo in which I'm looking at the middle, we use that photo as input and (4, 7.5) as the output to train the network with a positive learning rate. Now let me rephrase the question: can I train the model by giving a photo in which I'm looking at the middle as input and (0, 0) as the output (label), using a negative learning rate?
+
+Thank you. If I made a mistake against the rules of Stack Overflow, I'm so sorry. I'll be waiting for your valuable answers.
+
+Edit: this is a conversation between me and someone from Stack Overflow; I'll let you read it, and I hope it clarifies my point.
+
+-> Yes, you can. But, what would be the reason of passing a wrong ground truth to your training process? – Neb 14 hours ago
+
+-> If I have no various data to train, I can create more data via this method to increase the certainty when I use squared error loss. But I have doubts about this method. for example lets assume we have a photo named 'X' and its label is (5,5). at first epoch, Let the model gives (2,2) for photo 'X'. if I try to train network with a photo X and label -> (4,4) using negative learning rate, it might send away the point from (2,2) to (1,1) whereas we expect it to send the point (2,2) to (5,5). Did you get what I meant? – Faruk Nane 14 hours ago
+
+-> You are right. Using a negative learning rate and a wrong ground truth will not necessarly make the learning process converge to the optimal value for your net's parameters – Neb 13 hours ago
+
+-> So can I say that ""when I'm sure that the absolute error for each case is less than 2, I can use this method using points away 2 units."" So It'll make the outputs closer to the target point. I don't really know if we can easily say that. because we consider this method as if there are only 2 parameters which is the output point. However a model has many parameters so It might affect so differently. My brain is so confused. I think this might be an academic work, right? – Faruk Nane 13 hours ago
+
+-> Well, it is difficult to suggests you the path to follow without knowing the exact specifics of your problem. In any case, if you're trying to solve this problem for fun or self-improvement, I'd suggest you to experiment with the solutions you came up with and see if they works. – Neb 13 hours ago
+
+"
+"['comparison', 'evolutionary-algorithms', 'crossover-operators', 'mutation-operators', 'genetic-operators']"," Title: What is the difference between ""mutation"" and ""crossover""?Body: In the context of evolutionary computation, in particular genetic algorithms, there are two stochastic operations ""mutation"" and ""crossover"". What are the differences between them?
+"
+"['genetic-algorithms', 'evolutionary-algorithms', 'selection-operators']"," Title: What are the available selection methods in genetic algorithms?Body: In a genetic algorithm, there are different steps. One of those steps is the selection of chromosomes for reproduction. What are the available selection strategies in genetic algorithms?
+"
+"['mythology-of-ai', 'asimovs-laws', 'survival']"," Title: How does a robot protect its own existenceBody: What are the many ways that artificial intelligence robots protect their existence?
+
+Isaac Asimov's ""Three Laws of Robotics""
+
+A robot may not injure a human being or, through inaction, allow a human being to come to harm.
+
+A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
+
+A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
+"
+"['reinforcement-learning', 'recurrent-neural-networks']"," Title: Why are all the actions converging to the same index?Body: I am using PPO with an LSTM agent. My agent is performing 10 actions for each episode, one action is corresponding to one LSTM timestep and the action space is discrete. I have only one reward per episode which I can compute after the last action of the episode.
+
+For each timestep (~ action) my agent has 20 choices. The following plot shows the reward (y-axis) versus the current episode (x-axis).
+The plot shows a decreasing reward because I want to minimize this reward, so I use the negative of the true reward.
+
+At the beginning of the process, the agent seems to learn very well and the reward decreases, but then it converges to a value which is not the best. When I look at the results of my experiment, it appears that the chosen action index is the same at every timestep (for example, the agent is always choosing the second value of my discrete action space).
+
+Does anyone have an idea about what is happening here ?
+
+
+"
+"['implementation', 'data-preprocessing']"," Title: Which of these two numerical methods for z-score normalisation is preferable, in multivariate linear regression?Body: In the exercise Exercise 3: Multivariate Linear Regression, by Andrew Ng, the author suggests to ""scale both types of inputs by their standard deviations and set their means to zero"".
+
+$$x_{n e w}=\frac{x-\mu}{\sigma}$$
+
+Method 1
+
+The author provides the following Matlab (and Octave) code to scale the inputs.
+
+x = [ones(m, 1), x];
+sigma = std(x);
+mu = mean(x);
+x(:,2) = (x(:,2) - mu(2))./ sigma(2);
+x(:,3) = (x(:,3) - mu(3))./ sigma(3);
+
+
+Method 2
+
+But why not simply scale the inputs between zero and one, or, divide by the maximum?
+
+x_range=max(x)
+x(:,2) = (x(:,2)/x_range(2));
+x(:,3) = (x(:,3)/x_range(3));
+
+
+I have done the exercise with method 2 and these are the results.
+
+
+
+Question
+
+Is there a computational advantage with the first method over the second method?
+"
+"['ai-design', 'training', 'game-ai', 'dqn', 'open-ai']"," Title: DQN Breakout adding an extra negative reward to help training?Body: I'm trying to train a DQN, so I'm using OpenAI gym and Breakout (Breakout-v0).
+
+I have altered the reward supplied by the environment: if the episode is not completed fully, the agent gets a -10 reward. Could this be counterproductive for learning?
+"
+"['machine-learning', 'generative-model', 'random-variable', 'probability-distribution', 'latent-variable']"," Title: What kind of distributions can be used to model discrete latent variables?Body: If we take the vanilla variational auto-encoder (VAE), then $p(z)$ is a Gaussian distribution with zero mean and unit variance, and we approximate $p(z|x) \approx q(z|x)$ with a Gaussian distribution as well, for each latent variable $z$.
+
+But what if $z$ is a discrete variable? What kind of distributions can be used to model discrete latent variables? For example, what kind of distribution can be used to model $p(z)$ and $p(z|x) \approx q(z|x)$?
+"
+"['genetic-algorithms', 'evolutionary-algorithms', 'fitness-functions']"," Title: How can a genetic algorithm adapt and get better in a changing environment?Body: I've just started studying genetic algorithms and I'm not able to understand why a genetic algorithm can improve if, at each generation, the 'world' that the population encounters changes. For example, in this demo (http://math.hws.edu/eck/js/genetic-algorithm/GA.html), it's pretty clear to me that the eating statistics would improve every year if the bunches of grass grew in exactly the same places, but instead they always grow in different positions, and I can't figure out how it can be useful to evaluate (through the fitness function) the obtained eating stats, given that the next environment will be different.
+"
+"['training', 'backpropagation', 'objective-functions']"," Title: Training by one batch of examples, what does it meanBody: Say I have a batch of examples, where each example represents a state:
+
+[0.1, 0.2, 0.5] #1st example
+[0.4, 0.0, 0.3] #2nd example
+..........
+[0.1, 0.1, 0.1] #16th example
+
+
+I feed them through the NN, and then the NN predicts the following class:
+
+[move up] #1st example
+[move down] #2nd example
+........
+[move left] #16th example
+
+
+And then I take the squared loss (which is calculated to be 0.1 after averaging over the 16 examples), and do backward propagation.
+
+So, can I assume that each of these examples contributes (or is assigned) a loss of 0.1?
+"
+"['machine-learning', 'terminology', 'philosophy', 'definitions']"," Title: Is an algorithm that is no longer actively learning an AI?Body: This question assumes a definition of AI based on machine learning, and was inspired by this fun Technology Review post:
+
+
+
+SOURCE: Is this AI? We drew you a flowchart to work it out (Karen Hao, MIT Technology Review)
+
+As the definition of artificial intelligence has been a continual subject of discussion on this stack, I wanted to bring it to the community for perspectives.
+
+The formal question here is:
+
+
+- Is an algorithm that is no longer actively learning an AI?
+
+
+Specifically, can applied algorithms that are not actively learning be said to reason?
+"
+['chess']," Title: How is AlphaZero different from Stockfish or Rybka?Body: I don't know much about AI or chess engines, but what is the fundamental difference between AlphaZero and Stockfish or Rybka?
+"
+"['genetic-algorithms', 'evolutionary-algorithms', 'fitness-functions', 'fitness-design']"," Title: How to create a good fitness function?Body: In genetic algorithms, a function called ""fitness"" (or ""evaluation"") function is used to determine the ""fitness"" of the chromosomes. Creating a good fitness function is one of the challenging tasks in genetic algorithms. How would you create a good fitness function?
+"
+"['genetic-algorithms', 'fitness-functions', 'evolutionary-computation', 'fitness-design', '8-queens-problem']"," Title: How to design a fitness function for the 8-queens problem?Body: In evolutionary computation and, in particular, in the context of genetic algorithms, there is the concept of a fitness function. The better a state, the greater the value of the fitness function for that state.
+What would be a good fitness function for the 8-queens problem?
+"
+"['reinforcement-learning', 'rewards']"," Title: Why is it ok to calculate the reward based on a hidden state?Body: I'm looking at this source code, where the reward is calculated with
+
+reward = cmp(score(self.player), score(self.dealer))
+
+
+Why is it ok to calculate the reward based on a hidden state?
+
+A player only sees the dealer's first card.
+
+self.dealer[0]
+
+"
+"['markov-decision-process', 'continuous-action-spaces', 'transition-model', 'finite-markov-decision-process', 'continuous-state-spaces']"," Title: How to generalize finite MDP to general MDP?Body: Suppose, for simplicity sake, to be in a discrete time domain with the action set being the same for all states $S \in \mathcal{S}$. Thus, in a finite Markov Decision Process, the sets $\mathcal{A}$, $\mathcal{S}$, and $\mathcal{R}$ have a finite number of elements. We could then say the following
+
+$$p(s',r | s,a) = P\{S_t=s',R_t=r | S_{t-1}=s,A_t=a\} ~~~ \forall s',s \in \mathcal{S}, r \in \mathcal{R} \subset \mathbb{R}, a \in \mathcal{A}$$
+
+where the function $p$ defines the dynamics of the finite MDP and $P$ defines the probability.
+
+
+
+How could I extend this to a general MDP? That is, an MDP where the sets $\mathcal{A}$, $\mathcal{S}$, and $\mathcal{R}$ haven't a finite number of elements? To be more precise, in my case $\mathcal{A} \subset \mathbb{R}^n$, $\mathcal{S} \subset \mathbb{R}^m$, and $\mathcal{R} \subset \mathbb{R}$. My thought is that the equation above is still true, however, the probability is zero for each tuple $s',r,s,a$.
+
+Is it sufficient to say that for finite MDP we have
+
+$$\sum_{s'\in\mathcal{S}}\sum_{r\in\mathcal{R}}p(s',r|s,a)=1 ~~~ \forall s\in\mathcal{S},a\in\mathcal{A}$$
+
+while in a non-finite MDP (supposing that the sets $\mathcal{S}$ and $\mathcal{A}$ are continuous) we have
+
+$$\int_{s'\in\mathcal{S}}\int_{r\in\mathcal{R}}p(s',r|s,a)\,dr\,ds'=1 ~~~ \forall s\in\mathcal{S},a\in\mathcal{A}$$
+
+or is it more complex than this?
+"
+"['papers', 'alphazero']"," Title: What happens before the first 8 moves in Alpha Zero?Body: The Alpha zero paper says (in the caption of Table S1) that
+
+The first set of features are repeated for each position in a $T = 8$-step history.
+
+So, what happens before the first 8 moves? Do they just repeat the starting position?
+"
+"['terminology', 'definitions', 'search', 'constraint-satisfaction-problems']"," Title: What is a successor function (in CSPs)?Body: In Constraint Satisfaction Problems (CSPs), a state is any data structure that supports
+
+- a successor function,
+- a heuristic function, and
+- a goal test.
+
+In this context, what is a successor function?
+"
+"['search', 'constraint-satisfaction-problems', 'efficiency', 'depth-first-search', 'uninformed-search']"," Title: How to improve the efficiency of the backtracking search in CSPs?Body: Backtracking search is the basic uninformed search algorithm for constraint satisfaction problems (CSPs). How could we improve the efficiency of the backtracking search in CSPs?
+"
+"['neural-networks', 'optimization']"," Title: What does it essentially mean if the neural network has convex error surface?Body: Suppose if I am building a Linear Regression model with one fully connected layer and a sigmoid with minimizing mean squared error as objective. Why would the error surface be convex?
+
+Does finding the optimal parameters for this network mean we cannot do better than this? Under what assumptions, this solution would be optimal? If we relax the linearity assumption and add some non-linearity to the network can we do better than this? Why so?
+"
+"['chat-bots', 'intelligent-agent', 'human-like', 'emotional-intelligence', 'ontology']"," Title: Can intelligent agents have personalities and emotions?Body: Can intelligent agents (and chatbots) have personalities and emotions, given a properly defined ontology?
+"
+"['deep-learning', 'autoencoders', 'recommender-system']"," Title: What are some limitations of using Collaborative Deep learning for Recommender systems?Body: Recently I worked through a paper by Hao Wang, Collaborative Deep Learning for Recommender Systems, which uses a two-way, tightly coupled method: collaborative filtering for item correlation and stacked denoising autoencoders for the optimization of the problem.
+
+I want to know the limitations of applying stacked autoencoders and hierarchical Bayesian methods to recommender systems.
+"
+"['neural-networks', 'deep-learning', 'natural-language-processing', 'bert', 'text-generation']"," Title: Can BERT be used for sentence generating tasks?Body: I am a new learner in NLP. I am interested in the sentence generating task. As far as I am concerned, one state-of-the-art method is the CharRNN, which uses RNN to generate a sequence of words.
+
+However, BERT came out several weeks ago and is very powerful. Therefore, I am wondering whether this task can also be done with the help of BERT? I am a new learner in this field, so thank you for any advice!
+"
+"['neural-networks', 'convolutional-neural-networks', 'recurrent-neural-networks', 'long-short-term-memory', 'human-activity-recognition']"," Title: Are 1D CNNs really the appropriate model for human activity recognition?Body: This article on human activity recognition states that 1D convolutional neural networks work the best on the classification of human activities using data from the accelerometer. But I think that human activities, like swinging the arm, are sequential actions and they require LSTMs.
+So, which one should be more effective for this task: CNNs or LSTMs? In other words, is spatial learning required or sequence learning?
+"
+"['machine-learning', 'objective-functions', 'gradient', 'sse']"," Title: How can the sum of squared errors have negative gradient if it's defined as the squared of the error?Body: The formula for the sum of squared errors (SSE) is:
+$$
+\frac{1}{2} \sum_{i=1}^n (t^i - o^i)^2
+$$
+I have a few related questions.
+
+- If $t^i - o^i$ is negative, doesn't the power of 2 eliminate any negative result?
+
+- How can there then be any negative gradient at the output?
+
+- What if the output is too high, so that the gradient should be negative, meaning that the weights (on average) have to be decreased? How can we differentiate between 'output too high' and 'output too low'?
+
+
+"
+"['machine-learning', 'reinforcement-learning', 'genetic-algorithms']"," Title: Scrabble game using machine learningBody: I've been thinking if machine learning can be used to play the game Scrabble. My knowledge is limited in the ML field, thus I've seeking some pointers :)
+
+I want to know how I could possibly build a model that picks a move from all the valid moves of the current game state, then plays the move and waits for the delayed reward. The actions here aren't static actions; they are basically about selecting a move to maximize the final score.
+
+Is there any way to encode the valid moves and then use a model to pick those moves?
+
+I've also considered the genetic approach, but I think that, if I represent my move with a set of features (score, consonant/vowel ratio, rack leave score, number of blank tiles after the move, etc.), training a neural network like this could take a long time.
+
+Another training related question, is it feasible to run the training on a GPU given that I will be waiting for a response (the new game state) from the opponent (e.g. Quackle) after every action?
+
+Thank you :)
+"
+['dempster-shafer-theory']," Title: How Dempster-Shafer theory work in AI?Body: How does Dempster-Shafer theory work in representing ignorance in the AI field?
+"
+"['applications', 'swarm-intelligence', 'ant-colony-optimization', 'blockchain']"," Title: Is there an efficiency swarm Intelligence algorithm for off-chain channels routing in blockchain?Body: One of the solutions to scale blockchain is to use off-chain channels. You can find its definition here: https://en.bitcoin.it/wiki/Off-Chain_Transactions.
+
+However, one of the problems of off-chain channels is finding a suitable decentralized routing mechanism.
+
+Since in Bitcoin there is no routing table and transactions are only broadcast and also, in general, we need to avoid centralized approaches for routing, is it practical to use swarm intelligence algorithms, such as ant colony optimization ones, for off-chain channels?
+
+I refer you to the ant routing algorithm proposed in the paper Ant routing algorithm for the Lightning Network for an instance of employing ACO algorithms for routing in the Lightning Network. However, the paper has not been evaluated to demonstrate its performance.
+"
+"['search', 'a-star']"," Title: How is the cost of the path to each node computed in A*?Body: How is the cost of the path to each node $n$ computed in the A* algorithm? Do we need to add the cost of the path to the parent node $p$ to the cost of the path of the child node $n$?
+"
+"['convolutional-neural-networks', 'image-recognition', 'datasets']"," Title: How to preprocess a modified dataset so that a fitted CNN makes correct predictions on an un-modified version of the dataset?Body: for a school project I have been given a dataset containing images of plants and weeds. The goal is to detect when there is a weed in the pictures. The training and validation sets have already been created by our teachers, however they probably didn't have enough images for both so they ""photoshopped"" some weeds in some of the training pictures.
+
+Here are examples of images with the weed label in the training set:
+
+
+
+In some cases, the ""photoshopped"" weed is hard to detect, and no shape resembling a weed is clearly visible like in this picture (weed at the very bottom, near the middle):
+
+
+
+And here is an example of an image with the weed label in the validation set:
+
+
+
+How would I go about preprocessing the training set so that a CNN trained on it would perform well on the validation set? I was thinking of applying a low-pass filter to the rough edges of the photoshopped images so that the network doesn't act as an edge detector, but it doesn't seem very robust. Should I manually select the best images from the training set? Thank you!
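+
+For reference, the kind of low-pass filtering I have in mind is just a simple blur over the training images (a sketch with OpenCV; the file names are only placeholders):
+
+import cv2
+
+# Blur the whole training image so that the sharp edges of the pasted
+# ('photoshopped') weeds are smoothed out before training.
+img = cv2.imread('train_image_0001.png')
+blurred = cv2.GaussianBlur(img, (5, 5), 1.0)
+cv2.imwrite('train_image_0001_blurred.png', blurred)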
+"
+"['game-ai', 'definitions', 'agi', 'alphazero', 'alphago']"," Title: Is AlphaZero an example of an AGI?Body: From DeepMind's research paper on arxiv.org:
+
+
+ In this paper, we apply a similar but fully generic algorithm, which
+ we call AlphaZero, to the games of chess and shogi as well as Go,
+ without any additional domain knowledge except the rules of the game,
+ demonstrating that a general-purpose reinforcement learning algorithm
+ can achieve, tabula rasa, superhuman performance across many
+ challenging domains.
+
+
+Does this mean AlphaZero is an example of AGI (Artificial General Intelligence)?
+"
+"['neural-networks', 'natural-language-processing', 'models', 'architecture']"," Title: Neural network architecture for comparisonBody: When someone wants to compare 2 inputs, the most widespread idea is to use a Siamese architecture. Siamese architecture is a very high level idea, and can be customized based on the problem we are required to solve.
+
+Is there any other architecture type for comparing 2 inputs?
+
+
+
+Background
+
+I want to use a neural network for comparing 2 documents (semantic textual similarity). A Siamese network is one approach; I was wondering if there are others.
+"
+"['reinforcement-learning', 'deep-rl', 'q-learning', 'dqn', 'implementation']"," Title: Why isn't my DQN implementation working properly?Body: I'm trying to build a DQN to replicate the DeepMind results. I'm doing with a simple DQN for the moment, but it isn't learning properly: after +5000 episodes, it couldn't get more than 9-10 points. Each episode has a limit of 5000 steps but it couldn't reach more than 500-700. I think the problem is in the replay function, which is:
+def replay(self, replay_batch_size, replay_batcher):
+ j = 0
+ k = 0
+ replay_action = []
+ replay_state = []
+ replay_next_state = []
+ replay_reward= []
+ replay_superbatch = []
+
+ if len(memory) < replay_batch_size:
+ replay_batch = random.sample(memory, len(memory))
+ replay_batch = np.asarray(replay_batch)
+ replay_state_batch, replay_next_state_batch, reward_batch, replay_action_batch = replay_batcher(replay_batch)
+ else:
+ replay_batch = random.sample(memory, replay_batch_size)
+ replay_batch = np.asarray(replay_batch)
+ replay_state_batch, replay_next_state_batch, reward_batch, replay_action_batch = replay_batcher(replay_batch)
+
+ for j in range ((len(replay_batch)-len(replay_batch)%4)):
+
+ if k <= 4:
+ k = k + 1
+ replay_state.append(replay_state_batch[j])
+ replay_next_state.append(replay_next_state_batch[j])
+ replay_reward.append(reward_batch[j])
+ replay_action.append(replay_action_batch[j])
+
+ if k >=4:
+ k = 0
+ replay_state = np.asarray(replay_state)
+ replay_state.shape = shape
+ replay_next_state = np.asarray(replay_next_state)
+ replay_next_state.shape = shape
+ replay_superbatch.append((replay_state, replay_next_state,replay_reward,replay_action))
+
+ replay_state = []
+ replay_next_state = []
+ replay_reward = []
+ replay_action = []
+
+ states, target_future, targets_future, fit_batch = [], [], [], []
+
+ for state_replay, next_state_replay, reward_replay, action_replay in replay_superbatch:
+
+ target = reward_replay
+ if not done:
+ target = (reward_replay + self.gamma * np.amax(self.model.predict(next_state_replay)[0]))
+
+ target_future = self.model.predict(state_replay)
+
+ target_future[0][action_replay] = target
+ states.append(state_replay[0])
+ targets_future.append(target_future[0])
+ fit_batch.append((states, targets_future))
+
+ history = self.model.fit(np.asarray(states), np.array(targets_future), epochs=1, verbose=0)
+
+ loss = history.history['loss'][0]
+
+ if self.exploration_rate > self.exploration_rate_min:
+
+ self.exploration_rate -= (self.exploration_rate_decay/1000000)
+ return loss
+
+What I'm doing is getting 4 experiences (states), concatenating them, and feeding them into the CNN with shape (1, 210, 160, 4). Am I doing something wrong? If I implement a DDQN (Double Deep Q-Network), should I obtain results similar to those in the DeepMind Breakout video? Also, I'm using the Breakout-v0 environment from OpenAI gym.
+Edit
+Am I doing this properly? I implemented an identical CNN; then I update the target every 100 steps and copy the weights from the model CNN to the target_model CNN. Should this improve the learning? Anyway, I'm getting a low loss.
+for state_replay, next_state_replay, reward_replay, action_replay in replay_superbatch:
+
+ target = reward_replay
+ if not done:
+
+ target = (reward_replay + self.gamma * np.amax(self.model.predict(next_state_replay)[0]))
+ if steps % 100 == 0:
+
+ target_future = self.target_model.predict(state_replay)
+
+ target_future[0][action_replay] = target
+ states.append(state_replay[0])
+ targets_future.append(target_future[0])
+ fit_batch.append((states, targets_future))
+ agent.update_net()
+
+ history = self.model.fit(np.asarray(states), np.array(targets_future), epochs=1, verbose=0)
+
+ loss = history.history['loss'][0]
+
+Edit 2
+So as far I understand, this code should work am I right?
+if not done:
+
+ target = (reward_replay + self.gamma * np.amax(self.target_model.predict(next_state_replay)[0]))
+ target.shape = (1,4)
+
+ target[0][action_replay] = target
+ target_future = target
+ states.append(state_replay[0])
+ targets_future.append(target_future[0])
+ fit_batch.append((states, targets_future))
+
+ if step_counter % 1000 == 0:
+
+ target_future = self.target_model.predict(state_replay)
+
+ target_future[0][action_replay] = target
+ states.append(state_replay[0])
+ targets_future.append(target_future[0])
+ fit_batch.append((states, targets_future))
+ agent.update_net()
+
+ history = self.model.fit(np.asarray(states), np.array(targets_future), epochs=1, verbose=0)
+
+"
+"['algorithm', 'swarm-intelligence', 'ant-colony-optimization']"," Title: Are all ant routing algorithms the same?Body: Are all ant routing algorithms the same? If not, what are the common properties of all of them? In other words, how can we detect that a routing algorithm is an ant routing algorithm?
+"
+"['neural-networks', 'machine-learning', 'natural-language-processing', 'recurrent-neural-networks', 'chat-bots']"," Title: Does an advanced Dialogue state tracking eliminate the need of intent classifier and slot filling models in dialogue systems/ chatbots?Body: I am learning to create a dialogue system. The various parts of such a system are Intent classifier, slot filling, Dialogue state tracking (DST), dialogue policy optimization and NLG.
+
+While reading this paper on DST, I found out that a discriminative sequence model of DST can identify goal constraints, fill slots and maintain state of the conversation.
+
+Does this mean that I now don't need to create an intent classifier and slot-filling models separately, as these tasks are already being done by this DST? Or am I misunderstanding both things, and they are actually separate?
+"
+"['convolutional-neural-networks', 'ai-design']"," Title: Can I do oversampling by copying the same image multiple times? Will it affect my neural network accuracy?Body: I am working on an image data-set. As you may have guessed, it is imbalanced data. I have 'Class A, 19,000 images' and 'Class B, 2,876 images'.
+
+So I did an undersampling by removing randomly from the majority class till it becomes equal to the minority class.
+
+By doing this, I am losing a lot of information that I could get from those 19,000 images.
+So I do an oversampling of the minority class, by simply copying the 2,876 images again and again.
+
+Is this undersampling method correct, and will it affect my accuracy? I trained an InceptionV4 model using this oversampled data; it is not at all stable and I am getting poor accuracy.
+
+What should be my strategy ?
+"
+"['comparison', 'search', 'proofs', 'a-star', 'uniform-cost-search']"," Title: How do I show that uniform-cost search is a special case of A*?Body: How do I show that uniform-cost search is a special case of A*? How do I prove this?
+"
+"['machine-learning', 'linear-regression', 'maximum-likelihood']"," Title: Understanding the math behind using maximum likelihood for linear regressionBody: I understand both terms, linear regression and maximum likelihood, but, when it comes to the math, I am totally lost. So I am reading this article The Principle of Maximum Likelihood (by Suriyadeepan Ramamoorthy). It is really well written, but, as mentioned in the previous sentence, I don't get the math.
+
+The joint probability distribution of $y,\theta, \sigma$ is given by (assuming $y$ is normally distributed):
+
+
+
+This is equivalent to maximizing the log-likelihood:
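+
+As far as I understand, the log-likelihood is
+
+$$l(\theta) = \log p(y \mid X, \theta, \sigma) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y_i - \theta^T x_i\right)^2$$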
+
+
+The maximum can then be found by setting the derivative of $l(\theta)$ to zero:
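+
+which, as far as I understand, gives
+
+$$\frac{\partial l(\theta)}{\partial \theta} = \frac{1}{\sigma^2} \sum_{i=1}^{n} \left(y_i - \theta^T x_i\right) x_i = 0$$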
+
+
+I get everything until this point, but I don't understand how this function is equivalent to the previous one:
+
+"
+"['search', 'implementation', 'breadth-first-search', 'depth-first-search']"," Title: Why do we use a last-in-first-out queue in depth-first search?Body: Why do we use a last-in-first-out (LIFO) queue in the depth-first search algorithm?
+
+In the breadth-first search algorithm, we use a first-in-first-out (FIFO) queue, so I am confused.
+"
+"['neural-networks', 'backpropagation']"," Title: Am I able to visualize the differentiation in backprop as follows?Body: I'm wondering if I can visualize the backprop process as follows (please excuse me if I have written something terribly wrong). Suppose the loss function $L$ of a neural network has the form
+$$L = f(g(h(\dots u(v(\dots)))))$$
+then we can visualize the derivative of $L$ wrt the $i$th function $v$ as
+$$\frac{\partial L}{\partial v} = \frac{\partial f}{\partial g}\frac{\partial g}{\partial h}\dots\frac{\partial u}{\partial v}.$$
+
+Am I able to view all neural networks as having a loss function of the form of $L$ given above? That is, am I correct in saying that any neural network is just a function composition and that I can write the partial derivative wrt any parameter as written above (I know I took the partial with respect to the function $v$).
+
+Thanks
+"
+['reinforcement-learning']," Title: How to deal with episode termination in Advantage Actor-Critic algorithm?Body: Advantage Actor-Critic algorithm may use the following expression to get 1-step estimate of the advantage:
+
+$ A(s_t,a_t) = r(s_t, a_t) + \gamma V(s_{t+1}) (1 - done_{t+1}) - V(s_t) $
+
+where $done_{t+1}=1$ if $s_{t+1}$ is a terminal state (end of the episode) and $0$ otherwise.
+
+Suppose our learning environment has a goal, collecting the goal gives reward $r=1$ and terminates the episode. Agent also receives $r=-0.1$ for every step, encouraging it to collect the goal faster. We're learning with $\gamma=0.99$ and we terminate the episode after $T$ timesteps if the goal wasn't collected.
+
+For the state before collecting a goal we have the following advantage, which seems very reasonable: $A(s_t,a_t) = 1 - V(s_t)$.
+
+For the timestep $T-1$, regardless of the state, we have: $ A(s_{T-1},a_{T-1}) = r(s_{T-1}, a_{T-1}) - V(s_{T-1}) \approx -0.1 -\frac{-0.1}{1-\gamma} = -0.1 + 10 = 9.9 $
+(this is true under the assumption that we're not yet able to collect the goal reliably often, therefore the value function converges to something close to $\frac{r_{avg}}{1-\gamma} \approx -10 $ ).
+
+Usually, $T$ is not a part of the state, so the value function has no way to anticipate the sudden change in reward-to-go. So, all of a sudden, we got a (relatively) big advantage for the arbitrary action that we took at the timestep $T-1$. Following the policy gradient rule, we will significantly increase the probability of an arbitrary action that we took at the end of the episode, even if we didn't achieve anything. This can quickly destroy the learning process.
+
+How do people deal with this problem in practice? My ideas:
+
+
+- Differentiate between actual episode terminations and ones caused by the time limit, e.g. for the latter we will not replace the next-step value estimate with $0$ (see the sketch after this list).
+- Somehow add $t$ to the state such that the value function can learn to anticipate the termination of the episode.
+
+
+As I noticed, the A2C implementation in OpenAI baselines does not seem to bother with any of that:
+
+
+"
+['structured-data']," Title: Does IBM Cloud Private for Data run on public clouds like AWS or Azure?Body: I just started using IBM Cloud Private for Data this week and I wasn't sure if I can use other public clouds to connect with my ICP for data account. So I spoke with an IBM representative and I wanted to share their responses...
+"
+"['optimization', 'search', 'hill-climbing', 'local-search']"," Title: Why does the hill climbing algorithm only produce a local maximum?Body: Apparently, the hill climbing algorithm just produces a local maximum, and not necessarily a global optimum. It's stuck on a local maximum. Why does hill climbing algorithm only produce a local maximum?
+"
+"['reinforcement-learning', 'deep-rl', 'papers', 'proofs', 'trust-region-policy-optimization']"," Title: How is inequality 31 derived from equality 30 in lemma 2 of the ""Trust Region Policy Optimization"" paper?Body: In the Trust Region Policy Optimization paper, in Lemma 2 of Appendix A (p. 11), I didn't quite understand how inequality (31) is derived from equality (30), which is:
+$$\bar{A}(s) = P(a \neq \tilde{a} | s) \mathbb{E}_{(a, \tilde{a}) \sim (\pi, \tilde{\pi})|a \neq \tilde{a}} \left[ A_{\pi}(s, \tilde{a}) - A_{\pi}(s,a) \right]$$
+$$|\bar{A}(s)| \le \alpha. 2 \max_{s,a} |A_{\pi}(s,a)|$$
+Would you mind letting me know how the inequality is derived?
+"
+"['machine-learning', 'training', 'supervised-learning', 'logistic-regression']"," Title: Are these steps to get a final logistic regression model correct?Body: I am new to machine learning. I know Logistic Regression (LR) is a supervised learning technique. Therefore, we need training data to train the model.
+I tried to understand the basic steps to get the final LR model.
+According to my understanding, here are the steps (a small sketch in code is given after the list).
+
+- We define the LR model, that is, $y = \text{sigmoid}(W x + B)$. Set $W$ and $B$ to zero or another value.
+
+- Given the training data (the inputs are $x_1, x_2, \dots, x_m$, and the outputs are $y_1, y_2, \dots, y_m$), we find $W$ and $B$ values by minimizing a cost function using gradient descent.
+
+- Then we use the found $W$ and $B$ values. We apply them again to a known sample $\hat{x}$ from the training data to get the prediction $\hat{y}$, that is, $\hat{y} = \text{sigmoid}(W \hat{x} + B)$.
+
+- We test the final model on unknown data.
+
+
+Are these steps correct?
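+
+To make the question concrete, here is a small NumPy sketch of what I mean by steps 1-3 (my own toy code, not taken from any reference):
+
+import numpy as np
+
+def sigmoid(z):
+    return 1.0 / (1.0 + np.exp(-z))
+
+def train_logistic_regression(X, y, lr=0.1, epochs=1000):
+    # X: (m, n) training inputs, y: (m,) labels in {0, 1}
+    m, n = X.shape
+    W = np.zeros(n)          # step 1: initialize W and B
+    B = 0.0
+    for _ in range(epochs):  # step 2: gradient descent on the cross-entropy cost
+        y_hat = sigmoid(X @ W + B)
+        W -= lr * (X.T @ (y_hat - y)) / m
+        B -= lr * np.mean(y_hat - y)
+    return W, B
+
+# step 3: prediction for a known sample x_hat
+# y_hat = sigmoid(x_hat @ W + B)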
+Please, if you could also give me the basic idea behind the supervised technique, I would appreciate it.
+"
+"['deep-learning', 'reinforcement-learning', 'forecasting']"," Title: For forecasting and trading control, given limited data, what AI approaches are well matched?Body: I'm working on stock price prediction and automatic or semi-automatic control of trading. The price trends of these stocks exhibit recurring patterns that may be exploited. My dataset is currently small, only in the thousands of points. There are no images or very high dimensional inputs at all. The system must select from among the usual trading actions.
+
+
+- Buy $n$ shares at the current bid price
+- Hold at the current position
+- Sell $n$ shares at what the current market will bear
+
+
+I'm not sure if reinforcement learning is the best choice, deep learning is the best choice, something else, or some combination of AI components.
+
+It doesn't seem to me to be a classification problem, with hard to discern features. It seems to be an action-space problem, where the current state is a main input. Because of the recurring patterns, the history that demonstrates the observable patterns is definitely pertinent.
+
+I've tried some code examples, most of which employ some form of artificial nets, but I've been wondering if I even need deep learning for this, having seen the question, When is deep-learning overkill? on this site.
+
+Since I have very limited training data, I'm not sure what AI design makes most sense to develop and test first.
+"
+"['python', 'object-recognition']"," Title: How to know whether the object is moving after it is being detected?Body: If my algorithm detects the type of object, how should I know if that object is moving or not? Suppose a person carrying an umbrella. How to know that the umbrella is moving?
+
+I am working on a project where I want to know whether that particular object belongs to the person entering inside the store.
+I was thinking about a bounding-box (bb) approach, where the object is assigned to a person if the person's bb overlaps with the object's bb. But the problem arises when there are multiple objects around a person.
+
+I would appreciate your help. Thanks.
+"
+"['genetic-algorithms', 'evolutionary-algorithms', 'mutation-operators']"," Title: Why do we apply the mutation operation after generating the offspring?Body: Why do we apply the mutation operation after generating the offspring, in genetic algorithms?
+"
+"['neural-networks', 'convolutional-neural-networks', 'image-recognition', 'object-recognition']"," Title: How a game playing agent could identify potential objects and proximity?Body: Most implementations I'm seeing for playing games like Atari (usually similar to DeepMind's work using DQN) have 4 graphical frames of input fed into 3 convolutional layers which are then fed into a single fully connected layer. The explanation of no pooling layer is due to positioning of features/objects being very critical to most games.
+
+My concern with this is that it may be weighing visual features based on position without regard for feature->feature proximity. By this, I mean to question if learning to avoid bullets in the bottom left of the screen is knowledge also used in the bottom right of the screen in a game like Space Invaders.
+
+So, question 1: Is my concern with only using 3 conv layers into a fc layer legitimate regarding spatially localized learning?
+
+Question 2: If my concern is legitimate, how might the network be modified to still treat feature position as significant, but to also take note of feature to feature proximity?
+
+(I'm still quite the novice if that isn't extremely obvious, so if my questions aren't completely ridiculous on their own, please try to keep responses relatively high level if you would.)
+"
+"['deep-learning', 'computer-vision']"," Title: Can I detect unique people in a video?Body: I am having a video feed with multiple faces in it. I need to detect each face and the gender as well and assign the gender against each person. I am not sure how to uniquely identify a face as Face1 and Face2 etc. I do not need to know their names, just need to track a person in the entire video. I thought of tracking methods but people are constantly moving and changing so a box location can be occupied by another person after some frames.
+
+I am interested in a way where I can assign an id to a face but I am not sure how to do it. I can use Facial Recognition Based embedding on each face and track that. But that seems to be an overkill for the job. Is there any other method available or Facial Recognition/Embedding is the only method to uniquely identify people in a video?
+"
+"['ai-design', 'path-planning']"," Title: block of worlds with position aware goal stack planningBody: Can someone suggest an AI approach to moving blocks, one at a time, assuming control of an robotic arm, to get from the initial state on the left to the final state on right, preferably using goal stack planning.
+
+actions
+
+
+- Pickup() — to pick up a block from table only
+- Putdown — to putdown a block on table only
+- Unstack — unstack a block from another block
+- Stack — stack a block on another clear block only
+
+
+property functions
+
+
+- On(x,y)
+- Above(x,y)
+- Table(x)
+- Clear(x)
+
+
+
+"
+"['game-ai', 'minimax', 'alpha-beta-pruning']"," Title: Is it feasible to use minimax to solve a board game with a large number of moves?Body: I have to build a KI for a made-up game similar to chess. As I did research for a proper solution, I came upon the MinMax algorithm, but I'm not sure it will work with the given game dynamics.
+
+The challenge is that we have far more permutations per turn than in chess because of these game rules.
+
+
+- Six pieces on the board, with different ranges.
+- On average, there are 8 possible moves for a piece per turn.
+- The player can choose as many pieces to move as he likes. For example none, all of them, or some number in between (whereas in chess you can only move one.)
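+
+(To put a rough number on the branching factor, as a back-of-the-envelope estimate of my own: if each of the six pieces has about 8 possible moves plus the option of staying put, and the pieces can be moved independently within the same turn, a single turn has on the order of $9^6 \approx 530{,}000$ combinations, compared to roughly 30-40 legal moves in a typical chess position.)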
+
+
+Actual questions:
+
+
+- Is it feasible to implement MinMax for the described game?
+- Can alpha-beta-pruning and a refined evaluation function help (despite the large number of possible moves)?
+- If no, is there a proper alternative?
+
+"
+"['comparison', 'reference-request', 'genetic-algorithms', 'applications', 'particle-swarm-optimization']"," Title: When should I use Genetic Algorithms as opposed to Particle Swarm Optimization?Body: For which problems are Genetic Algorithms more suitable than Particle Swarm Optimization, and vice-versa? Are there any guidelines?
+"
+"['philosophy', 'terminology', 'biology']"," Title: What is the role of biology in AI?Body: Biology is used in AI terminology. What are the reasons? What does biology have to do with AI? For instance, why is the genetic algorithm used in AI? Does it fully belong to biology?
+"
+"['reinforcement-learning', 'optimization']"," Title: Neural Network Optimizers in Reinforcement Learning non-well behaved environmentsBody: https://stackoverflow.com/questions/36162180/gradient-descent-vs-adagrad-vs-momentum-in-tensorflow
+
+Here, the nice gifs explain how different algorithms approach towards the root. Unfortunately, the environment in the gif is way too simple and real cases have much more complex environments. Also, in reinforcement learning, the solutions should change each moment in a difficult enough environment since things are dynamic.
+
+My question is which optimizer is best for reinforcement learning in such dynamically changing environment? Adadelta should not move beyond local minima so do we have to use SGD or Adadelta with an exploration heuristic? Please let me know in detail your thoughts.
+"
+['machine-learning']," Title: Which problems in AI are not machine learning?Body: Which problems in AI are not machine learning? Which problems involve both AI and machine learning?
+"
+['reinforcement-learning']," Title: How do we actually sample an action from a policy in policy gradient methods?Body: Recently I started to look at policy gradient methods and policies are represented as functions with features for larger problems with many states. Many articles and pseudocodes of algorithms mention sampling an action from the policy, but it is unclear to me how.
+
+Actions are something we do in the environment, like going left, right, etc... And functions take some feature values and parameters, make calculations, and 'spit out' some number. So how do we actually map that number to a certain action, and how do we know what action to take?
+"
+"['neural-networks', 'convolutional-neural-networks']"," Title: Can the same input for a plain neural network be used for a convolutional neural network?Body: Can the same input for a plain neural network be used for CNNs? Or does the input matrix need to be structured in a different way for CNNs compared to regular NNs?
+"
+"['neural-networks', 'topology', 'architecture', 'neurons', 'biology']"," Title: Are Modular Neural Networks more effective than large, monolithic networks at any tasks?Body: Modular/Multiple Neural networks (MNNs) revolve around training smaller, independent networks that can feed into each other or another higher network.
+
+In principle, the hierarchical organization could allow us to make sense of more complex problem spaces and reach a higher functionality, but it seems difficult to find examples of concrete research done in the past regarding this. I've found a few sources:
+
+https://en.wikipedia.org/wiki/Modular_neural_network
+
+https://www.teco.edu/~albrecht/neuro/html/node32.html
+
+https://vtechworks.lib.vt.edu/bitstream/handle/10919/27998/etd.pdf?sequence=1&isAllowed=y
+
+A few concrete questions I have:
+
+
+- Has there been any recent research into the use of MNNs?
+- Are there any tasks where MNNs have shown better performance than large single nets?
+- Could MNNs be used for multimodal classification, i.e. train each net on a fundamentally different type of data, (text vs image) and feed forward to a higher level intermediary that operates on all the outputs?
+- From a software engineering perspective, aren't these more fault tolerant and easily isolatable on a distributed system?
+- Has there been any work into dynamically adapting the topologies of subnetworks using a process like Neural Architecture Search?
+- Generally, are MNNs practical in any way?
+
+
+Apologies if these questions seem naive, I've just come into ML and more broadly CS from a biology/neuroscience background and am captivated by the potential interplay.
+
+I really appreciate you taking the time and lending your insight!
+"
+"['neural-networks', 'activation-functions', 'function-approximation', 'model-request', 'network-design']"," Title: Which neural network should I use to approximate a specific but unknown function?Body: We have convolutional neural networks and recurrent neural networks for analyzing, respectively, images and sequential data.
+Now, suppose I want to approximate the unknown function $f(x,y) = \sin(2\pi x)\sin(2\pi y)$, with domain $\Omega = [0,1]\times [0,1]$, that is, $x$ and $y$ can be between $0$ and $1$ (inclusive).
+How do I determine which neural network architecture is more appropriate to approximate this function? Which kind of activation functions would be better suited for this?
+Note that, generally, I don't know a priori which function the neural network has to learn. I am just asking for this specific $f(x, y)$, as it could be a solution for a differential equation. And $\Omega$ is the domain, i.e., I don't care about the output of the neural network outside $\Omega$.
+"
+"['machine-learning', 'social', 'profession']"," Title: Why studying machine learning is an opportunity in today's world?Body: I just wanted to gather some perspective on why this is a great opportunity to be able to study machine learning today?
+
+With all the online resources (online courses like Andrew Ng's, availability of datasets such as Kaggle, etc), learning machine learning has become possible.
+
+I understood that you can have high paid jobs; but you also need a lot of work dedication to be good at it, which makes your salary not so attractive! (in comparison to the number of hours you spend to keep up with this fast moving field)
+
+Why it is so desirable to take this opportunity and start learning machine learning today? (community, ability to start a business, etc.)
+"
+"['machine-learning', 'ai-design', 'sequence-modeling', 'encoder-decoder']"," Title: Why do we need both encoder and decoder in sequence to sequence prediction?Body: Why do we need both encoder and decoder in sequence to sequence prediction?
+
+We could just have a single RNN that, given input $x$, outputs some value $y(t)$ and hidden state $h(t)$. Next, given $h(t)$ and $y(t)$, the next output $y(t+1)$ and hidden state $h(t+1)$ should be produced, and so on. The architecture shall consists of only one network instead of two separate ones.
+"
+"['machine-learning', 'backpropagation', 'notation', 'learning-rate']"," Title: What is the use of the $\epsilon$ term in this back-propagation equation?Body: I am currently looking at different documents to understand back-propagation, mainly at this document. Now, on page 3, there is the $\epsilon$ symbol involved:
+$$
+\Delta w_{k j}=\varepsilon \overbrace{\left(t_{k}-a_{k}\right) a_{k}\left(1-a_{k}\right)}^{\delta_{k}} a_{j}
+$$
+While I understand the main part of the equation, I don't understand the $\epsilon$ factor. Searching for the meaning of the $\epsilon$ in math, it means (for example) an error value to be minimized, but why should I multiply with the error (it is denoted as E anyways).
+Shouldn't the $\epsilon$ be the learning rate in this equation? I think that would be what makes sense, because we want to calculate by how much we want to adjust the weight, and since we calculate the gradient, I think the only thing that's missing is the multiplication with the learning rate. The thing is, isn't the learning rate usually denoted with the $\alpha$?
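+
+(For concreteness, reading $\varepsilon$ as a plain scale factor with made-up numbers $t_k = 1$, $a_k = 0.8$, $a_j = 0.6$ and $\varepsilon = 0.5$ gives $\Delta w_{kj} = 0.5 \cdot (0.2 \cdot 0.8 \cdot 0.2) \cdot 0.6 = 0.0096$, which looks exactly like a gradient step scaled by a learning rate.)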
+"
+"['machine-learning', 'recurrent-neural-networks', 'active-learning']"," Title: How can active learning be used in the case of complex models that require a lot of data?Body: We have a series of data and we want to label the parts of each series. As we do not have any training data, we could try to use active learning as a solution, but the problem is that our classifier is something like RNN which needs a lot of data to be trained. Hence, we have a problem in converging fast to just label proportional small parts of unlabeled data.
+
+Is there any article about this problem (active learning and some complex classifiers, like RNN)?
+
+Is there any good solution to this problem or not? (as data is a series of actions)
+"
+"['logic', 'automated-theorem-proving', 'automated-reasoning', 'general-problem-solver']"," Title: When will we have computer programs that can compose mathematical proofs?Body: When will it be possible to give a computer program a bunch of assumptions and ask it if a certain statement is true or false, giving a proof or a counterexample respectively?
+"
+"['neural-networks', 'algorithm', 'training']"," Title: Change parameter in Karaboga's code of ABC algorithmBody: I'm working on a problem and need to use Karaboga's code of the ABC algorithm but I have some questions...
+
+Does this formula for calculating a parameter have to be changed:
+
+$$v_{ij} = x_{ij} + \phi_{ij}(x_{kj} - x_{ij})$$
+
+
+Should it be changed to the standard formula, or is this the form that Karaboga considers better for the algorithm?
+
+The second is the same question for this formula of calculating fitness:
+
+fFitness(ind) = 1. / (fObjV(ind) + 1)
+
+
+Link to ABC algorithm coded using C programming language
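+
+For reference, my understanding of the standard formulation in Karaboga's ABC papers (please correct me if the code differs) is
+
+$$v_{ij} = x_{ij} + \phi_{ij}\,(x_{ij} - x_{kj}), \quad \phi_{ij} \in [-1, 1],$$
+
+which is equivalent to the code's version up to the sign of the random factor $\phi_{ij}$, and
+
+$$\text{fit}_i = \begin{cases} \dfrac{1}{1 + f_i}, & f_i \ge 0 \\ 1 + |f_i|, & f_i < 0 \end{cases}$$
+
+so the code's fitness line matches the non-negative case.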
+"
+"['deep-learning', 'reinforcement-learning', 'dqn']"," Title: DQN exploration strategy for large grid-world environmentBody: My task involves a large grid-world type of environment (grid size may be $30\times30$, $50\times50$, $100\times100$, at the largest $200\times200$). Each element in this grid either contains a 0 or a 1, which are randomly initialized in each episode. My goal is to train an agent, which starts in a random position on the grid, and navigate to every cell with the value 1, and set it to 0. (Note that in general, the grid is mostly 0s, with sparse 1s).
+
+I am trying to train a DQN model with 5 actions to accomplish this task:
+
+
+- Move up
+- Move right
+- Move down
+- Move left
+- Clear (sets current element to 0)
+
+
+The ""state"" that I give the model is the current grid ($N\times N$ tensor). I provide the agent's current location through the concatenation of a flattened one-hot ($1\times(N^2)$) tensor to the output of my convolutional feature vector (before the FC layers).
+
+However, I find that the epsilon-greedy exploration policy does not lead to sufficient exploration. Also, early in the training (when the model is essentially choosing random actions anyway), the pseudo-random action combinations end up ""canceling out"", and my agent does not move far enough away from the starting location to discover that there is a cell with value 1 in a different quadrant of the grid, for example. I am getting a converging policy on a $5\times5$ grid w/ a non-convolutional MLP model, so I think that my implementation is sound.
+
+
+- How might I encourage exploration that will not always ""cancel out"", so that the agent explores beyond a very local region around its starting location?
+- Is this approach a good way to accomplish this task (assuming I want to use RL)?
+- I would think that attempting to work with a ""continuous"" action space (model outputs 2 values: vertical and horizontal indices of grid cells that contain 1s) would be more difficult to achieve convergence. Is it wise to always try to use discrete action spaces?
+
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks']"," Title: How do randomly initialized neural networks behave?Body: I am wondering how the output of randomly initialized MLPs and ConvNets behave with respect to their inputs. Can anyone point to some analysis or explanation of this?
+
+I am curious about this because in the Random Network Distillation work from OpenAI, they use the output of randomly initialized network to generate intrinsic reward for exploration. It seems that this assumes that similar states will produce similar outputs of the random network. Is this generally the case?
+
+Do small changes in input yield small changes in output, or is it more chaotic? Do they have other interesting properties?
+"
+"['natural-language-processing', 'software-evaluation', 'data-science']"," Title: How can I evaluate the performance of a system that generates text?Body: I am preparing to perform research comparing the performance of two different systems that probabilistically generate the next word of an input sentence.
+
+For example, given the word 'the', a system might output 'car', or any other word. Given the input 'the round yellow', a system might output 'sun', or it might output something that doesn't make sense.
+
+My question is, how can I quantitatively evaluate the performance of the two different systems performing this task? Of course if I tested each system manually I could qualitatively determine how often each system responded in a way that makes sense, and compare how often each system responds correctly, but I'd really like a meaningful quantitative method of evaluation that I could preferably automate.
+
+Precision and recall don't seem like they would work here, seeing as for each given input there are many potentially acceptable outputs. Any suggestions?
+"
+"['bayesian-networks', 'probabilistic-graphical-models', 'probabilistic-machine-learning']"," Title: What does a hybrid Bayesian network contain?Body: The field of artificial intelligence is so vast. There are many methodologies for handling continuous data, and I have just read about the hybrid Bayesian network. I just want to know that what a hybrid Bayesian network contains?
+"
+"['neural-networks', 'python']"," Title: Using an 'operation ID' as a neural network inputBody: Sorry if this is basic or covered elsewhere, I am just starting here and I wasn't able to find an answer, but I might have not been searching for the right thing. So:
+
+I am training a neural network to predict current draw in a system. There are a number of obvious numerical inputs, like temperature, counting rate, voltage, etc.
+
+The most predictive thing, however, is what operation the system is doing. So like, if it's doing a 'calibration' then the current profile is much different than if it's in 'standby'. I know that I can just use a different network for each operation, but in this case I have a couple hundred different macros defined and I don't want to have 200+ neural networks retrain all the time.
+
+I also know that I can have a digital value as an input, but my understanding is that it has to be either 0/1. Also, the relationship to operation is not at all correlated - so operation 100 is not necessarily more current draw than 99 or less than 101.
+
+So, is there a way to have an operation ID or something factor in, but not have it be in the linear combination mathematically? So, basically, tell the system to do a different training based on ID or something? I'll be using python and scikit-learn.
+
+Thanks!
+"
+"['python', 'keras']"," Title: Label arrangement for custom Keras image generatorBody: I am trying to generate 90 and 270 degrees rotated versions of my sample images on the fly during training. I found an example and modifying it. But I am confused about what should be the order? For instance in one batch I have 32 images and my image generator should return total 64 images. Let's say upper case letters are 90 degree and lower case letters are 270 degree rotated images. Should the order be AaBbCc or ABCabc? I apply the same to the validation set. Here is the related code fragment:
+
+Edit: Code fragment added.
+
+ def _get_batches_of_transformed_samples(self, index_array):
+ # create list to hold the images
+ batch_x = []
+ # create list to hold the labels
+ batch_y = []
+ # rotation angles
+ target_angles = [0, 90, 180, 270]
+ angle_categories = list(range(0, len(target_angles)))
+ self.classes = target_angles
+ self.class_indices = angle_categories
+ # generate rotated images and corresponding labels
+ for i, j in enumerate(index_array):
+ is_color = int(self.color_mode == 'rgb')
+ image = cv2.imread(self.filenames[j], is_color)
+ if is_color:
+ image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
+
+ for rotation_angle, cat_angle in zip(target_angles, angle_categories):
+ rotated_im = rotate(image, rotation_angle, self.target_size[:2])
+ if self.preprocess_func: rotated_im = self.preprocess_func(rotated_im)
+ # add dimension to account for the channels if the image is greyscale
+ if rotated_im.ndim == 2: rotated_im = np.expand_dims(rotated_im, axis=2)
+ batch_x.append(rotated_im)
+ batch_y.append(cat_angle)
+
+ # convert lists to numpy arrays
+ batch_x=np.asarray(batch_x)
+ batch_y=np.asarray(batch_y)
+
+ batch_y = to_categorical(batch_y, len(target_angles))
+ return batch_x, batch_y
+
+
+I actually rotate them as 0, 90, 180 and 270 degrees. As seen in the code, for each batch I return all the rotated versions of all the images in the batch. But is this correct or should I return first 0 degree rotated versions, second 90 degree rotated versions so on?
+
+Edit2: I checked my previous work, in which I use the built-in Keras ImageDataGenerator. generator.classes returns [zeros(100,1); ones(100,1)]. In that study I only have two classes. I understand that Keras indexes the images as [class1, class2, ...]. I think I have to do the same.
+"
+"['machine-learning', 'natural-language-processing', 'applications']"," Title: Can Machine Learning be applied to decipher the script of lost ancient languages?Body: Can Machine Learning be applied to decipher the script of lost ancient languages (namely, languages that were being used many years ago, but currently are not used in human societies and have been forgotten, e.g. Avestan language)?
+
+If yes, is there already any successful experiment to decipher the script of unknown ancient languages using Machine Learning?
+"
+"['neural-networks', 'deep-learning', 'game-ai', 'combinatorial-games']"," Title: How to encode Azul game state as NN inputBody: Question to NN practicioners. I'd like to encode Azul board game state as an input to NN, let's focus on 2-player variant for a while.
+
+
+
+There are 5 round ""Factories"" on the table (7 on picture, ignore it). Each one can keep 4 tiles of 5 colors. There is also center of the board which can keep up to 15 tiles. What are the advantages and disadvantages of different encodings? Here are my ideas:
+
+
+- Every Factory has five integer counters, one for each tile color. Sample encoding of single Factory: [3,1,0,0,0]
+- Every Factory has 20 binary flags, four for each color. Single flag encodes presence of tile of given color. Sample encoding of single factory: [1,1,1,0, 1,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0]
+- Every Factory has 20 binary flags, four for each color, but only one flag can be set for given color and position of raised flag encodes number of tiles of given color. Sample encoding of single factory: [0,0,1,0, 1,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0]
+- Every Factory has 4 enum fields, with 6 possible values each (5 colors + empty). Sample encoding of single factory: [red, red, red, blue] or [Empty,Empty,Empty,Empty]
+
+
+(note: encoding schema would also cover center of the board, up to 15 tiles, as said earlier. Of course player's board would also be encoded, but I don't want to ask too broad question)
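+
+For illustration, here is my own sketch of encodings 1 and 2 for a single Factory, with colors indexed 0-4 and a Factory given as a list of tile color indices (e.g. [0, 0, 0, 1]):
+
+import numpy as np
+
+def encode_counts(factory, n_colors=5):
+    # encoding 1: per-color integer counters, e.g. [3, 1, 0, 0, 0]
+    counts = np.zeros(n_colors, dtype=np.float32)
+    for color in factory:
+        counts[color] += 1
+    return counts
+
+def encode_binary_flags(factory, n_colors=5, max_tiles=4):
+    # encoding 2: four presence flags per color, e.g. [1,1,1,0, 1,0,0,0, ...]
+    flags = np.zeros(n_colors * max_tiles, dtype=np.float32)
+    counts = encode_counts(factory, n_colors)
+    for color in range(n_colors):
+        flags[color * max_tiles : color * max_tiles + int(counts[color])] = 1.0
+    return flags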
+
+I'd like to train a NN to play Azul, which means it needs to properly process the number of tiles taken in a given round (up to 15 in theory, 2-4 in practice), because it would also need to indicate where to put all those tiles on the player's board.
+
+Based on your experience, which encoding is most promising? Or maybe there is some better method I didn't think of? Or it is not possible to tell or it doesn't matter?
+"
+"['game-ai', 'search', 'minimax', 'tic-tac-toe', 'adversarial-search']"," Title: How do we find the length (depth) of the game tic-tac-toe in adversarial search?Body: When we perform the tic-tac-toe game using adversarial search, I know how to make a tree. Is there a way to find the depth of the tree, and which level is the last level?
+"
+"['reinforcement-learning', 'q-learning', 'math', 'proofs', 'sutton-barto']"," Title: How do we prove the n-step return error reduction property?Body: In section 7.1 (about the n-step bootstrapping) of the book Reinforcement Learning: An Introduction (2nd edition), by Andrew Barto and Richard S. Sutton, the authors write about what they call the ""n-step return error reduction property"":
+
+$$\max_s \Big| \mathbb{E}_\pi[G_{t:t+n} \mid S_t = s] - v_\pi(s) \Big| \le \gamma^n \max_s \Big| V_{t+n-1}(s) - v_\pi(s) \Big| \tag{7.3}$$
+
+But they don't prove it. I was thinking it should not be too hard but how can we show this? I was thinking of using the definition of n-step return (eq. 7.1 on previous page):
+
+$$G_{t:t+n} = R_{t+1} + \gamma R_{t+2} + \dots + \gamma^{n-1} R_{t+n} + \gamma^{n} V_{t+n-1}(S_{t+n})$$
+
+This already contains the $V_{t+n-1}$ term. However, the definition of the n-step return uses $V_{t+n-1}(S_{t+n})$, while the right-hand side of inequality (7.3) that we want to prove uses $V_{t+n-1}(s)$ with a lowercase $s$, so I am confused about which state $s$ is being used. After that, I guess we probably pull out a $\gamma^{n}$ term or something, but how should we proceed from there?
+
+This is the newest Sutton Barto book (book page 144, equation 7.3):
+https://drive.google.com/file/d/1opPSz5AZ_kVa1uWOdOiveNiBFiEOHjkG/view
+"
+['natural-language-processing']," Title: In speech recognition, what kind of signal is used?Body: Speech is a major primary mechanism of communication between humans. With respect to artificial intelligence, which signal is used to identify the sequence of words in speech?
+"
+"['neural-networks', 'deep-learning']"," Title: Can a LFSR be approximated by a Neural Network?Body: I was wondering whether a LFSR could be approximated by a NN (output or current state). We know that a LFSR is called linear in some sort of mathematical sense, but is that true? Considering it follows Galois field mathematics. So can a Neural Network approximate a LFSR?
+
+Answers with a mathematical proof or actual experience are preferred.
+"
+"['machine-learning', 'game-ai', 'python']"," Title: Implementing AI/ML for the card game ""Cheat""Body: Background info
+
+In Python, I've implemented a rudimentary engine to play ""Cheat"", supporting both bots and a human or only bots. When only bots are playing, the game is simulated.
+
+When placing cards, input is represented by an array of integers corresponding to the indices of the cards. When bots play, they are presented with all valid combinations of cards to place (currently, the choice made is random). For example, if the bot has two cards, their options are:
+
+[[0], [1], [0, 1]]
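+
+(For illustration only, this is roughly how such option lists can be generated; it is my own sketch, not the engine's actual code.)
+
+from itertools import combinations
+
+def placement_options(n_cards):
+    # every non-empty subset of card indices the bot could place
+    return [list(combo)
+            for size in range(1, n_cards + 1)
+            for combo in combinations(range(n_cards), size)]
+
+# placement_options(2) -> [[0], [1], [0, 1]]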
+
+After a player places cards, the other players get a chance to call cheat (true to accuse, false not to).
+
+When a player depletes their cards, they are appended to the winners list. The goal of the game is to have the lowest index possible in the winners list.
+
+Summary of game data and end goal
+
+In summary, here is the data which I believe would be useful for the bots to play:
+
+
+- the current number of cards that have been placed
+- the current type (e.g. Ace) to play
+- the type of and suit of each card of the bot's hand
+- the number of cards that were just placed by a player
+- the possible inputs to play during a bot's turn
+- the options for calling cheat (true, false)
+
+
+with the goal of ending up with the lowest index in the winners list.
+
+Help wanted
+
+I'm very new to machine learning, so I apologize for such a high-level question, but how might I go about using a Python module to implement a system for bots to learn to play intelligently as they play? Are there any modules which you think would be ideal for this situation?
+
+Thank you!
+"
+"['neural-networks', 'activation-functions', 'perceptron', 'xor-problem']"," Title: Why can't the XOR linear inseparability problem be solved with one perceptron like this?Body: Consider a perceptron where $w_0=1$ and $w_1=1$:
+
+Now, suppose that we use the following activation function
+\begin{align}
+f(x)=
+\begin{cases}
+1, \text{ if }x =1\\
+0, \text{ otherwise}
+\end{cases}
+\end{align}
+The output is then summarised as:
+\begin{array}{|c|c|c|c|}
+\hline
+x_0 & x_1 & w_0x_0 + w_1x_1 & f( \cdot )\\ \hline
+0 & 0 & 0 & 0 \\ \hline
+0 & 1 & 1 & 1 \\ \hline
+1 & 0 & 1 & 1 \\ \hline
+1 & 1 & 2 & 0 \\ \hline
+\end{array}
+Is there something wrong with the way I've defined the activation function?
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks']"," Title: How to get a binary output from a Siamese Neural NetworkBody: I'm trying to train a Siamese network to check if two images are similar. My implementation is based on this. I find the Euclidian distance of the feature vectors(the final flattened layer of my CNN) of my two images and train the model using the contrastive loss function.
+
+My question is, how do I get a binary output from the Siamese network for testing (1 if the two images are similar, 0 otherwise). Is it just by thresholding the Euclidean distance to check how similar the images are? If so, how do I go about selecting the threshold? If I wanted to measure the training and validation accuracies, the threshold would have to be increased as the network learns better. Is there a way to learn this threshold for a given dataset?
+
+I would appreciate any leads, thank you.
+"
+"['reinforcement-learning', 'comparison', 'rewards', 'reward-clipping']"," Title: What is the main difference between additive rewards and discounted rewards?Body: What is the difference between additive and discounted rewards?
+"
+"['neural-networks', 'neurons', 'feedforward-neural-networks', 'hyper-parameters']"," Title: Maximum number of neurons in a layer given number of neurons in previous layerBody: Consider an extremely complicated feed-forward neural network training example but with no need of computational efficiency or limiting of processing time.
+
+What is the maximum number of hidden neurons h that a hidden layer should possess to detect all unique features/ correlations between input data from the previous layer which has n nodes?
+
+In other words if we wanted to create a neural network with a large number of neurons in a hidden layer, what is the maximum neuron count possible that helps the network train (give n neurons are in the previous layer)?
+"
+"['machine-learning', 'deep-learning', 'ai-design', 'linear-algebra']"," Title: What does it mean to do multi-dimensional processing with tensors in tensor cores?Body: In some tweets about NeurIPS 2018, this video from NVIDIA appeared. At around 0.37, she says:
+
+
+ If you think about the current computations in our deep learning systems, they are all based on Linear Algebra. Can we come up with better paradigms to do multi-dimensional processing? can we do truly tensor-algebraic techniques in our tensor cores?
+
+
+I was wondering what she is talking about. I'm not an expert so I'd like to understand better this specific point.
+"
+"['machine-learning', 'deep-learning', 'mapping-space']"," Title: How to use AI to depth map video?Body: To be honest, I had no idea where to put this question, but it's sure that it's related to AI. I want to build an application which uses camera, and by the movement it can calculate the
+-camera's position compared to the objects
+-the objects creator and edge points by the movement.
+
+What it means that if the camera is in a static position, it's just a picture. A set of coloured pixels. If we move the camera, we calculate the time, the gyroscope's values, but most importantly, we can have a comparison of two images taken by the same objects. This way:
+-we can detect the edges
+-from the edges, we can detect which is closer than the others
+
+Today's phone cameras are accurate enough to create ~60 crystal clear images per second, and that should be enough data to accurately create high-res models just from moving the camera according to some instructions (that's why I'm surprised this doesn't already exist as a phone app). Here comes the problem. I think the idea is worth a try, but I'm just a JavaScript developer. The browser can have access to the camera, and with TensorFlow I can use machine learning to detect edges, but to be honest, I have no idea where to start or how to continue step by step. Can you please provide some guidelines on how best to realize this idea?
+"
+"['game-ai', 'applications', 'genetic-programming', 'robots']"," Title: Is it possible to generate ""Karel the robot"" programs with genetic programming?Body: Karel the robot is an education software comparable to turtle graphics to teach programming for beginners. It's a virtual stack-based interpreter to run a domain-specific language for moving a robot in a maze. In its vanilla version, the user authors the script manually. That means, he writes down a computer program like
+
+- move forward
+- if reached obstacle == true then stop
+- move left.
+
+This program is then executed in the virtual machine.
+In contrast, genetic programming has the aim to produce computer code without human intervention. So-called permutations are tested if they are fulfilling the constraints and, after a while, the source code is generated. In most publications, the concept is explained on a machine level. That means assembly instructions are generated with the aim to replace normal computer code.
+In "Karel the robot" a high-level language for controlling a robot is presented, which has a stack, but has a higher abstraction. The advantage is, that the state space is smaller.
+My question is: is it possible to generate "Karel the robot" programs with genetic programming?
+"
+"['comparison', 'recurrent-neural-networks', 'long-short-term-memory', 'brain', 'neuroscience']"," Title: Is there a mechanism in the human brain that works analog to LSTMs?Body: Is there a mechanism in the human brain that works analog to LSTMs? Is there a biological/neuroscientific interpretation of LSTMs and recurrent neural networks? How do long-term and short-term memories work in the brain, on a neuron level?
+I would also appreciate links to more in-depth explanations in the literature.
+"
+['image-recognition']," Title: Geometry shape identification and vertex/side label associationBody: I would like to be able to draw a shape outline (pen and paper), e.g. a triangle, square, or circle, then label the vertices and sides, and have ML identify the shape and the symbol associated with each vertex/side.
+
+For example a triangle and I label it with the adjacent, opposite and hypotenuse. Or draw parallel and perpendicular lines and label angles and such.
+
+I would prefer to make one myself rather than use a pre-built one. However, if you know of a simple pre-built one, then I would love to pick it apart.
+
+Can you please share some guidance on how to solve the above problem, namely shape identification / vertex and side labelling.
+
+Any language
+"
+"['search', 'applications', 'simulated-annealing']"," Title: What are examples of daily life applications that use simulated annealing?Body: In AIMA, 3rd Edition on Page 125, Simulated Annealing is described as:
+
+
+ Hill-climbing algorithm that never makes “downhill” moves toward states with lower value (or higher cost) is guaranteed to be incomplete, because it can get stuck on a local maximum. In contrast, a purely random walk—that is, moving to a successor chosen uniformly at random from the set of successors—is complete but extremely inefficient. Therefore, it seems reasonable to try to combine hill climbing with a random walk in some way that yields both efficiency and completeness. Simulated annealing is such an algorithm. In metallurgy, annealing is the process used to temper or harden metals and glass by heating them to a high temperature and then gradually cooling them, thus allowing the material to reach a lowenergy crystalline state. To explain simulated annealing, we switch our point of view from hill climbing to gradient descent (i.e., minimizing cost) and imagine the task of getting a
+ ping-pong ball into the deepest crevice in a bumpy surface. If we just let the ball roll, it will come to rest at a local minimum. If we shake the surface, we can bounce the ball out of the local minimum. The trick is to shake just hard enough to bounce the ball out of local minima but not hard enough to dislodge it from the global minimum. The simulated-annealing solution is to start by shaking hard (i.e., at a high temperature) and then gradually reduce the intensity of the shaking (i.e., lower the temperature)
+
+
+I know the excerpt includes its own example, but I just want more examples of where simulated annealing is used in daily life.
+"
+"['machine-learning', 'game-ai', 'monte-carlo-tree-search']"," Title: How does Hearthstone AI deal with random eventsBody: I want to learn a lot about the AI of CCG, such as Hearthstone. And now I have known one of the main algorithms that used in this kind of games, MCTS. It analyses the most promising moves, and expands the search tree based on random sampling of the search space. But there are too many random events in this game that can cause different results to one battle. For example, a card can randomly deal X damage to a hero or other follower, and X is a random number from 0 to 30. The number of X is important for the next decision, but there will be a low accuracy by only using MCTS.
+
+So what does the AI do to deal with these random events?
+"
+['reinforcement-learning']," Title: Reward discounting in reinforcement learning for a Pong gameBody: I am trying to understand how to train a neural network to win a Pong game using reinforcement learning, by following the blog post
+Spinning up a Pong AI with deep reinforcement learning.
+
+The environment is provided by Gym AI. It gives the AI a reward of 1 if the opponent misses the ball, and a reward of -1 if it misses the ball.
+
+I am confused about how reward discounting works in this context. This is the function that the blog post used:
+
+import numpy as np
+
+def discount_rewards(r, gamma):
+ """""" take 1D float array of rewards and compute discounted reward """"""
+ r = np.array(r)
+ discounted_r = np.zeros_like(r)
+ running_add = 0
+ # we go from last reward to first one so we don't have to do exponentiations
+ for t in reversed(range(0, r.size)):
+ if r[t] != 0: running_add = 0 # if the game ended (in Pong), reset the reward sum
+ running_add = running_add * gamma + r[t] # the point here is to use Horner's method to compute those rewards efficiently
+ discounted_r[t] = running_add
+ discounted_r -= np.mean(discounted_r) #normalizing the result
+ discounted_r /= np.std(discounted_r) #idem
+ return discounted_r
+
+
+Basically, the list of rewards is mostly filled with zeros, because usually nothing happens. When something happens, e.g. the reward is 1, this is not only due to the action taken in that step. Therefore, we need to smoothen the list of rewards so that some of that reward also belongs to previous actions. So far so good. However, it seems to me that if the opponent misses the ball and the reward is 1, then this will be smeared such that it will emphasize the actions taken right before the opponent missed the ball. This seems wrong to me, the actions taken by the AI right before the opponent missed the ball are irrelevant. They don't affect the ball in any way.
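+
+To make my confusion concrete, here is a tiny worked example (my own numbers) of what the loop computes before the mean/std normalization, with gamma = 0.99:
+
+r = [0, 0, 1, 0, 0, -1]
+# walking backwards, with running_add reset at each non-zero reward, gives
+# [0.9801, 0.99, 1.0, -0.9801, -0.99, -1.0]
+# i.e. each +1/-1 is smeared backwards over the zero-reward steps preceding it.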
+
+I only think reward discounting makes sense when you have lost a point, then the actions just preceding the loss are surely very important and should be emphasized. However, the function takes into account both wins and losses.
+
+How should reward discounting be understood in the context of this Pong game?
+"
+"['algorithm', 'search', 'definitions', 'a-star']"," Title: What kind of search method is A*?Body: What kind of search method is A*? Explain to me with an example.
+"
+"['genetic-algorithms', 'search', 'optimization', 'simulated-annealing']"," Title: When should I use simulated annealing as opposed to a genetic algorithm?Body: What kind of problems is simulated annealing better suited for compared to genetic algorithms?
+
+From my experience, genetic algorithms seem to perform better than simulated annealing for most problems.
+"
+"['deep-learning', 'image-recognition', 'keras', 'object-detection', 'multiclass-classification']"," Title: How can I prevent the CNN from classifying a new input into one of the existing labels (it was trained with) when the input has a new different label?Body: I'm trying to perform image classification with a CNN. In my case, the inputs are the covers of 9 books, so there are 9 labels. I am using TensorFlow's Keras.
+If I pass a new input (that has a label different than one of the 9 labels the CNN was trained with), it will be classified as one of the 9 books, even though it's not a book (but it's e.g. a wall, sofa, house, etc.). I want to avoid this. I want the model to first classify whether there is a book in the image and then classify the book in 9 classes. How could I achieve this?
+"
+"['reinforcement-learning', 'reference-request', 'deep-rl', 'function-approximation', 'action-spaces']"," Title: Are there other approaches to deal with variable action spaces?Body: This question is about Reinforcement Learning and variable action spaces for every/some states.
+Variable action space
+Let's say you have an MDP, where the number of actions varies between states (for example like in Figure 1 or Figure 2). We can express a variable action space formally as $$\forall s \in S: \exists s' \in S: A(s) \neq A(s') \wedge s \neq s'$$
+That is, for every state, there exists some other state which does not have the same action set.
+In figures 1 and 2, there's a relatively small amount of actions per state. Instead imagine states $s \in S$ with $m_s$ number of actions, where $1 \leq m_s \leq n$ and $n$ is a really large integer.
+
+Environment
+To get a better grasp of the question, here's an environment example. Take Figure 1 and let it explode into a really large directed acyclic graph with a source node, huge action space and a target node. The goal is to traverse a path, starting at any start node, such that we'll maximize the reward which we'll only receive at the target node. At every state, we can call a function $M : s \rightarrow A'$ that takes a state as input and returns a valid number of actions.
+Approaches
+
+- A naive approach to this problem (discussed here and here) is to define the action set equally for every state, return a negative reward whenever the performed action $a \notin A(s)$ and move the agent into the same state, thus letting the agent "learn" what actions are valid in each state. This approach has two obvious drawbacks:
+
+- Learning $A$ takes time, especially when the Q-values are not updated until either termination or some statement is fulfilled (like in experience replay)
+
+- We know $A$, why learn it?
+
+
+
+- Another approach (first answer here, also very much alike proposals from papers such as Deep Reinforcement Learning in Large Discrete Action Spaces and Discrete Sequential Prediction of continuous action for Deep RL) is to instead predict some scalar in continuous space and, by some method, map it into a valid action. The papers are discussing how to deal with large discrete action spaces and the proposed models seem to be a somewhat solution for this problem as well.
+
+- Another approach that I came across was to, assuming the number of different action sets $n$ is quite small, have functions $f_{\theta_1}$, $f_{\theta_2}$, ..., $f_{\theta_n}$, where $f_{\theta_m}$ returns the action for a state with $m$ valid actions. In other words, the performed action in a state $s$ with 3 valid actions will be predicted by $\underset{a}{\text{argmax}} \ f_{\theta_3}(s, a)$.
+
+
+None of the approaches (1, 2 or 3) are found in papers, just pure speculations. I've searched a lot, but I cannot find papers directly regarding this matter.
+Does anyone know any paper regarding this subject? Are there other approaches to deal with variable action spaces?
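+
+(For completeness, a toy sketch of what approach 1 would look like inside an environment step function, with my own names; the valid_actions function plays the role of $M(s)$.)
+
+def step(state, action, valid_actions, transition, invalid_penalty=-1.0):
+    if action not in valid_actions(state):
+        # invalid action: penalize and leave the state unchanged
+        return state, invalid_penalty, False
+    next_state, reward, done = transition(state, action)
+    return next_state, reward, done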
+"
+"['natural-language-processing', 'applications']"," Title: Grouping products and naming groupsBody: I'm working on a home tool that will help create a shopping list from a list of recipes chosen for a coming week.
+
+This boils down to:
+
+
+- Extracting ingredients and their quantities from recipes.
+- Grouping similar ingredients together.
+- Summing up quantities for similar ingredients.
+- Naming groups of similar products in a shopping list.
+
+
+The tasks seem non-trivial for a few reasons.
+
+
+- Similar ingredients are described differently, depending on the recipe book/portal, e.g.:
+
+
+
+ - 5 lemons
+ - 5 lemons (to be squeezed)
+ - 5 fresh lemons
+ - 5 big yellow lemons
+
+
+- Recipes lists alternatives for ingredients (e.g., ""3 lemons or 5 limes""), leaving decision up to a user.
+- Recipes involve some information about product-preprocessing. For instance, one has to buy lemons instead of lemon juice when the recipe says:
+
+
+
+ - 100ml lemon juice
+ - 100ml freshly squeezed lemon juice
+
+
+- My language has a complex inflection. For instance, there can be multiple plural forms of a noun and the form of an adjective must be agreed with a form of noun. Adapting NLP algorithms designed for English language might be not straightforward and require some lemmatizing/stemming but not for single words, but whole phrases.
+- Naming products group is hard. Once fresh lemons and big yellow lemons are group together and their quantities summed up, one need to decide how to name this group in a shopping list, e.g.: ""10 lemons"" or ""10 fresh lemons"".
+
+
+Is there any research paper that would cover those challenges?
+
+Especially applied in the same domain?
+"
+"['deep-learning', 'reference-request', 'algorithm-request', 'audio-processing', 'signal-processing']"," Title: Is it possible to clean up an audio recording of a lecture using some type of AI system?Body: Is it possible to clean up an audio recording of a lecture from a smartphone (i.e. remove the background noise) using some type of AI system?
+"
+['reinforcement-learning']," Title: Reinforcement Learning with Adaptive Action MagnitudeBody: How to select an action in a state if the action does not necesarily cause the environment to change state?
+
+Given 10 states ($S_0$ to $S_9$) and in each state $i$ there are two actions defined $(1,-1)$. $1$ increases a parameter of the environment and $-1$ decreases it.
+
+For example, if the parameter is speed and it is currently 1 rad/s, corresponding to ${S_i}$, an action can be either increase or decrease this and therefore transition to the next state, ideally $S_{i-1}$ or $S_{i+1}$.
+
+It is unclear how to formulate a reinforcement learning problem in this case. The problem is that the magnitude of the parameter change caused by an action is always the same.
+
+The range of the parameter is (3, 25) and the step size is 1. The problem is that the response of the environment is not the same in every state: in some states a parameter change of magnitude 1 results in a state transition, while in other states this magnitude proves too small to provoke a transition.
+
+For example, if the environment is in state $S_1$ and action 1 is applied, how can the magnitude of the parameter change be adapted in a way that ensures the environment will transition to state $S_2$? How can the step in the parameter change be made adaptive? Actually, my environment is uncertain, which is why I don't know exactly whether this action will take me to the next state or not.
+
+For your information, I am using the off-policy Q-learning algorithm. Suppose state 9 is the goal state and my Q-table is $10 \times 2$.
+"
+"['game-ai', 'monte-carlo-tree-search', 'minimax']"," Title: Any interesting ways to combine Monte Carlo tree search with the minimax algorithm?Body: I've been working on a game-playing engine for about half a year now, and it uses the well known algorithms. These include minimax with alpha-beta pruning, iterative deepening, transposition tables, etc.
+
+I'm now looking for a way to include Monte Carlo tree search, which is something I've wanted to do for a long time. I was thinking of just making a new engine from scratch, but if possible I'd like to somehow import MC tree search into the engine I've already built.
+
+Are there any interesting strategies to import MC tree search into a standard game-playing AI?
+"
+"['math', 'reference-request', 'generative-model', 'conditional-random-field']"," Title: Is there a mathematical example for Conditional Random Fields?Body: I am learning about probabilistic graphical models and I was wondering if there is an example explaining the math behind conditional random fields. Looking solely on the formula, I have no idea what we actually do. I found a lot of examples for the hidden Markov model. There is a part speech-tag task where we have to find the tags for the sentence ""flies like a flower"". On these slides (slide 8) Ambiguity Resolution: Statistical Method-Prof. Ahmed Rafea, an HMM is used to find the correct tags. How would I transform this model into a CRF and how would I apply the math?
+"
+"['comparison', 'swarm-intelligence', 'combinatorics', 'ant-colony-optimization']"," Title: What is the difference between continuous domains and discrete combinatorial optimization?Body: According to this website: http://yarpiz.com/67/ypea104-acor (in the website it is mentioned that it is a project aiming to be a resource of academic and professional scientific source codes and tutorials.):
+
+
+ ""Originally, the Ant Algorithms are used to solve discrete and
+ combinatorial optimization problems. Various extensions of Ant Colony
+ Optimization (ACO) are proposed to deal with optimization problems,
+ defined in continuous domains. One of the most useful algorithms of
+ this type, is ACOR, the Ant Colony Optimization for Continuous
+ Domains, proposed by Socha and Dorigo, in 2008 (here).""
+
+
+What is the difference between continuous domains and discrete combinatorial optimization? I appreciate if you could also mention some examples for each type.
+"
+"['optimization', 'swarm-intelligence']"," Title: Classical Internet routing vs. Swarm routing (such as Ant routing)?Body: Is it possible to mention the drawbacks/advantages of Swarm routing (such as Ant routing etc) in comparison with classical routing algorithms in communication networks in a general view?
+
+In other words, what will we gain if we replace a classical routing algorithm with a swarm routing based algorithm?
+
+Can we compare these two types of routing algorithms in a general view, to list their drawbacks/advantages?
+
+The main purpose of this question is to define applications of each of those routing approaches. Which one is more decentralized? And which one has more efficient performance?
+
+Here is my personal opinion (I am not sure about it) :
+
+Classical internet routing is more centralized than swarm routing (such as ACO-based routing), which does not use any routing table or router, in order to avoid moving towards centralization, since those routing tables and routers can be manipulated (a point of failure). On the other hand, classical internet routing may be faster than swarm-based routing. Briefly, classical routing may be more centralized but faster, while swarm-based routing is more decentralized but may be slower. Am I wrong?
+
+Please note that when I say ""decentralized"", I mean a network like ""ad hoc networks"" that do not rely on routers or access points (as a point of failure) to avoid moving towards centralization. In case of using routers or access point, some kind of centralization is inherent. With this definition of decentralization, it seems Swarm based routing such as Ant routing would be more decentralized. However, I am not sure about it and that's my main question.
+"
+"['reference-request', 'research', 'sentiment-analysis', 'affective-computing', 'emotion-recognition']"," Title: Is there any research on the identification of a person's feelings using features such as facial expressions or body temperature?Body: People could be sad, happy, depressive, angry, nervous, calm, relaxed, bored, etc. I don't know how to express all of these feelings and emotions in English terms (I'm not an English native speaker), which would enable me to search for research papers about this topic (e.g. in IEEE Xplore, Scopus, or ScienceDirect).
+So, is there any research on the identification/recognition of a person's feelings/emotions using facial expressions, heartbeat, body temperature, sweating, or nervous behavior (using one or all of them)?
+"
+"['neural-networks', 'machine-learning', 'deep-learning', 'dropout', 'regularization']"," Title: Should I remove the units of a neural network or increase dropout?Body: When adding dropout to a neural network, we are randomly removing a fraction of the connections (setting those weights to zero for that specific weight update iteration). If the dropout probability is $p$, then we are effectively training with a neural network of size $(1−p)N$, where $N$ is the total number of units in the neural network.
+
+Using this logic, there is no limit how big I can make a network, as long as I proportionately increase dropout, I can always effectively train with the same sized network, and thereby just increasing the number of ""independent"" models working together, making a larger ensemble model. Thereby improving generalization of the model.
+
+For example, if a network with 2 units already achieves good results in the training set (but not in unseen data -i.e validation or test sets-), also a network with 4 units + dropout 0.5 (ensemble of 2 models), and also a network with 8 units + dropout 0.75 (ensemble of 4 models)... and also a network with 1000 units with a dropout of 0.998 (ensemble of 500 models)!
+
+In practice, it is recommended to keep dropout at $0.5$, which advises against the approach mentioned above. So there seem to be reasons for this.
+
+What speaks against blowing up a model together with an adjusted dropout parameter?
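+
+To make the scaling argument concrete, here is a minimal Keras sketch of the construction I have in mind (the 10-dimensional input, the base width of 2 and the single output are made-up illustration values, not from any particular source):
+
+from tensorflow import keras
+
+def widened_model(width, base_width=2, input_dim=10):
+    # keep the 'effective' number of active units constant:
+    # (1 - rate) * width == base_width  =>  rate = 1 - base_width / width
+    rate = 1.0 - base_width / width
+    return keras.Sequential([
+        keras.layers.Dense(width, activation='relu', input_shape=(input_dim,)),
+        keras.layers.Dropout(rate),
+        keras.layers.Dense(1),
+    ])
+
+small = widened_model(4)      # dropout 0.5
+huge = widened_model(1000)    # dropout 0.998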
+"
+"['reinforcement-learning', 'off-policy-methods']"," Title: Why is the n-step tree backup algorithm an off-policy algorithm?Body: In reinforcement learning book from Sutton & Barto (2018 edition), specifically in section 7.5 of the book, they present an n-step off-policy algorithm that doesn't require importance sampling called n-step tree backup algorithm.
+
+In other algorithms, the return in the update consisted of rewards along the way and the estimated value(s) of the node(s) at the bottom, but, in tree backup update, a return consists of things mentioned before plus the estimated values of the actions that weren't picked during these n steps, all weighted by the probability of taking the action from the previous step.
+
+I have a few questions about things in this algorithm that are unclear to me.
+
+
+- Why is this algorithm considered an off-policy algorithm? As far as I could notice, only a single target policy is mentioned and there is no talk about behaviour policy generating actions to take.
+- In control we want our target policy to be deterministic, greedy policy, so how do we exactly generate actions to take in this case since behaviour policy isn't used? If we generate actions from the greedy policy, we won't explore, so we won't learn the optimal policy. What am I missing here?
+- If I understood something wrong and we are actually using behaviour policy, I don't understand how would the update work in the case where our target policy is greedy. The return consists of estimates taken from actions that weren't picked, but, because our policy is greedy, the probabilities used in calculating those estimates would be 0, so the total estimate of those actions would be 0 as well. The only non 0 probability in our target policy is the one of the greedy action (which is the probability of 1), so the entire return would fall down to n-step SARSA return. So, basically, how are we allowed to do this update in this case, why is this return allowed to replace the one with importance sampling?
+
+"
+['neural-networks']," Title: Why do layered neural nets struggle with continous data?Body: In this article here, the writer claims that a new type of neural net is required to deal with data that is both continuous, and also sparsely sampled.
+
+It was my understanding that this was the entire purpose of techniques that use neural nets, to make assumptions about a system with a non-continuous data set.
+
+So why do we need to switch to a non-layered design to deal with these data sets better?
+"
+"['tensorflow', 'keras', 'self-play']"," Title: How can I oppose two AI agents with keras / tensoflow?Body: I am trying to use tensorflow / keras to play a text based game. The game opposes two players that play by answering questions by choosing an answer among the proposed ones.
+
+Game resembles this:
+
+
+- Questions asked from player 1, choose value {0, 1, 2}
+- Player 1 chooses answer 1
+- Questions asked from player 2, choose value {0, 1}
+- Player 2 chooses answer 0
+( and so on )
+
+
+The issue is that I do not have any data to use for training the agents, and it is not possible to evaluate each action of the agent individually.
+
+My idea is to get 2 agents to play against each other and evaluate them depending on who won / lost ( the games are very short with about 20 to 30 decisions made for each player ).
+
+The issue I have is that I do not know where to start.
+
+I normalized my input, but I do not know how to get the 2 agents to compete, as I do not have any training data as shown in the tutorials and the agents have to complete a full game in order to evaluate their performance.
+"
+"['philosophy', 'chinese-room-argument']"," Title: Why is the Chinese Room argument such a big deal?Body: I've been re-reading the Wikipedia article on the Chinese Room argument and I'm... actually quite unimpressed by it. It seems to me to be largely a semantic issue involving the conflation of various meanings of the word ""understand"". Of course, since people have been arguing about it for 25 years, I doubt very much that I'm right. However... the argument can be thought of as consisting of several layers, each of which I can explain away (to myself, at least).
+
+
+- There is the assumption that being able to understand (interpret a sentence in) a language is a prerequisite to speaking in it.
+Let's say that I don't speak a word of Chinese, but I have access to a big dictionary and a grammar table. I could work out what each sentence means, answer it, and then translate that answer back into Chinese, all without speaking Chinese myself. Therefore, being able to interpret (parse) a language is not a prerequisite to speaking it.
+(Of course, by the theory of extended cognition I can interpret the language, but we can all agree that the books and lookup tables are simply a source of information and not an algorithm; I'm still the one using them.)
+Nevertheless, this task can be removed by a dumb natural language parser and a dictionary, converting Chinese to the set of concepts and relationships encoded in it and vice versa. There is no understanding involved at this stage.
+- There is the assumption that being able to understand (identify and maintain a train of thought about concepts in) a language is a prerequisite to speaking in it.
+We've already optimised away the language, to a set of concepts and relationships between concepts. Now all we need is another lookup table: a sort of verbose dictionary that maps concepts to other concepts and relationships between them. For example, one entry for ""computer"" might be ""performs calculations"" and another might be ""allows people to play games"". An entry for ""person"" might be ""has opinions"", and another might be ""has possessions"". Then an algorithm (yes, I'm introducing one now!) would complete a simple optimisation problem to find a relevant set of concepts and relationships between them, and turn ""I like playing computer games"" into ""What is your favourite game to play on a computer?"" or, if it had some entries on ""computer games"", ""Which console do you own?"".
+The only ""understanding"" here, apart from the dumb optimisation algorithm, is the knowledge bank. This could conceivably be parsed from Wikipedia, but for a good result it would probably be at least somewhat hand-crafted. Following this would fall down, because this process wouldn't be able to talk about itself.
+- There is the assumption that being able to understand (""know"" how information in affects one's self) a language is a prerequisite to speaking in it.
+A set of ""opinions"" and such associated with the concept ""self"" could be implemented into the knowledge bank. All meta-cognition could be emulated by ensuring that the knowledge bank had information about cognition in it. However, this program would still just be a mapping from arbitrary inputs to outputs; even if the knowledge bank was mutable (so that it could retain the current topics from sentence to sentence and learn new information) it would still, for example, not react when a sentence is repeated to it verbatim 49 times.
+- There is the assumption that being able to have effective meta-cognition is a prerequisite to speaking in a language.
+Except... there's not. The program described would probably pass the Turing Test. It certainly fulfils the criteria of speaking Chinese. And yet it clearly doesn't think; it's a glorified search engine. It'd probably be able to solve maths problems, would be ignorant of algebra unless somebody taught it to it (in which case, with sufficient teaching, it'd be able to carry out algebraic formulae; Haskell's type system can do this without ever touching a numerical primitive!), and would probably be Turing-complete, and yet wouldn't think. And that's OK.
+
+
+So why is the Chinese Room argument such a big deal? What have I misinterpreted? This program understands the Chinese language as much as a Python interpreter understands Python, but there is no conscious being to ""understand"". I don't see the philosophical problem with that.
+"
+"['reinforcement-learning', 'control-problem']"," Title: How does reinforcement learning handle measured disturbances?Body: I recently encountered an interesting problem and was wondering how RL would solve it. The objective of the problem is to maximize the coffee quality, given by box X. The coffee quality objective function is defined by the company.
+
+To maximize the quality of the coffee, we can perform 2 actions:
+
+
+- change stirring speed of the coffee machine
+- change the temperature of the coffee machine
+
+
+Now to the tricky part. The coffee bean characteristic coming into the coffee machine is random. We can measure its characteristics before sending them to the coffee machine, but we cannot change them.
+
+I formulated this as a control problem, such that $X$ is a function of my previous states, $X(t - k)$, the control input, $U$, and the measured disturbance, $D$. Given a constant $D$, the problem is trivial to solve, because the disturbance is a constant part of the environment. However, during times when $D$ changes rapidly, the policy is no longer optimal.
+
+How do I inject the information of the measured disturbance into my RL agent?
+
+
+"
+"['neural-networks', 'convolutional-neural-networks', 'image-recognition', 'object-recognition']"," Title: Recognition of small objectsBody: I'm currently implementing an Android app for street sign recognition. My solution works quite well for the GTSRB dataset, since it provides a labeled test set of centered images. However, it doesn't scale up to more realistic scenarios like for images in the GTSDB, where the signs only take up some pixels. Is it still recommended to downsample the image to 224x224?
+"
+"['neural-networks', 'activation-functions', 'perceptron', 'xor-problem']"," Title: If we use a perceptron with a non-monotonic activation function, can it solve the XOR problem?Body: I found several papers about how to build a perceptron able to solve the XOR problem. The papers describe a solution where the heaviside step function is replaced by a non-monotonic activation function. Here are the papers:
+
+I also found this related post on Stackoverflow.
+Can we really solve the XOR problem with a simple perceptron?
+The use of a non-monotonic activation function is not really common, so I don't really know. Papers about this idea are scarce. Generally, the main solution is to build a multilayer perceptron.
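+
+For what it's worth, here is a tiny numpy sketch of the idea as I understand it: a single unit whose step activation is replaced by a non-monotonic bump (a Gaussian of the pre-activation). The weights w = (1, 1), bias b = -1 and the 0.5 threshold are my own hand-picked illustration values, not taken from the papers above.
+
+import numpy as np
+
+X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
+w, b = np.array([1.0, 1.0]), -1.0
+
+z = X @ w + b                  # pre-activations: -1, 0, 0, 1
+g = np.exp(-z ** 2)            # non-monotonic bump activation, peaked at z = 0
+print((g > 0.5).astype(int))   # [0 1 1 0] -> XOR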
+"
+"['neural-networks', 'machine-learning', 'tensorflow']"," Title: How TensorFlow will know if the prediction is true or false?Body: I'm completely new at ML, but really interested. To be honest, read many articles about it, but still don't understand the workings of it.
+
+I just started to understand this example: https://storage.googleapis.com/tfjs-examples/mnist/dist/index.html
+
+My thinking is that TF has some resources, some examples of what numbers look like, and tries to match them with the ones in the test. I saw that sometimes the test changes a right prediction to a wrong one, but it makes better and better predictions. But how? I think that the program doesn't know the right predictions (and, this way, it won't know the wrong ones either). During training, how does it make better predictions? Test by test, based on what does it change its predictions? What happens in a new test?
+"
+"['convolutional-neural-networks', 'image-recognition', 'computer-vision', 'handwritten-characters']"," Title: How to approach this handwritten digit recognition?Body: I have multiple pictures that look exactly like the one below this text. I'm trying to train CNN
to read the digits for me. Problem is isolating the digits. They could be written in any shape, way, and position that person who is writing them wanted to. I thought of maybe training another CNN
to recognize the position/location of the digits, but I'm not sure how to approach the problem. But, I need to get rid of that string and underline. Any clue would be a great one. Btw. I would love to get the 28x28 format just like the one in MNIST
.
+
+Thanks up front.
+
+
+"
+"['neural-networks', 'machine-learning', 'neat']"," Title: How does the NEAT speciation algorithm work?Body: I've been reading up on how NEAT (Neuro Evolution of Augmenting Topologies) works and I've got the main idea of it, but one thing that's been bothering me is how you split the different networks into species. I've gone through the algorithm but it doesn't make a lot of sense to me and the paper I read doesn't explain it very well either so if someone could give an explanation of what each component is and what it's doing then that would be great thanks.
+
+The two equations are:
+
+$\delta = \frac{c_{1}E}{N} + \frac{c_{2}D}{N} + c_{3} \cdot \overline{W}$
+
+$f_{i}^{'} = \frac{f_i}{\sum_{j=1}^{n}sh(\delta(i,j))}$
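+
+For reference, this is how I currently read those two formulas in code (my own sketch; the coefficient values c1 = c2 = 1.0, c3 = 0.4 and the threshold 3.0 are just defaults I have seen used, not something I am sure about):
+
+import numpy as np
+
+def compatibility(E, D, W_bar, N, c1=1.0, c2=1.0, c3=0.4):
+    # delta = c1*E/N + c2*D/N + c3*W_bar
+    return c1 * E / N + c2 * D / N + c3 * W_bar
+
+def shared_fitness(f, delta, threshold=3.0):
+    # f[i] is the raw fitness of genome i, delta[i, j] the compatibility distance
+    # sh(delta) = 1 if delta < threshold else 0, so the denominator counts the species members
+    sh = (delta < threshold).astype(float)
+    return f / sh.sum(axis=1)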
+
+By the way, I can understand the Greek symbols, so you don't need to explain those to me.
+
+The original paper
+"
+"['neural-networks', 'machine-learning', 'natural-language-processing', 'long-short-term-memory']"," Title: Is there bidirection sequence-to-sequence neural machine translation?Body: I have heard about bidirectional RNN LSTM units (endcoders-decoders), but my question is - is there bidirectional neural machine translation, that uses A->B weights for the translation in the opposite direction B->A? If not, then what are the obstacles to such system?
+"
+"['deep-learning', 'papers', 'pooling']"," Title: Why do we have to dot product in the Low-rank Bilinear Pooling?Body: I was reading this paper Hadamard Product for Low-rank Bilinear Pooling. I understand what they are trying to say, but I don't know why we have to convert the element-wise multiplication into a scalar (using the dot product)
+$$
+\mathbb{1}^{T}\left(\mathbf{U}_{i}^{T} \mathbf{x} \circ \mathbf{V}_{i}^{T} \mathbf{y}\right)+b_{i} \tag{2}\label{2}
+$$
+Why do we have to multiply the resulting vector by the all-ones vector $\mathbb{1}$? We would still make use of the multiplicative interaction between elements if we did not multiply by that all-ones vector.
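+
+In numpy terms, I read equation (2) as something like the following (all dimensions are made up by me for illustration):
+
+import numpy as np
+
+d = 4                                    # joint embedding dimension (made up)
+x, y = np.random.randn(3), np.random.randn(5)
+U_i, V_i = np.random.randn(3, d), np.random.randn(5, d)
+b_i = 0.1
+
+hadamard = (U_i.T @ x) * (V_i.T @ y)     # element-wise product, still a length-d vector
+scalar = np.ones(d) @ hadamard + b_i     # 1^T(...) + b_i collapses it to a single number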
+"
+"['convolutional-neural-networks', 'training', 'dropout', 'generalization', 'pooling']"," Title: Is pooling a kind of dropout?Body: If I got well the idea of dropout, it allows improving the sparsity of the information that comes from one layer to another by setting some weights to zero.
+On the other hand, pooling, let's say max-pooling, takes the maximum value in a neighborhood, reducing as well to zero, the influence of values apart from this maximum.
+Without considering the shape transformation due to the pooling layer, can we say that pooling is a kind of a dropout step?
+Would the addition of a dropout (or DropConnect) layer, after a pooling layer, make sense in a CNN? And does it help the training process and generalization property?
+"
+"['reinforcement-learning', 'math', 'markov-decision-process', 'importance-sampling']"," Title: In the context of importance sampling ratio, how is the equation $\mathbb{E}\left[\rho_{t: T-1} G_{t} | S_{t}=s\right]=v_{\pi}(s)$ derived?Body: When reading the book by Sutton and Barto, I came across the importance sampling ratio.
+
+The first equation, I believe, describes the probability a particular sequence is obtained given the current state, and the policy.
+
+\begin{align}
+&\operatorname{Pr}\left\{A_{t}, S_{t+1}, A_{t+1}, \ldots, S_{T} | S_{t}, A_{t: T-1} \sim \pi\right\} \\
+&=\pi\left(A_{t} | S_{t}\right) p\left(S_{t+1} | S_{t}, A_{t}\right) \pi\left(A_{t+1} | S_{t+1}\right) \cdots p\left(S_{T} | S_{T-1}, A_{T-1}\right) \\
+&=\prod_{k=t}^{T-1} \pi\left(A_{k} | S_{k}\right) p\left(S_{k+1} | S_{k}, A_{k}\right)
+\end{align}
+
+The next part takes the ratio between the probabilities of the two trajectories:
+
+$$\rho_{t: T-1} \doteq \frac{\prod_{k=t}^{T-1} \pi\left(A_{k} | S_{k}\right) p\left(S_{k+1} | S_{k}, A_{k}\right)}{\prod_{k=t}^{T-1} b\left(A_{k} | S_{k}\right) p\left(S_{k+1} | S_{k}, A_{k}\right)}=\prod_{k=t}^{T-1} \frac{\pi\left(A_{k} | S_{k}\right)}{b\left(A_{k} | S_{k}\right)}$$
+
+I don't understand how this ratio could lead to this:
+
+$$\mathbb{E}\left[\rho_{t: T-1} G_{t} | S_{t}=s\right]=v_{\pi}(s)$$
+
+The $G_t$ rewards are obtained through the $b$ policy, not the $\pi$ policy.
+
+I think it has something to do with Bayes' rule, but I could not derive it. Could someone guide me through the derivation?
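+
+For reference, this is how I picture the ratio $\rho_{t:T-1}$ itself being computed along one trajectory (my own sketch; pi and b are hypothetical lookup tables of action probabilities):
+
+import numpy as np
+
+def importance_ratio(states, actions, pi, b):
+    # pi[s][a] and b[s][a] are the target and behaviour action probabilities
+    return np.prod([pi[s][a] / b[s][a] for s, a in zip(states, actions)])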
+"
+"['neural-networks', 'machine-learning', 'image-generation']"," Title: Creating videos of AI generated photographsBody: I came across this article today: These faces show how far AI image generation has advanced in just four years. I would never in a million years have guessed that the people on the right (in the first image in the article) were fakes!
+
+Will it be possible to create videos of such AI generated images? What, then, will become of actors and actresses?
+"
+"['backpropagation', 'activation-functions', 'objective-functions']"," Title: What is the derivative function used in backpropagration?Body: I'm learning AI, but this confuses me. The derivative function used in backpropagation is the derivative of activation function or the derivative of loss function?
+
+These terms are confusing: the derivative of the activation function, the partial derivative with respect to the loss function?
+
+I'm still not getting it correct.
+"
+"['machine-learning', 'ai-design', 'prediction']"," Title: How should I predict which characters are going to die in a certain movie?Body: How would one go about predicting which characters (actors) are going to die e.g. in the next Avengers movie?
+
+To elaborate a bit, given all leaked scripts (fake or not), interviews of different actors and directors, contracts with different actors and directors. How would/should one go about predicting if an actor is going to die in the upcoming movie or not.
+
+NOTE: I am not sure, but I think a similar effort has already been made for Game of Thrones.
+"
+"['reinforcement-learning', 'game-ai', 'rewards', 'reward-design', 'connect-four']"," Title: How should I define the reward function in the case of Connect Four?Body: I'm using RL to train a Network on the game Connect4. It learns quickly that 4 connected pieces is good. It gets a reward of 1 for this. A zero is rewarded for all other moves.
+It takes quite a time until the AI tries to stop the opponent from winning.
+Is there a way this could be further reinforced?
+I thought about giving a negative reward for the move played before the winning move. Thinking about this, I came to the conclusion that this is a bad idea. There will always be a loser (except for ties), therefore there will always be a last move from the losing player. This move doesn't have to be a bad one. Mistakes could have been made much earlier.
+Is there a way to improve this awareness of opponents? Or does it just have to train more?
+I'm not perfectly sure if the rewards will propagate back in a way that encourages this behavior with my setup.
+"
+['neural-networks']," Title: Translating a single word Neural NetworksBody: I've been given an assignment to create a neural network that will suggest a Croatian word for a word given in any other European language (out of those found here). The words are limited to drinks you can find on a bar menu.
+
+I've looked at many NN examples, both simple and complex, but I'm having trouble with understanding how to normalize the input.
+
+For example, words ""beer"", ""birra"" and ""cervexa"" should all translate to ""pivo"". If I include those 3 in the training set, and after the network has finished training I input the word ""bier"", the output should be ""pivo"" again.
+
+I'm not looking for a working solution to this problem, I just need a nudge in the right direction regarding normalization.
+"
+"['reinforcement-learning', 'gradient-descent']"," Title: SARSA won't work for linear function approximator for MountainCar-v0 in OpenAI environment. What are the possible causes?Body: I am learning Reinforcement Learning from the lectures from David Silver. I finished lecture 6 and went on to try SARSA with linear function approximator for MountainCar-v0 environment from OpenAI.
+
+A brief explanation of the MountainCar-v0 environment. The state is described by two features: position and velocity. There are three actions for each state: accelerate forwards, don't accelerate, accelerate backwards. The goal of the agent is to learn how to climb a mountain. The engine of the car is not strong enough to drive directly to the top, so speed has to be built up by oscillating back and forth in the valley.
+
+I have used a linear function approximator, written by myself. I am attaching my code here for reference :-
+
+import numpy as np
+from random import randint, uniform
+
+class LinearFunctionApproximator:
+    ''' A function approximator must have the following methods:-
+        constructor with num_states and num_actions
+        get_q_value
+        get_action
+        fit '''
+
+    def __init__(self, num_states, num_actions):
+        self.weights = np.zeros((num_states, num_actions))
+        self.num_states = num_states
+        self.num_actions = num_actions
+
+    def get_q_value(self, state, action):
+        return np.dot(np.transpose(self.weights), np.asarray(state))[action]
+
+    def get_action(self, state, eps):
+        # epsilon-greedy selection over the linear Q estimates
+        return randint(0, self.num_actions - 1) if uniform(0, 1) < eps else np.argmax(np.dot(np.transpose(self.weights), np.asarray(state)))
+
+    def fit(self, transitions, eps, gamma, learning_rate):
+        ''' Every transition in transitions should be of type (state, action, reward, next_state) '''
+        gradient = np.zeros_like(self.weights)
+        for (state, action, reward, next_state) in transitions:
+            next_action = self.get_action(next_state, eps)
+            g_target = reward + gamma * self.get_q_value(next_state, next_action)
+            g_predicted = self.get_q_value(state, action)
+            gradient[:, action] += learning_rate * (g_target - g_predicted) * np.asarray(state)
+
+        gradient /= len(transitions)
+        self.weights += gradient
+
+
+I have tested the gradient descent, and it works as expected. After every epoch, the mean squared error between current estimate of Q and TD-target reduces as expected.
+
+Here is my code for SARSA :-
+
+def SARSA(env, function_approximator, num_episodes=1000, eps=0.1, gamma=0.95, learning_rate=0.1, logging=False):
+
+    for episode in range(num_episodes):
+        transitions = []
+
+        state = env.reset()
+        done = False
+
+        while not done:
+            action = function_approximator.get_action(state, eps)
+            next_state, reward, done, info = env.step(action)
+            transitions.append((state, action, reward, next_state))
+            state = next_state
+
+        for i in range(10):
+            function_approximator.fit(transitions[::-1], eps, gamma, learning_rate)
+
+        if logging:
+            print('Episode', episode, ':', end=' ')
+            run_episode(env, function_approximator, eps, render=False, logging=True)
+
+
+Basically, for every episode, I fit the linear function approximator to the current TD-target. I have also tried running fit just once per episode, but that also does not yield any winning episode. Fitting 10 times ensures that I am actually making some progress towards the TD-target, and also not overfitting.
+
+However, after running over 5000 episodes, I do not get a single episode where the reward is greater than -200. Eventually, the algorithm chooses one action, and somehow the Q-values of the other actions are always lower than that of this action.
+
+# Now, let's see how the trained model does
+env = gym.make('MountainCar-v0')
+num_states = 2
+num_actions = env.action_space.n
+
+function_approximator = LinearFunctionApproximator(num_states, num_actions)
+
+num_episodes = 2000
+eps = 0
+SARSA(env, function_approximator, num_episodes=num_episodes, eps=eps, logging=True)
+
+
+I want to be clearer about this. Say action 2 is the action which always gets selected after, say, 1000 episodes. Action 0 and action 1 somehow, for all states, have their Q-values reduced to a level which is never reached by action 2. So, for a particular state, action 0 and action 1 may have Q-values of -69 and -69.2. The Q-value of action 2 will never drop below -65, even after running the 5000 episodes.
+"
+"['neural-networks', 'machine-translation', 'google-translate']"," Title: Why can't we use Google Translate for every translation task?Body: Once a book is published in a language, why can't the publishers use Google Translate AI or some similar software to immediately render the book in other languages? Likewise for Wikipedia: I'm not sure I understand why we need editors for each language. Can't the English Wikipedia be automatically translated into other languages?
+"
+"['reinforcement-learning', 'classification', 'comparison', 'supervised-learning', 'imitation-learning']"," Title: What is the difference between imitation learning and classification done by experts?Body: In short, imitation learning means learning from the experts. Suppose I have a dataset with labels based on the actions of experts. I use a simple binary classifier algorithm to assess whether it is good expert action or bad expert action.
+How is this binary classification different from imitation learning?
+Imitation learning is associated with reinforcement learning, but, in this case, it looks more like a basic classification problem to me.
+What is the difference between imitation learning and classification done by experts?
+I am getting confused because imitation learning relates to reinforcement learning while classification relates to supervised learning.
+"
+"['deep-learning', 'unsupervised-learning']"," Title: a question about Zeiler's paper “Deconvolutional Networks”Body: In ""4.1 Learning multi-layer deconvolutional filters
"" section, the last paragraph says that ""Since our model is generative, we can sample from it. In Fig. 3 we show samples from the two different models from each level projected down into pixel space. The samples were drawn using the relative firing frequencies of each feature from the training set.""
+
+
+
+I don't understand how the pictures in Fig. 3 are generated. Since the filters have been learned and the feature maps in every layer can be inferred, then, for example, in terms of the fruit samples, is ""Layer 1"" the first layer's feature map? I feel that is not true... it seems like the samples are low-level... The paper says ""... from each level projected into pixel space""; these words are short and confuse me.
+
+Could somebody explain that for me? Thank you very much!
+"
+"['convolutional-neural-networks', 'classification']"," Title: Relationship between input range and channel means, standard deviations for CNNsBody: So, I'm using a pretrained PNASNet-5-Large model to do some image classification.
+In the file, it says that the input range is in [0,1] (I'm assuming pixel values of input images). The images I have are already in this range.
+The channel means and standard deviation for RGB channels are stated as [0.5, 0.5, 0.5], [0.5, 0.5, 0.5] respectively.
+Now when I use the torchvision.transforms.Normalize to normalize the images using the stated means and standard deviations, the pixel values get to the range [-1,1].
+The code I wrote for normalization:
+transforms.Normalize([0.5, 0.5, 0.5],[0.5, 0.5, 0.5])
+
+I believe I'm missing something fundamental. Should I normalize the images or should I not? Thanks!
+"
+"['classification', 'image-recognition', 'pattern-recognition', 'clustering']"," Title: How do we know the classification boundaries of the data?Body: Consider an image classification problem. Conceptually, we then have some high dimensional space where all the images can be represented as points, and having large enough labeled data set we can build a classifier. But how do we know that our data in this space has some structure? Like this one in two-dimensional case:
+
+
+
+If we have a data set with images of, say, cats and dogs, why these two classes are not just uniformly mixed with each other but have some distribution or shape in appropriate space? Why it cannot be like this:
+
+
+"
+"['optimization', 'agi', 'memory']"," Title: How can a specific connectivity pattern be stored in an optimally compact representation?Body: I am interested in optimizing the memory capacity of an AGI. Given a specific complex input an AI can create a simplified model. This is a problem that can be solved using sparse coding [1]. However, this solves only the problem of encoding and not the maintenance of online representations—in cognitive terms: the state of mind.
+
+A default cognitive model of short-term memory can be separated in three different stages:
+
+Encoding → Maintenance → Retrieval
+
+
+One solution is to use specialized hardware [2][3], but I am interested in software approaches to this problem and I would thus like to emphasize that it is the digital representation, which I am most interested in.
+
+With the exception of qubits, the smallest possible representations are binary digits. However, additional architecture is required to represent phase spaces (i.e. floating-point precision memory), and higher-order representations may include arrays and dictionaries. (Optimizing these is trivial... or simply to be postponed until needed, according to Knuth.)
+
+
+- How can a specific connectivity pattern be stored in an optimally
+compact representation?
+- Is there an implementation with concrete example code?
+- What is the state-of-the-art?*
+
+
+* I will not specify ""real-time"" here, but the context is humanoid AGI.
+
+
+
+
+[1] Papyan, V., Romano, Y., & Elad, M. (2017). Convolutional Neural Networks Analyzed via Convolutional Sparse Coding. Journal on Machine Learning Research, 18(83): 1–52. arXiv:1607.08194
+http://jmlr.org/papers/volume18/16-505/16-505.pdf
+
+
+[2] LeGallo et al. (2018). Mixed-precision in-memory computing. https://www.nature.com/articles/s41928-018-0054-8
+
+
+
+[3] IBM. (2018). IBM Scientists Demonstrate Mixed-Precision In-Memory Computing for the First Time; Hybrid Design for AI Hardware. https://www.ibm.com/blogs/research/2018/04/ibm-scientists-demonstrate-mixed-precision-in-memory-computing-for-the-first-time-hybrid-design-for-ai-hardware/
+"
+['classification']," Title: Python Network for Simple Image ClassificationBody: I'm wondering if there exists a network for simple image classification. What I mean by this is if I have two image datasets, one of horses and one of zebras, I want to train off the horses and classify an image as either a horse or not a horse, so if I test it on an image of a horse, it says it is a horse, but if I use a zebra, it says it is not a horse. Does any library/project for this exist?
+"
+"['reinforcement-learning', 'math', 'policy-gradients', 'rewards', 'reward-to-go']"," Title: Why does the ""reward to go"" trick in policy gradient methods work?Body: In the policy gradient method, there's a trick to reduce the variance of policy gradient. We use causality, and remove part of the sum over rewards so that only actions happened after the reward are taken into account (See here http://rail.eecs.berkeley.edu/deeprlcourse/static/slides/lec-5.pdf, slide 18).
+Why does it work? I understand the intuitive explanation, but what's the rigorous proof of it? Can you point me to some papers?
+"
+"['neural-networks', 'deep-learning', 'convolutional-neural-networks', 'computer-vision']"," Title: Image Segmentation Prediction with cropping 256x256 grids is very slowBody: I have only a limited dataset (<25) with large-sized images (>1500x2000) and their pixelwise labels. The aim is to find unusual patterns in this industry dataset and highlight them.
+
+To generate training images I crop 256x256 grids out of every image and do some data augmentation and use these images to train my U-Net.
+
+For my prediction, I again split my image with numpy into 256x256 px grids, predict every grid separately, and stitch them back together into an image. But this takes some time, like >10 minutes, although the accuracy is quite good. How can I make my prediction faster?
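+
+For context, this is roughly what my grid-wise prediction loop looks like (a simplified sketch, assuming a Keras model called model, a single-channel output, and an image whose sides are multiples of 256):
+
+import numpy as np
+
+def predict_full_image(model, image, patch=256):
+    h, w, _ = image.shape
+    out = np.zeros((h, w, 1), dtype=np.float32)
+    for y in range(0, h, patch):
+        for x in range(0, w, patch):
+            tile = image[y:y + patch, x:x + patch]
+            # one predict() call per 256x256 grid, which seems to be the slow part
+            out[y:y + patch, x:x + patch] = model.predict(tile[None, ...])[0]
+    return out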
+
+Is it faster to create this with a TensorFlow pipeline? When I try to predict the full image by giving the input shape (None, None, 3), I get ""ConcatOp : Dimensions of inputs should match"" after some time.
+"
+"['neural-networks', 'generative-adversarial-networks']"," Title: Can we implement GAN (Generative adversarial neural networks) for classication problem like Fraud detecion?Body: Problem: Fraud detection
+Task : classifying transactions as either fraud/ Non-fraud using GAN
+"
+"['comparison', 'terminology', 'definitions', 'semi-supervised-learning', 'labels']"," Title: What is the definition of ""soft label"" and ""hard label""?Body: In semi-supervised learning, there are hard labels and soft labels. Could someone tell me the meaning and definition of the two things?
+"
+"['datasets', 'object-recognition']"," Title: When training an object detection network for one class, should I include empty images in the dataset?Body: I fine tuned MobileNetSSD for object detection using a dataset with just one class (~4000 images). All the training images include at least one bounding box related to that class (no empty images). By following the example with the VOC dataset, the labelmap includes two classes, the background and my custom class. However, as I mentioned, there are no annotations related to the background and I am not sure if there should be any.
+
+Now my fine tuned network performs very well when objects belonging to my class are present, however there are some false detections with very high confidence when the class is not present. Can this be related to the fact that I don't have empty images in my training set?
+"
+['audio-processing']," Title: Can I filter barking sounds on the television?Body: My dog goes bonkers every time the sound of a barking dog is heard on a television program. I never noticed this before but literally every movie or show with an outdoors setting eventually includes the sound of a barking dog.
+Is it possible to develop a real-time filter that blocks or masks these sounds?
+"
+"['game-ai', 'game-theory', 'alpha-beta-pruning']"," Title: Can Alpha–Beta be used on symmetric zero sum games?Body: This question was asked in an AI exam. How would you answer such question?
+"
+"['neural-networks', 'convolutional-neural-networks', 'computer-vision']"," Title: Appropriateness of 3D Convolutional Neural Network for segmentation of medical image dataBody: I have a couple different segmentation tasks that I would like to perform on medical imaging data using CNN's. I'm currently trying to wrap my head around how well a 3D network might work, using a U-Net architecture, but I have some hesitation.
+
+My specific questions are as follows:
+
+Question 1 - Say we have medical imaging taken at different slices (different heights/depths) of the patient's body. In order for a 3D CNN to work properly on such data, do the images need to be taken at heights that are close together, i.e. does the 3rd dimension need to be fairly continuous? For example, if you had a 3D stack/volume of 5 images that you wanted to feed to a 3D CNN like so,
+
+
+- img_1_depth_0cm.png
+- img_2_depth_5cm.png
+- img_3_depth_10cm.png
+- img_4_depth_15cm.png
+- img_5_depth_20cm.png
+
+
+and the images were taken 5 cm apart from one another, I would imagine that a 3D convolution might not perform very well because of the 5 cm of depth between images. Is this an incorrect assumption?
+
+(As a reference point, this nice repository on GitHub was designed for training on images of the brain, but the images do appear to be fairly continuous/close together, almost like a video: https://github.com/ellisdg/3DUnetCNN)
+
+Question 2 - For a 3D network to properly segment non-labelled volumes after training is finished, I know that the input data dimensions would have to match those of the training data.
+
+But I would also think that the new images must have been taken in a similar fashion (taken at similar depths and in general a similar orientation to the training data) in order for the NN to be able to perform its task. So unless the medical imaging processes are always performed similarly across machines and hospitals, I'm guessing that the NN performance might vary pretty wildly when it tries to segment new data. Is this correct?
+"
+"['math', 'logic', 'fuzzy-logic']"," Title: Choice of fuzzification functionBody: I'm a relative newbie to fuzzie logic systems but I have some knowledge in mathematics. I have the following problem:
+
+I want to fuzzify certain values. Some are in the range $[-\infty, \infty]$ and some are in the range $[0, \infty]$. For the first range I have chosen the sigmoid function:
+
+$f(x) = \frac{1}{1+e^{-x}}$
+
+The question is which fuzzification function I should use for the second range. Since the function $f(x) = \ln(x)$ transforms $[0, \infty]$ to $[-\infty, \infty]$, a natural choice could be:
+
+$f(x) = \frac{1}{1+e^{-\ln(x)}} = \frac{x}{x+1}$
+
+A different function could also be:
+
+$f(x) = 1 - 2^{-x}$
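+
+As a quick sanity check, I compared the two candidates numerically on a few sample points (just my own illustration):
+
+import numpy as np
+
+x = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])
+f1 = x / (x + 1.0)        # the sigmoid composed with ln(x)
+f2 = 1.0 - 2.0 ** (-x)    # the alternative candidate
+print(np.round(f1, 3))    # [0.    0.333 0.5   0.667 0.833 0.909]
+print(np.round(f2, 3))    # [0.    0.293 0.5   0.75  0.969 0.999]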
+
+Which one would be more suitable? Particularly when considering that I may want to compare values from both ranges.
+"
+"['neural-networks', 'deep-learning', 'backpropagation', 'gradient-descent', 'feedforward-neural-networks']"," Title: Feed forward neural network using numpy for IRIS datasetBody: I tried to build a neural network for working on IRIS dataset using only numpy after reading an article (link: https://iamtrask.github.io/2015/07/12/basic-python-network/).
+
+I tried to search the internet but everyone was using ml libraries and found no solution using just numpy. I tried to add different hidden layers to my feed forward neural network still it wasn't converging. I tried to use backpropagation. I used sigmoid and also relu neither of which was successful.
+
+Can someone please give me the code which will work on IRIS dataset and built only using feed forward neural networks and numpy as the only library or if it is not possible to built such a thing with these constraints then please let me know what goes wrong with these constraints.
+
+Also tell me will it be possible to create a neural network to predict values of a matrix multiplication i.e. if we have A * B = C with matrix A as input and C as output, can we acheive substantial amount of accuracy with feed forward neural networks here?.
+"
+"['machine-learning', 'ai-design', 'natural-language-processing']"," Title: What AI designs are suited for producing title replacements?Body: Problem: ""For a given news article, generate another title for the article if the article is to be published under a different Publication.""
+
+Which algorithm will be well suited for this? Should I use naive Bayesian or any NLP algorithm?
+"
+"['reinforcement-learning', 'game-ai', 'monte-carlo-tree-search', 'action-spaces', 'hierarchical-rl']"," Title: How can I design a reinforcement learning model for a game with multiple complex actions taken at a time?Body: I have a steady hex-map and turn-based wargame featuring WWII carrier battles.
+On a given turn, a player may choose to perform a large number of actions. Actions can be of many different types, and some actions may be performed independently of each other while others have dependencies. For example, a player may decide to move one or two naval units, then assign a mission to an air unit or not, then adjust some battle parameters or not, and then reorganize a naval task force or not.
+
+Usually, boardgames allow players to perform only one action each turn (e.g. go or chess) or a few very similar actions (backgammon).
+Here the player may select
+
+- Several actions
+- The actions are of different nature
+- Each action may have parameters that the player must set (e.g. strength, payload, destination)
+
+How could I approach this problem with reinforcement learning? How would I specify a model or train it effectively to play such a game?
+Here is a screenshot of the game.
+
+Here's another.
+
+"
+"['neural-networks', 'genetic-algorithms', 'neat']"," Title: Using NEAT, will the child of two parent genomes always have the same structure as the more fit parent?Body: I'm trying to implement the NEAT Algorithm using c#, based off of Kenneth O. Stanley's paper. On page 109 (12 in the pdf) it states ""Matching genes are inherited randomly, whereas disjoint genes (those that do not match in the middle) and excess genes (those that do not match in the end) are inherited from the more fit parent.""
+Does this mean that the child will always have the exact structure that the more fit parent has? It seems like the only way the structure could differ from crossover was if the two parents were equally fit.
+"
+"['neural-networks', 'gradient-descent', 'artificial-neuron']"," Title: neuralnetworksanddeeplearning.com chapter 5 problemsBody: For http://neuralnetworksanddeeplearning.com/chap5.html , could anyone suggest:
+
+1) how to approach the derivation of expression (123) ?
+
+2) what constitutes value ~ 0.45 ?
+
+3) why the need of taylor series when we can observe the identity property without any maths proof (input == output) ?
+
+
+"
+['neural-networks']," Title: How do I get a meaningful output value for a simple neural network that can map to a set of data?Body: I am trying to use a artifical neural network to produce a single output, which in my mind should be an index into a list of data (or close to it). All of the results I get are 0.9999+ and very close to each other. I don't know if my whole way of thinking here is off, or if I am just missing an approach, or if perhaps my network code is just broken.
+
+I am trying to make use of the simple neural network from Microsoft here:
+https://social.technet.microsoft.com/wiki/contents/articles/36428.basis-of-neural-networks-in-c.aspx
+
+I have tried this with a significantly more complex data set, but I've also tried using a very simple data set.
+
+Here is the simple training data I'm trying to use:
+
+eat poo bad
+eat dirt bad
+eat cookies okay
+eat fruit good
+study poo okay
+study dirt okay
+study cookies okay
+study fruit okay
+dispose poo good
+dispose dirt okay
+dispose cookies bad
+dispose fruit bad
+
+
+The basic idea is that the network has two input neurons and a single output neuron. I assigned a unique number to each distinct word such that I can train the network with two inputs (verb and object), and expect a single output (good, bad, or okay).
+
+Example training:
+
+input: 1 (for eat) 10 (for dirt) output: 15 (for bad)
+input: 1 (for eat) 11 (for cookies) output: 16 (for good)
+
+
+I would expect that after training, I would see the output numbers close to 15, 16, etc, but all I get are numbers like 0.999997333313168, etc.
+
+Example run:
+
+input: 1 (for eat) 10 (for dirt), output is 0.999997333313168 (instead of ~15 expected)
+
+
+What do these outputs mean, or what am I missing in how I should be thinking about making a basic classification system (given inputs, get a meaningful output)?
+
+The C# code I am using, if it is helpful:
+
+using NeuralNet.NeuralNet;
+using System;
+using System.Collections.Generic;
+using System.Linq;
+using System.Text;
+
+namespace NeuralNet
+{
+ internal class TestSmallSampleNetwork
+ {
+ internal static void Run()
+ {
+ var data = @""eat poo bad
+ eat dirt bad
+ eat cookies okay
+ eat fruit good
+ sell poo bad
+ sell dirt okay
+ sell cookies okay
+ sell fruit okay
+ study poo okay
+ study dirt okay
+ study cookies okay
+ study fruit okay
+ dispose poo good
+ dispose dirt okay
+ dispose cookies bad
+ dispose fruit bad"";
+ var simples = data.Split(new[] { ""\r\n"" }, StringSplitOptions.None ).Select(_ => new Simple(_)).ToList();
+
+ var verbs = simples.Select(_ => _.Verb).Distinct().Select(_ => new NetworkValue(_)).ToList();
+ var objects = simples.Select(_ => _.Object).Distinct().Select(_ => new NetworkValue(_)).ToList();
+ var judgments = simples.Select(_ => _.Good).Distinct().Select(_ => new NetworkValue(_)).ToList();
+ var values = verbs.Concat(objects).Concat(judgments).ToDictionary(_ => _.Term, _ => _);
+
+ // Create a network with 2 inputs, 2 neurons on a single hidden layer, and 1 neuron output.
+ var net = new NeuralNetwork(0.9, new int[] { 2, 2, 1 });
+
+ for (int iTrain = 0; iTrain < 1000; iTrain++)
+ {
+ for (int iSimple = 0; iSimple < simples.Count; iSimple++)
+ {
+ net.Train(MakeInputs(simples[iSimple], values), MakeOutputs(simples[iSimple], values));
+ }
+ }
+
+ foreach (var value in values.Values)
+ {
+ Console.WriteLine(value);
+ }
+
+ Console.WriteLine();
+
+ // Run samples and get results back from the network
+ Run(""study"", ""poo"", values, net);
+ Run(""eat"", ""poo"", values, net);
+ Run(""dispose"", ""fruit"", values, net);
+ Run(""sell"", ""dirt"", values, net);
+ }
+
+ private static void Run(string verb, string obj, Dictionary<string, NetworkValue> values, NeuralNetwork net)
+ {
+ var result = net.Run(new List<double> {
+ values[verb].Value,
+ values[obj].Value,
+ }).Single();
+
+ var good = ""xxx"";
+
+ Console.WriteLine($""{verb} {obj} {good} ({result})"");
+ }
+
+ private static List<double> MakeInputs(Simple simple, Dictionary<string, NetworkValue> values)
+ {
+ return new List<double>() {
+ values[simple.Verb].Value,
+ values[simple.Object].Value
+ };
+ }
+
+ private static List<double> MakeOutputs(Simple simple, Dictionary<string, NetworkValue> values)
+ {
+ return new List<double> { values[simple.Good].Value };
+ }
+
+ public class Simple
+ {
+ public string Verb { get; set; }
+ public string Object { get; set; }
+ public string Good { get; set; }
+
+ public Simple(string line)
+ {
+ var words = line.Trim().Split("" "".ToCharArray(), 3, StringSplitOptions.None);
+ Verb = words[0];
+ Object = words[1];
+ Good = words[2];
+ }
+
+ public override string ToString()
+ {
+ return $""{Verb} {Object} {Good}"";
+ }
+ }
+
+ public class NetworkValue
+ {
+ private static int Next = 1;
+
+ public string Term { get; set; }
+ public double Value { get; set; }
+
+ public NetworkValue(string term)
+ {
+ Term = term;
+ Value = Next++;
+ }
+
+ public override string ToString()
+ {
+ return $""{Value}. {Term}"";
+ }
+ }
+ }
+}
+
+"
+"['neural-networks', 'game-ai']"," Title: How can I use a 2-dimensional feature matrix as the input to a neural network?Body: How can I use a 2-dimensional feature matrix, rather than a feature vector, as the input to a neural network?
+For a WWII naval wargame, I have sorted out the features of interest to approximate the game state $S$ at a given time $t$.
+
+- they are a maximum of 50 naval task forces and airbases on the map
+
+- each one has a lot of features (hex, weather, number of ships, type, distance to other task forces, distance to objective, cargo, intelligence, combat value, speed, damage, etc.)
+
+
+The output would be the probability of winning and the level of victory.
+
+"
+"['neural-networks', 'deep-learning', 'tensorflow', 'keras']"," Title: Should I apply ReLU to non negative output?Body: Suppose I want to predict the position of a sensor based on its reading.
+
+I can first predict the unit vector and predict the distance to be multiplied to this vector.
+And I know that distance will never be negative because all the negative parts are inside unit vector already.
+
+Should I apply ReLU to the distance before multiplying it to the unit vector?
+
+I'm thinking that this can be helpful to eliminate the network from needing too much training data by restricting the output ranges the network could give. But I also think that it could make the learning slower when the ReLU unit dies (value=0) so the gradient doesn't flow properly somehow.
+"
+"['neural-networks', 'classification']"," Title: Beginner - Object classification data in a neural networkBody: Imagine I wish to classify images of digits from 0-9.
+Let's say I have trained the network to recognise '1'.
+If I were to train the same network to recognise '2', wouldn't the backpropagation process mess up the weights and biases for '1'?
+
+Or do programs like Tensorflow allocate a new layer of neural network for different object classification? Thanks.
+"
+"['machine-learning', 'computer-vision', 'tensorflow']"," Title: I need to predict ball position from set of ImagesBody: I am a novice developer in AI. Any help appropriated.
+
+I have a set of images and from that I want to predict position(x,y co-ordinates) of the Ball.
+
+Thanks in advance.
+"
+"['neural-networks', 'datasets', 'unsupervised-learning', 'autoencoders']"," Title: Why isn't the Credit Card Fraud Detection dataset from Kaggle already balanced?Body: I am working on a credit card fraud detection problem using autoencoders. I have a question regarding the dataset I'll be using.
+
+I've downloaded the dataset for the above problem from Kaggle, which is highly imbalanced: it contains only 492 frauds out of 284,807 transactions. Why isn't the dataset already balanced? Should I balance it before applying the auto-encoder?
+"
+"['genetic-algorithms', 'crossover-operators']"," Title: How to crossover chromosomes composed of genes that are tuples such that the elements of the tuples do not appear twice in the chromosome?Body: Each chromosome contains an array of genes, each gene contains a letter and a number, both letter and number can only exist once in each chromosome.
+
+Parent A = {a,1}{c,2}{e,3}{g,4}
+Parent B = {a,2}{b,1}{c,4}{d,3}
+
+
+What would be the best crossover operator to create a child that doesn't break the rule described above?
+"
+"['neural-networks', 'machine-learning', 'deep-learning']"," Title: Is it possible to combine two neural networks trained on different tasks into one that knows both tasks?Body: I'm relatively new to artificial intelligence and neural networks.
+Let's say I have two different fully trained neural networks. The first one is trained for mathematical addition and the second one on mathematical multiplication. Now, I want to combine these two neural networks into one that knows about both operations.
+Is this possible? Is there a representative name for this kind of technique?
+I had read something about bilinear CNN models that sounds similar to what I'm looking for, right?
+"
+"['neural-networks', 'autoencoders', 'hyperparameter-optimization', 'variational-autoencoder', 'hyper-parameters']"," Title: How should we choose the dimensions of the encoding layer in auto-encoders?Body: How should we choose the dimensions of the encoding layer in auto-encoders?
+"
+"['deep-learning', 'reinforcement-learning', 'tensorflow', 'unsupervised-learning', 'open-ai']"," Title: How to implement a Continuous Control of a quadruped robot with Deep Reinforcement Learning in Pybullet and OpenAI Gym?Body: Description
+
+I have designed this robot in URDF format and its environment in pybullet. Each leg has a minimum and maximum value of movement.
+
+What reinforcement algorithm will be best to create a walking policy in a simple environment in which a positive reward will be given if it walks in the positive X-axis direction?
+
+I am working on the following, but I don't know if it is the best way:
+
+The expected output from the policy is an array in the range of (-1, 1) for each joint. The input of the policy is the position of each joint from the past X frames in the environment(replay memory like DeepQ Net), the center of mass of the body, the difference in height between the floor and the body to see if it has fallen and the movement in the x-axis.
+
+Limitations
+
+left_front_joint => lower=""-0.4"" upper=""2.5"" id=0
+
+left_front_leg_joint => lower=""-0.6"" upper=""0.7"" id=2
+
+right_front_joint => lower=""-2.5"" upper=""0.4"" id=3
+
+right_front_leg_joint => lower=""-0.6"" upper=""0.7"" id=5
+
+left_back_joint => lower=""-2.5"" upper=""0.4"" id=6
+
+left_back_leg_joint => lower=""-0.6"" upper=""0.7"" id=8
+
+right_back_joint => lower=""-0.4"" upper=""2.5"" id=9
+
+right_back_leg_joint => lower=""-0.6"" upper=""0.7"" id=11
+
+The code below is just a test of the environment with a set of movements hardcoded in the robot just to test how it could walk later. The environment is set to real time, but I assume it needs to be in a frame by frame lapse during the policy training. (p.setRealTimeSimulation(1) #disable and p.stepSimulation() #enable)
+
+A video of it can be seen in:
+
+https://youtu.be/j9sysG-EIkQ
+
+The complete code can be seen here:
+
+https://github.com/rubencg195/WalkingSpider_OpenAI_PyBullet_ROS
+
+CODE
+
+import pybullet as p
+import time
+import pybullet_data
+
+def moveLeg(robot=None, id=0, position=0, force=1.5):
+    if robot is None:
+        return
+    p.setJointMotorControl2(
+        robot,
+        id,
+        p.POSITION_CONTROL,
+        targetPosition=position,
+        force=force,
+        # maxVelocity=5
+    )
+
+pixelWidth = 1000
+pixelHeight = 1000
+camTargetPos = [0,0,0]
+camDistance = 0.5
+pitch = -10.0
+roll=0
+upAxisIndex = 2
+yaw = 0
+
+physicsClient = p.connect(p.GUI)#or p.DIRECT for non-graphical version
+p.setAdditionalSearchPath(pybullet_data.getDataPath()) #optionally
+p.setGravity(0,0,-10)
+viewMatrix = p.computeViewMatrixFromYawPitchRoll(camTargetPos, camDistance, yaw, pitch, roll, upAxisIndex)
+planeId = p.loadURDF(""plane.urdf"")
+cubeStartPos = [0,0,0.05]
+cubeStartOrientation = p.getQuaternionFromEuler([0,0,0])
+#boxId = p.loadURDF(""r2d2.urdf"",cubeStartPos, cubeStartOrientation)
+boxId = p.loadURDF(""src/spider.xml"",cubeStartPos, cubeStartOrientation)
+# boxId = p.loadURDF(""spider_simple.urdf"",cubeStartPos, cubeStartOrientation)
+
+
+
+toggle = 1
+
+
+
+p.setRealTimeSimulation(1)
+
+for i in range(10000):
+    # p.stepSimulation()
+
+    moveLeg(robot=boxId, id=0, position=toggle * -2)   # LEFT_FRONT
+    moveLeg(robot=boxId, id=2, position=toggle * -2)   # LEFT_FRONT
+
+    moveLeg(robot=boxId, id=3, position=toggle * -2)   # RIGHT_FRONT
+    moveLeg(robot=boxId, id=5, position=toggle * 2)    # RIGHT_FRONT
+
+    moveLeg(robot=boxId, id=6, position=toggle * 2)    # LEFT_BACK
+    moveLeg(robot=boxId, id=8, position=toggle * -2)   # LEFT_BACK
+
+    moveLeg(robot=boxId, id=9, position=toggle * 2)    # RIGHT_BACK
+    moveLeg(robot=boxId, id=11, position=toggle * 2)   # RIGHT_BACK
+    # time.sleep(1./140.)
+    # time.sleep(0.01)
+    time.sleep(1)
+
+    toggle = toggle * -1
+
+    # viewMatrix = p.computeViewMatrixFromYawPitchRoll(camTargetPos, camDistance, yaw, pitch, roll, upAxisIndex)
+    # projectionMatrix = [1.0825318098068237, 0.0, 0.0, 0.0, 0.0, 1.732050895690918, 0.0, 0.0, 0.0, 0.0, -1.0002000331878662, -1.0, 0.0, 0.0, -0.020002000033855438, 0.0]
+    # img_arr = p.getCameraImage(pixelWidth, pixelHeight, viewMatrix=viewMatrix, projectionMatrix=projectionMatrix, shadow=1, lightDirection=[1,1,1])
+
+cubePos, cubeOrn = p.getBasePositionAndOrientation(boxId)
+print(cubePos,cubeOrn)
+p.disconnect()
+
+
+
+
+
+"
+"['neural-networks', 'feedforward-neural-networks']"," Title: Neural network to detect ""spam""?Body: I've inherited a neural network project at the company I work for. The person who developed gave me some very basic training to get up and running. I've maintained it for a while. The current neural network is able to classify messages for telcos: it can send them to support people in different areas, like ""activation"", ""no signal"", ""internet"", etc. The network has been working flawlessly. The structure of this neural network is as follows:
+
+ model = Sequential()
+ model.add(Dense(500, input_shape=(len(train_x[0]),)))
+ model.add(Activation('relu'))
+ model.add(Dropout(0.6))
+ model.add(Dense(250, input_shape=(500,)))
+ model.add(Activation('relu'))
+ model.add(Dropout(0.5))
+ model.add(Dense(len(train_y[0])))
+ model.add(Activation('softmax'))
+ model.compile(loss='categorical_crossentropy',
+ optimizer='Adamax',
+ metrics=['accuracy'])
+
+
+This uses a Word2Vec embedding, and has been trained with a ""clean"" file: all special characters and numbers are removed from both the training file and the input data.
+
+Now I've been assigned to make a neural network to detect if a message will be catalog as ""moderated"" (meaning it's an insult, spam, or just people commenting on a facebook post), or ""operative"", meaning the message is actually a question for the company.
+
+What I did was start from the current model and reduce the number of categories to two. It didn't go very well: the word embedding was in spanish from Argentina, and the training data was spanish from Peru. I made a new embedding and accuracy increased by a fair margin (we are looking for insults and other curse words. In spanish a curse word from a country can be a normal word for another: in Spain ""coger"" means ""to take"", and in Argentina it means ""to f__k"". ""concha"" means shell in most countries, but in Argentina it means ""c__t"". You get the idea).
+
+I trained the network with 300.000 messages. Roughly 40% of these were classified as ""moderated"". I tried all sorts of combinations of cycles and epochs. The accuracy slowly increased to nearly 0.9, and loss stays around 0.5000.
+
+But when testing the neural network, ""operative"" messages generally seem to be correctly classified, with accuracy around 0.9, but ""moderated"" messages aren't. They are classified around 0.6 or less. At some point I tried multiple insults in a message (even pasting sample data as input data), but it didn't seem to improve.
+
+Word2Vec works fantastically. The words are correctly ""lumped"" together (I learned a few insults in Peruvian Spanish thanks to it).
+
+I put the neural network in production for a week to gather statistics. Basically, 90% of the messages went unclassified, 5% were classified correctly, and 5% were classified incorrectly. Since the network has two categories, this seems to mean the neural network is just making random guesses.
+
+So, the questions are:
+
+
+- Is it possible to accomplish this task with a neural network?
+- Is the structure of this neural network correct for this task?
+- Are 300k messages enough to train the neural network?
+- Do I need to clean up the data from uppercase, special characters, numbers etc?
+
+"
+['hardware']," Title: Does software remain even when hardware is demolished?Body: For example, if I constructed a neural network and the computer running it were to be demolished, is the information/program of the neural network still an existent entity within or outside the remnants of the hardware?
+"
+"['python', 'machine-learning', 'neural-networks']"," Title: What is batch / batch size in neural networks?Body: I have some problems with understanding of the batch concept and batch size. I messed something up. First I start it consider based on convolutional neural network I heard two versions:
+
+
+- When the batch size is set to 50, the network is first fed with 50 images and then updated / recalculated (this doesn't make sense to me, because in that case the network would learn from only one of the 50 images).
+- When the batch size is set to 50, one out of every 50 neurons is recalculated during the learning process on a single image.
+
+
+Both of these explanations seem wrong to me, so I assume that I don't understand this at all. What is a batch / batch size in an RNN? Could you show an example?
+
+I can tell you how I would teach a recurrent neural network. Let's say that I would like to teach a neural network to predict the next day's weather:
+
+
+- I would take weather data for a given area from the last 30,000 days.
+- I would assume that my prediction is based on measurements from the last 365 days.
+- I would take the data from day 1 to day 365, feed the RNN with it, and train.
+- Then I would take the data from day 2 to day 366 => feed + train.
+- Then day 3 to day 367 => feed + train.
+- And so on.
+
+
+Is this 365-measurement window what is meant by the batch size?
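+
+To make the sliding-window scheme above concrete, here is a minimal sketch of how I picture the samples and batches (the 5 measurements per day and the batch size of 50 are just assumptions for illustration):
+
+    import numpy as np
+
+    # 30,000 days of (hypothetical) weather data, e.g. 5 measurements per day
+    data = np.random.rand(30000, 5)
+
+    window = 365  # each sample covers the previous 365 days
+    X = np.array([data[i:i + window] for i in range(len(data) - window)])
+    y = data[window:]   # target: the measurements of the following day
+
+    # X.shape == (29635, 365, 5): 29,635 samples, each a sequence of 365 days.
+    # With batch_size=50, e.g. model.fit(X, y, batch_size=50), the RNN would
+    # process 50 of these 365-day windows before each weight update.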
+"
+"['neural-networks', 'machine-learning', 'game-ai', 'monte-carlo-tree-search', 'minimax']"," Title: Why do neural nets and machine learning tend to work well with MCTS, but not with regular Minimax game-playing AI?Body: I've often heard MCTS grouped together with neural nets and machine learning. From what I gather, MCTS uses a refined intuition (from maching learning) to evaluate positions. This allows it to better guess which moves are worth playing out more.
+
+But I've almost never heard of using machine learning for Minimax+alpha-beta engines. Couldn't machine learning be used by the engine to better guess which move is best, and then look at that move's subtree first? A major optimization of the minimax algorithm is move ordering, and this seems like a good way to accomplish that.
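+
+For illustration, here is a minimal sketch of what I mean by using a learned model for move ordering inside alpha-beta (the game-state API and the model.score() call are hypothetical, just to show where the model would plug in):
+
+    # state and model are hypothetical objects: state exposes is_terminal(),
+    # evaluate(), legal_moves() and play(); model.score() estimates move quality
+    def alphabeta(state, depth, alpha, beta, maximizing, model):
+        if depth == 0 or state.is_terminal():
+            return state.evaluate()
+        # order moves by the model's estimate, so the most promising subtree
+        # is searched first and produces earlier alpha-beta cutoffs
+        moves = sorted(state.legal_moves(),
+                       key=lambda m: model.score(state, m),
+                       reverse=maximizing)
+        if maximizing:
+            value = float('-inf')
+            for m in moves:
+                value = max(value, alphabeta(state.play(m), depth - 1,
+                                             alpha, beta, False, model))
+                alpha = max(alpha, value)
+                if alpha >= beta:
+                    break
+            return value
+        else:
+            value = float('inf')
+            for m in moves:
+                value = min(value, alphabeta(state.play(m), depth - 1,
+                                             alpha, beta, True, model))
+                beta = min(beta, value)
+                if beta <= alpha:
+                    break
+            return value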
+"
+['genetic-algorithms']," Title: Stereo matching using genetic algorithmBody: I have been reading a few papers (paper1, paper2) on stereo matching using genetic algorithms. I understand how genetic algorithms work in general and how stereo matching works, but I do not understand how genetic algorithms are used in stereo matching.
+
+The first paper by Han et al says that ""1) individual is a disparity set, 2) a chromosome has a 2D structure for handling image signals efficiently, and 3) a fitness function is composed of certain constraints which are commonly used in stereo matching"".
+
+Does it mean that an individual is a disparity map filled with random numbers?
+Then would a chromosome be a block within the individual's disparity map?
+The constraint used in the fitness function could be the famous epipolar constraint.
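+
+To make my reading concrete, this is roughly what I imagine (the per-pixel representation, the image sizes, and the photo-consistency fitness below are my own assumptions, not necessarily what the papers actually do):
+
+    import numpy as np
+
+    h, w, max_disp = 120, 160, 32
+    left = np.random.rand(h, w)    # stand-ins for the rectified image pair
+    right = np.random.rand(h, w)
+
+    # individual: one disparity value per pixel, initialised randomly
+    individual = np.random.randint(0, max_disp, size=(h, w))
+
+    def fitness(individual):
+        # photo-consistency along the same row (the epipolar line for rectified
+        # images): compare each left pixel with the right pixel shifted by the
+        # proposed disparity; a lower total difference means a fitter individual
+        cols = np.clip(np.arange(w) - individual, 0, w - 1)
+        cost = np.abs(left - right[np.arange(h)[:, None], cols]).sum()
+        return -cost
+
+    print(fitness(individual))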
+
+I don't seem to understand how this works, or even WHY you should use a genetic algorithm for a problem that, in its simplest form, can be solved with 5 for loops, as in the example here.
+"
+"['game-ai', 'reference-request']"," Title: What kind of AI technique can I use to play the ""Lines"" game?Body: I am trying to find a good approach to create a computer player for the game "Lines" from gamious on Android. The concept of the game is pretty straightforward :
+
+Lines is an abstract ‘zen’ game experience where form is just as important as function. Place or remove Dots to initiate a colourful race that fills a drawing. The colour that dominates the race wins.
+
+The game starts with a drawing (which can be described as a set of "blank" lines, with connections to other lines). Dots of different colours are placed somewhat randomly on the lines. The player gets a colour assigned. When the game starts, paint starts flowing from the dots and filling the (at first blank) lines of the drawing. You win if your colour dominates.
+The game gives you different tools to win (the game starts when all of them have been used):
+
+- [0 to 2] scissors to cut lines
+- [0 to 5] additional dots of your own colour to place on the drawing
+- [0 to 4] enemy-dot erasers
+- [0 to 3] additional straight lines to connect different parts of the drawing
+
+A quick example: the first image is the initial state of a round. "My" colour is yellow (1 enemy = brown) and I have 4 tools (2 erasers and 2 lines). The second image shows the game running after I used the tools to put my colour in a winning position (yes, it could be done better).
+
+
+If I try to approach this as a classical optimization problem, things get messy pretty fast:
+
+- highly non-linear
+- high number of dimensions
+
+AI seems to be the right way to go, but I would like your help to get pointed in the right direction: what would be your approach to creating an AI to play this game?
+To limit the scope of this question, you can consider that I already have a data structure to represent the game's initial state, the use of the different tools, and the game "physics". What I really want is to find out how to create an AI that can learn to use the tools efficiently.
+Regarding my experience, I took 2 semesters of AI classes during the last year of my engineering degree and have used non-linear optimization tools for a while: you can get technical, but I may not fully understand everything.
+"
+"['machine-learning', 'deep-learning', 'convolutional-neural-networks', 'channel']"," Title: What is the concept of channels in CNNs?Body: I am trying to understand what channels mean in convolutional neural networks. When working with grayscale and colored images, I understand that the number of channels is set to 1 and 3 (in the first conv layer), respectively, where 3 corresponds to red, green, and blue.
+Say you have a colored image that is $200 \times 200$ pixels. The standard is such that the input matrix is a $200 \times 200$ matrix with 3 channels. The first convolutional layer would have a filter that is size $N \times M \times 3$, where $N,M < 200$ (I think they're usually set to 3 or 5).
+Would it be possible to structure the input data differently, such that the number of channels now becomes the width or height of the image? i.e., the number of channels would be 200, the input matrix would then be $200 \times 3$ or $3 \times 200$. What would be the advantage/disadvantage of this formulation versus the standard (# of channels = 3)? Obviously, this would limit your filter's spatial size, but dramatically increase it in the depth direction.
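+For concreteness, here is a small sketch of the two layouts I am comparing (PyTorch here is just my choice for illustration, and the kernel sizes are arbitrary):
+
+    import torch
+    import torch.nn as nn
+
+    # standard layout: channels = 3 (RGB), spatial size 200 x 200
+    x_std = torch.randn(1, 3, 200, 200)   # (batch, channels, H, W)
+    conv_std = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=5)
+    print(conv_std.weight.shape)          # (16, 3, 5, 5): sixteen 5 x 5 x 3 filters
+    print(conv_std(x_std).shape)          # (1, 16, 196, 196)
+
+    # alternative layout: treat one spatial dimension (200) as the channels
+    x_alt = torch.randn(1, 200, 3, 200)   # channels = 200, remaining 'image' is 3 x 200
+    conv_alt = nn.Conv2d(in_channels=200, out_channels=16, kernel_size=(3, 5))
+    print(conv_alt.weight.shape)          # (16, 200, 3, 5)
+    print(conv_alt(x_alt).shape)          # (1, 16, 1, 196)
+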
+I am really posing this question because I don't quite understand the concept of channels in CNNs.
+"
+"['neural-networks', 'natural-language-processing', 'recurrent-neural-networks', 'keras', 'long-short-term-memory']"," Title: LSTM language model not workingBody: I am trying to use a Keras LSTM neural network for character level language modelling. As the input, I give it the last 50 characters and it has to output the next one. It has 3 layers of 400 neurons each. For the training data, I am using 'War of The Worlds' by H.G. Wells which adds up to 269639 training samples and 67410 validation samples.
+
+After 7 epochs, the validation accuracy has reached 35.1% and the validation loss has reached 2.31. However, after being fed the first sentence of War of the Worlds to start, it outputs:
+
+
+ the the the the the the the the the the the the the the the the...
+
+
+I'm not sure where I'm going wrong; I don't want it to overfit and output passages straight from the training data but I also don't want it to just output 'the' repeatedly. I'm really at a loss as to what I should do to improve it.
+
+Any help would be greatly appreciated. Thanks!
+"
+"['neural-networks', 'deep-learning', 'applications']"," Title: If deep learning is a black box, then why are companies still investing in it?Body: If deep learning is a black box, then why are companies still investing in it?
+"
+"['terminology', 'papers', 'adversarial-ml', 'adversarial-attacks', 'cycle-gan']"," Title: What is an adversarial attack?Body: I'm reading this really interesting article CycleGAN, a Master of Steganography. I understand everything up until this paragraph:
+
+we may view the CycleGAN training procedure as continually mounting an adversarial attack on $G$, by optimizing a generator $F$ to generate adversarial maps that force $G$ to produce a desired image. Since we have demonstrated that it is possible to generate these adversarial maps using gradient descent, it is nearly certain that the training procedure is also causing $F$ to generate these adversarial maps. As $G$ is also being optimized, however, $G$ may actually be seen as cooperating in this attack by learning to become increasingly susceptible to attacks. We observe that the magnitude of the difference $y^{*}-y_{0}$ necessary to generate a convincing adversarial example by Equation 3 decreases as the CycleGAN model trains, indicating cooperation of $G$ to support adversarial maps.
+
+How is the CycleGAN training procedure an adversarial attack?
+I don't really understand the quoted explanation.
+"
+"['convolutional-neural-networks', 'image-recognition']"," Title: Smaller interest area for images than the size of the image in classification neural networksBody: I have the following binary classification problem, my labeled dataset contains images 96x96 px. Now in every image the interest area is of size 32x32 px in the center of the image, and the images are labeled based on that 32x32 px area. If whatever i am trying to detect is in the outer region of the 32x32 px area the label of that image is not affected.
+
+The problem here is that if I use the whole image when training, my model will not learn that the area of interest is only in the center of the image; but, on the other hand, if I crop the images to 32x32 I am losing a lot of information that could help the model train. I found out that I get the best results if I crop the images to 64x64 (kind of a trade-off).
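+
+For clarity, this is what I mean by cropping (a small sketch; the (H, W, C) array layout is an assumption):
+
+    import numpy as np
+
+    def center_crop(img, size):
+        # img: (H, W, C) array; returns the central size x size patch
+        h, w = img.shape[:2]
+        top, left = (h - size) // 2, (w - size) // 2
+        return img[top:top + size, left:left + size]
+
+    img = np.zeros((96, 96, 3))
+    train_input = center_crop(img, 64)   # the 64x64 compromise I train on
+    test_input = center_crop(img, 32)    # the 32x32 area the labels actually refer to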
+
+Now, for the test set it doesn't make sense to use this trade-off, because the model is not learning anything at that point anyway, so I would rather crop the test images to 32x32; but then the test set and training set image sizes are not the same.
+
+Has anyone come across this problem before? Can I just pad the test images to the size of the training images? Is this a good way to go?
+"
+"['machine-learning', 'reinforcement-learning', 'reward-normalization']"," Title: Why is the reward signal normalized in openAI's REINFORCE?Body: Pytorch's example for the REINFORCE algorithm for reinforcement learning has the following code:
+
+import argparse
+import gym
+import numpy as np
+from itertools import count
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import torch.optim as optim
+from torch.distributions import Categorical
+
+
+parser = argparse.ArgumentParser(description='PyTorch REINFORCE example')
+parser.add_argument('--gamma', type=float, default=0.99, metavar='G',
+ help='discount factor (default: 0.99)')
+parser.add_argument('--seed', type=int, default=543, metavar='N',
+ help='random seed (default: 543)')
+parser.add_argument('--render', action='store_true',
+ help='render the environment')
+parser.add_argument('--log-interval', type=int, default=10, metavar='N',
+ help='interval between training status logs (default: 10)')
+args = parser.parse_args()
+
+
+env = gym.make('CartPole-v0')
+env.seed(args.seed)
+torch.manual_seed(args.seed)
+
+
+class Policy(nn.Module):
+ def __init__(self):
+ super(Policy, self).__init__()
+ self.affine1 = nn.Linear(4, 128)
+ self.affine2 = nn.Linear(128, 2)
+
+ self.saved_log_probs = []
+ self.rewards = []
+
+ def forward(self, x):
+ x = F.relu(self.affine1(x))
+ action_scores = self.affine2(x)
+ return F.softmax(action_scores, dim=1)
+
+
+policy = Policy()
+optimizer = optim.Adam(policy.parameters(), lr=1e-2)
+eps = np.finfo(np.float32).eps.item()
+
+
+def select_action(state):
+ state = torch.from_numpy(state).float().unsqueeze(0)
+ probs = policy(state)
+ m = Categorical(probs)
+ action = m.sample()
+ policy.saved_log_probs.append(m.log_prob(action))
+ return action.item()
+
+
+def finish_episode():
+ R = 0
+ policy_loss = []
+ rewards = []
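+    # iterate the episode's rewards in reverse to build discounted returns: R = r + gamma * R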
+ for r in policy.rewards[::-1]:
+ R = r + args.gamma * R
+ rewards.insert(0, R)
+ rewards = torch.tensor(rewards)
+ rewards = (rewards - rewards.mean()) / (rewards.std() + eps)
+ for log_prob, reward in zip(policy.saved_log_probs, rewards):
+ policy_loss.append(-log_prob * reward)
+ optimizer.zero_grad()
+ policy_loss = torch.cat(policy_loss).sum()
+ policy_loss.backward()
+ optimizer.step()
+ del policy.rewards[:]
+ del policy.saved_log_probs[:]
+
+
+def main():
+ running_reward = 10
+ for i_episode in count(1):
+ state = env.reset()
+ for t in range(10000): # Don't infinite loop while learning
+ action = select_action(state)
+ state, reward, done, _ = env.step(action)
+ if args.render:
+ env.render()
+ policy.rewards.append(reward)
+ if done:
+ break
+
+ running_reward = running_reward * 0.99 + t * 0.01
+ finish_episode()
+ if i_episode % args.log_interval == 0:
+ print('Episode {}\tLast length: {:5d}\tAverage length: {:.2f}'.format(
+ i_episode, t, running_reward))
+ if running_reward > env.spec.reward_threshold:
+ print(""Solved! Running reward is now {} and ""
+ ""the last episode runs to {} time steps!"".format(running_reward, t))
+ break
+
+
+if __name__ == '__main__':
+    main()
+
+
+I am interested in the function finish_episode():
+
+the line
+
+ rewards = (rewards - rewards.mean()) / (rewards.std() + eps)
+
+
+makes no sense to me.
+
+I thought this might be baseline reduction, but I can't see why one would divide by the standard deviation.
+
+If it isn't baseline reduction, then why normalize the rewards, and where should the baseline reduction go?
+
+Please explain that line.
+"